Sample records for design scaling methods

  1. Wide band design on the scaled absorbing material filled with flaky CIPs

    NASA Astrophysics Data System (ADS)

    Xu, Yonggang; Yuan, Liming; Gao, Wei; Wang, Xiaobing; Liang, Zichang; Liao, Yi

    2018-02-01

    Scaled-target measurement is an important method for obtaining target signature characteristics. Radar absorbing materials are widely used on low-detectable targets, but their frequency-dispersion characteristics make designing and manufacturing scaled radar absorbing materials for a scaled target very difficult. This paper proposes a wide-band design method for the scaled absorbing material of a thin absorption coating filled with carbonyl iron particles (CIPs). According to the theoretical radar cross section (RCS) of the plate, the reflection loss, determined by the permittivity and permeability, was chosen as the main design factor. Then, the parameters of the scaled absorbing materials were designed using effective medium theory, and the scaled absorbing material was constructed. Finally, the full-size coating plate and scaled coating plates (under three different scale factors) were simulated; the RCSs of the coating plates were numerically calculated and measured at 4 GHz for a scale factor of 2. The results showed that the compensated RCS of the scaled coating plate was close to that of the full-size coating plate (mean deviation less than 0.5 dB), demonstrating that the design method for the scaled material is very effective.
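
    The compensation step the abstract refers to follows from standard electromagnetic model scaling: a 1/s-scale conducting model measured at s times the frequency returns an RCS (in square meters) that is s^2 smaller, so 20*log10(s) dB is added back. A minimal sketch of that bookkeeping (illustrative code, not from the paper; it assumes the scaled coating reproduces the full-scale reflection loss, which is precisely what the design method aims to achieve):

```python
import numpy as np

def compensate_scaled_rcs(rcs_model_dbsm: float, scale_factor: float) -> float:
    """Translate a scale-model RCS measurement back to full scale.

    Standard electromagnetic scaling: a 1/s-scale model measured at
    s times the frequency has an RCS (in m^2) smaller by s^2, so the
    full-scale estimate adds 20*log10(s) dB. Valid only if the scaled
    material reproduces the full-scale reflection loss.
    """
    return rcs_model_dbsm + 20.0 * np.log10(scale_factor)

# Example: a half-scale plate (s = 2) measured at twice the full-scale
# frequency, reading -12 dBsm on the model.
print(compensate_scaled_rcs(-12.0, 2.0))  # -> about -5.98 dBsm full scale
```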

  2. Acoustic Treatment Design Scaling Methods. Volume 1; Overview, Results, and Recommendations

    NASA Technical Reports Server (NTRS)

    Kraft, R. E.; Yu, J.

    1999-01-01

    Scale model fan rigs that simulate new generation ultra-high-bypass engines at about 1/5-scale are achieving increased importance as development vehicles for the design of low-noise aircraft engines. Testing at small scale allows the tests to be performed in existing anechoic wind tunnels, which provides an accurate simulation of the important effects of aircraft forward motion on the noise generation. The ability to design, build, and test miniaturized acoustic treatment panels on scale model fan rigs representative of the full-scale engine provides not only cost savings, but also an opportunity to optimize the treatment by allowing tests of different designs. The primary objective of this study was to develop methods that will allow scale model fan rigs to be successfully used as acoustic treatment design tools. The study focuses on finding methods to extend the upper limit of the frequency range of impedance prediction models and acoustic impedance measurement methods for subscale treatment liner designs, and on confirming the predictions by correlation with measured data. This phase of the program had as a goal doubling the upper limit of impedance measurement from 6 kHz to 12 kHz. The program utilizes combined analytical and experimental methods to achieve these objectives.

  3. Acoustic Treatment Design Scaling Methods. Phase 2

    NASA Technical Reports Server (NTRS)

    Clark, L. (Technical Monitor); Parrott, T. (Technical Monitor); Jones, M. (Technical Monitor); Kraft, R. E.; Yu, J.; Kwan, H. W.; Beer, B.; Seybert, A. F.; Tathavadekar, P.

    2003-01-01

    The ability to design, build, and test miniaturized acoustic treatment panels on scale model fan rigs representative of full-scale engines provides not only cost savings, but also an opportunity to optimize the treatment by allowing multiple tests. To use scale model treatment as a design tool, the impedance of the sub-scale liner must be known with confidence. This study was aimed at developing impedance measurement methods for high frequencies. A normal incidence impedance tube method that extends the upper frequency range to 25,000 Hz without grazing flow effects was evaluated. The free field method was investigated as a potential high frequency technique. The potential of the two-microphone in-situ impedance measurement method was evaluated in the presence of grazing flow. Difficulties in achieving the high frequency goals were encountered in all methods. Results of developing a time-domain finite difference resonator impedance model indicated that a re-interpretation of the empirical fluid mechanical models used in the frequency domain model for nonlinear resistance and mass reactance may be required. A scale model treatment design that could be tested on the Universal Propulsion Simulator vehicle was proposed.
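
    For reference, normal-incidence impedance tube measurements of the kind described above are commonly made with the two-microphone transfer-function method (in the style of ISO 10534-2). The sketch below is a generic implementation of that standard relation, not code from the report; the microphone spacing must stay well below half a wavelength, which is what makes pushing the method toward 25 kHz difficult:

```python
import numpy as np

def normal_incidence_impedance(H12, f, s, x1, c=343.0, rho=1.21):
    """Surface impedance from the two-microphone transfer-function method.

    H12 : measured complex transfer function p2/p1 (mic 1 farther from sample)
    f   : frequency in Hz
    s   : microphone spacing in m
    x1  : distance from the sample face to mic 1 in m
    """
    k = 2.0 * np.pi * f / c                  # free-space wavenumber
    h_inc = np.exp(-1j * k * s)              # incident-wave transfer function
    h_ref = np.exp(+1j * k * s)              # reflected-wave transfer function
    R = (H12 - h_inc) / (h_ref - H12) * np.exp(2j * k * x1)
    Z = rho * c * (1.0 + R) / (1.0 - R)      # specific acoustic impedance
    return R, Z

# Validity requires s < half a wavelength; at 25 kHz that is under ~6.9 mm:
print(343.0 / (2 * 25000.0))
```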

  4. Scale Development and Initial Tests of the Multidimensional Complex Adaptive Leadership Scale for School Principals: An Exploratory Mixed Method Study

    ERIC Educational Resources Information Center

    Özen, Hamit; Turan, Selahattin

    2017-01-01

    This study was designed to develop the scale of the Complex Adaptive Leadership for School Principals (CAL-SP) and examine its psychometric properties. This was an exploratory mixed method research design (ES-MMD). Both qualitative and quantitative methods were used to develop and assess psychometric properties of the questionnaire. This study…

  5. Design and Performance of Insect-Scale Flapping-Wing Vehicles

    NASA Astrophysics Data System (ADS)

    Whitney, John Peter

    Micro-air vehicles (MAVs)---small versions of full-scale aircraft---are the product of a continued path of miniaturization which extends across many fields of engineering. Increasingly, MAVs approach the scale of small birds, and most recently, their sizes have dipped into the realm of hummingbirds and flying insects. However, these non-traditional biologically-inspired designs are without well-established design methods, and manufacturing complex devices at these tiny scales is not feasible using conventional manufacturing methods. This thesis presents a comprehensive investigation of new MAV design and manufacturing methods, as applicable to insect-scale hovering flight. New design methods combine an energy-based accounting of propulsion and aerodynamics with a one degree-of-freedom dynamic flapping model. Important results include analytical expressions for maximum flight endurance and range, and predictions for maximum feasible wing size and body mass. To meet manufacturing constraints, the use of passive wing dynamics to simplify vehicle design and control was investigated; supporting tests included the first synchronized measurements of real-time forces and three-dimensional kinematics generated by insect-scale flapping wings. These experimental methods were then expanded to study optimal wing shapes and high-efficiency flapping kinematics. To support the development of high-fidelity test devices and fully-functional flight hardware, a new class of manufacturing methods was developed, combining elements of rigid-flex printed circuit board fabrication with "pop-up book" folding mechanisms. In addition to their current and future support of insect-scale MAV development, these new manufacturing techniques are likely to prove an essential element to future advances in micro-optomechanics, micro-surgery, and many other fields.

  6. Novel method to construct large-scale design space in lubrication process utilizing Bayesian estimation based on a small-scale design-of-experiment and small sets of large-scale manufacturing data.

    PubMed

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-12-01

    A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X1) and blending times (X2) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y1), tablet hardness (Y2), and dissolution rate (Y3) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
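
    The abstract does not give the exact update equations; below is a minimal sketch of the kind of Bayesian correction it describes, using a conjugate normal-normal update at a single design point. The function name, priors, and numbers are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def bayes_correct(prior_mean, prior_var, obs, obs_var):
    """Normal-normal conjugate update at one design point.

    prior_mean/prior_var : small-scale response-surface prediction and its
                           uncertainty (e.g., from bootstrap resampling)
    obs/obs_var          : large-scale measurement(s) at that point
    Returns the posterior ('corrected') mean and variance at large scale.
    """
    obs = np.atleast_1d(obs)
    n = obs.size
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs.sum() / obs_var)
    return post_mean, post_var

# Example: small scale predicts tablet hardness 55 N (variance 9);
# three large-scale runs measure 59, 60, 61 N (measurement variance 4).
print(bayes_correct(55.0, 9.0, [59.0, 60.0, 61.0], 4.0))
```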

  7. Symposium on Parallel Computational Methods for Large-scale Structural Analysis and Design, 2nd, Norfolk, VA, US

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O. (Editor); Housner, Jerrold M. (Editor)

    1993-01-01

    Computing speed is leaping forward by several orders of magnitude each decade. Engineers and scientists gathered at a NASA Langley symposium to discuss these exciting trends as they apply to parallel computational methods for large-scale structural analysis and design. Among the topics discussed were: large-scale static analysis; dynamic, transient, and thermal analysis; domain decomposition (substructuring); and nonlinear and numerical methods.

  8. Peridynamic Multiscale Finite Element Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Timothy; Bond, Stephen D.; Littlewood, David John

    The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The problem can be broken into component problems comprised of communicating across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between a) quantum electronic structure calculations and molecular dynamics and between b) molecular dynamics and local partial differential equation models at the design scale. The second step, b), is particularly challenging since the appropriate scales of molecular dynamics and local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales limiting from the classical partial differential equation models valid at the design scale to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse-scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method. Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics, there is a strong desire to couple local and nonlocal models to leverage the speed and state of the art of local models with the flexibility and accuracy of the nonlocal peridynamic model. In the mixed locality method this coupling occurs across scales, so that the nonlocal model can be used to communicate material heterogeneity at scales inappropriate to local partial differential equation models. Additionally, the computational burden of the weak form of the peridynamic model is reduced dramatically by only requiring that the model be solved on local patches of the simulation domain, which may be computed in parallel, taking advantage of the heterogeneous nature of next generation computing platforms. Additionally, we present a novel Galerkin framework, the 'Ambulant Galerkin Method', which represents a first step towards a unified mathematical analysis of local and nonlocal multiscale finite element methods, and whose future extension will allow the analysis of multiscale finite element methods that mix models across scales under certain assumptions of the consistency of those models.

  9. Multiscale Modeling in the Clinic: Drug Design and Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clancy, Colleen E.; An, Gary; Cannon, William R.

    A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.

  10. Study on the millimeter-wave scale absorber based on the Salisbury screen

    NASA Astrophysics Data System (ADS)

    Yuan, Liming; Dai, Fei; Xu, Yonggang; Zhang, Yuan

    2018-03-01

    In order to solve the problem of the millimeter-wave scale absorber, the Salisbury screen absorber is employed and designed based on the reflection loss (RL). By optimizing parameters including the sheet resistance of the surface resistive layer, the permittivity, and the thickness of the grounded dielectric layer, the RL of the Salisbury screen absorber can be made identical to that of the theoretical scale absorber. An example is given to verify the effectiveness of the method, where the Salisbury screen absorber is designed by the proposed method and compared with the theoretical scale absorber. Meanwhile, plate models and tri-corner reflector (TCR) models are constructed according to the design result and their scattering properties are simulated with FEKO. Results reveal that the deviation between the designed Salisbury screen absorber and the theoretical scale absorber falls within the tolerance of radar cross section (RCS) measurement. The work in this paper has important theoretical and practical significance for electromagnetic measurement at large scale ratios.
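
    The Salisbury screen response that such a design optimizes follows directly from transmission-line theory: a resistive sheet in parallel with a grounded dielectric spacer seen as a shorted line. A generic sketch of that textbook model (not the authors' code; the example sheet resistance, permittivity, and band are illustrative):

```python
import numpy as np

ETA0 = 376.73  # free-space wave impedance, ohms
C0 = 2.998e8   # speed of light, m/s

def salisbury_rl(f, rs, eps_r, d):
    """Normal-incidence reflection loss (dB) of a Salisbury screen: a
    resistive sheet (rs, ohms/square) over a lossless non-magnetic
    grounded dielectric spacer (eps_r, thickness d in m)."""
    eta_d = ETA0 / np.sqrt(eps_r)            # spacer wave impedance
    k_d = 2.0 * np.pi * f * np.sqrt(eps_r) / C0
    z_short = 1j * eta_d * np.tan(k_d * d)   # grounded spacer = shorted line
    z_in = z_short * rs / (z_short + rs)     # resistive sheet in parallel
    gamma = (z_in - ETA0) / (z_in + ETA0)
    return 20.0 * np.log10(np.abs(gamma))    # negative values = absorption

# Quarter-wave screen tuned near 35 GHz with an air-like foam spacer:
f = np.linspace(20e9, 50e9, 5)
d = C0 / (4 * 35e9 * np.sqrt(1.05))
print(salisbury_rl(f, rs=377.0, eps_r=1.05, d=d))  # deepest RL at 35 GHz
```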

  11. SCALE-UP OF RAPID SMALL-SCALE ADSORPTION TESTS TO FIELD-SCALE ADSORBERS: THEORETICAL BASIS AND EXPERIMENTAL RESULTS FOR A CONSTANT DIFFUSIVITY

    EPA Science Inventory

    Granular activated carbon (GAC) is an effective treatment technique for the removal of some toxic organics from drinking water or wastewater; however, it can be a relatively expensive process, especially if it is designed improperly. A rapid method for the design of large-scale f...

  12. The VolturnUS 1:8 Floating Wind Turbine: Design, Construction, Deployment, Testing, Retrieval, and Inspection of the First Grid-Connected Offshore Wind Turbine in US

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dagher, Habib; Viselli, Anthony; Goupee, Andrew

    Volume II of the Final Report for the DeepCwind Consortium National Research Program funded by US Department of Energy Award Number DE-EE0003278.001 summarizes the design, construction, deployment, testing, numerical model validation, retrieval, and post-deployment inspection of the VolturnUS 1:8-scale floating wind turbine prototype deployed off Castine, Maine on June 2nd, 2013. The 1:8-scale VolturnUS design served as a de-risking exercise for a commercial multi-MW VolturnUS design. The American Bureau of Shipping Guide for Building and Classing Floating Offshore Wind Turbine Installations was used to design the prototype. The same analysis methods, design methods, construction techniques, deployment methods, mooring, and anchoring planned for full scale were used. A commercial 20 kW grid-connected turbine was used and was the first offshore wind turbine in the US.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhinefrank, Kenneth E.; Lenee-Bluhm, Pukha; Prudell, Joseph H.

    The most prudent path to a full-scale design, build, and deployment of a wave energy conversion (WEC) system involves establishing validated numerical models using physical experiments in a methodical scaling program. This project provides essential additional rounds of wave tank testing at 1:33 scale and ocean/bay testing at 1:7 scale, necessary to validate the numerical modeling that is essential to a utility-scale WEC design and its associated certification.
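
    For context, wave tank campaigns like this conventionally translate model measurements to full scale with Froude similitude. A minimal sketch of the standard multipliers (illustrative, not project code):

```python
def froude_multiplier(scale, quantity):
    """Froude-similitude multipliers for translating model-scale WEC
    measurements to full scale (scale = L_full / L_model, same fluid)."""
    exponents = {"length": 1.0, "time": 0.5, "velocity": 0.5,
                 "force": 3.0, "power": 3.5}
    return scale ** exponents[quantity]

# A 1:33 tank model absorbing 10 W corresponds to roughly
# 10 * 33**3.5 W, i.e. about 2 MW, at full scale.
print(10.0 * froude_multiplier(33.0, "power") / 1e6, "MW")
```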

  14. Designing the Nuclear Energy Attitude Scale.

    ERIC Educational Resources Information Center

    Calhoun, Lawrence; And Others

    1988-01-01

    Presents a refined method for designing a valid and reliable Likert-type scale to test attitudes toward the generation of electricity from nuclear energy. Discusses various tests of validity that were used on the nuclear energy scale. Reports results of administration and concludes that the test is both reliable and valid. (CW)

  15. Method and apparatus for decoupled thermo-catalytic pollution control

    DOEpatents

    Tabatabaie-Raissi, Ali; Muradov, Nazim Z.; Martin, Eric

    2006-07-11

    A new method for the design and scale-up of thermocatalytic processes is disclosed. The method is based on optimizing process energetics by decoupling them from the destruction and removal efficiency (DRE) for target contaminants. The technique is applicable to high-temperature thermocatalytic reactor design and scale-up. The method relies on the implementation of polymeric and other low-pressure-drop supports for the thermocatalytic media, as well as multifunctional catalytic media, in conjunction with a novel rotating fluidized particle bed reactor.

  16. Scale and scaling in agronomy and environmental sciences

    USDA-ARS?s Scientific Manuscript database

    Scale is of paramount importance in environmental studies, engineering, and design. The unique course covers the following topics: scale and scaling, methods and theories, scaling in soils and other porous media, scaling in plants and crops; scaling in landscapes and watersheds, and scaling in agro...

  17. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    NASA Astrophysics Data System (ADS)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis, (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  18. [Discussion on development of four diagnostic information scale for clinical re-evaluation of postmarketing herbs].

    PubMed

    He, Wei; Xie, Yanming; Wang, Yongyan

    2011-12-01

    Post-marketing re-evaluation of Chinese herbs can well reflect the characteristics of Chinese medicine, yet it is the most easily overlooked part of clinical re-evaluation. Because little attention has been paid to it, research on the corresponding clinical trial design methods has lagged, making it difficult to improve the effectiveness and safety of traditional Chinese medicine (TCM). Therefore, more attention should be paid to clinical trial design methods for TCM syndrome in post-marketing re-evaluation, including the type of research program design, the design of Chinese medical information collection scales, and statistical analysis methods, so as to improve the status of clinical trial design method research for the post-marketing re-evaluation of Chinese herbs.

  19. Methodological Issues in Questionnaire Design.

    PubMed

    Song, Youngshin; Son, Youn Jung; Oh, Doonam

    2015-06-01

    The process of designing a questionnaire is complicated. Many questionnaires on nursing phenomena have been developed and used by nursing researchers. The purpose of this paper was to discuss questionnaire design and factors that should be considered when using existing scales. Methodological issues were discussed, such as factors in the design of questions, steps in developing questionnaires, wording and formatting methods for items, and administration methods. How to use existing scales, how to facilitate cultural adaptation, and how to prevent socially desirable responding were discussed. Moreover, the triangulation method in questionnaire development was introduced. Steps were recommended for designing questions, such as appropriately operationalizing key concepts for the target population, clearly formatting response options, generating items and confirming final items through face or content validity, sufficiently piloting the questionnaire using item analysis, demonstrating reliability and validity, finalizing the scale, and training the administrator. Psychometric properties and cultural equivalence should be evaluated prior to administration when using an existing questionnaire and performing cultural adaptation. In the context of well-defined nursing phenomena, logical and systematic methods will contribute to the development of simple and precise questionnaires.

  20. Scale factor measure method without turntable for angular rate gyroscope

    NASA Astrophysics Data System (ADS)

    Qi, Fangyi; Han, Xuefei; Yao, Yanqing; Xiong, Yuting; Huang, Yuqiong; Wang, Hua

    2018-03-01

    In this paper, a scale factor test method that requires no turntable is designed for the angular rate gyroscope. A test system consisting of a test device, a data acquisition circuit, and data processing software based on the LabVIEW platform is designed. Taking advantage of the gyroscope's sensitivity to angular rate, a gyroscope with a known scale factor serves as a standard gyroscope. The standard gyroscope is installed on the test device together with the gyroscope under test. By rocking the test device about the edge parallel to the input axes of the gyroscopes, the scale factor of the measured gyroscope is obtained in real time by the data processing software. The method is fast and keeps the test system miniaturized and easy to carry or move. Measuring a quartz MEMS gyroscope's scale factor multiple times with this method, the spread is less than 0.2%; compared with turntable testing, the scale factor difference is less than 1%. The accuracy and repeatability of the test system are therefore good.
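
    A minimal sketch of the ratio estimate implied by this setup, assuming gyro output is modeled as voltage = scale factor times rate (the modeling convention, function name, and numbers are illustrative assumptions, not from the paper):

```python
import numpy as np

def scale_factor_from_reference(v_meas, v_std, sf_std):
    """Estimate a gyro scale factor without a turntable: both gyros ride
    the same rocked fixture and sense the same angular rate. Modeling
    output as v = sf * rate gives v_meas ~= (sf_meas / sf_std) * v_std,
    so a zero-intercept least-squares fit recovers the ratio."""
    slope = np.dot(v_meas, v_std) / np.dot(v_std, v_std)
    return slope * sf_std

# Synthetic check: reference gyro 5.0 mV/(deg/s), unit under test 5.1 mV/(deg/s).
rng = np.random.default_rng(0)
rate = 50.0 * np.sin(np.linspace(0.0, 20.0, 2000))    # shared rocking rate, deg/s
v_std = 5.0e-3 * rate                                  # reference output, V
v_meas = 5.1e-3 * rate + rng.normal(0.0, 1e-3, 2000)   # noisy output under test, V
print(scale_factor_from_reference(v_meas, v_std, 5.0e-3))  # ~5.1e-3 V/(deg/s)
```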

  1. State of the Art Methodology for the Design and Analysis of Future Large Scale Evaluations: A Selective Examination.

    ERIC Educational Resources Information Center

    Burstein, Leigh

    Two specific methods of analysis in large-scale evaluations are considered: structural equation modeling and selection modeling/analysis of non-equivalent control group designs. Their utility in large-scale educational program evaluation is discussed. The examination of these methodological developments indicates how people (evaluators,…

  2. Acoustic Treatment Design Scaling Methods. Volume 3; Test Plans, Hardware, Results, and Evaluation

    NASA Technical Reports Server (NTRS)

    Yu, J.; Kwan, H. W.; Echternach, D. K.; Kraft, R. E.; Syed, A. A.

    1999-01-01

    The ability to design, build, and test miniaturized acoustic treatment panels on scale-model fan rigs representative of the full-scale engine provides not only a cost-savings, but an opportunity to optimize the treatment by allowing tests of different designs. To be able to use scale model treatment as a full-scale design tool, it is necessary that the designer be able to reliably translate the scale model design and performance to an equivalent full-scale design. The primary objective of the study presented in this volume of the final report was to conduct laboratory tests to evaluate liner acoustic properties and validate advanced treatment impedance models. These laboratory tests include DC flow resistance measurements, normal incidence impedance measurements, DC flow and impedance measurements in the presence of grazing flow, and in-duct liner attenuation as well as modal measurements. Test panels were fabricated at three different scale factors (i.e., full-scale, half-scale, and one-fifth scale) to support laboratory acoustic testing. The panel configurations include single-degree-of-freedom (SDOF) perforated sandwich panels, SDOF linear (wire mesh) liners, and double-degree-of-freedom (DDOF) linear acoustic panels.

  3. The Alzheimer's Disease Knowledge Scale: Development and Psychometric Properties

    ERIC Educational Resources Information Center

    Carpenter, Brian D.; Balsis, Steve; Otilingam, Poorni G.; Hanson, Priya K.; Gatz, Margaret

    2009-01-01

    Purpose: This study provides preliminary evidence for the acceptability, reliability, and validity of the new Alzheimer's Disease Knowledge Scale (ADKS), a content and psychometric update to the Alzheimer's Disease Knowledge Test. Design and Methods: Traditional scale development methods were used to generate items and evaluate their psychometric…

  4. Separate versus Concurrent Calibration Methods in Vertical Scaling.

    ERIC Educational Resources Information Center

    Karkee, Thakur; Lewis, Daniel M.; Hoskens, Machteld; Yao, Lihua; Haug, Carolyn

    Two methods to establish a common scale across grades within a content area using a common item design (separate and concurrent) have previously been studied under simulated conditions. Separate estimation is accomplished through separate calibration and grade-by-grade chained linking. Concurrent calibration established the vertical scale in a…

  5. A Unified Approach to IRT Scale Linking and Scale Transformations. Research Report. RR-04-09

    ERIC Educational Resources Information Center

    von Davier, Matthias; von Davier, Alina A.

    2004-01-01

    This paper examines item response theory (IRT) scale transformations and IRT scale linking methods used in the Non-Equivalent Groups with Anchor Test (NEAT) design to equate two tests, X and Y. It proposes a unifying approach to the commonly used IRT linking methods: mean-mean, mean-var linking, concurrent calibration, Stocking and Lord and…

  6. Design and analysis of tubular permanent magnet linear generator for small-scale wave energy converter

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young

    2017-05-01

    This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter (WEC). The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on the analytical solutions, parametric analysis is performed to meet the design specifications of the WEC. Then, 2-D finite element analysis (FEA) is employed to validate the analytical method. Finally, experimental results confirm the predictions of the analytical and FEA methods under regular and irregular wave conditions.

  7. Apparatus for decoupled thermo-photocatalytic pollution control

    DOEpatents

    Tabatabaie-Raissi, Ali; Muradov, Nazim Z.; Martin, Eric

    2003-04-22

    A new method for design and scale-up of photocatalytic and thermocatalytic processes is disclosed. The method is based on optimizing photoprocess energetics by decoupling of the process energy efficiency from the DRE for target contaminants. The technique is applicable to photo-thermocatalytic reactor design and scale-up. At low irradiance levels, the method is based on the implementation of low pressure drop biopolymeric and synthetic polymeric support for titanium dioxide and other band-gap media. At high irradiance levels, the method utilizes multifunctional metal oxide aerogels and other media within a novel rotating fluidized particle bed reactor.

  8. Binary optical filters for scale invariant pattern recognition

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Downie, John D.; Hine, Butler P.

    1992-01-01

    Binary synthetic discriminant function (BSDF) optical filters which are invariant to scale changes in the target object of more than 50 percent are demonstrated in simulation and experiment. Efficient databases of scale invariant BSDF filters can be designed which discriminate between two very similar objects at any view scaled over a factor of 2 or more. The BSDF technique has considerable advantages over other methods for achieving scale invariant object recognition, as it also allows determination of the object's scale. In addition to scale, the technique can be used to design recognition systems invariant to other geometric distortions.

  9. Plant Disease Severity Assessment-How Rater Bias, Assessment Method, and Experimental Design Affect Hypothesis Testing and Resource Use Efficiency.

    PubMed

    Chiang, Kuo-Szu; Bock, Clive H; Lee, I-Hsuan; El Jarroudi, Moussa; Delfosse, Philippe

    2016-12-01

    The effect of rater bias and assessment method on hypothesis testing was studied for representative experimental designs for plant disease assessment using balanced and unbalanced data sets. Data sets with the same number of replicate estimates for each of two treatments are termed "balanced" and those with unequal numbers of replicate estimates are termed "unbalanced". The three assessment methods considered were nearest percent estimates (NPEs), an amended 10% incremental scale, and the Horsfall-Barratt (H-B) scale. Estimates of severity of Septoria leaf blotch on leaves of winter wheat were used to develop distributions for a simulation model. The experimental designs are presented here in the context of simulation experiments which consider the optimal design for the number of specimens (individual units sampled) and the number of replicate estimates per specimen for a fixed total number of observations (total sample size for the treatments being compared). The criterion used to gauge each method was the power of the hypothesis test. As expected, at a given fixed number of observations, the balanced experimental designs invariably resulted in a higher power compared with the unbalanced designs at different disease severity means, mean differences, and variances. Based on these results, with unbiased estimates using NPE, the recommended number of replicate estimates taken per specimen is 2 (from a sample of specimens of at least 30), because this conserves resources. Furthermore, for biased estimates, an apparent difference in the power of the hypothesis test was observed between assessment methods and between experimental designs. Results indicated that, regardless of experimental design or rater bias, an amended 10% incremental scale has slightly less power compared with NPEs, and that the H-B scale is more likely than the others to cause a type II error. These results suggest that choice of assessment method, optimizing sample number and number of replicate estimates, and using a balanced experimental design are important criteria to consider to maximize the power of hypothesis tests for comparing treatments using disease severity estimates.
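
    A toy version of this kind of power simulation, assuming normally distributed severity estimates rather than the paper's NPE/H-B rating distributions (all numbers illustrative), reproduces the balanced-allocation effect:

```python
import numpy as np
from scipy import stats

def power_sim(mu1, mu2, sd, n1, n2, reps=4000, alpha=0.05, seed=1):
    """Monte Carlo power of a two-sample t-test comparing mean disease
    severity between two treatments, for a given allocation (n1, n2)
    of a fixed total number of observations."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        a = rng.normal(mu1, sd, n1)
        b = rng.normal(mu2, sd, n2)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / reps

# 60 total observations: the balanced split yields higher power than
# the unbalanced one at the same means and variance.
print(power_sim(10.0, 15.0, 8.0, 30, 30))  # balanced
print(power_sim(10.0, 15.0, 8.0, 45, 15))  # unbalanced, lower power
```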

  10. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
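
    A scalar sketch of the GLA idea, reconstructed from the description above: scale the crude model by a factor that varies linearly in the design variable, matched so that value and slope agree with the refined model at the current design point. Function names and the toy models are illustrative:

```python
def make_gla(f_low, df_low, f_high_x0, df_high_x0, x0):
    """Global-local approximation (scalar sketch): refine a cheap model
    f_low by a linearly varying scaling factor beta(x), chosen so the
    product matches the refined model's value and derivative at x0."""
    beta0 = f_high_x0 / f_low(x0)
    # derivative of beta = f_high / f_low at x0 (quotient rule)
    dbeta = (df_high_x0 * f_low(x0) - f_high_x0 * df_low(x0)) / f_low(x0) ** 2
    return lambda x: (beta0 + dbeta * (x - x0)) * f_low(x)

# Toy check: crude model x^2 standing in for refined model x^2 + x.
f_lo = lambda x: x * x
df_lo = lambda x: 2.0 * x
x0 = 2.0
approx = make_gla(f_lo, df_lo, f_high_x0=x0**2 + x0, df_high_x0=2 * x0 + 1, x0=x0)
# Exact at x0 (6.0); at x = 2.5 gives ~8.59 vs the true 8.75, while a
# constant scaling factor would give 9.375.
print(approx(2.0), approx(2.5))
```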

  11. Similitude design for the vibration problems of plates and shells: A review

    NASA Astrophysics Data System (ADS)

    Zhu, Yunpeng; Wang, You; Luo, Zhong; Han, Qingkai; Wang, Deyou

    2017-06-01

    Similitude design plays a vital role in the analysis of vibration and shock problems encountered in large engineering equipment. Similitude design, comprising dimensional analysis and the governing-equation method, is founded on dynamic similitude theory. This study reviews the application of similitude design methods in engineering practice and summarizes the major achievements of dynamic similitude theory for structural vibration and shock problems in different fields, including marine structures, civil engineering structures, and large power equipment. This study also reviews dynamic similitude design methods for thin-walled and composite-material plates and shells, including the most recent work published by the authors. Structure sensitivity analysis is used to evaluate the scaling factors to attain accurate distorted scaling laws. Finally, this study discusses existing problems and the potential of dynamic similitude theory for the analysis of vibration and shock problems of structures.
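
    As one concrete instance of the scaling laws such reviews catalog, complete geometric similitude for thin elastic plates gives a closed-form frequency multiplier. The sketch below assumes the standard thin-plate relation f proportional to (h/a^2)*sqrt(E/(rho*(1-nu^2))) with equal Poisson ratio on both scales; it is illustrative, not taken from the review:

```python
def predict_prototype_frequency(f_model, lam_length, lam_thickness=None,
                                lam_E=1.0, lam_rho=1.0):
    """Complete-similitude scaling for thin-plate natural frequencies:

        f_prototype = f_model * (lam_h / lam_a**2) * sqrt(lam_E / lam_rho)

    where lam_x = x_prototype / x_model. Geometric similitude
    (lam_h = lam_a) is assumed when no thickness ratio is given."""
    lam_thickness = lam_length if lam_thickness is None else lam_thickness
    return f_model * (lam_thickness / lam_length ** 2) * (lam_E / lam_rho) ** 0.5

# A 1:5 model (all prototype dimensions 5x larger, same material):
# a 100 Hz model mode maps to a 20 Hz prototype mode.
print(predict_prototype_frequency(100.0, lam_length=5.0))
```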

  12. A Large-Scale Blended and Flipped Class: Class Design and Investigation of Factors Influencing Students' Intention to Learn

    ERIC Educational Resources Information Center

    Zhang, Yulei; Dang, Yan; Amer, Beverly

    2016-01-01

    This paper reports a study of a large-scale blended and flipped class and has two major parts. First, it presents the design of the class, i.e., a coordinated and multisection undergraduate introduction-to-computer-information-systems course. The detailed design of various teaching methods used in the class is presented, including a digital…

  13. Scaling Agile Methods for Department of Defense Programs

    DTIC Science & Technology

    2016-12-01

    …concepts that drive the design of scaling frameworks, the contextual drivers that shape implementation, and widely known frameworks available today… Barlow probably governs some of the design choices you make. Barlow's formula helps us understand the relationship between the outside diameter of a… encouraged to cross-train engineering staff and move away from a team structure where people focus on only one specialty, such as design…

  14. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods, and without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with randomly introduced choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation proves that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
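
    A small Monte Carlo in the spirit of the paper, assuming Thurstone Case V scaling (binomial choice errors injected into every pair, scale values recovered from column means of the z-transformed proportions); the true scale values and sample sizes are illustrative:

```python
import numpy as np
from scipy.stats import norm

def simulate_scaling_error(true_scale, n_obs, reps=500, seed=2):
    """Average error spread of Thurstone Case V scaled values when each
    paired comparison is observed only n_obs times (binomial errors)."""
    rng = np.random.default_rng(seed)
    p_true = norm.cdf(true_scale[:, None] - true_scale[None, :])
    errs = []
    for _ in range(reps):
        p_hat = rng.binomial(n_obs, p_true) / n_obs
        p_hat = np.clip(p_hat, 0.5 / n_obs, 1 - 0.5 / n_obs)  # avoid infinite z
        z = norm.ppf(p_hat)
        np.fill_diagonal(z, 0.0)
        s_hat = z.mean(axis=1)            # Case V least-squares solution
        s_hat -= s_hat.mean()             # fix the arbitrary origin
        errs.append(s_hat - (true_scale - true_scale.mean()))
    return np.std(errs)

# Error shrinks as the number of observations per pair grows:
print(simulate_scaling_error(np.array([0.0, 0.3, 0.8, 1.5]), n_obs=20))
print(simulate_scaling_error(np.array([0.0, 0.3, 0.8, 1.5]), n_obs=200))
```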

  15. Psychometric properties of an instrument to measure nursing students' quality of life.

    PubMed

    Chu, Yanxiang; Xu, Min; Li, Xiuyun

    2015-07-01

    It is important for clinical nursing teachers and managers to recognize the importance of nursing students' quality of life (QOL) since they are the source of future nurses. As yet, there is no quality of life evaluation scale (QOLES) specific to them. This study designed a quantitative instrument for evaluating QOL of nursing students. The study design was a descriptive survey with mixed methods including literature review, panel discussion, Delphi method, and statistical analysis. The data were collected from 880 nursing students from four teaching hospitals in Wuhan, China. The reliability and validity of the scale were tested through completion of the QOLES in a cluster sampling method. The total scale included 18 items in three domains: physical, psychological, and social functional. The cumulative contributing rate of the three common factors was 65.23%. Cronbach's alpha coefficient of the scale was 0.82. This scale had good reliability and validity to evaluate nursing students' QOL. Copyright © 2015 Elsevier Ltd. All rights reserved.
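
    For reference, the internal-consistency statistic reported above is computed as follows; this is the standard Cronbach's alpha formula, shown with a toy score matrix for illustration:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_vars / total_var)

# Toy 5-respondent, 4-item example (highly consistent items):
scores = [[3, 4, 3, 4], [2, 2, 3, 2], [4, 5, 4, 4], [1, 2, 1, 2], [3, 3, 4, 3]]
print(round(cronbach_alpha(scores), 2))  # ~0.94
```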

  16. An Integrated Computational Materials Engineering Method for Woven Carbon Fiber Composites Preforming Process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Weizhao; Ren, Huaqing; Wang, Zequn

    2016-10-19

    An integrated computational materials engineering method is proposed in this paper for analyzing the design and preforming process of woven carbon fiber composites. The goal is to reduce the cost and time needed for the mass production of structural composites. It integrates the simulation methods from the micro-scale to the macro-scale to capture the behavior of the composite material in the preforming process. In this way, the time consuming and high cost physical experiments and prototypes in the development of the manufacturing process can be circumvented. This method contains three parts: the micro-scale representative volume element (RVE) simulation to characterize the material; the metamodeling algorithm to generate the constitutive equations; and the macro-scale preforming simulation to predict the behavior of the composite material during forming. The results show the potential of this approach as a guidance to the design of composite materials and its manufacturing process.

  17. Experimental Methodology for Measuring Combustion and Injection-Coupled Responses

    NASA Technical Reports Server (NTRS)

    Cavitt, Ryan C.; Frederick, Robert A.; Bazarov, Vladimir G.

    2006-01-01

    A Russian scaling methodology for liquid rocket engines utilizing a single, full-scale element is reviewed. The scaling methodology exploits the supercritical phase of the full-scale propellants to simplify scaling requirements. Many assumptions are utilized in the derivation of the scaling criteria. A test apparatus design is presented to implement the Russian methodology and consequently verify the assumptions. This test apparatus will allow researchers to assess the usefulness of the scaling procedures and possibly enhance the methodology. A matrix of the apparatus capabilities for an RD-170 injector is also presented. Several methods to enhance the methodology have been generated through the design process.

  18. Aerodynamic design on high-speed trains

    NASA Astrophysics Data System (ADS)

    Ding, San-San; Li, Qiang; Tian, Ai-Qin; Du, Jian; Liu, Jia-Li

    2016-04-01

    Compared with the traditional train, the operational speed of the high-speed train has largely improved, and the dynamic environment of the train has changed from one of mechanical domination to one of aerodynamic domination. The aerodynamic problem has become the key technological challenge of high-speed trains and significantly affects the economy, environment, safety, and comfort. In this paper, the relationships among the aerodynamic design principle, aerodynamic performance indexes, and design variables are first studied, and the research methods of train aerodynamics are proposed, including numerical simulation, a reduced-scale test, and a full-scale test. Technological schemes of train aerodynamics involve the optimization design of the streamlined head and the smooth design of the body surface. Optimization design of the streamlined head includes conception design, project design, numerical simulation, and a reduced-scale test. Smooth design of the body surface is mainly used for the key parts, such as electric-current collecting system, wheel truck compartment, and windshield. The aerodynamic design method established in this paper has been successfully applied to various high-speed trains (CRH380A, CRH380AM, CRH6, CRH2G, and the Standard electric multiple unit (EMU)) that have met expected design objectives. The research results can provide an effective guideline for the aerodynamic design of high-speed trains.

  19. Selecting a proper design period for heliostat field layout optimization using Campo code

    NASA Astrophysics Data System (ADS)

    Saghafifar, Mohammad; Gadalla, Mohamed

    2016-09-01

    In this paper, different approaches are considered for calculating the cosine factor, which is utilized in the Campo code to expand the heliostat field layout and maximize its annual thermal output. Furthermore, three heliostat fields containing different numbers of mirrors are taken into consideration. The cosine factor is determined using instantaneous and time-averaged approaches. For the instantaneous method, different design days and design hours are selected. For the time-averaged method, daily, monthly, seasonal, and yearly time-averaged cosine factor determinations are considered. Results indicate that instantaneous methods are more appropriate for small-scale heliostat field optimization. Consequently, it is proposed to consider the design period as a second design variable to ensure the best outcome. For medium- and large-scale heliostat fields, selecting an appropriate design period is more important; therefore, it is more reliable to select one of the recommended time-averaged methods to optimize the field layout. Optimum annual weighted efficiencies for the small, medium, and large heliostat fields containing 350, 1460, and 3450 mirrors are 66.14%, 60.87%, and 54.04%, respectively.
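
    The cosine factor at the heart of these design-period choices is purely geometric: the mirror normal must bisect the sun and receiver directions, so the projected-area efficiency is cos(theta), with 2*theta the angle between the two directions. A generic sketch (not Campo code; the positions and sun vector are illustrative):

```python
import numpy as np

def cosine_factor(sun_dir, heliostat_pos, receiver_pos):
    """Instantaneous cosine factor of one heliostat. The mirror normal
    bisects the sun direction and the heliostat-to-receiver direction,
    giving cos(theta) with cos(2*theta) = s . t."""
    s = sun_dir / np.linalg.norm(sun_dir)        # unit vector toward the sun
    t = receiver_pos - heliostat_pos
    t = t / np.linalg.norm(t)                    # unit vector toward receiver
    cos_2theta = np.dot(s, t)
    return np.sqrt((1.0 + cos_2theta) / 2.0)     # half-angle identity

# Example in a local frame: receiver 80 m up on a tower at the origin,
# heliostat 100 m away on the ground, sun 60 degrees above the horizon.
sun = np.array([0.0, -0.5, np.sin(np.radians(60))])
print(cosine_factor(sun, np.array([0.0, 100.0, 0.0]),
                    np.array([0.0, 0.0, 80.0])))  # ~0.98
```

    A time-averaged design value is then just this quantity averaged over the chosen design hours or days, which is where the paper's choice of design period enters.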

  20. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
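
    The cost argument can be made concrete on a toy discrete problem: one adjoint solve yields the sensitivity of the output to every parameter at once. A minimal sketch under a linear-residual assumption (illustrative only, not the production CFD implementation):

```python
import numpy as np

# Discrete-adjoint sketch: residual R(u, a) = A u - b(a) = 0, output
# J(u) = c^T u. Solving (dR/du)^T lam = dJ/du once gives
# dJ/da = -lam^T dR/da for *all* parameters, independent of their number.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 2.0])

def b(a):                                      # parameter-dependent forcing
    return np.array([a[0], a[0] * a[1]])

a = np.array([1.0, 2.0])
u = np.linalg.solve(A, b(a))                   # forward solve
lam = np.linalg.solve(A.T, c)                  # single adjoint solve
dRda = -np.array([[1.0, 0.0], [a[1], a[0]]])   # dR/da = -db/da
dJda = -lam @ dRda                             # sensitivities w.r.t. both params
print(dJda)

# Finite-difference check on the first parameter:
eps = 1e-6
print((c @ np.linalg.solve(A, b(a + np.array([eps, 0.0]))) - c @ u) / eps)
```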

  1. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  2. Psychometric Properties of the Scientific Inquiry Scale

    ERIC Educational Resources Information Center

    Ossa-Cornejo, Carlos; Díaz-Mujica, Alejandro; Aedo-Saravia, Jaime; Merino-Escobar, Jose M.; Bustos-Navarrete, Claudio

    2017-01-01

    Introduction: There are few methods for studying inquiry abilities in Chile, despite their importance in science education. This study analyzes the psychometric properties of a Scientific Inquiry Scale in pedagogy students at two Chilean universities. Method: The study uses an instrumental design with 325 students from 3 pedagogy majors. As a…

  3. Materials-by-design: computation, synthesis, and characterization from atoms to structures

    NASA Astrophysics Data System (ADS)

    Yeo, Jingjie; Jung, Gang Seob; Martín-Martínez, Francisco J.; Ling, Shengjie; Gu, Grace X.; Qin, Zhao; Buehler, Markus J.

    2018-05-01

    In the 50 years that succeeded Richard Feynman’s exposition of the idea that there is ‘plenty of room at the bottom’ for manipulating individual atoms for the synthesis and manufacturing processing of materials, the materials-by-design paradigm is being developed gradually through synergistic integration of experimental material synthesis and characterization with predictive computational modeling and optimization. This paper reviews how this paradigm creates the possibility to develop materials according to specific, rational designs from the molecular to the macroscopic scale. We discuss promising techniques in experimental small-scale material synthesis and large-scale fabrication methods to manipulate atomistic or macroscale structures, which can be designed by computational modeling. These include recombinant protein technology to produce peptides and proteins with tailored sequences encoded by recombinant DNA, self-assembly processes induced by conformational transition of proteins, additive manufacturing for designing complex structures, and qualitative and quantitative characterization of materials at different length scales. We describe important material characterization techniques using numerous methods of spectroscopy and microscopy. We detail numerous multi-scale computational modeling techniques that complement these experimental techniques: DFT at the atomistic scale; fully atomistic and coarse-grain molecular dynamics at the molecular to mesoscale; continuum modeling at the macroscale. Additionally, we present case studies that utilize experimental and computational approaches in an integrated manner to broaden our understanding of the properties of two-dimensional materials and materials based on silk and silk-elastin-like proteins.

  4. Experimental design and quantitative analysis of microbial community multiomics.

    PubMed

    Mallick, Himel; Ma, Siyuan; Franzosa, Eric A; Vatanen, Tommi; Morgan, Xochitl C; Huttenhower, Curtis

    2017-11-30

    Studies of the microbiome have become increasingly sophisticated, and multiple sequence-based, molecular methods as well as culture-based methods exist for population-scale microbiome profiles. To link the resulting host and microbial data types to human health, several experimental design considerations, data analysis challenges, and statistical epidemiological approaches must be addressed. Here, we survey current best practices for experimental design in microbiome molecular epidemiology, including technologies for generating, analyzing, and integrating microbiome multiomics data. We highlight studies that have identified molecular bioactives that influence human health, and we suggest steps for scaling translational microbiome research to high-throughput target discovery across large populations.

  5. Evaluation of Alternative Altitude Scaling Methods for Thermal Ice Protection System in NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Lee, Sam; Addy, Harold E. Jr.; Broeren, Andy P.; Orchard, David M.

    2017-01-01

    A test was conducted at the NASA Icing Research Tunnel (IRT) to evaluate altitude scaling methods for a thermal ice protection system. Two new scaling methods based on the Weber number were compared against a method based on the Reynolds number. The results generally agreed with a previous set of tests conducted in the NRCC Altitude Icing Wind Tunnel (AIWT), where the three scaling methods were also tested and compared along with reference (altitude) icing conditions. In those tests, the Weber number-based scaling methods yielded results much closer to those observed at the reference icing conditions than the Reynolds number-based method did. The test in the NASA IRT used a much larger, asymmetric airfoil with an ice protection system that more closely resembled designs used in commercial aircraft. Following the trends observed during the AIWT tests, the Weber number-based scaling methods resulted in smaller runback ice than the Reynolds number-based scaling, and the ice formed farther upstream. The results show that the new Weber number-based scaling methods, particularly the Weber number with water loading scaling, continue to show promise for ice protection system development and evaluation in atmospheric icing tunnels.

  6. Probabilistic Multi-Scale, Multi-Level, Multi-Disciplinary Analysis and Optimization of Engine Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2000-01-01

    Aircraft engines are assemblies of dynamically interacting components. Engine updates to keep present aircraft flying safely, and engines for new aircraft, are progressively required to operate under more demanding technological and environmental requirements. Designs that effectively meet those requirements are necessarily collections of multi-scale, multi-level, multi-disciplinary analysis and optimization methods, and probabilistic methods are necessary to quantify the respective uncertainties. These types of methods are the only ones that can formally evaluate advanced composite designs which satisfy those progressively demanding requirements while assuring minimum cost, maximum reliability and maximum durability. Recent research activities at NASA Glenn Research Center have focused on developing multi-scale, multi-level, multi-disciplinary analysis and optimization methods. Multi-scale refers to formal methods which describe complex material behavior, metal or composite; multi-level refers to integration of participating disciplines to describe a structural response at the scale of interest; multi-disciplinary refers to an open-ended collection of existing and yet-to-be-developed discipline constructs required to formally predict/describe a structural response in engine operating environments. For example, these include but are not limited to: multi-factor models for material behavior, multi-scale composite mechanics, general purpose structural analysis, progressive structural fracture for evaluating durability and integrity, noise and acoustic fatigue, emission requirements, hot fluid mechanics, heat-transfer and probabilistic simulations. Many of these, as well as others, are encompassed in an integrated computer code identified as Engine Structures Technology Benefits Estimator (EST/BEST) or Multi-faceted/Engine Structures Optimization (MP/ESTOP). The discipline modules integrated in MP/ESTOP include: engine cycle (thermodynamics), engine weights, internal fluid mechanics, cost, mission and coupled structural/thermal, various composite property simulators and probabilistic methods to evaluate uncertainty effects (scatter ranges) in all the design parameters. The objective of the proposed paper is to briefly describe a multi-faceted design analysis and optimization capability for coupled multi-discipline engine structures optimization. Results are presented for engine and aircraft type metrics to illustrate the versatility of that capability. Results are also presented for reliability, noise and fatigue to illustrate its inclusiveness. For example, replacing metal rotors with composites reduces engine weight by 20 percent, reduces noise by 15 percent, and improves reliability by an order of magnitude. Composite designs exist to increase fatigue life by at least two orders of magnitude compared to state-of-the-art metals.

  7. Small scale green infrastructure design to meet different urban hydrological criteria.

    PubMed

    Jia, Z; Tang, S; Luo, W; Li, S; Zhou, M

    2016-04-15

    As small-scale green infrastructure, rain gardens have been widely advocated for urban stormwater management in the contemporary low impact development (LID) era. This paper presents a simple method, consisting of hydrological models and matching nomograph plots, to provide an informative and practical tool for rain garden sizing and hydrological evaluation. The proposed method considers design storms, infiltration rates, and the runoff contribution area ratio of the rain garden, allowing users to size a rain garden for a specific site with hydrological reference and to predict overflow of the rain garden under different storms. The nomographs provide a visual presentation of the sensitivity of the different design parameters. Subsequent application of the proposed method to a case study conducted in a sub-humid region of China showed that the method accurately predicted the design storms for the existing rain garden; the predicted overflows under large storm events were within 13-50% of the measured volumes. The results suggest that the nomograph approach is a practical tool for quick selection or assessment of design options that incorporate key hydrological parameters of rain gardens or other infiltration-type green infrastructure. The graphic approach displayed by the nomographs allows urban planners to demonstrate the hydrological effect of small-scale green infrastructure and gain more support for promoting low impact development. Copyright © 2016 Elsevier Ltd. All rights reserved.
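
    The water-balance reasoning behind such sizing can be sketched as a one-line rule. The model below is a generic event balance (ponded storage plus infiltration during the storm), an illustration rather than the paper's nomograph model, with assumed example numbers:

```python
def rain_garden_area_ratio(design_rain_mm, runoff_coeff, ponding_mm,
                           infil_rate_mm_h, storm_hours):
    """Minimum garden-to-catchment area ratio from a simple event balance:

        runoff_coeff * design_rain * A_catch
            <= (ponding + infil_rate * storm_hours) * A_garden

    i.e., the garden must hold the runoff it receives in ponded storage
    plus what infiltrates during the storm."""
    capacity_mm = ponding_mm + infil_rate_mm_h * storm_hours
    return runoff_coeff * design_rain_mm / capacity_mm

# 50 mm design storm over 2 h, paved catchment (C = 0.9),
# 150 mm ponding depth, loam infiltration ~15 mm/h:
print(rain_garden_area_ratio(50, 0.9, 150, 15, 2))  # ~0.25, i.e. a 1:4 ratio
```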

  8. Pharmacokinetic-Pharmacodynamic Modeling in Pediatric Drug Development, and the Importance of Standardized Scaling of Clearance.

    PubMed

    Germovsek, Eva; Barker, Charlotte I S; Sharland, Mike; Standing, Joseph F

    2018-04-19

    Pharmacokinetic/pharmacodynamic (PKPD) modeling is important in the design and conduct of clinical pharmacology research in children. During drug development, PKPD modeling and simulation should underpin rational trial design and facilitate extrapolation to investigate efficacy and safety. The application of PKPD modeling to optimize dosing recommendations and therapeutic drug monitoring is also increasing, and PKPD model-based dose individualization will become a core feature of personalized medicine. Following extensive progress on pediatric PK modeling, a greater emphasis now needs to be placed on PD modeling to understand age-related changes in drug effects. This paper discusses the principles of PKPD modeling in the context of pediatric drug development, summarizing how important PK parameters, such as clearance (CL), are scaled with size and age, and highlights a standardized method for CL scaling in children. One standard scaling method would facilitate comparison of PK parameters across multiple studies, thus increasing the utility of existing PK models and facilitating optimal design of new studies.
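
    The standardized CL scaling highlighted in the abstract is commonly implemented as allometric weight scaling combined with a sigmoid maturation function (after Anderson and Holford). A minimal sketch follows; the maturation half-time and Hill coefficient used below are illustrative rather than drug-specific values.

```python
# Sketch of allometric weight scaling plus sigmoid maturation for clearance,
# the kind of standardized CL scaling the paper advocates. TM50 and Hill
# values are illustrative, not drug-specific.
def scaled_clearance(cl_adult_l_per_h, weight_kg, pma_weeks,
                     tm50_weeks=47.7, hill=3.4):
    """Scale a 70 kg adult's clearance to a child of given weight and age."""
    size = (weight_kg / 70.0) ** 0.75                      # allometric exponent
    maturation = pma_weeks**hill / (pma_weeks**hill + tm50_weeks**hill)
    return cl_adult_l_per_h * size * maturation

# Term neonate at 4 weeks of postnatal age (PMA ~ 44 weeks), 4 kg:
print(round(scaled_clearance(10.0, weight_kg=4.0, pma_weeks=44), 2), "L/h")
```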

  9. A Large-Scale Design Integration Approach Developed in Conjunction with the Ares Launch Vehicle Program

    NASA Technical Reports Server (NTRS)

    Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.

    2012-01-01

    This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope and interface approach and applying it to modern three-dimensional computer-aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope and interface method reduces both the amount and complexity of information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes that benefited from the reduced development and design cycle time include creation of analysis models for the aerodynamics discipline, vehicle-to-ground interface development, and documentation development for the vehicle assembly.

  10. A New Method of Building Scale-Model Houses

    Treesearch

    Richard N. Malcolm

    1978-01-01

    Scale-model houses are used to display new architectural and construction designs. Some scale-model houses will not withstand the abuse of shipping and handling. This report describes how to build a solid-core model house which is rigid, lightweight, and sturdy.

  11. A procedural method for the efficient implementation of full-custom VLSI designs

    NASA Technical Reports Server (NTRS)

    Belk, P.; Hickey, N.

    1987-01-01

    An embedded language system for the layout of very large scale integration (VLSI) circuits is examined. It is shown that, through judicious use of this system, a large variety of circuits can be designed with circuit density and performance comparable to traditional full-custom design methods, but with design costs more comparable to semi-custom design methods. The high performance of this methodology is attributable to the flexibility of procedural descriptions of VLSI layouts and to a number of automatic and semi-automatic tools within the system.

  12. Development of a Drug Use Resistance Self-Efficacy (DURSE) Scale

    ERIC Educational Resources Information Center

    Carpenter, Carrie M.; Howard, Donna

    2009-01-01

    Objectives: To develop and evaluate psychometric properties of a new instrument, the drug use resistance self-efficacy (DURSE) scale, designed for young adolescents. Methods: Scale construction occurred in 3 phases: (1) initial development, (2) pilot testing of preliminary items, and (3) final scale administration among a sample of seventh graders…

  13. Reliability and Validity Testing of the Physical Resilience Measure

    ERIC Educational Resources Information Center

    Resnick, Barbara; Galik, Elizabeth; Dorsey, Susan; Scheve, Ann; Gutkin, Susan

    2011-01-01

    Objective: The purpose of this study was to test reliability and validity of the Physical Resilience Scale. Methods: A single-group repeated measure design was used and 130 older adults from three different housing sites participated. Participants completed the Physical Resilience Scale, Hardy-Gill Resilience Scale, 14-item Resilience Scale,…

  14. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge-based agents (KBAs) which engage in a collaborative problem-solving effort. The method provides a comprehensive and integrated approach to the development of this type of system, including a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefit of this approach is that requirements are traceable into design components and code, thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  15. A Principled Approach to the Specification of System Architectures for Space Missions

    NASA Technical Reports Server (NTRS)

    McKelvin, Mark L. Jr.; Castillo, Robert; Bonanne, Kevin; Bonnici, Michael; Cox, Brian; Gibson, Corrina; Leon, Juan P.; Gomez-Mustafa, Jose; Jimenez, Alejandro; Madni, Azad

    2015-01-01

    Modern space systems are increasing in complexity and scale at an unprecedented pace. Consequently, innovative methods, processes, and tools are needed to cope with the increasing complexity of architecting these systems. A key challenge in practice is the ability to scale the processes, methods, and tools used to architect complex space systems. Traditionally, the process for specifying space system architectures has largely relied on capturing the system architecture in informal descriptions that are often embedded within loosely coupled design documents and domain expertise. Such informal descriptions often lead to misunderstandings between design teams, ambiguous specifications, difficulty in maintaining consistency as the architecture evolves throughout the system development life cycle, and costly design iterations. Traditional methods are therefore becoming increasingly inefficient in coping with ever-increasing system complexity. We apply the principles of component-based design and platform-based design to the development of the system architecture for a practical space system to demonstrate the feasibility of our approach using SysML. Our results show that we are able to apply a systematic design method to manage system complexity, thus enabling effective data management, semantic coherence, and traceability across different levels of abstraction in the design chain. Just as important, our approach enables interoperability among heterogeneous tools in a concurrent-engineering, model-based design environment.

  16. A similitude method and the corresponding blade design of a low-speed large-scale axial compressor rotor

    NASA Astrophysics Data System (ADS)

    Yu, Chenghai; Ma, Ning; Wang, Kai; Du, Juan; Van den Braembussche, R. A.; Lin, Feng

    2014-04-01

    A similitude method to model the tip clearance flow of a high-speed compressor with a low-speed model is presented in this paper. The first step of this method is the derivation of similarity criteria for tip clearance flow on the basis of an inviscid model of tip clearance flow. The aerodynamic parameters needed for the model design are then obtained from a numerical simulation of the target high-speed compressor rotor. According to the aerodynamic and geometric parameters of the target compressor rotor, a large-scale low-speed rotor blade is designed with an inverse blade design program. In order to validate the similitude method, the features of tip clearance flow in the low-speed model compressor are compared with those in the high-speed compressor at both the design point and a small-flow-rate point. It is found that not only the trajectory of the tip leakage vortex but also the interface between the tip leakage flow and the incoming main flow in the high-speed compressor match well with those of its low-speed model. These results validate the effectiveness of the proposed similitude method for tip clearance flow.

  17. The Impact of Test Dimensionality, Common-Item Set Format, and Scale Linking Methods on Mixed-Format Test Equating

    ERIC Educational Resources Information Center

    Öztürk-Gübes, Nese; Kelecioglu, Hülya

    2016-01-01

    The purpose of this study was to examine the impact of dimensionality, common-item set format, and different scale linking methods on preserving equity property with mixed-format test equating. Item response theory (IRT) true-score equating (TSE) and IRT observed-score equating (OSE) methods were used under common-item nonequivalent groups design.…

  18. Diffractive elements for generating microscale laser beam patterns: a Y2K problem

    NASA Astrophysics Data System (ADS)

    Teiwes, Stephan; Krueger, Sven; Wernicke, Guenther K.; Ferstl, Margit

    2000-03-01

    Lasers are widely used in industrial fabrication for engraving, cutting and many other purposes. However, material processing at very small scales remains a matter of concern. Advances in diffractive optics could enable laser systems for engraving or cutting micro-scale patterns at high speeds. In this paper we focus on the design of diffractive elements for this special application. It is a common desire in material processing to apply 'discrete' as well as 'continuous' beam patterns. The latter case, especially, is difficult to handle, as typical micro-scale patterns have poor band-limitation properties and speckles can easily occur in the beam patterns. It is shown in this paper that a standard iterative design method usually fails to obtain diffractive elements that generate diffraction patterns of acceptable quality. Insights gained from an analysis of the design problems are used to optimize the iterative design method. We demonstrate the applicability and success of our approach by designing diffractive phase elements that generate a discrete and a continuous 'Y2K' pattern.
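
    The "standard iterative design method" referred to above is typically a Gerchberg-Saxton-style phase-retrieval loop between the element plane and the far field. The sketch below shows that baseline loop under stated assumptions (a phase-only element, a far field given by a single FFT); the paper's speckle-control refinements are not reproduced.

```python
# Minimal Gerchberg-Saxton-style iterative phase design: alternate between
# imposing the target amplitude in the far field and the phase-only
# constraint in the element plane.
import numpy as np

def gerchberg_saxton(target_amp, iters=100, seed=0):
    """Return a phase-only element whose far field approximates target_amp."""
    rng = np.random.default_rng(seed)
    field = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, target_amp.shape))
    for _ in range(iters):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))  # impose target amplitude
        near = np.fft.ifft2(far)
        field = np.exp(1j * np.angle(near))            # phase-only constraint
    return np.angle(field)

target = np.zeros((64, 64))
target[28:36, 20:44] = 1.0                 # a simple continuous bar pattern
phase = gerchberg_saxton(target)
print(phase.shape, phase.min() >= -np.pi, phase.max() <= np.pi)
```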

  19. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2006-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
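
    The cost saving described above can be illustrated with a toy linear mesh-movement model. In the sketch below, all matrices are random placeholders (not the authors' solver); the point is that a single transposed solve plus cheap matrix-vector products reproduces what would otherwise take one mesh-movement solve per design variable.

```python
# Toy illustration of the adjoint trick, assuming a linear mesh-movement
# operator K xv = B xs mapping surface motion xs to volume-mesh motion xv.
import numpy as np

rng = np.random.default_rng(0)
nv, ns, nd = 50, 8, 3                    # volume nodes, surface nodes, design vars
K = np.eye(nv) + 0.01 * rng.standard_normal((nv, nv))  # mesh-movement matrix
B = rng.standard_normal((nv, ns))        # surface motion forcing the volume mesh
dXs_dD = rng.standard_normal((ns, nd))   # surface-grid sensitivities to design vars
dJ_dXv = rng.standard_normal(nv)         # objective sensitivity to the volume mesh

# Naive approach: one mesh-movement solve per design variable.
naive = np.array([dJ_dXv @ np.linalg.solve(K, B @ dXs_dD[:, i])
                  for i in range(nd)])

# Adjoint approach: ONE transposed solve, then matrix-vector products whose
# cost scales with the number of design variables and the surface resolution.
lam = np.linalg.solve(K.T, dJ_dXv)       # adjoint of the mesh-movement scheme
adjoint = (B.T @ lam) @ dXs_dD

print(np.allclose(naive, adjoint))       # True: identical sensitivities
```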

  1. The Effects of the Critical Ice Accretion on Airfoil and Wing Performance

    NASA Technical Reports Server (NTRS)

    Selig, Michael S.; Bragg, Michael B.; Saeed, Farooq

    1998-01-01

    In support of the NASA Lewis Modern Airfoils Ice Accretion Test Program, the University of Illinois at Urbana-Champaign provided expertise in airfoil design and aerodynamic analysis to determine the aerodynamic effect of ice accretion on modern airfoil sections. The effort concentrated on establishing a design/testing methodology for "hybrid airfoils" or "sub-scale airfoils," that is, airfoils having a full-scale leading edge together with a specially designed and foreshortened aft section. The basic approach of using a full-scale leading edge with a foreshortened aft section was considered to a limited extent over 40 years ago; however, it was believed that the range of application of the method had not been fully exploited. Thus a systematic study was undertaken to investigate and explore the range of application of the method in order to determine its overall potential.

  2. Innovative Method for Developing a Helium Pressurant Tank Suitable for the Upper Stage Flight Experiment

    NASA Technical Reports Server (NTRS)

    DeLay, Tom K.; Munafo, Paul (Technical Monitor)

    2001-01-01

    The AFRL USFE project is an experimental test bed for new propulsion technologies. It will utilize ambient-temperature fuel and oxidizer (kerosene and hydrogen peroxide). The system is pressure fed, not pump fed, and will utilize a helium pressurant tank to drive the system. Mr. DeLay has developed a method for cost-effectively producing a unique, large pressurant tank that is not commercially available. The pressure vessel is a layered composite structure with an electroformed metallic permeation barrier. The design/process is scalable and easily adaptable to different configurations with minimal cost in tooling development. One-third-scale tanks have already been fabricated and are scheduled for testing. The full-scale pressure vessel (50" diameter) design will be refined based on the performance of the sub-scale tank. The pressure vessels have been designed to operate at 6,000 psi; a PV/W of 1.92 million is anticipated.

  3. A Watershed-Scale Survey for Stream-Foraging Birds in Northern California

    Treesearch

    Sherri L. Miller; C. John Ralph

    2005-01-01

    Our objective was to develop a survey technique and watershed-scale design to monitor trends of population size and habitat associations in stream-foraging birds. The resulting methods and design will be used to examine the efficacy of quantifying the association of stream and watershed quality with bird abundance. We surveyed 60 randomly selected 2-km stream reaches...

  4. Large Scale Density Estimation of Blue and Fin Whales: Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density

    DTIC Science & Technology

    2015-09-30

    Len Thomas & Danielle Harris (Centre...). The project aims to develop and implement a new method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope...

  5. Multi-scale occupancy estimation and modelling using multiple detection methods

    USGS Publications Warehouse

    Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.

    2008-01-01

    Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species’ distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species’ use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications: Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.

  6. Multi-Scale Scattering Transform in Music Similarity Measuring

    NASA Astrophysics Data System (ADS)

    Wang, Ruobai

    The scattering transform is a Mel-frequency-spectrum-based, time-deformation-stable method that can be used to evaluate music similarity. Compared with dynamic time warping, it performs better at detecting similar audio signals under local time-frequency deformation. Multi-scale scattering combines scattering transforms of different window lengths. This paper argues that the multi-scale scattering transform is a good alternative to dynamic time warping in music similarity measurement. We tested the performance of the multi-scale scattering transform against other popular methods, with data designed to represent different conditions.

  7. Validity and Reliability of the Teamwork Scale for Youth

    ERIC Educational Resources Information Center

    Lower, Leeann M.; Newman, Tarkington J.; Anderson-Butcher, Dawn

    2017-01-01

    Purpose: This study examines the psychometric properties of the Teamwork Scale for Youth, an assessment designed to measure youths' perceptions of their teamwork competency. Methods: The Teamwork Scale for Youth was administered to a sample of 460 youths. Confirmatory factor analyses examined the factor structure and measurement invariance of the…

  8. Development of fire test methods for airplane interior materials

    NASA Technical Reports Server (NTRS)

    Tustin, E. A.

    1978-01-01

    Fire tests were conducted in a 737 airplane fuselage at NASA-JSC to characterize jet fuel fires in open steel pans (simulating post-crash fire sources and a ruptured airplane fuselage) and to characterize fires in some common combustibles (simulating in-flight fire sources). Design post-crash and in-flight fire sources were selected based on these data. Large panels of airplane interior materials were exposed to closely controlled large-scale heating simulations of the two design fire sources in a Boeing fire test facility utilizing a surplus 707 fuselage section. Small samples of the same airplane materials were tested by several laboratory fire test methods. Large-scale and laboratory-scale data were examined for correlative factors. Published data for dangerous hazard levels in a fire environment were used as the basis for developing a method to select the most desirable material where trade-offs in heat, smoke, and gaseous toxicant evolution must be considered.

  9. Linear regulator design for stochastic systems by a multiple time scales method

    NASA Technical Reports Server (NTRS)

    Teneketzis, D.; Sandell, N. R., Jr.

    1976-01-01

    A hierarchically structured, suboptimal controller for a linear stochastic system composed of fast and slow subsystems is considered. The controller is optimal in the limit as the separation of time scales of the subsystems becomes infinite. The methodology is illustrated by the design of a controller to suppress the phugoid and short-period modes of the longitudinal dynamics of the F-8 aircraft.

  10. Moderate-resolution data and gradient nearest neighbor imputation for regional-national risk assessment

    Treesearch

    Kenneth B. Jr. Pierce; C. Kenneth Brewer; Janet L. Ohmann

    2010-01-01

    This study was designed to test the feasibility of combining a method designed to populate pixels with inventory plot data at the 30-m scale with a new national predictor data set. The new national predictor data set was developed by the USDA Forest Service Remote Sensing Applications Center (hereafter RSAC) at the 250-m scale. Gradient Nearest Neighbor (GNN)...

  11. Development and Psychometric Evaluation of the Reasons for Living-Older Adults Scale: A Suicide Risk Assessment Inventory

    ERIC Educational Resources Information Center

    Edelstein, Barry A.; Heisel, Marnin J.; McKee, Deborah R.; Martin, Ronald R.; Koven, Lesley P.; Duberstein, Paul R.; Britton, Peter C.

    2009-01-01

    Purpose: The purposes of these studies were to develop and initially evaluate the psychometric properties of the Reasons for Living Scale-Older Adult version (RFL-OA), an older adults version of a measure designed to assess reasons for living among individuals at risk for suicide. Design and Methods: Two studies are reported. Study 1 involved…

  12. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  13. Minimum impulse thruster valve design and development

    NASA Technical Reports Server (NTRS)

    Huftalen, Richard L.; Platt, Andrea L.; Parker, Morgan J.; Yankura, George A.

    2003-01-01

    The design and development of a minimum impulse thruster valve was conducted by Moog, under contract to NASA's Jet Propulsion Laboratory, California Institute of Technology, for deep space propulsion systems. The effort focused on applying known solenoid design techniques, scaled to provide a 1-millisecond response capability for monopropellant hydrazine ACS thruster applications. The valve has an extended operating temperature range of 20°F to +350°F, a total mass of less than 25 grams, and a nominal power draw of 7 watts. The design solution resulted in a solenoid valve that is one-tenth the scale of the standard product line. The valve is capable of providing a mass flow rate of 0.0009 pounds per second of hydrazine. The design life of 1,000,000 cycles was demonstrated both dry and wet. Not all design factors scaled as expected, and these became the focus of the final development effort, including the surface interactions, hydrodynamics, and driver electronics. The resulting solution applied mature design approaches to minimize program risk, with innovative methods to address the impacts of scale.

  14. The role of mixed methods in improved cookstove research.

    PubMed

    Stanistreet, Debbi; Hyseni, Lirije; Bashin, Michelle; Sadumah, Ibrahim; Pope, Daniel; Sage, Michael; Bruce, Nigel

    2015-01-01

    The challenge of promoting access to clean and efficient household energy for cooking and heating is a critical issue facing low- and middle-income countries today. Along with clean fuels, improved cookstoves (ICSs) continue to play an important part in efforts to reduce the 4 million annual premature deaths attributed to household air pollution. Although a range of ICSs are available, there is little empirical evidence on appropriate behavior change approaches to inform adoption and sustained use at scale. Specifically, evaluations using either quantitative or qualitative methods alone provide an incomplete picture of the challenges in facilitating ICS adoption. This article examines how studies that use the strengths of both these approaches can offer important insights into behavior change in relation to ICS uptake and scale-up. Epistemological approaches, study design frameworks, methods of data collection, analytical approaches, and issues of validity and reliability in the context of mixed-methods ICS research are examined, and the article presents an example study design from an evaluation study in Kenya incorporating a nested approach and a convergent, case-oriented design. The authors discuss the benefits and methodological challenges of mixed-methods approaches in the context of researching behavior change and ICS use, recognizing that such methods represent relatively uncharted territory. The authors propose that more published examples are needed to provide frameworks for other researchers seeking to apply mixed methods in this context, and suggest that a comprehensive research agenda is required that incorporates integrated mixed-methods approaches to provide the best evidence for future scale-up.

  15. Review of design optimization methods for turbomachinery aerodynamics

    NASA Astrophysics Data System (ADS)

    Li, Zhihui; Zheng, Xinqian

    2017-08-01

    In today's competitive environment, new turbomachinery designs need to be not only more efficient, quieter, and 'greener', but also developed on much shorter time scales and at lower costs. A number of advanced optimization strategies have been developed to achieve these requirements. This paper reviews recent progress in turbomachinery design optimization for solving real-world aerodynamic problems, especially for compressors and turbines. The review covers the following topics that are important for optimizing turbomachinery designs: (1) optimization methods; (2) stochastic optimization combined with blade parameterization methods and design-of-experiments methods; (3) gradient-based optimization methods for compressors and turbines; and (4) data mining techniques for Pareto fronts. We also present our own insights regarding current research trends and the future optimization of turbomachinery designs.

  16. A behavioral-level HDL description of SFQ logic circuits for quantitative performance analysis of large-scale SFQ digital systems

    NASA Astrophysics Data System (ADS)

    Matsuzaki, F.; Yoshikawa, N.; Tanaka, M.; Fujimaki, A.; Takai, Y.

    2003-10-01

    Recently, many single flux quantum (SFQ) logic circuits containing several thousand Josephson junctions have been designed successfully using digital-domain simulation based on a hardware description language (HDL). In present HDL-based design of SFQ circuits, a structure-level HDL description has been used, where circuits are built up from basic gate cells. However, in order to analyze large-scale SFQ digital systems, such as a microprocessor, a higher level of circuit abstraction is necessary to reduce the circuit simulation time. In this paper we investigate a way to describe the functionality of large-scale SFQ digital circuits with a behavior-level HDL description. In this method, the functionality and the timing of a circuit block are defined directly by describing its behavior in the HDL. Using this method, we can dramatically reduce the simulation time of large-scale SFQ digital circuits.

  17. Structural similitude and design of scaled down laminated models

    NASA Technical Reports Server (NTRS)

    Simitses, G. J.; Rezaeepazhand, J.

    1993-01-01

    The excellent mechanical properties of laminated composite structures make them prime candidates for a wide variety of applications in aerospace, mechanical and other branches of engineering. The enormous design flexibility of advanced composites is obtained at the cost of a large number of design parameters. Due to the complexity of these systems and the lack of complete design-based information, designers tend to be conservative in their designs. Furthermore, any new design is extensively evaluated experimentally until it achieves the necessary reliability, performance and safety. However, experimental evaluation of composite structures is costly and time consuming. Consequently, it is extremely useful if a full-scale structure can be replaced by a similar scaled-down model which is much easier to work with. Furthermore, a dramatic reduction in cost and time can be achieved if available experimental data for a specific structure can be used to predict the behavior of a group of similar systems. This study investigates problems associated with the design of scaled models. Such a study is important since it provides the necessary scaling laws and identifies the factors which affect the accuracy of the scale models. Similitude theory is employed to develop the necessary similarity conditions (scaling laws). Scaling laws provide the relationship between a full-scale structure and its scale model, and can be used to extrapolate the experimental data of a small, inexpensive, and testable model into design information for a large prototype. Due to the large number of design parameters, identification of the principal scaling laws by the conventional method (dimensional analysis) is tedious. Similitude theory based on the governing equations of the structural system is more direct and simpler in execution. The difficulty of making completely similar scale models often leads to accepting a certain type of distortion from exact duplication of the prototype (partial similarity). Both complete and partial similarity are discussed. The procedure consists of systematically observing the effect of each parameter and the corresponding scaling laws; acceptable intervals and limitations for these parameters and scaling laws are then discussed. In each case, a set of valid scaling factors and corresponding response scaling laws that accurately predict the response of prototypes from experimental models is introduced. The examples used include rectangular laminated plates under destabilizing loads applied individually, the vibrational characteristics of the same plates, and the cylindrical bending of beam-plates.

  18. Large-area Soil Moisture Surveys Using a Cosmic-ray Rover: Approaches and Results from Australia

    NASA Astrophysics Data System (ADS)

    Hawdon, A. A.; McJannet, D. L.; Renzullo, L. J.; Baker, B.; Searle, R.

    2017-12-01

    Recent improvements in satellite instrumentation have increased the resolution and frequency of soil moisture observations, and this in turn has supported the development of higher-resolution land surface process models. Calibration and validation of these products is restricted by the mismatch of scales between remotely sensed and contemporary ground-based observations. Although the cosmic-ray neutron soil moisture probe can provide estimates of soil moisture at a scale useful for calibration and validation purposes, it is spatially limited to a single, fixed location. This scaling issue has been addressed with the development of mobile soil moisture monitoring systems that utilize the cosmic-ray neutron method, typically referred to as 'rovers'. This manuscript describes a project designed to develop approaches for undertaking rover surveys to produce soil moisture estimates at scales comparable to satellite observations and land surface process models. A custom-designed, trailer-mounted rover was used to conduct repeat surveys at two scales in the Mallee region of Victoria, Australia. A broad-scale survey was conducted at 36 x 36 km, covering the area of a standard SMAP pixel, and an intensive-scale survey was conducted over a 10 x 10 km portion of the broad-scale survey, a scale equivalent to that used for national water balance modelling. We describe the design of the rover and the methods used for converting neutron counts into soil moisture, and discuss factors controlling soil moisture variability. We found that the intensive-scale rover surveys produced reliable soil moisture estimates at 1 km resolution and the broad-scale surveys at 9 km resolution. We conclude that these products are well suited for future analysis of satellite soil moisture retrievals and finer-scale soil moisture models.

  19. Electrokinetic remediation prefield test methods

    NASA Technical Reports Server (NTRS)

    Hodko, Dalibor (Inventor)

    2000-01-01

    Methods for determining the parameters critical in designing an electrokinetic soil remediation process, including electrode well spacing, operating current/voltage, electroosmotic flow rate, electrode well wall design, and the amount of buffering or neutralizing solution needed in the electrode wells at operating conditions, are disclosed. These methods are preferably performed prior to initiating a full-scale electrokinetic remediation process in order to obtain efficient remediation of the contaminants.

  20. Layer-by-layer assembly of two-dimensional materials into wafer-scale heterostructures

    NASA Astrophysics Data System (ADS)

    Kang, Kibum; Lee, Kan-Heng; Han, Yimo; Gao, Hui; Xie, Saien; Muller, David A.; Park, Jiwoong

    2017-10-01

    High-performance semiconductor films with vertical compositions that are designed to atomic-scale precision provide the foundation for modern integrated circuitry and novel materials discovery. One approach to realizing such films is sequential layer-by-layer assembly, whereby atomically thin two-dimensional building blocks are vertically stacked, and held together by van der Waals interactions. With this approach, graphene and transition-metal dichalcogenides--which represent one- and three-atom-thick two-dimensional building blocks, respectively--have been used to realize previously inaccessible heterostructures with interesting physical properties. However, no large-scale assembly method exists at present that maintains the intrinsic properties of these two-dimensional building blocks while producing pristine interlayer interfaces, thus limiting the layer-by-layer assembly method to small-scale proof-of-concept demonstrations. Here we report the generation of wafer-scale semiconductor films with a very high level of spatial uniformity and pristine interfaces. The vertical composition and properties of these films are designed at the atomic scale using layer-by-layer assembly of two-dimensional building blocks under vacuum. We fabricate several large-scale, high-quality heterostructure films and devices, including superlattice films with vertical compositions designed layer-by-layer, batch-fabricated tunnel device arrays with resistances that can be tuned over four orders of magnitude, band-engineered heterostructure tunnel diodes, and millimetre-scale ultrathin membranes and windows. The stacked films are detachable, suspendable and compatible with water or plastic surfaces, which will enable their integration with advanced optical and mechanical systems.

  1. Layer-by-layer assembly of two-dimensional materials into wafer-scale heterostructures.

    PubMed

    Kang, Kibum; Lee, Kan-Heng; Han, Yimo; Gao, Hui; Xie, Saien; Muller, David A; Park, Jiwoong

    2017-10-12

    High-performance semiconductor films with vertical compositions that are designed to atomic-scale precision provide the foundation for modern integrated circuitry and novel materials discovery. One approach to realizing such films is sequential layer-by-layer assembly, whereby atomically thin two-dimensional building blocks are vertically stacked, and held together by van der Waals interactions. With this approach, graphene and transition-metal dichalcogenides, which represent one- and three-atom-thick two-dimensional building blocks respectively, have been used to realize previously inaccessible heterostructures with interesting physical properties. However, no large-scale assembly method exists at present that maintains the intrinsic properties of these two-dimensional building blocks while producing pristine interlayer interfaces, thus limiting the layer-by-layer assembly method to small-scale proof-of-concept demonstrations. Here we report the generation of wafer-scale semiconductor films with a very high level of spatial uniformity and pristine interfaces. The vertical composition and properties of these films are designed at the atomic scale using layer-by-layer assembly of two-dimensional building blocks under vacuum. We fabricate several large-scale, high-quality heterostructure films and devices, including superlattice films with vertical compositions designed layer-by-layer, batch-fabricated tunnel device arrays with resistances that can be tuned over four orders of magnitude, band-engineered heterostructure tunnel diodes, and millimetre-scale ultrathin membranes and windows. The stacked films are detachable, suspendable and compatible with water or plastic surfaces, which will enable their integration with advanced optical and mechanical systems.

  2. Reframed Genome-Scale Metabolic Model to Facilitate Genetic Design and Integration with Expression Data.

    PubMed

    Gu, Deqing; Jian, Xingxing; Zhang, Cheng; Hua, Qiang

    2017-01-01

    Genome-scale metabolic network models (GEMs) have played important roles in the design of genetically engineered strains and have helped biologists to decipher metabolism. However, due to the complex gene-reaction relationships that exist in model systems, most algorithms have limited capabilities with respect to directly predicting accurate genetic designs for metabolic engineering. In particular, methods that predict reaction knockout strategies leading to overproduction are often impractical in terms of gene manipulations. Recently, we proposed a method named logical transformation of model (LTM) to simplify the gene-reaction associations by introducing intermediate pseudo reactions, which makes it possible to generate genetic designs. Here, we propose an alternative method that relieves researchers from deciphering complex gene-reaction associations by adding pseudo gene-controlling reactions. In comparison to LTM, this new method introduces fewer pseudo reactions and generates a much smaller model system, named gModel. We show that gModel allows two seldom-reported applications: identification of minimal genomes and design of minimal cell factories within a modified OptKnock framework. In addition, gModel can be used to integrate expression data directly and improve the performance of the E-Fmin method for predicting fluxes. In conclusion, the model transformation procedure will facilitate genetic research based on GEMs, extending their applications.
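
    gModel itself is not publicly sketched here, but the kind of gene-level analysis that a gene-explicit GEM enables can be illustrated with COBRApy (assuming the package and its bundled small "textbook" E. coli core model are available); gModel restructures the gene-reaction rules before steps like this.

```python
# Gene-level knockout screen on a genome-scale metabolic model with COBRApy.
from cobra.io import load_model

model = load_model("textbook")                 # small E. coli core model
wild_type = model.optimize().objective_value   # wild-type growth rate

for gene in list(model.genes)[:5]:             # screen a few genes as a demo
    with model:                                # context manager reverts edits
        gene.knock_out()
        growth = model.optimize().objective_value
    print(gene.id, round(growth / wild_type, 3))
```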

  3. Rasch Analysis of the Geriatric Depression Scale--Short Form

    ERIC Educational Resources Information Center

    Chiang, Karl S.; Green, Kathy E.; Cox, Enid O.

    2009-01-01

    Purpose: The purpose of this study was to examine scale dimensionality, reliability, invariance, targeting, continuity, cutoff scores, and diagnostic use of the Geriatric Depression Scale-Short Form (GDS-SF) over time with a sample of 177 English-speaking U.S. elders. Design and Methods: An item response theory, Rasch analysis, was conducted with…

  4. Development of a Scale to Measure Lifelong Learning

    ERIC Educational Resources Information Center

    Kirby, John R.; Knapper, Christopher; Lamon, Patrick; Egnatoff, William J.

    2010-01-01

    Primary objective: to develop a scale to measure students' disposition to engage in lifelong learning. Research design, methods and procedures: using items that reflected the components of lifelong learning, we constructed a 14-item scale that was completed by 309 university and vocational college students, who also completed a measure of deep and…

  5. Development and Validation of the Spanish-English Language Proficiency Scale (SELPS)

    ERIC Educational Resources Information Center

    Smyk, Ekaterina; Restrepo, M. Adelaida; Gorin, Joanna S.; Gray, Shelley

    2013-01-01

    Purpose: This study examined the development and validation of a criterion-referenced Spanish-English Language Proficiency Scale (SELPS) that was designed to assess the oral language skills of sequential bilingual children ages 4-8. This article reports results for the English proficiency portion of the scale. Method: The SELPS assesses syntactic…

  6. Optimal knockout strategies in genome-scale metabolic networks using particle swarm optimization.

    PubMed

    Nair, Govind; Jungreuthmayer, Christian; Zanghellini, Jürgen

    2017-02-01

    Knockout strategies, particularly the concept of constrained minimal cut sets (cMCSs), are an important part of the arsenal of tools used in manipulating metabolic networks. Given a specific design, cMCSs can be calculated even in genome-scale networks. We would, however, like to find not only the optimal intervention strategy for a given design but also the best possible design. Our solution (PSOMCS) is to use particle swarm optimization (PSO) along with the direct calculation of cMCSs from the stoichiometric matrix to obtain optimal designs satisfying multiple objectives. To illustrate the working of PSOMCS, we apply it to a toy network. Next, we show its superiority by comparing its performance against other comparable methods on a medium-sized E. coli core metabolic network. PSOMCS not only finds solutions comparable to previously published results but is also orders of magnitude faster. Finally, we use PSOMCS to predict knockouts satisfying multiple objectives in a genome-scale metabolic model of E. coli and compare it with OptKnock and RobustKnock. PSOMCS finds competitive knockout strategies and designs compared to other current methods and is in some cases significantly faster. It can be used to identify knockouts which will force optimal desired behaviors in large and genome-scale metabolic networks. It will become even more useful as larger metabolic models of industrially relevant organisms become available.
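
    For readers unfamiliar with PSO, the sketch below shows the generic algorithm in pure NumPy: each particle carries a velocity updated from inertia, its own best position, and the swarm's best position. This is not PSOMCS itself, whose particles encode cMCS design parameters; the settings are illustrative only.

```python
# Generic particle swarm optimizer sketch (illustrative settings).
import numpy as np

def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(1)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()                                  # per-particle best position
    pval = np.apply_along_axis(f, 1, x)               # per-particle best value
    g = pbest[pval.argmin()]                          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # inertia+memory+social
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()]
    return g, pval.min()

best_x, best_f = pso(lambda z: float(np.sum(z**2)), dim=4)
print(best_f)                                         # close to 0 at the origin
```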

  7. Generalizing the Network Scale-Up Method: A New Estimator for the Size of Hidden Populations*

    PubMed Central

    Feehan, Dennis M.; Salganik, Matthew J.

    2018-01-01

    The network scale-up method enables researchers to estimate the size of hidden populations, such as drug injectors and sex workers, using sampled social network data. The basic scale-up estimator offers advantages over other size estimation techniques, but it depends on problematic modeling assumptions. We propose a new generalized scale-up estimator that can be used in settings with non-random social mixing and imperfect awareness about membership in the hidden population. Further, the new estimator can be used when data are collected via complex sample designs and from incomplete sampling frames. However, the generalized scale-up estimator also requires data from two samples: one from the frame population and one from the hidden population. In some situations these data from the hidden population can be collected by adding a small number of questions to already planned studies. For other situations, we develop interpretable adjustment factors that can be applied to the basic scale-up estimator. We conclude with practical recommendations for the design and analysis of future studies. PMID:29375167
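
    The basic scale-up estimator that the generalized estimator builds on has a simple closed form: the hidden population size is the frame population size times the ratio of total reported ties to the hidden population over total personal network size. A toy implementation, with made-up numbers:

```python
# Basic network scale-up estimator (Killworth et al.):
# N_H ~= N_F * (sum of reported ties to H) / (sum of respondent degrees).
def basic_scale_up(ties_to_hidden, degrees, frame_size):
    """ties_to_hidden: ties each respondent reports to the hidden population;
    degrees: estimated personal network sizes; frame_size: frame population."""
    return frame_size * sum(ties_to_hidden) / sum(degrees)

# Toy example: three respondents drawn from a frame of 1,000,000 people.
print(round(basic_scale_up([2, 0, 1], [300, 250, 500], 1_000_000)))
```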

  8. Design, construction, and evaluation of a 1:8 scale model binaural manikin.

    PubMed

    Robinson, Philip; Xiang, Ning

    2013-03-01

    Many experiments in architectural acoustics require presenting listeners with simulations of different rooms to compare. Acoustic scale modeling is a feasible means to create accurate simulations of many rooms at reasonable cost. A critical component in a scale model room simulation is a receiver that properly emulates a human receiver. For this purpose, a scale model artificial head has been constructed and tested. This paper presents the design and construction methods used, proper equalization procedures, and measurements of its response. A headphone listening experiment examining sound externalization with various reflection conditions is presented that demonstrates its use for psycho-acoustic testing.

  9. A quasi-Newton algorithm for large-scale nonlinear equations.

    PubMed

    Huang, Linghua

    2017-01-01

    In this paper, an algorithm for large-scale nonlinear equations is designed with the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm's initial point does not have any restrictions; (ii) a quasi-Newton algorithm, with the initial points given by the sub-algorithm, is defined as the main algorithm, where a new nonmonotone line search technique is presented to obtain the step length [Formula: see text]. The given nonmonotone line search technique avoids computing the Jacobian matrix. The global convergence and the [Formula: see text]-order convergence rate of the main algorithm are established under suitable conditions. Numerical results show that the proposed method is competitive with a similar method for large-scale problems.
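
    The paper's exact scheme (CG warm start plus its new nonmonotone line search) is not reproduced here; the sketch below shows the underlying Jacobian-free quasi-Newton idea with an identity start and a plain monotone backtracking rule standing in for those two ingredients.

```python
# Broyden quasi-Newton sketch for F(x) = 0: no Jacobian evaluations, only a
# rank-one update of the Jacobian approximation from observed step/residual.
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=200):
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                       # initial Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        d = np.linalg.solve(B, -Fx)          # quasi-Newton direction
        a = 1.0                              # simple monotone backtracking
        while a > 1e-6 and np.linalg.norm(F(x + a * d)) >= np.linalg.norm(Fx):
            a *= 0.5
        s = a * d
        y = F(x + s) - Fx
        B += np.outer(y - B @ s, s) / (s @ s)  # Broyden rank-one update
        x = x + s
        Fx = F(x)
    return x

# Example: x0^2 + x1 = 2 and x0 + x1^2 = 2, with a root at (1, 1).
F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])
print(broyden(F, [2.0, 0.5]))
```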

  10. Methods of Scientific Research: Teaching Scientific Creativity at Scale

    NASA Astrophysics Data System (ADS)

    Robbins, Dennis; Ford, K. E. Saavik

    2016-01-01

    We present a scaling-up plan for AstroComNYC's Methods of Scientific Research (MSR), a course designed to improve undergraduate students' understanding of science practices. The course format and goals, notably the open-ended, hands-on, investigative nature of the curriculum, are reviewed. We discuss how the course's interactive pedagogical techniques empower students to learn creativity within the context of experimental design and control-of-variables thinking. To date the course has been offered to a limited number of students in specific programs. The goal of broadly implementing MSR is to reach more students early in their education, with the specific purpose of supporting and improving retention of students pursuing STEM careers. However, we also discuss challenges in preserving the effectiveness of the teaching and learning experience at scale.

  11. Design and Test of an Improved Crashworthiness Small Composite Airframe

    NASA Technical Reports Server (NTRS)

    Terry, James E.; Hooper, Steven J.; Nicholson, Mark

    2002-01-01

    The purpose of this small business innovative research (SBIR) program was to evaluate the feasibility of developing small composite airplanes with improved crashworthiness. A combination of analysis and half-scale component tests was used to develop an energy-absorbing airframe. Four full-scale crash tests were conducted at the NASA Impact Dynamics Research Facility, two on a hard surface and two onto soft soil, replicating earlier NASA tests of production general aviation airplanes. Several seat designs and restraint systems, including both an air bag and load-limiting shoulder harnesses, were tested. Tests showed that occupant loads were within survivable limits with the improved structural design and the proper combination of seats and restraint systems. There was no loss of cabin volume during the events. The analysis method developed provided design guidance, but time did not allow extending the analysis to soft-soil impact. This project demonstrated that survivability improvements are possible with modest weight penalties. The design methods can be readily applied by airplane designers using the examples in this report.

  12. Robust Scale Transformation Methods in IRT True Score Equating under Common-Item Nonequivalent Groups Design

    ERIC Educational Resources Information Center

    He, Yong

    2013-01-01

    Common test items play an important role in equating multiple test forms under the common-item nonequivalent groups design. Inconsistent item parameter estimates among common items can lead to large bias in equated scores for IRT true score equating. Current methods extensively focus on detection and elimination of outlying common items, which…

  13. Scaling of counter-current imbibition recovery curves using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Jafari, Iman; Masihi, Mohsen; Nasiri Zarandi, Masoud

    2018-06-01

    Scaling of imbibition curves is of great importance in the characterization and simulation of oil production from naturally fractured reservoirs. Different parameters, such as matrix porosity and permeability, oil and water viscosities, matrix dimensions, and oil/water interfacial tension, have an effect on the imbibition process. Studies on scaling imbibition curves under different assumptions have resulted in various scaling equations. In this work, using an artificial neural network (ANN) method, a novel technique is presented for scaling imbibition recovery curves, which can be used for scaling experimental and field-scale imbibition cases. The imbibition recovery curves for training and testing the neural network were gathered through the simulation of different scenarios using a commercial reservoir simulator. In this ANN-based method, six parameters were assumed to have an effect on the imbibition process and were considered as the inputs for training the network. Using the 'Bayesian regularization' training algorithm, the network was trained and tested. The training and testing phases showed superior results in comparison with other scaling methods. It is concluded that the new technique is useful for scaling imbibition recovery curves, especially for complex cases for which the common scaling methods are not designed.
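
    The shape of this idea can be sketched with scikit-learn (assumed available): a small MLP maps the six physical inputs plus a time coordinate to recovery. The paper trains with Bayesian regularization in MATLAB, which scikit-learn does not offer, and synthetic data stands in below for the simulator runs used in the study.

```python
# ANN regression sketch: six physical inputs plus time -> imbibition recovery.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (500, 6))        # porosity, permeability, viscosities,
                                           # matrix size, IFT (placeholder values)
t = rng.uniform(0.1, 10.0, 500)            # dimensionless time
y = 1.0 - np.exp(-t * (0.5 + X[:, 0]))     # toy recovery curve to learn

features = np.column_stack([X, t])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=2000, random_state=0))
model.fit(features, y)
print(round(model.score(features, y), 3))  # in-sample R^2
```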

  14. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    Methods for the development of logic design together with algorithms for failure testing; a method for the design of logic for ultra-large-scale integration; an extension of quantum calculus to describe the functional behavior of a mechanism component-by-component and to compute tests for failures in the mechanism using the diagnosis algorithm; and the development of an algorithm for the multi-output two-level minimization problem are discussed.

  15. Small Independent Action Force (SIAF), Vegetation Classification Study

    DTIC Science & Technology

    1976-03-01

    ...reliability of subjects will be obtained. The experiment involved a continuous stream of stimuli. Phase 1 stimuli...the attribute to be scaled. The subject must designate one of the pair as greater. No equality judgments are permitted. In order to obtain data from...

  16. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics and interactions, and they require increasingly computationally demanding methods for their analysis and control design as the network size and the complexity of node systems and interactions grow. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem for large-scale nonlinear multi-agent systems (MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. Stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on some existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
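
    The flavor of such an LMI condition can be sketched with cvxpy (assumed installed) in place of the MATLAB toolbox the paper uses: a minimal Lyapunov-LMI feasibility check for a single, already-stabilised node dynamic, finding P > 0 with A'P + PA < 0. The matrix and tolerance below are illustrative.

```python
# Minimal Lyapunov-LMI feasibility check via semidefinite programming.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])              # an already-stabilised node dynamic
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]   # Lyapunov inequality
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve(solver=cp.SCS)
print(problem.status)   # 'optimal' means a quadratic Lyapunov function exists
```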

  17. Raft cultivation area extraction from high resolution remote sensing imagery by fusing multi-scale region-line primitive association features

    NASA Astrophysics Data System (ADS)

    Wang, Min; Cui, Qi; Wang, Jie; Ming, Dongping; Lv, Guonian

    2017-01-01

    In this paper, we first propose several novel concepts for object-based image analysis, including line-based shape regularity, line density, and scale-based best feature value (SBV), based on the region-line primitive association framework (RLPAF). We then propose a raft cultivation area (RCA) extraction method for high spatial resolution (HSR) remote sensing imagery based on multi-scale feature fusion and spatial rule induction. The proposed method includes the following steps: (1) multi-scale region primitives (segments) are obtained with the HBC-SEG image segmentation method, and line primitives (straight lines) are obtained with a phase-based line detection method; (2) association relationships between regions and lines are built based on RLPAF, and then multi-scale RLPAF features are extracted and SBVs are selected; (3) several spatial rules are designed to extract RCAs within sea waters after land and water separation. Experiments show that the proposed method can successfully extract differently shaped RCAs from HSR images with good performance.

  18. Enhancement of low visibility aerial images using histogram truncation and an explicit Retinex representation for balancing contrast and color consistency

    NASA Astrophysics Data System (ADS)

    Liu, Changjiang; Cheng, Irene; Zhang, Yi; Basu, Anup

    2017-06-01

    This paper presents an improved multi-scale Retinex (MSR) based enhancement for aerial images under low visibility. Traditional multi-scale Retinex commonly employs three scales, which limits its application scenarios. We extend this to a general-purpose enhancement method and design an MSR with more than three scales. Based on mathematical analysis and deduction, an explicit multi-scale representation is proposed that balances image contrast and color consistency. In addition, a histogram truncation technique is introduced as a post-processing strategy to remap the multi-scale Retinex output to the dynamic range of the display. Analysis of experimental results and comparisons with existing algorithms demonstrate the effectiveness and generality of the proposed method. Results on image quality assessment prove the accuracy of the proposed method with respect to both objective and subjective criteria.
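
    As an illustration of the mechanism described above, a minimal multi-scale Retinex with histogram truncation can be written in a few lines; the scales and clipping percentiles below are illustrative defaults, not the paper's tuned values:

        # Minimal multi-scale Retinex sketch with histogram truncation,
        # assuming a grayscale float image in [0, 1]. Sigmas and the clip
        # percentile are illustrative choices.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def multi_scale_retinex(img, sigmas=(15, 80, 250), clip=1.0):
            img = np.clip(img, 1e-6, None)           # avoid log(0)
            msr = np.zeros_like(img)
            for s in sigmas:                          # average single-scale outputs
                msr += np.log(img) - np.log(gaussian_filter(img, s) + 1e-6)
            msr /= len(sigmas)
            lo, hi = np.percentile(msr, [clip, 100 - clip])  # histogram truncation
            return np.clip((msr - lo) / (hi - lo), 0.0, 1.0) # remap to display range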

  19. Enabling Microfluidics: From Clean Rooms to Makerspaces

    DTIC Science & Technology

    2016-09-30

    anyone can make and rapidly scale to bulk manufacturing. To enable others to take part in this type of product design and development, we...cost molds for a fee; however, the design process is slowed down waiting for molds to be manufactured and shipped. While PDMS devices may be...finished prototype into a commercial product. An example of a rapid prototyping method amenable to scaled-up manufacturing is laser cutting. Figure

  20. Understanding Market Segments and Competition in the Private Military Industry

    DTIC Science & Technology

    2009-12-01

    Likert scale, and importance scale) and “open-end” (unstructured) questions (Kotler, 2009, p. 137). We applied parts of the Tailored Design Method...Kotler, 2008, p. 249). Item 2 and Items 8-10 of the questionnaire attempted to capture the market niche/s that the respondents’ reportedly...Kotler, 2009, p. 137). These questions were designed using Likert-type items to assess the participant’s level of importance or level of

  1. Identifying core habitat and connectivity for focal species in the interior cedar-hemlock forest of North America to complete a conservation area design

    Treesearch

    Lance Craighead; Baden Cross

    2007-01-01

    To identify the remaining areas of the Interior Cedar-Hemlock Forest of North America and prioritize them for conservation planning, the Craighead Environmental Research Institute has developed a 2-scale method for mapping critical habitat utilizing 1) a broad-scale model to identify important regional locations as the basis for a Conservation Area Design (CAD), and 2...

  2. Discrete choice experiments of pharmacy services: a systematic review.

    PubMed

    Vass, Caroline; Gray, Ewan; Payne, Katherine

    2016-06-01

    Background Two previous systematic reviews have summarised the application of discrete choice experiments to value preferences for pharmacy services. These reviews identified a total of twelve studies and described how discrete choice experiments have been used to value pharmacy services, but did not describe or discuss the application of methods used in the design or analysis. Aims (1) To update the most recent systematic review and critically appraise current discrete choice experiments of pharmacy services in line with published reporting criteria; and (2) to provide an overview of key methodological developments in the design and analysis of discrete choice experiments. Methods The review used a comprehensive strategy to identify eligible studies (published between 1990 and 2015) by searching electronic databases for key terms related to discrete choice and best-worst scaling (BWS) experiments. All healthcare choice experiments were then hand-searched for key terms relating to pharmacy. Data were extracted using a published checklist. Results A total of 17 discrete choice experiments eliciting preferences for pharmacy services were identified for inclusion in the review. No BWS studies were identified. The studies elicited preferences from a variety of populations (pharmacists, patients, students) for a range of pharmacy services. Most studies were from a United Kingdom setting, although examples from Europe, Australia and North America were also identified. Discrete choice experiments for pharmacy services tended to include more attributes than non-pharmacy choice experiments. Few studies reported the use of qualitative research methods in the design and interpretation of the experiments (n = 9) or the use of new methods of analysis to identify and quantify preference and scale heterogeneity (n = 4). No studies reported the use of Bayesian methods in their experimental design. Conclusion Incorporating more sophisticated methods in the design of pharmacy-related discrete choice experiments could help researchers produce more efficient experiments which are better suited to valuing complex pharmacy services. Pharmacy-related discrete choice experiments could also benefit from more sophisticated analytical techniques such as investigations into scale and preference heterogeneity. Employing these sophisticated methods for both design and analysis could extend the usefulness of discrete choice experiments to inform health and pharmacy policy.

  3. Intercomparison Project on Parameterizations of Large-Scale Dynamics for Simulations of Tropical Convection

    NASA Astrophysics Data System (ADS)

    Sobel, A. H.; Wang, S.; Bellon, G.; Sessions, S. L.; Woolnough, S.

    2013-12-01

    Parameterizations of large-scale dynamics have been developed in the past decade for studying the interaction between tropical convection and large-scale dynamics, based on our physical understanding of the tropical atmosphere. A principal advantage of these methods is that they offer a pathway to attack the key question of what controls large-scale variations of tropical deep convection. These methods have been used with both single column models (SCMs) and cloud-resolving models (CRMs) to study the interaction of deep convection with several kinds of environmental forcings. While much has been learned from these efforts, different groups' efforts are somewhat hard to compare. Different models, different versions of the large-scale parameterization methods, and experimental designs that differ in other ways are used. It is not obvious which choices are consequential to the scientific conclusions drawn and which are not. The methods have matured to the point that there is value in an intercomparison project. In this context, the Global Atmospheric Systems Study - Weak Temperature Gradient (GASS-WTG) project was proposed at the Pan-GASS meeting in September 2012. The weak temperature gradient approximation is one method to parameterize large-scale dynamics, and is used in the project name for historical reasons and simplicity, but another method, the damped gravity wave (DGW) method, will also be used in the project. The goal of the GASS-WTG project is to develop community understanding of the parameterization methods currently in use. Their strengths, weaknesses, and functionality in models with different physics and numerics will be explored in detail, and their utility to improve our understanding of tropical weather and climate phenomena will be further evaluated. This presentation will introduce the intercomparison project, including background, goals, and overview of the proposed experimental design. Interested groups will be invited to join (it will not be too late), and preliminary results will be presented.
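
    Neither parameterization is detailed in this abstract, but the core of the WTG method can be sketched: the large-scale vertical velocity is diagnosed so that vertical advection of potential temperature relaxes the simulated profile toward a reference profile over a fixed timescale. A minimal illustration, with all profiles and the relaxation timescale as assumed placeholders:

        # Sketch of the weak temperature gradient (WTG) diagnosis of
        # large-scale vertical velocity: w * dtheta/dz = (theta - theta_ref) / tau.
        # Profiles here are illustrative placeholders, not values from the project.
        import numpy as np

        z = np.linspace(1e3, 15e3, 50)                 # free-troposphere heights (m)
        theta_ref = 300.0 + 4e-3 * z                   # reference potential temp. (K)
        theta = theta_ref + 0.5 * np.sin(np.pi * z / 15e3)  # perturbed model profile (K)
        tau = 3 * 3600.0                               # relaxation timescale (s)

        dtheta_dz = np.gradient(theta, z)              # static stability (K/m)
        w_wtg = (theta - theta_ref) / (tau * dtheta_dz)  # diagnosed large-scale w (m/s)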

  4. A Multiscale, Nonlinear, Modeling Framework Enabling the Design and Analysis of Composite Materials and Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2012-01-01

    A framework for the multiscale design and analysis of composite materials and structures is presented. The ImMAC software suite, developed at NASA Glenn Research Center, embeds efficient, nonlinear micromechanics capabilities within higher scale structural analysis methods such as finite element analysis. The result is an integrated, multiscale tool that relates global loading to the constituent scale, captures nonlinearities at this scale, and homogenizes local nonlinearities to predict their effects at the structural scale. Example applications of the multiscale framework are presented for the stochastic progressive failure of a SiC/Ti composite tensile specimen and the effects of microstructural variations on the nonlinear response of woven polymer matrix composites.

  5. A Multiscale, Nonlinear, Modeling Framework Enabling the Design and Analysis of Composite Materials and Structures

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2011-01-01

    A framework for the multiscale design and analysis of composite materials and structures is presented. The ImMAC software suite, developed at NASA Glenn Research Center, embeds efficient, nonlinear micromechanics capabilities within higher scale structural analysis methods such as finite element analysis. The result is an integrated, multiscale tool that relates global loading to the constituent scale, captures nonlinearities at this scale, and homogenizes local nonlinearities to predict their effects at the structural scale. Example applications of the multiscale framework are presented for the stochastic progressive failure of a SiC/Ti composite tensile specimen and the effects of microstructural variations on the nonlinear response of woven polymer matrix composites.

  6. Performance/price estimates for cortex-scale hardware: a design space exploration.

    PubMed

    Zaveri, Mazad S; Hammerstrom, Dan

    2011-04-01

    In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering. Copyright © 2010 Elsevier Ltd. All rights reserved.

  7. An adaptive response surface method for crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Yang, Ren-Jye; Zhu, Ping

    2013-11-01

    Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built from a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.
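
    The paper's Bayesian selection metric is not given in this abstract; as a rough stand-in, the same idea of systematically ranking candidate response surfaces can be sketched with the Bayesian information criterion (BIC) over polynomial candidates fitted to synthetic data:

        # Sketch of selecting among candidate response surfaces with an
        # information criterion (BIC, used here as a stand-in for the
        # paper's Bayesian metric), on a synthetic 1-D design response.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 15)
        y = 3 * x**2 - 2 * x + rng.normal(0, 0.05, x.size)  # noisy samples

        def bic_for_degree(deg):
            coef = np.polyfit(x, y, deg)
            resid = y - np.polyval(coef, x)
            n, k = x.size, deg + 1
            return n * np.log(np.mean(resid**2)) + k * np.log(n)

        degrees = [1, 2, 3, 4]
        best = min(degrees, key=bic_for_degree)   # lowest BIC wins
        print("selected polynomial degree:", best)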

  8. Effects of Learning Style and Training Method on Computer Attitude and Performance in World Wide Web Page Design Training.

    ERIC Educational Resources Information Center

    Chou, Huey-Wen; Wang, Yu-Fang

    1999-01-01

    Compares the effects of two training methods on computer attitude and performance in a World Wide Web page design program in a field experiment with high school students in Taiwan. Discusses individual differences, Kolb's Experiential Learning Theory and Learning Style Inventory, Computer Attitude Scale, and results of statistical analyses.…

  9. Performance Prediction Relationships for AM2 Airfield Matting Developed from Full-Scale Accelerated Testing and Laboratory Experimentation

    DTIC Science & Technology

    2018-01-01

    work, the prevailing methods used to predict the performance of AM2 were based on the CBR design procedure for flexible pavements using a small number...suitable for design and evaluation frameworks currently used for airfield pavements and matting systems. DISCLAIMER: The contents of this report...methods used to develop the equivalency curves equated the mat-surfaced area to an equivalent thickness of flexible pavement using the CBR design

  10. Design Optimization Toolkit: Users' Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods suited for gradient/nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk barely intrusive to other engineering software packages. As part of this inherent flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.

  11. Estimating scaled treatment effects with multiple outcomes.

    PubMed

    Kennedy, Edward H; Kangovi, Shreya; Mitra, Nandita

    2017-01-01

    In classical study designs, the aim is often to learn about the effects of a treatment or intervention on a single outcome; in many modern studies, however, data on multiple outcomes are collected and it is of interest to explore effects on multiple outcomes simultaneously. Such designs can be particularly useful in patient-centered research, where different outcomes might be more or less important to different patients. In this paper, we propose scaled effect measures (via potential outcomes) that translate effects on multiple outcomes to a common scale, using mean-variance and median-interquartile range based standardizations. We present efficient, nonparametric, doubly robust methods for estimating these scaled effects (and weighted average summary measures), and for testing the null hypothesis that treatment affects all outcomes equally. We also discuss methods for exploring how treatment effects depend on covariates (i.e., effect modification). In addition to describing efficiency theory for our estimands and the asymptotic behavior of our estimators, we illustrate the methods in a simulation study and a data analysis. Importantly, and in contrast to much of the literature concerning effects on multiple outcomes, our methods are nonparametric and can be used not only in randomized trials to yield increased efficiency, but also in observational studies with high-dimensional covariates to reduce confounding bias.
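
    A minimal sketch of the mean-variance standardization described above, using simple plug-in means on synthetic randomized data (the paper's doubly robust estimators would replace these plug-ins in observational settings):

        # Mean-variance scaled treatment effects across multiple outcomes:
        # effect_j = (E[Y_j | T=1] - E[Y_j | T=0]) / SD(Y_j). Synthetic data.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        t = rng.integers(0, 2, n)                      # randomized treatment
        outcomes = np.column_stack([
            1.0 + 0.4 * t + rng.normal(0, 1, n),       # outcome 1
            2.0 + 0.1 * t + rng.normal(0, 2, n),       # outcome 2
        ])

        scaled = [(outcomes[t == 1, j].mean() - outcomes[t == 0, j].mean())
                  / outcomes[:, j].std() for j in range(outcomes.shape[1])]
        print("scaled effects:", np.round(scaled, 3))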

  12. Measuring the Experience and Perception of Suffering

    ERIC Educational Resources Information Center

    Schulz, Richard; Monin, Joan K.; Czaja, Sara J.; Lingler, Jennifer H.; Beach, Scott R.; Martire, Lynn M.; Dodds, Angela; Hebert, Randy S.; Zdaniuk, Bozena; Cook, Thomas B.

    2010-01-01

    Purpose: Assess psychometric properties of scales developed to assess experience and perception of physical, psychological, and existential suffering in older individuals. Design and Methods: Scales were administered to 3 populations of older persons and/or their family caregivers: individuals with Alzheimer's disease (AD) and their family…

  13. Preliminary measurement of the noise from the 2/9 scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A

    NASA Technical Reports Server (NTRS)

    Dittmar, J. H.

    1985-01-01

    Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis 8- by 6-Foot Wind Tunnel. The maximum blade passing tone decreases from its peak level when going to higher helical tip Mach numbers. This noise reduction points to the use of higher propeller speeds as a possible method to reduce airplane cabin noise while maintaining high flight speed and efficiency. Comparison of the SR-7A blade passing noise with the noise of the similarly designed SR-3 propeller shows good agreement, as expected. The SR-7A propeller is slightly noisier than the SR-3 model in the plane of rotation at the cruise condition. Projections of the tunnel model data are made to the full-scale LAP propeller mounted on the test bed aircraft and compared with design predictions. The prediction method is conservative in the sense that it overpredicts the projected model data.

  14. Horvitz-Thompson survey sample methods for estimating large-scale animal abundance

    USGS Publications Warehouse

    Samuel, M.D.; Garton, E.O.

    1994-01-01

    Large-scale surveys to estimate animal abundance can be useful for monitoring population status and trends, for measuring responses to management or environmental alterations, and for testing ecological hypotheses about abundance. However, large-scale surveys may be expensive and logistically complex. To ensure resources are not wasted on unattainable targets, the goals and uses of each survey should be specified carefully, and alternative methods for addressing these objectives should always be considered. During survey design, the importance of each survey error component (spatial design, proportion of detected animals, precision in detection) should be considered carefully to produce a complete, statistically based survey. Failure to address these three survey components may produce population estimates that are inaccurate (biased low), have unrealistic precision (too precise) and do not satisfactorily meet the survey objectives. Optimum survey design requires trade-offs in these sources of error relative to the costs of sampling plots and detecting animals on plots, considerations that are specific to the spatial logistics and survey methods. The Horvitz-Thompson estimators provide a comprehensive framework for considering all three survey components during the design and analysis of large-scale wildlife surveys. Problems of spatial and temporal (especially survey to survey) heterogeneity in detection probabilities have received little consideration, but failure to account for heterogeneity produces biased population estimates. The goal of producing unbiased population estimates is in conflict with the increased variation from heterogeneous detection in the population estimate. One solution to this conflict is to use an MSE-based approach to achieve a balance between bias reduction and increased variation. Further research is needed to develop methods that address spatial heterogeneity in detection, evaluate the effects of temporal heterogeneity on survey objectives and optimize decisions related to survey bias and variance. Finally, managers and researchers involved in the survey design process must realize that obtaining the best survey results requires an interactive and recursive process of survey design, execution, analysis and redesign. Survey refinements will be possible as further knowledge is gained on the actual abundance and distribution of the population and on the most efficient techniques for detecting animals.
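
    The basic Horvitz-Thompson abundance estimator underlying this framework weights each plot count by the inverse of its inclusion probability, with a further correction for imperfect detection. A toy sketch with made-up survey values:

        # Horvitz-Thompson abundance estimate: weight each plot count by the
        # inverse of its inclusion probability and of its detection
        # probability. Values are illustrative, not from a real survey.
        import numpy as np

        counts = np.array([12, 5, 30, 8])          # animals detected on plots
        pi_plot = np.array([0.1, 0.1, 0.2, 0.1])   # plot inclusion probabilities
        p_detect = np.array([0.8, 0.7, 0.9, 0.8])  # detection probabilities

        N_hat = np.sum(counts / (pi_plot * p_detect))
        print(f"estimated abundance: {N_hat:.0f}")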

  15. Designing scalable product families by the radial basis function-high-dimensional model representation metamodelling technique

    NASA Astrophysics Data System (ADS)

    Pirmoradi, Zhila; Haji Hajikolaei, Kambiz; Wang, G. Gary

    2015-10-01

    Product family design is cost-efficient for achieving the best trade-off between commonalization and diversification. However, for computationally intensive design functions which are viewed as black boxes, the family design would be challenging. A two-stage platform configuration method with generalized commonality is proposed for a scale-based family with unknown platform configuration. Unconventional sensitivity analysis and information on variation in the individual variants' optimal design are used for platform configuration design. Metamodelling is employed to provide the sensitivity and variable correlation information, leading to significant savings in function calls. A family of universal electric motors is designed for product performance and the efficiency of this method is studied. The impact of the employed parameters is also analysed. Then, the proposed method is modified for obtaining higher commonality. The proposed method is shown to yield design solutions with better objective function values, allowable performance loss and higher commonality than the previously developed methods in the literature.
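
    The RBF-HDMR metamodel itself is more elaborate than plain radial basis function interpolation, but the surrogate-modelling step it rests on can be sketched simply; the objective below is a cheap stand-in for an expensive black-box motor simulation:

        # Sketch of fitting an RBF metamodel to an expensive black-box
        # design function, as in surrogate-based family design.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(2)
        X = rng.uniform(0, 1, (40, 3))                    # sampled design variables
        y = np.sin(3 * X[:, 0]) + X[:, 1]**2 - X[:, 2]    # "expensive" responses

        surrogate = RBFInterpolator(X, y)                 # RBF metamodel
        x_new = np.array([[0.5, 0.5, 0.5]])
        print("surrogate prediction:", surrogate(x_new))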

  16. People’s Preferences of Urban Design Qualities for Walking on a Commercial Street

    NASA Astrophysics Data System (ADS)

    Ernawati, J.; Surjono; Sudarmo, B. S.

    2018-03-01

    This research aims to explore people’s preferences regarding urban design qualities for walking on a commercial street, with Kawi Street, located in a commercial neighborhood in the town of Malang, Indonesia, as the case study. Based on a literature review, this study used eight urban design qualities, i.e., enclosure, legibility, human scale, transparency, complexity, coherence, linkage, and imageability. The study applied a survey research method using a self-administered paper-pencil questionnaire with two measurement techniques: a Likert scale was used to explore people’s evaluations of the urban design qualities of the street, while multiple-rating scales were used to measure people’s preference for walking on the street. One hundred and ten people were randomly selected as respondents. Regression analysis was employed to explore the influence of urban design qualities on people’s preference for walking. Results indicated four urban design qualities that affect people’s choice to walk on a commercial street, i.e., transparency, coherence, linkage, and imageability. Implications of the findings are discussed in the paper.

  17. Greased Lightning (GL-10) Performance Flight Research: Flight Data Report

    NASA Technical Reports Server (NTRS)

    McSwain, Robert G.; Glaab, Louis J.; Theodore, Colin R.; Rhew, Ray D. (Editor); North, David D. (Editor)

    2017-01-01

    Modern aircraft design methods produce acceptable performance predictions for large conventional aircraft. With revolutionary electric propulsion technologies fueled by the growth in the small UAS (Unmanned Aerial Systems) industry, these same prediction models are being applied to new, smaller, experimental design concepts requiring a VTOL (Vertical Take Off and Landing) capability for ODM (On Demand Mobility). A 50% sub-scale GL-10 flight model was built and tested to demonstrate the transition from hover to forward flight utilizing DEP (Distributed Electric Propulsion) [1][2]. In 2016 plans were put in place to conduct performance flight testing on the 50% sub-scale GL-10 flight model to support a NASA project called DELIVER (Design Environment for Novel Vertical Lift Vehicles). DELIVER was investigating the feasibility of including smaller and more experimental aircraft configurations in a NASA design tool called NDARC (NASA Design and Analysis of Rotorcraft) [3]. This report covers the performance flight data collected during flight testing of the GL-10 50% sub-scale flight model conducted at Beaver Dam Airpark, VA. Overall, the flight test data provide great insight into how well existing conceptual design tools predict the performance of small-scale experimental DEP concepts. Low-fidelity conceptual design tools estimated the (L/D)max of the GL-10 50% sub-scale flight model to be 16. The experimentally measured (L/D)max for the GL-10 50% scale flight model was 7.2. The predicted versus measured aerodynamic performance highlights the complexity of wing and nacelle interactions, which is not currently accounted for in existing low-fidelity tools.

  18. Structural design using equilibrium programming formulations

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1995-01-01

    Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.
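
    Fully stressed design, mentioned above as fitting within the equilibrium programming framework, is easy to illustrate: each member's area is rescaled by the ratio of its computed stress to the allowable stress until the design stops changing. A toy sketch (for a statically determinate case, where the update converges in a single pass):

        # Toy fully stressed design iteration: resize each member so its
        # stress approaches the allowable value. The "analysis" here is a
        # trivial axial-stress stand-in for a real structural solver;
        # redundant structures would require re-analysis each pass.
        import numpy as np

        loads = np.array([1000.0, 1500.0])     # member axial forces (N)
        sigma_allow = 200.0                    # allowable stress (N/mm^2)
        areas = np.array([1.0, 1.0])           # initial cross-sections (mm^2)

        for _ in range(20):
            stresses = loads / areas           # simple axial stress analysis
            areas *= stresses / sigma_allow    # fully stressed design update
        print("converged areas (mm^2):", np.round(areas, 2))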

  19. Innovations for Evaluation Research: Multiform Protocols, Visual Analog Scaling, and the Retrospective Pretest-Posttest Design.

    PubMed

    Chang, Rong; Little, Todd D

    2018-06-01

    In this article, we review three innovative methods: multiform protocols, visual analog scaling, and the retrospective pretest-posttest design that can be used in evaluation research. These three techniques have been proposed for decades, but unfortunately, they are still not utilized readily in evaluation research. Our goal is to familiarize researchers with these underutilized research techniques that could reduce personnel effort and costs for data collection while producing better inferences for a study. We begin by discussing their applications and special unique features. We then discuss each technique's strengths and limitations and offer practical tips on how to better implement these methods in evaluation research. We then showcase two recent empirical studies that implement these methods in real-world evaluation research applications.

  20. Multi-scale ecosystem monitoring: an application of scaling data to answer multiple ecological questions

    USDA-ARS?s Scientific Manuscript database

    Background/Question/Methods Standardized monitoring data collection efforts using a probabilistic sample design, such as in the Bureau of Land Management’s (BLM) Assessment, Inventory, and Monitoring (AIM) Strategy, provide a core suite of ecological indicators, maximize data collection efficiency,...

  1. Computational methods of robust controller design for aerodynamic flutter suppression

    NASA Technical Reports Server (NTRS)

    Anderson, L. R.

    1981-01-01

    The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time-scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th-order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth-order random examples. A literature review of robust controller design methods follows, which includes a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
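
    The computation at the heart of such methods, solving an algebraic Riccati equation for a regulator gain, can be sketched with standard tools; the matrices below are illustrative, not the report's turbofan model:

        # Solve the continuous-time algebraic Riccati equation
        # A'P + PA - P B R^{-1} B' P + Q = 0 for an LQR gain.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0], [-1.0, -0.3]])
        B = np.array([[0.0], [1.0]])
        Q = np.eye(2)
        R = np.array([[1.0]])

        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain
        print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))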

  2. Redefining thermal regimes to design reserves for coral reefs in the face of climate change.

    PubMed

    Chollett, Iliana; Enríquez, Susana; Mumby, Peter J

    2014-01-01

    Reef managers cannot fight global warming through mitigation at the local scale, but they can use information on thermal patterns to plan reserve networks that maximize the probability of persistence of their reef system. Here we assess previous methods for the design of reserves for climate change and present a new approach to prioritize areas for conservation that leverages the most desirable properties of previous approaches. The new method moves the science of reserve design for climate change a step forward by: (1) recognizing the role of seasonal acclimation in increasing the limits of environmental tolerance of corals and ameliorating the bleaching response; (2) using the best proxy for acclimatization currently available; (3) including information from several bleaching events, whose frequency is likely to increase in the future; (4) assessing relevant variability at country scales, where most management plans are carried out. We demonstrate the method in Honduras, where a reassessment of the marine spatial plan is in progress.

  3. Redefining Thermal Regimes to Design Reserves for Coral Reefs in the Face of Climate Change

    PubMed Central

    Chollett, Iliana; Enríquez, Susana; Mumby, Peter J.

    2014-01-01

    Reef managers cannot fight global warming through mitigation at the local scale, but they can use information on thermal patterns to plan reserve networks that maximize the probability of persistence of their reef system. Here we assess previous methods for the design of reserves for climate change and present a new approach to prioritize areas for conservation that leverages the most desirable properties of previous approaches. The new method moves the science of reserve design for climate change a step forward by: (1) recognizing the role of seasonal acclimation in increasing the limits of environmental tolerance of corals and ameliorating the bleaching response; (2) using the best proxy for acclimatization currently available; (3) including information from several bleaching events, whose frequency is likely to increase in the future; (4) assessing relevant variability at country scales, where most management plans are carried out. We demonstrate the method in Honduras, where a reassessment of the marine spatial plan is in progress. PMID:25333380

  4. Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.

    1997-01-01

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, to facilitate concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques, including intermediate responses, linking variables, and compatibility constraints, are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigation and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation for fitting models in a large number of factors. Noise modeling techniques are compared, and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. This method, as well as the associated approaches, is illustrated through application to the preliminary design of a commercial turbofan turbine propulsion system. The case study was developed in collaboration with Allison Engine Company, Rolls Royce Aerospace, and is based on the existing Allison AE3007 engine designed for midsize commercial, regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low-pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified. The method and solutions identified are verified by comparison with the AE3007 engine. The solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.

  5. Acoustic Treatment Design Scaling Methods. Volume 5; Analytical and Experimental Data Correlation

    NASA Technical Reports Server (NTRS)

    Chien, W. E.; Kraft, R. E.; Syed, A. A.

    1999-01-01

    The primary purpose of the study presented in this volume is to present the results and data analysis of in-duct transmission loss measurements. Transmission loss testing was performed on full-scale, 1/2-scale, and 1/5-scale treatment panel samples. The objective of the study was to compare predicted and measured transmission loss for full-scale and subscale panels in an attempt to evaluate the variations in suppression between full- and subscale panels that were ostensibly of equivalent design. Generally, the results indicated unsatisfactory agreement between measurement and prediction, even at full scale. This was attributable to difficulties encountered in obtaining sufficiently accurate test results, even with extraordinary care in calibrating the instrumentation and performing the test. Test difficulties precluded the ability to make measurements at frequencies high enough to be representative of subscale liners. It is concluded that transmission loss measurements without ducts and data acquisition facilities specifically designed to operate with the precision and complexity required for high subscale frequency ranges are inadequate for evaluation of subscale treatment effects.

  6. Fabrication and evaluation of advanced titanium structural panels for supersonic cruise aircraft

    NASA Technical Reports Server (NTRS)

    Payne, L.

    1977-01-01

    Flightworthy primary structural panels were designed, fabricated, and tested to investigate two advanced fabrication methods for titanium alloys. Skin-stringer panels fabricated using the weldbraze process, and honeycomb-core sandwich panels fabricated using a diffusion bonding process, were designed to replace an existing integrally stiffened shear panel on the upper wing surface of the NASA YF-12 research aircraft. The investigation included ground testing and Mach 3 flight testing of full-scale panels, and laboratory testing of representative structural element specimens. Test results obtained on full-scale panels and structural element specimens indicate that both of the fabrication methods investigated are suitable for primary structural applications on future civil and military supersonic cruise aircraft.

  7. Safe Life Propulsion Design Technologies (3rd Generation Propulsion Research and Technology)

    NASA Technical Reports Server (NTRS)

    Ellis, Rod

    2000-01-01

    The tasks outlined in this viewgraph presentation on safe life propulsion design technologies (third generation propulsion research and technology) include the following: (1) Ceramic matrix composite (CMC) life prediction methods; (2) Life prediction methods for ultra high temperature polymer matrix composites for reusable launch vehicle (RLV) airframe and engine application; (3) Enabling design and life prediction technology for cost effective large-scale utilization of MMCs and innovative metallic material concepts; (4) Probabilistic analysis methods for brittle materials and structures; (5) Damage assessment in CMC propulsion components using nondestructive characterization techniques; and (6) High temperature structural seals for RLV applications.

  8. A case report of evaluating a large-scale health systems improvement project in an uncontrolled setting: a quality improvement initiative in KwaZulu-Natal, South Africa.

    PubMed

    Mate, Kedar S; Ngidi, Wilbroda Hlolisile; Reddy, Jennifer; Mphatswe, Wendy; Rollins, Nigel; Barker, Pierre

    2013-11-01

    New approaches are needed to evaluate quality improvement (QI) within large-scale public health efforts. This case report details challenges to large-scale QI evaluation, and proposes solutions relying on adaptive study design. We used two sequential evaluative methods to study a QI effort to improve delivery of HIV preventive care in public health facilities in three districts in KwaZulu-Natal, South Africa, over a 3-year period. We initially used a cluster randomised controlled trial (RCT) design. During the RCT study period, tensions arose between intervention implementation and evaluation design due to loss of integrity of the randomisation unit over time, pressure to implement changes across the randomisation unit boundaries, and use of administrative rather than functional structures for the randomisation. In response to this loss of design integrity, we switched to a more flexible intervention design and a mixed-methods quasiexperimental evaluation relying on both a qualitative analysis and an interrupted time series quantitative analysis. Cluster RCT designs may not be optimal for evaluating complex interventions to improve implementation in uncontrolled 'real world' settings. More flexible, context-sensitive evaluation designs offer a better balance of the need to adjust the intervention during the evaluation to meet implementation challenges while providing the data required to evaluate effectiveness. Our case study involved HIV care in a resource-limited setting, but these issues likely apply to complex improvement interventions in other settings.

  9. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biros, George

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high-dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin’s own 10-petaflops Stampede system, ANL’s Mira system, and ORNL’s Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.

  10. Detrended fluctuation analysis as a regression framework: Estimating dependence at different scales

    NASA Astrophysics Data System (ADS)

    Kristoufek, Ladislav

    2015-02-01

    We propose a framework combining detrended fluctuation analysis with standard regression methodology. The method is built on detrended variances and covariances and it is designed to estimate regression parameters at different scales and under potential nonstationarity and power-law correlations. The former feature allows for distinguishing between effects for a pair of variables from different temporal perspectives. The latter ones make the method a significant improvement over the standard least squares estimation. Theoretical claims are supported by Monte Carlo simulations. The method is then applied on selected examples from physics, finance, environmental science, and epidemiology. For most of the studied cases, the relationship between variables of interest varies strongly across scales.
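
    A minimal sketch of the idea, assuming the scale-specific regression coefficient is formed as the ratio of detrended covariance to detrended variance (linear detrending in non-overlapping windows of size s), on synthetic integrated series:

        # Scale-specific regression from detrended covariances, in the
        # spirit of DFA-based regression: beta(s) = F2_xy(s) / F2_xx(s).
        import numpy as np

        def detrended_cov(x, y, s):
            n = len(x) // s
            cov = 0.0
            for i in range(n):
                seg = slice(i * s, (i + 1) * s)
                t = np.arange(s)
                rx = x[seg] - np.polyval(np.polyfit(t, x[seg], 1), t)  # detrend x
                ry = y[seg] - np.polyval(np.polyfit(t, y[seg], 1), t)  # detrend y
                cov += np.mean(rx * ry)
            return cov / n

        rng = np.random.default_rng(3)
        x = np.cumsum(rng.normal(size=4000))            # integrated noise
        y = 0.7 * x + np.cumsum(rng.normal(size=4000))  # true beta = 0.7
        for s in (16, 64, 256):
            beta = detrended_cov(x, y, s) / detrended_cov(x, x, s)
            print(f"scale {s}: beta = {beta:.2f}")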

  11. Prospective cohort studies of newly marketed medications: using covariate data to inform the design of large-scale studies.

    PubMed

    Franklin, Jessica M; Rassen, Jeremy A; Bartels, Dorothee B; Schneeweiss, Sebastian

    2014-01-01

    Nonrandomized safety and effectiveness studies are often initiated immediately after the approval of a new medication, but patients prescribed the new medication during this period may be substantially different from those receiving an existing comparator treatment. Restricting the study to comparable patients after data have been collected is inefficient in prospective studies with primary collection of outcomes. We discuss design and methods for evaluating covariate data to assess the comparability of treatment groups, identify patient subgroups that are not comparable, and decide when to transition to a large-scale comparative study. We demonstrate methods in an example study comparing Cox-2 inhibitors during their postmarketing period (1999-2005) with nonselective nonsteroidal anti-inflammatory drugs (NSAIDs). Graphical checks of propensity score distributions in each treatment group showed substantial problems with overlap in the initial cohorts. In the first half of 1999, >40% of patients were in the region of nonoverlap on the propensity score, and across the study period this fraction never dropped below 10% (the a priori decision threshold for transitioning to the large-scale study). After restricting to patients with no prior NSAID use, <1% of patients were in the region of nonoverlap, indicating that a large-scale study could be initiated in this subgroup and few patients would need to be trimmed from analysis. A sequential study design that uses pilot data to evaluate treatment selection can guide the efficient design of large-scale outcome studies with primary data collection by focusing on comparable patients.
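
    The overlap diagnostic described above is straightforward to sketch: fit a propensity score, determine the region of common support, and report the fraction of patients outside it. Synthetic stand-in data:

        # Propensity score overlap check: fraction of patients outside the
        # region of common support between treatment groups.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        X = rng.normal(size=(2000, 5))                    # baseline covariates
        t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # treatment assignment

        ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
        lo = max(ps[t == 1].min(), ps[t == 0].min())      # common support bounds
        hi = min(ps[t == 1].max(), ps[t == 0].max())
        outside = np.mean((ps < lo) | (ps > hi))
        print(f"fraction in region of nonoverlap: {outside:.1%}")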

  12. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  13. Fabrication of nano-scale Cu bond pads with seal design in 3D integration applications.

    PubMed

    Chen, K N; Tsang, C K; Wu, W W; Lee, S H; Lu, J Q

    2011-04-01

    A method to fabricate nano-scale Cu bond pads for improving bonding quality in 3D integration applications is reported. The effect of Cu bonding quality on inter-level via structural reliability for 3D integration applications is investigated. We developed a Cu nano-scale-height bond pad structure and fabrication process for improved bonding quality by recessing oxides using a combination of a SiO2 CMP process and dilute HF wet etching. In addition, in order to achieve improved wafer-level bonding, we introduced a seal design concept that prevents corrosion and provides extra mechanical support. Demonstrations of these concepts and processes establish the feasibility of reliable nano-scale 3D integration applications.

  14. Anthropogenic-based regional-scale factors most consistently explain plot-level exotic diversity in grasslands

    USDA-ARS?s Scientific Manuscript database

    Invasion is viewed as a dominant threat to Earth’s biological diversity, but evidence linking the accumulation of exotic species to the simultaneous suppression of native diversity is equivocal, relying heavily on data from studies using different methods and designs. Fine-scale studies often descr...

  15. The complex-scaled multiconfigurational spin-tensor electron propagator method for low-lying shape resonances in Be-, Mg- and Ca-

    NASA Astrophysics Data System (ADS)

    Tsogbayar, Tsednee; Yeager, Danny L.

    2017-01-01

    We further apply the complex-scaled multiconfigurational spin-tensor electron propagator method (CMCSTEP) to the theoretical determination of resonance parameters for electron-atom systems, including open-shell and highly correlated (non-dynamical correlation) atoms and molecules. The multiconfigurational spin-tensor electron propagator method (MCSTEP), developed and implemented by Yeager and his coworkers in real space, gives very accurate and reliable ionization potentials and electron affinities. CMCSTEP uses a complex-scaled multiconfigurational self-consistent field (CMCSCF) state as an initial state along with a dilated Hamiltonian in which all of the electronic coordinates are scaled by a complex factor. CMCSTEP is designed for determining resonances. We apply CMCSTEP to obtain the lowest 2P (Be-, Mg-) and 2D (Mg-, Ca-) shape resonances using several different basis sets, each with several complete active spaces. Many of the basis sets we employ have been used by others with different methods; hence, we can directly compare results across methods using the same basis sets.

  16. Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott

    2017-11-01

    Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which, unlike RANS simulations, exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, ``non-intrusive'' LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.

  17. Construction of multi-scale consistent brain networks: methods and applications.

    PubMed

    Ge, Bao; Tian, Yin; Hu, Xintao; Chen, Hanbo; Zhu, Dajiang; Zhang, Tuo; Han, Junwei; Guo, Lei; Liu, Tianming

    2015-01-01

    Mapping human brain networks provides a basis for studying brain function and dysfunction, and thus has gained significant interest in recent years. However, modeling human brain networks still faces several challenges including constructing networks at multiple spatial scales and finding common corresponding networks across individuals. As a consequence, many previous methods were designed for a single resolution or scale of brain network, though the brain networks are multi-scale in nature. To address this problem, this paper presents a novel approach to constructing multi-scale common structural brain networks from DTI data via an improved multi-scale spectral clustering applied on our recently developed and validated DICCCOLs (Dense Individualized and Common Connectivity-based Cortical Landmarks). Since the DICCCOL landmarks possess intrinsic structural correspondences across individuals and populations, we employed the multi-scale spectral clustering algorithm to group the DICCCOL landmarks and their connections into sub-networks, meanwhile preserving the intrinsically-established correspondences across multiple scales. Experimental results demonstrated that the proposed method can generate multi-scale consistent and common structural brain networks across subjects, and its reproducibility has been verified by multiple independent datasets. As an application, these multi-scale networks were used to guide the clustering of multi-scale fiber bundles and to compare the fiber integrity in schizophrenia and healthy controls. In general, our methods offer a novel and effective framework for brain network modeling and tract-based analysis of DTI data.
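
    Although the paper's pipeline is built on DICCCOL landmarks and an improved clustering algorithm, the multi-scale grouping step can be sketched with ordinary spectral clustering applied to a landmark affinity matrix at several cluster counts; the matrix below is random stand-in data, not DTI-derived:

        # Multi-scale network construction by spectral clustering of a
        # landmark connectivity (affinity) matrix at several scales.
        import numpy as np
        from sklearn.cluster import SpectralClustering

        rng = np.random.default_rng(5)
        n_landmarks = 60
        W = rng.random((n_landmarks, n_landmarks))
        W = (W + W.T) / 2                       # symmetric affinity matrix

        for k in (4, 8, 16):                    # coarse-to-fine scales
            labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                                        random_state=0).fit_predict(W)
            print(f"scale with {k} sub-networks:", np.bincount(labels))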

  18. Comparing three sampling techniques for estimating fine woody down dead biomass

    Treesearch

    Robert E. Keane; Kathy Gray

    2013-01-01

    Designing woody fuel sampling methods that quickly, accurately and efficiently assess biomass at relevant spatial scales requires extensive knowledge of each sampling method's strengths, weaknesses and tradeoffs. In this study, we compared various modifications of three common sampling methods (planar intercept, fixed-area microplot and photoload) for estimating...

  19. Personalised Information Services Using a Hybrid Recommendation Method Based on Usage Frequency

    ERIC Educational Resources Information Center

    Kim, Yong; Chung, Min Gyo

    2008-01-01

    Purpose: This paper seeks to describe a personal recommendation service (PRS) involving an innovative hybrid recommendation method suitable for deployment in a large-scale multimedia user environment. Design/methodology/approach: The proposed hybrid method partitions content and user into segments and executes association rule mining,…

  20. Laboratory validation of four black carbon measurement methods for the determination of non-volatile particulate matter (PM) mass emissions . . .

    EPA Science Inventory

    A laboratory-scale experimental program was designed to standardize each of four black carbon measurement methods, provide appropriate quality assurance/control procedures for these techniques, and compare measurements made by these methods to a NIST traceable standard (filter gr...

  1. Section Preequating under the Equivalent Groups Design without IRT

    ERIC Educational Resources Information Center

    Guo, Hongwen; Puhan, Gautam

    2014-01-01

    In this article, we introduce a section preequating (SPE) method (linear and nonlinear) under the randomly equivalent groups design. In this equating design, sections of Test X (a future new form) and another existing Test Y (an old form already on scale) are administered. The sections of Test X are equated to Test Y, after adjusting for the…
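    For context, a generic linear equating transformation under the equivalent-groups design looks like the following sketch (synthetic scores; this is the textbook mean-sigma form, not the authors' SPE derivation):

```python
# Generic linear-equating sketch: scores on new form X are mapped onto the
# scale of old form Y by matching means and standard deviations, which is
# valid under the randomly equivalent groups design. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(50, 10, 2000)       # randomly equivalent group taking form X
y = rng.normal(53, 12, 2000)       # randomly equivalent group taking form Y

def linear_equate(score_x):
    return y.mean() + (y.std() / x.std()) * (score_x - x.mean())

print("X score 60 maps to the Y scale as:", round(linear_equate(60.0), 2))
```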

  2. Schinus terebinthifolius countercurrent chromatography (Part II): Intra-apparatus scale-up and inter-apparatus method transfer.

    PubMed

    Costa, Fernanda das Neves; Vieira, Mariana Neves; Garrard, Ian; Hewitson, Peter; Jerz, Gerold; Leitão, Gilda Guimarães; Ignatova, Svetlana

    2016-09-30

    Countercurrent chromatography (CCC) is widely used across the world for purification of various materials, especially in natural product research. The predictability of CCC scale-up has been successfully demonstrated using specially designed instruments of the same manufacturer. The reality is that most CCC users do not have access to such instruments and do not have enough experience to transfer methods from one CCC column to another. This unique study by three international teams is based on an innovative approach to simplifying scale-up between different CCC machines, using fractionation of Schinus terebinthifolius berries dichloromethane extract as a case study. The optimized separation methodology, recently developed by the authors (Part I), was repeatedly performed on CCC columns of different design available at most research laboratories across the world. Hexane - ethyl acetate - methanol - water (6:1:6:1, v/v/v/v) was used as the solvent system, with masticadienonic and 3β-masticadienolic acids as target compounds to monitor stationary phase retention and calculate peak resolution. It has been demonstrated that volumetric, linear and length scale-up transfer factors based on column characteristics can be applied directly to columns of different i.d., volume and length, independent of instrument make, in intra-apparatus scale-up and inter-apparatus method transfer.
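    A minimal sketch of the volumetric transfer idea (illustrative numbers only; the paper also defines linear and length transfer factors): flow rate and sample load are scaled by the ratio of coil volumes when moving a method between instruments.

```python
# Hedged sketch of a volumetric transfer factor for CCC method transfer:
# flow rate and sample load scale with the ratio of coil volumes. The
# example values are invented, not taken from the paper.
def transfer(flow_ml_min, sample_g, v_source_ml, v_target_ml):
    f = v_target_ml / v_source_ml          # volumetric transfer factor
    return flow_ml_min * f, sample_g * f

# e.g. moving a method from a 140 mL analytical coil to a 900 mL prep coil
flow, load = transfer(flow_ml_min=1.0, sample_g=0.1,
                      v_source_ml=140.0, v_target_ml=900.0)
print(f"target flow ≈ {flow:.1f} mL/min, target load ≈ {load:.2f} g")
```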

  3. Investigation of Vapor Cooling Enhancements for Applications on Large Cryogenic Systems

    NASA Technical Reports Server (NTRS)

    Ameen, Lauren; Zoeckler, Joseph

    2017-01-01

    The need to demonstrate and evaluate the effectiveness of heat interception methods for use on a relevant cryogenic propulsion stage at a system level has been identified. The Evolvable Cryogenics (eCryo) Structural Heat Intercept, Insulation and Vibration Evaluation Rig (SHIIVER) will be designed with vehicle-specific geometries (the SLS Exploration Upper Stage (EUS) as guidance) and will be subjected to simulated space environments. One method of reducing structure-borne heat leak being investigated utilizes vapor-based heat interception, which could potentially reduce heat leak into liquid hydrogen propulsion tanks, increasing potential mission length or payload capability. Due to the high number of unknowns associated with the heat transfer mechanism and the integration of vapor-based heat interception on a realistic large-scale skirt design, a sub-scale investigation was developed, known as the Small-scale Laboratory Investigation of Cooling Enhancements (SLICE). SLICE aims to study, design, and test multiple sub-scale attachment and flow-configuration concepts for vapor-based heat interception of structural skirts, focusing on the efficiency of heat transfer to the boil-off hydrogen vapor as the fluid network designs and configurations are varied. Various analyses were completed in MATLAB, Excel VBA, and COMSOL Multiphysics to understand the optimum flow pattern for heat transfer and fluid dynamics. Results from these analyses were used to design and fabricate test-article subsections of a large forward skirt with vapor cooling applied. SLICE testing is currently being performed to collect thermal-mechanical performance data on multiple skirt heat-removal designs while varying the inlet vapor conditions necessary to intercept a specified amount of heat for a given system. Initial results suggest that applying vapor cooling provides roughly a 50% reduction in conductive heat transmission along the skirt to the tank. The information obtained by SLICE will be used by the SHIIVER engineering team to design and implement vapor-based heat removal technology into the SHIIVER forward skirt hardware design.
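    The basic energy balance behind vapor-based heat interception is simple; the sketch below shows the order-of-magnitude calculation with placeholder values that are not SLICE test conditions.

```python
# Order-of-magnitude sketch of vapor-based heat interception: the heat a
# boil-off hydrogen stream can absorb is m_dot * cp * dT. All numbers are
# illustrative placeholders (cp of cold hydrogen vapor is an assumption).
cp_h2 = 11000.0        # J/(kg K), approximate cp of cold hydrogen vapor
m_dot = 0.002          # kg/s of boil-off vapor routed through the skirt circuit
dT = 50.0              # K allowed vapor temperature rise along the skirt
q_intercepted = m_dot * cp_h2 * dT
print(f"interceptable heat ≈ {q_intercepted:.0f} W")   # ≈ 1100 W here
```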

  4. New methods of MR image intensity standardization via generalized scale

    NASA Astrophysics Data System (ADS)

    Madabhushi, Anant; Udupa, Jayaram K.

    2005-04-01

    Image intensity standardization is a post-acquisition processing operation designed for correcting acquisition-to-acquisition signal intensity variations (non-standardness) inherent in Magnetic Resonance (MR) images. While existing standardization methods based on histogram landmarks have been shown to produce a significant gain in the similarity of resulting image intensities, their weakness is that, in some instances the same histogram-based landmark may represent one tissue, while in other cases it may represent different tissues. This is often true for diseased or abnormal patient studies in which significant changes in the image intensity characteristics may occur. In an attempt to overcome this problem, in this paper, we present two new intensity standardization methods based on the concept of generalized scale. In reference 1 we introduced the concept of generalized scale (g-scale) to overcome the shape, topological, and anisotropic constraints imposed by other local morphometric scale models. Roughly speaking, the g-scale of a voxel in a scene was defined as the largest set of voxels connected to the voxel that satisfy some homogeneity criterion. We subsequently formulated a variant of the generalized scale notion, referred to as generalized ball scale (gB-scale), which, in addition to having the advantages of g-scale, also has superior noise resistance properties. These scale concepts are utilized in this paper to accurately determine principal tissue regions within MR images, and landmarks derived from these regions are used to perform intensity standardization. The new methods were qualitatively and quantitatively evaluated on a total of 67 clinical 3D MR images corresponding to four different protocols and to normal, Multiple Sclerosis (MS), and brain tumor patient studies. The generalized scale-based methods were found to be better than the existing methods, with a significant improvement observed for severely diseased and abnormal patient studies.
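    Roughly following the informal definition above (the largest connected set of voxels satisfying a homogeneity criterion), here is a 2-D toy sketch with a fixed intensity tolerance; the published g-scale and gB-scale formulations are considerably more elaborate.

```python
# Simplified 2-D sketch of the g-scale idea: grow the largest connected
# region around a seed whose intensities satisfy a homogeneity criterion
# (here, a fixed tolerance around the seed intensity).
from collections import deque
import numpy as np

def g_scale_region(img, seed, tol=10.0):
    h, w = img.shape
    seen = {seed}
    queue = deque([seed])
    base = float(img[seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in seen
                    and abs(float(img[nr, nc]) - base) <= tol):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen          # landmarks can then be derived from such regions

img = np.clip(np.random.default_rng(1).normal(100, 5, (64, 64)), 0, 255)
print(len(g_scale_region(img, (32, 32))), "voxels in the g-scale region")
```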

  5. Method Effects on an Adaptation of the Rosenberg Self-Esteem Scale in Greek and the Role of Personality Traits.

    PubMed

    Michaelides, Michalis P; Koutsogiorgi, Chrystalla; Panayiotou, Georgia

    2016-01-01

    Rosenberg's Self-Esteem Scale is a balanced, 10-item scale designed to be unidimensional; however, research has repeatedly shown that its factorial structure is contaminated by method effects due to item wording. Beyond the substantive self-esteem factor, 2 additional factors linked to the positive and negative wording of items have been theoretically specified and empirically supported. Initial evidence has revealed systematic relations of the 2 method factors with variables expressing approach and avoidance motivation. This study assessed the fit of competing confirmatory factor analytic models for the Rosenberg Self-Esteem Scale using data from 2 samples of adult participants in Cyprus. Models that accounted for both positive and negative wording effects via 2 latent method factors had better fit compared to alternative models. Measures of experiential avoidance, social anxiety, and private self-consciousness were associated with the method factors in structural equation models. The findings highlight the need to specify models with wording effects for a more accurate representation of the scale's structure and support the hypothesis of method factors as response styles, which are associated with individual characteristics related to avoidance motivation, behavioral inhibition, and anxiety.

  6. Sachem: a chemical cartridge for high-performance substructure search.

    PubMed

    Kratochvíl, Miroslav; Vondrášek, Jiří; Galgonek, Jakub

    2018-05-23

    Structure search is one of the valuable capabilities of small-molecule databases. Fingerprint-based screening methods are usually employed to enhance the search performance by reducing the number of calls to the verification procedure. In substructure search, fingerprints are designed to capture important structural aspects of the molecule to aid the decision about whether the molecule contains a given substructure. Currently available cartridges typically provide acceptable search performance for processing user queries, but do not scale satisfactorily with dataset size. We present Sachem, a new open-source chemical cartridge that implements two substructure search methods: The first is a performance-oriented reimplementation of substructure indexing based on the OrChem fingerprint, and the second is a novel method that employs newly designed fingerprints stored in inverted indices. We assessed the performance of both methods on small, medium, and large datasets containing 1, 10, and 94 million compounds, respectively. Comparison of Sachem with other freely available cartridges revealed improvements in overall performance, scaling potential and screen-out efficiency. The Sachem cartridge allows efficient substructure searches in databases of all sizes. The sublinear performance scaling of the second method and the ability to efficiently query large amounts of pre-extracted information may together open the door to new applications for substructure searches.
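    The inverted-index screening idea can be sketched in a few lines (all identifiers and "fingerprint bits" below are invented, not Sachem's API): each bit maps to the set of molecules containing it, and a query's candidate set is the intersection of those posting lists, which is then passed to exact verification.

```python
# Minimal sketch of fingerprint screening with an inverted index. The
# molecule ids and fingerprint "bits" are toy placeholders; real systems
# follow screening with exact subgraph-isomorphism verification.
from functools import reduce

index = {}                                    # bit -> set of molecule ids

def add_molecule(mol_id, bits):
    for b in bits:
        index.setdefault(b, set()).add(mol_id)

def screen(query_bits):
    """Candidates containing every fingerprint bit of the query substructure."""
    postings = sorted((index.get(b, set()) for b in query_bits), key=len)
    if not postings:
        return set()
    return reduce(set.intersection, postings[1:], postings[0])

add_molecule(1, {"C-C", "C=O", "ring6"})
add_molecule(2, {"C-C", "ring6"})
print(screen({"C-C", "ring6"}))   # {1, 2}: survivors go to verification
```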

  7. Experimental evaluation of four ground-motion scaling methods for dynamic response-history analysis of nonlinear structures

    USGS Publications Warehouse

    O'Donnell, Andrew P.; Kurama, Yahya C.; Kalkan, Erol; Taflanidis, Alexandros A.

    2017-01-01

    This paper experimentally evaluates four methods to scale earthquake ground motions within an ensemble of records to minimize the statistical dispersion and maximize the accuracy in the dynamic peak roof drift demand and peak inter-story drift demand estimates from response-history analyses of nonlinear building structures. The scaling methods that are investigated are based on: (1) ASCE/SEI 7-10 guidelines; (2) spectral acceleration at the fundamental (first mode) period of the structure, Sa(T1); (3) maximum incremental velocity, MIV; and (4) modal pushover analysis. A total of 720 shake-table tests of four small-scale nonlinear building frame specimens with different static and dynamic characteristics are conducted. The peak displacement demands from full suites of 36 near-fault ground-motion records as well as from smaller "unbiased" and "biased" design subsets (bins) of ground motions are included. Out of the four scaling methods, ground motions scaled to the median MIV of the ensemble resulted in the smallest dispersion in the peak roof and inter-story drift demands. Scaling based on MIV also provided the most accurate median demands as compared with the "benchmark" demands for structures with greater nonlinearity; however, this accuracy was reduced for structures exhibiting reduced nonlinearity. The modal pushover-based scaling (MPS) procedure was the only method to conservatively overestimate the median drift demands.
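    A sketch of the MIV intensity measure, assuming the common definition as the largest velocity increment between successive zero crossings of the acceleration record (synthetic record, rectangle-rule integration):

```python
# Sketch of the MIV intensity measure: the largest velocity increment
# delivered by a single acceleration pulse, i.e., the maximum area under the
# record between successive zero crossings. The record below is synthetic.
import numpy as np

def max_incremental_velocity(acc, dt):
    crossings = np.where(np.diff(np.sign(acc)) != 0)[0]
    edges = np.concatenate(([0], crossings + 1, [acc.size]))
    return max(abs(acc[i:j].sum()) * dt for i, j in zip(edges[:-1], edges[1:]))

dt = 0.005
t = np.arange(0.0, 20.0, dt)
acc_g = np.sin(2 * np.pi * 1.2 * t) * np.exp(-0.25 * t)   # toy record, in g
miv = max_incremental_velocity(acc_g, dt) * 9.81           # convert to m/s
print(f"MIV ≈ {miv:.3f} m/s")
# Records can then be scaled so each one's MIV matches the ensemble median.
```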

  8. Quality by design: scale-up of freeze-drying cycles in pharmaceutical industry.

    PubMed

    Pisano, Roberto; Fissore, Davide; Barresi, Antonello A; Rastelli, Massimo

    2013-09-01

    This paper shows the application of mathematical modeling to scale-up a cycle developed with lab-scale equipment on two different production units. The above method is based on a simplified model of the process parameterized with experimentally determined heat and mass transfer coefficients. In this study, the overall heat transfer coefficient between product and shelf was determined by using the gravimetric procedure, while the dried product resistance to vapor flow was determined through the pressure rise test technique. Once model parameters were determined, the freeze-drying cycle of a parenteral product was developed via dynamic design space for a lab-scale unit. Then, mathematical modeling was used to scale-up the above cycle in the production equipment. In this way, appropriate values were determined for processing conditions, which allow the replication, in the industrial unit, of the product dynamics observed in the small scale freeze-dryer. This study also showed how inter-vial variability, as well as model parameter uncertainty, can be taken into account during scale-up calculations.
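    The flavor of such a simplified model can be sketched as a quasi-steady balance in which shelf-to-product heat flow (coefficient Kv, measured gravimetrically) supplies the sublimation heat of the vapor flux through the dried cake (resistance Rp, from pressure-rise tests). All constants below are illustrative placeholders, not the paper's values.

```python
# Rough sketch of a simplified freeze-drying model of the kind used for
# scale-up: heat balance Kv*(Ts - Tp) = DH * (p_ice(Tp) - Pc)/Rp, solved by a
# damped fixed-point iteration. Every constant here is a placeholder.
import numpy as np

DH_SUB = 2.8e6            # J/kg, heat of sublimation of ice
def p_ice(T):             # Pa, crude vapor-pressure correlation for ice
    return np.exp(28.9 - 6150.0 / T)

def sublimation_state(T_shelf, P_chamber, Kv, Rp, T_guess=240.0):
    Tp = T_guess
    for _ in range(200):                      # damped fixed-point iteration
        flux = (p_ice(Tp) - P_chamber) / Rp   # kg m^-2 s^-1 through the cake
        Tp += 0.05 * (T_shelf - DH_SUB * flux / Kv - Tp)
    return Tp, flux

Tp, flux = sublimation_state(T_shelf=263.0, P_chamber=10.0,
                             Kv=20.0, Rp=1.0e5)
print(f"product temp ≈ {Tp:.1f} K, flux ≈ {flux*3.6e6:.0f} g m^-2 h^-1")
# Scale-up then asks: which shelf temperature and chamber pressure reproduce
# the same Tp and flux once Kv changes on the production unit?
```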

  9. Applications of three-dimensional (3D) printing for microswimmers and bio-hybrid robotics.

    PubMed

    Stanton, M M; Trichet-Paredes, C; Sánchez, S

    2015-04-07

    This article will focus on recent reports that have applied three-dimensional (3D) printing for designing millimeter to micrometer architecture for robotic motility. The utilization of 3D printing has rapidly grown in applications for medical prosthetics and scaffolds for organs and tissue, but more recently has been implemented for designing mobile robotics. With an increase in the demand for devices to perform in fragile and confined biological environments, it is crucial to develop new miniaturized, biocompatible 3D systems. Fabrication of materials at different scales with different properties makes 3D printing an ideal system for creating frameworks for small-scale robotics. 3D printing has been applied for the design of externally powered, artificial microswimmers and studying their locomotive capabilities in different fluids. Printed materials have also been incorporated with motile cells for bio-hybrid robots capable of functioning by cell contraction and swimming. These 3D devices offer new methods of robotic motility for biomedical applications requiring miniature structures. Traditional 3D printing methods, where a structure is fabricated in an additive process from a digital design, and non-traditional 3D printing methods, such as lithography and molding, will be discussed.

  10. A Watershed Scale Life Cycle Assessment Framework for Hydrologic Design

    NASA Astrophysics Data System (ADS)

    Tavakol-Davani, H.; Tavakol-Davani, PhD, H.; Burian, S. J.

    2017-12-01

    Sustainable hydrologic design has recently received attention from researchers with different backgrounds, including hydrologists and sustainability experts. On one hand, hydrologists have been analyzing ways to achieve hydrologic goals through implementation of recent environmentally-friendly approaches, e.g. Green Infrastructure (GI), without quantifying the life cycle environmental impacts of the infrastructure through the ISO Life Cycle Assessment (LCA) method. On the other hand, sustainability experts have been applying LCA to study the life cycle impacts of water infrastructure, without considering the important hydrologic aspects through hydrologic and hydraulic (H&H) analysis. In fact, defining proper system elements for a watershed-scale urban water sustainability study requires both H&H and LCA specialties, which reveals the necessity of an integrated, interdisciplinary study. Therefore, the present study developed a watershed-scale coupled H&H-LCA framework that brings hydrology and sustainability expertise together, helping to move the current vague definition of sustainable hydrologic design toward a globally standard concept. The proposed framework was employed to study GIs for an urban watershed in Toledo, OH. Lastly, uncertainties associated with the proposed method and parameters were analyzed through a robust Monte Carlo simulation using parallel processing. Results indicated the necessity of both hydrologic and LCA components in the design procedure in order to achieve sustainability.
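    A minimal sketch of the Monte Carlo uncertainty step using Python multiprocessing for the parallel processing; the "model" is a trivial stand-in for a coupled H&H-LCA run and the parameter ranges are invented.

```python
# Sketch of parallel Monte Carlo uncertainty analysis. The model below is a
# placeholder for a coupled H&H-LCA evaluation; distributions are invented.
import numpy as np
from multiprocessing import Pool

def model(theta):
    runoff_coeff, gwp_factor = theta
    peak_runoff = 100.0 * runoff_coeff          # placeholder H&H response
    life_cycle_gwp = 5.0e4 * gwp_factor         # placeholder LCA impact
    return peak_runoff, life_cycle_gwp

def sample(seed):
    rng = np.random.default_rng(seed)
    theta = (rng.uniform(0.3, 0.9), rng.lognormal(0.0, 0.2))
    return model(theta)

if __name__ == "__main__":
    with Pool() as pool:                         # one worker per CPU core
        results = np.array(pool.map(sample, range(10000)))
    lo, hi = np.percentile(results, [5, 95], axis=0)
    print("5-95% peak runoff   :", lo[0], "-", hi[0])
    print("5-95% life-cycle GWP:", lo[1], "-", hi[1])
```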

  11. Efficient species-level monitoring at the landscape scale

    Treesearch

    Barry R. Noon; Larissa L. Bailey; Thomas D. Sisk; Kevin S. McKelvey

    2012-01-01

    Monitoring the population trends of multiple animal species at a landscape scale is prohibitively expensive. However, advances in survey design, statistical methods, and the ability to estimate species presence on the basis of detection­nondetection data have greatly increased the feasibility of species-level monitoring. For example, recent advances in monitoring make...

  12. Newton Methods for Large Scale Problems in Machine Learning

    ERIC Educational Resources Information Center

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…
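    For context, a bare-bones damped Newton iteration, the building block such algorithms adapt to large-scale machine-learning losses (where Hessian-vector products and subsampling replace the explicit Hessian); this is a generic sketch, not the thesis's methods.

```python
# Generic damped Newton iteration for minimizing a smooth function. At large
# scale one replaces the explicit Hessian solve with CG on Hessian-vector
# products; this sketch keeps the dense version for clarity.
import numpy as np

def newton_minimize(grad, hess, x0, steps=20, damping=1e-3):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        H = hess(x) + damping * np.eye(x.size)   # regularized Hessian
        x = x - np.linalg.solve(H, grad(x))      # full Newton step
    return x

# simple separable test problem: f(x) = sum(x_i^2 + x_i^4)
grad = lambda x: 2 * x + 4 * x**3
hess = lambda x: np.diag(2 + 12 * x**2)
print(newton_minimize(grad, hess, np.ones(5)))   # converges to the origin
```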

  13. A Confirmatory Factor Analysis of the Professional Opinion Scale

    ERIC Educational Resources Information Center

    Greeno, Elizabeth J.; Hughes, Anne K.; Hayward, R. Anna; Parker, Karen L.

    2007-01-01

    The Professional Opinion Scale (POS) was developed to measure social work values orientation. Objective: A confirmatory factor analysis was performed on the POS. Method: This cross-sectional study used a mailed survey design with a national random (simple) sample of members of the National Association of Social Workers. Results: The study…

  14. Disadvantages of the Horsfall-Barratt Scale for estimating severity of citrus canker

    USDA-ARS?s Scientific Manuscript database

    Direct visual estimation of disease severity to the nearest percent was compared to using the Horsfall-Barratt (H-B) scale. Data from a simulation model designed to sample two diseased populations were used to investigate the probability of the two methods to reject a null hypothesis (H0) using a t-...

  15. Societal Factors Impacting Child Welfare: Validating the Perceptions of Child Welfare Scale

    ERIC Educational Resources Information Center

    Auerbach, Charles; Zeitlin, Wendy; Augsberger, Astraea; McGowan, Brenda G.; Claiborne, Nancy; Lawrence, Catherine K.

    2015-01-01

    Objective: This research examines the psychometric properties of the Perceptions of Child Welfare Scale (PCWS). This instrument is designed to assess child welfare workers' understanding of how society views their role and their work. Methods: Confirmatory factor analysis (CFA) was utilized to analyze data on 538 child welfare workers. Results:…

  16. The Children's Perceived Locus of Causality Scale for Physical Education

    ERIC Educational Resources Information Center

    Pannekoek, Linda; Piek, Jan P.; Hagger, Martin S.

    2014-01-01

    A mixed methods design was applied to evaluate the application of the Perceived Locus of Causality scale (PLOC) to preadolescent samples in physical education settings. Subsequent to minor item adaptations to accommodate the assessment of younger samples, qualitative pilot tests were performed (N = 15). Children's reports indicated the need…

  17. Evaluation of ground motion scaling methods for analysis of structural systems

    USGS Publications Warehouse

    O'Donnell, A. P.; Beltsar, O.A.; Kurama, Y.C.; Kalkan, E.; Taflanidis, A.A.

    2011-01-01

    Ground motion selection and scaling is undoubtedly the most important component of any seismic risk assessment study that involves time-history analysis. Ironically, it is also the single parameter with the least guidance provided in current building codes, resulting in the use of mostly subjective choices in design. The relevant research to date has been primarily on single-degree-of-freedom systems, with only a few studies using multi-degree-of-freedom systems. Furthermore, the previous research is based solely on numerical simulations, with no experimental data available for the validation of the results. By contrast, the research effort described in this paper focuses on an experimental evaluation of selected ground motion scaling methods based on small-scale shake-table experiments of re-configurable linear-elastic and nonlinear multi-story building frame structure models. Ultimately, the experimental results will lead to the development of guidelines and procedures to achieve reliable demand estimates from nonlinear response history analysis in seismic design. In this paper, an overview of this research effort is discussed and preliminary results based on linear-elastic dynamic response are presented. © ASCE 2011.

  18. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

    PubMed

    Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.
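    A toy version of the clustering step (drastically simplified relative to the paper): a genetic algorithm searches cluster assignments of adjectives so that the total within-cluster link weight of a small random NDSM is maximized.

```python
# Toy GA clustering of a numerical design structure matrix (NDSM). The
# problem size, operators, and fitness are simplified for illustration.
import random

random.seed(0)
N, K = 12, 3                                   # adjectives, clusters
ndsm = [[0] * N for _ in range(N)]             # four-point-scale link weights
for i in range(N):
    for j in range(i + 1, N):
        ndsm[i][j] = ndsm[j][i] = random.randint(0, 3)

def fitness(assign):                           # total within-cluster weight
    return sum(ndsm[i][j] for i in range(N) for j in range(i + 1, N)
               if assign[i] == assign[j])

def mutate(assign):
    child = assign[:]
    child[random.randrange(N)] = random.randrange(K)
    return child

def crossover(a, b):
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:]

pop = [[random.randrange(K) for _ in range(N)] for _ in range(60)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)        # elitist selection
    elite = pop[:20]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(40)]
best = max(pop, key=fitness)
print("best within-cluster weight:", fitness(best), "assignment:", best)
```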

  20. New knowledge network evaluation method for design rationale management

    NASA Astrophysics Data System (ADS)

    Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao

    2015-01-01

    Current design rationale (DR) systems have not demonstrated the value of the approach in practice, since little attention has been paid to methods for evaluating DR knowledge. To systematize the knowledge management process for future computer-aided DR applications, a prerequisite is a measure for DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives, namely, design rationale structure scale, association knowledge and reasoning ability, degree of design justification support, and degree of knowledge representation conciseness. The comprehensive value of DR knowledge is also measured by the proposed method. To validate the proposed method, different styles of DR knowledge network and the performance of the proposed measure are discussed. The evaluation method has been applied in two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method that provides an objective metric and a selection basis for DR knowledge reuse during the product design process. In addition, the method is shown to provide more effective guidance and support for the application and management of DR knowledge.

  1. Design and Analysis of Subscale and Full-Scale Buckling-Critical Cylinders for Launch Vehicle Technology Development

    NASA Technical Reports Server (NTRS)

    Hilburger, Mark W.; Lovejoy, Andrew E.; Thornburgh, Robert P.; Rankin, Charles

    2012-01-01

    NASA's Shell Buckling Knockdown Factor (SBKF) project has the goal of developing new analysis-based shell buckling design factors (knockdown factors) and design and analysis technologies for launch vehicle structures. Preliminary design studies indicate that implementation of these new knockdown factors can enable significant reductions in mass and mass-growth in these vehicles. However, in order to validate any new analysis-based design data or methods, a series of carefully designed and executed structural tests are required at both the subscale and full-scale levels. This paper describes the design and analysis of three different orthogrid-stiffened metallic cylindrical-shell test articles. Two of the test articles are 8-ft-diameter, 6-ft-long test articles, and one test article is a 27.5-ft-diameter, 20-ft-long Space Shuttle External Tank-derived test article.

  2. Acoustic Treatment Design Scaling Methods. Volume 2; Advanced Treatment Impedance Models for High Frequency Ranges

    NASA Technical Reports Server (NTRS)

    Kraft, R. E.; Yu, J.; Kwan, H. W.

    1999-01-01

    The primary purpose of this study is to develop improved models for the acoustic impedance of treatment panels at high frequencies, for application to subscale treatment designs. Effects that cause significant deviation of the impedance from simple geometric scaling are examined in detail, an improved high-frequency impedance model is developed, and the improved model is correlated with high-frequency impedance measurements. Only single-degree-of-freedom honeycomb sandwich resonator panels with either perforated sheet or "linear" wiremesh faceplates are considered. The objective is to understand those effects that cause the simple single-degree-of-freedom resonator panels to deviate at the higher-scaled frequency from the impedance that would be obtained at the corresponding full-scale frequency. This will allow the subscale panel to be designed to achieve a specified impedance spectrum over at least a limited range of frequencies. An advanced impedance prediction model has been developed that accounts for some of the known effects at high frequency that have previously been ignored as a small source of error for full-scale frequency ranges.
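    As a point of reference, the basic single-degree-of-freedom liner impedance model that such advanced models refine can be sketched with textbook approximations: perforate mass reactance plus honeycomb cavity reactance and an assumed constant resistance. The formulas and geometry below are generic illustrations, not the report's high-frequency model.

```python
# Hedged sketch of a basic SDOF liner impedance model: textbook perforate
# and cavity terms with an assumed frequency-independent resistance. The
# advanced model in the report corrects these terms at high frequency.
import numpy as np

c = 343.0                       # m/s, speed of sound
def sdof_impedance(f, sigma=0.08, t=0.001, d=0.001, L=0.019, R=1.5):
    """Normalized impedance z = Z/(rho*c) of a perforate-over-honeycomb panel.
    sigma: open-area ratio, t: faceplate thickness, d: hole diameter,
    L: cavity depth, R: assumed normalized resistance (all illustrative).
    """
    k = 2 * np.pi * f / c
    end_corr = 0.85 * d * (1 - 0.7 * np.sqrt(sigma))   # common end correction
    x_mass = k * (t + end_corr) / sigma                # perforate reactance
    x_cav = -1.0 / np.tan(k * L)                       # cavity reactance
    return R + 1j * (x_mass + x_cav)

for f in (1000, 3000, 6000, 12000):
    z = sdof_impedance(f)
    print(f"{f:6d} Hz   z = {z.real:.2f} {z.imag:+.2f}j")
# Resonance sits where the mass reactance cancels cot(kL); geometric scaling
# shifts it, but high-frequency effects alter R and the end correction, which
# is what an advanced model must capture.
```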

  3. Food powders flowability characterization: theory, methods, and applications.

    PubMed

    Juliano, Pablo; Barbosa-Cánovas, Gustavo V

    2010-01-01

    Characterization of food powders flowability is required for predicting powder flow from hoppers in small-scale systems such as vending machines or at the industrial scale from storage silos or bins dispensing into powder mixing systems or packaging machines. This review covers conventional and new methods used to measure flowability in food powders. The method developed by Jenike (1964) for determining hopper outlet diameter and hopper angle has become a standard for the design of bins and is regarded as a standard method to characterize flowability. Moreover, there are a number of shear cells that can be used to determine failure properties defined by Jenike's theory. Other classic methods (compression, angle of repose) and nonconventional methods (Hall flowmeter, Johanson Indicizer, Hosokawa powder tester, tensile strength tester, powder rheometer), used mainly for the characterization of food powder cohesiveness, are described. The effect of some factors preventing flow, such as water content, temperature, time consolidation, particle composition and size distribution, is summarized for the characterization of specific food powders with conventional and other methods. Whereas the time-consuming standard methods established for hopper design provide flow properties, there is as yet little comparative evidence demonstrating that the more rapid methods provide similar flow predictions.

  4. Vibration Response Predictions for Heavy Panel Mounted Components from Panel Acreage Environment Specifications

    NASA Technical Reports Server (NTRS)

    Harrison, Phillip; Frady, Greg; Duvall, Lowery; Fulcher, Clay; LaVerde, Bruce

    2010-01-01

    The development of new launch vehicles in the Aerospace industry often relies on response measurements taken from previously developed vehicles during various stages of liftoff and ascent, and from wind tunnel models. These measurements include sound pressure levels, dynamic pressures in turbulent boundary layers and accelerations. Rigorous statistical scaling methods are applied to the data to derive new environments and estimate the performance of new skin panel structures. Scaling methods have proven to be reliable, particularly for designs similar to the vehicles used as the basis for scaling, and especially in regions of smooth acreage without exterior protuberances or heavy components mounted to the panel. To account for response attenuation of a panel-mounted component due to its apparent mass at higher frequencies, the vibroacoustics engineer often reduces the acreage vibration according to a weight ratio first suggested by Barrett. The accuracy of the reduction is reduced with increased weight of the panel-mounted component, and does not account for low-frequency amplification of the component/panel response as a system. A method is proposed that combines acreage vibration from scaling methods with finite element analysis to account for the frequency-dependent dynamics of heavy panel-mounted components. Since the acreage and mass-loaded skins respond to the same dynamic input pressure, such pressure may be eliminated in favor of a frequency-dependent scaling function applied to the acreage vibration to predict the mass-loaded panel response. The scaling function replaces the Barrett weight ratio, and contains all of the dynamic character of the loaded and unloaded skin panels. The solution simplifies for spatially uncorrelated and fully correlated input pressures. Since the prediction uses finite element models of the loaded and unloaded skins, a rich suite of response data are available to the design engineer, including interface forces, stress and strain, as well as acceleration and displacement. An extension of the method is also developed to incorporate the effect of a local protuberance near a heavy component. Acreage environments from traditional scaling methods with and without protuberance effects serve as the basis for the extension.

  5. Heuristic decomposition for non-hierarchic systems

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.; Hajela, P.

    1991-01-01

    Design and optimization is substantially more complex in multidisciplinary and large-scale engineering applications due to the existing inherently coupled interactions. The paper introduces a quasi-procedural methodology for multidisciplinary optimization that is applicable for nonhierarchic systems. The necessary decision-making support for the design process is provided by means of an embedded expert systems capability. The method employs a decomposition approach whose modularity allows for implementation of specialized methods for analysis and optimization within disciplines.

  6. Fabrication of self-rolling geodesic objects and photonic crystal tubes

    NASA Astrophysics Data System (ADS)

    Danescu, A.; Regreny, Ph; Cremillieu, P.; Leclercq, J.-L.

    2018-07-01

    This paper presents a stress engineering method that allows the design and fabrication of the analogs of single-wall nanotubes in the class of photonic crystals. The macroscopic shape of the final object is obtained through the stress relaxation of a pre-stressed multilayer planar design. We illustrate the extent of the proposed method by various single-layer and multilayer photonic crystals tubes and micron-scale objects with 5-fold symmetry.

  8. Truss Optimization for a Manned Nuclear Electric Space Vehicle using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Benford, Andrew; Tinker, Michael L.

    2004-01-01

    The purpose of this paper is to utilize the genetic algorithm (GA) optimization method for structural design of a nuclear propulsion vehicle. Genetic algorithms provide a guided, random search technique that mirrors biological adaptation. To verify the GA capabilities, other traditional optimization methods were used to generate results for comparison to the GA results, first for simple two-dimensional structures, and then for full-scale three-dimensional truss designs.

  9. Engineering design of sub-micron topographies for simultaneously adherent and reflective metal-polymer interfaces

    NASA Technical Reports Server (NTRS)

    Brown, Christopher A.

    1993-01-01

    The approach of the project is to base the design of multi-function, reflective topographies on the theory that topographically dependent phenomena react with surfaces and interfaces at certain scales. The first phase of the project emphasizes the development of methods for understanding the sizes of topographic features which influence reflectivity. Subsequent phases, if necessary, will address the scales of interaction for adhesion and manufacturing processes. A simulation of the interaction of electromagnetic radiation, or light, with a reflective surface is performed using specialized software. Reflectivity of the surface as a function of scale is evaluated and the results from the simulation are compared with reflectivity measurements made on multi-function, reflective surfaces.

  10. Design Based Research Methodology for Teaching with Technology in English

    ERIC Educational Resources Information Center

    Jetnikoff, Anita

    2015-01-01

    Design based research (DBR) is an appropriate method for small scale educational research projects involving collaboration between teachers, students and researchers. It is particularly useful in collaborative projects where an intervention is implemented and evaluated in a grounded context. The intervention can be technological, or a new program…

  11. Large-Scale, Three-Dimensional, Free-Standing, and Mesoporous Metal Oxide Networks for High-Performance Photocatalysis

    PubMed Central

    Bai, Hua; Li, Xinshi; Hu, Chao; Zhang, Xuan; Li, Junfang; Yan, Yan; Xi, Guangcheng

    2013-01-01

    Mesoporous nanostructures represent a unique class of photocatalysts with many applications, including splitting of water, degradation of organic contaminants, and reduction of carbon dioxide. In this work, we report a general Lewis acid catalytic template route for high-yield production of single- and multi-component large-scale three-dimensional (3D) mesoporous metal oxide networks. The large-scale 3D mesoporous metal oxide networks possess large macroscopic scale (millimeter-sized) and a mesoporous nanostructure with huge pore volume and large exposed surface area. This method can also be used for the synthesis of large-scale 3D macro/mesoporous hierarchical porous materials and noble-metal-nanoparticle-loaded 3D mesoporous networks. Photocatalytic degradation of azo dyes demonstrated that the large-scale 3D mesoporous metal oxide networks enable high photocatalytic activity. The present synthetic method can serve as a new design concept for functional 3D mesoporous nanomaterials. PMID:23857595

  12. A review of empirical research related to the use of small quantitative samples in clinical outcome scale development.

    PubMed

    Houts, Carrie R; Edwards, Michael C; Wirth, R J; Deal, Linda S

    2016-11-01

    There has been a notable increase in the advocacy of using small-sample designs as an initial quantitative assessment of item and scale performance during the scale development process. This is particularly true in the development of clinical outcome assessments (COAs), where Rasch analysis has been advanced as an appropriate statistical tool for evaluating the developing COAs using a small sample. We review the benefits such methods are purported to offer from both a practical and statistical standpoint and detail several problematic areas, including both practical and statistical theory concerns, with respect to the use of quantitative methods, including Rasch-consistent methods, with small samples. The feasibility of obtaining accurate information and the potential negative impacts of misusing large-sample statistical methods with small samples during COA development are discussed.

  13. Rank Determination of Mental Functions by 1D Wavelets and Partial Correlation.

    PubMed

    Karaca, Y; Aslan, Z; Cattani, C; Galletta, D; Zhang, Y

    2017-01-01

    The main aim of this paper is to classify mental functions measured by the Wechsler Adult Intelligence Scale-Revised tests with a mixed method based on wavelets and partial correlation. The Wechsler Adult Intelligence Scale-Revised is a widely used test designed and applied for the comprehensive classification of adults' cognitive skills. In this paper, many different intellectual profiles have been taken into consideration to measure the relationship between mental functioning and psychological disorder. We propose a method based on wavelets and correlation analysis for classifying mental functioning through the analysis of selected parameters measured by the Wechsler Adult Intelligence Scale-Revised tests. In particular, 1-D Continuous Wavelet Analysis, the 1-D Wavelet Coefficient Method and the Partial Correlation Method have been applied to Wechsler Adult Intelligence Scale-Revised parameters such as School Education, Gender, Age, Performance Information Verbal and Full Scale Intelligence Quotient. We show that the gender variable has a negative but significant role on the age and Performance Information Verbal factors. The age parameter also has a significant role in changes in Performance Information Verbal and Full Scale Intelligence Quotient.
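    The partial-correlation step can be sketched directly from least-squares residuals (synthetic stand-in data below, not the WAIS-R dataset):

```python
# Minimal sketch of partial correlation: correlate two variables after
# regressing out a control variable. Data are synthetic stand-ins for
# WAIS-R scores, not the paper's dataset.
import numpy as np

def partial_corr(x, y, control):
    Z = np.column_stack([np.ones_like(control), control])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residuals of x | Z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]   # residuals of y | Z
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
age = rng.uniform(18, 70, 200)
piv = 110 - 0.3 * age + rng.normal(0, 5, 200)    # "Performance IV" stand-in
fsiq = 0.8 * piv + rng.normal(0, 4, 200)         # "Full Scale IQ" stand-in
print("r(PIV, FSIQ | age) ≈", round(partial_corr(piv, fsiq, age), 3))
```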

  14. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent in large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to this class are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification of the different existing approaches to dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed differ somewhat from those reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems such as large space structures. Some recent developments are added to this survey.

  15. Molecular Precision at Micrometer Length Scales: Hierarchical Assembly of DNA-Protein Nanostructures.

    PubMed

    Schiffels, Daniel; Szalai, Veronika A; Liddle, J Alexander

    2017-07-25

    Robust self-assembly across length scales is a ubiquitous feature of biological systems but remains challenging for synthetic structures. Taking a cue from biology, where disparate molecules work together to produce large, functional assemblies, we demonstrate how to engineer microscale structures with nanoscale features: Our self-assembly approach begins by using DNA polymerase to controllably create double-stranded DNA (dsDNA) sections on a single-stranded template. The single-stranded DNA (ssDNA) sections are then folded into a mechanically flexible skeleton by the origami method. This process simultaneously shapes the structure at the nanoscale and directs the large-scale geometry. The DNA skeleton guides the assembly of RecA protein filaments, which provides rigidity at the micrometer scale. We use our modular design strategy to assemble tetrahedral, rectangular, and linear shapes of defined dimensions. This method enables the robust construction of complex assemblies, greatly extending the range of DNA-based self-assembly methods.

  16. Spatial adaptive sampling in multiscale simulation

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.

    2014-07-01

    In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
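    A one-dimensional cartoon of spatial adaptive sampling (the "fine-scale model" is a cheap stand-in function, and the error indicator is a simple leave-one-out check rather than the paper's machinery):

```python
# 1-D cartoon of spatial adaptive sampling: run the expensive fine-scale
# model only where interpolating existing samples over the macro grid looks
# unreliable, judged here by a leave-one-out disagreement check.
import numpy as np

def fine_scale(x):                        # stand-in for an expensive simulation
    return np.sin(3 * x) + 0.3 * np.sin(17 * x)

grid = np.linspace(0.0, 2.0, 401)         # macro-solver grid points
xs = list(np.linspace(0.0, 2.0, 5))       # initial sparse samples (sorted)
ys = [fine_scale(x) for x in xs]
tol, n_calls = 0.02, len(xs)

for x in grid:
    if x in xs:
        continue
    y_all = np.interp(x, xs, ys)
    i = int(np.argmin(np.abs(np.asarray(xs) - x)))        # nearest sample
    y_loo = np.interp(x, xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
    if abs(y_all - y_loo) > tol:          # interpolant judged unreliable here
        j = int(np.searchsorted(xs, x))
        xs.insert(j, x)
        ys.insert(j, fine_scale(x))
        n_calls += 1

print(f"fine-scale evaluations: {n_calls} of {grid.size} grid points")
```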

  17. Efficient implicit LES method for the simulation of turbulent cavitating flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egerer, Christian P., E-mail: christian.egerer@aer.mw.tum.de; Schmidt, Steffen J.; Hickel, Stefan

    2016-07-01

    We present a numerical method for efficient large-eddy simulation of compressible liquid flows with cavitation based on an implicit subgrid-scale model. Phase change and subgrid-scale interface structures are modeled by a homogeneous mixture model that assumes local thermodynamic equilibrium. Unlike previous approaches, emphasis is placed on operating on a small stencil (at most four cells). The truncation error of the discretization is designed to function as a physically consistent subgrid-scale model for turbulence. We formulate a sensor functional that detects shock waves or pseudo-phase boundaries within the homogeneous mixture model for localizing numerical dissipation. In smooth regions of the flow field, a formally non-dissipative central discretization scheme is used in combination with a regularization term to model the effect of unresolved subgrid scales. The new method is validated by computing standard single- and two-phase test cases. Comparison of results for a turbulent cavitating mixing layer obtained with the new method demonstrates its suitability for the target applications.

  18. Scaling of Two-Phase Flows to Partial-Earth Gravity

    NASA Technical Reports Server (NTRS)

    Hurlbert, Kathryn M.; Witte, Larry C.

    2003-01-01

    A report presents a method of scaling, to partial-Earth gravity, of parameters that describe pressure drops and other characteristics of two-phase (liquid/ vapor) flows. The development of the method was prompted by the need for a means of designing two-phase flow systems to operate on the Moon and on Mars, using fluid-properties and flow data from terrestrial two-phase-flow experiments, thus eliminating the need for partial-gravity testing. The report presents an explicit procedure for designing an Earth-based test bed that can provide hydrodynamic similarity with two-phase fluids flowing in partial-gravity systems. The procedure does not require prior knowledge of the flow regime (i.e., the spatial orientation of the phases). The method also provides for determination of pressure drops in two-phase partial-gravity flows by use of a generalization of the classical Moody chart (previously applicable to single-phase flow only). The report presents experimental data from Mars- and Moon-activity experiments that appear to demonstrate the validity of this method.
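    As a hedged illustration of gravitational similarity scaling (not the report's full flow-regime-independent procedure), matching the Froude number v^2/(gD) between a partial-gravity flow and an Earth-based rig fixes the required test velocity:

```python
# Illustrative Froude-number matching between a partial-gravity flow and an
# Earth-based test rig. This is one similarity parameter among several the
# report's procedure would consider; it is shown here only for flavor.
import math

def earth_test_velocity(v_partial, g_partial, g_earth=9.81, d_ratio=1.0):
    """Velocity for an Earth test matching Fr = v^2/(g*D) of a
    partial-gravity flow; d_ratio = D_earth / D_partial for a resized rig."""
    return v_partial * math.sqrt(g_earth * d_ratio / g_partial)

g_mars, g_moon = 3.71, 1.62
print("Mars 0.5 m/s ->", round(earth_test_velocity(0.5, g_mars), 3), "m/s on Earth")
print("Moon 0.5 m/s ->", round(earth_test_velocity(0.5, g_moon), 3), "m/s on Earth")
```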

  19. Repeatability of riparian vegetation sampling methods: how useful are these techniques for broad-scale, long-term monitoring?

    Treesearch

    Marc C. Coles-Ritchie; Richard C. Henderson; Eric K. Archer; Caroline Kennedy; Jeffrey L. Kershner

    2004-01-01

    Tests were conducted to evaluate variability among observers for riparian vegetation data collection methods and data reduction techniques. The methods are used as part of a large-scale monitoring program designed to detect changes in riparian resource conditions on Federal lands. Methods were evaluated using agreement matrices, the Bray-Curtis dissimilarity metric, the...

  20. Parameter Studies, time-dependent simulations and design with automated Cartesian methods

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael

    2005-01-01

    Over the past decade, NASA has made a substantial investment in developing adaptive Cartesian grid methods for aerodynamic simulation. Cartesian-based methods played a key role in both the Space Shuttle Accident Investigation and in NASA's return-to-flight activities. The talk will provide an overview of recent technological developments, focusing on the generation of large-scale aerodynamic databases, automated CAD-based design, and time-dependent simulations of bodies in relative motion. Automation, scalability and robustness underlie all of these applications, and research in each of these topics will be presented.

  1. Design of distributed PID-type dynamic matrix controller for fractional-order systems

    NASA Astrophysics Data System (ADS)

    Wang, Dawei; Zhang, Ridong

    2018-01-01

    With continually increasing requirements for product quality and safe operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations, whereas fractional differential equations may precisely represent the intrinsic characteristics of such systems. In this paper, a distributed PID-type dynamic matrix control method based on fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained by utilising the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation for multivariable processes is transformed into the optimisation of each small-scale subsystem, which is regarded as a sub-plant controlled in the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem and the fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. The information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
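    The first step can be sketched with the standard Oustaloup recursive approximation of s^r over a frequency band, from which step-response coefficients for the DMC layer follow; the band, order, and example plant below are assumptions for illustration, not the paper's settings.

```python
# Sketch of the Oustaloup recursive filter: an integer-order zero-pole
# approximation of s^r on [wb, wh], from which step-response model vectors
# for dynamic matrix control can be generated. Band/order/plant are assumed.
import numpy as np
from scipy import signal

def oustaloup(r, wb=1e-2, wh=1e2, N=4):
    """Approximate s^r (0 < |r| < 1) on [wb, wh] rad/s with 2N+1 zero-pole pairs."""
    k = np.arange(-N, N + 1)
    wz = wb * (wh / wb) ** ((k + N + 0.5 * (1 - r)) / (2 * N + 1))  # zeros
    wp = wb * (wh / wb) ** ((k + N + 0.5 * (1 + r)) / (2 * N + 1))  # poles
    return signal.ZerosPolesGain(-wz, -wp, wh ** r)

# step-response coefficients for the example plant G(s) = 1 / (s^0.5 + 1):
half = oustaloup(0.5).to_tf()                 # rational approximation of s^0.5
G = signal.TransferFunction(half.den,
                            np.polyadd(half.num, half.den))  # 1/(s^0.5 + 1)
t, y = signal.step(G, T=np.linspace(0.0, 20.0, 400))
print("first step-response coefficients:", np.round(y[:5], 4))
```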

  2. Helicopter rotor and engine sizing for preliminary performance estimation

    NASA Technical Reports Server (NTRS)

    Talbot, P. D.; Bowles, J. V.; Lee, H. C.

    1986-01-01

    Methods are presented for estimating some of the more fundamental design variables of single-rotor helicopters (tip speed, blade area, disk loading, and installed power) based on design requirements (speed, weight, fuselage drag, and design hover ceiling). The well-known constraints of advancing-blade compressibility and retreating-blade stall are incorporated into the estimation process, based on an empirical interpretation of rotor performance data from large-scale wind-tunnel tests. Engine performance data are presented and correlated with a simple model usable for preliminary design. When approximate results are required quickly, these methods may be more convenient to use and provide more insight than large digital computer programs.
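    The flavor of such fundamental estimates: hover power from momentum theory with an assumed figure of merit, given gross weight and disk loading (a generic back-of-envelope sketch, not the paper's empirical correlations).

```python
# Back-of-envelope hover power from momentum theory. The figure of merit and
# example numbers are illustrative assumptions.
import math

def hover_power_kw(gross_weight_kg, disk_loading_kg_m2,
                   rho=1.225, figure_of_merit=0.7):
    T = gross_weight_kg * 9.81                    # hover thrust = weight, N
    A = gross_weight_kg / disk_loading_kg_m2      # rotor disk area, m^2
    p_ideal = T * math.sqrt(T / (2.0 * rho * A))  # ideal induced power, W
    return p_ideal / figure_of_merit / 1000.0

# e.g. a 2000 kg helicopter at a disk loading of 25 kg/m^2
print(f"installed hover power ≈ {hover_power_kw(2000, 25):.0f} kW")
```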

  3. Simultaneous Synthesis of Treatment Effects and Mapping to a Common Scale: An Alternative to Standardisation

    ERIC Educational Resources Information Center

    Ades, A. E.; Lu, Guobing; Dias, Sofia; Mayo-Wilson, Evan; Kounali, Daphne

    2015-01-01

    Objective: Trials often may report several similar outcomes measured on different test instruments. We explored a method for synthesising treatment effect information both within and between trials and for reporting treatment effects on a common scale as an alternative to standardisation Study design: We applied a procedure that simultaneously…

  4. Scaling up STEM Academies Statewide: Implementation, Network Supports, and Early Outcomes

    ERIC Educational Resources Information Center

    Young, Viki; House, Ann; Sherer, David; Singleton, Corinne; Wang, Haiwen; Klopfenstein, Kristin

    2016-01-01

    This chapter presents a case study of scaling up the T-STEM initiative in Texas. Data come from the four-year longitudinal evaluation of the Texas High School Project (THSP). The evaluation studied the implementation and impact of T-STEM and the other THSP reforms using a mixed-methods design, including qualitative case studies; principal,…

  5. Status of JUPITER Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inoue, T.; Shirakata, K.; Kinjo, K.

    To obtain the data necessary for evaluating the nuclear design method of a large-scale fast breeder reactor, criticality tests with a large-scale homogeneous reactor were conducted as part of a joint research program by Japan and the U.S. Analyses of the tests are underway in both countries. The purpose of this paper is to describe the status of this project.

  6. A practical approach for the scale-up of roller compaction process.

    PubMed

    Shi, Weixian; Sprockel, Omar L

    2016-09-01

    An alternative approach for the scale-up of ribbon formation during roller compaction was investigated, requiring only one batch at the commercial scale to set the operational conditions. The scale-up of ribbon formation was based on a probability method, which was sufficient to describe the mechanism of ribbon formation at both scales. In this method, a statistical relationship between roller compaction parameters and ribbon attributes (thickness and density) was first defined with a DoE using a pilot Alexanderwerk WP120 roller compactor. While the milling speed was included in the design, it had no practical effect on granule properties within the study range despite its statistical significance. The statistical relationship was then adapted to a commercial Alexanderwerk WP200 roller compactor with one experimental run, which served as a calibration of the statistical model parameters. The proposed transfer method was then confirmed by conducting a mapping study on the Alexanderwerk WP200 using a factorial DoE, which showed a match between the predictions and the verification experiments. The study demonstrates the applicability of the roller compaction transfer method using the statistical model from the development scale calibrated with one experimental point at the commercial scale.
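    The transfer idea can be sketched as re-estimating only the intercept of the pilot-scale DoE model from a single commercial batch; all coefficients and data below are invented for illustration.

```python
# Sketch of calibrating a pilot-scale DoE model with one commercial batch:
# the slopes are kept and only the intercept is re-estimated. Coefficients
# and measurements are invented placeholders.
b0, b1, b2 = 0.62, 0.025, -0.040        # pilot fit: density = b0 + b1*F + b2*g

def predict(force_kn_cm, gap_mm, intercept=b0):
    return intercept + b1 * force_kn_cm + b2 * gap_mm

# one calibration batch on the commercial-scale unit:
force_c, gap_c, density_measured = 6.0, 2.5, 0.70
b0_commercial = density_measured - (b1 * force_c + b2 * gap_c)

print("pilot prediction   :", round(predict(force_c, gap_c), 3))
print("recalibrated model :", round(predict(force_c, gap_c, b0_commercial), 3))
```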

  7. Applications of mixed-methods methodology in clinical pharmacy research.

    PubMed

    Hadi, Muhammad Abdul; Closs, S José

    2016-06-01

    Introduction: Mixed-methods methodology, as the name suggests, refers to mixing elements of both qualitative and quantitative methodologies in a single study. In the past decade, mixed-methods methodology has gained popularity among healthcare researchers as it promises to bring together the strengths of both qualitative and quantitative approaches. Methodology: A number of mixed-methods designs are available in the literature, and the four most commonly used designs in healthcare research are: the convergent parallel design, the embedded design, the exploratory design, and the explanatory design. Each has its own unique advantages, challenges and procedures, and selection of a particular design should be guided by the research question. Guidance on designing, conducting and reporting mixed-methods research is available in the literature, so it is advisable to adhere to this to ensure methodological rigour. When to use: Mixed methods are best suited when the research questions require triangulating findings from different methodologies to explain a single phenomenon; clarifying the results of one method using another method; informing the design of one method based on the findings of another method; developing a scale/questionnaire; or answering different research questions within a single study. Two case studies are presented to illustrate possible applications of mixed-methods methodology. Limitations: Possessing the necessary knowledge and skills to undertake qualitative and quantitative data collection, analysis, interpretation and integration remains the biggest challenge for researchers conducting mixed-methods studies. Sequential study designs are often time consuming, being in two (or more) phases, whereas concurrent study designs may require more than one data collector to collect both qualitative and quantitative data at the same time.

  8. Guidelines for Genome-Scale Analysis of Biological Rhythms.

    PubMed

    Hughes, Michael E; Abruzzi, Katherine C; Allada, Ravi; Anafi, Ron; Arpat, Alaaddin Bulak; Asher, Gad; Baldi, Pierre; de Bekker, Charissa; Bell-Pedersen, Deborah; Blau, Justin; Brown, Steve; Ceriani, M Fernanda; Chen, Zheng; Chiu, Joanna C; Cox, Juergen; Crowell, Alexander M; DeBruyne, Jason P; Dijk, Derk-Jan; DiTacchio, Luciano; Doyle, Francis J; Duffield, Giles E; Dunlap, Jay C; Eckel-Mahan, Kristin; Esser, Karyn A; FitzGerald, Garret A; Forger, Daniel B; Francey, Lauren J; Fu, Ying-Hui; Gachon, Frédéric; Gatfield, David; de Goede, Paul; Golden, Susan S; Green, Carla; Harer, John; Harmer, Stacey; Haspel, Jeff; Hastings, Michael H; Herzel, Hanspeter; Herzog, Erik D; Hoffmann, Christy; Hong, Christian; Hughey, Jacob J; Hurley, Jennifer M; de la Iglesia, Horacio O; Johnson, Carl; Kay, Steve A; Koike, Nobuya; Kornacker, Karl; Kramer, Achim; Lamia, Katja; Leise, Tanya; Lewis, Scott A; Li, Jiajia; Li, Xiaodong; Liu, Andrew C; Loros, Jennifer J; Martino, Tami A; Menet, Jerome S; Merrow, Martha; Millar, Andrew J; Mockler, Todd; Naef, Felix; Nagoshi, Emi; Nitabach, Michael N; Olmedo, Maria; Nusinow, Dmitri A; Ptáček, Louis J; Rand, David; Reddy, Akhilesh B; Robles, Maria S; Roenneberg, Till; Rosbash, Michael; Ruben, Marc D; Rund, Samuel S C; Sancar, Aziz; Sassone-Corsi, Paolo; Sehgal, Amita; Sherrill-Mix, Scott; Skene, Debra J; Storch, Kai-Florian; Takahashi, Joseph S; Ueda, Hiroki R; Wang, Han; Weitz, Charles; Westermark, Pål O; Wijnen, Herman; Xu, Ying; Wu, Gang; Yoo, Seung-Hee; Young, Michael; Zhang, Eric Erquan; Zielinski, Tomasz; Hogenesch, John B

    2017-10-01

    Genome biology approaches have made enormous contributions to our understanding of biological rhythms, particularly in identifying outputs of the clock, including RNAs, proteins, and metabolites, whose abundance oscillates throughout the day. These methods hold significant promise for future discovery, particularly when combined with computational modeling. However, genome-scale experiments are costly and laborious, yielding "big data" that are conceptually and statistically difficult to analyze. There is no obvious consensus regarding design or analysis. Here we discuss the relevant technical considerations to generate reproducible, statistically sound, and broadly useful genome-scale data. Rather than suggest a set of rigid rules, we aim to codify principles by which investigators, reviewers, and readers of the primary literature can evaluate the suitability of different experimental designs for measuring different aspects of biological rhythms. We introduce CircaInSilico, a web-based application for generating synthetic genome biology data to benchmark statistical methods for studying biological rhythms. Finally, we discuss several unmet analytical needs, including applications to clinical medicine, and suggest productive avenues to address them.

  9. Guidelines for Genome-Scale Analysis of Biological Rhythms

    PubMed Central

    Hughes, Michael E.; Abruzzi, Katherine C.; Allada, Ravi; Anafi, Ron; Arpat, Alaaddin Bulak; Asher, Gad; Baldi, Pierre; de Bekker, Charissa; Bell-Pedersen, Deborah; Blau, Justin; Brown, Steve; Ceriani, M. Fernanda; Chen, Zheng; Chiu, Joanna C.; Cox, Juergen; Crowell, Alexander M.; DeBruyne, Jason P.; Dijk, Derk-Jan; DiTacchio, Luciano; Doyle, Francis J.; Duffield, Giles E.; Dunlap, Jay C.; Eckel-Mahan, Kristin; Esser, Karyn A.; FitzGerald, Garret A.; Forger, Daniel B.; Francey, Lauren J.; Fu, Ying-Hui; Gachon, Frédéric; Gatfield, David; de Goede, Paul; Golden, Susan S.; Green, Carla; Harer, John; Harmer, Stacey; Haspel, Jeff; Hastings, Michael H.; Herzel, Hanspeter; Herzog, Erik D.; Hoffmann, Christy; Hong, Christian; Hughey, Jacob J.; Hurley, Jennifer M.; de la Iglesia, Horacio O.; Johnson, Carl; Kay, Steve A.; Koike, Nobuya; Kornacker, Karl; Kramer, Achim; Lamia, Katja; Leise, Tanya; Lewis, Scott A.; Li, Jiajia; Li, Xiaodong; Liu, Andrew C.; Loros, Jennifer J.; Martino, Tami A.; Menet, Jerome S.; Merrow, Martha; Millar, Andrew J.; Mockler, Todd; Naef, Felix; Nagoshi, Emi; Nitabach, Michael N.; Olmedo, Maria; Nusinow, Dmitri A.; Ptáček, Louis J.; Rand, David; Reddy, Akhilesh B.; Robles, Maria S.; Roenneberg, Till; Rosbash, Michael; Ruben, Marc D.; Rund, Samuel S.C.; Sancar, Aziz; Sassone-Corsi, Paolo; Sehgal, Amita; Sherrill-Mix, Scott; Skene, Debra J.; Storch, Kai-Florian; Takahashi, Joseph S.; Ueda, Hiroki R.; Wang, Han; Weitz, Charles; Westermark, Pål O.; Wijnen, Herman; Xu, Ying; Wu, Gang; Yoo, Seung-Hee; Young, Michael; Zhang, Eric Erquan; Zielinski, Tomasz; Hogenesch, John B.

    2017-01-01

    Genome biology approaches have made enormous contributions to our understanding of biological rhythms, particularly in identifying outputs of the clock, including RNAs, proteins, and metabolites, whose abundance oscillates throughout the day. These methods hold significant promise for future discovery, particularly when combined with computational modeling. However, genome-scale experiments are costly and laborious, yielding “big data” that are conceptually and statistically difficult to analyze. There is no obvious consensus regarding design or analysis. Here we discuss the relevant technical considerations to generate reproducible, statistically sound, and broadly useful genome-scale data. Rather than suggest a set of rigid rules, we aim to codify principles by which investigators, reviewers, and readers of the primary literature can evaluate the suitability of different experimental designs for measuring different aspects of biological rhythms. We introduce CircaInSilico, a web-based application for generating synthetic genome biology data to benchmark statistical methods for studying biological rhythms. Finally, we discuss several unmet analytical needs, including applications to clinical medicine, and suggest productive avenues to address them. PMID:29098954
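
    As a concrete illustration of the rhythm-detection statistics such guidelines cover, below is a minimal cosinor-style sketch (not from the paper, and independent of CircaInSilico): a 24-hour cosine fitted by linear least squares to synthetic abundance data. The sampling scheme, function names, and data are assumptions for illustration only.

    ```python
    import numpy as np

    def cosinor_fit(t, y, period=24.0):
        """Least-squares fit of y ~ mesor + A*cos(2*pi*t/period - phi),
        linearized as y ~ b0 + b1*cos(w t) + b2*sin(w t)."""
        w = 2.0 * np.pi / period
        X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        mesor, b1, b2 = beta
        amplitude = np.hypot(b1, b2)
        phase = np.arctan2(b2, b1)  # acrophase in radians
        return mesor, amplitude, phase

    # Synthetic example: samples every 2 h over 48 h, noisy 24-h rhythm.
    rng = np.random.default_rng(0)
    t = np.arange(0.0, 48.0, 2.0)
    y = 10 + 3 * np.cos(2 * np.pi * t / 24 - 1.0) + rng.normal(0, 0.5, t.size)
    mesor, amp, phase = cosinor_fit(t, y)
    print(f"mesor={mesor:.2f}, amplitude={amp:.2f}, phase={phase:.2f} rad")
    ```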

  10. Central composite design with the help of multivariate curve resolution in loadability optimization of RP-HPLC to scale-up a binary mixture.

    PubMed

    Taheri, Mohammadreza; Moazeni-Pourasil, Roudabeh Sadat; Sheikh-Olia-Lavasani, Majid; Karami, Ahmad; Ghassempour, Alireza

    2016-03-01

    Chromatographic method development for preparative targets is a time-consuming and subjective process. This is particularly problematic because valuable samples are consumed during isolation and large volumes of solvent are used at preparative scale. These processes can be improved with statistical computation to save time, solvent and experimental effort. Supported by ESI-MS, DryLab software was first applied to gain an overview of the parameters most effective in separating synthesized celecoxib from its co-eluted compounds; design-of-experiment software relying on multivariate modeling, a chemometric approach, was then used to predict the optimized touching-band overloading conditions through objective functions relating selectivity to stationary-phase properties. The loadability of the method was investigated at the analytical and semi-preparative scales, and the performance of this chemometric approach was confirmed by peak shapes as well as the recovery and purity of the products. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. A procedure for scaling sensory attributes based on multidimensional measurements: application to sensory sharpness of kitchen knives

    NASA Astrophysics Data System (ADS)

    Takatsuji, Toshiyuki; Tanaka, Ken-ichi

    1996-06-01

    A procedure is derived by which sensory attributes can be scaled as a function of various physical and/or chemical properties of the object to be tested. This procedure consists of four successive steps: (i) design of the experiment, (ii) fabrication of specimens according to the design parameters, (iii) assessment of a sensory attribute using sensory evaluation and (iv) derivation of the relationship between the parameters and the sensory attribute. Within these steps, an experimental design using orthogonal arrays, analysis of variance and regression analyses are used strategically. When a specimen with the design parameters cannot be physically fabricated, an alternative specimen having parameters closest to the design is selected from a group of specimens which can be physically made. The influence of the deviation of the actual parameters from the desired ones is also discussed, and a method of confirming the validity of the regression equation is investigated. The procedure is applied to scale the sensory sharpness of kitchen knives as a function of the edge angle and the roughness of the cutting edge.
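
    A minimal sketch of step (iv) under assumed data: regressing a panel's mean sharpness score on edge angle and edge roughness by ordinary least squares. All numbers below are hypothetical illustrations, not the paper's measurements.

    ```python
    import numpy as np

    # Hypothetical panel data: edge angle (deg), edge roughness (um),
    # and mean sensory sharpness score from the panel.
    angle = np.array([15.0, 15.0, 25.0, 25.0, 35.0, 35.0])
    rough = np.array([0.5, 2.0, 0.5, 2.0, 0.5, 2.0])
    sharpness = np.array([8.2, 7.1, 6.0, 5.2, 4.1, 3.0])

    # Step (iv): linear regression sharpness ~ b0 + b1*angle + b2*rough.
    X = np.column_stack([np.ones_like(angle), angle, rough])
    (b0, b1, b2), *_ = np.linalg.lstsq(X, sharpness, rcond=None)
    print(f"sharpness ~ {b0:.2f} + {b1:.3f}*angle + {b2:.3f}*roughness")
    ```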

  12. High Speed Civil Transport Design Using Collaborative Optimization and Approximate Models

    NASA Technical Reports Server (NTRS)

    Manning, Valerie Michelle

    1999-01-01

    The design of supersonic aircraft requires complex analysis in multiple disciplines, posing a challenge for optimization methods. In this thesis, collaborative optimization, a design architecture developed to solve large-scale multidisciplinary design problems, is applied to the design of supersonic transport concepts. Collaborative optimization takes advantage of natural disciplinary segmentation to facilitate parallel execution of design tasks. Discipline-specific design optimization proceeds while a coordinating mechanism ensures progress toward an optimum and compatibility between disciplinary designs. Two concepts for supersonic aircraft are investigated: a conventional delta-wing design and a natural laminar flow concept that achieves improved performance by exploiting properties of supersonic flow to delay boundary layer transition. The work involves the development of aerodynamic and structural analyses and their integration within a collaborative optimization framework. It represents the most extensive application of the method to date.

  13. Lidar Based Emissions Measurement at the Whole Facility Scale: Method and Error Analysis

    USDA-ARS?s Scientific Manuscript database

    Particulate emissions from agricultural sources vary from dust created by operations and animal movement to the fine secondary particulates generated from ammonia and other emitted gases. The development of reliable facility emission data using point sampling methods designed to characterize regiona...

  14. COMMUNITY-ORIENTED DESIGN AND EVALUATION PROCESS FOR SUSTAINABLE INFRASTRUCTURE

    EPA Science Inventory

    We met our first objective by completing the physical infrastructure of the La Fortuna-Tule water and sanitation project using the CODE-PSI method. This physical component of the project was important in providing a real, relevant, community-scale test case for the methods ...

  15. Multi-sensor image registration based on algebraic projective invariants.

    PubMed

    Li, Bin; Wang, Wei; Ye, Hao

    2013-04-22

    A new automatic feature-based registration algorithm is presented for multi-sensor images with projective deformation. Contours are first extracted from both the reference and sensed images as basic features. Since it is difficult to design a projective-invariant descriptor directly from the contour information, a new feature named Five Sequential Corners (FSC) is constructed from the corners detected on the extracted contours. By introducing algebraic projective invariants, we design a descriptor for each FSC that is guaranteed to be robust against projective deformation. Further, no gray-scale information is required to calculate the descriptor, so it is also robust against the gray-scale discrepancy between multi-sensor image pairs. Experimental results on real image pairs are presented to show the merits of the proposed registration method.
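
    For readers unfamiliar with algebraic projective invariants, the sketch below computes the two classical invariants of five coplanar points from ratios of 3x3 determinants of homogeneous coordinates, and checks that they survive a random projective transform. This is a generic textbook construction assumed for illustration; the paper's FSC descriptor builds on the same principle but is not reproduced here.

    ```python
    import numpy as np

    def det3(pts, i, j, k):
        """Determinant of the 3x3 matrix of homogeneous points i, j, k."""
        return np.linalg.det(np.column_stack([pts[i], pts[j], pts[k]]))

    def projective_invariants(pts):
        """Two independent algebraic projective invariants of five coplanar
        points (homogeneous 3-vectors). Each point index appears equally
        often in numerator and denominator, so unknown scales cancel."""
        m = lambda i, j, k: det3(pts, i, j, k)
        i1 = (m(3, 2, 0) * m(4, 1, 0)) / (m(3, 1, 0) * m(4, 2, 0))
        i2 = (m(3, 1, 0) * m(4, 2, 1)) / (m(3, 2, 1) * m(4, 1, 0))
        return i1, i2

    # Check invariance under a random projective transform H.
    rng = np.random.default_rng(1)
    pts = [np.append(rng.uniform(0, 10, 2), 1.0) for _ in range(5)]
    H = rng.uniform(-1, 1, (3, 3))
    pts_h = [H @ p for p in pts]
    print(projective_invariants(pts))
    print(projective_invariants(pts_h))  # same values up to rounding
    ```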

  16. A Two-Time Scale Decentralized Model Predictive Controller Based on Input and Output Model

    PubMed Central

    Niu, Jian; Zhao, Jun; Xu, Zuhua; Qian, Jixin

    2009-01-01

    A decentralized model predictive controller applicable to systems that exhibit different dynamic characteristics in different channels is presented in this paper. Such systems can be regarded as combinations of a fast model and a slow model whose response speeds lie on two time scales. Because most practical models used for control are obtained as transfer function matrices from plant tests, a singular perturbation method was first used to separate the original transfer function matrix into two models on the two time scales. A decentralized model predictive controller was then designed based on the two models derived from the original system, and the stability of the control method was proved. Simulations showed that the method was effective. PMID:19834542

  17. Description of the US Army small-scale 2-meter rotor test system

    NASA Technical Reports Server (NTRS)

    Phelps, Arthur E., III; Berry, John D.

    1987-01-01

    A small-scale powered rotor model was designed for use as a research tool in the exploratory testing of rotors and helicopter models. The model, which consists of a 29 hp rotor drive system, a four-blade fully articulated rotor, and a fuselage, was designed to be simple to operate and maintain in wind tunnels of moderate size and complexity. Two six-component strain-gauge balances are used to provide independent measurement of the rotor and fuselage aerodynamic loads. Commercially available standardized hardware and equipment were used to the maximum extent possible, and specialized parts were designed so that they could be fabricated by normal methods without using highly specialized tooling. The model was used in a hover test of three rotors having different planforms and in a forward flight investigation of a 21-percent-scale model of a U.S. Army scout helicopter equipped with a mast-mounted sight.

  18. A quantitative approach to evaluating caring in nursing simulation.

    PubMed

    Eggenberger, Terry L; Keller, Kathryn B; Chase, Susan K; Payne, Linda

    2012-01-01

    This study was designed to test a quantitative method of measuring caring in the simulated environment. Since competency in caring is central to nursing practice, ways of including caring concepts in designing scenarios and in evaluation of performance need to be developed. Coates' Caring Efficacy scales were adapted for simulation and named the Caring Efficacy Scale-Simulation Student Version (CES-SSV) and Caring Efficacy Scale-Simulation Faculty Version (CES-SFV). A correlational study was designed to compare student self-ratings with faculty ratings on caring efficacy during an adult acute simulation experience with traditional and accelerated baccalaureate students in a nursing program grounded in caring theory. Student self-ratings were significantly correlated with objective ratings (r = 0.345, 0.356). Both the CES-SSV and the CES-SFV were found to have excellent internal consistency and significantly correlated interrater reliability. They were useful in measuring caring in the simulated learning environment.

  19. The PedsQL Multidimensional Fatigue Scale in pediatric rheumatology: reliability and validity.

    PubMed

    Varni, James W; Burwinkle, Tasha M; Szer, Ilona S

    2004-12-01

    The PedsQL (Pediatric Quality of Life Inventory) is a modular instrument designed to measure health-related quality of life (HRQOL) in children and adolescents ages 2-18 years. The recently developed 18-item PedsQL Multidimensional Fatigue Scale was designed to measure fatigue in pediatric patients and comprises the General Fatigue Scale (6 items), Sleep/Rest Fatigue Scale (6 items), and Cognitive Fatigue Scale (6 items). The PedsQL 4.0 Generic Core Scales were developed as the generic core measure to be integrated with the PedsQL Disease-Specific Modules. The PedsQL 3.0 Rheumatology Module was designed to measure pediatric rheumatology-specific HRQOL. Methods. The PedsQL Multidimensional Fatigue Scale, Generic Core Scales, and Rheumatology Module were administered to 163 children and 154 parents (183 families accrued overall) recruited from a pediatric rheumatology clinic. Results. Internal consistency reliability for the PedsQL Multidimensional Fatigue Scale Total Score (alpha = 0.95 child, 0.95 parent report), General Fatigue Scale (alpha = 0.93 child, 0.92 parent), Sleep/Rest Fatigue Scale (alpha = 0.88 child, 0.90 parent), and Cognitive Fatigue Scale (alpha = 0.93 child, 0.96 parent) was excellent for group and individual comparisons. The validity of the PedsQL Multidimensional Fatigue Scale was confirmed through hypothesized intercorrelations with dimensions of generic and rheumatology-specific HRQOL. The PedsQL Multidimensional Fatigue Scale distinguished between healthy children and children with rheumatic diseases as a group, and was associated with greater disease severity. Children with fibromyalgia manifested greater fatigue than children with other rheumatic diseases. The results confirm the initial reliability and validity of the PedsQL Multidimensional Fatigue Scale in pediatric rheumatology.
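
    The internal-consistency figures above are Cronbach's alpha values. A minimal sketch of the standard computation follows; the item scores are invented for illustration.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_items) score matrix:
        alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    # Hypothetical responses: 8 children x 6 items on a 0-4 scale.
    scores = np.array([[4, 4, 3, 4, 4, 3], [2, 1, 2, 2, 1, 2],
                       [3, 3, 3, 2, 3, 3], [0, 1, 0, 1, 0, 1],
                       [4, 3, 4, 4, 3, 4], [1, 1, 2, 1, 1, 1],
                       [3, 2, 3, 3, 2, 2], [2, 2, 1, 2, 2, 2]])
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```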

  20. Aeroacoustic prediction of turbulent free shear flows

    NASA Astrophysics Data System (ADS)

    Bodony, Daniel Joseph

    2005-12-01

    For many people living in the immediate vicinity of an active airport the noise of jet aircraft flying overhead can be a nuisance, if not worse. Airports, which are held accountable for the noise they produce, and upcoming international noise limits are pressuring the major airframe and jet engine manufacturers to bring quieter aircraft into service. However, component designers need a predictive tool that can estimate the sound generated by a new configuration. Current noise prediction techniques are almost entirely based on previously collected experimental data and are applicable only to evolutionary, not revolutionary, changes in the basic design. Physical models of final candidate designs must still be built and tested before a single design is selected. By focusing on the noise produced in the jet engine exhaust at take-off conditions, the prediction of sound generated by turbulent flows is addressed. The technique of large-eddy simulation is used to calculate directly the radiated sound produced by jets at different operating conditions. Predicted noise spectra agree with measurements for frequencies up to, and slightly beyond, the peak frequency. Higher frequencies are missed, however, due to the limited resolution of the simulations. Two methods of estimating the 'missing' noise are discussed. In the first a subgrid scale noise model, analogous to a subgrid scale closure model, is proposed. In the second method the governing equations are expressed in a wavelet basis from which simplified time-dependent equations for the subgrid scale fluctuations can be derived. These equations are inexpensively integrated to yield estimates of the subgrid scale fluctuations with proper space-time dynamics.

  1. Evaluating the Cognition, Behavior, and Social Profile of an Adolescent With Learning Disabilities and Assessing the Effectiveness of an Individualized Educational Program

    PubMed Central

    Tabitha Louis, Preeti; Arnold Emerson, Isaac

    2014-01-01

    Objective: The present study seeks to outline a holistic assessment method that was used in understanding the problems experienced by an adolescent boy and in designing and implementing an individualized educational program. Methods: An adolescent referred for concerns in learning was screened for learning disability using standardized inventories and test batteries. The Connors Parent and Teacher Rating Scales (short forms), Wechsler's Intelligence Scale for Children (WISC), the Vineland Social Maturity Scale (VSMS), and the Kinetic Family Drawing (KFD) test were used to assess the behavior, cognition, and social profile of the child. An individualized educational program was designed, and this intervention was provided for 6 months using parents as co-therapists. Participant and parent interview schedules were used to identify underlying issues of concern. The child was reassessed 6 months after the intervention was provided. Results: Findings on the Connors Parent Rating Scale revealed scores greater than the 50th percentile on the domains of inattention and cognitive problems. On the Connors Teacher Rating Scale, we observed scores greater than the 50th percentile on the hyperactivity, cognitive problems, and inattention domains. The WISC revealed "Dull Normal" intellectual functioning, and there was also a deficit of 2 years in social skills as tested by the VSMS. The Kinetic Family Drawing test revealed negative emotions within the child. Post-intervention, we noticed a remarkable improvement in scores across all domains of behavioral, social, and cognitive functioning. Conclusion: Designing an individualized education program that is tailored to the specific needs of the child and using parents as co-therapists proved to be an effective intervention. PMID:25053954

  2. Modal-pushover-based ground-motion scaling procedure

    USGS Publications Warehouse

    Kalkan, Erol; Chopra, Anil K.

    2011-01-01

    Earthquake engineering is increasingly using nonlinear response history analysis (RHA) to demonstrate the performance of structures. This rigorous method of analysis requires selection and scaling of ground motions appropriate to design hazard levels. This paper presents a modal-pushover-based scaling (MPS) procedure to scale ground motions for use in a nonlinear RHA of buildings. In the MPS method, the ground motions are scaled to match, to a specified tolerance, a target value of the inelastic deformation of the first-mode inelastic single-degree-of-freedom (SDF) system whose properties are determined by first-mode pushover analysis. Appropriate for first-mode dominated structures, this approach is extended to structures with significant contributions of higher modes by considering elastic deformation of second-mode SDF systems in selecting a subset of the scaled ground motions. Based on results presented for three actual buildings (4-, 6-, and 13-story), the accuracy and efficiency of the MPS procedure are established and its superiority over the ASCE/SEI 7-05 scaling procedure is demonstrated.
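
    A heavily simplified sketch of the core MPS idea, under stated assumptions: an elastic (rather than the paper's inelastic) SDF system integrated with the Newmark average-acceleration method, and a scale factor chosen so the scaled record's peak SDF deformation matches a target. For the elastic case the factor follows directly from linearity; the inelastic case would require a search over trial factors. All parameter values are illustrative.

    ```python
    import numpy as np

    def peak_sdf_deformation(ag, dt, T=1.0, zeta=0.05):
        """Peak deformation of an elastic SDF system (period T, damping
        ratio zeta, unit mass) under ground acceleration ag, using the
        Newmark average-acceleration method."""
        wn = 2.0 * np.pi / T
        m, c, k = 1.0, 2.0 * zeta * wn, wn ** 2
        beta, gamma = 0.25, 0.5
        u, v = 0.0, 0.0
        a = -ag[0]  # from m*a + c*v + k*u = -m*ag at t = 0
        keff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
        umax = 0.0
        for agi in ag[1:]:
            p = (-agi
                 + m * (u / (beta * dt ** 2) + v / (beta * dt)
                        + (0.5 / beta - 1.0) * a)
                 + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                        + dt * (0.5 * gamma / beta - 1.0) * a))
            un = p / keff
            vn = (gamma * (un - u) / (beta * dt) + (1.0 - gamma / beta) * v
                  + dt * (1.0 - 0.5 * gamma / beta) * a)
            an = ((un - u) / (beta * dt ** 2) - v / (beta * dt)
                  - (0.5 / beta - 1.0) * a)
            u, v, a = un, vn, an
            umax = max(umax, abs(u))
        return umax

    def scale_factor_for_target(ag, dt, target):
        """Scale factor making the record's peak SDF deformation match the
        target (exact for the elastic case, since response is linear)."""
        return target / peak_sdf_deformation(np.asarray(ag, dtype=float), dt)

    # Example: match a 5 cm target using a synthetic record (20 s at 0.02 s).
    t = np.arange(0.0, 20.0, 0.02)
    ag = 0.3 * 9.81 * np.sin(2 * np.pi * 1.2 * t) * np.exp(-0.1 * t)
    print(f"scale factor: {scale_factor_for_target(ag, 0.02, 0.05):.3f}")
    ```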

  3. Teaching Quality in Math Class: The Development of a Scale and the Analysis of Its Relationship with Engagement and Achievement.

    PubMed

    Leon, Jaime; Medina-Garrido, Elena; Núñez, Juan L

    2017-01-01

    Math achievement and engagement decline in secondary education; therefore, educators face the challenge of engaging students to avoid school failure. Within self-determination theory, we address the need to assess comprehensively student perceptions of teaching quality that predict engagement and achievement. In study one we tested, in a sample of 548 high school students, a preliminary version of a scale to assess nine factors: teaching for relevance, acknowledging negative feelings, participation encouragement, controlling language, optimal challenge, focus on the process, class structure, positive feedback, and caring. In the second study, we analyzed the scale's reliability and validity in a sample of 1555 high school students. The scale showed evidence of reliability, and with regard to criterion validity, at the classroom level teaching quality was a predictor of behavioral engagement, and higher grades were observed in classes where students, as a whole, displayed more behavioral engagement. At the within level, behavioral engagement was associated with achievement. We provide not only a reliable and valid method to assess teaching quality, but also a basis for designing interventions: these could build on the scale items to encourage students to persist and display more engagement in school duties, which in turn bolsters student achievement.

  4. Small-Scale Design Experiments as Working Space for Larger Mobile Communication Challenges

    ERIC Educational Resources Information Center

    Lowe, Sarah; Stuedahl, Dagny

    2014-01-01

    In this paper, a design experiment using Instagram as a cultural probe is submitted as a method for analyzing the challenges that arise when considering the implementation of social media within a distributed communication space. It outlines how small, iterative investigations can reveal deeper research questions relevant to the education of…

  5. Practical guidelines to select and scale earthquake records for nonlinear response history analysis of structures

    USGS Publications Warehouse

    Kalkan, Erol; Chopra, Anil K.

    2010-01-01

    Earthquake engineering practice is increasingly using nonlinear response history analysis (RHA) to demonstrate performance of structures. This rigorous method of analysis requires selection and scaling of ground motions appropriate to design hazard levels. Presented herein is a modal-pushover-based scaling (MPS) method to scale ground motions for use in nonlinear RHA of buildings and bridges. In the MPS method, the ground motions are scaled to match (to a specified tolerance) a target value of the inelastic deformation of the first-'mode' inelastic single-degree-of-freedom (SDF) system whose properties are determined by first-'mode' pushover analysis. Appropriate for first-'mode' dominated structures, this approach is extended for structures with significant contributions of higher modes by considering elastic deformation of the second-'mode' SDF system in selecting a subset of the scaled ground motions. Based on results presented for two bridges, covering single- and multi-span 'ordinary standard' bridge types, and six buildings, covering low-, mid-, and tall building types in California, the accuracy and efficiency of the MPS procedure are established and its superiority over the ASCE/SEI 7-05 scaling procedure is demonstrated.

  6. Fast correlation method for passive-solar design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wray, W.O.; Biehl, F.A.; Kosiewicz, C.E.

    1982-01-01

    A passive-solar design manual for single-family detached residences and dormitory-type buildings is being developed. The design procedure employed in the manual is a simplification of the original monthly solar load ratio (SLR) method. The new SLR correlations involve a single constant for each system. The correlation constant appears as a scale factor permitting the use of a universal performance curve for all passive systems. Furthermore, by providing location-dependent correlations between the annual solar heating fraction (SHF) and the minimum monthly SHF, we have eliminated the need to perform an SLR calculation for each month of the heating season.
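
    To show only the structure of the simplified correlation, in which a single constant per passive system acts as a scale factor on a universal performance curve, here is a hedged sketch. The curve form, the monthly solar load ratios, and the averaging are all hypothetical placeholders, not the manual's correlations.

    ```python
    import math

    def universal_curve(x):
        """Hypothetical monotone curve mapping a scaled SLR to a monthly
        solar heating fraction in [0, 1); placeholder form only."""
        return 1.0 - math.exp(-x)

    def annual_shf(monthly_slr, c_system):
        """Illustrative annual solar heating fraction: each monthly SLR is
        divided by the system's correlation constant (the scale factor)
        before evaluating the shared curve; a crude unweighted average
        stands in for a proper load-weighted annual figure."""
        shf = [universal_curve(slr / c_system) for slr in monthly_slr]
        return sum(shf) / len(shf)

    # Hypothetical monthly solar-load ratios for one heating season.
    print(round(annual_shf([0.4, 0.6, 0.9, 1.3, 1.8, 1.1, 0.7], 1.2), 2))
    ```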

  7. Validity of contents of a paediatric critical comfort scale using mixed methodology.

    PubMed

    Bosch-Alcaraz, A; Jordan-Garcia, I; Alcolea-Monge, S; Fernández-Lorenzo, R; Carrasquer-Feixa, E; Ferrer-Orona, M; Falcó-Pegueroles, A

    Critical illness in paediatric patients includes acute conditions in a healthy child as well as exacerbations of chronic disease, and these situations must be clinically managed in Critical Care Units. The role of the paediatric nurse is to ensure the comfort of these critically ill patients; to that end, instruments are required that correctly assess critical comfort. To describe the process for validating the content of a paediatric critical comfort scale using mixed-methods research. Initially, a cross-cultural adaptation of the Comfort Behavior Scale from English to Spanish was made using the translation and back-translation method. Its content was then evaluated using mixed-methods research, divided into a quantitative stage, in which an ad hoc questionnaire was used to assess each scale item's relevance and wording, and a qualitative stage, with two meetings with health professionals, patients and a family member following the Delphi method recommendations. All scale items obtained a content validity index >0.80, except the relevance of physical movement, which obtained 0.76. Global content validity of the scale was 0.87 (high). During the qualitative stage, items from each of the scale domains were reformulated or eliminated in order to make the scale more comprehensible and applicable. Using a mixed-methods research methodology during the scale content validity phase allows the design of a richer and more assessment-sensitive instrument. Copyright © 2017 Sociedad Española de Enfermería Intensiva y Unidades Coronarias (SEEIUC). Publicado por Elsevier España, S.L.U. All rights reserved.
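
    The item- and scale-level figures reported above follow the standard content validity index (CVI) convention: the proportion of experts rating an item 3 or 4 on a 4-point relevance scale, averaged across items for the scale-level value. A minimal sketch with invented panel ratings:

    ```python
    def content_validity_index(ratings):
        """Item-level CVI: proportion of experts rating the item 3 or 4 on
        a 4-point relevance scale; scale-level CVI: mean of item CVIs."""
        item_cvi = [sum(r >= 3 for r in item) / len(item) for item in ratings]
        scale_cvi = sum(item_cvi) / len(item_cvi)
        return item_cvi, scale_cvi

    # Hypothetical ratings: 3 items x 8 expert panellists (1-4 scale).
    ratings = [[4, 4, 3, 4, 3, 4, 4, 3],
               [3, 4, 4, 2, 4, 3, 4, 4],
               [4, 3, 2, 3, 4, 4, 3, 2]]
    item_cvi, scale_cvi = content_validity_index(ratings)
    print(item_cvi, round(scale_cvi, 2))
    ```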

  8. MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow

    NASA Astrophysics Data System (ADS)

    Samani, N.; Kompani-Zare, M.; Barry, D. A.

    2004-01-01

    Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry, such as MODFLOW (in contrast to analytical models), generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion. The proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method, steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.
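
    A hedged sketch of the general idea of mapping axisymmetric flow onto a rectilinear grid: properties assigned to each column are weighted by the radial distance of the column midpoint. The 2*pi*r factor below is an assumption chosen for illustration; the paper derives its own set of relationships and scales.

    ```python
    import numpy as np

    def radially_scaled_properties(col_edges, K, Ss):
        """Weight conductivity and specific storage of each grid column by
        the radial distance of its midpoint (assumed factor 2*pi*r), so a
        unit-thick Cartesian row mimics axisymmetric flow to a well at r=0."""
        r_mid = 0.5 * (col_edges[:-1] + col_edges[1:])
        w = 2.0 * np.pi * r_mid
        return K * w, Ss * w

    # Logarithmically expanding columns away from the well (0.1 m to 1 km).
    edges = np.logspace(-1, 3, 41)
    Kx, Ssx = radially_scaled_properties(edges, K=10.0, Ss=1e-5)
    ```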

  9. Multi-scale Slip Inversion Based on Simultaneous Spatial and Temporal Domain Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Liu, W.; Yao, H.; Yang, H. Y.

    2017-12-01

    Finite fault inversion is a widely used method to study earthquake rupture processes. Previous studies have proposed different ways to implement it, including time-domain, frequency-domain, and wavelet-domain methods. Many studies have found that different frequency bands show different characteristics of the seismic rupture (e.g., Wang and Mori, 2011; Yao et al., 2011, 2013; Uchide et al., 2013; Yin et al., 2017). Generally, lower frequency waveforms correspond to larger-scale rupture characteristics while higher frequency data represent smaller-scale ones. Multi-scale analysis can therefore help us understand the earthquake rupture process thoroughly, from larger scales down to smaller scales. By using the wavelet transform, wavelet-domain methods can analyze both the time and frequency information of signals at different scales. Traditional wavelet-domain methods (e.g., Ji et al., 2002) invert with lower and higher frequency signals together to recover larger-scale and smaller-scale characteristics of the rupture process simultaneously. Here we propose an alternative two-step strategy: first constrain the larger-scale characteristics with lower frequency signals, then resolve the smaller-scale ones with higher frequency signals. We have designed synthetic tests to validate our strategy and compare it with the traditional one, and we have applied it to study the 2015 Gorkha, Nepal earthquake using teleseismic waveforms. Both the traditional method and our two-step strategy analyze the data only at different temporal scales (i.e., different frequency bands), while the spatial distribution of model parameters also shows multi-scale characteristics. A more sophisticated strategy is to transform the slip model into different spatial scales and then analyze the smooth slip distribution (larger scales) with lower frequency data first and the more detailed slip distribution (smaller scales) with higher frequency data subsequently. We are now implementing the slip inversion using both spatial and temporal domain wavelets. This multi-scale analysis can help us better understand frequency-dependent rupture characteristics of large earthquakes.
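
    A minimal sketch of the scale separation underlying the two-step strategy, using the third-party PyWavelets package (an assumption; the authors' implementation is not described here): a waveform is split into a low-frequency approximation, which would constrain large-scale slip first, and high-frequency details, which would then refine it.

    ```python
    import numpy as np
    import pywt  # PyWavelets, assumed available

    def split_scales(waveform, wavelet="db4", level=5):
        """Separate a waveform into a low-frequency part (approximation
        coefficients) and a high-frequency part (detail coefficients)."""
        coeffs = pywt.wavedec(waveform, wavelet, level=level)
        low = coeffs[:1] + [np.zeros_like(c) for c in coeffs[1:]]
        high = [np.zeros_like(coeffs[0])] + list(coeffs[1:])
        return pywt.waverec(low, wavelet), pywt.waverec(high, wavelet)

    # Synthetic "waveform": slow pulse plus short-period wiggles.
    t = np.linspace(0, 100, 1024)
    w = np.exp(-((t - 50) / 10) ** 2) + 0.2 * np.sin(2 * np.pi * t)
    low, high = split_scales(w)
    ```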

  10. Global river flood hazard maps: hydraulic modelling methods and appropriate uses

    NASA Astrophysics Data System (ADS)

    Townend, Samuel; Smith, Helen; Molloy, James

    2014-05-01

    Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent, global-scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow) which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20 and 1,500 years. Firstly, we will discuss the rationale behind the hydraulic modelling methods and inputs chosen to produce a consistent, global-scale river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale, and why innovative techniques customised for broad-scale use are preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computer and human resources available, particularly when using a Digital Surface Model at 30 m resolution. Finally, we will suggest some appropriate uses of global-scale hazard maps and explore how this new approach can be invaluable in areas of the world where flood hazard and risk have not previously been assessed.

  11. Rotorcraft noise

    NASA Technical Reports Server (NTRS)

    Huston, R. J. (Compiler)

    1982-01-01

    The establishment of a realistic plan for NASA and the U.S. helicopter industry to develop a design-for-noise methodology, including plans for the identification and development of promising noise reduction technology, was discussed. Topics included: noise reduction techniques, scaling laws, empirical noise prediction, psychoacoustics, and methods of developing and validating noise prediction methods.

  12. Being Outside Learning about Science Is Amazing: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Weibel, Michelle L.

    2011-01-01

    This study used a convergent parallel mixed methods design to examine teachers' environmental attitudes and concerns about an outdoor educational field trip. Converging both quantitative data (Environmental Attitudes Scale and teacher demographics) and qualitative data (Open-Ended Statements of Concern and interviews) facilitated interpretation.…

  13. Quake Final Video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Critical infrastructures around the world are at constant risk from earthquakes. Most of these critical structures were designed using archaic seismic simulation methods built on early digital computers from the 1970s. Idaho National Laboratory's Seismic Research Group is working to modernize these simulation methods through computational research and large-scale laboratory experiments.

  14. Multi-scale Material Appearance

    NASA Astrophysics Data System (ADS)

    Wu, Hongzhi

    Modeling and rendering the appearance of materials is important for a diverse range of applications of computer graphics - from automobile design to movies and cultural heritage. The appearance of materials varies considerably at different scales, posing significant challenges due to the sheer complexity of the data, as well as the need to maintain inter-scale consistency constraints. This thesis presents a series of studies around the modeling, rendering and editing of multi-scale material appearance. To efficiently render material appearance at multiple scales, we develop an object-space precomputed adaptive sampling method, which precomputes a hierarchy of view-independent points that preserve multi-level appearance. To support bi-scale material appearance design, we propose a novel reflectance filtering algorithm, which rapidly computes the large-scale appearance from small-scale details by exploiting the low-rank structures of Bidirectional Visible Normal Distribution Functions and pre-rotated Bidirectional Reflectance Distribution Functions in the matrix formulation of the rendering algorithm. This approach can guide the physical realization of appearance, as well as the modeling of real-world materials using very sparse measurements. Finally, we present a bi-scale-inspired high-quality general representation for material appearance described by Bidirectional Texture Functions. Our representation is at once compact, easily editable, and amenable to efficient rendering.

  15. Combined Use of Self-Efficacy Scale for Oral Health Behaviour and Oral Health Questionnaire: A Pilot Study

    ERIC Educational Resources Information Center

    Soutome, Sakiko; Kajiwara, Kazumi; Oho, Takahiko

    2012-01-01

    Objective: To examine whether the combined use of a task-specific self-efficacy scale for oral health behaviour (SEOH) and an oral health questionnaire (OHQ) would be useful for evaluating subjects' behaviours and cognitions. Design: Questionnaires. Methods: One hundred and eighty-five students completed the SEOH and OHQ. The 30-item OHQ uses a…

  16. Full-Scale Experimental Verification of Soft-Story-Only Retrofits of Wood-Frame Buildings using Hybrid Testing

    Treesearch

    Elaina Jennings; John W. van de Lindt; Ershad Ziaei; Pouria Bahmani; Sangki Park; Xiaoyun Shao; Weichiang Pang; Douglas Rammer; Gary Mochizuki; Mikhail Gershfeld

    2015-01-01

    The FEMA P-807 Guidelines were developed for retrofitting soft-story wood-frame buildings based on existing data, and the method had not been verified through full-scale experimental testing. This article presents two different retrofit designs based directly on the FEMA P-807 Guidelines that were examined at several different seismic intensity levels. The...

  17. Measurement of Latent Variables with Different Rating Scales: Testing Reliability and Measurement Equivalence by Varying the Verbalization and Number of Categories

    ERIC Educational Resources Information Center

    Menold, Natalja; Tausch, Anja

    2016-01-01

    Effects of rating scale forms on cross-sectional reliability and measurement equivalence were investigated. A randomized experimental design was implemented, varying category labels and number of categories. The participants were 800 students at two German universities. In contrast to previous research, reliability assessment method was used,…

  18. Development of Psychosocial Scales for Evaluating the Impact of a Culinary Nutrition Education Program on Cooking and Healthful Eating

    ERIC Educational Resources Information Center

    Condrasky, Margaret D.; Williams, Joel E.; Catalano, Patricia Michaud; Griffin, Sara F.

    2011-01-01

    Objective: Develop scales to assess the impact of the "Cooking with a Chef" program on several psychosocial constructs. Methods: Cross-sectional design in which parents and caregivers were recruited from child care settings (Head Start, faith-based, public elementary schools), and cooks were recruited from church and school kitchens. Analysis…

  19. A Comparison of Rubrics and Graded Category Rating Scales with Various Methods Regarding Raters' Reliability

    ERIC Educational Resources Information Center

    Dogan, C. Deha; Uluman, Müge

    2017-01-01

    The aim of this study was to determine the extent at which graded-category rating scales and rubrics contribute to inter-rater reliability. The research was designed as a correlational study. Study group consisted of 82 students attending sixth grade and three writing course teachers in a private elementary school. A performance task was…

  20. Application of optimized multiscale mathematical morphology for bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Gong, Tingkai; Yuan, Yanbin; Yuan, Xiaohui; Wu, Xiaotao

    2017-04-01

    In order to suppress noise effectively and extract the impulsive features in the vibration signals of faulty rolling element bearings, an optimized multiscale morphology (OMM) based on conventional multiscale morphology (CMM) and iterative morphology (IM) is presented in this paper. First, because the operator used in the IM method must be non-idempotent, an optimized difference (ODIF) operator is designed. Furthermore, since each iteration operates on the result of the previous one, a larger scale suppresses more of the fault features; a unit scale is therefore adopted as the structuring element (SE) scale in IM. With these definitions, the IM method is applied to the results obtained by CMM over different scales. The validity of the proposed method is first evaluated on a simulated signal. Subsequently, for an outer race fault, two vibration signals sampled by different accelerometers are analyzed by OMM and CMM, respectively; the same is done for an inner race fault. The results show that the optimized method is effective in diagnosing the two bearing faults, and compared with CMM, OMM extracts many more fault features under a strong noise background.
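
    For orientation, here is a sketch of the conventional multiscale morphology (CMM) baseline the paper improves on, using SciPy's grey-scale morphology: a closing-minus-opening difference computed at several SE lengths and averaged across scales. The ODIF operator and the iterative step of OMM are not reproduced; the signal and scale choices are invented.

    ```python
    import numpy as np
    from scipy.ndimage import grey_closing, grey_opening

    def multiscale_morph_difference(x, scales=(3, 5, 7, 9)):
        """CMM-style feature extractor: at each flat-SE length, take the
        closing-minus-opening difference (emphasizing impulses), then
        average the results over all scales."""
        out = np.zeros_like(x, dtype=float)
        for s in scales:
            out += grey_closing(x, size=s) - grey_opening(x, size=s)
        return out / len(scales)

    # Synthetic bearing-like signal: periodic impulses buried in noise.
    rng = np.random.default_rng(2)
    sig = rng.normal(0.0, 1.0, 2000)
    sig[::200] += 6.0  # fault impulses every 200 samples
    feature = multiscale_morph_difference(sig)
    ```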

  1. Wind-tunnel simulation of store jettison with the aid of magnetic artificial gravity

    NASA Technical Reports Server (NTRS)

    Stephens, T.; Adams, R.

    1972-01-01

    A method employed in the simulation of jettison of stores from aircraft involving small scale wind-tunnel drop tests from a model of the parent aircraft is described. Proper scaling of such experiments generally dictates that the gravitational acceleration should ideally be a test variable. A method of introducing a controllable artificial component of gravity by magnetic means has been proposed. The use of a magnetic artificial gravity facility based upon this idea, in conjunction with small scale wind-tunnel drop tests, would improve the accuracy of simulation. A review of the scaling laws as they apply to the design of such a facility is presented. The design constraints involved in the integration of such a facility with a wind tunnel are defined. A detailed performance analysis procedure applicable to such a facility is developed. A practical magnet configuration is defined which is capable of controlling the strength and orientation of the magnetic artificial gravity field in the vertical plane, thereby allowing simulation of store jettison from a diving or climbing aircraft. The factors involved in the choice between continuous or intermittent operation of the facility, and the use of normal or superconducting magnets, are defined.

  2. Formal and heuristic system decomposition methods in multidisciplinary synthesis. Ph.D. Thesis, 1991

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.

    1991-01-01

    The multidisciplinary interactions which exist in large scale engineering design problems provide a unique set of difficulties. These difficulties are associated primarily with unwieldy numbers of design variables and constraints, and with the interdependencies of the discipline analysis modules. Such obstacles require design techniques which account for the inherent disciplinary couplings in the analyses and optimizations. The objective of this work was to develop an efficient holistic design synthesis methodology that takes advantage of the synergistic nature of integrated design. A general decomposition approach for optimization of large engineering systems is presented. The method is particularly applicable for multidisciplinary design problems which are characterized by closely coupled interactions among discipline analyses. The advantage of subsystem modularity allows for implementation of specialized methods for analysis and optimization, computational efficiency, and the ability to incorporate human intervention and decision making in the form of an expert systems capability. The resulting approach is not a method applicable to only a specific situation, but rather, a methodology which can be used for a large class of engineering design problems in which the system is non-hierarchic in nature.

  3. Exploring factors that influence work analysis data: A meta-analysis of design choices, purposes, and organizational context.

    PubMed

    DuVernet, Amy M; Dierdorff, Erich C; Wilson, Mark A

    2015-09-01

    Work analysis is fundamental to designing effective human resource systems. The current investigation extends previous research by identifying the differential effects of common design decisions, purposes, and organizational contexts on the data generated by work analyses. The effects of 19 distinct factors that span choices of descriptor, collection method, rating scale, and data source, as well as project purpose and organizational features, are explored. Meta-analytic results cumulated from 205 articles indicate that many of these variables hold significant consequences for work analysis data. Factors pertaining to descriptor choice, collection method, rating scale, and the purpose for conducting the work analysis each showed strong associations with work analysis data. The source of the work analysis information and the organizational context in which it was conducted displayed fewer relationships. Findings can be used to inform the choices work analysts make about methodology and post-collection evaluations of work analysis information. (c) 2015 APA, all rights reserved.

  4. Synchronization in node of complex networks consist of complex chaotic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Qiang, E-mail: qiangweibeihua@163.com; Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024

    2014-07-15

    A new synchronization method is investigated for nodes of complex networks consisting of complex chaotic systems. When the networks achieve synchronization, different components of the complex state variable synchronize up to different complex scaling functions through a designed complex feedback controller. This paper extends the synchronization scaling function from the real field to the complex field for synchronization at the nodes of complex networks with complex chaotic systems. Synchronization in networks with constant coupling delay and with time-varying coupling delay is investigated, respectively. Numerical simulations are provided to show the effectiveness of the proposed method.
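
    A simplified sketch of synchronization up to a scaling factor by a designed feedback controller, assuming a real constant scaling and a pair of Lorenz systems for concreteness (the paper generalizes to complex scaling functions and network coupling): choosing u = alpha*f(x) - f(y) - k*(y - alpha*x) gives error dynamics e' = -k*e, so the slave state converges to alpha times the master state.

    ```python
    import numpy as np

    def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def simulate(alpha=2.0, k=5.0, dt=1e-3, steps=20000):
        """Drive slave y toward alpha*x with the controller
        u = alpha*f(x) - f(y) - k*(y - alpha*x)."""
        x = np.array([1.0, 1.0, 1.0])    # master state
        y = np.array([-3.0, 2.0, 10.0])  # slave state
        for _ in range(steps):
            e = y - alpha * x
            u = alpha * lorenz(x) - lorenz(y) - k * e
            x = x + dt * lorenz(x)        # forward-Euler master
            y = y + dt * (lorenz(y) + u)  # controlled slave
        return np.abs(y - alpha * x).max()

    print(f"final sync error: {simulate():.2e}")
    ```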

  5. Solving Large-Scale Inverse Magnetostatic Problems using the Adjoint Method

    PubMed Central

    Bruckner, Florian; Abert, Claas; Wautischer, Gregor; Huber, Christian; Vogler, Christoph; Hinze, Michael; Suess, Dieter

    2017-01-01

    An efficient algorithm for the reconstruction of the magnetization state within magnetic components is presented. The occurring inverse magnetostatic problem is solved by means of an adjoint approach based on the Fredkin-Koehler method for the solution of the forward problem. Due to the use of hybrid FEM-BEM coupling combined with matrix compression techniques, the resulting algorithm is well suited for large-scale problems. Furthermore, the reconstruction of the magnetization state within a permanent magnet and an optimal design application are demonstrated. PMID:28098851

  6. Spacecraft mass estimation, relationships and engine data: Task 1.1 of the lunar base systems study

    NASA Technical Reports Server (NTRS)

    1988-01-01

    A collection of scaling equations, weight statements, scaling factors, etc., useful for doing conceptual designs of spacecraft are given. Rules of thumb and methods of calculating quantities of interest are provided. Basic relationships for conventional, and several non-conventional, propulsion systems (nuclear, solar electric and solar thermal) are included. The equations and other data were taken from a number of sources and are not at all consistent with each other in level of detail or method, but provide useful references for early estimation purposes.

  7. Astronomical Distance Determination in the Space Age. Secondary Distance Indicators

    NASA Astrophysics Data System (ADS)

    Czerny, Bożena; Beaton, Rachael; Bejger, Michał; Cackett, Edward; Dall'Ora, Massimo; Holanda, R. F. L.; Jensen, Joseph B.; Jha, Saurabh W.; Lusso, Elisabeta; Minezaki, Takeo; Risaliti, Guido; Salaris, Maurizio; Toonen, Silvia; Yoshii, Yuzuru

    2018-02-01

    The formal division of distance indicators into primary and secondary leads to difficulties in describing methods that can actually be used in two ways: with and without the support of other methods for scaling. Thus, instead of concentrating on the scaling requirement, we concentrate on all methods of distance determination for extragalactic sources that are intended, at least formally, for use with individual sources. Among these, the Type Ia supernova method is clearly the leader, owing to its enormous success in determining the expansion rate of the Universe. However, new methods are developing rapidly, and there is also progress in the more traditional methods. We give a general overview of the methods but concentrate mostly on the most recent developments in each field and on future expectations.

  8. Comparison of Structural Optimization Techniques for a Nuclear Electric Space Vehicle

    NASA Technical Reports Server (NTRS)

    Benford, Andrew

    2003-01-01

    The purpose of this paper is to utilize the optimization method of genetic algorithms (GAs) for truss design on a nuclear propulsion vehicle. Genetic algorithms are a guided, random search that mirrors Darwin's theory of natural selection and survival of the fittest. To verify the GAs' capabilities, other traditional optimization methods were used to compare against the results obtained by the GAs, first on simple 2-D structures and eventually on full-scale 3-D truss designs.
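
    A toy GA sketch in the spirit described, with all problem data invented: cross-sectional areas of a three-member, statically determinate truss are evolved to minimize mass under a stress-constraint penalty. The known optimum is A_i = F_i / sigma_allow, so convergence is easy to check; this is not the paper's formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical truss: member lengths (m), axial forces (N),
    # allowable stress (Pa), material density (kg/m^3).
    L = np.array([1.0, 1.5, 2.0])
    F = np.array([1.0e4, 2.0e4, 1.5e4])
    SIGMA, RHO = 1.0e8, 2700.0

    def fitness(A):
        """Total mass plus a large penalty for any overstressed member."""
        mass = RHO * np.sum(A * L)
        overstress = np.maximum(F / A - SIGMA, 0.0)
        return mass + 1e6 * overstress.sum()

    def ga(pop_size=50, gens=200, pm=0.2):
        pop = rng.uniform(1e-5, 1e-2, (pop_size, 3))  # areas in m^2
        for _ in range(gens):
            f = np.array([fitness(a) for a in pop])
            parents = pop[np.argsort(f)[: pop_size // 2]]  # truncation selection
            kids = []
            while len(kids) < pop_size - len(parents):
                p1, p2 = parents[rng.integers(len(parents), size=2)]
                w = rng.uniform(size=3)            # blend crossover
                child = w * p1 + (1 - w) * p2
                mask = rng.uniform(size=3) < pm    # per-gene mutation
                child[mask] *= rng.uniform(0.5, 2.0, mask.sum())
                kids.append(np.clip(child, 1e-6, 1e-2))
            pop = np.vstack([parents, kids])
        return pop[np.argmin([fitness(a) for a in pop])]

    print(ga())  # approaches A_i = F_i / SIGMA
    ```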

  9. Scaling Deep Learning on GPU and Knights Landing clusters

    DOE PAGES

    You, Yang; Buluc, Aydin; Demmel, James

    2017-09-26

    The speed of training deep neural networks has become a big bottleneck of deep learning research and development. For example, training GoogleNet on the ImageNet dataset on one Nvidia K20 GPU takes 21 days. To speed up the training process, current deep learning systems rely heavily on hardware accelerators. However, these accelerators have limited on-chip memory compared with CPUs. To handle large datasets, they need to fetch data from either CPU memory or remote processors. We use both self-hosted Intel Knights Landing (KNL) clusters and multi-GPU clusters as our target platforms. From an algorithmic perspective, current distributed machine learning systems are mainly designed for cloud systems. These methods are asynchronous because of the slow network and high fault-tolerance requirements of cloud systems. We focus on Elastic Averaging SGD (EASGD) to design algorithms for HPC clusters. The original EASGD used a round-robin method for communication and updating, ordered by machine rank ID, which is inefficient on HPC clusters. First, we redesign four efficient algorithms for HPC systems to improve EASGD's poor scaling on clusters. Async EASGD, Async MEASGD, and Hogwild EASGD are faster than their existing counterparts (Async SGD, Async MSGD, and Hogwild SGD, respectively) in all the comparisons. Finally, we design Sync EASGD, which ties for the best performance among all the methods while being deterministic. In addition to the algorithmic improvements, we use system-algorithm codesign techniques to scale up the algorithms. By reducing the percentage of communication from 87% to 14%, our Sync EASGD achieves 5.3x speedup over the original EASGD on the same platform. We achieve 91.5% weak scaling efficiency on 4253 KNL cores, which is higher than the state-of-the-art implementation.
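
    A minimal synchronous EASGD sketch on a toy sharded least-squares problem (invented data; not the authors' HPC implementation): each worker takes a gradient step plus an elastic pull toward a center variable, and the center moves toward the workers' average.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy problem: each of P workers holds a shard of a least-squares task.
    P, d, n = 4, 10, 200
    A = rng.normal(size=(P, n, d))
    b = rng.normal(size=(P, n))

    def shard_grad(w, p):
        """Gradient of 0.5*||A_p w - b_p||^2 / n on worker p's shard."""
        return A[p].T @ (A[p] @ w - b[p]) / n

    def sync_easgd(eta=0.05, alpha=0.05, steps=500):
        """Synchronous EASGD: worker update x_i -= eta*g_i + alpha*(x_i - c);
        center update c += alpha * sum_i (x_i - c)."""
        workers = [rng.normal(size=d) for _ in range(P)]
        center = np.zeros(d)
        for _ in range(steps):
            for p in range(P):
                workers[p] -= (eta * shard_grad(workers[p], p)
                               + alpha * (workers[p] - center))
            center += alpha * sum(w - center for w in workers)
        return center

    w = sync_easgd()
    ```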

  10. Scaling Deep Learning on GPU and Knights Landing clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Buluc, Aydin; Demmel, James

    The speed of training deep neural networks has become a big bottleneck of deep learning research and development. For example, training GoogleNet on the ImageNet dataset on one Nvidia K20 GPU takes 21 days. To speed up the training process, current deep learning systems rely heavily on hardware accelerators. However, these accelerators have limited on-chip memory compared with CPUs. To handle large datasets, they need to fetch data from either CPU memory or remote processors. We use both self-hosted Intel Knights Landing (KNL) clusters and multi-GPU clusters as our target platforms. From an algorithmic perspective, current distributed machine learning systems are mainly designed for cloud systems. These methods are asynchronous because of the slow network and high fault-tolerance requirements of cloud systems. We focus on Elastic Averaging SGD (EASGD) to design algorithms for HPC clusters. The original EASGD used a round-robin method for communication and updating, ordered by machine rank ID, which is inefficient on HPC clusters. First, we redesign four efficient algorithms for HPC systems to improve EASGD's poor scaling on clusters. Async EASGD, Async MEASGD, and Hogwild EASGD are faster than their existing counterparts (Async SGD, Async MSGD, and Hogwild SGD, respectively) in all the comparisons. Finally, we design Sync EASGD, which ties for the best performance among all the methods while being deterministic. In addition to the algorithmic improvements, we use system-algorithm codesign techniques to scale up the algorithms. By reducing the percentage of communication from 87% to 14%, our Sync EASGD achieves 5.3x speedup over the original EASGD on the same platform. We achieve 91.5% weak scaling efficiency on 4253 KNL cores, which is higher than the state-of-the-art implementation.

  11. User-Centered Design for Psychosocial Intervention Development and Implementation

    PubMed Central

    Lyon, Aaron R.; Koerner, Kelly

    2018-01-01

    The current paper articulates how common difficulties encountered when attempting to implement or scale-up evidence-based treatments are exacerbated by fundamental design problems, which may be addressed by a set of principles and methods drawn from the contemporary field of user-centered design. User-centered design is an approach to product development that grounds the process in information collected about the individuals and settings where products will ultimately be used. To demonstrate the utility of this perspective, we present four design concepts and methods: (a) clear identification of end users and their needs, (b) prototyping/rapid iteration, (c) simplifying existing intervention parameters/procedures, and (d) exploiting natural constraints. We conclude with a brief design-focused research agenda for the developers and implementers of evidence-based treatments. PMID:29456295

  12. Cost-Driven Design of a Large Scale X-Plane

    NASA Technical Reports Server (NTRS)

    Welstead, Jason R.; Frederic, Peter C.; Frederick, Michael A.; Jacobson, Steven R.; Berton, Jeffrey J.

    2017-01-01

    A conceptual design process focused on the development of a low-cost, large-scale X-plane was developed as part of an internal research and development effort. One of the concepts considered for this process was the double-bubble configuration recently developed as an advanced single-aisle-class commercial transport similar in size to a Boeing 737-800 or Airbus A320. The study objective was to reduce the contractor cost from contract award to first test flight to less than $100 million, with first flight within three years of contract award. Methods and strategies for reduced cost are discussed.

  13. Composite turbine blade design options for Claude (open) cycle OTEC power systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penney, T R

    1985-11-01

    Small-scale turbine rotors made from composites offer several technical advantages for a Claude (open) cycle ocean thermal energy conversion (OTEC) power system. Westinghouse Electric Corporation has designed a composite turbine rotor/disk using state-of-the-art analysis methods for large-scale (100-MW/sub e/) open cycle OTEC applications. Near-term demonstrations using conventional low-pressure turbine blade shapes with composite material would establish the feasibility and credibility of the open cycle OTEC power system. Applying composite blades to low-pressure turbomachinery potentially improves reliability over conventional metal blades, which are affected by stress corrosion.

  14. The PedsQL in pediatric cancer: reliability and validity of the Pediatric Quality of Life Inventory Generic Core Scales, Multidimensional Fatigue Scale, and Cancer Module.

    PubMed

    Varni, James W; Burwinkle, Tasha M; Katz, Ernest R; Meeske, Kathy; Dickinson, Paige

    2002-04-01

    The Pediatric Quality of Life Inventory (PedsQL) is a modular instrument designed to measure health-related quality of life (HRQOL) in children and adolescents ages 2-18 years. The PedsQL 4.0 Generic Core Scales are multidimensional child self-report and parent proxy-report scales developed as the generic core measure to be integrated with the PedsQL disease-specific modules. The PedsQL Multidimensional Fatigue Scale was designed to measure fatigue in pediatric patients. The PedsQL 3.0 Cancer Module was designed to measure pediatric cancer-specific HRQOL. The PedsQL Generic Core Scales, Multidimensional Fatigue Scale, and Cancer Module were administered to 339 families (220 child self-reports; 337 parent proxy-reports). Internal consistency reliability for the PedsQL Generic Core Total Scale Score (alpha = 0.88 child, 0.93 parent report), Multidimensional Fatigue Total Scale Score (alpha = 0.89 child, 0.92 parent report) and most Cancer Module Scales (average alpha = 0.72 child, 0.87 parent report) demonstrated reliability acceptable for group comparisons. Validity was demonstrated using the known-groups method. The PedsQL distinguished between healthy children and children with cancer as a group, and among children on-treatment versus off-treatment. The validity of the PedsQL Multidimensional Fatigue Scale was further demonstrated through hypothesized intercorrelations with dimensions of generic and cancer-specific HRQOL. The results demonstrate the reliability and validity of the PedsQL Generic Core Scales, Multidimensional Fatigue Scale, and Cancer Module in pediatric cancer. The PedsQL may be utilized as an outcome measure in clinical trials, research, and clinical practice. Copyright 2002 American Cancer Society.
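
    The internal-consistency figures quoted above are Cronbach's alpha coefficients. As a reference for how such a coefficient is computed, here is a small self-contained sketch; the score matrix is made up for illustration:

        import numpy as np

        def cronbach_alpha(item_scores):
            # Cronbach's alpha for an (n_respondents x n_items) score matrix:
            # alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
            item_scores = np.asarray(item_scores, dtype=float)
            n_items = item_scores.shape[1]
            item_vars = item_scores.var(axis=0, ddof=1)
            total_var = item_scores.sum(axis=1).var(ddof=1)
            return n_items / (n_items - 1) * (1 - item_vars.sum() / total_var)

        # Hypothetical data: 5 respondents x 4 items on a 1-5 scale
        scores = [[3, 4, 3, 4], [2, 2, 3, 2], [4, 4, 4, 5], [1, 2, 1, 2], [3, 3, 4, 3]]
        print(round(cronbach_alpha(scores), 2))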

  15. Computational Design and Analysis of a Transonic Natural Laminar Flow Wing for a Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Lynde, Michelle N.; Campbell, Richard L.

    2017-01-01

    A natural laminar flow (NLF) wind tunnel model has been designed and analyzed for a wind tunnel test in the National Transonic Facility (NTF) at the NASA Langley Research Center. The NLF design method is built into the CDISC design module and uses a Navier-Stokes flow solver, a boundary layer profile solver, and stability analysis and transition prediction software. The NLF design method alters the pressure distribution to support laminar flow on the upper surface of wings with high sweep and flight Reynolds numbers. The method addresses transition due to attachment line contamination/transition, Gortler vortices, and crossflow and Tollmien-Schlichting modal instabilities. The design method is applied to the wing of the Common Research Model (CRM) at transonic flight conditions. Computational analysis predicts significant extents of laminar flow on the wing upper surface, which results in drag savings. A 5.2 percent scale semispan model of the CRM NLF wing will be built and tested in the NTF. This test will aim to validate the NLF design method, as well as characterize the laminar flow testing capabilities in the wind tunnel facility.

  16. Structural Element Testing in Support of the Design of the NASA Composite Crew Module

    NASA Technical Reports Server (NTRS)

    Kellas, Sotiris; Jackson, Wade C.; Thesken, John C.; Schleicher, Eric; Wagner, Perry; Kirsch, Michael T.

    2012-01-01

    In January 2007, the NASA Administrator and Associate Administrator for the Exploration Systems Mission Directorate chartered the NASA Engineering and Safety Center (NESC) to design, build, and test a full-scale Composite Crew Module (CCM). For the design and manufacturing of the CCM, the team adopted the building-block approach, in which design and manufacturing risks were mitigated through manufacturing trials and structural testing at various levels of complexity. Following NASA's Structural Design Verification Requirements, a further objective was the verification of design analysis methods and the provision of design data for critical structural features. Test articles increasing in complexity from basic material characterization coupons through structural feature elements and large structural components, to full-scale structures, were evaluated. This paper discusses only four element tests, three of which include joints and one of which includes a tapering honeycomb core detail. For each test series, specimen details, instrumentation, test results, a brief analysis description, test-analysis correlation, and conclusions are presented.

  17. A Scale-up Approach for Film Coating Process Based on Surface Roughness as the Critical Quality Attribute.

    PubMed

    Yoshino, Hiroyuki; Hara, Yuko; Dohi, Masafumi; Yamashita, Kazunari; Hakomori, Tadashi; Kimura, Shin-Ichiro; Iwao, Yasunori; Itai, Shigeru

    2018-04-01

    Scale-up approaches for the film coating process have been established for each type of film coating equipment from thermodynamic and mechanical analyses over several decades. The objective of the present study was to establish a versatile scale-up approach for the film coating process, applicable to commercial production, that is based on a critical quality attribute (CQA) identified using the Quality by Design (QbD) approach and is independent of the equipment used. Experiments at pilot scale using the Design of Experiments (DoE) approach were performed to select a suitable CQA from among surface roughness, contact angle, color difference, and coating film properties measured by terahertz spectroscopy. Surface roughness was determined to be a suitable CQA from a quantitative appearance evaluation. With surface roughness fixed as the CQA, the water content of the film-coated tablets was determined to be the critical material attribute (CMA), a parameter that does not depend on scale or equipment. Finally, to verify the scale-up approach determined at the pilot scale, experiments at a commercial scale were performed. The good correlation between the surface roughness (CQA) and the water content (CMA) identified at the pilot scale was retained at the commercial scale, indicating that our proposed method should be useful as a scale-up approach for the film coating process.
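
    The scale-independent CQA-CMA correlation lends itself to a very simple transfer calculation. The sketch below fits a linear relation between water content and surface roughness on pilot-scale data and applies it at commercial scale; all numbers, and the assumption of linearity, are hypothetical illustrations rather than the paper's data:

        import numpy as np

        # Hypothetical pilot-scale DoE results:
        # tablet water content (%) vs. surface roughness (um)
        water_pilot = np.array([1.2, 1.6, 2.0, 2.4, 2.8])
        rough_pilot = np.array([0.95, 0.88, 0.80, 0.74, 0.66])

        # Fit the CQA-CMA relation at pilot scale
        slope, intercept = np.polyfit(water_pilot, rough_pilot, 1)

        # Predict the roughness expected at commercial scale
        # for a measured water content
        water_commercial = 2.2
        predicted_roughness = slope * water_commercial + intercept
        print(f"predicted roughness: {predicted_roughness:.2f} um")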

  18. Cascading pressure reactor and method for solar-thermochemical reactions

    DOEpatents

    Ermanoski, Ivan

    2017-11-14

    Reactors and methods for solar thermochemical reactions are disclosed. The reactors and methods include a cascade of reduction chambers at successively lower pressures, which yields a pressure decrease of more than an order of magnitude compared to a single-chamber design. The resulting efficiency gains are substantial and represent an important step toward practical and efficient solar fuel production on a large scale.

  19. Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns

    PubMed Central

    Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain

    2015-01-01

    Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model’s predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917
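
    The core of the Novelty Search idea is to reward parameter settings whose model outputs are far from anything seen before, rather than close to a target. The sketch below shows the loop in NumPy with hypothetical function names and an arbitrary archive threshold; in the collective-motion case, 'model' would map parameters to a pattern descriptor such as (polarization, cohesion):

        import numpy as np

        def novelty(descriptor, archive, k=5):
            # Novelty = mean distance to the k nearest descriptors seen so far.
            d = np.sort(np.linalg.norm(np.asarray(archive) - descriptor, axis=1))
            return d[:k].mean()

        def pattern_search(model, sample_params, n_iter=200, threshold=0.1):
            # Keep any parameter setting whose output pattern is novel enough.
            archive = [model(sample_params())]
            for _ in range(n_iter):
                desc = model(sample_params())
                if novelty(desc, archive) > threshold:
                    archive.append(desc)
            return np.array(archive)

        # Toy 'model': two pattern descriptors derived from two parameters
        def toy(p):
            return np.array([np.tanh(p[0]), p[0] * p[1]])

        rng = np.random.default_rng(0)
        patterns = pattern_search(toy, lambda: rng.uniform(-2, 2, size=2))
        print(len(patterns), "distinct patterns archived")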

  20. A small-scale, rolled-membrane microfluidic artificial lung designed towards future large area manufacturing.

    PubMed

    Thompson, A J; Marks, L H; Goudie, M J; Rojas-Pena, A; Handa, H; Potkay, J A

    2017-03-01

    Artificial lungs have been used in the clinic for multiple decades to supplement patient pulmonary function. Recently, small-scale microfluidic artificial lungs (μALs) have been demonstrated with large surface-area-to-blood-volume ratios, biomimetic blood flow paths, and pressure drops compatible with pumpless operation. Initial small-scale microfluidic devices with blood flow rates in the μl/min to ml/min range have exhibited excellent gas transfer efficiencies; however, current manufacturing techniques may not be suitable for scaling up to human applications. Here, we present a new manufacturing technology for a microfluidic artificial lung in which the structure is assembled via a continuous "rolling" and bonding procedure from a single, patterned layer of polydimethylsiloxane (PDMS). This method is demonstrated in a small-scale four-layer device, but is expected to easily scale to larger-area devices. The presented devices have a biomimetic branching blood flow network, 10 μm tall artificial capillaries, and a 66 μm thick gas transfer membrane. Gas transfer efficiency in blood was evaluated over a range of blood flow rates (0.1-1.25 ml/min) for two different sweep gases (pure O2, atmospheric air). The achieved gas transfer data closely follow predicted theoretical values for oxygenation and CO2 removal, while the pressure drop is marginally higher than predicted. This work is the first step in developing a scalable method for creating large-area microfluidic artificial lungs. Although designed for microfluidic artificial lungs, the presented technique is expected to result in the first manufacturing method capable of simply and easily creating large-area microfluidic devices from PDMS.

  1. SCALE: A modular code system for performing Standardized Computer Analyses for Licensing Evaluation. Volume 1, Part 2: Control modules S1--H1; Revision 5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.

  2. A dynamic multi-scale Markov model based methodology for remaining life prediction

    NASA Astrophysics Data System (ADS)

    Yan, Jihong; Guo, Chaozhong; Wang, Xing

    2011-05-01

    The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model via a weighted coefficient. Multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed was designed to validate the dynamic multi-scale Markov model; the experimental results illustrate the effectiveness of the methodology.
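
    As a rough illustration of the Markov remaining-life machinery, the sketch below estimates a transition matrix from an observed degradation-state sequence and computes the expected number of steps to the failure state. The state sequence and the plain transition counting are assumptions for the example; the paper's FCM-based state division and dynamic multi-scale weighting are omitted:

        import numpy as np

        def transition_matrix(states, n_states):
            # Estimate a Markov transition matrix from an observed state sequence.
            P = np.zeros((n_states, n_states))
            for a, b in zip(states[:-1], states[1:]):
                P[a, b] += 1
            row = P.sum(axis=1, keepdims=True)
            return np.divide(P, row, out=np.full_like(P, 1.0 / n_states),
                             where=row > 0)

        def expected_steps_to_failure(P):
            # Expected hitting time of the last (failure) state from each state:
            # solve (I - Q) t = 1 on the transient block Q.
            Q = P[:-1, :-1]
            n = Q.shape[0]
            return np.linalg.solve(np.eye(n) - Q, np.ones(n))

        # Hypothetical: degradation index binned into 4 states, state 3 = failure
        sequence = [0, 0, 1, 0, 1, 1, 2, 1, 2, 2, 3]
        P = transition_matrix(sequence, 4)
        print(expected_steps_to_failure(P))  # remaining life (steps) from states 0..2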

  3. Satellite attitude prediction by multiple time scales method

    NASA Technical Reports Server (NTRS)

    Tao, Y. C.; Ramnath, R.

    1975-01-01

    An investigation is made of the problem of predicting the attitude of satellites under the influence of external disturbing torques. The attitude dynamics are first expressed in a perturbation formulation, which is then solved by the multiple scales approach. The independent variable, time, is extended into new scales (fast, slow, etc.), and the integration is carried out separately in the new variables. The theory is applied to two different satellite configurations, rigid body and dual spin, each of which may have an asymmetric mass distribution. The disturbing torques considered are gravity gradient and geomagnetic. Finally, since the multiple time scales approach separates the slow and fast behaviors of satellite attitude motion, this property is used for the design of an attitude control device. A nutation damping control loop, using the geomagnetic torque for an earth-pointing dual spin satellite, is designed in terms of the slow equation.

  4. PyHLA: tests for the association between HLA alleles and diseases.

    PubMed

    Fan, Yanhui; Song, You-Qiang

    2017-02-06

    Recently, several tools have been designed for human leukocyte antigen (HLA) typing using single nucleotide polymorphism (SNP) array and next-generation sequencing (NGS) data. These tools provide high-throughput and cost-effective approaches for identifying HLA types. Therefore, tools for downstream association analysis are highly desirable. Although several tools have been designed for multi-allelic marker association analysis, they were designed only for microsatellite markers and do not scale well with increasing data volumes, or they were designed for large-scale data but provided a limited number of tests. We have developed a Python package called PyHLA, which implements several methods for HLA association analysis, to fill the gap. PyHLA is a tailor-made, easy to use, and flexible tool designed specifically for the association analysis of the HLA types imputed from genome-wide genotyping and NGS data. PyHLA provides functions for association analysis, zygosity tests, and interaction tests between HLA alleles and diseases. Monte Carlo permutation and several methods for multiple testing corrections have also been implemented. PyHLA provides a convenient and powerful tool for HLA analysis. Existing methods have been integrated and desired methods have been added in PyHLA. Furthermore, PyHLA is applicable to small and large sample sizes and can finish the analysis in a timely manner on a personal computer with different platforms. PyHLA is implemented in Python. PyHLA is a free, open source software distributed under the GPLv2 license. The source code, tutorial, and examples are available at https://github.com/felixfan/PyHLA.

  5. Examining the Characteristics of Student Postings That Are Liked and Linked in a CSCL Environment

    ERIC Educational Resources Information Center

    Makos, Alexandra; Lee, Kyungmee; Zingaro, Daniel

    2015-01-01

    This case study is the first iteration of a large-scale design-based research project to improve Pepper, an interactive discussion-based learning environment. In this phase, we designed and implemented two social features to scaffold positive learner interactivity behaviors: a "Like" button and linking tool. A mixed-methods approach was…

  6. Improved genome-scale multi-target virtual screening via a novel collaborative filtering approach to cold-start problem

    PubMed Central

    Lim, Hansaim; Gray, Paul; Xie, Lei; Poleksic, Aleksandar

    2016-01-01

    The conventional one-drug-one-gene approach has had limited success in modern drug discovery. Polypharmacology, which focuses on searching for multi-targeted drugs to perturb disease-causing networks instead of designing selective ligands to target individual proteins, has emerged as a new drug discovery paradigm. Although many methods for single-target virtual screening have been developed to improve the efficiency of drug discovery, few of these algorithms are designed for polypharmacology. Here, we present a novel theoretical framework and a corresponding algorithm for genome-scale multi-target virtual screening based on the one-class collaborative filtering technique. Our method overcomes the sparseness of the protein-chemical interaction data by means of interaction matrix weighting and dual regularization from both chemicals and proteins. While the statistical foundation behind our method is general enough to encompass genome-wide drug off-target prediction, the program is specifically tailored to find protein targets for new chemicals with little to no available interaction data. We extensively evaluate our method using a number of the most widely accepted gene-specific and cross-gene family benchmarks and demonstrate that our method outperforms other state-of-the-art algorithms for predicting the interaction of new chemicals with multiple proteins. Thus, the proposed algorithm may provide a powerful tool for multi-target drug design. PMID:27958331
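
    The phrase "interaction matrix weighting and dual regularization" maps onto a familiar weighted matrix factorization scheme for one-class data. The sketch below is a generic gradient-based version with made-up data; the paper's dual regularization involves chemical- and protein-side terms beyond the plain L2 penalties used here, and all names and values are assumptions:

        import numpy as np

        def wmf_grad(R, W, rank=8, lam=0.1, lr=0.01, n_iter=500, seed=0):
            # Weighted matrix factorization for one-class interaction data.
            # R: binary chemical-protein interaction matrix (1 = known interaction)
            # W: confidence weights (larger on observed entries, small on zeros)
            rng = np.random.default_rng(seed)
            n, m = R.shape
            U = 0.1 * rng.standard_normal((n, rank))   # chemical factors
            V = 0.1 * rng.standard_normal((m, rank))   # protein factors
            for _ in range(n_iter):
                E = W * (U @ V.T - R)                  # weighted reconstruction error
                U -= lr * (E @ V + lam * U)            # L2-regularized updates on
                V -= lr * (E.T @ U + lam * V)          # both factor matrices
            return U @ V.T                             # scores for unseen pairs

        R = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]], dtype=float)
        W = np.where(R > 0, 1.0, 0.1)                  # down-weight unobserved zeros
        print(np.round(wmf_grad(R, W), 2))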

  7. Improved genome-scale multi-target virtual screening via a novel collaborative filtering approach to cold-start problem.

    PubMed

    Lim, Hansaim; Gray, Paul; Xie, Lei; Poleksic, Aleksandar

    2016-12-13

    The conventional one-drug-one-gene approach has had limited success in modern drug discovery. Polypharmacology, which focuses on searching for multi-targeted drugs to perturb disease-causing networks instead of designing selective ligands to target individual proteins, has emerged as a new drug discovery paradigm. Although many methods for single-target virtual screening have been developed to improve the efficiency of drug discovery, few of these algorithms are designed for polypharmacology. Here, we present a novel theoretical framework and a corresponding algorithm for genome-scale multi-target virtual screening based on the one-class collaborative filtering technique. Our method overcomes the sparseness of the protein-chemical interaction data by means of interaction matrix weighting and dual regularization from both chemicals and proteins. While the statistical foundation behind our method is general enough to encompass genome-wide drug off-target prediction, the program is specifically tailored to find protein targets for new chemicals with little to no available interaction data. We extensively evaluate our method using a number of the most widely accepted gene-specific and cross-gene family benchmarks and demonstrate that our method outperforms other state-of-the-art algorithms for predicting the interaction of new chemicals with multiple proteins. Thus, the proposed algorithm may provide a powerful tool for multi-target drug design.

  8. Evaluation of modal pushover-based scaling of one component of ground motion: Tall buildings

    USGS Publications Warehouse

    Kalkan, Erol; Chopra, Anil K.

    2012-01-01

    Nonlinear response history analysis (RHA) is now increasingly used for performance-based seismic design of tall buildings. Required for nonlinear RHA is a set of ground motions selected and scaled appropriately so that analysis results are accurate (unbiased) and efficient (having relatively small dispersion). This paper evaluates the accuracy and efficiency of the recently developed modal pushover-based scaling (MPS) method for scaling ground motions for tall buildings. The procedure presented explicitly considers structural strength and is based on the standard intensity measure (IM) of spectral acceleration in a form convenient for evaluating existing structures or proposed designs for new structures. Based on results presented for two actual buildings (19 and 52 stories, respectively), it is demonstrated that the MPS procedure provides a highly accurate estimate of the engineering demand parameters (EDPs), accompanied by significantly reduced record-to-record variability of the responses. In addition, the MPS procedure is shown to be superior to the scaling procedure specified in the ASCE/SEI 7-05 document.
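
    For intuition on amplitude scaling in general, the sketch below computes the least-squares factor that best matches a record's response-spectrum ordinates to a target spectrum over a period range. This is a deliberate simplification for illustration (the MPS method itself scales each record to a target deformation of an inelastic first-"mode" single-degree-of-freedom system), and all spectrum values are hypothetical:

        import numpy as np

        def ls_scale_factor(record_spectrum, target_spectrum):
            # Least-squares amplitude factor k so that k * record ~ target
            # over the period range of interest: k = (a . t) / (a . a)
            a = np.asarray(record_spectrum, dtype=float)
            t = np.asarray(target_spectrum, dtype=float)
            return float(a @ t / (a @ a))

        record = [0.42, 0.55, 0.61, 0.48, 0.30]   # Sa(T) of record, hypothetical
        target = [0.50, 0.66, 0.75, 0.58, 0.37]   # target spectrum ordinates
        print(round(ls_scale_factor(record, target), 3))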

  9. Decomposing Multifractal Crossovers

    PubMed Central

    Nagy, Zoltan; Mukli, Peter; Herman, Peter; Eke, Andras

    2017-01-01

    Physiological processes—such as the brain's resting-state electrical activity or hemodynamic fluctuations—exhibit scale-free temporal structuring. However, effects common in biological systems, such as noise, multiple signal generators, or filtering by transport function, result in multimodal scaling that cannot be reliably assessed by standard analytical tools that assume unimodal scaling. Here, we present two methods to identify breakpoints or crossovers in multimodal multifractal scaling functions. These methods incorporate the robust iterative fitting approach of the focus-based multifractal formalism (FMF). The first approach (moment-wise scaling range adaptivity) allows for a breakpoint-based adaptive treatment that analyzes segregated scale-invariant ranges. The second method (the scaling function decomposition method, SFD) is a crossover-based design aimed at decomposing signal constituents from multimodal scaling functions resulting from signal addition or co-sampling, such as contamination by uncorrelated fractals. We demonstrated that these methods can handle multimodal, mono- or multifractal, and exact or empirical signals alike. Their precision was numerically characterized on ideal signals, and robust performance was demonstrated on exemplary empirical signals capturing resting-state brain dynamics by near infrared spectroscopy (NIRS), electroencephalography (EEG), and blood oxygen level-dependent functional magnetic resonance imaging (fMRI-BOLD). The NIRS and fMRI-BOLD low-frequency fluctuations were dominated by a multifractal component over an underlying biologically relevant random noise, thus forming a bimodal signal. The crossover between the EEG signal components was found at the boundary between the δ and θ bands, suggesting an independent generator for the multifractal δ rhythm. The robust implementation of the SFD method should be regarded as essential in the seamless processing of large volumes of bimodal fMRI-BOLD imaging data for the topology of multifractal metrics free of the masking effect of the underlying random noise. PMID:28798694

  10. High-uniformity centimeter-wide Si etching method for MEMS devices with large opening elements

    NASA Astrophysics Data System (ADS)

    Okamoto, Yuki; Tohyama, Yukiya; Inagaki, Shunsuke; Takiguchi, Mikio; Ono, Tomoki; Lebrasseur, Eric; Mita, Yoshio

    2018-04-01

    We propose a compensated mesh pattern filling method to achieve highly uniform wafer depth etching (over hundreds of microns) with a large-area opening (over centimeter). The mesh opening diameter is gradually changed between the center and the edge of a large etching area. Using such a design, the etching depth distribution depending on sidewall distance (known as the local loading effect) inversely compensates for the over-centimeter-scale etching depth distribution, known as the global or within-die(chip)-scale loading effect. Only a single DRIE with test structure patterns provides a micro-electromechanical systems (MEMS) designer with the etched depth dependence on the mesh opening size as well as on the distance from the chip edge, and the designer only has to set the opening size so as to obtain a uniform etching depth over the entire chip. This method is useful when process optimization cannot be performed, such as in the cases of using standard conditions for a foundry service and of short turn-around-time prototyping. To demonstrate, a large MEMS mirror that needed over 1 cm2 of backside etching was successfully fabricated using as-is-provided DRIE conditions.

  11. Inverse problems in the design, modeling and testing of engineering systems

    NASA Technical Reports Server (NTRS)

    Alifanov, Oleg M.

    1991-01-01

    Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.

  12. An exploratory sequential design to validate measures of moral emotions.

    PubMed

    Márquez, Margarita G; Delgado, Ana R

    2017-05-01

    This paper presents an exploratory and sequential mixed-methods approach to validating measures of knowledge of the moral emotions of contempt, anger and disgust. The sample comprised 60 participants in the qualitative phase, when a measurement instrument was designed. Item stems, response options and correction keys were planned following the results obtained in a descriptive phenomenological analysis of the interviews. In the quantitative phase, the scale was used with a sample of 102 Spanish participants, and the results were analysed with the Rasch model. In the qualitative phase, salient themes included reasons, objects and action tendencies. In the quantitative phase, good psychometric properties were obtained. The model fit was adequate. However, some changes had to be made to the scale in order to improve the proportion of variance explained. Substantive and methodological implications of this mixed-methods study are discussed. Had the study used a single research method in isolation, aspects of the global understanding of contempt, anger and disgust would have been lost.

  13. Computer design of porous active materials at different dimensional scales

    NASA Astrophysics Data System (ADS)

    Nasedkin, Andrey

    2017-12-01

    The paper presents a mathematical and computer modeling of effective properties of porous piezoelectric materials of three types: with ordinary porosity, with metallized pore surfaces, and with nanoscale porosity structure. The described integrated approach includes the effective moduli method of composite mechanics, simulation of representative volumes, and finite element method.

  14. Molecular Design of Multilayer Composites from Carbon Nanotubes

    DTIC Science & Technology

    2008-03-31

    approaches that will enable large scale and 5-30 times faster manufacturing of the LBL composites than traditional LBL: (1) dewetting method and (2...Films made by Dewetting Method Of Layer-By-Layer Assembly, Nano Letters 2007, 7(11), 3266-3273. Loh, K. J.; Lynch, J. P.; Shim, B. S.; Kotov, N. An

  15. A successful trap design for capturing large terrestrial snakes

    Treesearch

    Shirley J. Burgdorf; D. Craig Rudolph; Richard N. Conner; Daniel Saenz; Richard R. Schaefer

    2005-01-01

    Large scale trapping protocols for snakes can be expensive and require large investments of personnel and time. Typical methods, such as pitfall and small funnel traps, are not useful or suitable for capturing large snakes. A method was needed to survey multiple blocks of habitat for the Louisiana Pine Snake (Pituophis ruthveni), throughout its...

  16. Scale-Up of GRCop: From Laboratory to Rocket Engines

    NASA Technical Reports Server (NTRS)

    Ellis, David L.

    2016-01-01

    GRCop is a high temperature, high thermal conductivity copper-based series of alloys designed primarily for use in regeneratively cooled rocket engine liners. It began with laboratory-level production of a few grams of ribbon produced by chill block melt spinning and has grown to commercial-scale production of large-scale rocket engine liners. Along the way, a variety of methods of consolidating and working the alloy were examined, a database of properties was developed and a variety of commercial and government applications were considered. This talk will briefly address the basic material properties used for selection of compositions to scale up, the methods used to go from simple ribbon to rocket engines, the need to develop a suitable database, and the issues related to getting the alloy into a rocket engine or other application.

  17. Examination of the Philadelphia Geriatric Morale Scale as a Subjective Quality-of-Life Measure in Elderly Hong Kong Chinese

    ERIC Educational Resources Information Center

    Wong, Eric; Woo, Jean; Hui, Elsie; Ho, Suzanne C.

    2004-01-01

    Purpose: We examine the psychometric properties of the Philadelphia Geriatric Morale Scale (PGMS) in an elderly Chinese population in Hong Kong. Design and Methods: The study consisted of two cohorts: (a) 759 participants aged 70 years and older living in the community who were recruited as part of a territory-wide health survey and interviewed in…

  18. Stories to Communicate Risks about Tobacco: Development of a Brief Scale to Measure Transportation into a Video Story--The ACCE Project

    ERIC Educational Resources Information Center

    Williams, Jessica H.; Green, Melanie C.; Kohler, Connie; Allison, Jeroan J.; Houston, Thomas K.

    2011-01-01

    Objective: To evaluate the construct and criterion validity of the Video Transportation Scale (VTS). Setting: Inpatient service of a safety net hospital in Birmingham, Alabama, USA. Method: We administered the VTS in the context of a randomized controlled trial of a DVD-delivered narrative-based intervention (stories) designed to encourage smoking…

  19. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiu, Dongbin

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  20. A Method for Measuring Fishing Effort by Small-Scale Fish Aggregating Device (FAD) Fishers from the Commonwealth of Dominica

    ERIC Educational Resources Information Center

    Alvard, Michael; McGaffey, Ethan; Carlson, David

    2015-01-01

    We used global positioning system (GPS) technology and tracking analysis to measure fishing effort by marine, small-scale, fish aggregating device (FAD) fishers of the Commonwealth of Dominica. FADs are human-made structures designed to float on the surface of the water and attract fish. They are also prone to common pool resource problems. To…

  1. Dependability and Treatment Sensitivity of Multi-Item Direct Behavior Rating Scales for Interpersonal Peer Conflict

    ERIC Educational Resources Information Center

    Daniels, Brian; Volpe, Robert J.; Briesch, Amy M.; Gadow, Kenneth D.

    2017-01-01

    Direct behavior rating (DBR) represents a feasible method for monitoring student behavior in the classroom; however, limited work to date has focused on the use of multi-item scales. The purposes of the study were to examine the (a) dependability of data obtained from a multi-item DBR designed to assess peer conflict and (b) treatment sensitivity…

  2. In silico multi-scale model of transport and dynamic seeding in a bone tissue engineering perfusion bioreactor.

    PubMed

    Spencer, T J; Hidalgo-Bastida, L A; Cartmell, S H; Halliday, I; Care, C M

    2013-04-01

    Computer simulations can potentially be used to design, predict, and inform properties for tissue engineering perfusion bioreactors. In this work, we investigate the flow properties that result from a particular poly-L-lactide porous scaffold and a particular choice of perfusion bioreactor vessel design used in bone tissue engineering. We also propose a model to investigate dynamic seeding properties such as the homogeneity (or lack thereof) of the cellular distribution within the scaffold of the perfusion bioreactor: a pre-requisite for the subsequent successful uniform growth of a viable bone tissue engineered construct. Flows inside geometrically complex scaffolds have been investigated previously and results shown at these pore scales. Here, our aim is to show accurately, through the use of modern high-performance computers, that the bioreactor device scale that encloses a scaffold can affect the flows and stresses within the pores throughout the scaffold, which has implications for bioreactor design, control, and use. Central to this work is that the boundary conditions are derived from micro computed tomography scans of both a device chamber and scaffold in order to avoid generalizations and uncertainties. Dynamic seeding methods have also been shown to provide certain advantages over static seeding methods. We propose here a novel coupled model for dynamic seeding accounting for flow, species mass transport and cell advection-diffusion-attachment, tuned for bone tissue engineering. The model highlights the timescale differences between different species, suggesting that traditional homogeneous porous flow models of transport must be applied with caution to perfusion bioreactors. Our in silico data illustrate the extent to which these experiments have the potential to contribute to future design and development of large-scale bioreactors. Copyright © 2012 Wiley Periodicals, Inc.

  3. Understanding electrical conduction in lithium ion batteries through multi-scale modeling

    NASA Astrophysics Data System (ADS)

    Pan, Jie

    Silicon (Si) has been considered as a promising negative electrode material for lithium ion batteries (LIBs) because of its high theoretical capacity, low discharge voltage, and low cost. However, the utilization of Si electrode has been hampered by problems such as slow ionic transport, large stress/strain generation, and unstable solid electrolyte interphase (SEI). These problems severely influence the performance and cycle life of Si electrodes. In general, ionic conduction determines the rate performance of the electrode, while electron leakage through the SEI causes electrolyte decomposition and, thus, causes capacity loss. The goal of this thesis research is to design Si electrodes with high current efficiency and durability through a fundamental understanding of the ionic and electronic conduction in Si and its SEI. Multi-scale physical and chemical processes occur in the electrode during charging and discharging. This thesis, thus, focuses on multi-scale modeling, including developing new methods, to help understand these coupled physical and chemical processes. For example, we developed a new method based on ab initio molecular dynamics to study the effects of stress/strain on Li ion transport in amorphous lithiated Si electrodes. This method not only quantitatively shows the effect of stress on ionic transport in amorphous materials, but also uncovers the underlying atomistic mechanisms. However, the origin of ionic conduction in the inorganic components in SEI is different from that in the amorphous Si electrode. To tackle this problem, we developed a model by separating the problem into two scales: 1) atomistic scale: defect physics and transport in individual SEI components with consideration of the environment, e.g., LiF in equilibrium with Si electrode; 2) mesoscopic scale: defect distribution near the heterogeneous interface based on a space charge model. In addition, to help design better artificial SEI, we further demonstrated a theoretical design of multicomponent SEIs by utilizing the synergetic effect found in the natural SEI. We show that the electrical conduction can be optimized by varying the grain size and volume fraction of two phases in the artificial multicomponent SEI.

  4. Supporting BPMN choreography with system integration artefacts for enterprise process collaboration

    NASA Astrophysics Data System (ADS)

    Nie, Hongchao; Lu, Xudong; Duan, Huilong

    2014-07-01

    Business Process Model and Notation (BPMN) choreography modelling depicts externally visible message exchanges between collaborating processes of enterprise information systems. Implementation of choreography relies on designing system integration solutions to realise message exchanges between independently developed systems. Enterprise integration patterns (EIPs) are widely accepted artefacts to design integration solutions. If the choreography model represents coordination requirements between processes with behaviour mismatches, the integration designer needs to analyse the routing requirements and address these requirements by manually designing EIP message routers. As collaboration scales and complexity increases, manual design becomes inefficient. Thus, the research problem of this paper is to explore a method to automatically identify routing requirements from BPMN choreography model and to accordingly design routing in the integration solution. To achieve this goal, recurring behaviour mismatch scenarios are analysed as patterns, and corresponding solutions are proposed as EIP routers. Using this method, a choreography model can be analysed by computer to identify occurrences of mismatch patterns, leading to corresponding router selection. A case study demonstrates that the proposed method enables computer-assisted integration design to implement choreography. A further experiment reveals that the method is effective to improve the design quality and reduce time cost.
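
    To make the computer-assisted routing idea concrete, here is a toy sketch of the final selection step: detected behaviour-mismatch patterns are mapped to enterprise integration pattern (EIP) routers. The pattern names and the mapping itself are hypothetical stand-ins; the paper derives its catalogue from an analysis of recurring mismatch scenarios in BPMN choreography models:

        # Hypothetical mapping from detected mismatch patterns to EIP routers
        PATTERN_TO_ROUTER = {
            "one-to-many send": "recipient list",
            "conditional receive": "content-based router",
            "out-of-order messages": "resequencer",
            "split payload": "splitter + aggregator",
        }

        def select_routers(detected_patterns):
            # Return the EIP router proposed for each mismatch pattern found.
            return {p: PATTERN_TO_ROUTER.get(p, "manual design needed")
                    for p in detected_patterns}

        print(select_routers(["conditional receive", "out-of-order messages"]))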

  5. Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.

    2002-01-01

    Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.

  6. Design and fabrication of a fixed-bed batch type pyrolysis reactor for pilot scale pyrolytic oil production in Bangladesh

    NASA Astrophysics Data System (ADS)

    Aziz, Mohammad Abdul; Al-khulaidi, Rami Ali; Rashid, MM; Islam, M. R.; Rashid, MAN

    2017-03-01

    In this research, the development and performance testing of a fixed-bed batch type pyrolysis reactor for pilot-scale pyrolysis oil production was successfully completed. The characteristics of the pyrolysis oil were compared to other experimental results. A solid horizontal condenser, a burner for furnace heating, and a reactor shield were designed. The pilot-scale pyrolytic oil production encountered numerous problems during the plant's operation. This fixed-bed batch type pyrolysis reactor method demonstrates the energy-saving potential of solid waste tires by contributing to energy stability. In this experiment, the product yields (wt.%) were 49% liquid (pyrolytic oil), 38.3% char, and 12.7% pyrolytic gas, with an operating run time of 185 minutes.

  7. Continued Water-Based Phase Change Material Heat Exchanger Development

    NASA Technical Reports Server (NTRS)

    Hansen, Scott W.; Sheth, Rubik B.; Poynot, Joe; Giglio, Tony; Ungar, Gene K.

    2015-01-01

    In a cyclical heat load environment such as low Lunar orbit, a spacecraft's radiators are not sized to meet the full heat rejection demands. Traditionally, a supplemental heat rejection device (SHReD) such as an evaporator or sublimator is used as a "topper" to meet the additional heat rejection demands. Utilizing a Phase Change Material (PCM) heat exchanger (HX) as a SHReD provides an attractive alternative to evaporators and sublimators, as PCM HXs do not use a consumable, thereby reducing launch mass and volume requirements. In continued pursuit of water PCM HX development, two full-scale, Orion-sized water-based PCM HXs were constructed by Mezzo Technologies. These HXs were designed by applying prior research on freeze front propagation to a full-scale design. Design options considered included bladder restraint and clamping mechanisms, bladder manufacturing, tube patterns, fill/drain methods, manifold dimensions, weight optimization, and midplate designs. The two units, A and B, differed only in their midplate design. Both units failed multiple times during testing. This report highlights learning outcomes from these tests, which are applied to a final sub-scale PCM HX slated to be tested on the ISS in early 2017.

  8. Scaling and characterisation of a 2-DoF velocity amplified electromagnetic vibration energy harvester

    NASA Astrophysics Data System (ADS)

    O’Donoghue, D.; Frizzell, R.; Punch, J.

    2018-07-01

    Vibration energy harvesters (VEHs) offer an alternative to batteries for the autonomous operation of low-power electronics. Understanding the influence of scaling on VEHs is of great importance in the design of reduced-scale harvesters. The nonlinear harvesters investigated here employ velocity amplification, a technique used to increase velocity through impacts, to improve the power output of multiple-degree-of-freedom VEHs compared to linear resonators. Such harvesters, employing electromagnetic induction, are referred to as velocity amplified electromagnetic generators (VAEGs), with gains in power achieved by increasing the relative velocity between the magnet and coil in the transducer. The influence of scaling on a nonlinear 2-DoF VAEG is presented. Due to the increased complexity of VAEGs compared to linear systems, linear scaling theory cannot be directly applied; therefore, a detailed nonlinear scaling method is utilised, employing both experimental and numerical methods. This nonlinear scaling method can be used for analysing the scaling behaviour of all nonlinear electromagnetic VEHs. It is demonstrated that the electromagnetic coupling coefficient degrades more rapidly with scale for systems with larger displacement amplitudes, meaning that systems operating at low frequencies will scale poorly compared to those operating at higher frequencies. The load power of the 2-DoF VAEG is predicted to scale as P_L ∝ s^5.51, where s = volume^(1/3), suggesting that achieving high power densities in a VAEG with low device volume is extremely challenging.
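
    A quick worked consequence of that power law, taking the quoted exponent at face value:

        # With P_L proportional to s^5.51 and s = volume^(1/3), halving the linear
        # scale cuts load power by 2^5.51 (about 46x) and power *density*
        # (P_L / s^3) by 2^2.51 (about 5.7x).
        exponent = 5.51
        shrink = 0.5  # linear-scale ratio of the smaller device to the larger one
        print(f"power ratio:         {shrink ** exponent:.4f}")        # ~0.022
        print(f"power-density ratio: {shrink ** (exponent - 3):.3f}")  # ~0.176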

  9. Small-scale fixed wing airplane software verification flight test

    NASA Astrophysics Data System (ADS)

    Miller, Natasha R.

    The increased demand for micro Unmanned Air Vehicles (UAV) driven by military requirements, commercial use, and academia is creating a need for the ability to quickly and accurately conduct low Reynolds Number aircraft design. There exist several open source software programs that are free or inexpensive that can be used for large scale aircraft design, but few software programs target the realm of low Reynolds Number flight. XFLR5 is an open source, free to download, software program that attempts to take into consideration viscous effects that occur at low Reynolds Number in airfoil design, 3D wing design, and 3D airplane design. An off the shelf, remote control airplane was used as a test bed to model in XFLR5 and then compared to flight test collected data. Flight test focused on the stability modes of the 3D plane, specifically the phugoid mode. Design and execution of the flight tests were accomplished for the RC airplane using methodology from full scale military airplane test procedures. Results from flight test were not conclusive in determining the accuracy of the XFLR5 software program. There were several sources of uncertainty that did not allow for a full analysis of the flight test results. An off the shelf drone autopilot was used as a data collection device for flight testing. The precision and accuracy of the autopilot is unknown. Potential future work should investigate flight test methods for small scale UAV flight.

  10. A modular approach to large-scale design optimization of aerospace systems

    NASA Astrophysics Data System (ADS)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft components, providing differentiability. An unstructured quadrilateral mesh generation algorithm is also developed to automate the creation of detailed meshes for aircraft structures, and a mesh convergence study is performed to verify that the quality of the mesh is maintained as it is refined. As a demonstration, high-fidelity aerostructural analysis is performed for two unconventional configurations with detailed structures included, and aerodynamic shape optimization is applied to the truss-braced wing, which finds and eliminates a shock in the region bounded by the struts and the wing.

  11. Mixing characterisation of full-scale membrane bioreactors: CFD modelling with experimental validation.

    PubMed

    Brannock, M; Wang, Y; Leslie, G

    2010-05-01

    Membrane Bioreactors (MBRs) have been successfully used in aerobic biological wastewater treatment to solve the perennial problem of effective solids-liquid separation. The optimisation of MBRs requires knowledge of the membrane fouling, biokinetics and mixing. However, research has mainly concentrated on the fouling and biokinetics (Ng and Kim, 2007). Current methods of design for a desired flow regime within MBRs are largely based on assumptions (e.g. complete mixing of tanks) and empirical techniques (e.g. specific mixing energy). However, it is difficult to predict how sludge rheology and vessel design in full-scale installations affects hydrodynamics, hence overall performance. Computational Fluid Dynamics (CFD) provides a method for prediction of how vessel features and mixing energy usage affect the hydrodynamics. In this study, a CFD model was developed which accounts for aeration, sludge rheology and geometry (i.e. bioreactor and membrane module). This MBR CFD model was then applied to two full-scale MBRs and was successfully validated against experimental results. The effect of sludge settling and rheology was found to have a minimal impact on the bulk mixing (i.e. the residence time distribution).

  12. Vibration test of 1/5 scale H-II launch vehicle

    NASA Astrophysics Data System (ADS)

    Morino, Yoshiki; Komatsu, Keiji; Sano, Masaaki; Minegishi, Masakatsu; Morita, Toshiyuki; Kohsetsu, Y.

    In order to predict dynamic loads on the newly designed Japanese H-II launch vehicle, the adequacy of prediction methods was assessed by dynamic scale model testing. A three-dimensional dynamic model was used in the analysis to express coupling effects among axial, lateral (pitch and yaw), and torsional vibrations. The liquid/tank interaction was considered by use of a boundary element method. The 1/5 scale model of the H-II launch vehicle was designed to simulate the stiffness and mass properties of important structural parts, such as the core/SRB junctions, the first and second stage LOX tanks, and the engine mount structures. Modal excitation of the test vehicle was accomplished with 100-1000 N shakers which produced random or sinusoidal vibrational forces. The vibrational response of the test vehicle was measured at various locations with accelerometers and pressure sensors. In the lower frequency range, correspondence between analysis and experiment was generally good. The basic procedures in the analysis appear adequate so far, but some improvements in mathematical modeling are suggested by the comparison of test and analysis.

  13. Shaping the Atomic-Scale Geometries of Electrodes to Control Optical and Electrical Performance of Molecular Devices.

    PubMed

    Zhao, Zhikai; Liu, Ran; Mayer, Dirk; Coppola, Maristella; Sun, Lu; Kim, Youngsang; Wang, Chuankui; Ni, Lifa; Chen, Xing; Wang, Maoning; Li, Zongliang; Lee, Takhee; Xiang, Dong

    2018-04-01

    A straightforward method to generate both atomic-scale sharp and atomic-scale planar electrodes is reported. The atomic-scale sharp electrodes are generated by precisely stretching a suspended nanowire, while the atomic-scale planar electrodes are obtained via mechanically controllable inter-electrode compression followed by a thermally driven atom migration process. Notably, the gap size between the electrodes can be controlled with subangstrom accuracy using this method. These two types of electrodes are subsequently employed to investigate the properties of single-molecule junctions. It is found, for the first time, that the conductance of amine-linked molecular junctions can be enhanced by ≈50% when the atomic-scale sharp electrodes are used. The atomic-scale planar electrodes, however, show great advantages in enhancing the sensitivity of Raman scattering to variations in nanogap size. The underlying mechanisms for these two observations are clarified with the help of density functional theory calculations and finite-element method simulations. These findings not only provide a strategy to control electron transport through molecular junctions, but also pave the way to modulating the optical response and improving the stability of single-molecule devices via the rational design of electrode geometries. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Maximizing algebraic connectivity in air transportation networks

    NASA Astrophysics Data System (ADS)

    Wei, Peng

    In air transportation networks, the robustness of a network with regard to node and link failures is a key design factor. An experiment based on a real air transportation network is performed to show that algebraic connectivity is a good measure of network robustness. Three optimization problems of algebraic connectivity maximization are then formulated in order to find the most robust network design under different constraints. The algebraic connectivity maximization problem with flight route addition or deletion is formulated first, and three methods to optimize and analyze the network algebraic connectivity are proposed. The Modified Greedy Perturbation algorithm (MGP) provides a sub-optimal solution in a fast iterative manner. The Weighted Tabu Search (WTS) is designed to offer a near-optimal solution with longer running time. Relaxed semi-definite programming (SDP) is used to set a performance upper bound, and three rounding techniques are discussed for finding a feasible solution. The simulation results present the trade-offs among the three methods. The case study on the two air transportation networks of Virgin America and Southwest Airlines shows that the developed methods can be applied to real-world large-scale networks. The algebraic connectivity maximization problem is then extended by adding a leg-number constraint, which accounts for travelers' tolerance for the total number of connecting stops. Binary Semi-Definite Programming (BSDP) with a cutting-plane method provides the optimal solution, while tabu search and 2-opt search heuristics find the optimal solution in small-scale networks and near-optimal solutions in large-scale networks. The third algebraic connectivity maximization problem, with an operating cost constraint, is formulated last. When the total operating cost budget is given, the number of edges to be added is not fixed, and each edge weight must be calculated rather than pre-determined. It is shown that edge addition and weight assignment cannot be studied separately for the problem with an operating cost constraint; therefore a relaxed SDP method with golden section search is developed to solve both at the same time. Cluster decomposition is utilized to solve large-scale networks.
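
    For readers unfamiliar with the measure, algebraic connectivity is the second-smallest eigenvalue of the graph Laplacian. The sketch below computes it and brute-forces the single best route to add on a toy network; the exhaustive search is a small-scale stand-in for the MGP, tabu search, and SDP methods described above:

        import numpy as np
        from itertools import combinations

        def algebraic_connectivity(A):
            # Second-smallest eigenvalue of the graph Laplacian L = D - A.
            L = np.diag(A.sum(axis=1)) - A
            return np.sort(np.linalg.eigvalsh(L))[1]

        def best_edge_to_add(A):
            # Try every missing route and keep the one that maximizes
            # the resulting algebraic connectivity.
            n = A.shape[0]
            best, best_val = None, -np.inf
            for i, j in combinations(range(n), 2):
                if A[i, j] == 0:
                    A[i, j] = A[j, i] = 1
                    val = algebraic_connectivity(A)
                    if val > best_val:
                        best, best_val = (i, j), val
                    A[i, j] = A[j, i] = 0
            return best, best_val

        # Hypothetical 5-airport network arranged as a simple path
        A = np.zeros((5, 5))
        for i in range(4):
            A[i, i + 1] = A[i + 1, i] = 1
        print(best_edge_to_add(A))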

  15. Exploration of a Capability-Focused Aerospace System of Systems Architecture Alternative with Bilayer Design Space, Based on RST-SOM Algorithmic Methods

    PubMed Central

    Li, Zhifei; Qin, Dongliang

    2014-01-01

    In defense-related programs, the use of capability-based analysis, design, and acquisition has been significant. To confront one of the most challenging features of capability-based analysis (CBA), its huge design space, a literature review of design space exploration was first conducted. Then, in the process of an aerospace system of systems design space exploration, a bilayer mapping method was put forward, based on existing experimental and operating data. Finally, the feasibility of the foregoing approach was demonstrated with an illustrative example. With the data-mining RST (rough set theory) and SOM (self-organized mapping) techniques, the alternative to the aerospace system of systems architecture was mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space), respectively. Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation. PMID:24790572

  16. Exploration of a capability-focused aerospace system of systems architecture alternative with bilayer design space, based on RST-SOM algorithmic methods.

    PubMed

    Li, Zhifei; Qin, Dongliang; Yang, Feng

    2014-01-01

    In defense-related programs, the use of capability-based analysis, design, and acquisition has been significant. To confront one of the most challenging features of capability-based analysis (CBA), a huge design space, a literature review of design space exploration was first conducted. Then, in the process of an aerospace system of systems design space exploration, a bilayer mapping method was put forward, based on existing experimental and operating data. Finally, the feasibility of the foregoing approach was demonstrated with an illustrative example. With the data mining RST (rough set theory) and SOM (self-organizing map) techniques, the aerospace system of systems architecture alternative was mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space), respectively. Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation.

  17. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
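
    As a minimal illustration of the selection problem, the sketch below runs a simple genetic algorithm that picks k observation locations maximizing the sum of squared sensitivities. The sensitivity matrix is synthetic random data standing in for the (POD-reduced) groundwater model, and the GA operators are generic rather than those used in the study.

```python
# A minimal sketch (not the paper's implementation) of selecting k
# observation wells by a genetic algorithm under a maximal information
# criterion: maximize the sum of squared sensitivities over the chosen
# rows of a sensitivity matrix J. J is synthetic here.
import numpy as np

rng = np.random.default_rng(0)
n_locs, n_params, k = 30, 5, 4
J = rng.normal(size=(n_locs, n_params))  # d(head at location)/d(pumping param)

def fitness(wells):
    return float((J[wells] ** 2).sum())  # sum of squared sensitivities

def ga(pop_size=40, gens=60, p_mut=0.3):
    pop = [rng.choice(n_locs, size=k, replace=False) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a = elite[rng.integers(len(elite))]
            b = elite[rng.integers(len(elite))]
            pool = np.unique(np.concatenate([a, b]))   # crossover: merge parents
            child = rng.choice(pool, size=k, replace=False)
            if rng.random() < p_mut:                   # mutation: swap one well
                child[rng.integers(k)] = rng.integers(n_locs)
            if len(set(child)) < k:                    # repair duplicate wells
                child = rng.choice(n_locs, size=k, replace=False)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print("selected wells:", sorted(int(w) for w in best), "fitness:", fitness(best))
```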

  18. Generative Representations for Automated Design of Robots

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.; Lipson, Hod; Pollack, Jordan B.

    2007-01-01

    A method of automated design of complex, modular robots involves an evolutionary process in which generative representations of designs are used. The term generative representations as used here signifies, loosely, representations that consist of or include algorithms, computer programs, and the like, wherein encoded designs can reuse elements of their encoding and thereby evolve toward greater complexity. Automated design of robots through synthetic evolutionary processes has already been demonstrated, but it is not clear whether genetically inspired search algorithms can yield designs that are sufficiently complex for practical engineering. The ultimate success of such algorithms as tools for automation of design depends on the scaling properties of representations of designs. A nongenerative representation (one in which each element of the encoded design is used at most once in translating to the design) scales linearly with the number of elements. Search algorithms that use nongenerative representations quickly become intractable (search times vary approximately exponentially with numbers of design elements), and thus are not amenable to scaling to complex designs. Generative representations are compact representations and were devised as means to circumvent the above-mentioned fundamental restriction on scalability. In the present method, a robot is defined by a compact programmatic form (its generative representation) and the evolutionary variation takes place on this form. The evolutionary process is an iterative one, wherein each cycle consists of the following steps: 1. Generative representations are generated in an evolutionary subprocess. 2. Each generative representation is a program that, when compiled, produces an assembly procedure. 3. In a computational simulation, a constructor executes an assembly procedure to generate a robot. 4. A physical-simulation program tests the performance of a simulated constructed robot, evaluating the performance according to a fitness criterion to yield a figure of merit that is fed back into the evolutionary subprocess of the next iteration. In comparison with prior approaches to automated evolutionary design of robots, the use of generative representations offers two advantages: First, a generative representation enables the reuse of components in regular and hierarchical ways and thereby serves as a systematic means of creating more complex modules out of simpler ones. Second, the evolved generative representation may capture intrinsic properties of the design problem, so that variations in the representations move through the design space more effectively than do equivalent variations in a nongenerative representation. This method has been demonstrated by using it to design some robots that move, variously, by walking, rolling, or sliding. Some of the robots were built. Although these robots are very simple, in comparison with robots designed by humans, their structures are more regular, modular, hierarchical, and complex than are those of evolved designs of comparable functionality synthesized by use of nongenerative representations.
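
    The reuse idea is easy to see in miniature. The sketch below is a hypothetical generative representation: a small set of rewrite rules (the genome) that expands recursively into a flat assembly procedure, so one edited rule changes every copy of a repeated substructure. It is illustrative only and not the encoding used by the authors.

```python
# An illustrative (hypothetical) generative representation: L-system-style
# production rules whose expansion is an assembly procedure for a modular
# robot. Reused rules let a short genome encode repeated, hierarchical
# structure, which is the key scaling advantage described above.
rules = {                                   # the "genome": production rules
    "BODY": ["SEG", "SEG", "SEG"],          # a body is three segments
    "SEG":  ["bar", "LEG", "LEG"],          # each segment reuses LEG twice
    "LEG":  ["joint", "bar"],               # a leg is a joint plus a bar
}

def expand(symbol):
    """Recursively compile a symbol into a flat assembly procedure."""
    if symbol not in rules:                 # terminal: a build command
        return [symbol]
    out = []
    for s in rules[symbol]:
        out.extend(expand(s))
    return out

procedure = expand("BODY")
print(procedure)
# ['bar', 'joint', 'bar', 'joint', 'bar', ...] -- a constructor would execute
# these commands in simulation; evolution mutates `rules`, not the flat list.
```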

  19. Using Systematic Item Selection Methods to Improve Universal Design of Assessments. Policy Directions. Number 18

    ERIC Educational Resources Information Center

    Johnstone, Christopher; Thurlow, Martha; Moore, Michael; Altman, Jason

    2006-01-01

    The No Child Left Behind Act of 2001 (NCLB) and other recent changes in federal legislation have placed greater emphasis on accountability in large-scale testing. Included in this emphasis are regulations that require assessments to be accessible. States are accountable for the success of all students, and tests should be designed in a way that…

  20. An Observational Study for Evaluating the Effects of Interpersonal Problem-Solving Skills Training on Behavioural Dimensions

    ERIC Educational Resources Information Center

    Anliak, Sakire; Sahin, Derya

    2010-01-01

    The present observational study was designed to evaluate the effectiveness of the I Can Problem Solve (ICPS) programme on behavioural change from aggression to pro-social behaviours by using the DECB rating scale. Non-participant observation method was used to collect data in pretest-training-posttest design. It was hypothesised that the ICPS…

  1. Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dekeyser, W., E-mail: Wouter.Dekeyser@kuleuven.be; Reiter, D.; Baelmans, M.

    As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problems is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at a cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.
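
    The one-shot idea, advancing the state, adjoint, and design together instead of converging the forward problem for every design update, can be shown on a scalar toy problem. The sketch below assumes a made-up state residual r(u, q) = u - sin(q) and objective J = 0.5(u - u_t)^2 + 0.5*alpha*q^2; it is not the edge plasma formulation.

```python
# A minimal sketch of a one-shot adjoint loop on a scalar toy problem.
# State u, adjoint lam, and design q each advance one step per iteration
# instead of fully converging the forward solve per design update.
import math

u_t, alpha = 0.6, 1e-3
u, lam, q = 0.0, 0.0, 0.1
tau_u, tau_lam, tau_q = 0.5, 0.5, 0.5

for it in range(200):
    r = u - math.sin(q)                      # forward residual
    u -= tau_u * r                           # one relaxation step of the state
    lam += tau_lam * (-(u - u_t) - lam)      # one step toward adjoint solution
    dJdq = alpha * q + lam * (-math.cos(q))  # design derivative via the adjoint
    q -= tau_q * dJdq                        # design update

print(f"q = {q:.4f}, u = {u:.4f} (target {u_t}); sin(q) = {math.sin(q):.4f}")
```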

  2. Deep-Reaching Hydrodynamic Flow Confinement: Micrometer-Scale Liquid Localization for Open Substrates With Topographical Variations.

    PubMed

    Oskooei, Ali; Kaigala, Govind V

    2017-06-01

    We present a method for nonintrusive localization and reagent delivery on immersed biological samples with topographical variation on the order of hundreds of micrometers. Our technique, which we refer to as the deep-reaching hydrodynamic flow confinement (DR-HFC), is simple and passive: it relies on a deep-reaching hydrodynamic confinement delivered through a simple microfluidic probe design to perform localized microscale alterations on substrates as deep as 600 μm. Designed to scan centimeter-scale areas of biological substrates, our method passively prevents sample intrusion by maintaining a large gap between the probe and the substrate. The gap prevents collision of the probe and the substrate and reduces the shear stress experienced by the sample. We present two probe designs: linear and annular DR-HFC. Both designs comprise a reagent-injection aperture and aspiration apertures that serve to confine the reagent. We identify the design parameters affecting reagent localization and depth by DR-HFC and study their individual influence on the operation of DR-HFC numerically. Using DR-HFC, we demonstrate localized binding of antihuman immunoglobulin G (IgG) onto an activated substrate at various depths from 50 to 600 μm. DR-HFC provides a readily implementable approach for noninvasive processing of biological samples applicable to the next generation of diagnostic and bioanalytical devices.

  3. Design and construction of a DNA origami drug delivery system based on MPT64 antibody aptamer for tuberculosis treatment.

    PubMed

    Ranjbar, Reza; Hafezi-Moghadam, Mohammad Sadegh

    2016-02-01

    Despite all of the developments on infectious diseases, tuberculosis (TB) remains a cause of death among people. "Scaffolded DNA origami" is one of the most promising assembly techniques in nanotechnology for designing and constructing nano-scale drug delivery systems. Because of the global health problems of tuberculosis, the development of a potent new anti-tuberculosis drug delivery system without cross-resistance with known anti-mycobacterial agents is urgently needed. The aim of this study was to design a nano-scale drug delivery system for TB treatment using the DNA origami method. In this study, we present experimental research on a DNA drug delivery system for treating tuberculosis. TEM images were visualized with an FEI Tecnai T12 BioTWIN at 120 kV. The model was designed with the caDNAno software, and a computational prediction of the 3D solution shape and its flexibility was calculated with the CanDo server. The synthesized product was imaged using transmission electron microscopy after negative staining with uranyl formate. We constructed a multilayer 3D DNA nanostructure system by designing square-lattice geometry with the scaffolded-DNA-origami method. With changes in the lock and key sequences, we recommend that this system be used for other infectious diseases to target the pathogenic bacteria.

  4. Engineering behavior of small-scale foundation piers constructed from alternative materials

    NASA Astrophysics Data System (ADS)

    Prokudin, Maxim Mikhaylovich

    Testing small-scale prototype pier foundations to evaluate engineering behavior is an alternative to full-scale testing that facilitates testing of several piers and pier groups at relatively low cost. In this study, various pier systems and pier groups at one tenth scale were subjected to static vertical loading under controlled conditions to evaluate stiffness, bearing capacity, and group efficiency. Pier length, material properties and methods of installation were evaluated. Pier length to diameter ratios varied between four and eight. A unique soil pit with dimensions of 2.1 m in width, 1.5 m in length and 2.0 m in depth was designed to carry out this research. The test pit was filled with moisture conditioned and compacted Western Iowa loess. A special load test frame was designed and fabricated to provide up to 25,000 kg vertical reaction force for load testing. A load cell and displacement instrumentation were set up to capture the load test data. Alternative materials to conventional cement concrete were studied. The pier materials evaluated in this study included compacted aggregate, cement stabilized silt, cementitious grouts, and fiber reinforced silt. Key findings from this study demonstrated that (1) the construction method influences the behavior of aggregate piers, (2) the composition of the pier has a significant impact on the stiffness, (3) group efficiencies were found to be a function of pier length and pier material, and (4) in comparison to full-scale testing, the scaled piers produced a stiffer load-settlement response, while bearing capacities were similar. Further, although full-scale test results were not available for all pier materials, the small-scale testing provided a means for comparing results between pier systems. Finally, duplicate pier tests for a given length and material were found to be repeatable.

  5. The Intelligent Control System and Experiments for an Unmanned Wave Glider.

    PubMed

    Liao, Yulei; Wang, Leifeng; Li, Yiming; Li, Ye; Jiang, Quanquan

    2016-01-01

    Designing the control system of an Unmanned Wave Glider (UWG) is challenging: the vehicle is weakly maneuverable, subject to large time lags and large disturbances, and difficult to describe with an accurate mathematical model. Meanwhile, completing marine environment monitoring autonomously over long time scales and large spatial scales imposes high requirements on the intelligence and reliability of the UWG. This paper focuses on the "Ocean Rambler" UWG. First, the intelligent control system architecture is designed based on the cerebrum basic function combination zone theory and a hierarchic control method. The hardware and software design of the embedded motion control system is mainly discussed, and a motion control system based on a four-layer rational behavior model is proposed. Then, combined with the line-of-sight (LOS) method, a self-adapting PID guidance law is proposed to compensate for the steady-state error in path following of the UWG caused by marine environment disturbances, especially current. Based on the S-surface control method, an improved S-surface heading controller is proposed to solve the heading control problem of this weakly maneuvering carrier under large disturbance. Finally, simulation experiments were carried out and the UWG completed autonomous path following and marine environment monitoring in sea trials. The simulation experiments and sea trial results prove that the proposed intelligent control system, guidance law, and controller have favorable control performance, and the feasibility and reliability of the designed intelligent control system of the UWG are verified.
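
    The abstract does not give the control laws, but the S-surface law common in this literature has the form u = 2/(1 + exp(-k1*e - k2*de)) - 1, a smooth saturation between -1 and 1. The sketch below applies it to toy first-order yaw dynamics with a constant disturbance; the gains, dynamics, and disturbance are assumptions, and the residual steady-state error it leaves is what an adaptive guidance term would compensate.

```python
# A minimal sketch of an S-surface heading controller; not the paper's
# improved controller or its self-adapting PID guidance law.
import math

def s_surface(e, de, k1=2.0, k2=1.0):
    """Normalized rudder command from heading error e and error rate de."""
    return 2.0 / (1.0 + math.exp(-k1 * e - k2 * de)) - 1.0

# Toy first-order yaw dynamics with a constant current disturbance.
psi, r, psi_ref, dt = 0.0, 0.0, math.radians(60), 0.1
for step in range(300):
    e = psi_ref - psi
    u = s_surface(e, -r)                   # de ~ -r for a constant reference
    r += dt * (-0.8 * r + 1.5 * u + 0.05)  # yaw rate; 0.05 = disturbance
    psi += dt * r
print(f"final heading error: {math.degrees(psi_ref - psi):.2f} deg")
```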

  6. The Intelligent Control System and Experiments for an Unmanned Wave Glider

    PubMed Central

    Liao, Yulei; Wang, Leifeng; Li, Yiming; Li, Ye; Jiang, Quanquan

    2016-01-01

    Designing the control system of an Unmanned Wave Glider (UWG) is challenging: the vehicle is weakly maneuverable, subject to large time lags and large disturbances, and difficult to describe with an accurate mathematical model. Meanwhile, completing marine environment monitoring autonomously over long time scales and large spatial scales imposes high requirements on the intelligence and reliability of the UWG. This paper focuses on the “Ocean Rambler” UWG. First, the intelligent control system architecture is designed based on the cerebrum basic function combination zone theory and a hierarchic control method. The hardware and software design of the embedded motion control system is mainly discussed, and a motion control system based on a four-layer rational behavior model is proposed. Then, combined with the line-of-sight (LOS) method, a self-adapting PID guidance law is proposed to compensate for the steady-state error in path following of the UWG caused by marine environment disturbances, especially current. Based on the S-surface control method, an improved S-surface heading controller is proposed to solve the heading control problem of this weakly maneuvering carrier under large disturbance. Finally, simulation experiments were carried out and the UWG completed autonomous path following and marine environment monitoring in sea trials. The simulation experiments and sea trial results prove that the proposed intelligent control system, guidance law, and controller have favorable control performance, and the feasibility and reliability of the designed intelligent control system of the UWG are verified. PMID:28005956

  7. Spatially telescoping measurements for improved characterization of groundwater-surface water interactions

    USGS Publications Warehouse

    Kikuchi, Colin; Ferre, Ty P.A.; Welker, Jeffery M.

    2012-01-01

    The suite of measurement methods available to characterize fluxes between groundwater and surface water is rapidly growing. However, there are few studies that examine approaches to design of field investigations that include multiple methods. We propose that performing field measurements in a spatially telescoping sequence improves measurement flexibility and accounts for nested heterogeneities while still allowing for parsimonious experimental design. We applied this spatially telescoping approach in a study of ground water-surface water (GW-SW) interaction during baseflow conditions along Lucile Creek, located near Wasilla, Alaska. Catchment-scale data, including channel geomorphic indices and hydrogeologic transects, were used to screen areas of potentially significant GW-SW exchange. Specifically, these data indicated increasing groundwater contribution from a deeper regional aquifer along the middle to lower reaches of the stream. This initial assessment was tested using reach-scale estimates of groundwater contribution during baseflow conditions, including differential discharge measurements and the use of chemical tracers analyzed in a three-component mixing model. The reach-scale measurements indicated a large increase in discharge along the middle reaches of the stream accompanied by a shift in chemical composition towards a regional groundwater end member. Finally, point measurements of vertical water fluxes -- obtained using seepage meters as well as temperature-based methods -- were used to evaluate spatial and temporal variability of GW-SW exchange within representative reaches. The spatial variability of upward fluxes, estimated using streambed temperature mapping at the sub-reach scale, was observed to vary in relation to both streambed composition and the magnitude of groundwater contribution from differential discharge measurements. The spatially telescoping approach improved the efficiency of this field investigation. Beginning our assessment with catchment-scale data allowed us to identify locations of GW-SW exchange, plan measurements at representative field sites and improve our interpretation of reach-scale and point-scale measurements.
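
    The three-component mixing calculation reduces to a small linear system: one row for the water balance and one row per tracer. The sketch below solves for end-member fractions with illustrative tracer values (not Lucile Creek data); the end members and tracers are assumptions for demonstration.

```python
# A minimal sketch of a three-component mixing model: solve the water and
# tracer mass balances for the fractions of regional groundwater, shallow
# groundwater, and upstream surface water in a stream sample.
import numpy as np

# Rows: water balance, tracer 1 (e.g. specific conductance), tracer 2 (d18O).
# Columns: regional GW, shallow GW, surface water end members.
A = np.array([[1.0,   1.0,   1.0],
              [450.0, 180.0, 90.0],
              [-19.5, -17.0, -15.0]])
b = np.array([1.0, 220.0, -17.2])   # stream sample; fractions must sum to 1

fractions = np.linalg.solve(A, b)
for name, f in zip(["regional GW", "shallow GW", "surface"], fractions):
    print(f"{name}: {f:.2f}")
```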

  8. Methods in Astronomical Image Processing

    NASA Astrophysics Data System (ADS)

    Jörsäter, S.

    Contents: A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future.
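
    As a minimal illustration of the CCD reduction steps listed (bias subtraction, dark subtraction, flat fielding, sky subtraction), the sketch below reduces a synthetic frame with numpy; the calibration frames are invented stand-ins for real data.

```python
# A minimal sketch of the basic CCD reduction chain, on synthetic frames.
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64)
bias = 100 + rng.normal(0, 1, shape)        # bias frame
dark = 5 + rng.normal(0, 0.5, shape)        # dark current for the exposure
flat = rng.normal(1.0, 0.02, shape)         # normalized flat field
raw = bias + dark + flat * (200 + rng.normal(0, 5, shape))

science = (raw - bias - dark) / flat        # bias/dark subtraction, flat fielding
science -= np.median(science)               # crude sky subtraction
print("residual mean after reduction:", float(science.mean()))
```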

  9. Observation of force-detected nuclear magnetic resonance in a homogeneous field

    PubMed Central

    Madsen, L. A.; Leskowitz, G. M.; Weitekamp, D. P.

    2004-01-01

    We report the experimental realization of BOOMERANG (better observation of magnetization, enhanced resolution, and no gradient), a sensitive and general method of magnetic resonance. The prototype millimeter-scale NMR spectrometer shows signal and noise levels in agreement with the design principles. We present 1H and 19F NMR in both solid and liquid samples, including time-domain Fourier transform NMR spectroscopy, multiple-pulse echoes, and heteronuclear J spectroscopy. By measuring a 1H-19F J coupling, this last experiment accomplishes chemically specific spectroscopy with force-detected NMR. In BOOMERANG, an assembly of permanent magnets provides a homogeneous field throughout the sample, while a harmonically suspended part of the assembly, a detector, is mechanically driven by spin-dependent forces. By placing the sample in a homogeneous field, signal dephasing by diffusion in a field gradient is made negligible, enabling application to liquids, in contrast to other force-detection methods. The design appears readily scalable to μm-scale samples where it should have sensitivity advantages over inductive detection with microcoils and where it holds great promise for application of magnetic resonance in biology, chemistry, physics, and surface science. We briefly discuss extensions of the BOOMERANG method to the μm and nm scales. PMID:15326302

  10. A Neural Network based Early Earthquake Warning model in the California region

    NASA Astrophysics Data System (ADS)

    Xiao, H.; MacAyeal, D. R.

    2016-12-01

    Early earthquake warning systems could reduce loss of life and other economic impacts resulting from natural disasters or man-made calamities. Current systems could be further enhanced by neural network methods. A three-layer neural network model combined with an onsite method was deployed in this paper to improve the recognition time and detection time for large-scale earthquakes. The three-layer neural network early earthquake warning model adopted a vector feature design for sample events that occurred within a 150 km radius of the epicenters. The dataset used in this paper contained both destructive events and small-scale events; all data were extracted from the IRIS database to properly train the model. In the training process, the backpropagation algorithm was used to adjust the weight matrices and bias matrices during each iteration. The information in all three channels of the seismometers served as the source in this model. Designed tests indicated that this model could correctly identify the scale of approximately 90 percent of events, and the early detection could provide informative evidence for public authorities to make further decisions. This indicates that neural network models have the potential to strengthen current early warning systems, since the onsite method may greatly reduce response time and save more lives in such disasters.
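
    A minimal sketch of the three-layer architecture with backpropagation is given below; the feature vectors are random stand-ins for the seismic-channel features, and the layer sizes and learning rate are assumptions rather than the paper's configuration.

```python
# A minimal three-layer feedforward classifier trained with backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                         # waveform feature vectors
y = (X[:, :3].sum(axis=1) > 0).astype(float)[:, None]  # toy "large event" label

W1, b1 = rng.normal(0, 0.5, (12, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

for epoch in range(500):
    h = np.tanh(X @ W1 + b1)                 # hidden layer
    p = sig(h @ W2 + b2)                     # output probability
    # Backpropagation of the cross-entropy loss gradient.
    d2 = (p - y) / len(X)
    d1 = (d2 @ W2.T) * (1 - h ** 2)
    W2 -= 1.0 * h.T @ d2;  b2 -= 1.0 * d2.sum(0)
    W1 -= 1.0 * X.T @ d1;  b1 -= 1.0 * d1.sum(0)

print("train accuracy:", float(((p > 0.5) == (y > 0.5)).mean()))
```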

  11. A Technique for the Assessment of Flight Operability Characteristics of Human Rated Spacecraft

    NASA Technical Reports Server (NTRS)

    Crocker, Alan

    2010-01-01

    In support of new human rated spacecraft development programs, the Mission Operations Directorate at NASA Johnson Space Center has implemented a formal method for the assessment of spacecraft operability. This "Spacecraft Flight Operability Assessment Scale" defines six key themes of flight operability, with guiding principles and goals stated for each factor. A standardized rating technique provides feedback that is useful to the operations, design and program management communities. Applicability of this concept across the program structure and life cycle is addressed. Examples of operationally desirable and undesirable spacecraft design characteristics are provided, as is a sample of the assessment scale product.

  12. Singular perturbations and time scales in the design of digital flight control systems

    NASA Technical Reports Server (NTRS)

    Naidu, Desineni S.; Price, Douglas B.

    1988-01-01

    The results of applying the methodology of Singular Perturbations and Time Scales (SPATS) to the control of digital flight systems are presented. A block diagonalization method is described to decouple a full order, two time (slow and fast) scale, discrete control system into reduced order slow and fast subsystems. Basic properties and numerical aspects of the method are discussed. A composite, closed-loop, suboptimal control system is constructed as the sum of the slow and fast optimal feedback controls. The application of this technique to an aircraft model shows close agreement between the exact solutions and the decoupled (or composite) solutions. The main advantage of the method is the considerable reduction in the overall computational requirements for the evaluation of optimal guidance and control laws. The significance of the results is that they can be used for real-time, onboard simulation. A brief survey is also presented of digital flight systems.

  13. Medium-scale traveling ionospheric disturbances by three-dimensional ionospheric GPS tomography

    NASA Astrophysics Data System (ADS)

    Chen, C. H.; Saito, A.; Lin, C. H.; Yamamoto, M.; Suzuki, S.; Seemala, G. K.

    2016-02-01

    In this study, we develop a three-dimensional ionospheric tomography with ground-based Global Positioning System (GPS) total electron content observations. Because of the geometric limitation of GPS observation paths, it is difficult to solve the ill-posed inverse problem for the ionospheric electron density. Different from methods given in previous studies, we consider an algorithm combining the least-squares method with a constraint condition, in which the gradient of electron density tends to be smooth in the horizontal direction and steep in the vicinity of the ionospheric F2 peak. This algorithm is designed to be independent of any ionospheric or plasmaspheric electron density model as the initial condition. An observation system simulation experiment method is applied to evaluate the performance of the GPS ionospheric tomography in detecting ionospheric electron density perturbations at a scale size of around 200 km in wavelength, such as medium-scale traveling ionospheric disturbances.
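
    The constrained least-squares idea can be shown in one dimension: minimize ||Gx - d||^2 + mu||Dx||^2, where G integrates density along ray paths and D is a roughness (first-difference) operator. The sketch below uses random sparse rays over a 1-D grid; the 3-D geometry and F2-peak-dependent weighting of the paper are not reproduced.

```python
# A minimal 1-D sketch of tomography by regularized least squares.
import numpy as np

rng = np.random.default_rng(2)
n = 40                                               # grid cells
x_true = np.exp(-((np.arange(n) - 22) / 6.0) ** 2)   # a smooth "layer"
G = (rng.random((60, n)) < 0.15).astype(float)       # 60 sparse "ray" rows
d = G @ x_true + rng.normal(0, 0.01, 60)             # slant "TEC" data

D = np.diff(np.eye(n), axis=0)      # first-difference roughness operator
mu = 0.5
x = np.linalg.solve(G.T @ G + mu * D.T @ D, G.T @ d)
print("recovery error:",
      float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```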

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierson, L.G.; Witzke, E.L.

    This effort studied the integration of innovative methods of key management, crypto synchronization, and key agility while scaling encryption speed. Viability of these methods for encryption of ATM cell payloads at the SONET OC-192 data rate (10 Gb/s), and for operation at OC-48 rates (2.5 Gb/s), was shown. An SNL-developed pipelined DES design was adapted for the encryption of ATM cells. A proof-of-principle prototype circuit board containing 11 Electronically Programmable Logic Devices (each holding the equivalent of 100,000 gates) was designed, built, and used to prototype a high speed encryptor.

  15. Bioinspired principles for large-scale networked sensor systems: an overview.

    PubMed

    Jacobsen, Rune Hylsberg; Zhang, Qi; Toftegaard, Thomas Skjødeberg

    2011-01-01

    Biology has often been used as a source of inspiration in computer science and engineering. Bioinspired principles have found their way into network node design and research due to the appealing analogies between biological systems and large networks of small sensors. This paper provides an overview of bioinspired principles and methods such as swarm intelligence, natural time synchronization, artificial immune system and intercellular information exchange applicable for sensor network design. Bioinspired principles and methods are discussed in the context of routing, clustering, time synchronization, optimal node deployment, localization and security and privacy.

  16. Airframe noise: A design and operating problem

    NASA Technical Reports Server (NTRS)

    Hardin, J. C.

    1976-01-01

    A critical assessment of the state of the art in airframe noise is presented. Full-scale data on the intensity, spectra, and directivity of this noise source are evaluated in light of the comprehensive theory developed by Ffowcs Williams and Hawkings. Vibration of panels on the aircraft is identified as a possible additional source of airframe noise. The present understanding and methods for prediction of other component sources - airfoils, struts, and cavities - are discussed. Operating problems associated with airframe noise as well as potential design methods for airframe noise reduction are identified.

  17. Multi Length Scale Finite Element Design Framework for Advanced Woven Fabrics

    NASA Astrophysics Data System (ADS)

    Erol, Galip Ozan

    Woven fabrics are integral parts of many engineering applications spanning from personal protective garments to surgical scaffolds. They provide a wide range of opportunities in designing advanced structures because of their high tenacity, flexibility, high strength-to-weight ratios and versatility. These advantages result from their inherent multi scale nature where the filaments are bundled together to create yarns while the yarns are arranged into different weave architectures. Their highly versatile nature opens up potential for a wide range of mechanical properties which can be adjusted based on the application. While woven fabrics are viable options for design of various engineering systems, being able to understand the underlying mechanisms of the deformation and associated highly nonlinear mechanical response is important and necessary. However, the multiscale nature and relationships between these scales make the design process involving woven fabrics a challenging task. The objective of this work is to develop a multiscale numerical design framework using experimentally validated mesoscopic and macroscopic length scale approaches by identifying important deformation mechanisms and recognizing the nonlinear mechanical response of woven fabrics. This framework is exercised by developing mesoscopic length scale constitutive models to investigate plain weave fabric response under a wide range of loading conditions. A hyperelastic transversely isotropic yarn material model with transverse material nonlinearity is developed for woven yarns (commonly used in personal protection garments). The material properties/parameters are determined through an inverse method where unit cell finite element simulations are coupled with experiments. The developed yarn material model is validated by simulating full scale uniaxial tensile, bias extension and indentation experiments, and comparing to experimentally observed mechanical response and deformation mechanisms. Moreover, mesoscopic unit cell finite elements are coupled with a design-of-experiments method to systematically identify the important yarn material properties for the macroscale response of various weave architectures. To demonstrate the macroscopic length scale approach, two new material models for woven fabrics were developed. The Planar Material Model (PMM) utilizes two important deformation mechanisms in woven fabrics: (1) yarn elongation, and (2) relative yarn rotation due to shear loads. The yarns' uniaxial tensile response is modeled with a nonlinear spring using constitutive relations while a nonlinear rotational spring is implemented to define fabric's shear stiffness. The second material model, Sawtooth Material Model (SMM) adopts the sawtooth geometry while recognizing the biaxial nature of woven fabrics by implementing the interactions between the yarns. Material properties/parameters required by both PMM and SMM can be directly determined from standard experiments. Both macroscopic material models are implemented within an explicit finite element code and validated by comparing to the experiments. Then, the developed macroscopic material models are compared under various loading conditions to determine their accuracy. Finally, the numerical models developed in the mesoscopic and macroscopic length scales are linked thus demonstrating the new systematic design framework involving linked mesoscopic and macroscopic length scale modeling approaches. The approach is demonstrated with both Planar and Sawtooth Material Models and the simulation results are verified by comparing the results obtained from meso and macro models.

  18. Design, testing and emplacement of sand-bentonite for the construction of a gas-permeable seal test (gast)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teodori, Sven-Peter; Ruedi, Jorg; Reinhold, Matthias

    2013-07-01

    The main aim of a gas-permeable seal is to increase the gas transport capacity of the backfilled underground structures without compromising the radionuclide retention capacity of the engineered barrier system or the host rock. Such a seal, proposed by NAGRA as part of the 'Engineered Gas Transport System' in a L/ILW repository, considers specially designed backfill and sealing materials such as sand/bentonite (S/B) mixtures with a bentonite content of 20-30%. NAGRA's RD&D plan foresees demonstrating the construction and performance of repository seals and improving the understanding and the database for reliably predicting water and gas transport through these systems. The fluid flow and gas transport properties of these backfills have been determined at the laboratory scale, and through modelling, the maximum gas pressures in the near field of a repository system and the gas flow rates have been evaluated. Within this context, the Gas-permeable Seal Test (GAST) was constructed at the Grimsel Test Site (GTS) to validate the effective functioning of gas-permeable seals at realistic scale. The intrinsic permeability of such seals should be on the order of 10⁻¹⁸ m². Because the construction of S/B seals is not common practice for construction companies, a stepwise approach was followed to evaluate different construction and quality assurance methods. As a first step, an investigation campaign with simple tests in the laboratory and in the field, followed by 1:1 scale pre-tests at GTS, was performed. Through this gradual increase of the degree of complexity, practical experience was gained and confidence in the methods and procedures to be used was built, which allowed reliably producing and working with S/B mixtures at a realistic scale. During the whole pre-testing phase, a quality assurance (QA) programme for S/B mixtures was developed and different methods were assessed. They helped to evaluate and choose appropriate emplacement techniques and methodologies to achieve the target S/B dry density of 1.70 g/cm³, which results in the desired intrinsic permeability throughout the experiment. The final QA methodology was targeted at engineering measures to decide if the work can proceed, and at producing a high resolution material properties database for future water and gas transport modelling activities. The different applied QA techniques included standard core cutter tests, the application of neutron-gamma (Troxler) probes and two mass balance methods (2D and 3D). The methods, looking at different representative scales, provided only slightly different results and showed that the average density of the emplaced S/B plug was between 1.65 and 1.73 g/cm³. Spatial variability of dry densities was observed at the decimeter scale. Overall, the pre-testing and QA programme performed for the GAST project demonstrated how the given design criteria and requirements can be met by appropriately planning and designing the material emplacement. (authors)

  19. A new method to estimate local pitch angles in spiral galaxies: Application to spiral arms and feathers in M81 and M51

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puerari, Ivânio; Elmegreen, Bruce G.; Block, David L., E-mail: puerari@inaoep.mx

    2014-12-01

    We examine 8 μm IRAC images of the grand design two-arm spiral galaxies M81 and M51 using a new method whereby pitch angles are locally determined as a function of scale and position, in contrast to traditional Fourier transform spectral analyses which fit to average pitch angles for whole galaxies. The new analysis is based on a correlation between pieces of a galaxy in circular windows of (lnR, θ) space and logarithmic spirals with various pitch angles. The diameter of the windows is varied to study different scales. The result is a best-fit pitch angle to the spiral structure as a function of position and scale, or a distribution function of pitch angles as a function of scale for a given galactic region or area. We apply the method to determine the distribution of pitch angles in the arm and interarm regions of these two galaxies. In the arms, the method reproduces the known pitch angles for the main spirals on a large scale, but also shows higher pitch angles on smaller scales resulting from dust feathers. For the interarms, there is a broad distribution of pitch angles representing the continuation and evolution of the spiral arm feathers as the flow moves into the interarm regions. Our method shows a multiplicity of spiral structures on different scales, as expected from gas flow processes in a gravitating, turbulent and shearing interstellar medium. We also present results for M81 using classical 1D and 2D Fourier transforms, together with a new correlation method, which shows good agreement with conventional 2D Fourier transforms.

  20. Giga-voxel computational morphogenesis for structural design

    NASA Astrophysics Data System (ADS)

    Aage, Niels; Andreassen, Erik; Lazarov, Boyan S.; Sigmund, Ole

    2017-10-01

    In the design of industrial products ranging from hearing aids to automobiles and aeroplanes, material is distributed so as to maximize the performance and minimize the cost. Historically, human intuition and insight have driven the evolution of mechanical design, recently assisted by computer-aided design approaches. The computer-aided approach known as topology optimization enables unrestricted design freedom and shows great promise with regard to weight savings, but its applicability has so far been limited to the design of single components or simple structures, owing to the resolution limits of current optimization methods. Here we report a computational morphogenesis tool, implemented on a supercomputer, that produces designs with giga-voxel resolution—more than two orders of magnitude higher than previously reported. Such resolution provides insights into the optimal distribution of material within a structure that were hitherto unachievable owing to the challenges of scaling up existing modelling and optimization frameworks. As an example, we apply the tool to the design of the internal structure of a full-scale aeroplane wing. The optimized full-wing design has unprecedented structural detail at length scales ranging from tens of metres to millimetres and, intriguingly, shows remarkable similarity to naturally occurring bone structures in, for example, bird beaks. We estimate that our optimized design corresponds to a reduction in mass of 2-5 per cent compared to currently used aeroplane wing designs, which translates into a reduction in fuel consumption of about 40-200 tonnes per year per aeroplane. Our morphogenesis process is generally applicable, not only to mechanical design, but also to flow systems, antennas, nano-optics and micro-systems.

  1. Giga-voxel computational morphogenesis for structural design.

    PubMed

    Aage, Niels; Andreassen, Erik; Lazarov, Boyan S; Sigmund, Ole

    2017-10-04

    In the design of industrial products ranging from hearing aids to automobiles and aeroplanes, material is distributed so as to maximize the performance and minimize the cost. Historically, human intuition and insight have driven the evolution of mechanical design, recently assisted by computer-aided design approaches. The computer-aided approach known as topology optimization enables unrestricted design freedom and shows great promise with regard to weight savings, but its applicability has so far been limited to the design of single components or simple structures, owing to the resolution limits of current optimization methods. Here we report a computational morphogenesis tool, implemented on a supercomputer, that produces designs with giga-voxel resolution, more than two orders of magnitude higher than previously reported. Such resolution provides insights into the optimal distribution of material within a structure that were hitherto unachievable owing to the challenges of scaling up existing modelling and optimization frameworks. As an example, we apply the tool to the design of the internal structure of a full-scale aeroplane wing. The optimized full-wing design has unprecedented structural detail at length scales ranging from tens of metres to millimetres and, intriguingly, shows remarkable similarity to naturally occurring bone structures in, for example, bird beaks. We estimate that our optimized design corresponds to a reduction in mass of 2-5 per cent compared to currently used aeroplane wing designs, which translates into a reduction in fuel consumption of about 40-200 tonnes per year per aeroplane. Our morphogenesis process is generally applicable, not only to mechanical design, but also to flow systems, antennas, nano-optics and micro-systems.

  2. Performance of an Abbreviated Version of the Lubben Social Network Scale among Three European Community-Dwelling Older Adult Populations

    ERIC Educational Resources Information Center

    Lubben, James; Blozik, Eva; Gillmann, Gerhard; Iliffe, Steve; von Renteln-Kruse, Wolfgang; Beck, John C.; Stuck, Andreas E.

    2006-01-01

    Purpose: There is a need for valid and reliable short scales that can be used to assess social networks and social supports and to screen for social isolation in older persons. Design and Methods: The present study is a cross-national and cross-cultural evaluation of the performance of an abbreviated version of the Lubben Social Network Scale…

  3. EFFECT OF FLOW CHARACTERISTICS ON DO DISTRIBUTION IN A FULL SCALE OXIDATION DITCH WITH DIFFUSED AERATION AND VERTICAL FLOW BOOSTERS

    NASA Astrophysics Data System (ADS)

    Nakamachi, Kazuo; Fujiwara, Taku; Kawaguchi, Yukio; Tsuno, Hiroshi

    The high-loading-rate oxidation ditch (OD) system with dual dissolved oxygen (DO) control has been developed for advanced wastewater treatment and cost saving. For scale-up to the real scale, clean water experiments were conducted in a full-scale oxidation ditch with diffused aeration and vertical flow boosters to examine how design and operational factors, including the flow characteristics and the oxygen supply capability, affect the dual DO control. In this study, the flow characteristics of the OD channel were analyzed using the tank number and the circulation ratio as parameters. The analysis showed the complicated flow characteristics of the OD channel, which shifted transiently between plug flow and completely mixed flow. Based on tank numbers of N = 65-100 obtained from the tracer tests, a DO mass balance model was constructed, and an accurate method for estimating the overall oxygen transfer coefficients was proposed. The potential error of the conventional method under specific conditions was indicated. In addition, the effect of the flow characteristics on the design and operational parameters of the dual DO control, including the circulation time and the DO profile, was clarified.
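
    A tanks-in-series DO balance of the kind described can be sketched directly: N tanks in a loop, aeration applied only in the diffuser zone, and uniform oxygen uptake elsewhere. All parameter values below are illustrative assumptions, not the paper's.

```python
# A minimal tanks-in-series DO mass balance for a circulating oxidation ditch.
import numpy as np

N = 80                      # tank number (tracer tests gave N = 65-100)
r = 160.0                   # Q/V per tank, 1/h (loop circulation ~0.5 h)
kla, C_s = 8.0, 9.0         # aeration coefficient (1/h), DO saturation (mg/L)
resp = 2.0                  # uniform oxygen uptake rate (mg/L/h)
aer = slice(0, 5)           # diffused-aeration zone: first 5 tanks

C = np.zeros(N)
dt = 0.005                  # h
for _ in range(40000):      # integrate to steady state (~200 h)
    dC = r * (np.roll(C, 1) - C) - resp    # advection around the loop, uptake
    dC[aer] += kla * (C_s - C[aer])        # oxygen transfer in aerated tanks
    C = np.maximum(C + dt * dC, 0.0)

print(f"DO leaving aeration zone: {C[4]:.2f} mg/L, end of loop: {C[-1]:.2f} mg/L")
```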

  4. Teaching Quality in Math Class: The Development of a Scale and the Analysis of Its Relationship with Engagement and Achievement

    PubMed Central

    Leon, Jaime; Medina-Garrido, Elena; Núñez, Juan L.

    2017-01-01

    Math achievement and engagement declines in secondary education; therefore, educators are faced with the challenge of engaging students to avoid school failure. Within self-determination theory, we address the need to assess comprehensively student perceptions of teaching quality that predict engagement and achievement. In study one we tested, in a sample of 548 high school students, a preliminary version of a scale to assess nine factors: teaching for relevance, acknowledge negative feelings, participation encouragement, controlling language, optimal challenge, focus on the process, class structure, positive feedback, and caring. In the second study, we analyzed the scale’s reliability and validity in a sample of 1555 high school students. The scale showed evidence of reliability, and with regard to criterion validity, at the classroom level, teaching quality was a predictor of behavioral engagement, and higher grades were observed in classes where students, as a whole, displayed more behavioral engagement. At the within level, behavioral engagement was associated with achievement. We not only provide a reliable and valid method to assess teaching quality, but also a method to design interventions, these could be designed based on the scale items to encourage students to persist and display more engagement on school duties, which in turn bolsters student achievement. PMID:28701964

  5. Experimental measurement of preferences in health and healthcare using best-worst scaling: an overview.

    PubMed

    Mühlbacher, Axel C; Kaczynski, Anika; Zweifel, Peter; Johnson, F Reed

    2016-12-01

    Best-worst scaling (BWS), also known as maximum-difference scaling, is a multiattribute approach to measuring preferences. BWS aims at the analysis of preferences regarding a set of attributes, their levels or alternatives. It is a stated-preference method based on the assumption that respondents are capable of making judgments regarding the best and the worst (or the most and least important, respectively) out of three or more elements of a choice-set. As is true of discrete choice experiments (DCE) generally, BWS avoids the known weaknesses of rating and ranking scales while holding the promise of generating additional information by making respondents choose twice, namely the best as well as the worst criteria. A systematic literature review found 53 BWS applications in health and healthcare. This article expounds possibilities of application, the underlying theoretical concepts and the implementation of BWS in its three variants: 'object case', 'profile case', 'multiprofile case'. This paper contains a survey of BWS methods and revolves around study design, experimental design, and data analysis. Moreover, the article discusses the strengths and weaknesses of the three types of BWS distinguished and offers an outlook. A companion paper focuses on special issues of theory and statistical inference confronting BWS in preference measurement.
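
    For the 'object case', a common first-pass analysis is the best-minus-worst count: how often each object is chosen best minus how often it is chosen worst, normalized by how often it appeared. The sketch below computes these scores on invented choice data.

```python
# A minimal 'object case' BWS analysis via standardized best-minus-worst
# scores; the objects and responses are invented for illustration.
from collections import Counter

objects = ["waiting time", "cost", "staff empathy", "information", "choice"]
# (shown set, best choice, worst choice) for a handful of tasks:
tasks = [
    (["waiting time", "cost", "staff empathy"], "staff empathy", "cost"),
    (["cost", "information", "choice"], "information", "cost"),
    (["waiting time", "information", "choice"], "information", "choice"),
    (["waiting time", "cost", "staff empathy"], "staff empathy", "waiting time"),
]

best, worst, shown = Counter(), Counter(), Counter()
for shown_set, b, w in tasks:
    shown.update(shown_set)
    best[b] += 1
    worst[w] += 1

for obj in objects:
    if shown[obj]:
        score = (best[obj] - worst[obj]) / shown[obj]  # standardized B-W score
        print(f"{obj:>14}: {score:+.2f}")
```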

  6. Acoustic resonance in MEMS scale cylindrical tubes with side branches

    NASA Astrophysics Data System (ADS)

    Schill, John F.; Holthoff, Ellen L.; Pellegrino, Paul M.; Marcus, Logan S.

    2014-05-01

    Photoacoustic spectroscopy (PAS) is a useful monitoring technique that is well suited for trace gas detection. This method routinely exhibits detection limits at the parts-per-million (ppm) or parts-per-billion (ppb) level for gaseous samples. PAS also possesses favorable detection characteristics when the system dimensions are scaled to a microelectromechanical system (MEMS) design. One of the central issues related to sensor miniaturization is optimization of the photoacoustic cell geometry, especially in relationship to high acoustical amplification and reduced system noise. Previous work relied on a multiphysics approach to analyze the resonance structures of the MEMS scale photoacoustic cell. This technique was unable to provide an accurate model of the acoustic structure. In this paper we describe a method that relies on techniques developed from musical instrument theory and electronic transmission line matrix methods to describe cylindrical acoustic resonant cells with side branches of various configurations. Experimental results are presented that demonstrate the ease and accuracy of this method. All experimental results were within 2% of those predicted by this theory.
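
    The transmission-line treatment can be sketched for a lossless cylindrical tube with one open side branch: each section transforms its load impedance as Zin = Z0(ZL + jZ0 tan kL)/(Z0 + jZL tan kL), the branch loads the junction in parallel, and resonances appear as minima of |Zin|. Dimensions below are illustrative, not the MEMS cell geometry.

```python
# A minimal transmission-line model of a tube with one open side branch.
import numpy as np

c, rho = 343.0, 1.2                           # sound speed, air density

def z0(radius):
    return rho * c / (np.pi * radius ** 2)    # characteristic impedance

def zin(ZL, Z0, k, L):
    """Input impedance of a lossless tube section terminated by ZL."""
    t = np.tan(k * L)
    return Z0 * (ZL + 1j * Z0 * t) / (Z0 + 1j * ZL * t)

f = np.linspace(200.0, 5000.0, 5000)
k = 2 * np.pi * f / c
Zm, Zb = z0(0.005), z0(0.002)                 # main tube and branch radii

Z_branch = zin(1e-9, Zb, k, 0.02)             # open-ended side branch (ZL ~ 0)
Z_right = zin(1e-9, Zm, k, 0.05)              # open far section of main tube
Z_tee = Z_right * Z_branch / (Z_right + Z_branch)  # parallel at the junction
Z_in = zin(Z_tee, Zm, k, 0.03)                # section from source to the tee

mag = np.abs(Z_in)
is_min = np.r_[False, (mag[1:-1] < mag[:-2]) & (mag[1:-1] < mag[2:]), False]
print("resonances (Hz):", np.round(f[is_min][:5], 1))
```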

  7. [A method for obtaining redshifts of quasars based on wavelet multi-scaling feature matching].

    PubMed

    Liu, Zhong-Tian; Li, Xiang-Ru; Wu, Fu-Chao; Zhao, Yong-Heng

    2006-09-01

    The LAMOST project, the world's largest sky survey project being implemented in China, is expected to obtain 10^5 quasar spectra. The main objective of the present article is to explore methods that can be used to estimate the redshifts of quasar spectra from LAMOST. Firstly, the features of the broad emission lines are extracted from the quasar spectra to overcome the disadvantage of low signal-to-noise ratio. Then the redshifts of quasar spectra can be estimated by using the multi-scaling feature matching. The experiment with the 15,715 quasars from the SDSS DR2 shows that the correct rate of redshift estimated by the method is 95.13% within an error range of 0.02. This method was designed to obtain the redshifts of quasar spectra with relative flux and a low signal-to-noise ratio, which is applicable to the LAMOST data and helps to study quasars and the large-scale structure of the universe etc.
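
    Once broad emission lines are extracted, the redshift follows from z = λ_obs/λ_rest − 1 for a consistent line identification. The sketch below scores candidate identifications by how many extracted line centers they explain; the wavelengths are illustrative, and this is a simplification of the wavelet multi-scaling matching described.

```python
# A minimal sketch of redshift estimation from matched emission-line features.
rest = {"Lya": 1215.7, "CIV": 1549.1, "CIII]": 1908.7, "MgII": 2798.8}
observed = [2796.1, 3563.0, 4390.0]      # extracted line centers (Angstrom)

best_z, best_n = None, 0
for name, lam0 in rest.items():
    z = observed[0] / lam0 - 1.0         # redshift hypothesis from first line
    if z < 0:
        continue
    # count how many observed lines some rest-frame line explains at this z
    n = sum(any(abs(obs / (1 + z) - l0) < 5.0 for l0 in rest.values())
            for obs in observed)
    if n > best_n:
        best_z, best_n = z, n

print(f"estimated z = {best_z:.3f} ({best_n} lines matched)")
```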

  8. Accounting for Scale Heterogeneity in Healthcare-Related Discrete Choice Experiments when Comparing Stated Preferences: A Systematic Review.

    PubMed

    Wright, Stuart J; Vass, Caroline M; Sim, Gene; Burton, Michael; Fiebig, Denzil G; Payne, Katherine

    2018-02-28

    Scale heterogeneity, or differences in the error variance of choices, may account for a significant amount of the observed variation in the results of discrete choice experiments (DCEs) when comparing preferences between different groups of respondents. The aim of this study was to identify if, and how, scale heterogeneity has been addressed in healthcare DCEs that compare the preferences of different groups. A systematic review identified all healthcare DCEs published between 1990 and February 2016. The full-text of each DCE was then screened to identify studies that compared preferences using data generated from multiple groups. Data were extracted and tabulated on year of publication, samples compared, tests for scale heterogeneity, and analytical methods to account for scale heterogeneity. Narrative analysis was used to describe if, and how, scale heterogeneity was accounted for when preferences were compared. A total of 626 healthcare DCEs were identified. Of these 199 (32%) aimed to compare the preferences of different groups specified at the design stage, while 79 (13%) compared the preferences of groups identified at the analysis stage. Of the 278 included papers, 49 (18%) discussed potential scale issues, 18 (7%) used a formal method of analysis to account for scale between groups, and 2 (1%) accounted for scale differences between preference groups at the analysis stage. Scale heterogeneity was present in 65% (n = 13) of studies that tested for it. Analytical methods to test for scale heterogeneity included coefficient plots (n = 5, 2%), heteroscedastic conditional logit models (n = 6, 2%), Swait and Louviere tests (n = 4, 1%), generalised multinomial logit models (n = 5, 2%), and scale-adjusted latent class analysis (n = 2, 1%). Scale heterogeneity is a prevalent issue in healthcare DCEs. Despite this, few published DCEs have discussed such issues, and fewer still have used formal methods to identify and account for the impact of scale heterogeneity. The use of formal methods to test for scale heterogeneity should be used, otherwise the results of DCEs potentially risk producing biased and potentially misleading conclusions regarding preferences for aspects of healthcare.

  9. Simulation Methods for Optics and Electromagnetics in Complex Geometries and Extreme Nonlinear Regimes with Disparate Scales

    DTIC Science & Technology

    2014-09-30

    Software developed with this project support. S1: Cork School 2013: I. UPPEcore simulator design and usage, simulation examples; II. Nonlinear pulse propagation; 08/28/13 - 08/02/13, University College Cork, Ireland. S2: ACMS MURI School 2012: Computational Methods for Nonlinear PDEs describing

  10. Simple photometer circuits using modular electronic components

    NASA Technical Reports Server (NTRS)

    Wampler, J. E.

    1975-01-01

    Operational and peak holding amplifiers are discussed as useful circuits for bioluminescence assays. Circuit diagrams are provided. While analog methods can give a good integration on short time scales, digital methods were found best for long term integration in bioluminescence assays. Power supplies, a general photometer circuit with ratio capability, and variations in the basic photometer design are also considered.

  11. Design of rocker switches for work-vehicles--an application of Kansei Engineering.

    PubMed

    Schütte, Simon; Eklund, Jörgen

    2005-09-01

    Rocker switches used in vehicles meet high demands partly due to the increased focus on customer satisfaction. Previous studies focused on ergonomics and usability rather than design for emotions and affection. The aim of this study was to determine how and to what extent engineering properties influence the perception of rocker switches. Secondary aims were to compare two types of rating scales and to determine consistency over time of the ratings. As a method Kansei Engineering was used, describing a product domain from a physical and semantic point of view. A model was built and validated, and recommendations for new designs were given. It was seen that the subjective impressions of robustness, precision and design are strongly influenced by the zero position, the contact position, the form-ratio, shape and the surface of rocker switches. A 7-point scale was found suitable. The Kansei ratings were consistent over time.

  12. Bioresorbable scaffolds for bone tissue engineering: optimal design, fabrication, mechanical testing and scale-size effects analysis.

    PubMed

    Coelho, Pedro G; Hollister, Scott J; Flanagan, Colleen L; Fernandes, Paulo R

    2015-03-01

    Bone scaffolds for tissue regeneration require an optimal trade-off between biological and mechanical criteria. Optimal designs may be obtained using topology optimization (homogenization approach) and prototypes produced using additive manufacturing techniques. However, the process from design to manufacture remains a research challenge and will be a requirement of FDA design controls to engineering scaffolds. This work investigates how the design to manufacture chain affects the reproducibility of complex optimized design characteristics in the manufactured product. The design and prototypes are analyzed taking into account the computational assumptions and the final mechanical properties determined through mechanical tests. The scaffold is an assembly of unit-cells, and thus scale size effects on the mechanical response considering finite periodicity are investigated and compared with the predictions from the homogenization method which assumes in the limit infinitely repeated unit cells. Results show that a limited number of unit-cells (3-5 repeated on a side) introduce some scale-effects but the discrepancies are below 10%. Higher discrepancies are found when comparing the experimental data to numerical simulations due to differences between the manufactured and designed scaffold feature shapes and sizes as well as micro-porosities introduced by the manufacturing process. However good regression correlations (R(2) > 0.85) were found between numerical and experimental values, with slopes close to 1 for 2 out of 3 designs. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  13. Sea-land segmentation for infrared remote sensing images based on superpixels and multi-scale features

    NASA Astrophysics Data System (ADS)

    Lei, Sen; Zou, Zhengxia; Liu, Dunge; Xia, Zhenghuan; Shi, Zhenwei

    2018-06-01

    Sea-land segmentation is a key step in the information processing of ocean remote sensing images. Traditional sea-land segmentation algorithms ignore the local similarity prior of sea and land, and thus fail in complex scenarios. In this paper, we propose a new sea-land segmentation method for infrared remote sensing images, based on superpixels and multi-scale features, to tackle this problem. Considering the connectivity and local similarity of sea or land, we interpret the sea-land segmentation task in terms of superpixels rather than pixels, so that similar pixels are clustered and the local similarity is exploited. Moreover, the multi-scale features are elaborately designed, comprising a gray histogram and multi-scale total variation. Experimental results on infrared bands of Landsat-8 satellite images demonstrate that the proposed method obtains more accurate and more robust sea-land segmentation results than the traditional algorithms.
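
    As a rough illustration of the superpixel-level feature step described above, the sketch below computes an L1-normalized gray-histogram feature per superpixel with plain NumPy. The toy image, labeling, and bin count are invented placeholders, and the paper's multi-scale total-variation features are omitted.

    ```python
    import numpy as np

    def superpixel_features(gray, labels, n_bins=16):
        """Per-superpixel, L1-normalized gray-histogram features.
        `gray` holds intensities in [0, 1]; `labels` assigns each pixel to a
        superpixel (e.g., produced by a SLIC-style clustering)."""
        feats = []
        for sp in np.unique(labels):
            vals = gray[labels == sp]
            hist, _ = np.histogram(vals, bins=n_bins, range=(0.0, 1.0))
            feats.append(hist / max(hist.sum(), 1))
        return np.asarray(feats)

    # Toy image: dark "sea" on the left, brighter "land" on the right,
    # with a trivial two-superpixel labeling.
    img = np.hstack([np.full((32, 32), 0.2), np.full((32, 32), 0.7)])
    labels = np.hstack([np.zeros((32, 32), int), np.ones((32, 32), int)])
    print(superpixel_features(img, labels).round(2))
    ```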

  14. GPU implementation of the linear scaling three dimensional fragment method for large scale electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Jia, Weile; Wang, Jue; Chi, Xuebin; Wang, Lin-Wang

    2017-02-01

    LS3DF, the linear scaling three-dimensional fragment method, is an efficient linear-scaling ab initio total energy electronic structure calculation code based on a divide-and-conquer strategy. In this paper, we present our GPU implementation of the LS3DF code. Our test results show that the GPU code can calculate systems with about ten thousand atoms fully self-consistently on the order of 10 minutes using thousands of computing nodes. This makes the electronic structure calculations of 10,000-atom nanosystems routine work. This speed is 4.5-6 times faster than the CPU calculations using the same number of nodes on the Titan machine in the Oak Ridge Leadership Computing Facility (OLCF). Such speedup is achieved by (a) careful redesign of the computationally heavy kernels and (b) redesign of the communication pattern for heterogeneous supercomputers.

  15. A novel test method to determine the filter material service life of decentralized systems treating runoff from traffic areas.

    PubMed

    Huber, Maximilian; Welker, Antje; Dierschke, Martina; Drewes, Jörg E; Helmreich, Brigitte

    2016-09-01

    In recent years, there has been a significant increase in the development and application of technical decentralized filter systems for the treatment of runoff from traffic areas. However, there are still many uncertainties regarding the service life and the performance of filter materials that are employed in decentralized treatment systems. These filter media are designed to prevent the transport of pollutants into the environment. A novel pilot-scale test method was developed to determine - within a few days - the service lives and long-term removal efficiencies for dissolved heavy metals in stormwater treatment systems. The proposed method consists of several steps, including preloading the filter media in a pilot-scale model with copper and zinc at a load corresponding to n-1 years of the estimated service life (n). Subsequently, three representative rain events are simulated to evaluate the long-term performance for dissolved copper and zinc during the last year of application. The presented results, which verified the applicability of this method, were obtained for three filter channel systems and six filter shaft systems. The performance of the evaluated systems varied widely for both tested heavy metals and across all three simulated rain events. A validation of the pilot-scale assessment method with field measurements was also performed for two systems. Findings of this study suggest that this novel method provides a standardized and accurate estimation of service intervals of decentralized treatment systems employing various filter materials. The method also provides regulatory authorities, designers, and operators with an objective basis for performance assessment and supports stormwater managers in making decisions about the installation of such decentralized treatment systems. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Chip-scale pattern modification method for equalizing residual layer thickness in nanoimprint lithography

    NASA Astrophysics Data System (ADS)

    Youn, Sung-Won; Suzuki, Kenta; Hiroshima, Hiroshi

    2018-06-01

    A software program for modifying a mold design to obtain a uniform residual layer thickness (RLT) distribution has been developed, and its validity was verified by UV-nanoimprint lithography (UV-NIL) simulation. First, the effects of granularity (G) on both residual layer uniformity and filling characteristics were characterized. For a constant complementary pattern depth and a granularity sufficiently larger than the minimum pattern width, filling time decreased with decreasing granularity. For a pattern design with a wide density range and an irregular distribution, choosing a small granularity was not always a good strategy, since the etching depth required for a complementary pattern occasionally increased sharply with decreasing granularity. On the basis of the results obtained, the automated method was applied to a chip-scale pattern modification. Simulation results showed a marked improvement in residual layer thickness uniformity for a capacity-equalized (CE) mold. For the given conditions, the standard deviation of RLT decreased to between 1/3 and 1/5 of its original value, depending on the pattern design.

  17. Adaptation of the contraceptive self-efficacy scale for heterosexual Mexican men and women of reproductive age.

    PubMed

    Arias, María Luisa Flores; Champion, Jane Dimmitt; Soto, Norma Elva Sáenz

    2017-08-01

    To develop a Spanish Version Contraceptive Self-efficacy Scale for use among heterosexual Mexican populations of reproductive age (18-35 years). Use of family planning methods has decreased in Mexico, which may lead to an increase in unintended pregnancies. Contraceptive self-efficacy is considered a predictor and precursor of family planning method use. A cross-sectional, descriptive study design was used to assess contraceptive self-efficacy among a heterosexual Mexican population (N=160) of reproductive age (18-35 years). Adaptation of the Spanish Version Contraceptive Self-efficacy Scale was conducted prior to instrument administration. Exploratory and confirmatory factor analyses identified seven factors with an explained variance of 72.812%. The adapted scale had a Cronbach alpha of 0.771. A significant correlation between the Spanish Version Contraceptive Self-efficacy Scale and the use of family planning methods was identified. The Spanish Version Contraceptive Self-efficacy Scale has an acceptable Cronbach alpha. Exploratory factor analysis identified 7 components. A positive correlation between self-reported contraceptive self-efficacy and family planning method use was identified. This scale may be used among heterosexual Mexican men and women of reproductive age. The factor analysis (7 factors versus 4 factors for the original scale) identified a discrepancy between the Spanish and English language versions. Findings obtained via the Spanish version among heterosexual Mexican men and women of reproductive age must therefore be interpreted in light of the differences identified in these analyses. Copyright © 2017 Elsevier Inc. All rights reserved.
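
    For readers unfamiliar with the reliability statistic reported above (alpha = 0.771), the following minimal sketch computes Cronbach's alpha from a respondents-by-items score matrix; the data here are simulated, not the study's.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)        # per-item variances
        total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Simulated 5-point responses from 160 respondents to a 10-item scale.
    rng = np.random.default_rng(0)
    scores = rng.integers(1, 6, size=(160, 10)).astype(float)
    print(round(cronbach_alpha(scores), 3))
    ```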

  18. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design.

    PubMed

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2016-07-12

    We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid, as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared by EDA approaches, such as the basis-set dependence of polarization and charge transfer, are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capability for large-scale calculations with our approach on thrombin-inhibitor complexes comprising up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied to a large range of biomolecular problems, especially in the context of drug design.

  19. Spatiotemporal Patterns, Monitoring Network Design, and Environmental Justice of Air Pollution in the Phoenix Metropolitan Region: A Landscape Approach

    NASA Astrophysics Data System (ADS)

    Pope, Ronald L.

    Air pollution, which has a number of negative ecological and human health impacts, is a serious problem in most urban areas around the world. As a result, it is vitally important to detect and characterize air pollutants to protect the health of the urban environment and our citizens. An important early step in this process is ensuring that the air pollution monitoring network is properly designed to capture the patterns of pollution and that all social demographics in the urban population are represented. An important aspect in characterizing air pollution patterns is scale in space and time, which, along with pattern and process relationships, is a key subject in the field of landscape ecology. Thus, using multiple landscape ecological methods, this dissertation research begins by characterizing and quantifying the multi-scalar patterns of ozone (O3) and particulate matter (PM10) in the Phoenix, Arizona, metropolitan region. Results showed that pollution patterns are scale-dependent, O3 is a regionally-scaled pollutant at longer temporal scales, and PM10 is a locally-scaled pollutant with patterns sensitive to season. Next, this dissertation examines the monitoring network within Maricopa County. Using a novel multiscale indicator-based approach, the adequacy of the network was quantified by integrating inputs from various academic and government stakeholders. Furthermore, deficiencies were spatially defined and recommendations were made on how to strengthen the design of the network. A sustainability ranking system also provided new insight into the strengths and weaknesses of the network. Lastly, the study addresses the question of whether distinct social groups were experiencing inequitable exposure to pollutants - a key issue of distributive environmental injustice. A novel interdisciplinary method using multi-scalar ambient pollution data and hierarchical multiple regression models revealed environmental inequities between air pollutants and race, ethnicity, age, and socioeconomic classes. The results indicate that changing the scale of the analysis can change the equitable relationship between pollution and demographics. The scientific findings of the scale-dependent relationships among air pollution patterns, network design, and population demographics, brought to light through this study, can help policymakers make informed decisions for protecting human health and the urban environment in the Phoenix metropolitan region and beyond.

  20. Large-visual-angle microstructure inspired from quantitative design of Morpho butterflies' lamellae deviation using the FDTD/PSO method.

    PubMed

    Wang, Wanlin; Zhang, Wang; Chen, Weixin; Gu, Jiajun; Liu, Qinglei; Deng, Tao; Zhang, Di

    2013-01-15

    The wide angular range of the treelike structure in Morpho butterfly scales was investigated by finite-difference time-domain (FDTD)/particle-swarm-optimization (PSO) analysis. Using the FDTD method, different parameters in the Morpho butterflies' treelike structure were studied and their contributions to the angular dependence were analyzed. A wide angular range was then realized by using the PSO method to quantitatively design the lamellae deviation (Δy), a crucial parameter for the angular range. The field map of the wide-range reflection over a large area is given to confirm the wide angular range. The tristimulus values and corresponding color coordinates for various viewing directions were calculated to confirm the blue color at different observation angles. The wide angular range realized by the FDTD/PSO method will assist us in understanding the scientific principles involved and also in designing artificial optical materials.
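
    The following is a minimal, generic particle-swarm-optimization loop of the kind the authors couple to FDTD runs. Here the expensive FDTD evaluation is replaced by a hypothetical stand-in objective over the lamellae deviation Δy, so the numbers are illustrative only.

    ```python
    import numpy as np

    def pso_minimize(f, lo, hi, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle-swarm optimizer over a 1-D interval [lo, hi]."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, n_particles)     # candidate Δy values
        v = np.zeros(n_particles)
        pbest = x.copy()
        pbest_f = np.array([f(xi) for xi in x])
        gbest = pbest[pbest_f.argmin()]
        for _ in range(iters):
            r1, r2 = rng.random(n_particles), rng.random(n_particles)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            fx = np.array([f(xi) for xi in x])
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            gbest = pbest[pbest_f.argmin()]
        return gbest

    # Stand-in objective: negative "angular range" as a function of Δy (the
    # paper evaluates this with FDTD runs; this Gaussian is purely illustrative).
    objective = lambda dy: -np.exp(-(dy - 0.35) ** 2 / 0.02)
    print(f"best lamellae deviation ≈ {pso_minimize(objective, 0.0, 1.0):.3f}")
    ```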

  1. Continued Water-Based Phase Change Material Heat Exchanger Development

    NASA Technical Reports Server (NTRS)

    Hansen, Scott; Poynot, Joe

    2014-01-01

    In a cyclical heat load environment such as low Lunar orbit, a spacecraft's radiators are not sized to reject the full heat load requirement. Traditionally, a supplemental heat rejection device (SHReD) such as an evaporator or sublimator is used to act as a "topper" to meet the additional heat rejection demands. Utilizing a Phase Change Material (PCM) heat exchanger (HX) as a SHReD provides an attractive alternative to evaporators and sublimators, as PCM HXs do not use a consumable, thereby leading to reduced launch mass and volume requirements. In continued pursuit of water PCM HX development, two full-scale, Orion-sized water-based PCM HXs were constructed by Mezzo Technologies. These HXs were designed by applying prior research and experimentation to the full-scale design. Design options considered included bladder restraint and clamping mechanisms, bladder manufacturing, tube patterns, fill/drain methods, manifold dimensions, weight optimization, and midplate designs. Design and construction of these HXs led to successful testing of both PCM HXs.

  2. Preparing university students to lead K-12 engineering outreach programmes: a design experiment

    NASA Astrophysics Data System (ADS)

    Anthony, Anika B.; Greene, Howard; Post, Paul E.; Parkhurst, Andrew; Zhan, Xi

    2016-11-01

    This paper describes an engineering outreach programme designed to increase the interest of under-represented youth in engineering and to disseminate pre-engineering design challenge materials to K-12 educators and volunteers. Given university students' critical role as facilitators of the outreach programme, researchers conducted a two-year design experiment to examine the programme's effectiveness at preparing university students to lead pre-engineering activities. Pre- and post-surveys incorporated items from the Student Engagement sub-scale of the Teacher Sense of Efficacy Scale. Surveys were analysed using paired-samples t-test. Interview and open-ended survey data were analysed using discourse analysis and the constant comparative method. As a result of participation in the programme, university students reported a gain in efficacy to lead pre-engineering activities. The paper discusses programme features that supported efficacy gains and concludes with a set of design principles for developing learning environments that effectively prepare university students to facilitate pre-engineering outreach programmes.

  3. Design of a Minimum Surface-Effect Tendon-Based Microactuator for Micromanipulation

    NASA Technical Reports Server (NTRS)

    Goldfarb, Michael; Lipsey, James H.

    1997-01-01

    A piezoelectric (PZT) stack-based actuator was developed to provide a means of actuation with dynamic characteristics appropriate for small-scale manipulation. In particular, the design incorporates a highly nonlinear, large-ratio transmission that provides approximately two orders of magnitude motion amplification from the PZT stack. In addition to motion amplification, the nonlinear transmission was designed via optimization methods to distort the highly non-uniform properties of a piezoelectric actuator so that the achievable actuation force is nearly constant throughout the actuator workspace. The package also includes sensors that independently measure actuator output force and displacement, so that a manipulator structure need not incorporate sensors nor the associated wires. Specifically, the actuator was designed to output a maximum force of at least one Newton through a stroke of at least one millimeter. For purposes of small-scale precision position and/or force control, the actuator/sensor package was designed to eliminate stick-slip friction and backlash. The overall dimensions of the actuator/sensor package are approximately 40 x 65 x 25 mm.

  4. Improving the sensory quality of flavored liquid milk by engaging sensory analysis and consumer preference.

    PubMed

    Zhi, Ruicong; Zhao, Lei; Shi, Jingye

    2016-07-01

    Developing innovative products that satisfy various groups of consumers helps a company maintain a leading market share. The hedonic scale and just-about-right (JAR) scale are 2 popular methods for hedonic assessment and product diagnostics. In this paper, we chose to study flavored liquid milk because it is one of the most necessary nutrient sources in China. The hedonic scale and JAR scale methods were combined to provide directional information for flavored liquid milk optimization. Two methods of analysis (penalty analysis and partial least squares regression on dummy variables) were used and the results were compared. This paper had 2 aims: (1) to investigate consumer preferences of basic flavor attributes of milk from various cities in China; and (2) to determine the improvement direction for specific products and the ideal overall liking for consumers in various cities. The results showed that consumers in China have local-specific requirements for characteristics of flavored liquid milk. Furthermore, we provide a consumer-oriented product design method to improve sensory quality according to the preference of particular consumers. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
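
    A minimal sketch of the penalty (mean-drop) analysis mentioned above, assuming 5-point JAR ratings with 3 = just-about-right; the toy data and the "sweetness" attribute are invented for illustration.

    ```python
    import numpy as np

    def penalty(jar, liking):
        """Mean-drop penalty analysis for one just-about-right attribute.
        jar: 1-5 JAR ratings (3 = just about right); liking: hedonic scores."""
        jar, liking = np.asarray(jar), np.asarray(liking)
        base = liking[jar == 3].mean()
        return {
            "too_little_drop": base - liking[jar < 3].mean(),
            "too_much_drop": base - liking[jar > 3].mean(),
            "pct_too_little": (jar < 3).mean(),
            "pct_too_much": (jar > 3).mean(),
        }

    # Invented ratings for a "sweetness" attribute of one milk sample.
    jar_ratings    = [3, 3, 2, 1, 3, 4, 5, 3, 2, 3, 4, 3]
    liking_ratings = [8, 7, 5, 4, 8, 6, 4, 7, 5, 8, 6, 7]
    print(penalty(jar_ratings, liking_ratings))
    ```

    Attributes with a large mean drop and a sizable share of non-JAR respondents are the usual reformulation targets.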

  5. Full scale visualization of the wing tip vortices generated by a typical agricultural aircraft

    NASA Technical Reports Server (NTRS)

    Cross, E. J., Jr.; Bridges, P.; Brownlee, J. A.; Liningston, W. W.

    1980-01-01

    The trajectories of the wing tip vortices of a typical agricultural aircraft were experimentally determined by flight test. A flow visualization method, similar to the vapor screen method used in wind tunnels, was used to obtain trajectory data for a range of flight speeds, airplane configurations, and wing loadings. Detailed measurements of the spanwise surface pressure distribution were made for all test points. Further, a powered 1/8 scale model of the aircraft was designed, built, and used to obtain tip vortex trajectory data under conditions similar to that of the full-scale test. The effects of light wind on the vortices were demonstrated, and the interaction of the flap vortex and the tip vortex was clearly shown in photographs and plotted trajectory data.

  6. Controllers, observers, and applications thereof

    NASA Technical Reports Server (NTRS)

    Gao, Zhiqiang (Inventor); Zhou, Wankun (Inventor); Miklosovic, Robert (Inventor); Radke, Aaron (Inventor); Zheng, Qing (Inventor)

    2011-01-01

    Controller scaling and parameterization are described. Techniques that can be improved by employing the scaling and parameterization include, but are not limited to, controller design, tuning and optimization. The scaling and parameterization methods described here apply to transfer function based controllers, including PID controllers. The parameterization methods also apply to state feedback and state observer based controllers, as well as linear active disturbance rejection (ADRC) controllers. Parameterization simplifies the use of ADRC. A discrete extended state observer (DESO) and a generalized extended state observer (GESO) are described. They improve the performance of the ESO and therefore ADRC. A tracking control algorithm is also described that improves the performance of the ADRC controller. A general algorithm is described for applying ADRC to multi-input multi-output systems. Several specific applications of the control systems and processes are disclosed.
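
    As a hedged illustration of the extended state observer idea described above, the sketch below implements a third-order linear ESO with the widely published bandwidth parameterization (gains 3ωo, 3ωo², ωo³) and a simple Euler discretization for an assumed second-order plant; it is a textbook-style sketch, not the patented DESO/GESO implementation.

    ```python
    import numpy as np

    def eso_step(z, y, u, h, wo, b0):
        """One Euler step of a third-order linear extended state observer.
        z = [y_hat, ydot_hat, f_hat]; f_hat tracks the total disturbance.
        Gains follow the bandwidth parameterization l = [3*wo, 3*wo**2, wo**3]."""
        e = y - z[0]
        dz = np.array([z[1] + 3 * wo * e,
                       z[2] + 3 * wo**2 * e + b0 * u,
                       wo**3 * e])
        return z + h * dz

    # Demo: double-integrator plant y'' = f + b0*u with an unknown constant
    # disturbance f = -2 and no control input; z[2] should converge to -2.
    h, wo, b0, u = 0.001, 50.0, 1.0, 0.0
    x, z = np.zeros(2), np.zeros(3)
    for _ in range(5000):
        z = eso_step(z, x[0], u, h, wo, b0)
        x = x + h * np.array([x[1], -2.0 + b0 * u])   # true plant dynamics
    print(z.round(3))   # approximately [y, ydot, -2]
    ```

    The appeal of the parameterization is that a single tuning knob, the observer bandwidth ωo, replaces three independent gains.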

  7. Measuring high-density built environment for public health research: Uncertainty with respect to data, indicator design and spatial scale.

    PubMed

    Sun, Guibo; Webster, Chris; Ni, Michael Y; Zhang, Xiaohu

    2018-05-07

    Uncertainty with respect to built environment (BE) data collection, measure conceptualization and spatial scales is evident in urban health research, but most findings are from relatively low-density contexts. We selected Hong Kong, an iconic high-density city, as the study area, as limited research has been conducted on uncertainty in such areas. We used geocoded home addresses (n=5732) from a large population-based cohort in Hong Kong to extract BE measures for the participants' place of residence based on an internationally recognized BE framework. Variability of the measures was mapped, and Spearman's rank correlation was calculated to assess how well the relationships among indicators are preserved across variables and spatial scales. We found extreme variations and uncertainties for the 180 measures collected using comprehensive data and advanced geographic information systems modelling techniques. We highlight the implications of methodological selection and spatial scales of the measures. The results suggest that more robust information for urban health research in high-density cities would emerge if greater consideration were given to BE data, design methods and the spatial scales of the BE measures.
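
    A minimal sketch of the scale-sensitivity check described above: Spearman's rank correlation between the same built-environment indicator computed at two spatial scales. The indicator, the buffer sizes, and the data are synthetic placeholders.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    n = 200   # synthetic residential addresses (the study used n = 5732)

    # The same hypothetical BE indicator computed at two assumed buffer sizes.
    density_400m = rng.lognormal(mean=3.0, sigma=0.5, size=n)
    density_800m = density_400m * rng.lognormal(mean=0.0, sigma=0.3, size=n)

    rho, p = spearmanr(density_400m, density_800m)
    print(f"Spearman rho across scales: {rho:.2f} (p = {p:.2g})")
    ```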

  8. Self-reconfigurable ship fluid-network modeling for simulation-based design

    NASA Astrophysics Data System (ADS)

    Moon, Kyungjin

    Our world is filled with large-scale engineering systems, which provide various services and conveniences in our daily life. A distinctive trend in the development of today's large-scale engineering systems is the extensive and aggressive adoption of automation and autonomy that enable the significant improvement of systems' robustness, efficiency, and performance, with considerably reduced manning and maintenance costs, and the U.S. Navy's DD(X), the next-generation destroyer program, is considered an extreme example of such a trend. This thesis pursues a modeling solution for performing simulation-based analysis in the conceptual or preliminary design stage of an intelligent, self-reconfigurable ship fluid system, which is one of the concepts of DD(X) engineering plant development. Through investigations of the Navy's approach for designing a more survivable ship system, it is found that the current naval simulation-based analysis environment is limited by the capability gaps in damage modeling, dynamic model reconfiguration, and simulation speed of the domain specific models, especially fluid network models. As enablers of filling these gaps, two essential elements were identified in the formulation of the modeling method. The first one is the graph-based topological modeling method, which will be employed for rapid model reconstruction and damage modeling, and the second one is the recurrent neural network-based, component-level surrogate modeling method, which will be used to improve the affordability and efficiency of the modeling and simulation (M&S) computations. The integration of the two methods can deliver computationally efficient, flexible, and automation-friendly M&S which will create an environment for more rigorous damage analysis and exploration of design alternatives. As a demonstration for evaluating the developed method, a simulation model of a notional ship fluid system was created, and a damage analysis was performed. Next, the models representing different design configurations of the fluid system were created, and damage analyses were performed with them in order to find an optimal design configuration for system survivability. Finally, the benefits and drawbacks of the developed method were discussed based on the result of the demonstration.

  9. Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stander, Nielen; Basudhar, Anirban; Basu, Ushnish

    2015-06-15

    Ever-tightening regulations on fuel economy and carbon emissions demand continual innovation in finding ways for reducing vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials by adding material diversity, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing thickness while retaining sufficient strength and ductility required for durability and safety. Such a project was proposed and is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the Department of Energy. Under this program, new steel alloys (Third Generation Advanced High Strength Steel or 3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. In this project the principal phases identified are (i) material identification, (ii) formability optimization and (iii) multi-disciplinary vehicle optimization. This paper serves as an introduction to the LS-OPT methodology and therefore mainly focuses on the first phase, namely an approach to integrate material identification using material models of different length scales. For this purpose, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a Homogenized State Variable (SV) model, is discussed and demonstrated. The paper concludes with proposals for integrating the multi-scale methodology into the overall vehicle design.

  10. COBRApy: COnstraints-Based Reconstruction and Analysis for Python.

    PubMed

    Ebrahim, Ali; Lerman, Joshua A; Palsson, Bernhard O; Hyduke, Daniel R

    2013-08-08

    COnstraint-Based Reconstruction and Analysis (COBRA) methods are widely used for genome-scale modeling of metabolic networks in both prokaryotes and eukaryotes. Due to the successes with metabolism, there is an increasing effort to apply COBRA methods to reconstruct and analyze integrated models of cellular processes. The COBRA Toolbox for MATLAB is a leading software package for genome-scale analysis of metabolism; however, it was not designed to elegantly capture the complexity inherent in integrated biological networks and lacks an integration framework for the multiomics data used in systems biology. The openCOBRA Project is a community effort to promote constraints-based research through the distribution of freely available software. Here, we describe COBRA for Python (COBRApy), a Python package that provides support for basic COBRA methods. COBRApy is designed in an object-oriented fashion that facilitates the representation of the complex biological processes of metabolism and gene expression. COBRApy does not require MATLAB to function; however, it includes an interface to the COBRA Toolbox for MATLAB to facilitate use of legacy codes. For improved performance, COBRApy includes parallel processing support for computationally intensive processes. COBRApy is an object-oriented framework designed to meet the computational challenges associated with the next generation of stoichiometric constraint-based models and high-density omics data sets. http://opencobra.sourceforge.net/
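
    A short usage sketch of the COBRApy workflow the record describes, assuming a local SBML model file and a model-specific reaction id (both are placeholders, not values prescribed by the package):

    ```python
    from cobra.io import read_sbml_model

    # Load a genome-scale model from an SBML file (the path is a placeholder;
    # any COBRA-compatible model, e.g. the E. coli core model, would work).
    model = read_sbml_model("e_coli_core.xml")

    # Flux balance analysis: maximize the model's objective (typically biomass).
    solution = model.optimize()
    print("growth rate:", solution.objective_value)

    # A constraint-based "what if": tighten a flux bound and re-solve.
    # The reaction id "EX_glc__D_e" (glucose exchange) is model-specific.
    glucose = model.reactions.get_by_id("EX_glc__D_e")
    glucose.lower_bound = -5.0   # limit glucose uptake to 5 mmol/gDW/h
    print("growth at reduced uptake:", model.optimize().objective_value)
    ```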

  11. Performance Analysis, Design Considerations, and Applications of Extreme-Scale In Situ Infrastructures

    DOE PAGES

    Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...

    2016-11-01

    A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: Scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.

  12. Fabrication methods for YF-12 wing panels for the Supersonic Cruise Aircraft Research Program

    NASA Technical Reports Server (NTRS)

    Hoffman, E. L.; Payne, L.; Carter, A. L.

    1975-01-01

    Advanced fabrication and joining processes for titanium and composite materials are being investigated by NASA to develop technology for the Supersonic Cruise Aircraft Research (SCAR) Program. With Lockheed-ADP as the prime contractor, full-scale structural panels are being designed and fabricated to replace an existing integrally stiffened shear panel on the upper wing surface of the NASA YF-12 aircraft. The program involves ground testing and Mach 3 flight testing of full-scale structural panels and laboratory testing of representative structural element specimens. Fabrication methods and test results for weldbrazed and Rohrbond titanium panels are discussed. The fabrication methods being developed for boron/aluminum, Borsic/aluminum, and graphite/polyimide panels are also presented.

  13. Andragogy and Workplace Relationships: A Mixed-Methods Study Exploring Employees' Perceptions of Their Relationships with Their Supervisors

    ERIC Educational Resources Information Center

    Klepper, Erin M.

    2017-01-01

    The purpose of this mixed-method study was to explore employees' perceptions of their relationships with their direct supervisor, and to determine why employees chose to remain at SSM Health. This study used a three-part research design comprised of quantitative Likert scale rating statements, Henschke's (2016) Modified Instructional Perspectives…

  14. Evaluating the Rank-Ordering Method for Standard Maintaining

    ERIC Educational Resources Information Center

    Bramley, Tom; Gill, Tim

    2010-01-01

    The rank-ordering method for standard maintaining was designed for the purpose of mapping a known cut-score (e.g. a grade boundary mark) on one test to an equivalent point on the test score scale of another test, using holistic expert judgements about the quality of exemplars of examinees' work (scripts). It is a novel application of an old…

  15. Comparison of Efficiency of Jackknife and Variance Component Estimators of Standard Errors. Program Statistics Research. Technical Report.

    ERIC Educational Resources Information Center

    Longford, Nicholas T.

    Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…

  16. Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stander, Nielen; Basudhar, Anirban; Basu, Ushnish

    2015-09-14

    Ever-tightening regulations on fuel economy, and the likely future regulation of carbon emissions, demand persistent innovation in vehicle design to reduce vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials, by adding material diversity and composite materials, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing plate thickness while retaining sufficient strength and ductility required for durability and safety. A project to develop computational material models for advanced high strength steel is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the US Department of Energy. Under this program, new Third Generation Advanced High Strength Steel (i.e., 3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. The objectives of the project are to integrate atomistic, microstructural, forming and performance models to create an integrated computational materials engineering (ICME) toolkit for 3GAHSS. The mechanical properties of Advanced High Strength Steels (AHSS) are controlled by many factors, including phase composition and distribution in the overall microstructure, volume fraction, size and morphology of phase constituents as well as stability of the metastable retained austenite phase. The complex phase transformation and deformation mechanisms in these steels make the well-established traditional techniques obsolete, and a multi-scale microstructure-based modeling approach following the ICME [0] strategy was therefore chosen in this project. Multi-scale modeling as a major area of research and development is an outgrowth of the Comprehensive Test Ban Treaty of 1996 which banned surface testing of nuclear devices [1]. This had the effect that experimental work was reduced from large scale tests to multiscale experiments to provide material models with validation at different length scales. In the subsequent years industry realized that multi-scale modeling and simulation-based design were transferable to the design optimization of any structural system. Horstemeyer [1] lists a number of advantages of the use of multiscale modeling. Among these are: the reduction of product development time by alleviating costly trial-and-error iterations as well as the reduction of product costs through innovations in material, product and process designs. Multi-scale modeling can reduce the number of costly large scale experiments and can increase product quality by providing more accurate predictions. Research tends to be focussed on each particular length scale, which enhances accuracy in the long term. This paper serves as an introduction to the LS-OPT and LS-DYNA methodology for multi-scale modeling. It mainly focuses on an approach to integrate material identification using material models of different length scales. As an example, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a homogenized State Variable (SV) model, is discussed and the parameter identification of the individual material models of different length scales is demonstrated. The paper concludes with thoughts on integrating the multi-scale methodology into the overall vehicle design.

  17. Characterizing the Response of Composite Panels to a Pyroshock Induced Environment Using Design of Experiments Methodology

    NASA Technical Reports Server (NTRS)

    Parsons, David S.; Ordway, David; Johnson, Kenneth

    2013-01-01

    This experimental study seeks to quantify the impact various composite parameters have on the structural response of a composite structure in a pyroshock environment. The prediction of an aerospace structure's response to pyroshock induced loading is largely dependent on empirical databases created from collections of development and flight test data. While there is significant structural response data due to pyroshock induced loading for metallic structures, there is much less data available for composite structures. One challenge of developing a composite pyroshock response database as well as empirical prediction methods for composite structures is the large number of parameters associated with composite materials. This experimental study uses data from a test series planned using design of experiments (DOE) methods. Statistical analysis methods are then used to identify which composite material parameters most greatly influence a flat composite panel's structural response to pyroshock induced loading. The parameters considered are panel thickness, type of ply, ply orientation, and pyroshock level induced into the panel. The results of this test will aid in future large scale testing by eliminating insignificant parameters as well as aid in the development of empirical scaling methods for composite structures' response to pyroshock induced loading.
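
    As an illustration of the design-of-experiments setup described above, the sketch below enumerates a two-level full factorial over the four named parameters; the specific levels are invented placeholders, not the study's actual test matrix.

    ```python
    from itertools import product

    # Two-level full factorial over the four parameters named in the study;
    # the levels below are illustrative assumptions.
    factors = {
        "panel_thickness_plies": [8, 16],
        "ply_type": ["uni_tape", "fabric"],
        "ply_orientation": ["quasi_isotropic", "cross_ply"],
        "pyroshock_level": ["low", "high"],
    }
    runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
    print(len(runs))        # 2**4 = 16 candidate test conditions
    print(runs[0])
    ```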

  19. [Instruments for quantitative methods of nursing research].

    PubMed

    Vellone, E

    2000-01-01

    Instruments for quantitative nursing research are a means to objectify and measure a variable or a phenomenon in scientific research. There are direct instruments to measure concrete variables and indirect instruments to measure abstract concepts (Burns, Grove, 1997). Indirect instruments measure the attributes of which a concept is composed. Furthermore, there are instruments for physiologic variables (e.g., for weight), observational instruments (check-lists and rating scales), interviews, questionnaires, diaries and scales (check-lists, rating scales, Likert scales, semantic differential scales and visual analogue scales). The choice of one instrument over another depends on the research question and design. Research instruments are very useful both to describe variables and to detect statistically significant relationships. Their use in clinical practice for diagnostic assessment should be approached very carefully.

  20. Parallel steady state studies on a milliliter scale accelerate fed-batch bioprocess design for recombinant protein production with Escherichia coli.

    PubMed

    Schmideder, Andreas; Cremer, Johannes H; Weuster-Botz, Dirk

    2016-11-01

    In general, fed-batch processes are applied for recombinant protein production with Escherichia coli (E. coli). However, state-of-the-art methods for identifying suitable reaction conditions suffer from severe drawbacks: direct transfer of process information from parallel batch studies is often defective, and sequential fed-batch studies are time-consuming and cost-intensive. In this study, continuously operated stirred-tank reactors on a milliliter scale were applied to identify suitable reaction conditions for fed-batch processes. Isopropyl β-d-1-thiogalactopyranoside (IPTG) induction strategies were varied in parallel-operated stirred-tank bioreactors to study the effects on the continuous production of the recombinant protein photoactivatable mCherry (PAmCherry) with E. coli. The best-performing induction strategies were transferred from the continuous processes on a milliliter scale to liter-scale fed-batch processes. Inducing recombinant protein expression by dynamically increasing the IPTG concentration to 100 µM led to an increase in the product concentration of 21% (8.4 g L⁻¹) compared to an established high-performance production process using the most frequently applied induction strategy, a single addition of 1000 µM IPTG. Thus, identifying feasible reaction conditions for fed-batch processes in parallel continuous studies on a milliliter scale was shown to be a powerful, novel method to accelerate bioprocess design in a cost-reducing manner. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1426-1435, 2016. © 2016 American Institute of Chemical Engineers.

  1. Towards the hand-held mass spectrometer: design considerations, simulation, and fabrication of micrometer-scaled cylindrical ion traps

    NASA Astrophysics Data System (ADS)

    Blain, Matthew G.; Riter, Leah S.; Cruz, Dolores; Austin, Daniel E.; Wu, Guangxiang; Plass, Wolfgang R.; Cooks, R. Graham

    2004-08-01

    Breakthrough improvements in simplicity and reductions in the size of mass spectrometers are needed for high-consequence fieldable applications, including error-free detection of chemical/biological warfare agents, medical diagnoses, and explosives and contraband discovery. These improvements are most likely to be realized through a reconceptualization of the mass spectrometer, rather than by incremental steps towards miniaturization. Microfabricated arrays of mass analyzers represent such a conceptual advance. A massively parallel array of micrometer-scaled mass analyzers on a chip has the potential to set the performance standard for hand-held sensors due to the inherent selectivity, sensitivity, and universal applicability of mass spectrometry as an analytical method. While the effort to develop a complete micro-MS system must include innovations in ultra-small-scale sample introduction, ion sources, mass analyzers, detectors, and vacuum and power subsystems, the first step towards radical miniaturization lies in the design, fabrication, and characterization of the mass analyzer itself. In this paper we discuss design considerations and results from simulations of ion trapping behavior for a micrometer-scale cylindrical ion trap (CIT) mass analyzer (internal radius r0 = 1 µm). We also present a description of the design and microfabrication of a 0.25 cm² array of 10⁶ one-micrometer CITs, including integrated ion detectors, constructed in tungsten on a silicon substrate.

  2. Equipment characterization to mitigate risks during transfers of cell culture manufacturing processes.

    PubMed

    Sieblist, Christian; Jenzsch, Marco; Pohlscheidt, Michael

    2016-08-01

    The production of monoclonal antibodies by mammalian cell culture in bioreactors up to 25,000 L is state-of-the-art technology in the biotech industry. During the lifecycle of a product, several scale-up activities and technology transfers are typically executed to enable the supply chain strategy of a global pharmaceutical company. Given the sensitivity of mammalian cells to physicochemical culture conditions, process and equipment knowledge are critical to avoid impacts on timelines, product quantity and quality. In particular, the fluid dynamics of large-scale bioreactors versus small-scale models need to be described, and similarity demonstrated, in light of the Quality by Design approach promoted by the FDA. This approach comprises an associated design space which is established during process characterization and validation in bench-scale bioreactors. Therefore, the establishment of predictive models and simulation tools for major operating conditions of stirred vessels (mixing, mass transfer, and shear force), based on fundamental engineering principles, has experienced a renaissance in recent years. This work illustrates the systematic characterization of a large variety of bioreactor designs deployed in a global manufacturing network, ranging from small bench-scale equipment to large-scale production equipment (25,000 L). Several traditional methods to determine power input, mixing, mass transfer and shear force have been used to create a database and identify differences for various impeller types and configurations in operating ranges typically applied in cell culture processes at manufacturing scale. In addition, extrapolations of different empirical models, e.g. Cooke et al. (Paper presented at the proceedings of the 2nd international conference of bioreactor fluid dynamics, Cranfield, UK, 1988), have been assessed for their validity in these operational ranges. Results for selected designs are shown and serve as examples of structured characterization to enable fast and agile process transfers, scale-up and troubleshooting.
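
    The sketch below illustrates the kind of power-input characterization the record describes, using the standard turbulent-regime relation P = Np·ρ·N³·D⁵ and tip speed π·N·D. The impeller power number, vessel geometry, and operating values are assumed for illustration, not taken from the study.

    ```python
    import numpy as np

    def power_draw(Np, rho, N, D):
        """Ungassed power draw P = Np * rho * N**3 * D**5 (turbulent regime)."""
        return Np * rho * N**3 * D**5

    rho, Np = 1000.0, 5.0    # broth density (kg/m^3); Np ~ 5 for a Rushton turbine
    for label, D, N in [("bench", 0.06, 5.0), ("production", 1.0, 1.5)]:
        P = power_draw(Np, rho, N, D)      # W; D = impeller diameter (m), N = rev/s
        V = 21.2 * D**3                    # rough vessel volume for T = 3D, H = T
        print(f"{label}: P = {P:.2f} W, P/V = {P / V:.0f} W/m^3, "
              f"tip speed = {np.pi * N * D:.2f} m/s")
    ```

    Comparing P/V and tip speed across scales like this makes the usual scale-up tension visible: matching one criterion generally means mismatching the other.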

  3. Using PAT to accelerate the transition to continuous API manufacturing.

    PubMed

    Gouveia, Francisca F; Rahbek, Jesper P; Mortensen, Asmus R; Pedersen, Mette T; Felizardo, Pedro M; Bro, Rasmus; Mealy, Michael J

    2017-01-01

    Significant improvements can be realized by converting conventional batch processes into continuous ones. The main drivers include reduction of cost and waste, increased safety, and simpler scale-up and tech transfer activities. Re-designing the process layout offers the opportunity to incorporate a set of process analytical technologies (PAT) embraced in the Quality-by-Design (QbD) framework. These tools are used for process state estimation, providing enhanced understanding of the underlying variability in the process impacting quality and yield. This work describes a road map for identifying the best technology to speed-up the development of continuous processes while providing the basis for developing analytical methods for monitoring and controlling the continuous full-scale reaction. The suitability of in-line Raman, FT-infrared (FT-IR), and near-infrared (NIR) spectroscopy for real-time process monitoring was investigated in the production of 1-bromo-2-iodobenzene. The synthesis consists of three consecutive reaction steps including the formation of an unstable diazonium salt intermediate, which is critical to secure high yield and avoid formation of by-products. All spectroscopic methods were able to capture critical information related to the accumulation of the intermediate with very similar accuracy. NIR spectroscopy proved to be satisfactory in terms of performance, ease of installation, full-scale transferability, and stability to very adverse process conditions. As such, in-line NIR was selected to monitor the continuous full-scale production. The quantitative method was developed against theoretical concentration values of the intermediate since representative sampling for off-line reference analysis cannot be achieved. The rapid and reliable analytical system allowed the following: speeding up the design of the continuous process and a better understanding of the manufacturing requirements to ensure optimal yield and avoid unreacted raw materials and by-products in the continuous reactor effluent.

  4. Measuring the impostor phenomenon: a comparison of Clance's IP Scale and Harvey's I-P Scale.

    PubMed

    Holmes, S W; Kertay, L; Adamson, L B; Holland, C L; Clance, P R

    1993-02-01

    Many of the discrepancies reported to date in empirical investigations of the impostor phenomenon (IP) may be due in part to (a) the use of different methods for identifying individuals suffering from this syndrome (impostors), (b) the common use of a median split procedure to classify subjects and (c) the fact that subjects in many studies were drawn from impostor-prone samples. In this study, we compared the scores of independently identified impostors and nonimpostors on two instruments designed to measure the IP: Harvey's I-P Scale and Clance's IP Scale. The results suggest that Clance's scale may be the more sensitive and reliable instrument. Cutoff score suggestions for both instruments are offered.

  5. Dentists' use of validated child dental anxiety measures in clinical practice: a mixed methods study.

    PubMed

    Alshammasi, Hussain; Buchanan, Heather; Ashley, Paul

    2018-01-01

    Assessing anxiety is an important part of the assessment of a child presenting for dental treatment; however, the use of dental anxiety scales in practice is not well-documented. The aims were to introduce child dental anxiety scales and monitor the extent to which dentists used them, and to explore the experience and views of dentists regarding anxiety assessment. A mixed-methods design was employed. A protocol for child anxiety assessment was introduced to paediatric dentists in Eastman Dental Hospital. After 6 months, 100 patient files were audited to examine compliance with the protocol. Fourteen dentists were interviewed to explore their experience and views regarding anxiety assessment. Only five patients were assessed using the scales. Thematic analysis of the dentist interviews revealed three themes: 'Clinical observations and experience: The gold standard'; 'Scales as an estimate or adjunct'; and 'Shortcomings and barriers to using scales'. The dentists in our study did not use anxiety scales, relying instead on their own experience and judgement. Therefore, scales should be recommended as an adjunct to judgement. Brief scales are recommended, as clinicians lack time and expertise in administering anxiety questionnaires. The advantages of using scales and hands-on experience could be incorporated more in undergraduate training. © 2017 BSPD, IAPD and John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  6. A design strategy for the use of vortex generators to manage inlet-engine distortion using computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.; Levy, Ralph

    1991-01-01

    A reduced Navier-Stokes solution technique was successfully used to design vortex generator installations for the purpose of minimizing engine face distortion by restructuring the development of secondary flow that is induced in typical 3-D curved inlet ducts. The results indicate that there exists an optimum axial location for this installation of corotating vortex generators, and within this configuration, there exists a maximum spacing between generator blades above which the engine face distortion increases rapidly. Installed vortex generator performance, as measured by engine face circumferential distortion descriptors, is sensitive to Reynolds number and thereby the generator scale, i.e., the ratio of generator blade height to local boundary layer thickness. Installations of corotating vortex generators work well in terms of minimizing engine face distortion within a limited range of generator scales. Hence, the design of vortex generator installations is a point design, and all other conditions are off design. In general, the loss levels associated with a properly designed vortex generator installation are very small; thus, they represent a very good method to manage engine face distortion. This study also showed that the vortex strength, generator scale, and secondary flow field structure have a complicated and interrelated influence over engine face distortion, over and above the influence of the initial arrangement of generators.

  7. Multiplexed genome engineering and genotyping methods applications for synthetic biology and metabolic engineering.

    PubMed

    Wang, Harris H; Church, George M

    2011-01-01

    Engineering at the scale of whole genomes requires fundamentally new molecular biology tools. Recent advances in recombineering using synthetic oligonucleotides enable the rapid generation of mutants at high efficiency and specificity and can be implemented at the genome scale. With these techniques, libraries of mutants can be generated, from which individuals with functionally useful phenotypes can be isolated. Furthermore, populations of cells can be evolved in situ by directed evolution using complex pools of oligonucleotides. Here, we discuss ways to utilize these multiplexed genome engineering methods, with special emphasis on experimental design and implementation. Copyright © 2011 Elsevier Inc. All rights reserved.

  8. Proposed Modifications to Engineering Design Guidelines Related to Resistivity Measurements and Spacecraft Charging

    NASA Technical Reports Server (NTRS)

    Dennison, J. R.; Swaminathan, Prasanna; Jost, Randy; Brunson, Jerilyn; Green, Nelson; Frederickson, A. Robb

    2005-01-01

    A key parameter in modeling differential spacecraft charging is the resistivity of insulating materials. This determines how charge will accumulate and redistribute across the spacecraft, as well as the time scale for charge transport and dissipation. Existing spacecraft charging guidelines recommend use of tests and imported resistivity data from handbooks that are based principally upon ASTM methods more applicable to classical ground conditions and to problems associated with power loss through the dielectric than to how long charge can be stored on an insulator. These data have been found to underestimate charging effects by one to four orders of magnitude for spacecraft charging applications. A review is presented of methods to measure the resistivity of highly insulating materials, including the electrometer-resistance method, the electrometer-constant-voltage method, the voltage rate-of-change method and the charge storage method. This is based on joint experimental studies conducted at NASA Jet Propulsion Laboratory and Utah State University to investigate the charge storage method and its relation to spacecraft charging. The different methods are found to be appropriate for different resistivity ranges and for different charging circumstances. A simple physics-based model of these methods allows separation of the polarization current and dark current components from long-duration measurements of resistivity over day- to month-long time scales. Model parameters are directly related to the magnitude of charge transfer and storage and the rate of charge transport. The model largely explains the observed differences in resistivity found using the different methods and provides a framework for recommendations for the appropriate test method for spacecraft materials with different resistivities and applications. The proposed changes to the existing engineering guidelines are intended to provide design engineers more appropriate methods for consideration and measurement of resistivity for many typical spacecraft charging scenarios.
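
    As a hedged illustration of separating polarization and dark-current components from a long-duration measurement, the sketch below fits a constant dark-current term plus a power-law (Curie-von Schweidler) polarization term. That empirical form, the synthetic data, and the sample geometry are assumptions for illustration, not the paper's exact model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def total_current(t, i_dark, a, n):
        """Constant dark current plus power-law (Curie-von Schweidler) polarization."""
        return i_dark + a * t**(-n)

    # Synthetic long-duration "measurement" from 1 s out to about a day.
    t = np.logspace(0, 5, 60)
    rng = np.random.default_rng(2)
    meas = total_current(t, 2e-14, 1e-11, 0.8) * (1 + 0.05 * rng.standard_normal(t.size))

    popt, _ = curve_fit(total_current, t, meas, p0=[1e-13, 1e-11, 1.0])
    V, d, A = 100.0, 1e-3, 1e-4    # assumed bias (V), thickness (m), electrode area (m^2)
    rho = (V / popt[0]) * A / d    # resistivity from the dark-current term, ohm*m
    print(f"dark-current resistivity ~ {rho:.2e} ohm*m")
    ```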

  9. Successive equimarginal approach for optimal design of a pump and treat system

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoniu; Zhang, Chuan-Mian; Borthwick, John C.

    2007-08-01

    An economic concept-based optimization method is developed for groundwater remediation design. Design of a pump and treat (P&T) system is viewed as a resource allocation problem constrained by specified cleanup criteria. An optimal allocation of resources requires that the equimarginal principle, a fundamental economic principle, must hold. The proposed method is named successive equimarginal approach (SEA), which continuously shifts a pumping rate from a less effective well to a more effective one until equal marginal productivity for all units is reached. Through the successive process, the solution evenly approaches the multiple inequality constraints that represent the specified cleanup criteria in space and in time. The goal is to design an equal protection system so that the distributed contaminant plumes can be equally contained without bypass and overprotection is minimized. SEA is a hybrid of the gradient-based method and the deterministic heuristics-based method, which allows flexibility in dealing with multiple inequality constraints without using a penalty function and in balancing computational efficiency with robustness. This method was applied to design a large-scale P&T system for containment of multiple plumes at the former Blaine Naval Ammunition Depot (NAD) site, near Hastings, Nebraska. To evaluate this method, the SEA results were also compared with those using genetic algorithms.
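
    A minimal sketch of the successive equimarginal idea described above, with a hypothetical diminishing-returns surrogate standing in for the groundwater flow and transport model: rate is shifted from the well with the lowest marginal benefit to the one with the highest until the marginals equalize.

    ```python
    import numpy as np

    def successive_equimarginal(marginal, q, step=0.1, tol=0.01, iters=100000):
        """Shift pumping rate from the well with the lowest marginal benefit to
        the well with the highest until marginal benefits equalize; the total
        rate is conserved by construction."""
        q = np.asarray(q, float)
        for _ in range(iters):
            m = np.array([marginal(i, q) for i in range(len(q))])
            lo, hi = m.argmin(), m.argmax()
            if m[hi] - m[lo] < tol:
                break
            shift = min(step, q[lo])     # a pumping rate cannot go negative
            q[lo] -= shift
            q[hi] += shift
        return q

    # Hypothetical diminishing-returns surrogate: benefit_i = a_i * sqrt(q_i),
    # so marginal_i = a_i / (2 * sqrt(q_i)) and equal marginals give q_i ~ a_i**2.
    a = np.array([1.0, 2.0, 3.0])
    marg = lambda i, q: a[i] * 0.5 / np.sqrt(max(q[i], 1e-9))
    print(successive_equimarginal(marg, q=[30.0, 30.0, 30.0]).round(1))
    ```

    In the actual application the marginal benefit of each well would come from model runs rather than a closed-form surrogate, which is why the method's simple reallocation loop is attractive.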

  10. Ligand design by a combinatorial approach based on modeling and experiment: application to HLA-DR4

    NASA Astrophysics Data System (ADS)

    Evensen, Erik; Joseph-McCarthy, Diane; Weiss, Gregory A.; Schreiber, Stuart L.; Karplus, Martin

    2007-07-01

    Combinatorial synthesis and large scale screening methods are being used increasingly in drug discovery, particularly for finding novel lead compounds. Although these "random" methods sample larger areas of chemical space than traditional synthetic approaches, only a relatively small percentage of all possible compounds is practically accessible. It is therefore helpful to select regions of chemical space that have a greater likelihood of yielding useful leads. When three-dimensional structural data are available for the target molecule, this can be achieved by applying structure-based computational design methods to focus the combinatorial library. This is advantageous over the standard usage of computational methods to design a small number of specific novel ligands, because here computation is employed as part of the combinatorial design process and so is required only to determine a propensity for binding of certain chemical moieties in regions of the target molecule. This paper describes the application of the Multiple Copy Simultaneous Search (MCSS) method, an active site mapping and de novo structure-based design tool, to design a focused combinatorial library for the class II MHC protein HLA-DR4. Methods for synthesizing and screening the computationally designed library are presented, and evidence is provided to show that binding was achieved. Although the structure of the protein-ligand complex could not be determined, experimental results, including cross-exclusion of a known HLA-DR4 peptide ligand (HA) by a compound from the library, and computational model building suggest that at least one of the ligands designed and identified by the methods described binds in a mode similar to that of native peptides.

  11. Subscale and Full-Scale Testing of Buckling-Critical Launch Vehicle Shell Structures

    NASA Technical Reports Server (NTRS)

    Hilburger, Mark W.; Haynie, Waddy T.; Lovejoy, Andrew E.; Roberts, Michael G.; Norris, Jeffery P.; Waters, W. Allen; Herring, Helen M.

    2012-01-01

    New analysis-based shell buckling design factors (aka knockdown factors), along with associated design and analysis technologies, are being developed by NASA for the design of launch vehicle structures. Preliminary design studies indicate that implementation of these new knockdown factors can enable significant reductions in mass and mass-growth in these vehicles and can help mitigate some of NASA's launch vehicle development and performance risks by reducing the reliance on testing, providing high-fidelity estimates of structural performance, reliability, and robustness, and enabling increased payload capability. However, in order to validate any new analysis-based design data or methods, a series of carefully designed and executed structural tests are required at both the subscale and full-scale levels. This paper describes recent buckling test efforts at NASA on two different orthogrid-stiffened metallic cylindrical shell test articles. One of the test articles was an 8-ft-diameter orthogrid-stiffened cylinder subjected to an axial compression load. The second test article was a 27.5-ft-diameter Space Shuttle External Tank-derived cylinder subjected to combined internal pressure and axial compression.

  12. Herpetological Monitoring Using a Pitfall Trapping Design in Southern California

    USGS Publications Warehouse

    Fisher, Robert; Stokes, Drew; Rochester, Carlton; Brehme, Cheryl; Hathaway, Stacie; Case, Ted

    2008-01-01

    The steps necessary to conduct a pitfall trapping survey for small terrestrial vertebrates are presented. Descriptions of the materials needed and the methods to build trapping equipment from raw materials are discussed. Recommended data collection techniques are given along with suggested data fields. Animal specimen processing procedures, including toe- and scale-clipping, are described for lizards, snakes, frogs, and salamanders. Methods are presented for conducting vegetation surveys that can be used to classify the environment associated with each pitfall trap array. Techniques for data storage and presentation are given based on commonly used computer applications. As with any study, much consideration should be given to the study design and methods before beginning any data collection effort.

  13. A comparison of methods to estimate future sub-daily design rainfall

    NASA Astrophysics Data System (ADS)

    Li, J.; Johnson, F.; Evans, J.; Sharma, A.

    2017-12-01

    Warmer temperatures are expected to increase extreme short-duration rainfall due to the increased moisture-holding capacity of the atmosphere. While attention has been paid to the impacts of climate change on future design rainfalls at daily or longer time scales, potential changes in short-duration design rainfalls have often been overlooked due to the limited availability of sub-daily projections and observations. This study uses a high-resolution regional climate model (RCM) to predict the changes in sub-daily design rainfalls for the Greater Sydney region in Australia. Sixteen methods for predicting changes to sub-daily future extremes are assessed based on different options for bias correction, disaggregation and frequency analysis. A Monte Carlo cross-validation procedure is employed to evaluate the skill of each method in estimating the design rainfall for the current climate. It is found that bias correction significantly improves the accuracy of the design rainfall estimated for the current climate. For 1 h events, bias correcting the hourly annual maximum rainfall simulated by the RCM produces design rainfall closest to observations, whereas for multi-hour events, disaggregating the daily rainfall total is recommended. This suggests that the RCM fails to simulate the observed multi-duration rainfall persistence, which is a common issue for most climate models. Despite the significant differences in the estimated design rainfalls between methods, all methods lead to an increase in design rainfalls across the majority of the study region.
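
    One of the compared options, bias correcting the RCM annual maxima against observations, can be sketched with empirical quantile mapping (synthetic data; the paper's exact bias-correction variants are not reproduced here):

```python
# A minimal sketch of empirical quantile-mapping bias correction of
# RCM-simulated hourly annual-maximum rainfall; all data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
obs = rng.gumbel(20.0, 6.0, 40)         # observed hourly annual maxima (mm)
rcm = rng.gumbel(14.0, 4.0, 40)         # biased RCM maxima, current climate
rcm_future = rng.gumbel(17.0, 5.0, 40)  # RCM maxima, future climate

def quantile_map(x, model_ref, obs_ref):
    """Map values x from the model distribution onto the observed one."""
    probs = np.linspace(0.01, 0.99, 99)
    mq = np.quantile(model_ref, probs)
    oq = np.quantile(obs_ref, probs)
    return np.interp(x, mq, oq)

corrected_future = quantile_map(rcm_future, rcm, obs)
print("raw future mean:", rcm_future.mean().round(1),
      "bias-corrected:", corrected_future.mean().round(1))
```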

  14. A study of large scale gust generation in a small scale atmospheric wind tunnel with applications to Micro Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Roadman, Jason Markos

    Modern technology operating in the atmospheric boundary layer can always benefit from more accurate wind tunnel testing. While scaled atmospheric boundary layer tunnels are well developed, tunnels replicating portions of the atmospheric boundary layer turbulence at full scale are a comparatively new concept. Testing at full-scale Reynolds numbers with full-scale turbulence in an "atmospheric wind tunnel" is sought. Many programs could utilize such a tool, including Micro Aerial Vehicle (MAV) development, the wind energy industry, fuel-efficient vehicle design, and the study of bird and insect flight, to name just a few. The small scale of MAVs provides the somewhat unique capability of full-scale Reynolds number testing in a wind tunnel. However, that same small scale creates interactions under real-world flight conditions, atmospheric gusts for example, that lead to a need for testing under more complex flows than the standard uniform flow found in most wind tunnels. It is for these reasons that MAVs are used as the initial testing application for the atmospheric gust tunnel. An analytical model for both discrete gusts and a continuous spectrum of gusts is examined. Then, methods for generating gusts in agreement with that model are investigated. Previously used methods are reviewed and a gust generation apparatus is designed. Expected turbulence and gust characteristics of this apparatus are compared with atmospheric data. The construction of an active "gust generator" for a new atmospheric tunnel is reviewed and the turbulence it generates is measured using single and cross hot wires. Results from this grid are compared to atmospheric turbulence, and it is shown that various gust strengths can be produced, corresponding to weather ranging from calm to quite gusty. An initial test is performed in the atmospheric wind tunnel whereby the effects of various turbulence conditions on transition and separation on the upper surface of a MAV wing are investigated using the surface oil flow visualization technique.

  15. Unsupervised learning on scientific ocean drilling datasets from the South China Sea

    NASA Astrophysics Data System (ADS)

    Tse, Kevin C.; Chiu, Hon-Chim; Tsang, Man-Yin; Li, Yiliang; Lam, Edmund Y.

    2018-06-01

    Unsupervised learning methods were applied to explore data patterns in multivariate geophysical datasets collected from ocean floor sediment core samples coming from scientific ocean drilling in the South China Sea. Compared to studies on similar datasets, but using supervised learning methods which are designed to make predictions based on sample training data, unsupervised learning methods require no a priori information and focus only on the input data. In this study, popular unsupervised learning methods including K-means, self-organizing maps, hierarchical clustering and random forest were coupled with different distance metrics to form exploratory data clusters. The resulting data clusters were externally validated with lithologic units and geologic time scales assigned to the datasets by conventional methods. Compact and connected data clusters displayed varying degrees of correspondence with existing classification by lithologic units and geologic time scales. K-means and self-organizing maps were observed to perform better with lithologic units while random forest corresponded best with geologic time scales. This study sets a pioneering example of how unsupervised machine learning methods can be used as an automatic processing tool for the increasingly high volume of scientific ocean drilling data.
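
    The external-validation step can be sketched as follows, clustering stand-in multivariate measurements with K-means and scoring agreement with lithologic-unit labels via the adjusted Rand index (synthetic data; the study's preprocessing and distance metrics are not reproduced):

```python
# A minimal sketch: cluster synthetic "core-log" measurements, then compare
# the unsupervised clusters to conventionally assigned lithologic units.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Three hypothetical lithologic units with distinct geophysical signatures
X = np.vstack([rng.normal(m, 0.5, (100, 4)) for m in (0.0, 2.0, 4.0)])
units = np.repeat([0, 1, 2], 100)          # labels assigned conventionally

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))

print("adjusted Rand index vs lithologic units:",
      round(adjusted_rand_score(units, labels), 3))
```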

  16. A randomized controlled trial of acupuncture and moxibustion to treat Bell's palsy according to different stages: design and protocol.

    PubMed

    Chen, Xiaoqin; Li, Ying; Zheng, Hui; Hu, Kaming; Zhang, Hongxing; Zhao, Ling; Li, Yan; Liu, Lian; Mang, Lingling; Yu, Shuyuan

    2009-07-01

    Acupuncture is one of the most commonly used treatments for Bell's palsy in China, and a variety of acupuncture treatment options exist in clinical practice. Because Bell's palsy progresses through three path-stages (acute, resting and restoration), whether acupuncture is effective in the different path-stages, and which acupuncture treatment is best, are major issues in acupuncture clinical trials for Bell's palsy. In this article, we report the design and protocol of a large-sample multi-center randomized controlled trial of acupuncture for Bell's palsy. There are five acupuncture groups, four staged according to path-stage and one not. In total, 900 patients with Bell's palsy are enrolled in this study. These patients are randomly assigned to one of five groups: 1) staging acupuncture, 2) staging acupuncture and moxibustion, 3) staging electro-acupuncture, 4) staging acupuncture along yangming musculature, or 5) a non-staging acupuncture control group. The outcome measurements in this trial are the effects compared among these five groups in terms of the House-Brackmann scale (Global Score and Regional Score), the Facial Disability Index scale, the Classification scale of Facial Paralysis, and the WHOQOL-BREF scale before randomization (baseline phase) and after randomization. The results of this trial will assess the efficacy of staging acupuncture and moxibustion for Bell's palsy, and identify the best acupuncture treatment among these five methods.

  17. A pilot mixed methods study of patient satisfaction with chiropractic care for back pain.

    PubMed

    Rowell, Robert M; Polipnick, Judith

    2008-10-01

    Patient satisfaction is important to payers, clinicians, and patients. The concept of satisfaction is multifactorial and measurement is challenging. Our objective was to explore the use of a mixed-methods design to examine patient satisfaction with chiropractic care for low back pain. Patients were treated 3 times per week for 3 weeks. Outcomes were collected at week 3 and week 4. Qualitative interviews were conducted by the treating clinician and a nontreating staff member. Outcome measures were the Roland Morris Back Pain Disability Questionnaire, the visual analog scale for pain, and the Patient Satisfaction Scale. Interviews were recorded and transcribed and analyzed for themes and constructs of satisfaction. We compared qualitative interview data with quantitative outcomes, and qualitative data from 2 different interviewers. All patients reported high levels of satisfaction. Clinical outcomes were unremarkable with little change noted on visual analog scale and Roland Morris Back Pain Disability Questionnaire scores. We categorized patient comments into the same constructs of satisfaction as those identified for the Patient Satisfaction Scale: Information, Effectiveness, and Caring. An additional construct (Quality of Care) and additional subcategories were identified. Satisfaction with care is not explained by outcome alone. The qualitative data collected from 2 different interviewers had few differences. The results of this study suggest that it is feasible to use a mixed-methods design to examine patient satisfaction. We were able to refine data collection and analysis procedures for the outcome measures and qualitative interview data. We identified limitations and offer recommendations for the next step: the implementation of a larger study.

  18. Johnson Noise Thermometry in the range 505 K to 933 K

    NASA Astrophysics Data System (ADS)

    Tew, Weston; Labenski, John; Nam, Sae Woo; Benz, Samuel; Dresselhaus, Paul; Martinis, John

    2006-03-01

    The International Temperature Scale of 1990 (ITS-90) is an artifact-based temperature scale, T90, designed to approximate thermodynamic temperature T. The thermodynamic errors of the ITS-90, characterized as the value of T-T90, have only recently been quantified by primary thermodynamic methods. Johnson Noise Thermometry (JNT) is a primary method which can be applied over wide temperature ranges, and NIST is currently using JNT to determine T-T90 in the range 505 K to 933 K, overlapping both acoustic gas-based and radiation-based thermometry. Advances in digital electronics have made the computationally intensive processing required for JNT viable, using noise voltage correlation in the frequency domain. We have also optimized the design of the 5-wire JNT temperature probes to minimize electromagnetic interference and transmission line effects. Statistical uncertainties under 50 μK/K are achievable using relatively modest bandwidths of ~100 kHz. The NIST JNT system will provide critical data for T-T90, linking the highly accurate acoustic gas-based data at lower temperatures with the higher-temperature radiation-based data, and forming the basis for a new International Temperature Scale with greatly improved thermodynamic accuracy.
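
    The measurement rests on the Nyquist relation, mean-square noise voltage = 4*k_B*T*R*(bandwidth). A back-of-envelope sketch (the 100-ohm sensing resistance is an assumed value; the temperature is the zinc freezing point, inside the range above):

```python
# Back-of-envelope Johnson noise level via the Nyquist relation
# <V^2> = 4 * k_B * T * R * df.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 692.677          # K, zinc freezing point (within 505 K - 933 K)
R = 100.0            # ohm, assumed probe resistance
df = 100e3           # Hz, the ~100 kHz measurement bandwidth

v_rms = math.sqrt(4 * k_B * T * R * df)
print(f"rms Johnson noise: {v_rms * 1e9:.0f} nV")   # about 620 nV
```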

  19. Bioinspired Principles for Large-Scale Networked Sensor Systems: An Overview

    PubMed Central

    Jacobsen, Rune Hylsberg; Zhang, Qi; Toftegaard, Thomas Skjødeberg

    2011-01-01

    Biology has often been used as a source of inspiration in computer science and engineering. Bioinspired principles have found their way into network node design and research due to the appealing analogies between biological systems and large networks of small sensors. This paper provides an overview of bioinspired principles and methods, such as swarm intelligence, natural time synchronization, artificial immune systems and intercellular information exchange, applicable to sensor network design. Bioinspired principles and methods are discussed in the context of routing, clustering, time synchronization, optimal node deployment, localization, and security and privacy. PMID:22163841

  20. Aerothermodynamic Design Sensitivities for a Reacting Gas Flow Solver on an Unstructured Mesh Using a Discrete Adjoint Formulation

    NASA Astrophysics Data System (ADS)

    Thompson, Kyle Bonner

    An algorithm is described to efficiently compute aerothermodynamic design sensitivities using a decoupled variable set. In a conventional approach to computing design sensitivities for reacting flows, the species continuity equations are fully coupled to the conservation laws for momentum and energy. In this algorithm, the species continuity equations are solved separately from the mixture continuity, momentum, and total energy equations. This decoupling simplifies the implicit system, so that the flow solver can be made significantly more efficient, with very little penalty on overall scheme robustness. Most importantly, the computational cost of the point implicit relaxation is shown to scale linearly with the number of species for the decoupled system, whereas the fully coupled approach scales quadratically. Also, the decoupled method significantly reduces the cost in wall time and memory in comparison to the fully coupled approach. This decoupled approach for computing design sensitivities with the adjoint system is demonstrated for inviscid flow in chemical non-equilibrium around a re-entry vehicle with a retro-firing annular nozzle. The sensitivities of the surface temperature and mass flow rate through the nozzle plenum are computed with respect to plenum conditions and verified against sensitivities computed using a complex-variable finite-difference approach. The decoupled scheme significantly reduces the computational time and memory required to complete the optimization, making this an attractive method for high-fidelity design of hypersonic vehicles.
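
    A rough cost model (my illustration, not the paper's analysis) shows why the decoupled variable set helps: the fully coupled point-implicit solve works on a dense (5+ns) x (5+ns) block per grid point, while the decoupled scheme works on a fixed 5x5 mixture block plus ns inexpensive species updates:

```python
# Illustrative per-point relaxation cost vs. number of species ns,
# matching the quadratic-vs-linear scaling described in the abstract.
def coupled_cost(ns):
    n = 5 + ns            # mixture equations fully coupled to ns species
    return n ** 2         # work grows with the square of the block size

def decoupled_cost(ns):
    return 5 ** 2 + ns    # fixed 5x5 mixture block + linear species updates

for ns in (5, 11, 20):
    print(f"ns={ns:2d}  coupled~{coupled_cost(ns):4d}  "
          f"decoupled~{decoupled_cost(ns):3d}")
```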

  1. Conceptual design of flapping-wing micro air vehicles.

    PubMed

    Whitney, J P; Wood, R J

    2012-09-01

    Traditional micro air vehicles (MAVs) are miniature versions of full-scale aircraft, and their design principles closely follow those of their larger counterparts. The first step in aircraft design is the development of a conceptual design, where basic specifications and vehicle size are established. Conceptual design methods do not rely on specific knowledge of the propulsion system, vehicle layout and subsystems; these details are addressed later in the design process. Non-traditional MAV designs based on birds or insects are less common and lack well-established conceptual design methods. This paper presents a conceptual design process for hovering flapping-wing vehicles. An energy-based accounting of propulsion and aerodynamics is combined with a one degree-of-freedom dynamic flapping model. Important results include simple analytical expressions for flight endurance and range, predictions for maximum feasible wing size and body mass, and critical design space restrictions resulting from finite wing inertia. A new figure-of-merit for wing structural-inertial efficiency is proposed and used to quantify the performance of real and artificial insect wings. The impact of these results on future flapping-wing MAV designs is discussed in detail.
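
    In the spirit of such an energy-based accounting (the paper's actual expressions are not reproduced; the mass, disk area, efficiency, and battery energy below are invented), ideal hover power from actuator-disk theory bounds endurance:

```python
# An illustrative energy balance for a hovering flapping-wing vehicle.
import math

m = 0.5e-3                      # kg, vehicle mass (assumed)
A = math.pi * 0.025 ** 2        # m^2, swept disk of ~2.5 cm wings (assumed)
rho = 1.225                     # kg/m^3, sea-level air density
eta = 0.10                      # overall battery-to-air efficiency (assumed)
E_batt = 10.0                   # J, onboard energy storage (assumed)

P_ideal = math.sqrt((m * 9.81) ** 3 / (2 * rho * A))  # W, ideal hover power
endurance = E_batt * eta / P_ideal                    # s

print(f"ideal hover power {P_ideal * 1e3:.1f} mW, "
      f"endurance ~{endurance:.0f} s")
```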

  2. A Method for Estimating Noise from Full-Scale Distributed Exhaust Nozzles

    NASA Technical Reports Server (NTRS)

    Kinzie, Kevin W.; Schein, David B.

    2004-01-01

    A method to estimate the full-scale noise suppression from a scale model distributed exhaust nozzle (DEN) is presented. For a conventional scale model exhaust nozzle, Strouhal number scaling using a scale factor related to the nozzle exit area is typically applied, which shifts model-scale frequency in proportion to the geometric scale factor. However, model-scale DEN designs have two inherent length scales. One is associated with the mini-nozzles, whose size does not change in going from model scale to full scale. The other is associated with the overall nozzle exit area, which is much smaller than full size. Consequently, lower frequency energy that is generated by the coalesced jet plume should scale to lower frequency, but higher frequency energy generated by individual mini-jets does not shift frequency. In addition, jet-jet acoustic shielding by the array of mini-nozzles is a significant noise reduction effect that may change with DEN model size. A technique has been developed to scale laboratory model spectral data based on the premise that high and low frequency content must be treated differently during the scaling process. The model-scale distributed exhaust spectra are divided into low and high frequency regions that are then adjusted to full scale separately based on different physics-based scaling laws. The regions are then recombined to create an estimate of the full-scale acoustic spectra. These spectra can then be converted to perceived noise levels (PNL). The paper presents the details of this methodology and provides an example of the estimated noise suppression by a distributed exhaust nozzle compared to a round conic nozzle.
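
    A minimal sketch of the two-region scaling follows (synthetic spectrum; the split frequency, scale factor, and per-region amplitude corrections are assumptions, not the paper's values):

```python
# Low-frequency (coalesced-plume) content shifts down in frequency by the
# geometric scale factor; high-frequency (mini-jet) content keeps its
# model-scale frequency because the mini-nozzles are already full size.
import numpy as np

scale = 8.0                       # model-to-full-scale factor (assumed)
f_split = 4000.0                  # Hz, assumed boundary between the regions

f_model = np.logspace(2, 5, 200)  # model-scale band, 100 Hz - 100 kHz
spl_model = 100.0 - 10.0 * np.abs(np.log10(f_model / 3000.0))  # toy spectrum

low = f_model < f_split
f_full = np.where(low, f_model / scale, f_model)  # shift only the low band
# Per-region amplitude corrections for size and distance would be applied
# here, then the two regions recombined into one full-scale estimate.
print(f"full-scale band: {f_full.min():.0f} Hz to {f_full.max():.0f} Hz")
```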

  3. Expediting analog design retargeting by design knowledge re-use and circuit synthesis: a practical example on a Delta-Sigma modulator

    NASA Astrophysics Data System (ADS)

    Webb, Matthew; Tang, Hua

    2016-08-01

    In the past decade or two, due to constant and rapid technology changes, analog design re-use, or design retargeting to newer technologies, has been brought to the table in order to expedite the design process and improve time-to-market. If properly conducted, analog design retargeting can significantly cut down the design cycle compared to designs starting from scratch. In this article, we present an empirical and general method for efficient analog design retargeting by design knowledge re-use and circuit synthesis (CS). The method first identifies the circuit blocks that compose the source system and extracts the performance parameter specifications of each circuit block. Then, for each circuit block, it scales the values of design variables (DV) from the source design to derive an initial design in the target technology. Depending on the performance of this initial target design, a design space is defined for synthesis. Subsequently, each circuit block is automatically synthesised using state-of-the-art analog synthesis tools based on a combination of global and local optimisation techniques to achieve performance specifications comparable to those extracted from the source system. Finally, the overall system is composed of the synthesised circuit blocks in the target technology. We illustrate the method using a practical example of a complex Delta-Sigma modulator (DSM) circuit.

  4. SELECTIVE DISSEMINATION OF INFORMATION--REVIEW OF SELECTED SYSTEMS AND A DESIGN FOR ARMY TECHNICAL LIBRARIES. FINAL REPORT. ARMY TECHNICAL LIBRARY IMPROVEMENT STUDIES (ATLIS), REPORT NO. 8.

    ERIC Educational Resources Information Center

    BIVONA, WILLIAM A.

    THIS REPORT PRESENTS AN ANALYSIS OF OVER EIGHTEEN SMALL, INTERMEDIATE, AND LARGE SCALE SYSTEMS FOR THE SELECTIVE DISSEMINATION OF INFORMATION (SDI). SYSTEMS ARE COMPARED AND ANALYZED WITH RESPECT TO DESIGN CRITERIA AND THE FOLLOWING NINE SYSTEM PARAMETERS--(1) INFORMATION INPUT, (2) METHODS OF INDEXING AND ABSTRACTING, (3) USER INTEREST PROFILE…

  5. Thermal, size and surface effects on the nonlinear pull-in of small-scale piezoelectric actuators

    NASA Astrophysics Data System (ADS)

    SoltanRezaee, Masoud; Ghazavi, Mohammad-Reza

    2017-09-01

    Electrostatically actuated miniature wires/tubes have many operational applications in the high-tech industries. In this research, the nonlinear pull-in instability of piezoelectric thermal small-scale switches subjected to Coulomb and dissipative forces is analyzed using strain gradient and modified couple stress theories. The discretized governing equation is solved numerically by means of the step-by-step linearization method. The correctness of the formulated model and solution procedure is validated through comparison with experimental and several theoretical results. Herein, the length scale, surface energy, van der Waals attraction and nonlinear curvature are considered in the present comprehensive model, and the thermo-electro-mechanical behavior of cantilever piezo-beams is discussed in detail. It is found that the piezoelectric actuation can be used as a design parameter to control the pull-in phenomenon. The obtained results are applicable in stability analysis, practical design and control of actuated miniature intelligent devices.

  6. Programming Self-Assembly of DNA Origami Honeycomb Two-Dimensional Lattices and Plasmonic Metamaterials.

    PubMed

    Wang, Pengfei; Gaitanaros, Stavros; Lee, Seungwoo; Bathe, Mark; Shih, William M; Ke, Yonggang

    2016-06-22

    Scaffolded DNA origami has proven to be a versatile method for generating functional nanostructures with prescribed sub-100 nm shapes. Programming DNA-origami tiles to form large-scale 2D lattices that span hundreds of nanometers to the micrometer scale could provide an enabling platform for diverse applications ranging from metamaterials to surface-based biophysical assays. Toward this end, here we design a family of hexagonal DNA-origami tiles using computer-aided design and demonstrate successful self-assembly of micrometer-scale 2D honeycomb lattices and tubes by controlling their geometric and mechanical properties including their interconnecting strands. Our results offer insight into programmed self-assembly of low-defect supra-molecular DNA-origami 2D lattices and tubes. In addition, we demonstrate that these DNA-origami hexagon tiles and honeycomb lattices are versatile platforms for assembling optical metamaterials via programmable spatial arrangement of gold nanoparticles (AuNPs) into cluster and superlattice geometries.

  7. Recognition of Roasted Coffee Bean Levels using Image Processing and Neural Network

    NASA Astrophysics Data System (ADS)

    Nasution, T. H.; Andayani, U.

    2017-03-01

    Coffee bean roast levels have distinct visual characteristics, yet many people cannot recognize them reliably. In this research, we propose a method to recognize the roast level of coffee beans from digital images by processing the images and classifying them with a backpropagation neural network. The steps consist of image acquisition, pre-processing, feature extraction using the Gray Level Co-occurrence Matrix (GLCM) method, and finally normalization of the extracted features using decimal scaling. The normalized feature values become the input for classification by the backpropagation neural network. The results showed that the proposed method is able to identify the coffee bean roast level with an accuracy of 97.5%.
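
    Decimal scaling itself is simple: each feature is divided by 10^j, where j is the smallest integer that brings the largest magnitude below 1. A sketch with hypothetical GLCM feature values:

```python
# Decimal-scaling normalization of a feature vector; the values below are
# made-up stand-ins for GLCM features.
import numpy as np

def decimal_scaling(x):
    # j = floor(log10(max|x|)) + 1 guarantees every |x| / 10^j < 1
    j = int(np.floor(np.log10(np.max(np.abs(x))))) + 1
    return x / (10.0 ** j)

glcm_features = np.array([152.3, 87.1, 430.8, 9.6])   # hypothetical values
print(decimal_scaling(glcm_features))                 # all magnitudes < 1
```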

  8. Parametric study of variation in cargo-airplane performance related to progression from current to spanloader designs

    NASA Technical Reports Server (NTRS)

    Toll, T. A.

    1980-01-01

    A parametric analysis was made to investigate the relationship between current cargo airplanes and possible future designs that may differ greatly in both size and configuration. The method makes use of empirical scaling laws developed from statistical studies of data from current and advanced airplanes and, in addition, accounts for payload density, effects of span-distributed load, and variations in tail area ratio. The method is believed to be particularly useful for exploratory studies of design and technology options for large airplanes. The analysis predicts somewhat more favorable variations of the ratios of payload to gross weight and block fuel to payload as airplane size is increased than has been generally understood from interpretations of the cube-square law. In terms of these same ratios, large all-wing (spanloader) designs show an advantage over wing-fuselage designs.

  9. Advanced composite elevator for Boeing 727 aircraft, volume 2

    NASA Technical Reports Server (NTRS)

    Chovil, D. V.; Grant, W. D.; Jamison, E. S.; Syder, H.; Desper, O. E.; Harvey, S. T.; Mccarty, J. E.

    1980-01-01

    Preliminary design activity consisted of developing and analyzing alternate design concepts and selecting the optimum elevator configuration. This included trade studies in which durability, inspectability, producibility, repairability, and customer acceptance were evaluated. Preliminary development efforts consisted of evaluating and selecting material, identifying ancillary structural development test requirements, and defining full scale ground and flight test requirements necessary to obtain Federal Aviation Administration (FAA) certification. After selection of the optimum elevator configuration, detail design was begun and included basic configuration design improvements resulting from manufacturing verification hardware, the ancillary test program, weight analysis, and structural analysis. Detail and assembly tools were designed and fabricated to support a full-scope production program, rather than a limited run. The producibility development programs were used to verify tooling approaches, fabrication processes, and inspection methods for the production mode. Quality parts were readily fabricated and assembled with a minimum rejection rate, using prior inspection methods.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kojima, S.; Yokosawa, M.; Matsuyama, M.

    To study the practical application of a tritium separation process based on Self-Developing Gas Chromatography (SDGC) with a Pd-Pt alloy, intermediate scale-up experiments (22 mm ID x 2 m length column) and the development of a computational simulation method have been conducted. In addition, intermediate-scale production of Pd-Pt powder has been developed for the scale-up experiments. The following results were obtained: (1) a 50-fold scale-up from 3 mm to 22 mm causes no significant impact on the SDGC process; (2) the Pd-Pt alloy powder is applicable to a large-size SDGC process; and (3) the simulation enables preparation of a conceptual design of an SDGC process for tritium separation.

  11. Deep multi-scale convolutional neural network for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

    In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, in contrast to conventional convolution, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer that contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and modestly improves the classification accuracy. In addition, newer deep learning techniques such as the ReLU activation are utilized. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy than competing methods.
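
    A sketch of such a multi-scale convolution block follows (framework, kernel sizes, and channel counts are assumptions; the paper's exact architecture is not reproduced):

```python
# Three parallel 1-D convolutions with different receptive fields over the
# spectral dimension, concatenated and followed by ReLU and dropout.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Three branches with kernel sizes 3, 5, 7 (assumed values)
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7)
        ])
        self.relu = nn.ReLU()
        self.drop = nn.Dropout(0.5)

    def forward(self, x):
        feats = [self.relu(b(x)) for b in self.branches]
        return self.drop(torch.cat(feats, dim=1))

# A batch of hyperspectral pixels as 1-D spectral sequences:
x = torch.randn(8, 1, 103)        # 103 bands, as in University of Pavia
y = MultiScaleBlock(1, 16)(x)
print(y.shape)                    # torch.Size([8, 48, 103])
```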

  12. Design of small-scale gradient coils in magnetic resonance imaging by using the topology optimization method

    NASA Astrophysics Data System (ADS)

    Pan, Hui; Jia, Feng; Liu, Zhen-Yu; Zaitsev, Maxim; Hennig, Juergen; Korvink, Jan G.

    2018-05-01

    Not Available. Project supported by the National Natural Science Foundation of China (Grant Nos. 51675506 and 51275504) and the German Research Foundation (DFG) (Grant Nos. ZA 422/5-1 and ZA 422/6-1).

  13. New Challenges in Computational Thermal Hydraulics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yadigaroglu, George; Lakehal, Djamel

    New needs and opportunities drive the development of novel computational methods for the design and safety analysis of light water reactors (LWRs). Some new methods are likely to be three dimensional. Coupling is expected between system codes, computational fluid dynamics (CFD) modules, and cascades of computations at scales ranging from the macro- or system scale to the micro- or turbulence scales, with the various levels continuously exchanging information back and forth. The ISP-42/PANDA and the international SETH project provide opportunities for testing applications of single-phase CFD methods to LWR safety problems. Although industrial single-phase CFD applications are commonplace, computational multifluid dynamics is still under development. However, first applications are appearing; the state of the art and its potential uses are discussed. The case study of condensation of steam/air mixtures injected from a downward-facing vent into a pool of water is a perfect illustration of a simulation cascade: At the top of the hierarchy of scales, system behavior can be modeled with a system code; at the central level, the volume-of-fluid method can be applied to predict large-scale bubbling behavior; at the bottom of the cascade, direct-contact condensation can be treated with direct numerical simulation, in which turbulent flow (in both the gas and the liquid), interfacial dynamics, and heat/mass transfer are directly simulated without resorting to models.

  14. Fuzzy logic-based flight control system design

    NASA Astrophysics Data System (ADS)

    Nho, Kyungmoon

    The application of fuzzy logic to aircraft motion control is studied in this dissertation. The self-tuning fuzzy techniques are developed by changing input scaling factors to obtain a robust fuzzy controller over a wide range of operating conditions and nonlinearities for a nonlinear aircraft model. It is demonstrated that the properly adjusted input scaling factors can meet the required performance and robustness in a fuzzy controller. For a simple demonstration of the easy design and control capability of a fuzzy controller, a proportional-derivative (PD) fuzzy control system is compared to the conventional controller for a simple dynamical system. This thesis also describes the design principles and stability analysis of fuzzy control systems by considering the key features of a fuzzy control system including the fuzzification, rule-base and defuzzification. The wing-rock motion of slender delta wings, a linear aircraft model and the six degree of freedom nonlinear aircraft dynamics are considered to illustrate several self-tuning methods employing change in input scaling factors. Finally, this dissertation is concluded with numerical simulation of glide-slope capture in windshear demonstrating the robustness of the fuzzy logic based flight control system.
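
    A minimal PD-type fuzzy controller sketch showing where the input scaling factors enter (the membership functions, rule table, and gains are illustrative, not the dissertation's design):

```python
# A PD fuzzy controller: error and error-rate are scaled into [-1, 1] by
# input scaling factors Ke and Kde (the self-tuning "knobs"), fuzzified
# with triangular sets, combined by a 3x3 rule table, and defuzzified by
# a weighted average.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

centers = np.array([-1.0, 0.0, 1.0])        # N, Z, P sets on [-1, 1]
u_rule = np.array([[-1.0, -1.0, 0.0],       # consequents for (e set, de set)
                   [-1.0,  0.0, 1.0],
                   [ 0.0,  1.0, 1.0]])

def fuzzy_pd(e, de, Ke=0.5, Kde=0.1, Ku=2.0):
    """Ke, Kde: input scaling factors; Ku: output gain."""
    es, des = np.clip(e * Ke, -1, 1), np.clip(de * Kde, -1, 1)
    mu_e = [tri(es, c - 1, c, c + 1) for c in centers]
    mu_de = [tri(des, c - 1, c, c + 1) for c in centers]
    w = np.array([[me * mde for mde in mu_de] for me in mu_e])
    return Ku * (w * u_rule).sum() / (w.sum() + 1e-12)

print(fuzzy_pd(e=0.8, de=-0.2))
```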

  15. In-chip direct laser writing of a centimeter-scale acoustic micromixer

    NASA Astrophysics Data System (ADS)

    van't Oever, Jorick; Spannenburg, Niels; Offerhaus, Herman; van den Ende, Dirk; Herek, Jennifer; Mugele, Frieder

    2015-04-01

    A centimeter-scale micromixer was fabricated by two-photon polymerization inside a closed microchannel using direct laser writing. The structure consists of a repeating pattern of 20 μm×20 μm×155 μm acrylate pillars and extends over 1.2 cm. Using external ultrasonic actuation, the micropillars locally induce streaming with flow speeds of 30 μm s-1. The fabrication method allows for large flexibility and more complex designs.

  16. Cache Coherence Protocols for Large-Scale Multiprocessors

    DTIC Science & Technology

    1990-09-01

    [Fragmented extraction snippet. Recoverable content: the report compares cache coherence protocols for large-scale machines and designates one coherence method by the acronym OCPD; the fragments reference Table 4.2 (Transaction Types and Costs, listing private read and write miss costs) and Figure 4-2 (processor utilizations of the Weather program).]

  17. Multi-Scale Experiments to Evaluate Mobility Control Methods for Enhancing the Sweep Efficiency of Injected Subsurface Remediation Amendments

    DTIC Science & Technology

    2010-08-01

    ...petroleum industry. Moreover, heterogeneity control strategies can be applied to improve the efficiency of a variety of in situ remediation technologies... conditions that differ significantly from those found in environmental systems. Therefore many of the design criteria used by the petroleum industry for... were helpful in constructing numerical models in up-scaled systems (2-D tanks). The UTCHEM model was able to successfully simulate 2-D experimental...

  18. Nanoscale piezoelectric vibration energy harvester design

    NASA Astrophysics Data System (ADS)

    Foruzande, Hamid Reza; Hajnayeb, Ali; Yaghootian, Amin

    2017-09-01

    Development of new nanoscale devices has increased the demand for new types of small-scale energy resources, such as ambient vibration energy harvesters. Among vibration energy harvesters, piezoelectric energy harvesters (PEHs) can be easily miniaturized and fabricated at micro and nano scales. This change in the dimensions of a PEH leads to a change in its governing equations of motion and, consequently, in the predicted harvested energy compared to a macroscale PEH. In this research, the effects of small-scale dimensions on the nonlinear vibration and harvested voltage of a nanoscale PEH are studied. The PEH is modeled as a cantilever piezoelectric bimorph nanobeam with a tip mass, using the Euler-Bernoulli beam theory in conjunction with Hamilton's principle. A harmonic base excitation is applied as a model of the ambient vibrations. The nonlocal elasticity theory is used to consider size effects in the developed model. The derived equations of motion are discretized using the assumed-modes method and solved using the method of multiple scales. A sensitivity analysis for the effect of different parameters of the system, in addition to size effects, is conducted. The results show the significance of nonlocal elasticity theory in the prediction of the system's nonlinear dynamic behavior. It is also observed that neglecting the size effects results in lower estimates of the PEH vibration amplitudes. The results pave the way for designing new nanoscale sensors in addition to PEHs.

  19. Development of performance specifications for hybrid modeling of floating wind turbines in wave basin tests

    DOE PAGES

    Hall, Matthew; Goupee, Andrew; Jonkman, Jason

    2017-08-24

    Hybrid modeling, combining physical testing and numerical simulation in real time, opens new opportunities in floating wind turbine research. Wave basin testing is an important validation step for floating support structure design, but the conventional approaches that use physical wind above the basin are limited by scaling problems in the aerodynamics. Applying wind turbine loads with an actuation system that is controlled by a simulation responding to the basin test in real time offers a way to avoid scaling problems and reduce cost barriers for floating wind turbine design validation in realistic coupled wind and wave conditions. This paper demonstrates the development of performance specifications for a system that couples a wave basin experiment with a wind turbine simulation. Two different points for the hybrid coupling are considered: the tower-base interface and the aero-rotor interface (the boundary between aerodynamics and the rotor structure). Analyzing simulations of three floating wind turbine designs across seven load cases reveals the motion and force requirements of the coupling system. By simulating errors in the hybrid coupling system, the sensitivity of the floating wind turbine response to coupling quality can be quantified. The sensitivity results can then be used to determine tolerances for motion tracking errors, force actuation errors, bandwidth limitations, and latency in the hybrid coupling system. These tolerances can guide the design of hybrid coupling systems to achieve desired levels of accuracy. An example demonstrates how the developed methods can be used to generate performance specifications for a system at 1:50 scale. Results show that sensitivities vary significantly between support structure designs and that coupling at the aero-rotor interface has less stringent requirements than those for coupling at the tower base. As a result, the methods and results presented here can inform design of future hybrid coupling systems and enhance understanding of how test results are affected by hybrid coupling quality.

  1. Parameter uncertainty and nonstationarity in regional extreme rainfall frequency analysis in Qu River Basin, East China

    NASA Astrophysics Data System (ADS)

    Zhu, Q.; Xu, Y. P.; Gu, H.

    2014-12-01

    Traditionally, regional frequency analysis methods were developed for stationary environmental conditions. Nevertheless, recent studies have identified significant changes in hydrological records, leading to the 'death' of stationarity. Moreover, uncertainty in hydrological frequency analysis is persistent. This study investigates the impact of one of the most important uncertainty sources, parameter uncertainty, together with nonstationarity, on design rainfall depth in Qu River Basin, East China. A spatial bootstrap is first proposed to analyze the uncertainty of design rainfall depth estimated by regional frequency analysis based on L-moments, and the results are compared with at-site estimates. Meanwhile, a method combining generalized additive models with a 30-year moving window is employed to analyze non-stationarity in the extreme rainfall regime. The results show that the uncertainties of design rainfall depth with a 100-year return period under stationary conditions, estimated by the regional spatial bootstrap, reach 15.07% and 12.22% with GEV and PE3 respectively. At the at-site scale, the uncertainties reach 17.18% and 15.44% with GEV and PE3 respectively. Under non-stationary conditions, the uncertainties of maximum rainfall depth (corresponding to design rainfall depth) with 0.01 annual exceedance probability (corresponding to a 100-year return period) are 23.09% and 13.83% with GEV and PE3 respectively. Comparing the 90% confidence intervals, the uncertainty of design rainfall depth resulting from parameter uncertainty is less than that from non-stationary frequency analysis with GEV, but slightly larger with PE3. This study indicates that the spatial bootstrap can be successfully applied to analyze the uncertainty of design rainfall depth at both regional and at-site scales. The non-stationary analysis shows that the differences between non-stationary quantiles and their stationary equivalents are important for decision makers in water resources management and risk management.
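
    The bootstrap idea can be sketched for a single site (the paper's spatial bootstrap resamples across the region; the GEV parameters and sample below are synthetic):

```python
# Bootstrap uncertainty of the 100-year design rainfall from a GEV fit,
# simplified to one site with synthetic annual maxima.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
annmax = genextreme.rvs(c=-0.1, loc=80, scale=25, size=50, random_state=rng)

levels = []
for _ in range(1000):
    resample = rng.choice(annmax, size=annmax.size, replace=True)
    c, loc, scale = genextreme.fit(resample)
    levels.append(genextreme.ppf(1 - 0.01, c, loc, scale))  # 100-yr quantile

lo, hi = np.percentile(levels, [5, 95])
print(f"100-year depth, 90% interval: [{lo:.1f}, {hi:.1f}] mm")
```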

  2. NASA/FAA general aviation crash dynamics program

    NASA Technical Reports Server (NTRS)

    Thomson, R. G.; Hayduk, R. J.; Carden, H. D.

    1981-01-01

    The program involves controlled full scale crash testing, nonlinear structural analyses to predict large deflection elastoplastic response, and load attenuating concepts for use in improved seat and subfloor structure. Both analytical and experimental methods are used to develop expertise in these areas. Analyses include simplified procedures for estimating energy dissipating capabilities and comprehensive computerized procedures for predicting airframe response. These analyses are developed to provide designers with methods for predicting accelerations, loads, and displacements on collapsing structure. Tests on typical full scale aircraft and on full and subscale structural components are performed to verify the analyses and to demonstrate load attenuating concepts. A special apparatus was built to test emergency locator transmitters when attached to representative aircraft structure. The apparatus is shown to provide a good simulation of the longitudinal crash pulse observed in full scale aircraft crash tests.

  3. Rahman Prize Lecture: Lattice Boltzmann simulation of complex states of flowing matter

    NASA Astrophysics Data System (ADS)

    Succi, Sauro

    Over the last three decades, the Lattice Boltzmann (LB) method has gained a prominent role in the numerical simulation of complex flows across an impressively broad range of scales, from fully-developed turbulence in real-life geometries, to multiphase flows in micro-fluidic devices, all the way down to biopolymer translocation in nanopores and lately, even quark-gluon plasmas. After a brief introduction to the main ideas behind the LB method and its historical developments, we shall present a few selected applications to complex flow problems at various scales of motion. Finally, we shall discuss prospects for extreme-scale LB simulations of outstanding problems in the physics of fluids and its interfaces with material sciences and biology, such as the modelling of fluid turbulence, the optimal design of nanoporous gold catalysts and protein folding/aggregation in crowded environments.

  4. A comparative study on assessment procedures and metric properties of two scoring systems of the Coma Recovery Scale-Revised items: standard and modified scores.

    PubMed

    Sattin, Davide; Lovaglio, Piergiorgio; Brenna, Greta; Covelli, Venusia; Rossi Sebastiano, Davide; Duran, Dunja; Minati, Ludovico; Giovannetti, Ambra Mara; Rosazza, Cristina; Bersano, Anna; Nigri, Anna; Ferraro, Stefania; Leonardi, Matilde

    2017-09-01

    The study compared the metric characteristics (discriminant capacity and factorial structure) of two different methods for scoring the items of the Coma Recovery Scale-Revised, and it analysed scale scores collected using the standard assessment procedure and a newly proposed method. Cross-sectional design/methodological study. Inpatient, neurological unit. A total of 153 patients with disorders of consciousness were consecutively enrolled between 2011 and 2013. All patients were assessed with the Coma Recovery Scale-Revised using standard (rater 1) and inverted (rater 2) procedures. Coma Recovery Scale-Revised score, number of cognitive and reflex behaviours and diagnosis. Regarding patient assessment, rater 1 using the standard procedure and rater 2 using the inverted procedure obtained the same best scores for each subscale of the Coma Recovery Scale-Revised for all patients, so no clinical (or statistical) difference was found between the two procedures. In 11 patients (7.7%), rater 2 noted that some Coma Recovery Scale-Revised codified behavioural responses were not found during assessment, although higher response categories were present. A total of 51 (36%) patients presented the same Coma Recovery Scale-Revised score of 7 or 8 using the standard score, whereas no overlap was found using the modified score. Unidimensionality was confirmed for both scoring systems. The Coma Recovery Scale Modified Score showed a higher discriminant capacity than the standard score, and a monofactorial structure was also supported. The inverted assessment procedure could be a useful evaluation method for the assessment of patients with a diagnosis of disorder of consciousness.

  5. Design and evaluation of low cost blades for large wind driven generating systems

    NASA Technical Reports Server (NTRS)

    Eggert, W. S.

    1982-01-01

    The development and evaluation of a low cost blade concept based on the NASA-Lewis specifications is discussed. A blade structure was designed, and construction methods and materials were selected. Complete blade tooling concepts were developed, and various technical and economic analyses and evaluations of the blade design were performed. A comprehensive fatigue test program was conducted to provide data and to verify the design. A test specimen of the spar assembly, including the root end attachment, was fabricated; this is a full-scale specimen of the root end configuration, 20 ft long. A blade design for the Mod '0' system was completed.

  6. Scaling Support Vector Machines On Modern HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Yang; Fu, Haohuan; Song, Shuaiwen

    2015-02-01

    We designed and implemented MIC-SVM, a highly efficient parallel SVM for x86-based multicore and many-core architectures, such as Intel Ivy Bridge CPUs and the Intel Xeon Phi co-processor (MIC). We propose various novel analysis methods and optimization techniques to fully utilize the multilevel parallelism provided by these architectures; these can also serve as general optimization methods for other machine learning tools.

  7. Volume II: Compendium Abstracts

    DTIC Science & Technology

    2008-08-01

    ...project developed a fast and simple method of characterization for ceramic, polymer composite, and ceramic-composite materials systems. Current methods... incrementally at 1-inch intervals and displayed as a false-color image map of the sample. This experimental setup can be easily scaled from single ceramic... low-power, high-force characteristics of lead zirconate titanate (PZT) and an offset-beam design to achieve rotational or near-linear translational...

  8. Sex-Role Egalitarian Attitudes and Gender Role Socialization Experiences of African American Men and Women: A Mixed Methods Paradigm

    ERIC Educational Resources Information Center

    Heard, Courtney Christian Charisse

    2013-01-01

    The purpose of this study was to assess the sex-role egalitarian attitudes and gender role socialization experiences of African American men and women. A sequential mixed-methods design was employed to research this phenomenon. The Sex-Role Egalitarianism Scale-Short Form BB (SRES-BB) was utilized to assess sex-role egalitarian attitudes (King…

  9. Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis

    NASA Technical Reports Server (NTRS)

    Mcanelly, W. B.; Young, C. T. K.

    1973-01-01

    Methods for predicting the base heating characteristics of a multiple rocket engine installation are discussed. The environmental data are applied to the design of an adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.

  10. Sap flow sensors: construction, quality control and comparison.

    PubMed

    Davis, Tyler W; Kuo, Chen-Min; Liang, Xu; Yu, Pao-Shan

    2012-01-01

    This work provides a design for two types of sensors, based on the thermal dissipation and heat ratio methods of sap flow calculation, for moderate to large scale deployments for the purpose of monitoring tree transpiration. These designs include a procedure for making the sensors, a quality control method for the final products, and a complete list of components with vendors and pricing information. Both sensor designs were field tested alongside a commercial sap flow sensor to assess their performance and to show the importance of quality controlling the sensor outputs. Results show that, for roughly 2% of the cost of commercial sensors, self-made sap flow sensors can provide estimates of sap flow comparable to those of the commercial sensors.
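
    For the thermal-dissipation sensor type, the conversion from probe temperature difference to sap flux density follows the widely used Granier calibration; a sketch with hypothetical readings (the coefficients are the classic published values, which self-made sensors may or may not match without recalibration):

```python
# Granier-style thermal dissipation: the temperature difference between a
# heated probe and a reference probe is converted to sap flux density.
import numpy as np

def sap_flux_density(dT, dT_max):
    """dT: measured probe temperature difference (C);
    dT_max: value at zero flow (typically pre-dawn). Returns cm/s."""
    K = (dT_max - dT) / dT                    # dimensionless flow index
    return 0.0119 * np.maximum(K, 0.0) ** 1.231

dT = np.array([9.8, 8.5, 7.2, 6.4])          # hypothetical daytime readings
print(sap_flux_density(dT, dT_max=10.0))     # rises as dT drops below dT_max
```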

  11. The Visual Analogue Scale for Rating, Ranking and Paired-Comparison (VAS-RRP): A new technique for psychological measurement.

    PubMed

    Sung, Yao-Ting; Wu, Jeng-Shin

    2018-04-17

    Traditionally, the visual analogue scale (VAS) has been proposed to overcome the limitations of ordinal measures from Likert-type scales. However, the potential of VASs to overcome the limitations of response styles in Likert-type scales has not yet been addressed. Previous research using ranking and paired comparisons to compensate for the response styles of Likert-type scales has suffered from limitations, such as the fact that the total score of ipsative measures is a constant, which cannot be analyzed by means of many common statistical techniques. In this study we propose a new scale, called the Visual Analogue Scale for Rating, Ranking, and Paired-Comparison (VAS-RRP), which can be used to collect rating, ranking, and paired-comparison data simultaneously, while avoiding the limitations of each of these data collection methods. The characteristics, use, and analytic methods of the VAS-RRP, as well as how it overcomes the disadvantages of Likert-type scales, ranking, and VASs, are discussed. On the basis of analyses of simulated and empirical data, this study showed that the VAS-RRP improved reliability and parameter recovery and reduced response-style bias. Finally, we designed a VAS-RRP Generator that researchers can use to construct and administer their own VAS-RRPs.

  12. Evaluation of Informed Choice for contraceptive methods among women attending a family planning program: conceptual development; a case study in Chile.

    PubMed

    Valdés, Patricio R; Alarcon, Ana M; Munoz, Sergio R

    2013-03-01

    To generate and validate a scale to measure the Informed Choice of contraceptive methods among women attending a family health care service in Chile. The study followed a multimethod design that combined expert opinions from 13 physicians, 3 focus groups of 21 women each, and a sample survey of 1,446 women. Data analysis consisted of a qualitative text analysis of the group interviews, a factor analysis for construct validity, and the kappa statistic and Cronbach alpha to assess scale reliability. The instrument comprises 25 items grouped into six categories: information and orientation, quality of treatment, communication, participation in decision making, expression of reproductive rights, and method access and availability. Internal consistency measured with Cronbach alpha ranged from 0.75 to 0.89 for all subscales (kappa, 0.62; standard deviation, 0.06), and construct validity was demonstrated through the testing of several hypotheses. The use of mixed methods contributed to developing a scale of Informed Choice that was culturally appropriate for assessing the women who participated in the family planning program.

  13. A program for handling map projections of small-scale geospatial raster data

    USGS Publications Warehouse

    Finn, Michael P.; Steinwand, Daniel R.; Trent, Jason R.; Buehler, Robert A.; Mattli, David M.; Yamamoto, Kristina H.

    2012-01-01

    Scientists routinely accomplish small-scale geospatial modeling using raster datasets of global extent. Such use often requires the projection of global raster datasets onto a map or the reprojection from a given map projection associated with a dataset. The distortion characteristics of these projection transformations can have significant effects on modeling results. Distortions associated with the reprojection of global data are generally greater than distortions associated with reprojections of larger-scale, localized areas. The accuracy of areas in projected raster datasets of global extent is dependent on spatial resolution. To address these problems of projection and the associated resampling that accompanies it, methods for framing the transformation space, direct point-to-point transformations rather than gridded transformation spaces, a solution to the wrap-around problem, and an approach to alternative resampling methods are presented. The implementations of these methods are provided in an open-source software package called MapImage (or mapIMG, for short), which is designed to function on a variety of computer architectures.
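
    The direct point-to-point transformation favored here over gridded transformation spaces is easy to illustrate. This sketch is not mapIMG's API; it simply shows the idea in Python with the pyproj library and an assumed Mollweide target projection:

    ```python
    import numpy as np
    from pyproj import Transformer

    # Geographic coordinates -> Mollweide, an equal-area projection often
    # used for global rasters (the target projection here is illustrative)
    fwd = Transformer.from_crs("EPSG:4326", "+proj=moll +lon_0=0", always_xy=True)

    def project_cell_centers(lons, lats):
        """Map each raster cell center directly, point by point, instead of
        interpolating through an intermediate transformation grid."""
        return fwd.transform(np.asarray(lons), np.asarray(lats))

    x, y = project_cell_centers([-179.5, 0.0, 179.5], [60.0, 0.0, -60.0])
    print(list(zip(x, y)))
    ```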

  14. A spectrophotometric method for detecting substellar companions to late-type M stars

    NASA Astrophysics Data System (ADS)

    Oetiker, Brian Glen

    The most common stars in the Galaxy are the main-sequence M stars, yet current techniques are not optimized for detecting companions around the lowest mass stars, those with spectral designations ranging from M6 to M10. Described in this study is a search for companions around such stars using two methods: a unique implementation of the transit method, and a newly designed differential spectrophotometric method. The TEP project focuses on the detection of transits of terrestrial-sized and larger companions in the eclipsing binary system CM Draconis. The newly designed spectrophotometric technique combines the strengths of the spectroscopic and photometric methods, while minimizing their inherent weaknesses. This unique method relies on the placement of three narrow-band optical filters on and around the titanium oxide (TiO) bandhead near 8420 Å, a feature commonly seen in the atmospheres of late M stars. One filter is placed on the slope of the bandhead feature, while the remaining two are located on the adjacent continuum portions of the star's spectrum. The companion-induced motion of the star results in a Doppler shifting of the bandhead feature, which in turn causes a change in flux passing through the filter located on the slope of the TiO bandhead. The spectrophotometric method is optimized for detecting compact systems containing brown dwarfs and giant planets. Because of its low-dispersion, high-photon-efficiency design, this method is well suited for surveying large numbers of faint M stars. A small-scale survey has been implemented, producing a candidate brown dwarf class companion of the star WX UMa. Applying the spectrophotometric method to a larger scale survey for brown dwarf and giant planet companions, coupled with a photometric transit study, addresses two key astronomical issues. By detecting or placing limits on compact late-type M star systems, a discrimination among competing theories of planetary formation may be gained. Furthermore, searching for a broad range of companion masses may result in a better understanding of the substellar mass function.
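
    The core signal of the spectrophotometric technique, the flux change in the slope filter as the bandhead Doppler-shifts, can be mocked up in a few lines. A hedged sketch with an idealized bandhead and filter (toy shapes only, no noise model):

    ```python
    import numpy as np

    C = 299792.458  # speed of light, km/s

    def filter_flux(wl, spectrum, rv_kms, transmission):
        """Flux through a narrow-band filter after Doppler-shifting the
        spectrum by radial velocity rv_kms (non-relativistic shift)."""
        shifted_wl = wl * (1.0 + rv_kms / C)
        resampled = np.interp(wl, shifted_wl, spectrum)   # back onto the filter grid
        return np.trapz(resampled * transmission, wl)

    # Toy bandhead: flux drops steeply across 8420 A; filter sits on the slope
    wl = np.linspace(8400.0, 8440.0, 2001)
    spec = 1.0 - 0.4 / (1.0 + np.exp(-(wl - 8420.0)))     # smoothed step
    filt = np.exp(-0.5 * ((wl - 8420.0) / 2.0) ** 2)      # Gaussian slope filter
    print(filter_flux(wl, spec, 0.0, filt), filter_flux(wl, spec, 30.0, filt))
    ```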

  15. Quantifying electrical impacts on redundant wire insertion in 7nm unidirectional designs

    NASA Astrophysics Data System (ADS)

    Mohyeldin, Ahmed; Schroeder, Uwe Paul; Srinivasan, Ramya; Narisetty, Haritez; Malik, Shobhit; Madhavan, Sriram

    2017-04-01

    In nanometer-scale integrated circuits, via failures due to random defects are a well-known yield detractor, and via redundancy insertion is a common method to help enhance semiconductor yield. For the case of Self-Aligned Double Patterning (SADP), which might require unidirectional design layers as in some advanced technology nodes, the conventional methods of inserting redundant vias no longer work. This is because adding redundant vias conventionally requires adding metal shapes in the non-preferred direction, which would violate the SADP design constraints. Therefore, metal layers fabricated using unidirectional SADP require an alternative method for providing the needed redundancy. This paper proposes a post-layout Design for Manufacturability (DFM) redundancy insertion method tailored to the design requirements introduced by unidirectional metal layers. The proposed method adds redundant wires in the preferred direction, after searching for nearby vacant routing tracks, in order to provide redundant paths for electrical signals. This method opportunistically adds robustness against failures due to silicon defects without impacting area or incurring new design rule violations. Implementation details of this redundancy insertion method are explained in this paper. One known challenge with similar DFM layout fixing methods is the possible introduction of undesired electrical impact, causing other unintentional failures in design functionality. In this paper, a study is presented to quantify the electrical impact of such a redundancy insertion scheme and to examine whether that electrical impact can be tolerated. The paper shows results that evaluate DFM insertion rates and the corresponding electrical impact for a given design utilization and maximum inserted wire length. Parasitic extraction and static timing analysis results are presented. A typical digital design implemented using GLOBALFOUNDRIES 7nm technology is used for demonstration. The provided results can help evaluate such an extensive DFM insertion method from an electrical standpoint. Furthermore, the results could provide guidance on how to implement the proposed method of adding electrical redundancy such that intolerable electrical impacts are avoided.

  16. Recent Improvements in Aerodynamic Design Optimization on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Anderson, W. Kyle

    2000-01-01

    Recent improvements in an unstructured-grid method for large-scale aerodynamic design are presented. Previous work had shown such computations to be prohibitively long in a sequential processing environment. Also, robust adjoint solutions and mesh movement procedures were difficult to realize, particularly for viscous flows. To overcome these limiting factors, a set of design codes based on a discrete adjoint method is extended to a multiprocessor environment using a shared memory approach. A nearly linear speedup is demonstrated, and the consistency of the linearizations is shown to remain valid. The full linearization of the residual is used to precondition the adjoint system, and a significantly improved convergence rate is obtained. A new mesh movement algorithm is implemented and several advantages over an existing technique are presented. Several design cases are shown for turbulent flows in two and three dimensions.

  17. Optimization design of multiphase pump impeller based on combined genetic algorithm and boundary vortex flux diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-ya; Cai, Shu-jie; Li, Yong-jiang; Zhang, Yong-xue

    2017-12-01

    A novel optimization design method for the multiphase pump impeller is proposed by combining the quasi-3D hydraulic design (Q3DHD), the boundary vortex flux (BVF) diagnosis, and a genetic algorithm (GA). The BVF diagnosis based on the Q3DHD is used to evaluate the objective function. Numerical simulations and hydraulic performance tests are carried out to compare the impeller designed only by the Q3DHD method with that optimized by the presented method. Comparisons of the flow fields simulated under the same conditions show that (1) the pressure distribution in the optimized impeller is more reasonable and the gas-liquid separation is more effectively inhibited, (2) the scales of the gas pocket and the vortex decrease remarkably for the optimized impeller, and (3) the unevenness of the BVF distributions near the shroud of the original impeller is effectively eliminated in the optimized impeller. The experimental results show that the differential pressure and the maximum efficiency of the optimized impeller are increased by 4% and 2.5%, respectively. Overall, the study indicates that the optimization design method proposed in this paper is feasible.

  18. Low-Cost Approach to the Design and Fabrication of a LOX/RP-1 Injector

    NASA Technical Reports Server (NTRS)

    Shadoan, Michael D.; Sparks, Dave L.; Turner, James E. (Technical Monitor)

    2000-01-01

    NASA Marshall Space Flight Center (MSFC) has designed, built, and is currently testing Fastrac, a liquid oxygen (LOX)/RP-1 fueled 60K-lb thrust class rocket engine. One facet of Fastrac that makes it unique is that it is the first large-scale engine designed and developed in accordance with the Agency's mandated "faster, better, cheaper" (FBC) program policy. The engine was developed under the auspices of MSFC's Low Cost Boost Technology office. Development work for the main injector actually began in 1993 in subscale form. In 1996, work began on the full-scale unit, approximately 1 year prior to initiation of the engine development program. In order to achieve the value goals established by the FBC policy, a review of traditional design practices was necessary. This internal reevaluation would ultimately challenge more conventional methods of material selection, design process, and fabrication techniques. The effort was highly successful. This "new way" of thinking has resulted in an innovative injector design, one with reduced complexity and significantly lower cost. Application of lessons learned during this effort to new or existing designs can have a similar effect on costs and future program successes.

  19. Development and examination of the psychometric properties of the Learning Experience Scale in nursing.

    PubMed

    Takase, Miyuki; Imai, Takiko; Uemura, Chizuru

    2016-06-01

    This paper examines the psychometric properties of the Learning Experience Scale. A survey method was used to collect data from a total of 502 nurses. Data were analyzed by factor analysis and the known-groups technique to examine the construct validity of the scale. In addition, internal consistency was evaluated by Cronbach's alpha, and stability was examined by test-retest correlation. Factor analysis showed that the Learning Experience Scale consisted of five factors: learning from practice, others, training, feedback, and reflection. The scale also had the power to discriminate between nurses with high and low levels of nursing competence. The internal consistency and the stability of the scale were also acceptable. The Learning Experience Scale is a valid and reliable instrument, and helps organizations to effectively design learning interventions for nurses. © 2015 Wiley Publishing Asia Pty Ltd.

  20. Development and Application of the Collaborative Optimization Architecture in a Multidisciplinary Design Environment

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Kroo, I. M.

    1995-01-01

    Collaborative optimization is a design architecture applicable in any multidisciplinary analysis environment but specifically intended for large-scale distributed analysis applications. In this approach, a complex problem is hierarchically decomposed along disciplinary boundaries into a number of subproblems which are brought into multidisciplinary agreement by a system-level coordination process. When applied to problems in a multidisciplinary design environment, this scheme has several advantages over traditional solution strategies: it reduces the amount of information transferred between disciplines, removes large iteration loops, allows the use of different subspace optimizers among the various analysis groups, provides an analysis framework that is easily parallelized and can operate on heterogeneous equipment, and offers a structural framework that is well suited to conventional disciplinary organizations. In this article, the collaborative architecture is developed and its mathematical foundation is presented. An example application is also presented which highlights the potential of this method for use in large-scale design applications.

  1. Multi-Scale Modeling of an Integrated 3D Braided Composite with Applications to Helicopter Arm

    NASA Astrophysics Data System (ADS)

    Zhang, Diantang; Chen, Li; Sun, Ying; Zhang, Yifan; Qian, Kun

    2017-10-01

    A study is conducted with the aim of developing a multi-scale analytical method for designing a composite helicopter arm with a three-dimensional (3D) five-directional braided structure. Based on the analysis of the 3D braided microstructure, a multi-scale finite element model is developed. Finite element analysis of the load capacity of the 3D five-directional braided composite helicopter arm is carried out using the software ABAQUS/Standard. The influences of the braiding angle and loading condition on the stress and strain distribution of the helicopter arm are simulated. The results show that the proposed multi-scale method is capable of accurately predicting the mechanical properties of 3D braided composites, as validated by comparison with the stress-strain curves of meso-scale RVCs. Furthermore, it is found that the braiding angle is an important factor affecting the mechanical properties of the 3D five-directional braided composite helicopter arm. Based on the optimized structure parameters, the nearly net-shaped composite helicopter arm is fabricated using a novel resin transfer mould (RTM) process.

  2. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    DOE PAGES

    Engelmann, Christian; Hukerikar, Saurabh

    2017-09-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage and their performance & power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across hardware and software components. The resilience patterns and the design framework also enable exploration and evaluation of design alternatives and support optimization of the cost-benefit trade-offs among performance, protection coverage, and power consumption of resilience solutions. The overall goal of this work is to establish a systematic methodology for the design and evaluation of resilience technologies in extreme-scale HPC systems that keep scientific applications running to a correct solution in a timely and cost-efficient manner despite frequent faults, errors, and failures of various types.

  4. Development of Support Service for Prevention and Recovery from Dementia and Science of Lethe

    NASA Astrophysics Data System (ADS)

    Otake, Mihoko

    The purpose of this study is to explore a service design method through the development of a support service for prevention of and recovery from dementia, towards a science of lethe. We designed and implemented a conversation support service via the coimagination method, based on the multiscale service design method; both were proposed by the author. The multiscale service model consists of tool, event, human, network, style and rule. Service elements at different scales are developed according to the model. Interactive conversation supported by the coimagination method activates cognitive functions so as to prevent the progress of dementia. This paper proposes theoretical bases for the science of lethe. First, it examines the relationship between the coimagination method and three cognitive functions (division of attention, planning, and episodic memory) that decline in mild cognitive impairment. Second, it describes a thought state transition model during conversation, which captures cognitive enhancement via interactive communication. Third, a Set Theoretical Measure of Interaction is proposed for evaluating the effectiveness of conversation for cognitive enhancement. Simulation results suggest that ideas which cannot be explored by each speaker alone are explored during interactive conversation. Finally, the coimagination method is compared with reminiscence therapy, and the possibility of combining the two is discussed.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klima, Matej; Kucharik, Milan; Shashkov, Mikhail Jurievich

    We analyze several new and existing approaches for limiting tensor quantities in the context of deviatoric stress remapping in an ALE numerical simulation of elastic flow. Remapping and limiting of the tensor component-by-component is shown to violate radial symmetry of derived variables such as elastic energy or force. Therefore, we have extended the symmetry-preserving Vector Image Polygon algorithm, originally designed for limiting vector variables. This limiter constrains the vector (in our case a vector of independent tensor components) within the convex hull formed by the vectors from surrounding cells, an equivalent of the discrete maximum principle in scalar variables. We compare this method with a limiter designed specifically for deviatoric stress limiting, which aims to constrain the J2 invariant, proportional to the specific elastic energy, and scale the tensor accordingly. We also propose a method which involves remapping and limiting the J2 invariant independently using known scalar techniques. The deviatoric stress tensor is then scaled to match this remapped invariant, which guarantees conservation in terms of elastic energy.
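
    The final scaling step can be stated concretely. A minimal sketch, not the authors' code, of rescaling a deviatoric stress tensor so that its J2 invariant matches a separately remapped and limited target value:

    ```python
    import numpy as np

    def scale_deviator_to_j2(s, j2_target, eps=1e-30):
        """Scale deviatoric stress s so J2 = 0.5 * s_ij s_ij (proportional to
        the specific elastic energy) matches the remapped target invariant."""
        j2 = 0.5 * np.tensordot(s, s)        # double contraction s : s
        if j2 < eps:
            return s                          # nothing to scale at (near-)zero stress
        return s * np.sqrt(j2_target / j2)

    s = np.array([[2.0,  0.5,  0.0],
                  [0.5, -1.0,  0.0],
                  [0.0,  0.0, -1.0]])         # trace-free, i.e. deviatoric
    s2 = scale_deviator_to_j2(s, j2_target=2.0)
    print(0.5 * np.tensordot(s2, s2))         # -> 2.0
    ```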

  6. Challenges of NDE Simulation Tool

    NASA Technical Reports Server (NTRS)

    Leckey, Cara A. C.; Juarez, Peter D.; Seebo, Jeffrey P.; Frank, Ashley L.

    2015-01-01

    Realistic nondestructive evaluation (NDE) simulation tools enable inspection optimization and predictions of inspectability for new aerospace materials and designs. NDE simulation tools may someday aid in the design and certification of advanced aerospace components, potentially shortening the time from material development to implementation by industry and government. Furthermore, modeling and simulation are expected to play a significant future role in validating the capabilities and limitations of guided-wave-based structural health monitoring (SHM) systems. The current state-of-the-art in ultrasonic NDE/SHM simulation cannot rapidly simulate damage detection techniques for large-scale, complex-geometry composite components/vehicles with realistic damage types. This paper discusses some of the challenges of model development and validation for composites, such as the level of realism and scale of simulation needed for NASA's applications. Ongoing model development work is described along with examples of model validation studies. The paper also discusses examples of the use of simulation tools at NASA to develop new damage characterization methods, and the associated challenges of validating those methods.

  7. Field evaluation of an expertise-based formal decision system for fungicide management of grapevine downy and powdery mildews.

    PubMed

    Delière, Laurent; Cartolaro, Philippe; Léger, Bertrand; Naud, Olivier

    2015-09-01

    In France, viticulture accounts for 20% of the phytochemicals sprayed in agriculture, and 80% of grapevine pesticides target powdery and downy mildews. European policies promote pesticide use reduction, and new methods for low-input disease management are needed for viticulture. Here, we present the assessment, in France, of Mildium, a new decision support system for the management of grapevine mildews. A 4 year assessment trial of Mildium was conducted in a network of 83 plots distributed across the French vineyards. In most vineyards, Mildium proved successful at protecting the crop while reducing the number of treatments required by 30-50% compared with grower practices. The design of Mildium results from the formalisation of a common management of both powdery and downy mildews and leads to a significant fungicide reduction at the plot scale. It could encourage stakeholders to design customised farm-scale and low-chemical-input decision support methods. © 2014 Society of Chemical Industry.

  8. Model for teaching population health and community-based care across diverse clinical experiences.

    PubMed

    Van Dyk, Elizabeth J; Valentine-Maher, Sarah K; Tracy, Janet P

    2015-02-01

    The pillars constructivist model is designed to offer a unifying clinical paradigm to support consistent learning opportunities across diverse configurations of community and public health clinical sites. Thirty-six students and six faculty members participated in a mixed methods evaluation to assess the model after its inaugural semester of implementation. The evaluation methods included a rating scale that measures the model's ability to provide consistent learning opportunities at both population health and direct care sites, a case study to measure student growth within the five conceptual pillars, and a faculty focus group. Results revealed that the model served as an effective means of clinical education to support the use of multiple, small-scale public health sites. Although measurements of student growth within the pillars are inconclusive, the findings suggest efficacy. The authors recommend the continued use of the pillars constructivist model in baccalaureate programs, with further study of the author-designed evaluation tools. Copyright 2015, SLACK Incorporated.

  9. A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures

    NASA Astrophysics Data System (ADS)

    Kaveh, A.; Ilchi Ghazaan, M.

    2018-02-01

    In this article, a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS, in which the VPS algorithm acts as the main engine. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms, mimicking the mechanics of the damped free vibration of single-degree-of-freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for structural engineering optimization problems.

  10. Phylogenetic studies of transmission dynamics in generalized HIV epidemics: An essential tool where the burden is greatest?

    PubMed Central

    Dennis, Ann M.; Herbeck, Joshua T.; Brown, Andrew Leigh; Kellam, Paul; de Oliveira, Tulio; Pillay, Deenan; Fraser, Christophe; Cohen, Myron S.

    2014-01-01

    Efficient and effective HIV prevention measures for generalized epidemics in sub-Saharan Africa have not yet been validated at the population-level. Design and impact evaluation of such measures requires fine-scale understanding of local HIV transmission dynamics. The novel tools of HIV phylogenetics and molecular epidemiology may elucidate these transmission dynamics. Such methods have been incorporated into studies of concentrated HIV epidemics to identify proximate and determinant traits associated with ongoing transmission. However, applying similar phylogenetic analyses to generalized epidemics, including the design and evaluation of prevention trials, presents additional challenges. Here we review the scope of these methods and present examples of their use in concentrated epidemics in the context of prevention. Next, we describe the current uses for phylogenetics in generalized epidemics, and discuss their promise for elucidating transmission patterns and informing prevention trials. Finally, we review logistic and technical challenges inherent to large-scale molecular epidemiological studies of generalized epidemics, and suggest potential solutions. PMID:24977473

  11. Exploring the effects of spatial autocorrelation when identifying key drivers of wildlife crop-raiding.

    PubMed

    Songhurst, Anna; Coulson, Tim

    2014-03-01

    Few universal trends in spatial patterns of wildlife crop-raiding have been found. Variations in wildlife ecology and movements, and in human spatial use, have been identified as causes of this apparent unpredictability. However, varying spatial patterns of spatial autocorrelation (SA) in human-wildlife conflict (HWC) data could also contribute. We explicitly explore the effects of SA on wildlife crop-raiding data in order to facilitate the design of future HWC studies. We conducted a comparative survey of raided and nonraided fields to determine key drivers of crop-raiding. Data were subsampled at different spatial scales to select independent raiding data points. The model derived from all data was fitted to the subsampled data sets, and the resulting model parameters were compared to determine the effect of SA. Most methods used to account for SA in data attempt to correct for the change in P-values; yet, by subsampling data at broader spatial scales, we identified changes in the regression estimates themselves. We consequently advocate reporting model parameters across a range of spatial scales to help biological interpretation. Patterns of SA vary spatially in our crop-raiding data. The spatial distribution of fields should therefore be considered when choosing the spatial scale for analyses of HWC studies. Robust key drivers of elephant crop-raiding included the raiding history of a field and the distance of a field to a main elephant pathway. Understanding spatial patterns and determining reliable socio-ecological drivers of wildlife crop-raiding is paramount for designing mitigation and land-use planning strategies to reduce HWC. Spatial patterns of HWC are complex, determined by multiple factors acting at more than one scale; therefore, studies need to be designed with an understanding of the effects of SA. Our methods are accessible to a variety of practitioners to assess the effects of SA, thereby improving the reliability of conservation management actions.
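
    The subsampling idea at the heart of this analysis, selecting data points separated by progressively larger distances and refitting the model at each scale, can be sketched simply. The coordinates below are synthetic, and the greedy thinning rule is a generic stand-in rather than the authors' exact procedure:

    ```python
    import numpy as np

    def thin_by_distance(coords, min_dist):
        """Greedy spatial thinning: keep a point only if it lies at least
        min_dist from every point already kept. Refitting the regression on
        subsamples thinned at increasing min_dist exposes how spatial
        autocorrelation shifts the parameter estimates."""
        pts = np.asarray(coords, dtype=float)
        kept = []
        for i in range(len(pts)):
            if all(np.hypot(*(pts[i] - pts[j])) >= min_dist for j in kept):
                kept.append(i)
        return np.array(kept, dtype=int)

    rng = np.random.default_rng(1)
    fields = rng.uniform(0, 10_000, size=(500, 2))   # synthetic field locations (m)
    for d in (0, 250, 500, 1000):                    # candidate spatial scales
        print(d, len(thin_by_distance(fields, d)))
    ```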

  12. A novel way to detect correlations on multi-time scales, with temporal evolution and for multi-variables

    NASA Astrophysics Data System (ADS)

    Yuan, Naiming; Xoplaki, Elena; Zhu, Congwen; Luterbacher, Juerg

    2016-06-01

    In this paper, two new methods, Temporal evolution of Detrended Cross-Correlation Analysis (TDCCA) and Temporal evolution of Detrended Partial-Cross-Correlation Analysis (TDPCCA), are proposed by generalizing DCCA and DPCCA. Applying TDCCA/TDPCCA, it is possible to study correlations on multiple time scales and over different periods. To illustrate their properties, we used two climatological examples: i) Global Sea Level (GSL) versus the North Atlantic Oscillation (NAO); and ii) Summer Rainfall over the Yangtze River (SRYR) versus the previous winter's Pacific Decadal Oscillation (PDO). We find significant correlations between GSL and NAO on time scales of 60 to 140 years, but the correlations are non-significant between 1865-1875. As for SRYR and PDO, significant correlations are found on time scales of 30 to 35 years, but the correlations are more pronounced during the recent 30 years. By combining TDCCA/TDPCCA and DCCA/DPCCA, we propose a new correlation-detection system which, compared to traditional methods, can objectively show how two time series are related (on which time scale, and during which time period). These capabilities are important not only for the diagnosis of complex systems, but also for better design of prediction models. The new methods therefore offer new opportunities for applications in the natural sciences, such as ecology, economics, sociology and other research fields.
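
    For readers unfamiliar with the base method being generalized, a compact sketch of the detrended cross-correlation coefficient at a single time scale (non-overlapping windows and linear detrending; a simplification of the full DCCA/TDCCA machinery):

    ```python
    import numpy as np

    def dcca_coefficient(x, y, scale):
        """rho_DCCA at one time scale: profiles of the demeaned series are cut
        into windows of length `scale`, each window is linearly detrended, and
        the residual (co)variances are accumulated."""
        X, Y = np.cumsum(x - np.mean(x)), np.cumsum(y - np.mean(y))
        t = np.arange(scale)
        f2x = f2y = f2xy = 0.0
        for s in range(0, len(X) // scale * scale, scale):
            xs, ys = X[s:s + scale], Y[s:s + scale]
            rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
            ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
            f2x += np.mean(rx * rx); f2y += np.mean(ry * ry); f2xy += np.mean(rx * ry)
        return f2xy / np.sqrt(f2x * f2y)

    rng = np.random.default_rng(0)
    z = rng.standard_normal(2000)                    # shared signal
    x = z + 0.5 * rng.standard_normal(2000)
    y = z + 0.5 * rng.standard_normal(2000)
    print(round(dcca_coefficient(x, y, 50), 2))      # high positive value
    ```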

  13. Decentralized Adaptive Neural Output-Feedback DSC for Switched Large-Scale Nonlinear Systems.

    PubMed

    Lijun Long; Jun Zhao

    2017-04-01

    In this paper, for a class of switched large-scale uncertain nonlinear systems with unknown control coefficients and unmeasurable states, a switched-dynamic-surface-based decentralized adaptive neural output-feedback control approach is developed. The proposed approach extends the classical dynamic surface control (DSC) technique from the nonswitched to the switched setting by designing switched first-order filters, which overcomes the problem of multiple "explosions of complexity." Also, a dual common coordinate transformation of all subsystems is exploited to avoid the individual coordinate transformations for subsystems that are required when applying the backstepping recursive design scheme. Nussbaum-type functions are utilized to handle the unknown control coefficients, and a switched neural network observer is constructed to estimate the unmeasurable states. Combining the average dwell time method with backstepping and the DSC technique, decentralized adaptive neural controllers of subsystems are explicitly designed. It is proved that the approach can guarantee semiglobal uniform ultimate boundedness of all the signals in the closed-loop system under a class of switching signals with average dwell time, and convergence of the tracking errors to a small neighborhood of the origin. A two-inverted-pendulums system is provided to demonstrate the effectiveness of the proposed method.

  14. Automatic design of synthetic gene circuits through mixed integer non-linear programming.

    PubMed

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits.

  15. Model and controller reduction of large-scale structures based on projection methods

    NASA Astrophysics Data System (ADS)

    Gildin, Eduardo

    The design of low-order controllers for high-order plants is a challenging problem theoretically as well as from a computational point of view. Frequently, robust controller design techniques result in high-order controllers. It is then desirable to achieve reduced-order models and controllers while maintaining robustness properties. Controllers designed for large structures based on models obtained by finite element techniques involve large state-space dimensions. In this case, problems related to storage, accuracy and computational speed may arise. Thus, model reduction methods capable of addressing controller reduction problems are of primary importance for the practical applicability of advanced controller design methods to high-order systems. A challenging large-scale control problem that has emerged recently is the protection of civil structures, such as high-rise buildings and long-span bridges, from dynamic loadings such as earthquakes, high wind, heavy traffic, and deliberate attacks. Even though significant effort has been spent on the application of control theory to the design of civil structures in order to increase their safety and reliability, several challenging issues remain open problems for real-time implementation. This dissertation addresses the development of methodologies for controller reduction for real-time implementation in seismic protection of civil structures using projection methods. Three classes of schemes are analyzed for model and controller reduction: modal truncation, singular value decomposition methods and Krylov-based methods. A family of benchmark problems for structural control is used as a framework for a comparative study of model and controller reduction techniques. It is shown that classical model and controller reduction techniques, such as balanced truncation, modal truncation and moment matching by Krylov techniques, yield reduced-order controllers that do not guarantee stability of the closed-loop system, that is, of the reduced-order controller implemented with the full-order plant. A controller reduction approach is proposed that guarantees closed-loop stability. It is based on the concept of dissipativity (or positivity) of linear dynamical systems. Utilizing passivity-preserving model reduction together with dissipative-LQG controllers, effective low-order optimal controllers are obtained. Results are shown through simulations.
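
    As a point of reference for the truncation baselines discussed above, a square-root balanced truncation sketch for a continuous-time LTI system (assumes a stable, minimal (A, B, C); an illustration, not the dissertation's implementation):

    ```python
    import numpy as np
    from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

    def balanced_truncation(A, B, C, r):
        """Reduce a stable LTI system to order r. Returns (Ar, Br, Cr) and the
        Hankel singular values; twice the sum of the discarded values bounds
        the H-infinity error of the reduced model."""
        Wc = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
        Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
        R = cholesky(Wc, lower=True)
        L = cholesky(Wo, lower=True)
        U, hsv, Vt = svd(L.T @ R)
        S = np.diag(hsv[:r] ** -0.5)
        T = R @ Vt[:r].T @ S                            # balancing projection
        Ti = S @ U[:, :r].T @ L.T
        return Ti @ A @ T, Ti @ B, C @ T, hsv
    ```

    As the abstract notes, closing the loop with such a truncated controller around the full-order plant does not by itself guarantee stability, which is what motivates the dissipativity-based approach developed in the dissertation.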

  16. Optimization of the blade trailing edge geometric parameters for a small scale ORC turbine

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Zhuge, W. L.; Peng, J.; Liu, S. J.; Zhang, Y. J.

    2013-12-01

    In general, the method proposed by Whitfield and Baines is adopted for turbine preliminary design. In this design procedure for the turbine blade trailing edge geometry, two assumptions (ideal gas and zero discharge swirl) and two experience values (WR and γ) are used to obtain the three blade trailing edge geometric parameters: the relative exit flow angle β6, the exit tip radius R6t and the hub radius R6h, for the purpose of maximizing the rotor total-to-static isentropic efficiency. This method is based on experience and test results obtained with air as the working fluid, so it provides no mathematically optimal solution to guide the optimization of the geometric parameters, nor does it consider the real-gas effects of the organic working fluid, which must be taken into account in an ORC turbine design procedure. In this paper, a new preliminary design and optimization method is established with the aim of reducing the exit kinetic energy loss to improve the turbine efficiency ηts, and the blade trailing edge geometric parameters of a small-scale ORC turbine with working fluid R123 are optimized based on this method. The mathematically optimal solution that minimizes the exit kinetic energy is derived, and can be used to design and optimize the exit shroud/hub radii and the exit blade angle. The influence of the blade trailing edge geometric parameters on the turbine efficiency ηts is then analysed, and optimal working ranges of these parameters are recommended for working fluid R123. This method is used to reduce the exit kinetic energy loss of an existing ORC turbine from 11.7% to 7%, which indicates its effectiveness. However, the internal passage loss increases from 7.9% to 9.4%, so the only way to account for the influence of the geometric parameters on the internal passage loss is to give empirical ranges for these parameters, such as γ in the range 0.3 to 0.4 and τ in the range 0.5 to 0.6.

  17. A Quality Assurance Initiative for Commercial-Scale Production in High-Throughput Cryopreservation of Blue Catfish Sperm

    PubMed Central

    Hu, E; Liao, T. W.; Tiersch, T. R.

    2013-01-01

    Cryopreservation of fish sperm has been studied for decades at a laboratory (research) scale. However, high-throughput cryopreservation of fish sperm has recently been developed to enable industrial-scale production. This study treated high-throughput cryopreservation of blue catfish (Ictalurus furcatus) sperm as a manufacturing production line and initiated development of a quality assurance plan. The main objectives were to identify: 1) the main production quality characteristics; 2) the process features for quality assurance; 3) the internal quality characteristics and their specification designs; 4) the quality control and process capability evaluation methods, and 5) the directions for further improvements and applications. The essential product quality characteristics were identified as fertility-related characteristics. Specification design, which established the tolerance levels according to demand and process constraints, was performed based on these quality characteristics. Meanwhile, to ensure integrity throughout the process, internal quality characteristics (characteristics at each quality control point within the process) that could affect fertility-related quality characteristics were defined with specifications. Due to the process feature of 100% inspection (quality inspection of every fish), a specific calculation method, the use of cumulative sum (CUSUM) control charts, was applied to monitor each quality characteristic. An index of overall process evaluation, process capability, was analyzed based on the in-control process and the designed specifications, which further integrates the quality assurance plan. With the established quality assurance plan, the process can operate stably and the quality of products will be reliable. PMID:23872356
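
    The monitoring machinery named above is standard. A minimal tabular CUSUM sketch (the allowance k and decision interval h are generic tuning constants, not values from this study):

    ```python
    def tabular_cusum(x, target, k, h):
        """One-sided high/low CUSUM statistics for a stream of measurements;
        an index is flagged when either statistic exceeds the decision
        interval h. k is the allowance (slack) per observation."""
        hi = lo = 0.0
        alarms = []
        for i, xi in enumerate(x):
            hi = max(0.0, hi + (xi - target) - k)
            lo = max(0.0, lo + (target - xi) - k)
            if hi > h or lo > h:
                alarms.append(i)
        return alarms

    data = [50, 51, 49, 52, 55, 57, 58, 60]     # e.g. a drifting quality metric
    print(tabular_cusum(data, target=50, k=1.0, h=5.0))   # -> [5, 6, 7]
    ```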

  18. Triaxial digital fluxgate magnetometer for NASA applications explorer mission: Results of tests of critical elements

    NASA Technical Reports Server (NTRS)

    Mcleod, M. G.; Means, J. D.

    1977-01-01

    Tests performed to verify the critical elements of the triaxial digital fluxgate magnetometer design are described. A method for improving the linearity of the analog-to-digital converter portion of the instrument was studied in detail: a sawtooth waveform was added to the signal being measured before the A/D conversion, and the digital readings were averaged over one cycle of the sawtooth. This was intended to reduce bit-error nonlinearities in the A/D converter, which could otherwise be expected to be as large as 16 gamma. No such nonlinearities were detected in the output of the instrument that included this feature. However, a small-scale nonlinearity of plus or minus 2 gamma with a 64 gamma repetition rate was observed in the unit tested. A design improvement intended to eliminate this small-scale nonlinearity was examined.
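
    The dithering scheme is easy to simulate. A toy Python model of an ADC with a code-dependent bit-error kink, showing how averaging over one sawtooth cycle smears the nonlinearity across many codes (all constants are illustrative, not the flight design values):

    ```python
    import numpy as np

    def dithered_read(signal_counts, adc, span=64, steps=64):
        """Average ADC readings over one full sawtooth cycle. The sawtooth
        sweeps the signal across `span` codes, so any single bad code
        contributes only ~1/steps of the final average."""
        saw = (np.arange(steps) / steps - 0.5) * span
        return np.mean([adc(signal_counts + d) for d in saw])

    # Toy ADC: ideal quantizer plus a bit-error jump at every 64th code
    adc = lambda v: np.floor(v) + 1.5 * (np.floor(v) % 64 == 0)
    print(adc(128.3), dithered_read(128.3, adc))
    ```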

  19. Read-In Integrated Circuits for Large-Format Multi-Chip Emitter Arrays

    DTIC Science & Technology

    2015-03-31

    chip has been designed and fabricated using ONSEMI C5N process to verify our approach. Keywords: Large scale arrays; Tiling; Mosaic; Abutment ...required. X and y addressing is not a sustainable and easily expanded addressing architecture nor will it work well with abutted RIICs. Abutment Method... Abutting RIICs into an array is challenging because of the precise positioning required to achieve a uniform image. This problem is a new design

  20. A modified priority list-based MILP method for solving large-scale unit commitment problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ke, Xinda; Lu, Ning; Wu, Di

    This paper studies the typical pattern of unit commitment (UC) results in terms of generator cost and capacity. A method is then proposed that combines a modified priority list technique with mixed integer linear programming (MILP) for the UC problem. The proposed method consists of two steps. At the first step, a portion of the generators are predetermined to be online or offline within a look-ahead period (e.g., a week), based on the demand curve and the generator priority order. At the second step, for the generators whose on/off status is predetermined, the corresponding binary variables are removed from the UC MILP problem over the operational planning horizon (e.g., 24 hours). With a number of binary variables removed, the resulting problem can be solved much faster using off-the-shelf MILP solvers based on the branch-and-bound algorithm. In the modified priority list method, scale factors are designed to adjust the tradeoff between solution speed and level of optimality. It is found that the proposed method can significantly speed up the UC problem with only a minor compromise in optimality when appropriate scale factors are selected.
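
    The two-step structure translates directly into code. A toy single-bus sketch using the PuLP MILP library: units whose status the priority-list step has predetermined get constants instead of binary variables, which is exactly where the speedup comes from (the unit data, and the omission of startup costs and minimum up/down times, are simplifications):

    ```python
    import pulp

    def solve_uc(units, demand, fixed_status):
        """units: {name: (pmin, pmax, marginal_cost)}; demand: hourly loads;
        fixed_status: {name: True/False/None}, None = left to the MILP."""
        T = range(len(demand))
        prob = pulp.LpProblem("uc", pulp.LpMinimize)
        u, p = {}, {}
        for g, (pmin, pmax, cost) in units.items():
            for t in T:
                if fixed_status.get(g) is None:
                    u[g, t] = pulp.LpVariable(f"u_{g}_{t}", cat="Binary")
                else:
                    u[g, t] = int(fixed_status[g])      # constant: no binary variable
                p[g, t] = pulp.LpVariable(f"p_{g}_{t}", lowBound=0)
                prob += p[g, t] <= pmax * u[g, t]
                prob += p[g, t] >= pmin * u[g, t]
        for t in T:
            prob += pulp.lpSum(p[g, t] for g in units) == demand[t]
        prob += pulp.lpSum(units[g][2] * p[g, t] for g in units for t in T)
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        return {gt: v.value() for gt, v in p.items()}

    units = {"g1": (50, 300, 18.0), "g2": (30, 200, 25.0), "g3": (10, 80, 40.0)}
    fixed = {"g1": True, "g3": False, "g2": None}       # from the priority-list step
    print(solve_uc(units, [220, 260, 310, 280], fixed))
    ```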

  1. A Coarse-to-Fine Geometric Scale-Invariant Feature Transform for Large Size High Resolution Satellite Image Registration

    PubMed Central

    Chang, Xueli; Du, Siliang; Li, Yingying; Fang, Shenghui

    2018-01-01

    Large-size high resolution (HR) satellite image matching is a challenging task due to local distortion, repetitive structures, intensity changes and low efficiency. In this paper, a novel matching approach is proposed for large-size HR satellite image registration, based on a coarse-to-fine strategy and a geometric scale-invariant feature transform (SIFT). In the coarse matching step, a robust matching method, scale restrict (SR) SIFT, is applied at a low resolution level. The matching results provide geometric constraints which are then used to guide block division and geometric SIFT in the fine matching step. The block matching method overcomes the memory problem. In geometric SIFT, area constraints help validate the candidate matches and decrease the search complexity. To further improve the matching efficiency, the proposed matching method is parallelized using OpenMP. Finally, the sensing image is rectified to the coordinate system of the reference image via a Triangulated Irregular Network (TIN) transformation. Experiments were designed to test the performance of the proposed matching method. The experimental results show that the proposed method can decrease the matching time and increase the number of matching points while maintaining high registration accuracy. PMID:29702589
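
    The coarse stage, SIFT matching at reduced resolution to obtain a global geometric constraint, can be approximated with OpenCV. This generic sketch is not the authors' SR-SIFT, and the input file names are hypothetical:

    ```python
    import cv2
    import numpy as np

    def coarse_affine(ref, sen, scale=0.25):
        """Match SIFT features on downsampled images and fit a RANSAC affine
        model; the model then constrains block division and fine matching."""
        r = cv2.resize(ref, None, fx=scale, fy=scale)
        s = cv2.resize(sen, None, fx=scale, fy=scale)
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(r, None)
        k2, d2 = sift.detectAndCompute(s, None)
        pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
        good = [m for m, n in pairs if m.distance < 0.75 * n.distance]  # ratio test
        src = np.float32([k1[m.queryIdx].pt for m in good]) / scale     # full-res coords
        dst = np.float32([k2[m.trainIdx].pt for m in good]) / scale
        M, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
        return M

    # ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)   # hypothetical inputs
    # sen = cv2.imread("sensed.tif", cv2.IMREAD_GRAYSCALE)
    # print(coarse_affine(ref, sen))
    ```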

  2. An air-liquid contactor for large-scale capture of CO2 from air.

    PubMed

    Holmes, Geoffrey; Keith, David W

    2012-09-13

    We present a conceptually simple method for optimizing the design of a gas-liquid contactor for capture of carbon dioxide from ambient air, or 'air capture'. We apply the method to a slab geometry contactor that uses components, design and fabrication methods derived from cooling towers. We use mass transfer data appropriate for capture using a strong NaOH solution, combined with engineering and cost data derived from engineering studies performed by Carbon Engineering Ltd, and find that the total costs for air contacting alone (no regeneration) can be of the order of $60 per tonne of CO2. We analyse the reasons why our cost estimate diverges from that of other recent reports and conclude that the divergence arises from fundamental design choices rather than from differences in costing methodology. Finally, we review the technology risks and conclude that they can be readily addressed by prototype testing.

  3. An automated laboratory-scale methodology for the generation of sheared mammalian cell culture samples.

    PubMed

    Joseph, Adrian; Goldrick, Stephen; Mollet, Michael; Turner, Richard; Bender, Jean; Gruber, David; Farid, Suzanne S; Titchener-Hooker, Nigel

    2017-05-01

    Continuous disk-stack centrifugation is typically used for the removal of cells and cellular debris from mammalian cell culture broths at manufacturing-scale. The use of scale-down methods to characterise disk-stack centrifugation performance enables substantial reductions in material requirements and allows a much wider design space to be tested than is currently possible at pilot-scale. The process of scaling down centrifugation has historically been challenging due to the difficulties in mimicking the Energy Dissipation Rates (EDRs) in typical machines. This paper describes an alternative and easy-to-assemble automated capillary-based methodology to generate levels of EDRs consistent with those found in a continuous disk-stack centrifuge. Variations in EDR were achieved through changes in capillary internal diameter and the flow rate of operation through the capillary. The EDRs found to match the levels of shear in the feed zone of a pilot-scale centrifuge using the experimental method developed in this paper (2.4×10^5 W/kg) are consistent with those obtained through previously published computational fluid dynamic (CFD) studies (2.0×10^5 W/kg). Furthermore, this methodology can be incorporated into existing scale-down methods to model the process performance of continuous disk-stack centrifuges. This was demonstrated through the characterisation of culture hold time, culture temperature and EDRs on centrate quality. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
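
    For a feel of how capillary geometry and flow rate set the EDR, a back-of-envelope laminar estimate (Hagen-Poiseuille pumping power divided by the fluid mass in the capillary; the EDR values published here came from experiment and CFD, not from this formula):

    ```python
    import numpy as np

    def mean_capillary_edr(q, d, mu=1e-3, rho=1e3):
        """Average energy dissipation rate (W/kg) for laminar capillary flow:
        (dP * Q) / (rho * V). Capillary length cancels, leaving a steep
        d^-6 dependence on the internal diameter.

        q: volumetric flow rate (m^3/s), d: internal diameter (m),
        mu: viscosity (Pa s), rho: density (kg/m^3)."""
        return 512.0 * mu * q**2 / (rho * np.pi**2 * d**6)

    # e.g. 30 mL/min through a 0.25 mm ID capillary
    print(f"{mean_capillary_edr(30e-6 / 60, 0.25e-3):.1e} W/kg")
    ```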

  4. Community Health Workers in Low- and Middle-Income Countries: What Do We Know About Scaling Up and Sustainability?

    PubMed Central

    Minhas, Dilpreet; Pérez-Escamilla, Rafael; Taylor, Lauren; Curry, Leslie; Bradley, Elizabeth H.

    2013-01-01

    Objectives. We sought to provide a systematic review of the determinants of success in scaling up and sustaining community health worker (CHW) programs in low- and middle-income countries (LMICs). Methods. We searched 11 electronic databases for academic literature published through December 2010 (n = 603 articles). Two independent reviewers applied exclusion criteria to identify articles that provided empirical evidence about the scale-up or sustainability of CHW programs in LMICs, then extracted data from each article by using a standardized form. We analyzed the resulting data for determinants and themes through iterated categorization. Results. The final sample of articles (n = 19) present data on CHW programs in 16 countries. We identified 23 enabling factors and 15 barriers to scale-up and sustainability, which were grouped into 3 thematic categories: program design and management, community fit, and integration with the broader environment. Conclusions. Scaling up and sustaining CHW programs in LMICs requires effective program design and management, including adequate training, supervision, motivation, and funding; acceptability of the program to the communities served; and securing support for the program from political leaders and other health care providers. PMID:23678926

  5. SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications

    PubMed Central

    Kalinin, Alexandr A.; Palanimalai, Selvam; Dinov, Ivo D.

    2018-01-01

    The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis. PMID:29630069

  7. Design of a Fission 99Mo Recovery Process and Implications toward Mo Adsorption Mechanism on Titania and Alumina Sorbents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stepinski, Dominique C.; Youker, Amanda J.; Krahn, Elizabeth O.

    2017-03-01

    Molybdenum-99 is the parent of the most widely used medical isotope, technetium-99m. Proliferation concerns have prompted development of alternative Mo production methods utilizing low enriched uranium. Alumina and titania sorbents were evaluated for separation of Mo from concentrated uranyl nitrate solutions. System, mass transfer, and isotherm parameters were determined to enable design of Mo separation processes under a wide range of conditions. A model-based approach was utilized to design representative commercial-scale column processes. The designs and parameters were verified with bench-scale experiments. The results are essential for the design of Mo separation processes from irradiated uranium solutions, selection of support material, and process optimization. Mo uptake studies show that adsorption decreases with increasing concentration of uranyl nitrate; however, examination of Mo adsorption as a function of nitrate ion concentration shows no dependency, indicating that uranium competes with Mo for adsorption sites. These results are consistent with reports indicating that Mo forms inner-sphere complexes with titania and alumina surface groups.
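
    The site-competition conclusion maps naturally onto a competitive Langmuir isotherm. A sketch with purely illustrative parameters (not fitted values from this work):

    ```python
    def competitive_langmuir(c_mo, c_u, q_max, k_mo, k_u):
        """Mo uptake when uranyl competes for the same adsorption sites:
        q_Mo = q_max * K_Mo*C_Mo / (1 + K_Mo*C_Mo + K_U*C_U)."""
        return q_max * k_mo * c_mo / (1.0 + k_mo * c_mo + k_u * c_u)

    # Uptake falls as uranyl concentration rises; nitrate alone changes nothing
    for c_u in (0.0, 0.5, 1.0):    # mol/L uranyl
        print(c_u, round(competitive_langmuir(1e-3, c_u, 0.2, 800.0, 5.0), 4))
    ```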

  8. Fabrication Method for Laboratory-Scale High-Performance Membrane Electrode Assemblies for Fuel Cells.

    PubMed

    Sassin, Megan B; Garsany, Yannick; Gould, Benjamin D; Swider-Lyons, Karen E

    2017-01-03

    Custom catalyst-coated membranes (CCMs) and membrane electrode assemblies (MEAs) are necessary for the evaluation of advanced electrocatalysts, gas diffusion media (GDM), ionomers, polymer electrolyte membranes (PEMs), and electrode structures designed for use in next-generation fuel cells, electrolyzers, or flow batteries. This Feature provides a reliable and reproducible fabrication protocol for laboratory-scale (10 cm²) fuel cells based on ultrasonic spray deposition of a standard Pt/carbon electrocatalyst directly onto a perfluorosulfonic acid PEM.

  9. Automated Field-of-View, Illumination, and Recognition Algorithm Design of a Vision System for Pick-and-Place Considering Colour Information in Illumination and Images

    PubMed Central

    Chen, Yibing; Ogata, Taiki; Ueyama, Tsuyoshi; Takada, Toshiyuki; Ota, Jun

    2018-01-01

    Machine vision is playing an increasingly important role in industrial applications, and the automated design of image recognition systems has been a subject of intense research. This study has proposed a system for automatically designing the field-of-view (FOV) of a camera, the illumination strength, and the parameters in a recognition algorithm. We formulated the design problem as an optimisation problem and used an experiment based on a hierarchical algorithm to solve it. The evaluation experiments using translucent plastic objects showed that the use of the proposed system resulted in an effective solution with a wide FOV, recognition of all objects, and 0.32 mm and 0.4° maximal positional and angular errors when all the RGB (red, green and blue) channels for illumination and the R channel image for recognition were used. Though all the RGB illumination and grey-scale images also provided recognition of all the objects, only a narrow FOV was selected. Moreover, full recognition was not achieved by using only G illumination and a grey-scale image. The results showed that the proposed method can automatically design the FOV, illumination and parameters in the recognition algorithm, and that tuning all the RGB illumination is desirable even when single-channel or grey-scale images are used for recognition. PMID:29786665

  10. Automated Field-of-View, Illumination, and Recognition Algorithm Design of a Vision System for Pick-and-Place Considering Colour Information in Illumination and Images.

    PubMed

    Chen, Yibing; Ogata, Taiki; Ueyama, Tsuyoshi; Takada, Toshiyuki; Ota, Jun

    2018-05-22

    Machine vision is playing an increasingly important role in industrial applications, and the automated design of image recognition systems has been a subject of intense research. This study has proposed a system for automatically designing the field-of-view (FOV) of a camera, the illumination strength, and the parameters in a recognition algorithm. We formulated the design problem as an optimisation problem and used an experiment based on a hierarchical algorithm to solve it. The evaluation experiments using translucent plastic objects showed that the use of the proposed system resulted in an effective solution with a wide FOV, recognition of all objects, and 0.32 mm and 0.4° maximal positional and angular errors when all the RGB (red, green and blue) channels for illumination and the R channel image for recognition were used. Though all the RGB illumination and grey-scale images also provided recognition of all the objects, only a narrow FOV was selected. Moreover, full recognition was not achieved by using only G illumination and a grey-scale image. The results showed that the proposed method can automatically design the FOV, illumination and parameters in the recognition algorithm, and that tuning all the RGB illumination is desirable even when single-channel or grey-scale images are used for recognition.

  11. RESPONSIVENESS OF THE ACTIVITIES OF DAILY LIVING SCALE OF THE KNEE OUTCOME SURVEY AND NUMERIC PAIN RATING SCALE IN PATIENTS WITH PATELLOFEMORAL PAIN

    PubMed Central

    Piva, Sara R.; Gil, Alexandra B.; Moore, Charity G.; Fitzgerald, G. Kelley

    2016-01-01

    Objective: To assess the internal and external responsiveness of the Activities of Daily Living Scale of the Knee Outcome Survey and the Numeric Pain Rating Scale in patients with patellofemoral pain. Design: One-group pre-post design. Subjects: A total of 60 individuals with patellofemoral pain (33 women; mean age 29.9 (standard deviation 9.6) years). Methods: The Activities of Daily Living Scale and the Numeric Pain Rating Scale were assessed before and after an 8-week physical therapy program. Patients completed a global rating of change scale at the end of therapy. The standardized effect size, Guyatt responsiveness index, and minimum clinically important difference were calculated. Results: The standardized effect size of the Activities of Daily Living Scale was 0.63, the Guyatt responsiveness index was 1.4, the area under the curve was 0.83 (95% confidence interval: 0.72, 0.94), and the minimum clinically important difference corresponded to an increase of 7.1 percentile points. The standardized effect size of the Numeric Pain Rating Scale was 0.72, the Guyatt responsiveness index was 2.2, the area under the curve was 0.80 (95% confidence interval: 0.70, 0.92), and the minimum clinically important difference corresponded to a decrease of 1.16 points. Conclusion: Information from this study may be helpful to therapists when evaluating the effectiveness of rehabilitation interventions on physical function and pain, and for powering future clinical trials in patients with patellofemoral pain. PMID:19229444
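    A minimal sketch of the two responsiveness statistics named in the abstract, using synthetic data (the anchor variable, sample values, and baseline SD below are invented for illustration): the standardized effect size divides mean change by the baseline standard deviation, and the MCID is taken as the change-score cutoff that maximizes Youden's J against the global rating of change.

      import numpy as np

      rng = np.random.default_rng(0)
      change = rng.normal(8.0, 12.0, 60)                  # pre-post change scores
      improved = change + rng.normal(0.0, 6.0, 60) > 5.0  # synthetic GRC anchor

      sd_baseline = 15.0                    # hypothetical SD of baseline scores
      ses = change.mean() / sd_baseline     # standardized effect size

      def youden(cutoff):
          sens = np.mean(change[improved] >= cutoff)
          spec = np.mean(change[~improved] < cutoff)
          return sens + spec - 1.0

      mcid = max(np.sort(change), key=youden)   # ROC-based MCID estimate
      print(f"SES = {ses:.2f}, MCID = {mcid:.1f} points")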

  12. BioPartsBuilder: a synthetic biology tool for combinatorial assembly of biological parts.

    PubMed

    Yang, Kun; Stracquadanio, Giovanni; Luo, Jingchuan; Boeke, Jef D; Bader, Joel S

    2016-03-15

    Combinatorial assembly of DNA elements is an efficient method for building large-scale synthetic pathways from standardized, reusable components. These methods are particularly useful because they enable assembly of multiple DNA fragments in one reaction, at the cost of requiring that each fragment satisfy design constraints. We developed BioPartsBuilder as a biologist-friendly web tool to design biological parts that are compatible with DNA combinatorial assembly methods, such as Golden Gate and related methods. It retrieves biological sequences, enforces compliance with assembly design standards, and provides a fabrication plan for each fragment. BioPartsBuilder is accessible at http://public.biopartsbuilder.org, and an Amazon Web Services image is available from the AWS Market Place (AMI ID: ami-508acf38). Source code is released under the MIT license and is available for download at https://github.com/baderzone/biopartsbuilder. Contact: joel.bader@jhu.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
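    One concrete example of the design constraints such assembly tools enforce (a sketch, not BioPartsBuilder's implementation): a part destined for Golden Gate assembly must not contain internal recognition sites for the Type IIS enzyme on either strand. BsaI's site is used below.

      def revcomp(seq: str) -> str:
          """Reverse complement of a DNA sequence."""
          return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

      def has_internal_site(part: str, site: str = "GGTCTC") -> bool:
          """True if the BsaI recognition site occurs on either strand."""
          part = part.upper()
          return site in part or revcomp(site) in part

      print(has_internal_site("ATGGGTCTCAAACCC"))   # True: site on top strand
      print(has_internal_site("ATGAAACCCGGGTTT"))   # False: safe to assemble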

  13. Scaling Studies for Advanced High Temperature Reactor Concepts, Final Technical Report: October 2014—December 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Brian; Gutowska, Izabela; Chiger, Howard

    Computer simulations of nuclear reactor thermal-hydraulic phenomena are often used in the design and licensing of nuclear reactor systems. In order to assess the accuracy of these computer simulations, computer codes and methods are often validated against experimental data. This experimental data must be of sufficiently high quality in order to conduct a robust validation exercise. In addition, this experimental data is generally collected at experimental facilities that are of a smaller scale than the reactor systems being simulated, due to cost considerations. Therefore, smaller-scale test facilities must be designed and constructed in such a fashion as to ensure that the prototypical behavior of a particular nuclear reactor system is preserved. The work completed through this project has resulted in scaling analyses and conceptual design development for a test facility capable of collecting code validation data for the following high temperature gas reactor systems and events: (1) passive natural circulation core cooling system, (2) pebble bed gas reactor concept, (3) General Atomics Energy Multiplier Module reactor, and (4) prismatic block design steam-water ingress event. In the event that code validation data for these systems or events is needed in the future, significant progress in the design of an appropriate integral-type test facility has already been completed as a result of this project. Where applicable, the next step would be to begin detailed design development and material procurement. As part of this project, applicable scaling analyses were completed and test facility design requirements developed. Conceptual designs were developed for the implementation of these design requirements at the Oregon State University (OSU) High Temperature Test Facility (HTTF). The original HTTF is based on a ¼-scale model of a high temperature gas reactor concept with the capability for both forced and natural circulation flow through a prismatic core with an electrical heat source. The peak core region temperature capability is 1400°C. As part of this project, an inventory of test facilities that could be used for these experimental programs was completed. Several of these facilities showed some promise; however, upon further investigation it became clear that only the OSU HTTF had the power and/or peak temperature limits that would allow for the experimental programs envisioned herein. Thus the conceptual design and feasibility study development focused on examining the feasibility of configuring the current HTTF to collect validation data for these experimental programs. In addition to the scaling analyses and conceptual design development, a test plan was developed for the envisioned modified test facility. This test plan included a discussion of an appropriate shakedown test program as well as the specific matrix tests. Finally, a feasibility study was completed to determine the cost and schedule considerations that would be important to any test program developed to investigate these designs and events.

  14. Ionospheric gravity wave measurements with the USU dynasonde

    NASA Technical Reports Server (NTRS)

    Berkey, Frank T.; Deng, Jun Yuan

    1992-01-01

    A method for the measurement of ionospheric gravity waves (GW) using the USU Dynasonde is outlined. This method consists of a series of individual procedures, which include functions for data acquisition, adaptive scaling, polarization discrimination, interpolation and extrapolation, digital filtering, windowing, spectrum analysis, GW detection, and graphics display. Concepts of system theory are applied to treat the ionosphere as a system. An adaptive ionogram scaling method was developed for automatically extracting ionogram echo traces from noisy raw sounding data. The method uses the well-known Least Mean Square (LMS) algorithm to form a stochastic optimal estimate of the echo trace, which is then used to control a moving window. The window tracks the echo trace, simultaneously eliminating the noise and interference. Experimental results show that the proposed method functions as designed. Case studies which extract GW from ionosonde measurements were carried out using the techniques described. Geophysically significant events were detected, and the resultant processed results are illustrated graphically. The method was also developed with real-time implementation in mind.

  15. Sensitivity analysis and optimization method for the fabrication of one-dimensional beam-splitting phase gratings

    PubMed Central

    Pacheco, Shaun; Brand, Jonathan F.; Zaverton, Melissa; Milster, Tom; Liang, Rongguang

    2015-01-01

    A method to design one-dimensional beam-splitting phase gratings with low sensitivity to fabrication errors is described. The method optimizes the phase function of a grating by minimizing the integrated variance of the energy of each output beam over a range of fabrication errors. Numerical results for three 1×9 beam-splitting phase gratings are given. Two optimized gratings with low sensitivity to fabrication errors were compared with a grating designed for optimal efficiency. These three gratings were fabricated using gray-scale photolithography. The standard deviation of the 9 outgoing beam energies in the optimized gratings was 2.3 and 3.4 times lower than in the optimal-efficiency grating. PMID:25969268
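    The optimization target described above can be made concrete with a short sketch (simplifying assumptions: a scalar thin-phase grating, fabrication error modeled as a uniform depth-scaling of the phase profile, and a toy binary profile rather than the authors' optimized designs):

      import numpy as np

      N = 512
      x = np.linspace(0.0, 1.0, N, endpoint=False)
      phi = 2.2 * np.sign(np.sin(2 * np.pi * x))   # toy binary phase profile (rad)

      def order_energies(phase, n_orders=9):
          """Energies of the central diffraction orders of exp(i*phase)."""
          c = np.fft.fft(np.exp(1j * phase)) / phase.size  # Fourier coefficients
          idx = np.r_[0:(n_orders // 2 + 1), -(n_orders // 2):0]
          return np.abs(c[idx]) ** 2                       # orders -4..4

      def robustness_cost(phase, errs=np.linspace(-0.1, 0.1, 21)):
          """Integrated variance of the 9 beam energies over depth errors."""
          return float(np.mean([np.var(order_energies((1 + e) * phase)) for e in errs]))

      print(f"cost = {robustness_cost(phi):.3e}")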

  16. Stretch-and-release fabrication, testing and optimization of a flexible ceramic armor inspired from fish scales.

    PubMed

    Martini, Roberto; Barthelat, Francois

    2016-10-13

    Protective systems that are simultaneously hard to puncture and compliant in flexion are desirable, but difficult to achieve because hard materials are usually stiff. However, this conflicting design requirement can be overcome by combining plates of a hard material with a softer substrate, a strategy widely found in natural armors such as fish scales or osteoderms. Man-made segmented armors have a long history, but their systematic implementation in modern protective systems is still hampered by a limited understanding of the mechanics, a lack of design optimization guidelines, and challenges in cost-efficient manufacturing. This study addresses these limitations with a flexible bioinspired armor based on overlapping ceramic scales. The fabrication combines laser engraving and a stretch-and-release method which allows fine tuning of the size and overlap of the scales, and which is suitable for large-scale fabrication. Compared to a continuous layer of uniform ceramic, our fish-scale-like armor is not only more flexible, but also more resistant to puncture and more damage tolerant. The proposed armor is also about ten times more puncture resistant than soft elastomers, making it a very attractive alternative to traditional protective equipment.

  17. Characterizing Long-Term Groundwater Conditions and Lithology for the Design of Large-Scale Borehole Heat Exchangers

    NASA Astrophysics Data System (ADS)

    Smith, David Charles

    Construction of large-scale ground-coupled heat pump (GCHP) systems that operate with hundreds or even thousands of boreholes for the borehole heat exchangers (BHE) has increased in recent years, with many coming on line in the past 10 years. Many large institutions are constructing these systems because of their ability to store energy in the subsurface for indoor cooling during the warm summer months and extract that energy for heating during the cool winter months. Despite the increase in GCHP systems constructed, there have been few long-term studies on how these large systems interact with the subsurface. The thermal response test (TRT) is the industry standard for determining the thermal properties of the rock and soil. The TRT is limited in that it can only be used to determine the effective thermal conductivity over the whole length of a single borehole at the time that it is administered. The TRT cannot account for long-term changes in aquifer saturation or groundwater flow, nor characterize different rock and soil units by their effectiveness for heat storage. This study established new methods for, and demonstrated the need for, characterization of the subsurface for the purpose of design and long-term monitoring of GCHP systems. These new methods show that characterizing long-term changes in aquifer saturation and groundwater flow, and characterizing different rock and soil units, are an important part of the design and planning process for these systems. A greater understanding of how large-scale GCHP systems interact with the subsurface will result in designs that perform more efficiently over a longer period of time, and expensive modifications due to unforeseen changes in system performance will be reduced.

  18. Preface: Introductory Remarks: Linear Scaling Methods

    NASA Astrophysics Data System (ADS)

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

    It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computational effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3], but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop was that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is, non-linear-scaling) methods; this highlights the important question of crossover: at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up implementation questions relating to parallelization (particularly with multi-core processors starting to dominate the market), inherent scaling, and basis sets (in both normal and linear scaling codes). For now, the answer seems to lie between 100 and 1,000 atoms, though this depends on the type of simulation used, among other factors. Basis sets are still a problematic question in the area of electronic structure calculations. The linear scaling community has largely split into two camps: those using relatively small basis sets based on local atomic-like functions (where systematic convergence to the full basis set limit is hard to achieve), and those that use necessarily larger basis sets which allow systematic convergence and are therefore the localised equivalent of plane waves. Related to basis sets is the study of Wannier functions, on which some linear scaling methods are based and which give a good point of contact with traditional techniques; they are particularly interesting for modelling unoccupied states with linear scaling methods. There are, of course, as many approaches to the linear scaling solution for the density matrix as there are groups in the area, though there are various broad families: McWeeny-based methods, fragment-based methods, recursion methods, and combinations of these. While many ideas have been in development for several years, there are still improvements emerging, as shown by the rich variety of the talks below.
    Applications using O(N) DFT methods are now starting to emerge, though they are still clearly not trivial. Once systems to be simulated cross the 10,000 atom barrier, only linear scaling methods can be applied, even with the most efficient standard techniques. One of the most challenging problems remaining, now that ab initio methods can be applied to large systems, is the long timescale problem. Although much of the work presented was concerned with improving the performance of the codes and applying them to scientifically important problems, there was another important theme: extending functionality. The search for greater accuracy has produced an implementation of a density functional designed to model van der Waals interactions accurately, as well as local correlation, TDDFT, QMC and GW methods which, while not explicitly O(N), take advantage of localisation. All speakers at the workshop were invited to contribute to this issue, but not all were able to do so. Hence it is useful to give a complete list of the talks presented, grouped by session; many talks, however, fell within more than one area. This is an exciting time for linear scaling methods, which are already starting to contribute significantly to important scientific problems.
    Applications to nanostructures and biomolecules:
    A DFT study on the structural stability of Ge 3D nanostructures on Si(001) using CONQUEST (Tsuyoshi Miyazaki, D R Bowler, M J Gillan, T Otsuka and T Ohno)
    Large scale electronic structure calculation theory and several applications (Takeo Fujiwara and Takeo Hoshi)
    ONETEP: Linear-scaling DFT with plane waves (Chris-Kriton Skylaris, Peter D Haynes, Arash A Mostofi, Mike C Payne)
    Maximally-localised Wannier functions as building blocks for large-scale electronic structure calculations (Arash A Mostofi and Nicola Marzari)
    A linear scaling three dimensional fragment method for ab initio calculations (Lin-Wang Wang, Zhengji Zhao, Juan Meza)
    Peta-scalable reactive molecular dynamics simulation of mechanochemical processes (Aiichiro Nakano, Rajiv K Kalia, Ken-ichi Nomura, Fuyuki Shimojo and Priya Vashishta)
    Recent developments and applications of the real-space multigrid (RMG) method (Jerzy Bernholc, M Hodak, W Lu and F Ribeiro)
    Energy minimisation functionals and algorithms:
    CONQUEST: A linear scaling DFT code (David R Bowler, Tsuyoshi Miyazaki, Antonio Torralba, Veronika Brazdova, Milica Todorovic, Takao Otsuka and Mike Gillan)
    Kernel optimisation and the physical significance of optimised local orbitals in the ONETEP code (Peter Haynes, Chris-Kriton Skylaris, Arash Mostofi and Mike Payne)
    A miscellaneous overview of SIESTA algorithms (Jose M Soler)
    Wavelets as a basis set for electronic structure calculations and electrostatic problems (Stefan Goedecker)
    Wavelets as a basis set for linear scaling electronic structure calculations (Mark Rayson)
    O(N) Krylov subspace method for large-scale ab initio electronic structure calculations (Taisuke Ozaki)
    Linear scaling calculations with the divide-and-conquer approach and with non-orthogonal localized orbitals (Weitao Yang)
    Toward efficient wavefunction based linear scaling energy minimization (Valery Weber)
    Accurate O(N) first-principles DFT calculations using finite differences and confined orbitals (Jean-Luc Fattebert)
    Linear-scaling methods in dynamics simulations, or beyond DFT and ground state properties:
    An O(N) time-domain algorithm for TDDFT (Guan Hua Chen)
    Local correlation theory and electronic delocalization (Joseph Subotnik)
    Ab initio molecular dynamics with linear scaling: foundations and applications (Eiji Tsuchida)
    Towards a linear scaling Car-Parrinello-like approach to Born-Oppenheimer molecular dynamics (Thomas Kühne, Michele Ceriotti, Matthias Krack and Michele Parrinello)
    Partial linear scaling for quantum Monte Carlo calculations on condensed matter (Mike Gillan)
    Exact embedding of local defects in crystals using maximally localized Wannier functions (Eric Cancès)
    Faster GW calculations in larger model structures using ultralocalized nonorthogonal Wannier functions (Paolo Umari)
    Other approaches for linear scaling, including methods for metals:
    Partition-of-unity finite element method for large, accurate electronic-structure calculations of metals (John E Pask and Natarajan Sukumar)
    Semiclassical approach to density functional theory (Kieron Burke)
    Ab initio transport calculations in defected carbon nanotubes using O(N) techniques (Blanca Biel, F J Garcia-Vidal, A Rubio and F Flores)
    Large-scale calculations with the tight-binding (screened) KKR method (Rudolf Zeller)
    Acknowledgments: We gratefully acknowledge funding for the workshop from the UK CCP9 network, CECAM and the ESF through the PsiK network. DRB, PDH and CKS are funded by the Royal Society.
    References: [1] Car R and Parrinello M 1985 Phys. Rev. Lett. 55 2471. [2] Kühne T D, Krack M, Mohamed F R and Parrinello M 2007 Phys. Rev. Lett. 98 066401. [3] Goedecker S 1999 Rev. Mod. Phys. 71 1085.
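    As a small illustration of one of the density-matrix approaches named above (a dense-matrix sketch only; production O(N) codes exploit sparsity by truncating small density-matrix elements, and the toy Hamiltonian and chemical potential below are invented), McWeeny purification iterates rho <- 3 rho^2 - 2 rho^3 toward an idempotent density matrix:

      import numpy as np

      def mcweeny_density(H, mu, iters=30):
          """Purify an initial guess whose spectrum lies in [0, 1] toward idempotency."""
          e = np.linalg.eigvalsh(H)
          lam = 1.0 / (2.0 * (e[-1] - e[0]))          # maps all eigenvalues into [0, 1]
          rho = lam * (mu * np.eye(len(H)) - H) + 0.5 * np.eye(len(H))
          for _ in range(iters):
              rho2 = rho @ rho
              rho = 3.0 * rho2 - 2.0 * rho2 @ rho     # McWeeny step
          return rho

      H = np.diag([-2.0, -1.0, 0.5, 1.5])             # toy Hamiltonian (eV)
      rho = mcweeny_density(H, mu=0.0)                # mu placed in the HOMO-LUMO gap
      print(round(float(np.trace(rho)), 6))           # -> 2.0: two occupied states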

  19. Aerodynamic design applying automatic differentiation and using robust variable fidelity optimization

    NASA Astrophysics Data System (ADS)

    Takemiya, Tetsushi

    In modern aerospace engineering, the physics-based computational design method is becoming more important, as it is more efficient than experiments and more suitable for designing new types of aircraft (e.g., unmanned aerial vehicles or supersonic business jets) than the conventional design method, which relies heavily on historical data. To enhance the reliability of the physics-based computational design method, researchers have made tremendous efforts to improve the fidelity of models. However, high-fidelity models require longer computational time, so the advantage of efficiency is partially lost. This problem has been overcome with the development of variable fidelity optimization (VFO). In VFO, different fidelity models are simultaneously employed in order to improve the speed and the accuracy of convergence in an optimization process. Among the various types of VFO methods, one of the most promising is the approximation management framework (AMF). In the AMF, objective and constraint functions of a low-fidelity model are scaled at a design point so that the scaled functions, which are referred to as "surrogate functions," match those of a high-fidelity model. Since the scaling functions and the low-fidelity model together constitute the surrogate functions, evaluating the surrogate functions is faster than evaluating the high-fidelity model. Therefore, in the optimization process, in which gradient-based optimization is implemented and thus many function calls are required, the surrogate functions are used instead of the high-fidelity model to obtain a new design point. The best feature of the AMF is that it may converge to a local optimum of the high-fidelity model in much less computational time than optimization with the high-fidelity model alone. However, through literature surveys and implementations of the AMF, the author found that (1) the AMF is very vulnerable when the computational analysis models have numerical noise, which is very common in high-fidelity models, and that (2) the AMF terminates optimization erroneously when the optimization problems have constraints. The first problem is due to inaccuracy in computing derivatives in the AMF, and the second problem is due to erroneous treatment of the trust region ratio, which sets the size of the domain for an optimization in the AMF. In order to solve the first problem, the automatic differentiation (AD) technique, which reads the code of analysis models and automatically generates new derivative code based on mathematical rules, is applied. If derivatives are computed with the generated derivative code, they are analytical, and the required computational time is independent of the number of design variables, which is very advantageous for realistic aerospace engineering problems. However, if analysis models implement iterative computations such as computational fluid dynamics (CFD), which solves systems of partial differential equations iteratively, computing derivatives through AD requires massive memory. The author addressed this deficiency by modifying the AD approach and developing a more efficient implementation with CFD, and successfully applied AD to general CFD software. In order to solve the second problem, the governing equation of the trust region ratio, which is very strict against the violation of constraints, is modified so that it can accept the violation of constraints within some tolerance. By accepting violations of constraints during the optimization process, the AMF can continue optimization without terminating prematurely and eventually find the true optimum design point. With these modifications, the AMF is referred to as the "Robust AMF," and it is applied to airfoil and wing aerodynamic design problems using Euler CFD software. The former problem has 21 design variables, and the latter 64. In both problems, derivatives computed with the proposed AD method are first compared with those computed with the finite difference (FD) method, and then the Robust AMF is implemented alongside the sequential quadratic programming (SQP) optimization method using only high-fidelity models. The proposed AD method computes derivatives more accurately and faster than the FD method, and the Robust AMF successfully optimizes the shapes of the airfoil and the wing in a much shorter time than SQP with only high-fidelity models. These results clearly show the effectiveness of the Robust AMF. Finally, the feasibility of reducing the computational time for calculating derivatives and the necessity of an AMF with an optimum design point always in the feasible region are discussed as future work.
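    The first-order scaling at the heart of the AMF can be sketched as follows (a hedged 1-D illustration with toy models, not the dissertation's code): the multiplicative scale factor and its gradient are chosen at the current design point x0 so that the surrogate matches the high-fidelity function value and gradient there.

      import numpy as np

      def f_hi(x): return np.sin(x) + 0.05 * x**2   # "high-fidelity" toy model
      def f_lo(x): return np.sin(x)                 # cheap low-fidelity model

      def grad(f, x, h=1e-6):
          """Central finite-difference gradient (for the sketch only)."""
          return (f(x + h) - f(x - h)) / (2 * h)

      def make_surrogate(x0):
          s0 = f_hi(x0) / f_lo(x0)                                  # zeroth-order match
          ds = (grad(f_hi, x0) - s0 * grad(f_lo, x0)) / f_lo(x0)    # first-order match
          return lambda x: (s0 + ds * (x - x0)) * f_lo(x)

      x0 = 1.0
      surrogate = make_surrogate(x0)
      print(f_hi(x0) - surrogate(x0))               # ~0: value consistency at x0
      print(grad(f_hi, x0) - grad(surrogate, x0))   # ~0: gradient consistency at x0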

  20. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation. Functional modules F1--F8 -- Volume 2, Part 1, Revision 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, N.M.; Petrie, L.M.; Westfall, R.M.

    SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.

  1. Designing artificial 2D crystals with site and size controlled quantum dots.

    PubMed

    Xie, Xuejun; Kang, Jiahao; Cao, Wei; Chu, Jae Hwan; Gong, Yongji; Ajayan, Pulickel M; Banerjee, Kaustav

    2017-08-30

    Ordered arrays of quantum dots in two-dimensional (2D) materials would make promising optical materials, but their assembly could prove challenging. Here we demonstrate a scalable, site- and size-controlled fabrication of quantum dots in monolayer molybdenum disulfide (MoS₂), and quantum dot arrays with nanometer-scale spatial density, by focused electron beam irradiation induced local 2H to 1T phase change in MoS₂. By designing the quantum dots in a 2D superlattice, we show that new energy bands form where the new band gap can be controlled by the size and pitch of the quantum dots in the superlattice. The band gap can be tuned from 1.81 eV to 1.42 eV without loss of its photoluminescence performance, which provides new directions for fabricating lasers with designed wavelengths. Our work constitutes a photoresist-free, top-down method to create large-area quantum dot arrays with nanometer-scale spatial density that allow the quantum dots to interfere with each other and create artificial crystals. This technique opens up new pathways for fabricating light-emitting devices with 2D materials at desired wavelengths. This demonstration can also enable the assembly of large-scale quantum information systems and open up new avenues for the design of artificial 2D materials.

  2. Center for Interface Science and Catalysis | Theory

    Science.gov Websites

    Stanford School of Engineering. The center's research aims to overcome challenges associated with the atomic-scale design of catalysts for chemical transformations; using computational methods, it is developing a quantitative description of chemical processes at the solid-gas and solid-liquid interfaces.

  3. ADVANCED URBANIZED METEOROLOGICAL MODELING AND AIR QUALITY SIMULATIONS WITH CMAQ AT NEIGHBORHOOD SCALES

    EPA Science Inventory

    We present results from a study testing a new boundary layer parameterization method, the canopy drag approach (DA), which is designed to explicitly simulate the effects of buildings, street and tree canopies on the dynamic and thermodynamic structure and dispersion fields in urban...

  4. New design for interfacing computers to the Octopus network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sloan, L.J.

    1977-03-14

    The Lawrence Livermore Laboratory has several large-scale computers which are connected to the Octopus network. Several difficulties arise in providing adequate resources along with reliable performance. To alleviate some of these problems a new method of bringing large computers into the Octopus environment is proposed.

  5. Self-Efficacy Regarding Social Work Competencies

    ERIC Educational Resources Information Center

    Holden, Gary; Barker, Kathleen; Kuppens, Sofie; Rosenberg, Gary

    2017-01-01

    Purpose: The need for psychometrically sound measurement approaches to social work educational outcomes assessment is increasing. Method: The research reported here describes an original and two replication studies of a new scale (N = 550) designed to assess an individual's self-efficacy regarding social work competencies specified by the Council…

  6. Use of Microcomputer to Manage Assessment Data.

    ERIC Educational Resources Information Center

    Vance, Booney; Hayden, David

    1982-01-01

    Examples are provided of a computerized special education management system used to manage assessment data for exceptional students. The system is designed to provide a simple yet efficient method of tracking data from educational and psychological evaluations (specifically the Wechsler Intelligence Scale for Children--Revised scores). (CL)

  7. In-Memory Graph Databases for Web-Scale Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Morari, Alessandro; Weaver, Jesse R.

    RDF databases have emerged as one of the most relevant ways of organizing, integrating, and managing exponentially growing, often heterogeneous, and not rigidly structured data for a variety of scientific and commercial fields. In this paper we discuss the solutions integrated in GEMS (Graph database Engine for Multithreaded Systems), a software framework for implementing RDF databases on commodity, distributed-memory high-performance clusters. Unlike the majority of current RDF databases, GEMS has been designed from the ground up to primarily employ graph-based methods. This is reflected in all the layers of its stack. The GEMS framework is composed of: a SPARQL-to-C++ compiler, a library of data structures and related methods to access and modify them, and a custom runtime providing lightweight software multithreading, network message aggregation, and a partitioned global address space. We provide an overview of the framework, detailing its components and how they have been closely designed and customized to address issues of graph methods applied to large-scale datasets on clusters. We discuss in detail the principles that enable automatic translation of queries (expressed in SPARQL, the query language of choice for RDF databases) to graph methods, and identify differences with respect to other RDF databases.
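    The translation of declarative queries to graph methods described above can be illustrated with a toy sketch (Python standing in for GEMS's generated multithreaded C++; the data and helper below are invented): each SPARQL triple pattern becomes a scan that extends variable bindings.

      TRIPLES = {
          ("alice", "knows", "bob"),
          ("bob", "knows", "carol"),
          ("alice", "worksAt", "acme"),
      }

      def match(pattern, binding):
          """Yield extended bindings for one triple pattern ('?x' marks variables)."""
          for triple in TRIPLES:
              b = dict(binding)
              for term, value in zip(pattern, triple):
                  if term.startswith("?"):
                      if b.setdefault(term, value) != value:
                          break            # variable already bound to something else
                  elif term != value:
                      break                # constant does not match
              else:
                  yield b

      # SELECT ?x ?y WHERE { ?x knows ?y . ?x worksAt acme }
      results = [b2 for b1 in match(("?x", "knows", "?y"), {})
                    for b2 in match(("?x", "worksAt", "acme"), b1)]
      print(results)   # [{'?x': 'alice', '?y': 'bob'}]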

  8. BETA (Bitter Electromagnet Testing Apparatus) Design and Testing

    NASA Astrophysics Data System (ADS)

    Bates, Evan; Birmingham, William; Rivera, William; Romero-Talamas, Carlos

    2016-10-01

    BETA is a 1 T water-cooled Bitter-type magnet system that has been designed and constructed at the Dusty Plasma Laboratory of the University of Maryland, Baltimore County to serve as a prototype of a scaled 10 T version. Currently the system is undergoing magnetic, thermal and mechanical testing to ensure safe operating conditions and to validate analytical design optimizations. These magnets will function as experimental tools for future dusty plasma based and collaborative experiments. An overview of the design methods used for building a custom-made Bitter magnet under user-defined experimental constraints is given. The three main design methods consist of minimizing the following: ohmic power, peak conductor temperatures, and stresses induced by Lorentz forces. We also discuss the design of BETA, which includes: the magnet core, pressure vessel, cooling system, power storage bank, high-power switching system, diagnostics with safety cutoff feedback, and data acquisition (DAQ)/magnet control Matlab code. Furthermore, we present experimental data from diagnostics for validation of our analytical preliminary design methodologies and finite element analysis calculations. BETA will contribute to the knowledge necessary to finalize the 10 T magnet design.
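    The first of the three design minimizations, ohmic power, reduces to simple arithmetic under a crude long-solenoid approximation (all numbers below are illustrative, not BETA's parameters; a real Bitter-stack design uses the full field, stress, and cooling integrals):

      from math import pi

      MU0 = 4e-7 * pi          # vacuum permeability (T*m/A)
      B_target = 1.0           # target central field (T)
      length = 0.20            # coil stack height (m)
      turns = 40               # number of Bitter disks (turns)

      current = B_target * length / (MU0 * turns)   # from B = mu0 * N * I / L
      resistance = 2.0e-3                           # assumed stack resistance (ohm)
      power = current**2 * resistance               # ohmic heat the water must remove

      print(f"I = {current/1e3:.1f} kA, ohmic power = {power/1e3:.0f} kW")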

  9. Additive Manufacturing of Metal Structures at the Micrometer Scale.

    PubMed

    Hirt, Luca; Reiser, Alain; Spolenak, Ralph; Zambelli, Tomaso

    2017-05-01

    Currently, the focus of additive manufacturing (AM) is shifting from simple prototyping to actual production. One driving factor of this process is the ability of AM to build geometries that are not accessible by subtractive fabrication techniques. While these techniques often call for a geometry that is easiest to manufacture, AM enables the geometry required for best performance to be built by freeing the design process from restrictions imposed by traditional machining. At the micrometer scale, the design limitations of standard fabrication techniques are even more severe. Microscale AM thus holds great potential, as confirmed by the rapid success of commercial micro-stereolithography tools as an enabling technology for a broad range of scientific applications. For metals, however, there is still no established AM solution at small scales. To tackle the limited resolution of standard metal AM methods (a few tens of micrometers at best), various new techniques aimed at the micrometer scale and below are presently under development. Here, we review these recent efforts. Specifically, we feature the techniques of direct ink writing, electrohydrodynamic printing, laser-assisted electrophoretic deposition, laser-induced forward transfer, local electroplating methods, laser-induced photoreduction and focused electron or ion beam induced deposition. Although these methods have proven to facilitate the AM of metals with feature sizes in the range of 0.1-10 µm, they are still in a prototype stage and their potential is not fully explored yet. For instance, comprehensive studies of material availability and material properties are often lacking, yet compulsory for actual applications. We address these items while critically discussing and comparing the potential of current microscale metal AM techniques. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Assessment of Patterns of Patient-Reported Outcomes in Adults with Congenital Heart disease - International Study (APPROACH-IS): rationale, design, and methods.

    PubMed

    Apers, Silke; Kovacs, Adrienne H; Luyckx, Koen; Alday, Luis; Berghammer, Malin; Budts, Werner; Callus, Edward; Caruana, Maryanne; Chidambarathanu, Shanthi; Cook, Stephen C; Dellborg, Mikael; Enomoto, Junko; Eriksen, Katrine; Fernandes, Susan M; Jackson, Jamie L; Johansson, Bengt; Khairy, Paul; Kutty, Shelby; Menahem, Samuel; Rempel, Gwen; Sluman, Maayke A; Soufi, Alexandra; Thomet, Corina; Veldtman, Gruschen; Wang, Jou-Kou; White, Kamila; Moons, Philip

    2015-01-20

    Data on patient-reported outcomes (PROs) in adults with congenital heart disease (CHD) are inconsistent and vary across the world. Better understanding of PROs and their differences across cultural and geographic barriers can best be accomplished via international studies using uniform research methods. The APPROACH-IS consortium (Assessment of Patterns of Patient-Reported Outcomes in Adults with Congenital Heart disease - International Study) was created for this purpose and investigates PROs in adults with CHD worldwide. This paper outlines the project rationale, design, and methods. APPROACH-IS is a cross-sectional study. The goal is to recruit 3500-4000 adults with CHD from 15 countries in five major regions of the world (Asia, Australia, Europe, North and South America). Self-report questionnaires are administered to capture information on PRO domains: (i) perceived health status (12-item Short-form Health Survey & EuroQOL-5D); (ii) psychological functioning (Hospital Anxiety and Depression Scale); (iii) health behaviors (Health-Behavior Scale-Congenital Heart Disease); and (iv) quality of life (Linear Analog Scale & Satisfaction With Life Scale). Additionally, potential explanatory variables are assessed: (i) socio-demographic variables; (ii) medical history (chart review); (iii) sense of coherence (Orientation to Life Questionnaire); and (iv) illness perceptions (Brief Illness Perception Questionnaire). Descriptive analyses and multilevel models will examine differences in PROs and investigate potential explanatory variables. APPROACH-IS represents a global effort to increase research understanding and capacity in the field of CHD, and will have major implications for patient care. Results will generate valuable information for developing interventions to optimize patients' health and well-being. ClinicalTrials.gov: NCT02150603. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. A new precipitation-based method of baseflow separation and event identification for small watersheds (<50 km²)

    NASA Astrophysics Data System (ADS)

    Koskelo, Antti I.; Fisher, Thomas R.; Utz, Ryan M.; Jordan, Thomas E.

    2012-07-01

    Baseflow separation methods are often impractical, require expensive materials and time-consuming methods, and/or are not designed for individual events in small watersheds. To provide a simple baseflow separation method for small watersheds, we describe a new precipitation-based technique known as the Sliding Average with Rain Record (SARR). The SARR uses rainfall data to justify each separation of the hydrograph. SARR has several advantages: it shows better consistency with the precipitation and discharge records, it is easier and more practical to implement, and it includes a method of event identification based on precipitation and quickflow response. SARR was derived from the United Kingdom Institute of Hydrology (UKIH) method with several key modifications to adapt it for small watersheds (<50 km²). We tested SARR on watersheds in the Choptank Basin on the Delmarva Peninsula (US Mid-Atlantic region) and compared the results with the UKIH method at the annual scale and the hydrochemical method at the individual event scale. Annually, SARR calculated a baseflow index that was ~10% higher than the UKIH method due to the finer time step of SARR (1 d) compared to UKIH (5 d). At the watershed scale, hydric soils were an important driver of the annual baseflow index, likely due to increased groundwater retention in hydric areas. At the event scale, SARR calculated less baseflow than the hydrochemical method, again because of the differences in time step (hourly for hydrochemical) and different definitions of baseflow. Both SARR and hydrochemical baseflow increased with event size, suggesting that baseflow contributions are more important during larger storms. To make SARR easy to implement, we have written a MATLAB program to automate the calculations, which requires only daily rainfall and daily flow data as inputs.
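    A simplified sketch in the spirit of SARR (not the authors' MATLAB code; the window length, rain threshold, and data below are invented) gates a sliding-average baseflow estimate with the rain record, so baseflow may only rise on rainless days and never exceeds total flow:

      import numpy as np

      def sarr_like_baseflow(flow, rain, window=3, rain_thresh=1.0):
          """flow, rain: daily arrays; returns a baseflow array of the same length."""
          smoothed = np.convolve(flow, np.ones(window) / window, mode="same")
          baseflow = np.empty_like(flow, dtype=float)
          baseflow[0] = min(flow[0], smoothed[0])
          for t in range(1, len(flow)):
              candidate = min(smoothed[t], flow[t])
              if rain[t] > rain_thresh:              # rainy day: no new baseflow rise
                  candidate = min(candidate, baseflow[t - 1])
              baseflow[t] = candidate
          return baseflow

      flow = np.array([2.0, 2.1, 6.0, 9.0, 4.0, 2.8, 2.4, 2.3])   # m^3/s
      rain = np.array([0.0, 0.0, 12.0, 5.0, 0.0, 0.0, 0.0, 0.0])  # mm/d
      print(sarr_like_baseflow(flow, rain).round(2))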

  12. Conceptual design and analysis of a dynamic scale model of the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Davis, D. A.; Gronet, M. J.; Tan, M. K.; Thorne, J.

    1994-01-01

    This report documents the conceptual design study performed to evaluate design options for a subscale dynamic test model which could be used to investigate the expected on-orbit structural dynamic characteristics of the Space Station Freedom early build configurations. The baseline option was a 'near-replica' model of the SSF SC-7 pre-integrated truss configuration. The approach used to develop conceptual design options involved three sets of studies: evaluation of the full-scale design and analysis databases, scale factor trade studies, and design sensitivity studies. The scale factor trade study was conducted to develop a fundamental understanding of the key scaling parameters that drive the design, performance, and cost of a SSF dynamic scale model. Four scale model options were evaluated: 1/4, 1/5, 1/7, and 1/10 scale. Prototype hardware was fabricated to assess producibility issues. Based on the results of the study, a 1/4-scale size is recommended because of the increased model fidelity associated with a larger scale factor. A design sensitivity study was performed to identify critical hardware component properties that drive dynamic performance. A total of 118 component properties were identified which require high-fidelity replication. Lower-fidelity dynamic similarity scaling can be used for non-critical components.
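    For a 'near-replica' model built from the same materials, the familiar similitude relations make the trade between scale factor and dynamic behavior explicit: mass scales as the cube of the geometric scale factor, stiffness scales linearly with it, and natural frequencies scale inversely (since f ~ sqrt(k/m)). A quick check of the four options studied, assuming pure replica scaling:

      # Replica-scaling relations for the four candidate scale factors.
      for scale in (1/4, 1/5, 1/7, 1/10):
          print(f"1/{round(1/scale)}-scale: mass x{scale**3:.4f}, "
                f"stiffness x{scale:.2f}, frequency x{1/scale:.0f}")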

  13. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
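    The fixed-cost ILP structure described above looks roughly as follows (a minimal sketch using the PuLP modeling library, with invented costs; the paper's decomposition heuristic would solve such models on sub-grids and merge the results):

      from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

      demands, sites = range(4), range(3)
      fixed = [10.0, 12.0, 8.0]                             # fixed cost of opening site j
      cost = [[2, 9, 5], [7, 3, 6], [4, 8, 2], [6, 5, 3]]   # assignment costs c[i][j]

      prob = LpProblem("gblp", LpMinimize)
      y = [LpVariable(f"y{j}", cat=LpBinary) for j in sites]
      x = [[LpVariable(f"x{i}_{j}", cat=LpBinary) for j in sites] for i in demands]

      # Minimize opening costs plus assignment costs.
      prob += lpSum(fixed[j] * y[j] for j in sites) + \
              lpSum(cost[i][j] * x[i][j] for i in demands for j in sites)
      for i in demands:
          prob += lpSum(x[i][j] for j in sites) == 1   # each demand assigned once
          for j in sites:
              prob += x[i][j] <= y[j]                  # only to open sites

      prob.solve()
      print([int(v.value()) for v in y])               # which sites open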

  14. Multi-scale Rule-of-Mixtures Model of Carbon Nanotube/Carbon Fiber/Epoxy Lamina

    NASA Technical Reports Server (NTRS)

    Frankland, Sarah-Jane V.; Roddick, Jaret C.; Gates, Thomas S.

    2005-01-01

    A unidirectional carbon fiber/epoxy lamina in which the carbon fibers are coated with single-walled carbon nanotubes is modeled with a multi-scale method, the atomistically informed rule-of-mixtures. This multi-scale model is designed to include the effect of the carbon nanotubes on the constitutive properties of the lamina. It includes concepts from molecular dynamics/equivalent continuum methods, micromechanics, and the strength of materials. Within the model, both the nanotube volume fraction and the nanotube distribution were varied. It was found that for a lamina with 60% carbon fiber volume fraction, the Young's modulus in the fiber direction varied with changes in the nanotube distribution, from 138.8 to 140 GPa, with nanotube volume fractions ranging from 0.0001 to 0.0125. The presence of nanotubes near the surface of the carbon fiber is therefore expected to have a small, but positive, effect on the constitutive properties of the lamina.
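    The rule-of-mixtures arithmetic behind these numbers can be sketched directly (a simplified Voigt-average reading with illustrative moduli, not the paper's atomistically informed values): the nanotube coating first stiffens an effective fiber, which is then combined with the matrix in the fiber direction.

      def rule_of_mixtures(moduli, fractions):
          """Voigt (parallel) average: E = sum(V_k * E_k); fractions sum to 1."""
          return sum(v * e for e, v in zip(moduli, fractions))

      E_fiber, E_matrix, E_nt = 230.0, 3.5, 1000.0   # GPa (illustrative values)
      v_nt = 0.0125                                  # nanotube volume fraction

      # Effective fiber including the nanotube-rich interphase, then the lamina.
      E_fiber_eff = rule_of_mixtures([E_fiber, E_nt], [1 - v_nt, v_nt])
      E_lamina = rule_of_mixtures([E_fiber_eff, E_matrix], [0.60, 0.40])
      print(f"E11 = {E_lamina:.1f} GPa")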

  15. Spatially intensive sampling by electrofishing for assessing longitudinal discontinuities in fish distribution in a headwater stream

    USGS Publications Warehouse

    Le Pichon, Céline; Tales, Évelyne; Belliard, Jérôme; Torgersen, Christian E.

    2017-01-01

    Spatially intensive sampling by electrofishing is proposed as a method for quantifying spatial variation in fish assemblages at multiple scales along extensive stream sections in headwater catchments. We used this method to sample fish species at 10-m2 points spaced every 20 m throughout 5 km of a headwater stream in France. The spatially intensive sampling design provided information at a spatial resolution and extent that enabled exploration of spatial heterogeneity in fish assemblage structure and aquatic habitat at multiple scales with empirical variograms and wavelet analysis. These analyses were effective for detecting scales of periodicity, trends, and discontinuities in the distribution of species in relation to tributary junctions and obstacles to fish movement. This approach to sampling riverine fishes may be useful in fisheries research and management for evaluating stream fish responses to natural and altered habitats and for identifying sites for potential restoration.

  16. Aquatic ecosystem protection and restoration: Advances in methods for assessment and evaluation

    USGS Publications Warehouse

    Bain, M.B.; Harig, A.L.; Loucks, D.P.; Goforth, R.R.; Mills, K.E.

    2000-01-01

    Many methods and criteria are available to assess aquatic ecosystems, and this review focuses on a set that demonstrates advancements from community analyses to methods spanning large spatial and temporal scales. Basic methods have been extended by incorporating taxa sensitivity to different forms of stress, adding measures linked to system function, synthesizing multiple faunal groups, integrating biological and physical attributes, spanning large spatial scales, and enabling simulations through time. These tools can be customized to meet the needs of a particular assessment and ecosystem. Two case studies are presented to show how new methods were applied at the ecosystem scale for achieving practical management goals. One case used an assessment of biotic structure to demonstrate how enhanced river flows can improve habitat conditions and restore a diverse fish fauna reflective of a healthy riverine ecosystem. In the second case, multitaxonomic integrity indicators were successful in distinguishing lake ecosystems that were disturbed, healthy, and in the process of restoration. Most methods strive to address the concept of biological integrity, and assessment effectiveness can often be impeded by the lack of more specific ecosystem management objectives. Scientific and policy explorations are needed to define new ways of designating a healthy system so as to allow specification of precise quality criteria that will promote further development of ecosystem analysis tools.

  17. Complex Approach to Conceptual Design of Machine Mechanically Extracting Oil from Jatropha curcas L. Seeds for Biomass-Based Fuel Production

    PubMed Central

    Mašín, Ivan

    2016-01-01

    One important source of biomass-based fuel is Jatropha curcas L. Great attention is paid to the biofuel produced from the oil extracted from Jatropha curcas L. seeds. Mechanised extraction is the most efficient and feasible oil-extraction method for small-scale farmers, but there is a need to extract oil in a more efficient manner that would increase labour productivity, decrease production costs, and increase the benefits to small-scale farmers. On the other hand, innovators should be aware that further machine development is possible only when the systematic approach and design methodology are applied in all stages of engineering design. Systematic approach in this case means that designers and development engineers rigorously apply scientific knowledge, integrate different constraints and user priorities, carefully plan the product and activities, and systematically solve technical problems. This paper therefore deals with a complex approach to determining design specifications that can bring new innovative concepts to the design of mechanical machines for oil extraction. The presented case study, the main part of the paper, is focused on a new concept for the screw of a machine mechanically extracting oil from Jatropha curcas L. seeds. PMID:27668259

  18. Optical system design with wide field of view and high resolution based on monocentric multi-scale construction

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Wang, Hu; Xiao, Nan; Shen, Yang; Xue, Yaoke

    2018-03-01

    As related technologies in the field of optoelectronic information mature, there is great demand for optical systems with both high resolution and a wide field of view (FOV). However, as conventional applied optics illustrates, these two characteristics conflict: the FOV and imaging resolution limit each other. Here, based on the study of typical wide-FOV optical system design, we propose the monocentric multi-scale system design method to solve this problem. Consisting of a concentric spherical lens and a series of micro-lens arrays, this system achieves an effective improvement in imaging quality. As an example, we designed a typical imaging system with a focal length of 35 mm, an instantaneous field angle of 14.7", and a FOV of 120°. Analysis of the imaging quality shows that, across the FOV, all MTF values are higher than 0.4 at the Nyquist sampling frequency of 200 lp/mm, in good agreement with the design.

  19. Systematic development of technical textiles

    NASA Astrophysics Data System (ADS)

    Beer, M.; Schrank, V.; Gloy, Y.-S.; Gries, T.

    2016-07-01

    Technical textiles are used in various fields of application, ranging from small-scale (e.g. medical applications) to large-scale products (e.g. aerospace applications). The development of new products is often complex and time consuming due to multiple interacting parameters. These interacting parameters are related to the production process and are also a result of the textile structure and the material used. A huge number of iteration steps are necessary to adjust the process parameters and finalize the new fabric structure. A design method is developed to support the systematic development of technical textiles and to reduce iteration steps. The design method is subdivided into six steps, starting from the identification of the requirements. The fabric characteristics vary depending on the field of application. If possible, benchmarks are tested. A suitable fabric production technology needs to be selected. The aim of the method is to support a development team during technology selection without restricting the textile developer. After a suitable technology is selected, the transformation and correlation between input and output parameters follows. This generates the information for the production of the structure. Afterwards, the first prototype can be produced and tested. The resulting characteristics are compared with the initial product requirements.

  20. Matching methods evaluation framework for stereoscopic breast x-ray images.

    PubMed

    Rousson, Johanna; Naudin, Mathieu; Marchessoux, Cédric

    2016-01-01

    Three-dimensional (3-D) imaging has been intensively studied in the past few decades. Depth information is an important added value of 3-D systems over two-dimensional systems. Special focus was devoted to the development of stereo matching methods for the generation of disparity maps (i.e., depth information within a 3-D scene). Dedicated frameworks were designed to evaluate and rank the performance of different stereo matching methods, but never considering x-ray medical images. Yet, 3-D x-ray acquisition systems and 3-D medical displays have already been introduced into the diagnostic market. To access the depth information within x-ray stereoscopic images, computing accurate disparity maps is essential. We aimed at developing a framework dedicated to x-ray stereoscopic breast images used to evaluate and rank several stereo matching methods. A multiresolution pyramid optimization approach was integrated into the framework to increase the accuracy and the efficiency of the stereo matching techniques. Finally, a metric was designed to score the results of the stereo matching compared with the ground truth. Eight methods were evaluated and four of them [locally scaled sum of absolute differences (LSAD), zero mean sum of absolute differences, zero mean sum of squared differences, and locally scaled mean sum of squared differences] appeared to perform equally well, with an average error score of 0.04 (0 indicating a perfect match). LSAD was selected for generating the disparity maps.
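    A compact sketch of the block-matching family evaluated above (plain SAD shown for brevity; LSAD additionally scales one window by the ratio of local means before differencing, and the window size, search range, and test images below are invented):

      import numpy as np
      from scipy.ndimage import convolve

      def sad_disparity(left, right, max_disp=16, win=5):
          """Per-pixel disparity by minimizing the sum of absolute differences."""
          h, w = left.shape
          costs = np.full((max_disp + 1, h, w), np.inf)
          kernel = np.ones((win, win))
          for d in range(max_disp + 1):
              diff = np.abs(left[:, d:].astype(float) - right[:, : w - d])
              costs[d, :, d:] = convolve(diff, kernel, mode="nearest")  # windowed SAD
          return np.argmin(costs, axis=0)

      left = np.random.rand(32, 48)
      right = np.roll(left, -3, axis=1)               # synthetic 3-pixel shift
      print(np.median(sad_disparity(left, right)))    # close to 3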

  1. FY10 Report on Multi-scale Simulation of Solvent Extraction Processes: Molecular-scale and Continuum-scale Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wardle, Kent E.; Frey, Kurt; Pereira, Candido

    2014-02-02

    This task is aimed at predictive modeling of solvent extraction processes in typical extraction equipment through multiple simulation methods at various scales of resolution. We have conducted detailed continuum fluid dynamics simulations at the process unit level as well as simulations of the molecular-level physical interactions which govern extraction chemistry. By combining information gained through simulations at each of these two tiers with advanced techniques such as the Lattice Boltzmann Method (LBM), which can bridge the two scales, we can develop the tools to work towards predictive simulation for solvent extraction on the equipment scale (Figure 1). The goal of such a tool, along with enabling optimized design and operation of extraction units, would be to allow prediction of stage extraction efficiency under specified conditions. Simulation efforts on each of the two scales are described below. As the initial application of FELBM in the work performed during FY10 has been on annular mixing, it is discussed in the context of the continuum scale. In the future, however, it is anticipated that the real value of FELBM will be in its use as a tool for sub-grid model development through highly refined DNS-like multiphase simulations, facilitating exploration and development of droplet models, including breakup and coalescence, which will be needed for large-scale simulations where droplet-level physics cannot be resolved. In this area, it has a significant advantage over traditional CFD methods, as its high computational efficiency allows exploration of significantly greater physical detail, especially as computational resources increase in the future.
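
    For readers unfamiliar with the Lattice Boltzmann Method named above, a minimal single-phase D2Q9 BGK collide-and-stream sketch is given below; the report's FELBM is a finite-element, multiphase variant, none of which is reproduced here, and all parameter values are assumptions:

    ```python
    import numpy as np

    # D2Q9 lattice velocities and weights
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)

    def equilibrium(rho, ux, uy):
        """Maxwell-Boltzmann equilibrium truncated to second order in velocity."""
        cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
        usq = ux**2 + uy**2
        return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

    def lbm_step(f, tau=0.8):
        """One collide-and-stream update with the BGK single-relaxation-time model."""
        rho = f.sum(axis=0)
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        f += -(f - equilibrium(rho, ux, uy)) / tau           # collision
        for i in range(9):                                   # streaming (periodic)
            f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
        return f

    # usage: start from rest with a small density perturbation
    rng = np.random.default_rng(0)
    f = equilibrium(np.ones((64, 64)) + 0.01*rng.random((64, 64)),
                    np.zeros((64, 64)), np.zeros((64, 64)))
    for _ in range(100):
        f = lbm_step(f)
    ```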

  2. Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)

    2000-01-01

    A new MDO method, BLISS, and two different variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and are evaluated in this report on multidisciplinary problems. All of these methods decompose a modular system optimization into several subtask optimizations that may be executed concurrently, plus a system-level optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited to exploiting the concurrent processing capabilities of a multiprocessor machine. Several steps, including local sensitivity analysis, local optimization, and response surface construction and updating, are all ideally suited for concurrent processing. Algorithms that can effectively exploit the concurrent processing capabilities of compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.
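
    The decomposition pattern the abstract describes, subtask optimizations fanned out concurrently and reconciled by a system-level step, can be sketched with Python's standard concurrency tools. This is only the concurrency skeleton under placeholder assumptions (the quadratic local objectives and the averaging coordination step are invented), not BLISS itself:

    ```python
    from concurrent.futures import ProcessPoolExecutor
    from scipy.optimize import minimize_scalar

    def subsystem_opt(args):
        """Optimize one subsystem's local variable for fixed shared variable z."""
        z, i = args
        # placeholder local objective: quadratic coupling to the shared variable
        res = minimize_scalar(lambda x: (x - z) ** 2 + 0.1 * i * x ** 2)
        return res.x

    def system_iteration(z, n_subsystems=4):
        """One outer iteration: concurrent subtask optimizations, then coordination."""
        with ProcessPoolExecutor() as pool:
            locals_ = list(pool.map(subsystem_opt,
                                    [(z, i) for i in range(n_subsystems)]))
        # system-level coordination (placeholder): move z toward the subsystem optima
        return 0.5 * z + 0.5 * sum(locals_) / len(locals_)

    if __name__ == "__main__":
        z = 1.0
        for _ in range(10):
            z = system_iteration(z)
        print(f"coordinated shared variable: {z:.4f}")
    ```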

  3. Monitoring scale scores over time via quality control charts, model-based approaches, and time series techniques.

    PubMed

    Lee, Yi-Hsuan; von Davier, Alina A

    2013-07-01

    Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs, or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment of customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
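
    As a toy illustration of the quality-control-chart ingredient, here is a Shewhart-style individuals chart that flags administrations whose mean scale score drifts beyond three-sigma limits; the simulated scores, baseline window, and injected shift are assumptions, not the paper's data or models:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Simulated mean scale scores for 71 administrations, with an injected
    # upward shift after administration 50 (a stand-in for real score data).
    scores = rng.normal(500, 5, size=71)
    scores[50:] += 12

    baseline = scores[:30]                  # in-control reference period
    center = baseline.mean()
    sigma = baseline.std(ddof=1)
    ucl, lcl = center + 3 * sigma, center - 3 * sigma

    out_of_control = np.where((scores > ucl) | (scores < lcl))[0]
    print(f"control limits: [{lcl:.1f}, {ucl:.1f}]")
    print(f"flagged administrations: {out_of_control.tolist()}")
    ```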

  4. Scaling properties of the aerodynamic noise generated by low-speed fans

    NASA Astrophysics Data System (ADS)

    Canepa, Edward; Cattanei, Andrea; Mazzocut Zecchin, Fabio

    2017-11-01

    The spectral decomposition algorithm presented in the paper may be applied to selected parts of the SPL spectrum, i.e. to specific noise generating mechanisms. It yields the propagation and generation functions, as well as the Mach number scaling exponent associated with each mechanism as a function of the Strouhal number. The input data are SPL spectra obtained from measurements taken during speed ramps. Firstly, the basic theory and the implemented algorithm are described. Then, the behaviour of the new method is analysed with reference to numerically generated spectral data and the results are compared with those of an existing method based on the assumption that the scaling exponent is constant. Guidelines for the employment of both methods are provided. Finally, the method is applied to measurements taken on a cooling fan mounted on a test plenum designed in accordance with the ISO 10302 standard. The most common noise generating mechanisms are present, and attention is focused on the low-frequency part of the spectrum, where the mechanisms are superposed. Generally, both the propagation and generation functions are determined with better accuracy than the scaling exponent, whose values are usually consistent with expectations based on the coherence and compactness of the acoustic sources. For periodic noise, the computed exponent is less accurate, as the related SPL data set usually has a limited size. The scaling exponent is very sensitive to the details of the experimental data, e.g. to slight inconsistencies or random errors.
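
    A minimal sketch of the constant-exponent baseline method mentioned above: under acoustic similarity, SPL at a fixed Strouhal number St = fD/U follows SPL ≈ G(St) + 10·n·log10(M), so the Mach scaling exponent n can be estimated by linear regression across the speed ramp. All numbers below are assumptions for illustration:

    ```python
    import numpy as np

    D = 0.3                                  # rotor diameter, m (assumed)
    c0 = 340.0                               # speed of sound, m/s
    speeds = np.linspace(20.0, 60.0, 9)      # tip speeds along the ramp, m/s
    n_true = 6.0                             # dipole-like exponent (assumed truth)

    # Synthetic SPL samples tracked at a fixed Strouhal number for each speed:
    # SPL = G(St) + 10 * n * log10(M) + noise
    rng = np.random.default_rng(0)
    mach = speeds / c0
    spl = 40.0 + 10.0 * n_true * np.log10(mach) + rng.normal(0, 0.3, speeds.size)

    # Least-squares estimate of the scaling exponent n
    n_hat, g_hat = np.polyfit(10.0 * np.log10(mach), spl, 1)
    print(f"estimated scaling exponent: {n_hat:.2f}")   # ~6
    ```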

  5. Probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Wing, Kam Liu

    1987-01-01

    In the Probabilistic Finite Element Method (PFEM), finite element methods are efficiently combined with second-order perturbation techniques to provide an effective means of informing the designer of the range of response that is likely in a given problem. The designer must provide as input the statistical character of the input variables, such as yield strength, load magnitude, and Young's modulus, by specifying their mean values and their variances. The output then consists of the mean response and the variance in the response. Thus the designer is given a much broader picture of the predicted performance than a single response curve provides. These methods are applicable to a wide class of problems, provided that the scale of randomness is not too large and the probability density functions possess decaying tails. By incorporating the computational techniques developed over the past 3 years for efficiency, the probabilistic finite element methods are capable of handling large systems with many sources of uncertainty. Sample results are given for an elastic-plastic ten-bar structure and an elastic-plastic plane continuum with a circular hole subject to cyclic loading, with the yield stress treated as a random field.
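
    The mean-and-variance propagation at the heart of the perturbation approach can be sketched for a scalar response: with a single random input of mean μ and variance σ², E[y] ≈ y(μ) + ½y''(μ)σ² and Var[y] ≈ (y'(μ))²σ². The cantilever-tip example below is an assumed stand-in, not one of the paper's structures:

    ```python
    import numpy as np

    # Tip deflection of a cantilever, y = P L^3 / (3 E I), with Young's
    # modulus E the single random input (mean mu_E, 10% coefficient of variation).
    P, L, I = 1_000.0, 2.0, 8.0e-6          # load N, length m, inertia m^4 (assumed)
    mu_E, cov_E = 200e9, 0.10
    var_E = (cov_E * mu_E) ** 2

    def y(E):
        return P * L**3 / (3.0 * E * I)

    # Finite-difference first and second derivatives at the mean input
    h = 1e-4 * mu_E
    dy = (y(mu_E + h) - y(mu_E - h)) / (2 * h)
    d2y = (y(mu_E + h) - 2 * y(mu_E) + y(mu_E - h)) / h**2

    mean_y = y(mu_E) + 0.5 * d2y * var_E    # second-order mean
    var_y = dy**2 * var_E                   # first-order variance

    # Monte Carlo check
    E_samples = np.random.default_rng(1).normal(mu_E, cov_E * mu_E, 200_000)
    print(f"perturbation: mean={mean_y:.3e}  std={np.sqrt(var_y):.3e}")
    print(f"Monte Carlo:  mean={y(E_samples).mean():.3e}  std={y(E_samples).std():.3e}")
    ```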

  6. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    The following problems are considered: (1) methods for development of logic design together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and developing algorithms and heuristics for the purpose of minimizing the computation for tests; and (2) a method of design of logic for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to render it possible: (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures, in the mechanism, using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.

  7. An experimental design method leading to chemical Turing patterns.

    PubMed

    Horváth, Judit; Szalai, István; De Kepper, Patrick

    2009-05-08

    Chemical reaction-diffusion patterns often serve as prototypes for pattern formation in living systems, but only two isothermal single-phase reaction systems have produced sustained stationary reaction-diffusion patterns so far. We designed an experimental method to search for additional systems on the basis of three steps: (i) generate spatial bistability by operating autoactivated reactions in open spatial reactors; (ii) use an independent negative-feedback species to produce spatiotemporal oscillations; and (iii) induce a space-scale separation of the activatory and inhibitory processes with a low-mobility complexing agent. We successfully applied this method to a hydrogen-ion autoactivated reaction, the thiourea-iodate-sulfite (TuIS) reaction, and notably produced stationary hexagonal arrays of spots and parallel stripes of pH patterns attributed to a Turing bifurcation. This method could be extended to biochemical reactions.
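
    The paper's method is experimental, but the Turing mechanism it targets — a short-range activator coupled to a faster-diffusing inhibitor, the space-scale separation of step (iii) — is easy to reproduce numerically. A standard Gray-Scott reaction-diffusion sketch follows as a numerical analogue only; the model and all parameter values are assumptions, not the TuIS chemistry:

    ```python
    import numpy as np

    # Gray-Scott model: du/dt = Du*lap(u) - u v^2 + F (1 - u)
    #                   dv/dt = Dv*lap(v) + u v^2 - (F + k) v
    # Spots/stripes appear because v's feedstock u diffuses faster than v.
    n, steps = 128, 5000
    Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060   # classic spot-forming values (assumed)

    u = np.ones((n, n))
    v = np.zeros((n, n))
    u[n//2-8:n//2+8, n//2-8:n//2+8] = 0.50    # local perturbation to seed patterns
    v[n//2-8:n//2+8, n//2-8:n//2+8] = 0.25

    def lap(a):
        """Five-point Laplacian with periodic boundaries."""
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(steps):
        uvv = u * v * v
        u += Du * lap(u) - uvv + F * (1 - u)
        v += Dv * lap(v) + uvv - (F + k) * v

    print("pattern contrast in v:", float(v.max() - v.min()))
    ```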

  8. Incorporating precision, accuracy and alternative sampling designs into a continental monitoring program for colonial waterbirds

    USGS Publications Warehouse

    Steinkamp, Melanie J.; Peterjohn, B.G.; Keisman, J.L.

    2003-01-01

    A comprehensive monitoring program for colonial waterbirds in North America has never existed. At smaller geographic scales, many states and provinces conduct surveys of colonial waterbird populations. Periodic regional surveys are conducted at varying times during the breeding season using a variety of survey methods, which complicates attempts to estimate population trends for most species. The US Geological Survey Patuxent Wildlife Research Center has recently started to coordinate colonial waterbird monitoring efforts throughout North America. A centralized database has been developed with an Internet-based data entry and retrieval page. The extent of existing colonial waterbird surveys has been defined, allowing gaps in coverage to be identified and basic inventories completed where desirable. To enable analyses of comparable data at regional or larger geographic scales, sampling populations through statistically sound sampling designs should supersede obtaining counts at every colony. Standardized breeding season survey techniques have been agreed upon and documented in a monitoring manual. Each survey in the manual has associated with it recommendations for bias estimation, and includes specific instructions on measuring detectability. The methods proposed in the manual are for developing reliable, comparable indices of population size to establish trend information at multiple spatial and temporal scales, but they will not result in robust estimates of total population numbers.

  9. Evolutionary Computation with Spatial Receding Horizon Control to Minimize Network Coding Resources

    PubMed Central

    Leeson, Mark S.

    2014-01-01

    The minimization of network coding resources, such as coding nodes and links, is a challenging task, not only because it is an NP-hard problem, but also because the problem scale is huge; networks in the real world may have thousands or even millions of nodes and links. Genetic algorithms (GAs) have good potential for resolving NP-hard problems like the network coding problem (NCP), but as population-based algorithms they face serious scalability and applicability problems when applied to large- or huge-scale systems. Inspired by temporal receding horizon control in control engineering, this paper proposes a novel spatial receding horizon control (SRHC) strategy as a network partitioning technology, and then designs an efficient GA to tackle the NCP. Traditional network partitioning methods can be viewed as a special case of the proposed SRHC, namely one-step-wide SRHC, whilst the method in this paper is a generalized N-step-wide SRHC, which can make better use of global information about network topologies. Besides the SRHC strategy, some further useful design elements are also reported in this paper. The advantages of the proposed SRHC and GA for the NCP are illustrated by extensive experiments, and they have good potential for extension to other large-scale complex problems. PMID:24883371

  10. Seven ways to make a hypertext project fail

    NASA Technical Reports Server (NTRS)

    Glushko, Robert J.

    1990-01-01

    Hypertext is an exciting concept, but designing and developing hypertext applications of practical scale is hard. To make a project feasible and successful 'hypertext engineers' must overcome the following problems: (1) developing realistic expectations in the face of hypertext hype; (2) assembling a multidisciplinary project team; (3) establishing and following design guidelines; (4) dealing with installed base constraints; (5) obtaining usable source files; (6) finding appropriate software technology and methods; and (7) overcoming legal uncertainties about intellectual property concerns.

  11. Boise Hydrogeophysical Research Site: Control Volume/Test Cell and Community Research Asset

    NASA Astrophysics Data System (ADS)

    Barrash, W.; Bradford, J.; Malama, B.

    2008-12-01

    The Boise Hydrogeophysical Research Site (BHRS) is a research wellfield or field-scale test facility developed in a shallow, coarse, fluvial aquifer with the objectives of supporting: (a) development of cost-effective, non- or minimally-invasive quantitative characterization and imaging methods in heterogeneous aquifers using hydrologic and geophysical techniques; (b) examination of fundamental relationships and processes at multiple scales; (c) testing theories and models for groundwater flow and solute transport; and (d) education and training of students in multidisciplinary subsurface science and engineering. The design of the wells and the wellfield supports modular use and reoccupation of wells for a wide range of single-well, cross-hole, multiwell and multilevel hydrologic, geophysical, and combined hydrologic-geophysical experiments. Efforts to date by Boise State researchers and collaborators have been largely focused on: (a) establishing the 3D distributions of geologic, hydrologic, and geophysical parameters which can then be used as the basis for jointly inverting hard and soft data to return the 3D K distribution, and (b) developing subsurface measurement and imaging methods, including tomographic characterization and imaging methods. At this point the hydrostratigraphic framework of the BHRS is known to be a hierarchical multi-scale system which includes layers and lenses that are recognized with geologic, hydrologic, radar, seismic, and EM methods; details are now emerging which may allow 3D deterministic characterization of zones and/or material variations at the meter scale in the central wellfield. Also the site design and subsurface framework have supported a variety of testing configurations for joint hydrologic and geophysical experiments. Going forward we recognize the opportunity to increase the R&D returns from use of the BHRS with additional infrastructure (especially for monitoring the vadose zone and surface water-groundwater interactions), more collaborative activity, and greater access to site data. Our broader goal of becoming more available as a research asset for the scientific community also supports the long-term business plan of increasing funding opportunities to maintain and operate the site.

  12. Assessing biodiversity on the farm scale as basis for ecosystem service payments.

    PubMed

    von Haaren, Christina; Kempa, Daniela; Vogel, Katrin; Rüter, Stefan

    2012-12-30

    Ecosystem service payments must be based on a standardised, transparent assessment of the goods and services provided. This is especially relevant in the context of EU agri-environmental programs, but also for organic-food companies that foster environmental services on their contractor farms. Addressing the farm scale is important because land users/owners are major recipients of payments and they could be more involved in data generation and conservation management. No standardised system for measuring on-farm biodiversity yet exists that concentrates on performance indicators and includes farmers in generating information. A method is required that produces ordinal- or metric-scaled assessment results as well as management measures. Another requirement is ease of application, which includes ease of gathering input data and understandability. In response to this need, we developed a method designed for automated application in an open source farm assessment system named MANUELA. The method produces an ordinal-scale assessment of biodiversity that includes biotopes, species, biotope connectivity and the influence of land use. In addition, specific measures for biotope types are proposed. The open source geographical information system OpenJump is used for the implementation of MANUELA. The results of the trial applications and robustness tests show that the assessment can be implemented, for the most part, using existing information as well as data available from farmers or advisors. The results are more sensitive for showing on-farm achievements and changes than existing biotope-type classifications. Such a differentiated classification is needed as a basis for ecosystem service payments and for designing effective measures. The robustness of the results with respect to biotope connectivity is comparable to that of complex models, but it should be further improved. Interviews with the test farmers substantiate that the assessment methods can be implemented on farms and are understood by farmers. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Methods for reducing biases and errors in regional photochemical model outputs for use in emission reduction and exposure assessments

    EPA Science Inventory

    In the United States, regional-scale photochemical models are being used to design emission control strategies needed to meet the relevant National Ambient Air Quality Standards (NAAQS) within the framework of the attainment demonstration process. Previous studies have shown that...

  14. Sources of Writing Anxiety: A Study on French Language Teaching Students

    ERIC Educational Resources Information Center

    Aslim Yetis, Veda

    2017-01-01

    Conducted with French Language Teaching students, this research aims to determine the causes of writing anxiety. The study follows a mixed-methods design; a writing anxiety inventory, a language proficiency exam, a retrospective composing-process questionnaire, a writing attitude scale and semi-structured interviews were used. After identifying…

  15. In-Situ Air Sparging: Engineering and Design

    DTIC Science & Technology

    2008-01-31

    Fragmentary indexing excerpt; only partial content is recoverable: construction materials (PVC casing is commonly used, but flexible or rigid polyethylene pipe may be more efficient for certain excavation methods); installation checklist items (piping insulation/heat tape installed; piping flushed/cleaned/pressure tested; subsurface as-built equipment records); and Figure 4-2, "Pilot-Scale Piping and Instrumentation Diagram".

  16. Validating a Lifestyle Physical Activity Measure for People with Serious Mental Illness

    ERIC Educational Resources Information Center

    Bezyak, Jill L.; Chan, Fong; Chiu, Chung-Yi; Kaya, Cahit; Huck, Garrett

    2014-01-01

    Purpose: To evaluate the measurement structure of the "Physical Activity Scale for Individuals With Physical Disabilities" (PASIPD) as an assessment tool of lifestyle physical activities for people with severe mental illness. Method: A quantitative descriptive research design using factor analysis was employed. A sample of 72 individuals…

  17. Plant succession and approaches to community restoration

    Treesearch

    Bruce A. Roundy

    2005-01-01

    The processes of vegetation change over time, or plant succession, are also the processes involved in plant community restoration. Restoration efforts attempt to use designed disturbance, seedbed preparation and sowing methods, and selection of adapted and compatible native plant materials to enhance ecological function. The large scale of wildfires and weed invasion...

  18. A Method of Self-Evaluation for Counselor Education. Final Report.

    ERIC Educational Resources Information Center

    Martin, Donald G.

    A pretest-posttest control group design was used to test the value of employing four psychotherapeutic interaction scales for self-evaluation. Self-evaluation of the counselor-offered conditions empathy, positive regard, genuineness and intensity of interpersonal contact during the live counseling sessions of 44 counselors were compared with the…

  19. The Impact of Missing Background Data on Subpopulation Estimation

    ERIC Educational Resources Information Center

    Rutkowski, Leslie

    2011-01-01

    Although population modeling methods are well established, a paucity of literature appears to exist regarding the effect of missing background data on subpopulation achievement estimates. Using simulated data that follows typical large-scale assessment designs with known parameters and a number of missing conditions, this paper examines the extent…

  20. Academic and Recreational Reading Motivation of Teacher Candidates

    ERIC Educational Resources Information Center

    Lancellot, Michael

    2017-01-01

    The purpose of this mixed methods study was to determine relationships among teacher candidates' academic and recreational reading motivation. This study utilized a previously designed, reliable, and valid instrument called the Adult Reading Motivation Scale with permission from Schutte and Malouff (2007). The instrument included a pool of 50…

  1. EVALUATION OF A PROCESS TO CONVERT BIOMASS TO METHANOL FUEL

    EPA Science Inventory

    The report gives results of a review of the design of a reactor capable of gasifying approximately 50 lb/hr of biomass for a pilot-scale facility to develop, demonstrate, and evaluate the Hynol Process, a high-temperature, high-pressure method for converting biomass into methanol...

  2. EVALUATION OF A PROCESS TO CONVERT BIOMASS TO METHANOL FUEL - PROJECT SUMMARY

    EPA Science Inventory

    The report gives results of a review of the design of a reactor capable of gasifying approximately 50 lb/hr of biomass for a pilot-scale facility to develop, demonstrate, and evaluate the Hynol Process, a high-temperature, high-pressure method for converting biomass into methanol...

  3. Assessing Language Development: The Crediton Project.

    ERIC Educational Resources Information Center

    Wilkinson, Andrew; And Others

    This paper offers a review of methods of judging the quality of English compositions and demonstrates the need for establishing criteria to judge composition work in a developmental context. Scales of development designed to meet that need are categorized as: stylistic measures that include structure/organization, syntax, verbal competence, reader…

  4. Longitudinal Multistage Testing

    ERIC Educational Resources Information Center

    Pohl, Steffi

    2013-01-01

    This article introduces longitudinal multistage testing (lMST), a special form of multistage testing (MST), as a method for adaptive testing in longitudinal large-scale studies. In lMST designs, test forms of different difficulty levels are used, whereas the values on a pretest determine the routing to these test forms. Since lMST allows for…

  5. Agroecosystem research with big data and a modified scientific method using machine learning concepts

    USDA-ARS?s Scientific Manuscript database

    Long-term studies of agro-ecosystems at the continental scale are providing an extraordinary understanding of regional environmental dynamics. The new Long-Term Agro-ecosystem Research (LTAR) network (established in 2013) has designed an explicit research program with multiple USDA experimental wat...

  6. The High-Resolution Wave-Propagation Method Applied to Meso- and Micro-Scale Flows

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; Proctor, Fred H.

    2012-01-01

    The high-resolution wave-propagation method for computing nonhydrostatic atmospheric flows on meso- and micro-scales is described. The design and implementation of the Riemann solver used for computing the Godunov fluxes is discussed in detail. The method uses a flux-based wave decomposition in which the flux differences are written directly as a linear combination of the right eigenvectors of the hyperbolic system. The two advantages of the technique are: 1) the need for an explicit definition of the Roe matrix is eliminated, and 2) the inclusion of the source term due to gravity does not result in discretization errors. The resulting flow solver is conservative and able to resolve regions of large gradients without introducing dispersion errors. The methodology is validated against exact analytical solutions and benchmark cases for non-hydrostatic atmospheric flows.
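
    A minimal sketch of the flux-based wave decomposition ("f-wave") idea for 1-D linear acoustics, q = (p, u): the flux difference between neighboring cells is expanded directly in the right eigenvectors of the coefficient matrix, which is what lets source terms be absorbed without discretization error. The acoustics system and values below are assumptions; the paper's solver treats the full nonhydrostatic equations:

    ```python
    import numpy as np

    # 1-D linear acoustics: q = (p, u), flux f(q) = A q with
    # A = [[0, K], [1/rho, 0]]; eigenvalues +-c, c = sqrt(K/rho).
    rho, K = 1.0, 1.0
    c = np.sqrt(K / rho)
    A = np.array([[0.0, K], [1.0 / rho, 0.0]])

    # Right eigenvectors of A (columns), for speeds -c and +c
    R = np.array([[-rho * c, rho * c],
                  [1.0,      1.0]])

    def fwave_decomposition(q_left, q_right):
        """Split the flux difference A*(q_right - q_left) into f-waves.

        Returns the two waves Z_p = beta_p * r_p whose sum reproduces the
        flux difference exactly.
        """
        dflux = A @ (q_right - q_left)
        beta = np.linalg.solve(R, dflux)      # coefficients in the eigenbasis
        return [beta[p] * R[:, p] for p in range(2)]

    # usage: a jump in pressure only
    zl, zr = fwave_decomposition(np.array([1.0, 0.0]), np.array([2.0, 0.0]))
    assert np.allclose(zl + zr, A @ np.array([1.0, 0.0]))
    ```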

  7. Design and methodology of a mixed methods follow-up study to the 2014 Ghana Demographic and Health Survey

    PubMed Central

    Staveteig, Sarah; Aryeetey, Richmond; Anie-Ansah, Michael; Ahiadeke, Clement; Ortiz, Ladys

    2017-01-01

    Background: The intended meaning behind responses to standard questions posed in large-scale health surveys is not always well understood. Systematic follow-up studies, particularly those which pose a few repeated questions followed by open-ended discussions, are well positioned to gauge stability and consistency of data and to shed light on the intended meaning behind survey responses. Such follow-up studies require extensive coordination and face challenges in protecting respondent confidentiality during the process of recontacting and reinterviewing participants. Objectives: We describe practical field strategies for undertaking a mixed methods follow-up study during a large-scale health survey. Methods: The study was designed as a mixed methods follow-up study embedded within the 2014 Ghana Demographic and Health Survey (GDHS). The study was implemented in 13 clusters. Android tablets were used to import reference data from the parent survey and to administer the questionnaire, which asked a mixture of closed- and open-ended questions on reproductive intentions, decision-making, and family planning. Results: Despite a number of obstacles related to recontacting respondents and concern about respondent fatigue, over 92 percent of the selected sub-sample were successfully recontacted and reinterviewed; all consented to audio recording. A confidential linkage between GDHS data, follow-up tablet data, and audio transcripts was successfully created for the purpose of analysis. Conclusions: We summarize the challenges in follow-up study design, including ethical considerations, sample size, auditing, filtering, and successful use of tablets, and share lessons learned for future such follow-up surveys. PMID:28145817

  8. Bioreactor Scalability: Laboratory-Scale Bioreactor Design Influences Performance, Ecology, and Community Physiology in Expanded Granular Sludge Bed Bioreactors

    PubMed Central

    Connelly, Stephanie; Shin, Seung G.; Dillon, Robert J.; Ijaz, Umer Z.; Quince, Christopher; Sloan, William T.; Collins, Gavin

    2017-01-01

    Studies investigating the feasibility of new, or improved, biotechnologies, such as wastewater treatment digesters, inevitably start with laboratory-scale trials. However, it is rarely determined whether laboratory-scale results reflect full-scale performance or microbial ecology. The Expanded Granular Sludge Bed (EGSB) bioreactor, which is a high-rate anaerobic digester configuration, was used as a model to address that knowledge gap in this study. Two laboratory-scale idealizations of the EGSB—a one-dimensional and a three-dimensional scale-down of a full-scale design—were built and operated in triplicate under near-identical conditions to a full-scale EGSB. The laboratory-scale bioreactors were seeded using biomass obtained from the full-scale bioreactor, and spent water from the distillation of whisky from maize was applied as substrate at both scales. Over 70 days, bioreactor performance, microbial ecology, and microbial community physiology were monitored at various depths in the sludge-beds using 16S rRNA gene sequencing (V4 region), specific methanogenic activity (SMA) assays, and a range of physical and chemical monitoring methods. SMA assays indicated dominance of the hydrogenotrophic pathway at full-scale, whilst a more balanced activity profile developed during the laboratory-scale trials. At each scale, Methanobacterium was the dominant methanogenic genus present. Bioreactor performance overall was better at laboratory-scale than full-scale. We observed that bioreactor design at laboratory-scale significantly influenced the spatial distribution of microbial community physiology and taxonomy in the bioreactor sludge-bed, with 1-D bioreactor types promoting stratification of each. In the 1-D laboratory bioreactors, increased abundance of Firmicutes was associated with both granule position in the sludge bed and increased activity against acetate and ethanol as substrates. We further observed that stratification in the sludge-bed in 1-D laboratory-scale bioreactors was associated with increased richness in the underlying microbial community at species (OTU) level and improved overall performance. PMID:28507535

  9. The effects of small-scale, homelike facilities for older people with dementia on residents, family caregivers and staff: design of a longitudinal, quasi-experimental study.

    PubMed

    Verbeek, Hilde; van Rossum, Erik; Zwakhalen, Sandra M G; Ambergen, Ton; Kempen, Gertrudis I J M; Hamers, Jan P H

    2009-01-20

    Small-scale and homelike facilities for older people with dementia are on the rise in current dementia care. In these facilities, a small number of residents live together and form a household with staff. Normal daily life and social participation are emphasized. It is expected that these facilities improve residents' quality of life. Moreover, they may have a positive influence on staff's job satisfaction and families' involvement and satisfaction with care. However, the effects of these small-scale and homelike facilities have hardly been investigated. Since the number of people with dementia is increasing, and institutional long-term care is more and more organized in small-scale and homelike facilities, more research into their effects is necessary. This paper presents the design of a study investigating the effects of small-scale living facilities in the Netherlands on residents, family caregivers and nursing staff. A longitudinal, quasi-experimental study is carried out, in which 2 dementia care settings are compared: small-scale living facilities and regular psychogeriatric wards in traditional nursing homes. Data are collected from residents, their family caregivers and nursing staff at baseline and after 6 and 12 months of follow-up. Approximately 2 weeks prior to the baseline measurement, residents are screened on cognition and activities of daily living (ADL). Based on this screening profile, residents in psychogeriatric wards are matched to residents living in small-scale living facilities. The primary outcome measure for residents is quality of life. In addition, neuropsychiatric symptoms, depressive symptoms and social engagement are assessed. Involvement with care, perceived burden and satisfaction with care provision are the primary outcome variables for family caregivers. The primary outcomes for nursing staff are job satisfaction and motivation. Furthermore, the job characteristics social support, autonomy and workload are measured. A process evaluation is performed to investigate to what extent small-scale living facilities and psychogeriatric wards are designed as they were intended. In addition, participants' satisfaction and experiences with small-scale living facilities are investigated. A longitudinal, quasi-experimental study is presented to investigate the effects of small-scale living facilities. Although some challenges concerning this design exist, it is currently the most feasible method to assess the effects of this relatively new dementia care setting.

  10. A trust-based recommendation method using network diffusion processes

    NASA Astrophysics Data System (ADS)

    Chen, Ling-Jiao; Gao, Jian

    2018-09-01

    A variety of rating-based recommendation methods have been extensively studied, including the well-known collaborative filtering approaches and some network diffusion-based methods; however, social trust relations are not sufficiently considered when making recommendations. In this paper, we contribute to the literature by proposing a trust-based recommendation method, named CosRA+T, which integrates information about trust relations into the resource-redistribution process. Specifically, a tunable parameter is used to scale the resources received by trusted users before the redistribution back to the objects. Interestingly, we find an optimal scaling parameter for the proposed CosRA+T method to achieve its best recommendation accuracy, and the optimal value appears to be universal under several evaluation metrics across different datasets. Moreover, results of extensive experiments on two real-world rating datasets with trust relations, Epinions and FriendFeed, suggest that CosRA+T yields a remarkable improvement in overall accuracy, diversity and novelty. Our work takes a step towards designing better recommendation algorithms by employing multiple resources of social network information.
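
    A sketch of the kind of trust-scaled resource redistribution the abstract describes, on a toy user-object network: resource spreads from a target user's objects to users, trusted users' holdings are scaled by a tunable λ, and the resource flows back to objects as recommendation scores. This is an assumed, simplified reading of the redistribution step, not the paper's exact CosRA+T algorithm:

    ```python
    import numpy as np

    # Toy user-object bipartite adjacency (users x objects); 1 = user collected object
    A = np.array([[1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [0, 0, 0, 1]], dtype=float)

    def trust_diffusion_scores(A, target_user, trusted, lam):
        """Two-step mass diffusion with trusted users up-weighted by lam."""
        k_obj = np.maximum(A.sum(axis=0), 1)           # object degrees
        k_usr = np.maximum(A.sum(axis=1), 1)           # user degrees
        f0 = A[target_user]                            # unit resource on collected objects
        user_hold = A @ (f0 / k_obj)                   # step 1: objects -> users
        user_hold *= np.where(trusted > 0, lam, 1.0)   # scale trusted users' resource
        scores = A.T @ (user_hold / k_usr)             # step 2: users -> objects
        scores[A[target_user] > 0] = -np.inf           # mask already-collected objects
        return scores

    trusted = np.array([0, 1, 1, 0])   # users trusted by the target user (assumed)
    print(trust_diffusion_scores(A, target_user=0, trusted=trusted, lam=1.5))
    ```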

  11. Comparison of rangeland vegetation sampling techniques in the Central Grasslands

    USGS Publications Warehouse

    Stohlgren, T.J.; Bull, K.A.; Otsuki, Yuka

    1998-01-01

    Maintaining native plant diversity, detecting exotic species, and monitoring rare species are becoming important objectives in rangeland conservation. Four rangeland vegetation sampling techniques were compared to see how well they captured local plant diversity. The methods tested included the commonly used Parker transects, Daubenmire transects as modified by the USDA Forest Service, a new transect and 'large quadrat' design proposed by the USDA Agricultural Research Service, and the Modified-Whittaker multi-scale vegetation plot. The 4 methods were superimposed in shortgrass steppe, mixed grass prairie, northern mixed prairie, and tallgrass prairie in the Central Grasslands of the United States with 4 replicates in each prairie type. Analysis of variance tests showed significant method effects and prairie type effects, but no significant method x type interactions for total species richness, the number of native species, the number of species with less than 1% cover, and the time required for sampling. The methods behaved similarly in each prairie type under a wide variety of grazing regimens. The Parker, large quadrat, and Daubenmire transects significantly underestimated the total species richness and the number of native species in each prairie type, and the number of species with less than 1% cover in all but the tallgrass prairie type. The transect techniques also consistently missed half the exotic species, including noxious weeds, in each prairie type. The Modified-Whittaker method, which included an exhaustive search for plant species in a 20 x 50 m plot, served as the baseline for species richness comparisons. For all prairie types, the Modified-Whittaker plot captured an average of 42.0 (± 2.4; 1 S.E.) plant species per site, compared to 15.9 (± 1.3), 18.9 (± 1.2), and 22.8 (± 1.6) plant species per site using the Parker, large quadrat, and Daubenmire transect methods, respectively. The 4 methods captured most of the dominant species at each site and thus produced similar results for total foliar cover and soil cover. The detection and measurement of exotic plant species were greatly enhanced by using ten 1 m2 subplots in a multi-scale sampling design and searching a larger area (1,000 m2) at each site. Even with 4 replicate sites, the transect methods usually captured, and thus would monitor, 36 to 66% of the plant species at each site. To evaluate the status and trends of common, rare, and exotic plant species at local, regional, and national scales, innovative, multi-scale methods must replace the commonly used transect methods of the past.

  12. Ecologically Enhancing Coastal Infrastructure

    NASA Astrophysics Data System (ADS)

    Mac Arthur, Mairi; Naylor, Larissa; Hansom, Jim; Burrows, Mike; Boyd, Ian

    2017-04-01

    Hard engineering structures continue to proliferate in the coastal zone globally in response to increasing pressures associated with rising sea levels, coastal flooding and erosion. These structures are typically plain-cast by design and function as poor ecological surrogates for natural rocky shores, which are highly topographically complex and host a range of available microhabitats for intertidal species. Ecological enhancement mitigates some of these negative impacts by integrating components of nature into the construction and design of these structures to improve their sustainability, resilience and multifunctionality. In the largest UK ecological enhancement trial to date, 184 tiles (15x15 cm) of up to nine potential designs were deployed on vertical concrete coastal infrastructure in 2016 at three sites across the UK (Saltcoats, Blackness and Isle of Wight). The surface texture and complexity of the tiles were varied to test the effect of settlement surface texture at the mm-cm scale on the success of colonisation and biodiversity in the mid-upper intertidal zone, in order to answer the following experimental hypotheses: • tiles with mm-scale geomorphic complexity will have greater barnacle abundances; • tiles with cm-scale geomorphic complexity will have greater species richness than mm-scale tiles. A range of methods were used in creating the tile designs, including terrestrial laser scanning of creviced rock surfaces to mimic natural rocky-shore complexity as well as artificially generated complexity using computer software. The designs replicated the topographic features of high ecological importance found on natural rocky shores and promoted species recruitment and community composition on artificial surfaces, enabling us to evaluate biological responses to geomorphic complexity in a controlled field trial. At two of the sites, the roughest tile designs (cm scale) did not have the highest levels of barnacle recruits, which were instead counted on tiles of intermediate roughness such as the grooved concrete, with 257 recruits on average (n=8) four months post-installation (Saltcoats) and 1291 recruits two months post-installation (Isle of Wight). This indicates that a higher level of complexity does not always reflect the most appropriate roughness scale for some colonisers. On average, tiles with mm-scale texture were more successful in terms of barnacle colonisation than plain-cast control tiles (n=8 per site). The poor performance of the control tiles (9 recruits, Saltcoats; 147 recruits, Isle of Wight, after 4 and 2 months, respectively) further highlights that artificial, hard substrates are poor ecological surrogates for natural rocky shores. One of the sites, Blackness, was an observed outlier to the general trend of colonisation, likely due to its estuarine location; this factor may explain why every design, including the control tile, had high abundances of barnacles. Artificially designed tiles with cm-scale complexity had higher levels of species richness, with periwinkles and topshells frequently observed utilising the tile microhabitats in greater numbers than on other tile designs. These results show that the scale of geomorphic complexity influences early-stage colonisation. Data analysis is ongoing and these further analyses will be presented at the EGU.

  13. An algorithm for the design and tuning of RF accelerating structures with variable cell lengths

    NASA Astrophysics Data System (ADS)

    Lal, Shankar; Pant, K. K.

    2018-05-01

    An algorithm is proposed for the design of a π mode standing wave buncher structure with variable cell lengths. It employs a two-parameter, multi-step approach for the design of the structure with desired resonant frequency and field flatness. The algorithm, along with analytical scaling laws for the design of the RF power coupling slot, makes it possible to accurately design the structure employing a freely available electromagnetic code like SUPERFISH. To compensate for machining errors, a tuning method has been devised to achieve desired RF parameters for the structure, which has been qualified by the successful tuning of a 7-cell buncher to π mode frequency of 2856 MHz with field flatness <3% and RF coupling coefficient close to unity. The proposed design algorithm and tuning method have demonstrated the feasibility of developing an S-band accelerating structure for desired RF parameters with a relatively relaxed machining tolerance of ∼ 25 μm. This paper discusses the algorithm for the design and tuning of an RF accelerating structure with variable cell lengths.

  14. Multi-agent based control of large-scale complex systems employing distributed dynamic inference engine

    NASA Astrophysics Data System (ADS)

    Zhang, Daili

    Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method as an implementation of distributed intelligent control has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent to agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module, and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs) as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decomposes a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure, satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, it balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. 
However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system design of a simplified ship chilled water system and a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems with dynamic and uncertain environment, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.

  15. Decentralized adaptive neural control for high-order interconnected stochastic nonlinear time-delay systems with unknown system dynamics.

    PubMed

    Si, Wenjie; Dong, Xunde; Yang, Feifei

    2018-03-01

    This paper is concerned with the problem of decentralized adaptive backstepping state-feedback control for uncertain high-order large-scale stochastic nonlinear time-delay systems. For the control design of high-order large-scale nonlinear systems, only one adaptive parameter is constructed to overcome over-parameterization, and neural networks are employed to cope with the difficulties raised by completely unknown system dynamics and stochastic disturbances. An appropriate Lyapunov-Krasovskii functional and the properties of hyperbolic tangent functions are then used to deal with the unknown unmatched time-delay interactions of high-order large-scale systems for the first time. Finally, on the basis of Lyapunov stability theory, a decentralized adaptive neural controller is developed that decreases the number of learning parameters. The actual controller can be designed so as to ensure that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB) and that the tracking error converges to a small neighborhood of zero. A simulation example further shows the validity of the design method. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system, developed over the last decade, have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
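
    The quantity at stake has a compact definition: a sensitivity coefficient S = (dR/R)/(dΣ/Σ), the fractional change in a response per fractional change in an input cross section. A direct-perturbation estimate — the reference the authors validate GEAR-MC against — can be sketched as follows; the toy one-group k-infinity response model is an assumption:

    ```python
    def sensitivity_direct_perturbation(response, sigma0, rel_step=0.01):
        """Central-difference estimate of S = (dR/R) / (dSigma/Sigma)."""
        h = rel_step * sigma0
        r_plus, r_minus = response(sigma0 + h), response(sigma0 - h)
        r0 = response(sigma0)
        return ((r_plus - r_minus) / r0) / (2 * h / sigma0)

    # Toy one-group "response": k-infinity = nu * sigma_f / sigma_a (assumed model)
    nu, sigma_f = 2.4, 0.05

    def k_inf(sigma_a):
        return nu * sigma_f / sigma_a

    # Sensitivity of k-infinity to the absorption cross section: exactly -1
    print(sensitivity_direct_perturbation(k_inf, sigma0=0.10))   # ~ -1.0
    ```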

  17. Cross-validation of clinical characteristics and treatment patterns associated with phenotypes for lithium response defined by the Alda scale.

    PubMed

    Scott, Jan; Geoffroy, Pierre Alexis; Sportiche, Sarah; Brichant-Petit-Jean, Clara; Gard, Sebastien; Kahn, Jean-Pierre; Azorin, Jean-Michel; Henry, Chantal; Etain, Bruno; Bellivier, Frank

    2017-01-15

    It is increasingly recognised that reliable and valid assessments of lithium response are needed in order to target the use of this medication in bipolar disorders (BD) more efficiently and to identify genotypes, endophenotypes and biomarkers of response. In a large, multi-centre, clinically representative sample of 300 cases of BD, we assess external clinical validators of lithium response phenotypes as defined using three different recommended approaches to scoring the Alda lithium response scale. The scale comprises an A scale (rating lithium response) and a B scale (assessing confounders). Analysis of the two continuous scoring methods (A scale score minus the B scale score, or A scale score in those with a low B scale score) demonstrated that 21-23% of the explained variance in lithium response was accounted for by a positive family history of BD I and the early introduction of lithium. Categorical definitions of response suggest poor response is also associated with a positive history of alcohol and/or substance use comorbidities. High B scale scores were significantly associated with a longer duration of illness prior to receiving lithium and the presence of psychotic symptoms. The original sample was not recruited specifically to study lithium response, and the Alda scale is designed to assess response retrospectively. This cross-validation study identifies different clinical phenotypes of lithium response when defined by continuous or categorical measures. Future clinical, genetic and biomarker studies should report both the findings and the method employed to assess lithium response according to the Alda scale. Copyright © 2016 Elsevier B.V. All rights reserved.
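
    The two continuous scoring rules named above are simple to state: subtract the B-scale (confounder) total from the A-scale response rating, or keep the A-scale rating alone in cases whose B-scale total is low. A hypothetical sketch follows; the B-scale cutoff and field layout are assumptions, not the scale's published criteria:

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AldaRating:
        a_score: int    # A scale: degree of lithium response, 0-10
        b_total: int    # B scale total: confounders of causal attribution

    def score_subtraction(r: AldaRating) -> int:
        """Continuous method 1: A minus B (floored at 0, a common convention)."""
        return max(r.a_score - r.b_total, 0)

    def score_low_confounding(r: AldaRating, b_cutoff: int = 3) -> Optional[int]:
        """Continuous method 2: A score alone, kept only when confounding is low.

        The cutoff of 3 is an assumed illustration, not the published criterion.
        """
        return r.a_score if r.b_total <= b_cutoff else None

    cases = [AldaRating(8, 1), AldaRating(7, 5), AldaRating(3, 2)]
    print([score_subtraction(c) for c in cases])        # [7, 2, 1]
    print([score_low_confounding(c) for c in cases])    # [8, None, 3]
    ```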

  18. Engaging Community Stakeholders to Evaluate the Design, Usability, and Acceptability of a Chronic Obstructive Pulmonary Disease Social Media Resource Center

    PubMed Central

    Chaney, Beth; Chaney, Don; Paige, Samantha; Payne-Purvis, Caroline; Tennant, Bethany; Walsh-Childers, Kim; Sriram, PS; Alber, Julia

    2015-01-01

    Background Patients with chronic obstructive pulmonary disease (COPD) often report inadequate access to comprehensive patient education resources. Objective The purpose of this study was to incorporate community-engagement principles within a mixed-method research design to evaluate the usability and acceptability of a self-tailored social media resource center for medically underserved patients with COPD. Methods A multiphase sequential design (qual → QUANT → quant + QUAL) was incorporated into the current study, whereby a small-scale qualitative (qual) study informed the design of a social media website prototype that was tested with patients during a computer-based usability study (QUANT). To identify usability violations and determine whether or not patients found the website prototype acceptable for use, each patient was asked to complete an 18-item website usability and acceptability questionnaire, as well as a retrospective, in-depth, semistructured interview (quant + QUAL). Results The majority of medically underserved patients with COPD (n=8, mean 56 years, SD 7) found the social media website prototype to be easy to navigate and relevant to their self-management information needs. Mean responses on the 18-item website usability and acceptability questionnaire were very high on a scale of 1 (strongly disagree) to 5 (strongly agree) (mean 4.72, SD 0.33). However, the majority of patients identified several usability violations related to the prototype’s information design, interactive capabilities, and navigational structure. Specifically, 6 out of 8 (75%) patients struggled to create a log-in account to access the prototype, and 7 out of 8 patients (88%) experienced difficulty posting and replying to comments on an interactive discussion forum. Conclusions Patient perceptions of most social media website prototype features (eg, clickable picture-based screenshots of videos, comment tools) were largely positive. Mixed-method stakeholder feedback was used to make design recommendations, categorize usability violations, and prioritize potential solutions for improving the usability of a social media resource center for COPD patient education. PMID:25630449

  19. Methodology Used to Assess Acceptability of Oral Pediatric Medicines: A Systematic Literature Search and Narrative Review.

    PubMed

    Mistry, Punam; Batchelor, Hannah

    2017-06-01

    Regulatory guidelines require that any new medicine designed for a pediatric population must be demonstrated as being acceptable to that population. There is currently no guidance on how to conduct or report on acceptability testing. Our objective was to undertake a review of the methods used to assess the acceptability of medicines within a pediatric population and use this review to propose the most appropriate methodology. We used a defined search strategy to identify literature reports of acceptability assessments of medicines conducted within pediatric populations and extracted information about the tools used in these studies for comparison across studies. In total, 61 articles were included in the analysis. Palatability was the most common (54/61) attribute measured when evaluating acceptability. Simple scale methods were most commonly used, with visual analog scales (VAS) and hedonic scales used both separately and in combination in 34 of the 61 studies. Hedonic scales alone were used in 14 studies and VAS alone in just five studies. Other tools included Likert scales; forced choice or preference; surveys or questionnaires; observations of facial expressions during administration, ease of swallowing, or ability to swallow the dosage; prevalence of complaints or refusal to take the medicine; and time taken for a nurse to administer the medicine. The best scale in terms of validity, reliability, feasibility, and preference to use when assessing acceptability remains unclear. Further work is required to select the most appropriate method to justify whether a medicine is acceptable to a pediatric population.

  20. Contrast Transmission In Medical Image Display

    NASA Astrophysics Data System (ADS)

    Pizer, Stephen M.; Zimmerman, John B.; Johnston, R. Eugene

    1982-11-01

    The display of medical images involves transforming recorded intensities, such as CT numbers, into perceivable intensities, such as combinations of color and luminance. For the viewer to extract the most information about patterns of decreasing and increasing recorded intensity, the display designer must pay attention to three issues: 1) choice of display scale, including its discretization; 2) correction for variations in contrast sensitivity across the display scale due to the observer and the display device (producing an honest display); and 3) contrast enhancement based on the information in the recorded image and its importance, determined by viewing objectives. This paper will present concepts and approaches in all three of these areas. In choosing display scales three properties are important: sensitivity, associability, and naturalness of order. The unit of just noticeable difference (jnd) will be carefully defined. An observer experiment to measure the jnd values across a display scale will be specified. The overall sensitivity provided by a scale, as measured in jnd's, gives a measure of sensitivity called the perceived dynamic range (PDR). Methods for determining the PDR from the aforementioned jnd values, and PDRs for various grey and pseudocolor scales, will be presented. Methods of achieving sensitivity while retaining associability and naturalness of order with pseudocolor scales will be suggested. For any display device and scale it is useful to compensate for the device and observer by preceding the device with an intensity mapping (lookup table) chosen so that perceived intensity is linear with display-driving intensity. This mapping can be determined from the aforementioned jnd values. With a linearized display it is possible to standardize display devices so that the same image displayed on different devices or scales (e.g. video and hard copy) will be in some sense perceptually equivalent. Furthermore, with a linearized display, it is possible to design contrast enhancement mappings that optimize the transmission of information from the recorded image to the display-driving signal with the assurance that this information will not then be lost by a further nonlinear relation between display-driving and perceived intensity. It is suggested that optimal contrast enhancement mappings are adaptive to the local distribution of recorded intensities.
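
    A small sketch of the two computations implied above: summing just-noticeable differences across a display scale to obtain its PDR, and inverting the cumulative-jnd curve into a lookup table that makes perceived intensity linear in the display-driving value. The jnd figures are fabricated placeholders for what the described observer experiment would measure:

    ```python
    import numpy as np

    # Hypothetical measured jnd sizes at sample points along a grey scale
    # (driving levels 0..255); stand-ins for the observer experiment above.
    levels = np.array([0, 32, 64, 96, 128, 160, 192, 224, 255])
    jnd_size = np.array([8.0, 6.0, 5.0, 4.0, 3.5, 3.0, 2.8, 2.6, 2.5])

    # Perceived dynamic range: total number of jnd steps across the scale,
    # i.e. the integral of 1/jnd_size over the driving level.
    pdr = np.trapz(1.0 / jnd_size, levels)
    print(f"PDR: {pdr:.1f} jnd")

    # Linearizing lookup table: map 256 equally spaced *perceptual* steps
    # back to the driving levels that realize them, by inverting the
    # cumulative jnd count.
    cum_jnd = np.concatenate([[0], np.cumsum(np.diff(levels) / jnd_size[:-1])])
    perceptual_targets = np.linspace(0, cum_jnd[-1], 256)
    lut = np.interp(perceptual_targets, cum_jnd, levels).round().astype(int)
    print(lut[:8], "...", lut[-8:])
    ```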
