Sample records for measures design methods

  1. 77 FR 60985 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-05

    ... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new equivalent methods for monitoring ambient air quality. SUMMARY... equivalent methods, one for measuring concentrations of PM 2.5 , one for measuring concentrations of PM 10...

  2. Comparison of measured efficiencies of nine turbine designs with efficiencies predicted by two empirical methods

    NASA Technical Reports Server (NTRS)

    English, Robert E; Cavicchi, Richard H

    1951-01-01

    The empirical methods of Ainley and of Kochendorfer and Nettles were used to predict the performance of nine turbine designs, and measured and predicted performances were compared. Appropriate values of the blade-loss parameter were determined for the method of Kochendorfer and Nettles. The measured design-point efficiencies were lower than predicted by as much as 0.09 (Ainley) and 0.07 (Kochendorfer and Nettles). For the method of Kochendorfer and Nettles, appropriate values of the blade-loss parameter ranged from 0.63 to 0.87, and the off-design performance was accurately predicted.

  3. New knowledge network evaluation method for design rationale management

    NASA Astrophysics Data System (ADS)

    Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao

    2015-01-01

    Current design rationale (DR) systems have not demonstrated the value of the approach in practice, since little attention has been paid to methods for evaluating DR knowledge. To systematize the knowledge management process for future computer-aided DR applications, a prerequisite is a measure for DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives: the design rationale structure scale, association knowledge and reasoning ability, the degree of design justification support, and the degree of conciseness of the knowledge representation. The comprehensive value of DR knowledge is also measured by the proposed method. To validate the proposed method, different styles of DR knowledge networks and the performance of the proposed measure are discussed. The evaluation method has been applied in two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method that provides an objective metric and a selection basis for DR knowledge reuse during the product design process. In addition, the method is shown to give more effective guidance and support for the application and management of DR knowledge.

  4. A novel vibration measurement and active control method for a hinged flexible two-connected piezoelectric plate

    NASA Astrophysics Data System (ADS)

    Qiu, Zhi-cheng; Wang, Xian-feng; Zhang, Xian-Min; Liu, Jin-guo

    2018-07-01

    A novel non-contact vibration measurement method using binocular vision sensors is proposed for a piezoelectric flexible hinged plate. Decoupling methods for measuring and actively controlling the low-frequency bending and torsional vibrations are investigated, using binocular vision sensors and piezoelectric actuators. A radial basis function neural network controller (RBFNNC) is designed to suppress both larger- and smaller-amplitude vibrations. To verify the non-contact measurement method and the designed controller, an experimental setup of the flexible hinged plate with binocular vision is constructed. Experiments on vibration measurement and control are conducted using the binocular vision sensors and the designed RBFNNC, compared with a classical proportional and derivative (PD) control algorithm. The experimental results demonstrate that the binocular vision sensors can detect the low-frequency bending and torsional vibrations effectively. Furthermore, the designed RBFNNC can suppress the bending vibration more quickly than the PD controller owing to the adaptive adjustment of the RBF control, especially for small-amplitude residual vibrations.
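
The controller's core component, a radial basis function network, passes measured signals through Gaussian basis functions whose output weights are learned. A generic least-squares RBF fit in pure Python (centers, width, and the stand-in target function are illustrative; the paper's RBFNNC adapts its weights online, which this sketch does not show):

```python
# Generic RBF network fit by least squares on a stand-in nonlinear mapping.
# All numbers here are illustrative, not from the record.
import math
import random

random.seed(2)

def solve(a, b):
    """Gauss-Jordan elimination for a small dense system a.x = b."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [rv - f * cv for rv, cv in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

centers = [i / 4 for i in range(-8, 9)]   # 17 Gaussian centers on [-2, 2]
width = 0.5

def phi(x):
    return [math.exp(-((x - c) / width) ** 2) for c in centers]

xs = [random.uniform(-2, 2) for _ in range(120)]
ys = [math.sin(3 * x) for x in xs]        # stand-in signal-to-output mapping

# Normal equations with a tiny ridge term for numerical stability.
k = len(centers)
P = [phi(x) for x in xs]
A = [[sum(p[i] * p[j] for p in P) + (1e-6 if i == j else 0.0)
      for j in range(k)] for i in range(k)]
bvec = [sum(p[i] * y for p, y in zip(P, ys)) for i in range(k)]
w = solve(A, bvec)

err = max(abs(sum(wi * pi for wi, pi in zip(w, phi(x))) - math.sin(3 * x))
          for x in [-1.5, -0.5, 0.0, 0.7, 1.4])
print(f"max approximation error at test points: {err:.4f}")
```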

  5. Two-Method Planned Missing Designs for Longitudinal Research

    ERIC Educational Resources Information Center

    Garnier-Villarreal, Mauricio; Rhemtulla, Mijke; Little, Todd D.

    2014-01-01

    We examine longitudinal extensions of the two-method measurement design, which uses planned missingness to optimize cost-efficiency and validity of hard-to-measure constructs. These designs use a combination of two measures: a "gold standard" that is highly valid but expensive to administer, and an inexpensive (e.g., survey-based)…

  6. Comparison of different measurement methods for transmittance haze

    NASA Astrophysics Data System (ADS)

    Yu, Hsueh-Ling; Hsaio, Chin-Chai

    2009-08-01

    Transmittance haze is increasingly important to the LCD and solar-cell industries. Most commercial haze measurement instruments are designed according to the methods recommended in documentary standards such as ASTM D 1003 (ASTM 2003 Standard Test Method for Haze and Luminous Transmittance of Transparent Plastics), JIS K 7361 (JIS 1997 Plastics—Determination of the Total Luminous Transmittance of Transparent Materials—Part 1: Single Beam Instrument) and ISO 14782 (ISO 1997 Plastics—Determination of Haze of Transparent Materials). To improve on the measurement accuracy of the current standards, a new apparatus was designed by the Center for Measurement Standards (Yu et al 2006 Meas. Sci. Technol. 17 N29-36). Besides the methods mentioned above, a double-beam method is used in the design of some instruments. There are discrepancies between the various methods, but whichever method is used, a white standard is always needed. This paper compares the measurement results from the different methods, presents the effect of the white standard, and analyses the measurement uncertainty.
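
In the ASTM D 1003 procedure, haze is the ratio of diffuse to total transmittance, computed from four hazemeter readings; a commonly quoted form is haze% = 100·(T4/T2 − T3/T1), where T1 is the incident beam, T2 the total transmitted light, T3 the light scattered by the instrument, and T4 the light scattered by instrument and specimen. The readings below are made-up example values, not data from the paper:

```python
# ASTM D 1003-style haze evaluation from the four standard readings.
# Example values are illustrative only.
def haze_percent(t1, t2, t3, t4):
    total_transmittance = t2 / t1
    diffuse = t4 / t2 - t3 / t1      # instrument scatter subtracted out
    return 100.0 * diffuse, 100.0 * total_transmittance

haze, tt = haze_percent(t1=100.0, t2=92.0, t3=0.5, t4=2.3)
print(f"total transmittance {tt:.1f}%, haze {haze:.2f}%")
```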

  7. [Review of research design and statistical methods in Chinese Journal of Cardiology].

    PubMed

    Zhang, Li-jun; Yu, Jin-ming

    2009-07-01

    To evaluate the research design and the use of statistical methods in the Chinese Journal of Cardiology, we reviewed the research design and statistical methods in all of the original papers published in the Chinese Journal of Cardiology from December 2007 to November 2008. The most frequently used research designs were cross-sectional design (34%), prospective design (21%) and experimental design (25%). Of all the articles, 49 (25%) used incorrect statistical methods, 29 (15%) lacked some form of statistical analysis, and 23 (12%) had inconsistencies in the description of methods. There were significant differences between different statistical methods (P < 0.001). The rate of correct use of multifactor analysis was low, and repeated-measures data were not analyzed with repeated-measures methods. Many problems exist in the Chinese Journal of Cardiology. Better research design and correct use of statistical methods are still needed, and stricter review by statisticians and epidemiologists is also required to improve the quality of the literature.

  8. 78 FR 67360 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Five New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    ... Methods: Designation of Five New Equivalent Methods AGENCY: Office of Research and Development; Environmental Protection Agency (EPA). ACTION: Notice of the designation of five new equivalent methods for...) has designated, in accordance with 40 CFR Part 53, five new equivalent methods, one for measuring...

  9. Performance of bias-correction methods for exposure measurement error using repeated measurements with and without missing data.

    PubMed

    Batistatou, Evridiki; McNamee, Roseanne

    2012-12-10

    It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. When replicates are expensive, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large; regression calibration; and the simulation extrapolation method. For the 2S design, the problem of 'missing' data was either ignored or addressed by multiple imputation. In both the 1S and 2S designs, when measurement error was small or moderate, regression calibration was the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method; the 'problematic' implementation improved substantially, however, when multiple imputation was used. The EVROS IV method, under a good or fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both 1S and 2S designs with moderate or large measurement error, simulation extrapolation failed severely to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.
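
Regression calibration, the approach the record favors for small-to-moderate error, rescales the attenuated naive slope by a reliability ratio estimated from the replicates. A minimal pure-Python sketch of the classical replicate-based version (all data simulated; variable names and parameter values are illustrative, not from the study):

```python
# Replicate-based regression calibration: estimate the measurement-error
# variance from within-pair differences, form the reliability ratio
# lambda = var(X)/var(W), and divide the naive slope by lambda.
import random
from statistics import mean, variance

random.seed(42)
n = 2000
beta0, beta1, sd_err = 1.0, 0.5, 0.8      # true intercept/slope, error SD

x  = [random.gauss(0.0, 1.0) for _ in range(n)]        # true exposure (unobserved)
w1 = [xi + random.gauss(0.0, sd_err) for xi in x]      # replicate 1
w2 = [xi + random.gauss(0.0, sd_err) for xi in x]      # replicate 2
y  = [beta0 + beta1 * xi + random.gauss(0.0, 0.3) for xi in x]

def ols_slope(u, v):
    mu, mv = mean(u), mean(v)
    return sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v)) / \
           sum((ui - mu) ** 2 for ui in u)

naive = ols_slope(w1, y)                               # attenuated toward zero

var_e = mean((a - b) ** 2 for a, b in zip(w1, w2)) / 2.0
wbar  = [(a + b) / 2.0 for a, b in zip(w1, w2)]
var_x = variance(wbar) - var_e / 2.0
lam   = var_x / (var_x + var_e)                        # reliability ratio
corrected = naive / lam

print(f"naive slope {naive:.3f}, corrected {corrected:.3f}, true {beta1}")
```

With error SD 0.8 the theoretical attenuation factor is 1/1.64 ≈ 0.61, so the naive slope lands near 0.30 while the corrected estimate recovers roughly 0.5.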

  10. Efficient measurement of large light source near-field color and luminance distributions for optical design and simulation

    NASA Astrophysics Data System (ADS)

    Kostal, Hubert; Kreysar, Douglas; Rykowski, Ronald

    2009-08-01

    The color and luminance distributions of large light sources are difficult to measure because of the size of the source and the physical space required for the measurement. We describe a method for the measurement of large light sources in a limited space that efficiently overcomes the physical limitations of traditional far-field measurement techniques. This method uses a calibrated, high dynamic range imaging colorimeter and a goniometric system to move the light source through an automated measurement sequence in the imaging colorimeter's field-of-view. The measurement is performed from within the near-field of the light source, enabling a compact measurement set-up. This method generates a detailed near-field color and luminance distribution model that can be directly converted to ray sets for optical design and that can be extrapolated to far-field distributions for illumination design. The measurements obtained show excellent correlation to traditional imaging colorimeter and photogoniometer measurement methods. The near-field goniometer approach that we describe is broadly applicable to general lighting systems, can be deployed in a compact laboratory space, and provides full near-field data for optical design and simulation.

  11. Neural network and multiple linear regression to predict school children dimensions for ergonomic school furniture design.

    PubMed

    Agha, Salah R; Alnahhal, Mohammed J

    2012-11-01

    The current study investigates the possibility of obtaining the anthropometric dimensions, critical to school furniture design, without measuring all of them. The study first selects some anthropometric dimensions that are easy to measure. Two methods are then used to check if these easy-to-measure dimensions can predict the dimensions critical to the furniture design. These methods are multiple linear regression and neural networks. Each dimension that is deemed necessary to ergonomically design school furniture is expressed as a function of some other measured anthropometric dimensions. Results show that out of the five dimensions needed for chair design, four can be related to other dimensions that can be measured while children are standing. Therefore, the method suggested here would definitely save time and effort and avoid the difficulty of dealing with students while measuring these dimensions. In general, it was found that neural networks perform better than multiple linear regression in the current study. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
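
Expressing a hard-to-measure dimension as a function of easy-to-measure ones is, for the regression variant, ordinary least squares on the standing measurements. A sketch with synthetic data (the dimensions, coefficients, and sample size are hypothetical, not the study's):

```python
# Multiple linear regression via the normal equations: predict a chair-design
# dimension (popliteal height) from standing measurements. Synthetic data only.
import random

random.seed(1)

def solve(a, b):
    """Gauss-Jordan elimination for a small dense system a.x = b."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [rv - f * cv for rv, cv in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

# Easy standing measurements (cm) and the harder seated target dimension.
stature = [random.gauss(140, 8) for _ in range(200)]
knee_h  = [0.28 * s + random.gauss(0, 1.5) for s in stature]
popl_h  = [0.78 * k + 2.0 + random.gauss(0, 1.0) for k in knee_h]

# (X'X) beta = X'y with design columns [1, stature, knee_h].
rows = [[1.0, s, k] for s, k in zip(stature, knee_h)]
xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
xty = [sum(r[i] * y for r, y in zip(rows, popl_h)) for i in range(3)]
beta = solve(xtx, xty)

pred = [beta[0] + beta[1] * s + beta[2] * k for s, k in zip(stature, knee_h)]
rmse = (sum((p - y) ** 2 for p, y in zip(pred, popl_h)) / len(pred)) ** 0.5
print(f"knee-height coefficient {beta[2]:.2f}, RMSE {rmse:.2f} cm")
```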

  12. A systematic composite service design modeling method using graph-based theory.

    PubMed

    Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh

    2015-01-01

    The composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides the mathematical representation of the components of service-oriented design using the graph-based theoryto facilitate the design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate the complexity and reusability of ComSDM. This also guides the future research towards the design quality measurement such as using the ComSDM method to measure the quality of composite service design in service-oriented software system.

  13. A Systematic Composite Service Design Modeling Method Using Graph-Based Theory

    PubMed Central

    Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh

    2015-01-01

    The composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides the mathematical representation of the components of service-oriented design using the graph-based theoryto facilitate the design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate the complexity and reusability of ComSDM. This also guides the future research towards the design quality measurement such as using the ComSDM method to measure the quality of composite service design in service-oriented software system. PMID:25928358

  14. 77 FR 55832 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of a New Equivalent Method

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-11

    ... Methods: Designation of a New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of a new equivalent method for monitoring ambient air quality. SUMMARY: Notice is... part 53, a new equivalent method for measuring concentrations of PM 2.5 in the ambient air. FOR FURTHER...

  15. Design of the micro pressure multi-node measuring system for micro-fluidic chip

    NASA Astrophysics Data System (ADS)

    Mu, Lili; Guo, Shuheng; Rong, Li; Yin, Ke

    2016-01-01

    An online multi-node microfluidic pressure measuring system was designed in this paper. The research focused on the design of the pressure test circuit system and on methods for handling the collected pressure data. The MPXV7002 micro-pressure sensor was selected to measure the pressure inside the chip channels and was connected via silicone tubes to the measured nodes of the different micro-channels. The pressure transmission loss was estimated and corrected by a filtering and smoothing method. A pressure test experiment was carried out and the data were analyzed. Finally, the measuring system was calibrated. The results showed that the measuring system had high testing precision.
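
The record does not detail its filtering and smoothing method; a centered moving average is one common choice for cleaning a noisy pressure trace, sketched here on synthetic data (window size and signal are illustrative, not from the paper):

```python
# Centered moving-average smoothing of a synthetic pressure trace.
import math
import random

random.seed(7)

def moving_average(samples, window=5):
    """Centered moving average; edges use only the available neighbours."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

# Slowly varying pressure (kPa) plus sensor noise.
clean = [1.0 + 0.2 * math.sin(2 * math.pi * i / 100) for i in range(200)]
noisy = [c + random.gauss(0, 0.05) for c in clean]
smooth = moving_average(noisy, window=9)

err_noisy  = sum((n - c) ** 2 for n, c in zip(noisy, clean))
err_smooth = sum((s - c) ** 2 for s, c in zip(smooth, clean))
print(f"squared error: raw {err_noisy:.3f} -> smoothed {err_smooth:.3f}")
```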

  16. Development of a photogrammetric method of measuring tree taper outside bark

    Treesearch

    David R. Larsen

    2006-01-01

    A photogrammetric method is presented for measuring tree diameters outside bark using calibrated control ground-based digital photographs. The method was designed to rapidly collect tree taper information from subject trees for the development of tree taper equations. Software that is commercially available, but designed for a different purpose, can be readily adapted...

  17. Contextualizing and assessing the social capital of seniors in congregate housing residences: study design and methods

    PubMed Central

    Moore, Spencer; Shiell, Alan; Haines, Valerie; Riley, Therese; Collier, Carrie

    2005-01-01

    Background This article discusses the study design and methods used to contextualize and assess the social capital of seniors living in congregate housing residences in Calgary, Alberta. The project is being funded as a pilot project under the Institute of Aging, Canadian Institutes for Health Research. Design/Methods Working with seniors living in 5 congregate housing residences in Calgary, the project uses a mixed-method approach to develop grounded measures of the social capital of seniors. The project integrates both qualitative and quantitative methods in a 3-phase research design: 1) qualitative, 2) quantitative, and 3) qualitative. Phase 1 uses gender-specific focus groups; phase 2 involves the administration of individual surveys that include a social network module; and phase 3 uses anomalous-case interviews. Not only does the study design allow us to develop grounded measures of social capital, but it also permits us to test how well the three methods work separately, and how well they fit together to achieve project goals. This article describes the selection of the study population and the multiple methods used in the research, and briefly discusses our conceptualization and measurement of social capital. PMID:15836784

  18. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.

  19. Evaluating the quality of a cell counting measurement process via a dilution series experimental design.

    PubMed

    Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng

    2017-12-01

    Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
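
The design's logic can be sketched in a few lines: replicate counts at each dilution give precision (coefficient of variation), and agreement of the dilution means with a proportional fit through the origin gives proportionality. The simulation below is illustrative only, not the paper's statistical analysis:

```python
# Dilution-series quality check: CV per dilution (precision) and deviation
# from a proportional fit through the origin (proportionality). Simulated data.
import random
from statistics import mean, stdev

random.seed(3)
stock = 1.0e6                       # hypothetical stock concentration, cells/mL
fractions = [1.0, 0.5, 0.25, 0.125]
replicates = {f: [random.gauss(stock * f, 0.03 * stock * f) for _ in range(4)]
              for f in fractions}

# Precision: coefficient of variation of the replicate counts at each dilution.
cv = {f: stdev(obs) / mean(obs) for f, obs in replicates.items()}

# Proportionality: least-squares slope through the origin, then the relative
# deviation of each dilution mean from the proportional fit.
slope = (sum(f * mean(obs) for f, obs in replicates.items())
         / sum(f * f for f in fractions))
max_dev = max(abs(mean(obs) - slope * f) / (slope * f)
              for f, obs in replicates.items())

print(f"max CV {max(cv.values()):.3f}, "
      f"max deviation from proportionality {max_dev:.3f}")
```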

  20. Acoustic Treatment Design Scaling Methods. Volume 1; Overview, Results, and Recommendations

    NASA Technical Reports Server (NTRS)

    Kraft, R. E.; Yu, J.

    1999-01-01

    Scale model fan rigs that simulate new-generation ultra-high-bypass engines at about 1/5-scale are achieving increased importance as development vehicles for the design of low-noise aircraft engines. Testing at small scale allows the tests to be performed in existing anechoic wind tunnels, which provides an accurate simulation of the important effects of aircraft forward motion on the noise generation. The ability to design, build, and test miniaturized acoustic treatment panels on scale model fan rigs representative of the full-scale engine provides not only cost savings but also an opportunity to optimize the treatment by allowing tests of different designs. The primary objective of this study was to develop methods that will allow scale model fan rigs to be successfully used as acoustic treatment design tools. The study focuses on finding methods to extend the upper limit of the frequency range of impedance prediction models and acoustic impedance measurement methods for subscale treatment liner designs, and to confirm the predictions by correlation with measured data. This phase of the program had as a goal doubling the upper limit of impedance measurement from 6 kHz to 12 kHz. The program utilizes combined analytical and experimental methods to achieve the objectives.

  1. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  2. Design and analysis of an automatic method of measuring silicon-controlled-rectifier holding current

    NASA Technical Reports Server (NTRS)

    Maslowski, E. A.

    1971-01-01

    The design of an automated SCR holding-current measurement system is described. The circuits used in the measurement system were designed to meet the major requirements of automatic data acquisition, reliability, and repeatability. Performance data are presented and compared with calibration data. The data verified the accuracy of the measurement system. Data taken over a 48-hr period showed that the measurement system operated satisfactorily and met all the design requirements.

  3. Designing Measurement Studies under Budget Constraints: Controlling Error of Measurement and Power.

    ERIC Educational Resources Information Center

    Marcoulides, George A.

    1995-01-01

    A methodology is presented for minimizing the mean error variance-covariance component in studies with resource constraints. The method is illustrated using a one-facet multivariate design. Extensions to other designs are discussed. (SLD)

  4. Different methods to analyze stepped wedge trial designs revealed different aspects of intervention effects.

    PubMed

    Twisk, J W R; Hoogendijk, E O; Zwijsen, S A; de Boer, M R

    2016-04-01

    Within epidemiology, a stepped wedge trial design (i.e., a one-way crossover trial in which several arms start the intervention at different time points) is increasingly popular as an alternative to a classical cluster randomized controlled trial. Despite this increasing popularity, there is huge variation in the methods used to analyze data from a stepped wedge trial design. Four linear mixed models were used to analyze data from a stepped wedge trial design on two example data sets. The four methods were chosen because they have been used frequently in practice. Method 1 compares all the intervention measurements with the control measurements. Method 2 treats the intervention variable as a time-independent categorical variable, comparing the different arms with each other. In method 3, the intervention variable is a time-dependent categorical variable comparing groups with different numbers of intervention measurements, whereas in method 4, the changes in the outcome variable between subsequent measurements are analyzed. In the first example data set, methods 1 and 3 showed a strong positive intervention effect, which disappeared after adjusting for time. Method 2 showed an inverse intervention effect, whereas method 4 did not show a significant effect at all. In the second example data set, the results were the opposite: both methods 2 and 4 showed significant intervention effects, whereas the other two methods did not. For method 4, the intervention effect attenuated after adjustment for time. Different methods to analyze data from a stepped wedge trial design reveal different aspects of a possible intervention effect. The choice of a method partly depends on the type of the intervention and the possible time-dependent effect of the intervention. Furthermore, it is advised to combine the results of the different methods to obtain an interpretable overall result. Copyright © 2016 Elsevier Inc. All rights reserved.
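
Method 1's vulnerability to a secular time trend, which the record notes disappears after adjusting for time, can be reproduced on simulated data (the layout and effect sizes are made up, and the paper uses linear mixed models rather than the simple group means shown here):

```python
# Stepped-wedge toy example: four arms cross over at successive periods, the
# outcome has a time trend and ZERO true intervention effect. Pooling all
# intervention vs. control measurements (method 1) picks up the trend;
# comparing within periods removes it.
import random
from statistics import mean

random.seed(11)
n_arms, n_times = 4, 5
start = {arm: arm + 1 for arm in range(n_arms)}  # arm k crosses over at period k+1

data = []                                        # (treated, period, outcome)
for arm in range(n_arms):
    for t in range(n_times):
        for _ in range(20):                      # 20 subjects per cluster-period
            y = 0.3 * t + random.gauss(0, 0.5)   # secular trend, no true effect
            data.append((t >= start[arm], t, y))

naive = (mean(y for tr, _, y in data if tr)
         - mean(y for tr, _, y in data if not tr))

# Periods 1-3 contain both intervention and control clusters in this layout.
within = [mean(y for tr, tt, y in data if tr and tt == t) -
          mean(y for tr, tt, y in data if not tr and tt == t)
          for t in range(1, 4)]
adjusted = mean(within)
print(f"naive {naive:.2f} vs time-adjusted {adjusted:.2f} (true effect is 0)")
```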

  5. Noise Spectroscopy Used in Biology

    NASA Astrophysics Data System (ADS)

    Žacik, Michal

    This thesis surveys spectroscopic measurement methods across broad frequency bands. An experimental measurement method is designed for noise spectroscopy of simple samples and biological samples in the frequency range of 0.1-6 GHz, using a broadband noise generator. The measurement workplace is realized, and the method is verified by measurements on selected samples. The measurements are displayed and analyzed.

  6. New methodology of measurement the unsteady thermal cooling of objects

    NASA Astrophysics Data System (ADS)

    Winczek, Jerzy

    2018-04-01

    The problems of measuring unsteady thermal turbulent flow affect many domains, such as heat energy, manufacturing technologies, and others. The study focuses on the analysis of the current state of the problem, an overview of design solutions and methods for measuring non-stationary thermal phenomena, the presentation and choice of an adequate cylinder design, and the development of a method to measure and calculate the basic quantities that characterize the process of heat exchange on the model surface.

  7. Passive Magnetic Bearing With Ferrofluid Stabilization

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph; DiRusso, Eliseo

    1996-01-01

    A new class of magnetic bearings is shown to exist analytically and is demonstrated experimentally. This class of magnetic bearings utilizes a ferrofluid/solid magnet interaction to stabilize the axial degree of freedom of a permanent magnet radial bearing. Twenty-six permanent magnet bearing designs and twenty-two ferrofluid stabilizer designs are evaluated. Two types of radial bearing designs are tested to determine their force and stiffness using two methods. The first method uses frequency measurements to determine stiffness via an analytical model. The second consists of loading the system and measuring displacement. Two ferrofluid stabilizers are tested and force-displacement curves are measured. Two experimental test fixtures are designed and constructed in order to conduct the stiffness testing. Polynomial models of the data are generated and used to design the bearing prototype. The prototype was constructed, tested, and shown to be stable. Further testing shows the possibility of using this technology for vibration isolation. The project successfully demonstrated the viability of the passive magnetic bearing with ferrofluid stabilization both experimentally and analytically.

  8. Applications of Evolutionary Algorithms to Electromagnetic Materials Characterization and Design Problems

    NASA Astrophysics Data System (ADS)

    Frasch, Jonathan Lemoine

    Determining the electrical permittivity and magnetic permeability of materials is an important task in electromagnetics research. The method using reflection and transmission scattering parameters to determine these constants has been widely employed for many years, ever since the work of Nicolson, Ross, and Weir in the 1970's. For general materials that are homogeneous, linear, and isotropic, the method they developed (the NRW method) works very well and provides an analytical solution. For materials which possess a metal backing or are applied as a coating to a metal surface, it can be difficult or even impossible to obtain a transmission measurement, especially when the coating is thin. In such a circumstance, it is common to resort to a method which uses two reflection type measurements. There are several such methods for free-space measurements, using multiple angles or polarizations for example. For waveguide measurements, obtaining two independent sources of information from which to extract two complex parameters can be a challenge. This dissertation covers three different topics. Two of these involve different techniques to characterize conductor-backed materials, and the third proposes a method for designing synthetic validation standards for use with standard NRW measurements. All three of these topics utilize modal expansions of electric and magnetic fields to analyze propagation in stepped rectangular waveguides. Two of the projects utilize evolutionary algorithms (EA) to design waveguide structures. These algorithms were developed specifically for these projects and utilize fairly recent innovations within the optimization community. The first characterization technique uses two different versions of a single vertical step in the waveguide. Samples to be tested lie inside the steps with the conductor reflection plane behind them. 
If the two reflection measurements are truly independent it should be possible to recover the values of two complex parameters, but the success of the technique ultimately depends upon how independent the measurements actually are. Next, a method is demonstrated for developing synthetic verification standards. These standards are created from combinations of vertical steps formed from a single piece of metal or metal-coated plastic. These fully insertable structures mimic some of the measurement characteristics of typical lab specimens and thus provide a useful tool for verifying the proper calibration and function of the experimental setup used for NRW characterization. These standards are designed with the use of an EA, which compares possible designs based on the quality of the match with target parameter values. Several examples have been fabricated and tested, and the design specifications and results are presented. Finally, a second characterization technique is considered. This method uses multiple vertical steps to construct an error-reducing structure within the waveguide, which allows parameters to be reliably extracted using both reflection and transmission measurements. These structures are designed with an EA, measuring fitness by the reduction of error in the extracted parameters. An additional EA is used to assist in the extraction of the material parameters by supplying better initial guesses to a secant-method solver. This hybrid approach greatly increases the stability of the solver and increases the speed of parameter extraction. Several designs have been identified and are analyzed.
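
    The classic NRW extraction that this dissertation builds on can be sketched as follows. This is a minimal free-space/TEM version, not the author's stepped-waveguide formulation; the principal branch of the logarithm is assumed valid (electrically thin samples), and the forward model is included only to sanity-check the round trip.

```python
import numpy as np

def nrw_extract(s11, s21, d, f, c=2.998e8):
    """Nicolson-Ross-Weir extraction of (eps_r, mu_r) from complex
    S-parameters of a slab of thickness d (m) at frequency f (Hz).
    Free-space/TEM sketch; principal log/sqrt branches assume an
    electrically thin, moderately lossy sample."""
    lam0 = c / f
    X = (s11**2 - s21**2 + 1) / (2 * s11)
    gamma = X + np.sqrt(X**2 - 1)
    if abs(gamma) > 1:                 # pick the physical root |Gamma| <= 1
        gamma = X - np.sqrt(X**2 - 1)
    T = (s11 + s21 - gamma) / (1 - (s11 + s21) * gamma)
    ln_term = np.log(1 / T) / (2 * np.pi * d)
    inv_lam = np.sqrt(-(ln_term**2))   # 1/Lambda, principal root (Re >= 0)
    mu_r = lam0 * inv_lam * (1 + gamma) / (1 - gamma)
    eps_r = lam0**2 * inv_lam**2 / mu_r
    return eps_r, mu_r

def tem_sparams(eps_r, mu_r, d, f, c=2.998e8):
    """Ideal-slab forward model, used here only to verify the round trip."""
    k0 = 2 * np.pi * f / c
    n = np.sqrt(eps_r * mu_r)          # complex refractive index
    z = np.sqrt(mu_r / eps_r)          # normalized wave impedance
    g0 = (z - 1) / (z + 1)
    P = np.exp(-1j * n * k0 * d)       # one-way propagation factor
    denom = 1 - g0**2 * P**2
    return g0 * (1 - P**2) / denom, P * (1 - g0**2) / denom
```

    A round trip through the forward model (e.g. eps_r = 4 − 0.2j, mu_r = 1 at 10 GHz for a 2 mm slab) recovers the input constants to machine precision.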

  9. Enhanced learning through design problems - teaching a components-based course through design

    NASA Astrophysics Data System (ADS)

    Jensen, Bogi Bech; Högberg, Stig; Fløtum Jensen, Frida av; Mijatovic, Nenad

    2012-08-01

    This paper describes a teaching method used in an electrical machines course, where the students learn about electrical machines by designing them. The aim of the course is not to teach design, although design skills are a side product, but rather to teach the fundamentals and the function of electrical machines through design. The teaching method is evaluated by a student questionnaire designed to measure the quality and effectiveness of the teaching method. The results of the questionnaire conclusively show that this method, labelled 'learning through design', is a very effective way of teaching a components-based course. This teaching method can easily be generalised and used in other courses.

  10. 75 FR 45627 - Office of Research and Development; Ambient Air Monitoring Reference and Equivalent Methods...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-03

    ... Monitoring Reference and Equivalent Methods: Designation of One New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of one new equivalent method for monitoring ambient air... accordance with 40 CFR part 53, one new equivalent method for measuring concentrations of lead (Pb) in total...

  11. 76 FR 62402 - Office of Research and Development; Ambient Air Monitoring Reference and Equivalent Methods...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-07

    ... Monitoring Reference and Equivalent Methods; Designation of One New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of one new equivalent method for monitoring ambient air... accordance with 40 CFR Part 53, one new equivalent method for measuring concentrations of ozone (O 3 ) in the...

  12. 75 FR 51039 - Office of Research and Development; Ambient Air Monitoring Reference and Equivalent Methods...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-18

    ... Monitoring Reference and Equivalent Methods: Designation of Two New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of two new equivalent methods for monitoring ambient air... accordance with 40 CFR Part 53, two new equivalent methods for measuring concentrations of PM 10 and sulfur...

  13. 75 FR 22126 - Office of Research and Development; Ambient Air Monitoring Reference and Equivalent Methods...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-27

    ... Monitoring Reference and Equivalent Methods: Designation of One New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of one new equivalent method for monitoring ambient air... accordance with 40 CFR Part 53, one new equivalent method for measuring concentrations of ozone (O 3 ) in the...

  14. 75 FR 30022 - Office of Research and Development; Ambient Air Monitoring Reference and Equivalent Methods...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-28

    ... Monitoring Reference and Equivalent Methods: Designation of One New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of one new equivalent method for monitoring ambient air... accordance with 40 CFR Part 53, one new equivalent method for measuring concentrations of lead (Pb) in total...

  15. 75 FR 9894 - Office of Research and Development; Ambient Air Monitoring Reference and Equivalent Methods...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-04

    ... Monitoring Reference and Equivalent Methods: Designation of One New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of one new equivalent method for monitoring ambient air... accordance with 40 CFR part 53, one new equivalent method for measuring concentrations of lead (Pb) in total...

  16. Differentiation and Exploration of Model MACP for HE VER 1.0 on Prototype Performance Measurement Application for Higher Education

    NASA Astrophysics Data System (ADS)

    El Akbar, R. Reza; Anshary, Muhammad Adi Khairul; Hariadi, Dennis

    2018-02-01

    Model MACP for HE ver. 1.0 is a model that describes how to measure and monitor performance in higher education. A review of the research related to the model identified several components to develop in further research, so this research has four main objectives. The first is to differentiate the CSF (critical success factor) components of the previous model; the second is to explore the KPIs (key performance indicators) of the previous model; the third, building on the first two, is to design a new and more detailed model. The fourth is to design a prototype application for performance measurement in higher education, based on the new model. The method used is an explorative research method, with the application designed using the prototype method. The results of this study are, first, a new and more detailed model for measuring and monitoring performance in higher education, obtained by differentiation and exploration of Model MACP for HE Ver. 1.0. The second result is a dictionary of college performance measurement, compiled by re-evaluating the existing indicators. The third result is the design of a prototype application for performance measurement in higher education.

  17. Comparison of on-site field measured inorganic arsenic in rice with laboratory measurements using a field deployable method: Method validation.

    PubMed

    Mlangeni, Angstone Thembachako; Vecchi, Valeria; Norton, Gareth J; Raab, Andrea; Krupp, Eva M; Feldmann, Joerg

    2018-10-15

    A commercial arsenic field kit designed to measure inorganic arsenic (iAs) in water was modified into a field deployable method (FDM) to measure iAs in rice. While the method has been validated to give precise and accurate results in the laboratory, its on-site field performance has not been evaluated. This study was designed to test the method on-site in Malawi in order to evaluate its accuracy and precision in determining iAs, by comparison with a validated reference method, and to give original data on inorganic arsenic in Malawian rice and rice-based products. The method was validated against the established laboratory-based HPLC-ICP-MS. Statistical tests indicated there were no significant differences between on-site and laboratory iAs measurements determined using the FDM (p = 0.263, α = 0.05) or between on-site measurements and measurements determined using HPLC-ICP-MS (p = 0.299, α = 0.05). This method allows quick (within 1 h) and efficient on-site screening of iAs concentrations in rice. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Test methods and design allowables for fibrous composites. Volume 2

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Editor)

    1989-01-01

    Topics discussed include extreme/hostile environment testing, establishing design allowables, and property/behavior specific testing. Papers are presented on environmental effects on the high strain rate properties of graphite/epoxy composite, the low-temperature performance of short-fiber reinforced thermoplastics, the abrasive wear behavior of unidirectional and woven graphite fiber/PEEK, test methods for determining design allowables for fiber reinforced composites, and statistical methods for calculating material allowables for MIL-HDBK-17. Attention is also given to a test method to measure the response of composite materials under reversed cyclic loads, a through-the-thickness strength specimen for composites, the use of torsion tubes to measure in-plane shear properties of filament-wound composites, the influence of test fixture design on the Iosipescu shear test for fiber composite materials, and a method for monitoring in-plane shear modulus in fatigue testing of composites.

  19. Laboratory validation of four black carbon measurement methods for the determination of non-volatile particulate matter (PM) mass emissions . . .

    EPA Science Inventory

    A laboratory-scale experimental program was designed to standardize each of four black carbon measurement methods, provide appropriate quality assurance/control procedures for these techniques, and compare measurements made by these methods to a NIST traceable standard (filter gr...

  20. User Interaction in Semi-Automatic Segmentation of Organs at Risk: a Case Study in Radiotherapy.

    PubMed

    Ramkumar, Anjana; Dolz, Jose; Kirisli, Hortense A; Adebahr, Sonja; Schimek-Jasch, Tanja; Nestle, Ursula; Massoptier, Laurent; Varga, Edit; Stappers, Pieter Jan; Niessen, Wiro J; Song, Yu

    2016-04-01

    Accurate segmentation of organs at risk is an important step in radiotherapy planning. Because manual segmentation is a tedious procedure prone to inter- and intra-observer variability, there is a growing interest in automated segmentation methods. However, automatic methods frequently fail to provide satisfactory results, and post-processing corrections are often needed. Semi-automatic segmentation methods are designed to overcome these problems by combining physicians' expertise and computers' potential. This study evaluates two semi-automatic segmentation methods with different types of user interactions, named "strokes" and "contour", to provide insights into the role and impact of human-computer interaction. Two physicians participated in the experiment. In total, 42 case studies were carried out on five different types of organs at risk. For each case study, both the human-computer interaction process and the quality of the segmentation results were measured subjectively and objectively. Furthermore, different measures of the process and the results were correlated. A total of 36 quantifiable and ten non-quantifiable correlations were identified for each type of interaction. Among those pairs of measures, 20 of the contour method and 22 of the strokes method were strongly or moderately correlated, either directly or inversely. Based on those correlated measures, it is concluded that: (1) in the design of semi-automatic segmentation methods, user interactions need to be less cognitively challenging; (2) based on the observed workflows and preferences of physicians, there is a need for flexibility in the interface design; (3) the correlated measures provide insights that can be used to improve user interaction design.

  1. Shape design sensitivity analysis and optimal design of structural systems

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.

    1987-01-01

    The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. Results of the design sensitivity analysis are used to carry out design optimization of a built-up structure.

  2. Near-optimal experimental design for model selection in systems biology.

    PubMed

    Busetto, Alberto Giovanni; Hauser, Alain; Krummenacher, Gabriel; Sunnåker, Mikael; Dimopoulos, Sotiris; Ong, Cheng Soon; Stelling, Jörg; Buhmann, Joachim M

    2013-10-15

    Biological systems are understood through iterations of modeling and experimentation. Not all experiments, however, are equally valuable for predictive modeling. This study introduces an efficient method for experimental design aimed at selecting dynamical models from data. Motivated by biological applications, the method enables the design of crucial experiments: it determines a highly informative selection of measurement readouts and time points. We demonstrate formal guarantees of design efficiency on the basis of previous results. By reducing our task to the setting of graphical models, we prove that the method finds a near-optimal design selection with a polynomial number of evaluations. Moreover, the method exhibits the best polynomial-complexity constant approximation factor, unless P = NP. We measure the performance of the method in comparison with established alternatives, such as ensemble non-centrality, on example models of different complexity. Efficient design accelerates the loop between modeling and experimentation: it enables the inference of complex mechanisms, such as those controlling central metabolic operation. Toolbox 'NearOED' available with source code under GPL on the Machine Learning Open Source Software Web site (mloss.org).
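
    The near-optimality guarantee mentioned in this abstract is of the kind obtained from greedy maximization of a monotone submodular criterion, which reaches at least (1 − 1/e) of the optimum. The sketch below is a generic illustration of that idea, not the authors' NearOED code; the readouts, sensitivity vectors, and log-det criterion are hypothetical stand-ins.

```python
import numpy as np

def greedy_design(candidates, gain, budget):
    """Select `budget` measurements by largest marginal gain. For a
    monotone submodular `gain`, greedy is (1 - 1/e)-optimal
    (Nemhauser-Wolsey-Fisher)."""
    chosen = []
    for _ in range(budget):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: gain(chosen + [c]) - gain(chosen))
        chosen.append(best)
    return chosen

# Hypothetical readouts, each with a sensitivity vector; log-det of the
# regularized information matrix is a standard submodular design criterion.
feats = {0: np.array([1.0, 0.0]),
         1: np.array([0.0, 1.0]),
         2: np.array([1.0, 0.01])}

def logdet_gain(subset):
    M = np.eye(2) + sum((np.outer(feats[i], feats[i]) for i in subset),
                        np.zeros((2, 2)))
    return float(np.log(np.linalg.det(M)))
```

    With a budget of two, the greedy rule picks readouts covering complementary directions rather than two nearly redundant ones, which is the behavior that makes such designs "highly informative".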

  3. Usage of noncontact human body measurements for development of Army Work Wear Trousers

    NASA Astrophysics Data System (ADS)

    Dabolina, Inga; Lapkovska, Eva; Vilumsone, Ausma

    2017-10-01

    The paper is based on issues related to imperfections of clothing fit, garment construction solutions and control measurement systems of finished products, which were identified in the research process of analysing army soldier work wear trousers. The aim is to obtain target group body measurements using a noncontact anthropometrical data acquisition method (3D scanning), and to select and analyse the scanned data suitable for trouser design. Tasks include comparison of the scanned data with manually taken body measurements and with corresponding human body measurement standard data, to establish the potential advantages of the noncontact method in solving trouser design issues.

  4. A maintenance time prediction method considering ergonomics through virtual reality simulation.

    PubMed

    Zhou, Dong; Zhou, Xin-Xin; Guo, Zi-Yue; Lv, Chuan

    2016-01-01

    Maintenance time is a critical quantitative index in maintainability prediction. An efficient maintenance time measurement methodology plays an important role in the early stage of maintainability design. However, the traditional way to measure maintenance time ignores the differences between line production and maintenance actions. This paper proposes a corrective MOD method that considers several important ergonomics factors to predict maintenance time. With the help of the DELMIA analysis tools, the influence coefficients of several factors are discussed to correct the MOD value, and designers can measure maintenance time by calculating the sum of the corrective MOD times of each maintenance therblig. Finally, a case study is introduced: by maintaining a virtual prototype of an APU motor starter in DELMIA, the designer obtains the actual maintenance time with the proposed method, and the result verifies the effectiveness and accuracy of the method.

  5. Measurement system and model for simultaneously measuring 6DOF geometric errors.

    PubMed

    Zhao, Yuqiong; Zhang, Bin; Feng, Qibo

    2017-09-04

    A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and nonparallelism of the two measurement beams, a unified measurement model, which can improve the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.

  6. Robot-operated quality control station based on the UTT method

    NASA Astrophysics Data System (ADS)

    Burghardt, Andrzej; Kurc, Krzysztof; Szybicki, Dariusz; Muszyńska, Magdalena; Nawrocki, Jacek

    2017-03-01

    This paper presents a robotic test stand for the ultrasonic transmission tomography (UTT) inspection of stator vane thickness. The article presents the design of the test stand in the Autodesk Robot Structural Analysis Professional 2013 software suite. The performance of the designed test stand solution was simulated in the RobotStudio software suite. The operating principle of the test stand measurement system is presented, with a specific focus on the measurement strategy. The results of actual wall thickness measurements performed on stator vanes are presented.

  7. Optimal experimental designs for estimating Henry's law constants via the method of phase ratio variation.

    PubMed

    Kapelner, Adam; Krieger, Abba; Blanford, William J

    2016-10-14

    When measuring Henry's law constants (kH) using the phase ratio variation (PRV) method via headspace gas chromatography (GC), the value of kH of the compound under investigation is calculated from the ratio of the slope to the intercept of a linear regression of the inverse GC response versus the ratio of gas to liquid volumes of a series of vials drawn from the same parent solution. Thus, an experimenter collects measurements consisting of the independent variable (the gas/liquid volume ratio) and the dependent variable (the inverse GC peak area). A review of the literature found that the common design is a simple uniform spacing of liquid volumes. We present an optimal experimental design which estimates kH with minimum error and provides multiple means for building confidence intervals for such estimates. We illustrate performance improvements of our design with an example measuring the kH for naphthalene in aqueous solution as well as simulations on previous studies. Our designs are most applicable after a trial run defines the linear GC response and the region where the inverse GC response is linear in the phase ratio (where the PRV method is suitable), after which a practitioner can collect measurements in bulk. The designs can be easily computed using our open source software optDesignSlopeInt, an R package on CRAN. Copyright © 2016 Elsevier B.V. All rights reserved.
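
    The slope-over-intercept computation the abstract describes is straightforward to sketch (variable names are illustrative, not from the paper):

```python
import numpy as np

def kh_prv(phase_ratio, inv_response):
    """Henry's law constant via the PRV method: ordinary least-squares
    fit of inverse GC response vs. gas/liquid phase ratio, then
    kH = slope / intercept."""
    slope, intercept = np.polyfit(phase_ratio, inv_response, 1)
    return slope / intercept
```

    On noiseless synthetic data with intercept 2.0 and slope 0.5, this returns kH = 0.25; the paper's contribution is choosing the phase ratios so that this ratio estimator has minimum error.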

  8. Molecular system identification for enzyme directed evolution and design

    NASA Astrophysics Data System (ADS)

    Guan, Xiangying; Chakrabarti, Raj

    2017-09-01

    The rational design of chemical catalysts requires methods for the measurement of free energy differences in the catalytic mechanism for any given catalyst Hamiltonian. The scope of experimental learning algorithms that can be applied to catalyst design would also be expanded by the availability of such methods. Methods for catalyst characterization typically either estimate apparent kinetic parameters that do not necessarily correspond to free energy differences in the catalytic mechanism or measure individual free energy differences that are not sufficient for establishing the relationship between the potential energy surface and catalytic activity. Moreover, in order to enhance the duty cycle of catalyst design, statistically efficient methods for the estimation of the complete set of free energy differences relevant to the catalytic activity based on high-throughput measurements are preferred. In this paper, we present a theoretical and algorithmic system identification framework for the optimal estimation of free energy differences in solution phase catalysts, with a focus on one- and two-substrate enzymes. This framework, which can be automated using programmable logic, prescribes a choice of feasible experimental measurements and manipulated input variables that identify the complete set of free energy differences relevant to the catalytic activity and minimize the uncertainty in these free energy estimates for each successive Hamiltonian design. The framework also employs decision-theoretic logic to determine when model reduction can be applied to improve the duty cycle of high-throughput catalyst design. Automation of the algorithm using fluidic control systems is proposed, and applications of the framework to the problem of enzyme design are discussed.

  9. Sports participation and alcohol use among adolescents: the impact of measurement and other research design elements.

    PubMed

    Mays, Darren; Gatti, Margaret E; Thompson, Nancy J

    2011-06-01

    Sports participation, while offering numerous developmental benefits for adolescents, has been associated with alcohol use in prior research. However, the relationship between sports participation and alcohol use among adolescents remains unclear, particularly how research design elements impact evidence of this relationship. We reviewed the evidence regarding sports participation and alcohol use among adolescents, with a focus on examining the potential impact of research design elements on this evidence. Studies were assessed for eligibility and coded based on research design elements including: study design, sampling method, sample size, and measures of sports participation and alcohol use. Fifty-four studies were assessed for eligibility, 29 of which were included in the review. Nearly two-thirds used a cross-sectional design and a random sampling method, with sample sizes ranging from 178 to 50,168 adolescents (Median = 1,769). Sixteen studies used a categorical measure of sports participation, while 7 applied an index-type measure and 6 employed some other measure of sports participation. Most studies assessed alcohol-related behaviors (n = 18) through categorical measures, while only 6 applied frequency only measures of alcohol use, 1 study applied quantity only measures, and 3 studies used quantity and frequency measures. Sports participation has been defined and measured in various ways, most of which do not differentiate between interscholastic and community-based contexts, confounding this relationship. Stronger measures of both sports participation and alcohol use need to be applied in future studies to advance our understanding of this relationship among youths.

  10. A flexible layout design method for passive micromixers.

    PubMed

    Deng, Yongbo; Liu, Zhenyu; Zhang, Ping; Liu, Yongshun; Gao, Qingyong; Wu, Yihui

    2012-10-01

    This paper discusses a flexible layout design method for passive micromixers based on the topology optimization of fluidic flows. In contrast to the trial-and-error method, this method obtains the detailed layout of a passive micromixer according to the desired mixing performance by solving a topology optimization problem. The dependence on the experience of the designer is therefore weakened when this method is used to design a passive micromixer with acceptable mixing performance. Several design disciplines for passive micromixers are considered to demonstrate the flexibility of the layout design method. These design disciplines include the approximation of the real 3D micromixer, manufacturing feasibility, spatially periodic design, and the effects of the Péclet number and Reynolds number on the designs obtained by this layout design method. The capability of this design method is validated by several comparisons between the obtained layouts and the optimized designs in recently published literature, where the value of the mixing measurement is improved by up to 40.4% for one cycle of the micromixer.

  11. Rigorous evaluation of chemical measurement uncertainty: liquid chromatographic analysis methods using detector response factor calibration

    NASA Astrophysics Data System (ADS)

    Toman, Blaza; Nelson, Michael A.; Bedner, Mary

    2017-06-01

    Chemical measurement methods are designed to promote accurate knowledge of a measurand or system. As such, these methods often allow elicitation of latent sources of variability and correlation in experimental data. They typically implement measurement equations that support quantification of effects associated with calibration standards and other known or observed parametric variables. Additionally, multiple samples and calibrants are usually analyzed to assess accuracy of the measurement procedure and repeatability by the analyst. Thus, a realistic assessment of uncertainty for most chemical measurement methods is not purely bottom-up (based on the measurement equation) or top-down (based on the experimental design), but inherently contains elements of both. Confidence in results must be rigorously evaluated for the sources of variability in all of the bottom-up and top-down elements. This type of analysis presents unique challenges due to various statistical correlations among the outputs of measurement equations. One approach is to use a Bayesian hierarchical (BH) model which is intrinsically rigorous, thus making it a straightforward method for use with complex experimental designs, particularly when correlations among data are numerous and difficult to elucidate or explicitly quantify. In simpler cases, careful analysis using GUM Supplement 1 Monte Carlo (MC) methods augmented with random-effects meta-analysis yields similar results to a full BH model analysis. In this article we describe both approaches to rigorous uncertainty evaluation using as examples measurements of 25-hydroxyvitamin D3 in solution reference materials via liquid chromatography with UV absorbance detection (LC-UV) and liquid chromatography mass spectrometric detection using isotope dilution (LC-IDMS).
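
    A minimal GUM Supplement 1-style Monte Carlo propagation looks like the sketch below. The measurement equation and all numbers here are purely hypothetical placeholders; the paper's equation for 25-hydroxyvitamin D3 with response factor calibration is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Hypothetical measurement equation c = A / RF: analyte peak area divided
# by a detector response factor estimated from calibration standards.
A = rng.normal(100.0, 1.0, N)    # peak area, ~1% relative standard uncertainty
RF = rng.normal(2.0, 0.04, N)    # response factor, ~2% relative standard uncertainty

c = A / RF
# Combined relative uncertainty is roughly sqrt(0.01**2 + 0.02**2) ~ 2.2%
print(f"c = {c.mean():.2f} +/- {c.std(ddof=1):.2f}")
```

    The MC draw replaces first-order uncertainty propagation with an empirical distribution of the output quantity; the article's point is that such bottom-up propagation still needs a top-down (random-effects) layer when repeated samples and calibrants disagree beyond their stated uncertainties.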

  12. Incorporating Servqual-QFD with Taguchi Design for optimizing service quality design

    NASA Astrophysics Data System (ADS)

    Arbi Hadiyat, M.

    2018-03-01

    Deploying good service design in service companies has become a topical issue in improving customer satisfaction, especially when the level of service quality is measured by Parasuraman's SERVQUAL. Many researchers have proposed methods for designing services, some from an engineering viewpoint, notably by implementing the QFD method or the robust Taguchi method. The QFD method finds a qualitative solution by generating the "hows", while the Taguchi method gives a more quantitative calculation for optimizing the best solution. In this paper both QFD and Taguchi are incorporated, yielding a better design process. The purpose of this research is to evaluate the incorporated methods by applying them to a case study, then analyzing the result and assessing the robustness of the methods with respect to customer perception of service quality. Starting by measuring service attributes using SERVQUAL and finding improvements with QFD, the QFD solution was then deployed by defining Taguchi factor levels and calculating the signal-to-noise ratio in an orthogonal array, from which the optimized Taguchi response was found. A case study was given for designing service in a local bank. Afterwards, the service design obtained from the analysis was evaluated and shown to still meet customer satisfaction. Incorporating QFD and Taguchi performed well and can be adopted and developed in further research evaluating the robustness of the result.
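
    The signal-to-noise calculation at the heart of the Taguchi step can be sketched generically; this is the standard larger-the-better form, since the abstract does not give the paper's specific quality responses.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-the-better signal-to-noise ratio in dB:
    S/N = -10 * log10(mean(1 / y_i**2)) over replicate responses y_i."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))
```

    For each factor in the orthogonal array, the level whose runs have the highest mean S/N is chosen; a response of 10 in every replicate, for example, gives S/N = 20 dB.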

  13. Tradeoff studies in multiobjective insensitive design of airplane control systems

    NASA Technical Reports Server (NTRS)

    Schy, A. A.; Giesy, D. P.

    1983-01-01

    A computer aided design method for multiobjective parameter-insensitive design of airplane control systems is described. Methods are presented for trading off nominal values of design objectives against sensitivities of the design objectives to parameter uncertainties, together with guidelines for designer utilization of the methods. The methods are illustrated by application to the design of a lateral stability augmentation system for two supersonic flight conditions of the Shuttle Orbiter. Objective functions are conventional handling quality measures and peak magnitudes of control deflections and rates. The uncertain parameters are assumed Gaussian, and numerical approximations of the stochastic behavior of the objectives are described. Results of applying the tradeoff methods to this example show that stochastic-insensitive designs are distinctly different from deterministic multiobjective designs. The main penalty for achieving significant decrease in sensitivity is decreased speed of response for the nominal system.

  14. Method for technology-delivered healthcare measures.

    PubMed

    Kramer-Jackman, Kelli Lee; Popkess-Vawter, Sue

    2011-12-01

    Current healthcare literature lacks development and evaluation methods for research and practice measures administered by technology. Researchers with varying levels of informatics experience are developing technology-delivered measures because of the numerous advantages they offer. Hasty development of technology-delivered measures can present issues that negatively influence administration and psychometric properties. The Method for Technology-delivered Healthcare Measures is designed to systematically guide the development and evaluation of technology-delivered measures. The five-step Method for Technology-delivered Healthcare Measures includes establishment of content, e-Health literacy, technology delivery, expert usability, and participant usability. Background information and Method for Technology-delivered Healthcare Measures steps are detailed.

  15. Reliability, minimal detectable change and responsiveness to change: Indicators to select the best method to measure sedentary behaviour in older adults in different study designs.

    PubMed

    Dontje, Manon L; Dall, Philippa M; Skelton, Dawn A; Gill, Jason M R; Chastin, Sebastien F M

    2018-01-01

    Prolonged sedentary behaviour (SB) is associated with poor health. It is unclear which SB measure is most appropriate for interventions and population surveillance to measure and interpret change in behaviour in older adults. The aims of this study were to examine the relative and absolute reliability, Minimal Detectable Change (MDC) and responsiveness to change of subjective and objective methods of measuring SB in older adults, and to give recommendations for use in different study designs. SB of 18 older adults (aged 71 (IQR 7) years) was assessed using a systematic set of six subjective tools, derived from the TAxonomy of Self report Sedentary behaviour Tools (TASST), and one objective tool (activPAL3c), over 14 days. Relative reliability (intraclass correlation coefficient, ICC), absolute reliability (SEM), MDC, and relative responsiveness (Cohen's d effect size (ES) and Guyatt's responsiveness coefficient (GR)) were calculated for each of the tools and ranked for different study designs. ICC ranged from 0.414 to 0.946, SEM from 36.03 to 137.01 min, MDC from 1.66 to 8.42 hours, ES from 0.017 to 0.259 and GR from 0.024 to 0.485. An objective average-day-per-week measurement ranked as most responsive in a clinical practice setting, whereas a one-day measurement ranked highest in quasi-experimental, longitudinal and controlled trial study designs. TV viewing-Previous Week Recall (PWR) ranked as the most responsive subjective measure in all study designs. The reliability, Minimal Detectable Change and responsiveness to change of subjective and objective methods of measuring SB are context dependent. Although TV viewing-PWR is the more reliable and responsive subjective method in most situations, it may have limitations as a reliable measure of total SB. The results of this study can be used to guide the choice of tools for detecting change in sedentary behaviour in older adults in the contexts of population surveillance, intervention evaluation and individual care.
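
    The reliability indices reported here are conventionally linked by SEM = SD * sqrt(1 − ICC) and MDC95 = 1.96 * sqrt(2) * SEM, which can be sketched directly; the numbers in the usage note are hypothetical, and the paper's exact computation may differ in detail.

```python
import math

def sem_mdc95(sd, icc):
    """Standard error of measurement and 95% minimal detectable change
    from a between-subject SD and an ICC, using the conventional formulas
    SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM."""
    sem = sd * math.sqrt(1.0 - icc)
    mdc95 = 1.96 * math.sqrt(2.0) * sem
    return sem, mdc95
```

    For instance, an SD of 100 min with ICC = 0.84 gives SEM = 40 min and MDC95 of about 111 min: a measured change smaller than that cannot be distinguished from measurement noise, which is why the tools above are ranked differently for surveillance versus individual care.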

  16. Development of a method for measuring femoral torsion using real-time ultrasound.

    PubMed

    Hafiz, Eliza; Hiller, Claire E; Nicholson, Leslie L; Nightingale, E Jean; Clarke, Jillian L; Grimaldi, Alison; Eisenhuth, John P; Refshauge, Kathryn M

    2014-07-01

    Excessive femoral torsion has been associated with various musculoskeletal and neurological problems. To explore this relationship, it is essential to be able to measure femoral torsion accurately in the clinic. Computerized tomography (CT) and magnetic resonance imaging (MRI) are thought to provide the most accurate measurements, but CT involves significant radiation exposure and MRI is expensive. The aim of this study was to design a method for measuring femoral torsion in the clinic, and to determine the reliability of this method. Details of the design process, including the construction of a jig, the protocol developed and the reliability of the method, are presented. The protocol used ultrasound to image a ridge on the greater trochanter, with a customized jig placed on the femoral condyles as reference points. An inclinometer attached to the customized jig allowed quantification of the degree of femoral torsion. Measurements taken with this protocol had excellent intra- and inter-rater reliability (ICC2,1 = 0.98 and 0.97, respectively). This method also permitted measurement of femoral torsion with a high degree of accuracy. The method is applicable to the research setting and, with minor adjustments, will be applicable to the clinical setting.

  17. Design of transmission-type phase holograms for a compact radar-cross-section measurement range at 650 GHz.

    PubMed

    Noponen, Eero; Tamminen, Aleksi; Vaaja, Matti

    2007-07-10

    A design formalism is presented for transmission-type phase holograms for use in a submillimeter-wave compact radar-cross-section (RCS) measurement range. The design method is based on rigorous electromagnetic grating theory combined with conventional hologram synthesis. Hologram structures consisting of a curved groove pattern on a 320 mm × 280 mm Teflon plate are designed to transform an incoming spherical wave at 650 GHz into an output wave generating a 100 mm diameter planar field region (quiet zone) at a distance of 1 m. The reconstructed quiet-zone field is evaluated by a numerical simulation method. The uniformity of the quiet-zone field is further improved by reoptimizing the goal field. Measurement results are given for a test hologram fabricated on Teflon.

  18. A Tool for the Automated Design and Evaluation of Habitat Interior Layouts

    NASA Technical Reports Server (NTRS)

    Simon, Matthew A.; Wilhite, Alan W.

    2013-01-01

    The objective of space habitat design is to minimize mass and system size while providing adequate space for all necessary equipment and a functional layout that supports crew health and productivity. Unfortunately, development and evaluation of interior layouts is often ignored during conceptual design because of the subjectivity and long times required using current evaluation methods (e.g., human-in-the-loop mockup tests and in-depth CAD evaluations). Early, more objective assessment could prevent expensive design changes that may increase vehicle mass and compromise functionality. This paper describes a new interior design evaluation method to enable early, structured consideration of habitat interior layouts. This interior layout evaluation method features a comprehensive list of quantifiable habitat layout evaluation criteria, automatic methods to measure these criteria from a geometry model, and application of systems engineering tools and numerical methods to construct a multi-objective value function measuring the overall habitat layout performance. In addition to a detailed description of this method, a C++/OpenGL software tool which has been developed to implement this method is also discussed. This tool leverages geometry modeling coupled with collision detection techniques to identify favorable layouts subject to multiple constraints and objectives (e.g., minimize mass, maximize contiguous habitable volume, maximize task performance, and minimize crew safety risks). Finally, a few habitat layout evaluation examples are described to demonstrate the effectiveness of this method and tool to influence habitat design.

  19. Assessment of watershed regionalization for the land use change parameterization

    NASA Astrophysics Data System (ADS)

    Randusová, Beata; Kohnová, Silvia; Studvová, Zuzana; Marková, Romana; Nosko, Radovan

    2016-04-01

    The estimation of design discharges and water levels of extreme floods is one of the most important parts of the design process for a large number of engineering projects and studies. Floods and other natural hazards driven by climate, soil and land use changes are highly important in the 21st century, making flood risk assessment and design flood estimation particularly challenging. Methods of design flood estimation can be applied either locally or regionally. To obtain design values where no recorded data exist, many countries have adopted procedures that fit local conditions and requirements. One of these methods is the Soil Conservation Service Curve Number (SCS-CN) method, which is often used in design flood estimation for ungauged sites. The SCS-CN method is an empirical rainfall-runoff model developed by the USDA Natural Resources Conservation Service (formerly called the Soil Conservation Service, or SCS). The runoff curve number (CN) is based on the hydrological soil characteristics, land use, land management and antecedent saturation conditions of the soil. This study focuses on developing the SCS-CN methodology for changing land use conditions in Slovak basins (with the pilot site of the Myjava catchment), regionalizing the actual state of land use data together with current rainfall and discharge measurements of the selected river basins. The state of water erosion and sediment transport, along with a subsequent proposal of erosion control measures, was analyzed as well. The regionalized SCS-CN method was subsequently used to assess the effectiveness of these control measures in reducing runoff from the selected basin. For the determination of the sediment transport from the control measure to the Myjava basin, the SDR (Sediment Delivery Ratio) model was used.
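    For background, the standard SCS-CN runoff relation (the textbook form, not the regionalized variant developed in the study) can be sketched as follows, with the conventional initial abstraction Ia = 0.2S:

```python
def scs_cn_runoff(p_mm: float, cn: float) -> float:
    """Direct runoff depth Q (mm) from rainfall P (mm) and curve number CN,
    using the standard SCS-CN relation with initial abstraction Ia = 0.2*S."""
    s = 25400.0 / cn - 254.0       # potential maximum retention (mm)
    ia = 0.2 * s                   # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0                 # all rainfall absorbed before runoff starts
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Hypothetical example: a 100 mm storm on a CN = 80 catchment -> ~50.5 mm runoff
q = scs_cn_runoff(100.0, 80.0)
```

    Lowering CN (e.g., through the erosion control measures analyzed in the study) raises the retention S and so reduces the computed runoff depth.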

  20. Graphic representation of data resulting from measurement comparison trials in cataract and refractive surgery.

    PubMed

    Krummenauer, Frank; Storkebaum, Kristin; Dick, H Burkhard

    2003-01-01

    The evaluation of new diagnostic measurement devices involves intraindividual comparison with an established standard method. However, journal articles often fail to incorporate the intraindividual design adequately into the graphic representation. This article illustrates the drawbacks and the possible erroneous conclusions caused by this misleading practice, in terms of recent method comparison data from axial length measurement in 220 consecutive patients by both applanation ultrasound and partial coherence interferometry. Graphic representation of such method comparison data should be based on boxplots of intraindividual differences or on Bland-Altman plots. Otherwise, severe deviations between the measurement devices could be erroneously ignored and false-positive conclusions on the concordance of the instruments could result. Graphic representation of method comparison data should sensitively incorporate the underlying study design for intraindividual comparison.
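    The Bland-Altman limits of agreement recommended above are simple to compute; a minimal sketch with invented example data (not the article's dataset of 220 patients):

```python
import statistics

def bland_altman(a, b):
    """Bias and 95% limits of agreement for paired measurements a, b."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired axial-length readings (mm) from two devices
ultrasound = [23.1, 24.0, 22.8, 23.5]
interferometry = [23.0, 23.8, 22.9, 23.3]
bias, lower, upper = bland_altman(ultrasound, interferometry)
```

    Plotting each pair's difference against its mean, with these three horizontal lines, gives the Bland-Altman plot; a scatter of raw values from the two devices would hide exactly the intraindividual structure the article warns about.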

  1. Dynamical Systems in Circuit Designer's Eyes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odyniec, M.

    Examples of nonlinear circuit design are given. The focus of the design process is on theory and engineering methods (as opposed to numerical analysis). Modeling is related to measurements. It is seen that the phase plane is still very useful with proper models. Harmonic balance/describing function analysis offers powerful insight (via the combination of simulation with circuit and ODE theory). Measurement and simulation capabilities have increased, especially for harmonics measurements (since sinusoids are easy to generate).

  2. Near- and Far-Field Characterization of Planar mm-Wave Antenna Arrays with Waveguide-to-Microstrip Transition

    NASA Astrophysics Data System (ADS)

    Salhi, Mohammed Adnan; Kazemipour, Alireza; Gentille, Gennaro; Spirito, Marco; Kleine-Ostmann, Thomas; Schrader, Thorsten

    2016-09-01

    We present the design and characterization of planar mm-wave patch antenna arrays with waveguide-to-microstrip transition using both near- and far-field methods. The arrays were designed for metrological assessment of error sources in antenna measurement. One antenna was designed for the automotive radar frequency range at 77 GHz, while another was designed for the frequency of 94 GHz, which is used, e.g., for imaging radar applications. In addition to the antennas, a simple transition from rectangular waveguide WR-10 to planar microstrip line on Rogers 3003™ substrate has been designed based on probe coupling. For determination of the far-field radiation pattern of the antennas, we compare results from two different measurement methods to simulations. Both a far-field antenna measurement system and a planar near-field scanner with near-to-far-field transformation were used to determine the antenna diagrams. The fabricated antennas achieve good matching and good agreement between measured and simulated antenna diagrams. The results also show that the far-field scanner achieves more accurate measurement results with respect to simulations than the near-field scanner. The far-field antenna scanning system is built for metrological assessment and antenna calibration. The antennas are the first designed to be tested with this measurement system.

  3. Facial anthropometric measurements in Iranian male workers using Digimizer version 4.1.1.0 image analysis software: a pilot study.

    PubMed

    Salvarzi, Elham; Choobineh, Alireza; Jahangiri, Mehdi; Keshavarzi, Sareh

    2018-02-26

    Craniometry is a subset of anthropometry, which measures the anatomical sizes of the head and face (craniofacial indicators). These dimensions are used in designing devices applied in the facial area, including respirators. This study was conducted to measure craniofacial dimensions of Iranian male workers required for face protective equipment design. In this study, facial anthropometric dimensions of 50 randomly selected Iranian male workers were measured by photographic method and Digimizer version 4.1.1.0. Ten facial dimensions were extracted from photographs and measured by Digimizer version 4.1.1.0. Mean, standard deviation and 5th, 50th and 95th percentiles for each dimension were determined and the relevant data bank was established. The anthropometric data bank for the 10 dimensions required for respirator design was provided for the target group with photo-anthropometric methods. The results showed that Iranian face dimensions were different from those of other nations and ethnicities. In this pilot study, anthropometric dimensions required for half-mask respirator design for Iranian male workers were measured by Digimizer version 4.1.1.0. The obtained anthropometric tables could be useful for the design of personal face protective equipment.
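    The percentile tables described above can be generated with standard order-statistic interpolation; a minimal sketch using invented sample values (linear interpolation is assumed here and may differ from the study's exact tabulation method):

```python
import statistics

def anthro_percentiles(values):
    """5th, 50th and 95th percentiles of one facial dimension,
    using inclusive (linear-interpolation) quantiles."""
    q = statistics.quantiles(values, n=20, method="inclusive")
    return {"p5": q[0], "p50": statistics.median(values), "p95": q[-1]}

# Hypothetical face-length measurements (mm) for a small sample
sample = [108, 110, 111, 113, 114, 116, 118, 119, 121, 124]
table = anthro_percentiles(sample)
```

    For respirator sizing, the 5th and 95th percentiles bracket the range a design must accommodate, which is why the abstract reports exactly these cut points.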

  4. CALIBRATION OF INSTRUMENTS FOR RADIATION MEASUREMENTS FROM LOFTED VEHICLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banks, W.O.

    1962-05-01

    The designs and developments accomplished by the Air Proving Ground Center in support of Project TRUMP are considered. Project TRUMP pertains to the design and development of methods for measuring radiation from lofted vehicles. Several methods of simulating the space environment, for purposes of ground calibration of instruments to be lofted, are proposed. A mathematical approach, similar to that used by early Smithsonian solar constant seekers, is presented. (auth)

  5. Design and validation of instruments to measure knowledge.

    PubMed

    Elliott, T E; Regal, R R; Elliott, B A; Renier, C M

    2001-01-01

    Measuring health care providers' learning after they have participated in educational interventions that use experimental designs requires valid, reliable, and practical instruments. A literature review was conducted. In addition, experience gained from designing and validating instruments for measuring the effect of an educational intervention informed this process. The eight main steps for designing, validating, and testing the reliability of instruments for measuring learning outcomes are presented. The key considerations and rationale for this process are discussed. Methods for critiquing and adapting existent instruments and creating new ones are offered. This study may help other investigators in developing valid, reliable, and practical instruments for measuring the outcomes of educational activities.

  6. Methods of viscosity measurements in sealed ampoules

    NASA Astrophysics Data System (ADS)

    Mazuruk, Konstantin

    1999-07-01

    Viscosity of semiconductor and metallic melts is usually measured by the oscillating cup method. This method uses melts contained in vacuum-sealed silica ampoules, so that problems related to volatility, contamination, and high temperature and pressure can be alleviated. In a typical design, the time required for a single measurement is of the order of one hour. In order to reduce this time to the minute range, a high-resolution angular detection system is implemented in our design of the viscometer. Furthermore, an electromagnet generating a rotational magnetic field (RMF) is incorporated into the apparatus. This magnetic field can be used to remotely and nonintrusively measure the electrical conductivity of the melt. It can also be used to induce a well-controlled rotational flow in the system. The transient behavior of this flow can potentially yield the viscosity of the fluid. Based on the RMF implementation, two novel viscometry methods are proposed in this work: a) the transient torque method, b) the resonance method. A unified theoretical approach to the three methods is presented along with initial test results of the constructed apparatus. Advantages of each method are discussed.

  7. A standardized mean difference effect size for multiple baseline designs across individuals.

    PubMed

    Hedges, Larry V; Pustejovsky, James E; Shadish, William R

    2013-12-01

    Single-case designs are a class of research methods for evaluating treatment effects by measuring outcomes repeatedly over time while systematically introducing different conditions (e.g., treatment and control) to the same individual. The designs are used across fields such as behavior analysis, clinical psychology, special education, and medicine. Emerging standards for single-case designs have focused attention on methods for summarizing and meta-analyzing findings and on the need for effect size indices that are comparable to those used in between-subjects designs. In previous work, we discussed how to define and estimate an effect size that is directly comparable to the standardized mean difference often used in between-subjects research, based on data from a particular type of single-case design, the treatment reversal or (AB)(k) design. This paper extends the effect size measure to another type of single-case study, the multiple baseline design. We propose estimation methods for the effect size and its variance, study the estimators using simulation, and demonstrate the approach in two applications. Copyright © 2013 John Wiley & Sons, Ltd.
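    To fix ideas, a naive pooled-SD phase contrast for a single case can be sketched as below. This is illustrative only: the estimator developed in the paper additionally models between-case variation and corrects for small-sample bias, which this sketch does not.

```python
import statistics

def naive_phase_smd(baseline, treatment):
    """Naive standardized mean difference between baseline and treatment
    phases for one case, pooling the within-phase SDs. Illustrative only;
    not the bias-corrected multilevel estimator proposed in the paper."""
    mb, mt = statistics.mean(baseline), statistics.mean(treatment)
    vb, vt = statistics.variance(baseline), statistics.variance(treatment)
    nb, nt = len(baseline), len(treatment)
    pooled_var = ((nb - 1) * vb + (nt - 1) * vt) / (nb + nt - 2)
    return (mt - mb) / pooled_var ** 0.5

# Hypothetical session scores for one case across phases
d = naive_phase_smd([2, 3, 4], [7, 8, 9])  # -> 5.0
```

    The paper's contribution is precisely that such a within-case contrast is not directly comparable to a between-subjects d; its estimator rescales by a variance component that includes between-case heterogeneity.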

  8. Mock Target Window OTR and IR Design and Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wass, Alexander Joseph

    In order to fully verify temperature measurements made on the target window using infrared (IR) optical non-contact methods, actual comparative measurements are made with a real beam distribution as the heat source using Argonne National Laboratory’s (ANL) 35 MeV electron accelerator. Using Monte Carlo N-Particle (MCNP) simulations and thermal Finite Element Analysis (FEA), a cooled mock target window with thermocouple implants is designed to be used in such a test to achieve window temperatures up to 700°C. Uncoated and black-coated mock windows are designed to enhance the IR temperature measurements and verify optical transition radiation (OTR) imagery. This allows us to fully verify and characterize our temperature accuracy with our current IR camera method and any future method we may wish to explore under actual production conditions. This test also provides valuable conclusions and concerns regarding the calibration method we developed using our IR test stand at TA-53 in MPF-14.

  9. Acoustic Treatment Design Scaling Methods. Phase 2

    NASA Technical Reports Server (NTRS)

    Clark, L. (Technical Monitor); Parrott, T. (Technical Monitor); Jones, M. (Technical Monitor); Kraft, R. E.; Yu, J.; Kwan, H. W.; Beer, B.; Seybert, A. F.; Tathavadekar, P.

    2003-01-01

    The ability to design, build and test miniaturized acoustic treatment panels on scale model fan rigs representative of full scale engines provides not only cost savings, but also an opportunity to optimize the treatment by allowing multiple tests. To use scale model treatment as a design tool, the impedance of the sub-scale liner must be known with confidence. This study was aimed at developing impedance measurement methods for high frequencies. A normal incidence impedance tube method that extends the upper frequency range to 25,000 Hz without grazing flow effects was evaluated. The free field method was investigated as a potential high frequency technique. The potential of the two-microphone in-situ impedance measurement method was evaluated in the presence of grazing flow. Difficulties in achieving the high frequency goals were encountered in all methods. Results of developing a time-domain finite difference resonator impedance model indicated that a re-interpretation of the empirical fluid mechanical models used in the frequency domain model for nonlinear resistance and mass reactance may be required. A scale model treatment design that could be tested on the Universal Propulsion Simulator vehicle was proposed.

  10. Investigation of converging and collimated beam instrument geometry on specular gloss measurements

    NASA Astrophysics Data System (ADS)

    Zwinkels, Joanne C.; Côté, Éric; Morgan, John

    2018-02-01

    Specular gloss is an important appearance property of a wide variety of manufactured goods. Depending upon the application, e.g. paints, paper, ceramics, etc. different instrument designs and measurement geometries are specified in standard test methods. For a given specular angle, these instrument designs can be broadly classified as converging beam (TAPPI method) and collimated beam (DIN method). In recent comparisons of specular gloss measurements using different glossmeters, very large standard deviations have been reported, well exceeding the manufacturers claims. In this paper, we investigate the effect of instrument beam geometry on gloss measurements. These results indicate that this difference in beam geometry can give the magnitude of gloss differences reported in these comparisons and highlights the importance of educating the user community of best measurement practices and obtaining appropriate traceability for their glossmeters.

  11. A Laboratory Experiment to Measure the Built-In Potential of a P-N Junction by a Photosaturation Method

    ERIC Educational Resources Information Center

    Ikram, I. Mohamed; Rabinal, M. K.; Mulimani, B. G.

    2009-01-01

    Here, we propose a simple method for measuring the built-in potential and its temperature dependence of a photodiode by a photosaturation technique. The experimental design facilitates both current-voltage and null voltage measurements as a function of white light intensity. This method gives the built-in potential directly; as a result its…

  12. A Study of Trial and Error Learning in Technology, Engineering, and Design Education

    ERIC Educational Resources Information Center

    Franzen, Marissa Marie Sloan

    2016-01-01

    The purpose of this research study was to determine if trial and error learning was an effective, practical, and efficient learning method for Technology, Engineering, and Design Education students at the post-secondary level. A mixed methods explanatory research design was used to measure the viability of the learning source. The study sample was…

  13. Method of high precision interval measurement in pulse laser ranging system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong

    2013-09-01

    Laser ranging offers high measuring precision, fast measuring speed, no need for cooperative targets and strong resistance to electromagnetic interference; the time interval measurement is the key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is determined by the precision of its time interval measurement. The principal structure of the laser ranging system is introduced, and a method of high-precision time interval measurement in a pulse laser ranging system is established in this paper. Based on an analysis of the factors that affect range-measurement precision, a pulse rising-edge discriminator was adopted to produce the timing mark for start-stop time discrimination, and a TDC-GP2 high-precision interval measurement system based on a TMS320F2812 DSP was designed to improve the measurement precision. Experimental results indicate that the time interval measurement method in this paper can obtain higher range accuracy. Compared with traditional time interval measurement systems, the method simplifies the system design and reduces the influence of bad weather conditions; furthermore, it satisfies the requirements of low cost and miniaturization.

  14. 3D Measurement of Anatomical Cross-sections of Foot while Walking

    NASA Astrophysics Data System (ADS)

    Kimura, Makoto; Mochimaru, Masaaki; Kanade, Takeo

    Recently, techniques for measuring and modeling the human body have been attracting attention, because human models are useful for ergonomic design in manufacturing. We aim to measure the accurate shape of the human foot, which will be useful for the design of shoes. For this purpose, shape measurement of the foot in motion is clearly important, because the foot shape inside the shoe is deformed while walking or running. In this paper, we propose a method to measure anatomical cross-sections of the foot while walking. No one has previously measured the dynamic shape of anatomical cross-sections, though they are very basic and popular in the field of biomechanics. Our proposed method is based on a multi-view stereo method. The target cross-sections are painted in individual colors (red, green, yellow and blue), and the proposed method utilizes the characteristics of the target shape in the camera-captured images. Several nonlinear conditions are introduced in the process to find consistent correspondence across all images. Our desired accuracy is less than 1 mm error, which is similar to that of existing 3D scanners for static foot measurement. In our experiments, the proposed method achieved the desired accuracy.

  15. Pressure garment design tool to monitor exerted pressures.

    PubMed

    Macintyre, Lisa; Ferguson, Rhona

    2013-09-01

    Pressure garments are used in the treatment of hypertrophic scarring following serious burns. The use of pressure garments is believed to hasten the maturation process, reduce pruritus associated with immature hypertrophic scars and prevent the formation of contractures over flexor joints. Pressure garments are normally made to measure for individual patients from elastic fabrics and are worn continuously for up to 2 years or until scar maturation. There are 2 methods of constructing pressure garments. The most common method, called the Reduction Factor method, involves reducing the patient's circumferential measurements by a certain percentage. The second method uses the Laplace Law to calculate the dimensions of pressure garments based on the circumferential measurements of the patient and the tension profile of the fabric. The Laplace Law method is complicated to utilise manually and no design tool is currently available to aid this process. This paper presents the development and suggested use of 2 new pressure garment design tools that will aid pressure garment design using the Reduction Factor and Laplace Law methods. Both tools calculate the pressure garment dimensions and the mean pressure that will be exerted around the body at each measurement point. Monitoring the pressures exerted by pressure garments and noting the clinical outcome would enable clinicians to build an understanding of the implications of particular pressures on scar outcome, maturation times and patient compliance rates. Once the optimum pressure for particular treatments is known, the Laplace Law method described in this paper can be used to deliver those average pressures to all patients. This paper also presents the results of a small scale audit of measurements taken for the fabrication of pressure garments in two UK hospitals. 
This audit highlights the wide range of pressures that are exerted using the Reduction Factor method and that manual pattern 'smoothing' can dramatically change the actual Reduction Factors used. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
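    The Laplace Law method described above relates interface pressure to fabric tension and limb circumference (for a cylindrical limb, P = T/r = 2πT/C). A minimal sketch with invented numbers, not the authors' design tool:

```python
import math

def interface_pressure_mmhg(fabric_tension_n_per_m: float,
                            limb_circumference_m: float) -> float:
    """Mean pressure exerted by a cylindrical garment band, from the
    Laplace relation P = T / r = 2*pi*T / C, converted from Pa to mmHg."""
    pressure_pa = 2.0 * math.pi * fabric_tension_n_per_m / limb_circumference_m
    return pressure_pa / 133.322  # 1 mmHg = 133.322 Pa

# Hypothetical: 100 N/m fabric tension on a limb of 0.30 m circumference
p = interface_pressure_mmhg(100.0, 0.30)  # ~15.7 mmHg
```

    The relation makes the clinical difficulty visible: for a fixed fabric tension, small-circumference regions (wrists, ankles) receive much higher pressure than large ones, which a fixed Reduction Factor cannot account for.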

  16. Aeronautical concerns and National Aeronautics and Space Administration atmospheric electricity projects

    NASA Technical Reports Server (NTRS)

    Vaughan, W. W.

    1980-01-01

    The phenomenology of lightning and lightning measurement techniques are briefly examined with a particular reference to aeronautics. Developments made in airborne and satellite detection methods are reported. NASA research efforts are outlined which cover topics including in-situ measurements, design factors and protection, remote optical and radio frequency measurements, and space vehicle design.

  17. Anthropometric and biomechanical characteristics on body segments of Koreans.

    PubMed

    Park, S J; Kim, C B; Park, S C

    1999-05-01

    This paper documents physical measurements of the Korean population made in order to construct a data base for ergonomic design. The dimensions, volume, density, mass, and center of mass of body segments of Koreans aged 7 to 49 were investigated. Sixty-five male subjects and sixty-nine female subjects participated. Eight body segments (head with neck, trunk, thigh, shank, foot, upper arm, forearm and hand) were directly measured with a Martin-type anthropometer, and the immersion method was adopted to measure the volume of the body segments. Densities were then computed using the density equations of Drillis and Contini (1966). The reaction board method was employed for the measurement of the center of mass. The obtained data were compared with results in the literature and showed differences in body segment parameters. The constructed data base can be applied as a statistical guideline for product design, workspace design, design of clothing and tools, furniture design and the construction of biomechanical models for Koreans. The results can also be extended to application areas for Mongolians.
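    The reaction board method mentioned above rests on a simple moment balance about the knife-edge support: W_body · x = ΔF_scale · L. A minimal sketch under assumed quantities (board of known length resting on a knife edge and a scale, with the board-only scale reading subtracted out); this is the textbook form, not necessarily the authors' exact protocol:

```python
def com_distance_from_knife_edge(scale_with_subject_n: float,
                                 scale_board_only_n: float,
                                 board_length_m: float,
                                 body_weight_n: float) -> float:
    """Distance of the body's center of mass from the knife-edge support,
    from the moment balance  W_body * x = dF_scale * L."""
    net_scale_force = scale_with_subject_n - scale_board_only_n
    return net_scale_force * board_length_m / body_weight_n

# Hypothetical readings: scale rises from 150 N to 500 N, 2 m board, 700 N subject
x = com_distance_from_knife_edge(500.0, 150.0, 2.0, 700.0)  # -> 1.0 m
```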

  18. Design of PCB search coils for AC magnetic flux density measurement

    NASA Astrophysics Data System (ADS)

    Ulvr, Michal

    2018-04-01

    This paper presents single-layer, double-layer and ten-layer planar square search coils designed for AC magnetic flux density amplitude measurement up to 1 T in the low frequency range in a 10 mm air gap. The printed-circuit-board (PCB) method was used for producing the search coils. Special attention is given to a full characterization of the PCB search coils including a comparison between the detailed analytical design method and the finite integration technique method (FIT) on the one hand, and experimental results on the other. The results show very good agreement in the resistance, inductance and search coil constant values (the area turns) and also in the frequency dependence of the search coil constant.
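    The search coil constant referred to above is the effective area-turns product N·A; for a sinusoidal field, the induced rms EMF follows from Faraday's law. A minimal sketch with invented values (not the paper's coil parameters):

```python
import math

def search_coil_emf_rms(n_turns: int, area_m2: float,
                        freq_hz: float, b_rms_t: float) -> float:
    """RMS EMF induced in an N-turn coil of area A by a sinusoidal flux
    density of rms value B at frequency f:  V = 2*pi*f*N*A*B."""
    return 2.0 * math.pi * freq_hz * n_turns * area_m2 * b_rms_t

# Hypothetical: 10-turn coil, 1 cm^2 area, 50 Hz, 1 T rms -> ~0.314 V
v = search_coil_emf_rms(10, 1e-4, 50.0, 1.0)
```

    Multilayer PCB coils raise N·A without enlarging the footprint, which is why the paper compares single-, double- and ten-layer designs for the same 10 mm air gap.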

  19. A Robust Strategy for Total Ionizing Dose Testing of Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Wilcox, Edward; Berg, Melanie; Friendlich, Mark; Lakeman, Joseph; KIm, Hak; Pellish, Jonathan; LaBel, Kenneth

    2012-01-01

    We present a novel method of FPGA TID testing that measures propagation delay between flip-flops operating at maximum speed. Measurement is performed on-chip at-speed and provides a key design metric when building system-critical synchronous designs.

  20. Robust surface reconstruction by design-guided SEM photometric stereo

    NASA Astrophysics Data System (ADS)

    Miyamoto, Atsushi; Matsuse, Hiroki; Koutaki, Gou

    2017-04-01

    We present a novel approach that addresses the blind reconstruction problem in scanning electron microscope (SEM) photometric stereo for complicated semiconductor patterns to be measured. In our previous work, we developed a bootstrapping de-shadowing and self-calibration (BDS) method, which automatically calibrates the parameter of the gradient measurement formulas and resolves shadowing errors for estimating an accurate three-dimensional (3D) shape and underlying shadowless images. Experimental results on 3D surface reconstruction demonstrated the significance of the BDS method for simple shapes, such as an isolated line pattern. However, we found that complicated shapes, such as line-and-space (L&S) and multilayered patterns, produce deformed and inaccurate measurement results. This problem is due to brightness fluctuations in the SEM images, which are mainly caused by the energy fluctuations of the primary electron beam, variations in the electronic expanse inside a specimen, and electrical charging of specimens. Despite these being essential difficulties encountered in SEM photometric stereo, it is difficult to model accurately all the complicated physical phenomena of electronic behavior. We improved the robustness of the surface reconstruction in order to deal with these practical difficulties with complicated shapes. Here, design data are useful clues as to the pattern layout and layer information of integrated semiconductors. We used the design data as a guide of the measured shape and incorporated a geometrical constraint term to evaluate the difference between the measured and designed shapes into the objective function of the BDS method. Because the true shape does not necessarily correspond to the designed one, we use an iterative scheme to develop proper guide patterns and a 3D surface that provides both a less distorted and more accurate 3D shape after convergence. 
Extensive experiments on real image data demonstrate the robustness and effectiveness of our method.

  1. DESIGN NOTE: New apparatus for haze measurement for transparent media

    NASA Astrophysics Data System (ADS)

    Yu, H. L.; Hsiao, C. C.; Liu, W. C.

    2006-08-01

    Precise measurement of luminous transmittance and haze of transparent media is increasingly important to the LCD industry. Currently there are at least three documentary standards for measuring transmission haze. Unfortunately, none of those standard methods by itself can obtain precise values for the diffuse transmittance (DT), total transmittance (TT) and haze. This note presents a new apparatus capable of precisely measuring all three variables simultaneously. Compared with current structures, the proposed design contains one additional compensatory port. In the optimal design, the light trap absorbs the beam completely, light scattered by the instrument is zero, and the interior surface of the integrating sphere, the baffle and the reflectance standard have equal characteristics. Accurate values of TT, DT and haze can be obtained using the new apparatus. Even if the design is not optimal, the measurement errors of the new apparatus are smaller than those of other methods, especially for high sphere reflectance. Therefore, the sphere can be made of a high reflectance material to increase the signal-to-noise ratio.
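    Transmission haze, as used in the documentary standards mentioned, is the ratio of diffuse to total transmittance; a minimal sketch with invented readings:

```python
def haze_percent(diffuse_t: float, total_t: float) -> float:
    """Transmission haze in percent: 100 * DT / TT."""
    return 100.0 * diffuse_t / total_t

# Hypothetical readings: DT = 2.7%, TT = 90% -> 3.0% haze
h = haze_percent(2.7, 90.0)
```

    Because haze is a ratio of two measured quantities, errors in either DT or TT propagate directly into the haze value, which is the motivation for an apparatus that measures both simultaneously.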

  2. Approaches to chronic disease management evaluation in use in Europe: a review of current methods and performance measures.

    PubMed

    Conklin, Annalijn; Nolte, Ellen; Vrijhoef, Hubertus

    2013-01-01

    An overview was produced of the approaches currently used to evaluate chronic disease management in selected European countries. The study aims to describe the methods and metrics used in Europe as a first step toward advancing the methodological basis for their assessment. A common template for collecting evaluation methods and performance measures was sent to key informants in twelve European countries; responses were summarized in tables based on the template's evaluation categories, and the extracted data were descriptively analyzed. Approaches to the evaluation of chronic disease management vary widely in objectives, designs, metrics, observation periods, and data collection methods. Half of the reported studies used noncontrolled designs. Most measure clinical processes, patient behavior and satisfaction, and cost and utilization; several also used a range of structural indicators. Effects are usually observed over 1 or 3 years on patient populations with a single, commonly prevalent chronic disease. There is wide variation within and between European countries in how chronic disease management is evaluated: in objectives, designs, indicators, target audiences, and the actors involved. This study is the first extensive international overview of the area reported in the literature.

  3. Electronic system for floor surface type detection in robotics applications

    NASA Astrophysics Data System (ADS)

    Tarapata, Grzegorz; Paczesny, Daniel; Tarasiuk, Łukasz

    2016-11-01

    The paper reports a recognition method based on ultrasonic transducers for detecting surface types. An ultrasonic signal is transmitted toward the examined substrate; the reflected and scattered signal then returns to a second ultrasonic receiver. The measuring signal is generated by a piezoelectric transducer located at a specified distance from the tested substrate, and the detector is a second piezoelectric transducer located next to the transmitter. Depending on the type of substrate exposed to the ultrasonic wave, the signal is partially absorbed in the material, diffused, and reflected towards the receiver. To measure the level of the received signal, a dedicated electronic circuit was designed and implemented in the presented system. The system was designed to recognize two types of floor surface: solid (such as concrete, ceramic tiles, or wood) and soft (carpets, floor coverings). The method will be applied in an electronic detection system for autonomous cleaning robots to select the appropriate cleaning method. This work presents the concept of using ultrasonic signals, the design of both the measurement system and the measuring stand, and a wide range of test results that validate the correctness of the applied ultrasonic method.
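
    The classification step described above reduces to thresholding the received echo level: soft surfaces absorb and diffuse more of the ultrasonic energy, so the receiver sees a weaker signal. A minimal sketch of that decision; the threshold value and signal units are illustrative assumptions, not from the paper:

```python
def classify_floor(received_level_mv: float, threshold_mv: float = 50.0) -> str:
    """Classify a floor surface from the received ultrasonic echo level.

    Solid floors (concrete, tile, wood) reflect strongly; soft floors
    (carpet, coverings) absorb and scatter, yielding a weaker echo.
    """
    return "solid" if received_level_mv >= threshold_mv else "soft"

print(classify_floor(120.0))  # strong echo -> classified as solid
print(classify_floor(12.0))   # weak echo -> classified as soft
```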

  4. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    NASA Astrophysics Data System (ADS)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations on investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best strategy among a set of investigation strategies: they optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulation, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to provide these crucial advantages simultaneously, which our method buys at relatively higher computational cost. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods, which confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs.
Finally, we extend our example to specifically highlight the consideration of conceptual model uncertainty.
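
    The core idea, marginalizing a utility measure over the yet-unknown data values, can be illustrated with a toy one-dimensional preposterior analysis in the bootstrap-filter style. This is a hedged sketch of the general idea, not the authors' implementation; the prior, the measurement model, and the sample sizes are all illustrative assumptions:

```python
import math
import random

random.seed(7)

def expected_posterior_variance(noise_std: float,
                                n_prior: int = 400,
                                n_datasets: int = 200) -> float:
    """Preposterior data-impact sketch for one scalar parameter.

    Prior: theta ~ N(0, 1); measurement: y = theta + N(0, noise_std^2).
    For many synthetic datasets, the prior ensemble is weighted with the
    Gaussian likelihood (bootstrap-filter style) and the resulting
    posterior variances are averaged -- the expected prediction variance
    *after* collecting data with the candidate design.
    """
    prior = [random.gauss(0.0, 1.0) for _ in range(n_prior)]
    total = 0.0
    for _ in range(n_datasets):
        truth = random.gauss(0.0, 1.0)
        y = truth + random.gauss(0.0, noise_std)
        w = [math.exp(-0.5 * ((y - th) / noise_std) ** 2) for th in prior]
        s = sum(w)
        w = [wi / s for wi in w]
        mean = sum(wi * th for wi, th in zip(w, prior))
        total += sum(wi * (th - mean) ** 2 for wi, th in zip(w, prior))
    return total / n_datasets

# A more precise sensor (smaller noise) promises a smaller expected
# posterior variance, i.e. a higher expected data impact.
print(expected_posterior_variance(0.2) < expected_posterior_variance(1.0))
```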

  5. PET Timing Performance Measurement Method Using NEMA NEC Phantom

    NASA Astrophysics Data System (ADS)

    Wang, Gin-Chung; Li, Xiaoli; Niu, Xiaofeng; Du, Huini; Balakrishnan, Karthik; Ye, Hongwei; Burr, Kent

    2016-06-01

    When comparing the performance of time-of-flight whole-body PET scanners, timing resolution is one important benchmark. Timing performance is heavily influenced by detector and electronics design. Even for the same scanner design, measured timing resolution is a function of many factors including the activity concentration, geometry and positioning of the radioactive source. Due to lack of measurement standards, the timing resolutions reported in the literature may not be directly comparable and may not describe the timing performance under clinically relevant conditions. In this work we introduce a method which makes use of the data acquired during the standard NEMA Noise-Equivalent-Count-Rate (NECR) measurements, and compare it to several other timing resolution measurement methods. The use of the NEMA NEC phantom, with well-defined dimensions and radioactivity distribution, is attractive because it has been widely accepted in the industry and allows for the characterization of timing resolution across a more relevant range of conditions.
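
    Coincidence timing resolution is conventionally quoted as the FWHM of the distribution of time differences between paired detectors; for an approximately Gaussian distribution, FWHM = 2√(2 ln 2)·σ ≈ 2.355σ. A minimal sketch of that conversion (illustrative only, not the NEMA procedure itself):

```python
import math

def timing_fwhm_ps(time_diffs_ps: list) -> float:
    """FWHM of an (assumed Gaussian) coincidence time-difference sample."""
    n = len(time_diffs_ps)
    mean = sum(time_diffs_ps) / n
    var = sum((t - mean) ** 2 for t in time_diffs_ps) / (n - 1)
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * math.sqrt(var)

# A sample with standard deviation 100 ps gives FWHM of about 235.5 ps.
print(timing_fwhm_ps([-100.0, 0.0, 100.0]))
```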

  6. An Analysis of Measured Pressure Signatures From Two Theory-Validation Low-Boom Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.

    2003-01-01

    Two wing/fuselage/nacelle/fin concepts were designed to check the validity and applicability of sonic-boom minimization theory, sonic-boom analysis methods, and the low-boom design methodology in use at the end of the 1980s. Models of these concepts were built, and the pressure signatures they generated were measured in the wind tunnel. The results of these measurements led to three conclusions: (1) the existing methods could adequately predict the sonic-boom characteristics of wing/fuselage/fin(s) configurations if the equivalent area distributions of each component were smooth and continuous; (2) these methods needed revision so that the engine-nacelle volume and nacelle-wing interference-lift disturbances could be accurately predicted; and (3) current nacelle-configuration integration methods had to be updated. With these changes in place, the existing sonic-boom analysis and minimization methods could be effectively applied to supersonic-cruise concepts for acceptable/tolerable sonic-boom overpressures during cruise.

  7. A Wireless Fluid-Level Measurement Technique

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Taylor, Bryant D.

    2006-01-01

    This paper presents the application of a recently developed wireless measurement acquisition system to fluid-level measurement. This type of fluid-level measurement system alleviates many shortcomings of fluid-level measurement methods currently in use, including the limited applicability of any one fluid-level sensor design. Measurement acquisition shortcomings include the necessity for power to be supplied to each sensor and for the measurement to be extracted from each sensor via a physical connection. Another shortcoming is that existing measurement systems require a data channel and signal-conditioning electronics dedicated to each sensor. Use of wires brings further shortcomings, such as the logistics needed to add or replace sensors, weight, and the potential for electrical arcing and wire degradation. The fluid-level sensor design is a simple passive inductor-capacitor circuit that is not subject to the mechanical failure possible in float and lever-arm systems. Methods are presented for using the sensor in caustic, acidic or cryogenic fluids. Oscillating magnetic fields are used to power the sensor. Once electrically excited, the sensor produces a magnetic field response. The response frequency corresponds to the amount of fluid within the capacitor's electric field. The sensor design can be modified for measuring the level of any fluid or fluent substance that can be stored in a non-conductive reservoir. The interrogation method for discerning changes in the sensor response frequency is also presented.
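
    The passive sensor's response frequency follows the standard LC resonance relation f = 1/(2π√(LC)): as fluid rises between the capacitor plates, the effective capacitance grows and the response frequency drops. A toy sketch assuming the capacitance varies linearly between dry and full values; the component values are illustrative, not from the paper:

```python
import math

def response_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of a passive LC sensor circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

def level_capacitance_f(fill_fraction: float,
                        c_empty_f: float = 50e-12,
                        c_full_f: float = 200e-12) -> float:
    """Effective capacitance for a given fluid level (linear model)."""
    return c_empty_f + fill_fraction * (c_full_f - c_empty_f)

L = 10e-6  # 10 uH coil (illustrative)
for fill in (0.0, 0.5, 1.0):
    f = response_frequency_hz(L, level_capacitance_f(fill))
    print(f"fill={fill:.1f}  f={f / 1e6:.3f} MHz")  # frequency falls as level rises
```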

  8. Rotor design for maneuver performance

    NASA Technical Reports Server (NTRS)

    Berry, John D.; Schrage, Daniel

    1986-01-01

    A method of determining the sensitivity of helicopter maneuver performance to changes in basic rotor design parameters is developed. Maneuver performance is measured by the time required, based on a simplified rotor/helicopter performance model, to perform a series of specified maneuvers. This method identifies parameter values which result in minimum time quickly because of the inherent simplicity of the rotor performance model used. For the specific case studied, this method predicts that the minimum time required is obtained with a low disk loading and a relatively high rotor solidity. The method was developed as part of the winning design effort for the American Helicopter Society student design competition for 1984/1985.

  9. Rotating rake design for unique measurement of fan-generated spinning acoustic modes

    NASA Technical Reports Server (NTRS)

    Konno, Kevin E.; Hausmann, Clifford R.

    1993-01-01

    In light of the current emphasis on noise reduction in subsonic aircraft design, NASA has been actively studying the sources and propagation of noise generated by subsonic fan engines. NASA/LeRC has developed and tested a unique method of accurately measuring the spinning acoustic modes generated by an experimental fan. This mode-measuring method is based on the use of a rotating microphone rake. Testing was conducted in the 9 x 15 Low-Speed Wind Tunnel, where the rotating rake was tested with the Advanced Ducted Propeller (ADP) model. This memorandum discusses the design and performance of the motor/drive system for the fan-synchronized rotating acoustic rake. This novel motor/drive design approach is now being adapted for additional acoustic mode studies in new test rigs, providing baseline data for the future design of active noise control for subsonic fan engines. Included in this memorandum are the research requirements, motor/drive specifications, test performance results, and a description of the controls and software involved.

  10. A Safety Index and Method for Flightdeck Evaluation

    NASA Technical Reports Server (NTRS)

    Latorella, Kara A.

    2000-01-01

    If our goal is to improve safety through machine, interface, and training design, then we must define a metric of flightdeck safety that is usable in the design process. Current measures associated with our notions of "good" pilot performance and ultimate safety of flightdeck performance fail to provide an adequate index of safe flightdeck performance for design evaluation purposes. The goal of this research effort is to devise a safety index and method that allows us to evaluate flightdeck performance holistically and in a naturalistic experiment. This paper uses Reason's model of accident causation (1990) as a basis for measuring safety, and proposes a relational database system and method for 1) defining a safety index of flightdeck performance, and 2) evaluating the "safety" afforded by flightdeck performance for the purpose of design iteration. Methodological considerations, limitations, and benefits are discussed as well as extensions to this work.

  11. A novel rheometer design for yield stress fluids

    Treesearch

    Joseph R. Samaniuk; Timothy W. Shay; Thatcher W. Root; Daniel J. Klingenberg; C. Tim Scott

    2014-01-01

    An inexpensive, rapid method for measuring the rheological properties of yield stress fluids is described and tested. The method uses an auger that does not rotate during measurements, and avoids material and instrument-related difficulties, for example, wall slip and the presence of large particles, associated with yield stress fluids. The method can be used...

  12. Methods of Viscosity Measurements in Sealed Ampoules

    NASA Technical Reports Server (NTRS)

    Mazuruk, Konstantin

    1999-01-01

    The viscosity of semiconductor and metallic melts is usually measured by the oscillating-cup method. This method keeps the melts contained in vacuum-sealed silica ampoules, so problems related to volatility, contamination, and high temperature and pressure can be alleviated. In a typical design, the time required for a single measurement is of the order of one hour. In order to reduce this time to the minute range, a high-resolution (0.05 arcsec) angular detection system is implemented in our design of the viscometer. Furthermore, an electromagnet generating a rotating magnetic field (RMF) is incorporated into the apparatus. This magnetic field can be used to remotely and nonintrusively measure the electrical conductivity of the melt. It can also be used to induce a well-controlled rotational flow in the system; the transient behavior of this flow can potentially yield the viscosity of the fluid. Based on the RMF implementation, two novel viscometry methods are proposed in this work: (a) the transient torque method and (b) the resonance method. A unified theoretical approach to the three methods (oscillating cup, transient torque, and resonance) is presented along with initial test results from the constructed apparatus. Advantages of each method are discussed.

  13. Fusing Range Measurements from Ultrasonic Beacons and a Laser Range Finder for Localization of a Mobile Robot

    PubMed Central

    Ko, Nak Yong; Kuc, Tae-Yong

    2015-01-01

    This paper proposes a method for mobile robot localization in a partially unknown indoor environment. The method fuses two types of range measurements: the range from the robot to the beacons measured by ultrasonic sensors and the range from the robot to the walls surrounding the robot measured by a laser range finder (LRF). For the fusion, the unscented Kalman filter (UKF) is utilized. Because finding the Jacobian matrix is not feasible for range measurement using an LRF, UKF has an advantage in this situation over the extended KF. The locations of the beacons and range data from the beacons are available, whereas the correspondence of the range data to the beacon is not given. Therefore, the proposed method also deals with the problem of data association to determine which beacon corresponds to the given range data. The proposed approach is evaluated using different sets of design parameter values and is compared with the method that uses only an LRF or ultrasonic beacons. Comparative analysis shows that even though ultrasonic beacons are sparsely populated, have a large error and have a slow update rate, they improve the localization performance when fused with the LRF measurement. In addition, proper adjustment of the UKF design parameters is crucial for full utilization of the UKF approach for sensor fusion. This study contributes to the derivation of a UKF-based design methodology to fuse two exteroceptive measurements that are complementary to each other in localization. PMID:25970259
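
    The advantage noted above, that the UKF needs no Jacobian, comes from the unscented transform: deterministically chosen sigma points are pushed through the nonlinear range function, and the predicted measurement statistics are recovered from weighted samples. Below is a minimal 2-D sketch of that transform for a single beacon range. The state, beacon position, and scaling parameters are common textbook defaults, not values from the paper:

```python
import math

def cholesky_2x2(p):
    """Lower-triangular Cholesky factor of a 2x2 SPD matrix."""
    l11 = math.sqrt(p[0][0])
    l21 = p[1][0] / l11
    l22 = math.sqrt(p[1][1] - l21 * l21)
    return [[l11, 0.0], [l21, l22]]

def predict_range(mean, cov, beacon, alpha=0.1, kappa=0.0):
    """Unscented-transform prediction of a beacon-range measurement."""
    n = 2
    lam = alpha * alpha * (n + kappa) - n
    scaled = [[cov[i][j] * (n + lam) for j in range(n)] for i in range(n)]
    L = cholesky_2x2(scaled)
    sigmas = [mean]
    for j in range(n):
        col = (L[0][j], L[1][j])
        sigmas.append((mean[0] + col[0], mean[1] + col[1]))
        sigmas.append((mean[0] - col[0], mean[1] - col[1]))
    weights = [lam / (n + lam)] + [1.0 / (2.0 * (n + lam))] * (2 * n)
    ranges = [math.hypot(s[0] - beacon[0], s[1] - beacon[1]) for s in sigmas]
    return sum(w * r for w, r in zip(weights, ranges))

# Robot believed near (1, 1) with small uncertainty; beacon at (4, 5).
z_pred = predict_range((1.0, 1.0), [[0.01, 0.0], [0.0, 0.01]], (4.0, 5.0))
print(z_pred)  # close to the true range of 5.0
```

In a full UKF the same sigma points also yield the innovation covariance and cross-covariance used in the Kalman gain; the sketch stops at the predicted measurement to keep the transform itself visible.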

  14. Alcohol Warning Label Awareness and Attention: A Multi-method Study.

    PubMed

    Pham, Cuong; Rundle-Thiele, Sharyn; Parkinson, Joy; Li, Shanshi

    2018-01-01

    Evaluation of alcohol warning labels requires careful consideration to ensure that research captures more than awareness, given that labels may not be prominent enough to attract attention. This study investigates attention to current in-market alcohol warning labels and examines whether attention can be enhanced through theoretically informed design. Attention scores obtained through self-report methods are compared to objective measures (eye-tracking). A multi-method experimental design was used, delivering four conditions: control, colour, size, and colour and size. The first study (n = 559) involved a self-report survey to measure attention. The second study (n = 87) utilized eye-tracking to measure fixation count, fixation duration, and time to first fixation. Analysis of Variance (ANOVA) was utilized. Eye-tracking identified that 60% of participants looked at the current in-market alcohol warning label while 81% looked at the optimized design (larger and red). In line with observed attention, self-reported attention increased for the optimized design. The current study casts doubt on dominant practices (largely self-report), which have been used to evaluate alcohol warning labels. Awareness cannot be used to assess warning label effectiveness in isolation in cases where attention does not occur 100% of the time. Mixed methods permit objective data collection methodologies to be triangulated with surveys to assess warning label effectiveness. Attention should be incorporated as a measure in warning label effectiveness evaluations. Colour and size changes to the existing Australian warning labels, aided by theoretically informed design, increased attention. © The Author 2017. Medical Council on Alcohol and Oxford University Press. All rights reserved.

  15. SAM 2.1—A computer program for plotting and formatting surveying data for estimating peak discharges by the slope-area method

    USGS Publications Warehouse

    Hortness, J.E.

    2004-01-01

    The U.S. Geological Survey (USGS) measures discharge in streams using several methods. However, measurement of peak discharges is often impossible or impractical because of difficult access, the inherent danger of making measurements during flood events, and the timing of flood events. Thus, many peak discharge values are calculated after the fact by indirect methods. The most common indirect method for estimating peak discharges in streams is the slope-area method. This, like other indirect methods, requires measuring the flood profile through detailed surveys. Processing the survey data for efficient entry into computer streamflow models can be time demanding; SAM 2.1 is a program designed to expedite that process. The SAM 2.1 computer program is designed to be run in the field on a portable computer. The program processes digital surveying data obtained from an electronic surveying instrument during slope-area measurements. After all measurements have been completed, the program generates files for input into the SAC (Slope-Area Computation program; Fulford, 1994) or HEC-RAS (Hydrologic Engineering Center-River Analysis System; Brunner, 2001) computer streamflow models so that an estimate of the peak discharge can be calculated.
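
    The slope-area method rests on Manning's equation: with a surveyed water-surface slope S, flow area A, and hydraulic radius R, the discharge in SI units is Q = (1/n)·A·R^(2/3)·S^(1/2). A one-section sketch of the core computation; the roughness coefficient and channel numbers are illustrative, and SAC/HEC-RAS additionally handle multi-section reaches, expansion losses, and so on:

```python
def manning_discharge_si(n: float, area_m2: float,
                         hydraulic_radius_m: float, slope: float) -> float:
    """Discharge (m^3/s) from Manning's equation in SI units."""
    return (1.0 / n) * area_m2 * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Illustrative high-water survey: n = 0.035, A = 20 m^2, R = 1.5 m, S = 0.002
print(manning_discharge_si(0.035, 20.0, 1.5, 0.002))  # about 33.5 m^3/s
```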

  16. Near Field HF Antenna Pattern Measurement Method Using an Antenna Pattern Range

    DTIC Science & Technology

    2015-12-01

    Year 2015 by the Applied Electromagnetics Branch (Code 52250) of the System of Systems (SoS) & Platform Design Division (Code 52200), Space and...Head SoS & Platform Design Division EXECUTIVE SUMMARY The Antenna Pattern Range (APR) is an essential measurement facility operated at Space...INTRODUCTION Accurate characterization of antennas designed to support the warfighter is a critical

  17. Fabrication of Organic Radar Absorbing Materials: A Report on the TIF Project

    DTIC Science & Technology

    2005-05-01

    thickness, permittivity and permeability. The ability to measure the permittivity and permeability is an essential requirement for designing an optimised...absorber. And good optimisation codes are required in order to achieve the best possible absorber designs. In this report, the results from a...through measurement of their conductivity and permittivity at microwave frequencies. Methods were then developed for optimising the design of

  18. The Numerical Calculation and Experimental Measurement of the Inductance Parameters for Permanent Magnet Synchronous Motor in Electric Vehicle

    NASA Astrophysics Data System (ADS)

    Jiang, Chao; Qiao, Mingzhong; Zhu, Peng

    2017-12-01

    A permanent magnet synchronous motor with a radial magnetic circuit and a built-in permanent magnet is designed for the electric vehicle. Finite-element numerical calculation and experimental measurement are adopted to obtain the direct-axis and quadrature-axis inductance parameters of the motor, which are vitally important for motor control. The calculation method is simple, the measuring principle is clear, and the results of numerical calculation and experimental measurement confirm each other. A quick and effective method is thus provided to obtain the direct-axis and quadrature-axis inductance parameters of the motor and then improve the motor design or adjust the control parameters of the motor controller.

  19. Study on Measuring the Viscosity of Lubricating Oil by Viscometer Based on Hele-Shaw Principle

    NASA Astrophysics Data System (ADS)

    Li, Longfei

    2017-12-01

    In order to explore how to accurately measure the viscosity of oil samples using a viscometer based on the Hele-Shaw principle, three different measurement methods were designed in the laboratory and the statistical characteristics of the measured values were compared to identify the best measurement method. The results show that the oil sample to be measured should be placed in the magnetic field formed by the magnet and drawn from the same distance from the magnet; the viscosity of the sample can then be measured accurately.

  20. 40 CFR 799.9420 - TSCA carcinogenicity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... inhalation equipment designed to sustain a minimum air flow of 10 air changes per hr, an adequate oxygen... sufficient. If pretest measurements are not within 10% of each other, three to four measurements should be... methods including significance criteria shall be selected during the design of the study. (2) Evaluation...

  1. 40 CFR 799.9420 - TSCA carcinogenicity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... inhalation equipment designed to sustain a minimum air flow of 10 air changes per hr, an adequate oxygen... sufficient. If pretest measurements are not within 10% of each other, three to four measurements should be... methods including significance criteria shall be selected during the design of the study. (2) Evaluation...

  2. 40 CFR 799.9420 - TSCA carcinogenicity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... inhalation equipment designed to sustain a minimum air flow of 10 air changes per hr, an adequate oxygen... sufficient. If pretest measurements are not within 10% of each other, three to four measurements should be... methods including significance criteria shall be selected during the design of the study. (2) Evaluation...

  3. 40 CFR 799.9420 - TSCA carcinogenicity.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... inhalation equipment designed to sustain a minimum air flow of 10 air changes per hr, an adequate oxygen... sufficient. If pretest measurements are not within 10% of each other, three to four measurements should be... methods including significance criteria shall be selected during the design of the study. (2) Evaluation...

  4. 40 CFR 799.9420 - TSCA carcinogenicity.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... inhalation equipment designed to sustain a minimum air flow of 10 air changes per hr, an adequate oxygen... sufficient. If pretest measurements are not within 10% of each other, three to four measurements should be... methods including significance criteria shall be selected during the design of the study. (2) Evaluation...

  5. Skylab experiments. Volume 7: Living and working in space. [Skylab mission data on human factors engineering and spacecraft components for high school level education

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Experiments conducted on the Skylab vehicle to measure and evaluate the ability of the crew to live and work effectively in space are discussed. The methods and techniques of human engineering as they relate to the design and evaluation of work spaces, requirements, and tools are described. The application of these methods and the Skylab measurements to the design of future spacecraft is analyzed.

  6. Precision Mass Property Measurements Using a Five-Wire Torsion Pendulum

    NASA Technical Reports Server (NTRS)

    Swank, Aaron J.

    2012-01-01

    A method for measuring the moment of inertia of an object using a five-wire torsion pendulum design is described here. Typical moment-of-inertia measurement devices are capable of 1 part in 10^3 accuracy, and current state-of-the-art techniques have capabilities of about 1 part in 10^4. The five-wire apparatus design shows the prospect of improving on the current state of the art. Current measurements using a laboratory prototype indicate a moment-of-inertia measurement precision better than 1 part in 10^4. In addition, the apparatus is shown to be capable of measuring the mass-center offset from the geometric center. Typical mass-center measurement devices exhibit a measurement precision of up to approximately 1 micrometer. Although the five-wire pendulum was not originally designed for mass-center measurements, preliminary results indicate that an apparatus with a similar design may have the potential to achieve state-of-the-art precision.
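
    For any torsion pendulum, the oscillation period relates the torsional stiffness κ of the suspension to the moment of inertia I through T = 2π√(I/κ), so I = κT²/(4π²); the precision of I rests on how well T and κ are known. A minimal sketch of that relation with illustrative values, not numbers from the apparatus:

```python
import math

def moment_of_inertia(torsion_stiffness_nm_per_rad: float, period_s: float) -> float:
    """Moment of inertia (kg m^2) from the period T = 2*pi*sqrt(I/kappa)."""
    return torsion_stiffness_nm_per_rad * period_s ** 2 / (4.0 * math.pi ** 2)

# Example: kappa = 1e-3 N*m/rad, T = 20 s  ->  I = kappa*T^2/(4*pi^2)
print(moment_of_inertia(1e-3, 20.0))  # about 1.01e-2 kg m^2
```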

  7. Accurate color measurement methods for medical displays.

    PubMed

    Saha, Anindita; Kelley, Edward F; Badano, Aldo

    2010-01-01

    The necessity for standard instrumentation and measurements of color that are repeatable and reproducible is the major motivation behind this work. Currently, different instrumentation and methods can yield very different results when measuring the same feature, such as color uniformity or color difference. As color increasingly comes into play in medical imaging diagnostics, display color will have to be quantified in order to assess whether the display should be used for imaging purposes. The authors report on the characterization of three novel probes for measuring display color with minimal contamination from screen areas outside the measurement spot or from off-normal emissions. They compare three probe designs: a modified small-spot luminance probe and two conic probe designs based on black frusta. To compare the three color probe designs, spectral and luminance measurements were taken with specialized instrumentation to determine the luminance changes and color separation abilities of the probes. The probes were characterized with a scanning slit method, veiling glare, and a moving laser and LED arrangement. The scanning slit measurement was done using a black slit plate over a white line on an LCD monitor. The luminance was measured in 1 mm increments from the center of the slit to +/- 15 mm above and below the slit at different distances between the probe and the slit. The veiling glare setup consisted of measurements of the luminance of a black spot pattern with a white disk of radius 100 mm as the black spot increased in 1 mm radius increments. The moving LED and laser method used red and green light sources oriented orthogonally to the probe tip so that the light shone directly into the probe. The green light source was moved away from the red source in 1 cm increments to measure color stray-light contamination at different probe distances.
The results of the color testing using the LED and laser methods suggest a better performance of one of the frusta probes at shorter distances between the light sources, which translates to less contamination. The tails of the scans indicate the magnitude of the spread in signal due to light from areas outside the intended measurement spot. The measurements indicate a corresponding glare factor for a large spot of 140, 500, and 2000 for probe A, B1, and B2, respectively. The dual-laser setup suggests that color purity can be maintained up to a few tens of millimeters outside the measurement spot. The comparison shows that there are significant differences in the performance of each probe design, and that those differences have an effect on the measured quantity used to quantify display color. Different probe designs show different measurements of the level of light contamination that affects the quantitative color determination.

  8. A high-precision velocity measuring system design for projectiles based on S-shaped laser screen

    NASA Astrophysics Data System (ADS)

    Liu, Huayi; Qian, Zheng; Yu, Hao; Li, Yutao

    2018-03-01

    High-precision measurement of the velocity of a high-speed flying projectile is of great significance for the evaluation and development of modern weapons. The velocity of a high-speed projectile is usually measured by a laser-screen velocity measuring system, but conventional designs cannot make repeated measurements, so the uncertainty of the measuring system cannot be evaluated in depth. This paper presents a design based on an S-shaped laser-screen velocity measuring system. This design supports repeated measurements and can therefore effectively reduce the uncertainty of the velocity measuring system. In addition, we made a detailed analysis of the uncertainty of the measuring system. The measurement uncertainty is 0.2% when the velocity of the projectile is about 200 m/s.
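
    The benefit of the S-shaped screen, repeated transits of the same projectile, can be summarized with the standard type-A uncertainty evaluation: each transit gives v = d/t, and the standard uncertainty of the mean shrinks with the number of repeats. A sketch with invented numbers; the screen spacing and timings are illustrative, not data from the paper:

```python
import math

def velocity_stats(distance_m: float, transit_times_s: list):
    """Mean velocity and relative standard uncertainty of the mean (type A)."""
    v = [distance_m / t for t in transit_times_s]
    n = len(v)
    mean = sum(v) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in v) / (n - 1))
    return mean, s / math.sqrt(n) / mean

# Three repeated transits over a 1 m screen spacing (illustrative data):
mean_v, rel_u = velocity_stats(1.0, [0.00500, 0.00501, 0.00499])
print(mean_v, rel_u)  # ~200 m/s with a sub-0.2 % relative uncertainty
```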

  9. Digital photogrammetry for quantitative wear analysis of retrieved TKA components.

    PubMed

    Grochowsky, J C; Alaways, L W; Siskey, R; Most, E; Kurtz, S M

    2006-11-01

    The use of new materials in knee arthroplasty demands a way in which to accurately quantify wear in retrieved components. Methods such as damage scoring, coordinate measurement, and in vivo wear analysis have been used in the past. The limitations in these methods illustrate a need for a different methodology that can accurately quantify wear, which is relatively easy to perform and uses a minimal amount of expensive equipment. Off-the-shelf digital photogrammetry represents a potentially quick and easy alternative to what is readily available. Eighty tibial inserts were visually examined for front and backside wear and digitally photographed in the presence of two calibrated reference fields. All images were segmented (via manual and automated algorithms) using Adobe Photoshop and National Institute of Health ImageJ. Finally, wear was determined using ImageJ and Rhinoceros software. The absolute accuracy of the method and repeatability/reproducibility by different observers were measured in order to determine the uncertainty of wear measurements. To determine if variation in wear measurements was due to implant design, 35 implants of the three most prevalent designs were subjected to retrieval analysis. The overall accuracy of area measurements was 97.8%. The error in automated segmentation was found to be significantly lower than that of manual segmentation. The photogrammetry method was found to be reasonably accurate and repeatable in measuring 2-D areas and applicable to determining wear. There was no significant variation in uncertainty detected among different implant designs. Photogrammetry has a broad range of applicability since it is size- and design-independent. A minimal amount of off-the-shelf equipment is needed for the procedure and no proprietary knowledge of the implant is needed. (c) 2006 Wiley Periodicals, Inc.
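
    The heart of the photogrammetric measurement is converting a segmented pixel count into physical area using the calibrated reference fields: if a reference target of known area covers k pixels, the scale is (area/k) per pixel. A minimal sketch of that conversion; the pixel counts are illustrative, not from the study:

```python
def wear_area_mm2(wear_pixels: int, ref_pixels: int, ref_area_mm2: float) -> float:
    """Physical wear area from segmented pixel counts and a calibration target."""
    mm2_per_pixel = ref_area_mm2 / ref_pixels
    return wear_pixels * mm2_per_pixel

# Reference square of 100 mm^2 segments to 40,000 px; wear region to 12,500 px.
print(wear_area_mm2(12_500, 40_000, 100.0))  # about 31.25 mm^2
```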

  10. Comparison of a novel surface laser scanning anthropometric technique to traditional methods for facial parameter measurements.

    PubMed

    Joe, Paula S; Ito, Yasushi; Shih, Alan M; Oestenstad, Riedar K; Lungu, Claudiu T

    2012-01-01

    This study was designed to determine whether three-dimensional (3D) laser scanning techniques could collect anthropometric measurements as accurately as traditional methods. An alternative 3D method would allow quick collection of data that could be used to revise the parameters used for facepiece design, improving fit and protection for a wider variety of faces. In our study, 10 facial dimensions were collected using both the traditional calipers-and-tape method and a Konica-Minolta Vivid9i laser scanner. Scans were combined using RapidForm XOR software to create a single complete facial geometry of the subject as a triangulated surface with an associated texture image from which to obtain measurements. A paired t-test was performed on subject means for each measurement by method. Nine subjects were used in this study: five males (one African-American and four Caucasian) and four females, displaying a range of facial dimensions. Five measurements showed significant differences (p<0.05), with most accounted for by subject movement or remedied by modifications to the scanning technique. Laser scanning measurements showed high precision and accuracy when compared with traditional methods. The significant differences found correspond to very small changes in measurement and are unlikely to represent a practical difference. The laser scanning technique demonstrated reliable and quick anthropometric data collection for use in future respirator-redesign projects.
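    The paired t-test used to compare the two methods reduces, per facial dimension, to a t statistic on the per-subject differences. A minimal sketch with invented values, not the study's measurements:

```python
# Paired t-statistic for comparing caliper vs. laser-scan readings of one
# facial dimension across subjects (illustrative data).
import math

def paired_t(x, y):
    """t statistic for paired samples x, y (equal length, n >= 2)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)                   # df = n - 1

caliper = [118.2, 120.1, 119.5, 121.0, 117.8]   # hypothetical mm values
scanner = [118.0, 119.9, 119.6, 120.7, 117.5]
print(round(paired_t(caliper, scanner), 3))
```

The resulting t would be compared against the t distribution with n − 1 degrees of freedom at the chosen significance level.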

  11. A Machine Learning Approach to Measurement of Text Readability for EFL Learners Using Various Linguistic Features

    ERIC Educational Resources Information Center

    Kotani, Katsunori; Yoshimi, Takehiko; Isahara, Hitoshi

    2011-01-01

    The present paper introduces and evaluates a readability measurement method designed for learners of EFL (English as a foreign language). The proposed readability measurement method (a regression model) estimates the text readability based on linguistic features, such as lexical, syntactic and discourse features. Text readability refers to the…
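    The regression idea behind such a readability model can be illustrated with a single linguistic feature: fit a least-squares line from the feature to a difficulty score, then use the line to estimate the readability of new texts. The feature choice and data below are invented for illustration.

```python
# One-feature sketch of a readability regression: rated difficulty fitted
# against mean sentence length by ordinary least squares (toy data).

def fit_line(xs, ys):
    """Least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

sentence_len = [8, 12, 16, 20, 24]        # words per sentence (toy data)
difficulty   = [1.0, 2.1, 2.9, 4.2, 5.0]  # rated difficulty (toy data)
a, b = fit_line(sentence_len, difficulty)
print(a, b)   # longer sentences -> higher predicted difficulty
```

A multi-feature model, as in the paper, would add lexical and discourse features as further regressors.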

  12. Assessing Resilience across Cultures Using Mixed Methods: Construction of the Child and Youth Resilience Measure

    ERIC Educational Resources Information Center

    Ungar, Michael; Liebenberg, Linda

    2011-01-01

    An international team of investigators in 11 countries have worked collaboratively to develop a culturally and contextually relevant measure of youth resilience, the Child and Youth Resilience Measure (CYRM-28). The team used a mixed methods design that facilitated understanding of both common and unique aspects of resilience across cultures.…

  13. Measuring forest evapotranspiration--theory and problems

    Treesearch

    Anthony C. Federer

    1970-01-01

    A satisfactory general method of measuring forest evapotranspiration has yet to be developed. Many procedures have been tried, but only the soil-water budget method and the micrometeorological methods offer any degree of success. This paper is a discussion of these procedures and the problems that arise in applying them. It is designed as a reference for scientists and...

  14. Improved design and in-situ measurements of new beam position monitors for Indus-2

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Babbar, L. K.; Holikatti, A. C.; Yadav, S.; Tyagi, Y.; Puntambekar, T. A.; Senecha, V. K.

    2018-01-01

    Beam position monitors (BPMs) are important diagnostic devices used in particle accelerators to monitor the position of the beam for various applications. An improved version of the button-electrode BPM has been designed using CST Studio Suite for the Indus-2 ring. The new BPMs are designed to replace the old BPMs, which were designed and installed more than 12 years ago. The improved BPMs have higher transfer impedance, a resonance-free output signal, equal sensitivity in the horizontal and vertical planes, and a faster-decaying wakefield than the old BPMs. The new BPMs have been calibrated using the coaxial wire method. Measurement of transfer impedance and time-domain signals has also been performed in-situ with the electron beam during Indus-2 operation. The calibration and beam-based measurement results show close agreement with the design parameters. This paper presents the design, electromagnetic simulations, calibration results, and in-situ beam-based measurements of the newly designed BPMs.

  15. Concentration measurements of biodiesel in engine oil and in diesel fuel

    NASA Astrophysics Data System (ADS)

    Mäder, A.; Eskiner, M.; Burger, C.; Ruck, W.; Rossner, M.; Krahl, J.

    2012-05-01

    This work presents a method for measuring the concentration of biodiesel in engine oil, and of biodiesel in diesel fuel, from the permittivity of the mixture over a frequency range of 100 Hz to 20 kHz. For this purpose a special measurement cell with high sensitivity was designed. The measurement-cell signal is linear in biodiesel concentration from 0.5% vol. to 10% vol. for biodiesel in engine oil and from 0% vol. to 100% vol. for biodiesel in diesel fuel. The method is very accurate, and concentrations as low as about 0.5% vol. biodiesel in engine oil or in diesel fuel can be measured with high accuracy.
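    The reported linearity suggests the usual calibration pattern: fit a straight line from cell signal to concentration and evaluate it for unknown samples. A minimal sketch with invented signal values (the paper's actual signal units and slope are not given):

```python
# Two-point linear calibration from measurement-cell signal (arbitrary
# units) to biodiesel concentration in % vol. All numbers are invented.

def linear_calibration(s0, c0, s1, c1):
    """Return conc(signal) for the line through two calibration points."""
    slope = (c1 - c0) / (s1 - s0)
    return lambda s: c0 + slope * (s - s0)

# Hypothetical calibration: pure diesel reads 1.00, 10% vol. biodiesel 1.20.
conc_of = linear_calibration(1.00, 0.0, 1.20, 10.0)
print(conc_of(1.12))   # ~6.0 % vol. for an unseen sample
```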

  16. Scale factor measure method without turntable for angular rate gyroscope

    NASA Astrophysics Data System (ADS)

    Qi, Fangyi; Han, Xuefei; Yao, Yanqing; Xiong, Yuting; Huang, Yuqiong; Wang, Hua

    2018-03-01

    In this paper, a scale factor test method requiring no turntable is designed for angular rate gyroscopes. A test system consisting of a test device, a data acquisition circuit, and data processing software based on the LabVIEW platform is designed. Taking advantage of a gyroscope's sensitivity to angular rate, a gyroscope with a known scale factor serves as a standard. The standard gyroscope is installed on the test device together with the gyroscope under test. By rocking the test device about the edge parallel to the input axes of the gyroscopes, the scale factor of the measured gyroscope is obtained in real time by the data processing software. The method is fast and keeps the test system miniaturized and easy to carry or move. Repeated measurements of a quartz MEMS gyroscope's scale factor by this method differ by less than 0.2%, and by less than 1% from measurements made on a turntable. The accuracy and repeatability of the test system are thus good.
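    The co-mounted-reference idea can be sketched directly: both gyroscopes experience the same angular rate, so the unknown scale factor follows from the ratio of the two outputs. Signal values and the standard's scale factor below are invented.

```python
# Scale factor of a gyro under test from paired output samples taken
# while both gyros see the same rocking motion (illustrative sketch).

def scale_factor(v_meas, v_std, sf_std):
    """SF of the measured gyro.

    Each sample gives rate_i = v_std_i / sf_std and hence
    sf_meas = v_meas_i / rate_i; averaging the per-sample ratios
    suppresses noise.
    """
    ratios = [m / s for m, s in zip(v_meas, v_std)]
    return sf_std * sum(ratios) / len(ratios)

v_std  = [0.50, 1.00, -0.75, 1.25]   # standard gyro output (V), invented
v_meas = [1.01, 2.00, -1.49, 2.51]   # measured gyro output (V), invented
print(scale_factor(v_meas, v_std, sf_std=10.0))   # ~20 in the same units
```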

  17. [Biocybernetic approach to the thermometric methods of blood supply measurements of periodontal tissues].

    PubMed

    Pastusiak, J; Zakrzewski, J

    1988-11-01

    A biocybernetic approach to determining the blood supply of periodontal tissues by means of thermometric methods is presented. Compartment models of the measuring procedure are given, and a dilutodynamic methodology and classification are applied. This approach makes it possible to select the biophysical parameters that describe the state of the blood supply of periodontal tissues and to optimally design transducers and measuring methods.

  18. A method of reconstructing the spatial measurement network by mobile measurement transmitter for shipbuilding

    NASA Astrophysics Data System (ADS)

    Guo, Siyang; Lin, Jiarui; Yang, Linghui; Ren, Yongjie; Guo, Yin

    2017-07-01

    The workshop Measurement Position System (wMPS) is a distributed measurement system suitable for large-scale metrology. However, some measurement problems are unavoidable in the shipbuilding industry, such as obstruction by obstacles and limited measurement range. To deal with these factors, this paper presents a method of reconstructing the spatial measurement network with a mobile transmitter. A high-precision coordinate control network with more than six target points is established, and the mobile measuring transmitter can be added to the measurement network through this control network using the spatial resection method. This approach reconstructs the measurement network and broadens the measurement scope efficiently. To verify the method, two comparison experiments were designed with a laser tracker as the reference. The results demonstrate that the accuracy of point-to-point length is better than 0.4 mm and the accuracy of coordinate measurement is better than 0.6 mm.
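    The resection step can be illustrated in 2-D: with distances from a new station to known control points, the station position follows from intersecting circles, which reduce to a small linear system. This is a simplified sketch with invented coordinates; the wMPS works in 3-D from angular measurements.

```python
# 2-D resection: recover a station position from measured distances to
# three known control points (illustrative; not the wMPS algorithm).

def resect_2d(p1, p2, p3, d1, d2, d3):
    """Position (x, y) from distances d_i to control points p_i."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting circle equations yields two linear equations in x, y.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Control points at known coordinates; distances to a station at (3, 4).
print(resect_2d((0, 0), (10, 0), (0, 10), 5.0, 65 ** 0.5, 45 ** 0.5))
```

With more than three control points, as in the paper, the same equations are over-determined and solved by least squares.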

  19. Evaluation of design consistency methods for two-lane rural highways : executive summary

    DOT National Transportation Integrated Search

    2000-08-01

    Design consistency refers to the conformance of a highway's geometry with driver expectancy. Techniques to evaluate the consistency of a design documented within this report include speed-profile model, alignment indices, speed distribution measures,...

  20. Control Method Stretches Suspensions by Measuring the Sag of Strands in Cable-Stayed Bridges

    NASA Astrophysics Data System (ADS)

    Bętkowski, Piotr

    2017-10-01

    The article describes a method for evaluating and validating the correctness of the dynamometers (strain gauges, tension meters) used in suspension systems; checking such monitoring devices is recommended during inspections of suspension bridges. A dynamometer works together with an anchorage, and the quality of this cooperation can have a decisive impact on the correctness of the results. The method described determines the stress in a strand (cable) from the sag of the stay cable, and can be used to check the accuracy of measuring devices directly on the bridge: by measuring the strand sag, information about the force in the suspension cable is obtained, with a digital camera used to measure the sag. Ideally, a control measurement is made independently of the controlled parameter while verifying it directly. In practice, however, the controlled parameter is often obtained not by direct measurement but by calculation from other measured parameters, as in the method described here. In such cases the errors of the intermediate measurements accumulate, and the reliability of the result must be assessed; control calculations for measuring devices installed in a bridge are of doubtful value without an uncertainty-estimation procedure. Such an assessment of accuracy can be performed using interval numbers, which allow a parametric analysis of how the accuracy of the individual parameters propagates into the uncertainty of the result. The measurement method, the relations and analytical formulas, and a numerical example are given in the article.
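    The sag-to-force relation such a check relies on can be sketched with the standard shallow-cable (parabolic) approximation, in which the horizontal tension is H = wL²/(8f) for self-weight w per unit length, span L, and mid-span sag f. The stay parameters below are invented, and the article's exact formulation may differ.

```python
# Horizontal cable tension from measured mid-span sag, using the standard
# parabolic approximation for a shallow cable (illustrative values).

def tension_from_sag(w, span, sag):
    """H = w * L^2 / (8 * f) for weight w per unit length (N/m)."""
    return w * span**2 / (8 * sag)

# Hypothetical stay: 60 kg/m cable, 100 m span, 0.40 m measured sag.
w = 60 * 9.81                                   # self-weight, N/m
print(tension_from_sag(w, 100.0, 0.40) / 1e3)   # ~1839 kN
```

Comparing this camera-derived force with the dynamometer reading is the control measurement the article discusses.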

  1. A methodology for design of a linear referencing system for surface transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vonderohe, A.; Hepworth, T.

    1997-06-01

    The transportation community has recently placed significant emphasis on development of data models, procedural standards, and policies for management of linearly-referenced data. There is an Intelligent Transportation Systems initiative underway to create a spatial datum for location referencing in one, two, and three dimensions. Most recently, a call was made for development of a unified linear reference system to support public, private, and military surface transportation needs. A methodology for design of the linear referencing system was developed from geodetic engineering principles and techniques used for designing geodetic control networks. The method is founded upon the law of propagation of random error and the statistical analysis of systems of redundant measurements, used to produce best estimates for unknown parameters. A complete mathematical development is provided. Example adjustments of linear distance measurement systems are included. The classical orders of design are discussed with regard to the linear referencing system. A simple design example is provided. A linear referencing system designed and analyzed with this method will not only be assured of meeting the accuracy requirements of users, it will have the potential for supporting delivery of error estimates along with the results of spatial analytical queries. Modeling considerations, alternative measurement methods, implementation strategies, maintenance issues, and further research needs are discussed. Recommendations are made for further advancement of the unified linear referencing system concept.
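    The adjustment of redundant measurements that this methodology builds on can be shown in miniature: three collinear distances AB, BC, and AC over-determine the two unknown intervals, and least squares distributes the misclosure. A toy sketch assuming equal weights (the paper treats the general weighted case):

```python
# Least-squares adjustment of redundant linear distance measurements:
# observations AB, BC, AC with the condition AC = AB + BC.

def adjust(ab, bc, ac):
    """Best estimates of AB and BC minimizing the sum of squared residuals.

    Minimizing (x-ab)^2 + (y-bc)^2 + (x+y-ac)^2 gives the normal
    equations 2x + y = ab + ac and x + 2y = bc + ac, solved below.
    """
    x = (2 * ab - bc + ac) / 3
    y = (2 * bc - ab + ac) / 3
    return x, y, x + y            # adjusted AB, BC, AC

# Measurements with a small inconsistency: 100.02 + 200.03 != 300.01.
print(adjust(100.02, 200.03, 300.01))
```

The 0.04 misclosure is spread equally over the three observations, which is exactly the "best estimates from redundant measurements" idea scaled down.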

  2. G-scan--mobile multiview 3-D measuring system for the analysis of the face.

    PubMed

    Kopp, S; Kühmstedt, P; Notni, G; Geller, R

    2003-10-01

    The development of optical 3-D measuring techniques and their use in industrial quality assurance, in design, and for rapid prototyping has experienced strong growth. A large number of optical 3-D measuring methods and systems are on the market in dentistry. CAD/CAM production has become firmly established in dental medicine, not least due to the systematic introduction of the Cerec technique and the digiDent method. The scanners on which these technologies are based are designed for a relatively small measuring area. To be able to measure and three-dimensionally assess the face--and the numerous changes in the face/forehead/neck region--it was necessary to design and develop a self-calibrating measuring system with gray code for clinical use: the G-Scan measuring system. Objects up to a size of 500 × 500 × 400 mm can be acquired three-dimensionally with it, with a measuring inaccuracy of 10 to 70 µm in a typical measuring time of 15 s. The present article describes the measuring principle, the system parameters, and the features of the new measuring system, and illustrates the measuring results on 3-D displays of the face in static occlusion and in functional occlusion positions.

  3. Mutual information based feature selection for medical image retrieval

    NASA Astrophysics Data System (ADS)

    Zhi, Lijia; Zhang, Shaomin; Li, Yan

    2018-04-01

    In this paper, the authors propose a mutual information based method for lung CT image retrieval. The method is designed to adapt to different datasets and different retrieval tasks. For practical application, it avoids using a large amount of training data; instead, with a well-designed training process and robust fundamental features and measurements, the method achieves promising performance while keeping training computation economical. Experimental results show that the method has potential practical value for routine clinical application.
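    The mutual-information idea behind such feature selection can be sketched by scoring each (discretized) feature by I(feature; class) and keeping the top scorers. Data and feature roles below are invented; a real system would first discretize continuous texture or intensity features.

```python
# Mutual information between a discrete feature and class labels, used to
# rank features for retrieval (illustrative toy data).
from collections import Counter
from math import log2

def mutual_info(xs, ys):
    """I(X; Y) in bits for two equal-length discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

labels      = [0, 0, 0, 0, 1, 1, 1, 1]
informative = [0, 0, 0, 1, 1, 1, 1, 1]   # tracks the label closely
noise       = [0, 1, 0, 1, 0, 1, 0, 1]   # independent of the label
print(mutual_info(informative, labels), mutual_info(noise, labels))
```

The informative feature scores well above zero bits while the noise feature scores zero, so ranking by this quantity keeps the useful features.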

  4. Lead field theory provides a powerful tool for designing microelectrode array impedance measurements for biological cell detection and observation.

    PubMed

    Böttrich, Marcel; Tanskanen, Jarno M A; Hyttinen, Jari A K

    2017-06-26

    Our aim is to introduce a method to enhance the design process of microelectrode array (MEA) based electric bioimpedance measurement systems for improved detection and viability assessment of living cells and tissues. We propose the application of electromagnetic lead field theory and reciprocity for MEA design and measurement result interpretation. Further, we simulated impedance spectroscopy (IS) with two- and four-electrode setups and a biological cell to illustrate the tool in assessing the capabilities of given MEA electrode constellations for detecting cells on or in the vicinity of the microelectrodes. The results show the power of the lead field theory in electromagnetic simulations of cell-microelectrode systems, depicting the fundamental differences between two- and four-electrode IS measurement configurations in detecting cells. Accordingly, the use in MEA system design is demonstrated by assessing the differences between the two- and four-electrode IS configurations. Further, our results show how cells affect the lead fields in these MEA systems, and how the differences between the two- and four-electrode setups can be utilized in cell detection. The COMSOL simulator model is provided freely in the public domain as open source. Lead field theory can be successfully applied in MEA design for the IS-based assessment of biological cells, providing the necessary visualization and insight for MEA design. The proposed method is expected to enhance the design and usability of the automated cell and tissue manipulation systems required for bioreactors, which are intended for the automated production of cell and tissue grafts for medical purposes. MEA systems are also intended for toxicology, to assess the effects of chemicals on living cells. Our results suggest that the lead field concept can also enhance the development of such methods and devices.

  5. [Design, use, and introduction into practice of measuring complexes for psychophysiological studies].

    PubMed

    Bokser, O Ia; Gurtovoĭ, E S

    1997-01-01

    The paper outlines the background of chronoreaction (reaction-time) measurement, an important branch of psychophysiological research. It deals mainly with the chronoreaction measuring methods and instruments that have been introduced into production.

  6. An in-situ measuring method for planar straightness error

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Fu, Luhua; Yang, Tongyu; Sun, Changku; Wang, Zhong; Zhao, Yan; Liu, Changjie

    2018-01-01

    To address current problems in measuring the planar shape error of a workpiece, an in-situ measuring method based on laser triangulation is presented in this paper. The method avoids the inefficiency of traditional tools such as the knife-edge straightedge, as well as the time and cost of a coordinate measuring machine (CMM). A laser-based measuring head is designed and installed on the spindle of a numerically controlled (NC) machine. The measuring head moves along a planned path over the measuring points. The spatial coordinates of the measuring points are obtained by combining the laser triangulation displacement sensor with the coordinate system of the NC machine, which makes in-situ measurement possible. The planar straightness error is evaluated using particle swarm optimization (PSO). To verify the feasibility and accuracy of the measuring method, simulation experiments were implemented with a CMM. Comparison of the measuring head's results with the corresponding values obtained by the coordinate measuring machine verifies that the method achieves precise, automatic measurement of the planar straightness error of a workpiece.
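    The PSO evaluation step can be sketched as a minimum-zone fit: particles search over reference-line slopes and intercepts to minimize the width of the band containing all residuals. The profile data are invented, and the PSO parameters (inertia 0.7, acceleration coefficients 1.5) are common defaults rather than the paper's settings.

```python
# Minimum-zone straightness by particle swarm optimization: minimize the
# residual band width max(d_i) - min(d_i) over reference lines y = a*x + b.
import random

def straightness_pso(xs, ys, iters=200, n=30, seed=1):
    random.seed(seed)
    def width(a, b):
        d = [y - (a * x + b) for x, y in zip(xs, ys)]
        return max(d) - min(d)
    pos = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: width(*p))
    for _ in range(iters):
        for i in range(n):
            for k in range(2):
                vel[i][k] = (0.7 * vel[i][k]
                             + 1.5 * random.random() * (pbest[i][k] - pos[i][k])
                             + 1.5 * random.random() * (gbest[k] - pos[i][k]))
                pos[i][k] += vel[i][k]
            if width(*pos[i]) < width(*pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=lambda p: width(*p))
    return width(*gbest)

xs = [0, 1, 2, 3, 4]
ys = [0.00, 0.02, -0.01, 0.03, 0.01]   # toy profile heights (mm)
print(straightness_pso(xs, ys))        # band width of the minimum zone
```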

  7. Confidence intervals for single-case effect size measures based on randomization test inversion.

    PubMed

    Michiels, Bart; Heyvaert, Mieke; Meulders, Ann; Onghena, Patrick

    2017-02-01

    In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100 (1 - α) % two-sided CI to construct CIs for any effect size measure θ that contain all point null hypothesis θ values that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
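    The inversion idea can be sketched for the unstandardized mean difference in a completely randomized two-treatment design: shift the data by a candidate effect θ0, run the randomization test, and keep every θ0 that is not rejected. The scores below are invented toy data; a real single-case application would enumerate the design's actual randomization distribution.

```python
# Randomization test inversion (RTI) sketch: a grid of null values theta0
# is tested at alpha = 0.05, and the CI keeps every theta0 not rejected.
from itertools import combinations

def randomization_p(a, b, theta0):
    """Two-sided randomization p for H0: mean(a) - mean(b) = theta0."""
    shifted = [x - theta0 for x in a] + list(b)     # remove the null effect
    n, na = len(shifted), len(a)
    obs = abs(sum(shifted[:na]) / na - sum(shifted[na:]) / (n - na))
    count = total = 0
    for idx in combinations(range(n), na):          # all assignments
        grp_a = [shifted[i] for i in idx]
        grp_b = [shifted[i] for i in range(n) if i not in idx]
        stat = abs(sum(grp_a) / na - sum(grp_b) / len(grp_b))
        if stat >= obs - 1e-12:                     # tolerance for float ties
            count += 1
        total += 1
    return count / total

def rti_ci(a, b, alpha=0.05, grid=None):
    grid = grid or [i / 10 for i in range(-100, 101)]
    return [t for t in grid if randomization_p(a, b, t) > alpha]

a = [5.1, 6.0, 5.8, 6.3]     # treatment scores (toy data)
b = [3.9, 4.2, 4.0, 4.4]     # control scores (toy data)
kept = rti_ci(a, b)
print(min(kept), max(kept))  # approximate 95% CI bounds for the difference
```

Coarse grids give coarse bounds; refining the grid near the endpoints sharpens the interval, which mirrors the paper's exact construction.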

  8. Optimal design of tilt carrier frequency computer-generated holograms to measure aspherics.

    PubMed

    Peng, Jiantao; Chen, Zhe; Zhang, Xingxiang; Fu, Tianjiao; Ren, Jianyue

    2015-08-20

    Computer-generated holograms (CGHs) provide an approach to high-precision metrology of aspherics. A CGH is designed under the trade-off among size, mapping distortion, and line spacing. This paper describes an optimal design method based on the parametric model for tilt carrier frequency CGHs placed outside the interferometer focus points. Under the condition of retaining an admissible size and a tolerable mapping distortion, the optimal design method has two advantages: (1) separating the parasitic diffraction orders to improve the contrast of the interferograms and (2) achieving the largest line spacing to minimize sensitivity to fabrication errors. This optimal design method is applicable to common concave aspherical surfaces and illustrated with CGH design examples.

  9. 40 CFR 21.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., architectural, legal, fiscal, or economic investigations or studies; surveys, designs, plans, writings, drawings... one or more applicable standards. This can be determined with reference to design specifications..., alterations, or methods of operation the design specifications of which will provide a measure of treatment or...

  10. 40 CFR 21.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., architectural, legal, fiscal, or economic investigations or studies; surveys, designs, plans, writings, drawings... one or more applicable standards. This can be determined with reference to design specifications..., alterations, or methods of operation the design specifications of which will provide a measure of treatment or...

  11. 40 CFR 21.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., architectural, legal, fiscal, or economic investigations or studies; surveys, designs, plans, writings, drawings... one or more applicable standards. This can be determined with reference to design specifications..., alterations, or methods of operation the design specifications of which will provide a measure of treatment or...

  12. 40 CFR 21.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., architectural, legal, fiscal, or economic investigations or studies; surveys, designs, plans, writings, drawings... one or more applicable standards. This can be determined with reference to design specifications..., alterations, or methods of operation the design specifications of which will provide a measure of treatment or...

  13. 40 CFR 21.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., architectural, legal, fiscal, or economic investigations or studies; surveys, designs, plans, writings, drawings... one or more applicable standards. This can be determined with reference to design specifications..., alterations, or methods of operation the design specifications of which will provide a measure of treatment or...

  14. FROM THE HISTORY OF PHYSICS: Georgii L'vovich Shnirman: designer of fast-response instruments

    NASA Astrophysics Data System (ADS)

    Bashilov, I. P.

    1994-07-01

    A biography is given of the outstanding Russian scientist Georgii L'vovich Shnirman, whose scientific life had been 'top secret'. He was an experimental physicist and instrument designer, the founder of many branches of the Soviet instrument-making industry, the originator of a theory of electric methods of integration and differentiation, a theory of astasisation of pendulums, and also of original measurement methods. He was the originator and designer of automatic systems for the control of the measuring apparatus used at nuclear test sites and of automatic seismic station systems employed in monitoring nuclear tests. He also designed the first loop oscilloscopes in the Soviet Union, high-speed photographic and cine cameras (streak cameras, etc.), and many other unique instruments, including some mounted on moving objects.

  15. 49 CFR Appendix F to Part 229 - Recommended Practices for Design and Safety Analysis

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... expected order of use; (v) Group similar controls together; (vi) Design for high stimulus-response compatibility (geometric and conceptual); (vii) Design safety-critical controls to require more than one... description of all backup methods of operation; and (s) The configuration/revision control measures designed...

  16. 49 CFR Appendix F to Part 229 - Recommended Practices for Design and Safety Analysis

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... expected order of use; (v) Group similar controls together; (vi) Design for high stimulus-response compatibility (geometric and conceptual); (vii) Design safety-critical controls to require more than one... description of all backup methods of operation; and (s) The configuration/revision control measures designed...

  17. 49 CFR Appendix F to Part 229 - Recommended Practices for Design and Safety Analysis

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... expected order of use; (v) Group similar controls together; (vi) Design for high stimulus-response compatibility (geometric and conceptual); (vii) Design safety-critical controls to require more than one... description of all backup methods of operation; and (s) The configuration/revision control measures designed...

  18. Future directions for HxOy detection, executive summary

    NASA Technical Reports Server (NTRS)

    1986-01-01

    New methods for the measurement of OH radicals were assessed, as were currently available and possible future methods for the other HxOy species, HO2 and H2O2. The workshop participants were invited from different groups: modelers of atmospheric photochemistry, experimentalists measuring HxOy species with laser and nonlaser methods, and chemists and physicists familiar with such experiments but not involved in atmospheric monitoring. There were three major conclusions from the workshop concerning the OH radical. First, it was felt that local measurements made by laser techniques would be ready within 2 or 3 years to furnish reliable measurements at the level of 1,000,000 molecules per cubic centimeter. Second, measurements at this level of sensitivity, and with attainable levels of precision, could indeed be used to make useful and interesting tests of the fast photochemistry of the troposphere. It is important, however, that the measurements be carefully designed with respect to spatial and temporal averaging if there is to be a meaningful comparison between results from two experimental methods, or between a measurement and a model. Third, nonlocal measurements using released reactants and tracers would also be very useful. These could be made on a regional or global basis, although they still require experimental design, including the choice of compounds.

  19. 40 CFR 60.700 - Applicability and designation of affected facility.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... compounds (TOC) (less methane and ethane) in the vent stream less than 300 ppmv as measured by Method 18 or a concentration of TOC in the vent stream less than 150 ppmv as measured by Method 25A is exempt... limits in these standards are expressed in terms of TOC, measured as TOC less methane and ethane. This...

  20. Use of Telehealth for Research and Clinical Measures in Cochlear Implant Recipients: A Validation Study

    ERIC Educational Resources Information Center

    Hughes, Michelle L.; Goehring, Jenny L.; Baudhuin, Jacquelyn L.; Diaz, Gina R.; Sanford, Todd; Harpster, Roger; Valente, Daniel L.

    2012-01-01

    Purpose: The goal of this study was to compare clinical and research-based cochlear implant (CI) measures using telehealth versus traditional methods. Method: This prospective study used an ABA design (A = laboratory, B = remote site). All measures were made twice per visit for the purpose of assessing within-session variability. Twenty-nine adult…

  1. Mathematics Curriculum Based Measurement to Predict State Test Performance: A Comparison of Measures and Methods

    ERIC Educational Resources Information Center

    Stevens, Olinger; Leigh, Erika

    2012-01-01

    Scope and Method of Study: The purpose of the study is to use an empirical approach to identify a simple, economical, efficient, and technically adequate performance measure that teachers can use to assess student growth in mathematics. The current study has been designed to expand the body of research for math CBM to further examine technical…

  2. Compression Frequency Choice for Compression Mass Gauge Method and Effect on Measurement Accuracy

    NASA Astrophysics Data System (ADS)

    Fu, Juan; Chen, Xiaoqian; Huang, Yiyong

    2013-12-01

    Gauging the liquid fuel mass in a tank on a spacecraft under microgravity conditions is a difficult job. Without strong buoyancy, the configuration of the liquid and gas in the tank is uncertain and more than one bubble may exist in the liquid. All of this affects the accuracy of liquid mass gauging, especially for the method called Compression Mass Gauge (CMG). Four resonance sources affect the choice of compression frequency for the CMG method: structural resonance, liquid sloshing, transducer resonance, and bubble resonance. A ground experimental apparatus was designed and built to validate the gauging method and to study the influence of different compression frequencies, at different fill levels, on measurement accuracy. Harmonic phenomena should be considered during filter design when processing the test data. The results demonstrate that the ground experiment system performs well with high accuracy, and that measurement accuracy increases as the compression frequency climbs at low fill levels, whereas lower compression frequencies are the better choice at high fill levels. Liquid sloshing degrades measurement accuracy when the surface is excited by an external disturbance at the liquid's natural frequency, but the accuracy remains acceptable under small-amplitude vibration.
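    The CMG principle can be sketched from the gas law: a small piston volume change ΔV produces a pressure change ΔP in the ullage gas, the gas volume follows from the polytropic relation, and the liquid mass follows from the tank volume. The tank parameters and the polytropic index below are illustrative, not the paper's.

```python
# Compression Mass Gauge sketch: for a polytropic process P V^n = const,
# a perturbation gives V_gas = -n * P * dV / dP; liquid mass then follows
# from the known tank volume and propellant density (invented values).

def liquid_mass(p, dp, dv, v_tank, rho, n=1.4):
    v_gas = -n * p * dv / dp          # ullage gas volume from the response
    return rho * (v_tank - v_gas)     # remaining volume is liquid

# Hypothetical tank: 1.0 m^3, propellant density 800 kg/m^3, 100 kPa
# ullage; a 1.0e-4 m^3 compression raises the pressure by 70 Pa.
print(liquid_mass(p=100e3, dp=70.0, dv=-1.0e-4, v_tank=1.0, rho=800.0))
```

Resonances distort the measured ΔP at unfavorable compression frequencies, which is why the abstract's frequency choice matters.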

  3. Self-Developed Testing System for Determining the Temperature Behavior of Concrete.

    PubMed

    Zhu, He; Li, Qingbin; Hu, Yu

    2017-04-16

    Cracking due to temperature and restraint in mass concrete is an important issue. A temperature stress testing machine (TSTM) is an effective test method to study the mechanism of temperature cracking. A synchronous closed loop federated control TSTM system has been developed by adopting the design concepts of a closed loop federated control, a detachable mold design, a direct measuring deformation method, and a temperature deformation compensation method. The results show that the self-developed system has the comprehensive ability of simulating different restraint degrees, multiple temperature and humidity modes, and closed-loop control of multi-TSTMs during one test period. Additionally, the direct measuring deformation method can obtain a more accurate deformation and restraint degree result with little local damage. The external temperature deformation affecting the concrete specimen can be eliminated by adopting the temperature deformation compensation method with different considerations of steel materials. The concrete quality of different TSTMs can be guaranteed by being vibrated on the vibrating stand synchronously. The detachable mold design and assembled method has greatly overcome the difficulty of eccentric force and deformation.

  5. Multi-linear model set design based on the nonlinearity measure and H-gap metric.

    PubMed

    Shaghaghi, Davood; Fatehi, Alireza; Khaki-Sedigh, Ali

    2017-05-01

    This paper proposes a model bank selection method for a large class of nonlinear systems with wide operating ranges. In particular, a nonlinearity measure and the H-gap metric are used to provide an effective algorithm for designing a model bank for the system. The proposed model bank is then accompanied by model predictive controllers to design a high-performance advanced process controller. The advantage of this method is the reduction of excessive switching between models and a reduction of the computational complexity of the controller bank, which can lead to performance improvement of the control system. The effectiveness of the method is verified by simulations as well as by experimental studies on a pH neutralization laboratory apparatus, which confirm the efficiency of the proposed algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
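    The abstract does not give the selection algorithm in detail; a minimal greedy-covering sketch, with a hypothetical `model_distance` function standing in for the combined nonlinearity/H-gap measure, might look like this:

```python
# Greedy model-bank selection sketch.  `model_distance` is a hypothetical
# stand-in for the paper's nonlinearity/H-gap measure, and the grid of
# operating points is purely illustrative.

def select_model_bank(operating_points, model_distance, threshold):
    """Pick nominal points such that every operating point is within
    `threshold` of some member of the bank."""
    bank = []
    for p in operating_points:
        if not any(model_distance(p, b) <= threshold for b in bank):
            bank.append(p)          # no existing local model covers p
    return bank

# Toy usage: scalar operating points, absolute difference as "distance".
pts = [0.0, 0.1, 0.2, 0.8, 0.9, 1.0]
bank = select_model_bank(pts, lambda a, b: abs(a - b), threshold=0.25)
```

    A smaller threshold yields a larger bank (more local models, more switching); the paper's contribution lies in choosing the metric so the bank stays small without losing closed-loop performance.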

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nabeel Riza

    This final report contains the main results from a 3-year program to further investigate the merits of SiC-based hybrid sensor designs for extreme-environment measurements in gas turbines. The study is divided into three parts. Part 1 studies the material properties of SiC, such as temporal response, refractive index change with temperature, and material thermal response reversibility. Sensor data from a combustion rig test using this SiC sensor technology are analyzed, and a robust distributed sensor network design is proposed. Part 2 of the study focuses on introducing redundancy in the sensor signal processing to provide improved temperature measurement robustness. In this regard, two distinct measurement methods emerge: a first method that uses the laser-wavelength sensitivity of the SiC refractive index behavior, and a second method that engages the black-body (BB) radiation of the SiC package. Part 3 of the program investigates a new way to measure pressure via a distance measurement technique that applies to hot objects, including corrosive fluids.

  7. Methodology issues in implementation science.

    PubMed

    Newhouse, Robin; Bobay, Kathleen; Dykes, Patricia C; Stevens, Kathleen R; Titler, Marita

    2013-04-01

    Putting evidence into practice at the point of care delivery requires an understanding of which implementation strategies work, in what context and how. The aims were to identify methodological issues in implementation science, using 4 studies as cases, and to make recommendations for further methods development. Four cases are presented and methodological issues identified; for each issue raised, evidence on the state of the science is described. Issues identified include diverse conceptual frameworks, potential weaknesses in pragmatic study designs, and the paucity of standard concepts and measurement. Recommendations to advance implementation methods include developing a core set of implementation concepts and metrics; generating standards for implementation methods, including pragmatic trials, mixed-methods designs, complex interventions and measurement; and endorsing reporting standards for implementation studies.

  8. What Does It Mean to Be Pragmatic? Pragmatic Methods, Measures, and Models to Facilitate Research Translation

    ERIC Educational Resources Information Center

    Glasgow, Russell E.

    2013-01-01

    Background: One of the reasons for the slow and uncertain translation of research into practice is likely due to the emphasis in science on explanatory models and efficacy designs rather than more pragmatic approaches. Methods: Following a brief definition of what constitutes a pragmatic approach, I provide examples of pragmatic methods, measures,…

  9. A method to evaluate process performance by integrating time and resources

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Wei, Qingjie; Jin, Shuang

    2017-06-01

    The purpose of process mining is to improve the existing processes of an enterprise, so measuring process performance is particularly important. However, current research on performance evaluation methods is still insufficient: existing evaluations rely mainly on time or resource statistics alone, and such basic statistics cannot evaluate process performance well. In this paper, a method for evaluating process performance based on both the time dimension and the resource dimension is proposed, which can measure the utilization and redundancy of resources in a process. The paper introduces the design principles and formulas of the evaluation algorithm, then describes the design and implementation of the evaluation method. Finally, the method is used to analyse the event log of a telephone maintenance process and to propose an optimization plan.
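    A minimal sketch of a resource-dimension metric of this kind follows; the event-log fields and the utilisation formula are illustrative assumptions, not the paper's exact definitions.

```python
# Resource utilisation from an event log: busy time of each resource
# divided by the total span of the log.  Log fields are assumed to be
# (resource, start, end) tuples with times in minutes.

from collections import defaultdict

def resource_utilisation(events):
    """Return busy-time / log-span for each resource."""
    busy = defaultdict(float)
    t0 = min(e[1] for e in events)          # earliest start in the log
    t1 = max(e[2] for e in events)          # latest end in the log
    for res, start, end in events:
        busy[res] += end - start
    span = t1 - t0
    return {res: b / span for res, b in busy.items()}

# Toy telephone-maintenance log: two technicians over a 60-minute span.
log = [("tech_A", 0, 30), ("tech_A", 40, 60), ("tech_B", 0, 10)]
util = resource_utilisation(log)
```

    Low utilisation values flag redundant resources; combining them with per-activity waiting times gives the joint time/resource view the paper argues for.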

  10. Design of a mobile hydrological data measurement system

    NASA Astrophysics Data System (ADS)

    Liu, Yunping; Wang, Tianmiao; Dai, Fenfen

    2017-06-01

    Current hydrological data acquisition relies mainly on instrument measurement, with the equipment fixed in a certain area of water, where devices are easily lost. In view of these problems, a dynamic measurement system is established using an unmanned surface vessel and embedded technology, which can take measurements at any position on a lake. The method has advantages such as mobility and low cost.

  11. Comparative Robustness of Recent Methods for Analyzing Multivariate Repeated Measures Designs

    ERIC Educational Resources Information Center

    Seco, Guillermo Vallejo; Gras, Jaime Arnau; Garcia, Manuel Ato

    2007-01-01

    This study evaluated the robustness of two recent methods for analyzing multivariate repeated measures when the assumptions of covariance homogeneity and multivariate normality are violated. Specifically, the authors' work compares the performance of the modified Brown-Forsythe (MBF) procedure and the mixed-model procedure adjusted by the…

  12. The Application of the FDTD Method to Millimeter-Wave Filter Circuits Including the Design and Analysis of a Compact Coplanar

    NASA Technical Reports Server (NTRS)

    Oswald, J. E.; Siegel, P. H.

    1994-01-01

    The finite difference time domain (FDTD) method is applied to the analysis of microwave, millimeter-wave and submillimeter-wave filter circuits. In each case, the validity of this method is confirmed by comparison with measured data. In addition, the FDTD calculations are used to design a new ultra-thin coplanar-strip filter for feeding a THz planar-antenna mixer.

  13. Methods and Design: Measuring Recognition Performance Using Computer- Based and Paper-Based Methods.

    DTIC Science & Technology

    1991-01-01

    FEDERICO Navy Personnel Research and Development Center, San Diego, California Using a within-subjects design, we administered to 83 naval pilots and...blank. The students were encouraged to go through the...the research subjects into two groups according to whether 344 FEDERICO or not their performance...EDWARDS, A. L. (1964). Experimental design in psychological research...psychological assessment of elementary-school-age children Contem- New York

  14. Using experimental design to define boundary manikins.

    PubMed

    Bertilsson, Erik; Högberg, Dan; Hanson, Lars

    2012-01-01

    When evaluating human-machine interaction it is essential to consider anthropometric diversity to ensure intended accommodation levels. A well-known method is the use of boundary cases, where manikins with extreme but likely measurement combinations are derived by mathematical treatment of anthropometric data. The supposition of that method is that the use of these manikins will facilitate accommodation of the expected, less extreme, part of the total population. Literature sources differ on how many manikins should be defined and in what way. A field similar to the boundary-case method is experimental design, in which relationships between the factors affecting a process are studied by a systematic approach. This paper examines the possibility of adopting methodology from experimental design to define a group of manikins. Different experimental designs were adopted for use together with a confidence region and its axes. The results show that it is possible to adapt the methodology of experimental design when creating groups of manikins. The size of these groups depends heavily on the number of key measurements, but also on the type of experimental design chosen.
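    A minimal sketch of the idea, assuming a full-factorial (2^k) design on the axes of a normal-theory confidence ellipsoid; the anthropometric means and covariance below are made-up numbers, not data from the paper.

```python
# Boundary-manikin generation sketch: take the principal axes of a
# multivariate-normal confidence ellipsoid and place manikins at the
# 2^k corner points of the principal-axis box (full-factorial design).

import itertools
import numpy as np
from scipy.stats import chi2

def boundary_manikins(mean, cov, accommodation=0.90):
    k = len(mean)
    radius = np.sqrt(chi2.ppf(accommodation, df=k))    # ellipsoid radius
    eigval, eigvec = np.linalg.eigh(cov)
    axes = eigvec * np.sqrt(eigval) * radius           # semi-axes (columns)
    corners = [mean + axes @ np.array(signs)
               for signs in itertools.product((-1.0, 1.0), repeat=k)]
    return np.array(corners)                           # 2^k manikins

# Illustrative: stature and sitting height (mm), correlated 0.7.
mean = np.array([1755.0, 880.0])
cov = np.array([[70.0**2,      0.7*70*35],
                [0.7*70*35,    35.0**2]])
manikins = boundary_manikins(mean, cov)
```

    With k key measurements the full factorial yields 2^k manikins, which is why the group size "depends heavily on the number of key measurements"; fractional-factorial designs shrink the group at the cost of coverage.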

  15. Wide band design on the scaled absorbing material filled with flaky CIPs

    NASA Astrophysics Data System (ADS)

    Xu, Yonggang; Yuan, Liming; Gao, Wei; Wang, Xiaobing; Liang, Zichang; Liao, Yi

    2018-02-01

    Scaled-target measurement is an important method for obtaining target characteristics. Radar absorbing materials are widely used on low-detectability targets, and the frequency-dispersion characteristics of absorbing materials make designing and manufacturing scaled radar absorbing materials for a scaled target very difficult. This paper proposes a wide-band design method for the scaled absorbing material of a thin absorbing coating filled with carbonyl iron particles. According to the theoretical radar cross section (RCS) of a plate, the reflection loss, determined by the permittivity and permeability, was chosen as the main design factor. The parameters of the scaled absorbing materials were then designed using effective medium theory, and the scaled absorbing material was fabricated. Finally, the full-size coating plate and scaled coating plates (under three different scale factors) were simulated, and the RCSs of the coating plates were numerically calculated and measured at 4 GHz for a scale factor of 2. The results showed that the compensated RCS of the scaled coating plate was close to that of the full-size coating plate (mean deviation less than 0.5 dB), demonstrating that the design method for the scaled material is effective.

  16. Sensor of total hip arthroplasty wear designed on principle of scanning profilometry

    NASA Astrophysics Data System (ADS)

    Rössler, Tomas; Mandat, Dusan; Gallo, Jiri; Hrabovsky, Miroslav; Pochmon, Michal; Havranek, Vitezslav

    2008-12-01

    Total hip arthroplasty significantly improves the quality of life of the majority of patients with osteoarthritis. However, prosthetic wear remains a problem because it induces the development of aseptic loosening and periprosthetic osteolysis, which necessitate revision surgery. Polyethylene wear measurement is therefore central to contemporary orthopaedics, and this interest has encouraged the development and improvement of both radiologic (in vivo) and non-radiologic (in vitro) methods for polyethylene wear quantification. The principles of polyethylene liner wear measurement are predominantly geometric; nevertheless, realizing the individual types of in vivo measurement requires many simplifications and compromises to acquire even approximately accurate values. In practice, the volumetric wear can be obtained by mathematical conversion based on the linear shift of the femoral head in the cup, but such an approach is understood to be somewhat insufficient. Our ongoing research is directed at the development of an optical non-contact method for wear measurement; its results are introduced in this paper, including the methodology designed to validate the usability of the method for the given purpose and a description of the sensor, its principle, technical realization, design and parameters.

  17. Assisted Vacations for Men with Dementia and Their Caregiving Spouses: Evaluation of Health-Related Effects

    ERIC Educational Resources Information Center

    Wilz, Gabriele; Fink-Heitz, Margit

    2008-01-01

    Purpose: In this study, we conducted the first evaluation of assisted vacations for persons with dementia and their caregivers in the field of caregiving research. Design and Methods: We used a quasi-experimental, two-group, repeated measures design with two measuring times (preintervention, 3-month follow-up) to examine whether assisted vacations…

  18. Development and Initial Testing of a Measure of Person-Directed Care

    ERIC Educational Resources Information Center

    White, Diana L.; Newton-Curtis, Linda; Lyons, Karen S.

    2008-01-01

    Purpose: The purpose of the study was to empirically test items of a new measure designed to assess person-directed care (PDC) practices in long-term care. Design and Methods: After reviewing the literature, we identified five areas related to PDC: personhood, comfort care, autonomy, knowing the person, and support for relationships. We also…

  19. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Technical Performance Assessment

    PubMed Central

    2017-01-01

    Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to increased interest in using quantitative imaging biomarkers (QIBs) to measure changes in these features. Critical to the performance of a QIB in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in the designs, analysis methods and metrics used to assess a QIB for clinical use. It is therefore difficult or impossible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America (RSNA) and the Quantitative Imaging Biomarker Alliance (QIBA), with technical, radiological and statistical experts, developed a set of technical performance analysis methods, metrics and study designs that provide terminology, metrics and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of QIB performance studies so that results from multiple studies can be compared, contrasted or combined. PMID:24919831

  20. Distributed genetic algorithms for the floorplan design problem

    NASA Technical Reports Server (NTRS)

    Cohoon, James P.; Hegde, Shailesh U.; Martin, Worthy N.; Richards, Dana S.

    1991-01-01

    Designing a VLSI floorplan calls for arranging a given set of modules in the plane to minimize the weighted sum of area and wire-length measures. A method of solving the floorplan design problem using distributed genetic algorithms is presented. Distributed genetic algorithms, based on the paleontological theory of punctuated equilibria, offer a conceptual modification to traditional genetic algorithms. Experimental results on several problem instances demonstrate the efficacy of the method and indicate its advantages over alternatives such as simulated annealing: in almost all the problem instances tried, it performed better than simulated annealing, both in the average cost of the solutions found and in the best solution found.
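    A generic island-model skeleton illustrates the distributed structure on a toy objective; it does not reproduce the authors' floorplan encoding or their punctuated-equilibria operators.

```python
# Island-model ("distributed") GA skeleton with ring migration of elites.
# The objective here is a toy bitstring fitness, not a floorplan cost.

import random

def evolve_islands(fitness, n_islands=4, pop=20, length=16,
                   generations=50, migrate_every=10, seed=0):
    rng = random.Random(seed)
    islands = [[[rng.randint(0, 1) for _ in range(length)]
                for _ in range(pop)] for _ in range(n_islands)]
    for gen in range(generations):
        for isl in islands:
            isl.sort(key=fitness, reverse=True)
            parents = isl[:pop // 2]               # truncation selection
            children = []
            while len(children) < pop - len(parents):
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(1, length)
                child = a[:cut] + b[cut:]          # one-point crossover
                i = rng.randrange(length)
                child[i] ^= 1                      # point mutation
                children.append(child)
            isl[:] = parents + children
        if gen % migrate_every == 0:               # ring migration of elites
            best = [max(isl, key=fitness) for isl in islands]
            for i, isl in enumerate(islands):
                isl[-1] = list(best[(i - 1) % n_islands])
    return max((max(isl, key=fitness) for isl in islands), key=fitness)

best = evolve_islands(fitness=sum)   # toy objective: maximize number of 1s
```

    Isolating the islands between migrations is what gives the punctuated-equilibria flavour: each sub-population converges on its own "species" of solution, and periodic migration injects the disruption that triggers new jumps.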

  1. Measurements of Young's and shear moduli of rail steel at elevated temperatures.

    PubMed

    Bao, Yuanye; Zhang, Haifeng; Ahmadi, Mehdi; Karim, Md Afzalul; Felix Wu, H

    2014-03-01

    The design and modelling of the buckling behaviour of Continuous Welded Rail (CWR) require accurate material constants, especially at elevated temperatures; however, such constants are rarely found in the literature. In this article, the Young's moduli and shear moduli of rail steel at elevated temperatures are determined by a new sonic resonance method developed in our group. A network analyser is used to excite a sample hung inside a furnace through a simple tweeter-type speaker. The vibration signal is picked up by a Polytec OFV-5000 laser vibrometer and transferred back to the network analyser. Resonance frequencies in both the flexural and torsional modes are measured, and the Young's moduli and shear moduli are determined from the measured resonant frequencies. To validate the measured elastic constants, the measurements were repeated using the classic sonic resonance method; comparison of the moduli obtained from the two methods shows excellent consistency. In addition, the measured elastic constants are validated by an ultrasound test based on a pulse-echo method and compared with previously published results at room temperature. The measured material data provide an invaluable reference for the design of CWR to avoid detrimental buckling failure. Copyright © 2013 Elsevier B.V. All rights reserved.
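    The conversion from resonant frequency to modulus is not spelled out in the abstract; a sketch using the standard simplified flexural-resonance formula for slender rectangular bars (ASTM E1876-type, valid for L/t >= 20) follows, with illustrative specimen numbers rather than the paper's data.

```python
# Dynamic Young's modulus from the fundamental flexural resonance of a
# rectangular bar (simplified ASTM E1876-type formula for slender bars).

def youngs_modulus(mass_kg, f_flex_hz, length_m, width_m, thick_m):
    t1 = 1.000 + 6.585 * (thick_m / length_m) ** 2   # slender-bar correction
    return (0.9465 * (mass_kg * f_flex_hz**2 / width_m)
            * (length_m**3 / thick_m**3) * t1)

# Illustrative steel-like bar: 160 x 20 x 5 mm, 125 g, first flexural
# resonance near 1 kHz -> modulus around 2e11 Pa, typical of steel.
E = youngs_modulus(0.125, 1010.0, 0.160, 0.020, 0.005)
```

    The shear modulus follows analogously from the torsional resonance with its own shape correction; measuring both modes on one specimen is what lets a single heating run yield E and G together.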

  2. Proposed Modifications to Engineering Design Guidelines Related to Resistivity Measurements and Spacecraft Charging

    NASA Technical Reports Server (NTRS)

    Dennison, J. R.; Swaminathan, Prasanna; Jost, Randy; Brunson, Jerilyn; Green, Nelson; Frederickson, A. Robb

    2005-01-01

    A key parameter in modeling differential spacecraft charging is the resistivity of insulating materials. This determines how charge will accumulate and redistribute across the spacecraft, as well as the time scale for charge transport and dissipation. Existing spacecraft charging guidelines recommend the use of tests and imported resistivity data from handbooks that are based principally upon ASTM methods more applicable to classical ground conditions, designed for problems associated with power loss through the dielectric rather than for how long charge can be stored on an insulator. These data have been found to underestimate charging effects by one to four orders of magnitude for spacecraft charging applications. A review is presented of methods to measure the resistivity of highly insulating materials, including the electrometer-resistance method, the electrometer-constant-voltage method, the voltage rate-of-change method and the charge storage method. This review is based on joint experimental studies conducted at the NASA Jet Propulsion Laboratory and Utah State University to investigate the charge storage method and its relation to spacecraft charging. The different methods are found to be appropriate for different resistivity ranges and for different charging circumstances. A simple physics-based model of these methods allows separation of the polarization-current and dark-current components from long-duration measurements of resistivity over day- to month-long time scales. Model parameters are directly related to the magnitude of charge transfer and storage and the rate of charge transport. The model largely explains the observed differences in resistivity found using the different methods and provides a framework for recommendations on the appropriate test method for spacecraft materials with different resistivities and applications. The proposed changes to the existing engineering guidelines are intended to provide design engineers with more appropriate methods for the consideration and measurement of resistivity in many typical spacecraft charging scenarios.

  3. Body measurements of Chinese males in dynamic postures and application.

    PubMed

    Wang, Y J; Mok, P Y; Li, Y; Kwok, Y L

    2011-11-01

    It is generally accepted that there is a relationship between body dimensions, body movement and clothing wearing-ease design, yet previous research in this area has been neither sufficient nor systematic. This paper proposes a method of measuring the human body in the static state and in 17 dynamic postures, so as to understand the dimensional changes of different body parts during movement. Experimental work was carried out to collect 30 measurements from 10 male Chinese subjects in both static and dynamic states. Factor analysis is used to analyse the static body measurement data and to identify key measurements that describe the characteristics of different body figures, and one-way ANOVA is used to analyse how the dynamic postures affect these key measurements. Finally, an application of the results is suggested: a dynamic block patternmaking method for high-performance clothing design. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  4. Displacement and deformation measurement for large structures by camera network

    NASA Astrophysics Data System (ADS)

    Shang, Yang; Yu, Qifeng; Yang, Zhen; Xu, Zhiqiang; Zhang, Xiaohu

    2014-03-01

    A displacement and deformation measurement method for large structures by a series-parallel connection camera network is presented. By taking the dynamic monitoring of a large-scale crane in lifting operation as an example, a series-parallel connection camera network is designed, and the displacement and deformation measurement method by using this series-parallel connection camera network is studied. The movement range of the crane body is small, and that of the crane arm is large. The displacement of the crane body, the displacement of the crane arm relative to the body and the deformation of the arm are measured. Compared with a pure series or parallel connection camera network, the designed series-parallel connection camera network can be used to measure not only the movement and displacement of a large structure but also the relative movement and deformation of some interesting parts of the large structure by a relatively simple optical measurement system.

  5. Multi-spectral pyrometer for gas turbine blade temperature measurement

    NASA Astrophysics Data System (ADS)

    Gao, Shan; Wang, Lixin; Feng, Chi

    2014-09-01

    Achieving the highest possible turbine inlet temperature requires accurate measurement of the turbine blade temperature; if the blade temperature frequently exceeds the design limits, service life is seriously reduced. The accuracy of the temperature measurement is limited by the unknown and variable emissivity of the target surface and by the thermal radiation of the high-temperature environment. In this paper, a multi-spectral pyrometer designed mainly for the range 500-1000° is presented, together with a model that corrects for the error due to reflected radiation, based only on the turbine geometry and the physical properties of the material. Under different working conditions, the method reduces the measurement error caused by radiation reflected from the vanes, bringing the measurement closer to the actual blade temperature; the corresponding model parameters are calculated with a genetic algorithm. Experiments show that this method yields higher-accuracy measurements.
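    The basic ratio-pyrometry principle underlying such instruments can be sketched as follows, assuming Wien's approximation and a grey body (equal emissivity at both wavelengths); the paper's multi-spectral instrument and reflection-correction model are more elaborate than this.

```python
# Two-wavelength (ratio) pyrometry under the Wien approximation: the ratio
# of spectral radiances at two wavelengths is independent of emissivity
# for a grey body, so temperature can be inverted from the ratio alone.

import math

C2 = 1.4388e-2   # second radiation constant, m*K

def wien_radiance(lam_m, temp_k):
    """Spectral radiance up to a constant factor (Wien approximation)."""
    return lam_m**-5 * math.exp(-C2 / (lam_m * temp_k))

def ratio_temperature(lam1_m, lam2_m, radiance_ratio):
    """Invert R = L(lam1)/L(lam2) for temperature (grey-body assumption)."""
    return (C2 * (1.0 / lam2_m - 1.0 / lam1_m)
            / math.log(radiance_ratio * (lam1_m / lam2_m) ** 5))

# Round trip at two near-infrared wavelengths and 1200 K.
lam1, lam2, T = 0.8e-6, 1.0e-6, 1200.0
R = wien_radiance(lam1, T) / wien_radiance(lam2, T)
T_est = ratio_temperature(lam1, lam2, R)
```

    Using more than two wavelengths, as in the paper, over-determines the problem, which is what allows a variable-emissivity model and a reflection term to be fitted as well.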

  6. Neutron measurement at the thermal column of the Malaysian Triga Mark II reactor using gold foil activation method and TLD

    NASA Astrophysics Data System (ADS)

    Shalbi, Safwan; Salleh, Wan Norhayati Wan; Mohamad Idris, Faridah; Aliff Ashraff Rosdi, Muhammad; Syahir Sarkawi, Muhammad; Liyana Jamsari, Nur; Nasir, Nur Aishah Mohd

    2018-01-01

    In order to design facilities for boron neutron capture therapy (BNCT), neutron measurements must be made to obtain an optimal design of BNCT facility components such as the collimator and shielding. A previous feasibility study showed that the thermal column of the TRIGA MARK II reactor could generate a higher thermal neutron yield for BNCT application. Currently, a BNCT facility is planned to be developed at the thermal column, so the main objective here is the measurement of the thermal and epithermal neutron flux at the thermal column. In this measurement, pure gold foils, bare and under a cadmium filter, were used to obtain the thermal and epithermal neutron fluxes inside and outside the thermal column door at 200 kW reactor power, using the gold foil activation method. The results were compared with neutron fluxes measured using TLD 600 and TLD 700. The outcome of this work will become the benchmark for the design of the BNCT collimator and shielding.
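    A sketch of the cadmium-difference reduction behind the gold-foil method follows; the cross-section and activity values are illustrative, and corrections such as foil self-shielding and cadmium transmission are omitted.

```python
# Cadmium-difference sketch: the bare gold foil responds to thermal plus
# epithermal neutrons, the Cd-covered foil to epithermal only, so the
# thermal flux follows from the difference of saturated activities:
#     phi_th = (A_bare - A_Cd) / (N * sigma)

SIGMA_AU = 98.65e-24   # cm^2, Au-197 thermal activation cross-section

def thermal_flux(a_sat_bare, a_sat_cd, n_atoms, sigma=SIGMA_AU):
    """a_sat_*: saturated activities (Bq) of bare and Cd-covered foils;
    returns thermal flux in n/cm^2/s."""
    return (a_sat_bare - a_sat_cd) / (n_atoms * sigma)

# Illustrative foil: 2e19 Au atoms, invented saturated activities in Bq.
phi_th = thermal_flux(a_sat_bare=5.0e4, a_sat_cd=1.0e4, n_atoms=2.0e19)
```

    The Cd-covered activity on its own, divided through an assumed 1/E spectrum, gives the epithermal flux, which is why the same foil pair yields both quantities needed for the BNCT beam design.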

  7. Population Fisher information matrix and optimal design of discrete data responses in population pharmacodynamic experiments.

    PubMed

    Ogungbenro, Kayode; Aarons, Leon

    2011-08-01

    In recent years, interest in the application of experimental design theory to population pharmacokinetic (PK) and pharmacodynamic (PD) experiments has increased. The aim is to improve the efficiency and the precision with which parameters are estimated during data analysis and, sometimes, to increase the power and reduce the sample size required for hypothesis testing. The population Fisher information matrix (PFIM) has been described for uniresponse and multiresponse population PK experiments for design evaluation and optimisation. Despite these developments and the availability of tools for the optimal design of population PK and PD experiments, much of the effort has been focused on repeated continuous-variable measurements, with less work being done on repeated discrete-type measurements. Discrete data arise mainly in PD, e.g. ordinal, nominal, dichotomous or count measurements. This paper implements expressions for the PFIM for repeated ordinal, dichotomous and count measurements based on analysis by a mixed-effects modelling technique. Three simulation studies were used to investigate the performance of the expressions: Example 1 is based on repeated dichotomous measurements, Example 2 on repeated count measurements and Example 3 on repeated ordinal measurements. Data simulated in MATLAB were analysed using NONMEM (Laplace method) and the glmmML package in R (Laplace and adaptive Gauss-Hermite quadrature methods). The results obtained for Examples 1 and 2 showed good agreement between the relative standard errors obtained using the PFIM and simulations, while the results for Example 3 showed the importance of sampling at the most informative time points. Implementation of these expressions will provide the opportunity for efficient design of population PD experiments that involve discrete-type data, through design evaluation and optimisation.
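    As a simplified illustration of a Fisher information matrix for repeated dichotomous data, consider a fixed-effects logistic model; the paper's random effects and Laplace machinery are omitted, and the design points and parameters below are invented.

```python
# Fisher information for repeated dichotomous responses under a
# fixed-effects logistic model: logit P(y=1) = theta0 + theta1 * t.
# Information adds over subjects and sampling times:
#     FIM = N * sum_t p(t) * (1 - p(t)) * x(t) x(t)^T

import numpy as np

def logistic_fim(design_times, theta, n_subjects):
    fim = np.zeros((2, 2))
    for t in design_times:
        x = np.array([1.0, t])
        p = 1.0 / (1.0 + np.exp(-(theta[0] + theta[1] * t)))
        fim += p * (1 - p) * np.outer(x, x)
    return n_subjects * fim

# Illustrative design: 4 sampling times, 40 subjects.
times = [0.5, 1.0, 2.0, 4.0]
fim = logistic_fim(times, theta=(-1.0, 0.5), n_subjects=40)
se = np.sqrt(np.diag(np.linalg.inv(fim)))   # expected parameter SEs
```

    Design evaluation compares `se` across candidate sampling schedules before any data are collected; D-optimisation maximises the determinant of the FIM over the choice of `times`.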

  8. Open Rotor Tone Shielding Methods for System Noise Assessments Using Multiple Databases

    NASA Technical Reports Server (NTRS)

    Bahr, Christopher J.; Thomas, Russell H.; Lopes, Leonard V.; Burley, Casey L.; Van Zante, Dale E.

    2014-01-01

    Advanced aircraft designs such as the hybrid wing body, in conjunction with open rotor engines, may allow for significant improvements in the environmental impact of aviation. System noise assessments allow the aircraft noise of such designs to be predicted while they are still in the conceptual phase. Due to the significant computational requirements of the relevant methods, these predictions still rely on experimental data to account for the interaction of the open rotor tones with the hybrid wing body airframe. Recently, multiple aircraft system noise assessments have been conducted for hybrid wing body designs with open rotor engines. These assessments utilized measured benchmark data from a Propulsion Airframe Aeroacoustic interaction effects test, which demonstrated airframe shielding of open rotor tonal and broadband noise with legacy F7/A7 open rotor blades. Two methods are proposed for improving the use of these data for general open rotor designs in a system noise assessment. The first, direct difference, is a simple octave-band subtraction that does not account for the tone distribution within the rotor acoustic signal. The second, tone matching, is a higher-fidelity process incorporating additional physical aspects of the problem, in which isolated rotor tones are matched by their directivity to determine tone-by-tone shielding. A case study is conducted with the two methods to assess how well each reproduces the measured data and to identify the merits of each. Both methods perform similarly at the system level and successfully approach the experimental data for the case study. The tone matching method provides additional tools for assessing the quality of the match to the data set, and a potential path to improve it is provided.

  9. Survey and Experimental Testing of Nongravimetric Mass Measurement Devices

    NASA Technical Reports Server (NTRS)

    Oakey, W. E.; Lorenz, R.

    1977-01-01

    The documentation presented describes the design, testing, and evaluation of an accelerated gravimetric balance, a low-mass air-bearing oscillator of the spring-mass type, and a centrifugal device for liquid mass measurement. A direct mass readout method was developed to replace the oscillation-period readout method, which required manual calculations to determine mass. A prototype 25-gram-capacity micro mass measurement device was developed and tested.

  10. Novel methods for measuring afterglow in developmental scintillators for X-ray and neutron detection

    NASA Astrophysics Data System (ADS)

    Bartle, C. M.; Edgar, A.; Dixie, L.; Varoy, C.; Piltz, R.; Buchanan, S.; Rutherford, K.

    2011-09-01

    In this paper we discuss two novel methods of measuring afterglow in scintillators, one designed for X-ray detection and the other for neutron detection applications. In the first method, a commercial fan-beam scanner of basic design similar to those seen at airports is used to deliver a typically 12 ms long X-ray pulse to a scintillator by passing the test equipment through the scanner on the conveyor belt. In the second method, the thermal neutron beam from a research reactor is incident on the scintillator; the beam is cut off in about 1 ms using a 10B-impregnated aluminum pneumatic shutter, and the afterglow is recorded on a dual-range storage oscilloscope to capture both the steady-state intensity and the weak decay. We describe these measurement methods and the results obtained for a range of developmental ceramic and glass scintillators, as well as some standard scintillators such as NaI(Tl), LiI(Eu) and the plastic scintillator NE102A. Preliminary modeling of the afterglow is presented.
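    Afterglow traces of the kind described are commonly summarised by fitting a decay model to the recorded intensity; a sketch with synthetic data follows (real afterglow often needs several exponential components, and all values here are invented).

```python
# Afterglow characterisation sketch: fit a single-exponential decay plus a
# constant background to a recorded trace after beam cut-off.

import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, background):
    return amplitude * np.exp(-t / tau) + background

# Synthetic oscilloscope trace: 50 ms after cut-off, tau = 8 ms,
# 2% residual background, Gaussian readout noise.
t = np.linspace(0.0, 50e-3, 500)
truth = (1.0, 8e-3, 0.02)
rng = np.random.default_rng(1)
trace = decay(t, *truth) + rng.normal(0.0, 0.005, t.size)

popt, _ = curve_fit(decay, t, trace, p0=(0.5, 5e-3, 0.0))
tau_ms = popt[1] * 1e3   # recovered decay time, ms
```

    The dual-range acquisition mentioned above matters here: without capturing both the bright steady state and the weak tail, the background and amplitude terms of such a fit are poorly constrained.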

  11. A Coarse Alignment Method Based on Digital Filters and Reconstructed Observation Vectors

    PubMed Central

    Xu, Xiang; Xu, Xiaosu; Zhang, Tao; Li, Yao; Wang, Zhicheng

    2017-01-01

    In this paper, a coarse alignment method based on apparent gravitational motion is proposed. Under complex operating conditions, the true observation vectors, which are calculated from the apparent gravity, are contaminated by interference. The sources of this interference are analyzed in detail, and a low-pass digital filter is then designed to eliminate the high-frequency noise in the measured observation vectors. To extract effective observation vectors from the inertial sensors' outputs, a parameter recognition and vector reconstruction method is designed, in which an adaptive Kalman filter is employed to estimate the unknown parameters. Furthermore, a robust filter based on Huber's M-estimation theory is developed to address outliers in the measured observation vectors caused by vehicle maneuvers. A comprehensive experiment, comprising a simulation test and a physical test, is designed to verify the performance of the proposed method. The results show that the proposed method is equivalent to the popular apparent velocity method in swaying mode but superior to current methods in moving mode when the strapdown inertial navigation system (SINS) operates under entirely self-contained conditions. PMID:28353682
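
    The robust filter described above relies on Huber's M-estimation, which down-weights large residuals rather than discarding them. A minimal illustrative sketch of the standard Huber weighting (the threshold value and function name are ours, not from the paper):

```python
import numpy as np

def huber_weights(residuals, delta=1.345):
    """Huber M-estimation weights: 1 inside the threshold, delta/|r| outside.

    Outlying residuals are down-weighted instead of rejected outright,
    which is what makes a filter built on them robust to measurement
    outliers such as those caused by vehicle maneuvers.
    """
    r = np.abs(np.asarray(residuals, dtype=float))
    w = np.ones_like(r)
    mask = r > delta
    w[mask] = delta / r[mask]
    return w

# A gross outlier (10.0) gets a small weight; inliers keep weight 1.
w = huber_weights([0.1, -0.5, 10.0])
```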

  12. Design for a Crane Metallic Structure Based on Imperialist Competitive Algorithm and Inverse Reliability Strategy

    NASA Astrophysics Data System (ADS)

    Fan, Xiao-Ning; Zhi, Bo

    2017-07-01

    Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When these uncertainty factors are considered, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures suffer from low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding the convergence failure, calculation error, and disproportionate computational effort encountered with conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed method realizes a design with the best tradeoff between economy and safety, at roughly one-third of the convergence time and computational cost of the existing method. This paper provides a scientific and effective approach for the design of metallic structures of cranes.

  13. A Robust Inner and Outer Loop Control Method for Trajectory Tracking of a Quadrotor

    PubMed Central

    Xia, Dunzhu; Cheng, Limei; Yao, Yanhong

    2017-01-01

    In order to achieve complicated trajectory tracking of a quadrotor, a geometric inner and outer loop control scheme is presented. The outer loop generates the desired rotation matrix for the inner loop. To improve the response speed and robustness, a geometric sliding mode control (SMC) controller is designed for the inner loop; the outer loop is also designed via SMC. Closed-loop system stability is guaranteed by Lyapunov theory and cascade theory. The tracking performance is validated by tracking three representative trajectories, and the robustness of the proposed control method is illustrated by trajectory tracking in the presence of model uncertainty and disturbances. Experiments are then carried out to verify the method, using ultra-wideband (UWB) for indoor positioning and an extended Kalman filter (EKF) to fuse inertial measurement unit (IMU) and UWB measurements. The experimental results show the feasibility of the designed controller in practice, and comparative experiments with PD control demonstrate the robustness of the proposed method. PMID:28925984
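
    Sliding mode control, used for both loops above, can be illustrated on a one-dimensional double integrator, a toy stand-in for a single axis (gains, dynamics, and time step are illustrative, not the paper's geometric controller):

```python
import numpy as np

def simulate_smc(x0=1.0, v0=0.0, lam=2.0, k=5.0, dt=1e-3, t_end=5.0):
    """Sliding mode control of a double integrator x'' = u toward x_d = 0.

    Sliding surface s = e_dot + lam*e; the control u = -lam*e_dot - k*sign(s)
    gives s_dot = -k*sign(s), so s reaches zero in finite time, after which
    the tracking error e decays exponentially along the surface.
    """
    x, v = x0, v0
    for _ in range(int(t_end / dt)):
        s = v + lam * x            # e = x, e_dot = v (target is zero)
        u = -lam * v - k * np.sign(s)
        v += u * dt                # forward-Euler integration
        x += v * dt
    return x, v

x_final, v_final = simulate_smc()  # both close to zero after 5 s
```

    The sign term causes the well-known chattering; practical designs often replace it with a saturation or boundary-layer function.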

  14. Geophysical methods for determining the geotechnical engineering properties of earth materials.

    DOT National Transportation Integrated Search

    2010-03-01

    Surface and borehole geophysical methods exist to measure in-situ properties and structural characteristics of earth materials. Application of such methods has demonstrated cost savings through reduced design uncertainty and lower investigation c...

  15. Coupled parametric design of flow control and duct shape

    NASA Technical Reports Server (NTRS)

    Florea, Razvan (Inventor); Bertuccioli, Luca (Inventor)

    2009-01-01

    A method for designing gas turbine engine components using a coupled parametric analysis of part geometry and flow control is disclosed. Included are the steps of parametrically defining the geometry of the duct wall shape, parametrically defining one or more flow control actuators in the duct wall, measuring a plurality of performance parameters or metrics (e.g., flow characteristics) of the duct and comparing the results of the measurement with desired or target parameters, and selecting the optimal duct geometry and flow control for at least a portion of the duct, the selection process including evaluating the plurality of performance metrics in a Pareto analysis. The use of this method in the design of inter-turbine transition ducts, serpentine ducts, inlets, diffusers, and similar components provides a design which reduces pressure losses and flow profile distortions.
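
    The selection step above evaluates candidate designs by Pareto analysis, keeping only the non-dominated ones. A minimal sketch over hypothetical duct-design metrics (all metrics minimized; the data are invented for illustration):

```python
def pareto_front(points):
    """Return the non-dominated points, assuming every metric is minimized.

    A design dominates another if it is no worse in all metrics and
    strictly better in at least one.
    """
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# (pressure loss, flow distortion) for four candidate duct geometries
designs = [(1.0, 3.0), (2.0, 1.0), (2.5, 2.5), (1.5, 0.9)]
front = pareto_front(designs)  # keeps the two non-dominated designs
```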

  16. Congruency of scapula locking plates: implications for implant design.

    PubMed

    Park, Andrew Y; DiStefano, James G; Nguyen, Thuc-Quyen; Buckley, Jenni M; Montgomery, William H; Grimsrud, Chris D

    2012-04-01

    We conducted a study to evaluate the congruency of fit of current scapular plate designs. Three-dimensional image-processing and -analysis software, and computed tomography scans of 12 cadaveric scapulae were used to generate 3 measurements: mean distance from plate to bone, maximum distance, and percentage of plate surface within 2 mm of bone. These measurements were used to quantify congruency. The scapular spine plate had the most congruent fit in all 3 measured variables. The lateral border and glenoid plates performed statistically as well as the scapular spine plate in at least 1 of the measured variables. The medial border plate had the least optimal measurements in all 3 variables. With locking-plate technology used in a wide variety of anatomical locations, the locking scapula plate system can allow for a fixed-angle construct in this region. Our study results showed that the scapular spine, glenoid, and lateral border plates are adequate in terms of congruency. However, design improvements may be necessary for the medial border plate. In addition, we describe a novel method for quantifying hardware congruency, a method that can be applied to any anatomical location.

  17. Large field distributed aperture laser semiactive angle measurement system design with imaging fiber bundles.

    PubMed

    Xu, Chunyun; Cheng, Haobo; Feng, Yunpeng; Jing, Xiaoli

    2016-09-01

    A type of laser semiactive angle measurement system is designed for target detecting and tracking. Only one detector is used to detect target location from four distributed aperture optical systems through a 4×1 imaging fiber bundle. A telecentric optical system in image space is designed to increase the efficiency of imaging fiber bundles. According to the working principle of a four-quadrant (4Q) detector, fiber diamond alignment is adopted between an optical system and a 4Q detector. The structure of the laser semiactive angle measurement system is, we believe, novel. Tolerance analysis is carried out to determine tolerance limits of manufacture and installation errors of the optical system. The performance of the proposed method is identified by computer simulations and experiments. It is demonstrated that the linear region of the system is ±12°, with measurement error of better than 0.2°. In general, this new system can be used with large field of view and high accuracy, providing an efficient, stable, and fast method for angle measurement in practical situations.
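
    The working principle of a four-quadrant (4Q) detector mentioned above reduces to normalized sum-and-difference formulas for the spot position. A sketch of the standard relations (the quadrant layout is assumed, not taken from the paper):

```python
def quadrant_position(a, b, c, d):
    """Spot displacement from four-quadrant detector signals.

    Quadrants assumed arranged A|B over C|D; the normalized differences
    give x and y offsets in the range [-1, 1] (standard 4Q formula).
    """
    total = a + b + c + d
    x = ((b + d) - (a + c)) / total   # right minus left
    y = ((a + b) - (c + d)) / total   # top minus bottom
    return x, y

# A centered spot illuminates all quadrants equally -> (0, 0)
x, y = quadrant_position(1.0, 1.0, 1.0, 1.0)
```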

  18. Workspace design for crane cabins applying a combined traditional approach and the Taguchi method for design of experiments.

    PubMed

    Spasojević Brkić, Vesna K; Veljković, Zorica A; Golubović, Tamara; Brkić, Aleksandar Dj; Kosić Šotić, Ivana

    2016-01-01

    Procedures in the development process of crane cabins are arbitrary and subjective. Since approximately 42% of incidents in the construction industry are linked to cranes, there is a need to collect fresh anthropometric data and provide additional recommendations for design. In this paper, dimensioning of the crane cabin interior space was carried out using a sample of 64 crane operators in the Republic of Serbia, with the workspace described by 10 parameters derived from nine anthropometric measurements taken from each operator. This paper applies experiments run via full factorial designs using a combined traditional and Taguchi approach. The experiments indicated which design parameters are influenced by which anthropometric measurements and to what degree. The results are expected to be of use for crane cabin designers and should assist them in designing a cabin that leads to less strenuous sitting postures and less fatigue for operators, thus improving safety and accident prevention.

  19. Innovative Instrumentation and Analysis of the Temperature Measurement for High Temperature Gasification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seong W. Lee

    The project entitled ''Innovative Instrumentation and Analysis of the Temperature Measurement for High Temperature Gasification'' was successfully completed by the Principal Investigator, Dr. S. Lee, and his research team in the Center for Advanced Energy Systems and Environmental Control Technologies at Morgan State University. The major results and outcomes were presented in semi-annual progress reports and annual project review meetings/presentations. Specifically, a literature survey covering gasifier temperature measurement, ultrasonic cleaning applications, and the spray coating process, together with gasifier simulator (cold model) testing, was successfully conducted during the first year. The results show that four factors (blower voltage, ultrasonic application, injection time intervals, particle weight) significantly affect the temperature measurement. The gasifier simulator (hot model) design and fabrication, as well as systematic hot model tests of the significant factors on temperature measurement, were completed in the second year. Advanced industrial analytic methods such as statistics-based experimental design, analysis of variance (ANOVA) and regression methods were applied in the hot model tests. The results show that operational parameters (i.e., air flow rate, water flow rate, fine dust particle amount, ammonia addition) had a significant impact on the temperature measured inside the gasifier simulator. Experimental design and ANOVA are a very efficient way to design and analyze the experiments. The results show that the air flow rate and fine dust particle amount are statistically significant to the temperature measurement. The regression model provided the functional relation between the temperature and these factors with substantial accuracy.
In the last year of the project period, the ultrasonic and subsonic cleaning methods and coating materials were tested and applied to thermocouple cleaning according to the proposed approach. Different frequencies, application times and power levels of the ultrasonic/subsonic output were tested. The results show that the ultrasonic approach is one of the best methods for cleaning the thermocouple tips during routine operation of the gasifier. In addition, a real-time data acquisition system was designed and applied in the experiments; this advanced instrumentation provided efficient and accurate data acquisition for the project. In summary, the project provided useful information on the ultrasonic cleaning method applied to thermocouple tip cleaning. Temperature measurement could be much improved, both in accuracy and duration, provided that the proposed approach is widely used in gasification facilities.
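
The statistics-based experimental design and ANOVA mentioned above can be illustrated by computing main effects from a two-level full factorial experiment. The data below are synthetic, standing in for the gasifier tests:

```python
import itertools
import numpy as np

def main_effects(levels, response):
    """Main effects from a two-level full factorial experiment.

    Each effect is mean(response at +1) - mean(response at -1) for
    that factor, the first quantity ANOVA would then test.
    """
    runs = np.array(levels, dtype=float)
    y = np.array(response, dtype=float)
    return [y[runs[:, j] > 0].mean() - y[runs[:, j] < 0].mean()
            for j in range(runs.shape[1])]

# 2^2 design; the two factors could be air flow rate and dust amount
design = list(itertools.product([-1, 1], repeat=2))
y = [10.0, 12.0, 15.0, 17.0]   # synthetic temperature readings
effects = main_effects(design, y)   # factor effects [5.0, 2.0]
```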

  20. Canadian Health Measures Survey pre-test: design, methods, results.

    PubMed

    Tremblay, Mark; Langlois, Renée; Bryan, Shirley; Esliger, Dale; Patterson, Julienne

    2007-01-01

    The Canadian Health Measures Survey (CHMS) pre-test was conducted to provide information about the challenges and costs associated with administering a physical health measures survey in Canada. To achieve the specific objectives of the pre-test, protocols were developed and tested, and methods for household interviewing and clinic testing were designed and revised. The cost, logistics and suitability of using fixed sites for the CHMS were assessed. Although data collection, transfer and storage procedures are complex, the pre-test experience confirmed Statistics Canada's ability to conduct a direct health measures survey and the willingness of Canadians to participate in such a health survey. Many operational and logistical procedures worked well and, with minor modifications, are being employed in the main survey. Fixed sites were problematic, and survey costs were higher than expected.

  1. 49 CFR 210.25 - Measurement criteria and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... American National Standards Institute standard, “Method for Measurement of Sound Pressure Levels,” (ANSI S1... measurement indicating a violation. (ii) The sound level measurement system shall be checked not less than... calibrator of the microphone coupler type designed for the sound level measurement system in use shall be...

  2. A new gas dilution method for measuring body volume.

    PubMed Central

    Nagao, N; Tamaki, K; Kuchiki, T; Nagao, M

    1995-01-01

    This study was designed to examine the validity of a new gas dilution method (GD) for measuring human body volume and to compare its accuracy with results obtained by the underwater weighing method (UW). We measured the volume of plastic bottles and of 16 subjects (including two females), aged 18-42 years, with each method. For the bottles, the volume measured by hydrostatic weighing was correlated highly (r = 1.000) with that measured by the new gas dilution method. For the subjects, the body volume determined by the two methods was significantly correlated (r = 0.998). However, the subjects' volume measured by the gas dilution method was significantly larger than that measured by the underwater weighing method. There was a significant correlation (r = 0.806) between the GD-UW volume difference and the body mass index (BMI), so that UW volume could be predicted from GD volume and BMI. It can be concluded that the new gas dilution method offers promising possibilities for future research in populations who cannot submerge underwater. PMID:7551760

  3. Using crosscorrelation techniques to determine the impulse response of linear systems

    NASA Technical Reports Server (NTRS)

    Dallabetta, Michael J.; Li, Harry W.; Demuth, Howard B.

    1993-01-01

    A crosscorrelation method of measuring the impulse response of linear systems is presented. The technique, implementation, and limitations of this method are discussed. A simple system is designed and built using discrete components and the impulse response of a linear circuit is measured. Theoretical and software simulation results are presented.
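
    The crosscorrelation technique summarized above exploits the fact that for a linear system driven by unit-variance white noise, the input-output crosscorrelation equals the impulse response. A numerical sketch with an invented FIR system (not the circuit from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# True impulse response of a simple FIR system (illustrative)
h = np.array([0.5, 0.3, 0.2])
N = 200_000
x = rng.standard_normal(N)      # white-noise excitation, variance 1
y = np.convolve(x, h)[:N]       # system output

# R_xy(tau) = E[x(n) y(n+tau)] approximates h(tau) for white input.
h_est = np.array([np.dot(x[:N - tau], y[tau:]) / (N - tau)
                  for tau in range(len(h))])
```

    With a long enough record the estimate converges to h; the residual error shrinks roughly as 1/sqrt(N).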

  4. Embedded 3D shape measurement system based on a novel spatio-temporal coding method

    NASA Astrophysics Data System (ADS)

    Xu, Bin; Tian, Jindong; Tian, Yong; Li, Dong

    2016-11-01

    Structured light measurement has been widely used since the 1970s in industrial component detection, reverse engineering, 3D molding, robot navigation, medical and many other fields. In order to satisfy the demand for high-speed, high-precision and high-resolution 3-D measurement in embedded systems, new patterns combining binary and Gray coding principles in space are designed and projected onto the object surface in order. Each pixel corresponds to the designed sequence of gray values in the time domain, which is treated as a feature vector. The unique gray vector is then dimensionally reduced to a scalar that serves as characteristic information for binocular matching. In this method, the number of projected structured light patterns is reduced, and the time-consuming phase unwrapping of traditional phase shift methods is avoided. The algorithm is implemented on a DM3730 embedded system for 3-D measurement, which consists of an ARM and a DSP core and has a strong capability for digital signal processing. Experimental results demonstrated the feasibility of the proposed method.
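
    The Gray coding principle mentioned above relies on the reflected Gray code, in which successive codewords differ in exactly one bit, limiting the effect of a single mis-read stripe boundary. A sketch of the standard conversions (not the authors' pattern design):

```python
def binary_to_gray(n):
    """Convert an integer's binary code to its reflected Gray code."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Invert the Gray coding by XOR-ing all right-shifted copies."""
    n = g
    shift = 1
    while (g >> shift) > 0:
        n ^= g >> shift
        shift += 1
    return n

codes = [binary_to_gray(i) for i in range(8)]  # successive codes differ in one bit
```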

  5. Design of an experimental apparatus for measurement of the surface tension of metastable fluids

    NASA Astrophysics Data System (ADS)

    Vinš, V.; Hrubý, J.; Hykl, J.; Blaha, J.; Šmíd, B.

    2013-04-01

    A unique experimental apparatus for measurement of the surface tension of aqueous mixtures has been designed, manufactured, and tested in our laboratory. The novelty of the setup is that it allows measurement of surface tension by two different methods: a modified capillary elevation method in a long vertical capillary tube, and a method inspired by the approach of Hacker (National Advisory Committee for Aeronautics, Technical Note 2510, 1-20, 1951) in a short horizontal capillary tube. The functionality of all main components of the apparatus has been successfully tested, e.g., the glass chamber with the capillary tube, the temperature control unit consisting of two thermostatic baths with special valves for rapid temperature jumps, and the helium distribution setup allowing pressure variation above the liquid meniscus inside the capillary tube. Preliminary results for the surface tension of stable and metastable supercooled water measured by the capillary elevation method at atmospheric pressure are provided. The surface tension of water measured at temperatures between +26 °C and -11 °C is in good agreement with the extrapolated IAPWS correlation (IAPWS Release on Surface Tension of Ordinary Water Substance, September 1994); however, it disagrees with the data of Hacker.

  6. A diameter-sensitive flow entropy method for reliability consideration in water distribution system design

    NASA Astrophysics Data System (ADS)

    Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin

    2014-07-01

    Flow entropy is a measure of the uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. In order to overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed against other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability, and a comparative analysis between the simple flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
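
    The simple flow entropy that the paper extends is, in its common form, the Shannon entropy of pipe flow fractions, which is maximized when flows are uniform. A minimal sketch (illustrative flow values, not the diameter-sensitive variant):

```python
import math

def flow_entropy(flows):
    """Shannon-type flow entropy of a set of pipe flows.

    p_i is each pipe's share of the total flow; uniform flows give the
    maximum ln(n). This is the simple definition, not the
    diameter-sensitive extension proposed in the paper.
    """
    total = sum(flows)
    return -sum((q / total) * math.log(q / total) for q in flows if q > 0)

uniform = flow_entropy([10.0, 10.0, 10.0, 10.0])   # = ln(4)
skewed = flow_entropy([37.0, 1.0, 1.0, 1.0])       # lower: less uniform
```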

  7. [Research on the temperature field detection method of hot forging based on long-wavelength infrared spectrum].

    PubMed

    Zhang, Yu-Cun; Wei, Bin; Fu, Xian-Bin

    2014-02-01

    A temperature field detection method based on the long-wavelength infrared spectrum is proposed for hot forging. This method combines primary spectrum pyrometry with a three-stage FP-cavity LCTF. By optimizing the solutions of three groups of nonlinear equations in the mathematical model of temperature detection, errors are reduced, so measurement results are more objective and accurate. The three-stage FP-cavity LCTF system was designed on the principle of crystal birefringence; it enables rapid selection of any wavelength within a certain range, making the response of the temperature measuring system rapid and accurate. As a result, without knowing the emissivity of the hot forging, the method can acquire exact temperature field information and effectively suppress the background radiation around the hot forging and the ambient light that affect temperature detection accuracy. Finally, MATLAB results showed that the infrared spectroscopy through the three-stage FP-cavity LCTF could meet the design requirements, and experiments verified the feasibility of the temperature measuring method. Compared with a traditional single-band thermal infrared imager, the accuracy of the measuring result was improved.

  8. Mask Design for the Space Interferometry Mission Internal Metrology

    NASA Technical Reports Server (NTRS)

    Marx, David; Zhao, Feng; Korechoff, Robert

    2005-01-01

    This slide presentation reviews the mask design used for the internal metrology of the Space Interferometry Mission (SIM). Included is information about the project, the method of measurements with SIM, the internal metrology, the numerical model of internal metrology, wavefront examples, performance metrics, and mask design.

  9. DESIGN AND PERFORMANCE OF A LOW FLOW RATE INLET

    EPA Science Inventory

    Several ambient air samplers that have been designated by the U. S. EPA as Federal Reference Methods (FRMs) for measuring particulate matter nominally less than 10 um (PM10) include the use of a particular inlet design that aspirates particulate matter from the atmosphere at 1...

  10. Reducing random measurement error in assessing postural load on the back in epidemiologic surveys.

    PubMed

    Burdorf, A

    1995-02-01

    The goal of this study was to design strategies to assess postural load on the back in occupational epidemiology by taking into account the reliability of measurement methods and the variability of exposure among the workers under study. Intermethod reliability studies were evaluated to estimate the systematic bias (accuracy) and random measurement error (precision) of various methods to assess postural load on the back. Intramethod reliability studies were reviewed to estimate random variability of back load over time. Intermethod surveys have shown that questionnaires have a moderate reliability for gross activities such as sitting, whereas duration of trunk flexion and rotation should be assessed by observation methods or inclinometers. Intramethod surveys indicate that exposure variability can markedly affect the reliability of estimates of back load if the estimates are based upon a single measurement over a certain time period. Equations have been presented to evaluate various study designs according to the reliability of the measurement method, the optimum allocation of the number of repeated measurements per subject, and the number of subjects in the study. Prior to a large epidemiologic study, an exposure-oriented survey should be conducted to evaluate the performance of measurement instruments and to estimate sources of variability for back load. The strategy for assessing back load can be optimized by balancing the number of workers under study and the number of repeated measurements per worker.
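
    The trade-off between the number of repeated measurements per worker and reliability discussed above can be illustrated with the Spearman-Brown prophecy formula, a standard result for the reliability of a mean of k repeats (the single-measurement reliability used here is an example value, not from the study):

```python
def reliability_of_mean(r1, k):
    """Spearman-Brown reliability of the mean of k repeated measurements,
    given single-measurement reliability r1.

    More repeats per subject raise reliability with diminishing returns,
    which is the trade-off behind optimum allocation of repeats vs. subjects.
    """
    return k * r1 / (1 + (k - 1) * r1)

r_single = 0.5
r_four = reliability_of_mean(r_single, 4)   # 4*0.5 / (1 + 3*0.5) = 0.8
```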

  11. Resource-use measurement based on patient recall: issues and challenges for economic evaluation.

    PubMed

    Thorn, Joanna C; Coast, Joanna; Cohen, David; Hollingworth, William; Knapp, Martin; Noble, Sian M; Ridyard, Colin; Wordsworth, Sarah; Hughes, Dyfrig

    2013-06-01

    Accurate resource-use measurement is challenging within an economic evaluation, but is a fundamental requirement for estimating efficiency. Considerable research effort has been concentrated on the appropriate measurement of outcomes and the policy implications of economic evaluation, while methods for resource-use measurement have been relatively neglected. Recently, the Database of Instruments for Resource Use Measurement (DIRUM) was set up at http://www.dirum.org to provide a repository where researchers can share resource-use measures and methods. A workshop to discuss the issues was held at the University of Birmingham in October 2011. Based on material presented at the workshop, this article highlights the state of the art of UK instruments for resource-use data collection based on patient recall. We consider methodological issues in the design and analysis of resource-use instruments, and the challenges associated with designing new questionnaires. We suggest a method of developing a good practice guideline, and identify some areas for future research. Consensus amongst health economists has yet to be reached on many aspects of resource-use measurement. We argue that researchers should now afford costing methodologies the same attention as outcome measurement, and we hope that this Current Opinion article will stimulate a debate on methods of resource-use data collection and establish a research agenda to improve the precision and accuracy of resource-use estimates.

  12. Designing Chemistry Practice Exams for Enhanced Benefits: An Instrument for Comparing Performance and Mental Effort Measures

    ERIC Educational Resources Information Center

    Knaus, Karen J.; Murphy, Kristen L.; Holme, Thomas A.

    2009-01-01

    The design and use of a chemistry practice exam instrument that includes a measure of student mental effort is described in this paper. Use of such an instrument can be beneficial to chemistry students and chemistry educators, as well as chemical education researchers, from both a content and cognitive science perspective. The method for calculating…

  13. Long-duration heat load measurement approach by novel apparatus design and highly efficient algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Yanwei; Yi, Fajun; Meng, Songhe; Zhuo, Lijun; Pan, Weizhen

    2017-11-01

    Improving the surface heat load measurement technique for vehicles in aerodynamic heating environments is imperative, in terms of both apparatus design and identification efficiency. A simple novel apparatus is designed for heat load identification, taking into account lessons learned from several aerodynamic heating measurement devices. An inverse finite difference scheme (invFDM) for the apparatus is studied to identify its surface heat flux from interior temperature measurements with high efficiency. A weighted piecewise regression filter is also proposed for prefiltering the temperature measurements. Preliminary verification of the invFDM scheme and the filter is accomplished via numerical simulation experiments. Three specific pieces of apparatus have been designed and fabricated using different sensing materials. The aerodynamic heating process is simulated by an inductively coupled plasma wind tunnel facility, and the identification of surface temperature and heat flux from the temperature measurements is performed by invFDM. The results validate the efficiency, reliability and feasibility of heat load measurements at different heat flux levels using the designed apparatus and proposed method.

  14. The importance of quality control in validating concentrations ...

    EPA Pesticide Factsheets

    A national-scale survey of 247 contaminants of emerging concern (CECs), including organic and inorganic chemical compounds and microbial contaminants, was conducted in source and treated drinking water samples from 25 treatment plants across the United States. Multiple methods were used to determine these CECs, including six analytical methods to measure 174 pharmaceuticals, personal care products, and pesticides. A three-component quality assurance/quality control (QA/QC) program was designed for the subset of 174 CECs, which allowed us to assess and compare the performance of the methods used. The three components included: 1) a common field QA/QC protocol and sample design, 2) individual investigator-developed method-specific QA/QC protocols, and 3) a suite of 46 method comparison analytes that were determined by two or more analytical methods. Overall method performance for the 174 organic chemical CECs was assessed by comparing spiked recoveries in reagent, source, and treated water over a two-year period. In addition to the 247 CECs reported in the larger drinking water study, another 48 pharmaceutical compounds measured did not consistently meet predetermined quality standards; methodologies that did not seem suitable for these analytes are reviewed. The need to exclude analytes based on method performance demonstrates the importance of additional QA/QC protocols. This paper compares the method performance of six analytical methods used to measure 174 emer

  15. Design of thermocouple probes for measurement of rocket exhaust plume temperatures

    NASA Astrophysics Data System (ADS)

    Warren, R. C.

    1994-06-01

    This paper summarizes a literature survey on high temperature measurement and describes the design of probes used in plume measurements. There were no reported cases of measurements in environments as extreme as solid rocket exhausts, but there were a number of thermocouple designs that had been used under less extreme conditions and that could be further developed. Tungsten-rhenium (W-Rh) thermocouples combined strength at high temperatures, high thermoelectric emf, and resistance to chemical attack. A shielded probe was required, both to protect the thermocouple junction and to minimize radiative heat losses. After some experimentation, a twin-shielded design made from molybdenum gave acceptable results. Corrections for thermal conduction losses were made based on a method obtained from the literature. Radiation losses were minimized with this probe design, and corrections for these losses were too complex and unreliable to be included.

  16. Magnetic Field Response Measurement Acquisition System

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Taylor, Bryant D.; Shams, Qamar A.; Fox, Robert L.

    2005-01-01

    A measurement acquisition method that alleviates many shortcomings of traditional measurement systems is presented in this paper. The shortcomings are a finite number of measurement channels, the weight penalty associated with measurements, electrical arcing, wire degradation due to wear or chemical decay, and the logistics needed to add new sensors. The key to this method is the use of sensors designed as passive inductor-capacitor circuits that produce magnetic field responses whose attributes correspond to the states of the physical properties that the sensors measure. A radio frequency antenna produces a time-varying magnetic field used to power the sensor and to receive the sensor's magnetic field response. An interrogation system for discerning changes in the sensor response is presented herein. Multiple sensors can be interrogated using this method, eliminating the need for a data acquisition channel dedicated to each sensor. Methods of developing magnetic field response sensors and the influence of key parameters on measurement acquisition are discussed.
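
    The passive inductor-capacitor sensors described above resonate at f = 1/(2π√(LC)), so a sensed quantity that alters the capacitance shifts the response frequency the antenna picks up. A minimal sketch with illustrative component values (not from the paper):

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of a passive
    inductor-capacitor sensor circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative values: a 10 uH coil with a capacitive sensing element.
# Increasing the sensed capacitance lowers the response frequency.
f0 = resonant_frequency(10e-6, 100e-12)   # about 5.03 MHz
f1 = resonant_frequency(10e-6, 120e-12)   # capacitance increased
```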

  17. Laser application to measure vertical sea temperature and turbidity, design phase

    NASA Technical Reports Server (NTRS)

    Hirschberg, J. G.; Wouters, A. W.; Simon, K. M.; Byrne, J. D.; Deverdun, C. E.

    1976-01-01

    An experiment to test a new method was designed, using backscattered radiation from a laser beam to measure oceanographic parameters in a fraction of a second. Tyndall, Rayleigh, Brillouin, and Raman scattering all are utilized to evaluate the parameters. A beam from a continuous argon ion laser is used together with an interferometer and interference filters to gather the information. The results are checked by direct measurements. Future shipboard and airborne experiments are described.

  18. Design of a dual-mode electrochemical measurement and analysis system.

    PubMed

    Yang, Jr-Fu; Wei, Chia-Ling; Wu, Jian-Fu; Liu, Bin-Da

    2013-01-01

    A dual-mode electrochemical measurement and analysis system is proposed. This system includes a dual-mode chip, which was designed and fabricated by using TSMC 0.35 µm 3.3 V/5 V 2P4M mixed-signal CMOS process. Two electrochemical measurement and analysis methods, chronopotentiometry and voltammetry, can be performed by using the proposed chip and system. The proposed chip and system are verified successfully by performing voltammetry and chronopotentiometry on solutions.

  19. A Reference Method for Measuring Emissions of SVOCs in ...

    EPA Pesticide Factsheets

    Semivolatile organic compounds (SVOCs) are indoor air pollutants that may have significant adverse effects on human health, and emission of SVOCs from building materials and consumer products is of growing concern. Few chamber studies have been conducted due to the challenges associated with SVOC analysis and the lack of validation procedures. Thus there is an urgent need for a reliable and accurate chamber test method to verify the performance of these measurements. A reference method employing a specially-designed chamber and experimental protocol has been developed and is undergoing extensive evaluation. A pilot interlaboratory study (ILS) has been conducted with five laboratories performing chamber tests under identical conditions. Results showed inter-laboratory variations at 25% for SVOC emission rates, with greater agreement observed between intra-laboratory measurements for most of the participating laboratories. The measured concentration profiles also compared reasonably well to the mechanistic model, demonstrating the feasibility of the proposed reference method to independently assess laboratory performance and validate SVOC emission tests. There is an urgent need for improved understanding of the measurement uncertainties associated with SVOC emissions testing. The creation of specially-designed chambers and well-characterized materials serves as a critical prerequisite for improving the procedure used to measure SVOCs emitted from indoor

  20. A smart health monitoring chair for nonintrusive measurement of biological signals.

    PubMed

    Baek, Hyun Jae; Chung, Gih Sung; Kim, Ko Keun; Park, Kwang Suk

    2012-01-01

    We developed nonintrusive methods for simultaneous electrocardiogram, photoplethysmogram, and ballistocardiogram measurements that do not require direct contact between instruments and bare skin. These methods were applied to the design of a diagnostic chair for unconstrained heart rate and blood pressure monitoring purposes. Our methods were operationalized through capacitively coupled electrodes installed in the chair back that include high-input impedance amplifiers, and conductive textiles installed in the seat for capacitive driven-right-leg circuit configuration that is capable of recording electrocardiogram information through clothing. Photoplethysmograms were measured through clothing using seat mounted sensors with specially designed amplifier circuits that vary in light intensity according to clothing type. Ballistocardiograms were recorded using a film type transducer material, polyvinylidenefluoride (PVDF), which was installed beneath the seat cover. By simultaneously measuring signals, beat-to-beat heart rates could be monitored even when electrocardiograms were not recorded due to movement artifacts. Beat-to-beat blood pressure was also monitored using unconstrained measurements of pulse arrival time and other physiological parameters, and our experimental results indicated that the estimated blood pressure tended to coincide with actual blood pressure measurements. This study demonstrates the feasibility of our method and device for biological signal monitoring through clothing for unconstrained long-term daily health monitoring that does not require user awareness and is not limited by physical activity.
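The blood pressure estimate described above rests on the inverse relation between pulse arrival time (PAT) and blood pressure. A minimal sketch of the idea, where the slope and intercept are hypothetical per-subject calibration constants, not values from the paper:

```python
def estimate_sbp(pat_ms, slope=-0.3, intercept=180.0):
    """Illustrative linear model mapping pulse arrival time (ms) to systolic
    blood pressure (mmHg). In practice, slope and intercept would be fitted
    per subject against cuff-based reference readings; the defaults here
    are placeholders for demonstration only."""
    return slope * pat_ms + intercept

# Shorter arrival time (stiffer, higher-pressure arteries) -> higher estimate.
bp_fast = estimate_sbp(180.0)
bp_slow = estimate_sbp(220.0)
```

Beat-to-beat PAT is obtained from the interval between the ECG R-peak (or BCG fiducial) and the corresponding PPG pulse onset; the linear calibration is one common simplification among several published models.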

  1. Wireless acceleration sensor of moving elements for condition monitoring of mechanisms

    NASA Astrophysics Data System (ADS)

    Sinitsin, Vladimir V.; Shestakov, Aleksandr L.

    2017-09-01

    Comprehensive analysis of the angular and linear accelerations of moving elements (shafts, gears) allows an increase in the quality of the condition monitoring of mechanisms. However, existing tools and methods measure either linear or angular acceleration with postprocessing. This paper suggests a new construction design of an angular acceleration sensor for moving elements. The sensor is mounted on a moving element and, among other things, the data transfer and electric power supply are carried out wirelessly. In addition, the authors introduce a method for processing the received information which makes it possible to divide the measured acceleration into the angular and linear components. The design has been validated by the results of laboratory tests of an experimental model of the sensor. The study has shown that this method provides a definite separation of the measured acceleration into linear and angular components, even in noise. This research contributes an advance in the range of methods and tools for condition monitoring of mechanisms.

  2. Robust recognition of degraded machine-printed characters using complementary similarity measure and error-correction learning

    NASA Astrophysics Data System (ADS)

    Hagita, Norihiro; Sawaki, Minako

    1995-03-01

    Most conventional methods in character recognition extract geometrical features such as stroke direction, connectivity of strokes, etc., and compare them with reference patterns in a stored dictionary. Unfortunately, geometrical features are easily degraded by blurs, stains and the graphical background designs used in Japanese newspaper headlines. This noise must be removed before recognition commences, but no preprocessing method is completely accurate. This paper proposes a method for recognizing degraded characters and characters printed on graphical background designs. This method is based on the binary image feature method and uses binary images as features. A new similarity measure, called the complementary similarity measure, is used as a discriminant function. It compares the similarity and dissimilarity of binary patterns with reference dictionary patterns. Experiments are conducted using the standard character database ETL-2, which consists of machine-printed Kanji, Hiragana, Katakana, alphanumeric, and special characters. The results show that this method is much more robust against noise than the conventional geometrical feature method. It also achieves high recognition rates of over 92% for characters with textured foregrounds, over 98% for characters with textured backgrounds, over 98% for outline fonts, and over 99% for reverse contrast characters.
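A similarity measure of this family can be sketched from the standard 2x2 contingency counts of two binary patterns. The exact normalization below is an assumption for illustration; consult the original paper for the authors' precise definition:

```python
import math

def complementary_similarity(f, t):
    """Illustrative similarity of two equal-length binary patterns f and t,
    built from contingency counts: a = both 1, b = f only, c = t only,
    d = both 0. The (a*d - b*c) numerator rewards agreement in both ones
    and zeros; the normalization shown is one plausible form, not
    necessarily the paper's exact formula."""
    a = sum(1 for x, y in zip(f, t) if x == 1 and y == 1)
    b = sum(1 for x, y in zip(f, t) if x == 1 and y == 0)
    c = sum(1 for x, y in zip(f, t) if x == 0 and y == 1)
    d = sum(1 for x, y in zip(f, t) if x == 0 and y == 0)
    denom = math.sqrt((a + c) * (b + d))
    return (a * d - b * c) / denom if denom > 0 else 0.0
```

Because the measure uses both matches of ones (a) and matches of zeros (d), it degrades more gracefully than geometrical features when strokes are broken or backgrounds are textured.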

  3. Object's optical geometry measurements based on Extended Depth of Field (EDoF) approach

    NASA Astrophysics Data System (ADS)

    Szydłowski, Michał; Powałka, Bartosz; Chady, Tomasz; Waszczuk, Paweł

    2017-02-01

    The authors propose a method of using EDoF in macro inspections using bi-telecentric lenses and a specially designed experimental machine setup, allowing accurate focal distance changing. A software method is also presented that allows EDoF image reconstruction using the continuous wavelet transform (CWT). Results obtained with the proposed method are additionally compared, for reference, with measurements performed with Keyence's LJ-V Series in-line profilometer.

  4. Robust design of configurations and parameters of adaptable products

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua

    2014-03-01

    An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing the environment impact through replacement of multiple different products with single adaptable ones. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, impact of uncertainties to the functional performance measures needs to be considered in design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods to model different product configuration candidates in design and different product configuration states in operation to satisfy design requirements are introduced. At the parameter level, four types of product/operating parameters and relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration and its parameter values of the adaptable product. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.

  5. Surface topography acquisition method for double-sided near-right-angle structured surfaces based on dual-probe wavelength scanning interferometry.

    PubMed

    Zhang, Tao; Gao, Feng; Jiang, Xiangqian

    2017-10-02

    This paper proposes an approach to measure double-sided near-right-angle structured surfaces based on dual-probe wavelength scanning interferometry (DPWSI). The principle and mathematical model are discussed, and the measurement system is calibrated with a combination of standard step-height samples for the vertical calibration of both probes and a specially designed calibration artefact for building up the spatial coordinate relationship of the dual-probe measurement system. The topography of the specially designed artefact is acquired by combining the measurement results with white light scanning interferometer (WLSI) and scanning electron microscope (SEM) results for reference. The relative location of the two probes is then determined with a 3D registration algorithm. Experimental validation of the approach is provided, and the results show that the method is able to measure double-sided near-right-angle structured surfaces with nanometer vertical resolution and micrometer lateral resolution.

  6. Magnetostriction measurement by four probe method

    NASA Astrophysics Data System (ADS)

    Dange, S. N.; Radha, S.

    2018-04-01

    The present paper describes the design and setting up of an indigenously developed magnetostriction (MS) measurement setup using the four probe method at room temperature. A standard strain gauge is pasted with a special glue on the sample, and its change in resistance with applied magnetic field is measured using a Keithley nanovoltmeter and current source. An electromagnet with field up to 1.2 tesla is used to source the magnetic field. The sample is placed between the magnet poles using a self-designed and developed wooden probe stand, capable of moving in three mutually perpendicular directions. The nanovoltmeter and current source are interfaced with a PC using an RS232 serial interface, and software has been developed for logging and processing of data. Proper optimization of the measurement has been done through software to reduce the noise due to thermal emf and electromagnetic induction. The data acquired for some standard magnetic samples are presented. The sensitivity of the setup is 1 microstrain, with an error in measurement of up to 5%.
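The strain-gauge relation underlying this setup is standard: strain equals the fractional resistance change divided by the gauge factor. A minimal sketch (the resistance and gauge-factor values below are illustrative, not from the paper):

```python
def strain_from_resistance(r0_ohm, r_ohm, gauge_factor=2.0):
    """Convert a strain-gauge resistance change to strain:
    epsilon = (delta_R / R0) / GF.
    A gauge factor near 2.0 is typical of metallic foil gauges."""
    return (r_ohm - r0_ohm) / (r0_ohm * gauge_factor)

# A 120-ohm gauge reading 120.00048 ohm corresponds to 2 microstrain,
# which is why nanovolt-level resistance measurement is needed.
eps = strain_from_resistance(120.0, 120.00048)
```

At the setup's quoted 1-microstrain sensitivity, the resistance changes involved are in the sub-milliohm range, motivating the thermal-emf noise reduction described above.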

  7. In situ surface roughness measurement using a laser scattering method

    NASA Astrophysics Data System (ADS)

    Tay, C. J.; Wang, S. H.; Quan, C.; Shang, H. M.

    2003-03-01

    In this paper, the design and development of an optical probe for in situ measurement of surface roughness are discussed. Based on the light scattering principle, the probe, which consists of a laser diode, measuring lens and a linear photodiode array, is designed to capture the scattered light from a test surface with a relatively large scattering angle ϕ (=28°). This capability increases the measuring range and enhances the repeatability of the results. The coaxial arrangement that incorporates a dual-laser beam and a constant compressed air stream renders the proposed system insensitive to movement or vibration of the test surface as well as surface conditions. Tests were conducted on workpieces which were mounted on a turning machine that operates at different cutting speeds. Test specimens which underwent different machining processes and had different surface finishes were also studied. The results obtained demonstrate the feasibility of surface roughness measurement using the proposed method.

  8. Monitoring post-fire vegetation rehabilitation projects: A common approach for non-forested ecosystems

    USGS Publications Warehouse

    Wirth, Troy A.; Pyke, David A.

    2007-01-01

    Emergency Stabilization and Rehabilitation (ES&R) and Burned Area Emergency Response (BAER) treatments are short-term, high-intensity treatments designed to mitigate the adverse effects of wildfire on public lands. The federal government expends significant resources implementing ES&R and BAER treatments after wildfires; however, recent reviews have found that existing data from monitoring and research are insufficient to evaluate the effects of these activities. The purpose of this report is to: (1) document what monitoring methods are generally used by personnel in the field; (2) describe approaches and methods for post-fire vegetation and soil monitoring documented in agency manuals; (3) determine the common elements of monitoring programs recommended in these manuals; and (4) describe a common monitoring approach to determine the effectiveness of future ES&R and BAER treatments in non-forested regions. Both qualitative and quantitative methods to measure effectiveness of ES&R treatments are used by federal land management agencies. Quantitative methods are used in the field depending on factors such as funding, personnel, and time constraints. There are seven vegetation monitoring manuals produced by the federal government that address monitoring methods for (primarily) vegetation and soil attributes. These methods vary in their objectivity and repeatability. The most repeatable methods are point-intercept, quadrat-based density measurements, gap intercepts, and direct measurement of soil erosion. Additionally, these manuals recommend approaches for designing monitoring programs for the state of ecosystems or the effect of management actions. The elements of a defensible monitoring program applicable to ES&R and BAER projects that most of these manuals have in common are objectives, stratification, control areas, random sampling, data quality, and statistical analysis. 
The effectiveness of treatments can be determined more accurately if data are gathered using an approach that incorporates these six monitoring program design elements and objectives, as well as repeatable procedures to measure cover, density, gap intercept, and soil erosion within each ecoregion and plant community. Additionally, using a common monitoring program design with comparable methods, consistently documenting results, and creating and maintaining a central database for query and reporting, will ultimately allow a determination of the effectiveness of post-fire rehabilitation activities region-wide.

  9. Methods for sample size determination in cluster randomized trials

    PubMed Central

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-01-01

    Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are then required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design, followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
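The simple design-effect inflation described in the Background can be written directly. A minimal sketch using the standard formula deff = 1 + (m - 1) * ICC, where m is the cluster size and ICC is the intracluster correlation coefficient:

```python
import math

def design_effect(cluster_size, icc):
    """Design effect for equal-sized clusters: 1 + (m - 1) * ICC."""
    return 1.0 + (cluster_size - 1) * icc

def clustered_sample_size(n_individual, cluster_size, icc):
    """Inflate an individually randomized sample size by the design effect,
    rounding up to a whole number of participants."""
    return math.ceil(n_individual * design_effect(cluster_size, icc))

# 150 participants under individual randomization, clusters of 11,
# ICC = 0.1: deff = 2.0, so 300 participants are needed.
n = clustered_sample_size(150, 11, 0.1)
```

This is the baseline method the abstract starts from; the paper's contribution is the catalogue of adjustments (unequal cluster sizes, attrition, repeated measures) for when its assumptions fail.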

  10. Development of a test method against hot alkaline chemical splashes.

    PubMed

    Mäkinen, Helena; Nieminen, Kalevi; Mäki, Susanna; Siiskonen, Sirkku

    2008-01-01

    High temperature alkaline chemical liquids have caused injuries and hazardous situations in Finnish pulp manufacturing mills. There are no requirements and/or test method standards concerning protection against high temperature alkaline chemical splashes. This paper describes the test method development process to test and identify materials appropriate for hot liquid chemical hazard protection. In the first phase, the liquid was spilled through a stainless steel funnel and the protection performance was evaluated using a polyvinyl chloride (PVC) film under the test material. After several tentative improvements, a graphite crucible was used for heating and spilling the chemical, and a copper-coated K-type thermometer with 4 independent measuring areas was designed to measure the temperature under the material samples. The thermometer was designed to respond quickly so that peak temperatures could be measured. The main problem was to keep the spilled amount of chemical constant, which unfortunately resulted in significant variability in data.

  11. Design of low loss helix circuits for interference fitted and brazed circuits

    NASA Technical Reports Server (NTRS)

    Jacquez, A.

    1983-01-01

    The RF loss properties and thermal capability of brazed helix circuits and interference fitted circuits were evaluated. The objective was to produce circuit designs with minimum RF loss and maximum heat transfer. These circuits were to be designed to operate at 10 kV and at 20 GHz using γa ≈ 1.0. This represents a circuit diameter of only 0.75 millimeters. The fabrication of this size circuit and the 0.48 millimeter high support rods required considerable refinements in the assembly techniques and fixtures used on lower frequency circuits. The transition from the helices to the waveguide was designed, and the circuits were matched from 20 to 40 GHz, since the helix design is a broadband circuit and at γa = 1.0 will operate over this band. The loss measurement was a transmission measurement and therefore had two such transitions. The resulting double-ended match required tuning elements to achieve the broadband match and external E-H tuners at each end to optimize the match at each frequency where the loss measurement was made. The test method used was a substitution method, in which the test fixture was replaced by a calibrated attenuator.

  12. Optimal patch code design via device characterization

    NASA Astrophysics Data System (ADS)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement efforts, and decoding robustness against noises from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.

  13. Design of Moisture Content Detection System

    NASA Astrophysics Data System (ADS)

    Wang, W. C.; Wang, L.

    In this paper, a method for measuring the moisture content of grain is presented based on a single-chip microcomputer and a capacitive sensor. The working principle of the moisture measurement is introduced, a concentric-cylinder capacitive sensor is designed, and the signal processing circuits of the system are described in detail. The system was tested in practice, and the various factors affecting the capacitive measurement of grain moisture are discussed on the basis of these experiments. The results showed that the system has high measuring accuracy and good control capability.

  14. Measuring Tyre Rolling Noise at the Contact Patch

    NASA Astrophysics Data System (ADS)

    Kozak, P.; Matuszkova, R.; Radimsky, M.; Kudrna, J.

    2017-06-01

    This paper deals with noise generated by road traffic, focusing solely on one of its sources related to tyre/road interaction, referred to as rolling noise. The paper gives a brief overview of the various approaches and methods used to measure this particular source of road traffic noise. On the basis of this literature review, a unique device has been designed. The development of the measuring device and the possibilities of its usage are described in detail in this paper. The noise measurements obtained can then be used to design measures that increase safety and lead to better comfort on the road.

  15. The Effect of Laminar Flow on Rotor Hover Performance

    NASA Technical Reports Server (NTRS)

    Overmeyer, Austin D.; Martin, Preston B.

    2017-01-01

    The topic of laminar flow effects on hover performance is introduced with respect to some historical efforts where laminar flow was either measured or attempted. An analysis method is outlined using a combined blade-element momentum method coupled to an airfoil analysis method, which includes the full e^N transition model. The analysis results compared well with the measured hover performance, including the measured location of transition on both the upper and lower blade surfaces. The analysis method is then used to understand the upper limits of hover efficiency as a function of disk loading. The impact of laminar flow is higher at low disk loading, but significant improvement in terms of power loading appears possible even up to high disk loading approaching 20 psf. An optimum planform design equation is derived for cases of zero profile drag and finite drag levels. These results are intended to be a guide for design studies and as a benchmark to compare higher fidelity analysis results. The details of the analysis method are given to enable other researchers to use the same approach for comparison to other approaches.

  16. Feasibility analysis on integration of luminous environment measuring and design based on exposure curve calibration

    NASA Astrophysics Data System (ADS)

    Zou, Yuan; Shen, Tianxing

    2013-03-01

    Besides illuminance calculation during architectural and luminous environment design, to provide more varieties of photometric data, this paper presents the combination of luminous environment design with the SM light environment measuring system, which contains a set of experimental devices, including light information collecting and processing modules, and can offer various types of photometric data. During the research, a simulation method for calibration was introduced, which mainly includes rebuilding experiment scenes in 3ds Max Design, calibrating this computer-aided design software in the simulated environment under various typical light sources, and fitting the exposure curves of the rendered images. As the analysis proceeded, the operation sequence and points of attention during the simulated calibration were established, and connections between the Mental Ray renderer and the SM light environment measuring system were made. The paper thus provides a valuable reference for coordinating luminous environment design with the SM light environment measuring system.

  17. Residual Stress Measurement and Calibration for A7N01 Aluminum Alloy Welded Joints by Using Longitudinal Critically Refracted (LCR) Wave Transmission Method

    NASA Astrophysics Data System (ADS)

    Zhu, Qimeng; Chen, Jia; Gou, Guoqing; Chen, Hui; Li, Peng; Gao, W.

    2016-10-01

    Residual stress measurement and control are highly important for the safety of high-speed train structures and are critical for structure design. The longitudinal critically refracted (LCR) wave technique is the most widely used ultrasonic method for measuring residual stress, but its accuracy depends strongly on the test parameters, namely the flight time at the stress-free condition (t0), the stress coefficient (K), and the initial stress (σ0) of the measured material. Differences in microstructure between the weld zone, the heat-affected zone, and the base metal (BM) lead to divergence of these experimental parameters. However, the majority of researchers use the BM parameters to determine the residual stress in the other zones and ignore the initial stress (σ0) in the calibration samples. The measured residual stress in different zones is therefore often prone to large errors, which may result in miscalculation in the safe design of important structures. Accurate ultrasonic estimation of residual stress requires separating microstructural effects from the acoustoelastic effect. In this paper, the effects of initial stress and microstructure on the stress coefficient K and the stress-free flight time t0 have been studied, and the residual stress with and without the corresponding corrections was investigated. The results indicate that the residual stresses obtained with correction are more accurate for structure design.
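The acoustoelastic relation behind the LCR technique is linear in the flight-time change. A minimal sketch of the correction the paper advocates, with zone-specific parameters and the calibration sample's initial stress included (units and values below are illustrative, not from the paper):

```python
def residual_stress(flight_time_ns, t0_ns, k_mpa_per_ns, sigma0_mpa=0.0):
    """Acoustoelastic estimate of residual stress:
    sigma = K * (t - t0) + sigma0,
    where t is the measured LCR flight time, t0 the stress-free flight
    time, K the stress coefficient, and sigma0 the initial stress of the
    calibration sample. Using K and t0 calibrated for each zone (weld,
    HAZ, base metal) rather than base-metal values throughout is the
    correction the paper argues for."""
    return k_mpa_per_ns * (flight_time_ns - t0_ns) + sigma0_mpa

# Same flight-time delay, different zone calibrations -> different stress.
sigma_weld = residual_stress(1000.5, 1000.0, k_mpa_per_ns=12.0)
sigma_bm = residual_stress(1000.5, 1000.0, k_mpa_per_ns=10.0)
```

The spread between the two calls above illustrates the error incurred when base-metal parameters are applied to the weld zone.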

  18. Design and Control of Chemical Grouting : Volume 1 - Construction Control

    DOT National Transportation Integrated Search

    1983-04-01

    This report presents the results of a laboratory and field research program investigating innovative method for design and control of chemical grouting in soils. Chemical grouting practice is reviewed and standard evaluation and measurement technique...

  19. Humidity Measurements: A Psychrometer Suitable for On-Line Data Acquisition.

    ERIC Educational Resources Information Center

    Caporaloni, Marina; Ambrosini, Roberto

    1992-01-01

    Explains the typical design, operation, and calibration of a traditional psychrometer. Presents the method utilized for this class project with design considerations, calibration techniques, remote data sensing schematic, and specifics of the implementation process. (JJK)

  20. Complex amplitude reconstruction by iterative amplitude-phase retrieval algorithm with reference

    NASA Astrophysics Data System (ADS)

    Shen, Cheng; Guo, Cheng; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun

    2018-06-01

    Multi-image iterative phase retrieval methods have been successfully applied in many research fields due to their simple but efficient implementation. However, there is a mismatch between the measurement of the first, long imaging distance and the subsequent intervals. In this paper, an amplitude-phase retrieval algorithm with reference is put forward that requires no additional measurements or prior knowledge, eliminating the need to measure the first imaging distance. With a designed update formula, it significantly raises the convergence speed and the reconstruction fidelity, especially in phase retrieval. Its superiority over the original amplitude-phase retrieval (APR) method is validated by numerical analysis and experiments. Furthermore, it provides a conceptual design for a compact holographic image sensor, which can achieve numerical refocusing easily.

  1. Technological Literacy for Students Aged 6-18: A New Method for Holistic Measuring of Knowledge, Capabilities, Critical Thinking and Decision-Making

    ERIC Educational Resources Information Center

    Avsec, Stanislav; Jamšek, Janez

    2016-01-01

    Technological literacy is identified as a vital achievement of technology- and engineering-intensive education. It guides the design of technology and technical components of educational systems and defines competitive employment in technological society. Existing methods for measuring technological literacy are incomplete or complicated,…

  2. An improved design method of a tuned mass damper for an in-service footbridge

    NASA Astrophysics Data System (ADS)

    Shi, Weixing; Wang, Liangkun; Lu, Zheng

    2018-03-01

    The tuned mass damper (TMD) has a wide range of applications in the vibration control of footbridges. However, the traditional engineering design method may lead to a mistuned TMD. In this paper, an improved TMD design method based on model updating is proposed. Firstly, the original finite element model (FEM) is studied and the natural characteristics of the in-service or newly built footbridge are identified by field test; the original FEM is then updated. The TMD is designed according to the updated FEM and optimized according to simulations of its vibration control effect. Finally, the installation and field measurement of the TMD are carried out. The improved design method can be applied to both in-service and newly built footbridges, and this paper illustrates it with an engineering example. The frequency identification results of the field test and the original FEM show a relatively large difference between them. The TMD designed according to the updated FEM has a better vibration control effect than the TMD designed according to the original FEM. The site test results show that the TMD is effective in controlling human-induced vibrations.
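Once the updated FEM yields a reliable target frequency, the TMD parameters themselves are commonly initialized from the classical Den Hartog tuning rules for an undamped primary structure under harmonic excitation. A sketch of that starting point (the paper's own optimization may differ):

```python
import math

def den_hartog_tmd(mass_ratio):
    """Classical Den Hartog optimal TMD tuning for an undamped primary
    structure under harmonic forcing:
      frequency ratio  f_opt    = 1 / (1 + mu)
      TMD damping      zeta_opt = sqrt(3 mu / (8 (1 + mu)^3))
    where mu is the TMD-to-modal-mass ratio. These give the initial
    design before model-updating and simulation refine it."""
    mu = mass_ratio
    freq_ratio = 1.0 / (1.0 + mu)
    damping_ratio = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return freq_ratio, damping_ratio

# A 2% mass ratio gives a TMD tuned ~2% below the bridge frequency
# with about 8% damping.
f_opt, zeta_opt = den_hartog_tmd(0.02)
```

Because f_opt multiplies the identified bridge frequency, an error in the original FEM's frequency propagates directly into the TMD tuning, which is exactly the mistuning the model-updating step is meant to prevent.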

  3. Silt fences: An economical technique for measuring hillslope soil erosion

    Treesearch

    Peter R. Robichaud; Robert E. Brown

    2002-01-01

    Measuring hillslope erosion has historically been a costly, time-consuming practice. An easy to install low-cost technique using silt fences (geotextile fabric) and tipping bucket rain gauges to measure onsite hillslope erosion was developed and tested. Equipment requirements, installation procedures, statistical design, and analysis methods for measuring hillslope...

  4. A method for the measurement of extremely feeble torques on massive bodies.

    NASA Technical Reports Server (NTRS)

    Boyle, J. C.; Greyerbiehl, J. M.

    1966-01-01

    Single-axis meter design and development for measuring feeble torques on massive bodies, discussing calibration, testing results, evaluation of static dipole moments and spacecraft spin-rate control moments

  5. Review of methods for measuring β-cell function: Design considerations from the Restoring Insulin Secretion (RISE) Consortium.

    PubMed

    Hannon, Tamara S; Kahn, Steven E; Utzschneider, Kristina M; Buchanan, Thomas A; Nadeau, Kristen J; Zeitler, Philip S; Ehrmann, David A; Arslanian, Silva A; Caprio, Sonia; Edelstein, Sharon L; Savage, Peter J; Mather, Kieren J

    2018-01-01

    The Restoring Insulin Secretion (RISE) study was initiated to evaluate interventions to slow or reverse the progression of β-cell failure in type 2 diabetes (T2D). To design the RISE study, we undertook an evaluation of methods for measurement of β-cell function and changes in β-cell function in response to interventions. In the present paper, we review approaches for measurement of β-cell function, focusing on methodologic and feasibility considerations. Methodologic considerations included: (1) the utility of each technique for evaluating key aspects of β-cell function (first- and second-phase insulin secretion, maximum insulin secretion, glucose sensitivity, incretin effects) and (2) tactics for incorporating a measurement of insulin sensitivity in order to adjust insulin secretion measures for insulin sensitivity appropriately. Of particular concern were the capacity to measure β-cell function accurately in those with poor function, as is seen in established T2D, and the capacity of each method for demonstrating treatment-induced changes in β-cell function. Feasibility considerations included: staff burden, including time and required methodological expertise; participant burden, including time and number of study visits; and ease of standardizing methods across a multicentre consortium. After this evaluation, we selected a 2-day measurement procedure, combining a 3-hour 75-g oral glucose tolerance test and a 2-stage hyperglycaemic clamp procedure, augmented with arginine. © 2017 John Wiley & Sons Ltd.

  6. A method for measuring the local gas pressure within a gas-flow stage in situ in the transmission electron microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colby, Robert J.; Alsem, Daan H.; Liyu, Andrey V.

    2015-06-01

The development of environmental transmission electron microscopy (TEM) has enabled in situ experiments in a gaseous environment with high resolution imaging and spectroscopy. Addressing scientific challenges in areas such as catalysis, corrosion, and geochemistry can require pressures much higher than the ~20 mbar achievable with a differentially pumped, dedicated environmental TEM. Gas flow stages, in which the environment is contained between two semi-transparent thin membrane windows, have been demonstrated at pressures of several atmospheres. While this constitutes significant progress towards operando measurements, the design of many current gas flow stages is such that the pressure at the sample cannot necessarily be directly inferred from the pressure differential across the system. Small differences in the setup and design of the gas flow stage can lead to very different sample pressures. We demonstrate a method for measuring the gas pressure directly, using a combination of electron energy loss spectroscopy and TEM imaging. This method requires only two energy filtered TEM images, limiting the measurement time to a few seconds, and can be performed during an ongoing experiment at the region of interest. This approach provides a means to ensure reproducibility between different experiments, and even between very differently designed gas flow stages.

  7. Design verification of large time constant thermal shields for optical reference cavities.

    PubMed

    Zhang, J; Wu, W; Shi, X H; Zeng, X Y; Deng, K; Lu, Z H

    2016-02-01

In order to achieve high frequency stability in ultra-stable lasers, the Fabry-Pérot reference cavities must be placed inside vacuum chambers with large thermal time constants to reduce the sensitivity to external temperature fluctuations. Currently, the determination of thermal time constants of vacuum chambers is based either on theoretical calculation or on time-consuming experiments. The first method applies only to simple systems, while the second requires considerable time to try out different designs. To overcome these limitations, we present thermal time constant simulation using finite element analysis (FEA) based on complete vacuum chamber models and verify the results with measured time constants. We measure the thermal time constants using ultra-stable laser systems and a frequency comb. The thermal expansion coefficients of the optical reference cavities are precisely measured to reduce the measurement error of the time constants. The simulation results and the experimental results agree very well. With this knowledge, we simulate several simplified design models using FEA to obtain larger vacuum thermal time constants at room temperature, taking into account vacuum pressure, shielding layers, and support structure. We adopt the Taguchi method for shielding layer optimization and demonstrate that layer material and layer number dominate the contributions to the thermal time constant, compared with layer thickness and layer spacing.
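The time-constant extraction described above can be illustrated with a minimal sketch: for a single-pole thermal response T(t) = T_f + (T_0 − T_f)·exp(−t/τ), the constant τ follows from a log-linear least-squares fit of ln|T − T_f| against time. This is a generic illustration, not the authors' FEA or measurement code; the function name and example values are assumptions.

```python
import math

def time_constant(times, temps, t_final):
    """Estimate tau for T(t) = T_f + (T_0 - T_f) * exp(-t / tau)
    by fitting ln|T(t) - T_f| = const - t / tau with least squares."""
    ys = [math.log(abs(T - t_final)) for T in temps]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -1.0 / slope  # slope is -1/tau

# Synthetic cooling curve with tau = 10 (arbitrary time units),
# settling from 35 degrees toward a 25-degree ambient:
ts = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
Ts = [25.0 + 10.0 * math.exp(-t / 10.0) for t in ts]
tau = time_constant(ts, Ts, t_final=25.0)  # recovers ~10
```

In practice the final temperature T_f must itself be estimated (or measured after full settling), which is one reason such experiments are time-consuming.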

  8. Discharge rate measurements in a canal using radiotracer methods.

    PubMed

    Pant, H J; Goswami, Sunil; Biswal, Jayashree; Samantray, J S; Sharma, V K

    2016-06-01

Discharge rates of water were measured in a canal using radiotracer methods, with the objective of validating the efficacy of the Concrete Volute Pumps (CVPs) installed at various pumping stations along the canal. Pulse velocity and dilution methods were applied to measure the discharge rates using iodine-131 as a radiotracer. The discharge rate measured in one section of the canal using the pulse velocity method was found to be 22.5 m³/s, whereas the discharge rates measured using the dilution method in four different sections of the canal varied from 20.27 to 20.62 m³/s with a single CVP in operation. The standard error in discharge rate measurements using the dilution method ranged from ±1.1 to ±1.8%. The experimentally measured values of the discharge rate were in good agreement with the design value (20 m³/s), thus validating the performance of the CVPs used in the canal. Copyright © 2016 Elsevier Ltd. All rights reserved.
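The mass balance behind constant-rate injection dilution gauging, as used in studies like this one, can be sketched as follows. This is an illustrative sketch only, not the authors' procedure; the function name and the example flow and concentration values are assumptions chosen to land near the canal's ~20 m³/s design value.

```python
def discharge_by_dilution(q_inj, c_inj, c_meas, c_background=0.0):
    """Constant-rate injection dilution gauging.

    Tracer injected at rate q_inj with concentration c_inj mixes fully
    into the stream; the steady downstream plateau concentration c_meas
    gives the discharge Q from the tracer mass balance:
        q_inj * c_inj + Q * c_background = (Q + q_inj) * c_meas
    """
    return q_inj * (c_inj - c_meas) / (c_meas - c_background)

# Hypothetical values: 5e-5 m^3/s of tracer solution whose activity
# concentration is ~410,000x the downstream plateau reading.
flow = discharge_by_dilution(q_inj=5e-5, c_inj=2.05e7, c_meas=50.0)
# flow is approximately 20.5 m^3/s
```

The dilution method needs only concentrations and the injection rate, which is why it is attractive where cross-section geometry is hard to survey.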

  9. Quantitative imaging biomarkers: a review of statistical methods for technical performance assessment.

    PubMed

    Raunig, David L; McShane, Lisa M; Pennello, Gene; Gatsonis, Constantine; Carson, Paul L; Voyvodic, James T; Wahl, Richard L; Kurland, Brenda F; Schwarz, Adam J; Gönen, Mithat; Zahlmann, Gudrun; Kondratovich, Marina V; O'Donnell, Kevin; Petrick, Nicholas; Cole, Patricia E; Garra, Brian; Sullivan, Daniel C

    2015-02-01

    Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in designs, analysis method, and metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult or not possible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance with technical, radiological, and statistical experts developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of quantitative imaging biomarker performance studies so that results from multiple studies can be compared, contrasted, or combined. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  10. Clinical tooth preparations and associated measuring methods: a systematic review.

    PubMed

    Tiu, Janine; Al-Amleh, Basil; Waddell, J Neil; Duncan, Warwick J

    2015-03-01

    The geometries of tooth preparations are important features that aid in the retention and resistance of cemented complete crowns. The clinically relevant values and the methods used to measure these are not clear. The purpose of this systematic review was to retrieve, organize, and critically appraise studies measuring clinical tooth preparation parameters, specifically the methodology used to measure the preparation geometry. A database search was performed in Scopus, PubMed, and ScienceDirect with an additional hand search on December 5, 2013. The articles were screened for inclusion and exclusion criteria and information regarding the total occlusal convergence (TOC) angle, margin design, and associated measuring methods were extracted. The values and associated measuring methods were tabulated. A total of 1006 publications were initially retrieved. After removing duplicates and filtering by using exclusion and inclusion criteria, 983 articles were excluded. Twenty-three articles reported clinical tooth preparation values. Twenty articles reported the TOC, 4 articles reported margin designs, 4 articles reported margin angles, and 3 articles reported the abutment height of preparations. A variety of methods were used to measure these parameters. TOC values seem to be the most important preparation parameter. Recommended TOC values have increased over the past 4 decades from an unachievable 2- to 5-degree taper to a more realistic 10 to 22 degrees. Recommended values are more likely to be achieved under experimental conditions if crown preparations are performed outside of the mouth. We recommend that a standardized measurement method based on the cross sections of crown preparations and standardized reporting be developed for future studies analyzing preparation geometry. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  11. Design and fabrication of a real-time measurement system for the capsaicinoid content of Korean red pepper (Capsicum annuum L.) powder by visible and near-infrared spectroscopy

    USDA-ARS?s Scientific Manuscript database

    This research aims to design and fabricate a system to measure the capsaicinoid content of red pepper powder in a non-destructive and rapid method through visible and near infrared spectroscopy (VNIR). The developed system scans a well-leveled powder surface continuously to minimize the influence of...

  12. The application of measurement techniques to track flutter testing

    NASA Technical Reports Server (NTRS)

    Roglin, H. R.

    1975-01-01

    The application is discussed of measurement techniques to captive flight flutter tests at the Supersonic Naval Ordnance Research Track (SNORT), U. S. Naval Ordnance Test Station, China Lake, California. The high-speed track, by its ability to prove the validity of design and to accurately determine the actual margin of safety, offers a unique method of flutter testing for the aircraft design engineer.

  13. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating, and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost, and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one-year period. Fourth, care must be taken in designing the cost and constraint expressions used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the overall conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.

  14. Comparison of EPA Method 1615 RT-qPCR Assays in Standard and Kit Format

    EPA Science Inventory

    EPA Method 1615 contains protocols for measuring enterovirus and norovirus by reverse transcription quantitative polymerase chain reaction. A commercial kit based upon these protocols was designed and compared to the method's standard approach. Reagent grade, secondary effluent, ...

  15. Evaluation of bearing capacity of piles from cone penetration test data.

    DOT National Transportation Integrated Search

    2007-12-01

    A statistical analysis and ranking criteria were used to compare the CPT methods and the conventional alpha design method. Based on the results, the de Ruiter/Beringen and LCPC methods showed the best capability in predicting the measured load carryi...

  16. Statistical aspects of quantitative real-time PCR experiment design.

    PubMed

    Kitchen, Robert R; Kubista, Mikael; Tichopad, Ales

    2010-04-01

    Experiments using quantitative real-time PCR to test hypotheses are limited by technical and biological variability; we seek to minimise sources of confounding variability through optimum use of biological and technical replicates. The quality of an experiment design is commonly assessed by calculating its prospective power. Such calculations rely on knowledge of the expected variances of the measurements of each group of samples and the magnitude of the treatment effect; the estimation of which is often uninformed and unreliable. Here we introduce a method that exploits a small pilot study to estimate the biological and technical variances in order to improve the design of a subsequent large experiment. We measure the variance contributions at several 'levels' of the experiment design and provide a means of using this information to predict both the total variance and the prospective power of the assay. A validation of the method is provided through a variance analysis of representative genes in several bovine tissue-types. We also discuss the effect of normalisation to a reference gene in terms of the measured variance components of the gene of interest. Finally, we describe a software implementation of these methods, powerNest, that gives the user the opportunity to input data from a pilot study and interactively modify the design of the assay. The software automatically calculates expected variances, statistical power, and optimal design of the larger experiment. powerNest enables the researcher to minimise the total confounding variance and maximise prospective power for a specified maximum cost for the large study. Copyright 2010 Elsevier Inc. All rights reserved.
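The nested-variance power calculation described in this abstract can be sketched in a few lines: biological and technical variance components estimated from a pilot study determine the variance of a group mean, from which prospective power follows under a normal approximation. This is a generic illustration of the idea, not the powerNest implementation; the function name, the two-group setup, and the example variances are assumptions.

```python
import math

def _norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prospective_power(delta, sigma_bio, sigma_tech, n_bio, n_tech):
    """Approximate power to detect a mean difference `delta` between two
    groups, each with n_bio biological replicates measured with n_tech
    technical replicates (two-sided alpha = 0.05, normal approximation)."""
    # variance of one group mean under the nested (hierarchical) model:
    # technical noise averages out over n_tech, biology over n_bio
    var_mean = (sigma_bio ** 2 + sigma_tech ** 2 / n_tech) / n_bio
    se_diff = math.sqrt(2.0 * var_mean)  # SE of the between-group difference
    z_crit = 1.959963984540054           # z for two-sided alpha = 0.05
    return _norm_cdf(delta / se_diff - z_crit)

# Doubling biological replication raises power far more than adding
# technical replicates once sigma_tech^2 / n_tech is already small.
p6 = prospective_power(delta=1.0, sigma_bio=1.0, sigma_tech=0.5,
                       n_bio=6, n_tech=3)
p12 = prospective_power(delta=1.0, sigma_bio=1.0, sigma_tech=0.5,
                        n_bio=12, n_tech=3)
```

This mirrors the abstract's point that a small pilot, by separating the variance "levels", lets the large study's replication be allocated where it actually buys power.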

  17. Digital Phase-Locked Loop With Phase And Frequency Feedback

    NASA Technical Reports Server (NTRS)

    Thomas, J. Brooks

    1991-01-01

    Advanced design for digital phase-lock loop (DPLL) allows loop gains higher than those used in other designs. Divided into two major components: counterrotation processor and tracking processor. Notable features include use of both phase and rate-of-change-of-phase feedback instead of frequency feedback alone, normalized sine phase extractor, improved method for extracting measured phase, and improved method for "compressing" output rate.

  18. Electrical methods of determining soil moisture content

    NASA Technical Reports Server (NTRS)

    Silva, L. F.; Schultz, F. V.; Zalusky, J. T.

    1975-01-01

The electrical permittivity of soils is a useful indicator of soil moisture content. Two methods of determining the permittivity profile in soils are examined. A method due to Becher is found to be inapplicable to this situation. A method of Slichter, however, appears to be feasible. The results of Slichter's method are extended to the proposal of an instrument design that could measure the available soil moisture profile (percent available soil moisture as a function of depth) from a surface measurement, with an expected resolution of 10 to 20 cm.

  19. Power analysis for multivariate and repeated measures designs: a flexible approach using the SPSS MANOVA procedure.

    PubMed

    D'Amico, E J; Neilands, T B; Zambarano, R

    2001-11-01

    Although power analysis is an important component in the planning and implementation of research designs, it is often ignored. Computer programs for performing power analysis are available, but most have limitations, particularly for complex multivariate designs. An SPSS procedure is presented that can be used for calculating power for univariate, multivariate, and repeated measures models with and without time-varying and time-constant covariates. Three examples provide a framework for calculating power via this method: an ANCOVA, a MANOVA, and a repeated measures ANOVA with two or more groups. The benefits and limitations of this procedure are discussed.

  20. Three-Dimensional Measurement Applied in Design Eye Point of Aircraft Cockpits.

    PubMed

    Wang, Yanyan; Guo, Xiaochao; Liu, Qingfeng; Xiao, Huajun; Bai, Yu

    2018-04-01

    Inappropriate design eye point (DEP) will lead to nonstandard sitting postures, including nonneutral head positions and other uncomfortable sitting postures, which are high risk factors for neck pain in fighter pilots exposed to high G forces. Therefore, application of a 3D measurement method to collect data regarding eye position while in the cruising sitting posture in the aircraft cockpit to guide the design eye point has been proposed. A total of 304 male fixed wing aircraft pilots were divided into two groups. Subgroup A (N = 48) were studied to define the cruising posture during flight. Subgroup B (N = 256) were studied with Romer 3D measurement equipment to locate the cruising eye position of the pilots in a simulated cockpit. The 3D data were compared to DEP data in the current standard cockpit. According to 3D measurement, the vertical distance from the cruising eye point to the neutral seat reference point was 759 mm, which is 36 mm lower than that of the Chinese standard DEP and also lower than the U.S. military standard. The horizontal distance was 131 mm, which is 24 mm shorter than that of the Chinese standard. The current DEP data cannot fulfill the needs of fighter pilots and should be amended according to the results of the 3D measurement so that pilots can acquire the optimal cruising posture in flight. This new method has the value of practical application to investigate cockpit ergonomics and the measurement data can guide DEP design.Wang Y, Guo X, Liu Q, Xiao H, Bai Y. Three-dimensional measurement applied in design eye point of aircraft cockpits. Aerosp Med Hum Perform. 2018; 89(4):371-376.

  1. Carotid Intima-Media Thickness Studies: Study Design and Data Analysis

    PubMed Central

    Bots, Michiel L.

    2013-01-01

Background Carotid intima-media thickness (CIMT) measurements have been widely used as a primary endpoint in studies of the effects of new interventions, as an alternative to cardiovascular morbidity and mortality. There are no accepted standards for the use of CIMT measurements in intervention studies, and choices in the design and analysis of a CIMT study are generally based on experience and expert opinion. In the present review, we provide an overview of the current evidence on several aspects of the design and analysis of a CIMT study on the early effects of new interventions. Summary of Issues A balanced evaluation is provided of the carotid segments, carotid walls, and image view to be used as the CIMT study endpoint; the reading method (manual or semi-automated, continuous or in batch) to be employed; the required sample size; and the frequency of ultrasound examinations. We also discuss the preferred methods for analysing longitudinal CIMT data and address the possible impact of, and methods to deal with, missing and biologically implausible CIMT values. Conclusions Linear mixed effects models are the preferred way to analyse CIMT data and appropriately handle missing and biologically implausible CIMT values. Furthermore, we recommend extensive CIMT designs that measure CIMT at regular intervals across multiple carotid sites, as such an approach is likely to increase the success rate of CIMT intervention studies designed to evaluate the effects of new interventions on atherosclerotic burden. PMID:24324938

  2. A new approach to determine the density of liquids and solids without measuring mass and volume: introducing the solidensimeter

    NASA Astrophysics Data System (ADS)

    Kiriktaş, Halit; Şahin, Mehmet; Eslek, Sinan; Kiriktaş, İrem

    2018-05-01

This study aims to design a mechanism with which the density of any solid or liquid can be determined without measuring its mass and volume, in order to help students comprehend the concept of density more easily. The solidensimeter comprises two scaled, nested glass containers (graduated cylinders or beakers) and sufficient water. The density measurement is based on Archimedes' principle: an object fully submerged in a liquid displaces a volume of liquid equal to its own volume, while a floating object displaces a mass of liquid equal to its own mass. Using this method, the density of any solid or liquid can be determined from a simple mathematical ratio. The resulting mechanism helps students comprehend the density topic more easily; the system is easy to build, uses low-cost equipment, and determines the density of any solid or liquid without measuring its mass and volume.
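The "simple mathematical ratio" behind the method can be made explicit: for an object that floats, the floating reading gives its mass (via displaced liquid mass) and the fully submerged reading gives its volume, so their ratio is the density. The sketch below is an illustration under that assumption (it applies directly to floaters; denser objects need a different pairing of readings), and the function name and example readings are not from the paper.

```python
def density_from_displacements(v_floating, v_submerged, rho_liquid=1.0):
    """Density of a floating object from two displacement readings.

    Floating:  displaced liquid mass equals the object's mass,
               m = rho_liquid * v_floating  (Archimedes).
    Submerged: displaced volume equals the object's volume,
               V = v_submerged.
    Hence rho = m / V = rho_liquid * v_floating / v_submerged.
    """
    return rho_liquid * v_floating / v_submerged

# Hypothetical example: a block displaces 40 mL while floating and
# 50 mL when pushed fully under water (rho_water = 1.0 g/mL),
# giving a density of 0.8 g/mL.
rho = density_from_displacements(40.0, 50.0)
```

Neither a balance nor a separate volume measurement is needed, which is the point of the apparatus.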

  3. Development of a neutronics calculation method for designing commercial type Japanese sodium-cooled fast reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeda, T.; Shimazu, Y.; Hibi, K.

    2012-07-01

Under the R&D project to improve modeling accuracy for the design of fast breeder reactors, the authors are developing a neutronics calculation method for designing a large commercial-type sodium-cooled fast reactor. The calculation method is established by taking into account the special features of the reactor, such as the use of annular fuel pellets, inner duct tubes in large fuel assemblies, and the large core. Verification, validation, and uncertainty quantification (V&V&UQ) of the calculation method is being performed using measured data from the prototype FBR Monju. The results of this project will be used in the design and analysis of the commercial-type demonstration FBR, known as the Japanese Sodium Fast Reactor (JSFR). (authors)

  4. Measurement in Physical Education. 5th Edition.

    ERIC Educational Resources Information Center

    Mathews, Donald K.

    Concepts of measurement in physical education are presented in this college-level text to enable the preservice physical education major to develop skills in determining pupil status, designing effective physical activity programs, and measuring student progress. Emphasis is placed upon discussion of essential statistical methods, test…

  5. A Standardized Mean Difference Effect Size for Single Case Designs

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Pustejovsky, James E.; Shadish, William R.

    2012-01-01

    Single case designs are a set of research methods for evaluating treatment effects by assigning different treatments to the same individual and measuring outcomes over time and are used across fields such as behavior analysis, clinical psychology, special education, and medicine. Emerging standards for single case designs have focused attention on…

  6. A Standardized Mean Difference Effect Size for Multiple Baseline Designs across Individuals

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Pustejovsky, James E.; Shadish, William R.

    2013-01-01

Single-case designs are a class of research methods for evaluating treatment effects by measuring outcomes repeatedly over time while systematically introducing different conditions (e.g., treatment and control) to the same individual. The designs are used across fields such as behavior analysis, clinical psychology, special education, and…

  7. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng

    2015-01-01

    In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
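The design criterion described above, choosing sampling locations that maximize expected relative entropy (information gain), has a closed form in the linear-Gaussian special case, which makes for a compact illustration. The sketch below is a toy version under that assumption, not the authors' surrogate-accelerated MCMC approach; the function names, the scalar-parameter setup, and the Gaussian sensitivity profile are all illustrative.

```python
import math

def expected_info_gain(sens, sigma_prior, sigma_noise):
    """KL divergence (relative entropy) from prior to posterior for one
    linear-Gaussian measurement y = sens * theta + noise, where it has
    the closed form 0.5 * ln(prior_var / posterior_var)."""
    post_var = 1.0 / (1.0 / sigma_prior ** 2 + sens ** 2 / sigma_noise ** 2)
    return 0.5 * math.log(sigma_prior ** 2 / post_var)

def best_location(candidates, sensitivity, sigma_prior=1.0, sigma_noise=0.1):
    # pick the sampling location whose measurement is expected to be
    # most informative about the unknown source parameter
    return max(candidates,
               key=lambda x: expected_info_gain(sensitivity(x),
                                                sigma_prior, sigma_noise))

# Toy plume: concentration sensitivity to the source strength peaks at
# x = 3, so the optimal well among x = 0..5 is at x = 3.
loc = best_location(range(6), lambda x: math.exp(-(x - 3) ** 2))
```

In the general nonlinear case the expected gain has no closed form, which is why the paper resorts to Monte Carlo estimates over a sparse-grid surrogate of the transport model.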

  8. Methodology on Investigating the Influences of Automated Material Handling System in Automotive Assembly Process

    NASA Astrophysics Data System (ADS)

    Saffar, Seha; Azni Jafar, Fairul; Jamaludin, Zamberi

    2016-02-01

A case study was selected as the method for collecting data in an actual industry situation. The study aimed to assess the influences of an automated material handling system in the automotive industry by proposing a new integration system design through simulation and analyzing the system's significant effects and influences. The modeling and simulation tools are the CAD packages Delmia and Quest. The preliminary data gathering in Phase 1 collects all related data from the actual industry situation and is expected to produce guidelines and limitations for designing the new integration system. In Phase 2, a design concept is developed using the 10 principles of design consideration for manufacturing. A full factorial design of experiments is used to compare the measured performance of the integration system with that of the current system in the case study. From the experimental results, an ANOVA is performed to study the measured performance. It is thus expected that the influences of the improvements made to the system can be observed.

  9. An analytic model for footprint dispersions and its application to mission design

    NASA Technical Reports Server (NTRS)

    Rao, J. R. Jagannatha; Chen, Yi-Chao

    1992-01-01

    This is the final report on our recent research activities that are complementary to those conducted by our colleagues, Professor Farrokh Mistree and students, in the context of the Taguchi method. We have studied the mathematical model that forms the basis of the Simulation and Optimization of Rocket Trajectories (SORT) program and developed an analytic method for determining mission reliability with a reduced number of flight simulations. This method can be incorporated in a design algorithm to mathematically optimize different performance measures of a mission, thus leading to a robust and easy-to-use methodology for mission planning and design.

  10. Development of an ultra high performance liquid chromatography method for determining triamcinolone acetonide in hydrogels using the design of experiments/design space strategy in combination with process capability index.

    PubMed

    Oliva, Alexis; Monzón, Cecilia; Santoveña, Ana; Fariña, José B; Llabrés, Matías

    2016-07-01

An ultra high performance liquid chromatography method was developed and validated for the quantitation of triamcinolone acetonide in an injectable ophthalmic hydrogel, in order to determine the contribution of analytical method error to the content uniformity measurement. During the development phase, the design of experiments/design space strategy was used. For this, the free R environment was used as an alternative to commercial software, providing a fast, efficient tool for data analysis. The process capability index was used to find the permitted level of variation for each factor and to define the design space. All these aspects were analyzed and discussed under different experimental conditions using the Monte Carlo simulation method. Subsequently, a pre-study validation procedure was performed in accordance with the International Conference on Harmonisation guidelines. The validated method was applied to the determination of uniformity of dosage units, and the sources of variability (inhomogeneity and analytical method error) were analyzed on the basis of the overall uncertainty. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
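The process capability index used in this abstract to bound the permitted variation of each factor is the standard Cpk statistic, which compares the distance from the process mean to the nearer specification limit against three standard deviations. The one-liner below is a generic textbook definition, not the authors' code; the example limits are hypothetical.

```python
def cpk(mean, std, lsl, usl):
    """Process capability index Cpk: the distance from the process mean
    to the nearer specification limit, in units of 3 standard deviations.
    Cpk >= 1 means the +/-3-sigma spread fits inside the limits."""
    return min(usl - mean, mean - lsl) / (3.0 * std)

# Hypothetical assay: mean response 10.0, sd 0.5, spec limits 8.5-11.5.
# The mean sits 1.5 units (= 3 sigma) from each limit, so Cpk = 1.0.
capability = cpk(mean=10.0, std=0.5, lsl=8.5, usl=11.5)
```

In a design-space context, factor settings are accepted while the predicted Cpk of the response stays above a chosen threshold.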

  11. Fiber optic micro sensor for the measurement of tendon forces

    PubMed Central

    2012-01-01

    A fiber optic sensor developed for the measurement of tendon forces was designed, numerically modeled, fabricated, and experimentally evaluated. The sensor incorporated fiber Bragg gratings and micro-fabricated stainless steel housings. A fiber Bragg grating is an optical device that is spectrally sensitive to axial strain. Stainless steel housings were designed to convert radial forces applied to the housing into axial forces that could be sensed by the fiber Bragg grating. The metal housings were fabricated by several methods including laser micromachining, swaging, and hydroforming. Designs are presented that allow for simultaneous temperature and force measurements as well as for simultaneous resolution of multi-axis forces. The sensor was experimentally evaluated by hydrostatic loading and in vitro testing. A commercial hydraulic burst tester was used to provide uniform pressures on the sensor in order to establish the linearity, repeatability, and accuracy characteristics of the sensor. The in vitro experiments were performed in excised tendon and in a dynamic gait simulator to simulate biological conditions. In both experimental conditions, the sensor was found to be a sensitive and reliable method for acquiring minimally invasive measurements of soft tissue forces. Our results suggest that this sensor will prove useful in a variety of biomechanical measurements. PMID:23033868

  12. Measurements and simulations of the optical gain and anti-reflection coating modal reflectivity in quantum cascade lasers with multiple active region stacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bidaux, Y., E-mail: yves.bidaux@alpeslasers.ch; Alpes Lasers SA, 1-3 Maximilien-de-Meuron, CH-2000 Neuchatel; Terazzi, R.

    2015-09-07

    We report spectrally resolved gain measurements and simulations for quantum cascade lasers (QCLs) composed of multiple heterogeneous stacks designed for broadband emission in the mid-infrared. The measurement method is first demonstrated on a reference single active region QCL based on a double-phonon resonance design emitting at 7.8 μm. It is then extended to a three-stack active region based on bound-to-continuum designs with a broadband emission range from 7.5 to 10.5 μm. Tight agreement is found with simulations based on a density matrix model, which implements exhaustive microscopic scattering and dephasing sources with virtually no fitting parameters. The quantitative agreement is furthermore assessed by measuring gain coefficients obtained by studying the dependence of the threshold current on the cavity length. These results are particularly relevant for understanding fundamental gain mechanisms in complex semiconductor heterostructure QCLs and for moving towards efficient gain engineering. Finally, the method is extended to the measurement of the modal reflectivity of an anti-reflection coating deposited on the front facet of the broadband QCL.

  13. Problem of unity of measurements in ensuring safety of hydraulic structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kheifits, V.Z.; Markov, A.I.; Braitsev, V.V.

    1994-07-01

    Ensuring the safety of hydraulic structures (HSs) is not only an industry but also a national and global concern, since failure of large water-impounding structures can entail large losses of life and enormous material losses related to destruction downstream. The main information on the degree of safety of a structure is obtained by comparing information about the actual state of the structure, obtained from measurements in key zones, with the state predicted by the design model used when designing the structure for the given conditions of external actions. Numerous string-type transducers, from hundreds to thousands, are placed in large HSs. This system of transducers monitors the stress-strain, seepage, and thermal regimes. These measurements are supported by the State Standards Committee, which certifies the accuracy of the checking methods. To improve the instrumental monitoring of HSs, the authors recommend: calibration of methods and means of reliable diagnosis for each measuring channel in the HS, improvements to reduce measurement error, support for the system software programs, and development of appropriate standards for the design and examination of HSs.

  14. Confocal laser induced fluorescence with comparable spatial localization to the conventional method

    NASA Astrophysics Data System (ADS)

    Thompson, Derek S.; Henriquez, Miguel F.; Scime, Earl E.; Good, Timothy N.

    2017-10-01

    We present measurements of ion velocity distributions obtained by laser induced fluorescence (LIF) using a single viewport in an argon plasma. A patent pending design, which we refer to as the confocal fluorescence telescope, combines large objective lenses with a large central obscuration and a spatial filter to achieve high spatial localization along the laser injection direction. Models of the injection and collection optics of the two assemblies are used to provide a theoretical estimate of the spatial localization of the confocal arrangement, which is taken to be the full width at half maximum of the spatial optical response. The new design achieves approximately 1.4 mm localization at a focal length of 148.7 mm, improving on previously published designs by an order of magnitude and approaching the localization achieved by the conventional method. The confocal method, however, does so without requiring a pair of separated, perpendicular optical paths. The confocal technique therefore eases the two window access requirement of the conventional method, extending the application of LIF to experiments where conventional LIF measurements have been impossible or difficult, or where multiple viewports are scarce.

  15. S-F graphic representation analysis of photoelectric facula focometer Porro-plate glass

    NASA Astrophysics Data System (ADS)

    Tong, Yilin; Han, Xuecai

    2016-10-01

    Focal length measurement of an optical system is usually based on the magnification method, in which a Porro plate is used as the base element of the focometer. Based on an accuracy analysis of the magnification method for measuring the focal length of an optical lens, an expression relating the ruling span of the Porro plate to the focal length of the measured optical system was deduced, an efficient method for producing the S-F graph with AutoCAD was developed, the principle for selecting focometer parameters was analyzed, and applied examples of designing Porro plates from the S-F figure were obtained.

  16. MEMS piezoresistive cantilever for the direct measurement of cardiomyocyte contractile force

    NASA Astrophysics Data System (ADS)

    Matsudaira, Kenei; Nguyen, Thanh-Vinh; Hirayama Shoji, Kayoko; Tsukagoshi, Takuya; Takahata, Tomoyuki; Shimoyama, Isao

    2017-10-01

    This paper reports on a method to directly measure the contractile forces of cardiomyocytes using MEMS (micro electro mechanical systems)-based force sensors. The fabricated sensor chip consists of piezoresistive cantilevers that can measure contractile forces with high frequency (several tens of kHz) and high sensing resolution (less than 0.1 nN). Moreover, the proposed method does not require a complex observation system or image processing, which are necessary in conventional optical-based methods. This paper describes the design, fabrication, and evaluation of the proposed device and demonstrates the direct measurements of contractile forces of cardiomyocytes using the fabricated device.

  17. Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu

    2016-11-01

    Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to the target. The modulation frequency is swept, the frequency values at which the transmitted and received signals are in phase are recorded, and the distance is calculated from these values. This method achieves much higher theoretical accuracy than the phase-difference method because it avoids direct phase measurement. However, the actual accuracy of the system is limited, since additional phase retardation arises in the measuring optical path when optical elements are imperfectly manufactured or installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the light path is built based on Jones matrix theory, and the additional phase retardation of the λ/4 wave plate and the PBS, together with their impact on measuring performance, is analyzed. Theoretical results show that the wave plate's azimuth error dominates the limitation of ranging accuracy. According to the system design requirements, element tolerances and an error-correcting method for the system are proposed; a ranging system was built and a ranging experiment performed. Experimental results show that, with the proposed tolerances, the system satisfies the accuracy requirement. The present work provides guidance for further research on system design and error distribution.
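
    The ranging principle can be illustrated with a short sketch: the transmitted and received signals are in phase whenever the round trip holds an integer number of modulation periods, i.e. f_m = m·c/(2d), so two adjacent in-phase frequencies are separated by c/(2d). This is a minimal sketch of that relation, not the paper's implementation.

```python
# Sketch of the distance calculation behind frequency-sweep ranging.
# Adjacent in-phase modulation frequencies differ by c / (2 d), which
# yields the distance without any direct phase measurement.

C = 299_792_458.0  # speed of light, m/s

def distance_from_in_phase(f1_hz: float, f2_hz: float) -> float:
    """Distance from two adjacent in-phase modulation frequencies."""
    return C / (2.0 * abs(f2_hz - f1_hz))

# Example: adjacent in-phase frequencies of 100 MHz and 101 MHz imply a
# round-trip path to a target roughly 150 m away.
d = distance_from_in_phase(100e6, 101e6)
```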

  18. Analytical methods for quantifying greenhouse gas flux in animal production systems.

    PubMed

    Powers, W; Capelari, M

    2016-08-01

    Given increased interest by all stakeholders in better understanding the contribution of animal agriculture to climate change, it is important that appropriate methodologies be used when measuring greenhouse gas (GHG) emissions from animal agriculture. Similarly, a fundamental understanding of the differences between methods is necessary to appropriately compare data collected using different approaches and to design meaningful experiments. Sources of carbon dioxide, methane, and nitrous oxide emissions in animal production systems include the animals, feed storage areas, manure deposition and storage areas, and feed and forage production fields. These three gases make up the primary GHG emissions from animal feeding operations. Each of the GHGs may be more or less prominent from each emitting source; similarly, the animal species dictates the importance of methane emissions from the animals themselves. Measures of GHG flux from animals are often made using respiration chambers, head boxes, tracer gas techniques, or in vitro gas production techniques. In some cases, a combination of techniques is used (i.e., head boxes in combination with tracer gas). The prominent methods for measuring GHG emissions from housing include the use of tracer gas techniques, or direct or indirect ventilation measures coupled with concentration measures of the gases of interest. Methods for collecting and measuring GHG emissions from manure storage and/or production lots include the use of downwind measures, often using photoacoustic or open-path Fourier transform infrared spectroscopy combined with modeling techniques, or the use of static chambers or flux hood methods. Similar methods can be deployed for determining GHG emissions from fields. Each method identified has its own benefits and challenges for the stated application.
    Considerations for use include the intended goal, equipment investment and maintenance, the frequency and duration of sampling needed to achieve the desired representativeness of emissions over time, the accuracy and precision of the method, and environmental influences on the method. In the absence of a perfect method for all situations, full knowledge of the advantages and disadvantages of each method is extremely important during the development of the experimental design and the interpretation of results. The selection of a suitable technique depends on the animal production system, resource availability, and the objective of the measurements.
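
    As one concrete example, the static chamber method mentioned above infers flux from the concentration rise rate inside a closed chamber of known volume and footprint. A minimal sketch, with the ideal-gas conversion and the example numbers as assumptions for illustration:

```python
# Sketch of the static (closed) chamber flux calculation: gas flux is the
# concentration rise rate, converted to moles with the ideal gas law and
# scaled by the chamber volume-to-area ratio. Numbers are illustrative.

def chamber_flux_umol_m2_s(dC_dt_ppm_s: float, volume_m3: float,
                           area_m2: float, temp_k: float = 298.15,
                           pressure_pa: float = 101325.0) -> float:
    """Flux (umol m^-2 s^-1) from dC/dt in ppm/s inside a closed chamber."""
    R = 8.314462618  # gas constant, J mol^-1 K^-1
    mol_air_per_m3 = pressure_pa / (R * temp_k)
    # ppm/s -> mol of gas per m^3 of air per second, scaled by V/A,
    # then converted back to micromoles.
    return dC_dt_ppm_s * 1e-6 * mol_air_per_m3 * volume_m3 / area_m2 * 1e6

# Example: 0.05 ppm/s rise in a 30 L chamber over a 0.1 m^2 footprint.
flux = chamber_flux_umol_m2_s(0.05, volume_m3=0.03, area_m2=0.1)
```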

  19. 10 CFR Appendix A to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Electric Refrigerators and Electric...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... the test for a unit having no defrost provisions (section 4.1). The second part is designed to capture... 10 Energy 3 2011-01-01 2011-01-01 false Uniform Test Method for Measuring the Energy Consumption... Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CONSERVATION PROGRAM FOR CONSUMER PRODUCTS Test...

  20. 40 CFR 63.11 - Control device and work practice requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... in ppmv on a wet basis, as measured for organics by Test Method 18 and measured for hydrogen and... be designed for and operated with an exit velocity less than 18.3 m/sec (60 ft/sec), except as... determined by the method specified in paragraph (b)(7)(i) of this section, equal to or greater than 18.3 m...

  1. A Rapid Method to Achieve Aero-Engine Blade Form Detection

    PubMed Central

    Sun, Bin; Li, Bing

    2015-01-01

    This paper proposes a rapid method to detect aero-engine blade form, based on the characteristics of an aero-engine blade surface. The method first deduces an inclination error model for free-form surface measurement based on the non-contact laser triangulation principle. A four-coordinate measuring system was then independently developed, a special fixture was designed according to the blade shape features, and a fast measurement path over the blade features was planned. Finally, by using the inclination error model to correct the acquired data, the measurement error caused by surface tilt is compensated; as a result, the measurement error of the Laser Displacement Sensor was less than 10 μm. Experimental verification showed that this method takes full advantage of the fast speed, high precision, and wide measuring range of optical non-contact measurement. Using a standard gauge block as the measurement reference makes the coordinate system conversion simple and practical. The method improves not only the measurement accuracy of the blade surface but also the measurement efficiency, and thus increases the value of complex surface measurement. PMID:26039420

  2. A rapid method to achieve aero-engine blade form detection.

    PubMed

    Sun, Bin; Li, Bing

    2015-06-01

    This paper proposes a rapid method to detect aero-engine blade form, based on the characteristics of an aero-engine blade surface. The method first deduces an inclination error model for free-form surface measurement based on the non-contact laser triangulation principle. A four-coordinate measuring system was then independently developed, a special fixture was designed according to the blade shape features, and a fast measurement path over the blade features was planned. Finally, by using the inclination error model to correct the acquired data, the measurement error caused by surface tilt is compensated; as a result, the measurement error of the Laser Displacement Sensor was less than 10 μm. Experimental verification showed that this method takes full advantage of the fast speed, high precision, and wide measuring range of optical non-contact measurement. Using a standard gauge block as the measurement reference makes the coordinate system conversion simple and practical. The method improves not only the measurement accuracy of the blade surface but also the measurement efficiency, and thus increases the value of complex surface measurement.

  3. International Comparison of Methane-Stabilized He-Ne Lasers

    NASA Astrophysics Data System (ADS)

    Koshelyaevskii, N. B.; Oboukhov, A.; Tatarenkov, V. M.; Titov, A. N.; Chartier, J.-M.; Felder, R.

    1981-01-01

    Two portable methane-stabilized lasers designed at the BIPM have been compared with a stationary Soviet device developed at VNIIFTRI. This comparison, which took place in June 1979, is one of a series aimed at establishing the coherence of laser wavelength and frequency measurements throughout the world. The VNIIFTRI and BIPM lasers use different methods of stabilization and have different optical and mechanical designs and laser tubes. The results of previous measurements, made at VNIIFTRI, of the most important frequency shifts for the Soviet lasers are also presented, together with a method of reproducing their frequency that leads to a precision of 1 × 10^-12.

  4. Estimation of CO2 emissions from waste incinerators: Comparison of three methods.

    PubMed

    Lee, Hyeyoung; Yi, Seung-Muk; Holsen, Thomas M; Seo, Yong-Seok; Choi, Eunhwa

    2018-03-01

    Climate-relevant CO2 emissions from waste incineration were compared using three methods: making use of CO2 concentration data, converting O2 concentration and waste characteristic data, and using a mass balance method following Intergovernmental Panel on Climate Change (IPCC) guidelines. For the first two methods, CO2 and O2 concentrations were measured continuously from 24 to 86 days. The O2 conversion method in comparison to the direct CO2 measurement method had a 4.8% mean difference in daily CO2 emissions for four incinerators where analyzed waste composition data were available. However, the IPCC method had a higher difference of 13% relative to the direct CO2 measurement method. For three incinerators using design values for waste composition, the O2 conversion and IPCC methods in comparison to the direct CO2 measurement method had mean differences of 7.5% and 89%, respectively. Therefore, the use of O2 concentration data measured for monitoring air pollutant emissions is an effective method for estimating CO2 emissions resulting from waste incineration. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Sedimentation in mountain streams: A review of methods of measurement

    USGS Publications Warehouse

    Hedrick, Lara B.; Anderson, James T.; Welsh, Stuart A.; Lin, Lian-Shin

    2013-01-01

    The goal of this review paper is to provide a list of methods and devices used to measure sediment accumulation in wadeable streams dominated by cobble and gravel substrate. Quantitative measures of stream sedimentation are useful to monitor and study anthropogenic impacts on stream biota, and stream sedimentation is measurable with multiple sampling methods. Evaluation of sedimentation can be made by measuring the concentration of suspended sediment, or turbidity, and by determining the amount of deposited sediment, or sedimentation on the streambed. Measurements of deposited sediments are more time consuming and labor intensive than measurements of suspended sediments. Traditional techniques for characterizing sediment composition in streams include core sampling, the shovel method, visual estimation along transects, and sediment traps. This paper provides a comprehensive review of methodology, devices that can be used, and techniques for processing and analyzing samples collected to aid researchers in choosing study design and equipment.

  6. Design and construction of a guarded hot plate apparatus operating down to liquid nitrogen temperature.

    PubMed

    Li, Manfeng; Zhang, Hua; Ju, Yonglin

    2012-07-01

    A double-sided guarded hot plate (GHP) apparatus is specifically designed, fabricated, and constructed for the measurement of the thermal conductivity of insulation specimens operated down to liquid nitrogen temperature (-196 °C), at controlled pressures from 0.005 Pa to 0.105 MPa. The specimens placed in this apparatus are 300 mm in diameter, with thicknesses ranging from 4 mm to 40 mm. The apparatus differs from a traditional GHP in its structure and its support and heating methods. The details of the design and construction of the hot plate, the cold plates, the suspensions, the clampings, and the vacuum chamber of the system are presented. The measurement methods for the temperatures, the input power, the meter area, and the thickness of the specimens are given. The apparatus was calibrated with Teflon plates as the sample, and the maximum deviation of thermal conductivity from published data is about 6%. The measurement uncertainties are also discussed in this paper.
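
    The working equation of a double-sided guarded hot plate can be sketched briefly: the metered heater power divides between two identical specimens, so k = Q·d/(2·A·ΔT). The numbers below are illustrative, not the paper's calibration data.

```python
# Sketch of the double-sided guarded hot plate (GHP) working equation.
# The metered heater power Q flows through two identical specimens of
# thickness d, so thermal conductivity is k = Q * d / (2 * A * dT).

import math

def ghp_conductivity(q_watts: float, thickness_m: float,
                     meter_diameter_m: float, delta_t_k: float) -> float:
    """Thermal conductivity (W m^-1 K^-1) from a double-sided GHP run."""
    area = math.pi * (meter_diameter_m / 2.0) ** 2
    return q_watts * thickness_m / (2.0 * area * delta_t_k)

# Example: 5 W through two 10 mm specimens over a 0.3 m metering
# diameter, with a 15 K drop across each specimen.
k = ghp_conductivity(5.0, 0.010, 0.3, 15.0)
```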

  7. Two Instruments for Measuring Distributions of Low-Energy Charged Particles in Space

    NASA Technical Reports Server (NTRS)

    Bader, Michel; Fryer, Thomas B.; Witteborn, Fred C.

    1961-01-01

    Current estimates indicate that the bulk of interplanetary gas consists of protons with energies between 0 and 20 keV and concentrations of 1 to 10^5 particles/cm^3. Methods and instrumentation for measuring the energy and density distribution of such a gas are considered from the standpoint of suitability for space vehicle payloads. It is concluded that electrostatic analysis of the energy distribution can provide sufficient information in initial experiments. Both magnetic and electrostatic analyzers should eventually be used. Several instruments designed and constructed at the Ames Research Center for space plasma measurements are described, together with the methods of calibration and data reduction. In particular, the instrument designed for operation on solar cell power has the following characteristics: weight, 1.1 pounds; size, 2 by 3 by 4 inches; and power consumption, 145 mW. The instrument is designed to yield information on the concentration, energy distribution, and anisotropy of ion trajectories in the 0.2 to 20 keV range.
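
    The electrostatic analysis mentioned above selects ions by energy per unit charge: particles are transmitted when E/q equals an analyzer (geometry) constant times the deflection voltage. A minimal sketch, with the analyzer constant assumed for illustration rather than taken from the Ames instrument:

```python
# Sketch of the selection rule for an electrostatic energy analyzer:
# ions pass when their energy per charge matches E/q = k * V, where k is
# a geometry-dependent analyzer constant (assumed value below).

K_ANALYZER = 5.0  # analyzer constant (assumed, dimensionless)

def pass_energy_ev(deflection_volts: float) -> float:
    """Energy per unit charge (eV) transmitted at a given plate voltage."""
    return K_ANALYZER * deflection_volts

# With this constant, sweeping the plates from 0.04 V to 4 kV would
# cover the 0.2 eV to 20 keV range quoted in the abstract.
```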

  8. Assessing the effects of employee assistance programs: a review of employee assistance program evaluations.

    PubMed

    Colantonio, A

    1989-01-01

    Employee assistance programs have grown at a dramatic rate, yet the effectiveness of these programs has been called into question. The purpose of this paper was to assess the effectiveness of employee assistance programs (EAPs) by reviewing recently published EAP evaluations. All studies evaluating EAPs published since 1975 from peer-reviewed journals in the English language were included in this analysis. Each of the articles was assessed in the following areas: (a) program description (subjects, setting, type of intervention, format), (b) evaluation design (research design, variables measured, operational methods), and (c) program outcomes. Results indicate numerous methodological and conceptual weaknesses and issues. These weaknesses included lack of controlled research designs and short time lags between pre- and post-test measures. Other problems identified are missing information regarding subjects, type of intervention, how variables are measured (operational methods), and reliability and validity of evaluation instruments. Due to the aforementioned weaknesses, positive outcomes could not be supported. Recommendations are made for future EAP evaluations.

  9. Assessing the effects of employee assistance programs: a review of employee assistance program evaluations.

    PubMed Central

    Colantonio, A.

    1989-01-01

    Employee assistance programs have grown at a dramatic rate, yet the effectiveness of these programs has been called into question. The purpose of this paper was to assess the effectiveness of employee assistance programs (EAPs) by reviewing recently published EAP evaluations. All studies evaluating EAPs published since 1975 from peer-reviewed journals in the English language were included in this analysis. Each of the articles was assessed in the following areas: (a) program description (subjects, setting, type of intervention, format), (b) evaluation design (research design, variables measured, operational methods), and (c) program outcomes. Results indicate numerous methodological and conceptual weaknesses and issues. These weaknesses included lack of controlled research designs and short time lags between pre- and post-test measures. Other problems identified are missing information regarding subjects, type of intervention, how variables are measured (operational methods), and reliability and validity of evaluation instruments. Due to the aforementioned weaknesses, positive outcomes could not be supported. Recommendations are made for future EAP evaluations. PMID:2728498

  10. Set membership experimental design for biological systems.

    PubMed

    Marvel, Skylar W; Williams, Cranos M

    2012-03-21

    Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. The practicability of our approach is illustrated with a case study. 
This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models.
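
    The bounded-error idea underlying this framework can be illustrated with a toy interval arithmetic class: uncertain quantities are intervals, arithmetic propagates the bounds, and intersecting a predicted measurement range with a candidate measurement's error bounds shows how much that measurement would shrink the consistent set. This is an illustrative sketch, not the authors' algorithm.

```python
# Toy interval (bounded-error) propagation. Each uncertain quantity is an
# interval; arithmetic propagates the bounds; intersection contracts the
# set of values consistent with a new measurement.

class Interval:
    def __init__(self, lo: float, hi: float):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def intersect(self, other):
        """Contract the bounds with a new measurement interval."""
        return Interval(max(self.lo, other.lo), min(self.hi, other.hi))

    def width(self):
        return self.hi - self.lo

# Hypothetical uncertain rate parameter k and state x (e.g. x' = -k*x).
k = Interval(0.9, 1.1)
x = Interval(1.8, 2.2)
pred = k * x                       # predicted measurement range
meas = Interval(1.9, 2.1)          # candidate measurement with error bounds
contracted = pred.intersect(meas)  # consistent set shrinks
```

    Comparing the contracted width against the predicted width for each candidate time point is one simple way to rank how informative an additional measurement would be.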

  11. Set membership experimental design for biological systems

    PubMed Central

    2012-01-01

    Background Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions The practicability of our approach is illustrated with a case study. 
This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models. PMID:22436240

  12. Development and Evaluation of a Measure of Library Automation.

    ERIC Educational Resources Information Center

    Pungitore, Verna L.

    1986-01-01

    Construct validity and reliability estimates indicate that study designed to measure utilization of automation in public and academic libraries was successful in tentatively identifying and measuring three subdimensions of level of automation: quality of hardware, method of software development, and number of automation specialists. Questionnaire…

  13. Irregular and adaptive sampling for automatic geophysic measure systems

    NASA Astrophysics Data System (ADS)

    Avagnina, Davide; Lo Presti, Letizia; Mulassano, Paolo

    2000-07-01

    In this paper a sampling method based on an irregular and adaptive strategy is described. It can be used as an automatic guide for rovers designed to explore terrestrial and planetary environments. Starting from the hypothesis that an exploratory vehicle is equipped with a payload able to acquire measurements of quantities of interest, the method can detect objects of interest from the measured points and realize adaptive sampling, describing interesting features densely while only coarsely describing the uninteresting background.
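
    A toy sketch of such an irregular/adaptive strategy: sample coarsely, and switch to a finer step wherever the measurement departs from the background, so objects of interest receive dense coverage. The field, grid, and threshold below are invented for illustration.

```python
# Toy adaptive sampling along one axis: coarse steps over background,
# fine steps where the measured field looks "interesting".

def field(x):
    # Illustrative measurement: background 0 with an object near x = 6.
    return 1.0 if 5.5 <= x <= 6.5 else 0.0

def adaptive_sample(lo, hi, coarse_step, fine_step, threshold=0.5):
    xs, i = [], lo
    while i <= hi:
        xs.append(i)
        # Refine locally when the measurement exceeds the threshold.
        step = fine_step if field(i) > threshold else coarse_step
        i += step
    return xs

samples = adaptive_sample(0.0, 10.0, coarse_step=1.0, fine_step=0.1)
```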

  14. Simple Radiowave-Based Method For Measuring Peripheral Blood Flow Project

    NASA Technical Reports Server (NTRS)

    Oliva-Buisson, Yvette J.

    2014-01-01

    The project objective is to design small radio-frequency-based flow probes for the measurement of blood flow velocity in peripheral arteries such as the femoral artery and the middle cerebral artery. The result will be the technological capability to measure peripheral blood flow rates and flow changes during various environmental stressors, such as microgravity, without contact with the individual being monitored. This technology may also lead to an easier method of detecting venous gas emboli during extravehicular activities.

  15. Quantitative data standardization of X-ray based densitometry methods

    NASA Astrophysics Data System (ADS)

    Sergunova, K. A.; Petraikin, A. V.; Petrjajkin, F. A.; Akhmad, K. S.; Semenov, D. S.; Potrakhov, N. N.

    2018-02-01

    In the present work, the design of a special liquid phantom for assessing the accuracy of quantitative densitometric data is proposed. The dependencies between the measured bone mineral density (BMD) values and the nominal values are also presented for different X-ray-based densitometry techniques. The resulting linear plots make it possible to introduce correction factors that increase the accuracy of BMD measurement by the QCT, DXA, and DECT methods, and to use them for standardization and comparison of measurements.
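
    The correction described above amounts to a linear calibration: regress the phantom's nominal BMD values against the scanner readings and use the fitted slope and intercept as correction factors. A minimal sketch with made-up numbers:

```python
# Linear calibration sketch for densitometry readings. The phantom and
# scanner values below are invented for illustration.

def fit_line(xs, ys):
    """Least-squares slope/intercept for y = a*x + b (no numpy needed)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

nominal = [100.0, 200.0, 400.0, 800.0]   # phantom BMD, mg/cm^3 (assumed)
measured = [112.0, 208.0, 396.0, 772.0]  # scanner readings (assumed)
a, b = fit_line(measured, nominal)

def correct(bmd_measured: float) -> float:
    """Map a raw scanner reading back toward the true BMD."""
    return a * bmd_measured + b
```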

  16. Dynamic gas temperature measurement system

    NASA Technical Reports Server (NTRS)

    Elmore, D. L.; Robinson, W. W.; Watkins, W. B.

    1983-01-01

    A gas temperature measurement system with a compensated frequency response of 1 kHz and the capability to operate in the exhaust of a gas turbine combustor was developed. Environmental guidelines for this measurement are presented, followed by a preliminary design of the selected measurement method. Transient thermal conduction effects were identified as important; a preliminary finite-element conduction model quantified the errors expected when conduction is neglected. A compensation method was developed to account for the effects of conduction and convection. This method was verified in analog electrical simulations and used to compensate dynamic temperature data from a laboratory combustor and a gas turbine engine. Detailed data compensations are presented. An analysis of error sources in the method was performed to derive confidence levels for the compensated data.
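
    A common first-order version of such compensation (an illustrative assumption here, not necessarily the exact model used in this work) treats the sensor as a first-order lag, so the gas temperature is reconstructed as the reading plus the sensor time constant times the reading's derivative:

```python
# First-order dynamic compensation sketch: a slow temperature sensor
# behaves like a first-order lag, so T_gas ~= T_sensor + tau * dT/dt.
# Time constant, sample rate, and test signal are assumed values.

import math

TAU = 0.05  # sensor time constant, s (assumed)
DT = 1e-4   # sample period, s

def compensate(samples):
    """Reconstruct T_gas from sensor samples via central differences."""
    out = []
    for i in range(1, len(samples) - 1):
        dTdt = (samples[i + 1] - samples[i - 1]) / (2 * DT)
        out.append(samples[i] + TAU * dTdt)
    return out

# Demo: a sensor lagging a 100 Hz gas-temperature oscillation reads a
# heavily attenuated signal; compensation restores nearly the full
# 20-degree amplitude about the 500-degree mean.
w = 2 * math.pi * 100.0
gain = 1 / math.sqrt(1 + (w * TAU) ** 2)
phase = math.atan(w * TAU)
t = [i * DT for i in range(2000)]
sensor = [500 + 20 * gain * math.sin(w * ti - phase) for ti in t]
recovered = compensate(sensor)
```

    Differentiation amplifies noise along with the signal, which is one reason the original work pairs the compensation with careful error analysis.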

  17. Using Caspar Creek flow records to test peak flow estimation methods applicable to crossing design

    Treesearch

    Peter H. Cafferata; Leslie M. Reid

    2017-01-01

    Long-term flow records from sub-watersheds in the Caspar Creek Experimental Watersheds were used to test the accuracy of four methods commonly used to estimate peak flows in small forested watersheds: the Rational Method, the updated USGS Magnitude and Frequency Method, flow transference methods, and the NRCS curve number method. Comparison of measured and calculated...

  18. A linear parameter-varying multiobjective control law design based on youla parametrization for a flexible blended wing body aircraft

    NASA Astrophysics Data System (ADS)

    Demourant, F.; Ferreres, G.

    2013-12-01

    This article presents a methodology, with results, for linear parameter-varying (LPV) multiobjective flight control law design for a blended wing body (BWB) aircraft. The method is a direct design of a parametrized control law (with respect to some measured flight parameters) through a multimodel convex design that optimizes a set of specifications over the full flight domain and different mass cases. The methodology is based on the Youla parameterization, which is particularly useful since closed-loop specifications are affine with respect to the Youla parameter. The LPV multiobjective design method is detailed and applied to the BWB flexible aircraft example.

  19. Design and application of 3D-printed stepless beam modulators in proton therapy

    NASA Astrophysics Data System (ADS)

    Lindsay, C.; Kumlin, J.; Martinez, D. M.; Jirasek, A.; Hoehr, C.

    2016-06-01

    A new method for the design of stepless beam modulators for proton therapy is described and verified. Simulations of the classic designs are compared against the stepless method for various modulation widths which are clinically applicable in proton eye therapy. Three modulator wheels were printed using a Stratasys Objet30 3D printer. The resulting depth dose distributions showed improved uniformity over the classic stepped designs. Simulated results imply a possible improvement in distal penumbra width; however, more accurate measurements are needed to fully verify this effect. Lastly, simulations were done to model bio-equivalence to Co-60 cell kill. A wheel was successfully designed to flatten this metric.

  20. Development of new methodologies for evaluating the energy performance of new commercial buildings

    NASA Astrophysics Data System (ADS)

    Song, Suwon

    The concept of Measurement and Verification (M&V) of a new building continues to become more important because efficient design alone is often not sufficient to deliver an efficient building. Simulation models that are calibrated to measured data can be used to evaluate the energy performance of new buildings if they are compared to energy baselines such as similar buildings, energy codes, and design standards. Unfortunately, there is a lack of detailed M&V methods and analysis methods to measure energy savings from new buildings that would have hypothetical energy baselines. Therefore, this study developed and demonstrated several new methodologies for evaluating the energy performance of new commercial buildings using a case-study building in Austin, Texas. First, three new M&V methods were developed to enhance the previous generic M&V framework for new buildings, including: (1) The development of a method to synthesize weather-normalized cooling energy use from a correlation of Motor Control Center (MCC) electricity use when chilled water use is unavailable, (2) The development of an improved method to analyze measured solar transmittance against incidence angle for sample glazing using different solar sensor types, including Eppley PSP and Li-Cor sensors, and (3) The development of an improved method to analyze chiller efficiency and operation at part-load conditions. Second, three new calibration methods were developed and analyzed, including: (1) A new percentile analysis added to the previous signature method for use with a DOE-2 calibration, (2) A new analysis to account for undocumented exhaust air in DOE-2 calibration, and (3) An analysis of the impact of synthesized direct normal solar radiation using the Erbs correlation on DOE-2 simulation. 
Third, an analysis of the actual energy savings compared to three different energy baselines was performed, including: (1) Energy Use Index (EUI) comparisons with sub-metered data, (2) New comparisons against Standards 90.1-1989 and 90.1-2001, and (3) A new evaluation of the performance of selected Energy Conservation Design Measures (ECDMs). Finally, potential energy savings were also simulated from selected improvements, including: minimum supply air flow, undocumented exhaust air, and daylighting.

  1. Standardizing lightweight deflectometer modulus measurements for compaction quality assurance : research summary.

    DOT National Transportation Integrated Search

    2017-09-01

    The mechanistic-empirical pavement design method requires the elastic resilient modulus as the key input for characterization of geomaterials. Current density-based QA procedures do not measure resilient modulus. Additionally, the density-based metho...

  2. If We Build It, They Will Come! Exploring the Role of ICTs in Curriculum Design and Development: The Myths, Miracles and Affordances

    ERIC Educational Resources Information Center

    Naidu, S.

    2007-01-01

    Central to the argument about the influence of media on learning is how this influence is measured or ascertained. Conventional methods which comprise the use of true and quasi-experimental designs are inadequate. Several lessons can be learned from this observation on the media debate. The first is that, conventional methods of ascertaining the…

  3. A Magnetic Field Response Recorder: A New Tool for Measurement Acquisition

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Taylor, Bryant D.

    2006-01-01

    A magnetic field response recorder was developed to facilitate a measurement acquisition method that uses magnetic fields to power and interrogate sensors. The sensors are designed as electrically passive inductive-capacitive or inductive-capacitive-resistive circuits that produce magnetic field responses when electrically activated by oscillating magnetic fields. When activated, each sensor's magnetic field response attributes (frequency, amplitude and bandwidth) correspond to the one or more physical states that the sensor measures. The response recorder makes it possible to simultaneously measure two unrelated physical properties using this class of sensors, and it is programmable, allowing it to analyze one or more response attributes simultaneously. A single sensor design is used to demonstrate that the acquisition method and the sensor example can be used for all phases of a component's life, from manufacturing to damage that destroys the component.
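The resonant-frequency relation behind this class of inductive-capacitive sensors can be sketched directly: the response occurs at f = 1/(2*pi*sqrt(L*C)), so a physical state that alters the capacitance shifts the frequency. Component values below are illustrative assumptions.

```python
import math

# Resonant frequency of a passive LC sensor circuit.
def resonant_frequency_hz(L_henry, C_farad):
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

L = 10e-6                                  # 10 uH coil (assumed)
f0 = resonant_frequency_hz(L, 100e-12)     # baseline capacitance 100 pF
f1 = resonant_frequency_hz(L, 110e-12)     # capacitance raised by the measurand

print(round(f0 / 1e6, 3), round(f1 / 1e6, 3))  # MHz: the response shifts down
```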

  4. Measures of precision for dissimilarity-based multivariate analysis of ecological communities

    PubMed Central

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826
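A hedged sketch of the MultSE quantity from sums of squared dissimilarities: with SS = (1/n) * (sum of squared pairwise dissimilarities), the multivariate variance is V = SS/(n-1) and MultSE = sqrt(V/n). The published method adds a double resampling step to quantify uncertainty, which is omitted here.

```python
import numpy as np
from scipy.spatial.distance import pdist

def mult_se(data, metric="braycurtis"):
    # data: samples x taxa abundance matrix.
    n = data.shape[0]
    ss = (pdist(data, metric=metric) ** 2).sum() / n   # total sum of squares
    v = ss / (n - 1)                                   # multivariate variance
    return np.sqrt(v / n)

rng = np.random.default_rng(0)
community = rng.poisson(5.0, size=(20, 30)).astype(float)  # 20 samples x 30 taxa
print(mult_se(community))
```

As a sanity check, for Euclidean distances on a single variable this reduces to the ordinary standard error of the mean.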

  5. Proposed Application of Fast Fourier Transform in Near Infra Red Based Non Invasive Blood Glucose Monitoring System

    NASA Astrophysics Data System (ADS)

    Jenie, R. P.; Iskandar, J.; Kurniawan, A.; Rustami, E.; Syafutra, H.; Nurdin, N. M.; Handoyo, T.; Prabowo, J.; Febryarto, R.; Rahayu, M. S. K.; Damayanthi, E.; Rimbawan; Sukandar, D.; Suryana, Y.; Irzaman; Alatas, H.

    2017-03-01

    The worldwide emergence of glycaemic-status-related health disorders, such as diabetes and metabolic syndrome, is growing at an alarming rate. The objective was to propose a new method for a non-invasive blood glucose level measurement system based on the Fast Fourier Transform. This was an initial lab-scale study. Data on non-invasive blood glucose measurement were drawn from Scopus, Medline, and Google Scholar, from 2011 until 2016, and used as design references, combined with in-house verification. The system was developed in modular fashion, based on the compiled references. Several preliminary tests were performed to understand the relationship between LED and photodiode responses, and we have demonstrated different sensor responses to water and glucose. Human testing of the non-invasive blood glucose level measurement system is still needed.
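A minimal sketch of the signal-processing step only: apply an FFT to a sampled photodiode voltage to inspect its frequency content. The signal below is synthetic (a 1 kHz tone plus noise); measured NIR data would take its place, and the relation to glucose level is not modeled here.

```python
import numpy as np

fs = 10_000.0                             # sample rate, Hz (assumed)
t = np.arange(0.0, 1.0, 1 / fs)
rng = np.random.default_rng(1)
signal = 0.5 * np.sin(2 * np.pi * 1000.0 * t) + 0.05 * rng.normal(size=t.size)

# One-sided amplitude spectrum and its frequency axis.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(peak)                                # dominant component near 1000 Hz
```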

  6. Reducing Physical Risk Factors in Construction Work Through a Participatory Intervention: Protocol for a Mixed-Methods Process Evaluation.

    PubMed

    Ajslev, Jeppe; Brandt, Mikkel; Møller, Jeppe Lykke; Skals, Sebastian; Vinstrup, Jonas; Jakobsen, Markus Due; Sundstrup, Emil; Madeleine, Pascal; Andersen, Lars Louis

    2016-05-26

    Previous research has shown that reducing physical workload among workers in the construction industry is complicated. In order to address this issue, we developed a process evaluation in a formative mixed-methods design, drawing on existing knowledge of the potential barriers for implementation. We present the design of a mixed-methods process evaluation of the organizational, social, and subjective practices that play roles in the intervention study, integrating technical measurements to detect excessive physical exertion measured with electromyography and accelerometers, video documentation of working tasks, and a 3-phased workshop program. The evaluation is designed in an adapted process evaluation framework, addressing recruitment, reach, fidelity, satisfaction, intervention delivery, intervention received, and context of the intervention companies. Observational studies, interviews, and questionnaires among 80 construction workers organized in 20 work gangs, as well as health and safety staff, contribute to the creation of knowledge about these phenomena. At the time of publication, the process of participant recruitment is underway. Intervention studies are challenging to conduct and evaluate in the construction industry, often because of narrow time frames and ever-changing contexts. The mixed-methods design presents opportunities for obtaining detailed knowledge of the practices intra-acting with the intervention, while offering the opportunity to customize parts of the intervention.

  7. Longitudinal data subject to irregular observation: A review of methods with a focus on visit processes, assumptions, and study design.

    PubMed

    Pullenayegum, Eleanor M; Lim, Lily Sh

    2016-12-01

    When data are collected longitudinally, measurement times often vary among patients. This is of particular concern in clinic-based studies, for example retrospective chart reviews. Here, typically no two patients will share the same set of measurement times and moreover, it is likely that the timing of the measurements is associated with disease course; for example, patients may visit more often when unwell. While there are statistical methods that can help overcome the resulting bias, these make assumptions about the nature of the dependence between visit times and outcome processes, and the assumptions differ across methods. The purpose of this paper is to review the methods available with a particular focus on how the assumptions made line up with visit processes encountered in practice. Through this we show that no one method can handle all plausible visit scenarios and suggest that careful analysis of the visit process should inform the choice of analytic method for the outcomes. Moreover, there are some commonly encountered visit scenarios that are not handled well by any method, and we make recommendations with regard to study design that would minimize the chances of these problematic visit scenarios arising. © The Author(s) 2014.

  8. Modified surface testing method for large convex aspheric surfaces based on diffraction optics.

    PubMed

    Zhang, Haidong; Wang, Xiaokun; Xue, Donglin; Zhang, Xuejun

    2017-12-01

    Large convex aspheric optical elements have been widely applied in advanced optical systems, presenting a challenging metrology problem. Conventional testing methods gradually fail to satisfy the demand as the definition of "large" changes. A modified method is proposed in this paper, which uses a relatively small computer-generated hologram together with a feasible illumination lens to measure large convex aspherics. Two example systems are designed to demonstrate the applicability of the method, and its sensitivity is analyzed, showing that the accuracy of the configuration can be better than 6 nm with careful alignment and advance calibration of the illumination lens. The design examples and analysis show that this configuration is applicable to measuring large convex aspheric surfaces.

  9. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    NASA Astrophysics Data System (ADS)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

    To address issues in the area of design customization, this paper describes the specification and application of constrained surface deformation and reports an experimental performance comparison of three prevailing similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation has become a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design: it measures the difference between the deformed new design and the initial sample model and indicates whether that difference is within the limit. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape histogram based method, the skeleton based method, and the U-system moment based method. We analyze their basic functions and implementation methodologies in detail and run a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as the industrial example for the experiments. The shape histogram based method gained the best performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, emphasizing the role of constraints during assessment. Limited initial experimental results demonstrate that our algorithm outperforms the three baseline algorithms. A clear direction for future development is drawn at the end of the paper.

  10. Design of Measurement Apparatus for Electromagnetic Shielding Effectiveness Using Flanged Double Ridged Waveguide

    NASA Astrophysics Data System (ADS)

    Kwon, Jong Hwa; Choi, Jae Ick; Yook, Jong Gwan

    In this paper, we design and manufacture a flanged double ridged waveguide with a tapered section as a sample holder for measuring the electromagnetic shielding effectiveness (SE) of planar material in broadband frequency ranges up to 10 GHz. The proposed technique overcomes the limitations of the conventional ASTM D4935 test method at high frequencies. The simulation results for the designed sample holders agree well with the fabricated ones in consideration of the design specification of S11 < -20 dB within the frequency range of 1-10 GHz. To verify the proposed measurement apparatus, the measured SE data of the commercial shielding materials from 1 to 10 GHz were indirectly compared with those obtained from the ASTM D4935 from 30 MHz to 1 GHz. We observed that the SE data obtained by using both experimental techniques agree with each other.

  11. New parameters in adaptive testing of ferromagnetic materials utilizing magnetic Barkhausen noise

    NASA Astrophysics Data System (ADS)

    Pal'a, Jozef; Ušák, Elemír

    2016-03-01

    A new method of magnetic Barkhausen noise (MBN) measurement, with optimized processing of the measured data, was tested for the non-destructive evaluation of ferromagnetic materials. Using this method, we investigated whether the sensitivity and stability of the measurement results can be enhanced by replacing the traditional MBN parameter (the root mean square) with a new parameter. In the tested method, a complete set of MBN signals from minor hysteresis loops is measured. The MBN data are then collected into suitably designed matrices, and MBN parameters with maximum sensitivity to the evaluated variable are sought. The method was verified on plastically deformed steel samples. The proposed measuring method and data processing improve sensitivity to the evaluated variable compared with the traditional MBN parameter. Moreover, we found an MBN parameter that is highly resistant to changes in the applied field amplitude while being noticeably more sensitive to the evaluated variable.
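The traditional MBN parameter mentioned above, the root mean square of the noise signal, is easy to sketch. A synthetic burst stands in for a measured Barkhausen emission, and the envelope peak shown alongside is merely one illustrative alternative scalar, not the paper's optimized matrix-based parameter.

```python
import numpy as np

rng = np.random.default_rng(42)
mbn = rng.normal(0.0, 0.2, size=5000)          # synthetic MBN voltage, V

rms = np.sqrt(np.mean(mbn ** 2))               # traditional RMS parameter

# Illustrative alternative: peak of a moving-RMS envelope of the burst.
window = 200
moving_rms = np.sqrt(np.convolve(mbn ** 2, np.ones(window) / window, mode="valid"))
envelope_peak = moving_rms.max()

print(round(rms, 3), round(envelope_peak, 3))
```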

  12. Comparison of two surface temperature measurement using thermocouples and infrared camera

    NASA Astrophysics Data System (ADS)

    Michalski, Dariusz; Strąk, Kinga; Piasecka, Magdalena

    This paper compares two methods applied to measure surface temperatures at an experimental setup designed to analyse flow boiling heat transfer. The temperature measurements were performed in two parallel rectangular minichannels, both 1.7 mm deep, 16 mm wide and 180 mm long. The heating element for the fluid flowing in each minichannel was a thin foil made of Haynes-230. The two measurement methods employed to determine the surface temperature of the foil were: the contact method, in which thermocouples were mounted at several points in one minichannel, and the contactless method used for the other minichannel, in which the results were provided by an infrared camera. Calculations were necessary to compare the temperature results. Two sets of measurement data obtained for different values of the heat flux were analysed using basic statistical methods, the method error and the method accuracy. The comparative analysis showed that although the values and distributions of the surface temperatures obtained with the two methods were similar, both methods had certain limitations.
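The basic statistics for contrasting two such measurement methods can be sketched as the mean difference (bias) between paired readings and its standard deviation. The paired readings below are illustrative, not the paper's data.

```python
import numpy as np

tc = np.array([101.2, 103.5, 105.1, 107.8, 110.4])   # thermocouples, deg C
ir = np.array([100.6, 103.9, 104.2, 108.5, 109.8])   # infrared camera, deg C

diff = ir - tc
bias = diff.mean()            # systematic offset between the methods
spread = diff.std(ddof=1)     # scatter of the disagreement

print(round(bias, 3), round(spread, 3))
```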

  13. Performance of toxicity probability interval based designs in contrast to the continual reassessment method

    PubMed Central

    Horton, Bethany Jablonski; Wages, Nolan A.; Conaway, Mark R.

    2016-01-01

    Toxicity probability interval designs have received increasing attention as a dose-finding method in recent years. In this study, we compared the two-stage, likelihood-based continual reassessment method (CRM), the modified toxicity probability interval (mTPI), and the Bayesian optimal interval design (BOIN) in order to evaluate each method's performance in dose selection for Phase I trials. We use several summary measures to compare the performance of these methods, including the percentage of correct selection (PCS) of the true maximum tolerated dose (MTD), the allocation of patients to doses at and around the true MTD, and an accuracy index. This index is an efficiency measure that describes the entire distribution of MTD selection and patient allocation by taking into account the distance between the true probability of toxicity at each dose level and the target toxicity rate. The simulation study considered a broad range of toxicity curves and various sample sizes. When considering PCS, we found that CRM outperformed the two competing methods in most scenarios, followed by BOIN, then mTPI. We observed a similar trend when considering the accuracy index for dose allocation, where CRM most often outperformed both the mTPI and BOIN. These trends were more pronounced with an increasing number of dose levels. PMID:27435150

  14. A Novel Analysis Method for Paired-Sample Microbial Ecology Experiments.

    PubMed

    Olesen, Scott W; Vora, Suhani; Techtmann, Stephen M; Fortney, Julian L; Bastidas-Oyanedel, Juan R; Rodríguez, Jorge; Hazen, Terry C; Alm, Eric J

    2016-01-01

    Many microbial ecology experiments use sequencing data to measure a community's response to an experimental treatment. In a common experimental design, two units, one control and one experimental, are sampled before and after the treatment is applied to the experimental unit. The four resulting samples contain information about the dynamics of organisms that respond to the treatment, but there are no analytical methods designed to extract exactly this type of information from this configuration of samples. Here we present an analytical method specifically designed to visualize and generate hypotheses about microbial community dynamics in experiments that have paired samples and few or no replicates. The method is based on the Poisson lognormal distribution, long studied in macroecology, which we found accurately models the abundance distribution of taxa counts from 16S rRNA surveys. To demonstrate the method's validity and potential, we analyzed an experiment that measured the effect of crude oil on ocean microbial communities in microcosm. Our method identified known oil degraders as well as two clades, Maricurvus and Rhodobacteraceae, that responded to amendment with oil but do not include known oil degraders. Our approach is sensitive to organisms that increased in abundance only in the experimental unit but less sensitive to organisms that increased in both control and experimental units, thus mitigating the role of "bottle effects".
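The experimental logic of the four-sample design can be sketched without the full Poisson lognormal model: for each taxon, compare its before/after fold change in the experimental unit against the same change in the control unit. This pseudocount log-ratio version is a simplified stand-in for the published method, and the counts are invented.

```python
import numpy as np

def paired_response(ctrl_before, ctrl_after, exp_before, exp_after, pseudo=1.0):
    # Log fold change within each unit; the difference isolates shifts
    # specific to the treated unit (mitigating shared "bottle effects").
    lfc_ctrl = np.log2((ctrl_after + pseudo) / (ctrl_before + pseudo))
    lfc_exp = np.log2((exp_after + pseudo) / (exp_before + pseudo))
    return lfc_exp - lfc_ctrl

counts = {
    "ctrl_before": np.array([50.0, 10.0, 5.0]),
    "ctrl_after":  np.array([55.0, 12.0, 5.0]),
    "exp_before":  np.array([48.0, 11.0, 4.0]),
    "exp_after":   np.array([47.0, 90.0, 4.0]),   # taxon 2 blooms under treatment
}
scores = paired_response(**counts)
print(scores.round(2))   # taxon 2 has the largest treatment-specific shift
```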

  15. Analysis of Photothermal Characterization of Layered Materials: Design of Optimal Experiments

    NASA Technical Reports Server (NTRS)

    Cole, Kevin D.

    2003-01-01

    In this paper numerical calculations are presented for the steady-periodic temperature in layered materials and functionally-graded materials to simulate photothermal methods for the measurement of thermal properties. No laboratory experiments were performed. The temperature is found from a new Green s function formulation which is particularly well-suited to machine calculation. The simulation method is verified by comparison with literature data for a layered material. The method is applied to a class of two-component functionally-graded materials and results for temperature and sensitivity coefficients are presented. An optimality criterion, based on the sensitivity coefficients, is used for choosing what experimental conditions will be needed for photothermal measurements to determine the spatial distribution of thermal properties. This method for optimal experiment design is completely general and may be applied to any photothermal technique and to any functionally-graded material.

  16. The reduction of intoxication and disorder in premises licensed to serve alcohol: An exploratory randomised controlled trial

    PubMed Central

    2010-01-01

    Background Licensed premises offer a valuable point of intervention to reduce alcohol-related harm. Objective To describe the research design for an exploratory trial examining the feasibility and acceptability of a premises-level intervention designed to reduce severe intoxication and related disorder. The study also aims to assess the feasibility of a potential future large scale effectiveness trial and provide information on key trial design parameters including inclusion criteria, premises recruitment methods, strategies to implement the intervention and trial design, outcome measures, data collection methods and intra-cluster correlations. Design A randomised controlled trial in licensed premises that had experienced at least one assault in the year preceding the intervention, documented in police or hospital Emergency Department (ED) records. Premises were recruited from four study areas by piloting four recruitment strategies of varying intensity. Thirty two licensed premises were grouped into matched pairs to reduce potential bias and randomly allocated to the control or intervention condition. The study included a nested process evaluation to provide information on intervention acceptability and implementation. Outcome measures included police-recorded violent incidents, assault-related attendances at each premises' local ED and patron Breath Alcohol Concentration assessed on exiting and entering study premises. Results The most successful recruitment method involved local police licensing officers and yielded a 100% success rate. Police-records of violence provided the most appropriate source of data about disorder at the premises level. Conclusion The methodology of an exploratory trial is presented and despite challenges presented by the study environment it is argued an exploratory trial is warranted. Initial investigations in recruitment methods suggest that study premises should be recruited with the assistance of police officers. 
Police data were of sufficient quality to identify disorder and street surveys are a feasible method for measuring intoxication at the individual level. Trial registration UKCRN 7090; ISRCTN: 80875696 Funding Medical Research Council (G0701758) to Simon Moore, Simon Murphy, Laurence Moore and Jonathan Shepherd PMID:20946634
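For a future matched-pair cluster trial like this, a key design quantity mentioned above is the intra-cluster correlation (ICC), which inflates the effective sample size needed. A minimal sketch of the standard design-effect calculation, with assumed values:

```python
# Design effect for cluster-randomised outcomes:
# DE = 1 + (m - 1) * ICC, where m is the average cluster size.
def design_effect(cluster_size, icc):
    return 1.0 + (cluster_size - 1) * icc

# Assumed values for illustration: 50 patrons surveyed per premises,
# ICC of 0.05 for breath alcohol concentration.
m, icc = 50, 0.05
de = design_effect(m, icc)

# An individually-randomised sample size of 400 would need to grow to:
n_effective = 400 * de
print(de, n_effective)
```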

  17. Increasing Accuracy of Tissue Shear Modulus Reconstruction Using Ultrasonic Strain Tensor Measurement

    NASA Astrophysics Data System (ADS)

    Sumi, C.

    Previously, we developed three displacement vector measurement methods: the multidimensional cross-spectrum phase gradient method (MCSPGM), the multidimensional autocorrelation method (MAM), and the multidimensional Doppler method (MDM). To increase the accuracy and stability of lateral and elevational displacement measurements, we also developed spatially variant, displacement-component-dependent regularization. In particular, regularization of only the lateral/elevational displacements is advantageous in the laterally unmodulated case. The demonstrated measurements of displacement vector distributions in experiments using an inhomogeneous shear modulus agar phantom confirm that displacement-component-dependent regularization enables more stable shear modulus reconstruction. In this report, we also review our lateral modulation methods, which use parabolic functions, Hanning windows, and Gaussian functions in the apodization function, as well as the optimized apodization function that realizes a designed point spread function (PSF). The modulations significantly increase the accuracy of the strain tensor measurement and shear modulus reconstruction (demonstrated using an agar phantom).

  18. Differential heating: A versatile method for thermal conductivity measurements in high-energy-density matter

    DOE PAGES

    Ping, Y.; Fernandez-Panella, A.; Sio, H.; ...

    2015-09-04

    We propose a method for thermal conductivity measurements of high-energy-density matter based on differential heating. A temperature gradient is created either by surface heating of one material or at an interface between two materials by differential energy deposition. The subsequent heat conduction across the temperature gradient is observed by various time-resolved probing techniques. Conceptual designs of such measurements using laser heating, proton heating, and x-ray heating are presented, and the sensitivity of the measurements to thermal conductivity is confirmed by simulations.
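A minimal sketch of the physics being probed: explicit 1D finite-difference heat conduction, with a heated surface layer relaxing into the bulk. How fast the gradient relaxes depends on the thermal diffusivity alpha, which is what a differential-heating measurement is sensitive to. All values here are illustrative, not from the paper.

```python
import numpy as np

alpha = 1e-5                 # thermal diffusivity, m^2/s (assumed)
dx = 1e-6                    # grid spacing, m
dt = 0.4 * dx**2 / alpha     # time step within the explicit stability limit

T = np.zeros(100)            # temperature profile (arbitrary units)
T[:10] = 1000.0              # heated surface layer

# Explicit finite-difference step of the 1D heat equation.
for _ in range(2000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

# The initially sharp front has spread into the bulk.
print(round(T[20], 1), round(T[50], 1))
```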

  19. An EMG-based system for continuous monitoring of clinical efficacy of Parkinson's disease treatments.

    PubMed

    Askari, Sina; Zhang, Mo; Won, Deborah S

    2010-01-01

    Current methods for assessing the efficacy of treatments for Parkinson's disease (PD) rely on physician rated scores. These methods pose three major shortcomings: 1) the subjectivity of the assessments, 2) the lack of precision on the rating scale (6 discrete levels), and 3) the inability to assess symptoms except under very specific conditions and/or for very specific tasks. To address these shortcomings, a portable system was developed to continuously monitor Parkinsonian symptoms with quantitative measures based on electrical signals from muscle activity (EMG). Here, we present the system design and the implementation of methods for system validation. This system was designed to provide continuous measures of tremor, rigidity, and bradykinesia which are related to the neurophysiological source without the need for multiple bulky experimental apparatuses, thus allowing more precise, quantitative indicators of the symptoms which can be measured during practical daily living tasks. This measurement system has the potential to improve the diagnosis of PD as well as the evaluation of PD treatments, which is an important step in the path to improving PD treatments.

  20. Method and device for bio-impedance measurement with hard-tissue applications.

    PubMed

    Guimerà, A; Calderón, E; Los, P; Christie, A M

    2008-06-01

    Bio-impedance measurements can be used to detect and monitor several properties of living hard-tissues, some of which include bone mineral density, bone fracture healing or dental caries detection. In this paper a simple method and hardware architecture for hard tissue bio-impedance measurement is proposed. The key design aspects of such architecture are discussed and a commercial handheld AC impedance device is presented that is fully certified to international medical standards. It includes a 4-channel multiplexer and is capable of measuring impedances from 10 kΩ to 10 MΩ across a frequency range of 100 Hz to 100 kHz with a maximum error of 5%. The device incorporates several user interface methods and a Bluetooth link for bi-directional wireless data transfer. Low-power design techniques have been implemented, ensuring the device exceeds 8 h of continuous use. Finally, bench test results using dummy cells consisting of parallel connected resistors and capacitors, from 10 kΩ to 10 MΩ and from 20 pF to 100 pF, are discussed.
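The dummy cells used in the bench tests have a simple closed-form model: the impedance magnitude of a resistor and capacitor in parallel is |Z| = R / sqrt(1 + (wRC)^2). A sketch sweeping one cell across the device's frequency range:

```python
import math

def z_parallel_rc_mag(r_ohm, c_farad, f_hz):
    # Magnitude of Z = R / (1 + j*w*R*C) for a parallel RC network.
    w = 2.0 * math.pi * f_hz
    return r_ohm / math.sqrt(1.0 + (w * r_ohm * c_farad) ** 2)

# 10 MOhm in parallel with 20 pF, swept across 100 Hz - 100 kHz.
for f_hz in (100.0, 1e3, 10e3, 100e3):
    print(f_hz, round(z_parallel_rc_mag(10e6, 20e-12, f_hz)))
```

At the top of the band the capacitor dominates and the magnitude falls well below the DC resistance, which is why such cells exercise the full measurement range.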

  1. Intentional Teaching, Intentional Scholarship: Applying Backward Design Principles in a Faculty Writing Group

    ERIC Educational Resources Information Center

    Linder, Kathryn E.; Cooper, Frank Rudy; McKenzie, Elizabeth M.; Raesch, Monika; Reeve, Patricia A.

    2014-01-01

    Backward design is a course creation method that encourages teachers to identify their goals for student understanding and measurable objectives for learning from the outset. In this article we explore the application of backward design to the production of scholarly articles. Specifically, we report on a writing group program that encourages…

  2. Handheld laser scanner automatic registration based on random coding

    NASA Astrophysics Data System (ADS)

    He, Lei; Yu, Chun-ping; Wang, Li

    2011-06-01

    Current research on laser scanners focuses mainly on static measurement; little use has been made of dynamic measurement, which is appropriate for a wider range of problems and situations. In particular, a traditional laser scanner must be kept stable while scanning, and coordinate transformation parameters must be measured between stations. To make scanning measurement intelligent and rapid, we developed a new registration algorithm for a handheld laser scanner based on the positions of targets, which realizes dynamic measurement without additional complex work. The two cameras on the laser scanner photograph artificial target points, designed by random coding, to obtain their three-dimensional coordinates. A set of matched points is then found among the control points, and the scanner is oriented by a least-squares common-points transformation. After that, the two cameras can directly measure the laser point cloud on the surface of the object and obtain point-cloud data in a unified coordinate system. The paper makes three major contributions. First, a laser scanner based on binocular vision is designed with two cameras and one laser head; with these, real-time orientation of the laser scanner is realized and efficiency is improved. Second, coded markers are introduced to solve the data-matching problem, and a random coding method is proposed; compared with other coding methods, markers coded this way are simple to match and avoid shading the object. Finally, a recognition method for the coded markers based on distance recognition is proposed, which is more efficient. The method presented here can be used widely in measurements of objects from small to huge, such as vehicles and airplanes, strengthening intelligence and efficiency.
    The results of experiments and theoretical analysis demonstrate that the proposed method realizes dynamic measurement with a handheld laser scanner and that the method is reasonable and efficient.
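
    The least-squares common-points transformation used for scanner orientation is, in essence, a rigid-body fit between matched control points. A minimal sketch of one standard way to solve it (the SVD-based Kabsch solution; function names and test values are illustrative, not from the paper):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t
    (the SVD-based Kabsch solution)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

# Demo: recover a known rotation/translation from matched control points.
rng = np.random.default_rng(0)
src = rng.random((10, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([0.5, -0.2, 1.0])
R, t = rigid_fit(src, src @ R_true.T + t_true)
```

    With exact correspondences the fit recovers the true pose; in practice the matched coded targets would carry measurement noise and the fit minimizes its squared residual.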

  3. Interferometer for measuring the dynamic surface topography of a human tear film

    NASA Astrophysics Data System (ADS)

    Primeau, Brian C.; Greivenkamp, John E.

    2012-03-01

    The anterior refracting surface of the eye is the thin tear film that forms on the surface of the cornea. Following a blink, the tear film quickly smooths and then begins to grow irregular after about 10 seconds; this irregularity can affect comfort and vision quality. An in vivo method of characterizing dynamic tear films has been designed, based on a near-infrared phase-shifting interferometer. This interferometer continuously measures light reflected from the tear film, allowing sub-micron analysis of the dynamic surface topography. Movies showing tear film behavior can be generated, along with quantitative metrics describing changes in the tear film surface. This measurement allows analysis beyond the capabilities of typical fluorescein visual inspection or corneal topography and provides better sensitivity and resolution than shearing-interferometry methods. The interferometer design can identify features in the tear film much less than a micron in height, with a spatial resolution of about ten microns over a 6 mm diameter. This paper presents the design of the tear film interferometer along with the considerations that must be taken into account when designing an interferometer for on-eye diagnostics, including eye movement, the design of null optics for a range of ocular geometries, and laser emission limits for on-eye interferometry.
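
    Phase-shifting interferometry of this kind recovers the surface from several phase-stepped intensity frames. A minimal sketch of the classic four-step algorithm (the paper's actual phase-retrieval procedure is not given in the abstract; for a reflection setup, surface height would then follow as h = λφ/4π):

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four frames shifted by 90 degrees each:
    I_k = A + B*cos(phi + k*pi/2)  =>  phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(i3 - i1, i0 - i2)

# Demo: synthesize four frames for a known phase and recover it.
phi_true = 0.7                                # radians, within (-pi, pi)
frames = [1.0 + 0.5 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_est = four_step_phase(*frames)
```

    In a real instrument the same arithmetic is applied per pixel, followed by phase unwrapping before converting phase to height.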

  4. Tethered acoustic doppler current profiler platforms for measuring streamflow

    USGS Publications Warehouse

    Rehmel, Michael S.; Stewart, James A.; Morlock, Scott E.

    2003-01-01

    A tethered-platform design with a trimaran hull and 900-megahertz radio modems is now commercially available. Continued field use has resulted in U.S. Geological Survey procedures for making tethered-platform discharge measurements, including methods for tethered-boat deployment, moving-bed tests, and measurement of edge distances.

  5. Continuous flow hygroscopicity-resolved relaxed eddy accumulation (Hy-Res REA) method of measuring size-resolved sodium chloride particle fluxes

    EPA Science Inventory

    The accurate representation of aerosols in climate models requires direct ambient measurement of the size- and composition-dependent particle production fluxes. Here, we present the design, testing, and analysis of data collected through the first instrument capable of measuring ...

  6. Incremental and Predictive Utility of Formative Assessment Methods of Reading Comprehension

    ERIC Educational Resources Information Center

    Marcotte, Amanda M.; Hintze, John M.

    2009-01-01

    Formative assessment measures are commonly used in schools to assess reading and to design instruction accordingly. The purpose of this research was to investigate the incremental and concurrent validity of formative assessment measures of reading comprehension. It was hypothesized that formative measures of reading comprehension would contribute…

  7. Sensor for measuring hydrogen partial pressure in parabolic trough power plant expansion tanks

    NASA Astrophysics Data System (ADS)

    Glatzmaier, Greg C.; Cooney, Daniel A.

    2017-06-01

    The National Renewable Energy Laboratory and Acciona Energy North America are working together to design and implement a process system that provides a permanent solution to the issue of hydrogen buildup at parabolic trough power plants. We are pursuing a method that selectively removes hydrogen from the expansion tanks that serve as reservoirs for the heat transfer fluid (HTF) that circulates in the collector field and power block components. Our modeling shows that removing hydrogen from the expansion tanks at a design rate reduces and maintains dissolved hydrogen in the circulating HTF to a selected target level. Our collaborative work consists of several tasks that are needed to advance this process concept to a development stage, where it is ready for implementation at a commercial power plant. Our main effort is to design and evaluate likely process-unit operations that remove hydrogen from the expansion tanks at a specified rate. Additionally, we designed and demonstrated a method and instrumentation to measure hydrogen partial pressure and concentration in the expansion-tank headspace gas. We measured hydrogen partial pressure in the headspace gas mixture using a palladium-alloy membrane, which is permeable exclusively to hydrogen. The membrane establishes a pure hydrogen gas phase that is in equilibrium with the hydrogen in the gas mixture. We designed and fabricated instrumentation, and demonstrated its effectiveness in measuring hydrogen partial pressures over a range of three orders of magnitude. Our goal is to install this instrument at the Nevada Solar One power plant and to demonstrate its effectiveness in measuring hydrogen levels in the expansion tanks under normal plant operating conditions.

  8. Vibro-acoustic performance of newly designed tram track structures

    NASA Astrophysics Data System (ADS)

    Haladin, Ivo; Lakušić, Stjepan; Ahac, Maja

    2017-09-01

    Rail vehicles interacting with the track induce vibrations that propagate to surrounding structures and cause noise disturbance in the surrounding areas. Since tram tracks in urban areas often share the running surface with road vehicles, one of the top priorities is a low-maintenance, long-lasting structure. The research conducted in the scope of this paper gives an overview of newly designed tram track structures intended for use on the Zagreb tram network and their performance in terms of noise and vibration mitigation. The research was conducted on a 150 m long test section consisting of three track types: the standard tram track structure commonly used on tram lines in Zagreb, a structure optimized for better noise and vibration mitigation, and a slab track with double sleepers embedded in a concrete slab, an entirely new approach to tram track construction in Zagreb. The track was instrumented with acceleration sensors, strain gauges, and revision shafts for inspection. Relative deformations give insight into the dynamic load distribution of the track structure over the exploitation period. The paper further describes vibro-acoustic measurements conducted at the test site. To evaluate track performance from the vibro-acoustic standpoint, the track decay rate was analysed in detail. In contrast to the standard technique, which uses an impact hammer to excite the rail, a newly developed technique that uses vehicle pass-by vibrations as the source of excitation is proposed and analysed. The paper gives an overview of the method, its benefits compared with the standard track-decay-rate measurement, and an evaluation of the method based on noise measurements during vehicle pass-by.
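
    Track decay rate quantifies how quickly vibration amplitude falls off along the rail; a common formulation (as in EN 15461-style analyses; the paper's exact procedure is not given in the abstract) integrates the squared amplitude ratio along the track:

```python
import numpy as np

def track_decay_rate(amps, dx):
    """Decay rate (dB/m): DR = 4.343 / integral of (A(x)/A(0))^2 dx,
    integrated along the track from the excitation point amps[0]."""
    r = (np.asarray(amps) / amps[0]) ** 2
    integral = np.sum((r[1:] + r[:-1]) / 2.0) * dx    # trapezoidal rule
    return 4.343 / integral

# Demo: exponential amplitude decay A(x) = exp(-beta*x) should give a
# decay rate of 20*log10(e)*beta = 8.686*beta dB/m.
beta, dx = 0.5, 0.1                           # 1/m, m (assumed values)
x = np.arange(0.0, 20.0, dx)
dr = track_decay_rate(np.exp(-beta * x), dx)
```

    The same computation applies per one-third-octave band when the amplitudes are band-filtered accelerometer responses.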

  9. Response monitoring using quantitative ultrasound methods and supervised dictionary learning in locally advanced breast cancer

    NASA Astrophysics Data System (ADS)

    Gangeh, Mehrdad J.; Fung, Brandon; Tadayyon, Hadi; Tran, William T.; Czarnota, Gregory J.

    2016-03-01

    A non-invasive computer-aided-theragnosis (CAT) system was developed for the early assessment of responses to neoadjuvant chemotherapy in patients with locally advanced breast cancer. The CAT system was based on quantitative ultrasound spectroscopy methods comprising several modules: feature extraction, a metric to measure the dissimilarity between "pre-" and "mid-treatment" scans, and a supervised learning algorithm for classifying patients into responders and non-responders. One major requirement for the successful design of a high-performance CAT system is to accurately measure the changes in parametric maps before treatment onset and during the course of treatment. To this end, a unified framework based on the Hilbert-Schmidt independence criterion (HSIC) was used to design both the feature extraction from parametric maps and the dissimilarity measure between the "pre-" and "mid-treatment" scans. For the feature extraction, HSIC was used to design a supervised dictionary learning (SDL) method by maximizing the dependency between the scans taken "pre-" and "mid-treatment" and "dummy labels" given to the scans. For the dissimilarity measure, an HSIC-based metric was employed to effectively measure the changes in parametric maps as an indication of treatment effectiveness. The HSIC-based feature extraction and dissimilarity measure use a kernel function to nonlinearly transform input vectors into a higher-dimensional feature space and compute the population means in the new space, where enhanced group separability is ideally obtained. Classification results using the developed CAT system indicated improved performance compared with a CAT system using basic intensity-histogram features.
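
    The empirical HSIC underlying both the SDL feature extraction and the dissimilarity metric has a compact closed form, HSIC = tr(KHLH)/(n−1)², with kernel Gram matrices K, L and centering matrix H. A minimal sketch of the biased estimator (Gaussian kernels and parameter values assumed for illustration, not taken from the paper):

```python
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC, tr(K H L H)/(n-1)^2, with Gaussian kernels."""
    n = X.shape[0]
    def gram(Z):
        sq = np.sum(Z ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T   # squared distances
        return np.exp(-d2 / (2.0 * sigma ** 2))
    H = np.eye(n) - np.ones((n, n)) / n                  # centering matrix
    return np.trace(gram(X) @ H @ gram(Y) @ H) / (n - 1) ** 2

# Demo: a dependent pair of samples scores higher than an independent one.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
h_dep = hsic(X, X + 0.1 * rng.normal(size=(100, 2)))     # strongly dependent
h_ind = hsic(X, rng.normal(size=(100, 2)))               # independent
```

    Used as a dissimilarity measure, a larger HSIC between "pre-" and "mid-treatment" features indicates stronger statistical dependence, i.e. less treatment-induced change.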

  10. Fault-tolerant clock synchronization validation methodology. [in computer systems

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.

  11. Data format standard for sharing light source measurements

    NASA Astrophysics Data System (ADS)

    Gregory, G. Groot; Ashdown, Ian; Brandenburg, Willi; Chabaud, Dominique; Dross, Oliver; Gangadhara, Sanjay; Garcia, Kevin; Gauvin, Michael; Hansen, Dirk; Haraguchi, Kei; Hasna, Günther; Jiao, Jianzhong; Kelley, Ryan; Koshel, John; Muschaweck, Julius

    2013-09-01

    Optical design requires accurate characterization of light sources for computer aided design (CAD) software. Various methods have been used to model sources, from accurate physical models to measurement of light output. It has become common practice for designers to include measured source data for design simulations. Typically, a measured source will contain rays which sample the output distribution of the source. The ray data must then be exported to various formats suitable for import into optical analysis or design software. Source manufacturers are also making measurements of their products and supplying CAD models along with ray data sets for designers. The increasing availability of data has been beneficial to the design community but has caused a large expansion in storage needs for the source manufacturers since each software program uses a unique format to describe the source distribution. In 2012, the Illuminating Engineering Society (IES) formed a working group to understand the data requirements for ray data and recommend a standard file format. The working group included representatives from software companies supplying the analysis and design tools, source measurement companies providing metrology, source manufacturers creating the data and users from the design community. Within one year the working group proposed a file format which was recently approved by the IES for publication as TM-25. This paper will discuss the process used to define the proposed format, highlight some of the significant decisions leading to the format and list the data to be included in the first version of the standard.

  12. Polarized BRDF measurement of steel E235B in the near-infrared region: Based on a self-designed instrument with absolute measuring method

    NASA Astrophysics Data System (ADS)

    Liu, Yanlei; Yu, Kun; Liu, Zilong; Zhao, Yuejin; Liu, Yufang

    2018-06-01

    The spectral bidirectional reflectance distribution (BRDF) offers a complete description of the optical properties of the opaque material. Numerous studies on BRDF have been conducted for its important role in scientific research and industrial production. However, most of these studies focus on the visible region and unpolarized BRDF, and the spectral polarized BRDF in the near-infrared region is rarely reported. In this letter, we propose an absolute method to measure the spectral BRDF in the near-infrared region, and the detailed derivation is presented. A self-designed instrument is set up for the absolute measurement of BRDF. The reliability of this method is verified by comparing the experimental data of the three metal (aluminum, silver and gold) mirrors with the reference data. The in-plane polarized BRDF of steel E235B are measured, and the influence of incident angle and roughness on the BRDF are discussed. The degree of linear polarization (DOLP) are determined based on the polarized BRDF. The results indicate that both the roughness and incident angle have distinct influence on the BRDF and DOLP.
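
    DOLP follows directly from the s- and p-polarized reflectance components. As a toy illustration (a smooth dielectric with real index n = 1.5, not the paper's steel sample, which would require a complex refractive index), the Fresnel reflectances show DOLP rising to 1 at the Brewster angle:

```python
import numpy as np

def fresnel_dolp(theta_deg, n=1.5):
    """DOLP = (Rs - Rp)/(Rs + Rp) for specular reflection off a smooth
    dielectric of refractive index n (toy model)."""
    ti = np.radians(theta_deg)
    tt = np.arcsin(np.sin(ti) / n)                       # Snell's law
    rs = (np.cos(ti) - n * np.cos(tt)) / (np.cos(ti) + n * np.cos(tt))
    rp = (n * np.cos(ti) - np.cos(tt)) / (n * np.cos(ti) + np.cos(tt))
    Rs, Rp = rs ** 2, rp ** 2
    return (Rs - Rp) / (Rs + Rp)

brewster = np.degrees(np.arctan(1.5))                    # ~56.3 degrees
```

    For a rough metal such as steel E235B, the measured polarized BRDF values would replace the Fresnel reflectances in the same DOLP ratio.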

  13. Designing and optimizing a healthcare kiosk for the community.

    PubMed

    Lyu, Yongqiang; Vincent, Christopher James; Chen, Yu; Shi, Yuanchun; Tang, Yida; Wang, Wenyao; Liu, Wei; Zhang, Shuangshuang; Fang, Ke; Ding, Ji

    2015-03-01

    Investigating new ways to deliver care, such as the use of self-service kiosks to collect and monitor signs of wellness, supports healthcare efficiency and inclusivity. Self-service kiosks offer this potential, but there is a need for solutions to meet acceptable standards, e.g. provision of accurate measurements. This study investigates the design and optimization of a prototype healthcare kiosk to collect vital signs measures. The design problem was decomposed, formalized, focused and used to generate multiple solutions. Systematic implementation and evaluation allowed for the optimization of measurement accuracy, first for individuals and then for a population. The optimized solution was tested independently to check the suitability of the methods, and quality of the solution. The process resulted in a reduction of measurement noise and an optimal fit, in terms of the positioning of measurement devices. This guaranteed the accuracy of the solution and provides a general methodology for similar design problems. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  14. Enhanced teaching and student learning through a simulator-based course in chemical unit operations design

    NASA Astrophysics Data System (ADS)

    Ghasem, Nayef

    2016-07-01

    This paper illustrates a teaching technique used in computer applications in chemical engineering employed for designing various unit operation processes, where the students learn about unit operations by designing them. The aim of the course is not to teach design, but rather to teach the fundamentals and the function of unit operation processes through simulators. A case study presenting the teaching method was evaluated using student surveys and faculty assessments, which were designed to measure the quality and effectiveness of the teaching method. The results of the questionnaire conclusively demonstrate that this method is an extremely efficient way of teaching a simulator-based course. In addition to that, this teaching method can easily be generalised and used in other courses. A student's final mark is determined by a combination of in-class assessments conducted based on cooperative and peer learning, progress tests and a final exam. Results revealed that peer learning can improve the overall quality of student learning and enhance student understanding.

  15. Electromotive force analysis of current transformer during lightning surge inflow using Fourier series expansion

    NASA Astrophysics Data System (ADS)

    Kim, Youngsun

    2017-05-01

    The most common structure for a current transformer (CT) consists of secondary windings on a ferromagnetic core through which the primary conductor carrying the current to be measured passes. A CT used as a surge protection device (SPD) may experience large current inrushes, such as surges. However, when a large current flows in the primary winding, measuring its magnitude is difficult because the ferromagnetic core saturates magnetically. Several approaches to reducing the saturation effect are described in the literature. The Rogowski coil is representative of devices that measure large currents; it is an electrical device that measures alternating current (AC) or high-frequency current, but such devices are expensive in application. In addition, the volume of a CT must be increased to measure sufficiently large currents, and where the installation space is too small, other methods must be used. To solve this problem, it is necessary to analyze the magnetic field and electromotive force (EMF) characteristics when designing a CT. Thus, we propose an analysis method for a CT under inrush current using the time-domain finite element method (TDFEM). The input surge-current waveform is expanded in a Fourier series to obtain its instantaneous value. An FEM model of the device is derived in a two-dimensional system and coupled with EMF circuits. The time-derivative term in the differential equation is solved at each time step by the finite difference method. The proposed algorithm is useful for analyzing CT characteristics, including the field distribution, and provides a reference for assessing the effects of design parameters and magnetic materials for special shapes and sizes before a CT is designed and manufactured.

  16. An orthogonal return method for linearly polarized beam based on the Faraday effect and its application in interferometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Benyong, E-mail: chenby@zstu.edu.cn; Zhang, Enzheng; Yan, Liping

    2014-10-15

    Correct return of the measuring beam is essential for a laser interferometer to carry out measurement. In practice, because the measured object inevitably rotates or moves laterally, measurement accuracy decreases and measurement may even become impossible. To solve this problem, a novel orthogonal return method for a linearly polarized beam, based on the Faraday effect, is presented. Orthogonal return of the incident linearly polarized beam is realized using a Faraday rotator with a rotation angle of 45°. The optical configuration of the method is designed and analyzed in detail. To verify its practicability in polarization interferometry, a laser heterodyne interferometer based on this method was constructed and precision displacement measurement experiments were performed. The results show that the advantage of the method is that correct return of the incident measuring beam is ensured even when large lateral displacement or angular rotation of the measured object occurs, so that interferometric measurement can still be performed.

  17. Item Randomized-Response Models for Measuring Noncompliance: Risk-Return Perceptions, Social Influences, and Self-Protective Responses

    ERIC Educational Resources Information Center

    Bockenholt, Ulf; Van Der Heijden, Peter G. M.

    2007-01-01

    Randomized response (RR) is a well-known method for measuring sensitive behavior. Yet this method is not often applied because: (i) of its lower efficiency and the resulting need for larger sample sizes which make applications of RR costly; (ii) despite its privacy-protection mechanism the RR design may not be followed by every respondent; and…

  18. A new method of measurement of tension on a moving magnetic tape

    NASA Technical Reports Server (NTRS)

    Kurtinaytis, A. K.; Lauzhinskas, Y. S.

    1973-01-01

    The possibility of non-contact measurement of the tension on a moving magnetic tape, assuming the tape is uniform, is discussed. A scheme for calculating the natural frequency of transverse vibrations of the tape is shown, and mathematical models are developed to show the relationships among the parameters. The method is applicable to the design analysis of accurate tape-feed mechanisms.
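
    Treating the free tape span as an ideal vibrating string, tension follows from the measured fundamental transverse frequency via f₁ = (1/2L)√(T/μ), i.e. T = 4μL²f₁². A minimal sketch with illustrative values (the paper's model may include bending stiffness and other corrections):

```python
def tension_from_frequency(f1, length, mu):
    """Tension (N) from fundamental transverse frequency f1 (Hz), free span
    length (m), and linear density mu (kg/m): T = 4*mu*L^2*f1^2 (ideal string)."""
    return 4.0 * mu * length ** 2 * f1 ** 2

# Demo with assumed, illustrative values for a thin magnetic tape.
mu, L, T = 2.0e-4, 0.05, 1.0                  # kg/m, m, N
f1 = (1.0 / (2.0 * L)) * (T / mu) ** 0.5      # ~707 Hz for these values
T_est = tension_from_frequency(f1, L, mu)
```

    Measuring f₁ without contact (e.g. optically or capacitively) thus yields the tension directly once L and μ are known.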

  19. Measuring the iron spectral opacity in solar conditions using a double ablation front scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colaitis, A.; Ducret, J. E.; Turck-Chieze, S

    We propose a new method to achieve hydrodynamic conditions relevant for the investigation of the radiation transport properties of the plasma at the base of the solar convection zone. The method is designed in the framework of opacity measurements with high-power lasers and exploits the temporal and spatial stability of hydrodynamic parameters in counter-propagating Double Ablation Front (DAF) structures.

  20. EPA-ORD MEASUREMENT SCIENCE SUPPORT FOR HOMELAND SECURITY

    EPA Science Inventory

    This presentation will describe the organization and the research and development activities of the ORD National Exposure Measurements Center and will focus on the Center's planned role in providing analytical method development, statistical sampling and design guidance, quality ...

  1. Idiographic duo-trio tests using a constant-reference based on preference of each consumer: Sample presentation sequence in difference test can be customized for individual consumers to reduce error.

    PubMed

    Kim, Min-A; Sim, Hye-Min; Lee, Hye-Seong

    2016-11-01

    As reformulations and processing changes are increasingly needed in the food industry to produce healthier, more sustainable, and cost-effective products while maintaining superior quality, reliable measurements of consumers' sensory perception and discrimination are becoming more critical. Consumer discrimination methods using a preferred-reference duo-trio test design have been shown to improve discrimination performance by customizing sample presentation sequences. However, this design can add complexity to the discrimination task for some consumers, resulting in more errors in sensory discrimination. The objective of the present study was to investigate the effects of different types of test instructions in the preferred-reference duo-trio test design, where a paired-preference test is followed by 6 repeated preferred-reference duo-trio tests, in comparison with the analytical method using the balanced-reference duo-trio. Analyses of d' estimates (a product-related measure) and of probabilistic sensory discriminators in momentary numbers of subjects showing statistical significance (a subject-related measure) revealed that only the preferred-reference duo-trio test using affective reference-framing, either by providing no information about the reference or information on a previously preferred sample, improved sensory discrimination more than the analytical method. No decrease in discrimination performance was observed with any type of instruction, confirming that consumers could handle the test methods. These results suggest that when repeated tests are feasible, the affective discrimination method is operationally more efficient as well as ecologically more reliable for measuring consumers' sensory discrimination ability. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Physical and Psychological Effects of Head Treatment in the Supine Position Using Specialized Ayurveda-Based Techniques

    PubMed Central

    Iwawaki, Yoko; Uebaba, Kazuo; Yamamoto, Yoko; Takishita, Yukie; Harada, Kiyomi; Shibata, Akemi; Narumoto, Jin; Fukui, Kenji

    2016-01-01

    Abstract Objective: To clarify the physical and psychological effects of head massage performed in the supine position using Ayurveda-based techniques (head treatment). Design: Twenty-four healthy female students were included in the study. Using a crossover study design, the same participants were enrolled in both the head treatment intervention group and control group. There was an interval of 1 week or more between measurements. Outcome measures: The physiologic indices measured included blood pressure and heart rate fluctuations (high frequency and low frequency/high frequency). The psychological markers measured included liveliness, depression, and boredom using the visual analogue scale method. State anxiety was measured using the State-Trait Anxiety Inventory method. Results: The parasympathetic nerve activity increased immediately after head treatment. Upon completion of head treatment, the parasympathetic nerve predominance tended to gradually ease. Head treatment boosted freshness and relieved anxiety. Conclusions: The results suggest that head treatment has a relaxing and refreshing effect and may be used to provide comfort. PMID:27163344

  3. Combination of thermal and electric properties' measurement techniques in a single setup suitable for radioactive materials in controlled environments and based on the 3ω approach

    NASA Astrophysics Data System (ADS)

    Shrestha, K.; Gofryk, K.

    2018-04-01

    We have designed and developed a new experimental setup, based on the 3ω method, to measure thermal conductivity, heat capacity, and electrical resistivity of a variety of samples in a broad temperature range (2-550 K) and under magnetic fields up to 9 T. The validity of this method is tested by measuring various types of metallic (copper, platinum, and constantan) and insulating (SiO2) materials, which have a wide range of thermal conductivity values (1-400 W m-1 K-1). We have successfully employed this technique for measuring the thermal conductivity of two actinide single crystals: uranium dioxide and uranium nitride. This new experimental approach for studying nuclear materials will help us to advance reactor fuel development and understanding. We have also shown that this experimental setup can be adapted to the Physical Property Measurement System (Quantum Design) environment and/or other cryocooler systems.
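
    In Cahill's slope variant of the 3ω method, the in-phase temperature oscillation of a line heater falls linearly in ln(2ω) with slope −P/(2πlκ), so thermal conductivity follows from a linear fit. A sketch with synthetic data (heater power, heater length, and the lumped offset are assumed values; the authors' setup may process the 3ω voltage differently):

```python
import numpy as np

# Synthetic in-phase temperature oscillation of a line heater:
# dT = (P/(pi*l*kappa)) * (C - 0.5*ln(2*omega)),
# so the slope of dT versus ln(2*omega) is -P/(2*pi*l*kappa).
P, l, kappa = 0.030, 1.0e-3, 1.38             # W, m, W/(m*K) (assumed values)
omega = np.logspace(2, 4, 30)                 # angular frequencies (rad/s)
dT = (P / (np.pi * l * kappa)) * (5.0 - 0.5 * np.log(2 * omega))

slope = np.polyfit(np.log(2 * omega), dT, 1)[0]
kappa_est = -P / (2 * np.pi * l * slope)      # recovered thermal conductivity
```

    In a real measurement dT itself is inferred from the third-harmonic voltage and the heater's temperature coefficient of resistance before this fit is applied.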

  4. Assessment and Evaluation Methods for Access Services

    ERIC Educational Resources Information Center

    Long, Dallas

    2014-01-01

    This article serves as a primer for assessment and evaluation design by describing the range of methods commonly employed in library settings. Quantitative methods, such as counting and benchmarking measures, are useful for investigating the internal operations of an access services department in order to identify workflow inefficiencies or…

  5. Assessment and control design for steam vent noise in an oil refinery.

    PubMed

    Monazzam, Mohammad Reza; Golmohammadi, Rostam; Nourollahi, Maryam; Momen Bellah Fard, Samaneh

    2011-06-13

    Noise is one of the most important harmful agents in the work environment, and noise pollution in oil refineries affects workers' health. This study aimed to determine the overall noise pollution of an oil refinery operation, analyse its frequency content, and design a control measure for steam-vent noise in these industries. This experimental study was performed in the control unit of the Tehran Oil Refinery in 2008. To determine the noise distribution, environmental noise measurements were carried out by the lattice method according to basic information and the technical process. The sound pressure level and frequency distribution were measured for each studied source individually. According to the vent's specifications, the measured steam-noise characteristics were reviewed and compared with the theoretical estimates of steam noise. Eventually, a double-expansion muffler was designed. Data analysis and graphical design were carried out in Excel. The environmental noise measurements indicated that sound pressure levels exceeded the national permitted level of 85 dB(A). The mean sound pressure level of the studied steam jet was 90.3 dB(L). Frequency analysis of the steam vents showed a dominant frequency of 4000 Hz. To obtain a 17 dB noise reduction, a double-chamber aluminum muffler, 500 mm long and 200 mm in diameter and incorporating a drilled pipe, was designed. With the characteristics of the steam-vent noise separated from those of other sources, the muffler was designed using a new method based on the level of the steam noise and its principal sound frequency.
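
    The attenuation of a single expansion chamber is commonly estimated from the plane-wave transmission-loss formula TL = 10·log₁₀(1 + ¼(m − 1/m)²·sin²(kL)), with m the chamber-to-pipe area ratio. A sketch (pipe diameter and chamber length are assumed here; the paper's double-chamber design with a drilled pipe would require a more detailed model):

```python
import numpy as np

def expansion_tl(freq, m, chamber_len, c=343.0):
    """Plane-wave transmission loss (dB) of one expansion chamber with
    area ratio m and length chamber_len (m); k = 2*pi*f/c."""
    kL = 2 * np.pi * freq / c * chamber_len
    return 10 * np.log10(1 + 0.25 * (m - 1 / m) ** 2 * np.sin(kL) ** 2)

# Demo: area ratio m = (200 mm chamber / 50 mm pipe)^2 = 16 (pipe diameter
# assumed), one 250 mm chamber of the 500 mm muffler.
tl_4k = expansion_tl(4000.0, 16.0, 0.25)      # TL at the dominant 4 kHz
```

    TL peaks where kL is an odd multiple of π/2 and vanishes where kL is a multiple of π, which is why chamber length is tuned to the dominant frequency.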

  6. Evaluation of a cost-effective loads approach. [for Viking Orbiter light weight structural design

    NASA Technical Reports Server (NTRS)

    Garba, J. A.; Wada, B. K.; Bamford, R.; Trubert, M. R.

    1976-01-01

    A shock spectra/impedance method for loads prediction is used to estimate member loads for the Viking Orbiter, a 7800-lb interplanetary spacecraft that has been designed using transient loads analysis techniques. The transient loads analysis approach leads to a lightweight structure but requires complex and costly analyses. To reduce complexity and cost a shock spectra/impedance method is currently being used to design the Mariner Jupiter Saturn spacecraft. This method has the advantage of using low-cost in-house loads analysis techniques and typically results in more conservative structural loads. The method is evaluated by comparing the increase in Viking member loads to the loads obtained by the transient loads analysis approach. An estimate of the weight penalty incurred by using this method is presented. The paper also compares the calculated flight loads from the transient loads analyses and the shock spectra/impedance method to measured flight data.

  7. Research Methods in Healthcare Epidemiology: Survey and Qualitative Research.

    PubMed

    Safdar, Nasia; Abbo, Lilian M; Knobloch, Mary Jo; Seo, Susan K

    2016-11-01

    Surveys are one of the most frequently employed study designs in healthcare epidemiology research. Generally easier to undertake and less costly than many other study designs, surveys can be invaluable to gain insights into opinions and practices in large samples and may be descriptive and/or be used to test associations. In this context, qualitative research methods may complement this study design either at the survey development phase and/or at the interpretation/extension of results stage. This methods article focuses on key considerations for designing and deploying surveys in healthcare epidemiology and antibiotic stewardship, including identification of whether or not de novo survey development is necessary, ways to optimally lay out and display a survey, denominator measurement, discussion of biases to keep in mind particularly in research using surveys, and the role of qualitative research methods to complement surveys. We review examples of surveys in healthcare epidemiology and antimicrobial stewardship and review the pros and cons of methods used. A checklist is provided to help aid design and deployment of surveys in healthcare epidemiology and antimicrobial stewardship. Infect Control Hosp Epidemiol 2016;1-6.

  8. Research Methods in Healthcare Epidemiology: Survey and Qualitative Research

    PubMed Central

    Safdar, Nasia; Abbo, Lilian M.; Knobloch, Mary Jo; Seo, Susan K.

    2017-01-01

    Surveys are one of the most frequently employed study designs in healthcare epidemiology research. Generally easier to undertake and less costly than many other study designs, surveys can be invaluable to gain insights into opinions and practices in large samples and may be descriptive and/or be used to test associations. In this context, qualitative research methods may complement this study design either at the survey development phase and/or at the interpretation/extension of results stage. This methods article focuses on key considerations for designing and deploying surveys in healthcare epidemiology and antibiotic stewardship, including identification of whether or not de novo survey development is necessary, ways to optimally lay out and display a survey, denominator measurement, discussion of biases to keep in mind particularly in research using surveys, and the role of qualitative research methods to complement surveys. We review examples of surveys in healthcare epidemiology and antimicrobial stewardship and review the pros and cons of methods used. A checklist is provided to help aid design and deployment of surveys in healthcare epidemiology and antimicrobial stewardship. PMID:27514583

  9. Quantitative comparison of randomization designs in sequential clinical trials based on treatment balance and allocation randomness.

    PubMed

    Zhao, Wenle; Weng, Yanqiu; Wu, Qi; Palesch, Yuko

    2012-01-01

    To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures for imbalance and three measures for randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, we found that the performances of the 14 randomization designs lie in a closed region with the upper boundary (worst case) given by Efron's biased coin design (EBCD) and the lower boundary (best case) given by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide a smaller imbalance and a higher randomness than designs close to the upper boundary. Our research suggests that optimization of randomization design is possible based on quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance method, and Chen's Ehrenfest urn design perform better than the popularly used permuted block design, EBCD, and Wei's urn design. Copyright © 2011 John Wiley & Sons, Ltd.
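The two boundary designs named in the abstract can be simulated directly. The sketch below is an illustration, not the authors' code; the imbalance tolerance b = 3 and the bias p = 2/3 are assumed values. It implements Efron's biased coin design and the big stick design and computes the two trade-off measures used in the study, maximum absolute imbalance and correct-guess probability (for a guesser who always predicts the under-represented arm):

```python
import random

def efron_bcd(n, p=2/3, rng=random):
    """Efron's biased coin design: assign the under-represented arm with prob p."""
    diff = 0                      # (# assigned to arm A) - (# assigned to arm B)
    assignments = []
    for _ in range(n):
        if diff == 0:
            a = rng.random() < 0.5
        elif diff < 0:            # arm A behind: favour A
            a = rng.random() < p
        else:                     # arm A ahead: favour B
            a = rng.random() < 1 - p
        assignments.append(a)
        diff += 1 if a else -1
    return assignments

def big_stick(n, b=3, rng=random):
    """Soares and Wu's big stick design: pure randomization until the
    absolute imbalance reaches the tolerance b, then a forced assignment."""
    diff = 0
    assignments = []
    for _ in range(n):
        if diff >= b:
            a = False             # force arm B
        elif diff <= -b:
            a = True              # force arm A
        else:
            a = rng.random() < 0.5
        assignments.append(a)
        diff += 1 if a else -1
    return assignments

def max_abs_imbalance(assignments):
    """Largest |#A - #B| observed at any point in the allocation sequence."""
    diff = peak = 0
    for a in assignments:
        diff += 1 if a else -1
        peak = max(peak, abs(diff))
    return peak

def correct_guess_prob(assignments):
    """Fraction of allocations correctly predicted by a guesser who always
    guesses the currently under-represented arm (coin flip on ties)."""
    diff = 0
    correct = 0.0
    for a in assignments:
        if diff == 0:
            correct += 0.5        # right half the time on ties, in expectation
        elif (diff < 0) == a:
            correct += 1.0        # guessed the lagging arm, and it was assigned
        diff += 1 if a else -1
    return correct / len(assignments)
```

By construction the big stick design can never exceed its tolerance b, which is why it traces the lower (best-case) boundary for imbalance in the study's comparison.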

  10. Shipboard Electrical System Modeling for Early-Stage Design Space Exploration

    DTIC Science & Technology

    2013-04-01

    method is demonstrated in several system studies. I. INTRODUCTION The integrated engineering plant (IEP) of an electric warship can be viewed as a...which it must operate [2], [4]. The desired IEP design should be dependable [5]. The operability metric has previously been defined as a measure of...the performance of an IEP during a specific scenario [2]. Dependability metrics have been derived from the operability metric as measures of the IEP

  11. Near common-path optical fiber interferometer for potentially fast on-line microscale-nanoscale surface measurement

    NASA Astrophysics Data System (ADS)

    Jiang, Xiangqian; Wang, Kaiwei; Martin, Haydn

    2006-12-01

    We introduce a new surface measurement method for potential online application. Compared with our previous research, the new design is a significant improvement. It also features high stability because it uses a near common-path configuration. The method should be of great benefit to advanced manufacturing, especially for quality and process control in ultraprecision manufacturing and on the production line. Proof-of-concept experiments have been successfully conducted by measuring the system repeatability and the displacements of a mirror surface.

  12. Automated control of robotic camera tacheometers for measurements of industrial large scale objects

    NASA Astrophysics Data System (ADS)

    Heimonen, Teuvo; Leinonen, Jukka; Sipola, Jani

    2013-04-01

    Modern robotic tacheometers equipped with digital cameras (also called imaging total stations) and capable of reflectorless measurement offer new possibilities for gathering 3D data. In this paper an automated approach for the tacheometer measurements needed in the dimensional control of industrial large scale objects is proposed. The approach makes two new contributions: the automated extraction of the vital points (i.e., the points to be measured) and the automated fine aiming of the tacheometer. The proposed approach proceeds through the following steps: First, the coordinates of the vital points are automatically extracted from the computer aided design (CAD) data. The extracted design coordinates are then used to aim the tacheometer at the designed location of the points, one after another. However, due to the deviations between the designed and the actual location of the points, the aiming needs to be adjusted. An automated dynamic image-based look-and-move type servoing architecture is proposed for this task. After successful fine aiming, the actual coordinates of the point in question can be automatically measured using the measuring functionalities of the tacheometer. The approach was validated experimentally and found to be feasible. On average, 97% of the points actually measured in four different shipbuilding measurement cases were indeed proposed as vital points by the automated extraction algorithm. The accuracy of the results obtained with the automatic control method of the tacheometer was comparable to that obtained with manual control, and the reliability of the image processing step of the method was found to be high in the laboratory experiments.

  13. Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangnan

    2018-03-01

    Many uncertain factors arise in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions, boundary conditions, etc. Reliability methods measure the structural safety condition and determine the optimal design parameter combination based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory and optimization, is the most commonly used approach to minimize the structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach is utilized to incorporate this kind of incomplete information in its uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are introduced.

  14. Preliminary Solar Sail Design and Fabrication Assessment: Spinning Sail Blade, Square Sail Sheet

    NASA Technical Reports Server (NTRS)

    Daniels, J. B.; Dowdle, D. M.; Hahn, D. W.; Hildreth, E. N.; Lagerquist, D. R.; Mahaonoul, E. J.; Munson, J. B.; Origer, T. F.

    1977-01-01

    Blade design aspects most affecting producibility and means of measurement and control of length, scallop, fullness and straightness requirements and tolerances were extensively considered. Alternate designs of the panel seams and edge reinforcing members are believed to offer advantages of seam integrity, producibility, reliability, cost and weight. Approaches to and requirements for highly specialized metalizing methods, processes and equipment were studied and identified. Alternate methods of sail blade fabrication and related special machinery, tooling, fixtures and trade offs were examined. A preferred and recommended approach is also described. Quality control plans, inspection procedures, flow charts and special test equipment associated with the preferred manufacturing method were analyzed and are discussed.

  15. Multilevel Interventions: Measurement and Measures

    PubMed Central

    Charns, Martin P.; Alligood, Elaine C.; Benzer, Justin K.; Burgess, James F.; Mcintosh, Nathalie M.; Burness, Allison; Partin, Melissa R.; Clauser, Steven B.

    2012-01-01

    Background Multilevel intervention research holds the promise of more accurately representing real-life situations and, thus, with proper research design and measurement approaches, facilitating effective and efficient resolution of health-care system challenges. However, taking a multilevel approach to cancer care interventions creates both measurement challenges and opportunities. Methods One thousand seventy-two cancer care articles from 2005 to 2010 were reviewed to examine the state of measurement in the multilevel intervention cancer care literature. Ultimately, 234 multilevel articles, 40 involving cancer care interventions, were identified. Additionally, literature from health services, social psychology, and organizational behavior was reviewed to identify measures that might be useful in multilevel intervention research. Results The vast majority of measures used in multilevel cancer intervention studies were individual-level measures. Group-, organization-, and community-level measures were rarely used. Discussion of the independence, validity, and reliability of measures was scant. Discussion Measurement issues may be especially complex when conducting multilevel intervention research. Measurement considerations that are associated with multilevel intervention research include those related to independence, reliability, validity, sample size, and power. Furthermore, multilevel intervention research requires identification of key constructs and measures by level and consideration of interactions within and across levels. Thus, multilevel intervention research benefits from thoughtful theory-driven planning and design, an interdisciplinary approach, and mixed methods measurement and analysis. PMID:22623598

  16. Absolute surface reconstruction by slope metrology and photogrammetry

    NASA Astrophysics Data System (ADS)

    Dong, Yue

    Developing the manufacture of aspheric and freeform optical elements requires an advanced metrology method which is capable of inspecting these elements with arbitrary freeform surfaces. In this dissertation, a new surface measurement scheme is investigated for such a purpose, which is to measure the absolute surface shape of an object under test through its surface slope information obtained by photogrammetric measurement. A laser beam propagating toward the object reflects on its surface while the vectors of the incident and reflected beams are evaluated from the four spots they leave on the two parallel transparent windows in front of the object. The spots' spatial coordinates are determined by photogrammetry. With the knowledge of the incident and reflected beam vectors, the local slope information of the object surface is obtained through vector calculus and finally yields the absolute object surface profile by a reconstruction algorithm. An experimental setup is designed and the proposed measuring principle is experimentally demonstrated by measuring the absolute surface shape of a spherical mirror. The measurement uncertainty is analyzed, and efforts for improvement are made accordingly. In particular, structured windows are designed and fabricated to generate uniform scattering spots left by the transmitted laser beams. Calibration of the fringe reflection instrument, another typical surface slope measurement method, is also reported in the dissertation. Finally, a method for uncertainty analysis of a photogrammetry measurement system by optical simulation is investigated.
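The core reconstruction step described above, turning measured local surface slopes into an absolute profile, can be sketched in one dimension as a cumulative integration. This is illustrative only: the 1 cm sampling, 1 m aperture and 2 m radius of curvature are assumed values, and the actual instrument reconstructs a 2-D surface from a field of slope vectors.

```python
def reconstruct_profile(xs, slopes, z0=0.0):
    """Trapezoidal integration of measured surface slopes -> height profile.
    The absolute offset z0 must come from one reference height measurement."""
    zs = [z0]
    for i in range(1, len(xs)):
        # average slope over the interval times the interval width
        dz = 0.5 * (slopes[i - 1] + slopes[i]) * (xs[i] - xs[i - 1])
        zs.append(zs[-1] + dz)
    return zs

# Synthetic check: a spherical-mirror-like parabolic profile z = x^2 / (2R)
R = 2.0                                  # hypothetical radius of curvature (m)
xs = [i * 0.01 for i in range(101)]      # 1 cm sampling across a 1 m aperture
slopes = [x / R for x in xs]             # exact slope dz/dx of the parabola
zs = reconstruct_profile(xs, slopes)
err = max(abs(z - x * x / (2 * R)) for x, z in zip(xs, zs))
```

Because the slope of a parabola is linear, the trapezoidal rule integrates it exactly, so the reconstruction error here is at floating-point level; with real, noisy slope data the reconstruction algorithm must additionally regularize the integration.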

  17. Tools and methods for experimental in-vivo measurement and biomechanical characterization of an Octopus vulgaris arm.

    PubMed

    Margheri, Laura; Mazzolai, Barbara; Cianchetti, Matteo; Dario, Paolo; Laschi, Cecilia

    2009-01-01

    This work illustrates new tools and methods for in vivo, direct, but non-invasive measurement of the mechanical properties of an octopus arm. The active elongation (longitudinal stretch) and the pulling force capability are measured on a specimen of Octopus vulgaris in order to quantitatively characterize the parameters describing the arm mechanics, for biomimetic design purposes. The novel approach consists of observing and measuring a living octopus with minimally invasive methods, which allow the animal to move with its complete ability. All tools are conceived to create a collaborative interaction with the animal for the acquisition of active measures. The data analysis is executed taking into account the presence of an intrinsic error due to the mobility of the subject and the aquatic environment. Using a system of two synchronized high-speed high-resolution cameras and purpose-made instruments, the maximum elongation of an arm and its rest length (when all muscle fibres are relaxed during the propulsion movement) are measured and compared to define the longitudinal stretch, with the impressive average result of 194%. With a similar setup integrated with a force sensor, the pulling force capability is measured as a function of grasp point position along the arm. The measured parameters are used as real specifications for the design of an octopus-like arm with a biomimetic approach.

  18. An exploratory sequential design to validate measures of moral emotions.

    PubMed

    Márquez, Margarita G; Delgado, Ana R

    2017-05-01

    This paper presents an exploratory and sequential mixed methods approach to validating measures of knowledge of the moral emotions of contempt, anger and disgust. The sample comprised 60 participants in the qualitative phase, when a measurement instrument was designed. Item stems, response options and correction keys were planned following the results obtained in a descriptive phenomenological analysis of the interviews. In the quantitative phase, the scale was used with a sample of 102 Spanish participants, and the results were analysed with the Rasch model. In the qualitative phase, salient themes included reasons, objects and action tendencies. In the quantitative phase, good psychometric properties were obtained. The model fit was adequate. However, some changes had to be made to the scale in order to improve the proportion of variance explained. Substantive and methodological implications of this mixed-methods study are discussed. Had the study used a single research method in isolation, aspects of the global understanding of contempt, anger and disgust would have been lost.
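For readers unfamiliar with the Rasch model used in the quantitative phase: in its dichotomous form, the probability of endorsing an item is a logistic function of the difference between person ability and item difficulty, both on a common logit scale. A generic sketch (the ability and difficulty values below are arbitrary, not estimates from the study):

```python
import math

def rasch_p(theta, b):
    """Dichotomous Rasch model: probability that a person with ability theta
    endorses (answers correctly) an item with difficulty b, both in logits."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, difficulties):
    """Expected raw score of one person across a set of items."""
    return sum(rasch_p(theta, b) for b in difficulties)
```

When ability equals difficulty the endorsement probability is exactly 0.5, and harder items (larger b) are endorsed less often at any fixed ability, which is the monotonic structure Rasch fit statistics check against the observed responses.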

  19. Comparing 3D foot scanning with conventional measurement methods.

    PubMed

    Lee, Yu-Chi; Lin, Gloria; Wang, Mao-Jiun J

    2014-01-01

    Foot dimension information on different user groups is important for footwear design and clinical applications. Foot dimension data collected using different measurement methods present accuracy problems. This study compared the precision and accuracy of the 3D foot scanning method with conventional foot dimension measurement methods, including the digital caliper, ink footprint and digital footprint. Six commonly used foot dimensions, i.e. foot length, ball of foot length, outside ball of foot length, foot breadth diagonal, foot breadth horizontal and heel breadth, were measured from 130 males and females using the four foot measurement methods. Two-way ANOVA was performed to evaluate the sex and method effects on the measured foot dimensions. In addition, mean absolute difference values and intra-class correlation coefficients (ICCs) were used for precision and accuracy evaluation. The results were also compared with the ISO 20685 criteria. The participant's sex and the measurement method were found to exert significant effects (p < 0.05) on the six measured foot dimensions. The precision of the 3D scanning method, with mean absolute difference values between 0.73 and 1.50 mm, was the best among the four measurement methods. The 3D scanning measurements also showed better accuracy than the other methods (mean absolute difference 0.6 to 4.3 mm), except for measuring outside ball of foot length and foot breadth horizontal. The ICCs for all six foot dimension measurements among the four measurement methods were within the 0.61 to 0.98 range. Overall, the 3D foot scanner is recommended for collecting foot anthropometric data because it has relatively higher precision, accuracy and robustness. This finding suggests that when comparing foot anthropometric data among different references, it is important to consider the differences caused by the different measurement methods.
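The two agreement statistics used in this comparison can be reproduced on paired measurements. Below is a hedged sketch (the foot-length numbers are synthetic, not the study's data) of the mean absolute difference and a two-way random-effects, single-measure intra-class correlation, ICC(2,1), for one dimension measured by two methods:

```python
def mean_abs_diff(a, b):
    """Mean absolute difference between two paired series of measurements,
    in the same units (here mm)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def icc_2_1(rows):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1)
    for an n-subjects x k-methods table of measurements."""
    n, k = len(rows), len(rows[0])
    grand = sum(sum(r) for r in rows) / (n * k)
    row_means = [sum(r) / k for r in rows]
    col_means = [sum(r[j] for r in rows) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)   # methods
    sse = sum((rows[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                                # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical foot lengths (mm): caliper vs. 3D scanner with a 1 mm offset
caliper = [240.0, 245.0, 250.0, 255.0, 260.0, 265.0]
scanner = [v + 1.0 for v in caliper]
mad = mean_abs_diff(caliper, scanner)
icc = icc_2_1([[c, s] for c, s in zip(caliper, scanner)])
```

A constant 1 mm offset shows up fully in the mean absolute difference, while the ICC stays high because the subject-to-subject spread dwarfs the method effect, which is why the study reports both statistics rather than either alone.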

  20. Three dimensional finite element methods: Their role in the design of DC accelerator systems

    NASA Astrophysics Data System (ADS)

    Podaru, Nicolae C.; Gottdang, A.; Mous, D. J. W.

    2013-04-01

    High Voltage Engineering has designed, built and tested a 2 MV dual irradiation system that will be applied for radiation damage studies and ion beam material modification. The system consists of two independent accelerators which support simultaneous proton and electron irradiation (energy range 100 keV - 2 MeV) of target sizes of up to 300 × 300 mm2. Three dimensional finite element methods were used in the design of various parts of the system. The electrostatic solver was used to quantify essential parameters of the solid-state power supply generating the DC high voltage. The magnetostatic solver and ray tracing were used to optimize the electron/ion beam transport. Close agreement between design and measurements of the accelerator characteristics as well as beam performance indicate the usefulness of three dimensional finite element methods during accelerator system design.

  1. A simple method of calculating Stirling engines for engine design optimization

    NASA Technical Reports Server (NTRS)

    Martini, W. R.

    1978-01-01

    A calculation method is presented for a rhombic drive Stirling engine with a tubular heater and cooler and a screen type regenerator. Generally the equations presented describe power generation and consumption and heat losses. It is the simplest type of analysis that takes into account the conflicting requirements inherent in Stirling engine design. The method itemizes the power and heat losses for intelligent engine optimization. The results of engine analysis of the GPU-3 Stirling engine are compared with more complicated engine analysis and with engine measurements.

  2. Design, evaluation and test of an electronic, multivariable control for the F100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Skira, C. A.; Dehoff, R. L.; Hall, W. E., Jr.

    1980-01-01

    A digital, multivariable control design procedure for the F100 turbofan engine is described. The controller is based on locally linear synthesis techniques using linear, quadratic regulator design methods. The control structure uses an explicit model reference form with proportional and integral feedback near a nominal trajectory. Modeling issues, design procedures for the control law and the estimation of poorly measured variables are presented.

  3. 2010 Anthropometric Survey of U.S. Marine Corps Personnel: Methods and Summary Statistics

    DTIC Science & Technology

    2013-06-01

    models for the ergonomic design of working environments. Today, the entire production chain for a piece of clothing, beginning with the design and...Corps 382 crewstations and workstations. Digital models are increasingly used in the design process for seated and standing workstations, as well...International Standards for Ergonomic Design : These dimensions are useful for comparing data sets between nations, and are measured according to

  4. Estimating surface acoustic impedance with the inverse method.

    PubMed

    Piechowicz, Janusz

    2011-01-01

    Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings and in sound field simulations. Those methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques have been developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary elements method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily-shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.

  5. Determination of antenna factors using a three-antenna method at open-field test site

    NASA Astrophysics Data System (ADS)

    Masuzawa, Hiroshi; Tejima, Teruo; Harima, Katsushige; Morikawa, Takao

    1992-09-01

    Recently NIST has used the three-antenna method for calibration of the antenna factor of an antenna used for EMI measurements. This method does not require the specially designed standard antennas which are necessary in the standard field method or the standard antenna method, and can be used at an open-field test site. This paper theoretically and experimentally examines the measurement errors of this method and evaluates the precision of the antenna-factor calibration. It is found that the main source of error is the non-ideal propagation characteristics of the test site, which should therefore be measured before the calibration. The precision of the antenna-factor calibration at the test site used in these experiments is estimated to be 0.5 dB.

  6. Climate Change: A "Green" Approach to Teaching Contemporary Germany

    ERIC Educational Resources Information Center

    Melin, Charlotte

    2013-01-01

    This article describes a newly designed upper division German language course, "Contemporary Germany: Food, Energy Politics," and two sampling methods of assessment for measuring parallel gains in German skills and sustainable development (SD) thinking. Second Language Acquisition (SLA) informed course design, key assignments, and…

  7. Developing a method for estimating AADT on all Louisiana roads : [tech summary].

    DOT National Transportation Integrated Search

    2015-12-01

    Annual Average Daily Traffic (AADT), the average daily volume of vehicle traffic on a highway or road, is an important measure in transportation engineering. AADT is used in highway geometric design, pavement design, traffic forecasting, and h...

  8. Study on the millimeter-wave scale absorber based on the Salisbury screen

    NASA Astrophysics Data System (ADS)

    Yuan, Liming; Dai, Fei; Xu, Yonggang; Zhang, Yuan

    2018-03-01

    In order to solve the problem of millimeter-wave scale absorbers, a Salisbury screen absorber is employed and designed based on the reflection loss (RL). By optimizing parameters including the sheet resistance of the surface resistive layer and the permittivity and thickness of the grounded dielectric layer, the RL of the Salisbury screen absorber can be made identical to that of the theoretical scale absorber. An example is given to verify the effectiveness of the method, in which the Salisbury screen absorber is designed by the proposed method and compared with the theoretical scale absorber. Meanwhile, plate models and tri-corner reflector (TCR) models are constructed according to the designed result and their scattering properties are simulated with FEKO. Results reveal that the deviation between the designed Salisbury screen absorber and the theoretical scale absorber falls within the tolerance of radar cross section (RCS) measurement. The work in this paper has important theoretical and practical significance for electromagnetic measurement at large scale ratios.
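The RL of a classical Salisbury screen at normal incidence follows from a simple transmission-line model: a resistive sheet in parallel with the input impedance of a shorted (grounded) dielectric spacer, a quarter-wavelength thick at the design frequency. The sketch below is illustrative only; the 9.4 GHz design frequency, the 377 ohm/sq sheet resistance and the air-like spacer are assumed values, not parameters from the paper:

```python
import math

Z0 = 376.73   # impedance of free space (ohm)
C0 = 2.998e8  # speed of light (m/s)

def salisbury_rl_db(freq, f0=9.4e9, rs=376.73, eps_r=1.0):
    """Reflection loss (dB) of a Salisbury screen at normal incidence:
    a resistive sheet of sheet resistance rs (ohm/sq) on a grounded dielectric
    spacer that is a quarter-wave thick at the design frequency f0."""
    d = C0 / (4.0 * f0 * math.sqrt(eps_r))        # quarter-wave spacer thickness
    beta = 2.0 * math.pi * freq * math.sqrt(eps_r) / C0
    zc = Z0 / math.sqrt(eps_r)                    # spacer characteristic impedance
    zd = 1j * zc * math.tan(beta * d)             # input impedance of shorted spacer
    zin = rs * zd / (rs + zd)                     # resistive sheet in parallel
    gamma = (zin - Z0) / (zin + Z0)               # reflection coefficient
    return -20.0 * math.log10(abs(gamma))
```

At the design frequency the shorted spacer presents a near-open circuit, the input impedance collapses to the sheet resistance, and a sheet matched to 377 ohm/sq absorbs almost perfectly; away from resonance the RL falls off, which is the frequency behavior the optimization in the paper shapes to match the theoretical scale absorber.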

  9. Design methodology for micro-discrete planar optics with minimum illumination loss for an extended source.

    PubMed

    Shim, Jongmyeong; Park, Changsu; Lee, Jinhyung; Kang, Shinill

    2016-08-08

    Recently, studies have examined techniques for modeling the light distribution of light-emitting diodes (LEDs) for various applications owing to their low power consumption, longevity, and light weight. The energy mapping technique, a design method that matches the energy distributions of an LED light source and a target area, has been the focus of active research because of its design efficiency and accuracy. However, these studies have not considered the effects of the emitting area of the LED source. Therefore, there are limitations to the design accuracy for small, high-power applications with a short distance between the light source and the optical system. A design method for compensating for the light distribution of an extended source after an initial optics design based on a point source was proposed to overcome such limits, but its time-consuming process and limited design accuracy after multiple iterations raised the need for a new design method that considers an extended source in the initial design stage. This study proposed a method for designing discrete planar optics that controls the light distribution and minimizes the optical loss with an extended source, and verified the proposed method experimentally. First, the extended source was modeled theoretically, and a design method for discrete planar optics with the optimum groove angle through energy mapping was proposed. To verify the design method, discrete planar optics were designed for LED flash illumination. In addition, discrete planar optics for LED illuminance were designed and fabricated to create a uniform illuminance distribution. Optical characterization of these structures showed that the design was optimal; i.e., we plotted the optical losses as a function of the groove angle and found a clear minimum. Simulations and measurements showed that an efficient optical design was achieved for an extended source.

  10. Mapping Mixed Methods Research: Methods, Measures, and Meaning

    ERIC Educational Resources Information Center

    Wheeldon, J.

    2010-01-01

    This article explores how concept maps and mind maps can be used as data collection tools in mixed methods research to combine the clarity of quantitative counts with the nuance of qualitative reflections. Based on more traditional mixed methods approaches, this article details how the use of pre/post concept maps can be used to design qualitative…

  11. Measurement of Spray Drift with a Specifically Designed Lidar System.

    PubMed

    Gregorio, Eduard; Torrent, Xavier; Planas de Martí, Santiago; Solanelles, Francesc; Sanz, Ricardo; Rocadenbosch, Francesc; Masip, Joan; Ribes-Dasi, Manel; Rosell-Polo, Joan R

    2016-04-08

    Field measurements of spray drift are usually carried out by passive collectors and tracers. However, these methods are labour- and time-intensive and only provide point- and time-integrated measurements. Unlike these methods, the light detection and ranging (lidar) technique allows real-time measurements, obtaining information with temporal and spatial resolution. Recently, the authors have developed the first eye-safe lidar system specifically designed for spray drift monitoring. This prototype is based on a 1534 nm erbium-doped glass laser and an 80 mm diameter telescope, has scanning capability, and is easily transportable. This paper presents the results of the first experimental campaign carried out with this instrument. High coefficients of determination (R² > 0.85) were observed by comparing lidar measurements of the spray drift with those obtained by horizontal collectors. Furthermore, the lidar system allowed an assessment of the drift reduction potential (DRP) when comparing low-drift nozzles with standard ones, resulting in a DRP of 57% (preliminary result) for the tested nozzles. The lidar system was also used for monitoring the evolution of the spray flux over the canopy and to generate 2-D images of these plumes. The developed instrument is an advantageous alternative to passive collectors and opens the possibility of new methods for field measurement of spray drift.

  12. Wear measurement of dental tissues and materials in clinical studies: A systematic review.

    PubMed

    Wulfman, C; Koenig, V; Mainjot, A K

    2018-06-01

    This study aims to systematically review the different methods used for wear measurement of dental tissues and materials in clinical studies, their relevance and reliability in terms of accuracy and precision, and the performance of the different steps of the workflow taken independently. An exhaustive search of clinical studies related to wear of dental tissues and materials reporting a quantitative measurement method was conducted. MedLine, Embase, Scopus, Cochrane Library and Web of Science databases were used. Prospective studies, pilot studies and case series (>10 patients) were included, as long as they contained a description of the wear measurement methodology. Only studies published after 1995 were considered. After duplicates' removal, 495 studies were identified, and 41 remained for quantitative analysis. Thirty-four described wear-measurement protocols using digital profilometry and superimposition, whereas 7 used alternative protocols. A specific form was designed to analyze the risk of bias. The methods were described in terms of material analyzed; study design; device used for surface acquisition; matching software details and settings; type of analysis (vertical height-loss measurement vs volume-loss measurement); type of area investigated (entire occlusal area or selective areas); and results. There is a need for standardization of clinical wear measurement. Current methods exhibit accuracy that is not sufficient to monitor wear of restorative materials and tooth tissues. Their performance could be improved, notably by limiting the use of replicas, using standardized calibration procedures and positive controls, optimizing the settings of scanners and matching software, and taking unusable data into account. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.
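The two analysis types the review distinguishes, vertical height-loss versus volume-loss measurement, can be illustrated on a pair of superimposed height maps. This is a toy sketch with made-up numbers; real protocols operate on registered 3-D scans after the matching-software superimposition step:

```python
def wear_metrics(baseline, follow_up, pixel_area_mm2=0.01):
    """Compute wear between two superimposed, registered height maps given as
    flat lists of surface heights (mm) on the same pixel grid.
    Returns (max vertical height loss in mm, worn volume in mm^3)."""
    losses = [b - f for b, f in zip(baseline, follow_up)]
    max_height_loss = max(losses)                  # deepest single-point loss
    # volume: sum positive losses only, scaled by the area each pixel covers
    volume_loss = sum(l for l in losses if l > 0) * pixel_area_mm2
    return max_height_loss, volume_loss

# Toy 2 x 2 occlusal patch: one pixel lost 0.2 mm, another lost 0.1 mm
h_loss, v_loss = wear_metrics([1.0, 1.0, 1.0, 1.0], [1.0, 0.8, 0.9, 1.0])
```

The height-loss metric is sensitive to localized facets while the volume metric integrates over the whole area, which is one reason the review stresses reporting which analysis type a study used.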

  13. Corrected score estimation in the proportional hazards model with misclassified discrete covariates

    PubMed Central

    Zucker, David M.; Spiegelman, Donna

    2013-01-01

    We consider Cox proportional hazards regression when the covariate vector includes error-prone discrete covariates along with error-free covariates, which may be discrete or continuous. The misclassification in the discrete error-prone covariates is allowed to be of any specified form. Building on the work of Nakamura and his colleagues, we present a corrected score method for this setting. The method can handle all three major study designs (internal validation design, external validation design, and replicate measures design), both functional and structural error models, and time-dependent covariates satisfying a certain ‘localized error’ condition. We derive the asymptotic properties of the method and indicate how to adjust the covariance matrix of the regression coefficient estimates to account for estimation of the misclassification matrix. We present the results of a finite-sample simulation study under Weibull survival with a single binary covariate having known misclassification rates. The performance of the method described here was similar to that of related methods we have examined in previous works. Specifically, our new estimator performed as well as or, in a few cases, better than the full Weibull maximum likelihood estimator. We also present simulation results for our method for the case where the misclassification probabilities are estimated from an external replicate measures study. Our method generally performed well in these simulations. The new estimator has a broader range of applicability than many other estimators proposed in the literature, including those described in our own earlier work, in that it can handle time-dependent covariates with an arbitrary misclassification structure. We illustrate the method on data from a study of the relationship between dietary calcium intake and distal colon cancer. PMID:18219700

  14. Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.

    PubMed

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang

    2016-11-01

    Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types; both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification with a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types. © 2016, National Ground Water Association.

  15. The nature and outcomes of work: a replication and extension of interdisciplinary work-design research.

    PubMed

    Edwards, J R; Scully, J A; Brtek, M D

    2000-12-01

    Research into the changing nature of work requires comprehensive models of work design. One such model is the interdisciplinary framework (M. A. Campion, 1988), which integrates 4 work-design approaches (motivational, mechanistic, biological, perceptual-motor) and links each approach to specific outcomes. Unfortunately, studies of this framework have used methods that disregard measurement error, overlook dimensions within each work-design approach, and treat each approach and outcome separately. This study reanalyzes data from M. A. Campion (1988), using structural equation models that incorporate measurement error, specify multiple dimensions for each work-design approach, and examine the work-design approaches and outcomes jointly. Results show that previous studies underestimate relationships between work-design approaches and outcomes and that dimensions within each approach exhibit relationships with outcomes that differ in magnitude and direction.

  16. Acoustic Treatment Design Scaling Methods. Volume 3; Test Plans, Hardware, Results, and Evaluation

    NASA Technical Reports Server (NTRS)

    Yu, J.; Kwan, H. W.; Echternach, D. K.; Kraft, R. E.; Syed, A. A.

    1999-01-01

    The ability to design, build, and test miniaturized acoustic treatment panels on scale-model fan rigs representative of the full-scale engine provides not only a cost-savings, but an opportunity to optimize the treatment by allowing tests of different designs. To be able to use scale model treatment as a full-scale design tool, it is necessary that the designer be able to reliably translate the scale model design and performance to an equivalent full-scale design. The primary objective of the study presented in this volume of the final report was to conduct laboratory tests to evaluate liner acoustic properties and validate advanced treatment impedance models. These laboratory tests include DC flow resistance measurements, normal incidence impedance measurements, DC flow and impedance measurements in the presence of grazing flow, and in-duct liner attenuation as well as modal measurements. Test panels were fabricated at three different scale factors (i.e., full-scale, half-scale, and one-fifth scale) to support laboratory acoustic testing. The panel configurations include single-degree-of-freedom (SDOF) perforated sandwich panels, SDOF linear (wire mesh) liners, and double-degree-of-freedom (DDOF) linear acoustic panels.

  17. Gender counts: A systematic review of evaluations of gender-integrated health interventions in low- and middle-income countries.

    PubMed

    Schriver, Brittany; Mandal, Mahua; Muralidharan, Arundati; Nwosu, Anthony; Dayal, Radhika; Das, Madhumita; Fehringer, Jessica

    2017-11-01

    As a result of new global priorities, there is a growing need for high-quality evaluations of gender-integrated health programmes. This systematic review examined 99 peer-reviewed articles on evaluations of gender-integrated (accommodating and transformative) health programmes with regard to their theory of change (ToC), study design, gender integration in data collection and analysis, and gender measures used. Half of the evaluations explicitly described a ToC or conceptual framework (n = 50) that guided the strategies for their interventions. Over half (61%) of the evaluations used quantitative methods exclusively; 11% used qualitative methods exclusively; and 28% used mixed methods. Qualitative methods were rarely described in detail. Evaluations of transformative interventions were less likely than those of accommodating interventions to employ randomised controlled trials. Two-thirds of the reviewed evaluations reported including at least one specific gender-related outcome (n = 18 accommodating, n = 44 transformative). To strengthen evaluations of gender-integrated programmes, we recommend use of ToCs, explicitly including gender in the ToC, use of gender-sensitive measures, mixed-method designs, in-depth descriptions of qualitative methods, and attention to gender-related factors in data collection logistics. We also recommend further research to develop valid and reliable gender measures that are globally relevant.

  18. Thermophysical Properties of Matter - The TPRC Data Series. Volume 3. Thermal Conductivity - Nonmetallic Liquids and Gases

    DTIC Science & Technology

    1970-01-01

    The volume surveys measurement methods for thermal conductivity of nonmetallic liquids and gases, including the line-source flow method, the hot-wire thermal diffusion column method, the shock-tube method (Smiley [546] introduced the use of shock waves, a technique noted for its unique adaptability to high-temperature measurement), the arc method, and the ultrasonic method.

  19. Design approaches to experimental mediation☆

    PubMed Central

    Pirlott, Angela G.; MacKinnon, David P.

    2016-01-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259

  20. Design approaches to experimental mediation.

    PubMed

    Pirlott, Angela G; MacKinnon, David P

    2016-09-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., "measurement-of-mediation" designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable.

  1. Automated hotspot analysis with aerial image CD metrology for advanced logic devices

    NASA Astrophysics Data System (ADS)

    Buttgereit, Ute; Trautzsch, Thomas; Kim, Min-ho; Seo, Jung-Uk; Yoon, Young-Keun; Han, Hak-Seung; Chung, Dong Hoon; Jeon, Chan-Uk; Meyers, Gary

    2014-09-01

    Continuously shrinking designs, enabled by further extension of 193 nm technology, lead to a much higher probability of hotspots, especially in the manufacturing of advanced logic devices. The CD of these potential hotspots needs to be precisely controlled and measured on the mask. On top of that, feature complexity increases due to the high OPC load in logic mask designs, which is an additional challenge for CD metrology. Therefore, the hotspot measurements have been performed on WLCD from ZEISS, which provides the benefit of reduced complexity by measuring the CD in the aerial image and qualifying the printing-relevant CD. This is especially advantageous for complex 2D feature measurements. Additionally, data preparation for CD measurement becomes more critical due to the larger number of CD measurements and the increasing feature diversity. For data preparation this means identifying these hotspots and marking them automatically with the correct marker required to make the feature-specific CD measurement successful. Currently available methods can address generic patterns but cannot deal with the pattern diversity of the hotspots. The paper explores a method to overcome those limitations and to improve the time-to-result of the marking process dramatically. For the marking process, the Synopsys WLCD Output Module was utilized, which is an interface between the CATS mask data prep software and the WLCD metrology tool. It translates the CATS marking directly into an executable WLCD measurement job, including CD analysis. The paper describes the method and flow used for the hotspot measurement and presents the results achieved on hotspot measurements utilizing this method.

  2. Intra-rater reliability and agreement of various methods of measurement to assess dorsiflexion in the Weight Bearing Dorsiflexion Lunge Test (WBLT) among female athletes.

    PubMed

    Langarika-Rocafort, Argia; Emparanza, José Ignacio; Aramendi, José F; Castellano, Julen; Calleja-González, Julio

    2017-01-01

    To examine the intra-observer reliability of, and agreement between, five methods of measuring dorsiflexion during the Weight Bearing Dorsiflexion Lunge Test, and to assess the degree of agreement between three of these methods, in female athletes. Repeated-measurements study design. Volleyball club. Twenty-five volleyball players. Dorsiflexion was evaluated using five methods: heel-wall distance, first toe-wall distance, inclinometer at the tibia, inclinometer at the Achilles tendon, and the dorsiflexion angle obtained by a simple trigonometric function. For the statistical analysis, agreement was studied using the Bland-Altman method, the Standard Error of Measurement, and the Minimum Detectable Change; reliability was analysed using the Intraclass Correlation Coefficient (ICC). The inclinometer-based methods had more than 6° of measurement error, whereas the angle calculated by the trigonometric function had 3.28° of error. The inclinometer-based methods had ICC values < 0.90; the distance-based methods and the trigonometric angle measurement had ICC values > 0.90. Concerning agreement between methods, bias ranged from 1.93° to 14.42°, and random error from 4.24° to 7.96°. To assess the dorsiflexion angle in the WBLT, the angle calculated by a trigonometric function is the most repeatable method. The methods of measurement cannot be used interchangeably. Copyright © 2016 Elsevier Ltd. All rights reserved.
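    The Bland-Altman agreement statistics named in this record are straightforward to compute. A minimal sketch, using hypothetical paired angle readings rather than the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical paired readings from two methods, in degrees
bias, loa = bland_altman([10.0, 12.0, 11.0, 13.0], [9.0, 10.0, 10.0, 12.0])
```

    The limits of agreement use the conventional 1.96 × SD of the paired differences; the record's SEM and MDC statistics would additionally require the ICC.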

  3. What quantum measurements measure

    NASA Astrophysics Data System (ADS)

    Griffiths, Robert B.

    2017-09-01

    A solution to the second measurement problem, determining what prior microscopic properties can be inferred from measurement outcomes ("pointer positions"), is worked out for projective and generalized (POVM) measurements, using consistent histories. The result supports the idea that equipment properly designed and calibrated reveals the properties it was designed to measure. Applications include Einstein's hemisphere and Wheeler's delayed choice paradoxes, and a method for analyzing weak measurements without recourse to weak values. Quantum measurements are noncontextual in the original sense employed by Bell and Mermin: if [A,B] = [A,C] = 0 and [B,C] ≠ 0, the outcome of an A measurement does not depend on whether it is measured with B or with C. An application to Bohm's model of the Einstein-Podolsky-Rosen situation suggests that a faulty understanding of quantum measurements is at the root of this paradox.

  4. A physical parameter method for the design of broad-band X-ray imaging systems to do coronal plasma diagnostics

    NASA Technical Reports Server (NTRS)

    Kahler, S.; Krieger, A. S.

    1978-01-01

    The technique commonly used for the analysis of data from broad-band X-ray imaging systems for plasma diagnostics is the filter ratio method. This requires the use of two or more broad-band filters to derive temperatures and line-of-sight emission integrals or emission measure distributions as a function of temperature. Here an alternative analytical approach is proposed in which the temperature response of the imaging system is matched to the physical parameter being investigated. The temperature response of a system designed to measure the total radiated power along the line of sight of any coronal structure is calculated. Other examples are discussed.

  5. 15 CFR 200.103 - Consulting and advisory services.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...., details of design and construction, operational aspects, unusual or extreme conditions, methods of statistical control of the measurement process, automated acquisition of laboratory data, and data reduction... group seminars on the precision measurement of specific types of physical quantities, offering the...

  6. 15 CFR 200.103 - Consulting and advisory services.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...., details of design and construction, operational aspects, unusual or extreme conditions, methods of statistical control of the measurement process, automated acquisition of laboratory data, and data reduction... group seminars on the precision measurement of specific types of physical quantities, offering the...

  7. Design of an S band narrow-band bandpass BAW filter

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Zhao, Kun-li; Han, Chao

    2017-11-01

    An S band narrow-band bandpass BAW filter with center frequency 2.460 GHz, bandwidth 41 MHz, in-band insertion loss of -1.154 dB, passband ripple of 0.9 dB, and out-of-band rejection of about -42.5 dB at 2.385 GHz and -45.5 dB at 2.506 GHz was designed for potential UAV measurement and control applications. According to the design specifications, the design proceeded as follows: each FBAR's stack in the BAW filter was designed using the Mason model; each FBAR's shape was designed with the apodized-electrode method; the layout of the BAW filter was designed; and an acoustic-electromagnetic co-simulation model was built to validate the performance of the designed BAW filter. The presented design procedure is a general one, with two notable characteristics: 1) an acoustic and electromagnetic (A and EM) co-simulation method is used for final BAW filter performance validation in the design stage, which ensures that over-optimistic designs produced by the bare 1D Mason model are found and rejected in time; 2) an in-house-developed auto-layout method is used to obtain a compact BAW filter layout, which simplifies iterative trial-and-error work and outputs the in-plane geometry information needed by the A and EM co-simulation model.

  8. Efficient two-dimensional compressive sensing in MIMO radar

    NASA Astrophysics Data System (ADS)

    Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad

    2017-12-01

    Compressive sensing (CS) has been a way to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals, and then we propose a new measurement matrix design for our 2D compressive sensing model that is based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using gradient descent (2D-MMDGD) has much lower computational complexity compared to one-dimensional (1D) methods while having better performance in comparison with conventional methods such as the Gaussian random measurement matrix.
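    The coherence criterion that the record's measurement-matrix design minimizes can be evaluated directly. A minimal sketch of the mutual coherence of a sensing matrix (the gradient-descent design loop itself is not reproduced here):

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct unit-normalized columns of A."""
    A = np.asarray(A, float)
    A = A / np.linalg.norm(A, axis=0, keepdims=True)  # unit-norm columns
    G = np.abs(A.T @ A)                               # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)                          # ignore self-correlations
    return G.max()
```

    A coherence-minimizing design would differentiate this quantity (or a smooth surrogate) with respect to the matrix entries and descend on it.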

  9. Progress in integrated-circuit horn antennas for receiver applications. Part 1: Antenna design

    NASA Technical Reports Server (NTRS)

    Eleftheriades, George V.; Ali-Ahmad, Walid Y.; Rebeiz, Gabriel M.

    1992-01-01

    The purpose of this work is to present a systematic method for the design of multimode quasi-integrated horn antennas. The design methodology is based on the Gaussian beam approach, and the structures are optimized for maximum fundamental Gaussian coupling efficiency. For this purpose, a hybrid technique is employed in which the integrated part of the antennas is treated using full-wave analysis, whereas the machined part is treated using an approximate method. This results in a simple and efficient design process. The developed design procedure has been applied to the design of 20, 23, and 25 dB quasi-integrated horn antennas, all with a Gaussian coupling efficiency exceeding 97 percent. The designed antennas have been tested and characterized using both full-wave analysis and 90 GHz/370 GHz measurements.

  10. 10 CFR Appendix E to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Water Heaters

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Heater means a water heater that uses electricity as the energy source, is designed to heat and store... that uses gas as the energy source, is designed to heat and store water at a thermostatically... energy source, is designed to heat and store water at a thermostatically controlled temperature of less...

  11. 10 CFR Appendix E to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Water Heaters

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Heater means a water heater that uses electricity as the energy source, is designed to heat and store... that uses gas as the energy source, is designed to heat and store water at a thermostatically... energy source, is designed to heat and store water at a thermostatically controlled temperature of less...

  12. 10 CFR Appendix E to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Water Heaters

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Heater means a water heater that uses electricity as the energy source, is designed to heat and store... that uses gas as the energy source, is designed to heat and store water at a thermostatically... energy source, is designed to heat and store water at a thermostatically controlled temperature of less...

  13. Measuring Response to Intervention: Comparing Three Effect Size Calculation Techniques for Single-Case Design Analysis

    ERIC Educational Resources Information Center

    Ross, Sarah Gwen

    2012-01-01

    Response to intervention (RTI) is increasingly being used in educational settings to make high-stakes, special education decisions. Because of this, the accurate use and analysis of single-case designs to monitor intervention effectiveness has become important to the RTI process. Effect size methods for single-case designs provide a useful way to…
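    Although this record does not name the three effect-size techniques compared, one widely used nonoverlap metric for single-case designs, percentage of nonoverlapping data (PND), illustrates the genre. A minimal sketch, assuming improvement corresponds to an increase in the measured behavior:

```python
def pnd(baseline, intervention):
    """Percentage of nonoverlapping data: share of intervention-phase points
    exceeding the highest baseline point (assumes improvement = increase)."""
    ceiling = max(baseline)
    above = sum(1 for x in intervention if x > ceiling)
    return 100.0 * above / len(intervention)
```

    PND is only one of several single-case effect sizes (others include regression-based and standardized-mean-difference approaches), and it need not be among those examined in the dissertation.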

  14. The measure method of internal screw thread and the measure device design

    NASA Astrophysics Data System (ADS)

    Hu, Dachao; Chen, Jianguo

    2008-12-01

    In accordance with the principle of the three-wire (three-line) method, this paper analyses the relationships among the main parameters of an internal screw thread and then presents the design of a device to measure them. From the measured values and the corresponding formulas, the internal thread parameters can be calculated, such as the pitch diameter, thread angle, and pitch of common, trapezoidal, and buttress screw threads, among others. Practical application has shown that the device is convenient to operate and that the measured data have high accuracy. A patent application for this device has been accepted by the Patent Office. (Filing number: 200710044081.5)
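    For context, the classic three-wire relation for the pitch diameter of a 60° external metric thread is sketched below; the record's internal-thread device adapts the same principle with a different geometry, so this simplified form (which ignores the lead-angle correction) is illustrative only:

```python
import math

def pitch_diameter_three_wire(M, wire_d, pitch):
    """Pitch diameter of a 60-degree metric external thread from the
    measurement over wires M, wire diameter wire_d, and thread pitch.
    Simplified classic formula: d2 = M - 3*dw + (sqrt(3)/2)*P."""
    return M - 3.0 * wire_d + (math.sqrt(3) / 2.0) * pitch
```

    For example, a measurement over wires of 20 mm with 1 mm wires on a 1.5 mm pitch thread gives a pitch diameter of about 18.299 mm.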

  15. Conversion of radius of curvature to power (and vice versa)

    NASA Astrophysics Data System (ADS)

    Wickenhagen, Sven; Endo, Kazumasa; Fuchs, Ulrike; Youngworth, Richard N.; Kiontke, Sven R.

    2015-09-01

    Manufacturing optical components relies on good measurements and specifications. One of the most precise measurements routinely required is form accuracy. In practice, form deviation from the ideal surface consists effectively of low-frequency errors; the form error most often amounts to no more than a few undulations across a surface. These types of errors are measured in a variety of ways, including interferometry and tactile methods such as profilometry, with the latter often employed for aspheres and general surface shapes such as freeforms. This paper provides a basis for a correct description of power and radius-of-curvature tolerances, including best practices and the calculation of the power value from the radius deviation (and vice versa) of the surface form. A consistent definition of the sagitta is presented, along with different manufacturing cases of interest to fabricators and designers. The results make clear how the definitions and results should be documented for all measurement setups. Relationships between power and radius of curvature are shown that allow specifying the preferred metric based on the final accuracy and measurement method. The results include all equations necessary for the conversion, giving optical designers and manufacturers a consistent and robust basis for decision-making. The paper also gives guidance on preferred methods for different scenarios of surface type, required accuracy, and metrology method employed.
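    The sagitta definition at the heart of the radius-to-power conversion can be sketched as follows. Expressing the resulting sag change in fringes (half-wavelengths over the test aperture) is one common convention; the paper's exact conventions and sign choices may differ:

```python
import math

def sag(R, r):
    """Sagitta of a spherical surface with radius of curvature R
    at semi-aperture r (sign follows the sign of R)."""
    return R - math.copysign(math.sqrt(R * R - r * r), R)

def delta_sag(R, dR, r):
    """Sag change at semi-aperture r caused by a radius deviation dR."""
    return sag(R + dR, r) - sag(R, r)
```

    For instance, a 100 mm radius surface at a 10 mm semi-aperture has a sag of roughly 0.5013 mm, and a small radius deviation maps to a sag (and hence power) change via delta_sag.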

  16. Nevada STORMS project: Measurement of mercury emissions from naturally enriched surfaces

    USGS Publications Warehouse

    Gustin, M.S.; Lindberg, S.; Marsik, F.; Casimir, A.; Ebinghaus, R.; Edwards, G.; Hubble-Fitzgerald, C.; Kemp, R.; Kock, H.; Leonard, T.; London, J.; Majewski, M.; Montecinos, C.; Owens, J.; Pilote, M.; Poissant, L.; Rasmussen, P.; Schaedlich, F.; Schneeberger, D.; Schroeder, W.; Sommar, J.; Turner, R.; Vette, A.; Wallschlaeger, D.; Xiao, Z.; Zhang, H.

    1999-01-01

    Diffuse anthropogenic and naturally mercury-enriched areas represent long-lived sources of elemental mercury to the atmosphere. The Nevada Study and Tests of the Release of Mercury From Soils (STORMS) project focused on the measurement of mercury emissions from a naturally enriched area. During the project, concurrent measurements of mercury fluxes from naturally mercury-enriched substrate were made September 1-4, 1997, using four micrometeorological methods and seven field flux chambers. Ambient air mercury concentrations ranged from 2 to nearly 200 ng m-3 indicating that the field site is a source of atmospheric mercury. The mean daytime mercury fluxes, during conditions of no precipitation, measured with field chambers were 50 to 360 ng m-2 h-1, and with the micrometeorological methods were 230 to 600 ng m-2 h-1. This wide range in mercury emission rates reflects differences in method experimental designs and local source strengths. Mercury fluxes measured by many field chambers were significantly different (p < 0.05) but linearly correlated. This indicates that field chambers responded similarly to environmental conditions, but differences in experimental design and site heterogeneity had a significant influence on the magnitude of mercury fluxes. Data developed during the field study demonstrated that field flux chambers are ideal for assessment of the physicochemical processes driving mercury flux and development of an understanding of the magnitude of the influence of individual factors on flux. In general, mean mercury fluxes measured with micrometeorological methods during daytime periods were nearly 3 times higher than mean fluxes measured with field flux chambers. Micrometeorological methods allow for derivation of a representative mercury flux occurring from an unconstrained system and provide an assessment of the actual magnitude and variability of fluxes occurring from an area. Copyright 1999 by the American Geophysical Union.

  17. Measurement of Angle Kappa Using Ultrasound Biomicroscopy and Corneal Topography

    PubMed Central

    Yeo, Joon Hyung; Moon, Nam Ju

    2017-01-01

    Purpose: To introduce a new convenient and accurate method to measure the angle kappa using ultrasound biomicroscopy (UBM) and corneal topography. Methods: Data from 42 eyes (13 males and 29 females) were analyzed in this study. The angle kappa was measured using Orbscan II and calculated with UBM and corneal topography. The angle kappa of the dominant eye was compared with measurements by Orbscan II. Results: The mean patient age was 36.4 ± 13.8 years. The average angle kappa measured by Orbscan II was 3.98° ± 1.12°, while the average angle kappa calculated with UBM and corneal topography was 3.19° ± 1.15°. The difference in angle kappa measured by the two methods was statistically significant (p < 0.001). The two methods showed good reliability (intraclass correlation coefficient, 0.671; p < 0.001). Bland-Altman plots were used to demonstrate the agreement between the two methods. Conclusions: We designed a new method using UBM and corneal topography to calculate the angle kappa. This method is convenient to use and allows for measurement of the angle kappa without an expensive device. PMID:28471103

  18. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parameter bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed in the multi-pollutant setting.
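    Of the correction methods listed, regression calibration is the simplest to sketch: fit E[X|W] in validation data relating the error-prone exposure W to the true exposure X, then substitute calibrated exposures in the main-study health model. An illustrative sketch assuming a linear calibration model (not any particular study's implementation):

```python
import numpy as np

def regression_calibration(W_val, X_val, W_main):
    """Fit a linear calibration E[X|W] on validation data, then replace the
    error-prone main-study exposure W with its calibrated prediction."""
    slope, intercept = np.polyfit(W_val, X_val, 1)  # coefficients, highest degree first
    return intercept + slope * np.asarray(W_main, float)
```

    The calibrated values would then enter the epidemiological regression in place of W; standard errors must also be adjusted for the estimated calibration parameters.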

  19. Development and Testing of an Integrated Rotating Dynamometer Based on Fiber Bragg Grating for Four-Component Cutting Force Measurement

    PubMed Central

    Liu, Mingyao; Bing, Junjun; Xiao, Li; Yun, Kang; Wan, Liang

    2018-01-01

    Cutting force measurement is of great importance in machining processes. Hence, various methods of measuring the cutting force have been proposed by many researchers. In this work, a novel integrated rotating dynamometer based on fiber Bragg grating (FBG) was designed, constructed, and tested to measure four-component cutting force. The dynamometer consists of FBGs that are pasted on the newly designed elastic structure which is then mounted on the rotating spindle. The elastic structure is designed as two mutual-perpendicular semi-octagonal rings. The signals of the FBGs are transmitted to FBG interrogator via fiber optic rotary joints and optical fiber, and the wavelength values are displayed on a computer. In order to determine the static and dynamic characteristics, many tests have been done. The results show that it is suitable for measuring cutting force. PMID:29670062

  20. Development and Testing of an Integrated Rotating Dynamometer Based on Fiber Bragg Grating for Four-Component Cutting Force Measurement.

    PubMed

    Liu, Mingyao; Bing, Junjun; Xiao, Li; Yun, Kang; Wan, Liang

    2018-04-18

    Cutting force measurement is of great importance in machining processes, and various methods of measuring the cutting force have been proposed. In this work, a novel integrated rotating dynamometer based on fiber Bragg gratings (FBGs) was designed, constructed, and tested to measure the four-component cutting force. The dynamometer consists of FBGs bonded to a newly designed elastic structure, which is mounted on the rotating spindle. The elastic structure is designed as two mutually perpendicular semi-octagonal rings. The signals of the FBGs are transmitted to an FBG interrogator via fiber-optic rotary joints and optical fiber, and the wavelength values are displayed on a computer. A series of tests was carried out to determine the static and dynamic characteristics. The results show that the dynamometer is suitable for measuring cutting force.

  1. An intelligent case-adjustment algorithm for the automated design of population-based quality auditing protocols.

    PubMed

    Advani, Aneel; Jones, Neil; Shahar, Yuval; Goldstein, Mary K; Musen, Mark A

    2004-01-01

    We develop a method and algorithm for deciding the optimal approach to creating quality-auditing protocols for guideline-based clinical performance measures. An important element of the audit protocol design problem is deciding which guideline elements to audit. Specifically, the problem is how and when to aggregate individual patient case-specific guideline elements into population-based quality measures. The key statistical issue involved is the trade-off between the increased reliability of more general population-based quality measures versus the increased validity of individually case-adjusted but more restricted measures done at a greater audit cost. Our intelligent algorithm for auditing protocol design is based on hierarchically modeling incrementally case-adjusted quality constraints. We select quality constraints to measure using an optimization criterion based on statistical generalizability coefficients. We present results of the approach from a deployed decision support system for a hypertension guideline.
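    One common form of the generalizability coefficient underlying such optimization criteria can be sketched for the simplest persons-crossed-with-items design (the paper's hierarchical criterion is more elaborate; this form and the numbers below are illustrative assumptions): reliability rises as error variance is averaged over more audited elements.

```python
def generalizability_coefficient(var_person, var_error, n_items):
    """Ep^2 for a persons-x-items design: ratio of universe-score
    variance to itself plus error variance averaged over n_items."""
    return var_person / (var_person + var_error / n_items)

# Aggregating more guideline elements into a measure raises reliability
g1 = generalizability_coefficient(1.0, 1.0, n_items=1)   # 0.5
g4 = generalizability_coefficient(1.0, 1.0, n_items=4)   # 0.8
```

    This is the reliability side of the trade-off: more aggregation raises the coefficient, while case adjustment (restricting the measure) trades some of that reliability for validity.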

  2. Development of a Scale to Measure Lifelong Learning

    ERIC Educational Resources Information Center

    Kirby, John R.; Knapper, Christopher; Lamon, Patrick; Egnatoff, William J.

    2010-01-01

    Primary objective: to develop a scale to measure students' disposition to engage in lifelong learning. Research design, methods and procedures: using items that reflected the components of lifelong learning, we constructed a 14-item scale that was completed by 309 university and vocational college students, who also completed a measure of deep and…

  3. Real Time Measurement of the Size Distribution of Particulate Matter by a Light Scattering Method

    ERIC Educational Resources Information Center

    Gravatt, C. C., Jr.

    1973-01-01

    Discusses a light scattering instrument designed to measure the size of particles in an air flow in approximately 25 microseconds and at a concentration as high as 10,000 particles/cc. Indicates that the measurement can be made for all particles, independent of their index of refraction. (CC)

  4. Self-Report Measure of Psychological Abuse of Older Adults

    ERIC Educational Resources Information Center

    Conrad, Kendon J.; Iris, Madelyn; Ridings, John W.; Langley, Kate; Anetzberger, Georgia J.

    2011-01-01

    Purpose: This study tested key psychometric properties of the Older Adult Psychological Abuse Measure (OAPAM), one self-report scale of the Older Adult Mistreatment Assessment (OAMA). Design and Methods: Items and theory were developed in a prior concept mapping study. Subsequently, the measures were administered to 226 substantiated clients by 22…

  5. Effects of Various Architectural Parameters on Six Room Acoustical Measures in Auditoria.

    NASA Astrophysics Data System (ADS)

    Chiang, Wei-Hwa

    The effects of architectural parameters on six room acoustical measures were investigated by means of correlation, factor, and multiple regression analyses based on data taken in twenty halls. Architectural parameters were used to estimate acoustical measures taken at individual locations within each room, as well as the averages and standard deviations of all measured values in the rooms. The six acoustical measures were Early Decay Time (EDT10), Clarity Index (C80), Overall Level (G), Bass Ratio based on Early Decay Time (BR(EDT)), Treble Ratio based on Early Decay Time (TR(EDT)), and Early Inter-aural Cross Correlation (IACC80). A comprehensive method of quantifying various architectural characteristics of rooms was developed to define a large number of architectural parameters hypothesized to affect the acoustical measurements made in the rooms. This study quantitatively confirmed many of the principles used in the design of concert halls and auditoria. Three groups of architectural parameters, such as those associated with the depth of diffusing surfaces, were significantly correlated with the hall standard deviations of most of the acoustical measures. Significant differences in the statistical relations between architectural parameters and receiver-specific acoustical measures were found between a group of music halls and a group of lecture halls. For example, architectural parameters such as the relative distance from the receiver to the overhead ceiling increased the percentage of the variance of the acoustical measures explained by Barron's revised theory from approximately 70% to 80%, but only when data were taken in the group of music halls. This study revealed the major architectural parameters that have strong relations with individual acoustical measures, forming the basis for a more quantitative method for advancing the theoretical design of concert halls and other auditoria. The results provide designers with the information to predict acoustical measures in buildings at very early stages of the design process without using computer models or scale models.

  6. Design of internal screw thread measuring device based on the Three-Line method principle

    NASA Astrophysics Data System (ADS)

    Hu, Dachao; Chen, Jianguo

    2010-08-01

    In accordance with the three-line (three-wire) method principle, this paper analyzes the correlations among the main parameters of an internal screw thread and then presents a device designed to measure those parameters. Internal thread parameters such as the pitch diameter, thread angle, and pitch of common, trapezoidal, and buttress screw threads were obtained through calculation and measurement. Practical applications have shown that the device is convenient to use and that its measurements have high accuracy. Meanwhile, the application for a patent on the invention has been accepted by the Patent Office (filing number: 200710044081.5).
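    The classic three-wire relation that underlies such devices recovers the pitch diameter from a measurement taken over wires resting in the thread grooves. The sketch below states the standard external-thread form (the paper's internal-thread geometry differs in detail), and the M12x1.75 numbers are purely illustrative.

```python
import math

def pitch_diameter_three_wire(M, wire_d, pitch, angle_deg=60.0):
    """Pitch diameter from a measurement M over three wires.

    Standard external-thread relation (lead-angle correction neglected):
        d2 = M - w*(1 + 1/sin(a/2)) + (p/2)/tan(a/2)
    which reduces to d2 = M - 3w + 0.866p for 60-degree threads.
    """
    a = math.radians(angle_deg)
    return (M - wire_d * (1.0 + 1.0 / math.sin(a / 2.0))
            + (pitch / 2.0) / math.tan(a / 2.0))

# M12x1.75 example with the "best" wire size w = p/(2*cos(a/2)) ~ 0.577*p
d2 = pitch_diameter_three_wire(M=12.379, wire_d=1.0104, pitch=1.75)
```

    For the example above, d2 comes out near the nominal 10.863 mm pitch diameter of an M12x1.75 thread.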

  7. Design considerations for case series models with exposure onset measurement error.

    PubMed

    Mohammed, Sandra M; Dalrymple, Lorien S; Sentürk, Damla; Nguyen, Danh V

    2013-02-28

    The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared with the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model. Copyright © 2012 John Wiley & Sons, Ltd.

  8. Optimal and Miniaturized Strongly Coupled Magnetic Resonant Systems

    NASA Astrophysics Data System (ADS)

    Hu, Hao

    Wireless power transfer (WPT) technologies for communication and recharging devices have recently attracted significant research attention. Conventional WPT systems based either on far-field or near-field coupling cannot simultaneously provide high efficiency and long transfer range. The Strongly Coupled Magnetic Resonance (SCMR) method was introduced recently, and it offers the possibility of transferring power with high efficiency over longer distances. Previous SCMR research has focused only on improving efficiency and range through different methods; the study of optimal and miniaturized designs has been limited. In addition, no multiband or broadband SCMR WPT systems have been developed, and traditional SCMR systems exhibit narrowband efficiency, thereby imposing strict limitations on the simultaneous wireless transmission of information and power, which is important for battery-less sensors. Therefore, new SCMR systems that are optimally designed and miniaturized will significantly enhance various technologies in many applications. Optimal and miniaturized SCMR systems are studied here. First, analytical models of the Conformal SCMR (CSCMR) system and a thorough analysis and design methodology are presented. This analysis leads to the identification of the optimal design parameters and predicts the performance of the designed CSCMR system. Second, optimal multiband and broadband CSCMR systems are designed. Two-band, three-band, and four-band CSCMR systems are designed and validated using simulations and measurements. Novel broadband CSCMR systems are also analyzed, designed, simulated, and measured. The proposed broadband CSCMR system achieved a bandwidth more than 7 times that of the traditional SCMR system at the same frequency. Miniaturization methods for SCMR systems are also explored; specifically, printable CSCMR with large capacitors, novel topologies (including meandered, split-ring resonator, and spiral topologies), and 3-D structures lower the operating frequency of SCMR systems, thereby reducing their size. Finally, SCMR systems are discussed and designed for various applications, such as biomedical devices and the simultaneous powering of multiple devices.

  9. Video Game Learning Dynamics: Actionable Measures of Multidimensional Learning Trajectories

    ERIC Educational Resources Information Center

    Reese, Debbie Denise; Tabachnick, Barbara G.; Kosko, Robert E.

    2015-01-01

    Valid, accessible, reusable methods for instructional video game design and embedded assessment can provide actionable information enhancing individual and collective achievement. Cyberlearning through game-based, metaphor-enhanced learning objects (CyGaMEs) design and embedded assessment quantify player behavior to study knowledge discovery and…

  10. ERGONOMICS ABSTRACTS 48983-49619.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

    THE LITERATURE OF ERGONOMICS, OR BIOTECHNOLOGY, IS CLASSIFIED INTO 15 AREAS--METHODS, SYSTEMS OF MEN AND MACHINES, VISUAL AND AUDITORY AND OTHER INPUTS AND PROCESSES, INPUT CHANNELS, BODY MEASUREMENTS, DESIGN OF CONTROLS AND INTEGRATION WITH DISPLAYS, LAYOUT OF PANELS AND CONSOLES, DESIGN OF WORK SPACE, CLOTHING AND PERSONAL EQUIPMENT, SPECIAL…

  11. Physiologic measures of sexual function in women: a review.

    PubMed

    Woodard, Terri L; Diamond, Michael P

    2009-07-01

    To review and describe physiologic measures of assessing sexual function in women. Literature review. Studies that use instruments designed to measure female sexual function. Women participating in studies of female sexual function. Various instruments that measure physiologic features of female sexual function. Appraisal of the various instruments, including their advantages and disadvantages. Many unique physiologic methods of evaluating female sexual function have been developed during the past four decades. Each method has its benefits and limitations. Many physiologic methods exist, but most are not well validated. In addition, most physiologic measures have not been successfully correlated with subjective measures of sexual arousal. Furthermore, given the complex nature of the sexual response in women, physiologic measures should be considered in the context of other data, including the history, physical examination, and validated questionnaires. Nonetheless, the existence of appropriate physiologic measures is vital to our understanding of female sexual function and dysfunction.

  12. Measurement Matrix Design for Phase Retrieval Based on Mutual Information

    NASA Astrophysics Data System (ADS)

    Shlezinger, Nir; Dabora, Ron; Eldar, Yonina C.

    2018-01-01

    In phase retrieval problems, a signal of interest (SOI) is reconstructed based on the magnitude of a linear transformation of the SOI observed with additive noise. The linear transform is typically referred to as a measurement matrix. Many works on phase retrieval assume that the measurement matrix is a random Gaussian matrix, which, in the noiseless scenario with sufficiently many measurements, guarantees invertibility of the transformation between the SOI and the observations, up to an inherent phase ambiguity. However, in many practical applications, the measurement matrix corresponds to an underlying physical setup, and is therefore deterministic, possibly with structural constraints. In this work we study the design of deterministic measurement matrices, based on maximizing the mutual information between the SOI and the observations. We characterize necessary conditions for the optimality of a measurement matrix, and analytically obtain the optimal matrix in the low signal-to-noise ratio regime. Practical methods for designing general measurement matrices and masked Fourier measurements are proposed. Simulation tests demonstrate the performance gain achieved by the proposed techniques compared to random Gaussian measurements for various phase recovery algorithms.
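    The observation model and the phase ambiguity described above are easy to state concretely. A minimal sketch of noisy masked Fourier magnitude measurements follows; the dimensions, mask alphabet, and noise level are illustrative assumptions, not the paper's optimized design.

```python
import numpy as np

rng = np.random.default_rng(1)
n, num_masks = 64, 4

x = rng.normal(size=n) + 1j * rng.normal(size=n)   # signal of interest (SOI)

# Masked Fourier measurement matrix: each block is F @ diag(mask_k)
masks = rng.choice(np.array([1, -1, 1j, -1j]), size=(num_masks, n))
F = np.fft.fft(np.eye(n))
A = np.vstack([F @ np.diag(m) for m in masks])      # shape (num_masks*n, n)

# Only the magnitude of the linear transform is observed, plus noise;
# the phase of A @ x is lost
y = np.abs(A @ x) + 0.01 * rng.normal(size=num_masks * n)
```

    The inherent ambiguity is visible directly in this model: x and x*exp(j*phi) produce identical noiseless magnitudes, so recovery is only possible up to a global phase.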

  13. Design optimization of piezoresistive cantilevers for force sensing in air and water

    PubMed Central

    Doll, Joseph C.; Park, Sung-Jin; Pruitt, Beth L.

    2009-01-01

    Piezoresistive cantilevers fabricated from doped silicon or metal films are commonly used for force, topography, and chemical sensing at the micro- and macroscales. Proper design is required to optimize the achievable resolution by maximizing sensitivity while simultaneously minimizing the integrated noise over the bandwidth of interest. Existing analytical design methods are insufficient for modeling complex dopant profiles, design constraints, and nonlinear phenomena such as damping in fluid. Here we present an optimization method based on an analytical piezoresistive cantilever model. We use an existing iterative optimizer to minimize a performance goal, such as the minimum detectable force. The design tool is available as open source software. Optimal cantilever design and performance are found to strongly depend on the measurement bandwidth and the constraints applied. We discuss results for silicon piezoresistors fabricated by epitaxy and diffusion, but the method can be applied to any dopant profile or material that can be modeled in a similar fashion, or extended to other microelectromechanical systems. PMID:19865512

  14. An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts

    NASA Astrophysics Data System (ADS)

    Yan, Kun; Cheng, Gengdong

    2018-03-01

    For structures subject to impact loads, reducing residual vibration becomes increasingly important as machines become faster and lighter. An efficient sensitivity analysis of residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm, which reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be greatly simplified with the Lyapunov equation. Several sensitivity analysis approaches for this performance index were previously developed based on the assumption that the initial excitations of the residual vibration were given and independent of the structural design. Since the excitations resulting from the impact load often depend on the structural design, this paper proposes a new efficient sensitivity analysis method for the residual vibration of structures subject to impacts that accounts for this dependence. The new method is developed by combining two existing methods and using the adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of the initial excitations on the structural design variables may strongly affect the accuracy of the sensitivities.
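    The Lyapunov-equation simplification mentioned above can be sketched directly: for a stable linear system x' = Ax left in state x0 by the impact, the integrated quadratic index reduces to a single matrix solve. The damped-oscillator numbers below are illustrative assumptions, not the paper's examples.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def residual_vibration_index(A, Q, x0):
    """Integrated quadratic index J = int_0^inf x(t)^T Q x(t) dt = x0^T P x0,
    where P solves the Lyapunov equation A^T P + P A = -Q (A stable)."""
    P = solve_continuous_lyapunov(A.T, -Q)
    return float(x0 @ P @ x0)

# 1-DOF damped oscillator: m=1, k=1, c=0.4, state = [position, velocity]
A = np.array([[0.0, 1.0], [-1.0, -0.4]])
Q = np.eye(2)
x0 = np.array([0.0, 1.0])   # impact imparts unit initial velocity
J = residual_vibration_index(A, Q, x0)
```

    The sensitivity analyses discussed in the abstract differentiate this J with respect to design variables, including the dependence of x0 on the design.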

  15. Thermal-Mechanical Noise Based CMUT Characterization and Sensing

    PubMed Central

    Gurun, Gokce; Hochman, Michael; Hasler, Paul; Degertekin, F. Levent

    2012-01-01

    When capacitive micromachined ultrasonic transducers (CMUTs) are monolithically integrated with custom-designed low-noise electronics, the output noise of the system can be dominated by the CMUT thermal-mechanical noise both in air and in immersion even for devices with low capacitance. Since the thermal-mechanical noise can be related to the electrical admittance of the CMUTs, this provides an effective means of device characterization. This approach yields a novel method to test the functionality and uniformity of CMUT arrays and the integrated electronics where a direct connection to CMUT array element terminals is not available. These measurements can be performed in air at the wafer level, suitable for batch manufacturing and testing. We demonstrate this method on the elements of an 800-μm diameter CMUT-on-CMOS array designed for intravascular imaging in the 10-20 MHz range. Noise measurements in air show the expected resonance behavior and spring softening effects. Noise measurements in immersion for the same array provide useful information on both the acoustic cross talk and radiation properties of the CMUT array elements. The good agreement between a CMUT model based on finite difference and boundary element method and the noise measurements validates the model and indicates that the output noise is indeed dominated by thermal-mechanical noise. The measurement method can be exploited to implement CMUT based passive sensors to measure immersion medium properties, or other parameters affecting the electro-mechanics of the CMUT structure. PMID:22718877

  16. Thermal-mechanical-noise-based CMUT characterization and sensing.

    PubMed

    Gurun, Gokce; Hochman, Michael; Hasler, Paul; Degertekin, F Levent

    2012-06-01

    When capacitive micromachined ultrasonic transducers (CMUTs) are monolithically integrated with custom-designed low-noise electronics, the output noise of the system can be dominated by the CMUT thermal-mechanical noise both in air and in immersion even for devices with low capacitance. Because the thermal-mechanical noise can be related to the electrical admittance of the CMUTs, this provides an effective means of device characterization. This approach yields a novel method to test the functionality and uniformity of CMUT arrays and the integrated electronics when a direct connection to CMUT array element terminals is not available. Because these measurements can be performed in air at the wafer level, the approach is suitable for batch manufacturing and testing. We demonstrate this method on the elements of an 800-μm-diameter CMUT-on-CMOS array designed for intravascular imaging in the 10 to 20 MHz range. Noise measurements in air show the expected resonance behavior and spring softening effects. Noise measurements in immersion for the same array provide useful information on both the acoustic cross talk and radiation properties of the CMUT array elements. The good agreement between a CMUT model based on finite difference and boundary element methods and the noise measurements validates the model and indicates that the output noise is indeed dominated by thermal-mechanical noise. The measurement method can be exploited to implement CMUT-based passive sensors to measure immersion medium properties, or other parameters affecting the electro-mechanics of the CMUT structure.

  17. Topology Optimization using the Level Set and eXtended Finite Element Methods: Theory and Applications

    NASA Astrophysics Data System (ADS)

    Villanueva Perez, Carlos Hernan

    Computational design optimization provides designers with automated techniques to develop novel and non-intuitive optimal designs. Topology optimization is a design optimization technique that allows for the evolution of a broad variety of geometries in the optimization process. Traditional density-based topology optimization methods often lack a sufficient resolution of the geometry and physical response, which prevents direct use of the optimized design in manufacturing and the accurate modeling of the physical response of boundary conditions. The goal of this thesis is to introduce a unified topology optimization framework that uses the Level Set Method (LSM) to describe the design geometry and the eXtended Finite Element Method (XFEM) to solve the governing equations and measure the performance of the design. The methodology is presented as an alternative to density-based optimization approaches, and is able to accommodate a broad range of engineering design problems. The framework presents state-of-the-art methods for immersed boundary techniques to stabilize the systems of equations and enforce the boundary conditions, and is studied with applications in 2D and 3D linear elastic structures, incompressible flow, and energy and species transport problems to test the robustness and the characteristics of the method. The framework is compared against density-based topology optimization approaches with regard to convergence, performance, and the capability to manufacture the designs. Furthermore, the ability to control the shape of the design to operate within manufacturing constraints is developed and studied. The analysis capability of the framework is validated quantitatively through comparison against previous benchmark studies, and qualitatively through its application to topology optimization problems. The design optimization problems converged to intuitive designs that agree well with the results of previous 2D and density-based studies.

  18. Design, fabrication, and measurement of reflective metasurface for orbital angular momentum vortex wave in radio frequency domain

    NASA Astrophysics Data System (ADS)

    Yu, Shixing; Li, Long; Shi, Guangming; Zhu, Cheng; Zhou, Xiaoxiao; Shi, Yan

    2016-03-01

    In this paper, a reflective metasurface is designed, fabricated, and experimentally demonstrated to generate an orbital angular momentum (OAM) vortex wave in radio frequency domain. Theoretical formula of phase-shift distribution is deduced and used to design the metasurface producing vortex radio waves. The prototype of a practical configuration is designed, fabricated, and measured to validate the theoretical analysis at 5.8 GHz. The simulated and experimental results verify that the vortex waves with different OAM mode numbers can be flexibly generated by using sub-wavelength reflective metasurfaces. The proposed method and metasurface pave a way to generate the OAM vortex waves for radio and microwave wireless communication applications.

  19. Optimization techniques applied to passive measures for in-orbit spacecraft survivability

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.; Price, D. Marvin

    1987-01-01

    Optimization Techniques Applied to Passive Measures for In-Orbit Spacecraft Survivability is a six-month study designed to evaluate the effectiveness of the geometric programming (GP) optimization technique in determining the optimal design of a meteoroid and space debris protection system for the Space Station Core Module configuration. Geometric programming was found to be superior to other methods in that it provided maximum protection from impacts at the lowest weight and cost.

  20. Active retroreflector to measure the rotational orientation in conjunction with a laser tracker

    NASA Astrophysics Data System (ADS)

    Hofherr, O.; Wachten, C.; Müller, C.; Reinecke, H.

    2012-10-01

    High-precision optical non-contact position measurement is a key technology in modern engineering. Laser trackers (LTs) can accurately determine the x-y-z coordinates of passive retroreflectors. Next-generation systems answer the additional need to measure an object's rotational orientation (pitch, yaw, roll). These devices are based on photogrammetry or on enhanced retroreflectors. However, photogrammetry relies on camera systems and time-consuming image processing, while enhanced retroreflectors analyze the LT's beam but are restricted in roll angle measurements. Here we present an integrated laser-based method to evaluate all six degrees of freedom. An active retroreflector directly analyzes its orientation to the LT's beam path by outcoupling laser light onto detectors. A proof-of-concept prototype has been designed with a specified measuring range of 360° for roll angle measurements and ±15° for the pitch and yaw angles, respectively. The prototype's optical design is inspired by a cat's-eye retroreflector. First results are promising, and further improvements are under development. We anticipate that our method will facilitate simple and cost-effective six-degrees-of-freedom measurements. Furthermore, for industrial applications, extensive customization is possible, e.g., adaptation of the measuring range, optimization of accuracy, and further system miniaturization.

  1. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
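    The core correction described here is a one-line rescaling: the naive slope is divided by the reliability ratio, which a reliability study with two replicate measurements estimates via their covariance (the replicates share the true value and carry independent errors). A minimal sketch with simulated data; all parameters are illustrative assumptions, not the article's software tools.

```python
import numpy as np

rng = np.random.default_rng(2)

# Main study: risk factor X measured with error as W; outcome y
n, beta, sigma_u = 20000, 2.0, 1.0
x = rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, sigma_u, n)
y = beta * x + rng.normal(0.0, 0.5, n)

# Reliability study: two replicate measurements of the same true values
x_r = rng.normal(0.0, 1.0, 2000)
w1 = x_r + rng.normal(0.0, sigma_u, 2000)
w2 = x_r + rng.normal(0.0, sigma_u, 2000)

naive = np.cov(w, y, bias=True)[0, 1] / np.var(w)   # attenuated slope
lam = np.cov(w1, w2, bias=True)[0, 1] / np.var(w)   # reliability ratio
corrected = naive / lam                             # approaches true beta
```

    With these parameters the reliability ratio is about 0.5, so the naive slope of roughly 1.0 is corrected back toward the true value of 2.0.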

  2. S/Ka Dichroic Plate with Rounded Corners for NASA's 34-m Beam-Waveguide Antenna

    NASA Astrophysics Data System (ADS)

    Veruttipong, W.; Khayatian, B.; Imbriale, W.

    2016-02-01

    An S-/Ka-band frequency selective surface (FSS) or a dichroic plate is designed, manufactured, and tested for use in NASA's Deep Space Network (DSN) 34-m beam-waveguide (BWG) antennas. Due to its large size, the proposed dichroic incorporates a new design feature: waveguides with rounded corners to cut cost and allow ease of manufacturing the plate. The dichroic is designed using an analysis that combines the finite-element method (FEM) for arbitrarily shaped guides with the method of moments and Floquet mode theory for periodic structures. The software was verified by comparison with previously measured and computed dichroic plates. The large plate was manufactured with end-mill machining. The RF performance was measured and is in excellent agreement with the analytical results. The dichroic has been successfully installed and is operational at DSS-24, DSS-34, and DSS-54.

  3. Competence assessment for vocational school students based on business and industry chamber to improve graduate entrepreneurship

    NASA Astrophysics Data System (ADS)

    Samsudi, Widodo, Joko; Margunani

    2017-03-01

    A vocational school's skill competence assessment is an important phase in completing the learning process. This phase should be designed and implemented not only to measure attainment of the learning objectives but also to provide entrepreneurship experience for the graduates. Therefore, competence assessment should be carried out comprehensively in cooperation with the Business and Industry Chamber. The aspects of skill competence assessment, covering materials, methods, strategies, tools, and assessors, need to be designed and optimized jointly by the vocational school and the Business and Industry Chamber, so as to measure the learning objectives and produce graduates with improved entrepreneurship. A 4M-S strategy in students' skill competence assessment can ensure that the materials, methods, tools, and assessors are well designed and implemented in both institutions, the vocational school and the Business and Industry Chamber, to improve graduate entrepreneurship.

  4. Improving Quality of Shoe Soles Product using Six Sigma

    NASA Astrophysics Data System (ADS)

    Jesslyn Wijaya, Athalia; Trusaji, Wildan; Akbar, Muhammad; Ma’ruf, Anas; Irianto, Dradjad

    2018-03-01

    A manufacturer in Bandung produces various rubber-based products, e.g., trim, rice rollers, and shoe soles. After entering the shoe-sole market, the manufacturer encountered customers with tight quality control. Based on past data, the defect level of this product was 18.08%, which cost the manufacturer time and money. A quality improvement effort was carried out using the six sigma method, which comprises the define, measure, analyse, improve, and control (DMAIC) phases. In the define phase, the problem and its scope were defined; the Delphi method was also used in this phase to identify critical factors. In the measure phase, the stability of the existing process and its sigma quality level were measured. A fishbone diagram and failure mode and effects analysis (FMEA) were used in the analyse phase to find the root causes and prioritize issues. The improve phase consisted of designing alternative improvement strategies using the 5W1H method. The improvement efforts identified were: (i) modifying the design of the hanging rack, (ii) creating a Pantone colour book and check sheets, (iii) providing a pedestrian line in the compound department, (iv) buying a stopwatch, and (v) modifying the shoe-sole dies. Control strategies for continuous improvement, such as SOPs and a reward-and-punishment system, were also proposed.
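    The sigma quality level computed in the measure phase follows a standard convention: invert the normal CDF at the process yield and add the customary 1.5-sigma long-term shift. A sketch using the reported 18.08% defect rate; the shift convention is an assumption of typical six sigma practice, not something stated in the paper.

```python
from scipy.stats import norm

def sigma_level(defect_rate, shift=1.5):
    """Short-term sigma level from a long-term defect rate, using the
    conventional 1.5-sigma shift: sigma = Phi^-1(1 - defect_rate) + 1.5."""
    return norm.ppf(1.0 - defect_rate) + shift

level = sigma_level(0.1808)   # roughly 2.4 sigma
```

    A 18.08% defect rate thus corresponds to a process well below the six sigma target of about 3.4 defects per million opportunities.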

  5. Integration of optical measurement methods with flight parameter measurement systems

    NASA Astrophysics Data System (ADS)

    Kopecki, Grzegorz; Rzucidlo, Pawel

    2016-05-01

    During the AIM (advanced in-flight measurement techniques) and AIM2 projects, innovative measurement techniques were developed. The purpose of the AIM project was to develop optical measurement techniques dedicated to flight tests. Such methods give information about the deformation of aircraft elements, thermal loads, pressure distribution, etc. In AIM2 the development of optical methods for flight testing was continued. In particular, this project aimed at developing methods that could be easily applied to flight tests in an industrial setting. Another equally important task was to guarantee the synchronization of the classical measuring system with the cameras. The PW-6U glider used in flight tests was provided by the Rzeszów University of Technology. The glider had all the equipment necessary for testing the IPCT (image pattern correlation technique) and IRT (infrared thermometry) methods. Additionally, equipment for the measurement, registration and analysis of typical flight parameters was developed. This article describes the designed system and presents its application during flight tests. The results obtained in flight tests also show certain limitations of the IRT method as applied.

  6. The Role of Attention in Somatosensory Processing: A Multi-Trait, Multi-Method Analysis

    ERIC Educational Resources Information Center

    Wodka, Ericka L.; Puts, Nicolaas A. J.; Mahone, E. Mark; Edden, Richard A. E.; Tommerdahl, Mark; Mostofsky, Stewart H.

    2016-01-01

    Sensory processing abnormalities in autism have largely been described by parent report. This study used a multi-method (parent-report and measurement), multi-trait (tactile sensitivity and attention) design to evaluate somatosensory processing in ASD. Results showed multiple significant within-method (e.g., parent report of different…

  7. A Comparison of Methods for Transforming Sentences into Test Questions for Instructional Materials. Technical Report #1.

    ERIC Educational Resources Information Center

    Roid, Gale; And Others

    Several measurement theorists have convincingly argued that methods of writing test questions, particularly for criterion-referenced tests, should be based on operationally defined rules. This study was designed to examine and further refine a method for objectively generating multiple-choice questions for prose instructional materials. Important…

  8. Design of measuring system for wire diameter based on sub-pixel edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yudong; Zhou, Wang

    2016-09-01

    The light projection method is often used in wire-diameter measuring systems; it has a relatively simple structure and low cost, but its measuring accuracy is limited by the pixel size of the CCD. Using a CCD with a smaller pixel size can improve the measuring accuracy, but increases cost and manufacturing difficulty. In this paper, through comparative analysis of a variety of sub-pixel edge detection algorithms, a polynomial fitting method is applied to data processing in the wire-diameter measuring system, to improve measuring accuracy and enhance noise immunity. In the system design, a light projection method with an orthogonal structure is used for the optical detection part, which effectively reduces the error caused by wire jitter during measurement. For the electrical part, an ARM Cortex-M4 microprocessor is used as the core of the circuit module; it drives the dual-channel linear CCD and also performs sampling, processing and storage of the CCD video signal. In addition, the ARM microprocessor can run the whole wire-diameter measuring system at high speed without additional chips. The experimental results show that a sub-pixel edge detection algorithm based on polynomial fitting can compensate for the limit imposed by the pixel size and significantly improve the precision of the wire-diameter measuring system, without increasing the hardware complexity of the system.
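    The core idea of the polynomial-fitting sub-pixel step can be sketched in one dimension: fit a quadratic to the intensity gradient around its pixel-level peak and take the parabola's vertex as the edge position. This is a generic, hedged illustration, not the paper's exact algorithm:

```python
import numpy as np

def subpixel_edge(profile):
    # Locate an edge at sub-pixel precision: find the pixel-level peak of
    # the intensity gradient, then fit a quadratic through the peak and
    # its two neighbours and return the parabola's vertex.
    g = np.abs(np.gradient(profile.astype(float)))
    k = int(np.argmax(g))                      # coarse edge position (pixel)
    x = np.array([k - 1, k, k + 1])
    a, b, _ = np.polyfit(x, g[x], 2)
    return -b / (2.0 * a)                      # vertex of the fitted parabola

# Synthetic blurred edge with its true position at x = 10.3:
x = np.arange(20)
profile = 1.0 / (1.0 + np.exp(-2.0 * (x - 10.3)))
```

    On this synthetic profile the estimate lands within a small fraction of a pixel of the true edge, which is the kind of accuracy gain sub-pixel processing provides over whole-pixel detection.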

  9. Enhanced NDE systems

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The goal of this contractual effort was to evaluate the Langley narrow-band ultrasonic debond detection method for a factory use configuration. Successful accomplishment requires establishing the robustness of the method, and enhancing it if necessary. It is also desirable to strive for simplicity of implementation, such as attachment to in-place scanning devices planned for nondestructive evaluation (NDE) measurements. The contract was established with three phases for the ultrasonic work: (1) establish the method and robustness of the ultrasonic method; (2) follow up on any questions which arise with respect to the method or its implementation and produce a Phase A design; and (3) fabricate and test the Phase A design. This is a report on Phase 1.

  10. Stage acoustics for musicians: A multidimensional approach using 3D ambisonic technology

    NASA Astrophysics Data System (ADS)

    Guthrie, Anne

    In this research, a method was outlined and tested for the use of 3D Ambisonic technology to inform stage acoustics research and design. Stage acoustics for musicians as a field has yet to benefit from recent advancements in auralization and spatial acoustic analysis. This research attempts to address common issues in stage acoustics: subjective requirements for performers in relation to feelings of support, quality of sound, and ease of ensemble playing in relation to measurable, objective characteristics that can be used to design better stage enclosures. While these issues have been addressed in previous work, this research attempts to use technological advancements to improve the resolution and realism of the testing and analysis procedures. Advancements include measurement of spatial impulse responses using a spherical microphone array, higher-order ambisonic encoding and playback for real-time performer auralization, high-resolution spatial beamforming for analysis of onstage impulse responses, and multidimensional scaling procedures to determine subjective musician preferences. The methodology for implementing these technologies into stage acoustics research is outlined in this document and initial observations regarding implications for stage enclosure design are proposed. This research provides a robust method for measuring and analyzing performer experiences on multiple stages without the costly and time-intensive process of physically surveying orchestras on different stages, with increased repeatability while maintaining a high level of immersive realism and spatial resolution. Along with implications for physical design, this method provides possibilities for virtual teaching and rehearsal, parametric modeling and co-located performance.

  11. Design and numerical simulation of novel giant magnetostrictive ultrasonic transducer

    NASA Astrophysics Data System (ADS)

    Li, Pengyang; Liu, Qiang; Li, Shujuan; Wang, Quandai; Zhang, Dongya; Li, Yan

    This paper presents a design method for a novel giant magnetostrictive ultrasonic transducer used in incremental sheet-metal forming. The frequency equations of the ultrasonic vibrator were derived, and their correctness was verified through modal and harmonic response analysis using the finite element method (FEM) in ANSYS. In addition, the magnetic field of the vibrator system was designed and verified in ANSYS. Finally, frequency tests based on impedance response analysis and amplitude measurements based on a laser displacement sensor were performed on the prototype. The results confirmed the appropriateness of the transducer design, laying the foundation for a low mechanical quality factor and satisfactory amplitude.

  12. Chapter 11: Sample Design Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Khawaja, M. Sami; Rushton, Josh

    Evaluating an energy efficiency program requires assessing the total energy and demand saved through all of the energy efficiency measures provided by the program. For large programs, the direct assessment of savings for each participant would be cost-prohibitive. Even if a program is small enough that a full census could be managed, such an undertaking would almost always be an inefficient use of evaluation resources. The bulk of this chapter describes methods for minimizing and quantifying sampling error. Measurement error and regression error are discussed in various contexts in other chapters.
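    The trade-off between sample size and sampling error that the chapter addresses can be illustrated with the standard initial sample-size formula for estimating a mean at a given relative precision. This is a textbook sketch; finite-population correction and the chapter's own notation are omitted:

```python
import math

def initial_sample_size(cv, rel_precision=0.10, z=1.645):
    # n0 = (z * cv / e)^2: sample size needed to estimate a mean to within
    # relative precision e at the confidence level implied by z
    # (z = 1.645 corresponds to the common 90/10 criterion).
    return math.ceil((z * cv / rel_precision) ** 2)

# With an assumed coefficient of variation of 0.5, the 90/10 criterion
# requires 68 sample points; halving the error bound roughly quadruples it:
n_90_10 = initial_sample_size(cv=0.5)
n_90_05 = initial_sample_size(cv=0.5, rel_precision=0.05)
```

    The quadratic dependence on precision is why a full census is rarely an efficient use of evaluation resources: modest precision targets are met with comparatively small samples.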

  13. An Efficient and Robust Singular Value Method for Star Pattern Recognition and Attitude Determination

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Kim, Hye-Young; Junkins, John L.

    2003-01-01

    A new star pattern recognition method is developed using singular value decomposition of a measured unit column vector matrix in a measurement frame and the corresponding cataloged vector matrix in a reference frame. It is shown that singular values and right singular vectors are invariant with respect to coordinate transformation and robust under uncertainty. One advantage of singular value comparison is that a pairing process for individual measured and cataloged stars is not necessary, and the attitude estimation and pattern recognition process are not separated. An associated method for mission catalog design is introduced and simulation results are presented.
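    The invariance the method exploits, that singular values are unchanged by a rotation of the coordinate frame, is easy to verify numerically. A minimal sketch in which random unit vectors stand in for measured and cataloged stars:

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns of V are unit "star" vectors expressed in the reference frame.
V = rng.normal(size=(3, 5))
V /= np.linalg.norm(V, axis=0)

# A random rotation (orthonormal Q factor from a QR decomposition) plays
# the role of the unknown attitude between reference and measurement frames.
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))

s_catalog = np.linalg.svd(V, compute_uv=False)       # reference frame
s_measured = np.linalg.svd(R @ V, compute_uv=False)  # rotated frame
```

    Because the two sets of singular values agree, star patterns can be compared without first pairing individual measured and cataloged stars, which is the advantage noted in the abstract.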

  14. Joint design of large-tip-angle parallel RF pulses and blipped gradient trajectories.

    PubMed

    Cao, Zhipeng; Donahue, Manus J; Ma, Jun; Grissom, William A

    2016-03-01

    To design multichannel large-tip-angle kT-points and spokes radiofrequency (RF) pulses and gradient waveforms for transmit field inhomogeneity compensation in high field magnetic resonance imaging. An algorithm to design RF subpulse weights and gradient blip areas is proposed to minimize a magnitude least-squares cost function that measures the difference between realized and desired state parameters in the spin domain, and penalizes integrated RF power. The minimization problem is solved iteratively with interleaved target phase updates, RF subpulse weights updates using the conjugate gradient method with optimal control-based derivatives, and gradient blip area updates using the conjugate gradient method. Two-channel parallel transmit simulations and experiments were conducted in phantoms and human subjects at 7 T to demonstrate the method and compare it to small-tip-angle-designed pulses and circularly polarized excitations. The proposed algorithm designed more homogeneous and accurate 180° inversion and refocusing pulses than other methods. It also designed large-tip-angle pulses on multiple frequency bands with independent and joint phase relaxation. Pulses designed by the method improved specificity and contrast-to-noise ratio in a finger-tapping spin echo blood oxygen level dependent functional magnetic resonance imaging study, compared with circularly polarized mode refocusing. A joint RF and gradient waveform design algorithm was proposed and validated to improve large-tip-angle inversion and refocusing at ultrahigh field. © 2015 Wiley Periodicals, Inc.

  15. [The research in a foot pressure measuring system based on LabVIEW].

    PubMed

    Li, Wei; Qiu, Hong; Xu, Jiang; He, Jiping

    2011-01-01

    This paper presents a foot pressure measuring system based on LabVIEW. The hardware and software designs are described; LabVIEW is used to build the application interface for displaying plantar pressure. The system performs plantar pressure data acquisition, data storage, waveform display and waveform playback. Testing showed that the system's results were consistent with the changing trend of normal gait, conforming to human systems engineering theory and demonstrating the system's reliability. The system gives vivid, visual results, and provides a new method for measuring foot pressure as well as a reference for the design of insole systems.

  16. Design and experimental research on a self-magnetic pinch diode under MV

    NASA Astrophysics Data System (ADS)

    Pengfei, ZHANG; Yang, HU; Jiang, SUN; Yan, SONG; Jianfeng, SUN; Zhiming, YAO; Peitian, CONG; Mengtong, QIU; Aici, QIU

    2018-01-01

    A self-magnetic pinch diode (SMPD) integrating an anode-foil-reinforced electron beam pinch focus with a small, high-dose x-ray spot output was designed and optimized. An x-ray focal spot measuring system was developed in accordance with the principle of pinhole imaging. The designed SMPD and the corresponding measuring system were tested at ∼MV, yielding 1.75 × 2 mm² oval x-ray spots (AWE definition) and a forward-directed dose of 1.6 rad at 1 m. The results confirmed that the anode foil can significantly strengthen the electron beam pinch focus, and that the focal spot measuring system can collect clear focal spot images, indicating that the principle and method are feasible.

  17. Cryogenic colocalization microscopy for nanometer-distance measurements.

    PubMed

    Weisenburger, Siegfried; Jing, Bo; Hänni, Dominik; Reymond, Luc; Schuler, Benjamin; Renn, Alois; Sandoghdar, Vahid

    2014-03-17

    The main limiting factor in spatial resolution of localization microscopy is the number of detected photons. Recently we showed that cryogenic measurements improve the photostability of fluorophores, giving access to Angstrom precision in localization of single molecules. Here, we extend this method to colocalize two fluorophores attached to well-defined positions of a double-stranded DNA. By measuring the separations of the fluorophore pairs prepared at different design positions, we verify the feasibility of cryogenic distance measurement with sub-nanometer accuracy. We discuss the important challenges of our method as well as its potential for further improvement and various applications.
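    The link between detected photons and localization precision follows the familiar shot-noise scaling: precision ≈ PSF width divided by the square root of the photon count (pixelation and background refinements are omitted). A hedged sketch with illustrative numbers, not values from the paper:

```python
import math

def localization_precision(psf_sigma_nm, n_photons):
    # Shot-noise-limited localization precision: PSF width / sqrt(photons).
    # Background and pixelation terms are deliberately omitted.
    return psf_sigma_nm / math.sqrt(n_photons)

# Improved photostability raises the photon budget; for a 130 nm PSF width,
# going from 1e4 to 1e6 detected photons tightens the precision tenfold,
# from about 1.3 nm to about 0.13 nm (roughly Angstrom scale):
room_temperature = localization_precision(130.0, 1e4)
cryogenic = localization_precision(130.0, 1e6)
```

    This square-root scaling is why extending fluorophore photostability at cryogenic temperature translates directly into finer colocalization distances.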

  18. Blood oxygenation and flow measurements using a single 720-nm tunable V-cavity laser.

    PubMed

    Feng, Yafei; Deng, Haoyu; Chen, Xin; He, Jian-Jun

    2017-08-01

    We propose and demonstrate a single-laser sensing method for measuring both blood oxygenation and microvascular blood flow. Based on the optimal wavelength range found from a theoretical analysis of differential-absorption-based blood oxygenation measurement, we designed and fabricated a 720-nm-band wavelength-tunable V-cavity laser. Without any grating or bandgap engineering, the laser has a wavelength tuning range of 14.1 nm. By using the laser emitting at 710.3 nm and 724.4 nm to measure oxygenation and blood flow, we experimentally demonstrate the proposed method.
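    The differential-absorption step can be sketched with the two-wavelength Beer-Lambert model: absorbance at each wavelength is a saturation-weighted mix of the HbO2 and Hb extinction coefficients, and the ratio of the two absorbances is inverted for saturation. The coefficients below are made-up illustrative values, not the published ones for 710.3 nm and 724.4 nm:

```python
def saturation_from_ratio(R, eo1, ed1, eo2, ed2):
    # Invert the two-wavelength Beer-Lambert model for oxygen saturation S,
    # given R = A(lambda1) / A(lambda2) and the HbO2 (eo) and Hb (ed)
    # extinction coefficients at the two wavelengths.
    return (ed1 - R * ed2) / (R * (eo2 - ed2) - (eo1 - ed1))

# Forward model with a known saturation, to sanity-check the inversion.
eo1, ed1 = 0.6, 1.2      # hypothetical coefficients at lambda1
eo2, ed2 = 0.9, 0.7      # hypothetical coefficients at lambda2
S_true = 0.97
A1 = S_true * eo1 + (1.0 - S_true) * ed1
A2 = S_true * eo2 + (1.0 - S_true) * ed2
S_est = saturation_from_ratio(A1 / A2, eo1, ed1, eo2, ed2)
```

    Because the path length and concentration cancel in the ratio, only the two absorbances and the four extinction coefficients are needed, which is what makes a single tunable laser sufficient.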

  19. Evaluation of Psychological Hardiness and Coping Style as Risk/Resilience Factors for Health Risk Behaviour

    DTIC Science & Technology

    2011-04-01

    RESEARCH DESIGN & METHODS Objective/Hypothesis: The objective of the study is to evaluate the utility of a short hardiness-resilience...problems. Study Design & Methods: This research will evaluate the utility of hardiness, as measured by the DRS-15R, as a screening tool for...may take a toll on defense workers. This research points the way to new approaches for early identification of military workers at risk for stress

  20. A Multi Directional Perfect Reconstruction Filter Bank Designed with 2-D Eigenfilter Approach: Application to Ultrasound Speckle Reduction.

    PubMed

    Nagare, Mukund B; Patil, Bhushan D; Holambe, Raghunath S

    2017-02-01

    B-Mode ultrasound images are degraded by inherent noise called speckle, which has a considerable impact on image quality. This noise reduces the accuracy of image analysis and interpretation; reducing speckle noise is therefore an essential task that improves the accuracy of clinical diagnostics. In this paper, a multi-directional perfect-reconstruction (PR) filter bank is proposed based on a 2-D eigenfilter approach. The proposed method is used for the design of a two-dimensional (2-D) two-channel linear-phase FIR perfect-reconstruction filter bank, in which fan-shaped, diamond-shaped and checkerboard-shaped filters are designed. The quadratic measure of the error function between the passband and stopband of the filter has been used as the objective function. First, the low-pass analysis filter is designed, and the PR condition is expressed as a set of linear constraints on the corresponding synthesis low-pass filter. Subsequently, the synthesis filter is designed using the eigenfilter design method with linear constraints. The newly designed 2-D filters are used in a translation-invariant pyramidal directional filter bank (TIPDFB) for reduction of speckle noise in ultrasound images. The proposed 2-D filters give better symmetry, regularity and frequency selectivity than existing design methods. The proposed method is validated on synthetic and real ultrasound data, ensuring improvement in the quality of ultrasound images, and suppresses speckle noise more efficiently than existing methods.
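    The eigenfilter step, expressing the quadratic pass/stopband error as h^T P h and taking the eigenvector of the smallest eigenvalue as the unit-norm filter, can be sketched in 1-D. This is a generic illustration with made-up order and band edges, not the authors' constrained 2-D design:

```python
import numpy as np

M = 8                                   # half-order (filter length 2M + 1)
wp, ws = 0.3 * np.pi, 0.5 * np.pi       # demo passband / stopband edges

def c(w):
    # Amplitude-response basis for an even-symmetric (Type I) linear-phase
    # filter: A(w) = b @ c(w).
    return np.array([1.0] + [2.0 * np.cos(k * w) for k in range(1, M + 1)])

# Quadratic error matrix P: deviation from the reference gain A(0) over the
# passband, plus residual amplitude over the stopband.
w_pass = np.linspace(0.0, wp, 200)
w_stop = np.linspace(ws, np.pi, 200)
P = sum(np.outer(c(0.0) - c(w), c(0.0) - c(w)) for w in w_pass)
P += sum(np.outer(c(w), c(w)) for w in w_stop)

# The unit-norm minimizer of b @ P @ b is the eigenvector belonging to the
# smallest eigenvalue (np.linalg.eigh returns eigenvalues in ascending order).
vals, vecs = np.linalg.eigh(P)
b = vecs[:, 0]
```

    The paper's fan, diamond and checkerboard filters follow the same recipe with 2-D frequency regions and the additional linear PR constraints on the synthesis filter.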

  1. The state of the art and future opportunities for using longitudinal n-of-1 methods in health behaviour research: a systematic literature overview.

    PubMed

    McDonald, Suzanne; Quinn, Francis; Vieira, Rute; O'Brien, Nicola; White, Martin; Johnston, Derek W; Sniehotta, Falko F

    2017-12-01

    n-of-1 studies test hypotheses within individuals based on repeated measurement of variables within the individual over time. Intra-individual effects may differ from those found in between-participant studies. Using examples from a systematic review of n-of-1 studies in health behaviour research, this article provides a state of the art overview of the use of n-of-1 methods, organised according to key methodological considerations related to n-of-1 design and analysis, and describes future challenges and opportunities. A comprehensive search strategy (PROSPERO:CRD42014007258) was used to identify articles published between 2000 and 2016, reporting observational or interventional n-of-1 studies with health behaviour outcomes. Thirty-nine articles were identified which reported on n-of-1 observational designs and a range of n-of-1 interventional designs, including AB, ABA, ABABA, alternating treatments, n-of-1 randomised controlled trial, multiple baseline and changing criterion designs. Behaviours measured included treatment adherence, physical activity, drug/alcohol use, sleep, smoking and eating behaviour. Descriptive, visual or statistical analyses were used. We identify scope and opportunities for using n-of-1 methods to answer key questions in health behaviour research. n-of-1 methods provide the tools needed to help advance theoretical knowledge and personalise/tailor health behaviour interventions to individuals.

  2. The use of the principle of superposition in measuring and predicting the thermal characteristics of an electronic equipment operated in a space environment

    NASA Technical Reports Server (NTRS)

    Gale, E. H.

    1980-01-01

    The advantages and possible pitfalls of using a generalized method of measuring and, based on these measurements, predicting the transient or steady-state thermal response characteristics of electronic equipment designed to operate in a space environment are reviewed. The method requires generation of a set of thermal influence coefficients by test measurement in vacuo. A simplified thermal mockup is used in this test. Once this data set is measured, temperatures resulting from arbitrary steady-state or time-varying power profiles can be economically calculated with the aid of a digital computer.

  3. Design and validation of a method for evaluation of interocular interaction.

    PubMed

    Lai, Xin Jie Angela; Alexander, Jack; Ho, Arthur; Yang, Zhikuan; He, Mingguang; Suttle, Catherine

    2012-02-01

    To design a simple viewing system allowing dichoptic masking, and to validate this system in adults and children with normal vision. A Trial Frame Apparatus (TFA) was designed to evaluate interocular interaction. This device consists of a trial frame, a 1 mm pinhole in front of the tested eye and a full or partial occluder in front of the non-tested eye. The difference in visual function in one eye between the full- and partial-occlusion conditions was termed the Interaction Index. In experiment 1, low-contrast acuity was measured in six adults using five types of partial occluder. Interaction Index was compared between these five, and the occluder showing the highest Index was used in experiment 2. In experiment 2, low-contrast acuity, contrast sensitivity, and alignment sensitivity were measured in the non-dominant eye of 45 subjects (15 older adults, 15 young adults, and 15 children), using the TFA and an existing well-validated device (shutter goggles) with full and partial occlusion of the dominant eye. These measurements were repeated on 11 subjects of each group using TFA in the partial-occlusion condition only. Repeatability of visual function measurements using TFA was assessed using the Bland-Altman method and agreement between TFA and goggles in terms of visual functions and interactions was assessed using the Bland-Altman method and t-test. In all three subject groups, the TFA showed a high level of repeatability in all visual function measurements. Contrast sensitivity was significantly poorer when measured using TFA than using goggles (p < 0.05). However, Interaction Index of all three visual functions showed acceptable agreement between TFA and goggles (p > 0.05). The TFA may provide an acceptable method for the study of some forms of dichoptic masking in populations where more complex devices (e.g., shutter goggles) cannot be used.

  4. A salient region detection model combining background distribution measure for indoor robots.

    PubMed

    Li, Na; Xu, Hui; Wang, Zhenhua; Sun, Lining; Chen, Guodong

    2017-01-01

    Vision systems play an important role in indoor robotics. Saliency detection methods, which capture regions perceived as important, are used to improve the performance of the visual perception system. Most state-of-the-art saliency detection methods, although performing outstandingly on natural images, cannot work in complicated indoor environments. Therefore, we propose a new method comprising graph-based RGB-D segmentation, a primary saliency measure, a background distribution measure, and their combination. In addition, region roundness is proposed to describe the compactness of a region so as to measure background distribution more robustly. To validate the proposed approach, eleven influential methods are compared on the DSD and ECSSD datasets. Moreover, we build a mobile robot platform for application in an actual environment, and design three kinds of experimental conditions: different viewpoints, illumination variations and partial occlusions. Experimental results demonstrate that our model outperforms existing methods and is useful for indoor mobile robots.

  5. Perspective: Randomized Controlled Trials Are Not a Panacea for Diet-Related Research12

    PubMed Central

    Hébert, James R; Frongillo, Edward A; Adams, Swann A; Turner-McGrievy, Gabrielle M; Hurley, Thomas G; Miller, Donald R; Ockene, Ira S

    2016-01-01

    Research into the role of diet in health faces a number of methodologic challenges in the choice of study design, measurement methods, and analytic options. Heavier reliance on randomized controlled trial (RCT) designs is suggested as a way to solve these challenges. We present and discuss 7 inherent and practical considerations with special relevance to RCTs designed to study diet: 1) the need for narrow focus; 2) the choice of subjects and exposures; 3) blinding of the intervention; 4) perceived asymmetry of treatment in relation to need; 5) temporal relations between dietary exposures and putative outcomes; 6) strict adherence to the intervention protocol, despite potential clinical counter-indications; and 7) the need to maintain methodologic rigor, including measuring diet carefully and frequently. Alternatives, including observational studies and adaptive intervention designs, are presented and discussed. Given high noise-to-signal ratios interjected by using inaccurate assessment methods in studies with weak or inappropriate study designs (including RCTs), it is conceivable and indeed likely that effects of diet are underestimated. No matter which designs are used, studies will require continued improvement in the assessment of dietary intake. As technology continues to improve, there is potential for enhanced accuracy and reduced user burden of dietary assessments that are applicable to a wide variety of study designs, including RCTs. PMID:27184269

  6. Vulnerability to cavitation in Olea europaea current-year shoots: further evidence of an open-vessel artifact associated with centrifuge and air-injection techniques.

    PubMed

    Torres-Ruiz, José M; Cochard, Hervé; Mayr, Stefan; Beikircher, Barbara; Diaz-Espejo, Antonio; Rodriguez-Dominguez, Celia M; Badel, Eric; Fernández, José Enrique

    2014-11-01

    Different methods have been devised to analyze vulnerability to cavitation of plants. Although a good agreement between them is usually found, some discrepancies have been reported when measuring samples from long-vesseled species. The aim of this study was to evaluate possible artifacts derived from different methods and sample sizes. Current-year shoot segments of mature olive trees (Olea europaea), a long-vesseled species, were used to generate vulnerability curves (VCs) by bench dehydration, pressure collar and both static- and flow-centrifuge methods. For the latter, two different rotors were used to test possible effects of the rotor design on the curves. Indeed, high-resolution computed tomography (HRCT) images were used to evaluate the functional status of xylem at different water potentials. Measurements of native embolism were used to validate the methods used. The pressure collar and the two centrifugal methods showed greater vulnerability to cavitation than the dehydration method. The shift in vulnerability thresholds in centrifuge methods was more pronounced in shorter samples, supporting the open-vessel artifact hypothesis as a higher proportion of vessels were open in short samples. The two different rotor designs used for the flow-centrifuge method revealed similar vulnerability to cavitation. Only the bench dehydration or HRCT methods produced VCs that agreed with native levels of embolism and water potential values measured in the field. © 2014 Scandinavian Plant Physiology Society.

  7. Sensor for Measuring Hydrogen Partial Pressure in Parabolic Trough Power Plant Expansion Tanks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glatzmaier, Greg C.; Cooney, Daniel A.

    The National Renewable Energy Laboratory and Acciona Energy North America are working together to design and implement a process system that provides a permanent solution to the issue of hydrogen buildup at parabolic trough power plants. We are pursuing a method that selectively removes hydrogen from the expansion tanks that serve as reservoirs for the heat transfer fluid (HTF) that circulates in the collector field and power block components. Our modeling shows that removing hydrogen from the expansion tanks at a design rate reduces and maintains dissolved hydrogen in the circulating HTF to a selected target level. Our collaborative work consists of several tasks that are needed to advance this process concept to a development stage, where it is ready for implementation at a commercial power plant. Our main effort is to design and evaluate likely process-unit operations that remove hydrogen from the expansion tanks at a specified rate. Additionally, we designed and demonstrated a method and instrumentation to measure hydrogen partial pressure and concentration in the expansion-tank headspace gas. We measured hydrogen partial pressure in the headspace gas mixture using a palladium-alloy membrane, which is permeable exclusively to hydrogen. The membrane establishes a pure hydrogen gas phase that is in equilibrium with the hydrogen in the gas mixture. We designed and fabricated instrumentation, and demonstrated its effectiveness in measuring hydrogen partial pressures over a range of three orders of magnitude. Our goal is to install this instrument at the Nevada Solar One power plant and to demonstrate its effectiveness in measuring hydrogen levels in the expansion tanks under normal plant operating conditions.

  8. A longitudinal multilevel CFA-MTMM model for interchangeable and structurally different methods

    PubMed Central

    Koch, Tobias; Schultze, Martin; Eid, Michael; Geiser, Christian

    2014-01-01

    One of the key interests in the social sciences is the investigation of change and stability of a given attribute. Although numerous models have been proposed in the past for analyzing longitudinal data, including multilevel and/or latent variable modeling approaches, only few modeling approaches have been developed for studying construct validity in longitudinal multitrait-multimethod (MTMM) measurement designs. The aim of the present study was to extend the spectrum of current longitudinal modeling approaches for MTMM analysis. Specifically, a new longitudinal multilevel CFA-MTMM model for measurement designs with structurally different and interchangeable methods (called the Latent-State-Combination-Of-Methods model, LS-COM) is presented. Interchangeable methods are methods that are randomly sampled from a set of equivalent methods (e.g., multiple student ratings of teaching quality), whereas structurally different methods are methods that cannot be easily replaced by one another (e.g., teacher ratings, self-ratings, principal ratings). Results of a simulation study indicate that the parameters and standard errors in the LS-COM model are well recovered even in conditions with only five observations per estimated model parameter. The advantages and limitations of the LS-COM model relative to other longitudinal MTMM modeling approaches are discussed. PMID:24860515

  9. A novel analysis method for paired-sample microbial ecology experiments

    DOE PAGES

    Olesen, Scott W.; Vora, Suhani; Techtmann, Stephen M.; ...

    2016-05-06

    Many microbial ecology experiments use sequencing data to measure a community's response to an experimental treatment. In a common experimental design, two units, one control and one experimental, are sampled before and after the treatment is applied to the experimental unit. The four resulting samples contain information about the dynamics of organisms that respond to the treatment, but there are no analytical methods designed to extract exactly this type of information from this configuration of samples. Here we present an analytical method specifically designed to visualize and generate hypotheses about microbial community dynamics in experiments that have paired samples and few or no replicates. The method is based on the Poisson lognormal distribution, long studied in macroecology, which we found accurately models the abundance distribution of taxa counts from 16S rRNA surveys. To demonstrate the method's validity and potential, we analyzed an experiment that measured the effect of crude oil on ocean microbial communities in microcosm. Our method identified known oil degraders as well as two clades, Maricurvus and Rhodobacteraceae, that responded to amendment with oil but do not include known oil degraders. Furthermore, our approach is sensitive to organisms that increased in abundance only in the experimental unit but less sensitive to organisms that increased in both control and experimental units, thus mitigating the role of 'bottle effects'.
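    The Poisson lognormal model at the heart of the method is straightforward to simulate: each taxon gets a latent lognormal abundance, and the observed count is a Poisson draw around it. A minimal sketch with arbitrary demo parameters, not values fitted in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Latent abundances are lognormal; observed 16S-style counts are Poisson
# draws around them, which yields the heavy-tailed count distributions
# typical of taxa surveys.
mu, sigma, n_taxa = 1.0, 1.5, 10_000
latent = rng.lognormal(mean=mu, sigma=sigma, size=n_taxa)
counts = rng.poisson(latent)
```

    Fitting the model to paired before/after samples, rather than simulating from it, is the paper's contribution; the simulation above only shows the generative structure being assumed.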

  10. A novel analysis method for paired-sample microbial ecology experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olesen, Scott W.; Vora, Suhani; Techtmann, Stephen M.

    Many microbial ecology experiments use sequencing data to measure a community's response to an experimental treatment. In a common experimental design, two units, one control and one experimental, are sampled before and after the treatment is applied to the experimental unit. The four resulting samples contain information about the dynamics of organisms that respond to the treatment, but there are no analytical methods designed to extract exactly this type of information from this configuration of samples. Here we present an analytical method specifically designed to visualize and generate hypotheses about microbial community dynamics in experiments that have paired samples and few or no replicates. The method is based on the Poisson lognormal distribution, long studied in macroecology, which we found accurately models the abundance distribution of taxa counts from 16S rRNA surveys. To demonstrate the method's validity and potential, we analyzed an experiment that measured the effect of crude oil on ocean microbial communities in microcosm. Our method identified known oil degraders as well as two clades, Maricurvus and Rhodobacteraceae, that responded to amendment with oil but do not include known oil degraders. Furthermore, our approach is sensitive to organisms that increased in abundance only in the experimental unit but less sensitive to organisms that increased in both control and experimental units, thus mitigating the role of bottle effects.

  11. Evaluating the Impact of Physical Activity Apps and Wearables: Interdisciplinary Review

    PubMed Central

    Rooksby, John; Gray, Cindy M

    2018-01-01

    Background Although many smartphone apps and wearables have been designed to improve physical activity, their rapidly evolving nature and complexity present challenges for evaluating their impact. Traditional methodologies, such as randomized controlled trials (RCTs), can be slow. To keep pace with rapid technological development, evaluations of mobile health technologies must be efficient. Rapid alternative research designs have been proposed, and efficient in-app data collection methods, including in-device sensors and device-generated logs, are available. Along with effectiveness, it is important to measure engagement (ie, users’ interaction and usage behavior) and acceptability (ie, users’ subjective perceptions and experiences) to help explain how and why apps and wearables work. Objectives This study aimed to (1) explore the extent to which evaluations of physical activity apps and wearables employ rapid research designs, assess engagement and acceptability as well as effectiveness, and use efficient data collection methods; and (2) describe which dimensions of engagement and acceptability are assessed. Method An interdisciplinary scoping review using 8 databases from health and computing sciences. Included studies measured physical activity and evaluated physical activity apps or wearables that provided sensor-based feedback. Results were analyzed using descriptive numerical summaries, chi-square testing, and qualitative thematic analysis. Results A total of 1829 abstracts were screened, and 858 articles read in full. Of 111 included studies, 61 (55.0%) were published between 2015 and 2017. Most (55.0%, 61/111) were RCTs, and only 2 studies (1.8%) used rapid research designs: 1 single-case design and 1 multiphase optimization strategy. Other research designs included 23 (22.5%) repeated measures designs, 11 (9.9%) nonrandomized group designs, 10 (9.0%) case studies, and 4 (3.6%) observational studies. 
Less than one-third of the studies (32.0%, 35/111) investigated effectiveness, engagement, and acceptability together. To measure physical activity, most studies (90.1%, 101/111) employed sensors (either in-device [67.6%, 75/111] or external [23.4%, 26/111]). RCTs were more likely to employ external sensors (accelerometers: P=.005). Studies that assessed engagement (52.3%, 58/111) mostly used device-generated logs (91%, 53/58) to measure the frequency, depth, and length of engagement. Studies that assessed acceptability (57.7%, 64/111) most often used questionnaires (64%, 42/64) and/or qualitative methods (53%, 34/64) to explore appreciation, perceived effectiveness and usefulness, satisfaction, intention to continue use, and social acceptability. Some studies (14.4%, 16/111) assessed dimensions more closely related to usability (ie, burden of sensor wear and use, interface complexity, and perceived technical performance). Conclusions The rapid increase of research into the impact of physical activity apps and wearables means that evaluation guidelines are urgently needed to promote efficiency through the use of rapid research designs, in-device sensors and user-logs to assess effectiveness, engagement, and acceptability. Screening articles was time-consuming because reporting across health and computing sciences lacked standardization. Reporting guidelines are therefore needed to facilitate the synthesis of evidence across disciplines. PMID:29572200
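The kind of chi-square comparison the review performs (e.g., study design versus sensor type) can be sketched with a hand-rolled 2×2 test; for one degree of freedom the chi-square p-value has a closed form via the complementary error function, so no statistics library is needed. The counts below are illustrative, not the review's data:

```python
import math
import numpy as np

# Hypothetical 2x2 contingency table: rows = study design (RCT vs. other),
# columns = sensor type (external vs. in-device). Counts are illustrative.
obs = np.array([[20.0, 41.0],
                [6.0, 44.0]])
row = obs.sum(axis=1, keepdims=True)
col = obs.sum(axis=0, keepdims=True)
exp = row @ col / obs.sum()            # expected counts under independence
chi2 = ((obs - exp) ** 2 / exp).sum()  # Pearson chi-square statistic

# For 1 degree of freedom, the chi-square survival function has the
# closed form p = erfc(sqrt(chi2 / 2))
p = math.erfc(math.sqrt(chi2 / 2))
print(round(chi2, 2), round(p, 4))
```

(This is the uncorrected Pearson statistic; for small cells a continuity correction or exact test would be preferable.)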

  12. Eddy Covariance Method: Overview of General Guidelines and Conventional Workflow

    NASA Astrophysics Data System (ADS)

    Burba, G. G.; Anderson, D. J.; Amen, J. L.

    2007-12-01

    Atmospheric flux measurements are widely used to estimate water, heat, carbon dioxide and trace gas exchange between the ecosystem and the atmosphere. The Eddy Covariance method is one of the most direct, defensible ways to measure and calculate turbulent fluxes within the atmospheric boundary layer. However, the method is mathematically complex, and requires significant care to set up and process data. These reasons may be why the method is currently used predominantly by micrometeorologists. Modern instruments and software can potentially expand the use of this method beyond micrometeorology and prove valuable for plant physiology, hydrology, biology, ecology, entomology, and other non-micrometeorological areas of research. The main challenge of the method for a non-expert is the complexity of system design, implementation, and processing of the large volume of data. In the past several years, efforts of the flux networks (e.g., FluxNet, Ameriflux, CarboEurope, Fluxnet-Canada, Asiaflux, etc.) have led to noticeable progress in unification of the terminology and general standardization of processing steps. The methodology itself, however, is difficult to unify, because various experimental sites and different purposes of studies dictate different treatments, and site-, measurement- and purpose-specific approaches. Here we present an overview of theory and typical workflow of the Eddy Covariance method in a format specifically designed to (i) familiarize a non-expert with general principles, requirements, applications, and processing steps of the conventional Eddy Covariance technique, (ii) assist in further understanding the method through more advanced references such as textbooks, network guidelines and journal papers, (iii) help technicians, students and new researchers in the field deployment of the Eddy Covariance method, and (iv) assist in its use beyond micrometeorology. 
The overview is based, to a large degree, on the frequently asked questions received from new users of the Eddy Covariance method and relevant instrumentation, and employs non-technical language to be of practical use to those new to this field. Information is provided on theory of the method (including state of methodology, basic derivations, practical formulations, major assumptions and sources of errors, error treatment, and use in non-traditional terrains), practical workflow (e.g., experimental design, implementation, data processing, and quality control), alternative methods and applications, and the most frequently overlooked details of the measurements. References and access to an extended 141-page Eddy Covariance Guideline in three electronic formats are also provided.
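The computational core of the Eddy Covariance method, Reynolds decomposition followed by the covariance of the fluctuations, can be sketched on synthetic data. The sampling rate, averaging period and signal model below are illustrative, not prescriptions from the guideline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 10 Hz time series over a 30-minute averaging period:
# vertical wind w (m/s) and a scalar concentration c correlated with it.
n = 18000
w = rng.normal(0.0, 0.3, n)
c = 15.0 + 0.5 * w + rng.normal(0.0, 0.2, n)

# Reynolds decomposition: split each series into mean + fluctuation,
# then the turbulent flux is the mean product of the fluctuations.
w_prime = w - w.mean()
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)
print(round(flux, 3))
```

In this toy model the true flux is 0.5 × var(w) ≈ 0.045; a real workflow would add the corrections (coordinate rotation, despiking, density corrections, etc.) that the overview describes.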

  13. Plastic Foam Porosity Characterization by Air-Borne Ultrasound

    NASA Astrophysics Data System (ADS)

    Hoffrén, H.; Karppinen, T.; Hæggström, E.

    2006-03-01

    We continue to develop an ultrasonic burst-reflection method for estimating porosity and tortuosity of solid materials. As a first step we report on method design considerations and measurements on polyurethane foams (Sylomer® vibration dampener) with well-defined porosity. The ultrasonic method is experimentally tested by measuring 235 kHz and 600 kHz air-borne ultrasound reflection from a foam surface at two incidence angles. The reflected sound wave from different foam samples (32% - 64% porosity) was compared to a wave that had traveled from the transmitter to the detector without reflection. The ultrasonically estimated sample porosities coincided within 8% with the porosity estimates obtained by a gravimetric reference method. This parallels the uncertainty of the gravimetric method, 8%. The repeatability of the ultrasonic porosity measurements was better than 5%.

  14. Environmental Correlates to Behavioral Health Outcomes in Alzheimer's Special Care Units

    ERIC Educational Resources Information Center

    Zeisel, John; Silverstein, Nina M.; Hyde, Joan; Levkoff, Sue; Lawton, M. Powell; Holmes, William

    2003-01-01

    Purpose: We systematically measured the associations between environmental design features of nursing home special care units and the incidence of aggression, agitation, social withdrawal, depression, and psychotic problems among persons living there who have Alzheimer's disease or a related disorder. Design and Methods: We developed and tested a…

  15. Using Delphi Methodology to Design Assessments of Teachers' Pedagogical Content Knowledge

    ERIC Educational Resources Information Center

    Manizade, Agida Gabil; Mason, Marguerite M.

    2011-01-01

    Descriptions of methodologies that can be used to create items for assessing teachers' "professionally situated" knowledge are lacking in mathematics education research literature. In this study, researchers described and used the Delphi method to design an instrument to measure teachers' pedagogical content knowledge. The instrument focused on a…

  16. Children's Services Statistical Neighbour Benchmarking Tool. Practitioner User Guide

    ERIC Educational Resources Information Center

    National Foundation for Educational Research, 2007

    2007-01-01

    Statistical neighbour models provide one method for benchmarking progress. For each local authority (LA), these models designate a number of other LAs deemed to have similar characteristics. These designated LAs are known as statistical neighbours. Any LA may compare its performance (as measured by various indicators) against its statistical…

  17. Evaluating the Accuracy of Common Runoff Estimation Methods for New Impervious Hot-Mix Asphalt

    EPA Science Inventory

    Accurately predicting runoff volume from impervious surfaces for water quality design events (e.g., 25.4 mm) is important for sizing green infrastructure stormwater control measures to meet water quality and infiltration design targets. The objective of this research was to quan...

  18. Research on an optoelectronic measurement system of dynamic envelope measurement for China Railway high-speed train

    NASA Astrophysics Data System (ADS)

    Zhao, Ziyue; Gan, Xiaochuan; Zou, Zhi; Ma, Liqun

    2018-01-01

    The dynamic envelope measurement plays a very important role in the external dimension design of high-speed trains, yet at present there is no digital measurement system that solves this problem. This paper develops an optoelectronic measurement system using monocular digital cameras, and presents the underlying measurement theory, visual target design, calibration algorithm design, software implementation, and so on. The system consists of several CMOS digital cameras, several luminous measurement targets, a scale bar, data processing software and a terminal computer. It offers advantages such as a large measurement volume, a high degree of automation, strong anti-interference ability, noise rejection and real-time measurement. In this paper, we solve key technical problems such as the transfer, storage and processing of the multiple cameras' high-resolution digital images. The experimental data show that the repeatability of the system is within 0.02 mm and the distance error is within 0.12 mm over the whole workspace. The experiment verified the rationality of the system scheme and the correctness, precision and effectiveness of the relevant methods.

  19. Social cohesion through football: a quasi-experimental mixed methods design to evaluate a complex health promotion program

    PubMed Central

    2010-01-01

    Social isolation and disengagement fragment local communities. Evidence indicates that refugee families are highly vulnerable to social isolation in their countries of resettlement. Research to identify approaches to best address this is needed. Football United is a program that aims to foster social inclusion and cohesion in areas with high refugee settlement in New South Wales, Australia, through skills and leadership development, mentoring, and the creation of links with local community and corporate leaders and organisations. The Social Cohesion through Football study's broad goal is to examine the implementation of a complex health promotion program, and to analyse the processes involved in program implementation. The study will consider program impact on individual health and wellbeing, social inclusion and cohesion, as well as analyse how the program by necessity interacts and adapts to context during implementation, a concept we refer to as plasticity. The proposed study will be the first prospective cohort impact study to our knowledge to assess the impact of a comprehensive integrated program using football as a vehicle for fostering social inclusion and cohesion in communities with high refugee settlement. Methods/design A quasi-experimental cohort study design with treatment partitioning involving four study sites. The study employs a 'dose response' model, comparing those with no involvement in the Football United program with those with lower or higher levels of participation. A range of qualitative and quantitative measures will be used in the study. Study participants' emotional wellbeing, resilience, ethnic identity and other group orientation, feelings of social inclusion and belonging will be measured using a survey instrument complemented by relevant data drawn from in-depth interviews, self-report measures and participant observation. The views of key informants from the program and the wider community will also be solicited. 
Discussion The complexity of the Football United program poses challenges for measurement, and requires the study design to be responsive to the dynamic nature of the program and context. Assessment of change is needed at multiple levels, drawing on mixed methods and multidisciplinary approaches in implementation and evaluation. Attention to these challenges has underpinned the design and methods in the Social Cohesion through Football study, which will use a unique and innovative combination of measures that have not been applied together previously in social inclusion/cohesion and sport and social inclusion/cohesion program research. PMID:20920361

  20. Assessing ergonomic risks of software: Development of the SEAT.

    PubMed

    Peres, S Camille; Mehta, Ranjana K; Ritchey, Paul

    2017-03-01

    Software utilizing interaction designs that require extensive dragging or clicking of icons may increase users' risks for upper extremity cumulative trauma disorders. The purpose of this research is to develop a Self-report Ergonomic Assessment Tool (SEAT) for assessing the risks of software interaction designs and facilitate mitigation of those risks. A 28-item self-report measure was developed by combining and modifying items from existing industrial ergonomic tools. Data were collected from 166 participants after they completed four different tasks that varied by method of input (touch or keyboard and mouse) and type of task (selecting or typing). Principal component analysis found distinct factors associated with stress (i.e., demands) and strain (i.e., response). Repeated measures analyses of variance showed that participants could discriminate the different strain induced by the input methods and tasks. However, participants' ability to discriminate between the stressors associated with that strain was mixed. Further validation of the SEAT is necessary but these results indicate that the SEAT may be a viable method of assessing ergonomics risks presented by software design.
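A principal component analysis of the kind used to separate stress and strain factors can be sketched with NumPy's SVD. The simulated two-factor item structure below is illustrative, not the SEAT data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative item responses: 166 respondents x 6 items, where items 0-2
# load on one latent factor and items 3-5 on another (plus item noise).
n = 166
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
X = np.column_stack([f1 + 0.3 * rng.normal(size=n) for _ in range(3)] +
                    [f2 + 0.3 * rng.normal(size=n) for _ in range(3)])

# PCA via SVD of the standardized data matrix
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / (s**2).sum()   # variance explained per component
print(np.round(explained[:2], 2))
```

With a clean two-factor structure, the first two components should capture most of the variance, mirroring the distinct stress and strain factors the paper reports.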

  1. Design of Unstructured Adaptive (UA) NAS Parallel Benchmark Featuring Irregular, Dynamic Memory Accesses

    NASA Technical Reports Server (NTRS)

    Feng, Hui-Yu; VanderWijngaart, Rob; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We describe the design of a new method for the measurement of the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. The method involves the solution of a stylized heat transfer problem on an unstructured, adaptive grid. A Spectral Element Method (SEM) with an adaptive, nonconforming mesh is selected to discretize the transport equation. The relatively high order of the SEM lowers the fraction of wall clock time spent on inter-processor communication, which eases the load balancing task and allows us to concentrate on the memory accesses. The benchmark is designed to be three-dimensional. Parallelization and load balance issues of a reference implementation will be described in detail in future reports.

  2. Chapter 12: Survey Design and Implementation for Estimating Gross Savings Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Baumgartner, Robert

    This chapter presents an overview of best practices for designing and executing survey research to estimate gross energy savings in energy efficiency evaluations. A detailed description of the specific techniques and strategies for designing questions, implementing a survey, and analyzing and reporting the survey procedures and results is beyond the scope of this chapter. So for each topic covered below, readers are encouraged to consult articles and books cited in References, as well as other sources that cover the specific topics in greater depth. This chapter focuses on the use of survey methods to collect data for estimating gross savings from energy efficiency programs.

  3. Design of microstrip patch antennas using knowledge insertion through retraining

    NASA Astrophysics Data System (ADS)

    Divakar, T. V. S.; Sudhakar, A.

    2018-04-01

    The traditional way of designing with neural networks is to collect experimental data and train the network, which then acts as a global approximating function used to calculate parameters for unknown configurations. The main drawback of this method is that one rarely has enough experimental data, the cost of prototypes being a major factor [1-4]. Therefore, in this method the authors collected training data from available approximate formulas over the full design range and trained the network with it. After successful training, the network is retrained with the available measured results. This simple procedure inserts experimental knowledge into the network [5]. The method is tested on rectangular and circular microstrip antennas.

  4. On-line measurement of diameter of hot-rolled steel tube

    NASA Astrophysics Data System (ADS)

    Zhu, Xueliang; Zhao, Huiying; Tian, Ailing; Li, Bin

    2015-02-01

    This paper presents the design of an online diameter measurement system for a hot-rolled seamless steel tube production line. Such a system can both advance domestic tube-measuring technology and give domestic hot-rolled seamless steel tube producers stronger product competitiveness at low cost. After analyzing and comparing various detection methods and techniques, a CCD camera-based online caliper design was chosen. The system comprises a hardware measurement portion and an image processing section, combining software control technology with image processing technology to complete online measurement of hot tube diameter. Taking into account the complexity of the actual job site, a relatively simple and practical layout was chosen. The image processing section mainly addresses camera calibration and the application of Matlab functions, so that the diameter is computed and displayed directly from the image. A simulation platform was built in the final design phase; images were successfully collected and processed, the feasibility and rationality of the design were demonstrated, and the measurement error was kept below 2%. The design successfully applies photoelectric detection technology to solve real production problems.
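The core image-processing step, locating the two tube edges by thresholding an intensity profile and converting the pixel width to millimetres via a calibration factor, can be sketched on a toy profile. The `mm_per_pixel` value stands in for a prior camera calibration result and is purely illustrative:

```python
import numpy as np

# Toy 1-D intensity profile across the tube: bright background, dark tube.
mm_per_pixel = 0.5                  # assumed calibration factor (illustrative)
profile = np.full(400, 200.0)
profile[90:310] = 30.0              # tube occupies pixels 90..309

# Threshold midway between background and tube levels, find the two edges
threshold = (profile.max() + profile.min()) / 2
dark = np.flatnonzero(profile < threshold)
diameter_px = dark[-1] - dark[0] + 1
diameter_mm = diameter_px * mm_per_pixel
print(diameter_mm)
```

A production system would add sub-pixel edge interpolation and lens-distortion correction, which is where the camera calibration the paper discusses comes in.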

  5. Anisotropic transmissive coding metamaterials based on dispersion modulation of spoof surface plasmon polaritons

    NASA Astrophysics Data System (ADS)

    Pang, Yongqiang; Li, Yongfeng; Zhang, Jieqiu; Chen, Hongya; Xu, Zhuo; Qu, Shaobo

    2018-06-01

    Anisotropic transmissive coding metamaterials (CMMs) have been designed and demonstrated in this work. High-efficiency transmission with the amplitudes close to unity is achieved by ultrathin metallic tapered blade structures, on which incident waves can be highly coupled into spoof surface plasmon polaritons (SSPPs). The transmission phase can be therefore manipulated with much freedom by designing the dispersion of the SSPPs. These tapered blade structures are designed as the anisotropic unit cells of the CMMs. Two 1-bit anisotropic CMMs with different coding sequences were first designed and simulated, and then a 2-bit anisotropic CMM was designed and measured experimentally. The measured results agree well with the simulations. It is expected that this work provides an alternative method for designing the transmissive CMMs, and may find potential applications in the beam forming technique.

  6. Thermodynamic Studies for Drug Design and Screening

    PubMed Central

    Garbett, Nichola C.; Chaires, Jonathan B.

    2012-01-01

    Introduction A key part of drug design and development is the optimization of molecular interactions between an engineered drug candidate and its binding target. Thermodynamic characterization provides information about the balance of energetic forces driving binding interactions and is essential for understanding and optimizing molecular interactions. Areas covered This review discusses the information that can be obtained from thermodynamic measurements and how this can be applied to the drug development process. Current approaches for the measurement and optimization of thermodynamic parameters are presented, specifically higher throughput and calorimetric methods. Relevant literature for this review was identified in part by bibliographic searches for the period 2004 – 2011 using the Science Citation Index and PUBMED and the keywords listed below. Expert opinion The most effective drug design and development platform comes from an integrated process utilizing all available information from structural, thermodynamic and biological studies. Continuing evolution in our understanding of the energetic basis of molecular interactions and advances in thermodynamic methods for widespread application are essential to realize the goal of thermodynamically-driven drug design. Comprehensive thermodynamic evaluation is vital early in the drug development process to speed drug development towards an optimal energetic interaction profile while retaining good pharmacological properties. Practical thermodynamic approaches, such as enthalpic optimization, thermodynamic optimization plots and the enthalpic efficiency index, have now matured to provide proven utility in the design process. Improved throughput in calorimetric methods remains essential for even greater integration of thermodynamics into drug design. PMID:22458502
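The thermodynamic bookkeeping behind approaches like enthalpic optimization, obtaining the binding free energy from a measured dissociation constant and splitting it into enthalpic (ΔH, e.g. from calorimetry) and entropic (−TΔS) contributions, can be sketched in a few lines. The Kd and ΔH values below are illustrative, not data from the review:

```python
import math

R = 8.314    # gas constant, J/(mol K)
T = 298.15   # temperature, K

def dG_from_Kd(Kd_molar):
    """Binding free energy dG = RT ln(Kd); more negative = tighter binding."""
    return R * T * math.log(Kd_molar)

# Split dG into its enthalpic and entropic parts via dG = dH - T*dS
dH = -50_000.0            # J/mol, e.g. from an ITC experiment (illustrative)
dG = dG_from_Kd(1e-9)     # a 1 nM binder
minus_TdS = dG - dH       # the -T*dS term, J/mol
print(round(dG / 1000, 1), round(minus_TdS / 1000, 1))
```

Here the 1 nM binder has ΔG ≈ −51.4 kJ/mol; with ΔH = −50 kJ/mol the binding is almost entirely enthalpy-driven, the profile enthalpic optimization aims for.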

  7. Structure-Based Design of Functional Amyloid Materials

    DOE PAGES

    Li, Dan; Jones, Eric M.; Sawaya, Michael R.; ...

    2014-12-04

    We report that amyloid fibers, once exclusively associated with disease, are acquiring utility as a class of biological nanomaterials. We introduce a method that utilizes the atomic structures of amyloid peptides to design materials with versatile applications. As a model application, we designed amyloid fibers capable of capturing carbon dioxide from flue gas, to address the global problem of excess anthropogenic carbon dioxide. By measuring dynamic separation of carbon dioxide from nitrogen, we show that fibers with designed amino acid sequences double the carbon dioxide binding capacity of the previously reported fiber formed by VQIVYK from Tau protein. In a second application, we designed fibers that facilitate retroviral gene transfer. Finally, by measuring lentiviral transduction, we show that designed fibers exceed the efficiency of polybrene, a commonly used enhancer of transduction. The same procedures can be adapted to the design of countless other amyloid materials with a variety of properties and uses.

  8. Convert a low-cost sensor to a colorimeter using an improved regression method

    NASA Astrophysics Data System (ADS)

    Wu, Yifeng

    2008-01-01

    Closed loop color calibration is a process to maintain consistent color reproduction for color printers. To perform closed loop color calibration, a pre-designed color target is printed and automatically measured by a color measuring instrument. A low-cost sensor has been embedded in the printer to perform the color measurement, and a series of sensor calibration and color conversion methods have been developed. The purpose is to obtain accurate colorimetric measurements from the data measured by the low-cost sensor. In order to achieve high accuracy, we need to carefully calibrate the sensor and minimize all possible errors during the color conversion. After comparing several classical color conversion methods, a regression-based color conversion method was selected. Regression is a powerful method for estimating the color conversion functions, but the main difficulty in using it is finding an appropriate function to describe the relationship between the input and the output data. In this paper, we propose to use 1D pre-linearization tables to improve the linearity between the input sensor measuring data and the output colorimetric data. Using this method, we can increase the accuracy of the regression method, and so improve the accuracy of the color conversion.
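The proposed two-step conversion, a 1D pre-linearization per channel followed by a regression to colorimetric values, can be sketched as follows. The gamma-like sensor model and mixing matrix are assumptions for illustration, not the paper's measured sensor characteristics:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic ground truth: the sensor reads a channel mixture of the
# colorimetric values through a gamma-like nonlinearity (all assumed).
n = 64
xyz = rng.uniform(0.05, 1.0, (n, 3))        # reference colorimetric data
M = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])
sensor = (xyz @ M.T) ** (1 / 2.2)           # nonlinear sensor response

# Step 1: 1-D pre-linearization per channel (here: invert the gamma);
# in practice this would be a per-channel lookup table.
linearized = sensor ** 2.2

# Step 2: linear regression from linearized sensor data to colorimetry
A, *_ = np.linalg.lstsq(linearized, xyz, rcond=None)
pred = linearized @ A
err = np.abs(pred - xyz).max()
print(err < 1e-6)
```

Because the pre-linearization removes the nonlinearity exactly in this toy setup, a plain linear regression recovers the conversion; with real sensor data the same structure keeps the regression model low-order and well-conditioned.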

  9. QUEST+: A general multidimensional Bayesian adaptive psychometric method.

    PubMed

    Watson, Andrew B

    2017-03-01

    QUEST+ is a Bayesian adaptive psychometric testing method that allows an arbitrary number of stimulus dimensions, psychometric function parameters, and trial outcomes. It is a generalization and extension of the original QUEST procedure and incorporates many subsequent developments in the area of parametric adaptive testing. With a single procedure, it is possible to implement a wide variety of experimental designs, including conventional threshold measurement; measurement of psychometric function parameters, such as slope and lapse; estimation of the contrast sensitivity function; measurement of increment threshold functions; measurement of noise-masking functions; Thurstone scale estimation using pair comparisons; and categorical ratings on linear and circular stimulus dimensions. QUEST+ provides a general method to accelerate data collection in many areas of cognitive and perceptual science.
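A one-parameter sketch of the QUEST+ idea: maintain a gridded posterior over the threshold and, on each trial, present the stimulus that minimizes the expected posterior entropy. Real QUEST+ generalizes this to arbitrary numbers of stimulus dimensions, parameters and outcomes; the logistic psychometric function and simulated observer below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

thresholds = np.linspace(-2, 2, 81)   # parameter grid
stimuli = np.linspace(-2, 2, 41)      # candidate stimulus levels

def p_yes(stim, thr, slope=2.0):
    """Logistic psychometric function: P('yes' | stimulus, threshold)."""
    return 1 / (1 + np.exp(-slope * (stim - thr)))

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

true_threshold = 0.7
posterior = np.full(len(thresholds), 1 / len(thresholds))
for _ in range(60):
    # expected posterior entropy for each candidate stimulus
    exp_H = []
    for s in stimuli:
        like_yes = p_yes(s, thresholds)
        post_yes = posterior * like_yes
        post_no = posterior * (1 - like_yes)
        py = post_yes.sum()
        exp_H.append(py * entropy(post_yes / py)
                     + (1 - py) * entropy(post_no / (1 - py)))
    s = stimuli[int(np.argmin(exp_H))]
    # run the trial on a simulated observer, then Bayes update
    yes = rng.random() < p_yes(s, true_threshold)
    like = p_yes(s, thresholds) if yes else 1 - p_yes(s, thresholds)
    posterior = posterior * like
    posterior /= posterior.sum()

estimate = (thresholds * posterior).sum()
print(round(estimate, 2))
```

After a few dozen trials the posterior mean converges near the simulated threshold; extending the grid to slope and lapse parameters gives the multidimensional behavior QUEST+ is known for.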

  10. Note: A dual-channel sensor for dew point measurement based on quartz crystal microbalance.

    PubMed

    Li, Ning; Meng, Xiaofeng; Nie, Jing

    2017-05-01

    A new dual-channel sensor was designed to eliminate the temperature effect on the frequency measurement of the quartz crystal microbalance (QCM) in dew point detection. The sensor uses active temperature control to produce condensation on the surface of the QCM and then detects the dew point. Both the single-channel and the dual-channel methods were tested on the device. The measurement error of the single-channel method was less than 0.5 °C over the dew point range of -2 °C to 10 °C, while that of the dual-channel method was 0.3 °C. The results showed that the dual-channel method was able to eliminate the temperature effect and yield better measurement accuracy.
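The dual-channel principle, a reference crystal that shares the temperature drift but not the mass loading, so that the differential signal isolates condensation, can be sketched with a toy signal model (all numbers illustrative):

```python
import numpy as np

# Toy model: both crystals see the same temperature-induced frequency
# drift; only the sensing crystal shows a Sauerbrey-like frequency drop
# when condensation loads it with mass. Numbers are illustrative.
t = np.linspace(0, 60, 61)                  # time, s
temp_drift = 5.0 * np.sin(t / 10)           # Hz, common to both channels
mass_load = np.where(t > 30, -80.0, 0.0)    # Hz, condensation after t = 30 s

f_sense = temp_drift + mass_load            # sensing-channel frequency shift
f_ref = temp_drift                          # reference-channel frequency shift

# The differential signal rejects the common temperature term exactly
f_diff = f_sense - f_ref
onset = t[np.flatnonzero(f_diff < -40)[0]]  # detected condensation onset
print(onset)
```

In the single-channel case the ±5 Hz thermal drift would be indistinguishable from slow condensation; subtracting the reference channel leaves only the mass-loading step.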

  11. Note: A dual-channel sensor for dew point measurement based on quartz crystal microbalance

    NASA Astrophysics Data System (ADS)

    Li, Ning; Meng, Xiaofeng; Nie, Jing

    2017-05-01

    A new dual-channel sensor was designed to eliminate the temperature effect on the frequency measurement of the quartz crystal microbalance (QCM) in dew point detection. The sensor uses active temperature control to produce condensation on the surface of the QCM and then detects the dew point. Both the single-channel and the dual-channel methods were tested on the device. The measurement error of the single-channel method was less than 0.5 °C over the dew point range of -2 °C to 10 °C, while that of the dual-channel method was 0.3 °C. The results showed that the dual-channel method was able to eliminate the temperature effect and yield better measurement accuracy.

  12. Measurements of the Basic SR-71 Airplane Near-Field Signature

    NASA Technical Reports Server (NTRS)

    Haering, Edward A., Jr.; Whitmore, Stephen A.; Ehernberger, L. J.

    1999-01-01

    Airplane design studies have developed configuration concepts that may produce lower sonic boom annoyance levels. Since lower noise designs differ significantly from other HSCT designs, it is necessary to accurately assess their potential before HSCT final configuration decisions are made. Flight tests to demonstrate lower noise design capability by modifying an existing airframe have been proposed for the Mach 3 SR-71 reconnaissance airplane. To support the modified SR-71 proposal, baseline in-flight measurements were made of the unmodified aircraft. These measurements of SR-71 near-field sonic boom signatures were obtained by an F-16XL probe airplane at flightpath separation distances ranging from approximately 740 to 40 ft. This paper discusses the methods used to gather and analyze the flight data, and makes comparisons of these flight data with CFD results from Douglas Aircraft Corporation and NASA Langley Research Center. The CFD solutions were obtained for the near-field flow about the SR-71, and then propagated to the flight test measurement location using the program MDBOOM.

  13. Comparative Measurements of Radon Concentration in Soil Using Passive and Active Methods in High Level Natural Radiation Area (HLNRA) of Ramsar

    PubMed Central

    Amanat, B; Kardan, M R; Faghihi, R; Hosseini Pooya, S M

    2013-01-01

    Background: Radon and its daughters are amongst the most important sources of natural exposure in the world. Soil is a significant source of radon/thoron owing to its radium and thorium content, and the thoron emanating from it may increase the uncertainty of radon measurements. Recently, a diffusion chamber has been designed and optimized for passive discriminative measurement of radon/thoron concentrations in soil. Objective: In order to evaluate the capability of the passive method, comparative measurements against active methods have been performed. Method: The method is based upon measurements by a diffusion chamber, including two Lexan polycarbonate SSNTDs, which discriminates the radon/thoron emanating from the soil by a delay method. The comparative measurements were made at ten selected points of the HLNRA of Ramsar in Iran, and the linear regression and correlation between the results of the two methods were studied. Results: The radon concentrations range from 12.1 to 165 kBq/m3. The correlation between the results of the active and passive methods was 0.99. The thoron concentrations at these points range from 1.9 to 29.5 kBq/m3. Conclusion: The sensitivity, as well as the strong correlation with active measurements, shows that the new low-cost passive method is appropriate for accurate seasonal measurements of radon and thoron concentrations in soil. PMID:25505760
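The reported regression and correlation analysis between paired passive and active readings amounts to a Pearson correlation and a least-squares line, which takes only a few lines with NumPy. The paired data below are simulated for illustration, not the Ramsar measurements:

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative paired radon readings (kBq/m^3) at ten points: passive
# values scattered around the active ones with small measurement noise.
active = rng.uniform(12, 165, 10)
passive = 1.02 * active + rng.normal(0, 3, 10)

r = np.corrcoef(active, passive)[0, 1]          # Pearson correlation
slope, intercept = np.polyfit(active, passive, 1)  # least-squares line
print(round(r, 2), round(slope, 2))
```

A slope near 1 with a near-zero intercept, together with r ≈ 0.99, is what supports the paper's conclusion that the passive method tracks the active one.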

  14. A Goniometry Paradigm Shift to Measure Burn Scar Contracture in Burn Patients

    DTIC Science & Technology

    2017-10-01

    test more extensively a recently designed Revised Goniometry (RG) method and compare it to Standard Goniometry (SG) used to measure burn scar...joint angle measurements will be found between SG techniques compared to RG techniques which incorporate CKM and CFU principles. Specific Aim 1: To... compare the average reduction in joint range of motion measured with the standard GM measurements to a newly conceived set of revised GM measurements in

  15. A Pilot Study of a Novel Method of Measuring Stigma about Depression Developed for Latinos in the Faith-Based Setting.

    PubMed

    Caplan, Susan

    2016-08-01

    In order to understand the effects of interventions designed to reduce stigma about mental illness, we need valid measures. However, the validity of commonly used measures is compromised by social desirability bias. The purpose of this pilot study was to test an anonymous method of measuring stigma in the community setting. The method of data collection, Preguntas con Cartas (Questions with Cards), used numbered playing cards to conduct anonymous group polling about stigmatizing beliefs during a mental health literacy intervention. An analysis of the difference between Preguntas con Cartas stigma votes and corresponding face-to-face individual survey results for the same seven stigma questions indicated a statistically significant difference in the distributions between the two methods of data collection (χ² = 8.27, p = 0.016). This exploratory study has shown the potential effectiveness of Preguntas con Cartas as a novel method of measuring stigma in the community-based setting.

  16. Research on distributed optical fiber sensing data processing method based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Li, Zhonghu; Yang, Meifang; Wang, Luling; Wang, Jinming; Yan, Junhong; Zuo, Jing

    2018-01-01

    Pipeline leak detection and leak location have received extensive attention in industry. In this paper, a distributed optical fiber sensing system is designed for a heat supply pipeline, and the data processing method for distributed optical fiber sensing based on LabVIEW is studied in detail. The hardware system includes a laser, sensing optical fiber, wavelength division multiplexer, photoelectric detector, data acquisition card and computer. The software system, developed in LabVIEW, applies wavelet denoising to the temperature information, which improves the signal-to-noise ratio (SNR). By extracting characteristic values from the fiber temperature information, the system realizes temperature measurement, leak location, and storage and query of measurement signals. Compared with the traditional negative pressure wave or acoustic signal methods, the distributed optical fiber temperature measuring system can measure temperature at many points along the fiber in a single measurement and locate the leak point accurately. It has broad application prospects.
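
    The wavelet denoising step described above was implemented in LabVIEW; as an illustration of the underlying idea only, the following Python sketch applies a one-level Haar transform with soft thresholding to a synthetic temperature trace (the signal shape, noise level and threshold are all assumed for illustration):

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet soft-threshold denoising (even-length input)."""
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass coefficients
    # Soft-threshold the detail coefficients, where most of the noise lives.
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    # Inverse Haar transform.
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
clean = 40.0 + 5.0 * np.exp(-((t - 0.5) / 0.05) ** 2)   # temperature hot spot
noisy = clean + rng.normal(0.0, 0.5, t.size)
denoised = haar_denoise(noisy, threshold=0.7)
```

    A multi-level decomposition would suppress the remaining noise in the approximation band as well; one level is enough to show the mechanism.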

  17. A new method to reduce the statistical and systematic uncertainty of chance coincidence backgrounds measured with waveform digitizers

    DOE PAGES

    O'Donnell, John M.

    2015-06-30

    We present a new method for measuring chance-coincidence backgrounds during the collection of coincidence data. The method relies on acquiring data with near-zero dead time, which is now realistic due to the increasing deployment of flash electronic-digitizer (waveform digitizer) techniques. An experiment designed to use this new method can acquire more coincidence data with a much reduced statistical fluctuation of the measured background. A statistical analysis is presented and used to derive a figure of merit for the new method; factors of four improvement over other analyses are realistic. The technique is illustrated with preliminary data taken as part of a program to make new measurements of the prompt fission neutron spectra at the Los Alamos Neutron Science Center. These measurements are expected to occur in a regime where the maximum figure of merit will be exploited.

  18. Development of probabilistic design method for annular fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozawa, Takayuki

    2007-07-01

    The increase of linear power and burn-up during reactor operation is considered one measure to ensure the utility of future fast reactors; to this end, the application of annular oxide fuels is under consideration. The annular fuel design code CEPTAR was developed at the Japan Atomic Energy Agency (JAEA) and verified against extensive irradiation experience with oxide fuels. In addition, the probabilistic fuel design code BORNFREE was developed to provide safe and reasonable fuel designs and to evaluate design margins quantitatively. This study aimed at the development of a probabilistic design method for annular oxide fuels; the method was implemented in the developed BORNFREE-CEPTAR code, and the code was used to make a probabilistic evaluation of the permissible linear power. (author)

  19. Anthropometry of Brazilian Air Force pilots.

    PubMed

    da Silva, Gilvan V; Halpern, Manny; Gordon, Claire C

    2017-10-01

    Anthropometric data are essential for the design of military equipment, including the sizing of aircraft cockpits and personal gear. Currently, there are no anthropometric databases specific to Brazilian military personnel. The aim of this study was to create a Brazilian anthropometric database of Air Force pilots. The methods, protocols, descriptions, definitions, landmarks, tools and measurement procedures followed the instructions outlined in the Measurer's Handbook: US Army and Marine Corps Anthropometric Surveys, 2010-2011 - NATICK/TR-11/017. The participants were measured countrywide, in all five Brazilian geographical regions. Thirty-nine anthropometric measurements related to cockpit design were selected. The results for 2133 males and 206 females aged 16-52 years constitute a set of basic data for cockpit design, space arrangement and adjustment issues, protective gear and equipment design, as well as for digital human modelling. Another important implication is that this study can be considered a starting point for reducing gender bias in women's careers as pilots. Practitioner Summary: This paper describes the first large-scale anthropometric survey of Brazilian Air Force pilots and the development of the related database. This study provides critical data for improving aircraft cockpit design for ergonomics and comprehensive pilot accommodation, protective gear and uniform design, as well as digital human modelling.

  20. Diagnostic layer integration in FPGA-based pipeline measurement systems for HEP experiments

    NASA Astrophysics Data System (ADS)

    Pozniak, Krzysztof T.

    2007-08-01

    Integrated triggering and data acquisition systems for high energy physics experiments may be considered as fast, multichannel, synchronous, distributed, pipeline measurement systems. A considerable extension of functional, technological and monitoring demands, which has recently been imposed on them, forced a common usage of large field-programmable gate array (FPGA), digital signal processing-enhanced matrices and fast optical transmission for their realization. This paper discusses modelling, design, realization and testing of pipeline measurement systems. A distribution of synchronous data stream flows is considered in the network. A general functional structure of a single network node is presented. A suggested, novel block structure of the node model facilitates full implementation in the FPGA chip, circuit standardization and parametrization, as well as integration of functional and diagnostic layers. A general method for pipeline system design was derived. This method is based on a unified model of the synchronous data network node. A few examples of practically realized, FPGA-based, pipeline measurement systems were presented. The described systems were applied in ZEUS and CMS.

  1. Measures of precision for dissimilarity-based multivariate analysis of ecological communities.

    PubMed

    Anderson, Marti J; Santana-Garcon, Julia

    2015-01-01

    Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a PERMANOVA model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. © 2014 The Authors. Ecology Letters published by John Wiley & Sons Ltd and CNRS.
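
    For a single group of samples, the MultSE quantity described above reduces to a simple function of the pairwise dissimilarities. A minimal Python sketch (using Euclidean distance for concreteness, where an ecological dissimilarity such as Bray-Curtis would typically be substituted, and omitting the double-resampling uncertainty step):

```python
import numpy as np

def mult_se(data):
    """Pseudo multivariate standard error (MultSE) for one group.

    data: (n, p) matrix of n samples by p variables. The pseudo variance
    about the centroid is the sum of squared pairwise dissimilarities
    divided by n*(n-1); MultSE is sqrt(variance / n).
    """
    X = np.asarray(data, dtype=float)
    n = X.shape[0]
    d2 = 0.0
    for i in range(n):                     # sum of squared dissimilarities,
        for j in range(i + 1, n):          # over pairs i < j
            d2 += np.sum((X[i] - X[j]) ** 2)
    v = d2 / (n * (n - 1))                 # pseudo variance about the centroid
    return float(np.sqrt(v / n))           # pseudo SE of the centroid position
```

    With Euclidean distances this coincides with the ordinary standard error of the centroid, which is a useful sanity check before swapping in a non-Euclidean dissimilarity.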

  2. Placebo non-response measure in sequential parallel comparison design studies.

    PubMed

    Rybin, Denis; Doros, Gheorghe; Pencina, Michael J; Fava, Maurizio

    2015-07-10

    The Sequential Parallel Comparison Design (SPCD) is one of the novel approaches addressing placebo response. The analysis of SPCD data typically classifies subjects as 'placebo responders' or 'placebo non-responders'. Most current methods employed for analysis of SPCD data utilize only a part of the data collected during the trial. A repeated measures model was proposed for analysis of continuous outcomes that permitted the inclusion of information from all subjects into the treatment effect estimation. We describe here a new approach using a weighted repeated measures model that further improves the utilization of data collected during the trial, allowing the incorporation of information that is relevant to the placebo response, and dealing with the problem of possible misclassification of subjects. Our simulations show that when compared to the unweighted repeated measures model method, our approach performs as well or, under certain conditions, better, in preserving the type I error, achieving adequate power and minimizing the mean squared error. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Analysis of X-Ray Line Spectra from a Transient Plasma Under Solar Flare Conditions - Part Three - Diagnostics for Measuring Electron Temperature and Density

    NASA Astrophysics Data System (ADS)

    Sylwester, J.; Mewe, R.; Schrijver, J.

    1980-06-01

    In this paper, the third in a series dealing with plasmas out of equilibrium, we present quantitative methods for the analysis of non-stationary flare plasma parameters. The method is designed for the interpretation of spectra from the SMM XRP Bent Crystal Spectrometer. Our analysis is based on measurements of 11 specific lines in the 1.77-3.3 Å range. Using the proposed method we are able to derive information about the temperature, density, emission measure, and other related parameters of the flare plasma. It is shown that the measurements to be made by the XRP can give detailed information on these parameters and their time evolution. The method is then tested on some artificial flares, and proves to be useful and accurate.

  4. Methods for a multicenter randomized trial for mixed urinary incontinence: rationale and patient-centeredness of the ESTEEM trial.

    PubMed

    Sung, Vivian W; Borello-France, Diane; Dunivan, Gena; Gantz, Marie; Lukacz, Emily S; Moalli, Pamela; Newman, Diane K; Richter, Holly E; Ridgeway, Beri; Smith, Ariana L; Weidner, Alison C; Meikle, Susan

    2016-10-01

    Mixed urinary incontinence (MUI) can be a challenging condition to manage. We describe the protocol design and rationale for the Effects of Surgical Treatment Enhanced with Exercise for Mixed Urinary Incontinence (ESTEEM) trial, designed to compare a combined conservative and surgical treatment approach versus surgery alone for improving patient-centered MUI outcomes at 12 months. ESTEEM is a multisite, prospective, randomized trial of female participants with MUI randomized to a standardized perioperative behavioral/pelvic floor exercise intervention plus midurethral sling versus midurethral sling alone. We describe our methods and four challenges encountered during the design phase: defining the study population, selecting relevant patient-centered outcomes, determining sample size estimates using a patient-reported outcome measure, and designing an analysis plan that accommodates MUI failure rates. A central theme in the design was patient centeredness, which guided many key decisions. Our primary outcome is patient-reported MUI symptoms measured using the Urogenital Distress Inventory (UDI) score at 12 months. Secondary outcomes include quality of life, sexual function, cost-effectiveness, time to failure, and need for additional treatment. The final study design was implemented in November 2013 across eight clinical sites in the Pelvic Floor Disorders Network. As of 27 February 2016, 433 of the 472 targeted participants had been randomized. We describe the ESTEEM protocol and our methods for reaching consensus on methodological challenges in designing a trial for MUI while maintaining the patient perspective at the core of key decisions. This trial will provide information that can directly impact patient care and clinical decision making.

  5. Methods of and apparatus for radiation measurement, and specifically for in vivo radiation measurement

    DOEpatents

    Huffman, D.D.; Hughes, R.C.; Kelsey, C.A.; Lane, R.; Ricco, A.J.; Snelling, J.B.; Zipperian, T.E.

    1986-08-29

    Methods of and apparatus for in vivo radiation measurements rely on a MOSFET dosimeter of high radiation sensitivity which operates in both the passive mode to provide an integrated dose detector and active mode to provide an irradiation rate detector. A compensating circuit with a matched unirradiated MOSFET is provided to operate at a current designed to eliminate temperature dependence of the device. Preferably, the MOSFET is rigidly mounted in the end of a miniature catheter and the catheter is implanted in the patient proximate the radiation source.

  6. Attitude Determination Using a MEMS-Based Flight Information Measurement Unit

    PubMed Central

    Ma, Der-Ming; Shiau, Jaw-Kuen; Wang, I.-Chiang; Lin, Yu-Heng

    2012-01-01

    Obtaining precise attitude information is essential for aircraft navigation and control. This paper presents the results of attitude determination using an in-house designed low-cost MEMS-based flight information measurement unit. This study proposes a quaternion-based extended Kalman filter that integrates the traditional quaternion and gravitational force decomposition methods into an attitude determination algorithm. The proposed extended Kalman filter uses the evolution of the four elements of the quaternion as the dynamic model, with the four elements as the states of the filter. The attitude angles obtained from the gravity computations and from the electronic magnetic sensors are regarded as the measurements of the filter. The immeasurable gravity accelerations are deduced from the outputs of the three-axis accelerometers, the relative accelerations, and the accelerations due to body rotation. The unit-norm constraint on the quaternion is treated as a perfect measurement and is integrated into the filter computation. Approximations of the time-varying noise variances of the measured signals are discussed and presented in detail through Taylor series expansions. The algorithm is intuitive, easy to implement, and reliable for long-term high dynamic maneuvers. Moreover, a set of flight test data is used to demonstrate the success and practicality of the proposed algorithm and the filter design. PMID:22368455
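
    The quaternion kinematics that serve as the filter's dynamic model can be sketched as a bare prediction step (this is not the authors' full extended Kalman filter; the integration scheme and test rate below are illustrative):

```python
import numpy as np

def propagate_quaternion(q, omega, dt):
    """One Euler step of q_dot = 0.5 * Omega(omega) * q, the quaternion
    kinematic model (q = [w, x, y, z]; omega = body rates [p, q, r], rad/s)."""
    p_, q_, r_ = omega
    big_omega = np.array([
        [0.0, -p_, -q_, -r_],
        [p_,  0.0,  r_, -q_],
        [q_, -r_,  0.0,  p_],
        [r_,  q_, -p_,  0.0],
    ])
    q_new = q + 0.5 * dt * big_omega @ q
    return q_new / np.linalg.norm(q_new)   # enforce the unit-norm constraint

# Rotate about the body z-axis at 90 deg/s for 1 s, in 1000 small steps.
q = np.array([1.0, 0.0, 0.0, 0.0])
rate = np.array([0.0, 0.0, np.radians(90.0)])
for _ in range(1000):
    q = propagate_quaternion(q, rate, 1.0 / 1000.0)
# Yaw recovered from the quaternion; should be close to 90 degrees.
yaw = np.degrees(np.arctan2(2.0 * (q[0] * q[3] + q[1] * q[2]),
                            1.0 - 2.0 * (q[2] ** 2 + q[3] ** 2)))
```

    Renormalizing after each step is the simplest way to respect the constraint that the paper instead handles as a perfect measurement inside the filter.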

  7. Attitude determination using a MEMS-based flight information measurement unit.

    PubMed

    Ma, Der-Ming; Shiau, Jaw-Kuen; Wang, I-Chiang; Lin, Yu-Heng

    2012-01-01

    Obtaining precise attitude information is essential for aircraft navigation and control. This paper presents the results of attitude determination using an in-house designed low-cost MEMS-based flight information measurement unit. This study proposes a quaternion-based extended Kalman filter that integrates the traditional quaternion and gravitational force decomposition methods into an attitude determination algorithm. The proposed extended Kalman filter uses the evolution of the four elements of the quaternion as the dynamic model, with the four elements as the states of the filter. The attitude angles obtained from the gravity computations and from the electronic magnetic sensors are regarded as the measurements of the filter. The immeasurable gravity accelerations are deduced from the outputs of the three-axis accelerometers, the relative accelerations, and the accelerations due to body rotation. The unit-norm constraint on the quaternion is treated as a perfect measurement and is integrated into the filter computation. Approximations of the time-varying noise variances of the measured signals are discussed and presented in detail through Taylor series expansions. The algorithm is intuitive, easy to implement, and reliable for long-term high dynamic maneuvers. Moreover, a set of flight test data is used to demonstrate the success and practicality of the proposed algorithm and the filter design.

  8. Radiotracer investigation in gold leaching tanks.

    PubMed

    Dagadu, C P K; Akaho, E H K; Danso, K A; Stegowski, Z; Furman, L

    2012-01-01

    Measurement and analysis of residence time distribution (RTD) is a classical method to investigate the performance of chemical reactors. In the present investigation, the radioactive tracer technique was used to measure the RTD of the aqueous phase in a series of gold leaching tanks at the Damang gold processing plant in Ghana. The objective of the investigation was to measure the effective volume of each tank and validate the design data after recent process intensification, or revamping, of the plant. I-131 was used as the radioactive tracer; it was instantaneously injected into the feed stream of the first tank and monitored at the outlets of the different tanks. Both sampling and online measurement methods were used to monitor the tracer concentration, and the two methods provided identical RTD curves. The mean residence time (MRT) and effective volume of each tank were estimated. The tanks-in-series model with exchange between active and stagnant volumes was found suitable to describe the flow structure of the aqueous phase in the tanks. The estimated effective volumes of the tanks and the high degree of mixing in the tanks validated the design data and confirmed the plant engineers' expectations after intensification of the process. Copyright © 2011 Elsevier Ltd. All rights reserved.
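
    The mean residence time is simply the first moment of the measured tracer response, and the effective volume follows from the flow rate. A minimal Python sketch on a hypothetical single-tank (ideal CSTR) response, with the time constant and flow rate invented for illustration:

```python
import numpy as np

def mean_residence_time(t, c):
    """MRT = integral(t*C dt) / integral(C dt); t must be uniformly spaced,
    so the time step cancels in the ratio."""
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    return float(np.sum(t * c) / np.sum(c))

# Hypothetical tracer response of one ideally mixed tank: C(t) = exp(-t/tau).
tau = 30.0                                # minutes (assumed)
t = np.linspace(0.0, 600.0, 6001)
c = np.exp(-t / tau)
mrt = mean_residence_time(t, c)           # close to tau for an ideal CSTR
flow_rate = 12.0                          # m^3/min (assumed)
effective_volume = flow_rate * mrt        # V = Q * MRT
```

    Comparing the effective volume obtained this way with the geometric tank volume is what reveals stagnant zones or bypassing.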

  9. A reference Pelton turbine design

    NASA Astrophysics Data System (ADS)

    Solemslie, B. W.; Dahlhaug, O. G.

    2012-09-01

    The designs of hydraulic turbines are usually closely kept corporate secrets, which makes innovation and co-operation between academic institutions on a specific turbine geometry difficult. A Ph.D. project at the Waterpower Laboratory, NTNU, aims to design several model Pelton turbines for which all measurements, simulations, the design strategy and the design software, in addition to the physical model, will be publicly available. This paper briefly describes the methods and the test rig to be used in the project. The design will be based on empirical data, with NURBS as the descriptive method for the turbine geometry; CFX and SPH simulations will also be included in the design process. Each turbine designed and produced in connection with this project will build on the experience and knowledge gained from the previous designs. The first design follows the philosophy of keeping a near-constant relative velocity through the bucket.

  10. An experimental evaluation of a new designed apparatus (NDA) for the rapid measurement of impaired motor function in rats.

    PubMed

    Jarrahi, M; Sedighi Moghadam, B; Torkmandi, H

    2015-08-15

    Assessment of the ability of rats to balance on a rotarod apparatus (ROTA) is frequently used as a measure of impaired motor system function. These methods have some disadvantages, such as sensing endurance rather than motor coordination, and because their sensitivity is low, more animals are needed to obtain statistically significant results. We have designed and tested a new designed apparatus (NDA) to measure motor system function in rats. Our system consists of a glass box containing four beams placed 1 cm apart, two electrical motors for rotating the beams, and a camera to record the movements of the rats. The rotation speed of the beams is digitally adjustable between 0 and 50 revolutions per minute. We evaluated experimentally the capability of the NDA for the rapid measurement of impaired motor function in rats, and demonstrated that its sensitivity increases at faster rotation speeds and may exceed that of ROTA for evaluating impaired motor system function. Compared with a previous version of this task, the NDA provides a more efficient method for testing rodents in studies of motor system function after motor nervous system impairment. In summary, the NDA allows highly efficient monitoring of rat motor system function and may be more sensitive than ROTA. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Measurement of Angle Kappa Using Ultrasound Biomicroscopy and Corneal Topography.

    PubMed

    Yeo, Joon Hyung; Moon, Nam Ju; Lee, Jeong Kyu

    2017-06-01

    To introduce a new convenient and accurate method to measure the angle kappa using ultrasound biomicroscopy (UBM) and corneal topography. Data from 42 eyes (13 males and 29 females) were analyzed in this study. The angle kappa was measured using Orbscan II and calculated with UBM and corneal topography. The angle kappa of the dominant eye was compared with measurements by Orbscan II. The mean patient age was 36.4 ± 13.8 years. The average angle kappa measured by Orbscan II was 3.98° ± 1.12°, while the average angle kappa calculated with UBM and corneal topography was 3.19° ± 1.15°. The difference in angle kappa measured by the two methods was statistically significant (p < 0.001). The two methods showed good reliability (intraclass correlation coefficient, 0.671; p < 0.001). Bland-Altman plots were used to demonstrate the agreement between the two methods. We designed a new method using UBM and corneal topography to calculate the angle kappa. This method is convenient to use and allows for measurement of the angle kappa without an expensive device. © 2017 The Korean Ophthalmological Society

  12. Freeform lens design for LED collimating illumination.

    PubMed

    Chen, Jin-Jia; Wang, Te-Yuan; Huang, Kuang-Lung; Liu, Te-Shu; Tsai, Ming-Da; Lin, Chin-Tang

    2012-05-07

    We present a simple freeform lens design method for an application to LED collimating illumination. The method is derived from a basic geometric-optics analysis and construction approach. By using this method, a highly collimating lens with LED chip size of 1.0 mm × 1.0 mm and optical simulation efficiency of 86.5% under a view angle of ± 5 deg is constructed. To verify the practical performance of the lens, a prototype of the collimator lens is also made, and an optical efficiency of 90.3% with a beam angle of 4.75 deg is measured.

  13. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
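
    A common way to determine the MPP is the Hasofer-Lind/Rackwitz-Fiessler iteration. The sketch below applies it to a made-up linear limit state in standard normal space and converts the resulting reliability index to a FORM failure probability (the limit state and its coefficients are illustrative, not from the paper):

```python
import numpy as np
from math import erf, sqrt

def hl_rf(g, grad_g, n_dims, iters=50):
    """Hasofer-Lind/Rackwitz-Fiessler search for the most probable point
    (MPP) in standard normal space; returns (u_star, beta = ||u_star||)."""
    u = np.zeros(n_dims)
    for _ in range(iters):
        gv, gr = g(u), grad_g(u)
        # Project onto the linearized limit-state surface g(u) = 0.
        u = ((gr @ u - gv) / (gr @ gr)) * gr
    return u, float(np.linalg.norm(u))

def failure_probability(beta):
    """FORM estimate Pf ~ Phi(-beta), via the standard normal CDF."""
    return 0.5 * (1.0 + erf(-beta / sqrt(2.0)))

# Hypothetical limit state in standard normal space: g(u) = 3 - u1 - 2*u2.
g = lambda u: 3.0 - u[0] - 2.0 * u[1]
grad_g = lambda u: np.array([-1.0, -2.0])
u_star, beta = hl_rf(g, grad_g, 2)     # beta = 3/sqrt(5) for this linear g
pf = failure_probability(beta)
```

    For a linear limit state the iteration converges in one step; for nonlinear limit states it repeats the linearized projection, and SORM adds a curvature correction to the same beta.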

  14. Dynamic tracking down-conversion signal processing method based on reference signal for grating heterodyne interferometer

    NASA Astrophysics Data System (ADS)

    Wang, Guochao; Yan, Shuhua; Zhou, Weihong; Gu, Chenhui

    2012-08-01

    Traditional grating-based displacement measurement systems, which rely purely on fringe intensity for fringe counting and subdivision, impose strict demands on signal quality and measurement conditions, making nanometer-precision measurement difficult to realize. Displacement measurement with a dual-wavelength, single-grating design exploits single-grating diffraction theory and heterodyne interference theory, resolving the contradiction between large range and high precision in grating displacement measurement. To obtain nanometer resolution and precision, high-power subdivision of the interference fringes must be realized accurately. A dynamic tracking down-conversion signal processing method based on the reference signal is proposed. Accordingly, a digital phase measurement module realizing high-power subdivision was designed on a field programmable gate array (FPGA), together with a dynamic tracking down-conversion module using a phase-locked loop (PLL). Experiments validated that the carrier signal after down-conversion constantly remains close to 100 kHz, and that the phase-measurement resolution and phase precision are better than 0.05 and 0.2 deg, respectively. The corresponding displacement resolution and displacement precision are 0.139 and 0.556 nm, respectively.
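
    The relation between the quoted phase figures and displacement figures can be reproduced with a simple conversion, assuming one full 360-degree signal period corresponds to one grating period (the period value and folding factor below are assumptions; the true factor depends on the diffraction configuration):

```python
def phase_to_displacement(phase_deg, grating_period_nm=1000.0, folding=1):
    """Convert a measured interference phase (degrees) to displacement (nm),
    assuming one 360-degree fringe per (grating period / folding factor)."""
    return (phase_deg / 360.0) * grating_period_nm / folding

# With a 1 um grating period, 0.05 deg of phase resolution maps to roughly
# 0.139 nm of displacement, and 0.2 deg of phase precision to roughly 0.556 nm,
# consistent with the figures quoted above.
resolution_nm = phase_to_displacement(0.05)
precision_nm = phase_to_displacement(0.2)
```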

  15. Remediation of Math Anxiety in Preservice Elementary School Teachers

    ERIC Educational Resources Information Center

    Dunkle, Susan M.

    2010-01-01

    The purpose of this study was to measure the level of math anxiety in preservice elementary teachers, and then to determine if remediation methods would lower the measured level of anxiety in these same preservice teachers. The 10-day study provided an intense remediation using a time-series design to measure change on the Revised Math Anxiety…

  16. The Other Side of Method Bias: The Perils of Distinct Source Research Designs

    ERIC Educational Resources Information Center

    Kammeyer-Mueller, John; Steel, Piers D. G.; Rubenstein, Alex

    2010-01-01

    Common source bias has been the focus of much attention. To minimize the problem, researchers have sometimes been advised to take measurements of predictors from one observer and measurements of outcomes from another observer or to use separate occasions of measurement. We propose that these efforts to eliminate biases due to common source…

  17. Nursing Home Staff Turnover: Impact on Nursing Home Compare Quality Measures

    ERIC Educational Resources Information Center

    Castle, Nicholas G.; Engberg, John; Men, Aiju

    2007-01-01

    Purpose: We used data from a large sample of nursing homes to examine the association between staff turnover and quality. Design and Methods: The staff turnover measures came from primary data collected from 2,840 nursing homes in 2004 (representing a 71% response rate). Data collection included measures for nurse aides, licensed practical nurses,…

  18. An improved offset generator developed for Allan deviation measurement of ultra stable frequency standards

    NASA Technical Reports Server (NTRS)

    Hamell, Robert L.; Kuhnle, Paul F.; Sydnor, Richard L.

    1992-01-01

    Measuring the performance of ultra stable frequency standards such as the Superconducting Cavity Maser Oscillator (SCMO) necessitates improvement of some test instrumentation. The frequency stability test equipment used at JPL includes a 1 Hz Offset Generator that produces a beat frequency between the pair of 100 MHz signals being compared. The noise floor of the measurement system using the current Offset Generator is adequate to characterize the stability of hydrogen masers, but not that of the SCMO. A new Offset Generator with improved stability was designed and tested at JPL. With this Offset Generator and a new Zero Crossing Detector, recently developed at JPL, the measurement noise floor was reduced by a factor of 5.5 at 1 second tau, 3.0 at 1000 seconds, and 9.4 at 10,000 seconds, compared with the previous design. In addition to the new circuit designs of the Offset Generator and Zero Crossing Detector, tighter control of the measurement equipment environment was required to achieve this improvement. The design of the new Offset Generator is described, along with details of the environmental control methods used.
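
    The Allan deviation that such measurement systems target can be computed from fractional-frequency data as follows (a minimal non-overlapping estimator; the white-noise level is invented to show the expected 1/sqrt(tau) slope):

```python
import numpy as np

def allan_deviation(y, m=1):
    """Non-overlapping Allan deviation of fractional-frequency data y
    at averaging factor m (tau = m * tau0):
    sigma_y(tau) = sqrt(0.5 * mean((ybar[k+1] - ybar[k])^2))."""
    y = np.asarray(y, dtype=float)
    n = (y.size // m) * m
    ybar = y[:n].reshape(-1, m).mean(axis=1)   # averages over each tau window
    diffs = np.diff(ybar)
    return float(np.sqrt(0.5 * np.mean(diffs ** 2)))

# White frequency noise: the Allan deviation falls as 1/sqrt(tau).
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1e-13, 100_000)
adev_1 = allan_deviation(y, m=1)      # near 1e-13
adev_100 = allan_deviation(y, m=100)  # near 1e-14, i.e. ~10x lower
```

    The measurement-system noise floor discussed above is what bounds how small a sigma_y(tau) can be resolved for a device under test.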

  19. Modified T-history method for measuring thermophysical properties of phase change materials at high temperature

    NASA Astrophysics Data System (ADS)

    Omaraa, Ehsan; Saman, Wasim; Bruno, Frank; Liu, Ming

    2017-06-01

    Latent heat storage using phase change materials (PCMs) can store large amounts of energy over a narrow temperature difference during phase transition. The thermophysical properties of PCMs, such as latent heat, specific heat, and melting and solidification temperatures, need to be determined with high precision for designing and estimating the cost of latent heat storage systems. The existing laboratory standard methods, such as differential thermal analysis (DTA) and differential scanning calorimetry (DSC), use a small sample size (1-10 mg) to measure thermophysical properties, which makes them suitable only for homogeneous materials. In addition, such a small sample can have thermophysical properties different from those of the bulk material, and may be inadequate for evaluating the properties of mixtures. To avoid these drawbacks, the temperature-history (T-history) method can be used with bulk quantities of PCM salt mixtures to characterize PCMs. This paper presents a modified T-history setup, designed and built at the University of South Australia, to measure the melting point, heat of fusion, specific heat, degree of supercooling and phase separation of salt mixtures over a temperature range between 200 °C and 400 °C. Sodium nitrate (NaNO3) was used to verify the accuracy of the new setup.
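
    At the core of a T-history evaluation is an energy balance: under a lumped-capacitance assumption, the area between a cooling curve and ambient temperature is proportional to the heat released, and the water reference calibrates the tube's heat-loss coefficient. A minimal sketch of that balance (all masses, temperatures and coefficients below are invented for illustration, and real data would require supercooling and sensible-heat corrections):

```python
import numpy as np

def curve_area(t, temp, t_ambient):
    """Trapezoidal integral of (T - T_ambient) dt over the record."""
    d = np.asarray(temp, dtype=float) - t_ambient
    return float(np.sum((d[:-1] + d[1:]) * np.diff(np.asarray(t))) / 2.0)

def calibrate_loss_coefficient(m_w, c_w, t, temp_w, t_ambient):
    """hA from the water reference: m*c*(T_start - T_end) = hA * area."""
    released = m_w * c_w * (temp_w[0] - temp_w[-1])
    return released / curve_area(t, temp_w, t_ambient)

def latent_heat(hA, m_pcm, t, temp_plateau, t_ambient):
    """Latent heat (J/kg) released over the phase-change plateau."""
    return hA * curve_area(t, temp_plateau, t_ambient) / m_pcm

# Synthetic water reference cooling exponentially from 105 C toward 25 C.
t = np.linspace(0.0, 600.0, 6001)                 # seconds
m_w, c_w, t_amb = 0.2, 4186.0, 25.0               # kg, J/(kg K), deg C
hA_true = 0.5                                     # W/K (assumed)
temp_w = t_amb + 80.0 * np.exp(-t * hA_true / (m_w * c_w))
hA = calibrate_loss_coefficient(m_w, c_w, t, temp_w, t_amb)

# PCM plateau held at 125 C for 600 s: released heat = hA * 100 K * 600 s.
temp_pcm = np.full_like(t, 125.0)
h_fusion = latent_heat(hA, 0.1, t, temp_pcm, t_amb)
```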

  20. 78 FR 35038 - Proposed Information Collection Activity; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-11

    ..., reliable, and transparent method for identifying high-quality programs that can receive continuing five... the system is working. The study will employ a mixed-methods design that integrates and layers administrative and secondary data sources, observational measures, and interviews to develop a rich knowledge...

  1. Agent Reasoning Transparency: The Influence of Information Level on Automation-Induced Complacency

    DTIC Science & Technology

    2017-06-30

Contents excerpt: Surveys and Tests; 3.3.4 Experimental Design and Performance Measures; 3.3.5 Procedure; 3.4 Results; 3.4.1 Complacent Behavior, Primary Task...Method; 4.3.1 Participants; 4.3.2 Apparatus; 4.3.3 Surveys and Tests; 4.3.4 Experimental Design and Performance Measures; 4.3.5...Discussion; 5.5 Conclusion; 6. References; Appendix A. Demographics Questionnaire; Appendix B. Attentional Control Survey

  2. Automated iodine monitor system. [for aqueous solutions

    NASA Technical Reports Server (NTRS)

    1973-01-01

The feasibility of a direct spectrophotometric measurement of iodine in water was established. An iodine colorimeter was built to demonstrate the practicality of this technique. The specificity of this method was verified when applied to an on-line system where a reference solution cannot be used, and a preliminary design is presented for an automated iodine measuring and controlling system meeting the desired specifications. An automated iodine monitor/controller system based on this preliminary design was built, tested, and delivered to the Johnson Space Center.

  3. [Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure].

    PubMed

    Yokohama, Noriya

    2013-07-01

This report describes the design of the architecture and a performance study of a parallel computing environment for Monte Carlo simulation in particle therapy planning, using a high performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed a speed approximately 28 times faster than with a single-thread architecture, combined with improved stability. A study of methods for optimizing system operations also indicated lower cost.
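The reported speedup rests on the fact that Monte Carlo histories are independent, so batches can be distributed across workers and the tallies pooled afterwards. A minimal sketch of this embarrassingly parallel pattern using Python's multiprocessing; a toy π estimator stands in for the particle-transport kernel, which the abstract does not describe:

```python
import random
from multiprocessing import Pool

def mc_task(args):
    """One independent batch of Monte Carlo histories (here: dart throws
    into the unit square, counting hits inside the quarter circle)."""
    seed, n = args
    rng = random.Random(seed)          # per-batch seed keeps runs reproducible
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return hits

def parallel_pi(n_tasks=4, n_per_task=100_000, workers=2):
    """Distribute batches across worker processes and pool the tallies."""
    with Pool(workers) as pool:
        hits = pool.map(mc_task, [(seed, n_per_task) for seed in range(n_tasks)])
    return 4.0 * sum(hits) / (n_tasks * n_per_task)

if __name__ == "__main__":
    print(f"pi ~ {parallel_pi():.4f}")
```

Scaling the worker count up (the cloud HPC instance in the study) speeds up the batch phase roughly linearly, since only the final reduction is serial.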

  4. A Review of Computerized Team Performance Measures to Identify Military-Relevant, Low-to-Medium Fidelity Tests of Small Group Effectiveness during Shared Information Processing

    DTIC Science & Technology

    2012-05-01

    Alexandria, Virginia 22314. Orders will be expedited if placed through the librarian or other person designated to request documents from DTIC...an official Department of the Army position, policy, or decision, unless so designated by other official documentation. Citation of trade names in...teamwork and evaluate the effectiveness of team training methods (Baker and Salas, 1997). Additionally, good measures of team performance should aid the

  5. Computer-Aided Design of Low-Noise Microwave Circuits

    NASA Astrophysics Data System (ADS)

    Wedge, Scott William

    1991-02-01

    Devoid of most natural and manmade noise, microwave frequencies have detection sensitivities limited by internally generated receiver noise. Low-noise amplifiers are therefore critical components in radio astronomical antennas, communications links, radar systems, and even home satellite dishes. A general technique to accurately predict the noise performance of microwave circuits has been lacking. Current noise analysis methods have been limited to specific circuit topologies or neglect correlation, a strong effect in microwave devices. Presented here are generalized methods, developed for computer-aided design implementation, for the analysis of linear noisy microwave circuits comprised of arbitrarily interconnected components. Included are descriptions of efficient algorithms for the simultaneous analysis of noisy and deterministic circuit parameters based on a wave variable approach. The methods are therefore particularly suited to microwave and millimeter-wave circuits. Noise contributions from lossy passive components and active components with electronic noise are considered. Also presented is a new technique for the measurement of device noise characteristics that offers several advantages over current measurement methods.

  6. OSM-Classic : An optical imaging technique for accurately determining strain

    NASA Astrophysics Data System (ADS)

    Aldrich, Daniel R.; Ayranci, Cagri; Nobes, David S.

OSM-Classic is a program designed in MATLAB® to provide a method of accurately determining strain in a test sample using an optical imaging technique. Measuring strain for the mechanical characterization of materials is most commonly performed with extensometers, LVDTs (linear variable differential transformers), and strain gauges; however, these strain measurement methods suffer from their fragile nature, and it is not always easy to attach these devices to the material under test. To alleviate these potential problems, an optical approach that does not require contact with the specimen can be implemented to measure the strain. OSM-Classic is software that interrogates a series of images to determine elongation in a test sample and hence the strain of the specimen. It was designed to provide a graphical user interface that includes image processing with a dynamic region of interest. Additionally, the strain is calculated directly while providing active feedback during processing.

  7. Design of an impact evaluation using a mixed methods model--an explanatory assessment of the effects of results-based financing mechanisms on maternal healthcare services in Malawi.

    PubMed

    Brenner, Stephan; Muula, Adamson S; Robyn, Paul Jacob; Bärnighausen, Till; Sarker, Malabika; Mathanga, Don P; Bossert, Thomas; De Allegri, Manuela

    2014-04-22

In this article we present a study design to evaluate the causal impact of providing supply-side performance-based financing incentives in combination with a demand-side cash transfer component on equitable access to and quality of maternal and neonatal healthcare services. This intervention is introduced to selected emergency obstetric care facilities and catchment area populations in four districts in Malawi. Here we describe and discuss our study protocol with regard to the research aims, the local implementation context, and our rationale for selecting a mixed methods explanatory design with a quasi-experimental quantitative component. The quantitative research component consists of a controlled pre- and post-test design with multiple post-test measurements. This allows us to quantitatively measure 'equitable access to healthcare services' at the community level and 'healthcare quality' at the health facility level. Guided by a theoretical framework of causal relationships, we determined a number of input, process, and output indicators to evaluate both intended and unintended effects of the intervention. Overall causal impact estimates will result from a difference-in-difference analysis comparing selected indicators across intervention and control facilities/catchment populations over time. To further explain the heterogeneity of quantitatively observed effects and to understand the experiential dimensions of financial incentives on clients and providers, we designed a qualitative component in line with the overall explanatory mixed methods approach. This component consists of in-depth interviews and focus group discussions with providers, service users, non-users, and policy stakeholders.
In this explanatory design, a comprehensive understanding of expected and unexpected effects of the intervention on both access and quality will emerge through careful triangulation at two levels: across multiple quantitative elements, and across quantitative and qualitative elements. Combining a traditional quasi-experimental controlled pre- and post-test design with an explanatory mixed methods model permits an additional assessment of organizational and behavioral changes affecting complex processes. Through this impact evaluation approach, our design will not only create robust evidence measures for the outcome of interest, but also generate insights on how and why the investigated interventions produce certain intended and unintended effects, allowing for a more in-depth evaluation.
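The difference-in-difference estimator described above compares the change in an indicator over time in the intervention arm against the change in the control arm, so that time trends shared by both arms cancel out. A minimal sketch; the 2x2 group means below are illustrative, not study data:

```python
def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences impact estimate from group means:
    the intervention group's change minus the control group's change.
    Common time trends affecting both groups cancel out."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical service-coverage rates before/after the intervention
impact = diff_in_diff(treat_pre=0.40, treat_post=0.55,
                      control_pre=0.42, control_post=0.47)
print(f"estimated impact: {impact:+.2f}")  # change net of the secular trend
```

In practice the estimate would come from a regression with facility and time fixed effects rather than raw means, which also yields standard errors; the arithmetic of the estimator is the same.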

  8. Deriving depth-dependent light escape efficiency and optical Swank factor from measured pulse height spectra of scintillators

    PubMed Central

    Howansky, Adrian; Peng, Boyu; Lubinsky, Anthony R.; Zhao, Wei

    2017-01-01

    Purpose Pulse height spectroscopy has been used by investigators to deduce the imaging properties of scintillators. Pulse height spectra (PHS) are used to compute the Swank factor, which describes the variation in scintillator light output per x-ray interaction. The spread in PHS measured below the K-edge is related to the optical component of the Swank factor, i.e. variations in light escape efficiency from different depths of x-ray interaction in the scintillator, denoted ε̄(z). Optimizing scintillators for medical imaging applications requires understanding of these optical properties, as they determine tradeoffs between parameters such as x-ray absorption, light yield, and spatial resolution. This work develops a model for PHS acquisition such that the effect of measurement uncertainty can be removed. This method allows ε̄(z) to be quantified on an absolute scale and permits more accurate estimation of the optical Swank factor of scintillators. Methods The pulse height spectroscopy acquisition chain was modeled as a linear system of stochastic gain stages. Analytical expressions were derived for signal and noise propagation through the PHS chain, accounting for deterministic and stochastic aspects of x-ray absorption, scintillation, and light detection with a photomultiplier tube. The derived expressions were used to calculate PHS of thallium-doped cesium iodide (CsI) scintillators using parameters that were measured, calculated, or known from literature. PHS were measured at 25 and 32 keV of CsI samples designed with an optically-reflective or absorptive backing, with or without a fiber-optic faceplate (FOP), and with thicknesses ranging from 150–1000 μm. Measured PHS were compared with calculated PHS, then light escape model parameters were varied until measured and modeled results reached agreement. Resulting estimates of ε̄(z) were used to calculate each scintillator’s optical Swank factor. 
Results For scintillators of the same optical design, only minor differences in light escape efficiency were observed between samples with different thickness. As thickness increased, escape efficiency decreased by up to 20% for interactions furthest away from light collection. Optical design (i.e. backing and FOP) predominantly affected the magnitude and relative variation in ε̄(z). Depending on interaction depth and scintillator thickness, samples with an absorptive backing and FOP were estimated to yield 4.1–13.4 photons/keV. Samples with a reflective backing and FOP yielded 10.4–18.4 photons/keV, while those with a reflective backing and no FOP yielded 29.5–52.0 photons/keV. Optical Swank factors were approximately 0.9 and near-unity in samples featuring an absorptive or reflective backing, respectively. Conclusions This work uses a modeling approach to remove the noise introduced by the measurement apparatus from measured PHS. This method allows absolute quantification of ε̄(z) and more accurate estimation of the optical Swank factor of scintillators. The method was applied to CsI scintillators with different thickness and optical design, and determined that optical design more strongly affects ε̄(z) and Swank factor than differences in CsI thickness. Despite large variations in ε̄(z) between optical designs, the Swank factor of all evaluated samples is above 0.9. Information provided by this methodology can help validate Monte Carlo simulations of structured CsI and optimize scintillator design for x-ray imaging applications. PMID:28039881
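The Swank factor computed from such pulse height spectra has a standard moment form, I = m1²/(m0·m2), where m_n is the n-th moment of the light-output distribution: a spectrum concentrated at a single pulse height gives I = 1, and any spread lowers it. A minimal sketch computing it from a binned PHS; the histogram values below are hypothetical:

```python
def swank_factor(pulse_heights, counts):
    """Swank information factor I = m1^2 / (m0 * m2), where
    m_n = sum(x^n * c(x)) over the binned pulse height spectrum.
    I = 1 for a delta-function spectrum; broader spectra give I < 1."""
    m0 = sum(counts)
    m1 = sum(x * c for x, c in zip(pulse_heights, counts))
    m2 = sum(x * x * c for x, c in zip(pulse_heights, counts))
    return m1 * m1 / (m0 * m2)

# Hypothetical spectrum: pulse-height bin centres and their counts
x = [1.0, 2.0, 3.0]
n = [10.0, 80.0, 10.0]
print(round(swank_factor(x, n), 4))  # → 0.9524
```

The paper's contribution is upstream of this formula: removing the measurement apparatus's own broadening from the PHS so that the moments reflect the scintillator alone.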

  9. Challenges of developing an electro-optical system for measuring man's operational envelope

    NASA Technical Reports Server (NTRS)

    Woolford, B.

    1985-01-01

    In designing work stations and restraint systems, and in planning tasks to be performed in space, a knowledge of the capabilities of the operator is essential. Answers to such questions as whether a specific control or work surface can be reached from a given restraint and how much force can be applied are of particular interest. A computer-aided design system has been developed for designing and evaluating work stations, etc., and the Anthropometric Measurement Laboratory (AML) has been charged with obtaining the data to be used in design and modeling. Traditional methods of measuring reach and force are very labor intensive and require bulky equipment. The AML has developed a series of electro-optical devices for collecting reach data easily, in computer readable form, with portable systems. The systems developed, their use, and data collected with them are described.

  10. An instrument for the geometric attributes of metallic appearance.

    PubMed

    Christie, J S

    1969-09-01

With the use of a greater variety of metals and methods of finishing them, an increasing need to measure metallic appearance has developed in the automotive industry. A simple and easy-to-operate instrument has been designed to measure the geometric characteristics of reflectance related to metallic appearance: specular reflectance, distinctness of image, haze, and diffuseness. A series of selected aluminum and stainless steel specimens has been used to test the performance of the new instrument and of the older devices with which it has been compared. Functionally, the new instrument combines features of the Distinctness of Reflected Image (DORI) meter designed by Tingle and the abridged goniophotometer designed by Tingle and George. The design and operation of the new instrument have been simplified by the use of multiple receptor apertures with optical fiber light collectors. The measurement of a wide range of metal appearance characteristics has thus been achieved with mechanical and electrical circuit simplicity.

  11. A method for mandibular dental arch superimposition using 3D cone beam CT and orthodontic 3D digital model

    PubMed Central

    Park, Tae-Joon; Lee, Sang-Hyun

    2012-01-01

Objective The purpose of this study was to develop a superimposition method for the lower arch using 3-dimensional (3D) cone beam computed tomography (CBCT) images and orthodontic 3D digital models. Methods Integrated 3D CBCT images were acquired by substituting the dental portion of the 3D CBCT images with precise dental images from an orthodontic 3D digital model. Images were acquired before and after treatment. Two superimposition methods were designed. Surface superimposition was based on the basal bone structure of the mandible by surface-to-surface matching (best-fit method). Plane superimposition was based on anatomical structures (the mental and lingual foramina). For the evaluation, 10 landmarks including teeth and anatomic structures were assigned, and superimposition and measurement were performed 30 times to determine the more reproducible and reliable method. Results All landmarks demonstrated that the surface superimposition method produced relatively more consistent coordinate values. The mean distances of measured landmark values from their means were statistically significantly lower with the surface superimposition method. Conclusions Of the 2 superimposition methods designed for the evaluation of 3D changes in the lower arch, surface superimposition was the simpler, more reproducible, and more reliable method. PMID:23112948
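The surface-to-surface "best-fit" matching above is, at its core, a rigid registration problem: find the rotation and translation that minimize the distance between corresponding surface points. A minimal sketch of the least-squares step (the Kabsch algorithm) for point sets with known correspondences, which is a simplification of full surface matching where correspondences must also be estimated; the point data are synthetic:

```python
import numpy as np

def kabsch_align(P, Q):
    """Rigid best-fit of source points P (N x 3) onto target points Q,
    minimizing least-squares distance over rotations and translations."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    H = Pc.T @ Qc                           # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (R @ Pc.T).T + Q.mean(axis=0)    # P aligned into Q's frame

# Synthetic test: a point cloud and a rotated + translated copy of it
rng = np.random.default_rng(0)
P = rng.normal(size=(10, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])
aligned = kabsch_align(P, Q)
print(np.abs(aligned - Q).max())  # residual near machine precision
```

Iterating this step while re-estimating nearest-point correspondences gives the iterative-closest-point style matching that surface-based superimposition tools typically use.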

  12. Analysis and Design of ITER 1 MV Core Snubber

    NASA Astrophysics Data System (ADS)

    Wang, Haitian; Li, Ge

    2012-11-01

The core snubber, as a passive protection device, can suppress the arc current and absorb the energy stored in stray capacitance during electrical breakdown in the accelerating electrodes of the ITER NBI. In order to design the core snubber for ITER, the control parameters of the arc peak current were first analyzed by the Fink-Baker-Owren (FBO) method, which was used for designing the DIII-D 100 kV snubber. The B-H curve can be derived from the measured voltage and current waveforms, and the hysteresis loss of the core snubber can be derived using the revised parallelogram method. The core snubber can be represented in simplified form as an equivalent parallel resistance and inductance, which the FBO method neglects. A simulation code including the parallel equivalent resistance and inductance has been set up. The simulations and experiments show dramatically larger arc shorting currents due to the parallel inductance effect. The case shows that the core snubber designed using the FBO method gives a more compact design.

  13. A Program Manager’s Methodology for Developing Structured Design in Embedded Weapons Systems.

    DTIC Science & Technology

    1983-12-01

the hardware selection. This premise has been reiterated and substantiated by numerous case studies performed in recent years, among them Barry ...measures, rules of thumb, and analysis techniques, this method with early development by DeMarco is the basis for the Pressman design methodology...desired traits of a design based on the specifications generated, but does not include a procedure for realization of the design. Pressman (Ref. 5

  14. Structured surface reflector design for oblique incidence beam splitter at 610 GHz.

    PubMed

    Defrance, F; Casaletti, M; Sarrazin, J; Wiedner, M C; Gibson, H; Gay, G; Lefèvre, R; Delorme, Y

    2016-09-05

    An iterative alternate projection-based algorithm is developed to design structured surface reflectors to operate as beam splitters at GHz and THz frequencies. To validate the method, a surface profile is determined to achieve a reflector at 610 GHz that generates four equal-intensity beams towards desired directions of ±12.6° with respect to the specular reflection axis. A prototype is fabricated and the beam splitter behavior is experimentally demonstrated. Measurements confirm a good agreement (within 1%) with computer simulations using Feko, validating the method. The beam splitter at 610 GHz has a measured efficiency of 78% under oblique incidence illumination that ensures a similar intensity between the four reflected beams (variation of about 1%).

  15. Calculated and measured fields in superferric wiggler magnets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blum, E.B.; Solomon, L.

    1995-02-01

Although Klaus Halbach is widely known and appreciated as the originator of the computer program POISSON for electromagnetic field calculation, Klaus has always believed that analytical methods can give much more insight into the performance of a magnet than numerical simulation. Analytical approximations readily show how the different aspects of a magnet's design such as pole dimensions, current, and coil configuration contribute to the performance. These methods yield accuracies of better than 10%. Analytical methods should therefore be used when conceptualizing a magnet design. Computer analysis can then be used for refinement. A simple model is presented for the peak on-axis field of an electro-magnetic wiggler with iron poles and superconducting coils. The model is applied to the radiator section of the superconducting wiggler for the BNL Harmonic Generation Free Electron Laser. The predictions of the model are compared to the measured field and the results from POISSON.

  16. Paired methods to measure biofilm killing and removal: a case study with Penicillin G treatment of Staphylococcus aureus biofilm.

    PubMed

    Ausbacher, D; Lorenz, L; Pitts, B; Stewart, P S; Goeres, D M

    2018-03-01

Biofilms are microbial aggregates that show high tolerance to antibiotic treatments in vitro and in vivo. Killing and removal are both important in biofilm control, therefore methods that measure these two mechanisms were evaluated in a parallel experimental design. Kill was measured using the single tube method (ASTM method E2871) and removal was determined by video microscopy and image analysis using a new treatment flow cell. The advantage of the parallel test design is that both methods used biofilm covered coupons harvested from a CDC biofilm reactor, a well-established and standardized biofilm growth method. The control Staphylococcus aureus biofilms treated with growth medium increased by 0.6 logs during a 3-h contact time. Efficacy testing showed biofilms exposed to 400 μmol l⁻¹ penicillin G decreased by only 0.3 logs. Interestingly, time-lapse confocal scanning laser microscopy revealed that penicillin G treatment dispersed the biofilm despite being an ineffective killing agent. In addition, no biofilm removal was detected when assays were performed in 96-well plates. These results illustrate that biofilm behaviour and impact of treatments can vary substantially when assayed by different methods. Measuring both killing and removal with well-characterized methods will be crucial for the discovery of new anti-biofilm strategies. Biofilms are tolerant to antimicrobial treatments and can lead to persistent infections. Finding new anti-biofilm strategies and understanding their mode-of-action is therefore of high importance. Historically, antimicrobial testing has focused on measuring the decrease in viability. While kill data are undeniably important, measuring biofilm disruption provides equally useful information. Starting with biofilm grown in the same reactor, we paired assessment of biofilm removal using a new treatment-flow-cell and real-time microscopy with kill data collected using the single tube method (ASTM E2871).
Pairing these two methods revealed efficient biofilm removal properties of Penicillin G which were not detected during efficacy testing. © 2017 The Society for Applied Microbiology.
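The kill results quoted in "logs" above follow the standard log-reduction metric: the difference of base-10 logarithms of viable counts between control and treated coupons. A minimal sketch; the CFU counts below are hypothetical, not the study's data:

```python
import math

def log_reduction(cfu_control, cfu_treated):
    """Log10 reduction in viable count: LR = log10(control) - log10(treated).
    An LR of ~0.3 corresponds to roughly a 2-fold (50%) kill."""
    return math.log10(cfu_control) - math.log10(cfu_treated)

# Hypothetical viable counts per coupon after a 3-h contact time
lr = log_reduction(cfu_control=1.0e6, cfu_treated=5.0e5)
print(f"{lr:.1f} log reduction")
```

This makes concrete why the 0.3-log result above indicates a weak killing agent: it is only about a halving of viable cells, far short of the multi-log reductions usually expected of an effective antimicrobial.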

  17. Rotavirus vaccine effectiveness in low-income settings: An evaluation of the test-negative design.

    PubMed

    Schwartz, Lauren M; Halloran, M Elizabeth; Rowhani-Rahbar, Ali; Neuzil, Kathleen M; Victor, John C

    2017-01-03

The test-negative design (TND), an epidemiologic method currently used to measure rotavirus vaccine (RV) effectiveness, compares the vaccination status of rotavirus-positive cases and rotavirus-negative controls meeting a pre-defined case definition for acute gastroenteritis. Despite the use of this study design in low-income settings, the TND has not been evaluated to measure rotavirus vaccine effectiveness. This study builds upon prior methods to evaluate the use of the TND for influenza vaccine using a randomized controlled clinical trial database. Test-negative vaccine effectiveness (VE-TND) estimates were derived from three large randomized placebo-controlled trials (RCTs) of monovalent (RV1) and pentavalent (RV5) rotavirus vaccines in sub-Saharan Africa and Asia. Derived VE-TND estimates were compared to the original RCT vaccine efficacy estimates (VE-RCTs). The core assumption of the TND (i.e., rotavirus vaccine has no effect on rotavirus-negative diarrhea) was also assessed. TND vaccine effectiveness estimates were nearly equivalent to original RCT vaccine efficacy estimates. Neither RV had a substantial effect on rotavirus-negative diarrhea. This study supports the TND as an appropriate epidemiologic study design to measure rotavirus vaccine effectiveness in low-income settings. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
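In a test-negative design, vaccine effectiveness is estimated as one minus the odds ratio of vaccination comparing test-positive cases with test-negative controls. A minimal sketch with a hypothetical 2x2 table (the counts are illustrative, not trial data):

```python
def tnd_vaccine_effectiveness(vacc_cases, unvacc_cases,
                              vacc_controls, unvacc_controls):
    """VE (%) = 100 * (1 - OR), where OR is the odds of vaccination among
    test-positive cases relative to test-negative controls."""
    odds_ratio = (vacc_cases * unvacc_controls) / (unvacc_cases * vacc_controls)
    return 100.0 * (1.0 - odds_ratio)

# Hypothetical counts: rotavirus-positive cases vs rotavirus-negative controls
ve = tnd_vaccine_effectiveness(vacc_cases=20, unvacc_cases=80,
                               vacc_controls=50, unvacc_controls=50)
print(f"VE = {ve:.0f}%")  # → VE = 75%
```

The design's core assumption, checked in the study above, is visible in this arithmetic: if the vaccine also affected test-negative illness, the control column would no longer reflect background vaccination odds and the odds ratio would be biased.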

  18. Design and application of star map simulation system for star sensors

    NASA Astrophysics Data System (ADS)

    Wu, Feng; Shen, Weimin; Zhu, Xifang; Chen, Yuheng; Xu, Qinquan

    2013-12-01

Modern star sensors automatically measure attitude with high accuracy, helping to ensure reliable spacecraft performance. They achieve accurate attitudes by applying algorithms to process star maps obtained by the star camera mounted on them. Star maps therefore play an important role in designing star cameras and developing processing algorithms. Furthermore, star maps provide significant support for fully examining the performance of star sensors before launch. However, it is not always convenient to obtain abundant star maps by photographing the sky. Thus, computer-based star map simulation has attracted considerable interest by virtue of its low cost and convenience. A method is proposed to simulate star maps by programming and extending the functionality of the optical design program ZEMAX, and a star map simulation system is established. First, based on an analysis of the working procedure by which star sensors measure attitude and of the basic method of optical system design in ZEMAX, the principle of simulating star sensor imaging is presented in detail. The theory of adding false stars and noise and of outputting maps is discussed, and the corresponding approaches are proposed. Then, by external programming, the star map simulation program is designed and produced, and its user interface and operation are introduced. Applications of the star map simulation method in evaluating optical systems, star image extraction algorithms, and star identification algorithms, and in calibrating system errors, are presented. The proposed simulation method provides substantial support for the study of star sensors and effectively improves their performance.

  19. Flow Pattern Phenomena in Two-Phase Flow in Microchannels

    NASA Astrophysics Data System (ADS)

    Keska, Jerry K.; Simon, William E.

    2004-02-01

    Space transportation systems require high-performance thermal protection and fluid management techniques for systems ranging from cryogenic fluid management devices to primary structures and propulsion systems exposed to extremely high temperatures, as well as for other space systems such as cooling or environment control for advanced space suits and integrated circuits. Although considerable developmental effort is being expended to bring potentially applicable technologies to a readiness level for practical use, new and innovative methods are still needed. One such method is the concept of Advanced Micro Cooling Modules (AMCMs), which are essentially compact two-phase heat exchangers constructed of microchannels and designed to remove large amounts of heat rapidly from critical systems by incorporating phase transition. The development of AMCMs requires fundamental technological advancement in many areas, including: (1) development of measurement methods/systems for flow-pattern measurement/identification for two-phase mixtures in microchannels; (2) development of a phenomenological model for two-phase flow which includes the quantitative measure of flow patterns; and (3) database development for multiphase heat transfer/fluid dynamics flows in microchannels. This paper focuses on the results of experimental research in the phenomena of two-phase flow in microchannels. The work encompasses both an experimental and an analytical approach to incorporating flow patterns for air-water mixtures flowing in a microchannel, which are necessary tools for the optimal design of AMCMs. 
Specifically, the following topics are addressed: (1) design and construction of a sensitive test system for two-phase flow in microchannels, one which measures ac and dc components of in-situ physical mixture parameters including spatial concentration using concomitant methods; (2) data acquisition and analysis in the amplitude, time, and frequency domains; and (3) analysis of results including evaluation of data acquisition techniques and their validity for application in flow pattern determination.

  20. Research on droplet size measurement of impulse antiriots water cannon based on sheet laser

    NASA Astrophysics Data System (ADS)

    Fa-dong, Zhao; Hong-wei, Zhuang; Ren-jun, Zhan

    2014-04-01

The impulse anti-riot water cannon is a new-style counter-personnel non-lethal weapon whose unsteady behavior and large water-mist field make it difficult to measure its droplet size distribution, the most important index for examining its tactical and technical performance. A method based on particle scattering, sheet laser imaging and high-speed image processing was proposed, and a universal droplet size measuring algorithm was designed and verified. Using this method, the droplet size distribution was measured. The measured size distributions at the same position over different timescales, at the same axial distance with different radial distances, and at the same radial distance with different axial distances were analyzed qualitatively, and plausible explanations were presented. The droplet size measuring method proposed in this article provides a scientific and effective experimental means of ascertaining the weapon's technical and tactical performance and optimizing the performance of related systems.
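Droplet-size distributions of the kind measured here are commonly summarized by the Sauter mean diameter, D32 = Σd³/Σd², which preserves the spray's volume-to-surface ratio. A minimal sketch of that summary statistic, which the abstract does not name explicitly, with hypothetical diameters:

```python
def sauter_mean_diameter(diameters):
    """Sauter mean diameter D32 = sum(d^3) / sum(d^2): the diameter of a
    droplet having the same volume-to-surface-area ratio as the whole spray."""
    return sum(d ** 3 for d in diameters) / sum(d ** 2 for d in diameters)

# Hypothetical diameters (micrometres) extracted from sheet-laser images
d32 = sauter_mean_diameter([100.0, 100.0, 200.0])
print(f"D32 = {d32:.1f} um")  # → D32 = 166.7 um
```

Because D32 is weighted toward large droplets, it responds strongly to the coarse fraction of the mist, which makes it a useful single-number index when comparing distributions across axial and radial positions.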
