Sample records for linear size test

  1. Effect of thermal cycling on composites reinforced with two differently sized silica-glass fibers.

    PubMed

    Meriç, Gökçe; Ruyter, I Eystein

    2007-09-01

    To evaluate the effects of thermal cycling on the flexural properties of composites reinforced with two differently sized fibers. Acid-washed, woven, fused silica-glass fibers were heat-treated at 500 degrees C, silanized, and sized with one of two sizing resins: linear poly(butyl methacrylate) (PBMA) or cross-linked poly(methyl methacrylate) (PMMA). Subsequently, the fibers were incorporated into a polymer matrix. Two test groups with fibers and one control group without fibers were prepared. The flexural properties of the composite reinforced with linear PBMA-sized fibers were evaluated by 3-point bend testing before thermal cycling. The specimens from all three groups were thermally cycled in water (12,000 cycles, 5/55 degrees C, dwell time 30 s) and afterwards tested by 3-point bending. SEM micrographs were taken of the fibers and of the fractured fiber-reinforced composites (FRC). The reduction in ultimate flexural strength after thermal cycling was less than 20% of the pre-cycling value for composites reinforced with linear PBMA-sized silica-glass fibers, whereas the flexural strength of the composite reinforced with cross-linked PMMA-sized fibers was reduced to less than half of its initial value. This study demonstrated that thermal cycling influences the flexural properties of composites differently depending on the fiber sizing. The interfacial linear PBMA sizing polymer acts as a stress-bearing component for the high interfacial stresses during thermal cycling because of the flexible structure of linear PBMA above its Tg. The cross-linked PMMA sizing, however, acts as a rigid component and therefore causes adhesive fracture between the fibers and the matrix after the fatigue process of thermal cycling and flexural fracture.

  2. Synthesizing Dynamic Programming Algorithms from Linear Temporal Logic Formulae

    NASA Technical Reports Server (NTRS)

    Rosu, Grigore; Havelund, Klaus

    2001-01-01

    The problem of testing a linear temporal logic (LTL) formula on a finite execution trace of events, generated by an executing program, occurs naturally in runtime analysis of software. We present an algorithm which takes an LTL formula and generates an efficient dynamic programming algorithm. The generated algorithm tests whether the LTL formula is satisfied by a finite trace of events given as input. The generated algorithm runs in time linear in the length of the trace, with a constant factor that depends on the size of the LTL formula; the memory needed is likewise constant in the trace length, again depending on the size of the formula.
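
The backward dynamic-programming idea can be sketched directly (a minimal illustration, not the paper's generated code; the tuple-based formula encoding, the `check` function, and the chosen finite-trace semantics at the end of the trace are assumptions of this sketch): sweep the trace from the last event to the first, keeping one boolean per subformula, so run time is linear in the trace length and memory is independent of it.

```python
def subformulas(f):
    """Post-order list of distinct subformulas, children before parents."""
    seen, order = set(), []
    def walk(g):
        if g in seen:
            return
        if isinstance(g, tuple):
            for child in g[1:]:
                walk(child)
        seen.add(g)
        order.append(g)
    walk(f)
    return order

def check(formula, trace):
    """Evaluate an LTL formula on a finite trace (a list of sets of
    atomic propositions, earliest event first) by a backward DP sweep."""
    subs = subformulas(formula)
    pos = {s: i for i, s in enumerate(subs)}
    # Values on the empty suffix past the end: G holds vacuously, F/U/X fail.
    nxt = [isinstance(s, tuple) and s[0] == 'G' for s in subs]
    has_next = False
    for event in reversed(trace):
        now = [False] * len(subs)
        for i, s in enumerate(subs):
            if not isinstance(s, tuple):                # atomic proposition
                now[i] = s in event
            elif s[0] == 'not':
                now[i] = not now[pos[s[1]]]
            elif s[0] == 'and':
                now[i] = now[pos[s[1]]] and now[pos[s[2]]]
            elif s[0] == 'or':
                now[i] = now[pos[s[1]]] or now[pos[s[2]]]
            elif s[0] == 'X':                           # next
                now[i] = has_next and nxt[pos[s[1]]]
            elif s[0] == 'F':                           # eventually
                now[i] = now[pos[s[1]]] or nxt[i]
            elif s[0] == 'G':                           # always
                now[i] = now[pos[s[1]]] and nxt[i]
            elif s[0] == 'U':                           # until
                now[i] = now[pos[s[2]]] or (now[pos[s[1]]] and nxt[i])
        nxt, has_next = now, True
    return nxt[pos[formula]]

# G(req -> F ack): every request is eventually acknowledged.
prop = ('G', ('or', ('not', 'req'), ('F', 'ack')))
```

On the trace `[{'req'}, set(), {'ack'}]` the property holds, while on `[{'req'}, set()]` the pending request is never acknowledged and it fails.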

  3. Classical Testing in Functional Linear Models.

    PubMed

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio, and F tests - to functional linear regression for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.
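
The re-expression step can be illustrated with a small numeric sketch (illustrative data and a plain F test only; the paper's Wald, score, and likelihood ratio variants and the FPCA step itself are omitted). Here we assume the functional covariate has already been reduced to k = 2 principal-component scores per subject, fit the resulting standard linear model, and F-test the joint nullity of the score coefficients:

```python
import random

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def rss(X, y):
    """Residual sum of squares of OLS of y on X (intercept added)."""
    Z = [[1.0] + row for row in X]
    p = len(Z[0])
    XtX = [[sum(z[a] * z[b] for z in Z) for b in range(p)] for a in range(p)]
    Xty = [sum(z[a] * yi for z, yi in zip(Z, y)) for a in range(p)]
    beta = solve(XtX, Xty)
    return sum((yi - sum(c * v for c, v in zip(beta, z))) ** 2
               for z, yi in zip(Z, y))

random.seed(1)
n, k = 200, 2
scores = [[random.gauss(0, 1) for _ in range(k)] for _ in range(n)]
y = [0.8 * s[0] - 0.5 * s[1] + random.gauss(0, 1) for s in scores]

ybar = sum(y) / n
rss0 = sum((yi - ybar) ** 2 for yi in y)   # null model: intercept only
rss1 = rss(scores, y)                      # full model: intercept + k scores
F = ((rss0 - rss1) / k) / (rss1 / (n - k - 1))
# Compare F with the F(k, n-k-1) critical value, roughly 3.04 at alpha = 0.05.
```

With a genuine association built into the simulated scores, F lands far above the critical value and the null of no association is rejected.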

  4. Classical Testing in Functional Linear Models

    PubMed Central

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio, and F tests - to functional linear regression for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications. PMID:28955155

  5. Linearity can account for the similarity among conventional, frequency-doubling, and Gabor-based perimetric tests in the glaucomatous macula.

    PubMed

    Sun, Hao; Dul, Mitchell W; Swanson, William H

    2006-07-01

    The purposes of this study are to compare macular perimetric sensitivities for conventional size III, frequency-doubling, and Gabor stimuli in terms of Weber contrast and to provide a theoretical interpretation of the results. Twenty-two patients with glaucoma performed four perimetric tests: a conventional Swedish Interactive Threshold Algorithm (SITA) 10-2 test with Goldmann size III stimuli, two frequency-doubling tests (FDT 10-2, FDT Macula) with counterphase-modulated grating stimuli, and a laboratory-designed test with Gabor stimuli. Perimetric sensitivities were converted to the reciprocal of Weber contrast and sensitivities from different tests were compared using the Bland-Altman method. Effects of ganglion cell loss on perimetric sensitivities were then simulated with a two-stage neural model. The average perimetric loss was similar for all stimuli until advanced stages of ganglion cell loss, in which perimetric loss tended to be greater for size III stimuli than for frequency-doubling and Gabor stimuli. Comparison of the experimental data and model simulation suggests that, in the macula, linear relations between ganglion cell loss and perimetric sensitivity loss hold for all three stimuli. Linear relations between perimetric loss and ganglion cell loss for all three stimuli can account for the similarity in perimetric loss until advanced stages. The results do not support the hypothesis that redundancy for frequency-doubling stimuli is lower than redundancy for size III stimuli.

  6. Single-Specimen Technique to Establish the J-Resistance of Linear Viscoelastic Solids with Constant Poisson's Ratio

    NASA Technical Reports Server (NTRS)

    Gutierrez-Lemini, Danton; McCool, Alex (Technical Monitor)

    2001-01-01

    A method is developed to establish the J-resistance function for an isotropic linear viscoelastic solid of constant Poisson's ratio using the single-specimen technique with constant-rate test data. The method is based on the fact that, for a test specimen of fixed crack size under constant rate, the initiation J-integral may be established from the crack size itself, the actual external load and load-point displacement at growth initiation, and the relaxation modulus of the viscoelastic solid, without knowledge of the complete test record. Since, of the required data, only the crack size would be unknown at each point of the load-vs-load-point-displacement curve of a single-specimen test, an expression is derived to estimate it. With it, the physical J-integral at each point of the test record may be established. Because it is based on single-specimen testing, the method not only avoids the use of multiple specimens with differing initial crack sizes but also avoids the need to track crack growth.

  7. Repeated significance tests of linear combinations of sensitivity and specificity of a diagnostic biomarker

    PubMed Central

    Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi

    2016-01-01

    A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
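
The expected-sample-size objective can be illustrated with a toy two-stage design (made-up cutoffs, and a single binomial accuracy proportion standing in for the paper's combination of sensitivity and specificity): stage 1 enrolls n1 subjects and stops for futility when too few successes are seen, so the expected total sample size under the null is well below the fixed-design size.

```python
from math import comb

def expected_n(p, n1, n2, r1):
    """Expected total sample size of a two-stage binomial design that
    continues to stage 2 only when stage-1 successes reach r1."""
    cont = sum(comb(n1, x) * p ** x * (1 - p) ** (n1 - x)
               for x in range(r1, n1 + 1))   # P(continue to stage 2)
    return n1 + cont * n2

# Hypothetical design: 20 subjects in stage 1, 30 more in stage 2,
# continue only if at least 15 stage-1 successes are observed.
en_null = expected_n(0.6, n1=20, n2=30, r1=15)   # accuracy below acceptance
en_fixed = 20 + 30                               # fixed single-stage design
```

Under the null (p = 0.6) the design usually stops after stage 1, so its expected sample size is far below the 50 subjects a fixed design would always use; when the marker is perfect (p = 1) the design always continues and uses all 50.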

  8. A Review of the Proposed KIsi Offset-Secant Method for Size-Insensitive Linear-Elastic Fracture Toughness Evaluation

    NASA Technical Reports Server (NTRS)

    James, Mark; Wells, Doug; Allen, Phillip; Wallin, Kim

    2017-01-01

    Recently proposed modifications to ASTM E399 would provide a new size-insensitive approach to analyzing the force-displacement test record. The proposed size-insensitive linear-elastic fracture toughness, KIsi, targets a consistent 0.5 mm crack extension for all specimen sizes by using an offset secant that is a function of the specimen ligament length. The KIsi evaluation also removes the Pmax/PQ criterion and increases the allowable specimen deformation. These latter two changes allow more plasticity at the crack tip, prompting the review undertaken in this work to ensure the validity of this new interpretation of the force-displacement curve. This paper provides a brief review of the proposed KIsi methodology and summarizes a finite element study into the effects of increased crack tip plasticity on the method given the allowance for additional specimen deformation. The study has two primary points of investigation: the effect of crack tip plasticity on compliance change in the force-displacement record and the continued validity of linear-elastic fracture mechanics to describe the crack front conditions. The analytical study illustrates that linear-elastic fracture mechanics assumptions remain valid at the increased deformation limit; however, the influence of plasticity on the compliance change in the test record is problematic. A proposed revision to the validity criteria for the KIsi test method is briefly discussed.

  9. Linearity Can Account for the Similarity Among Conventional, Frequency-Doubling, and Gabor-Based Perimetric Tests in the Glaucomatous Macula

    PubMed Central

    DUL, MITCHELL W.; SWANSON, WILLIAM H.

    2006-01-01

    Purposes: The purposes of this study are to compare macular perimetric sensitivities for conventional size III, frequency-doubling, and Gabor stimuli in terms of Weber contrast and to provide a theoretical interpretation of the results. Methods: Twenty-two patients with glaucoma performed four perimetric tests: a conventional Swedish Interactive Threshold Algorithm (SITA) 10-2 test with Goldmann size III stimuli, two frequency-doubling tests (FDT 10-2, FDT Macula) with counterphase-modulated grating stimuli, and a laboratory-designed test with Gabor stimuli. Perimetric sensitivities were converted to the reciprocal of Weber contrast and sensitivities from different tests were compared using the Bland-Altman method. Effects of ganglion cell loss on perimetric sensitivities were then simulated with a two-stage neural model. Results: The average perimetric loss was similar for all stimuli until advanced stages of ganglion cell loss, in which perimetric loss tended to be greater for size III stimuli than for frequency-doubling and Gabor stimuli. Comparison of the experimental data and model simulation suggests that, in the macula, linear relations between ganglion cell loss and perimetric sensitivity loss hold for all three stimuli. Conclusions: Linear relations between perimetric loss and ganglion cell loss for all three stimuli can account for the similarity in perimetric loss until advanced stages. The results do not support the hypothesis that redundancy for frequency-doubling stimuli is lower than redundancy for size III stimuli. PMID:16840860

  10. Investigating the unification of LOFAR-detected powerful AGN in the Boötes field

    NASA Astrophysics Data System (ADS)

    Morabito, Leah K.; Williams, W. L.; Duncan, Kenneth J.; Röttgering, H. J. A.; Miley, George; Saxena, Aayush; Barthel, Peter; Best, P. N.; Bruggen, M.; Brunetti, G.; Chyży, K. T.; Engels, D.; Hardcastle, M. J.; Harwood, J. J.; Jarvis, Matt J.; Mahony, E. K.; Prandoni, I.; Shimwell, T. W.; Shulevski, A.; Tasse, C.

    2017-08-01

    Low radio frequency surveys are important for testing unified models of radio-loud quasars and radio galaxies. Intrinsically similar sources that are randomly oriented on the sky will have different projected linear sizes. Measuring the projected linear sizes of these sources provides an indication of their orientation. Steep-spectrum isotropic radio emission allows for orientation-free sample selection at low radio frequencies. We use a new radio survey of the Boötes field at 150 MHz made with the Low-Frequency Array (LOFAR) to select a sample of radio sources. We identify 60 radio sources with powers P > 1025.5 W Hz-1 at 150 MHz using cross-matched multiwavelength information from the AGN and Galaxy Evolution Survey, which provides spectroscopic redshifts and photometric identification of 16 quasars and 44 radio galaxies. When considering the radio spectral slope only, we find that radio sources with steep spectra have projected linear sizes that are on average a factor of 4.4 ± 1.4 larger than those with flat spectra. The projected linear sizes of radio galaxies are on average a factor of 3.1 ± 1.0 larger than those of quasars (2.0 ± 0.3 after correcting for redshift evolution). Combining these results with three previous surveys, we find that the projected linear sizes of radio galaxies and quasars depend on redshift but not on power. The projected linear size ratio does not correlate with either parameter. The LOFAR data are consistent within the uncertainties with theoretical predictions of the correlation between the quasar fraction and linear size ratio, based on an orientation-based unification scheme.
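
The geometric core of the argument is easy to verify numerically (a quick Monte Carlo sketch, not part of the paper's analysis): a source of intrinsic length L oriented at angle theta to the line of sight projects to length L·sin(theta), and for isotropic orientations the mean projection factor is π/4, so sources seen closer to the line of sight (quasars, under unification) appear systematically shorter.

```python
import math, random

random.seed(7)
# Isotropic orientation: cos(theta) is uniform on [0, 1], so the projection
# factor is sin(theta) = sqrt(1 - u**2) for u drawn uniformly.
factors = [math.sqrt(1 - random.random() ** 2) for _ in range(200_000)]
mean_factor = sum(factors) / len(factors)   # close to pi/4 ~ 0.785
```

The simulated mean projection factor converges on π/4; restricting to small viewing angles (the quasar regime in unification schemes) would drive the factor, and hence the apparent size, down further.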

  11. [Comparison of application of Cochran-Armitage trend test and linear regression analysis for rate trend analysis in epidemiology study].

    PubMed

    Wang, D Z; Wang, C; Shen, C F; Zhang, Y; Zhang, H; Song, G D; Xue, X D; Xu, Z L; Zhang, S; Jiang, G H

    2017-05-10

    We described the time trend of the acute myocardial infarction (AMI) incidence rate in Tianjin from 1999 to 2013 using the Cochran-Armitage trend (CAT) test and linear regression analysis, and compared the results. Based on the actual population, the CAT test had much stronger statistical power than linear regression analysis for both the overall incidence trend and the age-specific incidence trends (Cochran-Armitage trend P value
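
For readers unfamiliar with it, the Cochran-Armitage trend test can be sketched in a few lines (the counts below are made-up illustration data, not the Tianjin AMI figures):

```python
from math import erfc, sqrt

def cochran_armitage(cases, totals, scores=None):
    """Two-sided Cochran-Armitage test for a linear trend in proportions
    across ordered groups; returns (z statistic, p-value)."""
    k = len(cases)
    scores = scores if scores is not None else list(range(k))
    N, R = sum(totals), sum(cases)
    pbar = R / N
    t = sum(s * (r - n * pbar) for s, r, n in zip(scores, cases, totals))
    var = pbar * (1 - pbar) * (
        sum(n * s * s for s, n in zip(scores, totals))
        - sum(n * s for s, n in zip(scores, totals)) ** 2 / N)
    z = t / sqrt(var)
    return z, erfc(abs(z) / sqrt(2))   # two-sided normal p-value

# Steadily rising case counts over five equal-size periods: clear upward trend.
z, p = cochran_armitage([10, 14, 19, 25, 32], [1000] * 5)
```

With these illustrative counts the trend statistic is strongly positive and the two-sided p-value falls below 0.001; a flat series of counts gives z near zero and p near 1.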

  12. Power and Sample Size Calculations for Testing Linear Combinations of Group Means under Variance Heterogeneity with Applications to Meta and Moderation Analyses

    ERIC Educational Resources Information Center

    Shieh, Gwowen; Jan, Show-Li

    2015-01-01

    The general formulation of a linear combination of population means permits a wide range of research questions to be tested within the context of ANOVA. However, it has been stressed in many research areas that the homogeneous variances assumption is frequently violated. To accommodate the heterogeneity of variance structure, the…

  13. Shade response of a full size TESSERA module

    NASA Astrophysics Data System (ADS)

    Slooff, Lenneke H.; Carr, Anna J.; de Groot, Koen; Jansen, Mark J.; Okel, Lars; Jonkman, Rudi; Bakker, Jan; de Gier, Bart; Harthoorn, Adriaan

    2017-08-01

    A full-size TESSERA shade-tolerant module was made and tested under various shadow conditions. The results show that the dedicated electrical interconnection of the cells results in an almost linear response under shading. Furthermore, the voltage at the maximum power point is almost independent of the shading, which decreases the demands on the voltage range of the inverter. The increased shadow linearity results in a calculated increase in annual yield of about 4% for a typical Dutch house.

  14. [Size of testes and epididymes in boys up to 17 years of life assessed by ultrasound method and method of external linear measurements].

    PubMed

    Osemlak, Paweł

    2011-01-01

    The aims were: 1. To determine the size of the testes and epididymes on the right and left sides in healthy boys of various age groups, using the non-invasive ultrasound examination method and the method of external linear measurements. 2. To determine the age at which intensive growth of testicular and epididymal size starts. 3. To determine whether there are statistically significant differences between the sizes of the right and left testis, as well as between the right and left epididymis. 4. To evaluate the ultrasound method and the method of external linear measurements for use in scientific investigations. The study examined 309 boys, aged from 1 day to 17 years, treated in the Clinical Department of Paediatric Surgery and Traumatology of the Medical University in Lublin from 2009 to 2010 for diseases requiring surgical treatment but not involving the scrotum. No pathologies influencing the development of the genital organs were found in these boys. Testicular dimensions were measured with the ultrasound method and with the method of external linear measurements; epididymal dimensions were measured with the ultrasound method only. In every age group, mean arithmetic values were calculated for testicular length, thickness, width and volume, as well as for epididymal depth and base. Taking the standard deviation into account (X+/-1 SD), it was possible to define the range of dimensions of healthy testes and epididymes and their change with age. Final dimensions of the right and left testis, as well as of the right and left epididymis, were compared, as were the dimensions of the testis on the same side of the body obtained with the ultrasound method and with the method of external linear measurements. Statistical analysis was performed with the Wilcoxon test for two dependent groups. Ultrasound evaluation showed an intensive 2.5-fold increase in testicular length and width, and a 2-fold increase in testicular thickness, in boys aged 10 to 17 years. The mean volume of the neonatal testis is 0.35 ml. From the 10th year of life, the testicular volume increases 10-fold, from 1.36 ml to 12.83 ml by the 17th year of life. The depth of the epididymis measured with the ultrasound method is always greater than its base, and both dimensions increase quickly from the 10th year of life. Compared with the ultrasound method, measurements made with the caliper on average overestimate testicular length by 5.7 mm, thickness by 2.9 mm and width by 1.4 mm. There were no statistically significant differences between the dimensions of the right and left testis, whereas differences between the dimensions of the right and left epididymis were statistically significant. The conclusions were: 1. Age is the main factor influencing testicular size in boys. 2. Intensive growth of the testes starts in the 10th year of life, and of the epididymes in the 12th year of life. 3. Testicular volume is the most precise description of testicular size. There are no statistically significant differences between the volumes of the right and left testis, but differences between the depth and base of the right and left epididymis are statistically significant. 4. The ultrasound method and the method of external linear measurements with the caliper have similar diagnostic value for comparing the sizes of the two testes. 5. Measurements of testicular size with the ultrasound method have much greater value for detailed evaluation than the method of external linear measurements with the caliper, which does not account for the thickness of the skin and testicular coats, or for the epididymal head, which is often situated on the upper end of the testis.

  15. Effect of stimulus configuration on crowding in strabismic amblyopia.

    PubMed

    Norgett, Yvonne; Siderov, John

    2017-11-01

    Foveal vision in strabismic amblyopia can show increased levels of crowding, akin to typical peripheral vision. Target-flanker similarity and visual-acuity test configuration may cause the magnitude of crowding to vary in strabismic amblyopia. We used custom-designed visual acuity tests to investigate crowding in observers with strabismic amblyopia. LogMAR was measured monocularly in both eyes of 11 adults with strabismic or mixed strabismic/anisometropic amblyopia using custom-designed letter tests. The tests used single-letter and linear formats with either bar or letter flankers to introduce crowding. Tests were presented monocularly on a high-resolution display at a test distance of 4 m, using standardized instructions. For each condition, five letters of each size were shown; testing continued until three letters of a given size were named incorrectly. Uncrowded logMAR was subtracted from logMAR in each of the crowded tests to highlight the crowding effect. Repeated-measures ANOVA showed that letter flankers and linear presentation individually resulted in poorer performance in the amblyopic eyes (respectively, mean normalized logMAR = 0.29, SE = 0.07, mean normalized logMAR = 0.27, SE = 0.07; p < 0.05) and together had an additive effect (mean = 0.42, SE = 0.09, p < 0.001). There was no difference across the tests in the fellow eyes (p > 0.05). Both linear presentation and letter rather than bar flankers increase crowding in the amblyopic eyes of people with strabismic amblyopia. These results suggest the influence of more than one mechanism contributing to crowding in linear visual-acuity charts with letter flankers.

  16. Evaluation of an enhanced gravity-based fine-coal circuit for high-sulfur coal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanty, M.K.; Samal, A.R.; Palit, A.

    One of the main objectives of this study was to evaluate a fine-coal cleaning circuit using an enhanced gravity separator specifically for a high-sulfur coal application. The evaluation included not only testing of the individual unit operations used for fine-coal classification, cleaning and dewatering, but also testing of the complete circuit operating simultaneously. At a scale of nearly 2 t/h, two alternative circuits were evaluated to clean a minus 0.6-mm coal stream utilizing a 150-mm-diameter classifying cyclone, a linear screen having a projected surface area of 0.5 m², an enhanced gravity separator having a bowl diameter of 250 mm and a screen-bowl centrifuge having a bowl diameter of 500 mm. The cleaning and dewatering components of both circuits were the same; however, one circuit used a classifying cyclone whereas the other used a linear screen as the classification device. An industrial-size coal spiral was used to clean the 2- x 0.6-mm coal size fraction for each circuit to estimate the performance of a complete fine-coal circuit cleaning a minus 2-mm particle size coal stream. The 'linear screen + enhanced gravity separator + screen-bowl circuit' provided superior sulfur- and ash-cleaning performance to the alternative circuit that used a classifying cyclone in place of the linear screen. Based on these test data, it was estimated that the use of the recommended circuit to treat 50 t/h of minus 2-mm size coal having feed ash and sulfur contents of 33.9% and 3.28%, respectively, may produce nearly 28.3 t/h of clean coal with product ash and sulfur contents of 9.15% and 1.61%, respectively.

  17. Affective Organizational Commitment and Citizenship Behavior: Linear and Non-linear Moderating Effects of Organizational Tenure

    ERIC Educational Resources Information Center

    Ng, Thomas W. H.; Feldman, Daniel C.

    2011-01-01

    Utilizing a meta-analytical approach for testing moderating effects, the current study investigated organizational tenure as a moderator in the relation between affective organizational commitment and organizational citizenship behavior (OCB). We observed that, across 40 studies (N = 11,416 respondents), the effect size for the relation between…

  18. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…
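
The Welch approach the article builds on can be sketched for a single linear contrast (illustrative numbers; the article's iterative sample-size search is omitted): under variance heterogeneity, the contrast's test statistic is referred to a t distribution with Welch-Satterthwaite degrees of freedom.

```python
def welch_df(c, var, n):
    """Welch-Satterthwaite degrees of freedom for the contrast
    sum(c_i * mean_i) over groups with variances var and sizes n."""
    parts = [ci * ci * vi / ni for ci, vi, ni in zip(c, var, n)]
    return sum(parts) ** 2 / sum(p * p / (ni - 1) for p, ni in zip(parts, n))

# Interaction contrast of a 2x2 design with heterogeneous cell variances.
c = [1, -1, -1, 1]
df = welch_df(c, var=[1.0, 2.0, 4.0, 8.0], n=[15, 15, 15, 15])
# With equal variances the formula recovers the classical sum(n_i - 1) = 56;
# heterogeneity shrinks the df toward the smallest-group value.
```

A sample-size procedure like the article's would then increase the cell sizes n until the power computed from this noncentral t reference reaches the target.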

  19. Intense beams at the micron level for the Next Linear Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seeman, J.T.

    1991-08-01

    High brightness beams with sub-micron dimensions are needed to produce a high luminosity for electron-positron collisions in the Next Linear Collider (NLC). To generate these small beam sizes, a large number of issues dealing with intense beams have to be resolved. Over the past few years many have been successfully addressed but most need experimental verification. Some of these issues are beam dynamics, emittance control, instrumentation, collimation, and beam-beam interactions. Recently, the Stanford Linear Collider (SLC) has proven the viability of linear collider technology and is an excellent test facility for future linear collider studies.

  20. Development of a superconducting claw-pole linear test-rig

    NASA Astrophysics Data System (ADS)

    Radyjowski, Patryk; Keysan, Ozan; Burchell, Joseph; Mueller, Markus

    2016-04-01

    Superconducting generators can help to reduce the cost of energy for large offshore wind turbines, where the size and mass of the generator have a direct effect on the installation cost. However, existing superconducting generators are not as reliable as the alternative technologies. In this paper we present a linear test prototype for a novel superconducting claw-pole topology, whose stationary superconducting coil eliminates the cryocooler coupler, and discuss the mechanical, electromagnetic and thermal aspects of the prototype.

  1. Analysis and testing of axial compression in imperfect slender truss struts

    NASA Technical Reports Server (NTRS)

    Lake, Mark S.; Georgiadis, Nicholas

    1990-01-01

    The axial compression of imperfect slender struts for large space structures is addressed. The load-shortening behavior of struts with initially imperfect shapes and eccentric compressive end loading is analyzed using linear beam-column theory and results are compared with geometrically nonlinear solutions to determine the applicability of linear analysis. A set of developmental aluminum clad graphite/epoxy struts sized for application to the Space Station Freedom truss are measured to determine their initial imperfection magnitude, load eccentricity, and cross sectional area and moment of inertia. Load-shortening curves are determined from axial compression tests of these specimens and are correlated with theoretical curves generated using linear analysis.

  2. SU-F-T-475: An Evaluation of the Overlap Between the Acceptance Testing and Commissioning Processes for Conventional Medical Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrow, A; Rangaraj, D; Perez-Andujar, A

    2016-06-15

    Purpose: This work’s objective is to determine the overlap of processes, in terms of sub-processes and time, between acceptance testing and commissioning of a conventional medical linear accelerator and to evaluate the time saved by consolidating the two processes. Method: A process map for acceptance testing of medical linear accelerators was created from vendor documentation (Varian and Elekta). Using AAPM TG-106 and in-house commissioning procedures, a process map was created for commissioning of said accelerators. The time to complete each sub-process in each process map was evaluated. Redundancies between the processes were found and the time spent on each was calculated. Results: Mechanical testing significantly overlaps between the two processes - redundant work here amounts to 9.5 hours. Many non-scanning beam dosimetry tests overlap, resulting in another 6 hours of overlap. Beam scanning overlaps somewhat - acceptance tests include evaluating PDDs and multiple profiles for only one field size, while commissioning beam scanning includes multiple field sizes and depths of profiles; this overlap results in another 6 hours of rework. Absolute dosimetry, field outputs, and end-to-end tests are not done at all in acceptance testing. Finally, all imaging tests done in acceptance are repeated in commissioning, resulting in about 8 hours of rework. The total time overlap between the two processes is about 30 hours. Conclusion: The process mapping done in this study shows that there are no tests done in acceptance testing that are not also recommended for commissioning. This results in about 30 hours of redundant work when preparing a conventional linear accelerator for clinical use. Considering these findings in the context of the 5,000 linacs in the United States, consolidating acceptance testing and commissioning would have allowed for the treatment of an additional 25,000 patients using no additional resources.

  3. Reduced-Size Integer Linear Programming Models for String Selection Problems: Application to the Farthest String Problem.

    PubMed

    Zörnig, Peter

    2015-08-01

    We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
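
For intuition about the problem itself (not the article's integer programming model, which is the whole point for realistic sizes), a brute-force search works on toy instances: the farthest string maximizes the minimum Hamming distance to all input strings, and exhaustive search over the alphabet is exponential in the string length.

```python
from itertools import product

def farthest_string(strings, alphabet="ACGT"):
    """Exhaustively find a string maximizing the minimum Hamming
    distance to all input strings (exponential; toy sizes only)."""
    length = len(strings[0])
    best, best_d = None, -1
    for cand in product(alphabet, repeat=length):
        d = min(sum(a != b for a, b in zip(cand, s)) for s in strings)
        if d > best_d:
            best, best_d = "".join(cand), d
    return best, best_d

# For these three length-4 strings a candidate exists that differs from
# every input at every position, so the optimal min distance is 4.
t, d = farthest_string(["ACGT", "ACGA", "TCGT"])
```

An ILP formulation instead introduces 0/1 variables selecting one alphabet symbol per position plus a distance variable to maximize, which is what makes long sequences tractable.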

  4. Small-Sample DIF Estimation Using SIBTEST, Cochran's Z, and Log-Linear Smoothing

    ERIC Educational Resources Information Center

    Lei, Pui-Wa; Li, Hongli

    2013-01-01

    Minimum sample sizes of about 200 to 250 per group are often recommended for differential item functioning (DIF) analyses. However, there are times when sample sizes for one or both groups of interest are smaller than 200 due to practical constraints. This study attempts to examine the performance of Simultaneous Item Bias Test (SIBTEST),…

  5. Pass rates on the American Board of Family Medicine Certification Exam by residency location and size.

    PubMed

    Falcone, John L; Middleton, Donald B

    2013-01-01

    The Accreditation Council for Graduate Medical Education (ACGME) sets residency performance standards for the American Board of Family Medicine Certification Examination. The aims of this study are to describe the compliance of residency programs with ACGME standards and to determine whether residency pass rates depend on program size and location. In this retrospective cohort study, residency performance from 2007 to 2011 was compared with the ACGME performance standards. Simple linear regression was performed to determine whether program pass rates depended on program size. Regional differences in performance were compared with χ² tests, using an α level of 0.05. Of 429 total residency programs, 205 (47.8%) violated ACGME performance standards. Linear regression showed that program pass rates were positively correlated with and dependent on program size (P < .001). The median pass rate per state was 86.4% (interquartile range, 82.0-90.8). χ² tests showed that states in the West performed better than the other three US Census Bureau regions (all P < .001). Approximately half of the family medicine training programs do not meet the ACGME examination performance standards. Pass rates are associated with residency program size, and regional variation occurs. These findings have the potential to affect ACGME policy and residency program application patterns.

  6. Linear score tests for variance components in linear mixed models and applications to genetic association studies.

    PubMed

    Qu, Long; Guennel, Tobias; Marshall, Scott L

    2013-12-01

    Following the rapid development of genome-scale genotyping technologies, genetic association mapping has become a popular tool to detect genomic regions responsible for certain (disease) phenotypes, especially in early-phase pharmacogenomic studies with limited sample size. In response to such applications, a good association test needs to be (1) applicable to a wide range of possible genetic models, including, but not limited to, the presence of gene-by-environment or gene-by-gene interactions and non-linearity of a group of marker effects, (2) accurate in small samples, fast to compute on the genomic scale, and amenable to large scale multiple testing corrections, and (3) reasonably powerful to locate causal genomic regions. The kernel machine method represented in linear mixed models provides a viable solution by transforming the problem into testing the nullity of variance components. In this study, we consider score-based tests by choosing a statistic linear in the score function. When the model under the null hypothesis has only one error variance parameter, our test is exact in finite samples. When the null model has more than one variance parameter, we develop a new moment-based approximation that performs well in simulations. Through simulations and analysis of real data, we demonstrate that the new test possesses most of the aforementioned characteristics, especially when compared to existing quadratic score tests or restricted likelihood ratio tests. © 2013, The International Biometric Society.

  7. Nonadiabatic effects in ultracold molecules via anomalous linear and quadratic Zeeman shifts.

    PubMed

    McGuyer, B H; Osborn, C B; McDonald, M; Reinaudi, G; Skomorowski, W; Moszynski, R; Zelevinsky, T

    2013-12-13

    Anomalously large linear and quadratic Zeeman shifts are measured for weakly bound ultracold 88Sr2 molecules near the intercombination-line asymptote. Nonadiabatic Coriolis coupling and the nature of long-range molecular potentials explain how this effect arises and scales roughly cubically with the size of the molecule. The linear shifts yield nonadiabatic mixing angles of the molecular states. The quadratic shifts are sensitive to nearby opposite f-parity states and exhibit fourth-order corrections, providing a stringent test of a state-of-the-art ab initio model.

  8. Sizing Single Cantilever Beam Specimens for Characterizing Facesheet/Core Peel Debonding in Sandwich Structure

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.

    2010-01-01

    This paper details part of an effort focused on the development of a standardized facesheet/core peel debonding test procedure. The purpose of the test is to characterize facesheet/core peel in sandwich structure, accomplished through the measurement of the critical strain energy release rate associated with the debonding process. The specific test method selected for the standardized test procedure utilizes a single cantilever beam (SCB) specimen configuration. The objective of the current work is to develop a method for establishing SCB specimen dimensions. This is achieved by imposing specific limitations on specimen dimensions, with the objectives of promoting a linear elastic specimen response, and simplifying the data reduction method required for computing the critical strain energy release rate associated with debonding. The sizing method is also designed to be suitable for incorporation into a standardized test protocol. Preliminary application of the resulting sizing method yields practical specimen dimensions.

  9. Assessment of geometrical characteristics of dental endodontic micro-instruments utilizing X-ray micro computed tomography

    PubMed Central

    Al JABBARI, Youssef S.; TSAKIRIDIS, Peter; ELIADES, George; AL-HADLAQ, Solaiman M.; ZINELIS, Spiros

    2012-01-01

Objective The aim of this study was to quantify the surface area, volume and specific surface area of endodontic files employing quantitative X-ray micro-computed tomography (µXCT). Material and Methods Three sets (six files each) of the Flex-Master Ni-Ti system (Nº 20, 25 and 30, taper .04) were utilized in this study. The files were scanned by µXCT. The surface area and volume of all files were determined from the cutting tip up to 16 mm. The data for surface area, volume and specific surface area were statistically evaluated using one-way ANOVA and SNK multiple comparison tests at α=0.05, employing the file size as the discriminating variable. The correlations of surface area and volume with nominal ISO sizes were tested employing linear regression analysis. Results The surface area and volume of Nº 30 files showed the highest values, followed by Nº 25 and Nº 20, and the differences were statistically significant. The Nº 20 files showed a significantly higher specific surface area compared to Nº 25 and Nº 30. The increase in surface area and volume towards higher file sizes follows a linear relationship with the nominal ISO sizes (r²=0.930 for surface area and r²=0.974 for volume). Results indicated that the surface area and volume demonstrated an almost linear increase, while the specific surface area exhibited an abrupt decrease towards higher sizes. Conclusions This study demonstrates that µXCT can be effectively applied to discriminate very small differences in the geometrical features of endodontic micro-instruments, while providing quantitative information about their geometrical properties. PMID:23329248

  10. Wave-induced hydraulic forces on submerged aquatic plants in shallow lakes.

    PubMed

    Schutten, J; Dainty, J; Davy, A J

    2004-03-01

Hydraulic pulling forces arising from wave action are likely to limit the presence of freshwater macrophytes in shallow lakes, particularly those with soft sediments. The aim of this study was to develop and experimentally test simple models, based on linear wave theory for deep water, to predict such forces on individual shoots. Models were derived theoretically from the action of the vertical component of the orbital velocity of the waves on shoot size. Alternative shoot-size descriptors (plan-form area or dry mass) and alternative distributions of the shoot material along its length (cylinder or inverted cone) were examined. Models were tested experimentally in a flume that generated sinusoidal waves with a period of 1 s and heights up to 0.2 m. Hydraulic pulling forces were measured on plastic replicas of Elodea sp. and on six species of real plants with varying morphology (Ceratophyllum demersum, Chara intermedia, Elodea canadensis, Myriophyllum spicatum, Potamogeton natans and Potamogeton obtusifolius). Measurements on the plastic replicas confirmed predicted relationships between force and wave phase, wave height and plant submergence depth. Predicted and measured forces were linearly related over all combinations of wave height and submergence depth. Measured forces on real plants were linearly related to theoretically derived predictors of the hydraulic forces (integrals of the products of the vertical orbital velocity raised to the power 1.5 and shoot size). The general applicability of the simplified wave equations used was confirmed. Overall, dry mass and plan-form area performed similarly well as shoot-size descriptors, as did the conical and cylindrical models of shoot distribution. The utility of the modelling approach in predicting hydraulic pulling forces from relatively simple plant and environmental measurements was validated over a wide range of forces, plant sizes and species.
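The force predictor described above can be sketched from standard deep-water linear wave theory, in which the vertical orbital velocity amplitude at depth z is (πH/T)·e^{kz} with k = ω²/g. The snippet below integrates that amplitude raised to the power 1.5 along a cylindrical (uniform) shoot; the wave and shoot dimensions are hypothetical, chosen only to resemble the flume conditions.

```python
# Minimal sketch of the theoretically derived force predictor: the vertical
# orbital velocity of a deep-water linear wave, raised to the power 1.5 and
# integrated over the submerged shoot. Parameters are hypothetical.
import numpy as np

g = 9.81
H, T = 0.2, 1.0                   # wave height (m) and period (s)
omega = 2 * np.pi / T
k = omega**2 / g                  # deep-water dispersion relation

def force_predictor(top_depth, shoot_length, n=200):
    """Integrate w_max**1.5 along a cylindrical (uniform) shoot."""
    z = np.linspace(-top_depth, -(top_depth + shoot_length), n)  # z = 0 at surface
    w_max = (np.pi * H / T) * np.exp(k * z)   # vertical orbital velocity amplitude
    return float(np.mean(w_max ** 1.5) * shoot_length)

# A shoot reaching closer to the surface sees faster orbital motion,
# so the predicted pulling force is larger.
print(force_predictor(0.05, 0.5), force_predictor(0.30, 0.5))
```

The exponential decay of the orbital velocity with depth is what makes submergence depth a strong predictor of the measured force.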

  11. An empirical test of Lanchester's square law: mortality during battles of the fire ant Solenopsis invicta

    PubMed Central

    Plowes, Nicola J.R; Adams, Eldridge S

    2005-01-01

    Lanchester's models of attrition describe casualty rates during battles between groups as functions of the numbers of individuals and their fighting abilities. Originally developed to describe human warfare, Lanchester's square law has been hypothesized to apply broadly to social animals as well, with important consequences for their aggressive behaviour and social structure. According to the square law, the fighting ability of a group is proportional to the square of the number of individuals, but rises only linearly with fighting ability of individuals within the group. By analyzing mortality rates of fire ants (Solenopsis invicta) fighting in different numerical ratios, we provide the first quantitative test of Lanchester's model for a non-human animal. Casualty rates of fire ants were not consistent with the square law; instead, group fighting ability was an approximately linear function of group size. This implies that the relative numbers of casualties incurred by two fighting groups are not strongly affected by relative group sizes and that battles do not disproportionately favour group size over individual prowess. PMID:16096093
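The two attrition laws contrasted above can be illustrated numerically. The sketch below, with hypothetical group sizes and unit fighting abilities (alpha, beta), integrates both sets of Lanchester ODEs with a simple Euler scheme; under the square law the quantity alpha·N1² - beta·N2² is conserved, so a 200-versus-100 battle leaves roughly sqrt(200² - 100²) ≈ 173 survivors on the larger side.

```python
# Euler integration of Lanchester's square and linear attrition laws
# (hypothetical group sizes; alpha = beta = 1 for illustration).

def battle(n1, n2, alpha=1.0, beta=1.0, law="square", dt=1e-3):
    """Integrate the attrition ODEs until one side is effectively annihilated."""
    while n1 > 0.5 and n2 > 0.5:
        if law == "square":          # aimed fire: losses scale with opponent numbers
            d1, d2 = beta * n2, alpha * n1
        else:                        # linear law: losses scale with the product
            d1, d2 = beta * n1 * n2, alpha * n1 * n2
        n1, n2 = n1 - d1 * dt, n2 - d2 * dt
    return max(n1, 0.0), max(n2, 0.0)

print(battle(200, 100, law="square"))   # survivors ~ sqrt(200**2 - 100**2) ~ 173
print(battle(200, 100, law="linear"))   # survivors ~ 200 - 100 = 100
```

The fire-ant result reported above, with group fighting ability roughly linear in group size, corresponds to the second regime rather than the square law.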

  12. The Generation Rate of Respirable Dust from Cutting Fiber Cement Siding Using Different Tools

    PubMed Central

    Qi, Chaolong; Echt, Alan; Gressel, Michael G

    2017-01-01

This article describes the evaluation of the generation rate of respirable dust (GAPS, defined as the mass of respirable dust generated per unit linear length cut) from cutting fiber cement siding using different tools in a laboratory testing system. We used an aerodynamic particle sizer spectrometer (APS) to continuously monitor the real-time size distributions of the dust throughout cutting tests with a variety of tools, and calculated the generation rate of respirable dust for each testing condition using the size distribution data. The test results verify that power shears provided an almost dust-free operation, with a GAPS of 0.006 gram meter−1 (g m−1) at the testing condition. For the same power saws, cuts using saw blades with more teeth generated more respirable dust. Using the same blade for all four miter saws tested in this study, a positive linear correlation was found between a saw’s blade rotating speed and its dust generation rate. In addition, a circular saw running at the highest blade rotating speed of 9068 RPM generated the greatest amount of dust. All the miter saws generated less dust in the ‘chopping’ mode than in the ‘chopping and sliding’ mode. For the tested saws, GAPS consistently decreased with increases in the saw cutting feed rate and the number of boards in the stack. All the test results indicate that fewer cutting interactions between the saw blade’s teeth and the siding board per unit linear length of cut tend to result in a lower generation rate of respirable dust. These results may help guide optimal operation in practice and future tool development aimed at minimizing dust generation while producing a satisfactory cut. PMID:28395343
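The GAPS calculation described above (respirable mass divided by linear length cut) can be sketched from size-distribution data. In the snippet below the channel diameters, particle counts, particle density, and cut length are all hypothetical, and the ISO/EN respirable convention is approximated by its cumulative-lognormal form (median 4.25 µm, GSD 1.5); a real APS reports concentrations that must first be converted to collected mass.

```python
# Sketch: respirable mass per unit length cut, from hypothetical APS channels.
import numpy as np
from math import log
from scipy.stats import norm

density = 2.0e3                                    # particle density, kg/m^3 (assumed)
d = np.array([1.0, 2.5, 4.0, 7.0, 10.0]) * 1e-6   # channel midpoints, m (assumed)
counts = np.array([5e7, 3e7, 1e7, 4e6, 1e6])      # particles collected per cut (assumed)

# Respirable convention approximated as a cumulative lognormal:
# fraction of particles of aerodynamic diameter d counted as respirable.
respirable = 1 - norm.cdf(np.log(d / 4.25e-6) / log(1.5))

mass = counts * density * np.pi / 6 * d**3        # spherical-particle mass, kg
cut_length = 3.0                                  # metres of siding cut (assumed)
gaps = (respirable * mass).sum() / cut_length * 1e3   # grams per metre
print(f"GAPS = {gaps:.2e} g/m")
```

Larger channels contribute more mass per particle but a smaller respirable fraction, which is why the size distribution, not just total dust, determines GAPS.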

  13. The Generation Rate of Respirable Dust from Cutting Fiber Cement Siding Using Different Tools.

    PubMed

    Qi, Chaolong; Echt, Alan; Gressel, Michael G

    2017-03-01

This article describes the evaluation of the generation rate of respirable dust (GAPS, defined as the mass of respirable dust generated per unit linear length cut) from cutting fiber cement siding using different tools in a laboratory testing system. We used an aerodynamic particle sizer spectrometer (APS) to continuously monitor the real-time size distributions of the dust throughout cutting tests with a variety of tools, and calculated the generation rate of respirable dust for each testing condition using the size distribution data. The test results verify that power shears provided an almost dust-free operation, with a GAPS of 0.006 g m-1 at the testing condition. For the same power saws, cuts using saw blades with more teeth generated more respirable dust. Using the same blade for all four miter saws tested in this study, a positive linear correlation was found between a saw's blade rotating speed and its dust generation rate. In addition, a circular saw running at the highest blade rotating speed of 9068 rpm generated the greatest amount of dust. All the miter saws generated less dust in the 'chopping' mode than in the 'chopping and sliding' mode. For the tested saws, GAPS consistently decreased with increases in the saw cutting feed rate and the number of boards in the stack. All the test results indicate that fewer cutting interactions between the saw blade's teeth and the siding board per unit linear length of cut tend to result in a lower generation rate of respirable dust. These results may help guide optimal operation in practice and future tool development aimed at minimizing dust generation while producing a satisfactory cut. Published by Oxford University Press on behalf of The British Occupational Hygiene Society 2017.

  14. The linear sizes tolerances and fits system modernization

    NASA Astrophysics Data System (ADS)

    Glukhov, V. I.; Grinevich, V. A.; Shalay, V. V.

    2018-04-01

The study addresses the pressing problem of ensuring the quality of technical products through the tolerancing of component parts. The aim of the paper is to propose improvements to the system of linear size tolerances and dimensional fits in the international standard ISO 286-1. The tasks of the work are, first, to include among linear sizes the linear coordinating sizes that determine the location of a part's elements and, second, to justify the basic deviation of the tolerance interval for an element's linear size. Geometrical modeling of real part elements, together with analytical and experimental methods, is used in the research. It is shown that linear coordinates are the dimensional basis of the elements' linear sizes. To standardize the accuracy of linear coordinating sizes in all accuracy classes, it is sufficient to select in the standardized tolerance system only one tolerance interval with symmetrical deviations: Js for internal dimensional elements (holes) and js for external elements (shafts). The basic deviation of this coordinating tolerance is the mean zero deviation, which coincides with the nominal value of the coordinating size. The other intervals of the tolerance system are retained for standardizing the accuracy of the elements' linear sizes, with a fundamental change to the basic deviation of all tolerance intervals: it becomes the limit deviation corresponding to the maximum-material limit of the element, EI (the lower deviation) for the sizes of internal elements (holes) and es (the upper deviation) for the sizes of external elements (shafts). It is the maximum-material sizes that are involved in the mating of shafts and holes and that determine the type of fit.
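The distinction drawn above between a symmetric basic deviation for coordinating sizes and a material-maximum basic deviation for element sizes can be made concrete with a small sketch. The tolerance values in micrometres below are hypothetical, not taken from the ISO 286-1 IT tables.

```python
# Illustration of the two kinds of basic deviation discussed above:
# a symmetric Js/js interval centred on the nominal, versus a one-sided
# interval anchored at the material-maximum limit (EI = 0 for an H hole).

def limits_symmetric(nominal_mm, tol_um):
    """Js/js: tolerance interval centred on the nominal (mean deviation zero)."""
    half = tol_um / 2000.0                    # um -> mm, split symmetrically
    return nominal_mm - half, nominal_mm + half

def limits_hole_H(nominal_mm, tol_um):
    """H hole: lower deviation EI = 0, so the nominal is the material-maximum limit."""
    return nominal_mm, nominal_mm + tol_um / 1000.0

print(limits_symmetric(40.0, 25))   # e.g. 40 Js: 39.9875 .. 40.0125 mm
print(limits_hole_H(40.0, 25))      # e.g. 40 H:  40.0000 .. 40.0250 mm
```

The paper's proposal amounts to using the first form for coordinating sizes and anchoring all other tolerance intervals at the material-maximum limit, as in the second form.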

  15. Effect of various binning methods and ROI sizes on the accuracy of the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of texture features at HRCT

    NASA Astrophysics Data System (ADS)

    Kim, Namkug; Seo, Joon Beom; Sung, Yu Sub; Park, Bum-Woo; Lee, Youngjoo; Park, Seong Hoon; Lee, Young Kyung; Kang, Suk-Ho

    2008-03-01

To determine the optimal binning method and ROI size for an automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of textural analysis at HRCT, six hundred circular regions of interest (ROIs) with 10-, 20-, and 30-pixel diameters, 100 ROIs for each of six regional disease patterns (normal, NL; ground-glass opacity, GGO; reticular opacity, RO; honeycombing, HC; emphysema, EMPH; and consolidation, CONS), were marked by an experienced radiologist on HRCT images. Histogram (mean) and co-occurrence matrix (mean and SD of angular second moment, contrast, correlation, entropy, and inverse difference moment) features were employed to test the binning and ROI effects. To find the optimal binning, variable-bin-size linear binning (LB; bin size Q: 4-30, 32, 64, 128, 144, 196, 256, 384) and non-linear binning (NLB; Q: 4-30) methods (K-means and Fuzzy C-means clustering) were tested. For automated classification, an SVM classifier was implemented. To assess cross-validation of the system, a five-fold method was used. Each test was repeated twenty times. Overall accuracies with every combination of ROI and binning sizes were statistically compared. For small bin sizes (Q <= 10), NLB showed significantly better accuracy than LB, and K-means NLB (Q = 26) was statistically significantly better than every LB. For the 30x30 ROI size and most bin sizes, the K-means method performed better than the other NLB and LB methods. With the optimal binning and other parameters set, the overall sensitivity of the classifier was 92.85%.
The sensitivity and specificity of the system for each class were as follows: NL, 95%, 97.9%; GGO, 80%, 98.9%; RO, 85%, 96.9%; HC, 94.7%, 97%; EMPH, 100%, 100%; and CONS, 100%, 100%, respectively. We thus determined the optimal binning method and ROI size for the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of texture features at HRCT.
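The non-linear binning step compared above can be sketched as a one-dimensional K-means over ROI gray levels: instead of dividing the intensity range into Q equal-width (linear) bins, the bins follow the data. The intensity mixture, Q, and ROI size below are synthetic illustrations, not the study's HRCT data.

```python
# Sketch of K-means non-linear binning of ROI gray levels (synthetic data).
import numpy as np

def kmeans_bins(values, q, iters=50, seed=0):
    """1-D K-means on gray levels; returns one bin label per pixel."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, q, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(q):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels

# Synthetic bimodal ROI: air-like and soft-tissue-like attenuation values
roi = np.concatenate([np.random.default_rng(1).normal(-900, 30, 300),
                      np.random.default_rng(2).normal(-100, 40, 300)])
labels = kmeans_bins(roi, q=4)
print(np.bincount(labels, minlength=4))   # pixels per bin
```

Co-occurrence features would then be computed on these labels; with few bins, data-driven bin edges preserve more texture contrast than equal-width ones, consistent with the NLB advantage reported for Q <= 10.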

  16. An Analytic Solution to the Computation of Power and Sample Size for Genetic Association Studies under a Pleiotropic Mode of Inheritance.

    PubMed

    Gordon, Derek; Londono, Douglas; Patel, Payal; Kim, Wonkuk; Finch, Stephen J; Heiman, Gary A

    2016-01-01

    Our motivation here is to calculate the power of 3 statistical tests used when there are genetic traits that operate under a pleiotropic mode of inheritance and when qualitative phenotypes are defined by use of thresholds for the multiple quantitative phenotypes. Specifically, we formulate a multivariate function that provides the probability that an individual has a vector of specific quantitative trait values conditional on having a risk locus genotype, and we apply thresholds to define qualitative phenotypes (affected, unaffected) and compute penetrances and conditional genotype frequencies based on the multivariate function. We extend the analytic power and minimum-sample-size-necessary (MSSN) formulas for 2 categorical data-based tests (genotype, linear trend test [LTT]) of genetic association to the pleiotropic model. We further compare the MSSN of the genotype test and the LTT with that of a multivariate ANOVA (Pillai). We approximate the MSSN for statistics by linear models using a factorial design and ANOVA. With ANOVA decomposition, we determine which factors most significantly change the power/MSSN for all statistics. Finally, we determine which test statistics have the smallest MSSN. In this work, MSSN calculations are for 2 traits (bivariate distributions) only (for illustrative purposes). We note that the calculations may be extended to address any number of traits. Our key findings are that the genotype test usually has lower MSSN requirements than the LTT. More inclusive thresholds (top/bottom 25% vs. top/bottom 10%) have higher sample size requirements. The Pillai test has a much larger MSSN than both the genotype test and the LTT, as a result of sample selection. With these formulas, researchers can specify how many subjects they must collect to localize genes for pleiotropic phenotypes. © 2017 S. Karger AG, Basel.

  17. Buckling Design and Analysis of a Payload Fairing One-Sixth Cylindrical Arc-Segment Panel

    NASA Technical Reports Server (NTRS)

    Kosareo, Daniel N.; Oliver, Stanley T.; Bednarcyk, Brett A.

    2013-01-01

Design and analysis results are reported for a panel that is a one-sixth arc-segment of a full 33-ft diameter cylindrical barrel section of a payload fairing structure. Six such panels could be used to construct the fairing barrel, and, as such, compression buckling testing of a one-sixth arc-segment panel would serve as a validation test of the buckling analyses used to design the fairing panels. In this report, linear and nonlinear buckling analyses have been performed using finite element software for one-sixth arc-segment panels composed of aluminum honeycomb core with graphite/epoxy composite facesheets and an alternative fiber reinforced foam (FRF) composite sandwich design. The cross sections of both concepts were sized to represent realistic Space Launch System (SLS) payload fairing panels. Based on shell-based linear buckling analyses, smaller, more manageable buckling test panel dimensions were determined such that the panel would still be expected to buckle with a circumferential (as opposed to column-like) mode with significant separation between the first and second buckling modes. More detailed nonlinear buckling analyses were then conducted for honeycomb panels of various sizes using both Abaqus and ANSYS finite element codes, and for the smaller panel size, a solid-element-based finite element analysis was conducted. Finally, for the smaller FRF panel, nonlinear buckling analysis was performed wherein geometric imperfections measured from an actual manufactured FRF panel were included. It was found that the measured imperfections did not significantly affect the panel's predicted buckling response.

  18. Disk diffusion antimicrobial susceptibility testing of members of the family Legionellaceae including erythromycin-resistant variants of Legionella micdadei.

    PubMed Central

    Dowling, J N; McDevitt, D A; Pasculle, A W

    1984-01-01

Disk diffusion antimicrobial susceptibility testing of members of the family Legionellaceae was accomplished on buffered charcoal yeast extract agar by allowing the bacteria to grow for 6 h before placement of the disks, followed by an additional 42-h incubation period before the inhibitory zones were measured. This system was standardized by comparing the zone sizes with the MICs for 20 antimicrobial agents of nine bacterial strains in five Legionella species and of 19 laboratory-derived, erythromycin-resistant variants of Legionella micdadei. A high, linear correlation between zone size and MIC was found for erythromycin, trimethoprim, penicillin, ampicillin, carbenicillin, cephalothin, cefamandole, cefoxitin, moxalactam, chloramphenicol, vancomycin, and clindamycin. Disk susceptibility testing could be employed to screen Legionella isolates for resistance to any of these antimicrobial agents, of which only erythromycin is known to be efficacious in the treatment of legionellosis. With selected antibiotics, disk susceptibility patterns also appeared to accurately identify the legionellae to the species level. The range of the MICs of the legionellae for rifampin and the aminoglycosides was too small to determine whether the correlation of zone size with MIC was linear. However, laboratory-derived, high-level rifampin-resistant variants of L. micdadei demonstrated no inhibition zone around the rifampin disk, indicating that disk susceptibility testing would likely identify a rifampin-resistant clinical isolate. Of the antimicrobial agents tested, the only agents for which disk susceptibility testing was definitely not possible on buffered charcoal yeast extract agar were oxacillin, the tetracyclines, and the sulfonamides. PMID:6565706

  19. Simplified large African carnivore density estimators from track indices.

    PubMed

    Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J

    2016-01-01

The range, population size and trend of large carnivores are important parameters for assessing their status globally and planning conservation strategies. Linear models can be used to assess population sizes and trends of large carnivores from track-based surveys on suitable substrates. The conventional linear model with intercept may not pass through zero, yet may fit the data better than a linear model through the origin. We assess whether linear regression through the origin is more appropriate than linear regression with intercept for modelling large African carnivore densities from track indices. We performed simple linear regression with intercept and simple linear regression through the origin, and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, the Mean Square Residual and the Akaike Information Criterion to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β, and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and the Mean Square Residual. The Akaike Information Criterion showed that the linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas.
The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores from track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km² or higher. To improve the current models, we need independent data to validate them and data to test for a non-linear relationship between track indices and true density at low densities.
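The model comparison described above can be sketched directly: fit track density on carnivore density both through the origin and with an intercept, then compare residual fits and AIC. The data points below are simulated around the study's reported relation (track density = 3.26 × carnivore density) purely for illustration; they are not the survey data.

```python
# Sketch: regression through the origin vs. regression with intercept.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.3, 5.0, 25)              # carnivores / 100 km^2 (simulated)
y = 3.26 * x + rng.normal(0, 0.8, 25)      # track density, with noise

def fit(x, y, intercept):
    X = np.column_stack([x, np.ones_like(x)]) if intercept else x[:, None]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(((y - X @ beta) ** 2).sum())
    k = X.shape[1] + 1                     # slope(s) [+ intercept] + error variance
    aic = len(y) * np.log(rss / len(y)) + 2 * k
    return beta, rss, aic

beta0, rss0, aic0 = fit(x, y, intercept=False)
beta1, rss1, aic1 = fit(x, y, intercept=True)
print(f"through origin: slope={beta0[0]:.2f}, AIC={aic0:.1f}")
print(f"with intercept: slope={beta1[0]:.2f}, AIC={aic1:.1f}")
```

Adding an intercept can only reduce the residual sum of squares, which is why the comparison relies on the penalized criterion (AIC) and the confidence interval for the intercept rather than on raw fit alone.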

  20. Correlation of Respirator Fit Measured on Human Subjects and a Static Advanced Headform

    PubMed Central

    Bergman, Michael S.; He, Xinjian; Joseph, Michael E.; Zhuang, Ziqing; Heimbuch, Brian K.; Shaffer, Ronald E.; Choe, Melanie; Wander, Joseph D.

    2015-01-01

    This study assessed the correlation of N95 filtering face-piece respirator (FFR) fit between a Static Advanced Headform (StAH) and 10 human test subjects. Quantitative fit evaluations were performed on test subjects who made three visits to the laboratory. On each visit, one fit evaluation was performed on eight different FFRs of various model/size variations. Additionally, subject breathing patterns were recorded. Each fit evaluation comprised three two-minute exercises: “Normal Breathing,” “Deep Breathing,” and again “Normal Breathing.” The overall test fit factors (FF) for human tests were recorded. The same respirator samples were later mounted on the StAH and the overall test manikin fit factors (MFF) were assessed utilizing the recorded human breathing patterns. Linear regression was performed on the mean log10-transformed FF and MFF values to assess the relationship between the values obtained from humans and the StAH. This is the first study to report a positive correlation of respirator fit between a headform and test subjects. The linear regression by respirator resulted in R2 = 0.95, indicating a strong linear correlation between FF and MFF. For all respirators the geometric mean (GM) FF values were consistently higher than those of the GM MFF. For 50% of respirators, GM FF and GM MFF values were significantly different between humans and the StAH. For data grouped by subject/respirator combinations, the linear regression resulted in R2 = 0.49. A weaker correlation (R2 = 0.11) was found using only data paired by subject/respirator combination where both the test subject and StAH had passed a real-time leak check before performing the fit evaluation. For six respirators, the difference in passing rates between the StAH and humans was < 20%, while two respirators showed a difference of 29% and 43%. For data by test subject, GM FF and GM MFF values were significantly different for 40% of the subjects. 
Overall, the advanced headform system has potential for assessing fit for some N95 FFR model/sizes. PMID:25265037

  1. Examination of a size-change test for photovoltaic encapsulation materials

    NASA Astrophysics Data System (ADS)

    Miller, David C.; Gu, Xiaohong; Ji, Liang; Kelly, George; Nickel, Nichole; Norum, Paul; Shioda, Tsuyoshi; Tamizhmani, Govindasamy; Wohlgemuth, John H.

    2012-10-01

We examine a proposed test standard that can be used to evaluate the maximum representative change in linear dimensions of sheet encapsulation products for photovoltaic modules (resulting from their thermal processing). The proposed protocol is part of a series of material-level tests being developed within Working Group 2 of Technical Committee 82 of the International Electrotechnical Commission. The characterization tests are being developed to aid module design (by identifying the essential characteristics that should be communicated on a datasheet), quality control (via internal material acceptance and process control), and failure analysis. Discovery and interlaboratory experiments were used to select particular parameters for the size-change test. The choice of a sand substrate and aluminum carrier is explored relative to other options. The temperature uniformity of ±5°C for the substrate was confirmed using thermography. Considerations related to the heating device (hot plate or oven) are explored. The time duration of 5 minutes was identified from the time-series photographic characterization of material specimens (EVA, ionomer, PVB, TPO, and TPU). The test procedure was revised to account for observed effects of size and edges. The interlaboratory study identified typical size-change characteristics and also verified an absolute reproducibility of ±5% between laboratories.

  2. Predictive equations for lung volumes from computed tomography for size matching in pulmonary transplantation.

    PubMed

    Konheim, Jeremy A; Kon, Zachary N; Pasrija, Chetan; Luo, Qingyang; Sanchez, Pablo G; Garcia, Jose P; Griffith, Bartley P; Jeudy, Jean

    2016-04-01

    Size matching for lung transplantation is widely accomplished using height comparisons between donors and recipients. This gross approximation allows for wide variation in lung size and, potentially, size mismatch. Three-dimensional computed tomography (3D-CT) volumetry comparisons could offer more accurate size matching. Although recipient CT scans are universally available, donor CT scans are rarely performed. Therefore, predicted donor lung volumes could be used for comparison to measured recipient lung volumes, but no such predictive equations exist. We aimed to use 3D-CT volumetry measurements from a normal patient population to generate equations for predicted total lung volume (pTLV), predicted right lung volume (pRLV), and predicted left lung volume (pLLV), for size-matching purposes. Chest CT scans of 400 normal patients were retrospectively evaluated. 3D-CT volumetry was performed to measure total lung volume, right lung volume, and left lung volume of each patient, and predictive equations were generated. The fitted model was tested in a separate group of 100 patients. The model was externally validated by comparison of total lung volume with total lung capacity from pulmonary function tests in a subset of those patients. Age, gender, height, and race were independent predictors of lung volume. In the test group, there were strong linear correlations between predicted and actual lung volumes measured by 3D-CT volumetry for pTLV (r = 0.72), pRLV (r = 0.72), and pLLV (r = 0.69). A strong linear correlation was also observed when comparing pTLV and total lung capacity (r = 0.82). We successfully created a predictive model for pTLV, pRLV, and pLLV. These may serve as reference standards and predict donor lung volume for size matching in lung transplantation. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
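Building a predictive equation like the one described above amounts to an ordinary least squares fit of measured volume on the reported predictors. The sketch below regresses a simulated "measured" total lung volume on age, sex, and height (race omitted for brevity); all data and coefficients are invented, since the paper's fitted equations are not reproduced in the abstract.

```python
# Sketch: fitting a predictive total-lung-volume equation by least squares
# on simulated data (hypothetical effect sizes, not the paper's model).
import numpy as np

rng = np.random.default_rng(7)
n = 400
height = rng.normal(170, 10, n)               # cm
age = rng.uniform(20, 80, n)                  # years
male = rng.integers(0, 2, n).astype(float)    # sex indicator

# Simulated "measured" 3D-CT total lung volume in litres
tlv = -4.0 + 0.055 * height - 0.005 * age + 0.9 * male + rng.normal(0, 0.4, n)

X = np.column_stack([np.ones(n), height, age, male])
coef, *_ = np.linalg.lstsq(X, tlv, rcond=None)
print("pTLV = {:.2f} + {:.3f}*height + {:.3f}*age + {:.2f}*male".format(*coef))

# Predicted volume for a hypothetical 175 cm, 45-year-old male donor
print(float(np.array([1, 175, 45, 1]) @ coef))
```

The fitted coefficients would then be frozen as the "predicted donor volume" equation and compared against a recipient's measured 3D-CT volume for size matching.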

  3. Advanced Statistical Analyses to Reduce Inconsistency of Bond Strength Data.

    PubMed

    Minamino, T; Mine, A; Shintani, A; Higashi, M; Kawaguchi-Uemura, A; Kabetani, T; Hagino, R; Imai, D; Tajiri, Y; Matsumoto, M; Yatani, H

    2017-11-01

    This study was designed to clarify the interrelationship of factors that affect the value of microtensile bond strength (µTBS), focusing on nondestructive testing by which information of the specimens can be stored and quantified. µTBS test specimens were prepared from 10 noncarious human molars. Six factors of µTBS test specimens were evaluated: presence of voids at the interface, X-ray absorption coefficient of resin, X-ray absorption coefficient of dentin, length of dentin part, size of adhesion area, and individual differences of teeth. All specimens were observed nondestructively by optical coherence tomography and micro-computed tomography before µTBS testing. After µTBS testing, the effect of these factors on µTBS data was analyzed by the general linear model, linear mixed effects regression model, and nonlinear regression model with 95% confidence intervals. By the general linear model, a significant difference in individual differences of teeth was observed ( P < 0.001). A significantly positive correlation was shown between µTBS and length of dentin part ( P < 0.001); however, there was no significant nonlinearity ( P = 0.157). Moreover, a significantly negative correlation was observed between µTBS and size of adhesion area ( P = 0.001), with significant nonlinearity ( P = 0.014). No correlation was observed between µTBS and X-ray absorption coefficient of resin ( P = 0.147), and there was no significant nonlinearity ( P = 0.089). Additionally, a significantly positive correlation was observed between µTBS and X-ray absorption coefficient of dentin ( P = 0.022), with significant nonlinearity ( P = 0.036). A significant difference was also observed between the presence and absence of voids by linear mixed effects regression analysis. Our results showed correlations between various parameters of tooth specimens and µTBS data. 
To evaluate the performance of the adhesive more precisely, the effect of tooth variability and a method to reduce variation in bond strength values should also be considered.

  4. Mental chronometry with simple linear regression.

    PubMed

    Chen, J Y

    1997-10-01

Typically, mental chronometry is performed by introducing an independent variable postulated to affect selectively some stage of a presumed multistage process. However, the effect could be a global one that spreads proportionally over all stages of the process. Currently, there is no established method to test this possibility, although simple linear regression might serve the purpose. In the present study, the regression approach was tested with tasks (memory scanning and mental rotation) that, according to the dominant theories, involve a selective effect, and with a task (word superiority effect) that involves a global effect. The results indicate that (1) the manipulation of the size of a memory set or of angular disparity affects the intercept of the regression function that relates the times for memory scanning with different set sizes or for mental rotation with different angular disparities, and (2) the manipulation of context affects the slope of the regression function that relates the times for detecting a target character under word and nonword conditions. These results ratify the regression approach as a useful method for doing mental chronometry.
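The logic of the regression diagnostic can be illustrated with invented reaction times: a selective (stage-specific) additive effect shifts the intercept of the regression of one condition's times on the other's, while a global proportional effect changes the slope. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical mean reaction times (ms) for memory-set sizes 1..6; the slope
# and the two effect magnitudes are invented for illustration.
set_size = np.arange(1, 7)
rt_base = 400 + 38 * set_size           # baseline condition

rt_selective = rt_base + 80             # selective effect: one stage slowed by 80 ms
rt_global = 1.2 * rt_base               # global effect: every stage stretched by 20%

def fit_line(x, y):
    """Least-squares slope and intercept of y regressed on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Regress the affected condition's times on the baseline times across set sizes.
s_sel, i_sel = fit_line(rt_base, rt_selective)
s_glob, i_glob = fit_line(rt_base, rt_global)

print(f"selective: slope={s_sel:.2f}, intercept={i_sel:.1f} ms")
print(f"global:    slope={s_glob:.2f}, intercept={i_glob:.1f} ms")
```

A selective additive effect leaves the slope at 1 and moves only the intercept (here by 80 ms), whereas a global proportional effect changes the slope (here to 1.2) with an intercept near zero; this is the signature the study exploits.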

  5. Program Flow Analyzer. Volume 3

    DTIC Science & Technology

    1984-08-01

    metrics are defined using these basic terms. Of interest is another measure for the size of the program, called the volume: V = N × log₂ n. The unit of...correlated to actual data and most useful for test. The formula describing difficulty may be expressed as: D = (n1 × N2) / (2 × n2) = 1/L. Difficulty, then, is the...linearly independent program paths through any program graph. A maximal set of these linearly independent paths, called a "basis set," can always be found

  6. Wave‐induced Hydraulic Forces on Submerged Aquatic Plants in Shallow Lakes

    PubMed Central

    SCHUTTEN, J.; DAINTY, J.; DAVY, A. J.

    2004-01-01

    • Background and Aims Hydraulic pulling forces arising from wave action are likely to limit the presence of freshwater macrophytes in shallow lakes, particularly those with soft sediments. The aim of this study was to develop and test experimentally simple models, based on linear wave theory for deep water, to predict such forces on individual shoots. • Methods Models were derived theoretically from the action of the vertical component of the orbital velocity of the waves on shoot size. Alternative shoot‐size descriptors (plan‐form area or dry mass) and alternative distributions of the shoot material along its length (cylinder or inverted cone) were examined. Models were tested experimentally in a flume that generated sinusoidal waves which lasted 1 s and were up to 0·2 m high. Hydraulic pulling forces were measured on plastic replicas of Elodea sp. and on six species of real plants with varying morphology (Ceratophyllum demersum, Chara intermedia, Elodea canadensis, Myriophyllum spicatum, Potamogeton natans and Potamogeton obtusifolius). • Key Results Measurements on the plastic replicas confirmed predicted relationships between force and wave phase, wave height and plant submergence depth. Predicted and measured forces were linearly related over all combinations of wave height and submergence depth. Measured forces on real plants were linearly related to theoretically derived predictors of the hydraulic forces (integrals of the products of the vertical orbital velocity raised to the power 1·5 and shoot size). • Conclusions The general applicability of the simplified wave equations used was confirmed. Overall, dry mass and plan‐form area performed similarly well as shoot‐size descriptors, as did the conical or cylindrical models of shoot distribution. The utility of the modelling approach in predicting hydraulic pulling forces from relatively simple plant and environmental measurements was validated over a wide range of forces, plant sizes and species. 
PMID:14988098

  7. Risk Factors for Bovine Tuberculosis (bTB) in Cattle in Ethiopia.

    PubMed

    Dejene, Sintayehu W; Heitkönig, Ignas M A; Prins, Herbert H T; Lemma, Fitsum A; Mekonnen, Daniel A; Alemu, Zelalem E; Kelkay, Tessema Z; de Boer, Willem F

    2016-01-01

    Bovine tuberculosis (bTB) infection is generally correlated with individual cattle's age, sex, body condition, and with husbandry practices such as herd composition, cattle movement, herd size, production system and proximity to wildlife-including bTB maintenance hosts. We tested the correlation between those factors and the prevalence of bTB, which is endemic in Ethiopia's highland cattle, in the Afar Region and Awash National Park between November 2013 and April 2015. A total of 2550 cattle from 102 herds were tested for bTB presence using the comparative intradermal tuberculin test (CITT). Data on herd structure, herd movement, management and production system, livestock transfer, and contact with wildlife were collected using semi-structured interviews with cattle herders and herd owners. The individual overall prevalence of cattle bTB was 5.5%, with a herd prevalence of 46%. Generalized Linear Mixed Models with a random herd-effect were used to analyse risk factors of cattle reactors within each herd. The older the age of the cattle and the lower the body condition the higher the chance of a positive bTB test result, but sex, lactation status and reproductive status were not correlated with bTB status. At herd level, General Linear Models showed that pastoral production systems with transhumant herds had a higher bTB prevalence than sedentary herds. A model averaging analysis identified herd size, contact with wildlife, and the interaction of herd size and contact with wildlife as significant risk factors for bTB prevalence in cattle. A subsequent Structural Equation Model showed that the probability of contact with wildlife was influenced by herd size, through herd movement. Larger herds moved more and grazed in larger areas, hence the probability of grazing in an area with wildlife and contact with either infected cattle or infected wildlife hosts increased, enhancing the chances for bTB infection. 
Therefore, future bTB control strategies in cattle in pastoral areas should consider herd size and movement as important risk factors.

  8. Examination of the Chayes-Kruskal procedure for testing correlations between proportions

    USGS Publications Warehouse

    Kork, J.O.

    1977-01-01

    The Chayes-Kruskal procedure for testing correlations between proportions uses a linear approximation to the actual closure transformation to provide a null value, pij, against which an observed closed correlation coefficient, rij, can be tested. It has been suggested that a significant difference between pij and rij would indicate a nonzero covariance relationship between the ith and jth open variables. In this paper, the linear approximation to the closure transformation is described in terms of a matrix equation. Examination of the solution set of this equation shows that estimation of, or even the identification of, significant nonzero open correlations is essentially impossible even if the number of variables and the sample size are large. The method of solving the matrix equation is described in the appendix. ?? 1977 Plenum Publishing Corporation.

  9. Voltage and pace-capture mapping of linear ablation lesions overestimates chronic ablation gap size.

    PubMed

    O'Neill, Louisa; Harrison, James; Chubb, Henry; Whitaker, John; Mukherjee, Rahul K; Bloch, Lars Ølgaard; Andersen, Niels Peter; Dam, Høgni; Jensen, Henrik K; Niederer, Steven; Wright, Matthew; O'Neill, Mark; Williams, Steven E

    2018-04-26

    Conducting gaps in lesion sets are a major reason for failure of ablation procedures. Voltage mapping and pace-capture have been proposed for intra-procedural identification of gaps. We aimed to compare gap size measured acutely and chronically post-ablation to macroscopic gap size in a porcine model. Intercaval linear ablation was performed in eight Göttingen minipigs with a deliberate gap of ∼5 mm left in the ablation line. Gap size was measured by interpolating ablation contact force values between ablation tags and thresholding at a low force cut-off of 5 g. Bipolar voltage mapping and pace-capture mapping along the length of the line were performed immediately, and at 2 months, post-ablation. Animals were euthanized and gap sizes were measured macroscopically. Voltage thresholds to define scar were determined by receiver operating characteristic analysis as <0.56 mV (acutely) and <0.62 mV (chronically). Taking the macroscopic gap size as the gold standard, errors in gap measurements were determined for voltage, pace-capture, and ablation contact force maps. All modalities overestimated chronic gap size, by 1.4 ± 2.0 mm (ablation contact force map), 5.1 ± 3.4 mm (pace-capture), and 9.5 ± 3.8 mm (voltage mapping). The error in ablation contact force map gap measurements was significantly smaller than for voltage mapping (P = 0.003, Tukey's multiple comparisons test). Chronically, voltage mapping and pace-capture mapping overestimated macroscopic gap size by 11.9 ± 3.7 and 9.8 ± 3.5 mm, respectively. Bipolar voltage and pace-capture mapping overestimate the size of chronic gaps in linear ablation lesions. The most accurate estimation of chronic gap size was achieved by analysis of catheter-myocardium contact force during ablation.

  10. A Quantitative Test of the Applicability of Independent Scattering to High Albedo Planetary Regoliths

    NASA Technical Reports Server (NTRS)

    Goguen, Jay D.

    1993-01-01

    To test the hypothesis that the independent scattering calculation widely used to model radiative transfer in atmospheres and clouds will give a useful approximation to the intensity and linear polarization of visible light scattered from an optically thick surface of transparent particles, laboratory measurements are compared to the independent scattering calculation for a surface of spherical particles with known optical constants and size distribution. Because the shape, size distribution, and optical constants of the particles are known, the independent scattering calculation is completely determined and the only remaining unknown is the net effect of the close packing of the particles in the laboratory sample surface...

  11. Linear, non-linear and thermal properties of single crystal of LHMHCl

    NASA Astrophysics Data System (ADS)

    Kulshrestha, Shobha; Shrivastava, A. K.

    2018-05-01

    A single crystal of the amino acid L-histidine monohydrochloride was grown by the slow evaporation technique at room temperature. Crystals of high optical quality and appropriate size were obtained under optimized growth conditions, and the grown crystals were transparent. The crystals were characterized by solubility testing and UV-Visible spectroscopy, from which the optical band gap (Eg) was determined. From the optical data, the absorption coefficient (α), extinction coefficient (k), refractive index (n), and dielectric constant (ɛ) were calculated. These optical constants indicate favorable conditions for photonic devices. A second harmonic generation (NLO) test showed green light emission, confirming that the crystal has properties suitable for laser applications. The thermal stability of the grown crystal was confirmed by TG/DTA.

  12. Transient Characterization of Type B Particles in a Transport Riser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadle, L.J.; Monazam, E.R.; Mei, J.S.

    2007-01-01

    Simple and rapid dynamic tests were used to evaluate the fluid dynamic behavior of granular materials in the transport regime. Particles with densities ranging from 189 to 2,500 kg/m3 and Sauter mean sizes from 61 to 812 μm were tested in a 0.305 m diameter, 15.5 m tall circulating fluidized bed (CFB) riser. The transient tests involved the abrupt stoppage of solids flow for each granular material over a wide range of gas flow rates. The riser emptying time was linearly related to the Froude number in each of three different operating regimes. The flow structure along the height of the riser followed a distinct pattern as tracked through incremental pressures. These results are discussed to better understand the transformations that take place when operating over various regimes. During the transients, the particle size distribution was measured. The effects of pressure, particle size, and density on test performance are also presented.

  13. Increased statistical power with combined independent randomization tests used with multiple-baseline design.

    PubMed

    Tyrrell, Pascal N; Corey, Paul N; Feldman, Brian M; Silverman, Earl D

    2013-06-01

    Physicians often assess the effectiveness of treatments on a small number of patients. Multiple-baseline designs (MBDs), based on the Wampold-Worsham (WW) method of randomization and applied to four subjects, have relatively low power. Our objective was to propose another approach with greater power that does not suffer from the time requirements of the WW method applied to a greater number of subjects. The power of a design that involves the combination of two four-subject MBDs was estimated using computer simulation and compared with the four- and eight-subject designs. The effect of a delayed linear response to treatment on the power of the test was also investigated. Power was found to be adequate (>80%) for a standardized mean difference (SMD) greater than 0.8. The effect size associated with 80% power from combined tests was smaller than that of the single four-subject MBD (SMD=1.3) and comparable with the eight-subject MBD (SMD=0.6). A delayed linear response to the treatment resulted in important reductions in power (20-35%). By combining two four-subject MBD tests, an investigator can detect better effect sizes (SMD=0.8) and be able to complete a comparatively timelier and feasible study. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Ultrasonic linear array validation via concrete test blocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoegh, Kyle, E-mail: hoeg0021@umn.edu; Khazanovich, Lev, E-mail: hoeg0021@umn.edu; Ferraro, Chris

    2015-03-31

    Oak Ridge National Laboratory (ORNL) comparatively evaluated the ability of a number of NDE techniques to generate an image of the volume of 6.5′ X 5.0′ X 10″ concrete specimens fabricated at the Florida Department of Transportation (FDOT) NDE Validation Facility in Gainesville, Florida. These test blocks were fabricated to test the ability of various NDE methods to characterize various placements and sizes of rebar as well as simulated cracking and non-consolidation flaws. The first version of the ultrasonic linear array device, MIRA [version 1], was one of 7 different NDE instruments used to characterize the specimens. This paper deals with the ability of this equipment to determine subsurface characteristics such as reinforcing steel relative size, concrete thickness, irregularities, and inclusions using Kirchhoff-based migration techniques. The ability of individual synthetic aperture focusing technique (SAFT) B-scan cross sections resulting from self-contained scans is compared with various processing, analysis, and interpretation methods using the various features fabricated in the specimens for validation. The performance is detailed, especially with respect to the limitations and implications for evaluation of thicker, more heavily reinforced concrete structures.

  15. Ontogenetic scaling of caudal fin shape in Squalus acanthias (Chondrichthyes, Elasmobranchii): a geometric morphometric analysis with implications for caudal fin functional morphology.

    PubMed

    Reiss, Katie L; Bonnan, Matthew F

    2010-07-01

    The shark heterocercal caudal fin and its contribution to locomotion are of interest to biologists and paleontologists. Current hydrodynamic data show that the stiff dorsal lobe leads the ventral lobe, both lobes of the tail are synchronized during propulsion, and tail shape reflects its overall locomotor function. Given the difficulties surrounding the analysis of shark caudal fins in vivo, little is known about changes in tail shape related to ontogeny and sex in sharks. A quantifiable analysis of caudal fin shape may provide an acceptable proxy for inferring gross functional morphology where direct testing is difficult or impossible. We examined ontogenetic and sex-related shape changes in the caudal fins of 115 Squalus acanthias museum specimens, to test the hypothesis that significant shape changes in the caudal fin shape occur with increasing size and between the sexes. Using linear and geometric morphometrics, we examined caudal shape changes within the context of current hydrodynamic models. We found no statistically significant linear or shape difference between sexes, and near-isometric scaling trends for caudal dimensions. These results suggest that lift and thrust increase linearly with size and caudal span. Thin-plate splines results showed a significant allometric shape change associated with size and caudal span: the dorsal lobe elongates and narrows, whereas the ventral lobe broadens and expands ventrally. Our data suggest a combination of caudal fin morphology with other body morphology aspects, would refine, and better elucidate the hydrodynamic factors (if any) that underlie the significant shape changes we report here for S. acanthias.

  16. Study of cavitating inducer instabilities

    NASA Technical Reports Server (NTRS)

    Young, W. E.; Murphy, R.; Reddecliff, J. M.

    1972-01-01

    An analytic and experimental investigation into the causes and mechanisms of cavitating inducer instabilities was conducted. Hydrofoil cascade tests were performed, during which cavity sizes were measured. The measured data were used, along with inducer data and potential flow predictions, to refine an analysis for the prediction of inducer blade suction surface cavitation cavity volume. Cavity volume predictions were incorporated into a linearized system model, and instability predictions for an inducer water test loop were generated. Inducer tests were conducted and instability predictions correlated favorably with measured instability data.

  17. Why Does Rebalancing Class-Unbalanced Data Improve AUC for Linear Discriminant Analysis?

    PubMed

    Xue, Jing-Hao; Hall, Peter

    2015-05-01

    Many established classifiers fail to identify the minority class when it is much smaller than the majority class. To tackle this problem, researchers often first rebalance the class sizes in the training dataset, through oversampling the minority class or undersampling the majority class, and then use the rebalanced data to train the classifiers. This leads to interesting empirical patterns. In particular, using the rebalanced training data can often improve the area under the receiver operating characteristic curve (AUC) for the original, unbalanced test data. The AUC is a widely-used quantitative measure of classification performance, but the property that it increases with rebalancing has, as yet, no theoretical explanation. In this note, using Gaussian-based linear discriminant analysis (LDA) as the classifier, we demonstrate that, at least for LDA, there is an intrinsic, positive relationship between the rebalancing of class sizes and the improvement of AUC. We show that the largest improvement of AUC is achieved, asymptotically, when the two classes are fully rebalanced to be of equal sizes.
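The asymptotic result can be illustrated with a small simulation. The sketch below is not the note's derivation; it trains Gaussian LDA on unbalanced and on minority-oversampled data drawn from two heteroscedastic Gaussian classes (all parameters illustrative) and compares test-set AUC:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two heteroscedastic Gaussian classes (all parameters illustrative).
mu0, mu1 = np.zeros(2), np.array([2.0, 0.0])
cov0 = np.eye(2)
cov1 = np.array([[4.0, 1.8], [1.8, 1.0]])

def lda_direction(x0, x1):
    """Gaussian LDA direction: inverse pooled covariance times mean difference."""
    n0, n1 = len(x0), len(x1)
    pooled = ((n0 - 1) * np.cov(x0, rowvar=False)
              + (n1 - 1) * np.cov(x1, rowvar=False)) / (n0 + n1 - 2)
    return np.linalg.solve(pooled, x1.mean(0) - x0.mean(0))

def auc(s0, s1):
    """Mann-Whitney estimate of P(class-1 score > class-0 score)."""
    ranks = np.concatenate([s0, s1]).argsort().argsort() + 1.0
    r1 = ranks[len(s0):].sum()
    return (r1 - len(s1) * (len(s1) + 1) / 2) / (len(s0) * len(s1))

# Unbalanced training data: large majority class 0, small minority class 1.
x0 = rng.multivariate_normal(mu0, cov0, 2000)
x1 = rng.multivariate_normal(mu1, cov1, 200)
w_unbal = lda_direction(x0, x1)

# Rebalance by oversampling the minority class to the majority size.
x1_over = x1[rng.integers(0, len(x1), 2000)]
w_rebal = lda_direction(x0, x1_over)

# Compare both directions on a large, independent, balanced test set.
t0 = rng.multivariate_normal(mu0, cov0, 4000)
t1 = rng.multivariate_normal(mu1, cov1, 4000)
auc_unbal = auc(t0 @ w_unbal, t1 @ w_unbal)
auc_rebal = auc(t0 @ w_rebal, t1 @ w_rebal)
print(f"AUC unbalanced: {auc_unbal:.3f}, rebalanced: {auc_rebal:.3f}")
```

With equal class sizes the pooled covariance tends to (Σ0 + Σ1)/2, and applying its inverse to the mean difference gives exactly the linear direction that maximizes AUC for heteroscedastic Gaussian classes; this is the intuition behind the asymptotic result.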

  18. Prediction of fracture load and stiffness of the proximal femur by CT-based specimen specific finite element analysis: cadaveric validation study.

    PubMed

    Miura, Michiaki; Nakamura, Junichi; Matsuura, Yusuke; Wako, Yasushi; Suzuki, Takane; Hagiwara, Shigeo; Orita, Sumihisa; Inage, Kazuhide; Kawarai, Yuya; Sugano, Masahiko; Nawata, Kento; Ohtori, Seiji

    2017-12-16

    Finite element analysis (FEA) of the proximal femur has been previously validated with large mesh size, but these were insufficient to simulate the model with small implants in recent studies. This study aimed to validate the proximal femoral computed tomography (CT)-based specimen-specific FEA model with smaller mesh size using fresh frozen cadavers. Twenty proximal femora from 10 cadavers (mean age, 87.1 years) were examined. CT was performed on all specimens with a calibration phantom. Nonlinear FEA prediction with stance configuration was performed using Mechanical Finder (mesh, 1.5 mm tetrahedral elements; shell thickness, 0.2 mm; Poisson's coefficient, 0.3), in comparison with mechanical testing. Force was applied at a fixed vertical displacement rate, and the magnitude of the applied load and displacement were continuously recorded. The fracture load and stiffness were calculated from the force-displacement curve, and the correlation between mechanical testing and FEA prediction was examined. A pilot study with one femur revealed that the equations proposed by Keller for vertebra were the most reproducible for calculating Young's modulus and the yield stress of elements of the proximal femur. There was a good linear correlation between fracture loads of mechanical testing and FEA prediction (R² = 0.6187) and between the stiffness of mechanical testing and FEA prediction (R² = 0.5499). There was a good linear correlation between fracture load and stiffness (R² = 0.6345) in mechanical testing and an excellent correlation between these (R² = 0.9240) in FEA prediction. The CT-based specimen-specific FEA model of the proximal femur with small element size was validated using fresh frozen cadavers. The equations proposed by Keller for vertebra were found to be the most reproducible for the proximal femur in elderly people.

  19. Linear Combinations of Multiple Outcome Measures to Improve the Power of Efficacy Analysis ---Application to Clinical Trials on Early Stage Alzheimer Disease

    PubMed Central

    Xiong, Chengjie; Luo, Jingqin; Morris, John C; Bateman, Randall

    2018-01-01

    Modern clinical trials on Alzheimer disease (AD) focus on the early symptomatic stage or even the preclinical stage. Subtle disease progression at the early stages, however, poses a major challenge in designing such clinical trials. We propose a multivariate mixed model on repeated measures to model the disease progression over time on multiple efficacy outcomes, and derive the optimum weights to combine multiple outcome measures by minimizing the sample sizes to adequately power the clinical trials. A cross-validation simulation study is conducted to assess the accuracy for the estimated weights as well as the improvement in reducing the sample sizes for such trials. The proposed methodology is applied to the multiple cognitive tests from the ongoing observational study of the Dominantly Inherited Alzheimer Network (DIAN) to power future clinical trials in the DIAN with a cognitive endpoint. Our results show that the optimum weights to combine multiple outcome measures can be accurately estimated, and that compared to the individual outcomes, the combined efficacy outcome with these weights significantly reduces the sample size required to adequately power clinical trials. When applied to the clinical trial in the DIAN, the estimated linear combination of six cognitive tests can adequately power the clinical trial. PMID:29546251
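The core idea of the weighting can be sketched without the full multivariate mixed-model machinery: for a mean treatment-effect vector δ on the outcomes and change-score covariance Σ, the linear combination maximizing the standardized effect (and hence minimizing the required sample size) is w ∝ Σ⁻¹δ. All numbers below are illustrative, not the DIAN estimates:

```python
import numpy as np
from math import ceil, sqrt
from statistics import NormalDist

# Illustrative annualized treatment effects on three cognitive outcomes and
# the covariance of their change scores (invented, not the DIAN estimates).
delta = np.array([0.30, 0.20, 0.25])
sigma = np.array([[1.00, 0.50, 0.40],
                  [0.50, 1.00, 0.45],
                  [0.40, 0.45, 1.00]])

def per_arm_n(smd, alpha=0.05, power=0.80):
    """Per-arm size for a two-sample z-test on a standardized mean difference."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / smd ** 2)

# Optimal weights maximize the standardized effect of the combination:
# w proportional to inv(Sigma) @ delta, giving SMD^2 = delta' inv(Sigma) delta.
w = np.linalg.solve(sigma, delta)
smd_combined = sqrt(delta @ w)

smd_single = delta / np.sqrt(np.diag(sigma))   # each outcome used alone
print("per-arm n, best single outcome:", per_arm_n(smd_single.max()))
print("per-arm n, optimal combination:", per_arm_n(smd_combined))
```

Because the combined standardized effect can never be smaller than that of the best single outcome, the combined endpoint never requires a larger trial, which is the sample-size reduction the paper quantifies.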

  20. Angular Size Test on the Expansion of the Universe

    NASA Astrophysics Data System (ADS)

    López-Corredoira, Martín

    Assuming the standard cosmological model to be correct, the average linear size of the galaxies with the same luminosity is six times smaller at z = 3.2 than at z = 0; and their average angular size for a given luminosity is approximately proportional to z-1. Neither the hypothesis that galaxies which formed earlier have much higher densities nor their luminosity evolution, merger ratio, and massive outflows due to a quasar feedback mechanism are enough to justify such a strong size evolution. Also, at high redshift, the intrinsic ultraviolet surface brightness would be prohibitively high with this evolution, and the velocity dispersion much higher than observed. We explore here another possibility of overcoming this problem: considering different cosmological scenarios, which might make the observed angular sizes compatible with a weaker evolution. One of the explored models, a very simple phenomenological extrapolation of the linear Hubble law in a Euclidean static universe, fits quite well the angular size versus redshift dependence, also approximately proportional to z-1 with this cosmological model. There are no free parameters derived ad hoc, although the error bars allow a slight size/luminosity evolution. The supernova Ia Hubble diagram can also be explained in terms of this model without any ad-hoc-fitted parameter. NB: I do not argue here that the true universe is static. My intention is just to discuss which intellectual theoretical models fit better some data of the observational cosmology.
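The z-dependence being compared can be reproduced numerically. The sketch below assumes H0 = 70 km/s/Mpc and Ωm = 0.3 (values chosen for illustration, not taken from the paper) and contrasts the angular size of a fixed proper length in flat ΛCDM with the simple d = cz/H0 Euclidean static model, in which θ is exactly proportional to 1/z:

```python
import numpy as np

C_KMS, H0 = 299792.458, 70.0   # speed of light (km/s), Hubble constant (km/s/Mpc)
OM, OL = 0.3, 0.7              # assumed flat-LCDM density parameters

def theta_lcdm(z, size_kpc=10.0):
    """Angular size (arcsec) of a fixed proper size in flat LCDM."""
    zs = np.linspace(0.0, z, 10_000)
    f = 1.0 / np.sqrt(OM * (1 + zs) ** 3 + OL)
    # Trapezoidal integral for the comoving distance (Mpc).
    dc = (C_KMS / H0) * np.sum((f[1:] + f[:-1]) / 2) * (zs[1] - zs[0])
    da = dc / (1 + z)          # angular diameter distance, Mpc
    return (size_kpc / 1000.0) / da * 206265.0

def theta_static(z, size_kpc=10.0):
    """Angular size (arcsec) for a linear Hubble law in a Euclidean static
    universe: d = c z / H0, hence theta is exactly proportional to 1/z."""
    return (size_kpc / 1000.0) / ((C_KMS / H0) * z) * 206265.0

for z in (0.5, 1.6, 3.2):
    print(f"z={z}: LCDM {theta_lcdm(z):.3f} arcsec, static {theta_static(z):.3f} arcsec")
```

In ΛCDM the angular size of a standard rod passes through a minimum near z ≈ 1.6 and then grows, whereas the static-Euclidean curve keeps falling as 1/z; this is the qualitative behavior the paper tests against the observed size-redshift relation.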

  1. Angiographic lesion size associated with LOC387715 A69S genotype in subfoveal polypoidal choroidal vasculopathy.

    PubMed

    Sakurada, Yoichi; Kubota, Takeo; Imasawa, Mitsuhiro; Tsumura, Toyoaki; Mabuchi, Fumihiko; Tanabe, Naohiko; Iijima, Hiroyuki

    2009-01-01

    To investigate whether the LOC387715/ARMS2 variants are associated with an angiographic phenotype, including lesion size and composition, in subfoveal polypoidal choroidal vasculopathy. Ninety-two subjects with symptomatic subfoveal polypoidal choroidal vasculopathy, whose visual acuity was from 0.1 to 0.5 on the Landolt chart, were genotyped for the LOC387715 polymorphism (rs10490924) using denaturing high-performance chromatography. The angiographic phenotype, including lesion composition and size, was evaluated by evaluators who were masked for the genotype. Lesion size was assessed by the greatest linear dimension based on fluorescein or indocyanine green angiography. Although there was no statistically significant difference in lesion size on indocyanine green angiography (P = 0.36, Kruskal-Wallis test) and in lesion composition (P = 0.59, chi-square test) among the 3 genotypes, there was a statistically significant difference in lesion size on fluorescein angiography (P = 0.0022, Kruskal-Wallis test). The LOC387715 A69S genotype is not associated with lesion composition or size on indocyanine green angiography but with lesion size on fluorescein angiography in patients with subfoveal polypoidal choroidal vasculopathy. Because fluorescein angiography findings represent secondary exudative changes, including subretinal hemorrhages and retinal pigment epithelial detachment, the results in the present study likely indicate that the T allele at the LOC387715 gene is associated with the exudative activity of polypoidal lesions.

  2. kruX: matrix-based non-parametric eQTL discovery.

    PubMed

    Qi, Jianlong; Asl, Hassan Foroughi; Björkegren, Johan; Michoel, Tom

    2014-01-14

    The Kruskal-Wallis test is a popular non-parametric statistical test for identifying expression quantitative trait loci (eQTLs) from genome-wide data due to its robustness against variations in the underlying genetic model and expression trait distribution, but testing billions of marker-trait combinations one-by-one can become computationally prohibitive. We developed kruX, an algorithm implemented in Matlab, Python and R that uses matrix multiplications to simultaneously calculate the Kruskal-Wallis test statistic for several millions of marker-trait combinations at once. KruX is more than ten thousand times faster than computing associations one-by-one on a typical human dataset. We used kruX and a dataset of more than 500k SNPs and 20k expression traits measured in 102 human blood samples to compare eQTLs detected by the Kruskal-Wallis test to eQTLs detected by the parametric ANOVA and linear model methods. We found that the Kruskal-Wallis test is more robust against data outliers and heterogeneous genotype group sizes and detects a higher proportion of non-linear associations, but is more conservative for calling additive linear associations. kruX enables the use of robust non-parametric methods for massive eQTL mapping without the need for a high-performance computing infrastructure and is freely available from http://krux.googlecode.com.
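The matrix trick can be sketched in a few lines of NumPy (ties are ignored here; kruX itself applies the usual tie correction): rank each trait across samples, then obtain the per-genotype rank sums for every marker-trait pair with one matrix product per genotype level.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_markers, n_traits = 100, 50, 40

# Genotypes coded 0/1/2 (markers x samples) and continuous expression traits.
G = rng.integers(0, 3, size=(n_markers, n_samples))
E = rng.normal(size=(n_traits, n_samples))

# Rank each trait across samples (no ties with continuous data).
R = E.argsort(axis=1).argsort(axis=1) + 1.0

N = n_samples
H = np.full((n_traits, n_markers), -3.0 * (N + 1))
for k in range(3):
    I = (G == k).astype(float)   # genotype-k indicator, markers x samples
    nk = I.sum(axis=1)           # group size per marker
    S = R @ I.T                  # rank sums for all pairs in one matrix product
    H += 12.0 / (N * (N + 1)) * np.divide(
        S**2, nk, out=np.zeros_like(S), where=nk > 0)

def kw_scalar(g, y):
    """One-pair Kruskal-Wallis statistic, for cross-checking the matrix form."""
    r = y.argsort().argsort() + 1.0
    s = sum(r[g == k].sum() ** 2 / (g == k).sum()
            for k in range(3) if (g == k).any())
    return 12.0 / (N * (N + 1)) * s - 3.0 * (N + 1)

print(H.shape, np.isclose(H[7, 3], kw_scalar(G[3], E[7])))
```

Each entry of H is the Kruskal-Wallis statistic for one marker-trait pair, so all n_markers × n_traits tests are computed with three matrix multiplications instead of a double loop, which is the source of kruX's speed-up.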

  3. A systematic approach to designing statistically powerful heteroscedastic 2 × 2 factorial studies while minimizing financial costs.

    PubMed

    Jan, Show-Li; Shieh, Gwowen

    2016-08-31

    The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors where each factor had only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with the selected linear contrasts. To correct for the potential heterogeneity of variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization technique and screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. Alternatively, the suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
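A minimal version of the screening search can be sketched as follows, using a normal approximation to the power of the Welch test (the paper works with the exact Welch-Satterthwaite distribution, under which the classical square-root allocation rule is generally not optimal); the effect size, standard deviations, and unit costs are illustrative:

```python
from math import ceil, sqrt
from statistics import NormalDist

nd = NormalDist()
z = nd.inv_cdf

def welch_power(n1, n2, delta, s1, s2, alpha=0.05):
    """Normal approximation to the power of the two-sided Welch test."""
    se = sqrt(s1**2 / n1 + s2**2 / n2)
    return nd.cdf(delta / se - z(1 - alpha / 2))

def min_cost_allocation(delta, s1, s2, c1, c2, alpha=0.05, target=0.80,
                        n_max=10_000):
    """Screen n1 exhaustively; for each n1 the smallest adequate n2 is analytic."""
    se_req = delta / (z(1 - alpha / 2) + z(target))  # largest admissible SE
    best = None
    for n1 in range(2, n_max):
        rem = se_req**2 - s1**2 / n1
        if rem <= 0:
            continue                 # n1 too small: no n2 can reach the target
        n2 = max(2, ceil(s2**2 / rem))
        cost = c1 * n1 + c2 * n2
        if best is None or cost < best[0]:
            best = (cost, n1, n2)
    return best

# Illustrative scenario: mean difference 5, sd ratio 2:1, unit-cost ratio 1:4.
cost, n1, n2 = min_cost_allocation(delta=5, s1=10, s2=5, c1=1, c2=4)
rule_ratio = (10 / 5) / sqrt(1 / 4)  # classical square-root allocation rule
print(f"optimal: n1={n1}, n2={n2}, total cost={cost}; "
      f"rule ratio n1/n2 = {rule_ratio:.1f}")
```

Because sample sizes are integers and the Welch degrees of freedom depend on the allocation, the cheapest feasible pair found by the search need not sit exactly on the classical ratio, which is the phenomenon the paper's numerical assessments document.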

  4. Element enrichment factor calculation using grain-size distribution and functional data regression.

    PubMed

    Sierra, C; Ordóñez, C; Saavedra, A; Gallego, J R

    2015-01-01

    In environmental geochemistry studies it is common practice to normalize element concentrations in order to remove the effect of grain size. Linear regression with respect to a particular grain size or conservative element is a widely used method of normalization. In this paper, the utility of functional linear regression, in which the grain-size curve is the independent variable and the concentration of pollutant the dependent variable, is analyzed and applied to detrital sediment. After implementing functional linear regression and classical linear regression models to normalize and calculate enrichment factors, we concluded that the former regression technique has some advantages over the latter. First, functional linear regression directly considers the grain-size distribution of the samples as the explanatory variable. Second, as the regression coefficients are not constant values but functions depending on the grain size, it is easier to comprehend the relationship between grain size and pollutant concentration. Third, regularization can be introduced into the model in order to strike a balance between fidelity to the data and smoothness of the solutions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Heat transfer and fire spread

    Treesearch

    Hal E. Anderson

    1969-01-01

    Experimental testing of a mathematical model showed that radiant heat transfer accounted for no more than 40% of total heat flux required to maintain rate of spread. A reasonable prediction of spread was possible by assuming a horizontal convective heat transfer coefficient when certain fuel and flame characteristics were known. Fuel particle size had a linear relation...

  6. Emittance Growth in the DARHT-II Linear Induction Accelerator

    DOE PAGES

    Ekdahl, Carl; Carlson, Carl A.; Frayer, Daniel K.; ...

    2017-10-03

    The dual-axis radiographic hydrodynamic test (DARHT) facility uses bremsstrahlung radiation source spots produced by the focused electron beams from two linear induction accelerators (LIAs) to radiograph large hydrodynamic experiments driven by high explosives. Radiographic resolution is determined by the size of the source spot, and beam emittance is the ultimate limitation to spot size. On the DARHT-II LIA, we measure an emittance higher than predicted by theoretical simulations, and even though this accelerator produces submillimeter source spots, we are exploring ways to improve the emittance. Some of the possible causes for the discrepancy have been investigated using particle-in-cell codes. Finally, the simulations establish that the most likely source of emittance growth is a mismatch of the beam to the magnetic transport, which can cause beam halo.

  7. Emittance Growth in the DARHT-II Linear Induction Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekdahl, Carl; Carlson, Carl A.; Frayer, Daniel K.

    The dual-axis radiographic hydrodynamic test (DARHT) facility uses bremsstrahlung radiation source spots produced by the focused electron beams from two linear induction accelerators (LIAs) to radiograph large hydrodynamic experiments driven by high explosives. Radiographic resolution is determined by the size of the source spot, and beam emittance is the ultimate limitation to spot size. On the DARHT-II LIA, we measure an emittance higher than predicted by theoretical simulations, and even though this accelerator produces submillimeter source spots, we are exploring ways to improve the emittance. Some of the possible causes for the discrepancy have been investigated using particle-in-cell codes. Finally, the simulations establish that the most likely source of emittance growth is a mismatch of the beam to the magnetic transport, which can cause beam halo.

  8. Predicting the mechanical properties of brittle porous materials with various porosity and pore sizes.

    PubMed

    Cui, Zhiwei; Huang, Yongmin; Liu, Honglai

    2017-07-01

    In this work, a micromechanical study using the lattice spring model (LSM) was performed to predict the mechanical properties of brittle porous materials (BPMs) by simulation of the Brazilian test. Stress-strain curves and Weibull plots were analyzed to determine fracture strength and Weibull modulus. The presented model, composed of linear elastic elements, is capable of reproducing the non-linear behavior of BPMs resulting from damage accumulation and provides consistent results in agreement with experimental measurements. Besides, it is also found that porosity has a significant impact on fracture strength while pore size dominates the Weibull modulus, which enables us to establish how microstructural choices can meet the demands of brittle porous materials functioning in various operating conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohlgemuth, J.; Bokria, J.; Gu, X.

    Polymeric encapsulation materials may change size when processed at typical module lamination temperatures. The relief of residual strain, trapped during the manufacture of encapsulation sheet, can affect module performance and reliability. For example, displaced cells and interconnects can lead to: cell fracture; broken interconnects (open circuits and ground faults); delamination at interfaces; and void formation. A standardized test for the characterization of change in linear dimensions of encapsulation sheet has been developed and verified. The IEC 62788-1-5 standard quantifies the maximum change in linear dimensions that may occur, to allow for process control of size change. Developments incorporated into the Committee Draft (CD) of the standard, as well as the assessment of the repeatability and reproducibility of the test method, are described here. No pass/fail criteria are given in the standard; rather, a repeatable protocol to quantify the change in dimension is provided to aid those working with encapsulation. The round-robin experiment described here identified that the repeatability and reproducibility of measurements is on the order of 1%. Recent refinements to the test procedure to improve repeatability and reproducibility include: the use of a convection oven to improve the thermal equilibration time constant and its uniformity; well-defined measurement locations to reduce the effects of sampling size and location relative to the specimen edges; a standardized sand substrate that may be readily obtained to reduce friction that would otherwise complicate the results; defined specimen sampling, so that material is examined at known sites across the width and length of rolls; and examination of the encapsulation at the manufacturer's recommended processing temperature, except when a cross-linking reaction may limit the size change. EVA, for example, should be examined at 100 °C, between its melt transition (occurring up to 80 °C) and the onset of cross-linking (often at 100 °C).

  10. Testing a single regression coefficient in high dimensional linear models

    PubMed Central

    Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling

    2017-01-01

    In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668

  11. Testing a single regression coefficient in high dimensional linear models.

    PubMed

    Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling

    2016-11-01

    In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively.
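
    A toy sketch of the screening-then-OLS idea: keep only predictors whose sample correlation with the target covariate exceeds a cutoff, then fit ordinary least squares on the reduced design. The data, the 0.5 cutoff, and the helper names are invented for illustration; this is not the paper's exact CPS procedure.

```python
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def ols(X, y):
    # Solve the normal equations (X'X) b = X'y by Gaussian elimination
    # with partial pivoting.
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        c[i], c[p] = c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for j in range(i, k):
                A[r][j] -= f * A[i][j]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

random.seed(7)
n = 300
x1 = [random.gauss(0, 1) for _ in range(n)]              # target covariate
x2 = [0.7 * a + 0.3 * random.gauss(0, 1) for a in x1]    # correlated with x1
x3 = [random.gauss(0, 1) for _ in range(n)]              # irrelevant predictor
y = [1.5 * a + 0.8 * b + random.gauss(0, 0.5) for a, b in zip(x1, x2)]

# CPS-style screening: retain predictors highly correlated with the target.
controls = [z for z in (x2, x3) if abs(pearson(x1, z)) > 0.5]
X = [[1.0, a] + [z[i] for z in controls] for i, a in enumerate(x1)]
beta = ols(X, y)   # beta[1] estimates the coefficient on x1
```

    Only the correlated predictor survives screening, so the design stays low-dimensional and classical OLS (and hence the z-test) applies to the target coefficient.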

  12. The Effective Dynamic Ranges for Glaucomatous Visual Field Progression With Standard Automated Perimetry and Stimulus Sizes III and V.

    PubMed

    Wall, Michael; Zamba, Gideon K D; Artes, Paul H

    2018-01-01

    It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher.
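
    The censoring analysis can be sketched directly: clip threshold estimates at a floor, then compare least-squares slopes over visits. The 20 dB criterion comes from the abstract; the two toy sensitivity series are invented for illustration.

```python
def censor(series, floor=20.0):
    # Set threshold estimates below the criterion to the criterion value.
    return [max(v, floor) for v in series]

def slope(y):
    # Least-squares slope of y against visit index 0..n-1 (dB per visit).
    n = len(y)
    xbar, ybar = (n - 1) / 2.0, sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

healthy = [30, 29.5, 29, 28.5, 28]   # all estimates above 20 dB
damaged = [24, 21, 18, 15, 12]       # series that crosses the 20 dB floor

unchanged = slope(censor(healthy)) == slope(healthy)   # censoring is a no-op
raw, clipped = slope(damaged), slope(censor(damaged))
```

    For the series that stays above 20 dB censoring changes nothing, while the progressing series has its slope attenuated (from -3 to -0.9 dB/visit here), illustrating why values below the floor contribute little to detected progression.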

  13. SU-E-T-163: Thin-Film Organic Photocell (OPV) Properties in MV and KV Beams for Dosimetry Applications.

    PubMed

    Ng, S K; Hesser, J; Zhang, H; Gowrisanker, S; Yakushevich, S; Shulhevich, Y; Abkai, C; Wack, L; Zygmanski, P

    2012-06-01

    To characterize dosimetric properties of low-cost thin-film organic-based photovoltaic (OPV) cells in kV and MV x-ray beams for their usage as a large-area dosimeter for QA and as a patient-safety monitoring device. A series of thin-film OPV cells of various areas and thicknesses were irradiated with MV beams to evaluate the stability and reproducibility of their response, linearity and sensitivity to absorbed dose. The OPV response to x-rays of various linac energies was also characterized. Furthermore, the practical (clinical) sensitivity of the cells was determined using IMRT sweeping gap tests generated with various gap sizes. To evaluate their potential usage in the development of a low-cost kV imaging device, the OPV cells were irradiated with kV beams (60-120 kVp) from a fluoroscopy unit. Photocell response to the absorbed dose was characterized as a function of the organic thin-film thickness and size, beam energy and exposure for kV beams as well. In addition, photocell response was determined with and without a thin plastic scintillator. Response of the OPV cells to the absorbed dose from kV and MV beams is stable and reproducible. The photocell response was linearly proportional to the size of the organic thin film and decreased slightly with its thickness, which agrees with the general performance of the photocells in visible light. The photocell response increases as a linear function of absorbed dose and x-ray energy. The sweeping gap tests performed showed that OPV cells have sufficient practical sensitivity to measure MV x-ray delivery with gap sizes as small as 1 mm. With proper calibration, the OPV cells could be used for online radiation dose measurement for quality assurance and patient safety purposes. Their response to kV beams shows promising potential for the development of low-cost kV radiation detection devices. © 2012 American Association of Physicists in Medicine.

  14. Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems

    DOE PAGES

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    2015-12-01

    We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
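
    A minimal sketch of the alternating idea, assuming a single-vector Anderson history (Anderson(1)) applied every few sweeps; the published AAJ method uses weighted Jacobi and a deeper extrapolation history, so this is illustrative only, on a tiny dense system rather than a large sparse one.

```python
def jacobi_residual(A, b, x):
    # f = D^{-1}(b - A x): the Jacobi update direction.
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n))) / A[i][i]
            for i in range(n)]

def alternating_anderson_jacobi(A, b, iters=60, period=4):
    n = len(b)
    x = [0.0] * n
    prev_x, prev_f = None, None
    for k in range(iters):
        f = jacobi_residual(A, b, x)
        if k % period == period - 1 and prev_f is not None:
            # Anderson(1) extrapolation over the last two iterates.
            df = [fi - pi for fi, pi in zip(f, prev_f)]
            dx = [xi - pi for xi, pi in zip(x, prev_x)]
            denom = sum(d * d for d in df)
            g = sum(fi * d for fi, d in zip(f, df)) / denom if denom > 1e-30 else 0.0
            new_x = [x[i] + f[i] - g * (dx[i] + df[i]) for i in range(n)]
        else:
            new_x = [x[i] + f[i] for i in range(n)]   # plain Jacobi sweep
        prev_x, prev_f, x = x, f, new_x
    return x

A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
x = alternating_anderson_jacobi(A, b)
```

    The attraction noted in the abstract is that every step is Jacobi-like (embarrassingly parallel, no inner products except in the occasional extrapolation), unlike Krylov methods which need global reductions each iteration.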

  15. Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.

  16. Tooth-size discrepancy: A comparison between manual and digital methods

    PubMed Central

    Correia, Gabriele Dória Cabral; Habib, Fernando Antonio Lima; Vogel, Carlos Jorge

    2014-01-01

    Introduction Technological advances in Dentistry have emerged primarily in the area of diagnostic tools. One example is the 3D scanner, which can transform plaster models into three-dimensional digital models. Objective This study aimed to assess the reliability of tooth size-arch length discrepancy analysis measurements performed on three-dimensional digital models, and compare these measurements with those obtained from plaster models. Material and Methods To this end, plaster models of lower dental arches and their corresponding three-dimensional digital models acquired with a 3Shape R700T scanner were used. All of them had lower permanent dentition. Four different tooth size-arch length discrepancy calculations were performed on each model, two by manual methods using calipers and brass wire, and two by digital methods using linear measurements and parabolas. Results Data were statistically assessed using the Friedman test and no statistically significant differences were found between the methods (P > 0.05); only the linear digital method showed a slight, statistically non-significant deviation. Conclusions Based on the results, it is reasonable to assert that any of these resources used by orthodontists to clinically assess tooth size-arch length discrepancy can be considered reliable. PMID:25279529

  17. Numerical distance effect size is a poor metric of approximate number system acuity.

    PubMed

    Chesney, Dana

    2018-04-12

    Individual differences in the ability to compare and evaluate nonsymbolic numerical magnitudes-approximate number system (ANS) acuity-are emerging as an important predictor in many research areas. Unfortunately, recent empirical studies have called into question whether a historically common ANS-acuity metric-the size of the numerical distance effect (NDE size)-is an effective measure of ANS acuity. NDE size has been shown to frequently yield divergent results from other ANS-acuity metrics. Given these concerns and the measure's past popularity, it behooves us to question whether the use of NDE size as an ANS-acuity metric is theoretically supported. This study seeks to address this gap in the literature by using modeling to test the basic assumption underpinning use of NDE size as an ANS-acuity metric: that larger NDE size indicates poorer ANS acuity. This assumption did not hold up under test. Results demonstrate that the theoretically ideal relationship between NDE size and ANS acuity is not linear, but rather resembles an inverted J-shaped distribution, with the inflection points varying based on precise NDE task methodology. Thus, depending on specific methodology and the distribution of ANS acuity in the tested population, positive, negative, or null correlations between NDE size and ANS acuity could be predicted. Moreover, peak NDE sizes would be found for near-average ANS acuities on common NDE tasks. This indicates that NDE size has limited and inconsistent utility as an ANS-acuity metric. Past results should be interpreted on a case-by-case basis, considering both specifics of the NDE task and expected ANS acuity of the sampled population.
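
    The inverted-U relationship the abstract describes can be reproduced with a toy Gaussian numerosity-comparison model, where the Weber fraction w stands in for ANS acuity (larger w = poorer acuity). The comparison pairs and w values are illustrative assumptions, not the study's model parameters.

```python
import math

def p_correct(n1, n2, w):
    # Probability of correctly picking the larger of two numerosities under
    # a linear-scaling Gaussian ANS model with Weber fraction w.
    d = abs(n1 - n2) / (w * math.hypot(n1, n2))
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))

def nde_size(w, near=(5, 6), far=(5, 9)):
    # Numerical distance effect size: accuracy gain on far vs. near pairs.
    return p_correct(*far, w) - p_correct(*near, w)

sharp, middle, coarse = nde_size(0.05), nde_size(0.3), nde_size(2.0)
```

    Very sharp acuity gives near-ceiling accuracy on both pairs (small NDE), very coarse acuity gives near-chance accuracy on both (also small NDE), and intermediate acuity yields the largest NDE, so NDE size cannot be a monotone proxy for acuity.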

  18. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    PubMed

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performances and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate the validity of it in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.

  19. Studies of lead tungstate crystals for the ALICE electromagnetic calorimeter PHOS

    NASA Astrophysics Data System (ADS)

    Ippolitov, M.; Beloglovsky, S.; Bogolubsky, M.; Burachas, S.; Erin, S.; Klovning, A.; Kuriakin, A.; Lebedev, V.; Lobanov, M.; Maeland, O.; Manko, V.; Nikulin, S.; Nyanin, A.; Odland, O. H.; Punin, V.; Sadovsky, S.; Samoilenko, V.; Sibiriak, Yu.; Skaali, B.; Tsvetkov, A.; Vinogradov, Yu.; Vasiliev, A.

    2002-06-01

    Full-size (22×22×180 mm³) ALICE crystals were delivered by the "North Crystals" company, Apatity, Russia. These crystals were tested with test benches specially built for measurements of the crystals' optical transmission and light yield. Beam-test results of different sets of 3×3 matrices with Hamamatsu APD light readout are presented. Data were taken at electron momenta from 600 MeV/c up to 10 GeV/c. Energy resolution and linearity curves are measured. The tests were carried out at the CERN PS and SPS secondary beam-lines.

  20. The Mach number of the cosmic flow - A critical test for current theories

    NASA Technical Reports Server (NTRS)

    Ostriker, Jeremiah P.; Suto, Yasushi

    1990-01-01

    A new cosmological, self-contained test using the ratio of mean velocity and the velocity dispersion in the mean flow frame of a group of test objects is presented. To allow comparison with linear theory, the velocity field must first be smoothed on a suitable scale. In the context of linear perturbation theory, the Mach number M(R) which measures the ratio of power on scales larger than to scales smaller than the patch size R, is independent of the perturbation amplitude and also of bias. An apparent inconsistency is found for standard values of power-law index n = 1 and cosmological density parameter Omega = 1, when comparing values of M(R) predicted by popular models with tentative available observations. Nonstandard models based on adiabatic perturbations with either negative n or small Omega value also fail, due to creation of unacceptably large microwave background fluctuations.
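
    The statistic itself is simple: M(R) is the magnitude of the mean (bulk) velocity of the smoothed patch divided by the velocity dispersion measured in the mean-flow frame. A sketch with invented velocity vectors (arbitrary units):

```python
import math

def mach_number(velocities):
    """M = |bulk velocity| / velocity dispersion in the bulk-flow frame."""
    n = len(velocities)
    mean = [sum(v[i] for v in velocities) / n for i in range(3)]
    disp2 = sum(sum((v[i] - mean[i]) ** 2 for i in range(3))
                for v in velocities) / n
    bulk = math.sqrt(sum(m * m for m in mean))
    return bulk / math.sqrt(disp2)

# Five smoothed test-object velocities sharing a bulk flow along x.
M = mach_number([(1, 0, 0), (1, 1, 0), (1, -1, 0), (1, 0, 1), (1, 0, -1)])
```

    Because both numerator and denominator scale with the perturbation amplitude, their ratio is independent of it (and of bias), which is what makes M(R) a clean discriminator between models.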

  1. On the impact of relatedness on SNP association analysis.

    PubMed

    Gross, Arnd; Tönjes, Anke; Scholz, Markus

    2017-12-06

    When testing for SNP (single nucleotide polymorphism) associations in related individuals, observations are not independent. Simple linear regression assuming independent normally distributed residuals results in an increased type I error, and the power of the test is also affected in a more complicated manner. Inflation of type I error is often successfully corrected by genomic control. However, this reduces the power of the test when relatedness is of concern. In the present paper, we derive explicit formulae to investigate how heritability and strength of relatedness contribute to variance inflation of the effect estimate of the linear model. Further, we study the consequences of variance inflation on hypothesis testing and compare the results with those of genomic control correction. We apply the developed theory to the publicly available HapMap trio data (N=129), the Sorbs (a self-contained population with N=977 characterised by a cryptic relatedness structure) and synthetic family studies with different sample sizes (ranging from N=129 to N=999) and different degrees of relatedness. We derive explicit, easy-to-apply approximation formulae to estimate the impact of relatedness on the variance of the effect estimate of the linear regression model. Variance inflation increases with increasing heritability. Relatedness structure also impacts the degree of variance inflation, as shown for the example family structures. Variance inflation is smallest for HapMap trios, followed by a synthetic family study corresponding to the trio data but with larger sample size than HapMap. The next strongest inflation is observed for the Sorbs, and finally, for a synthetic family study with a more extreme relatedness structure but with similar sample size as the Sorbs. Type I error increases rapidly with increasing inflation. However, for smaller significance levels, power increases with increasing inflation while the opposite holds for larger significance levels. 
    When genomic control is applied, type I error is preserved while power decreases rapidly with increasing variance inflation. Stronger relatedness as well as higher heritability result in increased variance of the effect estimate of simple linear regression analysis. While type I error rates are generally inflated, the behaviour of power is more complex since power can be increased or reduced depending on relatedness and the heritability of the phenotype. Genomic control cannot be recommended to deal with inflation due to relatedness. Although it preserves type I error, the loss in power can be considerable. We provide a simple formula for estimating variance inflation given the relatedness structure and the heritability of a trait of interest. As a rule of thumb, variance inflation below 1.05 does not require correction and simple linear regression analysis is still appropriate.

  2. Five instruments for measuring tree height: an evaluation

    Treesearch

    Michael S. Williams; William A. Bechtold; V.J. LaBau

    1994-01-01

    Five instruments were tested for reliability in measuring tree heights under realistic conditions. Four linear models were used to determine if tree height can be measured unbiasedly over all tree sizes and if any of the instruments were more efficient in estimating tree height. The laser height finder was the only instrument to produce unbiased estimates of the true...

  3. Using Generalized Additive Models to Analyze Single-Case Designs

    ERIC Educational Resources Information Center

    Shadish, William; Sullivan, Kristynn

    2013-01-01

    Many analyses for single-case designs (SCDs)--including nearly all the effect size indicators-- currently assume no trend in the data. Regression and multilevel models allow for trend, but usually test only linear trend and have no principled way of knowing if higher order trends should be represented in the model. This paper shows how Generalized…

  4. The theory of granular packings for coarse soils

    NASA Astrophysics Data System (ADS)

    Yanqui, Calixtro

    2013-06-01

    Coarse soils are substances made of grains of different shape, size and orientation. In this paper, new massive-measurable grain indexes are defined to develop a simple and systematic theory for the ideal packing of grains. First, a linear relationship between an assemblage of monodisperse spheres and an assemblage of polydisperse grains is deduced. Then, a general formula for the porosity of linearly ordered packings of spheres in contact is established by appropriately choosing eight neighboring spheres located at the vertices of the unit parallelepiped. The porosity of axisymmetric packings of grains, relevant to sand piles and axisymmetric compression tests, is proposed to be determined by averaging the respective linear parameters. Since they can be tested experimentally, porosities of the densest state and the loosest state of a granular soil can be used to verify the accuracy of the present theory. Diagrams for these extreme quantities show good agreement between the theoretical lines and the experimental data, regardless of the protocols and mineral composition.

  5. Relationship between linear type and fertility traits in Nguni cows.

    PubMed

    Zindove, T J; Chimonyo, M; Nephawe, K A

    2015-06-01

    The objective of the study was to assess the dimensionality of seven linear traits (body condition score, body stature, body length, heart girth, navel height, body depth and flank circumference) in Nguni cows using factor analysis, and to indicate the relationship between the extracted latent variables and calving interval (CI) and age at first calving (AFC). The traits were measured between December 2012 and November 2013 on 1559 Nguni cows kept under thornveld, succulent karoo, grassland and bushveld vegetation types. Low partial correlations (-0.04 to 0.51), high Kaiser statistics for measure of sampling adequacy and significance of the Bartlett sphericity test supported the suitability of the data for factor analysis; two factors with eigenvalues > 1 were extracted. Factor 1 included body condition score, body depth, flank circumference and heart girth and represented body capacity of cows. Factor 2 included body length, body stature and navel height and represented frame size of cows. CI and AFC decreased linearly with increasing factor 1. There was a quadratic increase in AFC as factor 2 increased (P<0.05). It was concluded that the linear type traits under study can be grouped into two distinct factors, one linked to body capacity and the other to the frame size of the cows. Small-framed cows with large body capacities have shorter CI and AFC.

  6. Toward customer-centric organizational science: A common language effect size indicator for multiple linear regressions and regressions with higher-order terms.

    PubMed

    Krasikova, Dina V; Le, Huy; Bachura, Eric

    2018-06-01

    To address a long-standing concern regarding a gap between organizational science and practice, scholars called for more intuitive and meaningful ways of communicating research results to users of academic research. In this article, we develop a common language effect size index (CLβ) that can help translate research results to practice. We demonstrate how CLβ can be computed and used to interpret the effects of continuous and categorical predictors in multiple linear regression models. We also elaborate on how the proposed CLβ index is computed and used to interpret interactions and nonlinear effects in regression models. In addition, we test the robustness of the proposed index to violations of normality and provide means for computing standard errors and constructing confidence intervals around its estimates. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
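
    For background, the classic two-group common-language effect size (which indices like CLβ generalize to regression) is the probability that a random draw from one group exceeds a random draw from the other, under independent normal distributions. This sketch shows that classic quantity, not the paper's CLβ formula:

```python
import math

def common_language_es(mean1, mean2, sd1, sd2):
    """Classic two-group common-language effect size: P(X1 > X2) for
    independent X1 ~ N(mean1, sd1^2) and X2 ~ N(mean2, sd2^2)."""
    z = (mean1 - mean2) / math.sqrt(sd1 ** 2 + sd2 ** 2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A one-SD mean difference between two unit-variance groups: ~76% of random
# cross-group pairs have the group-1 member scoring higher.
cl = common_language_es(1.0, 0.0, 1.0, 1.0)
```

    Statements like "76% of pairs" are exactly the kind of practitioner-friendly translation of effect magnitude that the abstract argues for.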

  7. Preparation and swelling inhibition of cation glucoside to montmorillonite

    NASA Astrophysics Data System (ADS)

    Song, Shaofu; Liu, Jurong; Guo, Gang; Huang, Lei; Qu, Chentun; Li, Bianqin; Chen, Gang

    2017-06-01

    In this work, a cation glucoside (CG) was synthesized from glucose and glycidyl trimethyl ammonium chloride (GTA) and used as a montmorillonite (MMT) swelling inhibitor. The inhibition by CG was investigated with an MMT linear expansion test and a mud-ball immersion test. The results showed that CG is a good inhibitor of the hydration swelling and dispersion of MMT. Under the same conditions, the linear expansion rate of MMT in CG solution is much lower than that in methylglucoside solution, and the hydration expansion of the mud ball in the CG solution was significantly inhibited. Characterization of the physicochemical properties of the particles by thermogravimetric analysis and scanning electron microscopy revealed that CG plays a large role in preventing water absorption and keeping the MMT particle size large.

  8. The influence of sintering conditions on microstructure and mechanical properties of titanium dioxide scaffolds for the treatment of bone tissue defects.

    PubMed

    Rumian, Łucja; Reczyńska, Katarzyna; Wrona, Małgorzata; Tiainen, Hanna; Haugen, Håvard J; Pamuła, Elżbieta

    2015-01-01

    In this study, attempts to improve the mechanical properties of highly porous titanium dioxide scaffolds produced by the polymer sponge replication method were investigated. In particular, the effect of two-step sintering at different temperatures on the microstructure and mechanical properties (compression test) of the scaffolds was analysed. Microcomputed tomography and scanning electron microscopy were used as analytical methods. Our experiments showed that the most appropriate manufacturing conditions were heat treatment at 1500 °C for 1 h followed by sintering at 1200 °C for 20 h. Such scaffolds exhibited the highest compressive strength, which was correlated with the highest linear density and the smallest grain size. Moreover, the grain size distribution was narrower, with a predominating fraction of fine grains 10-20 μm in size. Smaller grains and higher linear density suggested that in this case the densification process prevailed over the undesirable process of grain coarsening, which finally resulted in improved mechanical properties of the scaffolds.

  9. Evaluation of resolution and periodic errors of a flatbed scanner used for digitizing spectroscopic photographic plates

    PubMed Central

    Wyatt, Madison; Nave, Gillian

    2017-01-01

    We evaluated the use of a commercial flatbed scanner for digitizing photographic plates used for spectroscopy. The scanner has a bed size of 420 mm by 310 mm and a pixel size of about 0.0106 mm. Our tests show that the closest line pairs that can be resolved with the scanner are 0.024 mm apart, only slightly larger than the Nyquist resolution of 0.021 mm expected from the 0.0106 mm pixel size. We measured periodic errors in the scanner using both a calibrated length scale and a photographic plate. We find no noticeable periodic errors in the direction parallel to the linear detector in the scanner, but errors with an amplitude of 0.03 mm to 0.05 mm in the direction perpendicular to the detector. We conclude that large periodic errors in measurements of spectroscopic plates using flatbed scanners can be eliminated by scanning the plates with the dispersion direction parallel to the linear detector, that is, by placing the plate along the short side of the scanner. PMID:28463262
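The quoted resolution figures are mutually consistent, as a quick check shows (all values taken from the abstract):

```python
pixel_mm = 0.0106          # scanner pixel size
nyquist_mm = 2 * pixel_mm  # two pixels per line pair (Nyquist limit), ~0.021 mm
measured_mm = 0.024        # closest resolvable line-pair spacing found in tests
# The measured limit sits only slightly above the Nyquist spacing:
excess_ratio = measured_mm / nyquist_mm
print(round(nyquist_mm, 4), round(excess_ratio, 2))
```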

  10. Conditional Monte Carlo randomization tests for regression models.

    PubMed

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
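A minimal sketch of the design-based Monte Carlo procedure described above, assuming complete randomization with fixed arm sizes and a residual-based statistic in the spirit of the Gail, Tan, and Piantadosi approach (permuted-block or biased-coin designs would instead require simulating those specific procedures; the function and statistic here are illustrative, not the authors' implementation):

```python
import numpy as np

def monte_carlo_randomization_test(y, covariate, treatment, n_mc=2000, seed=1):
    """Design-based Monte Carlo randomization test (sketch).
    Statistic: mean difference of model residuals between arms, where the
    residuals come from regressing y on the covariate alone.
    Re-randomization assumes complete randomization with fixed arm sizes."""
    X = np.column_stack([np.ones_like(covariate), covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta

    def stat(t):
        return resid[t == 1].mean() - resid[t == 0].mean()

    observed = stat(treatment)
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_mc):
        t = rng.permutation(treatment)  # regenerate the assignment sequence
        if abs(stat(t)) >= abs(observed):
            count += 1
    return (count + 1) / (n_mc + 1)  # Monte Carlo p-value
```

Permuting a fixed 1:1 assignment vector is equivalent to complete randomization conditional on the arm sizes; for other designs the line marked "regenerate the assignment sequence" is where the actual randomization procedure would be simulated.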

  11. kruX: matrix-based non-parametric eQTL discovery

    PubMed Central

    2014-01-01

    Background The Kruskal-Wallis test is a popular non-parametric statistical test for identifying expression quantitative trait loci (eQTLs) from genome-wide data due to its robustness against variations in the underlying genetic model and expression trait distribution, but testing billions of marker-trait combinations one-by-one can become computationally prohibitive. Results We developed kruX, an algorithm implemented in Matlab, Python and R that uses matrix multiplications to simultaneously calculate the Kruskal-Wallis test statistic for several millions of marker-trait combinations at once. KruX is more than ten thousand times faster than computing associations one-by-one on a typical human dataset. We used kruX and a dataset of more than 500k SNPs and 20k expression traits measured in 102 human blood samples to compare eQTLs detected by the Kruskal-Wallis test to eQTLs detected by the parametric ANOVA and linear model methods. We found that the Kruskal-Wallis test is more robust against data outliers and heterogeneous genotype group sizes and detects a higher proportion of non-linear associations, but is more conservative for calling additive linear associations. Conclusion kruX enables the use of robust non-parametric methods for massive eQTL mapping without the need for a high-performance computing infrastructure and is freely available from http://krux.googlecode.com. PMID:24423115
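The matrix-multiplication trick behind kruX can be sketched as follows: rank-transform each trait, then obtain per-genotype rank sums for all traits at once in a single matrix product. This is an illustrative reconstruction assuming no ties (kruX itself also handles tie correction and missing data):

```python
import numpy as np

def kruskal_wallis_matrix(expr, genotype, n_classes=3):
    """Kruskal-Wallis statistics for many traits against one marker at once,
    via matrix multiplication (the kruX idea, sketched; no tie correction).
    expr: (n_traits, n_samples) expression matrix
    genotype: (n_samples,) integer genotype classes 0..n_classes-1"""
    n = expr.shape[1]
    ranks = expr.argsort(axis=1).argsort(axis=1) + 1.0  # ranks 1..n per trait
    ind = np.eye(n_classes)[genotype]   # one-hot (n_samples, n_classes)
    group_sizes = ind.sum(axis=0)
    rank_sums = ranks @ ind             # all per-group rank sums in one product
    return 12.0 / (n * (n + 1)) * (rank_sums**2 / group_sizes).sum(axis=1) - 3 * (n + 1)
```

Because the rank matrix is multiplied by one indicator matrix per marker, millions of marker-trait statistics reduce to a handful of dense matrix products instead of a loop over pairs, which is where the reported speedup comes from.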

  12. Growth trajectories and intellectual abilities in young adulthood: The Helsinki Birth Cohort study.

    PubMed

    Räikkönen, Katri; Forsén, Tom; Henriksson, Markus; Kajantie, Eero; Heinonen, Kati; Pesonen, Anu-Katriina; Leskinen, Jukka T; Laaksonen, Ilmo; Osmond, Clive; Barker, David J P; Eriksson, Johan G

    2009-08-15

    Slow childhood growth is associated with poorer intellectual ability. The critical periods of growth remain uncertain. Among 2,786 Finnish male military conscripts (1952-1972) born in 1934-1944, the authors tested how specific growth periods from birth to age 20 years predicted verbal, visuospatial, and arithmetic abilities at age 20. Small head circumference at birth predicted poorer verbal, visuospatial, and arithmetic abilities. The latter 2 measures were also associated with lower weight and body mass index (weight (kg)/height (m)²) at birth (for a 1-standard-deviation (SD) decrease in test score per SD decrease in body size ≥ 0.05, Ps < 0.04). Slow linear growth and weight gain between birth and age 6 months, between ages 6 months and 2 years, or both predicted poorer performance on all 3 tests (for a 1-SD decrease in test score per SD decrease in growth ≥ 0.05, Ps < 0.03). Reduced linear growth between ages 2 and 7 years predicted worse verbal ability, and between age 11 years and conscription it predicted worse performance on all 3 tests. Prenatal brain growth and linear growth up to 2 years after birth form a first critical period for intellectual development. There is a second critical period, specific for verbal development, between ages 2 and 7 years and a third critical period for all 3 tested outcomes during adolescence.

  13. Experimental and Theoretical Modal Analysis of Full-Sized Wood Composite Panels Supported on Four Nodes

    PubMed Central

    Guan, Cheng; Zhang, Houjiang; Wang, Xiping; Miao, Hu; Zhou, Lujing; Liu, Fenglu

    2017-01-01

    Key elastic properties of full-sized wood composite panels (WCPs) must be accurately determined not only for safety, but also serviceability demands. In this study, the modal parameters of full-sized WCPs supported on four nodes were analyzed for determining the modulus of elasticity (E) in both major and minor axes, as well as the in-plane shear modulus of panels by using a vibration testing method. The experimental modal analysis was conducted on three full-sized medium-density fiberboard (MDF) and three full-sized particleboard (PB) panels of three different thicknesses (12, 15, and 18 mm). The natural frequencies and mode shapes of the first nine modes of vibration were determined. Results from experimental modal testing were compared with the results of a theoretical modal analysis. A sensitivity analysis was performed to identify the sensitive modes for calculating E (major axis: Ex and minor axis: Ey) and the in-plane shear modulus (Gxy) of the panels. Mode shapes of the MDF and PB panels obtained from modal testing are in a good agreement with those from theoretical modal analyses. A strong linear relationship exists between the measured natural frequencies and the calculated frequencies. The frequencies of modes (2, 0), (0, 2), and (2, 1) under the four-node support condition were determined as the characteristic frequencies for calculation of Ex, Ey, and Gxy of full-sized WCPs. The results of this study indicate that the four-node support can be used in free vibration test to determine the elastic properties of full-sized WCPs. PMID:28773043

  14. Multiband selection with linear array detectors

    NASA Technical Reports Server (NTRS)

    Richard, H. L.; Barnes, W. L.

    1985-01-01

    Several techniques that can be used in an earth-imaging system to separate the linear image formed after the collecting optics into the desired spectral band are examined. The advantages and disadvantages of the Multispectral Linear Array (MLA) multiple optics, the MLA adjacent arrays, the imaging spectrometer, and the MLA beam splitter are discussed. The beam-splitter design approach utilizes, in addition to relatively broad spectral region separation, a movable Multiband Selection Device (MSD), placed between the exit ports of the beam splitter and a linear array detector, permitting many bands to be selected. The successful development and test of the MSD is described. The device demonstrated the capacity to provide a wide field of view, visible-to-near IR/short-wave IR and thermal IR capability, and a multiplicity of spectral bands and polarization measuring means, as well as a reasonable size and weight at minimal cost and risk compared to a spectrometer design approach.

  15. Complexation of Polyelectrolyte Micelles with Oppositely Charged Linear Chains.

    PubMed

    Kalogirou, Andreas; Gergidis, Leonidas N; Miliou, Kalliopi; Vlahos, Costas

    2017-03-02

    The formation of interpolyelectrolyte complexes (IPECs) from linear AB diblock copolymer precursor micelles and oppositely charged linear homopolymers is studied by means of molecular dynamics simulations. All beads of the linear polyelectrolyte (C) are charged with elementary quenched charge +1e, whereas in the diblock copolymer only the solvophilic (A) type beads have quenched charge -1e. For the same Bjerrum length, the ratio of positive to negative charges, Z +/- , of the mixture and the relative length of charged moieties r determine the size of IPECs. We found a nonmonotonic variation of the size of the IPECs with Z +/- . For small Z +/- values, the IPECs retain the size of the precursor micelle, whereas at larger Z +/- values the IPECs decrease in size due to the contraction of the corona and then increase as the aggregation number of the micelle increases. The minimum size of the IPECs is obtained at lower Z +/- values when the length of the hydrophilic block of the linear diblock copolymer decreases. The aforementioned findings are in agreement with experimental results. At a smaller Bjerrum length, we obtain the same trends but at even smaller Z +/- values. The linear homopolymer charged units are distributed throughout the corona.

  16. Recommendations for choosing an analysis method that controls Type I error for unbalanced cluster sample designs with Gaussian outcomes.

    PubMed

    Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H

    2015-11-30

    We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
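The balanced case, where the abstract notes both analytic approaches achieve exact Type I error rates, can be checked with a small simulation of the two-stage analysis (a t-test on cluster means); the parameter values below are illustrative, not the study's:

```python
import numpy as np

T_CRIT_DF10 = 2.2281  # two-sided 5% critical value, t distribution, 10 df

def simulate_type1(n_sims=2000, clusters_per_arm=6, cluster_size=20,
                   icc=0.1, seed=0):
    """Empirical Type I error of the two-stage analysis (pooled t-test on
    cluster means) for a balanced cluster-randomized design with Gaussian
    outcomes and no treatment effect. With equal cluster sizes the cluster
    means are iid normal, so the test is exact."""
    rng = np.random.default_rng(seed)
    sd_b, sd_w = np.sqrt(icc), np.sqrt(1 - icc)  # between/within components
    rejections = 0
    for _ in range(n_sims):
        means = []
        for _arm in range(2):
            cluster_effects = sd_b * rng.standard_normal(clusters_per_arm)
            y = cluster_effects[:, None] + sd_w * rng.standard_normal(
                (clusters_per_arm, cluster_size))
            means.append(y.mean(axis=1))
        m1, m2 = means
        k = clusters_per_arm
        pooled_var = (m1.var(ddof=1) + m2.var(ddof=1)) / 2
        t = (m1.mean() - m2.mean()) / np.sqrt(2 * pooled_var / k)
        rejections += abs(t) > T_CRIT_DF10
    return rejections / n_sims
```

Introducing unequal cluster sizes into this sketch is precisely where the exactness breaks down and the weighting and degrees-of-freedom choices compared in the paper start to matter.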

  17. Development and application of compact and on-chip electron linear accelerators for dynamic tracking cancer therapy and DNA damage/repair analysis

    NASA Astrophysics Data System (ADS)

    Uesaka, M.; Demachi, K.; Fujiwara, T.; Dobashi, K.; Fujisawa, H.; Chhatkuli, R. B.; Tsuda, A.; Tanaka, S.; Matsumura, Y.; Otsuki, S.; Kusano, J.; Yamamoto, M.; Nakamura, N.; Tanabe, E.; Koyama, K.; Yoshida, M.; Fujimori, R.; Yasui, A.

    2015-06-01

    We are developing compact electron linear accelerators (hereafter linacs) with the high RF (radio frequency) frequency of X-band (9.3 GHz, wavelength 32.3 mm) and applying them to medicine and non-destructive testing. In particular, portable 950 keV and 3.95 MeV linac X-ray sources have been developed for on-site transmission testing at several industrial plants and civil infrastructures including bridges. A 6 MeV linac has been made for pinpoint X-ray dynamic tracking cancer therapy. The length of the accelerating tube is ∼600 mm. The electron beam size at the X-ray target is less than 1 mm and the X-ray spot size at the cancer is less than 3 mm. Several hardware and software components are under construction for dynamic tracking therapy of moving lung cancer. Moreover, as an ultimate compact linac, we are designing and manufacturing a laser dielectric linac of ∼1 MeV with a Yb fiber laser (283 THz, wavelength 1.06 μm). Since the wavelength is 1.06 μm, the length of one accelerating structure is tens of μm and the electron beam size is sub-micrometer. Since the sizes of the cell and nucleus are about 10 and 1 μm, respectively, we plan to use this “On-chip” linac for radiation-induced DNA damage/repair analysis. We envision a system in which DNA in the nucleus of a cell is hit by a ∼1 μm electron or X-ray beam and its repair by proteins and enzymes is observed in live cells in situ.

  18. Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness

    NASA Technical Reports Server (NTRS)

    Staats, Matt; Whalen, Michael W.; Heimdahl, Mats P. E.; Rajan, Ajitha

    2010-01-01

    In black-box testing, the tester creates a set of tests to exercise a system under test without regard to the internal structure of the system. Generally, no objective metric is used to measure the adequacy of black-box tests. In recent work, we have proposed three requirements coverage metrics, allowing testers to objectively measure the adequacy of a black-box test suite with respect to a set of requirements formalized as Linear Temporal Logic (LTL) properties. In this report, we evaluate the effectiveness of these coverage metrics with respect to fault finding. Specifically, we conduct an empirical study to investigate two questions: (1) do test suites satisfying a requirements coverage metric provide better fault finding than randomly generated test suites of approximately the same size?, and (2) do test suites satisfying a more rigorous requirements coverage metric provide better fault finding than test suites satisfying a less rigorous requirements coverage metric? Our results indicate (1) only one coverage metric proposed -- Unique First Cause (UFC) coverage -- is sufficiently rigorous to ensure test suites satisfying the metric outperform randomly generated test suites of similar size and (2) that test suites satisfying more rigorous coverage metrics provide better fault finding than test suites satisfying less rigorous coverage metrics.

  19. The Effective Dynamic Ranges for Glaucomatous Visual Field Progression With Standard Automated Perimetry and Stimulus Sizes III and V

    PubMed Central

    Zamba, Gideon K. D.; Artes, Paul H.

    2018-01-01

    Purpose It has been shown that threshold estimates below approximately 20 dB have little effect on the ability to detect visual field progression in glaucoma. We aimed to compare stimulus size V to stimulus size III, in areas of visual damage, to confirm these findings by using (1) a different dataset, (2) different techniques of progression analysis, and (3) an analysis to evaluate the effect of censoring on mean deviation (MD). Methods In the Iowa Variability in Perimetry Study, 120 glaucoma subjects were tested every 6 months for 4 years with size III SITA Standard and size V Full Threshold. Progression was determined with three complementary techniques: pointwise linear regression (PLR), permutation of PLR, and linear regression of the MD index. All analyses were repeated on "censored" datasets in which threshold estimates below a given criterion value were set to equal the criterion value. Results Our analyses confirmed previous observations that threshold estimates below 20 dB contribute much less to visual field progression than estimates above this range. These findings were broadly similar with stimulus sizes III and V. Conclusions Censoring of threshold values < 20 dB has relatively little impact on the rates of visual field progression in patients with mild to moderate glaucoma. Size V, which has lower retest variability, performs at least as well as size III for longitudinal glaucoma progression analysis and appears to have a larger useful dynamic range owing to the upper sensitivity limit being higher. PMID:29356822
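The censoring procedure described in the Methods is straightforward to sketch: clip threshold estimates at the criterion value, then rerun a pointwise linear regression. The helper names and data layout below are assumptions for illustration, not the study's code:

```python
import numpy as np

def censor(thresholds_db, floor_db=20.0):
    """Set every threshold estimate below the criterion to the criterion,
    mimicking the censored re-analysis described in the abstract."""
    return np.maximum(thresholds_db, floor_db)

def pointwise_slopes(series_db, years):
    """Pointwise linear regression: per-location sensitivity slope (dB/year).
    series_db: (n_visits, n_locations) threshold series; years: (n_visits,)"""
    X = np.column_stack([np.ones_like(years), years])
    coef, *_ = np.linalg.lstsq(X, series_db, rcond=None)
    return coef[1]  # one slope per visual field location
```

Applying `censor` before `pointwise_slopes` shows the key effect: locations whose change happens entirely below the 20 dB floor contribute a slope of zero, while progression above the floor is unchanged.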

  20. High-resolution observations of low-luminosity gigahertz-peaked spectrum and compact steep-spectrum sources

    NASA Astrophysics Data System (ADS)

    Collier, J. D.; Tingay, S. J.; Callingham, J. R.; Norris, R. P.; Filipović, M. D.; Galvin, T. J.; Huynh, M. T.; Intema, H. T.; Marvil, J.; O'Brien, A. N.; Roper, Q.; Sirothia, S.; Tothill, N. F. H.; Bell, M. E.; For, B.-Q.; Gaensler, B. M.; Hancock, P. J.; Hindson, L.; Hurley-Walker, N.; Johnston-Hollitt, M.; Kapińska, A. D.; Lenc, E.; Morgan, J.; Procopio, P.; Staveley-Smith, L.; Wayth, R. B.; Wu, C.; Zheng, Q.; Heywood, I.; Popping, A.

    2018-06-01

    We present very long baseline interferometry observations of a faint and low-luminosity (L_1.4 GHz < 10^27 W Hz^-1) gigahertz-peaked spectrum (GPS) and compact steep-spectrum (CSS) sample. We select eight sources from deep radio observations that have radio spectra characteristic of a GPS or CSS source and an angular size of θ ≲ 2 arcsec, and detect six of them with the Australian Long Baseline Array. We determine their linear sizes, and model their radio spectra using synchrotron self-absorption (SSA) and free-free absorption (FFA) models. We derive statistical model ages, based on a fitted scaling relation, and spectral ages, based on the radio spectrum, which are generally consistent with the hypothesis that GPS and CSS sources are young and evolving. We resolve the morphology of one CSS source with a radio luminosity of 10^25 W Hz^-1, and find what appear to be two hotspots spanning 1.7 kpc. We find that our sources follow the turnover-linear size relation, and that both homogeneous SSA and an inhomogeneous FFA model can account for the spectra with observable turnovers. All but one of the FFA models do not require a spectral break to account for the radio spectrum, while all but one of the alternative SSA and power-law models do require a spectral break to account for the radio spectrum. We conclude that our low-luminosity sample is similar to brighter samples in terms of their spectral shape, turnover frequencies, linear sizes, and ages, but cannot test for a difference in morphology.

  1. Efficient techniques for forced response involving linear modal components interconnected by discrete nonlinear connection elements

    NASA Astrophysics Data System (ADS)

    Avitabile, Peter; O'Callahan, John

    2009-01-01

    Generally, response analysis of systems containing discrete nonlinear connection elements such as typical mounting connections requires the physical finite element system matrices to be used in a direct integration algorithm to compute the nonlinear response analysis solution. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested as separate components and their individual behavior may essentially be linear when compared to the total assembled system. However, the joining of these linear subsystems using highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis since much effort has usually been expended in fine tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared with traditional methods.

  2. Correlation between strength properties in standard test specimens and molded phenolic parts

    NASA Technical Reports Server (NTRS)

    Turner, P S; Thomason, R H

    1946-01-01

    This report describes an investigation of the tensile, flexural, and impact properties of 10 selected types of phenolic molding materials. The materials were studied to see in what ways and to what extent their properties satisfy some assumptions on which the theory of strength of materials is based: namely, (a) isotropy, (b) linear stress-strain relationship for small strains, and (c) homogeneity. The effect of changing the dimensions of tensile and flexural specimens and the span-depth ratio in flexural tests were studied. The strengths of molded boxes and flexural specimens cut from the boxes were compared with results of tests on standard test specimens molded from the respective materials. The nonuniformity of a material, which is indicated by the coefficient of variation, affects the results of tests made with specimens of different sizes and tests with different methods of loading. The strength values were found to depend on the relationship between size and shape of the molded specimen and size and shape of the fillers. The most significant variations observed within a diversified group of materials were found to depend on the orientation of fibrous fillers. Of secondary importance was the dependence of the variability of test results on the pieces of filler incorporated into the molding powder as well as on the size of the piece. Static breaking strength tests on boxes molded from six representative phenolic materials correlated well with falling-ball impact tests on specimens cut from molded flat sheets. Good correlation was obtained with Izod impact tests on standard test specimens prepared from the molding materials. The static breaking strengths of the boxes do not correlate with the results of tensile or flexural tests on standard specimens.

  3. Monitoring with a modified Robel pole on meadows in the central Black Hills of South Dakota

    Treesearch

    Daniel W. Uresk; Ted A. Benzon

    2007-01-01

    This study using a modified Robel pole was conducted in the central Black Hills, South Dakota. The objectives were to test the relationship between visual obstruction readings and standing herbage, develop guidelines for monitoring, and estimate sample size. The relationship between visual obstruction and standing herbage was linear with 2 segments in a piecewise model...
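A two-segment piecewise linear model of the kind mentioned above can be fit by grid search over the breakpoint; this sketch is generic and does not reproduce the study's fitted parameters:

```python
import numpy as np

def fit_two_segment(x, y, n_grid=50):
    """Continuous two-segment piecewise linear fit by grid search over the
    breakpoint c. Basis [1, x, max(x - c, 0)] keeps the fit continuous with
    a slope change of beta[2] at c (illustrative sketch only)."""
    best = None
    for c in np.linspace(x.min(), x.max(), n_grid)[1:-1]:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = ((y - X @ beta) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, c, beta)
    _, c, beta = best
    return c, beta
```

The returned `beta` gives the intercept, the first-segment slope, and the slope change at the breakpoint, which is the natural parameterization for a monitoring guideline built around a threshold reading.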

  4. Comparison of Classifiers for Decoding Sensory and Cognitive Information from Prefrontal Neuronal Populations

    PubMed Central

    Astrand, Elaine; Enel, Pierre; Ibos, Guilhem; Dominey, Peter Ford; Baraduc, Pierre; Ben Hamed, Suliann

    2014-01-01

    Decoding neuronal information is important in neuroscience, both as a basic means to understand how neuronal activity is related to cerebral function and as a processing stage in driving neuroprosthetic effectors. Here, we compare the readout performance of six commonly used classifiers at decoding two different variables encoded by the spiking activity of the non-human primate frontal eye fields (FEF): the spatial position of a visual cue, and the instructed orientation of the animal's attention. While the first variable is exogenously driven by the environment, the second variable corresponds to the interpretation of the instruction conveyed by the cue; it is endogenously driven and corresponds to the output of internal cognitive operations performed on the visual attributes of the cue. These two variables were decoded using either a regularized optimal linear estimator in its explicit formulation, an optimal linear artificial neural network estimator, a non-linear artificial neural network estimator, a non-linear naïve Bayesian estimator, a non-linear Reservoir recurrent network classifier or a non-linear Support Vector Machine classifier. Our results suggest that endogenous information such as the orientation of attention can be decoded from the FEF with the same accuracy as exogenous visual information. All classifiers did not behave equally in the face of population size and heterogeneity, the available training and testing trials, the subject's behavior and the temporal structure of the variable of interest. In most situations, the regularized optimal linear estimator and the non-linear Support Vector Machine classifiers outperformed the other tested decoders. PMID:24466019
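One of the better-performing decoders in this comparison, the regularized optimal linear estimator, can be sketched in a few lines as ridge regression onto one-hot class targets; the synthetic data and parameter choices below are illustrative, not the paper's:

```python
import numpy as np

def ridge_decoder(train_X, train_y, n_classes, lam=1.0):
    """Regularized optimal linear estimator (ridge regression onto one-hot
    class targets), a simple stand-in for an explicit linear readout.
    Returns a weight matrix; decode by argmax over class scores."""
    X = np.column_stack([np.ones(len(train_X)), train_X])
    T = np.eye(n_classes)[train_y]  # one-hot targets
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ T)
    return W

def decode(W, X):
    """Predict the class (e.g., cue position) for each trial's population vector."""
    Xb = np.column_stack([np.ones(len(X)), X])
    return (Xb @ W).argmax(axis=1)
```

Trained on per-trial population activity vectors, this decoder classifies each held-out trial by the class with the highest linear score; the regularization term `lam` is what keeps the readout stable when the neuronal population is large relative to the number of training trials.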

  5. Predicting size limit of wild blood python (python brongersmai stull, 1938) harvesting in north sumatera

    NASA Astrophysics Data System (ADS)

    Mangantar Pardamean Sianturi, Markus; Jumilawaty, Erni; Delvian; Hartanto, Adrian

    2018-03-01

    Blood python (Python brongersmai Stull, 1938) is one of the most heavily exploited wildlife species in Indonesia. The high demand from its skin trade has made its harvesting regulated under a quota-based setting by the government to prevent over-harvesting. To gain understanding of the sustainability of P. brongersmai in the wild, biological characters of wild-caught specimens were studied. Samples were collected from two slaughterhouses in Rantau Prapat and Langkat. Parameters measured were morphological (snout-vent length (SVL), body mass, abdomen width) and anatomical characters (fat classes). The total sample of P. brongersmai in this research was 541, with 269 male and 272 female snakes. Female snakes had the highest proportion of individuals with the best quality of abdominal fat reserves (Class 3). Linear models were built and tested for the significance of the relation between fat classes as anatomical characters and morphological characters. All tested morphological characters were significant in female snakes. Using the linear equation models, we generated size limits to prioritize harvesting in the future. We suggest the use of SVL and abdomen width ranging between 139.7-141.5 cm and 24.72-25.71 cm, respectively, to achieve sustainability of P. brongersmai in the wild.

  6. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.

  7. Characterization of Ultrasound Energy Diffusion Due to Small-Size Damage on an Aluminum Plate Using Piezoceramic Transducers

    PubMed Central

    Lu, Guangtao; Feng, Qian; Li, Yourong; Wang, Hao; Song, Gangbing

    2017-01-01

    During the propagation of ultrasonic waves in structures, there is usually energy loss due to ultrasound energy diffusion and dissipation. The aim of this research is to characterize the ultrasound energy diffusion that occurs due to small-size damage on an aluminum plate using piezoceramic transducers, for the future purpose of developing a damage detection algorithm. The ultrasonic energy diffusion coefficient is related to the damage distributed in the medium. Meanwhile, the ultrasonic energy dissipation coefficient is related to the inhomogeneity of the medium. Both are usually employed to describe the characteristics of ultrasound energy diffusion. The existence of multiple modes of Lamb waves in metallic plate structures results in the asynchronous energy transport of different modes. The mode of Lamb waves has a great influence on ultrasound energy diffusion as a result, and thus has to be chosen appropriately. In order to study the characteristics of ultrasound energy diffusion in metallic plate structures, an experimental setup of an aluminum plate with a through-hole, whose diameter varies from 0.6 mm to 1.2 mm, is used as the test specimen with the help of piezoceramic transducers. The experimental results of two categories of damage at different locations reveal that the existence of damage changes the energy transport between the actuator and the sensor. Also, when there is only one dominant mode of Lamb wave excited in the structure, the ultrasound energy diffusion coefficient decreases approximately linearly with the diameter of the simulated damage. Meanwhile, the ultrasonic energy dissipation coefficient increases approximately linearly with the diameter of the simulated damage. However, when two or more modes of Lamb waves are excited, due to the existence of different group velocities between the different modes, the energy transport of the different modes is asynchronous, and the ultrasonic energy diffusion is not strictly linear with the size of the damage. Therefore, it is recommended that only one dominant mode of Lamb wave should be excited during the characterization process, in order to ensure that the linear relationship between the damage size and the characteristic parameters is maintained. In addition, the findings from this paper demonstrate the potential of developing future damage detection algorithms using the linear relationships between damage size and the ultrasound energy diffusion coefficient or ultrasonic energy dissipation coefficient when a single dominant mode is excited. PMID:29207530

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mundy, D; Tryggestad, E; Beltran, C

    Purpose: To develop daily and monthly quality assurance (QA) programs in support of a new spot-scanning proton treatment facility using a combination of commercial and custom equipment and software. Emphasis was placed on efficiency and evaluation of key quality parameters. Methods: The daily QA program was developed to test output, spot size and position, proton beam energy, and image guidance using the Sun Nuclear Corporation rf-DQA™3 device and Atlas QA software. The program utilizes standard Atlas linear accelerator tests repurposed for proton measurements and a custom jig for indexing the device to the treatment couch. The monthly QA program was designed to test mechanical performance, image quality, radiation quality, isocenter coincidence, and safety features. Many of these tests are similar to their linear accelerator QA counterparts, but many require customized test design and equipment. Coincidence of imaging, laser marker, mechanical, and radiation isocenters, for instance, is verified using a custom film-based device devised and manufactured at our facility. Proton spot size and position as a function of energy are verified using a custom spot pattern incident on film and analysis software developed in-house. More details concerning the equipment and software developed for monthly QA are included in the supporting document. Thresholds for daily and monthly tests were established via perturbation analysis, early experience, and/or proton system specifications and associated acceptance test results. Results: The periodic QA program described here has been in effect for approximately 9 months and has proven efficient and sensitive to sub-clinical variations in treatment delivery characteristics. Conclusion: Tools and professional guidelines for periodic proton system QA are not as well developed as their photon and electron counterparts. The program described here efficiently evaluates key quality parameters and, while specific to the needs of our facility, could be readily adapted to other proton centers.

  9. Numerical calculations of spectral turnover and synchrotron self-absorption in CSS and GPS radio sources

    NASA Astrophysics Data System (ADS)

    Jeyakumar, S.

    2016-06-01

    The dependence of the turnover frequency on the linear size is presented for a sample of Giga-hertz Peaked Spectrum and Compact Steep Spectrum radio sources derived from complete samples. The dependence of the luminosity of the emission at the peak frequency on the linear size and the peak frequency is also presented for the galaxies in the sample. The luminosity of the smaller sources evolves strongly with the linear size. Optical depth effects have been included in the 3D radio source model of Kaiser to study the spectral turnover. Using this model, the observed trend can be explained by synchrotron self-absorption. The observed trend in the peak-frequency-linear-size plane is not affected by the luminosity evolution of the sources.

  10. Clinical Implications of the Cervical Papanicolaou Test Results in the Management of Anal Warts in HIV-Infected Women

    PubMed Central

    Luu, Hung N.; Amirian, E. Susan; Piller, Linda; Chan, Wenyaw; Scheurer, Michael E.

    2013-01-01

    The Papanicolaou test (or Pap test) has long been used as a screening tool to detect cervical precancerous/cancerous lesions. However, studies on the use of this test to predict both the presence and change in size of genital warts are limited. We examined whether cervical Papanicolaou test results are associated with the size of the largest anal wart over time in HIV-infected women in an on-going cohort study in the US. A sample of 976 HIV-infected women included in a public dataset obtained from the Women’s Interagency HIV Study (WIHS) was selected for analysis. A linear mixed model was used to determine the relationship between the size of anal warts and cervical Pap test results. About 32% of participants had abnormal cervical Pap test results at baseline. In the adjusted model, a woman with a result of Atypical Squamous Cells of Undetermined Significance/Low-grade Squamous Intraepithelial Lesion (ASCUS/LSIL) had an anal wart, on average, 12.81 mm2 larger than a woman with normal cervical cytology. The growth rate of the largest anal wart between visits in a woman with ASCUS/LSIL was 1.56 mm2 slower than that of a woman with normal cervical results. However, neither difference was statistically significant (P = 0.54 and P = 0.82, respectively). This is the first study to examine the relationship between cervical Pap test results and anal wart development in HIV-infected women. Even though no association between the size of anal warts and cervical Pap test results was found, a screening program using anal cytology testing in HIV-infected women should be considered. Further studies on the cost-effectiveness and efficacy of an anal cytology screening program are warranted. PMID:24312348

  11. Nonparametric evaluation of quantitative traits in population-based association studies when the genetic model is unknown.

    PubMed

    Konietschke, Frank; Libiger, Ondrej; Hothorn, Ludwig A

    2012-01-01

    Statistical association between a single nucleotide polymorphism (SNP) genotype and a quantitative trait in genome-wide association studies is usually assessed using a linear regression model or, in the case of non-normally distributed trait values, the Kruskal-Wallis test. While linear regression models assume an additive mode of inheritance via equidistant genotype scores, the Kruskal-Wallis test merely tests for global differences in trait values across the three genotype groups. Both approaches thus exhibit suboptimal power when the underlying inheritance mode is dominant or recessive. Furthermore, these tests do not perform well in the common situations when only a few trait values are available in a rare genotype category (imbalance), or when the values associated with the three genotype categories exhibit unequal variance (variance heterogeneity). We propose a maximum test based on a Marcus-type multiple contrast test for relative effect sizes. This test allows model-specific testing of a dominant, additive, or recessive mode of inheritance, and it is robust against variance heterogeneity. We show how to obtain mode-specific simultaneous confidence intervals for the relative effect sizes to aid in interpreting the biological relevance of the results. Further, we discuss the use of a related all-pairwise-comparisons contrast test with range-preserving confidence intervals as an alternative to the Kruskal-Wallis heterogeneity test. We applied the proposed maximum test to the Bogalusa Heart Study dataset and gained a remarkable increase in the power to detect association, particularly for rare genotypes. Our simulation study also demonstrated that, contrary to the standard parametric approaches, the proposed nonparametric tests control the family-wise error rate in the presence of non-normality and variance heterogeneity. We provide a publicly available R library, nparcomp, that can be used to estimate simultaneous confidence intervals or compatible multiplicity-adjusted p-values associated with the proposed maximum test.
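
    The proposed tests are implemented in the authors' R package nparcomp; as a language-neutral sketch, the midrank estimator of the pairwise relative effect p = P(X < Y) + 0.5 P(X = Y) that underlies such contrasts can be written in Python (the Marcus-type simultaneous inference machinery is omitted):

```python
import numpy as np

def relative_effect(x, y):
    """Midrank estimate of the relative effect p = P(X < Y) + 0.5*P(X = Y),
    with ties handled by averaging ranks within tie groups."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    pooled = np.concatenate([x, y])
    order = pooled.argsort(kind="mergesort")
    ranks = np.empty(pooled.size)
    ranks[order] = np.arange(1, pooled.size + 1)
    # replace ranks within tie groups by their average (midranks)
    for v in np.unique(pooled):
        tie = pooled == v
        ranks[tie] = ranks[tie].mean()
    r2_mean = ranks[x.size:].mean()          # mean midrank of the y sample
    return (r2_mean - (y.size + 1) / 2.0) / x.size
```

    For example, relative_effect([1, 2, 3], [4, 5, 6]) returns 1.0 (complete separation), while two identical samples give 0.5 (no tendency either way).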

  12. The effect of grain size and cement content on index properties of weakly solidified artificial sandstones

    NASA Astrophysics Data System (ADS)

    Atapour, Hadi; Mortazavi, Ali

    2018-04-01

    The effects of textural characteristics, especially grain size, on the index properties of weakly solidified artificial sandstones are studied. For this purpose, a relatively large number of laboratory tests were carried out on artificial sandstones produced in the laboratory. The prepared samples represent fifteen sandstone types consisting of five different median grain sizes and three different cement contents. Index rock properties including effective porosity, bulk density, point load strength index, and Schmidt hammer values (SHVs) were determined. Experimental results showed that grain size has significant effects on the index properties of weakly solidified sandstones. The porosity of the samples is inversely related to grain size and decreases linearly as grain size increases. In contrast, a direct relationship was observed between grain size and dry bulk density: bulk density increased with increasing median grain size. Furthermore, the point load strength index and SHV of the samples increased with increasing grain size. These observations are indirectly related to the porosity decrease as a function of median grain size.

  13. A pocket-sized metabolic analyzer for assessment of resting energy expenditure.

    PubMed

    Zhao, Di; Xian, Xiaojun; Terrera, Mirna; Krishnan, Ranganath; Miller, Dylan; Bridgeman, Devon; Tao, Kevin; Zhang, Lihua; Tsow, Francis; Forzani, Erica S; Tao, Nongjian

    2014-04-01

    The assessment of metabolic parameters related to energy expenditure has a proven value for weight management; however, these measurements remain too difficult and costly for monitoring individuals at home. The objective of this study is to evaluate the accuracy of a new pocket-sized metabolic analyzer device for assessing energy expenditure at rest (REE) and during sedentary activities (EE). The new device performs indirect calorimetry by measuring an individual's oxygen consumption (VO2) and carbon dioxide production (VCO2) rates, which allows the determination of resting- and sedentary-activity-related energy expenditure. VO2 and VCO2 values of 17 volunteer adult subjects were measured during resting and sedentary activities in order to compare the metabolic analyzer with the Douglas bag method, which is considered the gold standard for indirect calorimetry. Metabolic parameters of VO2, VCO2, and energy expenditure were compared using linear regression analysis, paired t-tests, and Bland-Altman plots. Linear regression analysis of the measured VO2 and VCO2 values, as well as the calculated energy expenditure, assessed with the new analyzer and the Douglas bag method gave the following parameters (linear regression slope, LRS0, and R-squared coefficient, r(2)), each with p ≈ 0: LRS0 (SD) = 1.00 (0.01), r(2) = 0.9933 for VO2; LRS0 (SD) = 1.00 (0.01), r(2) = 0.9929 for VCO2; and LRS0 (SD) = 1.00 (0.01), r(2) = 0.9942 for energy expenditure. In addition, paired t-tests did not show statistically significant differences between the methods at a significance level of α = 0.05 for VO2, VCO2, REE, and EE. Furthermore, the Bland-Altman plot for REE showed good agreement between methods, with 100% of the results within ±2SD, which was equivalent to ≤10% error. The findings demonstrate that the new pocket-sized metabolic analyzer device is accurate for determining VO2, VCO2, and energy expenditure.
Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
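
    The Bland-Altman agreement check reported above (bias and ±2SD limits of agreement) is easy to reproduce; the paired REE readings below are invented for illustration, not study data.

```python
import numpy as np

def bland_altman(ref, new, k=2.0):
    """Bias, +/- k*SD limits of agreement, and the fraction of
    paired differences falling inside those limits."""
    ref = np.asarray(ref, float)
    new = np.asarray(new, float)
    diff = new - ref
    bias = diff.mean()
    sd = diff.std(ddof=1)
    lo, hi = bias - k * sd, bias + k * sd
    inside = float(np.mean((diff >= lo) & (diff <= hi)))
    return bias, (lo, hi), inside

# Illustrative (invented) paired REE readings, kcal/day:
# reference (Douglas bag) vs. new analyzer.
ref = np.array([1500.0, 1620.0, 1480.0, 1700.0, 1550.0])
new = np.array([1510.0, 1612.0, 1490.0, 1692.0, 1550.0])
bias, limits, inside = bland_altman(ref, new)
```

    For these toy values the bias is under 1 kcal/day and every difference falls inside the ±2SD limits.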

  14. Three-dimensional forward modeling of DC resistivity using the aggregation-based algebraic multigrid method

    NASA Astrophysics Data System (ADS)

    Chen, Hui; Deng, Ju-Zhi; Yin, Min; Yin, Chang-Chun; Tang, Wen-Wu

    2017-03-01

    To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions by using a seven-point finite-difference method to obtain a large sparse system of linear equations. Then, we introduce the theory behind the pairwise aggregation algorithms for AGMG and use the conjugate-gradient method with the V-cycle AGMG preconditioner (AGMG-CG) to solve the linear equations. We use typical geoelectrical models to test the proposed AGMG-CG method and compare the results with analytical solutions and the 3DDCXH algorithm for 3D DC modeling (3DDCXH). In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it to different iterative methods, such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations increases slowly with increasing grid size. The AGMG-CG method is precise and converges fast, and thus can improve the computational efficiency in forward modeling of three-dimensional DC resistivity.
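
    A minimal sketch of the preconditioned conjugate-gradient loop, with a Jacobi (diagonal) preconditioner standing in for the AGMG V-cycle and a 1-D finite-difference Laplacian standing in for the seven-point 3D operator:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradients for a symmetric positive
    definite A. M_inv(r) applies the preconditioner to a residual;
    here a Jacobi sweep stands in for the AGMG V-cycle."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, it + 1

# 1-D finite-difference Laplacian as a small stand-in for the
# seven-point 3-D DC-resistivity operator.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A)
x, iters = pcg(A, b, lambda r: r / d)   # Jacobi: divide by the diagonal
```

    Swapping the lambda for an AGMG V-cycle application is the only change needed to reproduce the paper's AGMG-CG structure; the CG loop itself is unchanged.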

  15. High correlations between MRI brain volume measurements based on NeuroQuant® and FreeSurfer.

    PubMed

    Ross, David E; Ochs, Alfred L; Tate, David F; Tokac, Umit; Seabaugh, John; Abildskov, Tracy J; Bigler, Erin D

    2018-05-30

    NeuroQuant® (NQ) and FreeSurfer (FS) are commonly used computer-automated programs for measuring MRI brain volume. Previously they were reported to have high intermethod reliabilities but often large intermethod effect size differences. We hypothesized that linear transformations could be used to reduce the large effect sizes. This study was an extension of our previously reported study. We performed NQ and FS brain volume measurements on 60 subjects (including normal controls, patients with traumatic brain injury, and patients with Alzheimer's disease). We used two statistical approaches in parallel to develop methods for transforming FS volumes into NQ volumes: traditional linear regression, and Bayesian linear regression. For both methods, we used regression analyses to develop linear transformations of the FS volumes to make them more similar to the NQ volumes. The FS-to-NQ transformations based on traditional linear regression resulted in effect sizes which were small to moderate. The transformations based on Bayesian linear regression resulted in all effect sizes being trivially small. To our knowledge, this is the first report describing a method for transforming FS to NQ data so as to achieve high reliability and low effect size differences. Machine learning methods like Bayesian regression may be more useful than traditional methods. Copyright © 2018 Elsevier B.V. All rights reserved.
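
    The traditional linear-regression route can be illustrated on toy volumes (the values and the systematic FS offset below are invented): fit NQ against FS, apply the resulting transform, and check that the intermethod effect size shrinks.

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d between two measurement sets, using the pooled SD."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return (a.mean() - b.mean()) / sp

# Hypothetical regional volumes (cm^3): FS runs systematically
# larger than NQ in this toy data.
nq = np.array([3.1, 3.4, 2.9, 3.6, 3.2, 3.0])
fs = nq * 1.10 + 0.25            # simulated systematic FS offset

# Traditional linear regression: transform FS toward NQ.
slope, intercept = np.polyfit(fs, nq, 1)
fs_transformed = slope * fs + intercept

d_before = abs(cohens_d(fs, nq))
d_after = abs(cohens_d(fs_transformed, nq))
```

    Because the toy offset is exactly linear, the transform removes it entirely; with real data the residual effect size would be small rather than zero.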

  16. A computational analysis of lower bounds for the economic lot sizing problem in remanufacturing with separate setups

    NASA Astrophysics Data System (ADS)

    Aishah Syed Ali, Sharifah

    2017-09-01

    This paper considers the economic lot sizing problem in remanufacturing with separate setups (ELSRs), where remanufactured and new products are produced on dedicated production lines. Since this problem is NP-hard in general, leading to computationally inefficient, low-quality solutions, we present (a) a multicommodity formulation and (b) a strengthened formulation based on the a priori addition of valid inequalities in the space of the original variables, which are then compared with the Wagner-Whitin based formulation available in the literature. Computational experiments on a large number of test data sets are performed to evaluate the different approaches. The numerical results show that our strengthened formulation outperforms all the other tested approaches in terms of linear relaxation bounds. Finally, we conclude with future research directions.

  17. Aqueous extraction kinetics of soluble solids, phenolics and flavonoids from sage (Salvia fruticosa Miller) leaves.

    PubMed

    Torun, Mehmet; Dincer, Cuneyt; Topuz, Ayhan; Sahin-Nadeem, Hilal; Ozdemir, Feramuz

    2015-05-01

    In the present study, the aqueous extraction kinetics of total soluble solids (TSS), total phenolic content (TPC) and total flavonoid content (TFC) from Salvia fruticosa leaves were investigated over a 150-min extraction period as functions of temperature (60-80 °C), particle size (2-8 mm) and loading percentage (1-4%). The extract yielded 25 g/100 g TSS, which contained 30 g/100 g TPC and 25 g/100 g TFC. The extraction data over time fit a reversible first-order kinetic model. All tested variables showed a significant effect on the estimated kinetic parameters except the equilibrium concentration. Increasing the extraction temperature resulted in higher extraction rate constants and equilibrium concentrations of the tested variables, notably above 70 °C. Using the Arrhenius relationship, the activation energies of TSS, TPC and TFC extraction were determined as 46.11 ± 5.61, 36.80 ± 3.12 and 33.52 ± 2.23 kJ/mol, respectively. With decreasing particle size, the extraction rate constants and diffusion coefficients increased exponentially, whereas the equilibrium concentrations did not change significantly. The equilibrium concentrations of the tested parameters increased linearly with the loading percentage of sage; however, the change in extraction rates was not linear, due to the submerging effect at 4% loading.
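
    The Arrhenius step, fitting ln k against 1/T and reading the activation energy off the slope, can be sketched as follows. The rate constants are synthetic, generated from the reported ~46 kJ/mol value for TSS rather than taken from the study.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# Synthetic first-order extraction rate constants (1/min) at the three
# study temperatures, generated from an assumed Ea and pre-exponential.
T = np.array([60.0, 70.0, 80.0]) + 273.15   # K
Ea_true = 46.0e3                            # J/mol (reported TSS value)
A = 5.0e5                                   # illustrative pre-exponential
k = A * np.exp(-Ea_true / (R * T))

# Arrhenius: ln k = ln A - Ea/(R*T), so regress ln k on 1/T and
# recover Ea from the slope.
slope, ln_A = np.polyfit(1.0 / T, np.log(k), 1)
Ea_fit = -slope * R                         # J/mol
```

    On this synthetic data the fit recovers the 46 kJ/mol input value; with measured rate constants the slope would carry the experimental uncertainty quoted in the abstract.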

  18. Mixed Convection Blowoff Limits as a Function of Oxygen Concentration and Upward Forced Stretch Rate for Burning PMMA Rods of Various Sizes

    NASA Technical Reports Server (NTRS)

    Marcum, Jeremy W.; Ferkul, Paul V.; Olson, Sandra L.

    2017-01-01

    Normal gravity flame blowoff limits in an axisymmetric PMMA rod geometry in upward axial stagnation flow are compared with microgravity Burning and Suppression of Solids II (BASS-II) results recently obtained aboard the International Space Station. This testing utilized the same BASS-II concurrent rod geometry, but with the addition of normal gravity buoyant flow. Cast poly(methyl methacrylate) (PMMA) rods of diameters ranging from 0.635 cm to 3.81 cm were burned at oxygen concentrations ranging from 14% to 18% by volume. The forced flow velocity at which blowoff occurred was determined for each rod size and oxygen concentration. These blowoff limits compare favorably with the BASS-II results when the buoyant stretch is included and the flow is corrected by considering the blockage factor of the fuel. From these results, the normal gravity blowoff boundary for this axisymmetric rod geometry is determined to be linear, with oxygen concentration directly proportional to flow speed. We describe a new normal gravity upward flame spread test method which extrapolates the linear blowoff boundary to the zero stretch limit to resolve microgravity flammability limits, something current methods cannot do. This new test method can improve spacecraft fire safety for future exploration missions by providing a tractable way to obtain good estimates of material flammability in low gravity.
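
    The zero-stretch extrapolation at the heart of the proposed test method is a straight-line fit of the blowoff boundary; the data points below are invented for illustration.

```python
import numpy as np

# Hypothetical blowoff boundary: oxygen concentration (% by volume)
# at which the flame blows off, versus forced flow speed (cm/s).
flow = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
o2_blowoff = np.array([14.6, 15.2, 15.9, 16.5, 17.1])

# The boundary is approximately linear in O2 vs. flow speed, so a
# straight-line fit extrapolated to zero flow (zero stretch) gives a
# low-gravity flammability estimate, as the test method proposes.
slope, intercept = np.polyfit(flow, o2_blowoff, 1)
limiting_o2 = intercept   # O2 % at the zero-stretch limit
```

    For these invented points the extrapolated zero-stretch limit is about 14%, below every measured blowoff point, as expected for the most flammable condition.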

  19. Transmission of linear regression patterns between time series: From relationship in time series to complex networks

    NASA Astrophysics Data System (ADS)

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui

    2014-07-01

    The linear regression parameters between two time series can differ under different lengths of the observation period. If we study the whole period through a sliding window of shorter periods, the change of the linear regression parameters is a process of dynamic transmission over time. We present a simple and efficient computational scheme: a linear regression pattern transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequencies of the transmissions. The major patterns, the distance, and the medium in the process of transmission can be captured. The statistical results for weighted out-degree and betweenness centrality are mapped on timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.
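
    A minimal sketch of the pattern-transmission idea, assuming a simplified two-part pattern label (slope sign × significance) in place of the paper's parameter-interval bins:

```python
import numpy as np
from collections import Counter

def pattern_network(x, y, window=20, step=1):
    """Slide a window over two series, fit y ~ x in each window, label
    each window by (sign of slope, significant or not), and count
    transitions between consecutive labels as weighted directed edges."""
    edges = Counter()
    prev = None
    for s in range(0, len(x) - window + 1, step):
        xi = x[s:s + window]
        yi = y[s:s + window]
        slope, intercept = np.polyfit(xi, yi, 1)
        resid = yi - (slope * xi + intercept)
        sxx = ((xi - xi.mean()) ** 2).sum()
        se = np.sqrt((resid @ resid) / (window - 2) / sxx)
        # |t| > 2 roughly corresponds to a 5% significance level
        label = ("up" if slope >= 0 else "down",
                 "sig" if abs(slope / se) > 2.0 else "ns")
        if prev is not None:
            edges[(prev, label)] += 1   # weight = transmission frequency
        prev = label
    return edges

# Two related toy series: a slowly oscillating signal plus noise.
x = np.arange(200.0)
rng = np.random.default_rng(0)
y = 5.0 * np.sin(x / 15.0) + rng.normal(0.0, 1.0, x.size)
edges = pattern_network(x, y)
```

    With 181 sliding windows the edge weights sum to 180, one transmission per adjacent window pair; out-degrees and other network statistics follow directly from the edge counter.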

  1. Accuracy Assessment of Three-dimensional Surface Reconstructions of In vivo Teeth from Cone-beam Computed Tomography

    PubMed Central

    Sang, Yan-Hui; Hu, Hong-Cheng; Lu, Song-He; Wu, Yu-Wei; Li, Wei-Ran; Tang, Zhi-Hui

    2016-01-01

    Background: The accuracy of three-dimensional (3D) reconstructions from cone-beam computed tomography (CBCT) is particularly important in dentistry, as it affects the effectiveness of diagnosis, treatment planning, and outcome in clinical practice. The aims of this study were to assess the linear, volumetric, and geometric accuracy of 3D reconstructions from CBCT and to investigate the influence of voxel size and CBCT system on the reconstruction results. Methods: Fifty teeth from 18 orthodontic patients were assigned to three groups: NewTom VG 0.15 mm (NewTom VG; voxel size: 0.15 mm; n = 17), NewTom VG 0.30 mm (NewTom VG; voxel size: 0.30 mm; n = 16), and VATECH DCTPRO 0.30 mm (VATECH DCTPRO; voxel size: 0.30 mm; n = 17). The 3D reconstruction models of the teeth were segmented from CBCT data manually using Mimics 18.0 (Materialise Dental, Leuven, Belgium), and the extracted teeth were scanned with a 3Shape optical scanner (3Shape A/S, Denmark). Linear and volumetric deviations were assessed by comparing the length and volume of the 3D reconstruction model with physical measurements using paired t-tests. Geometric deviations were assessed by the root mean square value of the superimposed 3D reconstruction and optical models using one-sample t-tests. To assess the influence of voxel size and CBCT system on 3D reconstruction, analysis of variance (ANOVA) was used (α = 0.05). Results: The linear, volumetric, and geometric deviations were −0.03 ± 0.48 mm, −5.4 ± 2.8%, and 0.117 ± 0.018 mm for the NewTom VG 0.15 mm group; −0.45 ± 0.42 mm, −4.5 ± 3.4%, and 0.116 ± 0.014 mm for the NewTom VG 0.30 mm group; and −0.93 ± 0.40 mm, −4.8 ± 5.1%, and 0.194 ± 0.117 mm for the VATECH DCTPRO 0.30 mm group, respectively. There were statistically significant differences between groups in linear measurement (P < 0.001), but no significant difference in volumetric measurement (P = 0.774). No statistically significant difference was found in geometric measurement between the NewTom VG 0.15 mm and NewTom VG 0.30 mm groups (P = 0.999), while a significant difference was found between the VATECH DCTPRO 0.30 mm and NewTom VG 0.30 mm groups (P = 0.006). Conclusions: 3D reconstruction from CBCT data can achieve high linear, volumetric, and geometric accuracy. Increasing voxel resolution from 0.30 to 0.15 mm does not increase the accuracy of 3D tooth reconstruction, while the system used can affect accuracy. PMID:27270544

  2. A Review of the Proposed K (sub Isi) Offset-Secant Method for Size-Independent Linear-Elastic Toughness Evaluation

    NASA Technical Reports Server (NTRS)

    James, Mark; Wells, Doug; Allen, Phillip; Wallin, Kim

    2017-01-01

    The proposed size-independent linear-elastic fracture toughness, K (sub Isi), for potential inclusion in ASTM E399 targets a consistent 0.5 millimeter crack extension for all specimen sizes through an offset secant that is a function of the specimen ligament length. The K (sub Isi) method also includes an increase in allowable deformation, and the removal of the P (sub max)/P (sub Q) criterion. A finite element study of the K (sub Isi) test method confirms the viability of the increased deformation limit, but has also revealed a few areas of concern. Findings: 1. The deformation limit, b (sub o) greater than or equal to 1.1 times (K (sub I) divided by sigma (sub ys)) squared, maintains a K-dominant crack tip field with limited plastic contribution to the fracture energy; 2. The three-dimensional effects on compliance and the shape of the force versus CMOD (Crack-Mouth Opening Displacement) trace are significant compared to a plane strain assumption; 3. The non-linearity in the force versus CMOD trace at deformations higher than the current limit of 2.5 times (K (sub I) divided by sigma (sub ys)) squared is sufficient to introduce error or even "false calls" regarding crack extension when using a constant offset secant line. This issue is more significant for specimens with W (width) greater than or equal to 2 inches; 4. A non-linear plasticity correction factor in the offset secant may improve the viability of the method at deformations between 2.5 times (K (sub I) divided by sigma (sub ys)) squared and 1.1 times (K (sub I) divided by sigma (sub ys)) squared.

  3. Size effects of single-walled carbon nanotubes on in vivo and in vitro pulmonary toxicity

    PubMed Central

    Fujita, Katsuhide; Fukuda, Makiko; Endoh, Shigehisa; Maru, Junko; Kato, Haruhisa; Nakamura, Ayako; Shinohara, Naohide; Uchino, Kanako; Honda, Kazumasa

    2015-01-01

    Abstract To elucidate the effect of size on the pulmonary toxicity of single-wall carbon nanotubes (SWCNTs), we prepared two types of dispersed SWCNTs, namely relatively thin bundles with short linear shapes (CNT-1) and thick bundles with long linear shapes (CNT-2), and conducted rat intratracheal instillation tests and in vitro cell-based assays using NR8383 rat alveolar macrophages. Total protein levels, MIP-1α expression, cell counts in BALF, and histopathological examinations revealed that CNT-1 caused pulmonary inflammation and slower recovery and that CNT-2 elicited acute lung inflammation shortly after their instillation. Comprehensive gene expression analysis confirmed that CNT-1-induced genes were strongly associated with inflammatory responses, cell proliferation, and immune system processes at 7 or 30 d post-instillation. Numerous genes were significantly upregulated or downregulated by CNT-2 at 1 d post-instillation. In vitro assays demonstrated that CNT-1 and CNT-2 SWCNTs were phagocytized by NR8383 cells. CNT-2 treatment induced cell growth inhibition, reactive oxygen species production, MIP-1α expression, and several genes involved in response to stimulus, whereas CNT-1 treatment did not exert a significant impact in these regards. These results suggest that SWCNTs formed as relatively thin bundles with short linear shapes elicited delayed pulmonary inflammation with slower recovery. In contrast, SWCNTs with a relatively thick bundle and long linear shapes sensitively induced cellular responses in alveolar macrophages and elicited acute lung inflammation shortly after inhalation. We conclude that the pulmonary toxicity of SWCNTs is closely associated with the size of the bundles. These physical parameters are useful for risk assessment and management of SWCNTs. PMID:25865113

  4. Skeletal Maturation and Aerobic Performance in Young Soccer Players from Professional Academies.

    PubMed

    Teixeira, A S; Valente-dos-Santos, J; Coelho-E-Silva, M J; Malina, R M; Fernandes-da-Silva, J; Cesar do Nascimento Salvador, P; De Lucas, R D; Wayhs, M C; Guglielmo, L G A

    2015-11-01

    The contribution of chronological age, skeletal age (Fels method) and body size to variance in peak velocity derived from the Carminatti Test was examined in 3 competitive age groups of Brazilian male soccer players: 10-11 years (U-12, n=15), 12-13 years (U-14, n=54) and 14-15 years (U-16, n=23). Body size and soccer-specific aerobic fitness were measured. Body composition was predicted from skinfolds. Analysis of variance and covariance (controlling for chronological age) were used to compare soccer players by age group and by skeletal maturity status within each age group, respectively. Relative skeletal age (skeletal age minus chronological age), body size, estimated fat-free mass and performance on the Carminatti Test increased significantly with age. Carminatti Test performance did not differ among players of contrasting skeletal maturity status in the 3 age groups. Results of multiple linear regressions indicated that fat mass (negative) and chronological age (positive) were significant predictors of peak velocity derived from the Carminatti Test, whereas skeletal age was not. In conclusion, the Carminatti Test appears to be a potentially interesting field protocol to assess intermittent endurance running capacity in youth soccer programs, since it is independent of biological maturity status. © Georg Thieme Verlag KG Stuttgart · New York.

  5. An empirical model of human aspiration in low-velocity air using CFD investigations.

    PubMed

    Anthony, T Renée; Anderson, Kimberly R

    2015-01-01

    Computational fluid dynamics (CFD) modeling was performed to investigate the aspiration efficiency of the human head in low velocities to examine whether the current inhaled particulate mass (IPM) sampling criterion matches the aspiration efficiency of an inhaling human in airflows common to worker exposures. Data from both mouth and nose inhalation, averaged to assess omnidirectional aspiration efficiencies, were compiled and used to generate a unifying model to relate particle size to aspiration efficiency of the human head. Multiple linear regression was used to generate an empirical model to estimate human aspiration efficiency and included particle size as well as breathing and freestream velocities as dependent variables. A new set of simulated mouth and nose breathing aspiration efficiencies was generated and used to test the fit of empirical models. Further, empirical relationships between test conditions and CFD estimates of aspiration were compared to experimental data from mannequin studies, including both calm-air and ultra-low velocity experiments. While a linear relationship between particle size and aspiration is reported in calm air studies, the CFD simulations identified a more reasonable fit using the square of particle aerodynamic diameter, which better addressed the shape of the efficiency curve's decline toward zero for large particles. The ultimate goal of this work was to develop an empirical model that incorporates real-world variations in critical factors associated with particle aspiration to inform low-velocity modifications to the inhalable particle sampling criterion.
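
    The form of the empirical model, with aspiration efficiency regressed on the square of aerodynamic diameter plus the freestream and breathing velocities, can be sketched on synthetic data (all coefficients below are illustrative, not the paper's fit):

```python
import numpy as np

# Synthetic CFD-style aspiration data: efficiency declines with the
# square of aerodynamic diameter and depends on the two velocities.
rng = np.random.default_rng(1)
n = 120
d = rng.uniform(10.0, 100.0, n)        # aerodynamic diameter, um
u_free = rng.uniform(0.1, 0.4, n)      # freestream velocity, m/s
u_breath = rng.uniform(0.1, 0.6, n)    # breathing velocity, m/s
eff = (1.0 - 8e-5 * d**2 + 0.1 * u_free + 0.05 * u_breath
       + rng.normal(0.0, 0.01, n))     # illustrative coefficients

# Multiple linear regression with d^2 (not d) as the size term, as
# the CFD fits suggested for the decline toward zero at large sizes.
X = np.column_stack([np.ones(n), d**2, u_free, u_breath])
beta, *_ = np.linalg.lstsq(X, eff, rcond=None)
```

    On this synthetic data the least-squares fit recovers the generating d² coefficient closely; with real CFD results the residual scatter would reflect the simulation variability.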

  6. Non-linear behaviour of electrical parameters in porous, water-saturated rocks: a model to predict pore size distribution

    NASA Astrophysics Data System (ADS)

    Hallbauer-Zadorozhnaya, Valeriya; Santarato, Giovanni; Abu Zeid, Nasser

    2015-08-01

    In this paper, two separate but related goals are tackled. The first one is to demonstrate that in some saturated rock textures the non-linear behaviour of induced polarization (IP) and the violation of Ohm's law not only are real phenomena, but they can also be satisfactorily predicted by a suitable physical-mathematical model, which is our second goal. This model is based on Fick's second law. As the model links the specific dependence of resistivity and chargeability of a laboratory sample to the injected current and this in turn to its pore size distribution, it is able to predict pore size distribution from laboratory measurements, in good agreement with mercury injection capillary pressure test results. This fact opens up the possibility for hydrogeophysical applications on a macro scale. Mathematical modelling shows that the chargeability acquired in the field under normal conditions, that is at low current, will always be very small and approximately proportional to the applied current. A suitable field test site for demonstrating the possible reliance of both resistivity and chargeability on current was selected and a specific measuring strategy was established. Two data sets were acquired using different injected current strengths, while keeping the charging time constant. Observed variations of resistivity and chargeability are in agreement with those predicted by the mathematical model. These field test data should however be considered preliminary. If confirmed by further evidence, these facts may lead to changing the procedure of acquiring field measurements in future, and perhaps may encourage the design and building of a new specific geo-resistivity meter. This paper also shows that the well-known Marshall and Madden's equations based on Fick's law cannot be solved without specific boundary conditions.

  7. Leverage Between the Buffering Effect and the Bystander Effect in Social Networking.

    PubMed

    Chiu, Yu-Ping; Chang, Shu-Chen

    2015-08-01

    This study examined factors that encourage and inhibit social feedback behavior, based on the theories of the buffering effect and the bystander effect. A system program was used to collect personal data and social feedback from a Facebook data set to test the research model. The results revealed that the buffering effect induced a positive relationship between social network size and feedback gained from friends when people's social network size was below a certain cognitive constraint. For people whose social network size exceeds this cognitive constraint, the bystander effect may occur, in which having more friends may inhibit social feedback. In this study, two social psychological theories were applied to explain social feedback behavior on Facebook, and it was determined that social network size and social feedback exhibited no consistent linear relationship.

  8. Size effects in non-linear heat conduction with flux-limited behaviors

    NASA Astrophysics Data System (ADS)

    Li, Shu-Nan; Cao, Bing-Yang

    2017-11-01

    Size effects are discussed for several non-linear heat conduction models with flux-limited behaviors, including the phonon hydrodynamic, Lagrange multiplier, hierarchy moment, nonlinear phonon hydrodynamic, tempered diffusion, thermon gas and generalized nonlinear models. For the phonon hydrodynamic, Lagrange multiplier and tempered diffusion models, a heat flux cannot exist in problems at sufficiently small scales: its existence requires that the size of the heat conduction domain exceed a corresponding critical size, determined by the physical properties and boundary temperatures. These critical sizes can be regarded as the theoretical limits of the applicable ranges of these non-linear heat conduction models with flux-limited behaviors. For sufficiently small-scale heat conduction, the phonon hydrodynamic and Lagrange multiplier models also predict the theoretical possibility of second-law violations and solution multiplicity. Comparisons are also made between these non-Fourier models and non-linear Fourier heat conduction of the fast-diffusion type, which can also predict flux-limited behaviors.

  9. Respective contribution of orientation contrast and illusion of self-tilt to the rod-and-frame effect.

    PubMed

    Cian, C; Esquivié, D; Barraud, P A; Raphel, C

    1995-01-01

    The visual angle subtended by the frame seems to be an important determinant of the contribution of orientation contrast and illusion of self-tilt (i.e., vection) to the rod-and-frame effect. Indeed, the visuovestibular factor (which produces vection) seems to be predominant in large displays and the contrast effect in small displays. To determine how these two phenomena combine to account for the rod-and-frame effect, independent estimates of the magnitude of each component in relation to the angular size subtended by the display were examined. Thirty-five observers were exposed to three sets of experimental situations: the body-adjustment test (illusion of self-tilt only), the tilt illusion (contrast only) and the rod-and-frame test, each display subtending 7, 12, 28, and 45 deg of visual angle. Results showed that errors recorded in the three situations increased linearly with angular size. Whatever the size of the frame, both mechanisms, the contrast effect (tilt illusion) and the illusory effect on self-orientation (body-adjustment test), are always present. However, rod-and-frame errors grew at a faster rate than the other two effects as the size of the stimuli became larger. Neither independent phenomenon alone, nor their combined effect, could fully account for the rod-and-frame effect, whatever the angular size of the apparatus.

  10. Effect of wire size on maxillary arch force/couple systems for a simulated high canine malocclusion.

    PubMed

    Major, Paul W; Toogood, Roger W; Badawi, Hisham M; Carey, Jason P; Seru, Surbhi

    2014-12-01

    To better understand the effects of copper nickel titanium (CuNiTi) archwire size on bracket-archwire mechanics through the analysis of force/couple distributions along the maxillary arch. The hypothesis is that wire size is linearly related to the forces and moments produced along the arch. An Orthodontic Simulator was utilized to study a simplified high canine malocclusion. Force/couple distributions produced by passive and elastic ligation using two wire sizes (Damon 0.014 and 0.018 inch) were measured with a sample size of 144. The distribution and variation in force/couple loading around the arch is a complicated function of wire size. The use of a thicker wire increases the force/couple magnitudes regardless of ligation method. Owing to the non-linear material behaviour of CuNiTi, this increase is smaller than linear theory (which would apply to stainless steel wires) would predict. The results demonstrate that an increase in wire size does not result in a proportional increase of applied force/moment. This discrepancy is explained in terms of the non-linear properties of CuNiTi wires. This non-proportional force response in relation to increased wire size warrants careful consideration when selecting wires in a clinical setting. © 2014 British Orthodontic Society.

  11. Ultrasound transducer shape has no effect on measurements of lumbar multifidus muscle size.

    PubMed

    Worsley, Peter R; Smith, Nicholas; Warner, Martin B; Stokes, Maria

    2012-04-01

    Evidence is currently lacking for guidance on ultrasound transducer configuration (shape) when imaging muscle to measure its size. This study compared measurements made of lumbar multifidus on images obtained using curvilinear and linear transducers. Fifteen asymptomatic males (aged 21-32 years) had their right lumbar multifidus imaged at L3. Two transverse images were taken with two transducers (5 MHz curvilinear and 6 MHz linear), and linear and cross-sectional area (CSA) measurements were made off-line. Reliability of image interpretation was shown using intra-class correlation coefficients (0.78-0.99). Muscle measurements were compared between transducers using Bland and Altman plots and paired t-tests. Relationships between CSA and linear measurements were examined using Pearson's Correlation Coefficients. There were no significant differences (p > 0.05) in the measurements of the two transducers. Thickness and CSA measurements had small differences between transducers, with mean differences of 0.01 cm (SDdiff = 0.21 cm) and 0.03 cm(2) (SDdiff = 0.58 cm(2)) respectively. Width measures had a mean difference of 0.14 cm, with the linear transducer giving larger measures. Significant correlations (p < 0.001) were found between all linear measures and CSA, with both transducers (r = 0.78-0.89). Measurements of multifidus at L3 were not influenced by the configuration of transducers of similar frequency. For the purposes of image interpretation, the curvilinear transducer produced better definition of the lateral muscle border, suggesting it as the preferable transducer for imaging lumbar multifidus. Copyright © 2011 Elsevier Ltd. All rights reserved.
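
The Bland-Altman comparison used above reduces to the mean between-method difference and its 95% limits of agreement. A minimal sketch, using made-up thickness readings rather than the study's measurements:

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Mean difference and 95% limits of agreement between paired
    measurements from two methods (Bland-Altman summary statistics)."""
    diffs = [x - y for x, y in zip(method_a, method_b)]
    md, sd = mean(diffs), stdev(diffs)
    return md, (md - 1.96 * sd, md + 1.96 * sd)

# Hypothetical multifidus thickness (cm) from the two transducers:
curvilinear = [3.1, 3.3, 3.0, 3.2]
linear_probe = [3.0, 3.2, 3.1, 3.1]
md, (lo, hi) = bland_altman(curvilinear, linear_probe)
print(round(md, 3))  # 0.05
```

A mean difference near zero with narrow limits of agreement is what supports the paper's conclusion that transducer shape does not bias the measurement.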

  12. Mining Distance Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule

    NASA Technical Reports Server (NTRS)

    Bay, Stephen D.; Schwabacher, Mark

    2003-01-01

    Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
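
The nested-loop-with-pruning idea can be sketched in a few lines. This is a simplified illustration of the approach described above, not the authors' implementation; the score here is distance to the k-th nearest neighbour, and a candidate is abandoned as soon as that distance drops below the weakest outlier kept so far:

```python
import random

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def top_outliers(data, n_outliers=3, k=2):
    """Distance-based outliers via a nested loop with randomization
    and a simple pruning rule (illustrative sketch)."""
    data = data[:]
    random.shuffle(data)     # random order is key to the average case
    cutoff = 0.0             # score of the weakest outlier kept so far
    results = []             # (score, point), kept sorted descending
    for candidate in data:
        knn = []             # k smallest distances seen, descending
        pruned = False
        for other in data:
            if other is candidate:
                continue
            d = distance(candidate, other)
            if len(knn) < k:
                knn.append(d)
                knn.sort(reverse=True)
            elif d < knn[0]:
                knn[0] = d
                knn.sort(reverse=True)
            # prune: candidate can no longer beat the current cutoff
            if len(knn) == k and knn[0] < cutoff:
                pruned = True
                break
        if not pruned:
            results.append((knn[0], candidate))
            results.sort(reverse=True)
            results = results[:n_outliers]
            if len(results) == n_outliers:
                cutoff = results[-1][0]
    return results
```

Because most examples are non-outliers, their scans terminate after a handful of close neighbours, which is the source of the near-linear average-case behaviour reported above.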

  13. Electrophoresis in strong electric fields.

    PubMed

    Barany, Sandor

    2009-01-01

    Two kinds of non-linear electrophoresis (ef) that can be detected in strong electric fields (several hundred V/cm) are considered. The first ("classical" non-linear ef) is due to the interaction of the outer field with field-induced ionic charges in the electric double layer (EDL) under conditions in which field-induced variations of electrolyte concentration remain small compared with its equilibrium value. According to the Shilov theory, the non-linear component of the electrophoretic velocity for dielectric particles is proportional to the cubic power of the applied field strength (cubic electrophoresis) and to the second power of the particle radius; it is independent of the zeta-potential but is determined by the surface conductivity of the particles. The second one, the so-called "superfast electrophoresis", is connected with the interaction of a strong outer field with a secondary diffuse layer of counterions (space charge) that is induced outside the primary (classical) diffuse EDL by the external field itself because of concentration polarization. The Dukhin-Mishchuk theory of "superfast electrophoresis" predicts a quadratic dependence of the electrophoretic velocity of unipolar (ionically or electronically) conducting particles on the external field gradient and a linear dependence on the particle size in strong electric fields. These are in sharp contrast to the laws of classical electrophoresis (no dependence of V(ef) on particle size and linear dependence on the electric field gradient). A new method to measure the ef velocity of particles in strong electric fields is developed, based on separating the effects of sedimentation and electrophoresis using video imaging, a new flow cell and short electric pulses.
To test the "classical" non-linear electrophoresis, we have measured the ef velocity of non-conducting polystyrene, aluminium-oxide and (semiconductor) graphite particles as well as Saccharomyces cerevisiae yeast cells as a function of the electric field strength, particle size, electrolyte concentration and the adsorbed polymer amount. It has been shown that the electrophoretic velocity of the particles/cells increases linearly with field strength up to about 100 V/cm (200 V/cm for cells), both without and with adsorbed polymers, in pure water as well as in electrolyte solutions. In line with the theoretical predictions, in stronger fields substantial non-linear effects were recorded (V(ef)~E(3)). The ef velocity of unipolar ion-conducting (ion-exchanger particles and fibres), electron-conducting (magnesium and Mg/Al alloy) and semiconductor particles (graphite, activated carbon, pyrite, molybdenite) increases significantly with the electric field (V(ef)~E(2)) and the particle size but is almost independent of the ionic strength. These trends are inconsistent with Smoluchowski's equation for dielectric particles, but are consistent with the Dukhin-Mishchuk theory of superfast electrophoresis.

  14. Nature of size effects in compact models of field effect transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torkhov, N. A., E-mail: trkf@mail.ru; Scientific-Research Institute of Semiconductor Devices, Tomsk 634050; Tomsk State University of Control Systems and Radioelectronics, Tomsk 634050

    Investigations have shown that in the local approximation (for sizes L < 100 μm), AlGaN/GaN high electron mobility transistor (HEMT) structures satisfy all properties of chaotic systems and can be described in the language of fractal geometry of fractional dimensions. For such objects, the values of their electrophysical characteristics depend on the linear sizes of the examined regions, which explains the presence of so-called size effects: dependences of the electrophysical and instrumental characteristics on the linear sizes of the active elements of semiconductor devices. In the present work, a relationship has been established between the linear model parameters of the equivalent circuit elements of internal transistors and the fractal geometry of the heteroepitaxial structure, manifested through a dependence of its relative electrophysical characteristics on the linear sizes of the examined surface areas. For HEMTs, this implies dependences of their relative static (A/mm, mA/V/mm, Ω/mm, etc.) and microwave (W/mm) characteristics on the width d of the drain-source channel and on the number of sections n, which leads to a nonlinear dependence of the retrieved parameter values of equivalent circuit elements of linear internal transistor models on n and d. Thus, it has been demonstrated that size effects in semiconductors determined by fractal geometry must be taken into account when investigating the properties of semiconductor objects below the local approximation limit and when designing and manufacturing field effect transistors. In general, the suggested approach allows a complex of problems to be solved in designing, optimizing, and retrieving the parameters of equivalent circuits of linear and nonlinear models not only of field effect transistors but of any semiconductor devices with nonlinear instrumental characteristics.

  15. Effects of Al(OH)O nanoparticle agglomerate size in epoxy resin on tension, bending, and fracture properties

    NASA Astrophysics Data System (ADS)

    Jux, Maximilian; Finke, Benedikt; Mahrholz, Thorsten; Sinapius, Michael; Kwade, Arno; Schilde, Carsten

    2017-04-01

    Several epoxy Al(OH)O (boehmite) dispersions in an epoxy resin are produced in a kneader to study the mechanistic correlation between nanoparticle size and the mechanical properties of the prepared nanocomposites. The agglomerate size is set by a targeted variation in solid content and temperature during dispersion, resulting in a different level of stress intensity and thus a different final agglomerate size during the process. The suspension viscosity was used to estimate the stress energy in laminar shear flow. Agglomerate size measurements are performed via dynamic light scattering to ensure the quality of the produced dispersions. Furthermore, various nanocomposite samples are prepared for three-point bending, tension, and fracture toughness tests. The screening of the size effect is carried out with at least seven samples per agglomerate size and test method. The variation of solid content is found to be a reliable method to adjust the agglomerate size between 138 and 354 nm during dispersion. The size effect on Young's modulus and the critical stress intensity is only marginal. Nevertheless, there is a statistically relevant trend showing a linear increase with decreasing agglomerate size. In contrast, the size effect is more pronounced for the sample's strain and stress at failure. Unlike microscaled agglomerates or particles, which lead to embrittlement of the composite material, nanoscaled agglomerates or particles allow the composite elongation to remain nearly at the level of the base material. The observed effect is valid for agglomerate sizes between 138 and 354 nm and a particle mass fraction of 10 wt%.

  16. Outdoor dissolution of detonation residues of three insensitive munitions (IM) formulations.

    PubMed

    Taylor, Susan; Dontsova, Katerina; Walsh, Marianne E; Walsh, Michael R

    2015-09-01

    We seek to understand the environmental fate of three new insensitive munitions, explosive formulations developed to reduce the incidence of unintended detonations. To this end, we measured the size distribution of residues from low-order detonations of IMX 101, IMX 104, and PAX 21-filled munitions and are studying how these three formulations weather and dissolve outdoors. The largest pieces collected from the detonations were centimeter-sized, and we studied 12 of these in the outdoor test. We found that the particles break easily and that the dissolution of 2,4-dinitroanisole (DNAN) is quasi-linear as a function of water volume. DNAN is the matrix and the least soluble major constituent of the three formulations. We used DNAN's linear dissolution rate to estimate the life span of the pieces. Particles ranging in mass from 0.3 to 3.5 g will completely dissolve in 3-21 years given 100 cm y(-1) precipitation rates. Published by Elsevier Ltd.
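
The life-span estimate follows from treating dissolved mass as linear in cumulative water volume. A back-of-the-envelope sketch; the per-centimetre rates below are back-calculated from the abstract's endpoints, not measured values:

```python
def lifespan_years(mass_g, rate_g_per_cm, precip_cm_per_year=100.0):
    """Years for a particle to dissolve completely, assuming mass loss
    is linear in cumulative precipitation depth (the quasi-linear DNAN
    dissolution described above)."""
    return mass_g / (rate_g_per_cm * precip_cm_per_year)

# Effective rates implied by the reported endpoints at 100 cm/y:
print(round(lifespan_years(0.3, 0.3 / 300)))    # 3  (years)
print(round(lifespan_years(3.5, 3.5 / 2100)))   # 21 (years)
```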

  17. 3D Wavelet-Based Filter and Method

    DOEpatents

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  18. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. 
The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
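
Two of the quantities audited above are easy to reproduce: the experiment-wise (familywise) Type I error rate across m independent uncorrected tests, and a rough normal-approximation sketch of two-sample t-test power (the exact calculation uses the noncentral t distribution; the sample sizes here are illustrative, not from the reviewed papers):

```python
import math
from statistics import NormalDist

def familywise_error(alpha, m):
    """P(at least one false positive) across m independent tests,
    each conducted at significance level alpha."""
    return 1.0 - (1.0 - alpha) ** m

def power_normal_approx(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample t test to detect a
    standardized effect size d (normal approximation; a sketch only)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1.0 - alpha / 2.0)
    return nd.cdf(d * math.sqrt(n_per_group / 2.0) - z_crit)

# Fifteen uncorrected tests at alpha = .05 already reach the scale of
# the median experiment-wise error rate of .54 reported above:
print(round(familywise_error(0.05, 15), 2))   # 0.54
# A medium effect (d = 0.5) with 64 per group gives roughly 80% power:
print(round(power_normal_approx(0.5, 64), 2))
```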

  19. Linearity enhancement design of a 16-channel low-noise front-end readout ASIC for CdZnTe detectors

    NASA Astrophysics Data System (ADS)

    Zeng, Huiming; Wei, Tingcun; Wang, Jia

    2017-03-01

    A 16-channel front-end readout application-specific integrated circuit (ASIC) with a linearity enhancement design for cadmium zinc telluride (CdZnTe) detectors is presented in this paper. The resistors in the slow shaper are realized using a high-Z circuit to obtain a constant resistance value instead of using only a metal-oxide-semiconductor (MOS) transistor; thus the shaping time of the slow shaper can be kept constant for different input energies. As a result, the linearity of the conversion gain is improved significantly. The ASIC was designed and fabricated in a 0.35 μm CMOS process with a die size of 2.60 mm×3.53 mm. The test results show that a typical channel provides an equivalent noise charge (ENC) of 109.7 e- + 16.3 e-/pF with a power consumption of 4 mW and achieves a conversion gain of 87 mV/fC with a nonlinearity of <0.4%. The linearity of the conversion gain is improved by at least 86.6% compared with traditional approaches using the same front-end readout architecture and manufacturing process. Moreover, the inconsistency among channels is <0.3%. An energy resolution of 2.975 keV (FWHM) for gamma rays of 59.5 keV was measured by connecting the ASIC to a 5 mm×5 mm×2 mm CdZnTe detector at room temperature. The front-end readout ASIC presented in this paper achieves outstanding linearity without compromising noise, power consumption, or chip size.

  20. Precision Linear Actuator for Space Interferometry Mission (SIM) Siderostat Pointing

    NASA Technical Reports Server (NTRS)

    Cook, Brant; Braun, David; Hankins, Steve; Koenig, John; Moore, Don

    2008-01-01

    'SIM PlanetQuest will exploit the classical measuring tool of astrometry (interferometry) with unprecedented precision to make dramatic advances in many areas of astronomy and astrophysics'(1). In order to obtain interferometric data, two large steerable mirrors, or siderostats, are used to direct starlight into the interferometer. A gimbaled mechanism actuated by linear actuators was chosen to meet the unprecedented pointing and angle-tracking requirements of SIM. A group of JPL engineers designed, built, and tested a linear ballscrew actuator capable of performing submicron incremental steps for 10 years of continuous operation. Precise, zero-backlash, closed-loop pointing control requirements led the team to implement a ballscrew actuator with a direct-drive DC motor and a precision piezo brake. Motor control commutation using feedback from a precision linear encoder on the ballscrew output produced an unexpectedly small incremental step size of 20 nm over a range of 120 mm, yielding a dynamic range of 6,000,000:1. The results prove that linear nanometer positioning requires no gears, levers, or hydraulic converters. Along the way many lessons were learned, which are subsequently shared.
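
The quoted dynamic range is simply the travel divided by the step size:

```python
travel_m = 120e-3   # 120 mm of encoder-controlled travel
step_m = 20e-9      # 20 nm incremental step
print(round(travel_m / step_m))  # 6000000, i.e. 6,000,000:1
```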

  1. Microbial detection method based on sensing molecular hydrogen

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Stoner, G. E.; Boykin, E. H.

    1974-01-01

    A simple method for detecting bacteria, based on the time of hydrogen evolution, was developed and tested against various members of the Enterobacteriaceae group. The test system consisted of (1) two electrodes, platinum and a reference electrode, (2) a buffer amplifier, and (3) a strip-chart recorder. Hydrogen evolution was measured by an increase in voltage in the negative (cathodic) direction. A linear relationship was established between inoculum size and the time hydrogen was detected (lag period). Lag times ranged from 1 h for 1 million cells/ml to 7 h for 1 cell/ml. For each 10-fold decrease in inoculum, length of the lag period increased 60 to 70 min. Based on the linear relationship between inoculum and lag period, these results indicate the potential application of the hydrogen-sensing method for rapidly detecting coliforms and other gas-producing microorganisms in a variety of clinical, food, and other samples.
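
The linear inoculum/lag relationship above implies a simple predictor. A sketch using a 60 min-per-decade slope (the abstract reports 60 to 70 min per decade, so the round value is an assumption):

```python
import math

def lag_minutes(cells_per_ml, slope_min_per_decade=60.0):
    """Predicted hydrogen-detection lag: ~1 h at 10^6 cells/ml, plus
    one slope increment for each 10-fold drop in inoculum size."""
    decades_below_1e6 = 6.0 - math.log10(cells_per_ml)
    return 60.0 + slope_min_per_decade * decades_below_1e6

print(lag_minutes(1e6) / 60.0)  # 1.0 (hours)
print(lag_minutes(1.0) / 60.0)  # 7.0 (hours)
```

Inverting the same line would let a measured lag time estimate the starting inoculum, which is the practical use suggested for the method.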

  2. Synthesis, crystal growth and studies on non-linear optical property of new chalcones

    NASA Astrophysics Data System (ADS)

    Sarojini, B. K.; Narayana, B.; Ashalatha, B. V.; Indira, J.; Lobo, K. G.

    2006-09-01

    The synthesis, crystal growth and non-linear optical (NLO) properties of new chalcone derivatives are reported. 4-Propyloxy and 4-butoxy benzaldehydes underwent Claisen-Schmidt condensation with 4-methoxy, 4-nitro and 4-phenoxy acetophenones to form the corresponding chalcones. The newly synthesized compounds were characterized by analytical and spectral data. The second-harmonic generation (SHG) efficiency of these compounds was measured by the powder technique using an Nd:YAG laser. Among the tested compounds, three chalcones showed NLO activity. The chalcone 1-(4-methoxyphenyl)-3-(4-propyloxyphenyl)-2-propen-1-one exhibited an SHG conversion efficiency 2.7 times that of urea. A bulk crystal of 1-(4-methoxyphenyl)-3-(4-butoxyphenyl)-2-propen-1-one (crystal size 65×28×15 mm(3)) was grown from acetone by the slow-evaporation technique. The microhardness of the crystal was tested by the Vickers method.

  3. Microwave-Assisted Synthesis of Silver Vanadium Phosphorus Oxide, Ag2VO2PO4: Crystallite Size Control and Impact on Electrochemistry

    DOE PAGES

    Huang, Jianping; Marschilok, Amy C.; Takeuchi, Esther S.; ...

    2016-03-07

    We study silver vanadium phosphorus oxide, Ag2VO2PO4, a promising cathode material for Li batteries due in part to its large capacity and high current capability. Herein, a new synthesis of Ag2VO2PO4 based on microwave heating is presented, in which the reaction time is reduced by approximately 100× relative to other reported methods and the crystallite size is controlled via the synthesis temperature, showing a linear correlation of crystallite size with temperature. Notably, under galvanostatic reduction, the Ag2VO2PO4 sample with the smallest crystallite size delivers the highest capacity and shows the highest loaded voltage. Further, pulse discharge tests show a significant resistance decrease during the initial discharge, coincident with the formation of Ag metal. The magnitude of the resistance decrease observed during pulse tests depends on the Ag2VO2PO4 crystallite size, with the largest resistance decrease observed for the smallest crystallite size. Additional electrochemical measurements indicate a quasi-reversible redox reaction involving Li+ insertion/deinsertion, with capacity fade due to structural changes associated with the discharge/charge process. In summary, this work demonstrates a faster synthetic approach for bimetallic polyanionic materials which also provides the opportunity to tune electrochemical properties through control of physical properties such as crystallite size.

  4. Linear viscoelastic limits of asphalt concrete at low and intermediate temperatures

    NASA Astrophysics Data System (ADS)

    Mehta, Yusuf A.

    The purpose of this dissertation is to demonstrate the hypothesis that a region in which the behavior of asphalt concrete can be represented as a linear viscoelastic material can be determined at low and intermediate temperatures, considering the stresses and strains typically developed in pavements under traffic loading. Six mixtures containing different aggregate gradations and nominal maximum aggregate sizes varying from 12.5 to 37.5 mm were used in this study. The asphalt binder grade was the same for all mixtures. The mixtures were compacted to 7 +/- 1% air voids, using the Superpave Gyratory Compactor. Tests were conducted at low temperatures (-20°C and -10°C), using the indirect tensile test machine, and at intermediate temperatures (4°C and 20°C), using the Superpave shear machine. To determine the linear viscoelastic range of asphalt concrete, a relaxation test for 150 s, followed by a creep test for another 150 s, was conducted at 150 and 200 microstrains (1 microstrain = 1 x 10(-6)), at -20°C, and at 150 and 300 microstrains, at -10°C. A creep test for 200 s, followed by a recovery test for another 200 s, was conducted at stress levels up to 800 kPa at 4°C and up to 500 kPa at 20°C. At -20°C and -10°C, the behavior of the mixtures was linear viscoelastic at 200 and 300 microstrains, respectively. At intermediate temperatures (4°C and 20°C), an envelope defining the linear and nonlinear region in terms of stress as a function of shear creep compliance was constructed for all the mixtures. For creep tests conducted at 20°C, it was discovered that the commonly used protocol to verify the proportionality condition of linear viscoelastic behavior was unable to detect the appearance of nonlinear behavior at certain imposed shear stress levels. Said nonlinear behavior was easily detected, however, when checking the satisfaction of the superposition condition. 
The envelope constructed for determining when the material becomes nonlinear should be valid for mixtures similar to the ones tested in this study. Different envelopes should be used in the case of mixtures containing a very soft or a very stiff polymer modified binder. At 4°C, the typical values of stresses and material properties of mixtures fell within the linear viscoelastic region, considering the typical shear creep compliance values at loading times and stresses experienced in the field. However, typical values at 20°C fell within a region in which some, but not all of the mixtures tested in this study behaved linearly. It is known that the behavior of asphalt concrete mixture changes from linear to nonlinear, depending on the temperature and loading conditions. However, this study is the first of its kind in which both the proportionality and the superposition condition were evaluated. The experimental design and the analysis procedures presented in this study can be applied to similar experiments that may be conducted in the future to evaluate linearity of different types of asphalt concrete mixtures.

  5. Self-ion irradiation effects on mechanical properties of nanocrystalline zirconium films

    DOE PAGES

    Wang, Baoming; Haque, M. A.; Tomar, Vikas; ...

    2017-07-13

    Zirconium thin films were irradiated at room temperature with an 800 keV Zr⁺ beam using a 6 MV HVE Tandem accelerator to a damage level of 1.36 displacements per atom. Freestanding tensile specimens, 100 nm thick with a 10 nm grain size, were tested in-situ inside a transmission electron microscope. Significant grain growth (>300%), texture evolution, and displacement damage defects were observed. Here, stress-strain profiles were mostly linear elastic below 20 nm grain size, but above this limit the samples demonstrated yielding and strain hardening. Experimental results support the hypothesis that grain boundaries in nanocrystalline metals act as very effective defect sinks.

  6. Multiple focused EMAT designs for improved surface breaking defect characterization

    NASA Astrophysics Data System (ADS)

    Thring, C. B.; Fan, Y.; Edwards, R. S.

    2017-02-01

    Ultrasonic Rayleigh waves can be employed for the detection of surface breaking defects such as rolling contact fatigue and stress corrosion cracking. Electromagnetic Acoustic Transducers (EMATs) are well suited to this technique as they can directly generate Rayleigh waves within the sample without the requirement for wedges, and they are robust and inexpensive compared to laser ultrasonics. Three different EMAT coil types have been developed, and these are compared to assess their ability to detect and characterize small (down to 0.5 mm depth, 1 mm diameter) surface breaking defects in aluminium. These designs are: a pair of linear meander coils used in a pseudo-pulse-echo mode, a pair of focused meander coils also used in pseudo-pulse-echo mode, and a pair of focused racetrack coils used in pitch-catch mode. The linear meander coils are able to detect most of the defects tested, but have a much lower signal to noise ratio and give limited sizing information. The focused meander coils and the focused racetrack coils can detect all defects tested, but have the advantage that they can also characterize the defect sizes on the sample surface, and have a stronger sensitivity at their focal point. Measurements using all three EMAT designs are presented and compared for high resolution imaging of surface-breaking defects.

  7. Sexual dimorphism and allometry in the sphecophilous rove beetle Triacrus dilatus.

    PubMed

    Marlowe, Maxwell H; Murphy, Cheryl A; Chatzimanolis, Stylianos

    2015-01-01

    The rove beetle Triacrus dilatus is found in the Atlantic forest of South America and lives in the refuse piles of the paper wasp Agelaia vicina. Adults of T. dilatus are among the largest rove beetles, frequently measuring over 3 cm, and exhibit remarkable variation in body size. To examine sexual dimorphism and allometric relationships we measured the length of the left mandible, ocular distance and elytra. We were interested in determining if there are quantifiable differences between sexes, if there are major and minor forms within each sex and if males exhibit mandibular allometry. For all variables, a t-test was run to determine if there were significant differences between the sexes. Linear regressions were run to examine if there were significant relationships between the different measurements. A heterogeneity of slopes test was used to determine if there were significant differences between males and females. Our results indicated that males had significantly larger mandibles and ocular distances than females, but the overall body length was not significantly different between the sexes. Unlike most insects, both sexes showed positive linear allometric relationships for mandible length and head size (as measured by the ocular distance). We found no evidence of major and minor forms in either sex.

  8. Non-Linear Harmonic flow simulations of a High-Head Francis Turbine test case

    NASA Astrophysics Data System (ADS)

    Lestriez, R.; Amet, E.; Tartinville, B.; Hirsch, C.

    2016-11-01

    This work investigates the use of the non-linear harmonic (NLH) method for a high-head Francis turbine, the Francis99 workshop test case. The NLH method relies on a Fourier decomposition of the unsteady flow components in harmonics of Blade Passing Frequencies (BPF), which are the fundamentals of the periodic disturbances generated by the adjacent blade rows. The unsteady flow solution is obtained by marching in pseudo-time to a steady-state solution of the transport equations associated with the time-mean, the BPFs, and their harmonics. Thanks to this transposition into the frequency domain, meshing only one blade channel is sufficient, as for a steady flow simulation. Notable benefits in terms of computing costs and engineering time can therefore be obtained compared to a classical time-marching approach using sliding grid techniques. The method has been applied to three operating points of the Francis99 workshop high-head Francis turbine. Steady and NLH flow simulations have been carried out for these configurations. The impact of grid size and near-wall refinement is analysed for all operating points for steady simulations and for the Best Efficiency Point (BEP) for NLH simulations. NLH results for a selected grid size are then compared for the three different operating points, reproducing the tendencies observed in the experiment.

  9. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V

    NASA Astrophysics Data System (ADS)

    Mardirossian, Narbe; Head-Gordon, Martin

    2015-02-01

    A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10¹⁰ choices carved out of a functional space of almost 10⁴⁰ possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.

  10. Switching times of nanoscale FePt: Finite size effects on the linear reversal mechanism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, M. O. A.; Chantrell, R. W.

    2015-04-20

    The linear reversal mechanism in FePt grains ranging from 2.316 nm to 5.404 nm has been simulated using atomistic spin dynamics, parametrized from ab-initio calculations. The Curie temperature and the critical temperature (T*), at which the linear reversal mechanism occurs, are observed to decrease with system size whilst the temperature window T*

  11. Form features provide a cue to the angular velocity of rotating objects

    PubMed Central

    Blair, Christopher David; Goold, Jessica; Killebrew, Kyle; Caplovitz, Gideon Paul

    2013-01-01

    As an object rotates, each location on the object moves with an instantaneous linear velocity dependent upon its distance from the center of rotation, while the object as a whole rotates with a fixed angular velocity. Does the perceived rotational speed of an object correspond to its angular velocity, linear velocities, or some combination of the two? We had observers perform relative speed judgments of different sized objects, as changing the size of an object changes the linear velocity of each location on the object’s surface, while maintaining the object’s angular velocity. We found that the larger a given object is, the faster it is perceived to rotate. However, the observed relationships between size and perceived speed cannot be accounted for simply by size-related changes in linear velocity. Further, the degree to which size influences perceived rotational speed depends on the shape of the object. Specifically, perceived rotational speeds of objects with corners or regions of high contour curvature were less affected by size. The results suggest distinct contour features, such as corners or regions of high or discontinuous contour curvature, provide cues to the angular velocity of a rotating object. PMID:23750970

  12. Form features provide a cue to the angular velocity of rotating objects.

    PubMed

    Blair, Christopher David; Goold, Jessica; Killebrew, Kyle; Caplovitz, Gideon Paul

    2014-02-01

    As an object rotates, each location on the object moves with an instantaneous linear velocity, dependent upon its distance from the center of rotation, whereas the object as a whole rotates with a fixed angular velocity. Does the perceived rotational speed of an object correspond to its angular velocity, linear velocities, or some combination of the two? We had observers perform relative speed judgments of different-sized objects, as changing the size of an object changes the linear velocity of each location on the object's surface, while maintaining the object's angular velocity. We found that the larger a given object is, the faster it is perceived to rotate. However, the observed relationships between size and perceived speed cannot be accounted for simply by size-related changes in linear velocity. Further, the degree to which size influences perceived rotational speed depends on the shape of the object. Specifically, perceived rotational speeds of objects with corners or regions of high-contour curvature were less affected by size. The results suggest distinct contour features, such as corners or regions of high or discontinuous contour curvature, provide cues to the angular velocity of a rotating object. PsycINFO Database Record (c) 2014 APA, all rights reserved.
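
    The size manipulation described in the two records above rests on the basic kinematics of rotation: a point at radius r on an object spinning at angular velocity ω moves with linear speed v = ωr. A minimal sketch (the function name and radii are illustrative, not from the study):

```python
import math

def linear_speed(angular_velocity: float, radius: float) -> float:
    """Instantaneous linear speed of a point at `radius` from the rotation center."""
    return angular_velocity * radius

# Doubling an object's size doubles the linear speed of every surface point
# while the angular velocity stays fixed -- the manipulation used in the studies.
omega = 2.0 * math.pi          # one revolution per second, in rad/s
small_edge = linear_speed(omega, 1.0)
large_edge = linear_speed(omega, 2.0)
assert large_edge == 2.0 * small_edge
```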

  13. Attrition and changes in size distribution of lime sorbents during fluidization in a circulating fluidized bed absorber. Double quarterly report, January 1--August 31, 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sang-Kwun; Keener, T.C.; Cook, J.L.

    1993-12-31

    The experimental data of lime sorbent attrition obtained from attrition tests in a circulating fluidized bed absorber (CFBA) are presented. The results are interpreted as both the weight-based attrition rate and the size-based attrition rate. The weight-based attrition rate constants are obtained from a modified second-order attrition model incorporating a minimum fluidization weight, W_min, and excess velocity. Furthermore, this minimum fluidization weight, W_min, was found to be a function of both particle size and velocity. A plot of the natural log of the overall weight-based attrition rate constants (ln K_a) for Lime 1 (903 MMD) at superficial gas velocities of 2 m/s, 2.35 m/s, and 2.69 m/s and for Lime 2 (1764 MMD) at superficial gas velocities of 2 m/s, 3 m/s, 4 m/s and 5 m/s versus the energy term, 1/(U - U_mf)², yielded a linear relationship. A regression coefficient of 0.9386 for the linear regression confirms that K_a may be expressed in Arrhenius form. In addition, an unsteady-state population model is presented to predict the changes in size distribution of bed materials during fluidization. The unsteady-state population model was verified experimentally, and the solid size distribution predicted by the model agreed well with the corresponding experimental size distributions. The model may be applicable to batch and continuous operations of fluidized beds in which solids size reduction results predominantly from attrition and elutriation. Such significance of mechanical attrition and elutriation is frequently seen in a fast fluidized bed as well as in a circulating fluidized bed.
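
    The Arrhenius-form relationship reported above (ln K_a linear in the energy term 1/(U - U_mf)²) can be reproduced with an ordinary least-squares fit. The velocities below follow the abstract for Lime 2, but U_mf and the K_a values are hypothetical placeholders for illustration only:

```python
import numpy as np

# Superficial gas velocities (m/s) for Lime 2, as listed in the abstract.
# U_mf and the attrition rate constants K_a below are made-up placeholders.
U = np.array([2.0, 3.0, 4.0, 5.0])
U_mf = 0.5
K_a = np.array([1.2e-4, 2.5e-4, 3.4e-4, 4.0e-4])

x = 1.0 / (U - U_mf) ** 2      # the "energy term" from the abstract
y = np.log(K_a)                # ln K_a, Arrhenius form

slope, intercept = np.polyfit(x, y, 1)

# Coefficient of determination for the straight-line fit
y_hat = slope * x + intercept
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

A regression coefficient near 1 for this fit is what justifies expressing K_a in Arrhenius form, as the abstract reports (0.9386 for the real data).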

  14. Markers of physiological stress in juvenile bonobos (Pan paniscus): are enamel hypoplasia, skeletal development and tooth size interrelated?

    PubMed

    Lukacs, John R

    2009-07-01

    A reduction in enamel thickness due to disrupted amelogenesis is referred to as enamel hypoplasia (EH). Linear EH in permanent teeth is a widely accepted marker of systemic physiological stress. An enigmatic, nonlinear form of EH commonly manifest in great ape and human deciduous canines (dc) is known as localized hypoplasia of primary canines (LHPC). The etiology of LHPC and what it signifies (localized traumatic or systemic physiological stress) remain unclear. This report presents frequency data on LHPC, hypostotic cranial traits, and tooth size in a sample of juvenile bonobos, then tests hypotheses of intertrait association that improve knowledge of the etiology and meaning of LHPC. The fenestration hypothesis is tested using hypostotic cranial traits as a proxy for membrane bone ossification, and the relationship between tooth size, LHPC, and hypostosis is investigated. Macroscopic observations of EH, hypostotic traits, and measurements of buccolingual tooth size were conducted according to established standards. LHPC was found in 51.2% of bonobos (n = 86) and in 26% of dc teeth (n = 269). Hypostotic traits were observed in 55.2% of bonobos (n = 96). A test of the association between LHPC and hypostosis yielded nonsignificant results (χ² = 2.935; P = 0.0867). Primary canines were larger in specimens with LHPC than in unaffected specimens (paired samples t test; udc, P = 0.011; ldc, P = 0.018), a result consistent with the fenestration hypothesis of LHPC pathogenesis. Hypostosis was not associated with differences in tooth size (P > 0.05). LHPC may be an indirect indicator of physiological stress, resulting from large, buccally displaced primary canines.

  15. Achromatic Focal Plane Mask for Exoplanet Imaging Coronagraphy

    NASA Technical Reports Server (NTRS)

    Newman, Kevin Edward; Belikov, Ruslan; Guyon, Olivier; Balasubramanian, Kunjithapatham; Wilson, Dan

    2013-01-01

    Recent advances in coronagraph technologies for exoplanet imaging have achieved contrasts close to 1e-10 at 4 λ/D and 1e-9 at 2 λ/D in monochromatic light. A remaining technological challenge is to achieve high contrast in broadband light, a challenge that is largely limited by the chromaticity of the focal plane mask. The size of a star image scales linearly with wavelength. Focal plane masks are typically the same size at all wavelengths and must be sized for the longest wavelength in the observational band to avoid starlight leakage. However, this oversized mask blocks useful discovery space at the shorter wavelengths. We present here the design, development, and testing of an achromatic focal plane mask based on the concept of optical filtering by a diffractive optical element (DOE). The mask consists of an array of DOE cells, the combination of which functions as a wavelength filter with any desired amplitude and phase transmission. The effective size of the mask scales nearly linearly with wavelength, allowing significant improvement in the inner working angle of the coronagraph at shorter wavelengths. The design is applicable to almost any coronagraph configuration and enables operation in a wider band of wavelengths than would otherwise be possible. We include initial results from a laboratory demonstration of the mask with the Phase Induced Amplitude Apodization coronagraph.

  16. Geotechnical and mineralogical characteristics of marl deposits in Jordan

    NASA Astrophysics Data System (ADS)

    Shaqour, Fathi M.; Jarrar, Ghaleb; Hencher, Steve; Kuisi, Mostafa

    2008-10-01

    Marls and marly limestone deposits cover most of northern Jordan, where Amman City and its suburbs are located. These deposits serve as foundations for most buildings and roads, as well as fill material for structural backfilling, especially road bases and sub-bases. The present study aims at investigating the geotechnical characteristics and mineral composition of the marl units of these deposits through field investigations and laboratory testing. Using the X-ray diffraction technique along with chemical analysis, representative samples of marl horizons were tested for mineral composition and for a set of index and geotechnical properties including specific gravity, grain size, Atterberg limits, Proctor compaction, and shear strength properties. The test results show a positive linear relationship, as expected, between the clay content and both the liquid and plastic limits. The test results also show an inverse linear relationship between the clay content and the maximum dry density in both standard and modified compaction. This is attributed to the adsorption of water by the clay minerals. The relationship is more prominent in the case of the modified compaction test. The results also indicate a similar relationship for the angle of internal friction. No clear correlation between cohesion and clay content was apparent.

  17. Performance testing and results of the first Etec CORE-2564

    NASA Astrophysics Data System (ADS)

    Franks, C. Edward; Shikata, Asao; Baker, Catherine A.

    1993-03-01

    In order to write 64 megabit DRAM reticles, to prepare to write 256 megabit DRAM reticles, and in general to meet current and next-generation mask and reticle quality requirements, Hoya Micro Mask (HMM) installed the first CORE-2564 Laser Reticle Writer from Etec Systems, Inc. in 1991. The system was delivered as a CORE-2500XP and was subsequently upgraded to a 2564. The CORE (Custom Optical Reticle Engraver) system produces photomasks with an exposure strategy similar to that employed by an electron beam system, but it uses a laser beam to deliver the photoresist exposure energy. Since then, the 2564 has been tested by Etec's standard Acceptance Test Procedure and by several supplementary HMM techniques to ensure performance to all the Etec advertised specifications and certain additional HMM requirements that were more demanding and/or more thorough than the advertised specifications. The primary purpose of the HMM tests was to more closely duplicate mask usage. The performance aspects covered by the tests include registration accuracy and repeatability; linewidth accuracy, uniformity, and linearity; stripe butting; stripe and scan linearity; edge quality; system cleanliness; minimum geometry resolution; minimum address size; and plate loading accuracy and repeatability.

  18. Early pregnancy fasting plasma glucose and lipid concentrations in pregnancy and association to offspring size: a retrospective cohort study.

    PubMed

    Liu, Bin; Geng, Huizhen; Yang, Juan; Zhang, Ying; Deng, Langhui; Chen, Weiqing; Wang, Zilian

    2016-03-17

    Hyperlipidemia and high fasting plasma glucose levels at the first prenatal visit (First Visit FPG) are both related to gestational diabetes mellitus, maternal obesity/overweight, and fetal overgrowth. The purpose of the present study is to investigate the correlation between First Visit FPG and lipid concentrations, and their potential association with offspring size at delivery. Pregnant women who received regular prenatal care and delivered in our center in 2013 were recruited for the study. Fasting plasma glucose levels were tested at the first prenatal visit (First Visit FPG) and prior to delivery (Before Delivery FPG). HbA1c and lipid profiles were examined at the time of the OGTT. Maternal and neonatal clinical data were collected for analysis. Data were analyzed by independent sample t test, Pearson correlation, and chi-square test, followed by partial correlation and multiple linear regression analyses to confirm associations. The statistical significance level was α = 0.05. Analyses were based on 1546 mother-baby pairs. First Visit FPG was not correlated with any lipid parameters after adjusting for maternal pregravid BMI, maternal age, and gestational age at First Visit FPG. HbA1c was positively correlated with triglyceride and apolipoprotein B in the whole cohort and in the NGT group after adjusting for maternal age and maternal BMI at the OGTT. Multiple linear regression analyses showed that neonatal birth weight, head circumference, and shoulder circumference were all associated with First Visit FPG and triglyceride levels. Fasting plasma glucose at the first prenatal visit is not associated with lipid concentrations in mid-pregnancy, but may influence fetal growth together with triglyceride concentration.

  19. A Regression Framework for Effect Size Assessments in Longitudinal Modeling of Group Differences

    PubMed Central

    Feingold, Alan

    2013-01-01

    The use of growth modeling analysis (GMA)--particularly multilevel analysis and latent growth modeling--to test the significance of intervention effects has increased exponentially in prevention science, clinical psychology, and psychiatry over the past 15 years. Model-based effect sizes for differences in means between two independent groups in GMA can be expressed in the same metric (Cohen’s d) commonly used in classical analysis and meta-analysis. This article first reviews conceptual issues regarding calculation of d for findings from GMA and then introduces an integrative framework for effect size assessments that subsumes GMA. The new approach uses the structure of the linear regression model, from which effect sizes for findings from diverse cross-sectional and longitudinal analyses can be calculated with familiar statistics, such as the regression coefficient, the standard deviation of the dependent measure, and study duration. PMID:23956615
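
    The framework described above expresses a GMA effect size with familiar regression quantities. A minimal sketch of that calculation, assuming the standard form d = (group-by-time coefficient × study duration) / SD of the dependent measure, with hypothetical numbers:

```python
def gma_effect_size(group_slope_coef: float, duration: float, sd_raw: float) -> float:
    """Model-based Cohen's d for a two-group growth model: the group-by-time
    regression coefficient times study duration, scaled by the raw standard
    deviation of the dependent measure."""
    return group_slope_coef * duration / sd_raw

# Hypothetical trial: the treatment group improves 0.25 units/month faster
# than control over an 8-month study; the outcome's SD is 4.0 units.
d = gma_effect_size(0.25, 8.0, 4.0)   # -> 0.5, conventionally a medium effect
```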

  20. SEMICONDUCTOR TECHNOLOGY: An efficient dose-compensation method for proximity effect correction

    NASA Astrophysics Data System (ADS)

    Ying, Wang; Weihua, Han; Xiang, Yang; Renping, Zhang; Yang, Zhang; Fuhua, Yang

    2010-08-01

    A novel, simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. This method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle diameters is linear in the range under consideration; the other is that the compensated dose factor is only affected by the nearest neighbors, for simplicity. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate this method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in photonic crystal structures was clearly improved.
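
    The two assumptions in the abstract (dose factor linear in pattern diameter; compensation affected only by nearest neighbors) can be sketched as follows. All calibration numbers, the neighbor-contribution constant, and the function names are hypothetical illustrations, not the authors' values:

```python
def dose_for_diameter(target_d: float,
                      d1: float, f1: float,
                      d2: float, f2: float) -> float:
    """Interpolate the dose factor for a target diameter, using the abstract's
    assumption that diameter is linear in dose factor over the working range."""
    slope = (f2 - f1) / (d2 - d1)
    return f1 + slope * (target_d - d1)

# Hypothetical calibration: 100 nm holes print at factor 1.0, 120 nm at 1.3.
base = dose_for_diameter(110.0, 100.0, 1.0, 120.0, 1.3)   # 1.15

def compensated(base_factor: float, n_neighbors: int, contrib: float = 0.03) -> float:
    """Reduce the applied factor for background dose scattered in from the
    nearest neighbors (the abstract's simplifying second assumption)."""
    return base_factor / (1.0 + n_neighbors * contrib)

interior_hole = compensated(base, 6)   # interior of a hexagonal lattice
edge_hole = compensated(base, 3)       # edge holes see less proximity exposure
```

Interior holes, having more neighbors, receive a lower applied dose than edge holes, which is how the correction evens out the hole sizes.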

  1. Quality control methods for linear accelerator radiation and mechanical axes alignment.

    PubMed

    Létourneau, Daniel; Keller, Harald; Becker, Nathan; Amin, Md Nurul; Norrlinger, Bernhard; Jaffray, David A

    2018-06-01

    The delivery accuracy of highly conformal dose distributions generated using intensity modulation and collimator, gantry, and couch degrees of freedom is directly affected by the quality of the alignment between the radiation beam and the mechanical axes of a linear accelerator. For this purpose, quality control (QC) guidelines recommend a tolerance of ±1 mm for the coincidence of the radiation and mechanical isocenters. Traditional QC methods for assessment of radiation and mechanical axes alignment (based on pointer alignment) are time consuming and complex tasks that provide limited accuracy. In this work, an automated test suite based on an analytical model of the linear accelerator motions was developed to streamline the QC of radiation and mechanical axes alignment. The proposed method used the automated analysis of megavoltage images of two simple task-specific phantoms acquired at different linear accelerator settings to determine the coincidence of the radiation and mechanical isocenters. The sensitivity and accuracy of the test suite were validated by introducing actual misalignments on a linear accelerator between the radiation axis and the mechanical axes using both beam steering and mechanical adjustments of the gantry and couch. The validation demonstrated that the new QC method can detect sub-millimeter misalignment between the radiation axis and the three mechanical axes of rotation. A displacement of the radiation source of 0.2 mm using beam steering parameters was easily detectable with the proposed collimator rotation axis test. Mechanical misalignments of the gantry and couch rotation axes of the same magnitude (0.2 mm) were also detectable using the new gantry and couch rotation axis tests. For the couch rotation axis, the phantom and test design allow detection of both translational and tilt misalignments with the radiation beam axis. 
For the collimator rotation axis, the test can isolate the misalignment between the beam radiation axis and the mechanical collimator rotation axis from the impact of field size asymmetry. The test suite can be performed in a reasonable time (30-35 min) due to simple phantom setup, prescription-based beam delivery, and automated image analysis. As well, it provides a clear description of the relationship between axes. After testing the sensitivity of the test suite to beam steering and mechanical errors, the results of the test suite were used to reduce the misalignment errors of the linac to less than 0.7-mm radius for all axes. The proposed test suite offers sub-millimeter assessment of the coincidence of the radiation and mechanical isocenters and the test automation reduces complexity with improved efficiency. The test suite results can be used to optimize the linear accelerator's radiation to mechanical isocenter alignment by beam steering and mechanical adjustment of gantry and couch. © 2018 American Association of Physicists in Medicine.

  2. Training artificial neural networks directly on the concordance index for censored data using genetic algorithms.

    PubMed

    Kalderstam, Jonas; Edén, Patrik; Bendahl, Pär-Ola; Strand, Carina; Fernö, Mårten; Ohlsson, Mattias

    2013-06-01

    The concordance index (c-index) is the standard way of evaluating the performance of prognostic models in the presence of censored data. Constructing prognostic models using artificial neural networks (ANNs) is commonly done by training on error functions which are modified versions of the c-index. Our objective was to demonstrate the capability of training directly on the c-index and to evaluate our approach compared to the Cox proportional hazards model. We constructed a prognostic model using an ensemble of ANNs which were trained using a genetic algorithm. The individual networks were trained on a non-linear artificial data set divided into a training and test set both of size 2000, where 50% of the data was censored. The ANNs were also trained on a data set consisting of 4042 patients treated for breast cancer spread over five different medical studies, 2/3 used for training and 1/3 used as a test set. A Cox model was also constructed on the same data in both cases. The two models' c-indices on the test sets were then compared. The ranking performance of the models is additionally presented visually using modified scatter plots. Cross validation on the cancer training set did not indicate any non-linear effects between the covariates. An ensemble of 30 ANNs with one hidden neuron was therefore used. The ANN model had almost the same c-index score as the Cox model (c-index=0.70 and 0.71, respectively) on the cancer test set. Both models identified similarly sized low risk groups with at most 10% false positives, 49 for the ANN model and 60 for the Cox model, but repeated bootstrap runs indicate that the difference was not significant. A significant difference could however be seen when applied on the non-linear synthetic data set. In that case the ANN ensemble managed to achieve a c-index score of 0.90 whereas the Cox model failed to distinguish itself from the random case (c-index=0.49). 
We have found empirical evidence that ensembles of ANN models can be optimized directly on the c-index. Comparison with a Cox model indicates that near identical performance is achieved on a real cancer data set while on a non-linear data set the ANN model is clearly superior. Copyright © 2013 Elsevier B.V. All rights reserved.
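
    The c-index that both models above are scored on can be computed directly for censored data. A minimal O(n²) sketch (not the authors' code):

```python
def concordance_index(times, events, scores):
    """Fraction of comparable pairs ranked correctly by the risk scores.
    A pair (i, j) is comparable when subject i has the shorter observed time
    and an event (events[i] == 1); higher score means higher predicted risk.
    Tied scores count as half-concordant."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1.0
                elif scores[i] == scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy data: three subjects, the last one censored (event flag 0).
times = [2.0, 4.0, 5.0]
events = [1, 1, 0]
risks = [0.9, 0.4, 0.1]        # perfectly ordered risks
c = concordance_index(times, events, risks)   # -> 1.0
```

A value of 0.5 corresponds to random ranking, which is why the Cox model's c-index of 0.49 on the synthetic data amounts to failure.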

  3. Discovery of the linear region of Near Infrared Diffuse Reflectance spectra using the Kubelka-Munk theory

    NASA Astrophysics Data System (ADS)

    Dai, Shengyun; Pan, Xiaoning; Ma, Lijuan; Huang, Xingguo; Du, Chenzhao; Qiao, Yanjiang; Wu, Zhisheng

    2018-05-01

    Particle size is of great importance for quantitative models of NIR diffuse reflectance. In this paper, the effect of sample particle size on the measurement of harpagoside in Radix Scrophulariae powder by near infrared (NIR) diffuse reflectance spectroscopy was explored. High-performance liquid chromatography (HPLC) was employed as a reference method to construct the quantitative particle size model. Several spectral preprocessing methods were compared, and particle size models obtained with different preprocessing methods were used to establish partial least-squares (PLS) models of harpagoside. Data showed that the 125-150 μm particle size distribution of Radix Scrophulariae exhibited the best prediction ability, with R²pre = 0.9513, RMSEP = 0.1029 mg·g⁻¹, and RPD = 4.78. For the hybrid granularity calibration model, the 90-180 μm particle size distribution exhibited the best prediction ability, with R²pre = 0.8919, RMSEP = 0.1632 mg·g⁻¹, and RPD = 3.09. Furthermore, the Kubelka-Munk theory was used to relate the absorption coefficient k (concentration-dependent) and the scatter coefficient s (particle size-dependent). The scatter coefficient s was calculated based on the Kubelka-Munk theory to study how s changes after mathematical preprocessing. A linear relationship was observed between k/s and absorption A within a certain range, and the value of k/s was greater than 4. According to this relationship, the model was more accurately constructed with the 90-180 μm particle size distribution when s was kept constant or within a small linear region. This region provides a good reference for linear modeling in diffuse reflectance spectroscopy. To establish a diffuse reflectance NIR model, an accurate assessment of this linear region should be obtained in advance.
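
    The k/s ratio discussed above comes from the standard Kubelka-Munk remission function F(R) = (1 - R)² / (2R); a minimal sketch:

```python
def kubelka_munk(reflectance: float) -> float:
    """Kubelka-Munk remission function F(R) = (1 - R)^2 / (2R), which equals
    k/s (absorption over scatter) for an optically thick diffuse reflector."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

# The abstract's criterion k/s > 4 corresponds to a diffuse reflectance
# below roughly 0.10 (solving F(R) = 4 gives R ≈ 0.101).
assert kubelka_munk(0.10) > 4.0
```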

  4. The Quantitative-MFG Test: A Linear Mixed Effect Model to Detect Maternal-Offspring Gene Interactions.

    PubMed

    Clark, Michelle M; Blangero, John; Dyer, Thomas D; Sobel, Eric M; Sinsheimer, Janet S

    2016-01-01

    Maternal-offspring gene interactions, aka maternal-fetal genotype (MFG) incompatibilities, are neglected in complex diseases and quantitative trait studies. They are implicated in birth to adult onset diseases but there are limited ways to investigate their influence on quantitative traits. We present the quantitative-MFG (QMFG) test, a linear mixed model where maternal and offspring genotypes are fixed effects and residual correlations between family members are random effects. The QMFG handles families of any size, common or general scenarios of MFG incompatibility, and additional covariates. We develop likelihood ratio tests (LRTs) and rapid score tests and show they provide correct inference. In addition, the LRT's alternative model provides unbiased parameter estimates. We show that testing the association of SNPs by fitting a standard model, which only considers the offspring genotypes, has very low power or can lead to incorrect conclusions. We also show that offspring genetic effects are missed if the MFG modeling assumptions are too restrictive. With genome-wide association study data from the San Antonio Family Heart Study, we demonstrate that the QMFG score test is an effective and rapid screening tool. The QMFG test therefore has important potential to identify pathways of complex diseases for which the genetic etiology remains to be discovered. © 2015 John Wiley & Sons Ltd/University College London.

  5. Half-size me? How calorie and price information influence ordering on restaurant menus with both half and full entrée portion sizes.

    PubMed

    Haws, Kelly L; Liu, Peggy J

    2016-02-01

    Many restaurants are increasingly required to display calorie information on their menus. We present a study examining how consumers' food choices are affected by the presence of calorie information on restaurant menus. Unlike prior research on this topic, we focus on the effect of calorie information on food choices made from a menu that contains both full-size and half-size portions of entrées. This focus is important because many restaurants increasingly provide more than one portion size option per entrée. Additionally, we examine whether the impact of calorie information differs depending on whether full portions are cheaper per unit than half portions (non-linear pricing) or have a similar per-unit price (linear pricing). We find that when linear pricing is used, calorie information leads people to order fewer calories. This decrease occurs as people switch from unhealthy full-size portions to healthy full-size portions, not to unhealthy half-size portions. In contrast, when non-linear pricing is used, calorie information has no impact on calories selected. Considering the impact of calorie information on consumers' choices from menus with more than one entrée portion size option is increasingly important given restaurant and legislative trends, and the present research demonstrates that calorie information and pricing scheme may interact to affect choices from such menus. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Size-dependent selective mechanisms on males and females and the evolution of sexual size dimorphism in frogs.

    PubMed

    Nali, Renato C; Zamudio, Kelly R; Haddad, Célio F B; Prado, Cynthia P A

    2014-12-01

    Sexual size dimorphism (SSD) varies in animals from male biased to female biased. The evolution of SSD is potentially influenced by a number of factors, such as territoriality, fecundity, and temporal breeding patterns (explosive vs. prolonged). In general, frogs show female-biased SSD with broad variance among species. Using comparative methods, we examine how different selective forces affect male and female sizes, and we test hypotheses about size-dependent mechanisms shaping SSD in frogs. Male size was weakly associated with SSD in all size classes, and we found no significant association among SSD, male size, temporal breeding pattern, and male territoriality. In contrast, female size best explained SSD variation across all size classes but especially for small-bodied species. We found a stronger evolutionary association between female body size and fecundity, and this fecundity advantage was highest in explosively breeding species. Our data indicate that the fecundity advantage associated with female body size may not be linear, such that intermediate and large females benefit less with body size increases. Therefore, size-dependent selection in females associated with fecundity and breeding patterns is an important mechanism driving SSD evolution in frogs. Our study underscores the fact that lineage-specific ecology and behavior should be incorporated in comparative analyses of animal SSD.

  7. Linear Estimation of Particle Bulk Parameters from Multi-Wavelength Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Veselovskii, Igor; Dubovik, Oleg; Kolgotin, A.; Korenskiy, M.; Whiteman, D. N.; Allakhverdiev, K.; Huseyinoglu, F.

    2012-01-01

    An algorithm for linear estimation of aerosol bulk properties, such as particle volume, effective radius and complex refractive index, from multiwavelength lidar measurements is presented. The approach exploits the fact that the total aerosol concentration can be well approximated as a linear combination of aerosol characteristics measured by multiwavelength lidar. Therefore, the aerosol concentration can be estimated from lidar measurements without deriving the size distribution, which entails more sophisticated procedures. The definition of the coefficients required for the linear estimates is based on an expansion of the particle size distribution in terms of the measurement kernels. Once the coefficients are established, the approach permits fast retrieval of aerosol bulk properties compared with the full regularization technique. In addition, the straightforward estimation of bulk properties stabilizes the inversion, making it more resistant to noise in the optical data. Numerical tests demonstrate that, for data sets containing three aerosol backscattering and two extinction coefficients (the so-called 3 + 2 configuration), the uncertainties in the retrieval of particle volume and surface area are below 45% when random uncertainties in the input data are below 20%. Moreover, using linear estimates allows reliable retrievals even when the number of input data is reduced. To evaluate the approach, the results obtained using this technique are compared with those based on the previously developed full inversion scheme that relies on the regularization procedure. Both techniques were applied to data measured by multiwavelength lidar at NASA/GSFC, and the results obtained with both methods from the same observations are in good agreement. At the same time, the high speed of retrieval using linear estimates makes the method preferable for generating aerosol information from extended lidar observations.
    To demonstrate the efficiency of the method, an extended time series of observations acquired in Turkey in May 2010 was processed using the linear estimates technique, permitting, for what we believe to be the first time, the retrieval of temporal-height distributions of particle parameters.
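    The core idea — estimating a bulk property directly as a linear combination of the optical data, with no inversion of the size distribution — can be sketched on a toy forward model. All kernels and functionals below are invented for illustration; the real method derives its coefficients from an expansion in the measurement kernels rather than from a training set:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: a size distribution described by 5 weights w yields
# 5 optical signals g = K @ w (standing in for 3 backscatter + 2
# extinction coefficients) and a bulk volume v = q @ w.
K = rng.uniform(0.5, 2.0, (5, 5))   # assumed measurement kernels
q = rng.uniform(0.5, 2.0, 5)        # assumed volume functional

# Simulated distributions and their corresponding measurements
W = rng.uniform(0.0, 1.0, (200, 5))
G = W @ K.T                          # optical data for each distribution
v = W @ q                            # true bulk volumes

# Linear estimation: find coefficients c with v ~ G @ c, so volume is
# read directly off the optical data
c, *_ = np.linalg.lstsq(G, v, rcond=None)

# Fast retrieval for a new measurement
w_new = rng.uniform(0.0, 1.0, 5)
v_est = (K @ w_new) @ c
print(v_est, q @ w_new)  # linear estimate matches the true volume
```

    Because everything in this toy is linear, the estimate is exact; the paper's point is that even for real, nonlinear kernels the linear combination is a good and very fast approximation.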

  8. [A new method to test vertical ocular deviations using perilimbal light reflexes].

    PubMed

    Breyer, Armin; Rütsche, Adrian; Gampe, Elisabeth; Mojon, Daniel S

    2003-03-01

    To develop a new diagnostic technique to determine vertical ocular deviations when the center of the pupil is covered by swollen eyelids in up- and downgaze. In upgaze (downgaze), the reflex of a diagnostic lamp held at about 50 cm from the patient is observed on the lower (upper) limbus. In the case of an asymmetric reflex, prisms are used to obtain symmetrical reflexes; the prism strength indicates the size of the vertical misalignment. In five healthy volunteers, the angles of vertical changes of gaze position were plotted against the prism strength needed to recenter the perilimbal reflex. There was a linear correlation between the amount of upgaze change in degrees and the strength of the compensating prisms in degrees; this linear correlation was also found in downgaze. For both, the correlation coefficient was r = 0.98 +/- 0.01. In upgaze the slope of the average regression line was 0.55 +/- 2.3 degrees; in downgaze, -4.1 +/- 0.8 degrees. A prism of 1 degree corresponds in upgaze to a vertical deviation of about 1.3 +/- 0.14 degrees and in downgaze to a deviation of about 1.1 +/- 0.07 degrees. These results demonstrate that the perilimbal light reflex test is suitable for measuring simulated vertical ocular deviations. Therefore, the test may also be used in patients with vertical deviations who cannot be measured with classical methods. The method is more exact for measurements in upgaze.

  9. Image quality assessment using deep convolutional networks

    NASA Astrophysics Data System (ADS)

    Li, Yezhou; Ye, Xiang; Li, Yong

    2017-12-01

    This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related to those used in human subjective assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of arbitrary size as input, spatial pyramid pooling (SPP) is introduced between the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method takes an image as input, carries out an end-to-end learning process, and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the quality of images taken by different sensors at varying sizes.
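    The spatial pyramid pooling step described above can be sketched in a few lines. This is a simplified single-channel version with max pooling; the pyramid levels below are an illustrative assumption, not taken from the paper:

```python
import numpy as np

def spp(feature_map, levels=(1, 2, 4)):
    """Spatial pyramid pooling: max-pool a (H, W) feature map over an
    l x l grid for each pyramid level and concatenate the bin maxima,
    producing a fixed-length vector regardless of the input size."""
    H, W = feature_map.shape
    out = []
    for l in levels:
        # bin edges that cover the map even when H, W are not divisible by l
        hs = np.linspace(0, H, l + 1).astype(int)
        ws = np.linspace(0, W, l + 1).astype(int)
        for i in range(l):
            for j in range(l):
                out.append(feature_map[hs[i]:hs[i+1], ws[j]:ws[j+1]].max())
    return np.array(out)

# Different input sizes map to the same output length: 1 + 4 + 16 = 21 bins
a = spp(np.random.rand(32, 48))
b = spp(np.random.rand(17, 23))
print(a.shape, b.shape)
```

    This fixed-length output is what lets the fully-connected layer accept feature maps from images of any size.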

  10. Design and characterization of a microelectromechanical system electro-thermal linear motor with interlock mechanism for micro manipulators.

    PubMed

    Hu, Tengjiang; Zhao, Yulong; Li, Xiuyuan; Zhao, You; Bai, Yingwei

    2016-03-01

    The design, fabrication, and testing of a novel electro-thermal linear motor for micro manipulators are presented in this paper. The V-shape electro-thermal actuator arrays, micro lever, micro spring, and slider are introduced. In moving operation, the linear motor can travel nearly 1 mm in 100 μm steps while keeping the applied voltage as low as 17 V. In holding operation, the motor can stay in a particular position without consuming energy, and no creep deformation is found. An actuation force of 12.7 mN indicates the high force-generation capability of the device. Lifetime experiments show that the device can endure over two million cycles of operation. A silicon-on-insulator wafer is used to fabricate the high-aspect-ratio structure, and the chip size is 8.5 mm × 8.5 mm × 0.5 mm.

  11. Statistical characterization of a large geochemical database and effect of sample size

    USGS Publications Warehouse

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompassed 48,544 stream sediment and soil samples from the conterminous United States, analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge of the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States) and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements passed the test for either a normal or a lognormal distribution on the declustered data set, in part because of the presence of mixtures of subpopulations and of outliers.
    Random samples of the data set with successively smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until the sample size decreased to a few hundred data points. Large sample sizes enhance the power of statistical tests and lead to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histograms, stem-and-leaf displays, and probability plots are recommended for rough judgement of the probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.
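    The sample-size effect on normality testing can be reproduced with a small experiment. The sketch below uses the Jarque-Bera statistic, chosen here for its simple closed form (not necessarily the test used by the authors), on a synthetic mixture standing in for mixed subpopulations:

```python
import numpy as np

def jarque_bera(x):
    """Jarque-Bera normality statistic JB = n/6 * (S^2 + (K - 3)^2 / 4),
    where S is the sample skewness and K the sample kurtosis. Under
    normality JB ~ chi-square with 2 df (5% critical value ~ 5.99)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = x - x.mean()
    s2 = (z ** 2).mean()
    S = (z ** 3).mean() / s2 ** 1.5
    K = (z ** 4).mean() / s2 ** 2
    return n / 6.0 * (S ** 2 + (K - 3.0) ** 2 / 4.0)

rng = np.random.default_rng(42)
# Mildly non-normal data: a two-component mixture, standing in for the
# mixed geochemical subpopulations described in the abstract
data = np.concatenate([rng.normal(0, 1, 20000), rng.normal(2, 1, 4000)])

small = jarque_bera(rng.choice(data, 200, replace=False))
large = jarque_bera(data)
print(small, large)  # same distribution, but the full set fails decisively
```

    The statistic scales with n, so the same mild departure from normality that a few hundred samples may tolerate is rejected overwhelmingly at n in the tens of thousands.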

  12. Wear Behaviour of Al-6061/SiC Metal Matrix Composites

    NASA Astrophysics Data System (ADS)

    Mishra, Ashok Kumar; Srivastava, Rajesh Kumar

    2017-04-01

    Aluminium Al-6061 matrix composites, reinforced with SiC particles of mesh size 150 and 600, were fabricated by the stir casting method, and their wear resistance and coefficient of friction were investigated in the present study as a function of applied load and SiC weight fraction (5, 10, 15, 20, 25, 30, 35 and 40 %). The dry sliding wear properties of the composites were investigated using a pin-on-disk testing machine at a sliding velocity of 2 m/s and a sliding distance of 2000 m under loads of 10, 20 and 30 N. The results show that reinforcement of the metal matrix with SiC particulates up to a weight percentage of 35 % reduces the wear rate. The results also show that the wear of the test specimens increases with increasing load and sliding distance. The coefficient of friction decreases slightly with increasing weight percentage of reinforcement. The worn surfaces were examined by optical microscopy, which showed large grooved regions and cavities with ceramic particles on the worn surface of the composite alloy. This indicates an abrasive wear mechanism, essentially a result of hard ceramic particles exposed on the worn surfaces. Further, it was found from the experiments that the wear rate decreases linearly with increasing weight fraction of SiC, and the average coefficient of friction decreases linearly with increasing applied load, weight fraction of SiC and mesh size of SiC. The best result was obtained at 35 % weight fraction and 600 mesh size of SiC.

  13. Real-time dissolution measurement of sized and unsized calcium phosphate glass fibers.

    PubMed

    Rinehart, J D; Taylor, T D; Tian, Y; Latour, R A

    1999-01-01

    The objective of this study was to develop an efficient "real-time" measurement system able to directly measure, with microgram resolution, the dissolution rate of absorbable glass fibers, and to use the system to evaluate the effectiveness of a silane-based sizing as a means to delay the fiber dissolution process. The absorbable glass fiber used was calcium phosphate (CaP), with tetramethoxysilane selected as the sizing agent. E-glass fiber was used as a relatively nondegrading control. Both the unsized and sized CaP fibers degraded linearly at both test temperatures used, 37 degrees C and 60 degrees C. No significant decrease in weight-loss rate was recorded when the CaP fiber tows were pretreated, using conventional application methods, with the tetramethoxysilane sizing at either temperature. The unsized and sized CaP weight-loss rates were each significantly higher at 60 than at 37 degrees C (both p < 0.02), as expected from dissolution kinetics. In terms of the actual weight-loss rate measured using our system for phosphate glass fiber, the unsized CaP fiber we studied dissolved at rates of 10.90 × 10^-9 and 41.20 × 10^-9 g/(min·cm^2) at 37 degrees C and 60 degrees C, respectively. As performance validation of the developed system, the slope of the weight loss vs. time plot for the tested E-glass fiber was not significantly different from zero at either test temperature. Copyright 1999 John Wiley & Sons, Inc.

  14. Tests of ecogeographical relationships in a non-native species: what rules avian morphology?

    PubMed

    Cardilini, Adam P A; Buchanan, Katherine L; Sherman, Craig D H; Cassey, Phillip; Symonds, Matthew R E

    2016-07-01

    The capacity of non-native species to undergo rapid adaptive change provides opportunities to research contemporary evolution through natural experiments. This is particularly true when considering ecogeographical rules, to which non-native species have been shown to conform within relatively short periods of time. Ecogeographical rules explain predictable spatial patterns of morphology, physiology, life history and behaviour. We tested whether Australian populations of the non-native starling, Sturnus vulgaris, introduced to the country approximately 150 years ago, exhibited the predicted environmental clines in body size, appendage size and heart size (Bergmann's, Allen's and Hesse's rules, respectively). Adult starlings (n = 411) were collected from 28 localities across eastern Australia from 2011 to 2012. Linear models were constructed to examine the relationships between morphology and local environment. Patterns of variation in body mass and bill surface area were consistent with Bergmann's and Allen's rules, respectively (smaller body size and larger bill size in warmer climates), with maximum summer temperature being a strongly weighted predictor of both variables. In the only intraspecific test of Hesse's rule in birds to date, we found no evidence to support the idea that relative heart size is larger in individuals that live in colder climates. Our study does provide evidence that maximum temperature is a strong driver of morphological adaptation for starlings in Australia. The changes in morphology presented here demonstrate the potential for avian species to make rapid adaptive changes in relation to a changing climate to ameliorate the effects of heat stress.

  15. Body size ideals and dissatisfaction in Ghanaian adolescents: role of media, lifestyle and well-being.

    PubMed

    Michels, N; Amenyah, S D

    2017-05-01

    To inspire effective health promotion campaigns, we tested the relationship of ideal body size and body size dissatisfaction with (1) the potential resulting health-influencing factors diet, physical activity and well-being; and (2) with media as a potential influencer of body ideals. This is a cross-sectional study in 370 Ghanaian adolescents (aged 11-18 years). Questionnaires included disordered eating (EAT26), diet quality (FFQ), physical activity (IPAQ), well-being (KINDL) and media influence on appearance (SATAQ: pressure, internalisation and information). Ideal body size and body size dissatisfaction were assessed using the Stunkard figure rating scale. Body mass index (BMI), skinfolds and waist were measured. Linear regressions were adjusted for gender, age and parental education. Also, mediation was tested: 'can perceived media influence play a role in the effects of actual body size on body size dissatisfaction?'. Body size dissatisfaction was associated with lower well-being and more media influence (pressure and internalisation) but not with physical activity, diet quality or disordered eating. An underweight body size ideal might worsen disordered eating but was not significantly related to the other predictors of interest. Only a partial mediation effect by media pressure was found: especially overweight adolescents felt media pressure, and this media pressure was associated with more body size dissatisfaction. To prevent disordered eating and low well-being, health messages should include strategies that reduce body size dissatisfaction and increase body esteem by not focussing on the thin body ideal. Changing body size ideals in the media might be an appropriate way since media pressure was a mediator in the BMI-dissatisfaction relation. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  16. Flexible cue combination in the guidance of attention in visual search

    PubMed Central

    Brand, John; Oriet, Chris; Johnson, Aaron P.; Wolfe, Jeremy M.

    2014-01-01

    Hodsoll and Humphreys (2001) assessed the relative contributions of stimulus-driven and user-driven knowledge in linearly and nonlinearly separable search. However, the target feature used to determine linear separability in their task (i.e., target size) was required to locate the target. In the present work, we investigated the contributions of stimulus-driven and user-driven knowledge when a linearly or nonlinearly separable feature is available but not required for target identification. We asked observers to complete a series of standard color × orientation conjunction searches in which target size was either linearly or nonlinearly separable from the size of the distractors. When guidance by color × orientation and by size information are both available, observers rely on whichever information results in the best search efficiency. This is the case irrespective of whether we provide target foreknowledge by blocking stimulus conditions, suggesting that feature information is used in both a stimulus-driven and a user-driven fashion. PMID:25463553

  17. Analysis of airfoil leading edge separation bubbles

    NASA Technical Reports Server (NTRS)

    Carter, J. E.; Vatsa, V. N.

    1982-01-01

    A local inviscid-viscous interaction technique was developed for the analysis of low speed airfoil leading edge transitional separation bubbles. In this analysis an inverse boundary layer finite difference analysis is solved iteratively with a Cauchy integral representation of the inviscid flow which is assumed to be a linear perturbation to a known global viscous airfoil analysis. Favorable comparisons with data indicate the overall validity of the present localized interaction approach. In addition numerical tests were performed to test the sensitivity of the computed results to the mesh size, limits on the Cauchy integral, and the location of the transition region.

  18. Tests of size and growth effects on Arctic charr (Salvelinus alpinus) otolith δ18O and δ13C values.

    PubMed

    Burbank, J; Kelly, B; Nilsson, J; Power, M

    2018-06-06

    Otolith δ18O and δ13C values have been used extensively to reconstruct thermal and diet histories. Researchers have suggested that individual growth rate and size may affect otolith isotope ratios and thereby confound otolith-based thermal and diet reconstructions. As few explicit tests of these effects for fish in freshwater environments exist, here we determine experimentally the potential for related growth rate and size effects on otolith δ18O and δ13C values. Fifty Arctic charr were raised in identical conditions for two years, after which their otoliths were removed and analyzed for their δ18O and δ13C values. The potential effects of final length and the thermal growth coefficient (TGC) on otolith isotope ratios were tested using correlation and regression analysis to determine whether significant effects were present and to quantify them when present. The analyses indicated that TGC and size had significant and similar positive non-linear relationships with δ13C values, explaining 35% and 42% of the variability, respectively. Conversely, neither TGC nor size was significantly correlated with otolith δ18O values, and there was no significant correlation between δ18O and δ13C values. The investigation indicated the presence of linked growth rate and size effects on otolith δ13C values, the nature of which requires further study. Otolith δ18O values were unaffected by individual growth rate and size, confirming the applicability of these values to thermal reconstructions of fish habitat. This article is protected by copyright. All rights reserved.

  19. The Trail Less Traveled: Individual Decision-Making and Its Effect on Group Behavior

    PubMed Central

    Lanan, Michele C.; Dornhaus, Anna; Jones, Emily I.; Waser, Andrew; Bronstein, Judith L.

    2012-01-01

    Social insect colonies are complex systems in which the interactions of many individuals lead to colony-level collective behaviors such as foraging. However, the emergent properties of collective behaviors may not necessarily be adaptive. Here, we examine symmetry breaking, an emergent pattern exhibited by some social insects that can lead colonies to focus their foraging effort on only one of several available food patches. Symmetry breaking has been reported to occur in several ant species. However, it is not clear whether it arises as an unavoidable epiphenomenon of pheromone recruitment, or whether it is an adaptive behavior that can be controlled through modification of the individual behavior of workers. In this paper, we used a simulation model to test how symmetry breaking is affected by the degree of non-linearity of recruitment, the specific mechanism used by individuals to choose between patches, patch size, and forager number. The model shows that foraging intensity on different trails becomes increasingly asymmetric as the recruitment response of individuals varies from linear to highly non-linear, supporting the predictions of previous work. Surprisingly, we also found that the direction of the relationship between forager number (i.e., colony size) and asymmetry varied depending on the specific details of the decision rule used by individuals. Limiting the size of the resource produced a damping effect on asymmetry, but only at high forager numbers. Variation in the rule used by individual ants to choose trails is a likely mechanism that could cause variation among the foraging behaviors of species, and is a behavior upon which selection could act. PMID:23112880
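    The symmetry-breaking mechanism can be sketched with a minimal two-trail recruitment model in the spirit of classic pheromone-choice functions (the exponent form, the constant k, and all parameter values below are illustrative assumptions, not the paper's exact model):

```python
import random

def simulate(n_ants, alpha, k=10.0, seed=0):
    """Sequentially let ants choose between two trails A and B.
    Choice probability follows the recruitment response
    P(A) = (k + A)^alpha / ((k + A)^alpha + (k + B)^alpha):
    alpha = 1 gives linear recruitment, alpha > 1 non-linear."""
    rng = random.Random(seed)
    a = b = 0
    for _ in range(n_ants):
        pa = (k + a) ** alpha / ((k + a) ** alpha + (k + b) ** alpha)
        if rng.random() < pa:
            a += 1
        else:
            b += 1
    return a, b

def asymmetry(a, b):
    """0 = trails used equally, 1 = all traffic on one trail."""
    return abs(a - b) / (a + b)

lin = [asymmetry(*simulate(1000, 1.0, seed=s)) for s in range(30)]
non = [asymmetry(*simulate(1000, 4.0, seed=s)) for s in range(30)]
print(sum(lin) / 30, sum(non) / 30)  # non-linear recruitment breaks symmetry
```

    With a linear response the two trails end up sharing traffic fairly evenly, while a steep non-linear response drives almost all foragers onto one trail, matching the prediction the simulation model supports.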

  20. Morphology filter bank for extracting nodular and linear patterns in medical images.

    PubMed

    Hashimoto, Ryutaro; Uchiyama, Yoshikazu; Uchimura, Keiichi; Koutaki, Gou; Inoue, Tomoki

    2017-04-01

    Using image processing to extract nodular or linear shadows is a key technique of computer-aided diagnosis schemes. This study proposes a new method for extracting nodular and linear patterns of various sizes in medical images. We have developed a morphology filter bank that creates multiresolution representations of an image. The analysis bank of this filter bank produces nodular and linear patterns at each resolution level; the synthesis bank can then be used to perfectly reconstruct the original image from these decomposed patterns. In a quantitative evaluation on a synthesized image, our proposed method outperforms a conventional Hessian-matrix-based method often used to enhance nodular and linear patterns. In addition, experiments show that our method can be applied as follows: (1) microcalcifications of various sizes can be extracted in mammograms, (2) blood vessels of various sizes can be extracted in retinal fundus images, and (3) thoracic CT images can be reconstructed with normal vessels removed. Our proposed method is useful for extracting nodular and linear shadows or removing normal structures in medical images.
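    A single-level sketch of the underlying idea — structuring elements of different shapes separating linear from nodular bright patterns — is shown below. This is a toy greyscale opening on a synthetic image, not the authors' multiresolution filter bank:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def grey_open(img, kh, kw):
    """Greyscale opening (erosion then dilation) with a flat kh x kw
    structuring element; bright features that cannot contain the
    element are removed."""
    ph, pw = kh // 2, kw // 2
    def erode(x):
        p = np.pad(x, ((ph, ph), (pw, pw)), mode="edge")
        return sliding_window_view(p, (kh, kw)).min(axis=(-2, -1))
    def dilate(x):
        p = np.pad(x, ((ph, ph), (pw, pw)), mode="edge")
        return sliding_window_view(p, (kh, kw)).max(axis=(-2, -1))
    return dilate(erode(img))

# Synthetic image: a 3x3 bright nodule and a 1-pixel-wide horizontal line
img = np.zeros((32, 32))
img[10:13, 10:13] = 1.0   # nodular pattern
img[20, 5:25] = 1.0       # linear pattern

linear_part = grey_open(img, 1, 9)          # a line element keeps the line
nodular_part = img - grey_open(img, 1, 9)   # ... and the residue is nodular
print(linear_part.max(), nodular_part[11, 11])
```

    Opening with a long line element preserves the linear shadow and erases the nodule; subtracting the opening from the image isolates the nodular component. The filter bank generalizes this decomposition across scales and makes it perfectly invertible.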

  1. Cytotoxicity of silica-glass fiber reinforced composites.

    PubMed

    Meriç, Gökçe; Dahl, Jon E; Ruyter, I Eystein

    2008-09-01

    Silica-glass fiber reinforced polymers can be used for many kinds of dental applications. The fiber reinforcement enhances the mechanical properties of the polymers, and they have good esthetic attributes. Good initial bonding of glass fibers to polymers is achieved via an interface made from silane coupling agents. The aim of this in vitro study was to determine the cytotoxicity of two polymers reinforced with two differently sized silica-glass fibers before and after thermal cycling. Cytotoxicity of the polymers without fibers was also evaluated. Two resin mixtures (A and B) were prepared from poly(vinyl chloride-co-vinyl acetate) powder and poly(methyl methacrylate) (PMMA) dissolved in methyl methacrylate, and mixed with different cross-linking agents. Resin A contained the cross-linking agents ethylene glycol dimethacrylate and 1,4-butanediol dimethacrylate; for resin B, diethylene glycol dimethacrylate was used. Woven silica-glass fibers were used for reinforcement. The fibers were sized with either a linear poly(butyl methacrylate) sizing or a cross-linked PMMA sizing. Cytotoxicity was evaluated by the filter diffusion test (ISO 7405:1997) on newly made and thermocycled test specimens. Extracts were prepared according to ISO 10993-12 from newly made and from thermocycled specimens and tested by the MTT assay. The results were statistically analyzed by one-way ANOVA and Tukey's test (p < 0.05). The filter diffusion test disclosed no change in staining intensity at the cell-test sample contact area, indicating non-cytotoxicity in all experimental groups. Cell viability assessed by the MTT assay was more than 90% in all experimental groups; all materials were thus non-cytotoxic. It can be concluded that correctly processed heat-polymerized silica-glass fiber reinforced polymers induced no cytotoxicity and that thermocycling did not alter this property.

  2. Study on effective MOSFET channel length extracted from gate capacitance

    NASA Astrophysics Data System (ADS)

    Tsuji, Katsuhiro; Terada, Kazuo; Fujisaka, Hisato

    2018-01-01

    The effective channel length (LGCM) of metal-oxide-semiconductor field-effect transistors (MOSFETs) is extracted from the gate capacitances of actual-size MOSFETs, which are measured by charge-injection-induced-error-free charge-based capacitance measurement (CIEF CBCM). To accurately evaluate the capacitances between the gate and the channel of the test MOSFETs, the parasitic capacitances are removed by using test MOSFETs having various channel sizes and a source/drain reference device. A strong linear relationship between the gate-channel capacitance and the design channel length is obtained, from which LGCM is extracted. It is found that LGCM is slightly less than the effective channel length (LCRM) extracted from the measured MOSFET drain current. The reason for this is discussed, and it is found that the capacitance between the gate electrode and the source and drain regions affects the extraction.
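    The extraction from the linear capacitance-versus-length relationship can be sketched as a straight-line fit whose x-intercept gives the channel-length reduction (all numbers below are invented for illustration; the real method also subtracts parasitics first):

```python
import numpy as np

# Toy model: gate-channel capacitance Cgc = slope * (L_design - dL), so
# Cgc vs. design length is a straight line whose x-intercept is the
# channel-length reduction dL.
L_design = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # um (assumed)
dL_true, slope = 0.08, 3.45                        # um, fF/um (assumed)
Cgc = slope * (L_design - dL_true)                 # ideal, noise-free data

a, b = np.polyfit(L_design, Cgc, 1)   # fit Cgc = a * L_design + b
dL = -b / a                           # x-intercept of the fitted line
L_eff = L_design - dL                 # extracted effective channel lengths
print(dL)
```

    With real, noisy capacitance data the same fit is used; the quality of the extraction then rests on how completely the parasitic capacitances were removed beforehand.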

  3. Verbal learning in depressive pseudodementia: accentuated impairment of free recall, moderate impairment of learning processes, and spared short-term and recognition memory.

    PubMed

    Paula, Jonas Jardim de; Miranda, Débora Marques; Nicolato, Rodrigo; Moraes, Edgar Nunes de; Bicalho, Maria Aparecida Camargos; Malloy-Diniz, Leandro Fernandes

    2013-09-01

    Depressive pseudodementia (DPD) is a clinical condition characterized by depressive symptoms accompanied by cognitive and functional impairment characteristic of dementia. Memory complaints are among the most frequently reported cognitive symptoms in DPD. The present study aims to assess the verbal learning profile of elderly patients with DPD. Ninety-six older adults (34 with DPD and 62 controls) were assessed with neuropsychological tests including the Rey auditory-verbal learning test (RAVLT). A multivariate general linear model was used to assess group differences, controlling for demographic factors. Moderate or large effects were found on all RAVLT components except short-term and recognition memory. DPD impairs verbal memory, with a large effect size on free recall and a moderate effect size on learning. Because they are spared, short-term storage and recognition memory are useful in clinical contexts when a differential diagnosis is required.

  4. Verification of intensity modulated profiles using a pixel segmented liquid-filled linear array.

    PubMed

    Pardo, J; Roselló, J V; Sánchez-Doblado, F; Gómez, F

    2006-06-07

    A liquid isooctane (C8H18) filled ionization chamber linear array developed for radiotherapy quality assurance, consisting of 128 pixels (each with a 1.7 mm pitch), has been used to acquire profiles of several intensity modulated fields. The results were compared with film measurements using the gamma test. The comparisons show very good agreement, even in high-gradient dose regions. The volume-averaging effect of the pixels is negligible and the spatial resolution is sufficient to verify these regions. However, some mismatches between the detectors were found in regions where low-energy scattered photons contribute significantly to the total dose. These differences are not very important (in fact, the measurements of both detectors agree under the gamma test with tolerances of 3% and 3 mm in most of those regions), and may be associated with the energy dependence of film. In addition, the linear array repeatability (0.27%, one standard deviation) is much better than that of film (approximately 3%). The good repeatability, small pixel size and high spatial resolution make the detector ideal for real-time verification of high-gradient beam profiles like those present in intensity modulated radiation therapy and radiosurgery.
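The gamma test used for the comparison combines a dose-difference criterion with a distance-to-agreement criterion. A minimal one-dimensional version (with invented profiles, not the paper's measurements) looks like this:

```python
# 1-D gamma test with 3% dose difference and 3 mm distance-to-agreement:
# for each reference point, take the minimum combined dose/distance
# metric over all evaluated points; gamma <= 1 counts as a pass.

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=3.0):
    """Return a gamma value per reference point (<= 1 means pass)."""
    d_max = max(ref_dose)
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        g2 = min(((ep - rp) / dta) ** 2 + ((ed - rd) / (dd * d_max)) ** 2
                 for ep, ed in zip(eval_pos, eval_dose))
        gammas.append(g2 ** 0.5)
    return gammas

positions = [i * 1.7 for i in range(6)]   # 1.7 mm detector pitch
film  = [10, 20, 60, 95, 100, 98]         # hypothetical doses (% of max)
array = [10.5, 21, 58, 96, 100, 97]
g = gamma_1d(positions, film, positions, array)
pass_rate = sum(v <= 1.0 for v in g) / len(g)
```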

  5. Incremental validity of the episode size criterion in binge-eating definitions: An examination in women with purging syndromes.

    PubMed

    Forney, K Jean; Bodell, Lindsay P; Haedt-Matt, Alissa A; Keel, Pamela K

    2016-07-01

    Of the two primary features of binge eating, loss of control (LOC) eating is well validated while the role of eating episode size is less clear. Given the ICD-11 proposal to eliminate episode size from the binge-eating definition, the present study examined the incremental validity of the size criterion, controlling for LOC. Interview and questionnaire data come from four studies of 243 women with bulimia nervosa (n = 141) or purging disorder (n = 102). Hierarchical linear regression tested whether the largest reported episode size, coded in kilocalories, explained additional variance in eating disorder features, psychopathology, personality traits, and impairment, holding constant LOC eating frequency, age, and body mass index (BMI). Analyses also tested whether episode size moderated the association between LOC eating and these variables. Holding LOC constant, episode size explained significant variance in disinhibition, trait anxiety, and eating disorder-related impairment. Episode size moderated the association of LOC eating with purging frequency and depressive symptoms, such that in the presence of larger eating episodes, LOC eating was more closely associated with these features. Neither episode size nor its interaction with LOC explained additional variance in BMI, hunger, restraint, shape concerns, state anxiety, negative urgency, or global functioning. Taken together, results support the incremental validity of the size criterion, in addition to and in combination with LOC eating, for defining binge-eating episodes in purging syndromes. Future research should examine the predictive validity of episode size in both purging and nonpurging eating disorders (e.g., binge eating disorder) to inform nosological schemes. © 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2016; 49:651-662).
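The hierarchical-regression logic above asks how much R-squared improves when episode size is added on top of LOC. For a single added predictor, that increment can be sketched with the standard two-predictor R-squared formula; all data below are invented, and the study's additional covariates (age, BMI) are omitted.

```python
# Incremental validity sketch: R-squared from LOC frequency alone versus
# LOC plus largest-episode size, via the two-predictor R-squared formula.

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

loc    = [2, 5, 1, 7, 3, 6, 4, 8]                      # LOC episodes/month
size   = [300, 900, 250, 1500, 700, 1200, 500, 2000]   # largest episode, kcal
impair = [1.0, 2.2, 0.8, 3.5, 1.9, 2.8, 1.5, 3.9]      # impairment score

r_y1, r_y2, r_12 = corr(loc, impair), corr(size, impair), corr(loc, size)
r2_step1 = r_y1 ** 2                                    # LOC alone
r2_step2 = ((r_y1 ** 2 + r_y2 ** 2 - 2 * r_y1 * r_y2 * r_12)
            / (1 - r_12 ** 2))                          # LOC + episode size
delta_r2 = r2_step2 - r2_step1                          # incremental validity
```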

  6. Canopy reflectance modelling of semiarid vegetation

    NASA Technical Reports Server (NTRS)

    Franklin, Janet

    1994-01-01

    Three different types of remote sensing algorithms for estimating vegetation amount and other land surface biophysical parameters were tested for semiarid environments. These included statistical linear models, the Li-Strahler geometric-optical canopy model, and linear spectral mixture analysis. The two study areas were the National Science Foundation's Jornada Long Term Ecological Research site near Las Cruces, NM, in the northern Chihuahuan desert, and the HAPEX-Sahel site near Niamey, Niger, in West Africa, comprising semiarid rangeland and subtropical cropland. The statistical approach (simple and multiple regression) resulted in high correlations between SPOT satellite spectral reflectance and shrub and grass cover, although these correlations varied with the spatial scale of aggregation of the measurements. The Li-Strahler model produced estimates of shrub size and density for both study sites with large standard errors. In the Jornada, the estimates were accurate enough to be useful for characterizing structural differences among three shrub strata. In Niger, the range of shrub cover and size in short-fallow shrublands is so low that the necessity of spatially distributed estimation of shrub size and density is questionable. Spectral mixture analysis of multiscale, multitemporal, multispectral radiometer data and imagery for Niger showed a positive relationship between fractions of spectral endmembers and surface parameters of interest including soil cover, vegetation cover, and leaf area index.
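Linear spectral mixture analysis models each pixel spectrum as a weighted sum of endmember spectra. In the two-endmember case with a sum-to-one constraint, the fraction has a closed-form least-squares solution; the reflectances below are invented for illustration.

```python
# Two-endmember linear unmixing: pixel = f*veg + (1-f)*soil, with the
# vegetation fraction f obtained by projecting (pixel - soil) onto
# (veg - soil).

def unmix(pixel, veg, soil):
    """Least-squares vegetation fraction under the sum-to-one constraint."""
    d = [v - s for v, s in zip(veg, soil)]
    num = sum((p - s) * dv for p, s, dv in zip(pixel, soil, d))
    den = sum(dv * dv for dv in d)
    return num / den

veg   = [0.05, 0.08, 0.04, 0.50]   # hypothetical band reflectances
soil  = [0.20, 0.25, 0.30, 0.35]
pixel = [0.5 * v + 0.5 * s for v, s in zip(veg, soil)]   # exact 50/50 mixture
f_veg = unmix(pixel, veg, soil)
```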

  7. Single Droplet Combustion of Decane in Microgravity: Experiments and Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Dietrich, D. L.; Struk, P. M.; Ikegam, M.; Xu, G.

    2004-01-01

    This paper presents experimental data on single droplet combustion of decane in microgravity and compares the results to a numerical model. The primary independent experimental variables are the ambient pressure, oxygen mole fraction, droplet size (over a relatively small range) and ignition energy. The droplet history (D² history) is non-linear, with the burning rate constant increasing throughout the test. The average burning rate constant, consistent with classical theory, increased with increasing ambient oxygen mole fraction and was nearly independent of pressure, initial droplet size and ignition energy. The flame typically increased in size initially, and then decreased in size in response to the shrinking droplet. The flame standoff increased linearly for the majority of the droplet lifetime. The flame surrounding the droplet extinguished at a finite droplet size at lower ambient pressures and an oxygen mole fraction of 0.15. The extinction droplet size increased with decreasing pressure. The model is transient and assumes spherical symmetry, constant thermo-physical properties (specific heat, thermal conductivity and species Lewis number) and single-step chemistry. The model includes gas-phase radiative loss and a spherically symmetric, transient liquid phase. The model accurately predicts the droplet and flame histories of the experiments. Good agreement requires that the ignition in the experiment be reasonably approximated in the model and that the model accurately predict the pre-ignition vaporization of the droplet. The model does not accurately predict the dependence of extinction droplet diameter on pressure, a result of the simplified chemistry in the model. The transient flame behavior suggests the potential importance of fuel vapor accumulation.
The model results, however, show that the fractional mass consumption rate of fuel in the flame relative to fuel vaporized is close to 1.0 for all but the lowest ambient oxygen mole fractions.
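The D² history discussed above is compared against the classical d-squared law, D(t)² = D0² − K·t, where the burning rate constant K is the negative slope of D² versus time. A minimal fit on a synthetic droplet history (K = 0.8 mm²/s and D0 = 1.0 mm assumed, not experimental values):

```python
# Fit the burning rate constant K of the d-squared law by linear
# regression of D^2 on time.

def burning_rate_constant(times, diameters):
    ys = [d * d for d in diameters]
    n = len(times)
    mt, my = sum(times) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(times, ys))
             / sum((t - mt) ** 2 for t in times))
    return -slope   # K, in mm^2/s

times = [0.0, 0.2, 0.4, 0.6, 0.8]                # s
diams = [(1.0 - 0.8 * t) ** 0.5 for t in times]  # exact d2-law history, mm
K = burning_rate_constant(times, diams)
```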

  8. Experimental validation of the Achromatic Telescopic Squeezing (ATS) scheme at the LHC

    NASA Astrophysics Data System (ADS)

    Fartoukh, S.; Bruce, R.; Carlier, F.; Coello De Portugal, J.; Garcia-Tabares, A.; Maclean, E.; Malina, L.; Mereghetti, A.; Mirarchi, D.; Persson, T.; Pojer, M.; Ponce, L.; Redaelli, S.; Salvachua, B.; Skowronski, P.; Solfaroli, M.; Tomas, R.; Valuch, D.; Wegscheider, A.; Wenninger, J.

    2017-07-01

    The Achromatic Telescopic Squeezing scheme offers new techniques to deliver unprecedentedly small beam spot sizes at the interaction points of the ATLAS and CMS experiments of the LHC, while perfectly controlling the chromatic properties of the corresponding optics (linear and non-linear chromaticities, off-momentum beta-beating, spurious dispersion induced by the crossing bumps). The first series of beam tests with ATS optics was carried out during LHC Run I (2011/2012) as an initial validation of the basics of the scheme at small intensity. In 2016, a new generation of higher-performance ATS optics was developed and tested more extensively in the machine, still with probe beams for optics measurement and correction at β* = 10 cm, but also with a few nominal bunches to establish first collisions at nominal β* (40 cm) and beyond (33 cm), and to analyse the robustness of these optics in terms of collimation and machine protection. The paper highlights the most relevant and conclusive results obtained during this second series of ATS tests.

  9. Maternal Weight Gain as a Predictor of Litter Size in Swiss Webster, C57BL/6J, and BALB/cJ mice.

    PubMed

    Finlay, James B; Liu, Xueli; Ermel, Richard W; Adamson, Trinka W

    2015-11-01

    An important task facing both researchers and animal core facilities is producing sufficient mice for a given project. The inherent biologic variability of mouse reproduction and litter size further challenges effective research planning. A lack of precision in project planning contributes to the high cost of animal research, overproduction (thus waste) of animals, and inappropriate allocation of facility resources. To examine the extent to which daily prepartum maternal weight gain predicts litter size in 2 commonly used mouse strains (BALB/cJ and C57BL/6J) and one mouse stock (Swiss Webster), we weighed ≥ 25 pregnant dams of each strain or stock daily from the morning on which a vaginal plug (day 0) was present. On the morning when dams delivered their pups, we recorded the weight of the dam, the weight of the litter itself, and the number of pups. Litter sizes ranged from 1 to 7 pups for BALB/cJ, 2 to 13 for Swiss Webster, and 5 to 11 for C57BL/6J mice. Linear regression models (based on weight change from day 0) demonstrated that maternal weight gain at day 9 (BALB/cJ), day 11 (Swiss Webster), or day 14 (C57BL/6J) was a significant predictor of litter size. When tested prospectively, the linear regression model for each strain or stock was found to be accurate. These data indicate that the number of pups that will be born can be estimated accurately by using maternal weight gain at strain- or stock-specific time points.
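The prediction described above is ordinary least-squares regression of litter size on maternal weight gain at the strain-specific day. A sketch with invented weight gains and litter sizes (not the study's data):

```python
# Regress litter size on maternal weight gain, then predict the litter
# size of a new dam, rounded to the nearest whole pup.

gain = [3.0, 5.5, 4.2, 7.1, 6.0, 8.3, 2.5, 9.0]   # day-14 weight gain, g
pups = [5, 7, 6, 9, 8, 10, 5, 11]

n = len(gain)
mg, mp = sum(gain) / n, sum(pups) / n
slope = (sum((g - mg) * (p - mp) for g, p in zip(gain, pups))
         / sum((g - mg) ** 2 for g in gain))
intercept = mp - slope * mg

def predict_litter(weight_gain):
    """Predicted number of pups for a given maternal weight gain (g)."""
    return round(slope * weight_gain + intercept)
```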

  10. Investigation of Polymer Liquid Crystals

    NASA Technical Reports Server (NTRS)

    Han, Kwang S.

    1996-01-01

    The positron annihilation lifetime spectroscopy (PALS) technique, using a low-energy flux generator, may provide a reasonably accurate method for measuring molecular weights of linear polymers and for characterizing thin polyimide films in terms of their dielectric constants, hydrophobicity, etc. Among the tested samples are glassy poly(arylene ether ketone) films, epoxy and other polyimide films. One of the proposed techniques relates the free-volume cell size (V(sub f)) to sample molecular weight (M) in a manner remarkably similar to the Mark-Houwink (M-H) relation between the inherent viscosity (eta) and the molecular weight of a polymer solution. PALS has also demonstrated that free-volume cell size in a thermoset is a versatile, useful parameter that relates directly to the polymer segmental molecular weight, the cross-link density, and the coefficient of thermal expansion. Thus, a determination of free-volume cell size provides a viable basis for complete microstructural characterization of thermoset polyimides and also gives direct information about the cross-link density and coefficient of expansion of the test samples. Seven areas of the research conducted are reported here.

  11. Particle Size Effects on Flow Properties of PS304 Plasma Spray Feedstock Powder Blend

    NASA Technical Reports Server (NTRS)

    Stanford, Malcolm K.; DellaCorte, Christopher; Eylon, Daniel

    2002-01-01

    The effects of BaF2-CaF2 particle size and size distribution on PS304 feedstock powder flowability have been investigated. Angular BaF2-CaF2 eutectic powders were produced by comminution and classified by screening to obtain 38 to 45 microns, 45 to 106 microns, 63 to 106 microns, 45 to 53 microns, 63 to 75 microns, and 90 to 106 microns particle size distributions. The fluorides were added incrementally from 0 to 10 wt% to the other powder constituents of the PS304 feedstock: nichrome, chromia, and silver powders. The flow rate of the powder blends decreased linearly with increasing concentration of the fluorides. Flow was degraded with decreasing BaF2-CaF2 particle size and with increasing breadth of the BaF2-CaF2 particle size distribution. A semiempirical relationship is offered to describe the PS304 powder blend flow behavior. The Hausner ratio confirmed the funnel flow test results, but was slightly less sensitive to differences in BaF2-CaF2 particle size and size distribution. These findings may have applicability to other powders that do not flow easily, such as ceramic powders.
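The Hausner ratio mentioned above is simply the tapped bulk density divided by the freely settled bulk density, with values above about 1.25 conventionally indicating poor flow. The densities below are invented for illustration, not measurements from the study.

```python
# Hausner ratio as a flowability indicator: tapped density over freely
# settled (bulk) density; higher ratio means poorer flow.

def hausner_ratio(tapped_density, bulk_density):
    return tapped_density / bulk_density

blends = {  # hypothetical PS304-like blends, densities in g/cm^3
    "coarse fluoride": (3.10, 2.75),
    "fine fluoride":   (3.30, 2.50),
}
ratios = {name: hausner_ratio(t, b) for name, (t, b) in blends.items()}
# the finer-fluoride blend gives a higher ratio, i.e. poorer flow,
# matching the trend reported in the abstract
```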

  12. Model Checking Techniques for Assessing Functional Form Specifications in Censored Linear Regression Models.

    PubMed

    León, Larry F; Cai, Tianxi

    2012-04-01

    In this paper we develop model checking techniques for assessing functional form specifications of covariates in censored linear regression models. These procedures are based on a censored data analog to taking cumulative sums of "robust" residuals over the space of the covariate under investigation. These cumulative sums are formed by integrating certain Kaplan-Meier estimators and may be viewed as "robust" censored data analogs to the processes considered by Lin, Wei & Ying (2002). The null distributions of these stochastic processes can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be generated by computer simulation. Each observed process can then be graphically compared with a few realizations from the Gaussian process. We also develop formal test statistics for numerical comparison. Such comparisons enable one to assess objectively whether an apparent trend seen in a residual plot reflects model misspecification or natural variation. We illustrate the methods with a well known dataset. In addition, we examine the finite sample performance of the proposed test statistics in simulation experiments. In our simulation experiments, the proposed test statistics have good power of detecting misspecification while at the same time controlling the size of the test.
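The core idea, stripped of censoring, is to order residuals by the covariate, form partial sums, and compare the observed supremum with simulated null realizations. The sketch below is a drastically simplified, uncensored analog using random sign flips, not the paper's Kaplan-Meier-based construction; all data are synthetic.

```python
# Simplified residual-cumsum check: the maximum absolute partial sum of
# covariate-ordered residuals is compared against realizations obtained
# by multiplying the residuals by random +/-1 signs.
import random

random.seed(0)
x = [i / 20 for i in range(20)]
resid = [random.gauss(0, 1) for _ in x]   # residuals from a "correct" model

def max_cumsum(rs):
    s, m = 0.0, 0.0
    for r in rs:
        s += r
        m = max(m, abs(s))
    return m

observed = max_cumsum(resid)
null = [max_cumsum([random.choice((-1, 1)) * r for r in resid])
        for _ in range(500)]
p_value = sum(v >= observed for v in null) / len(null)
```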

  13. Atomically precise (catalytic) particles synthesized by a novel cluster deposition instrument

    DOE PAGES

    Yin, C.; Tyo, E.; Kuchta, K.; ...

    2014-05-06

    Here, we report a new high vacuum instrument dedicated to the preparation of well-defined clusters supported on model and technologically relevant supports for catalytic and materials investigations. The instrument is based on deposition of size-selected metallic cluster ions produced by a high-flux magnetron cluster source. Furthermore, we maximize the throughput of the apparatus by collecting and focusing ions utilizing a conical octupole ion guide and a linear ion guide. The size selection is achieved by a quadrupole mass filter. The new design of the sample holder provides for the preparation of multiple samples on supports of various sizes and shapes in one session. After cluster deposition onto the support of interest, samples can be taken out of the chamber for a variety of testing and characterization.

  14. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardirossian, Narbe; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720

    2015-02-21

    A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.
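The selection procedure above (train candidate forms, then rank them by their error on both the training and the primary test set) can be caricatured in a few lines. Everything below is synthetic and drastically simplified: three toy "functional forms" are ranked by combined train/test RMSE against a sine target.

```python
# Toy model-selection loop: fit-free candidate forms ranked by the sum
# of their RMSE on a training set and a held-out test set.
import math

train = [(x / 10, math.sin(x / 10)) for x in range(10)]
test  = [(x / 10 + 0.05, math.sin(x / 10 + 0.05)) for x in range(10)]

candidates = {            # candidate "functional forms" (Taylor truncations)
    "linear":  lambda x: x,
    "cubic":   lambda x: x - x**3 / 6,
    "quintic": lambda x: x - x**3 / 6 + x**5 / 120,
}

def rmse(f, data):
    return (sum((f(x) - y) ** 2 for x, y in data) / len(data)) ** 0.5

ranked = sorted(candidates,
                key=lambda n: rmse(candidates[n], train) + rmse(candidates[n], test))
best = ranked[0]
```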

  15. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V

    DOE PAGES

    Mardirossian, Narbe; Head-Gordon, Martin

    2015-02-20

    We present a meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. These raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.

  16. Relationship between fish size and upper thermal tolerance

    USGS Publications Warehouse

    Recsetar, Matthew S.; Zeigler, Matthew P.; Ward, David L.; Bonar, Scott A.; Caldwell, Colleen A.

    2012-01-01

    Using critical thermal maximum (CTMax) tests, we examined the relationship between upper temperature tolerances and fish size (fry-adult or subadult lengths) of rainbow trout Oncorhynchus mykiss (41-200-mm TL), Apache trout O. gilae apache (40-220-mm TL), largemouth bass Micropterus salmoides (72-266-mm TL), Nile tilapia Oreochromis niloticus (35-206-mm TL), channel catfish Ictalurus punctatus (62-264-mm TL), and Rio Grande cutthroat trout O. clarkii virginalis (36-181-mm TL). Rainbow trout and Apache trout were acclimated at 18°C, Rio Grande cutthroat trout were acclimated at 14°C, and Nile tilapia, largemouth bass, and channel catfish were acclimated at 25°C, all for 14 d. Critical thermal maximum temperatures were estimated and data were analyzed using simple linear regression. There was no significant relationship (P > 0.05) between thermal tolerance and length for Nile tilapia (P = 0.33), channel catfish (P = 0.55), rainbow trout (P = 0.76), or largemouth bass (P = 0.93) for the length ranges we tested. There was a significant negative relationship between thermal tolerance and length for Rio Grande cutthroat trout (R² = 0.412, P < 0.05) and Apache trout (R² = 0.1374, P = 0.028); however, the difference was less than 1°C across all lengths of Apache trout tested and about 1.3°C across all lengths of Rio Grande cutthroat trout tested. Because there was either no or at most a slight relationship between upper thermal tolerance and size, management and research decisions based on upper thermal tolerance should be similar for the range of sizes within each species we tested. However, the different sizes we tested only encompassed life stages ranging from fry to adult/subadult, so thermal tolerance of eggs, alevins, and larger adults should also be considered before making management decisions affecting an entire species.

  17. General Linearized Theory of Quantum Fluctuations around Arbitrary Limit Cycles

    NASA Astrophysics Data System (ADS)

    Navarrete-Benlloch, Carlos; Weiss, Talitha; Walter, Stefan; de Valcárcel, Germán J.

    2017-09-01

    The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.
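The Van der Pol oscillator used above as a test bed has a stable limit cycle of amplitude close to 2 for small nonlinearity. As a quick numerical illustration (undriven oscillator, mu = 0.5, RK4 integration; parameters are arbitrary choices, not the paper's), an orbit started near the origin settles onto that cycle:

```python
# Integrate x'' - mu*(1 - x^2)*x' + x = 0 with classical RK4 and read
# off the limit-cycle amplitude from the settled portion of the orbit.

def vdp(state, mu=0.5):
    x, v = state
    return (v, mu * (1 - x * x) * v - x)

def rk4_step(f, s, h):
    k1 = f(s)
    k2 = f((s[0] + h / 2 * k1[0], s[1] + h / 2 * k1[1]))
    k3 = f((s[0] + h / 2 * k2[0], s[1] + h / 2 * k2[1]))
    k4 = f((s[0] + h * k3[0], s[1] + h * k3[1]))
    return (s[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            s[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

state, h = (0.1, 0.0), 0.01
trajectory = []
for i in range(20000):          # 200 time units in total
    state = rk4_step(vdp, state, h)
    if i >= 15000:              # keep only the settled portion
        trajectory.append(state[0])
amplitude = max(abs(x) for x in trajectory)   # ~2 on the limit cycle
```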

  18. Controlling the Size and Shape of the Elastin-Like Polypeptide based Micelles

    NASA Astrophysics Data System (ADS)

    Streletzky, Kiril; Shuman, Hannah; Maraschky, Adam; Holland, Nolan

    Elastin-like polypeptide (ELP) trimer constructs make reliable environmentally responsive micellar systems because they exhibit a controllable transition from being water-soluble at low temperatures to aggregating at high temperatures. It has been shown that, depending on the specific details of the ELP design (length of the ELP chain, pH and salt concentration), micelles can vary in size and shape between spherical micelles with diameters of 30-100 nm and elongated particles with an aspect ratio of about 10. This makes ELP trimers a convenient platform for developing potential drug delivery and bio-sensing applications as well as for understanding micelle formation in ELP systems. Since at a given salt concentration the headgroup area for each foldon should be constant, the size of the micelles is expected to be proportional to the volume of linear ELP available per foldon headgroup. Therefore, adding linear ELPs to a system of ELP-foldon should change the micelle volume, allowing control of micelle size and possibly shape. The effects of the addition of linear ELPs on the size, shape, and molecular weight of micelles at different salt concentrations were studied by a combination of dynamic light scattering and static light scattering. The initial results on 50 µM ELP-foldon samples (at low salt) show that the Rh of mixed micelles increases more than 5-fold as the amount of linear ELP is raised from 0 to 50 µM. It was also found that a given mixture of linear and trimer constructs has two temperature-based transitions and therefore displays three predominant size regimes.

  19. Effect of Stress Corrosion and Cyclic Fatigue on Fluorapatite Glass-Ceramic

    NASA Astrophysics Data System (ADS)

    Joshi, Gaurav V.

    2011-12-01

    Objective: The objective of this study was to test the following hypotheses: 1. Both cyclic degradation and stress corrosion mechanisms result in subcritical crack growth in a fluorapatite glass-ceramic. 2. There is an interactive effect of stress corrosion and cyclic fatigue causing subcritical crack growth (SCG) in this material. 3. A material that exhibits rising toughness curve (R-curve) behavior also exhibits a cyclic degradation mechanism. Materials and Methods: The material tested was a fluorapatite glass-ceramic (IPS e.max ZirPress, Ivoclar-Vivadent). Rectangular beam specimens with dimensions of 25 mm x 4 mm x 1.2 mm were fabricated using the press-on technique. Two groups of specimens (N=30) with polished (15 μm) or air-abraded surfaces were tested under rapid monotonic loading. Additional polished specimens were subjected to cyclic loading at two frequencies, 2 Hz (N=44) and 10 Hz (N=36), and at different stress amplitudes. All tests were performed using a fully articulating four-point flexure fixture in deionized water at 37°C. The SCG parameters were determined using the statistical approach of Munz and Fett (1999). The fatigue lifetime data were fit to a general log-linear model in ALTA PRO software (Reliasoft). Fractographic techniques were used to determine the critical flaw sizes to estimate fracture toughness. To determine the presence of R-curve behavior, non-linear regression was used. Results: Increasing the frequency of cycling did not cause a significant decrease in lifetime. The parameters of the general log-linear model showed that only stress corrosion has a significant effect on lifetime. The parameters are presented in the following table.* SCG parameters (n = 19-21) were similar for both frequencies. The regression model showed that the fracture toughness was significantly dependent (p<0.05) on critical flaw size. Conclusions: 1.
Cyclic fatigue does not have a significant effect on the SCG in the fluorapatite glass-ceramic IPS e.max ZirPress. 2. There was no interactive effect between cyclic degradation and stress corrosion for this material. 3. The material exhibited a low level of R-curve behavior. It did not exhibit cyclic degradation. *Please refer to dissertation for table.

  20. Enhancing Scalability and Efficiency of the TOUGH2_MP for LinuxClusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Keni; Wu, Yu-Shu

    2006-04-17

    TOUGH2_MP, the parallel version of the TOUGH2 code, has been enhanced by implementing more efficient communication schemes. This enhancement is achieved by reducing the number of small messages and the volume of large messages. The message exchange speed is further improved by using non-blocking communication for both linear and nonlinear iterations. In addition, we have modified the AZTEC parallel linear-equation solver to use non-blocking communication. Through improved code structuring and bug fixing, the new version of the code is more stable, while demonstrating similar or even better nonlinear iteration convergence than the original TOUGH2 code. As a result, the new version of TOUGH2_MP is significantly more efficient. In this paper, the scalability and efficiency of the parallel code are demonstrated by solving two large-scale problems. The testing results indicate that the speedup of the code may depend on both problem size and complexity. In general, the code has excellent scalability in memory requirement as well as computing time.

  1. Combustion monitoring of a water tube boiler using a discriminant radial basis network.

    PubMed

    Sujatha, K; Pappa, N

    2011-01-01

    This research work combines Fisher's linear discriminant (FLD) analysis and a radial basis network (RBN) for monitoring the combustion conditions of a coal-fired boiler so as to allow control of the air/fuel ratio. For this, two-dimensional flame images are required, which were captured with a CCD camera; the features of the images (average intensity, area, brightness, orientation, etc. of the flame) are extracted after preprocessing the images. The FLD is applied to reduce the n-dimensional feature vector to two dimensions for faster learning of the RBN. Also, three classes of images corresponding to different burning conditions of the flames have been extracted from continuous video processing. The corresponding temperatures and the emissions of carbon monoxide (CO) and other flue gases were obtained by measurement. Further, the training and testing of the Fisher's linear discriminant radial basis network (FLDRBN) with the collected data have been carried out, and the performance of the algorithms is presented. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
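Fisher's discriminant step can be sketched in miniature. The two-class, two-feature version below computes the projection direction w = Sw^-1 (m1 - m2); the flame "features" are invented, and the paper's three-class, n-dimensional reduction and RBN classifier are omitted.

```python
# Two-class Fisher linear discriminant: project onto the direction that
# maximizes between-class separation relative to within-class scatter.

def mean(vs):
    n = len(vs)
    return [sum(v[i] for v in vs) / n for i in range(2)]

def scatter(vs, m):
    s = [[0.0, 0.0], [0.0, 0.0]]
    for v in vs:
        d = (v[0] - m[0], v[1] - m[1])
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

good = [(0.9, 0.2), (1.0, 0.25), (0.95, 0.15), (1.05, 0.2)]   # e.g. intensity, area
poor = [(0.4, 0.6), (0.5, 0.65), (0.45, 0.7), (0.55, 0.6)]
m1, m2 = mean(good), mean(poor)
s1, s2 = scatter(good, m1), scatter(poor, m2)
sw = [[s1[i][j] + s2[i][j] for j in range(2)] for i in range(2)]
det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
diff = (m1[0] - m2[0], m1[1] - m2[1])
w = ((sw[1][1] * diff[0] - sw[0][1] * diff[1]) / det,
     (-sw[1][0] * diff[0] + sw[0][0] * diff[1]) / det)

def project(v):
    return w[0] * v[0] + w[1] * v[1]
```

On these toy data the two flame classes separate cleanly along the discriminant axis.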

  2. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study

    PubMed Central

    Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A

    2015-01-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. 
Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
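The log-space moment fitting behind the lognormal reference model can be sketched as follows. This is a minimal illustration on hypothetical diameters; the study itself fitted full size distributions by non-linear regression.

```python
import math

def fit_lognormal(diameters):
    """Fit a lognormal size-distribution model by the method of moments
    in log space: mu and sigma are the mean and sample standard deviation
    of ln(d). Illustrative only; not the study's regression pipeline."""
    logs = [math.log(d) for d in diameters]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((x - mu) ** 2 for x in logs) / (n - 1)
    return mu, math.sqrt(var)

def summary(mu, sigma):
    geo_mean = math.exp(mu)                      # geometric mean diameter
    arith_mean = math.exp(mu + sigma ** 2 / 2)   # mean of the lognormal
    cv = math.sqrt(math.exp(sigma ** 2) - 1)     # coefficient of variation
    return geo_mean, arith_mean, cv

# Hypothetical TEM area-equivalent diameters (nm), for illustration only
d = [26.1, 27.4, 28.0, 27.9, 26.8, 28.5, 27.2, 27.6]
mu, sigma = fit_lognormal(d)
gm, am, cv = summary(mu, sigma)
```

The coefficient of variation computed this way is one candidate for the interlaboratory uncertainty statistic the abstract describes.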

  3. Effects of wheat source and particle size in meal and pelleted diets on finishing pig growth performance, carcass characteristics, and nutrient digestibility.

    PubMed

    De Jong, J A; DeRouchey, J M; Tokach, M D; Dritz, S S; Goodband, R D; Paulk, C B; Woodworth, J C; Jones, C K; Stark, C R

    2016-08-01

    Two experiments were conducted to test the effects of wheat source and particle size in meal and pelleted diets on finishing pig performance, carcass characteristics, and diet digestibility. In Exp. 1, pigs (PIC 327 × 1050; n = 288; initially 43.8 kg BW) were balanced by initial BW and randomly allotted to 1 of 3 treatments with 8 pigs per pen (4 barrows and 4 gilts) and 12 pens per treatment. The 3 dietary treatments were hard red winter wheat ground with a hammer mill to 728, 579, or 326 μm. From d 0 to 40, decreasing wheat particle size decreased (linear, P < 0.033) ADFI but improved (quadratic, P < 0.014) G:F. From d 40 to 83, decreasing wheat particle size increased (quadratic, P < 0.018) ADG and improved (linear, P < 0.002) G:F. Overall from d 0 to 83, reducing wheat particle size improved (linear, P < 0.002) G:F. In Exp. 2, pigs (PIC 327 × 1050; n = 576; initially 43.4 ± 0.02 kg BW) were used to determine the effects of wheat source and particle size of pelleted diets on finishing pig growth performance and carcass characteristics. Pigs were randomly allotted to pens, and pens of pigs were balanced by initial BW and randomly allotted to 1 of 6 dietary treatments with 12 replications per treatment and 8 pigs/pen. The experimental diets used the same wheat-soybean meal formulation, with the 6 treatments using hard red winter or soft white winter wheat that were processed to 245, 465, and 693 μm and 258, 402, and 710 μm, respectively. All diets were pelleted. Overall, feeding hard red winter wheat increased (P < 0.05) ADG and ADFI when compared with soft white winter wheat. There was a tendency (P < 0.10) for a quadratic particle size × wheat source interaction for ADG, ADFI, and both DM and GE digestibility, as they were decreased for pigs fed 465-μm hard red winter wheat and were greatest for pigs fed 402-μm soft white winter wheat. There were no main or interactive effects of particle size or wheat source on carcass characteristics. 
In summary, fine grinding hard red winter wheat fed in meal form improved G:F and nutrient digestibility, whereas reducing particle size of wheat from approximately 700 to 250 μm in pelleted diets did not influence growth or carcass traits. Finally, feeding hard red winter wheat improved ADG and ADFI compared with feeding soft white winter wheat.
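The linear and quadratic responses reported above are typically assessed with orthogonal polynomial contrasts; a sketch for three dose levels follows. The means are hypothetical, and since the actual particle sizes are unequally spaced, the real analysis would use adjusted contrast coefficients.

```python
def polynomial_contrasts(means):
    """Orthogonal polynomial contrasts for three equally spaced levels:
    linear (-1, 0, 1) and quadratic (1, -2, 1). A sketch of the trend
    decomposition behind 'linear' and 'quadratic' P values."""
    m1, m2, m3 = means
    linear = -m1 + m3
    quadratic = m1 - 2 * m2 + m3
    return linear, quadratic

# Hypothetical G:F means at coarse, medium, fine grinds (illustration only)
lin, quad = polynomial_contrasts([0.350, 0.362, 0.368])
```

A positive linear contrast here corresponds to G:F improving as grind becomes finer; the significance of each contrast would then be judged against its standard error in the ANOVA.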

  4. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
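The implicit treatment of stiff acoustic-like terms can be illustrated with a first-order IMEX Euler step on a scalar model problem. This toy sketch only shows why implicitness permits step sizes far above the fast time scale; it is not the higher-order ARK schemes used in the paper.

```python
import math

def imex_euler(lam, f, y0, h, nsteps):
    """First-order IMEX (implicit-explicit) Euler sketch: the stiff linear
    term lam*y is treated implicitly and the non-stiff forcing f explicitly:
        y_{n+1} = y_n + h*(lam*y_{n+1} + f(t_n)),
    so each step solves the scalar linear system (1 - h*lam)*y_{n+1} = y_n + h*f(t_n)."""
    y, t = y0, 0.0
    for _ in range(nsteps):
        y = (y + h * f(t)) / (1.0 - h * lam)
        t += h
    return y

lam = -1000.0                 # fast, stiff time scale (acoustic-like)
f = lambda t: math.sin(t)     # slow forcing, treated explicitly
y = imex_euler(lam, f, 1.0, h=0.1, nsteps=100)  # h is 100x the fast scale
```

Despite the step size being two orders of magnitude above 1/|lam|, the iteration stays bounded and relaxes to the slow quasi-steady solution, which is the essential payoff of IMEX splittings.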

  5. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  6. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  7. Applications of multivariate modeling to neuroimaging group analysis: A comprehensive alternative to univariate general linear model

    PubMed Central

    Chen, Gang; Adleman, Nancy E.; Saad, Ziad S.; Leibenluft, Ellen; Cox, Robert W.

    2014-01-01

    All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance–covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse–Geisser and Huynh–Feldt) with MVT-WS. PMID:24954281

  8. The use of generalized linear models and generalized estimating equations in bioarchaeological studies.

    PubMed

    Nikita, Efthymia

    2014-03-01

    The current article explores whether the application of generalized linear models (GLM) and generalized estimating equations (GEE) can be used in place of conventional statistical analyses in the study of ordinal data that code an underlying continuous variable, like entheseal changes. The analysis of artificial data and ordinal data expressing entheseal changes in archaeological North African populations gave the following results. Parametric and nonparametric tests give convergent results particularly for P values <0.1, irrespective of whether the underlying variable is normally distributed or not, under the condition that the samples involved in the tests exhibit approximately equal sizes. If this prerequisite is valid and provided that the samples are of equal variances, analysis of covariance may be adopted. GLM are not subject to constraints and give results that converge to those obtained from all nonparametric tests. Therefore, they can be used instead of traditional tests, as they provide the same information while additionally allowing the study of the simultaneous impact of multiple predictors and their interactions, as well as the modeling of the experimental data. However, GLM should be replaced by GEE for the study of bilateral asymmetry and in general when paired samples are tested, because GEE are appropriate for correlated data. Copyright © 2013 Wiley Periodicals, Inc.
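The reported convergence between parametric and nonparametric tests for equal-sized samples can be illustrated with a pooled two-sample t statistic alongside an exact permutation test; the data below are hypothetical.

```python
import itertools
import math
import statistics

def t_statistic(a, b):
    """Pooled-variance two-sample t statistic."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

def permutation_p(a, b):
    """Exact two-sided permutation p-value for the difference in means.
    With equal sample sizes this nonparametric test tends to agree with
    the parametric t-test, echoing the convergence noted in the abstract."""
    pooled = a + b
    obs = abs(statistics.mean(a) - statistics.mean(b))
    count = total = 0
    for idx in itertools.combinations(range(len(pooled)), len(a)):
        chosen = set(idx)
        ga = [pooled[i] for i in chosen]
        gb = [pooled[i] for i in range(len(pooled)) if i not in chosen]
        if abs(statistics.mean(ga) - statistics.mean(gb)) >= obs - 1e-12:
            count += 1
        total += 1
    return count / total

a = [11.2, 12.1, 13.0, 12.6, 11.8]   # hypothetical scores, group A
b = [10.1, 10.9, 10.4, 11.0, 10.6]   # hypothetical scores, group B
t = t_statistic(a, b)
p = permutation_p(a, b)
```

Both approaches flag the same clear separation here; GLM/GEE extend this logic to multiple predictors and correlated (e.g. bilateral) observations.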

  9. Modelling the variation in skin-test tuberculin reactions, post-mortem lesion counts and case pathology in tuberculosis-exposed cattle: Effects of animal characteristics, histories and co-infection.

    PubMed

    Byrne, A W; Graham, J; Brown, C; Donaghy, A; Guelbenzu-Gonzalo, M; McNair, J; Skuce, R A; Allen, A; McDowell, S W

    2018-06-01

    Correctly identifying bovine tuberculosis (bTB) in cattle remains a significant problem in endemic countries. We hypothesized that animal characteristics (sex, age, breed), histories (herd effects, testing, movement) and potential exposure to other pathogens (co-infection; BVDV, liver fluke and Mycobacterium avium reactors) could significantly impact the immune responsiveness detected at skin testing and the variation in post-mortem pathology (confirmation) in bTB-exposed cattle. Three model suites were developed using a retrospective observational data set of 5,698 cattle culled during herd breakdowns in Northern Ireland. A linear regression model suggested that antemortem tuberculin reaction size (difference in purified protein derivative avium [PPDa] and bovine [PPDb] reactions) was significantly positively associated with post-mortem maximum lesion size and the number of lesions found. This indicated that reaction size could be considered a predictor of both the extent (number of lesions/tissues) and the pathological progression of infection (maximum lesion size). Tuberculin reaction size was related to age class, and younger animals (<2.85 years) displayed larger reaction sizes than older animals. Tuberculin reaction size was also associated with breed and animal movement and increased with the time between the penultimate and disclosing tests. A negative binomial random-effects model indicated a significant increase in lesion counts for animals with M. avium reactions (PPDb-PPDa < 0) relative to non-reactors (PPDb-PPDa = 0). Lesion counts were significantly increased in animals with previous positive severe interpretation skin-test results. Animals with increased movement histories, young animals and non-dairy breed animals also had significantly increased lesion counts. Animals from herds that had BVDV-positive cattle had significantly lower lesion counts than animals from herds without evidence of BVDV infection. 
Restricting the data set to only animals with a bTB visible lesion at slaughter (n = 2471), an ordinal regression model indicated that liver fluke-infected animals disclosed smaller lesions, relative to liver fluke-negative animals, and larger lesions were disclosed in animals with increased movement histories. © 2018 Blackwell Verlag GmbH.

  10. On the Genealogy of Asexual Diploids

    NASA Astrophysics Data System (ADS)

    Lam, Fumei; Langley, Charles H.; Song, Yun S.

    Given molecular genetic data from diploid individuals that, at present, reproduce mostly or exclusively asexually without recombination, an important problem in evolutionary biology is detecting evidence of past sexual reproduction (i.e., meiosis and mating) and recombination (both meiotic and mitotic). However, currently there is a lack of computational tools for carrying out such a study. In this paper, we formulate a new problem of reconstructing diploid genealogies under the assumption of no sexual reproduction or recombination, with the ultimate goal being to devise genealogy-based tools for testing deviation from these assumptions. We first consider the infinite-sites model of mutation and develop linear-time algorithms to test the existence of an asexual diploid genealogy compatible with the infinite-sites model of mutation, and to construct one if it exists. Then, we relax the infinite-sites assumption and develop an integer linear programming formulation to reconstruct asexual diploid genealogies with the minimum number of homoplasy (back or recurrent mutation) events. We apply our algorithms on simulated data sets with sizes of biological interest.
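A simplified, haploid stand-in for infinite-sites compatibility checking is the classic four-gamete test sketched below; the paper's diploid genealogy tests are more involved, but rest on the same idea that tree-like inheritance forbids certain site patterns.

```python
from itertools import combinations

def four_gamete_compatible(haplotypes):
    """Four-gamete test: under the infinite-sites model with no
    recombination, no pair of sites may exhibit all four gametes
    00, 01, 10, 11 across the sample. Returns True when the data
    are compatible with some mutation tree."""
    nsites = len(haplotypes[0])
    for i, j in combinations(range(nsites), 2):
        gametes = {(h[i], h[j]) for h in haplotypes}
        if len(gametes) == 4:
            return False
    return True

compatible = ["000", "100", "110", "111"]     # tree-like data
incompatible = ["00", "01", "10", "11"]       # all four gametes at one site pair
```

When the test fails, explaining the data requires recombination or homoplasy, which is exactly the situation the paper's integer linear programming formulation minimizes.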

  11. A comparison of SuperLU solvers on the intel MIC architecture

    NASA Astrophysics Data System (ADS)

    Tuncel, Mehmet; Duran, Ahmet; Celebi, M. Serdar; Akaydin, Bora; Topkaya, Figen O.

    2016-10-01

    In many science and engineering applications, problems may result in solving a sparse linear system AX=B. For example, SuperLU_MCDT, a linear solver, was used for the large penta-diagonal matrices for 2D problems and hepta-diagonal matrices for 3D problems, coming from the incompressible blood flow simulation (see [1]). It is important to test the status and potential improvements of state-of-the-art solvers on new technologies. In this work, sequential, multithreaded and distributed versions of SuperLU solvers (see [2]) are examined on the Intel Xeon Phi coprocessors using the offload programming model at the EURORA cluster of CINECA in Italy. We consider a portfolio of test matrices containing patterned matrices from UFMM ([3]) and randomly located matrices. This architecture can benefit from high parallelism and large vectors. We find that the sequential SuperLU gained up to a 45% performance improvement from offload programming, depending on the sparse matrix type and the size of transferred and processed data.
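The core of any SuperLU-style solver is LU factorization with pivoting. The dense, pure-Python sketch below only illustrates the underlying idea; SuperLU itself operates on sparse matrices with sophisticated fill-reducing orderings and supernodal data structures.

```python
def lu_solve(A, b):
    """Dense LU factorization with partial pivoting, solving Ax = b.
    Row operations are applied to the augmented system, so b is
    forward-substituted during factorization."""
    n = len(A)
    M = [row[:] for row in A]          # working copy, overwritten by L and U
    x = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))   # pivot row
        M[k], M[p] = M[p], M[k]
        x[k], x[p] = x[p], x[k]
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            M[r][k] = m
            for c in range(k + 1, n):
                M[r][c] -= m * M[k][c]
            x[r] -= m * x[k]
    for k in range(n - 1, -1, -1):     # back substitution with U
        x[k] = (x[k] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]                 # small tridiagonal, like a 1D stencil
x = lu_solve(A, [6.0, 12.0, 14.0])
```

For the banded penta- and hepta-diagonal systems mentioned in the abstract, a sparse factorization avoids the O(n^3) cost of this dense version by exploiting the zero pattern.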

  12. Comparison of the frictional characteristics of aesthetic orthodontic brackets measured using a modified in vitro technique

    PubMed Central

    Arici, Nursel

    2015-01-01

    Objective The coefficients of friction (COFs) of aesthetic ceramic and stainless steel brackets used in conjunction with stainless steel archwires were investigated using a modified linear tribometer and special computer software, and the effects of the bracket slot size (0.018 inches [in] or 0.022 in) and materials (ceramic or metal) on the COF were determined. Methods Four types of ceramic (one with a stainless steel slot) and one conventional stainless steel bracket were tested with two types of archwire sizes: a 0.017 × 0.025-in wire in the 0.018-in slots and a 0.019 × 0.025-in wire in the 0.022-in slot brackets. For pairwise comparisons between the 0.018-in and 0.022-in slot sizes in the same bracket, an independent sample t-test was used. One-way and two-way analysis of variance (ANOVA) and Tukey's post-hoc test at the 95% confidence level (α = 0.05) were also used for statistical analyses. Results There were significant differences between the 0.022-in and 0.018-in slot sizes for the same brand of bracket. ANOVA also showed that both slot size and bracket slot material had significant effects on COF values (p < 0.001). The ceramic bracket with a 0.022-in stainless steel slot showed the lowest mean COF (µ = 0.18), followed by the conventional stainless steel bracket with a 0.022-in slot (µ = 0.21). The monocrystalline alumina ceramic bracket with a 0.018-in slot had the highest COF (µ = 0.85). Conclusions Brackets with stainless steel slots exhibit lower COFs than ceramic slot brackets. All brackets show lower COFs as the slot size increases. PMID:25667915

  13. Effects of pole flux distribution in a homopolar linear synchronous machine

    NASA Astrophysics Data System (ADS)

    Balchin, M. J.; Eastham, J. F.; Coles, P. C.

    1994-05-01

    Linear forms of synchronous electrical machine are at present being considered as the propulsion means in high-speed, magnetically levitated (Maglev) ground transportation systems. A homopolar form of machine is considered in which the primary member, which carries both ac and dc windings, is supported on the vehicle. Test results and theoretical predictions are presented for a design of machine intended for driving a 100 passenger vehicle at a top speed of 400 km/h. The layout of the dc magnetic circuit is examined to locate the best position for the dc winding from the point of view of minimum core weight. Measurements of flux build-up under the machine at different operating speeds are given for two types of secondary pole: solid and laminated. The solid pole results, which are confirmed theoretically, show that this form of construction is impractical for high-speed drives. Measured motoring characteristics are presented for a short length of machine which simulates conditions at the leading and trailing ends of the full-sized machine. Combination of the results with those from a cylindrical version of the machine makes it possible to infer the performance of the full-sized traction machine. This gives 0.8 pf and 0.9 efficiency at 300 km/h, which is much better than the reported performance of a comparable linear induction motor (0.52 pf and 0.82 efficiency). It is therefore concluded that in any projected high-speed Maglev systems, a linear synchronous machine should be the first choice as the propulsion means.

  14. Improved pedagogy for linear differential equations by reconsidering how we measure the size of solutions

    NASA Astrophysics Data System (ADS)

    Tisdell, Christopher C.

    2017-11-01

    For over 50 years, the learning and teaching of a priori bounds on solutions to linear differential equations has involved a Euclidean approach to measuring the size of a solution. While the Euclidean approach to a priori bounds on solutions is somewhat manageable in the learning and teaching of the proofs involving second-order, linear problems with constant coefficients, we believe it is not pedagogically optimal. Moreover, the Euclidean method becomes pedagogically unwieldy in the proofs involving higher-order cases. The purpose of this work is to propose a simpler pedagogical approach to establish a priori bounds on solutions by considering a different way of measuring the size of a solution to linear problems, which we refer to as the Uber size. The Uber form enables a simplification of pedagogy from the literature and the ideas are accessible to learners who have an understanding of the Fundamental Theorem of Calculus and the exponential function, both usually seen in a first course in calculus. We believe that this work will be of mathematical and pedagogical interest to those who are learning and teaching in the area of differential equations or in any of the numerous disciplines where linear differential equations are used.
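The pedagogical idea of choosing a convenient measure of solution size can be illustrated with a generic weighted measure (this is a hedged sketch of the general technique; the paper's exact "Uber" form may differ). For the first-order problem $y'(t) = a(t)\,y(t)$, define

```latex
% Weighted measure of solution size (illustrative; not necessarily the
% paper's exact "Uber" form):
\[
u(t) \;:=\; e^{-\int_0^t \lvert a(s)\rvert\,ds}\,\lvert y(t)\rvert .
\]
% Wherever y(t) != 0, (d/dt)|y(t)| <= |a(t)| |y(t)|, hence
\[
u'(t) \;=\; e^{-\int_0^t \lvert a(s)\rvert\,ds}
\Bigl(\tfrac{d}{dt}\lvert y(t)\rvert \;-\; \lvert a(t)\rvert\,\lvert y(t)\rvert\Bigr)
\;\le\; 0 ,
\]
% so u is non-increasing, u(t) <= u(0) = |y(0)|, and the a priori bound
\[
\lvert y(t)\rvert \;\le\; \lvert y(0)\rvert\, e^{\int_0^t \lvert a(s)\rvert\,ds}
\]
% follows using nothing beyond the Fundamental Theorem of Calculus and
% the exponential function.
```

The point of such a weighted measure is that the bound falls out of a monotonicity argument, with no Euclidean norm estimates needed.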

  15. Analysis of Zenith Tropospheric Delay above Europe based on long time series derived from the EPN data

    NASA Astrophysics Data System (ADS)

    Baldysz, Zofia; Nykiel, Grzegorz; Figurski, Mariusz; Szafranek, Karolina; Kroszczynski, Krzysztof; Araszkiewicz, Andrzej

    2015-04-01

    In recent years, the GNSS system began to play an increasingly important role in the research related to the climate monitoring. Based on the GPS system, which has the longest operational capability in comparison with other systems, and a common computational strategy applied to all observations, long and homogeneous ZTD (Zenith Tropospheric Delay) time series were derived. This paper presents results of analysis of 16-year ZTD time series obtained from the EPN (EUREF Permanent Network) reprocessing performed by the Military University of Technology. To maintain the uniformity of data, the analyzed period of time (1998-2013) is exactly the same for all stations - observations carried out before 1998 were removed from time series and observations processed using a different strategy were recalculated according to the MUT LAC approach. For all 16-year time series (59 stations) Lomb-Scargle periodograms were created to obtain information about the oscillations in ZTD time series. Due to strong annual oscillations which disturb the character of oscillations with smaller amplitude and thus hinder their investigation, Lomb-Scargle periodograms for time series with the annual oscillations removed were created in order to verify the presence of semi-annual, ter-annual and quarto-annual oscillations. Linear trends and seasonal components were estimated using LSE (Least Squares Estimation), and the Mann-Kendall trend test was used to confirm the presence of the linear trend designated by the LSE method. In order to verify the effect of the length of time series on the estimated size of the linear trend, a comparison between two different lengths of ZTD time series was performed. To carry out a comparative analysis, 30 stations which have been operating since 1996 were selected. For these stations two periods of time were analyzed: a shortened 16-year (1998-2013) and the full 18-year (1996-2013). 
    For some stations an additional two years of observations has a significant impact on the size of the linear trend - for only 4 stations was the size of the linear trend exactly the same for the two periods. In one case, the nature of the trend changed from negative (16-year time series) to positive (18-year time series). The average value of the linear trends for the 16-year time series is 1.5 mm/decade, but their spatial distribution is not uniform. The average value of the linear trends for all 18-year time series is 2.0 mm/decade, with better spatial distribution and smaller discrepancies.
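The Mann-Kendall trend test used above can be sketched as follows, using the no-ties normal approximation and a hypothetical anomaly series.

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test sketch: S sums the signs of all pairwise
    differences; the normal approximation (valid without ties) converts
    S to a Z score with the usual continuity correction."""
    n = len(series)
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Hypothetical ZTD anomalies (mm): a mild upward drift plus oscillation
ztd = [0.1 * k + ((-1) ** k) * 0.3 for k in range(24)]
s, z = mann_kendall(ztd)
```

A |Z| above 1.96 rejects the no-trend hypothesis at the 5% level, which is how the test corroborates an LSE-fitted linear trend.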

  16. In Vitro Evaluation of the Size, Knot Holding Capacity, and Knot Security of the Forwarder Knot Compared to Square and Surgeon's Knots Using Large Gauge Suture.

    PubMed

    Gillen, Alex M; Munsterman, Amelia S; Hanson, R Reid

    2016-11-01

    To investigate the strength, size, and holding capacity of the self-locking forwarder knot compared to surgeon's and square knots using large gauge suture. In vitro mechanical study. Knotted suture. Forwarder, surgeon's, and square knots were tested on a universal testing machine under linear tension using 2 and 3 USP polyglactin 910 and 2 USP polydioxanone. Knot holding capacity (KHC) and mode of failure were recorded and relative knot security (RKS) was calculated as a percentage of KHC. Knot volume and weight were assessed by digital micrometer and balance, respectively. ANOVA and post hoc testing were used to compare strength between number of throws, suture, suture size, and knot type. P<.05 was considered significant. Forwarder knots had a higher KHC and RKS than surgeon's or square knots for all suture types and number of throws. No forwarder knots unraveled, but a proportion of square and surgeon's knots with <6 throws did unravel. Forwarder knots had a smaller volume and weight than surgeon's and square knots with equal number of throws. The forwarder knot of 4 throws using 3 USP polyglactin 910 had the highest KHC, RKS, and the smallest size and weight. Forwarder knots may be an alternative for commencing continuous patterns in large gauge suture, without sacrificing knot integrity, but further in vivo and ex vivo testing is required to assess the effects of this sliding knot on tissue perfusion before clinical application. © Copyright 2016 by The American College of Veterinary Surgeons.

  17. The influence of mass configurations on velocity amplified vibrational energy harvesters

    NASA Astrophysics Data System (ADS)

    O'Donoghue, D.; Frizzell, R.; Kelly, G.; Nolan, K.; Punch, J.

    2016-05-01

    Vibrational energy harvesters scavenge ambient vibrational energy, offering an alternative to batteries for the autonomous operation of low power electronics. Velocity amplified electromagnetic generators (VAEGs) utilize the velocity amplification effect to increase power output and operational bandwidth, compared to linear resonators. A detailed experimental analysis of the influence of mass ratio and number of degrees-of-freedom (dofs) on the dynamic behaviour and power output of a macro-scale VAEG is presented. Various mass configurations are tested under drop-test and sinusoidal forced excitation, and the system performances are compared. For the drop-test, increasing mass ratio and number of dofs increases velocity amplification. Under forced excitation, the impacts between the masses are more complex, inducing greater energy losses. This results in the 2-dof systems achieving the highest velocities and, hence, highest output voltages. With fixed transducer size, higher mass ratios achieve higher voltage output due to the superior velocity amplification. Changing the magnet size to a fixed percentage of the final mass showed the increase in velocity of the systems with higher mass ratios is not significant enough to overcome the reduction in transducer size. Consequently, the 3:1 mass ratio systems achieved the highest output voltage. These findings are significant for the design of future reduced-scale VAEGs.

  18. Evaluation of linear discriminant analysis for automated Raman histological mapping of esophageal high-grade dysplasia

    NASA Astrophysics Data System (ADS)

    Hutchings, Joanne; Kendall, Catherine; Shepherd, Neil; Barr, Hugh; Stone, Nicholas

    2010-11-01

    Rapid Raman mapping has the potential to be used for automated histopathology diagnosis, providing an adjunct technique to histology diagnosis. The aim of this work is to evaluate the feasibility of automated and objective pathology classification of Raman maps using linear discriminant analysis. Raman maps of esophageal tissue sections are acquired. Principal component (PC)-fed linear discriminant analysis (LDA) is carried out using subsets of the Raman map data (6483 spectra). An overall (validated) training classification model performance of 97.7% (sensitivity 95.0 to 100% and specificity 98.6 to 100%) is obtained. The remainder of the map spectra (131,672 spectra) are projected onto the classification model resulting in Raman images, demonstrating good correlation with contiguous hematoxylin and eosin (HE) sections. Initial results suggest that LDA has the potential to automate pathology diagnosis of esophageal Raman images, but since the classification of test spectra is forced into existing training groups, further work is required to optimize the training model. A small pixel size is advantageous for developing the training datasets using mapping data, despite lengthy mapping times, due to additional morphological information gained, and could facilitate differentiation of further tissue groups, such as the basal cells/lamina propria, in the future, but larger pixel sizes (and faster mapping) may be more feasible for clinical application.
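A two-class, two-feature linear discriminant of the kind fed by principal component scores can be sketched in a few lines. This is a hypothetical illustration with made-up PC scores, not the study's multi-class model.

```python
def lda_train(class0, class1):
    """Minimal two-class Fisher linear discriminant for 2-D features
    (e.g. scores on the first two principal components of spectra):
    w = S_pooled^{-1} (mu1 - mu0), thresholded at the midpoint projection."""
    def mean(rows):
        n = len(rows)
        return [sum(r[k] for r in rows) / n for k in (0, 1)]
    def scatter(rows, mu):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for r in rows:
            d = [r[0] - mu[0], r[1] - mu[1]]
            for a in (0, 1):
                for b in (0, 1):
                    s[a][b] += d[a] * d[b]
        return s
    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    dof = len(class0) + len(class1) - 2
    S = [[(s0[a][b] + s1[a][b]) / dof for b in (0, 1)] for a in (0, 1)]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]       # 2x2 inverse by hand
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [Sinv[0][0] * dm[0] + Sinv[0][1] * dm[1],
         Sinv[1][0] * dm[0] + Sinv[1][1] * dm[1]]
    c = 0.5 * (w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1]))
    return lambda x: 1 if w[0] * x[0] + w[1] * x[1] > c else 0

normal = [[1.0, 0.2], [1.2, 0.1], [0.9, 0.3], [1.1, 0.15]]       # hypothetical PC scores
dysplasia = [[2.0, 1.0], [2.2, 1.1], [1.9, 0.9], [2.1, 1.05]]    # hypothetical PC scores
classify = lda_train(normal, dysplasia)
```

In a mapping application, this classifier would be applied pixel by pixel to the projected spectra to build the pathology image.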

  19. A complete sample of double-lobed radio quasars for VLBI tests of source models - Definition and statistics

    NASA Technical Reports Server (NTRS)

    Hough, D. H.; Readhead, A. C. S.

    1989-01-01

    A complete, flux-density-limited sample of double-lobed radio quasars is defined, with nuclei bright enough to be mapped with the Mark III VLBI system. It is shown that the statistics of linear size, nuclear strength, and curvature are consistent with the assumption of random source orientations and simple relativistic beaming in the nuclei. However, these statistics are also consistent with the effects of interaction between the beams and the surrounding medium. The distribution of jet velocities in the nuclei, as measured with VLBI, will provide a powerful test of physical theories of extragalactic radio sources.

  20. Effects of body size and gender on the population pharmacokinetics of artesunate and its active metabolite dihydroartemisinin in pediatric malaria patients.

    PubMed

    Morris, Carrie A; Tan, Beesan; Duparc, Stephan; Borghini-Fuhrer, Isabelle; Jung, Donald; Shin, Chang-Sik; Fleckenstein, Lawrence

    2013-12-01

    Despite the important role of the antimalarial artesunate and its active metabolite dihydroartemisinin (DHA) in malaria treatment efforts, there are limited data on the pharmacokinetics of these agents in pediatric patients. This study evaluated the effects of body size and gender on the pharmacokinetics of artesunate-DHA using data from pediatric and adult malaria patients. Nonlinear mixed-effects modeling was used to obtain a base model consisting of first-order artesunate absorption and one-compartment models for artesunate and for DHA. Various methods of incorporating effects of body size descriptors on clearance and volume parameters were tested. An allometric scaling model for weight and a linear body surface area (BSA) model were deemed optimal. The apparent clearance and volume of distribution of DHA obtained with the allometric scaling model, normalized to a 38-kg patient, were 63.5 liters/h and 65.1 liters, respectively. Estimates for the linear BSA model were similar. The 95% confidence intervals for the estimated gender effects on clearance and volume parameters for artesunate fell outside the predefined no-relevant-clinical-effect interval of 0.75 to 1.25. However, the effect of gender on apparent DHA clearance was almost entirely contained within this interval, suggesting a lack of an influence of gender on this parameter. Overall, the pharmacokinetics of artesunate and DHA following oral artesunate administration can be described for pediatric patients using either an allometric scaling or linear BSA model. Both models predict that, for a given artesunate dose in mg/kg of body weight, younger children are expected to have lower DHA exposure than older children or adults.
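The allometric scaling model can be sketched directly from the reference values reported above (63.5 L/h apparent clearance and 65.1 L apparent volume for a 38-kg patient). The 0.75 exponent for clearance and linear scaling for volume are the conventional allometric choices.

```python
def dha_clearance(weight_kg, cl_std=63.5, wt_std=38.0, exponent=0.75):
    """Allometric scaling of apparent DHA clearance from the 38-kg
    reference value (conventional 0.75 exponent)."""
    return cl_std * (weight_kg / wt_std) ** exponent

def dha_volume(weight_kg, v_std=65.1, wt_std=38.0):
    """Apparent volume of distribution scaled linearly with body weight."""
    return v_std * (weight_kg / wt_std)

cl_child = dha_clearance(15.0)   # hypothetical 15-kg child
cl_ref = dha_clearance(38.0)     # reference patient
```

Because clearance scales with weight^0.75 while a mg/kg dose scales with weight^1.0, per-kg clearance is higher in smaller children, which is consistent with the abstract's prediction of lower DHA exposure in younger children for a given mg/kg dose.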

  1. Construction of trypanosome artificial mini-chromosomes.

    PubMed Central

    Lee, M G; E, Y; Axelrod, N

    1995-01-01

We report the preparation of two linear constructs which, when transformed into the procyclic form of Trypanosoma brucei, become stably inherited artificial mini-chromosomes. The two constructs, one of 10 kb and the other of 13 kb, both contain a T.brucei PARP promoter driving a chloramphenicol acetyltransferase (CAT) gene. In the 10 kb construct the CAT gene is followed by one hygromycin phosphotransferase (Hph) gene, and in the 13 kb construct the CAT gene is followed by three tandemly linked Hph genes. At each end of these linear molecules are telomere repeats and subtelomeric sequences. Electroporation of these linear DNA constructs into the procyclic form of T.brucei generated hygromycin-B-resistant cell lines. In these cell lines, the input DNA remained linear and bounded by the telomere ends, but it increased in size. In the cell lines generated by the 10 kb construct, the input DNA increased in size to 20-50 kb. In the cell lines generated by the 13 kb construct, two sizes of linear DNAs containing the input plasmid were detected: one of 40-50 kb and the other of 150 kb. The increase in size was not the result of in vivo tandem repetitions of the input plasmid, but represented the addition of new sequences. These Hph-containing linear DNA molecules were maintained stably in cell lines for at least 20 generations in the absence of drug selection and were subsequently referred to as trypanosome artificial mini-chromosomes, or TACs. PMID:8532534

  2. A new theory for multistep discretizations of stiff ordinary differential equations: Stability with large step sizes

    NASA Technical Reports Server (NTRS)

    Majda, G.

    1985-01-01

A large set of variable-coefficient linear systems of ordinary differential equations which possess two different time scales, a slow one and a fast one, is considered. A small parameter epsilon characterizes the stiffness of these systems. A system of o.d.e.s. in this set is approximated by a general class of multistep discretizations which includes both one-leg and linear multistep methods. Sufficient conditions are determined under which each solution of a multistep method is uniformly bounded, with a bound which is independent of the stiffness of the system of o.d.e.s., when the step size resolves the slow time scale, but not the fast one. This property is called stability with large step sizes. The theory presented lets one compare properties of one-leg methods and linear multistep methods when they approximate variable-coefficient systems of stiff o.d.e.s. In particular, it is shown that one-leg methods have better stability properties with large step sizes than their linear multistep counterparts. The theory also allows one to relate the concept of D-stability to the usual notions of stability and stability domains and to the propagation of errors for multistep methods which use large step sizes.
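The notion of stability with large step sizes can be illustrated with a toy two-scale linear system (not from the paper). Backward Euler, the simplest method that is simultaneously a one-leg and a linear multistep method, keeps the solution bounded even when the step size resolves only the slow scale:

```python
import numpy as np

# Illustrative sketch, not the paper's analysis: a stiff linear system with a
# slow mode (rate -1) and a fast mode (rate -1/eps). Backward Euler stays
# bounded with a step size h that resolves only the slow scale (eps << h < 1).

eps = 1e-6                        # stiffness parameter
A = np.diag([-1.0, -1.0 / eps])   # two time scales
h = 0.1                           # large step: h >> eps

y = np.array([1.0, 1.0])
I = np.eye(2)
for _ in range(100):              # integrate to t = 10
    # Backward Euler: y_{n+1} = (I - h A)^{-1} y_n
    y = np.linalg.solve(I - h * A, y)

print(y)  # slow mode decays smoothly; fast mode is damped to ~0
```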

  3. Determinants and consequences of female attractiveness and sexiness: realistic tests with restaurant waitresses.

    PubMed

    Lynn, Michael

    2009-10-01

    Waitresses completed an on-line survey about their physical characteristics, self-perceived attractiveness and sexiness, and average tips. The waitresses' self-rated physical attractiveness increased with their breast sizes and decreased with their ages, waist-to-hip ratios, and body sizes. Similar effects were observed on self-rated sexiness, with the exception of age, which varied with self-rated sexiness in a negative, quadratic relationship rather than a linear one. Moreover, the waitresses' tips varied with age in a negative, quadratic relationship, increased with breast size, increased with having blond hair, and decreased with body size. These findings, which are discussed from an evolutionary perspective, make several contributions to the literature on female physical attractiveness. First, they replicate some previous findings regarding the determinants of female physical attractiveness using a larger, more diverse, and more ecologically valid set of stimuli than has been studied before. Second, they provide needed evidence that some of those determinants of female beauty affect interpersonal behaviors as well as attractiveness ratings. Finally, they indicate that some determinants of female physical attractiveness do not have the same effects on overt interpersonal behavior (such as tipping) that they have on attractiveness ratings. This latter contribution highlights the need for more ecologically valid tests of evolutionary theories about the determinants and consequences of female beauty.

  4. Incremental exercise test for the evaluation of peak oxygen consumption in paralympic swimmers.

    PubMed

    de Souza, Helton; DA Silva Alves, Eduardo; Ortega, Luciana; Silva, Andressa; Esteves, Andrea M; Schwingel, Paulo A; Vital, Roberto; DA Rocha, Edilson A; Rodrigues, Bruno; Lira, Fabio S; Tufik, Sergio; DE Mello, Marco T

    2016-04-01

Peak oxygen consumption (VO2peak) is a fundamental parameter used to evaluate physical capacity. The objective of this study was to explore two types of incremental exercise tests used to determine VO2peak in four Paralympic swimmers: arm ergometer testing in the laboratory and testing in the swimming pool. On two different days, the VO2peak values of the four athletes were measured in a swimming pool and by an arm ergometer. The protocols identified the VO2peak by progressive loading until volitional exhaustion was reached. The results were analyzed using the paired Student's t-test, Cohen's d effect sizes and a linear regression. The results showed that the VO2peak values obtained using the swimming pool protocol were higher than those obtained by the arm ergometer (45.8±19.2 vs. 30.4±15.5; P=0.02), with a large effect size (d=3.20). When analyzing swimmers 1, 2, 3 and 4 individually, differences of 22.4%, 33.8%, 60.1% and 27.1% were observed, respectively. Field tests similar to the competitive setting are a more accurate way to determine the aerobic capacity of Paralympic swimmers. This approach provides more sensitive data that enable better direction of training, consequently facilitating improved performance.
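The paired Student's t-test and Cohen's d used above are easy to reproduce. The four VO2peak pairs below are made up for illustration (the abstract reports only group means), and Cohen's d is computed here as the mean difference over the SD of the differences:

```python
import numpy as np

# Hypothetical VO2peak values for four swimmers; the abstract reports only
# group means (45.8±19.2 pool vs. 30.4±15.5 ergometer), so these numbers
# are illustrative, not the study data.
pool = np.array([62.0, 55.0, 40.0, 26.0])
ergo = np.array([48.0, 38.0, 25.0, 20.0])

diff = pool - ergo
n = len(diff)

# Paired Student's t statistic: t = mean(d) / (sd(d) / sqrt(n))
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))

# Cohen's d for paired data: mean difference over SD of the differences
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t({n - 1}) = {t_stat:.2f}, d = {cohens_d:.2f}")
```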

  5. On the development of HSCT tail sizing criteria using linear matrix inequalities

    NASA Technical Reports Server (NTRS)

    Kaminer, Isaac

    1995-01-01

    This report presents the results of a study to extend existing high speed civil transport (HSCT) tail sizing criteria using linear matrix inequalities (LMI). In particular, the effects of feedback specifications, such as MIL STD 1797 Level 1 and 2 flying qualities requirements, and actuator amplitude and rate constraints on the maximum allowable cg travel for a given set of tail sizes are considered. Results comparing previously developed industry criteria and the LMI methodology on an HSCT concept airplane are presented.

  6. Improved Pedagogy for Linear Differential Equations by Reconsidering How We Measure the Size of Solutions

    ERIC Educational Resources Information Center

    Tisdell, Christopher C.

    2017-01-01

For over 50 years, the learning and teaching of "a priori" bounds on solutions to linear differential equations has involved a Euclidean approach to measuring the size of a solution. While the Euclidean approach to "a priori" bounds on solutions is somewhat manageable in the learning and teaching of the proofs involving…

  7. No evidence of nonlinear effects of predator density, refuge availability, or body size of prey on prey mortality rates.

    PubMed

    Simkins, Richard M; Belk, Mark C

    2017-08-01

Predator density, refuge availability, and body size of prey can all affect the mortality rate of prey. We assume that more predators will lead to an increase in prey mortality rate, but behavioral interactions between predators and prey, and availability of refuge, may lead to nonlinear effects of increased number of predators on prey mortality rates. We tested for nonlinear effects in prey mortality rates in a mesocosm experiment with different size classes of western mosquitofish (Gambusia affinis) as the prey, different numbers of green sunfish (Lepomis cyanellus) as the predators, and different levels of refuge. Predator number and size class of prey, but not refuge availability, had significant effects on the mortality rate of prey. Change in mortality rate of prey was linear and equal across the range of predator numbers. Each new predator increased the mortality rate by about 10% overall, and mortality rates were higher for smaller size classes. Predator-prey interactions at the individual level may not scale up to create nonlinearity in prey mortality rates with increasing predator density at the population level.

  8. Effect of Pore Size, Morphology and Orientation on the Bulk Stiffness of a Porous Ti35Nb4Sn Alloy

    NASA Astrophysics Data System (ADS)

    Torres-Sanchez, Carmen; McLaughlin, John; Bonallo, Ross

    2018-04-01

Metal foams of a titanium alloy were designed to study porosity as well as pore size and shape independently. These were manufactured using a powder metallurgy/space-holder technique that allowed fine control of pore size and morphology, and were then characterized and tested against well-established models to predict a relationship between porosity, pore size and shape, and bulk stiffness. Among the typically used correlations, existing power-law models were found to be the best fit for predicting compressive elastic moduli from macropore morphology, outperforming exponential, polynomial and binomial models. Other traditional models, such as linear ones, required updated coefficients to become relevant to sintered porous metal macrostructures. The new coefficients reported in this study contribute toward a design tool that allows the tailoring of mechanical properties through the porosity macrostructure. The results show that, for the same porosity range, pore shape and orientation have a significant effect on mechanical performance and can be predicted. Conversely, pore size has only a mild impact on bulk stiffness.
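A power-law fit of the kind found to perform best can be sketched as a linear regression in log space. The model form E = E0 * (1 - p)^n, the stiffness values, and the exponent below are all illustrative, not the paper's data:

```python
import numpy as np

# Illustrative sketch (made-up values, not the paper's data): fit the
# power-law model E = E0 * (1 - p)^n relating porosity p to bulk stiffness E
# by linear regression in log space: log E = log E0 + n * log(1 - p).

porosity = np.array([0.30, 0.40, 0.50, 0.60, 0.70])
E0_true, n_true = 110.0, 2.3                  # hypothetical dense stiffness (GPa) and exponent
E = E0_true * (1.0 - porosity) ** n_true      # noise-free synthetic moduli

# Slope of the log-log fit recovers the exponent; intercept recovers log E0.
n_fit, logE0_fit = np.polyfit(np.log(1.0 - porosity), np.log(E), 1)
print(f"E0 = {np.exp(logE0_fit):.1f} GPa, exponent n = {n_fit:.2f}")
```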

  9. Aerosol Impacts on Cirrus Clouds and High-Power Laser Transmission: A Combined Satellite Observation and Modeling Approach

    DTIC Science & Technology

    2009-03-22

indirect effect (AIE) index determined from the slope of the fitted linear equation relating cloud particle size to aerosol optical depth is about a… The model simulations were performed for a 48-hour period, starting at 00Z on 29 March 2007, about 20 hours prior to ABL test flight time…

  10. The Parallel Decomposition of Linear Programs

    DTIC Science & Technology

    1989-11-01

The message length is 16 * (3 + 86) = 1424 bytes for all the test problems. Sending a message involves loading it into a buffer and copying the buffer into the proper… (3 + r); Primal Point and Ray: 16 * (3 + r); Dual Point or Ray: 8 * (4 + r) (Table 4.2: Message sizes). Subproblems have one mailbox for… model, i.e., to disaggregate. For instance, "dairy products" becomes milk, cheese, yogurt and ice cream. Adding complexity allows a model to give a more…

  11. Spatial structure, sampling design and scale in remotely-sensed imagery of a California savanna woodland

    NASA Technical Reports Server (NTRS)

    Mcgwire, K.; Friedl, M.; Estes, J. E.

    1993-01-01

    This article describes research related to sampling techniques for establishing linear relations between land surface parameters and remotely-sensed data. Predictive relations are estimated between percentage tree cover in a savanna environment and a normalized difference vegetation index (NDVI) derived from the Thematic Mapper sensor. Spatial autocorrelation in original measurements and regression residuals is examined using semi-variogram analysis at several spatial resolutions. Sampling schemes are then tested to examine the effects of autocorrelation on predictive linear models in cases of small sample sizes. Regression models between image and ground data are affected by the spatial resolution of analysis. Reducing the influence of spatial autocorrelation by enforcing minimum distances between samples may also improve empirical models which relate ground parameters to satellite data.
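The semi-variogram analysis mentioned above can be sketched for a 1-D transect. The toy series below is a random walk standing in for spatially autocorrelated NDVI samples, and the estimator is the standard empirical semivariance at each lag:

```python
import numpy as np

# A minimal empirical semivariogram for a 1-D transect of samples:
# gamma(h) = mean of 0.5 * (z(x) - z(x + h))^2 over all pairs at lag h.
# The data here are synthetic; the study used 2-D imagery, so this is a sketch.

rng = np.random.default_rng(0)
z = np.cumsum(rng.normal(size=200))   # spatially autocorrelated toy series

def semivariogram(z, max_lag):
    gamma = []
    for h in range(1, max_lag + 1):
        d = z[h:] - z[:-h]                 # all pairs separated by lag h
        gamma.append(0.5 * np.mean(d ** 2))
    return np.array(gamma)

gamma = semivariogram(z, max_lag=20)
# Semivariance grows with lag for autocorrelated data: nearby samples are
# similar, so enforcing minimum distances between samples yields more
# independent information, as the abstract suggests.
print(gamma[:3], gamma[-1])
```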

  12. Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image

    NASA Astrophysics Data System (ADS)

    Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren

    2012-01-01

The pose (position and attitude) and velocity of in-flight projectiles have a major influence on performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method uses only one linear-array image collected by a stereo vision system combining a digital line-scan camera and a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired using a model-based optimization algorithm. The algorithm achieves optimal estimation of the parameters by matching the stereo projection of the projectile with that of a same-sized 3D model. The speed and the AOA (angle of attack) can also be determined subsequently. Experiments were made to test the proposed method.

  13. Security and Cloud Outsourcing Framework for Economic Dispatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi

The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm of solving these issues consists of in-house high-performance computing infrastructures, which have the drawbacks of high capital expenditures, maintenance, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains: the highly confidential grid data are susceptible to potential cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and the problem structure, thus enabling secure outsourcing to the cloud. Results show that, for large grid test cases, the performance gain and costs outperform those of the in-house infrastructure.

  14. Security and Cloud Outsourcing Framework for Economic Dispatch

    DOE PAGES

    Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi; ...

    2017-04-24

The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm of solving these issues consists of in-house high-performance computing infrastructures, which have the drawbacks of high capital expenditures, maintenance, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains: the highly confidential grid data are susceptible to potential cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and the problem structure, thus enabling secure outsourcing to the cloud. Results show that, for large grid test cases, the performance gain and costs outperform those of the in-house infrastructure.

  15. [Effect of compaction pressure on the properties of dental machinable zirconia ceramic].

    PubMed

    Huang, Hui; Wei, Bin; Zhang, Fu-qiang; Sun, Jing; Gao, Lian

    2010-10-01

To investigate the effect of compaction pressure on the linear shrinkage, sintering properties and machinability of a dental zirconia ceramic. Nano-size zirconia powder was compacted at different isostatic pressures and sintered at different temperatures. The linear shrinkage of the sintered body was measured and the relative density was tested using the Archimedes method. The cylindrical surface of pre-sintered blanks was traversed using a hard metal tool. Surface and edge quality were checked visually using light stereo microscopy. The sintering behaviour depended on the compaction pressure. Increasing compaction pressure led to a higher sintering rate and a lower sintering temperature. Increasing compaction pressure also decreased the linear shrinkage of the sintered bodies, from 24.54% at 50 MPa to 20.9% at 400 MPa. Compaction pressure showed only a weak influence on the machinability of the zirconia blanks, but higher compaction pressures resulted in poorer surface quality. The best sintering properties and machinability of the dental zirconia ceramic were found for compaction pressures of 200-300 MPa.
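The reported linear shrinkages directly imply the oversize factor used when milling pre-sintered zirconia blanks, via the standard relation factor = 1 / (1 - shrinkage). The sketch below simply applies that formula to the two reported values:

```python
# A part milled oversize by 1 / (1 - linear shrinkage) reaches its target
# dimension after sintering. The factors below just apply that standard
# relation to the shrinkages reported in the abstract; nothing else is
# taken from the paper.

def enlargement_factor(linear_shrinkage: float) -> float:
    """Factor by which the green (pre-sintered) part must be milled oversize."""
    return 1.0 / (1.0 - linear_shrinkage)

for pressure_mpa, shrinkage in [(50, 0.2454), (400, 0.209)]:
    print(f"{pressure_mpa} MPa: shrinkage {shrinkage:.1%}, "
          f"mill oversize x{enlargement_factor(shrinkage):.3f}")
```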

  16. Adsorption of polypropylene from dilute solutions on a zeolite column packing.

    PubMed

    Macko, Tibor; Pasch, Harald; Denayer, Joeri F

    2005-01-01

    Faujasite type zeolite CBV-780 was tested as adsorbent for isotactic polypropylene by liquid chromatography. When cyclohexane, cyclohexanol, n-decanol, n-dodecanol, diphenylmethane, or methylcyclohexane was used as mobile phase, polypropylene was fully or partially retained within the column packing. This is the first series of sorbent-solvent systems to show a pronounced retention of isotactic polypropylene. According to the hydrodynamic volumes of polypropylene in solution, macromolecules of polypropylene should be fully excluded from the pore volume of the sorbent. Sizes of polypropylene macromolecules in linear conformations, however, correlate with the pore size of the column packing used. It is presumed that the polypropylene chains partially penetrate into the pores and are retained due to the high adsorption potential in the narrow pores.

  17. [Analysis of breath hydrogen (H2) in diagnosis of gastrointestinal function: validation of a pocket breath H2 test analyzer].

    PubMed

    Braden, B; Braden, C P; Klutz, M; Lembcke, B

    1993-04-01

Breath hydrogen (H2) analysis, as used in gastroenterologic function tests, requires a stationary analysis system equipped with a gas chromatograph or an electrochemical sensor cell. Now a portable breath H2 analyzer has been miniaturized to pocket size (104 mm x 62 mm x 29 mm). The application of this device in clinical practice was assessed in comparison to the standard GMI Exhaled monitor. The pocket analyzer showed a linear response to standards with H2 concentrations ranging from 0-100 ppm (n = 7), which was not different from that of the GMI apparatus. The correlation of the two methods during clinical application (lactose tolerance tests, mouth-to-cecum transit time determined with lactulose) was excellent (Y = 1.08 X + 0.96; r = 0.959). Using the new device, both the analysis time (3 s vs. 90 s) and the reset time (43 s vs. 140 s) were shorter, whereas calibration was more feasible with the GMI apparatus. It is concluded that the considerably cheaper pocket-sized breath H2 analyzer is as precise and sensitive as the GMI Exhaled monitor, and thus presents a valid alternative for H2 breath tests.

  18. Effect of canard location and size on canard-wing interference and aerodynamic center shift related to maneuvering aircraft at transonic speeds

    NASA Technical Reports Server (NTRS)

    Gloss, B. B.

    1974-01-01

    A generalized wind-tunnel model, typical of highly maneuverable aircraft, was tested in the Langley 8-foot transonic pressure tunnel at Mach numbers from 0.70 to 1.20 to determine the effects of canard location and size on canard-wing interference effects and aerodynamic center shift at transonic speeds. The canards had exposed areas of 16.0 and 28.0 percent of the wing reference area and were located in the chord plane of the wing or in a position 18.5 percent of the wing mean geometric chord above or below the wing chord plane. Two different wing planforms were tested, one with leading-edge sweep of 60 deg and the other 44 deg; both wings had the same reference area and span. The results indicated that the largest benefits in lift and drag were obtained with the canard above the wing chord plane for both wings tested. The low canard configuration for the 60 deg swept wing proved to be more stable and produced a more linear pitching-moment curve than the high and coplanar canard configurations for the subsonic test Mach numbers.

  19. Frequent baked egg ingestion was not associated with change in rate of decline in egg skin prick test in children with challenge confirmed egg allergy.

    PubMed

    Tey, D; Dharmage, S C; Robinson, M N; Allen, K J; Gurrin, L C; Tang, M L K

    2012-12-01

    It is controversial whether egg-allergic children should strictly avoid all forms of egg, or if regular ingestion of baked egg will either delay or hasten the resolution of egg allergy. This is the first study to examine the relationship between frequency of baked egg ingestion and rate of decline in egg skin prick test size in egg-allergic children. This was a retrospective clinical cohort study. All children with challenge-proven egg allergy who attended the Royal Children's Hospital Allergy Department 1996-2005 and had at least two egg skin prick tests performed in this period were included (n = 125). Frequency of baked egg ingestion was assessed by telephone questionnaire as follows: (a) frequent (> once per week), (b) regular (> once every 3 months, up to ≤ once per week) or (c) strict avoidance (≤ once every 3 months). The relationship between frequency of baked egg ingestion and rate of decline in egg skin prick test size was examined by multiple linear regression, adjusting for potential confounders. Mean rate of decline in egg skin prick test size in all children was 0.7 mm/year (95% CI 0.5-1.0 mm/year). There was no evidence (P = 0.57) that the rate of decline in egg skin prick test size differed between children who undertook frequent ingestion (n = 21, mean 0.4 mm/year, 95% CI -0.3-1.2 mm/year), regular ingestion (n = 37, mean 0.9 mm/year, 95% CI 0.4-1.4 mm/year) or strict avoidance (n = 67, mean 0.7 mm/year, 95% CI 0.4-1.1 mm/year) of baked egg. Compared with strict dietary avoidance, frequent consumption of baked egg was not associated with a different rate of decline in egg skin prick test size in egg-allergic children. Given that dietary restrictions can adversely impact on the family, it is reasonable to consider liberalizing baked egg in the diet of egg-allergic children. © 2012 Blackwell Publishing Ltd.

  20. Does raising type 1 error rate improve power to detect interactions in linear regression models? A simulation study.

    PubMed

    Durand, Casey P

    2013-01-01

    Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
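A miniature version of such a simulation, for a single continuous-by-dichotomous scenario, can be sketched as follows. The sample size, effect size, and replicate count are arbitrary choices, not the study's settings:

```python
import numpy as np
from scipy import stats

# A small Monte Carlo sketch in the spirit of the abstract (not its code):
# estimated power to detect a continuous-by-dichotomous interaction in OLS
# at two Type 1 error rates. All settings here are illustrative.

rng = np.random.default_rng(42)

def interaction_power(n=100, beta_int=0.3, alpha=0.05, reps=500):
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)                       # continuous predictor
        g = rng.integers(0, 2, size=n).astype(float) # dichotomous predictor
        y = 0.5 * x + 0.5 * g + beta_int * x * g + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x, g, x * g])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        dof = n - X.shape[1]
        sigma2 = res[0] / dof
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[3, 3])
        p = 2 * stats.t.sf(abs(beta[3] / se), dof)   # t-test on interaction
        hits += p < alpha
    return hits / reps

p05 = interaction_power(alpha=0.05)
p10 = interaction_power(alpha=0.10)
# Raising alpha raises power, but also the false-positive rate, which is
# the trade-off the abstract quantifies.
print(f"power at alpha=0.05: {p05:.2f}, at alpha=0.10: {p10:.2f}")
```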

  1. Characterization of MOSkin detector for in vivo skin dose measurement during megavoltage radiotherapy

    PubMed Central

    Jong, Wei Loong; Wong, Jeannie Hsiu Ding; Ng, Kwan Hoong; Ho, Gwo Fuang; Cutajar, Dean L.; Rosenfeld, Anatoly B.

    2014-01-01

    In vivo dosimetry is important during radiotherapy to ensure the accuracy of the dose delivered to the treatment volume. A dosimeter should be characterized based on its application before it is used for in vivo dosimetry. In this study, we characterize a new MOSFET‐based detector, the MOSkin detector, on surface for in vivo skin dosimetry. The advantages of the MOSkin detector are its water equivalent depth of measurement of 0.07 mm, small physical size with submicron dosimetric volume, and the ability to provide real‐time readout. A MOSkin detector was calibrated and the reproducibility, linearity, and response over a large dose range to different threshold voltages were determined. Surface dose on solid water phantom was measured using MOSkin detector and compared with Markus ionization chamber and GAFCHROMIC EBT2 film measurements. Dependence in the response of the MOSkin detector on the surface of solid water phantom was also tested for different (i) source to surface distances (SSDs); (ii) field sizes; (iii) surface dose; (iv) radiation incident angles; and (v) wedges. The MOSkin detector showed excellent reproducibility and linearity for dose range of 50 cGy to 300 cGy. The MOSkin detector showed reliable response to different SSDs, field sizes, surface, radiation incident angles, and wedges. The MOSkin detector is suitable for in vivo skin dosimetry. PACS number: 87.55.Qr PMID:25207573

  2. Integration of an Autopilot for a Micro Air Vehicle

    NASA Technical Reports Server (NTRS)

    Platanitis, George; Shkarayev, Sergey

    2005-01-01

Two autopilots providing autonomous flight capabilities are presented herein. The first is the Pico-Pilot, demonstrated for the 12-inch size class of micro air vehicles. The second is the MicroPilot MP2028(sup g), whose integration into a 36-inch Zagi airframe (tailless, elevons-only configuration) is investigated and is the main focus of the report. Analytical methods, which include the use of the Advanced Aircraft Analysis software from DARCorp, were used to determine the stability and control derivatives, which were then validated through wind tunnel experiments. From the aerodynamic data, the linear, perturbed equations of motion about steady-state flight conditions may be cast in terms of these derivatives. Using these linear equations, transfer functions for the control and navigation systems were developed, and feedback control laws based on Proportional, Integral, and Derivative (PID) control design were developed to control the aircraft. The PID gains may then be programmed into the autopilot software and uploaded to the microprocessor of the autopilot. The Pico-Pilot system was flight tested and shown to be successful in navigating a 12-inch MAV through a course defined by a number of waypoints with a high degree of accuracy, and in 20 mph winds. The system, though, showed problems with control authority in the roll and pitch motion of the aircraft, causing oscillations in these directions, but the aircraft maintained its heading while following the prescribed course. Flight tests were performed in remote control mode to evaluate handling, adjust trim, and test data logging for the Zagi with the integrated MP2028(sup g). Ground testing was performed to test GPS acquisition, data logging, and control response in autonomous mode. Technical difficulties and integration limitations with the autopilot prevented fully autonomous flight from taking place, but the integration methodologies developed for this autopilot are, in general, applicable to unmanned air vehicles within the 36-inch size class or larger that use a PID-based autopilot.
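A generic discrete PID loop of the kind such gains would feed can be sketched as below. The gains, time step, and toy roll dynamics are hypothetical, not the MP2028 implementation:

```python
# Generic discrete PID controller; gains and the toy roll plant are
# illustrative assumptions, not values from the report or the MP2028.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order roll dynamics: roll rate responds to the command with damping.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.02)
roll = 0.0
for _ in range(500):                   # 10 s of simulated flight at 50 Hz
    cmd = pid.update(setpoint=10.0, measured=roll)
    roll += (cmd - 0.5 * roll) * 0.02  # crude roll plant
print(f"roll after 10 s: {roll:.2f} deg")
```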

  3. Measurement of in-situ strength using projectile penetration: Tests of a new launching system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hearst, J.R.; Newmark, R.L.; Charest, J.A.

    1987-10-01

The Lawrence Livermore National Laboratory has a continuing need to measure rock strength in situ, both for simple prediction of cavity size and as input to computational models. In a previous report we compared two methods for measuring formation strength in situ: projectile penetration and a cone penetrometer. We determined that the projectile method was more promising for application to our large-diameter (2-4-m) hole environment. A major practical problem has been the development of a launcher and an apparatus for measuring depth of penetration that would be suitable for use in large-diameter holes. We are developing a gas-gun launcher system that will be capable of measuring both depth of penetration and deceleration of a reusable projectile. The current version of the launcher is trailer-mounted for testing at our Nevada Test Site (NTS) in tunnels and outcrops, but its design is such that it can be readily adapted for emplacement-hole use. We tested the current launcher on 60-cm cubes of gypsum cement, mixed to provide a range of densities (1.64 to 2.0 g/cc) and strengths (3 to 17 MPa). We compared the depth of penetration of an 84-g projectile from a ''Betsy'' seismic gun, traveling on the order of 500 m/s, with the depth of penetration of a 13-kg projectile from the gas gun, traveling on the order of 30 m/s. For projectiles with the same nose size and shape, impacting targets of approximately constant strength, penetration depth was proportional to projectile kinetic energy. The ratio of kinetic energy to penetration depth was approximately proportional to target strength. Tests in tuffs with a wide range of strengths at NTS gave a similar linear relationship between the ratio of kinetic energy to penetration and target strength, and also a linear relationship between deceleration and strength. It appears that penetration can indeed be used as a semiquantitative measure of strength.
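The report's empirical rule, penetration depth proportional to kinetic energy and the energy-to-depth ratio proportional to strength, can be written down directly. The proportionality constant k below is hypothetical, while the projectile masses and speeds are the reported ones:

```python
# The empirical rule from the abstract: depth ∝ KE, and KE/depth ∝ strength,
# i.e. depth = k * KE / strength. The constant k is a hypothetical fit
# parameter; the report does not give one.

def kinetic_energy(mass_kg: float, velocity_ms: float) -> float:
    return 0.5 * mass_kg * velocity_ms ** 2

def penetration_depth(mass_kg, velocity_ms, strength_pa, k=1.0e-4):
    """depth = k * KE / strength (k is an illustrative constant)."""
    return k * kinetic_energy(mass_kg, velocity_ms) / strength_pa

# The two launchers from the report: Betsy seismic gun vs. low-speed gas gun.
ke_betsy = kinetic_energy(0.084, 500.0)   # 84-g projectile at ~500 m/s
ke_gas = kinetic_energy(13.0, 30.0)       # 13-kg projectile at ~30 m/s
print(f"Betsy KE = {ke_betsy:.0f} J, gas-gun KE = {ke_gas:.0f} J")
```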

  4. Women Build Long Bones With Less Cortical Mass Relative to Body Size and Bone Size Compared With Men.

    PubMed

    Jepsen, Karl J; Bigelow, Erin M R; Schlecht, Stephen H

    2015-08-01

    The twofold greater lifetime risk of fracturing a bone for white women compared with white men and black women has been attributed in part to differences in how the skeletal system accumulates bone mass during growth. On average, women build more slender long bones with less cortical area compared with men. Although slender bones are known to have a naturally lower cortical area compared with wider bones, it remains unclear whether the relatively lower cortical area of women is consistent with their increased slenderness or is reduced beyond that expected for the sex-specific differences in bone size and body size. Whether this sexual dimorphism is consistent with ethnic background and is recapitulated in the widely used mouse model also remains unclear. We asked (1) do black women build bones with reduced cortical area compared with black men; (2) do white women build bones with reduced cortical area compared with white men; and (3) do female mice build bones with reduced cortical area compared with male mice? Bone strength and cross-sectional morphology of adult human and mouse bone were calculated from quantitative CT images of the femoral midshaft. The data were tested for normality and regression analyses were used to test for differences in cortical area between men and women after adjusting for body size and bone size by general linear model (GLM). Linear regression analysis showed that the femurs of black women had 11% lower cortical area compared with those of black men after adjusting for body size and bone size (women: mean=357.7 mm2; 95% confidence interval [CI], 347.9-367.5 mm2; men: mean=400.1 mm2; 95% CI, 391.5-408.7 mm2; effect size=1.2; p<0.001, GLM). Likewise, the femurs of white women had 12% less cortical area compared with those of white men after adjusting for body size and bone size (women: mean=350.1 mm2; 95% CI, 340.4-359.8 mm2; men: mean=394.3 mm2; 95% CI, 386.5-402.1 mm2; effect size=1.3; p<0.001, GLM). 
In contrast, female and male femora from recombinant inbred mouse strains showed the opposite trend; femurs from female mice had a 4% larger cortical area compared with those of male mice after adjusting for body size and bone size (female: mean=0.73 mm2; 95% CI, 0.71-0.74 mm2; male: mean=0.70 mm2; 95% CI, 0.68-0.71 mm2; effect size=0.74; p=0.04, GLM). Female femurs are not simply a more slender version of male femurs. Women acquire substantially less mass (cortical area) for their body size and bone size compared with men. Our analysis questions whether mouse long bone is a suitable model to study human sexual dimorphism. Identifying differences in the way bones are constructed may be clinically important for developing sex-specific diagnostics and treatment strategies to reduce fragility fractures.

  5. Using nonlinear quantile regression to estimate the self-thinning boundary curve

    Treesearch

    Quang V. Cao; Thomas J. Dean

    2015-01-01

    The relationship between tree size (quadratic mean diameter) and tree density (number of trees per unit area) has been a topic of research and discussion for many decades. Starting with Reineke in 1933, the maximum size-density relationship, on a log-log scale, has been assumed to be linear. Several techniques, including linear quantile regression, have been employed...
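The maximum size-density line mentioned above is conventionally written as log10(N) = k + b·log10(Dq), with Reineke's slope b near -1.605. A minimal sketch of the quantile-regression idea on synthetic data (the slope, boundary intercept, and point cloud below are illustrative assumptions, not values from the paper): with the slope held fixed, the pinball-loss-optimal intercept is simply a high quantile of the partial residuals, which places the fitted line near the upper boundary of the data.

```python
import random

random.seed(0)
REINEKE_SLOPE = -1.605  # assumed maximum size-density slope on the log-log scale

# synthetic stands (illustrative only): points scattered below an assumed
# self-thinning boundary log10(N) = 4.6 + REINEKE_SLOPE * log10(Dq)
logD = [random.uniform(0.8, 1.6) for _ in range(500)]
logN = [4.6 + REINEKE_SLOPE * d - random.uniform(0.0, 1.0) for d in logD]

def empirical_quantile(xs, tau):
    """tau-quantile by sorting; the minimiser of the pinball loss for a constant."""
    s = sorted(xs)
    return s[min(int(tau * len(s)), len(s) - 1)]

# with the slope fixed, the pinball-optimal intercept of the boundary line
# is the tau-quantile of the partial residuals logN - slope * logD
tau = 0.99
intercept = empirical_quantile(
    [n - REINEKE_SLOPE * d for d, n in zip(logD, logN)], tau)
```

For the synthetic cloud above the recovered intercept sits just below the assumed boundary value of 4.6, as expected for a 0.99 quantile.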

  6. Sensitivity to mental effort and test-retest reliability of heart rate variability measures in healthy seniors.

    PubMed

    Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P; Oken, Barry S

    2011-10-01

    To determine (1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and (2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches, were useful to study such effects. Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings 2 weeks later. Traditional and 13 non-linear indices of HRV, including Poincaré, entropy and detrended fluctuation analysis (DFA), were determined. Time domain (especially mean R-R interval, RRI), frequency domain and, among non-linear parameters, Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré were also the most sensitive to different mental effort task loads and had the largest effect size. Overall, linear measures were the most sensitive and reliable indices of mental effort. Among non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
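The Poincaré descriptors mentioned above are conventionally computed as SD1 and SD2, the dispersions of the lag-1 return map perpendicular to and along the identity line. A minimal sketch on made-up R-R intervals (the values are illustrative, not study data):

```python
import math

def poincare_sd1_sd2(rri):
    """Standard Poincaré descriptors from successive R-R intervals (ms).
    SD1 reflects short-term variability, SD2 long-term variability."""
    x, y = rri[:-1], rri[1:]
    diffs = [b - a for a, b in zip(x, y)]
    sums = [b + a for a, b in zip(x, y)]

    def sd(v):
        m = sum(v) / len(v)
        return math.sqrt(sum((u - m) ** 2 for u in v) / (len(v) - 1))

    sd1 = sd(diffs) / math.sqrt(2)  # spread perpendicular to the identity line
    sd2 = sd(sums) / math.sqrt(2)   # spread along the identity line
    return sd1, sd2

sd1, sd2 = poincare_sd1_sd2([812, 790, 804, 776, 820, 795, 808, 783])
```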

  7. Design and Operation of a 4kW Linear Motor Driven Pulse Tube Cryocooler

    NASA Astrophysics Data System (ADS)

    Zia, J. H.

    2004-06-01

    A 4 kW electrical input linear motor driven pulse tube cryocooler has been successfully designed, built and tested. The optimum operating frequency is 60 Hz with a design refrigeration of >200 W at 80 K. The design exercise involved modeling and optimization in the DeltaE software. Load matching between the cold head and linear motor was achieved by careful sizing of the transfer tube. The cryocooler makes use of a dual orifice inertance network and a single compliance tank for phase optimization and streaming suppression in the pulse tube. The in-line cold head design is modular in structure for convenient change-out and re-assembly of various components. The regenerator consists of layers of two different grades of wire mesh. The linear motor is a clearance-seal, dual opposed piston design from CFIC Inc. Initial results have demonstrated the refrigeration target of 200 W by liquefying nitrogen from ambient temperature and pressure. Overall Carnot efficiencies of 13% have been achieved and efforts to further improve efficiency are underway. Linear motor efficiencies of up to 84% have been observed. Experimental results have shown satisfactory agreement with model predictions, although the effects of streaming were not part of the model. Refrigeration loss due to streaming was minimal at the design operating conditions of 80 K.

  8. Sensitivity to Mental Effort and Test-Retest Reliability of Heart Rate Variability Measures in Healthy Seniors

    PubMed Central

    Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P.; Oken, Barry S.

    2011-01-01

    Objectives To determine 1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and 2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches, were useful to study such effects. Methods Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings two weeks later. Traditional and 13 non-linear indices of HRV including Poincaré, entropy and detrended fluctuation analysis (DFA) were determined. Results Time domain (especially mean R-R interval/RRI), frequency domain and, among non-linear parameters, Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré were also the most sensitive to different mental effort task loads and had the largest effect size. Conclusions Overall, linear measures were the most sensitive and reliable indices of mental effort. In non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. Significance A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive. PMID:21459665

  9. Estimating and testing interactions when explanatory variables are subject to non-classical measurement error.

    PubMed

    Murad, Havi; Kipnis, Victor; Freedman, Laurence S

    2016-10-01

    Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.

  10. Fracture behavior of hybrid composite laminates

    NASA Technical Reports Server (NTRS)

    Kennedy, J. M.

    1983-01-01

    The tensile fracture behavior of 15 center-notched hybrid laminates was studied. Three basic laminate groups were tested: (1) a baseline group with graphite/epoxy plies, (2) a group with the same stacking sequence but where the zero-deg plies were one or two plies of S-glass or Kevlar, and (3) a group with graphite plies but where the zero-deg plies were sandwiched between layers of perforated Mylar. Specimens were loaded linearly with time; load, far field strain, and crack opening displacement (COD) were monitored. The loading was stopped periodically and the notched region was radiographed to reveal the extent and type of damage (failure progression). Results of the tests showed that the hybrid laminates had higher fracture toughnesses than comparable all-graphite laminates. The higher fracture toughness was due primarily to the larger damage region at the ends of the slit; delamination and splitting lowered the stress concentration in the primary load-carrying plies. A linear elastic fracture analysis, which ignored delamination and splitting, underestimated the fracture toughness. For almost all of the laminates, the tests showed that the fracture toughness increased with crack length. The size of the damage region at the ends of the slit and COD measurements also increased with crack length.

  11. Comparison of statistical models to estimate parasite growth rate in the induced blood stage malaria model.

    PubMed

    Wockner, Leesa F; Hoffmann, Isabell; O'Rourke, Peter; McCarthy, James S; Marquart, Louise

    2017-08-25

    The efficacy of vaccines aimed at inhibiting the growth of malaria parasites in the blood can be assessed by comparing the growth rate of parasitaemia in the blood of subjects treated with a test vaccine compared to controls. In studies using induced blood stage malaria (IBSM), a type of controlled human malaria infection, parasite growth rate has been measured using models with the intercept on the y-axis fixed to the inoculum size. A set of statistical models was evaluated to determine an optimal methodology to estimate parasite growth rate in IBSM studies. Parasite growth rates were estimated using data from 40 subjects published in three IBSM studies. Data were fitted using 12 statistical models: log-linear, sine-wave with the period either fixed to 48 h or not fixed; these models were fitted with the intercept either fixed to the inoculum size or not fixed. All models were fitted by individual, and overall by study using a mixed effects model with a random effect for the individual. Log-linear models and sine-wave models, with the period fixed or not fixed, resulted in similar parasite growth rate estimates (within 0.05 log10 parasites per mL/day). Average parasite growth rate estimates for models fitted by individual with the intercept fixed to the inoculum size were substantially lower by an average of 0.17 log10 parasites per mL/day (range 0.06-0.24) compared with non-fixed intercept models. Variability of parasite growth rate estimates across the three studies analysed was substantially higher (3.5 times) for fixed-intercept models compared with non-fixed intercept models. The same tendency was observed in models fitted overall by study. Modelling data by individual or overall by study had minimal effect on parasite growth estimates.
The most appropriate statistical model to estimate the growth rate of blood-stage parasites in IBSM studies appears to be a log-linear model fitted by individual and with the intercept estimated in the log-linear regression. Future studies should use this model to estimate parasite growth rates.
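The recommended free-intercept log-linear fit can be sketched with ordinary least squares on log10 parasitaemia versus time, with the intercept estimated rather than pinned to the inoculum size. The readings below are invented for illustration, not data from the study:

```python
import math

# hypothetical qPCR parasitaemia readings (parasites/mL) on days post-inoculation
days = [4, 5, 6, 7, 8]
density = [300, 1500, 8000, 35000, 180000]

logy = [math.log10(v) for v in density]
n = len(days)
mx = sum(days) / n
my = sum(logy) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(days, logy))
         / sum((x - mx) ** 2 for x in days))
intercept = my - slope * mx  # estimated freely, not fixed to the inoculum size
# slope is the growth rate in log10 parasites per mL per day
```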

  12. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    DOE PAGES

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  13. Efficient Algorithm for Locating and Sizing Series Compensation Devices in Large Transmission Grids: Solutions and Applications (PART II)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael

    2014-01-14

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  14. The non-linear power spectrum of the Lyman alpha forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arinyo-i-Prats, Andreu; Miralda-Escudé, Jordi; Viel, Matteo

    2015-12-01

    The Lyman alpha forest power spectrum has been measured on large scales by the BOSS survey in SDSS-III at z ∼ 2.3, has been shown to agree well with linear theory predictions, and has provided the first measurement of Baryon Acoustic Oscillations at this redshift. However, the power at small scales, affected by non-linearities, has not been well examined so far. We present results from a variety of hydrodynamic simulations to predict the redshift space non-linear power spectrum of the Lyα transmission for several models, testing the dependence on resolution and box size. A new fitting formula is introduced to facilitate the comparison of our simulation results with observations and other simulations. The non-linear power spectrum has a generic shape determined by a transition scale from linear to non-linear anisotropy, and a Jeans scale below which the power drops rapidly. In addition, we predict the two linear bias factors of the Lyα forest and provide a better physical interpretation of their values and redshift evolution. The dependence of these bias factors and the non-linear power on the amplitude and slope of the primordial fluctuations power spectrum, the temperature-density relation of the intergalactic medium, and the mean Lyα transmission, as well as the redshift evolution, is investigated and discussed in detail. A preliminary comparison to the observations shows that the predicted redshift distortion parameter is in good agreement with the recent determination of Blomqvist et al., but the density bias factor is lower than observed. We make all our results publicly available in the form of tables of the non-linear power spectrum that is directly obtained from all our simulations, and parameters of our fitting formula.

  15. The effect of motorcycle helmet fit on estimating head impact kinematics from residual liner crush.

    PubMed

    Bonin, Stephanie J; Gardiner, John C; Onar-Thomas, Arzu; Asfour, Shihab S; Siegmund, Gunter P

    2017-09-01

    Proper helmet fit is important for optimizing head protection during an impact, yet many motorcyclists wear helmets that do not properly fit their heads. The goals of this study are i) to quantify how a mismatch in headform size and motorcycle helmet size affects headform peak acceleration and head injury criteria (HIC), and ii) to determine if peak acceleration, HIC, and impact speed can be estimated from the foam liner's maximum residual crush depth or residual crush volume. Shorty-style helmets (4 sizes of a single model) were tested on instrumented headforms (4 sizes) during linear impacts between 2.0 and 10.5 m/s to the forehead region. Helmets were CT scanned to quantify residual crush depth and volume. Separate linear regression models were used to quantify how the response variables (peak acceleration (g), HIC, and impact speed (m/s)) were related to the predictor variables (maximum crush depth (mm), crush volume (cm3), and the difference in circumference between the helmet and headform (cm)). Overall, we found that increasingly oversized helmets reduced peak headform acceleration and HIC for a given impact speed for maximum residual crush depths less than 7.9 mm and residual crush volumes less than 40 cm3. Below these levels of residual crush, we found that peak headform acceleration, HIC, and impact speed can be estimated from a helmet's residual crush. Above these crush thresholds, large variations in headform kinematics are present, possibly related to densification of the foam liner during the impact. Copyright © 2017 Elsevier Ltd. All rights reserved.
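The HIC referenced above has a standard definition: the maximum over windows [t1, t2] of (t2 - t1)·[(1/(t2 - t1))∫a dt]^2.5, with the window capped at 15 ms for HIC15. A minimal sketch on a hypothetical half-sine impact pulse (the pulse amplitude, duration, and sampling below are illustrative assumptions, not test data from the study):

```python
import math

def hic(times, accel_g, max_window=0.015):
    """Head Injury Criterion from an acceleration trace (in g) sampled at
    `times` (in seconds); max_window=0.015 gives HIC15."""
    # cumulative trapezoidal integral of a(t)
    cum = [0.0]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        cum.append(cum[-1] + 0.5 * (accel_g[i] + accel_g[i - 1]) * dt)
    best = 0.0
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            window = times[j] - times[i]
            if window > max_window:
                break
            avg = (cum[j] - cum[i]) / window  # mean acceleration over window
            best = max(best, window * avg ** 2.5)
    return best

# hypothetical half-sine impact pulse: 100 g peak lasting 10 ms
ts = [k * 0.0005 for k in range(21)]
accel = [100.0 * math.sin(math.pi * t / 0.010) for t in ts]
score = hic(ts, accel)
```

The brute-force window search is quadratic in the number of samples, which is fine at these trace lengths; production implementations typically restrict the search or vectorise it.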

  16. Revision of the Malagasy Camponotus edmondi species group (Hymenoptera, Formicidae, Formicinae): integrating qualitative morphology and multivariate morphometric analysis.

    PubMed

    Rakotonirina, Jean Claude; Csősz, Sándor; Fisher, Brian L

    2016-01-01

    The Malagasy Camponotus edmondi species group is revised based on both qualitative morphological traits and multivariate analysis of continuous morphometric data. To minimize the effect of the scaling properties of diverse traits due to worker caste polymorphism, and to achieve the desired near-linearity of data, morphometric analyses were done only on minor workers. The majority of traits exhibit broken scaling on head size, dividing Camponotus workers into two discrete subcastes, minors and majors. This broken scaling prevents the application of algorithms that use linear combinations of data to the entire dataset, hence only minor workers were analyzed statistically. The elimination of major workers resulted in linearity, and the data met the required assumptions. However, morphometric ratios for the subsets of minor and major workers were used in species descriptions and redefinitions. Prior species hypotheses and the goodness of clusters were tested on raw data by confirmatory linear discriminant analysis. Due to the small sample size available for some species, a factor known to reduce statistical reliability, hypotheses generated by exploratory analyses were tested with extreme care and species delimitations were inferred via the combined evidence of both qualitative (morphology and biology) and quantitative data. Altogether, fifteen species are recognized, of which 11 are new to science: Camponotus alamaina sp. n., Camponotus androy sp. n., Camponotus bevohitra sp. n., Camponotus galoko sp. n., Camponotus matsilo sp. n., Camponotus mifaka sp. n., Camponotus orombe sp. n., Camponotus tafo sp. n., Camponotus tratra sp. n., Camponotus varatra sp. n., and Camponotus zavo sp. n. Four species are redescribed: Camponotus echinoploides Forel, Camponotus edmondi André, Camponotus ethicus Forel, and Camponotus robustus Roger. Camponotus edmondi ernesti Forel, syn. n. is synonymized under Camponotus edmondi.
This revision also includes an identification key to species for both minor and major castes, information on geographic distribution and biology, taxonomic discussions, and descriptions of intraspecific variation. Traditional taxonomy and multivariate morphometric analysis are independent sources of information which, in combination, allow more precise species delimitation. Moreover, quantitative characters included in identification keys improve accuracy of determination in difficult cases.

  17. Revision of the Malagasy Camponotus edmondi species group (Hymenoptera, Formicidae, Formicinae): integrating qualitative morphology and multivariate morphometric analysis

    PubMed Central

    Rakotonirina, Jean Claude; Csősz, Sándor; Fisher, Brian L.

    2016-01-01

    Abstract The Malagasy Camponotus edmondi species group is revised based on both qualitative morphological traits and multivariate analysis of continuous morphometric data. To minimize the effect of the scaling properties of diverse traits due to worker caste polymorphism, and to achieve the desired near-linearity of data, morphometric analyses were done only on minor workers. The majority of traits exhibit broken scaling on head size, dividing Camponotus workers into two discrete subcastes, minors and majors. This broken scaling prevents the application of algorithms that use linear combinations of data to the entire dataset, hence only minor workers were analyzed statistically. The elimination of major workers resulted in linearity, and the data met the required assumptions. However, morphometric ratios for the subsets of minor and major workers were used in species descriptions and redefinitions. Prior species hypotheses and the goodness of clusters were tested on raw data by confirmatory linear discriminant analysis. Due to the small sample size available for some species, a factor known to reduce statistical reliability, hypotheses generated by exploratory analyses were tested with extreme care and species delimitations were inferred via the combined evidence of both qualitative (morphology and biology) and quantitative data. Altogether, fifteen species are recognized, of which 11 are new to science: Camponotus alamaina sp. n., Camponotus androy sp. n., Camponotus bevohitra sp. n., Camponotus galoko sp. n., Camponotus matsilo sp. n., Camponotus mifaka sp. n., Camponotus orombe sp. n., Camponotus tafo sp. n., Camponotus tratra sp. n., Camponotus varatra sp. n., and Camponotus zavo sp. n. Four species are redescribed: Camponotus echinoploides Forel, Camponotus edmondi André, Camponotus ethicus Forel, and Camponotus robustus Roger. Camponotus edmondi ernesti Forel, syn. n. is synonymized under Camponotus edmondi.
This revision also includes an identification key to species for both minor and major castes, information on geographic distribution and biology, taxonomic discussions, and descriptions of intraspecific variation. Traditional taxonomy and multivariate morphometric analysis are independent sources of information which, in combination, allow more precise species delimitation. Moreover, quantitative characters included in identification keys improve accuracy of determination in difficult cases. PMID:28050160

  18. A universal approximation to grain size from images of non-cohesive sediment

    USGS Publications Warehouse

    Buscombe, D.; Rubin, D.M.; Warrick, J.A.

    2010-01-01

    The two-dimensional spectral decomposition of an image of sediment provides a direct statistical estimate, grid-by-number style, of the mean of all intermediate axes of all single particles within the image. We develop and test this new method which, unlike existing techniques, requires neither image processing algorithms for detection and measurement of individual grains, nor calibration. The only information required of the operator is the spatial resolution of the image. The method is tested with images of bed sediment from nine different sedimentary environments (five beaches, three rivers, and one continental shelf), across the range 0.1 mm to 150 mm, taken in air and underwater. Each population was photographed using a different camera and lighting conditions. We term it a “universal approximation” because it has produced accurate estimates for all populations we have tested it with, without calibration. We use three approaches (theory, computational experiments, and physical experiments) to both understand and explore the sensitivities and limits of this new method. Based on 443 samples, the root-mean-squared (RMS) error between size estimates from the new method and known mean grain size (obtained from point counts on the image) was found to be ±≈16%, with a 95% probability of estimates within ±31% of the true mean grain size (measured in a linear scale). The RMS error reduces to ≈11%, with a 95% probability of estimates within ±20% of the true mean grain size if point counts from a few images are used to correct bias for a specific population of sediment images. It thus appears it is transferable between sedimentary populations with different grain size, but factors such as particle shape and packing may introduce bias which may need to be calibrated for. For the first time, an attempt has been made to mathematically relate the spatial distribution of pixel intensity within the image of sediment to the grain size.

  19. Hematoma Shape, Hematoma Size, Glasgow Coma Scale Score and ICH Score: Which Predicts the 30-Day Mortality Better for Intracerebral Hematoma?

    PubMed Central

    Wang, Chih-Wei; Liu, Yi-Jui; Lee, Yi-Hsiung; Hueng, Dueng-Yuan; Fan, Hueng-Chuen; Yang, Fu-Chi; Hsueh, Chun-Jen; Kao, Hung-Wen; Juan, Chun-Jung; Hsu, Hsian-He

    2014-01-01

    Purpose To investigate the performance of hematoma shape, hematoma size, Glasgow coma scale (GCS) score, and intracerebral hematoma (ICH) score in predicting the 30-day mortality for ICH patients. To examine the influence of the estimation error of hematoma size on the prediction of 30-day mortality. Materials and Methods This retrospective study, approved by a local institutional review board with written informed consent waived, recruited 106 patients diagnosed as ICH by non-enhanced computed tomography study. The hemorrhagic shape, hematoma size measured by computer-assisted volumetric analysis (CAVA) and estimated by ABC/2 formula, ICH score and GCS score was examined. The predicting performance of 30-day mortality of the aforementioned variables was evaluated. Statistical analysis was performed using Kolmogorov-Smirnov tests, paired t test, nonparametric test, linear regression analysis, and binary logistic regression. The receiver operating characteristics curves were plotted and areas under curve (AUC) were calculated for 30-day mortality. A P value less than 0.05 was considered as statistically significant. Results The overall 30-day mortality rate was 15.1% of ICH patients. The hematoma shape, hematoma size, ICH score, and GCS score all significantly predict the 30-day mortality for ICH patients, with an AUC of 0.692 (P = 0.0018), 0.715 (P = 0.0008) (by ABC/2) to 0.738 (P = 0.0002) (by CAVA), 0.877 (P<0.0001) (by ABC/2) to 0.882 (P<0.0001) (by CAVA), and 0.912 (P<0.0001), respectively. Conclusion Our study shows that hematoma shape, hematoma size, ICH scores and GCS score all significantly predict the 30-day mortality in an increasing order of AUC. The effect of overestimation of hematoma size by ABC/2 formula in predicting the 30-day mortality could be remedied by using ICH score. PMID:25029592
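The ABC/2 estimate compared against computer-assisted volumetry above is the standard ellipsoid approximation: half the product of three orthogonal hemorrhage diameters. A minimal sketch (the measurements below are hypothetical):

```python
def abc_over_2(a_cm, b_cm, n_slices, slice_thickness_cm):
    """ABC/2 ellipsoid approximation of hematoma volume in mL.

    A: largest axial hemorrhage diameter (cm); B: largest diameter
    perpendicular to A on the same slice (cm); C: number of axial slices
    showing hemorrhage multiplied by the slice thickness (cm)."""
    c_cm = n_slices * slice_thickness_cm
    return a_cm * b_cm * c_cm / 2.0

# hypothetical measurements: a 4 x 3 cm bleed visible on six 5-mm slices
vol_ml = abc_over_2(4.0, 3.0, 6, 0.5)
```

Because the formula assumes an ellipsoidal bleed, it tends to overestimate volume for irregular hematomas, which is consistent with the overestimation effect the abstract reports.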

  20. A universal approximation of grain size from images of noncohesive sediment

    NASA Astrophysics Data System (ADS)

    Buscombe, D.; Rubin, D. M.; Warrick, J. A.

    2010-06-01

    The two-dimensional spectral decomposition of an image of sediment provides a direct statistical estimate, grid-by-number style, of the mean of all intermediate axes of all single particles within the image. We develop and test this new method which, unlike existing techniques, requires neither image processing algorithms for detection and measurement of individual grains, nor calibration. The only information required of the operator is the spatial resolution of the image. The method is tested with images of bed sediment from nine different sedimentary environments (five beaches, three rivers, and one continental shelf), across the range 0.1 mm to 150 mm, taken in air and underwater. Each population was photographed using a different camera and lighting conditions. We term it a "universal approximation" because it has produced accurate estimates for all populations we have tested it with, without calibration. We use three approaches (theory, computational experiments, and physical experiments) to both understand and explore the sensitivities and limits of this new method. Based on 443 samples, the root-mean-squared (RMS) error between size estimates from the new method and known mean grain size (obtained from point counts on the image) was found to be ±≈16%, with a 95% probability of estimates within ±31% of the true mean grain size (measured in a linear scale). The RMS error reduces to ≈11%, with a 95% probability of estimates within ±20% of the true mean grain size if point counts from a few images are used to correct bias for a specific population of sediment images. It thus appears it is transferable between sedimentary populations with different grain size, but factors such as particle shape and packing may introduce bias which may need to be calibrated for. For the first time, an attempt has been made to mathematically relate the spatial distribution of pixel intensity within the image of sediment to the grain size.

  1. A cluster randomized control field trial of the ABRACADABRA web-based reading technology: replication and extension of basic findings

    PubMed Central

    Piquette, Noella A.; Savage, Robert S.; Abrami, Philip C.

    2014-01-01

    The present paper reports a cluster randomized control trial evaluation of teaching using ABRACADABRA (ABRA), an evidence-based and web-based literacy intervention (http://abralite.concordia.ca) with 107 kindergarten and 96 grade 1 children in 24 classes (12 intervention, 12 control classes) from all 12 elementary schools in one school district in Canada. Children in the intervention condition received 10–12 h of whole class instruction using ABRA between pre- and post-test. Hierarchical linear modeling of post-test results showed significant gains in letter-sound knowledge for intervention classrooms over control classrooms. In addition, medium effect sizes were evident for three of five outcome measures favoring the intervention: letter-sound knowledge (d = +0.66), phonological blending (d = +0.52), and word reading (d = +0.52), over effect sizes for regular teaching. It is concluded that regular teaching with ABRA technology adds significantly to literacy in the early elementary years. PMID:25538663
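The effect sizes reported above are Cohen's d values. A minimal sketch of the pooled-standard-deviation form on invented scores (the two groups below are illustrative, not trial data):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# hypothetical post-test scores: intervention vs control
d = cohens_d([14, 15, 13, 16, 15], [12, 13, 11, 12, 13])
```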

  2. Relationships between otolith size and fish length in some mesopelagic teleosts (Myctophidae, Paralepididae, Phosichthyidae and Stomiidae).

    PubMed

    Battaglia, P; Malara, D; Ammendolia, G; Romeo, T; Andaloro, F

    2015-09-01

    Length-mass relationships and linear regressions are given for otolith size (length and height) and standard length (LS) of certain mesopelagic fishes (Myctophidae, Paralepididae, Phosichthyidae and Stomiidae) living in the central Mediterranean Sea. The length-mass relationship showed isometric growth in six species, whereas linear regressions of LS and otolith size fit the data well for all species. These equations represent a useful tool for dietary studies on Mediterranean marine predators. © 2015 The Fisheries Society of the British Isles.

  3. Shear Melting of a Colloidal Glass

    NASA Astrophysics Data System (ADS)

    Eisenmann, Christoph; Kim, Chanjoong; Mattsson, Johan; Weitz, David A.

    2010-01-01

We use confocal microscopy to explore shear melting of colloidal glasses, which occurs at strains of ~0.08, coinciding with a strongly non-Gaussian step size distribution. For larger strains, the particle mean square displacement increases linearly with strain and the step size distribution becomes Gaussian. The effective diffusion coefficient varies approximately linearly with shear rate, consistent with a modified Stokes-Einstein relationship in which thermal energy is replaced by shear energy and the length scale is set by the size of cooperatively moving regions consisting of ~3 particles.

  4. Sequential CFAR detectors using a dead-zone limiter

    NASA Astrophysics Data System (ADS)

    Tantaratana, Sawasd

    1990-09-01

The performances of some proposed sequential constant-false-alarm-rate (CFAR) detectors are evaluated. The observations are passed through a dead-zone limiter, whose output is -1, 0, or +1, depending on whether the input is less than -c, between -c and c, or greater than c, where c is a constant. The test statistic is the sum of the outputs; equivalently, the test operates on the reduced set of observations whose absolute value exceeds c, the statistic being the sum of their signs. Both constant and linear boundaries are considered. Numerical results show a significant reduction in the average number of observations needed to achieve the same false alarm and detection probabilities as a fixed-sample-size CFAR detector using the same kind of test statistic.
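The limiter and the sequential test with constant boundaries can be sketched as follows. The boundary values and the threshold c are placeholders, not the values analyzed in the paper.

```python
import numpy as np

def dead_zone_limiter(x, c):
    """Quantize each observation to -1, 0, or +1 around the dead zone [-c, c]."""
    return np.where(x > c, 1, np.where(x < -c, -1, 0))

def sequential_test(samples, c, upper, lower):
    """Accumulate limiter outputs one observation at a time; stop as soon as
    the running sum crosses a constant boundary.
    Returns ('H1' | 'H0' | 'undecided', number of observations used)."""
    s = 0
    for n, x in enumerate(samples, start=1):
        s += int(np.sign(x)) if abs(x) > c else 0
        if s >= upper:
            return "H1", n    # signal declared present
        if s <= lower:
            return "H0", n    # noise only
    return "undecided", len(samples)
```

Because the statistic depends only on signs relative to c, the false alarm rate is insensitive to the noise power, which is what makes the detector CFAR; the sequential stopping is what reduces the average sample number relative to a fixed-sample-size test.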

  5. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals.

    PubMed

    Zuehlsdorff, T J; Hine, N D M; Payne, M C; Haynes, P D

    2015-11-28

We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need to include parts of the solvent explicitly in the TDDFT calculations is highlighted; the resulting large system sizes are well within reach of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.

  6. Relationship between seed bank expression, adult longevity and aridity in species of Chaetanthera (Asteraceae) in central Chile.

    PubMed

    Arroyo, M T K; Chacon, P; Cavieres, L A

    2006-09-01

    Broad surveys have detected inverse relationships between seed and adult longevity and between seed size and adult longevity. However, low and unpredictable precipitation is also associated with seed bank (SB) expression in semi-arid and arid areas. The relationship between adult longevity, SB formation, seed mass and aridity is examined in annual and perennial herbs of Chaetanthera (Asteraceae) from the Chilean Mediterranean-type climate and winter-rainfall desert areas over a precipitation range of one order of magnitude. Seeds of 18 species and subtaxa (32 populations) were buried in field locations, and exhumed after two successive germination periods. Seeds not germinating in the field were tested in a growth chamber, and remnant intact seed tested for viability. Seed banks were classed as transient or persistent. The effect of life form, species, population and burial time on persistent SB size was assessed with factorial ANOVA. Persistent seed bank size was compared with the Martonne aridity index (shown to be a surrogate for inter-annual variation in precipitation) and seed size using linear regression. ANCOVA assessed the effect of life-form on SB size with aridity as covariate. Three species had a transient SB and 15 a persistent SB. ANOVA revealed a significant effect of life-form on SB size with annuals having larger SB size and greater capacity to form a persistent SB than perennials. Significant inter-population variation in SB size was found in 64% of cases. Seed mass was negatively correlated with persistent SB size. Persistent seed bank size was significantly correlated with the Martonne aridity index in the perennial and annual species, with species from more arid areas having larger persistent SBs. However, when aridity was considered as a covariate, ANCOVA revealed no significant differences between the annual and perennial herbs. 
Persistent seed bank size in Chaetanthera appears to reflect environmental selection rather than any trade-off with adult longevity.

  7. VPS GRCop-84 Liner Development Efforts

    NASA Technical Reports Server (NTRS)

    Elam, Sandra K.; Holmes, Richard; McKechnie, Tim; Hickman, Robert; Pickens, Tim

    2003-01-01

For the past several years, NASA's Marshall Space Flight Center (MSFC) has been working with Plasma Processes, Inc. (PPI) to fabricate combustion chamber liners using the Vacuum Plasma Spray (VPS) process. Multiple liners of a variety of shapes and sizes have been created. Each liner has been fabricated with GRCop-84 (a copper alloy with chromium and niobium) and a functional gradient coating (FGC) on the hot wall. While the VPS process offers versatility and a reduced fabrication schedule, the material system created with VPS allows the liners to operate at higher temperatures, with maximum blanching resistance and improved cycle life. A subscale unit (5K lbf thrust class) is being cycle tested in a LOX/Hydrogen thrust chamber assembly at MSFC. To date, over 75 hot-fire tests have been accumulated on this article. Tests include conditions normally detrimental to conventional materials, yet the VPS GRCop-84 liner has yet to show any signs of degradation. A larger chamber (15K lbf thrust class) has also been fabricated and is being prepared for hot-fire testing at MSFC near the end of 2003. Linear liners have been successfully created to further demonstrate the versatility of the process. Finally, scale-up issues for the VPS process are being tackled with efforts to fabricate a full-size, engine-class liner. Specifically, a liner for the SSME's Main Combustion Chamber (MCC) has recently been attempted. The SSME size was chosen for convenience, since its design was readily available and its size was sufficient to tackle specific issues. Efforts to fabricate these large liners have already provided valuable lessons for using this process for engine programs. The material quality for these large units is being evaluated with destructive analysis and these results will be available by the end of 2003.

  8. Task-switching cost and repetition priming: two overlooked confounds in the first-set procedure of the Sternberg paradigm and how they affect memory set-size effects.

    PubMed

    Jou, Jerwen

    2014-10-01

    Subjects performed Sternberg-type memory recognition tasks (Sternberg paradigm) in four experiments. Category-instance names were used as learning and testing materials. Sternberg's original experiments demonstrated a linear relation between reaction time (RT) and memory-set size (MSS). A few later studies found no relation, and other studies found a nonlinear relation (logarithmic) between the two variables. These deviations were used as evidence undermining Sternberg's serial scan theory. This study identified two confounding variables in the fixed-set procedure of the paradigm (where multiple probes are presented at test for a learned memory set) that could generate a MSS RT function that was either flat or logarithmic rather than linearly increasing. These two confounding variables were task-switching cost and repetition priming. The former factor worked against smaller memory sets and in favour of larger sets whereas the latter factor worked in the opposite way. Results demonstrated that a null or a logarithmic RT-to-MSS relation could be the artefact of the combined effects of these two variables. The Sternberg paradigm has been used widely in memory research, and a thorough understanding of the subtle methodological pitfalls is crucial. It is suggested that a varied-set procedure (where only one probe is presented at test for a learned memory set) is a more contamination-free procedure for measuring the MSS effects, and that if a fixed-set procedure is used, it is worthwhile examining the RT function of the very first trials across the MSSs, which are presumably relatively free of contamination by the subsequent trials.
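The linear-versus-logarithmic distinction at issue can be made concrete by fitting both functional forms to RT data and comparing residuals. The RTs below are fabricated to be exactly linear (about 38 ms per item, in the range of classic Sternberg slopes) purely to illustrate the comparison.

```python
import numpy as np

# Hypothetical mean RTs (ms) for memory-set sizes 1, 2, 4, 6; illustrative only.
mss = np.array([1.0, 2.0, 4.0, 6.0])
rt = np.array([430.0, 468.0, 544.0, 620.0])

# Serial-scan account: RT = intercept + slope * MSS.
slope_lin, icept_lin = np.polyfit(mss, rt, 1)

# Competing logarithmic account: RT = a + b * ln(MSS).
b_log, a_log = np.polyfit(np.log(mss), rt, 1)

# Residual sums of squares tell us which form describes these data better.
rss_lin = np.sum((rt - (icept_lin + slope_lin * mss)) ** 2)
rss_log = np.sum((rt - (a_log + b_log * np.log(mss))) ** 2)
```

The study's point is that task-switching cost and repetition priming can distort real data so that the fitted form looks flat or logarithmic even when the underlying process is linear, so such curve comparisons must be run on uncontaminated trials.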

  9. Process scale-up considerations for non-thermal atmospheric-pressure plasma synthesis of nanoparticles by homogenous nucleation

    NASA Astrophysics Data System (ADS)

    Cole, Jonathan; Zhang, Yao; Liu, Tianqi; Liu, Chang-jun; Mohan Sankaran, R.

    2017-08-01

Scale-up of non-thermal atmospheric-pressure plasma reactors for the synthesis of nanoparticles by homogeneous nucleation is challenging because the active volume is typically reduced to facilitate gas breakdown, enhance discharge stability, and limit particle size and agglomeration, which in turn limits throughput. Here, we introduce a dielectric barrier discharge reactor with a coaxial electrode geometry for nanoparticle production that enables a simple scale-up strategy: increasing the outer and inner electrode diameters enlarges the plasma volume approximately linearly while keeping the electrode gap small enough to maintain the electric field strength. We show with two test reactors that for a given residence time, the nanoparticle production rate increases linearly with volume over a range of precursor concentrations, while having minimal effect on the shape of the particle size distribution. However, our study also reveals that increasing the total gas flow rate in a smaller volume reactor leads to an enhancement of precursor conversion and a comparable production rate to a larger volume reactor. These results suggest that scale-up requires better understanding of the influence of reactor geometry on particle growth dynamics and may not always be a simple function of reactor volume.

  10. Molecular surface area based predictive models for the adsorption and diffusion of disperse dyes in polylactic acid matrix.

    PubMed

    Xu, Suxin; Chen, Jiangang; Wang, Bijia; Yang, Yiqi

    2015-11-15

Two predictive models were presented for the adsorption affinities and diffusion coefficients of disperse dyes in a polylactic acid matrix. A quantitative structure-sorption behavior relationship not only provides insight into the sorption process but also enables rational engineering for desired properties. The thermodynamic and kinetic parameters for three disperse dyes were measured. The predictive model for adsorption affinity was based on two linear relationships derived by interpreting the experimental measurements with molecular structural parameters and the compensation effect: ΔH° vs. dye size and ΔS° vs. ΔH°. Similarly, the predictive model for diffusion coefficient was based on two derived linear relationships: activation energy of diffusion vs. dye size and logarithm of pre-exponential factor vs. activation energy of diffusion. The only required parameters for both models are temperature and the solvent accessible surface area of the dye molecule. These two predictive models were validated by testing the adsorption and diffusion properties of new disperse dyes. The models offer fairly good predictive ability. The linkage between structural parameters of disperse dyes and sorption behaviors might be generalized and extended to other similar polymer-penetrant systems. Copyright © 2015 Elsevier Inc. All rights reserved.
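The diffusion model chains two linear relations into an Arrhenius expression. A hedged sketch of that scheme follows; all coefficients below are placeholders for illustration, not the fitted values from the paper.

```python
import math

def activation_energy(sasa_A2, k1=0.05, e0=20.0):
    """Ea (kJ/mol) assumed linear in solvent-accessible surface area (A^2);
    k1 and e0 are hypothetical coefficients."""
    return e0 + k1 * sasa_A2

def log_preexponential(ea_kj, m=0.3, c=-10.0):
    """ln D0 assumed linear in Ea (the kinetic compensation effect);
    m and c are hypothetical coefficients."""
    return c + m * ea_kj

def diffusion_coefficient(sasa_A2, temp_K):
    """Arrhenius form D = D0 * exp(-Ea / RT), built from the two linear
    relations above; only dye surface area and temperature are needed."""
    R = 8.314e-3  # gas constant, kJ/(mol K)
    ea = activation_energy(sasa_A2)
    d0 = math.exp(log_preexponential(ea))
    return d0 * math.exp(-ea / (R * temp_K))
```

With these placeholder coefficients, a bulkier dye diffuses more slowly at dyeing temperatures, and diffusion speeds up with temperature, matching the qualitative behavior the two linear relations encode.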

  11. Objective estimation of tropical cyclone innercore surface wind structure using infrared satellite images

    NASA Astrophysics Data System (ADS)

    Zhang, Changjiang; Dai, Lijie; Ma, Leiming; Qian, Jinfang; Yang, Bo

    2017-10-01

    An objective technique is presented for estimating tropical cyclone (TC) innercore two-dimensional (2-D) surface wind field structure using infrared satellite imagery and machine learning. For a TC with eye, the eye contour is first segmented by a geodesic active contour model, based on which the eye circumference is obtained as the TC eye size. A mathematical model is then established between the eye size and the radius of maximum wind obtained from the past official TC report to derive the 2-D surface wind field within the TC eye. Meanwhile, the composite information about the latitude of TC center, surface maximum wind speed, TC age, and critical wind radii of 34- and 50-kt winds can be combined to build another mathematical model for deriving the innercore wind structure. After that, least squares support vector machine (LSSVM), radial basis function neural network (RBFNN), and linear regression are introduced, respectively, in the two mathematical models, which are then tested with sensitivity experiments on real TC cases. Verification shows that the innercore 2-D surface wind field structure estimated by LSSVM is better than that of RBFNN and linear regression.

  12. Study of microstructure and fracture properties of blunt notched and sharp cracked high density polyethylene specimens.

    PubMed

    Pan, Huanyu; Devasahayam, Sheila; Bandyopadhyay, Sri

    2017-07-21

This paper examines the effect of a broad range of crosshead speeds (0.05 to 100 mm/min) and a small range of temperatures (25 °C and 45 °C) on the failure behaviour of high density polyethylene (HDPE) specimens containing (a) a standard-size blunt notch and (b) a standard-size blunt notch plus a small sharp crack, all tested in air. The yield stress increased linearly with the natural logarithm of the strain rate. The stress intensity factors under blunt notch and sharp crack conditions also increased linearly with the natural logarithm of the crosshead speed. The results indicate that in the practical temperature range of 25 °C to 45 °C under normal atmosphere and increasing strain rates, HDPE specimens with both blunt notches and sharp cracks possess superior fracture properties. SEM microstructure studies of fracture surfaces showed craze initiation mechanisms at lower strain rates, whilst at higher strain rates there is evidence of dimple patterns absorbing the strain energy and creating plastic deformation. The stress intensity factor and the yield strength were higher at 25 °C than at 45 °C.
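The reported trend, yield stress linear in the natural log of rate, is the classic Eyring-type rate dependence and can be recovered by a simple fit. The data points below are invented to follow that trend and are not the paper's measurements.

```python
import numpy as np

# Hypothetical HDPE yield stresses (MPa) over the tested crosshead-speed range.
speed_mm_min = np.array([0.05, 0.5, 5.0, 50.0, 100.0])
yield_MPa = np.array([20.0, 22.3, 24.6, 26.9, 27.6])

# Fit sigma_y = A + B * ln(speed); B > 0 reflects rate strengthening.
B, A = np.polyfit(np.log(speed_mm_min), yield_MPa, 1)
```

A positive slope B here corresponds to the observed strengthening with strain rate; repeating the fit at each temperature would show the 25 °C intercept sitting above the 45 °C one.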

  13. Driver electronics design and control for a total artificial heart linear motor.

    PubMed

    Unthan, Kristin; Cuenca-Navalon, Elena; Pelletier, Benedikt; Finocchiaro, Thomas; Steinseifer, Ulrich

    2018-01-27

For any implantable device, size and efficiency are critical properties. Thus, a linear motor for a Total Artificial Heart was optimized with focus on driver electronics and control strategies. Hardware requirements were defined from the power supply and motor setup. Four full bridges were chosen for the power electronics. Shunt resistors were set up for current measurement. Unipolar and bipolar switching for power electronics control were compared regarding current ripple and power losses. Here, unipolar switching showed smaller current ripple and required less power to create the necessary motor forces. Based on calculations for minimal power losses, the Lorentz force was distributed to the actuator's four coils. The distribution was determined as the ratio of effective magnetic flux through each coil, which was captured by a force test rig. Static and dynamic measurements under physiological conditions analyzed the interaction of control and hardware, and all efficiencies were over 89%. In conclusion, the designed electronics, optimized control strategy and applied current distribution create the required motor force and perform optimally under physiological conditions. The developed driver electronics and control offer optimized size and efficiency for any implantable or portable device with multiple independent motor coils.

  14. Characterization of an acoustic cavitation bubble structure at 230 kHz.

    PubMed

    Thiemann, Andrea; Nowak, Till; Mettin, Robert; Holsteyns, Frank; Lippert, Alexander

    2011-03-01

    A generic bubble structure in a 230 kHz ultrasonic field is observed in a partly developed standing wave field in water. It is characterized by high-speed imaging, sonoluminescence recordings, and surface cleaning tests. The structure has two distinct bubble populations. Bigger bubbles (much larger than linear resonance size) group on rings in planes parallel to the transducer surface, apparently in locations of driving pressure minima. They slowly rise in a jittering, but synchronous way, and they can have smaller satellite bubbles, thus resembling the arrays of bubbles observed by Miller [D. Miller, Stable arrays of resonant bubbles in a 1-MHz standing-wave acoustic field, J. Acoust. Soc. Am. 62 (1977) 12]. Smaller bubbles (below and near linear resonance size) show a fast "streamer" motion perpendicular to and away from the transducer surface. While the bigger bubbles do not emit light, the smaller bubbles in the streamers show sonoluminescence when they pass the planes of high driving pressure. Both bubble populations exhibit cleaning potential with respect to micro-particles attached to a glass substrate. The respective mechanisms of particle removal, though, might be different. Copyright © 2010 Elsevier B.V. All rights reserved.

  15. Application of Nearly Linear Solvers to Electric Power System Computation

    NASA Astrophysics Data System (ADS)

    Grant, Lisa L.

    To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
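The chain method builds spanning-tree-based preconditioners, which is beyond a short sketch, but the preconditioned conjugate gradient framework it plugs into can be shown compactly. A simple Jacobi (diagonal) preconditioner stands in below for the low-stretch-tree preconditioner described above; this is an illustration of the solver skeleton, not the paper's method.

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient for a symmetric positive-definite A.
    M_inv approximates A^-1; a better preconditioner means fewer iterations."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    for _ in range(max_iter):
        rz = r @ z
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        beta = (r @ z) / rz
        p = z + beta * p
    return x

# Small symmetric, diagonally dominant system (the class these solvers target).
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
M_inv = np.diag(1.0 / np.diag(A))  # Jacobi preconditioner as a stand-in
x = preconditioned_cg(A, b, M_inv)
```

The nearly-linear-time guarantees come from replacing the Jacobi stand-in with preconditioners built from the system's low-stretch spanning tree and recursing on a chain of progressively smaller approximate systems.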

  16. Manufacturing Challenges and Benefits when Scaling the HIAD Stacked-Torus Aeroshell to a 15m-Class System

    NASA Technical Reports Server (NTRS)

    Swanson, Gregory; Cheatwood, Neil; Johnson, Keith; Calomino, Anthony; Gilles, Brian; Anderson, Paul; Bond, Bruce

    2016-01-01

Over a decade of work has been conducted in the development of NASA's Hypersonic Inflatable Aerodynamic Decelerator (HIAD) deployable aeroshell technology. This effort has included multiple ground test campaigns and flight tests culminating in the HIAD project's second-generation (Gen-2) aeroshell system. The HIAD project team has developed, fabricated, and tested stacked-torus inflatable structures (IS) with flexible thermal protection systems (F-TPS) ranging in diameters from 3-6m, with cone angles of 60 and 70 deg. To meet NASA and commercial near-term objectives, the HIAD team must scale the current technology up to 12-15m in diameter. Therefore, the HIAD project's experience in scaling the technology has reached a critical juncture. Growing from a 6m to a 15m-class system will introduce many new structural and logistical challenges to an already complicated manufacturing process. Although the general architecture and key aspects of the HIAD design scale well to larger vehicles, details of the technology will need to be reevaluated and possibly redesigned for use in a 15m-class HIAD system. These include: layout and size of the structural webbing that transfers load throughout the IS, inflatable gas barrier design, torus diameter and braid construction, internal pressure and inflation line routing, adhesives used for coating and bonding, and F-TPS gore design and seam fabrication. The logistics of fabricating and testing the IS and the F-TPS also become more challenging with increased scale. Compared to the 6m aeroshell (the largest HIAD built to date), a 12m aeroshell has four times the cross-sectional area, and a 15m one has over six times the area. This means that fabrication and test procedures will need to be reexamined to account for the sheer size and weight of the aeroshell components. This will affect a variety of steps in the manufacturing process, such as: stacking the tori during assembly, stitching the structural webbing, initial inflation of tori, and stitching of F-TPS gores. Additionally, new approaches and hardware will be required for handling and ground testing of both individual tori and the fully assembled HIADs. There are also noteworthy benefits of scaling up the HIAD aeroshell to a 15m-class system. Two complications in working with handmade textile structures are the non-linearity of the material components and the role of human accuracy during fabrication. Larger, more capable HIAD structures should see much larger operational loads, potentially bringing the structural response of the material components out of the non-linear regime and into the preferred linear response range. Also, making the reasonable assumption that the magnitude of fabrication accuracy remains constant as the structures grow, the relative effect of fabrication errors should decrease as a percentage of the textile component size. Combined, these two effects improve the predictive capability and the uniformity of the structural response for a 12-15m HIAD. In this presentation, a handful of the challenges and associated mitigation plans will be discussed, as well as an update on current 12m aeroshell manufacturing and testing that is addressing these challenges.

  17. Manufacturing Challenges and Benefits When Scaling the HIAD Stacked-Torus Aeroshell to a 15 Meter Class System

    NASA Technical Reports Server (NTRS)

    Swanson, G. T.; Cheatwood, F. M.; Johnson, R. K.; Hughes, S. J.; Calomino, A. M.

    2016-01-01

    Over a decade of work has been conducted in the development of NASA's Hypersonic Inflatable Aerodynamic Decelerator (HIAD) deployable aeroshell technology. This effort has included multiple ground test campaigns and flight tests culminating in the HIAD project's second generation (Gen-2) aeroshell system. The HIAD project team has developed, fabricated, and tested stacked-torus inflatable structures (IS) with flexible thermal protection systems (F-TPS) ranging in diameters from 3-6 meters, with cone angles of 60 and 70 degrees. To meet NASA and commercial near-term objectives, the HIAD team must scale the current technology up to 12-15 meters in diameter. Therefore, the HIAD project's experience in scaling the technology has reached a critical juncture. Growing from a 6-meter to a 15-meter class system will introduce many new structural and logistical challenges to an already complicated manufacturing process. Although the general architecture and key aspects of the HIAD design scale well to larger vehicles, details of the technology will need to be reevaluated and possibly redesigned for use in a 15-meter-class HIAD system. These include: layout and size of the structural webbing that transfers load throughout the IS, inflatable gas barrier design, torus diameter and braid construction, internal pressure and inflation line routing, adhesives used for coating and bonding, and F-TPS gore design and seam fabrication. The logistics of fabricating and testing the IS and the F-TPS also become more challenging with increased scale. Compared to the 6-meter aeroshell (the largest HIAD built to date), a 12-meter aeroshell has four times the cross-sectional area, and a 15-meter one has over six times the area. This means that fabrication and test procedures will need to be reexamined to account for the sheer size and weight of the aeroshell components. 
This will affect a variety of steps in the manufacturing process, such as: stacking the tori during assembly, stitching the structural webbing, initial inflation of tori, and stitching of F-TPS gores. Additionally, new approaches and hardware will be required for handling and ground testing of both individual tori and the fully assembled HIADs. There are also noteworthy benefits of scaling up the HIAD aeroshell to a 15-meter class system. Two complications in working with handmade textile structures are the non-linearity of the material components and the role of human accuracy during fabrication. Larger, more capable HIAD structures should see much larger operational loads, potentially bringing the structural response of the material components out of the non-linear regime and into the preferred linear response range. Also, making the reasonable assumption that the magnitude of fabrication accuracy remains constant as the structures grow, the relative effect of fabrication errors should decrease as a percentage of the textile component size. Combined, these two effects improve the predictive capability and the uniformity of the structural response for a 12-15-meter HIAD. In this presentation, a handful of the challenges and associated mitigation plans will be discussed, as well as an update on current manufacturing and testing that is addressing these challenges.

  18. Comparing performance on the MNREAD iPad application with the MNREAD acuity chart.

    PubMed

    Calabrèse, Aurélie; To, Long; He, Yingchen; Berkholtz, Elizabeth; Rafian, Paymon; Legge, Gordon E

    2018-01-01

Our purpose was to compare reading performance measured with the MNREAD Acuity Chart and an iPad application (app) version of the same test for both normally sighted and low-vision participants. Our methods included 165 participants with normal vision and 43 participants with low vision tested on the standard printed MNREAD and on the iPad app version of the test. Maximum Reading Speed, Critical Print Size, Reading Acuity, and Reading Accessibility Index were compared using linear mixed-effects models to identify any potential differences in test performance between the printed chart and the iPad app. Our results showed the following: For normal vision, chart and iPad yield similar estimates of Critical Print Size and Reading Acuity. The iPad provides significantly slower estimates of Maximum Reading Speed than the chart, with a greater difference for faster readers. The difference was on average 3% at 100 words per minute (wpm), 6% at 150 wpm, 9% at 200 wpm, and 12% at 250 wpm. For low vision, Maximum Reading Speed, Reading Accessibility Index, and Critical Print Size are equivalent on the iPad and chart. Only the Reading Acuity is significantly smaller (i.e., better) when measured on the digital version of the test, but by only 0.03 logMAR (p = 0.013). Our conclusions were that, overall, MNREAD parameters measured with the printed chart and the iPad app are very similar. The difference found in Maximum Reading Speed for the normally sighted participants can be explained by differences in the method for timing the reading trials.

  19. Comparing performance on the MNREAD iPad application with the MNREAD acuity chart

    PubMed Central

    Calabrèse, Aurélie; To, Long; He, Yingchen; Berkholtz, Elizabeth; Rafian, Paymon; Legge, Gordon E.

    2018-01-01

Our purpose was to compare reading performance measured with the MNREAD Acuity Chart and an iPad application (app) version of the same test for both normally sighted and low-vision participants. Our methods included 165 participants with normal vision and 43 participants with low vision tested on the standard printed MNREAD and on the iPad app version of the test. Maximum Reading Speed, Critical Print Size, Reading Acuity, and Reading Accessibility Index were compared using linear mixed-effects models to identify any potential differences in test performance between the printed chart and the iPad app. Our results showed the following: For normal vision, chart and iPad yield similar estimates of Critical Print Size and Reading Acuity. The iPad provides significantly slower estimates of Maximum Reading Speed than the chart, with a greater difference for faster readers. The difference was on average 3% at 100 words per minute (wpm), 6% at 150 wpm, 9% at 200 wpm, and 12% at 250 wpm. For low vision, Maximum Reading Speed, Reading Accessibility Index, and Critical Print Size are equivalent on the iPad and chart. Only the Reading Acuity is significantly smaller (i.e., better) when measured on the digital version of the test, but by only 0.03 logMAR (p = 0.013). Our conclusions were that, overall, MNREAD parameters measured with the printed chart and the iPad app are very similar. The difference found in Maximum Reading Speed for the normally sighted participants can be explained by differences in the method for timing the reading trials. PMID:29351351

  20. Rapid social network assessment for predicting HIV and STI risk among men attending bars and clubs in San Diego, California.

    PubMed

    Drumright, Lydia N; Frost, Simon D W

    2010-12-01

To test the use of a rapid assessment tool for determining social network size, and whether social networks with a high density of HIV/sexually transmitted infection (STI) or substance-using persons were independent predictors of HIV and STI status among men who have sex with men (MSM). We interviewed 609 MSM from 14 bars in San Diego, California, USA, using an enhanced version of the Priorities for Local AIDS Control Efforts (PLACE) methodology. Social network size was assessed using a series of 19 questions of the form 'How many people do you know that have the name X?', where X included specific male and female names (eg, Keith), use illicit substances, and have HIV. Generalised linear models were used to estimate average and group-specific network sizes, and their association with HIV status, STI history and methamphetamine use. Despite possible errors in ascertaining network size, average reported network sizes were larger for larger groups. Those who reported having HIV infection or a past STI reported significantly more HIV-infected and methamphetamine- or popper-using individuals in their social networks. There was a dose-dependent effect of the social network size of HIV-infected individuals on self-reported HIV status, past STI and use of methamphetamine in the last 12 months, after controlling for age, ethnicity and number of sexual partners in the last year. Relatively simple measures of social networks are associated with HIV/STI risk, and may provide a useful tool for targeting HIV/STI surveillance and prevention.

  1. Comparison of futility monitoring guidelines using completed phase III oncology trials.

    PubMed

    Zhang, Qiang; Freidlin, Boris; Korn, Edward L; Halabi, Susan; Mandrekar, Sumithra; Dignam, James J

    2017-02-01

    Futility (inefficacy) interim monitoring is an important component in the conduct of phase III clinical trials, especially in life-threatening diseases. Desirable futility monitoring guidelines allow timely stopping if the new therapy is harmful or if it is unlikely to be shown sufficiently effective were the trial to continue to its final analysis. There are a number of analytical approaches that are used to construct futility monitoring boundaries. The most common approaches are based on conditional power, sequential testing of the alternative hypothesis, or sequential confidence intervals. The resulting futility boundaries vary considerably with respect to the level of evidence required for recommending stopping the study. We evaluate the performance of commonly used methods using event histories from completed phase III clinical trials of the Radiation Therapy Oncology Group, Cancer and Leukemia Group B, and North Central Cancer Treatment Group. We considered published superiority phase III trials with survival endpoints initiated after 1990. There are 52 studies available for this analysis from different disease sites. Total sample size and maximum number of events (statistical information) for each study were calculated using the protocol-specified effect size and type I and type II error rates. In addition to the common futility approaches, we considered a recently proposed linear inefficacy boundary approach with an early harm look followed by several lack-of-efficacy analyses. For each futility approach, interim test statistics were generated for three schedules with different analysis frequencies, and early stopping was recommended if the interim result crossed a futility stopping boundary. For trials not demonstrating superiority, the impact of each rule is summarized as savings on the sample size, study duration, and information time scales. For negative studies, our results show that the futility approaches based on testing the alternative hypothesis and repeated confidence interval rules yielded smaller savings than the other two rules. These boundaries are too conservative, especially during the first half of the study (<50% of information). The conditional power rules are too aggressive during the second half of the study (>50% of information) and may stop a trial even when there is a clinically meaningful treatment effect. The linear inefficacy boundary with three or more interim analyses provided the best results. For positive studies, we demonstrated that none of the futility rules would have stopped the trials. The linear inefficacy boundary futility approach is attractive from statistical, clinical, and logistical standpoints in clinical trials evaluating new anti-cancer agents.
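    A conditional-power futility rule of the kind compared above can be sketched with the standard Brownian-motion approximation: extrapolate the observed interim trend to the final analysis and stop if the probability of ultimate success is too small. This is a generic textbook form (the function name and the 0.10 cutoff are ours), not the exact boundaries evaluated in the paper:

```python
# Conditional power under the current trend (Brownian-motion
# approximation): the chance a trial ends positive if the interim trend
# continues. A futility rule stops when CP drops below a cutoff, e.g. 0.10.
from statistics import NormalDist

def conditional_power(z_k, info_frac, alpha=0.025):
    """CP of a positive final test, extrapolating the interim Z-value."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha)
    drift = z_k / info_frac ** 0.5  # estimated drift of the Z-process
    return nd.cdf((drift - z_alpha) / (1 - info_frac) ** 0.5)

# Interim Z = 0.5 at 50% information: CP is only about 4%, so a
# CP < 0.10 rule would recommend stopping for futility.
cp = conditional_power(0.5, 0.5)
print(cp < 0.10)  # True
```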

  2. Optimal group size in a highly social mammal

    PubMed Central

    Markham, A. Catherine; Gesquiere, Laurence R.; Alberts, Susan C.; Altmann, Jeanne

    2015-01-01

    Group size is an important trait of social animals, affecting how individuals allocate time and use space, and influencing both an individual’s fitness and the collective, cooperative behaviors of the group as a whole. Here we tested predictions motivated by the ecological constraints model of group size, examining the effects of group size on ranging patterns and adult female glucocorticoid (stress hormone) concentrations in five social groups of wild baboons (Papio cynocephalus) over an 11-y period. Strikingly, we found evidence that intermediate-sized groups have energetically optimal space-use strategies; both large and small groups experience ranging disadvantages, in contrast to the commonly reported positive linear relationships between group size and both home range area and daily travel distance, which depict a disadvantage only in large groups. Specifically, we observed a U-shaped relationship between group size and home range area, average daily distance traveled, evenness of space use within the home range, and glucocorticoid concentrations. We propose that a likely explanation for these U-shaped patterns is that large, socially dominant groups are constrained by within-group competition, whereas small, socially subordinate groups are constrained by between-group competition and predation pressures. Overall, our results provide testable hypotheses for evaluating group-size constraints in other group-living species, in which the costs of intra- and intergroup competition vary as a function of group size. PMID:26504236

  3. COMPARISON OF IMPLICIT SCHEMES TO SOLVE EQUATIONS OF RADIATION HYDRODYNAMICS WITH A FLUX-LIMITED DIFFUSION APPROXIMATION: NEWTON–RAPHSON, OPERATOR SPLITTING, AND LINEARIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tetsu, Hiroyuki; Nakamoto, Taishi, E-mail: h.tetsu@geo.titech.ac.jp

    Radiation is an important process of energy transport, a force, and a basis for synthetic observations, so radiation hydrodynamics (RHD) calculations have occupied an important place in astrophysics. However, although the progress in computational technology is remarkable, their high numerical cost is still a persistent problem. In this work, we compare the following schemes used to solve the nonlinear simultaneous equations of an RHD algorithm with the flux-limited diffusion approximation: the Newton–Raphson (NR) method, operator splitting, and linearization (LIN), from the perspective of the computational cost involved. For operator splitting, in addition to the traditional simple operator splitting (SOS) scheme, we examined the scheme developed by Douglas and Rachford (DROS). We solve three test problems (the thermal relaxation mode, the relaxation and the propagation of linear waves, and radiating shock) using these schemes and then compare their dependence on the time step size. As a result, we find the conditions of the time step size necessary for adopting each scheme. The LIN scheme is superior to other schemes if the ratio of radiation pressure to gas pressure is sufficiently low. On the other hand, DROS can be the most efficient scheme if the ratio is high. Although the NR scheme can be adopted independently of the regime, especially in a problem that involves optically thin regions, the convergence tends to be worse. In all cases, SOS is not practical.
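    The role Newton–Raphson plays in such implicit schemes can be illustrated on a scalar toy problem (ours, not the paper's RHD system): a single backward-Euler step of a stiff relaxation law du/dt = -k(u^4 - s), solved iteratively at each time level:

```python
# One implicit (backward-Euler) step of du/dt = -k*(u**4 - s), solved
# with Newton-Raphson: find u with f(u) = u + dt*k*(u**4 - s) - u_old = 0.

def newton_step(u_old, dt, k, s, tol=1e-12, max_iter=50):
    u = u_old  # initial guess: previous time level
    for _ in range(max_iter):
        f = u + dt * k * (u ** 4 - s) - u_old
        fp = 1.0 + 4.0 * dt * k * u ** 3  # analytic Jacobian f'(u)
        du = f / fp
        u -= du
        if abs(du) < tol:
            break
    return u

u_new = newton_step(u_old=2.0, dt=0.1, k=1.0, s=1.0)
print(u_new)  # between 1 and 2: u relaxes toward the equilibrium u = 1
```

    Larger dt makes the nonlinear solve harder, which is the iteration-count versus time-step trade-off the paper quantifies for full RHD systems.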

  4. Estimating Patient Dose from X-ray Tube Output Metrics: Automated Measurement of Patient Size from CT Images Enables Large-scale Size-specific Dose Estimates

    PubMed Central

    Ikuta, Ichiro; Warden, Graham I.; Andriole, Katherine P.; Khorasani, Ramin

    2014-01-01

    Purpose To test the hypothesis that patient size can be accurately calculated from axial computed tomographic (CT) images, including correction for the effects of anatomy truncation that occur in routine clinical CT image reconstruction. Materials and Methods Institutional review board approval was obtained for this HIPAA-compliant study, with waiver of informed consent. Water-equivalent diameter (DW) was computed from the attenuation-area product of each image within 50 adult CT scans of the thorax and of the abdomen and pelvis and was also measured for maximal field of view (FOV) reconstructions. Linear regression models were created to compare DW with the effective diameter (Deff) used to select size-specific volume CT dose index (CTDIvol) conversion factors as defined in report 204 of the American Association of Physicists in Medicine. Linear regression models relating reductions in measured DW to a metric of anatomy truncation were used to compensate for the effects of clinical image truncation. Results In the thorax, DW versus Deff had an R2 of 0.51 (n = 200, 50 patients at four anatomic locations); in the abdomen and pelvis, R2 was 0.90 (n = 150, 50 patients at three anatomic locations). By correcting for image truncation, the proportion of clinically reconstructed images with an extracted DW within ±5% of the maximal FOV DW increased from 54% to 90% in the thorax (n = 3602 images) and from 95% to 100% in the abdomen and pelvis (6181 images). Conclusion The DW extracted from axial CT images is a reliable measure of patient size, and varying degrees of clinical image truncation can be readily corrected. Automated measurement of patient size combined with CT radiation exposure metrics may enable patient-specific dose estimation on a large scale. © RSNA, 2013 PMID:24086075
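    The water-equivalent diameter can be sketched from the attenuation-area product; the formula below follows AAPM Report 220, which this line of work builds on, and the 100-pixel "image" is a made-up sanity check, not study data:

```python
import math

# Water-equivalent diameter from the attenuation-area product:
# Aw = sum((HU/1000 + 1) * pixel_area),  Dw = 2*sqrt(Aw/pi).

def water_equivalent_diameter(hu_pixels, pixel_area_mm2):
    """Dw in mm from a flat list of Hounsfield units."""
    aw = sum(hu / 1000.0 + 1.0 for hu in hu_pixels) * pixel_area_mm2
    return 2.0 * math.sqrt(aw / math.pi)

# 100 water pixels (HU = 0) of 1 mm^2: area 100 mm^2 -> Dw = 2*sqrt(100/pi).
dw = water_equivalent_diameter([0.0] * 100, 1.0)
print(round(dw, 2))  # 11.28 (mm)
```

    Air pixels (HU = -1000) contribute nothing to the sum, which is one reason an attenuation-based size measure is more robust to surrounding air than a geometric diameter.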

  5. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    PubMed

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
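    The flavor of such sample-size computations can be conveyed with a deliberately simplified normal-approximation sketch of TOST power for average bioequivalence (limits 0.80-1.25). The paper's tables rest on random-effects linear models and exact distributions; every formula and constant below is a textbook simplification, not the EQUIGEN method:

```python
import math
from statistics import NormalDist

def tost_power(n, cv_within, gmr=1.0, alpha=0.05):
    """Approximate power to conclude average bioequivalence with n subjects."""
    delta = math.log(1.25)                               # margin, log scale
    sigma_w = math.sqrt(math.log(1.0 + cv_within ** 2))  # within-subject SD
    se = sigma_w * math.sqrt(2.0 / n)                    # crude SE of log-GMR
    z = NormalDist().inv_cdf(1 - alpha)
    d = math.log(gmr)
    power = (NormalDist().cdf((delta - d) / se - z)
             + NormalDist().cdf((delta + d) / se - z) - 1.0)
    return max(0.0, power)

def sample_size(target=0.9, cv_within=0.25, gmr=1.0):
    """Smallest even n whose approximate power reaches the target."""
    n = 4
    while tost_power(n, cv_within, gmr) < target:
        n += 2
    return n

print(sample_size())  # even n giving >= 90% power at CV = 25%, GMR = 1
```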

  6. CCD developments for particle colliders

    NASA Astrophysics Data System (ADS)

    Stefanov, Konstantin D.

    2006-09-01

    Charge Coupled Devices (CCDs) have been successfully used in several high-energy physics experiments over the last 20 years. Their small pixel size and excellent precision provide a superb tool for studying short-lived particles and understanding nature at a fundamental level. Over the last few years the Linear Collider Flavour Identification (LCFI) collaboration has developed Column-Parallel CCDs (CPCCD) and CMOS readout chips to be used for the vertex detector at the International Linear Collider (ILC). The CPCCDs are very fast devices capable of satisfying the challenging requirements imposed by the beam structure of the superconducting accelerator. The first set of prototype devices has been designed, manufactured and successfully tested, with second-generation chips on the way. Another CCD-based device, the In-situ Storage Image Sensor (ISIS), is also under development and the first prototype is in production.

  7. CCD-based vertex detector for ILC

    NASA Astrophysics Data System (ADS)

    Stefanov, Konstantin D.

    2006-12-01

    Charge Coupled Devices (CCDs) have been successfully used in several high-energy physics experiments over the last 20 years. Their small pixel size and excellent precision provide a superb tool for studying short-lived particles and understanding nature at a fundamental level. Over the last few years the Linear Collider Flavour Identification (LCFI) collaboration has developed Column-Parallel CCDs (CPCCD) and CMOS readout chips, to be used for the vertex detector at the International Linear Collider (ILC). The CPCCDs are very fast devices capable of satisfying the challenging requirements imposed by the beam structure of the superconducting accelerator. The first set of prototype devices has been successfully designed, manufactured and tested, with second-generation chips on the way. Another CCD-based device, the In-situ Storage Image Sensor (ISIS), is also under development and the first prototype has been manufactured.

  8. Local-scale drivers of tree survival in a temperate forest.

    PubMed

    Wang, Xugao; Comita, Liza S; Hao, Zhanqing; Davies, Stuart J; Ye, Ji; Lin, Fei; Yuan, Zuoqiang

    2012-01-01

    Tree survival plays a central role in forest ecosystems. Although many factors such as tree size, abiotic and biotic neighborhoods have been proposed as being important in explaining patterns of tree survival, their contributions are still subject to debate. We used generalized linear mixed models to examine the relative importance of tree size, local abiotic conditions and the density and identity of neighbors on tree survival in an old-growth temperate forest in northeastern China at three levels (community, guild and species). Tree size and both abiotic and biotic neighborhood variables influenced tree survival under current forest conditions, but their relative importance varied dramatically within and among the community, guild and species levels. Of the variables tested, tree size was typically the most important predictor of tree survival, followed by biotic and then abiotic variables. The effect of tree size on survival varied from strongly positive for small trees (1-20 cm dbh) and medium trees (20-40 cm dbh), to slightly negative for large trees (>40 cm dbh). Among the biotic factors, we found strong evidence for negative density and frequency dependence in this temperate forest, as indicated by negative effects of both total basal area of neighbors and the frequency of conspecific neighbors. Among the abiotic factors tested, soil nutrients tended to be more important in affecting tree survival than topographic variables. Abiotic factors generally influenced survival for species with relatively high abundance, for individuals in smaller size classes and for shade-tolerant species. Our study demonstrates that the relative importance of variables driving patterns of tree survival differs greatly among size classes, species guilds and abundance classes in temperate forest, which can further understanding of forest dynamics and offer important insights into forest management.

  9. Local-Scale Drivers of Tree Survival in a Temperate Forest

    PubMed Central

    Wang, Xugao; Comita, Liza S.; Hao, Zhanqing; Davies, Stuart J.; Ye, Ji; Lin, Fei; Yuan, Zuoqiang

    2012-01-01

    Tree survival plays a central role in forest ecosystems. Although many factors such as tree size, abiotic and biotic neighborhoods have been proposed as being important in explaining patterns of tree survival, their contributions are still subject to debate. We used generalized linear mixed models to examine the relative importance of tree size, local abiotic conditions and the density and identity of neighbors on tree survival in an old-growth temperate forest in northeastern China at three levels (community, guild and species). Tree size and both abiotic and biotic neighborhood variables influenced tree survival under current forest conditions, but their relative importance varied dramatically within and among the community, guild and species levels. Of the variables tested, tree size was typically the most important predictor of tree survival, followed by biotic and then abiotic variables. The effect of tree size on survival varied from strongly positive for small trees (1–20 cm dbh) and medium trees (20–40 cm dbh), to slightly negative for large trees (>40 cm dbh). Among the biotic factors, we found strong evidence for negative density and frequency dependence in this temperate forest, as indicated by negative effects of both total basal area of neighbors and the frequency of conspecific neighbors. Among the abiotic factors tested, soil nutrients tended to be more important in affecting tree survival than topographic variables. Abiotic factors generally influenced survival for species with relatively high abundance, for individuals in smaller size classes and for shade-tolerant species. Our study demonstrates that the relative importance of variables driving patterns of tree survival differs greatly among size classes, species guilds and abundance classes in temperate forest, which can further understanding of forest dynamics and offer important insights into forest management. PMID:22347996

  10. Short Round Sub-Linear Zero-Knowledge Argument for Linear Algebraic Relations

    NASA Astrophysics Data System (ADS)

    Seo, Jae Hong

    Zero-knowledge arguments allow one party to prove that a statement is true without leaking any information other than the truth of the statement. In many applications such as verifiable shuffle (as a practical application) and circuit satisfiability (as a theoretical application), zero-knowledge arguments for mathematical statements related to linear algebra are essentially used. Groth proposed (at CRYPTO 2009) an elegant methodology for zero-knowledge arguments for linear algebraic relations over finite fields. He obtained zero-knowledge arguments of sub-linear size for linear algebra using reductions from linear algebraic relations to equations of the form z = x *' y, where x, y ∈ F_p^n are committed vectors, z ∈ F_p is a committed element, and *' : F_p^n × F_p^n → F_p is a bilinear map. These reductions impose additional rounds on the sub-linear-size zero-knowledge arguments. The round complexity of interactive zero-knowledge arguments is an important measure along with communication and computational complexities. We focus on minimizing the round complexity of sub-linear zero-knowledge arguments for linear algebra. To reduce round complexity, we propose a general transformation from a t-round zero-knowledge argument, satisfying mild conditions, to a (t - 2)-round zero-knowledge argument; this transformation is of independent interest.
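    The algebraic core of the reduction, the bilinear map *', can be instantiated as the inner product over F_p. A toy evaluation of the relation z = x *' y (tiny prime, no commitments and no protocol, purely illustrative):

```python
# The relation z = x *' y with *' taken as the inner product over F_p.
# Real arguments commit to x, y, z and prove the relation in zero
# knowledge; here we only evaluate the map itself.

p = 101  # toy prime; practical systems use cryptographically large fields

def inner_product(x, y, p):
    """z = <x, y> over F_p."""
    assert len(x) == len(y)
    return sum(a * b for a, b in zip(x, y)) % p

print(inner_product([3, 5, 7], [2, 4, 6], p))  # (6 + 20 + 42) % 101 = 68
```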

  11. Resonant mode controllers for launch vehicle applications

    NASA Technical Reports Server (NTRS)

    Schreiner, Ken E.; Roth, Mary Ellen

    1992-01-01

    Electro-mechanical actuator (EMA) systems are currently being investigated for the National Launch System (NLS) as a replacement for hydraulic actuators due to the large amount of manpower and support hardware required to maintain the hydraulic systems. EMA systems in weight sensitive applications, such as launch vehicles, have been limited to around 5 hp due to system size, controller efficiency, thermal management, and battery size. Presented here are design and test data for an EMA system that competes favorably in weight and is superior in maintainability to the hydraulic system. An EMA system uses dc power provided by a high energy density bipolar lithium thionyl chloride battery, with power conversion performed by low loss resonant topologies, and a high efficiency induction motor controlled with a high performance field oriented controller to drive a linear actuator.

  12. Fabrication and Structural Design of Micro Pressure Sensors for Tire Pressure Measurement Systems (TPMS).

    PubMed

    Tian, Bian; Zhao, Yulong; Jiang, Zhuangde; Zhang, Ling; Liao, Nansheng; Liu, Yuanhao; Meng, Chao

    2009-01-01

    In this paper we describe the design and testing of a micro piezoresistive pressure sensor for a Tire Pressure Measurement System (TPMS) which has the advantages of a minimized structure, high sensitivity, linearity and accuracy. Through analysis of the stress distribution of the diaphragm using the ANSYS software, a model of the structure was established. The fabrication on a single silicon substrate utilizes the technologies of anisotropic chemical etching and packaging through glass anodic bonding. The performance of this type of piezoresistive sensor, including size, sensitivity, and long-term stability, were investigated. The results indicate that the accuracy is 0.5% FS, therefore this design meets the requirements for a TPMS, and not only has a smaller size and simplicity of preparation, but also has high sensitivity and accuracy.

  13. Effect of the surface heterogeneity of the stationary phase on the range of concentration for linear chromatography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gritti, Fabrice; Guiochon, Georges A

    2005-02-01

    The range of sample sizes within which linear chromatographic behavior is achieved in a column depends on the surface heterogeneity of the RPLC adsorbents. Two widely different commercial adsorbents were tested, the end-capped XTerra-C18 and the non-end-capped Resolve-C18. Adsorption isotherm data of caffeine were acquired by frontal analysis. These data were modeled and used to calculate the adsorption energy distribution (AED). This double analysis informs on the degree of surface heterogeneity. The best adsorption isotherm models are the bi-Langmuir and the tetra-Langmuir isotherms for XTerra and Resolve, respectively. Their respective AEDs are bimodal and quadrimodal distributions. This interpretation of the results and the actual presence of a low density of high-energy adsorption sites on Resolve-C18 were validated by measuring the dependence of the peak retention times on the size of caffeine samples (20-µL volume; concentrations 10, 1, 0.1, 10⁻², 10⁻³, 10⁻⁴, and 10⁻⁵ g/L). The experimental chromatograms agree closely with the band profiles calculated from the best isotherms. On Resolve-C18, the retention time decreases by 40% when the sample concentration is increased from 10⁻⁵ to 10 g/L. The decrease is only 10% for XTerra-C18 under the same conditions. The upper limit for linear behavior is 10⁻⁴ g/L for the former adsorbent and 0.01 g/L for the latter. The presence of a few high-energy adsorption sites on Resolve-C18, with an adsorption energy 20 kJ/mol larger than that of the low-energy sites (the corresponding difference on XTerra is only 5 kJ/mol), explains this difference. The existence of adsorption sites with a very high energy for certain compounds affects the reproducibility of their retention times and causes a rapid loss of efficiency in a sample size range within which linear behavior is incorrectly anticipated.
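    A hedged sketch of the bi-Langmuir picture described above (all parameters invented for illustration): once the sparse, high-energy site saturates, the apparent equilibrium ratio q/C, which drives retention, falls with sample concentration.

```python
# Bi-Langmuir isotherm: q(C) = qs1*b1*C/(1+b1*C) + qs2*b2*C/(1+b2*C),
# with site 2 a low-capacity, high-energy site (small qs2, large b2).

def bi_langmuir(c, qs1, b1, qs2, b2):
    return qs1 * b1 * c / (1 + b1 * c) + qs2 * b2 * c / (1 + b2 * c)

params = dict(qs1=100.0, b1=0.01, qs2=1.0, b2=100.0)  # illustrative only

k_low = bi_langmuir(1e-5, **params) / 1e-5   # dilute: far from saturation
k_high = bi_langmuir(10.0, **params) / 10.0  # concentrated: site 2 saturated
print(k_low > k_high)  # True: weaker apparent retention at high concentration
```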

  14. Avoidance, biomass and survival response of soil dwelling (endogeic) earthworms to OECD artificial soil: potential implications for earthworm ecotoxicology.

    PubMed

    Brami, C; Glover, A R; Butt, K R; Lowe, C N

    2017-05-01

    Soil dwelling earthworms are now adopted more widely in ecotoxicology, so it is vital to establish if standardised test parameters remain applicable. The main aim of this study was to determine the influence of OECD artificial soil on selected soil-dwelling, endogeic earthworm species. In an initial experiment, biomass change in mature Allolobophora chlorotica was recorded in Standard OECD Artificial Soil (AS) and also in Kettering Loam (KL). In a second experiment, avoidance behaviour was recorded in a linear gradient with varying proportions of AS and KL (100% AS, 75% AS + 25% KL, 50% AS + 50% KL, 25% AS + 75% KL, 100% KL) with either A. chlorotica or Octolasion cyaneum. Results showed a significant decrease in A. chlorotica biomass in AS relative to KL, and in the linear gradient, both earthworm species preferentially occupied sections containing higher proportions of KL over AS. Soil texture and specifically % composition and particle size of sand are proposed as key factors that influenced observed results. This research suggests that more suitable substrates are required for ecotoxicology tests with soil dwelling earthworms.

  15. Development of the Main Wing Structure of a High Altitude Long Endurance UAV

    NASA Astrophysics Data System (ADS)

    Park, Sang Wook; Shin, Jeong Woo; Kim, Tae-Uk

    2018-04-01

    To enhance the flight endurance of a HALE UAV, the main wing of the UAV should have a high aspect ratio and low structural weight. Since a main wing constructed with the thin-walled and slender components needed for low structural weight can suffer catastrophic failure during flight, it is important to develop a light-weight airframe without sacrificing structural integrity. In this paper, the design of the main wing of the HALE UAV was conducted using spars which were composed of a carbon-epoxy cylindrical tube and bulkheads to achieve both the weight reduction and structural integrity. The spars were sized using numerical analysis considering non-linear deformation under bending moment. Static strength testing of the wing was conducted under the most critical load condition. Then, the experimental results obtained for the wing were compared to the analytical result from the non-linear finite-element analysis. It was found that the developed main wing reduced its structural weight without any failure under the ultimate load condition of the static strength testing.

  16. Using 3D dynamic cartography and hydrological modelling for linear streamflow mapping

    NASA Astrophysics Data System (ADS)

    Drogue, G.; Pfister, L.; Leviandier, T.; Humbert, J.; Hoffmann, L.; El Idrissi, A.; Iffly, J.-F.

    2002-10-01

    This paper presents a regionalization methodology and an original representation of the downstream variation of daily streamflow using a conceptual rainfall-runoff model (HRM) and the 3D visualization tools of the GIS ArcView. The regionalization of the parameters of the HRM model was obtained by fitting simultaneously the runoff series from five sub-basins of the Alzette river basin (Grand Duchy of Luxembourg) according to the permeability of geological formations. After validating the transposability of the regional parameter values on five test basins, streamflow series were simulated with the model at ungauged sites in one medium-size geologically contrasted test basin and interpolated assuming a linear increase of streamflow between modelling points. 3D spatio-temporal cartography of mean annual and high raw and specific discharges is illustrated. During a severe flooding, the propagation of the flood waves in the different parts of the stream network shows an important contribution of sub-basins lying on impervious geological formations (direct runoff) compared with those including permeable geological formations which have a more contrasted hydrological response. The effect of spatial variability of rainfall is clearly perceptible.

  17. For the depolarization of linearly polarized light by smoke particles

    NASA Astrophysics Data System (ADS)

    Sun, Wenbo; Liu, Zhaoyan; Videen, Gorden; Fu, Qiang; Muinonen, Karri; Winker, David M.; Lukashin, Constantine; Jin, Zhonghai; Lin, Bing; Huang, Jianping

    2013-06-01

    The CALIPSO satellite mission consistently measures volume (including molecule and particulate) light depolarization ratio of ∼2% for smoke, compared to ∼1% for marine aerosols and ∼15% for dust. The observed ∼2% smoke depolarization ratio comes primarily from the nonspherical habits of particles in the smoke at certain particle sizes. In this study, the depolarization of linearly polarized light by small sphere aggregates and irregular Gaussian-shaped particles is studied, to reveal the physical relationship between the depolarization of linearly polarized light and smoke aerosol shape and size. It is found that the depolarization ratio curves of Gaussian-deformed spheres are very similar to sphere aggregates in terms of scattering-angle dependence and particle size parameters when particle size parameter is smaller than 1.0π. This demonstrates that small randomly oriented nonspherical particles have some common depolarization properties as functions of scattering angle and size parameter. This may be very useful information for characterization and active remote sensing of smoke particles using polarized light. We also show that the depolarization ratio from the CALIPSO measurements could be used to derive smoke aerosol particle size. From the calculation results for light depolarization ratio by Gaussian-shaped smoke particles and the CALIPSO-measured light depolarization ratio of ∼2% for smoke, the mean particle size of South-African smoke is estimated to be about half of the 532 nm wavelength of the CALIPSO lidar.

  18. Electric-field-induced association of colloidal particles

    NASA Astrophysics Data System (ADS)

    Fraden, Seth; Hurd, Alan J.; Meyer, Robert B.

    1989-11-01

    Dilute suspensions of micron diameter dielectric spheres confined to two dimensions are induced to aggregate linearly by application of an electric field. The growth of the average cluster size agrees well with the Smoluchowski equation, but the evolution of the measured cluster size distribution exhibits significant departures from theory at large times due to the formation of long linear clusters which effectively partition space into isolated one-dimensional strips.
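    The Smoluchowski comparison above can be illustrated with the constant-kernel analytic solution, under which the mean cluster size grows linearly in the scaled time T = N0*K*t/2. The constant kernel is our simplifying assumption (real dipolar chaining kernels are size dependent):

```python
# Constant-kernel Smoluchowski solution: k-mer concentrations are
# n_k/N0 = T**(k-1) / (1+T)**(k+1), and the mean cluster size is 1 + T,
# i.e. linear growth of the average aggregate.

def kmer_fractions(T, kmax=200):
    """n_k / N0 for k = 1..kmax."""
    return [T ** (k - 1) / (1 + T) ** (k + 1) for k in range(1, kmax + 1)]

def mean_cluster_size(T, kmax=200):
    nk = kmer_fractions(T, kmax)
    total_mass = sum(k * n for k, n in enumerate(nk, start=1))
    return total_mass / sum(nk)

print(round(mean_cluster_size(3.0), 6))  # 4.0, i.e. 1 + T
```

    The late-time departures reported above arise precisely where this mean-field, size-independent picture breaks down for long linear chains.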

  19. SU-G-BRB-02: An Open-Source Software Analysis Library for Linear Accelerator Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, J; Yaldo, D

    Purpose: Routine linac quality assurance (QA) tests have become complex enough to require automation of most test analyses. A new data analysis software library was built that allows physicists to automate routine linear accelerator quality assurance tests. The package is open source, code tested, and benchmarked. Methods: Images and data were generated on a TrueBeam linac for the following routine QA tests: VMAT, starshot, CBCT, machine logs, Winston Lutz, and picket fence. The analysis library was built using the general programming language Python. Each test was analyzed with the library algorithms and compared to manual measurements taken at the time of acquisition. Results: VMAT QA results agreed within 0.1% between the library and manual measurements. Machine logs (dynalogs & trajectory logs) were successfully parsed; mechanical axis positions were verified for accuracy and MLC fluence agreed well with EPID measurements. CBCT QA measurements were within 10 HU and 0.2mm where applicable. Winston Lutz isocenter size measurements were within 0.2mm of TrueBeam’s Machine Performance Check. Starshot analysis was within 0.2mm of the Winston Lutz results for the same conditions. Picket fence images with and without a known error showed that the library was capable of detecting MLC offsets within 0.02mm. Conclusion: A new routine QA software library has been benchmarked and is available for use by the community. The library is open-source and extensible for use in larger systems.

  20. Fatigue Life Methodology for Tapered Composite Flexbeam Laminates

    NASA Technical Reports Server (NTRS)

    Murri, Gretchen B.; OBrien, T. Kevin; Rousseau, Carl Q.

    1997-01-01

    The viability of a method for determining the fatigue life of composite rotor hub flexbeam laminates using delamination fatigue characterization data and a geometric non-linear finite element (FE) analysis was studied. Combined tension and bending loading was applied to non-linear tapered flexbeam laminates with internal ply drops. These laminates, consisting of coupon specimens cut from a full-size S2/E7T1 glass-epoxy flexbeam were tested in a hydraulic load frame under combined axial-tension and transverse cyclic bending. The magnitude of the axial load remained constant and the direction of the load rotated with the specimen as the cyclic bending load was applied. The first delamination damage observed in the specimens occurred at the area around the tip of the outermost ply-drop group. Subsequently, unstable delamination occurred by complete delamination along the length of the specimen. Continued cycling resulted in multiple delaminations. A 2D finite element model of the flexbeam was developed and a geometrically non-linear analysis was performed. The global responses of the model and test specimens agreed very well in terms of the transverse displacement. The FE model was used to calculate strain energy release rates (G) for delaminations initiating at the tip of the outer ply-drop area and growing toward the thick or thin regions of the flexbeam, as was observed in the specimens. The delamination growth toward the thick region was primarily mode 2, whereas delamination growth toward the thin region was almost completely mode 1. Material characterization data from cyclic double-cantilevered beam tests was used with the peak calculated G values to generate a curve predicting fatigue failure by unstable delamination as a function of the number of loading cycles. The calculated fatigue lives compared well with the test data.

  1. A Sawmill Manager Adapts To Change With Linear Programming

    Treesearch

    George F. Dutrow; James E. Granskog

    1973-01-01

    Linear programming provides guidelines for increasing sawmill capacity and flexibility and for determining stumpage-purchasing strategy. The operator of a medium-sized sawmill implemented improvements suggested by linear programming analysis; results indicate a 45 percent increase in revenue and a 36 percent hike in volume processed.
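
    The product-mix analysis described above can be sketched as a small linear program. All products, prices, and resource limits below are invented for illustration (the abstract gives none of the mill's actual data); a minimal sketch using `scipy.optimize.linprog`:

    ```python
    from scipy.optimize import linprog

    # Hypothetical sawmill product mix: maximize revenue from lumber and chips
    # subject to saw-time and log-supply limits. linprog minimizes, so the
    # revenue coefficients are negated.
    c = [-120.0, -80.0]            # revenue per unit of lumber, chips (invented)
    A_ub = [[2.0, 1.0],            # saw hours consumed per unit
            [4.0, 3.0]]            # board-feet of logs consumed per unit
    b_ub = [100.0, 240.0]          # available saw hours, available logs
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)         # optimal production plan and revenue
    ```

    Re-running such a program with one constraint relaxed shows which resource is limiting revenue, which is how an analysis of this kind suggests capacity improvements.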

  2. Permeability and compression characteristics of municipal solid waste samples

    NASA Astrophysics Data System (ADS)

    Durmusoglu, Ertan; Sanchez, Itza M.; Corapcioglu, M. Yavuz

    2006-08-01

    Four series of laboratory tests were conducted to evaluate the permeability and compression characteristics of municipal solid waste (MSW) samples. Two series of tests were conducted using a conventional small-scale consolidometer, and the other two in a large-scale consolidometer specially constructed for this study. In each consolidometer, the MSW samples were tested at two different moisture contents, i.e., original moisture content and field capacity. A scale effect between the two consolidometers of different sizes was investigated. The tests were carried out on samples consolidated under pressures of 123, 246, and 369 kPa. Time-settlement data gathered from each load increment were used to plot strain versus log-time graphs. The data acquired from the compression tests were used to back-calculate primary and secondary compression indices. The consolidometers were later adapted for permeability experiments. The values of the indices and the coefficient of compressibility for the MSW samples tested fell within a relatively narrow range regardless of consolidometer size and specimen moisture content. The values of the coefficient of permeability fell within a band of two orders of magnitude (10^-6 to 10^-4 m/s). The data presented in this paper agreed very well with the data reported by previous researchers. It was concluded that the scale effect in the compression behavior was significant; however, there was usually no linear relationship between the results obtained in the tests.

  3. The influence of coarse aggregate size and volume on the fracture behavior and brittleness of self-compacting concrete

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beygi, Morteza H.A., E-mail: M.beygi@nit.ac.ir; Kazemi, Mohammad Taghi, E-mail: Kazemi@sharif.edu; Nikbin, Iman M., E-mail: nikbin@iaurasht.ac.ir

    2014-12-15

    This paper presents the results of an experimental investigation on fracture characteristics and brittleness of self-compacting concrete (SCC), involving tests of 185 three-point bending beams with different coarse aggregate sizes and contents. The parameters were analyzed by the work-of-fracture method (WFM) and the size-effect method (SEM). The results showed that with increasing size and content of coarse aggregate, (a) the fracture energy increases, which is attributed to the change in fractal dimensions, (b) the behavior of SCC beams approaches the strength criterion, and (c) the characteristic length, regarded as an index of brittleness, increases linearly. It was also found that fracture energy increases as the w/c ratio decreases, which may be explained by the improvement in the structure of the aggregate-paste transition zone. In addition, the results showed a correlation between the fracture energy measured by WFM (G_F) and the value measured through SEM (G_f): G_F = 3.11 G_f.

  4. Caprylate Salts Based on Amines as Volatile Corrosion Inhibitors for Metallic Zinc: Theoretical and Experimental Studies.

    PubMed

    Valente, Marco A G; Teixeira, Deiver A; Azevedo, David L; Feliciano, Gustavo T; Benedetti, Assis V; Fugivara, Cecílio S

    2017-01-01

    The interaction of volatile corrosion inhibitors (VCIs), caprylate salts derived from amines, with metallic zinc surfaces was assessed by density functional theory (DFT) computer simulations, electrochemical impedance spectroscopy (EIS) measurements, and humid chamber tests. The results obtained by the different methods were compared, and linear correlations were obtained between theoretical and experimental data. The correlations between experimental and theoretical results showed that molecular size is the determining factor in inhibition efficiency. The models used and the experimental results indicated that dicyclohexylamine caprylate is the most efficient inhibitor.

  5. Effects of dust size distribution on dust acoustic waves in two-dimensional unmagnetized dusty plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He Guangjun; Duan Wenshan; Tian Duoxiang

    2008-04-15

    For unmagnetized dusty plasma with many different dust grain species containing both hot isothermal electrons and ions, both the linear dispersion relation and the Kadomtsev-Petviashvili equation for small, but finite amplitude dust acoustic waves are obtained. The linear dispersion relation is investigated numerically. Furthermore, the variations of amplitude, width, and propagation velocity of the nonlinear solitary wave with an arbitrary dust size distribution function are studied as well. Moreover, both the power law distribution and the Gaussian distribution are approximately simulated by using appropriate arbitrary dust size distribution functions.

  6. Linear theory on temporal instability of megahertz faraday waves for monodisperse microdroplet ejection.

    PubMed

    Tsai, Shirley C; Tsai, Chen S

    2013-08-01

    A linear theory on temporal instability of megahertz Faraday waves for monodisperse microdroplet ejection, based on mass conservation and the linearized Navier-Stokes equations, is presented using the recently observed micrometer-sized droplet ejection from a millimeter-sized spherical water ball as a specific example. The theory is verified in experiments utilizing silicon-based multiple-Fourier-horn ultrasonic nozzles at megahertz frequency to facilitate temporal instability of the Faraday waves. Specifically, the linear theory not only correctly predicted the Faraday wave frequency and onset threshold of Faraday instability, the effect of viscosity, and the dynamics of droplet ejection, but also established the first theoretical formula for the size of the ejected droplets, namely, the droplet diameter equals four-tenths of the Faraday wavelength involved. The high rate of increase in Faraday wave amplitude at megahertz drive frequency beyond the onset threshold, together with the enhanced excitation displacement on the nozzle end face facilitated by the megahertz multiple Fourier horns in resonance, led to high-rate ejection of micrometer-sized monodisperse droplets (>10^7 droplets/s) at low electrical drive power (<1 W) with short initiation time (<0.05 s). This is in stark contrast to the Rayleigh-Plateau instability of a liquid jet, which ejects one droplet at a time. The measured diameters of the droplets, ranging from 2.2 to 4.6 μm at 2 to 1 MHz drive frequency, fall within the optimum particle size range for pulmonary drug delivery.
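
    The paper's droplet-size rule can be turned into a quick numerical estimate. A minimal sketch, assuming water properties and taking the Faraday wavelength from the textbook capillary-wave dispersion relation (the Faraday wave oscillates at half the drive frequency); only d = 0.4λ comes from the abstract, the rest are standard assumptions:

    ```python
    import math

    def droplet_diameter(drive_freq_hz, sigma=0.072, rho=1000.0):
        """Estimate ejected droplet diameter (m) for water at a given drive frequency."""
        f_faraday = drive_freq_hz / 2.0                  # subharmonic Faraday response
        # capillary-wave dispersion: lambda = (2*pi*sigma / (rho * f^2))^(1/3)
        wavelength = (2 * math.pi * sigma / (rho * f_faraday ** 2)) ** (1 / 3)
        return 0.4 * wavelength                          # paper's formula: d = 0.4 * lambda

    print(droplet_diameter(2e6))   # ~3.1 um at a 2 MHz drive
    print(droplet_diameter(1e6))   # ~4.9 um at a 1 MHz drive
    ```

    These estimates land close to the reported 2.2-4.6 μm measurements over the same 1-2 MHz drive range.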

  7. A study on size effect of carboxymethyl starch nanogel crosslinked by electron beam radiation

    NASA Astrophysics Data System (ADS)

    Binh, Doan; Pham Thi Thu Hong; Nguyen Ngoc Duy; Nguyen Thanh Duoc; Nguyen Nguyet Dieu

    2012-07-01

    Carboxymethyl starch (CMS) nanogel with particle size below 50 nm was prepared by radiation crosslinking on an electron beam (EB) linear accelerator. Changes of intrinsic viscosity and weight-averaged molecular weight with absorbed dose were investigated for CMS concentrations ranging from 3 to 10 mg/ml. New peaks appeared in the 1H NMR spectra of the CMS nanogel compared with those of the CMS polymer. These results suggested that predominantly intramolecular crosslinking of the dilute CMS aqueous solution occurred while it was exposed to a short intense pulse of ionizing radiation. The hydrodynamic radius (often called particle size, Rh) and the particle size distribution were measured by a dynamic light scattering technique. The radiation yield of intermolecular crosslinking of the CMS solution was calculated from the expression of Gx (Charlesby, 1960; Jung-Chul, 2010). The influence of the "size effect" was demonstrated by culturing Lactobacillus bacteria on MRS agar medium containing CMS nanogel or polymer. Results showed that the number of Lactobacillus bacteria growing on the nanogel-containing culture medium was about 170 cfu/ml, while on the polymer-containing culture medium it was only 6 cfu/ml.

  8. Magnetically Suspended Linear Pulse Motor for Semiconductor Wafer Transfer in Vacuum Chamber

    NASA Technical Reports Server (NTRS)

    Moriyama, Shin-Ichi; Hiraki, Naoji; Watanabe, Katsuhide; Kanemitsu, Yoichi

    1996-01-01

    This paper describes a magnetically suspended linear pulse motor for a semiconductor wafer transfer robot in a vacuum chamber. The motor can drive a wafer transfer arm horizontally without mechanical contact. In the construction of the magnetic suspension system, four pairs of linear magnetic bearings for the lift control are used for the guidance control as well. This approach allows the whole motor to be compact in size and light in weight. The tested motor consists of a double-sided stator and a transfer arm with a width of 50 mm and a total length of 700 mm. The arm, shaped like a ladder, is designed as the floating element with a tooth width of 4 mm (a tooth pitch of 8 mm). The mover mass is limited to about 1.6 kg by adopting such an arm structure, and the ratio of thrust to mover mass reaches 3.2 N/kg under a broad air gap (1 mm) between the stator teeth and the mover teeth. Performance testing was carried out with a transfer distance of less than 450 mm and a transfer speed of less than 560 mm/s. The attitude of the arm was well controlled by the linear magnetic bearings in this combined role, and consequently the repeatability of the arm positioning reached about 2 microns. In addition, the positioning accuracy was improved to about 30 microns through compensation of the 128-step wave current used for the micro-step drive with a step increment of 62.5 microns.

  9. On summary measure analysis of linear trend repeated measures data: performance comparison with two competing methods.

    PubMed

    Vossoughi, Mehrdad; Ayatollahi, S M T; Towhidi, Mina; Ketabchi, Farzaneh

    2012-03-22

    The summary measure approach (SMA) is sometimes the only applicable tool for the analysis of repeated measurements in medical research, especially when the number of measurements is relatively large. This study aimed to describe techniques based on summary measures for the analysis of linear trend repeated measures data and then to compare the performances of SMA, the linear mixed model (LMM), and the unstructured multivariate approach (UMA). Practical guidelines based on the least squares regression slope and the mean response over time for each subject were provided to test time, group, and interaction effects. Through Monte Carlo simulation studies, the efficacy of SMA vs. LMM and the traditional UMA, under different types of covariance structures, was illustrated. All the methods were also employed to analyze two real data examples. Based on the simulation and example results, it was found that the SMA completely dominated the traditional UMA and performed convincingly close to the best-fitting LMM in testing all the effects. However, the LMM was often not robust and led to non-sensible results when the covariance structure for the errors was misspecified. The results emphasized discarding the UMA, which often yielded extremely conservative inferences for such data. It was shown that the summary measure approach is simple, safe, and powerful, with a generally negligible loss of efficiency compared to the best-fitting LMM. The SMA is recommended as the first choice for reliably analyzing linear trend data with a moderate to large number of measurements and/or small to moderate sample sizes.
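
    The slope-based summary measure described above can be sketched in a few lines: reduce each subject's series to its least-squares slope, then compare the slopes between groups with an ordinary two-sample test. The simulated data, group trends, and sample sizes below are invented for illustration:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    times = np.arange(6, dtype=float)                 # 6 repeated measurements per subject

    def group_slopes(trend, n_subjects=20):
        """Simulate subjects with a linear time trend plus noise; return per-subject slopes."""
        out = []
        for _ in range(n_subjects):
            y = trend * times + rng.normal(0.0, 1.0, times.size)
            out.append(np.polyfit(times, y, 1)[0])    # least-squares slope = summary measure
        return np.array(out)

    g1 = group_slopes(0.5)                            # hypothetical group trends
    g2 = group_slopes(1.0)
    t, p = stats.ttest_ind(g1, g2)                    # tests the group-by-time interaction
    print(p < 0.05)
    ```

    Testing the group effect analogously uses each subject's mean response over time instead of the slope.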

  10. Effect of size and indium-composition on linear and nonlinear optical absorption of InGaN/GaN lens-shaped quantum dot

    NASA Astrophysics Data System (ADS)

    Jbara, Ahmed S.; Othaman, Zulkafli; Saeed, M. A.

    2016-05-01

    Based on the Schrödinger equation for envelope function in the effective mass approximation, linear and nonlinear optical absorption coefficients in a multi-subband lens quantum dot are investigated. The effects of quantum dot size on the interband and intraband transitions energy are also analyzed. The finite element method is used to calculate the eigenvalues and eigenfunctions. Strain and In-mole-fraction effects are also studied, and the results reveal that with the decrease of the In-mole fraction, the amplitudes of linear and nonlinear absorption coefficients increase. The present computed results show that the absorption coefficients of transitions between the first excited states are stronger than those of the ground states. In addition, it has been found that the quantum dot size affects the amplitudes and peak positions of linear and nonlinear absorption coefficients while the incident optical intensity strongly affects the nonlinear absorption coefficients. Project supported by the Ministry of Higher Education and Scientific Research in Iraq, Ibnu Sina Institute and Physics Department of Universiti Teknologi Malaysia (UTM RUG Vote No. 06-H14).

  11. Critical Nucleation Length for Accelerating Frictional Slip

    NASA Astrophysics Data System (ADS)

    Aldam, Michael; Weikamp, Marc; Spatschek, Robert; Brener, Efim A.; Bouchbinder, Eran

    2017-11-01

    The spontaneous nucleation of accelerating slip along slowly driven frictional interfaces is central to a broad range of geophysical, physical, and engineering systems, with particularly far-reaching implications for earthquake physics. A common approach to this problem associates nucleation with an instability of an expanding creep patch upon surpassing a critical length Lc. The critical nucleation length Lc is conventionally obtained from a spring-block linear stability analysis extended to interfaces separating elastically deformable bodies using model-dependent fracture mechanics estimates. We propose an alternative approach in which the critical nucleation length is obtained from a related linear stability analysis of homogeneous sliding along interfaces separating elastically deformable bodies. For elastically identical half-spaces and rate-and-state friction, the two approaches are shown to yield Lc that features the same scaling structure, but with substantially different numerical prefactors, resulting in a significantly larger Lc in our approach. The proposed approach is also shown to be naturally applicable to finite-size systems and bimaterial interfaces, for which various analytic results are derived. To quantitatively test the proposed approach, we performed inertial Finite-Element-Method calculations for a finite-size two-dimensional elastically deformable body in rate-and-state frictional contact with a rigid body under sideway loading. We show that the theoretically predicted Lc and its finite-size dependence are in reasonably good quantitative agreement with the full numerical solutions, lending support to the proposed approach. These results offer a theoretical framework for predicting rapid slip nucleation along frictional interfaces.

  12. QSAR analysis for nano-sized layered manganese-calcium oxide in water oxidation: An application of chemometric methods in artificial photosynthesis.

    PubMed

    Shahbazy, Mohammad; Kompany-Zareh, Mohsen; Najafpour, Mohammad Mahdi

    2015-11-01

    Water oxidation is among the most important reactions in artificial photosynthesis, and nano-sized layered manganese-calcium oxides are efficient catalysts for this reaction. Herein, a quantitative structure-activity relationship (QSAR) model was constructed to predict the catalytic activities of twenty manganese-calcium oxides toward water oxidation, using multiple linear regression (MLR) for multivariate calibration and a genetic algorithm (GA) for feature selection. Although eight parameters are controlled during synthesis of the catalysts (ripening time, temperature, manganese content, calcium content, potassium content, the calcium:manganese ratio, the average manganese oxidation state, and the catalyst surface), the GA selected only three of them (potassium content, the calcium:manganese ratio, and the average manganese oxidation state) as the parameters with the strongest effect on the catalytic activities of these compounds. The model's accuracy criteria for predicting the catalytic rate of the external test set, R^2(test) and Q^2(test), were equal to 0.941 and 0.906, respectively. The model therefore shows acceptable capability to predict the catalytic activity. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Measuring and statistically testing the size of the effect of a chemical compound on a continuous in-vitro pharmacological response through a new statistical model of response detection limit

    PubMed Central

    Diaz, Francisco J.; McDonald, Peter R.; Pinter, Abraham; Chaguturu, Rathnam

    2018-01-01

    Biomolecular screening research frequently searches for the chemical compounds that are most likely to make a biochemical or cell-based assay system produce a strong continuous response. Several doses are tested with each compound, and it is assumed that, if there is a dose-response relationship, the relationship follows a monotonic curve, usually a version of the median-effect equation. However, the null hypothesis of no relationship cannot be statistically tested using this equation. We used a linearized version of this equation to define a measure of pharmacological effect size, and used this measure to rank the investigated compounds in order of their overall capability to produce strong responses. The null hypothesis that none of the examined doses of a particular compound produced a strong response can be tested with this approach. The proposed approach is based on a new statistical model of the important concept of response detection limit, a concept that is usually neglected in the analysis of dose-response data with continuous responses. The methodology is illustrated with data from a study searching for compounds that neutralize the infection of brain glioblastoma cells by a human immunodeficiency virus. PMID:24905187
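
    A sketch of the linearization involved: the median-effect equation fa/(1-fa) = (D/Dm)^m becomes linear in log coordinates, log(fa/(1-fa)) = m*log(D) - m*log(Dm), so the slope m and the median-effect dose Dm can be recovered by ordinary least squares. The doses and affected fractions below are invented for illustration; the paper's actual effect-size measure builds on this linearized form:

    ```python
    import math

    doses = [1.0, 3.0, 10.0, 30.0]         # hypothetical doses
    fa = [0.20, 0.45, 0.70, 0.90]          # hypothetical fraction affected at each dose

    x = [math.log10(d) for d in doses]
    y = [math.log10(f / (1 - f)) for f in fa]

    # ordinary least squares for slope m and intercept b = -m*log10(Dm)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    b = my - m * mx
    Dm = 10 ** (-b / m)                    # median-effect dose: the dose giving fa = 0.5
    print(round(m, 2), round(Dm, 2))
    ```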

  14. Inventory of forest and rangeland and detection of forest stress

    NASA Technical Reports Server (NTRS)

    Heller, R. C. (Principal Investigator); Aldrich, R. C.; Weber, F. P.; Driscoll, R. S.

    1973-01-01

    The author has identified the following significant results. Three small scales of CIR photography were interpreted to determine the number of bark beetle-killed trees detected in each of six spot size categories. A procedure was developed to predict the probability of detecting spots in each spot size category and in turn to estimate the number of infestations and dead trees even on the smallest scale. Statistical tests of the data indicated that the linear model did not fit the data and that other models should be tested. As a result of daily monitoring of Black Hills radiometric instruments it was possible to show the spectral energy relationships in the ponderosa pine ecosystems over time. These data have been helpful for comparison with radiance signatures extracted from ERTS-1 bulk 70mm using precision microdensitometry. Effects of atmospheric interference were shown by a 30 percent increase in scene radiance on channel 4 of the satellite imagery. A calibration and scaling technique was developed and tested to enable interpretation of ERTS-1 bulk and precision data for the Atlanta test site. The technique includes calibration of a photographic copy system for the I2S image combiner and the production of scaled overlays of grid coordinate systems, study area locations, and outline maps of county boundaries.

  15. Manufacturing Challenges and Benefits when Scaling the HIAD Stacked-Torus Aeroshell to a 15m Class System

    NASA Technical Reports Server (NTRS)

    Cheatwood, F. McNeil; Swanson, Gregory T.; Johnson, R. Keith; Hughes, Stephen; Calomino, Anthony; Gilles, Brian; Anderson, Paul; Bond, Bruce

    2016-01-01

    Over a decade of work has been conducted in the development of NASA's Hypersonic Inflatable Aerodynamic Decelerator (HIAD) deployable aeroshell technology. This effort has included multiple ground test campaigns and flight tests culminating in the HIAD project's second generation (Gen-2) aeroshell system. The HIAD project team has developed, fabricated, and tested stacked-torus inflatable structures (IS) with flexible thermal protection systems (F-TPS) ranging in diameters from 3-6m, with cone angles of 60 and 70 deg. To meet NASA and commercial near term objectives, the HIAD team must scale the current technology up to 12-15m in diameter. The HIAD project's experience in scaling the technology has reached a critical juncture. Growing from a 6m to a 15m class system will introduce many new structural and logistical challenges to an already complicated manufacturing process. Although the general architecture and key aspects of the HIAD design scale well to larger vehicles, details of the technology will need to be reevaluated and possibly redesigned for use in a 15m-class HIAD system. These include: layout and size of the structural webbing that transfers load throughout the IS, inflatable gas barrier design, torus diameter and braid construction, internal pressure and inflation line routing, adhesives used for coating and bonding, and F-TPS gore design and seam fabrication. The logistics of fabricating and testing the IS and the F-TPS also become more challenging with increased scale. Compared to the 6m aeroshell (the largest HIAD built to date), a 12m aeroshell has four times the cross-sectional area, and a 15m one has over six times the area. This means that fabrication and test procedures will need to be reexamined to account for the sheer size and weight of the aeroshell components. This will affect a variety of steps in the manufacturing process, such as: stacking the tori during assembly, stitching the structural webbing, initial inflation of tori, and stitching of F-TPS gores. Additionally, new approaches and hardware will be required for handling and ground testing of both individual tori and the fully assembled HIADs. There are also noteworthy benefits of scaling up the HIAD aeroshell to a 15m-class system. Two complications in working with handmade textile structures are the non-linearity of the materials and the role of human accuracy during fabrication. Larger, more capable HIAD structures should see much larger operational loads, potentially bringing the structural response of the materials out of the non-linear regime and into the preferred linear response range. Also, making the reasonable assumption that the magnitude of fabrication accuracy remains constant as the structures grow, the relative effect of fabrication errors should decrease as a percentage of the textile component size. Combined, these two effects improve the predictive capability and the uniformity of the structural response for a 12-15m class HIAD. In this paper, the challenges and associated mitigation plans related to scaling up the HIAD stacked-torus aeroshell to a 15m class system will be discussed. In addition, the benefits of enlarging the structure will be further explored.

  16. Second-order processing of four-stroke apparent motion.

    PubMed

    Mather, G; Murdoch, L

    1999-05-01

    In four-stroke apparent motion displays, pattern elements oscillate between two adjacent positions and synchronously reverse in contrast, but appear to move unidirectionally. For example, if rightward shifts preserve contrast but leftward shifts reverse contrast, consistent rightward motion is seen. In conventional first-order displays, elements reverse in luminance contrast (e.g. light elements become dark, and vice versa). The resulting perception can be explained by responses in elementary motion detectors tuned to spatio-temporal orientation. Second-order motion displays contain texture-defined elements, and there is some evidence that they excite second-order motion detectors that extract spatio-temporal orientation following the application of a non-linear 'texture-grabbing' transform by the visual system. We generated a variety of second-order four-stroke displays, containing texture-contrast reversals instead of luminance-contrast reversals, and used their effectiveness as a diagnostic test for the presence of various forms of non-linear transform in the second-order motion system. Displays containing only forward or only reversed phi motion sequences were also tested. Displays defined by variation in luminance, contrast, orientation, and size were effective. Displays defined by variation in motion, dynamism, and stereo were partially or wholly ineffective. Results obtained with contrast-reversing and four-stroke displays indicate that only relatively simple non-linear transforms (involving spatial filtering and rectification) are available during second-order energy-based motion analysis.

  17. The use of imputed sibling genotypes in sibship-based association analysis: on modeling alternatives, power and model misspecification.

    PubMed

    Minică, Camelia C; Dolan, Conor V; Hottenga, Jouke-Jan; Willemsen, Gonneke; Vink, Jacqueline M; Boomsma, Dorret I

    2013-05-01

    When phenotypic, but no genotypic, data are available for relatives of participants in genetic association studies, previous research has shown that family-based imputed genotypes can boost the statistical power when included in such studies. Here, using simulations, we compared the performance of two statistical approaches suitable for modeling imputed genotype data: the mixture approach, which involves the full distribution of the imputed genotypes, and the dosage approach, where the mean of the conditional distribution features as the imputed genotype. Simulations were run by varying sibship size, the size of the phenotypic correlations among siblings, imputation accuracy, and the minor allele frequency of the causal SNP. Furthermore, as imputing sibling data and extending the model to sibships of size two or greater requires modeling the familial covariance matrix, we inquired whether model misspecification affects power. Finally, the results obtained via simulations were empirically verified in two datasets with continuous phenotype data (height) and with a dichotomous phenotype (smoking initiation). Across the settings considered, the mixture and the dosage approach are equally powerful and both produce unbiased parameter estimates. In addition, the likelihood-ratio test in the linear mixed model appears to be robust to the considered misspecification in the background covariance structure, given low to moderate phenotypic correlations among siblings. Empirical results show that the inclusion of imputed sibling genotypes in association analysis does not always result in a larger test statistic. The actual test statistic may drop in value due to small effect sizes: if the power benefit is small, i.e., the change in the distribution of the test statistic under the alternative is relatively small, the probability of obtaining a smaller test statistic is greater. As genetic effects are typically hypothesized to be small, in practice the decision on whether family-based imputation could be used as a means to increase power should be informed by prior power calculations and by consideration of the background correlation.

  18. Bill size variation in northern cardinals associated with anthropogenic drivers across North America.

    PubMed

    Miller, Colleen R; Latimer, Christopher E; Zuckerberg, Benjamin

    2018-05-01

    Allen's rule predicts that homeotherms inhabiting cooler climates will have smaller appendages, while those inhabiting warmer climates will have larger appendages relative to body size. Birds' bills tend to be larger at lower latitudes, but few studies have tested whether modern climate change and urbanization affect bill size. Our study explored whether bill size in a wide-ranging bird would be larger in warmer, drier regions and increase with rising temperatures. Furthermore, we predicted that bill size would be larger in densely populated areas, due to urban heat island effects and the higher concentration of supplementary foods. Using measurements from 605 museum specimens, we explored the effects of climate and housing density on northern cardinal bill size over an 85-year period across the Linnaean subspecies' range. We quantified the geographic relationships between bill surface area, housing density, and minimum temperature using linear mixed effect models and geographically weighted regression. We then tested whether bill surface area changed due to housing density and temperature in three subregions (Chicago, IL, Washington, D.C., and Ithaca, NY). Across North America, cardinals occupying drier regions had larger bills, a pattern strongest in males. This relationship was mediated by temperature such that birds in warm, dry areas had larger bills than those in cool, dry areas. Over time, female cardinals' bill size increased with warming temperatures in Washington, D.C., and Ithaca. Bill size was smaller in developed areas of Chicago, but larger in Washington, D.C., while there was no pattern in Ithaca, NY. We found that climate and urbanization were strongly associated with bill size for a wide-ranging bird. These biogeographic relationships were characterized by sex-specific differences, varying relationships with housing density, and geographic variability. It is likely that anthropogenic pressures will continue to influence species, potentially promoting microevolutionary changes over space and time.

  19. Observation of Droplet Size Oscillations in a Two Phase Fluid under Shear Flow

    NASA Astrophysics Data System (ADS)

    Courbin, Laurent; Panizza, Pascal

    2004-11-01

    It is well known that complex fluids exhibit strong couplings between their microstructure and the flow field. Such couplings may lead to unusual non-linear rheological behavior. Because energy is constantly brought to the system, richer dynamic behavior, such as non-linear oscillatory or chaotic response, is expected. We report on the observation of droplet size oscillations at fixed shear rate. At low shear rates, we observe two steady states for which the droplet size results from a balance between capillary and viscous stresses. For intermediate shear rates, the droplet size becomes a periodic function of time. We propose a phenomenological model to account for the observed phenomenon and compare numerical results to experimental data.

  20. On a report that the 2012 M 6.0 earthquake in Italy was predicted after seeing an unusual cloud formation

    USGS Publications Warehouse

    Thomas, J.N.; Masci, F; Love, Jeffrey J.

    2015-01-01

    Several recently published reports have suggested that semi-stationary linear-cloud formations might be causally precursory to earthquakes. We examine the report of Guangmeng and Jie (2013), who claim to have predicted the 2012 M 6.0 earthquake in the Po Valley of northern Italy after seeing a satellite photograph (a digital image) showing a linear-cloud formation over the eastern Apennine Mountains of central Italy. From inspection of 4 years of satellite images we find numerous examples of linear-cloud formations over Italy. A simple test shows no obvious statistical relationship between the occurrence of these cloud formations and earthquakes that occurred in and around Italy. All of the linear-cloud formations we have identified in satellite images, including that which Guangmeng and Jie (2013) claim to have used to predict the 2012 earthquake, appear to be orographic – formed by the interaction of moisture-laden wind flowing over mountains. Guangmeng and Jie (2013) have not clearly stated how linear-cloud formations can be used to predict the size, location, and time of an earthquake, and they have not published an account of all of their predictions (including any unsuccessful predictions). We are skeptical of the validity of the claim by Guangmeng and Jie (2013) that they have managed to predict any earthquakes.

  1. Stability with large step sizes for multistep discretizations of stiff ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Majda, George

    1986-01-01

    One-leg and multistep discretizations of variable-coefficient linear systems of ODEs having both slow and fast time scales are investigated analytically. The stability properties of these discretizations are obtained independent of ODE stiffness and compared. The results of numerical computations are presented in tables, and it is shown that for large step sizes the stability of one-leg methods is better than that of the corresponding linear multistep methods.

  2. Linear Chord Diagrams with Long Chords

    NASA Astrophysics Data System (ADS)

    Sullivan, Everett

    A linear chord diagram of size n is a partition of the first 2n integers into sets of size two. These diagrams appear in many different contexts in combinatorics and other areas of mathematics, particularly knot theory. We explore various constraints that produce diagrams which have no short chords. A number of patterns appear from the results of these constraints which we can prove using techniques ranging from explicit bijections to non-commutative algebra.
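
    The objects defined above are concrete enough to enumerate by brute force. The sketch below (an illustrative aid with invented function names, not the paper's methods; chord length is taken here as the difference of a chord's endpoints) counts linear chord diagrams of size n in which every chord has length at least k:

```python
def matchings(points):
    """Enumerate all partitions of the sorted list `points` into pairs."""
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for i in range(len(rest)):
        # pair the smallest remaining point with each candidate partner
        for tail in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + tail

def count_min_length(n, k):
    """Count linear chord diagrams on 1..2n whose chords all have length >= k."""
    points = list(range(1, 2 * n + 1))
    return sum(all(b - a >= k for a, b in d) for d in matchings(points))
```

    With k = 1 this recovers the total count (2n-1)!! of all diagrams (3 for n = 2, 15 for n = 3); requiring k = 2 already excludes every size-2 diagram except {(1,3),(2,4)}. Since the number of diagrams grows as (2n-1)!!, such enumeration is only a check for small n.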

  3. Detection range enhancement using circularly polarized light in scattering environments for infrared wavelengths

    DOE PAGES

    van der Laan, J. D.; Sandia National Lab.; Scrymgeour, D. A.; ...

    2015-03-13

    We find that for infrared wavelengths there are broad ranges of particle sizes and refractive indices, representative of fog and rain, where the use of circular polarization can persist to longer ranges than linear polarization. Using polarization-tracking Monte Carlo simulations for varying particle size, wavelength, and refractive index, we show that for specific scene parameters circular polarization outperforms linear polarization in maintaining the intended polarization state over large optical depths. This enhancement with circular polarization can be exploited to improve range and target detection in obscurant environments that are important in many critical sensing applications. Specifically, circular polarization persists better than linear polarization for radiation fog in the short-wave infrared, for advection fog in the short-wave infrared and the long-wave infrared, and for large particle sizes of Sahara dust around the 4 micron wavelength.

  4. Quick probabilistic binary image matching: changing the rules of the game

    NASA Astrophysics Data System (ADS)

    Mustafa, Adnan A. Y.

    2016-09-01

    A Probabilistic Matching Model for Binary Images (PMMBI) is presented that predicts the probability of matching binary images with any level of similarity. The model relates the number of mappings, the amount of similarity between the images and the detection confidence. We show the advantage of using a probabilistic approach to matching in similarity space as opposed to a linear search in size space. With PMMBI a complete model is available to predict the quick detection of dissimilar binary images. Furthermore, the similarity between the images can be measured to a good degree if the images are highly similar. PMMBI shows that only a few pixels need to be compared to detect dissimilarity between images, as low as two pixels in some cases. PMMBI is image size invariant; images of any size can be matched at the same quick speed. Near-duplicate images can also be detected without much difficulty. We present tests on real images that show the prediction accuracy of the model.
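
    The claim that "only a few pixels need to be compared" follows from an elementary bound: if two images differ in a fraction p of their pixels, k independent random pixel comparisons expose a mismatch with probability 1 - (1 - p)^k. A minimal sketch of this bound (an illustrative calculation, not the PMMBI model itself; the function names are invented):

```python
import math

def detect_probability(p, k):
    """Probability that k random pixel comparisons expose a mismatch,
    given that the images differ in a fraction p of their pixels."""
    return 1.0 - (1.0 - p) ** k

def pixels_needed(p, confidence):
    """Smallest k with detect_probability(p, k) >= confidence."""
    # small epsilon guards against floating-point noise in the log ratio
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p) - 1e-9)
```

    For highly dissimilar images (p = 0.9), two sampled pixels already give 99% detection confidence, consistent with "as low as two pixels in some cases"; image size never enters the bound, matching the claimed size invariance.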

  5. Multiplexed and scalable super-resolution imaging of three-dimensional protein localization in size-adjustable tissues.

    PubMed

    Ku, Taeyun; Swaney, Justin; Park, Jeong-Yoon; Albanese, Alexandre; Murray, Evan; Cho, Jae Hun; Park, Young-Gyun; Mangena, Vamsi; Chen, Jiapei; Chung, Kwanghun

    2016-09-01

    The biology of multicellular organisms is coordinated across multiple size scales, from the subnanoscale of molecules to the macroscale, tissue-wide interconnectivity of cell populations. Here we introduce a method for super-resolution imaging of the multiscale organization of intact tissues. The method, called magnified analysis of the proteome (MAP), linearly expands entire organs fourfold while preserving their overall architecture and three-dimensional proteome organization. MAP is based on the observation that preventing crosslinking within and between endogenous proteins during hydrogel-tissue hybridization allows for natural expansion upon protein denaturation and dissociation. The expanded tissue preserves its protein content, its fine subcellular details, and its organ-scale intercellular connectivity. We use off-the-shelf antibodies for multiple rounds of immunolabeling and imaging of a tissue's magnified proteome, and our experiments demonstrate a success rate of 82% (100/122 antibodies tested). We show that specimen size can be reversibly modulated to image both inter-regional connections and fine synaptic architectures in the mouse brain.

  6. Endotracheal tube leak pressure and tracheal lumen size in swine.

    PubMed

    Finholt, D A; Audenaert, S M; Stirt, J A; Marcella, K L; Frierson, H F; Suddarth, L T; Raphaely, R C

    1986-06-01

    Endotracheal tube "leak" is often estimated in children to judge the fit of uncuffed endotracheal tubes within the trachea. Twenty-five swine were intubated with uncuffed tracheal tubes to determine whether a more sensitive measurement of leaks could be devised and whether leak pressure estimates the fit between tracheal tube and trachea. We compared leak pressure measurement using a stethoscope and aneroid manometer with a technique using a microphone, pressure transducer, and recorder, and found no differences between the two methods. The tracheas were then removed and slides prepared of tracheal cross-sectional specimens. Regression analysis revealed a linear relationship between tracheal lumen size and tracheal tube size for both the low leak pressure (y = -0.4 + 0.79x, r = 0.88, P < 0.05) and high leak pressure (y = -2.9 + 0.71x, r = 0.92, P < 0.05) groups. We conclude that leak testing with a stethoscope and aneroid manometer is sensitive and accurate, and that tracheal tube leak pressure accurately portrays the fit between tube and trachea.

  7. Body size and lower limb posture during walking in humans.

    PubMed

    Hora, Martin; Soumar, Libor; Pontzer, Herman; Sládek, Vladimír

    2017-01-01

    We test whether locomotor posture is associated with body mass and lower limb length in humans and explore how body size and posture affect net joint moments during walking. We acquired gait data for 24 females and 25 males using a three-dimensional motion capture system and pressure-measuring insoles. We employed the general linear model and commonality analysis to assess the independent effect of body mass and lower limb length on flexion angles at the hip, knee, and ankle while controlling for sex and velocity. In addition, we used inverse dynamics to model the effect of size and posture on net joint moments. At early stance, body mass has a negative effect on knee flexion (p < 0.01), whereas lower limb length has a negative effect on hip flexion (p < 0.05). Body mass uniquely explains 15.8% of the variance in knee flexion, whereas lower limb length uniquely explains 5.4% of the variance in hip flexion. Both of the detected relationships between body size and posture are consistent with the moment moderating postural adjustments predicted by our model. At late stance, no significant relationship between body size and posture was detected. Humans of greater body size reduce the flexion of the hip and knee at early stance, which results in the moderation of net moments at these joints.

  8. Home range size variation in female arctic grizzly bears relative to reproductive status and resource availability.

    PubMed

    Edwards, Mark A; Derocher, Andrew E; Nagy, John A

    2013-01-01

    The area traversed in pursuit of resources defines the size of an animal's home range. For females, the home range is presumed to be a function of forage availability. However, the presence of offspring may also influence home range size due to reduced mobility, increased nutritional need, and behavioral adaptations of mothers to increase offspring survival. Here, we examine the relationship between resource use and variation in home range size for female barren-ground grizzly bears (Ursus arctos) of the Mackenzie Delta region in Arctic Canada. We develop methods to test hypotheses of home range size that address selection of cover where cover heterogeneity is low, using generalized linear mixed-effects models and an information-theoretic approach. We found that the reproductive status of female grizzlies affected home range size but individually-based spatial availability of highly selected cover in spring and early summer was a stronger correlate. If these preferred covers in spring and early summer, a period of low resource availability for grizzly bears following den-emergence, were patchy and highly dispersed, females travelled farther regardless of the presence or absence of offspring. Increased movement to preferred covers, however, may result in greater risk to the individual or family.

  9. The effects of delay duration on visual working memory for orientation.

    PubMed

    Shin, Hongsup; Zou, Qijia; Ma, Wei Ji

    2017-12-01

    We used a delayed-estimation paradigm to characterize the joint effects of set size (one, two, four, or six) and delay duration (1, 2, 3, or 6 s) on visual working memory for orientation. We conducted two experiments: one with delay durations blocked, another with delay durations interleaved. As dependent variables, we examined four model-free metrics of dispersion as well as precision estimates in four simple models. We tested for effects of delay time using analyses of variance, linear regressions, and nested model comparisons. We found significant effects of set size and delay duration on both model-free and model-based measures of dispersion. However, the effect of delay duration was much weaker than that of set size, dependent on the analysis method, and apparent in only a minority of subjects. The highest forgetting slope found in either experiment at any set size was a modest 1.14°/s. As secondary results, we found a low rate of nontarget reports, and significant estimation biases towards oblique orientations (but no dependence of their magnitude on either set size or delay duration). Relative stability of working memory even at higher set sizes is consistent with earlier results for motion direction and spatial frequency. We compare with a recent study that performed a very similar experiment.

  10. Application of porous titanium in prosthesis production using a moldless process: Evaluation of physical and mechanical properties with various particle sizes, shapes, and mixing ratios.

    PubMed

    Prananingrum, Widyasri; Tomotake, Yoritoki; Naito, Yoshihito; Bae, Jiyoung; Sekine, Kazumitsu; Hamada, Kenichi; Ichikawa, Tetsuo

    2016-08-01

    The prosthetic applications of titanium have been challenging because titanium does not possess suitable properties for the conventional casting method using the lost-wax technique. We have developed a production method for biomedical application of porous titanium using a moldless process. This study aimed to evaluate the physical and mechanical properties of porous titanium with various particle sizes, shapes, and mixing ratios of titanium powder to wax binder for use in prosthesis production. CP Ti powders with different particle sizes, shapes, and mixing ratios were divided into five groups. A 90:10 wt% mixture of titanium powder and wax binder was prepared manually at 70°C. After debinding at 380°C, the specimens were sintered in Ar at 1100°C without a mold for 1 h. The linear shrinkage ratio of sintered specimens ranged from 2.5% to 14.2% and increased with decreasing particle size. While the linear shrinkage ratios of Groups 3, 4, and 5 were approximately 2%, Group 1 showed the highest shrinkage of all. The bending strength ranged from 106 to 428 MPa, governed by porosity: Groups 1 and 2 presented low porosity and correspondingly higher strength. The shear bond strength, which ranged from 32 to 100 MPa, was also particle-size dependent. A decrease in porosity was accompanied by an increase in the linear shrinkage ratio and bending strength. The shrinkage and mechanical strength required for prostheses depended on the particle size and shape of the titanium powders. These findings suggest that this production method can be applied to prosthetic frameworks through appropriate material design. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Dependence of Raman Spectral Intensity on Crystal Size in Organic Nano Energetics.

    PubMed

    Patel, Rajen B; Stepanov, Victor; Qiu, Hongwei

    2016-08-01

    Raman spectra of various nitramine energetic compounds were investigated as a function of crystal size in the nanoscale regime. In the case of 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (CL-20), there was a linear relationship between Raman spectral intensity and crystal size. Notably, the Raman modes between 120 cm(-1) and 220 cm(-1) were especially affected and, at the smallest crystal size, were completely eliminated. The Raman spectral intensity of octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX), like that of CL-20, depended linearly on crystal size. The Raman spectral intensity of 1,3,5-trinitroperhydro-1,3,5-triazine (RDX), however, was not observably changed by crystal size. A non-nitramine explosive compound, 2,4,6-triamino-1,3,5-trinitrobenzene (TATB), was also investigated. Its spectral intensity was likewise found to correlate linearly with crystal size, although substantially less so than that of HMX and CL-20. To explain the observed trends, it is hypothesized that a disordered molecular arrangement, originating at the crystal surface, may be responsible. In particular, it appears that the thickness of the disordered surface layer depends on molecular characteristics, including size and conformational flexibility. Furthermore, as the mean crystal size decreases, the volume fraction of disordered molecules within a specimen increases, consequently weakening the Raman intensity. These results could have practical benefit by allowing facile monitoring of crystal size during manufacturing. Finally, these findings could lead to deeper insights into the general structure of crystal surfaces. © The Author(s) 2016.
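
    The surface-layer hypothesis can be made quantitative with simple geometry: for a roughly spherical crystal of diameter d carrying a disordered shell of thickness t, the disordered volume fraction is 1 - ((d - 2t)/d)^3, which grows as d shrinks. A back-of-the-envelope sketch (an illustrative model, not the authors' analysis; the shell thickness used below is hypothetical):

```python
def disordered_fraction(d, t):
    """Volume fraction of a spherical crystal (diameter d) occupied by
    a disordered surface shell of thickness t (same length units)."""
    if d <= 2.0 * t:
        return 1.0  # the shells meet: the whole particle is disordered
    return 1.0 - ((d - 2.0 * t) / d) ** 3
```

    With a hypothetical 5 nm shell, a 500 nm crystal is about 6% disordered while a 50 nm crystal is about 49% disordered, qualitatively reproducing the weakening of Raman intensity at small crystal sizes.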

  12. Axial diffusivity of the corona radiata correlated with ventricular size in adult hydrocephalus.

    PubMed

    Cauley, Keith A; Cataltepe, Oguz

    2014-07-01

    Hydrocephalus causes changes in the diffusion-tensor properties of periventricular white matter. Understanding the nature of these changes may aid in the diagnosis and treatment planning of this relatively common neurologic condition. Because ventricular size is a common measure of the severity of hydrocephalus, we hypothesized that a quantitative correlation could be made between the ventricular size and diffusion-tensor changes in the periventricular corona radiata. In this article, we investigated this relationship in adult patients with hydrocephalus and in healthy adult subjects. Diffusion-tensor imaging metrics of the corona radiata were correlated with ventricular size in 14 adult patients with acute hydrocephalus, 16 patients with long-standing hydrocephalus, and 48 consecutive healthy adult subjects. Regression analysis was performed to investigate the relationship between ventricular size and the diffusion-tensor metrics of the corona radiata. Subject age was analyzed as a covariable. There is a linear correlation between fractional anisotropy of the corona radiata and ventricular size in acute hydrocephalus (r = 0.784, p < 0.001), with positive correlation with axial diffusivity (r = 0.636, p = 0.014) and negative correlation with radial diffusivity (r = 0.668, p = 0.009). In healthy subjects, axial diffusion in the periventricular corona radiata is more strongly correlated with ventricular size than with patient age (r = 0.466, p < 0.001, compared with r = 0.058, p = 0.269). Axial diffusivity of the corona radiata is linearly correlated with ventricular size in healthy adults and in patients with hydrocephalus. Radial diffusivity of the corona radiata decreases linearly with ventricular size in acute hydrocephalus but is not significantly correlated with ventricular size in healthy subjects or in patients with long-standing hydrocephalus.

  13. Testing goodness of fit in regression: a general approach for specified alternatives.

    PubMed

    Solari, Aldo; le Cessie, Saskia; Goeman, Jelle J

    2012-12-10

    When fitting generalized linear models or the Cox proportional hazards model, it is important to have tools to test for lack of fit. Because lack of fit comes in all shapes and sizes, distinguishing among different types of lack of fit is of practical importance. We argue that an adequate diagnosis of lack of fit requires a specified alternative model. Such specification identifies the type of lack of fit the test is directed against, so that if we reject the null hypothesis, we know the direction of the departure from the model. The goodness-of-fit approach of this paper allows one to treat different types of lack of fit within a unified general framework and to consider many existing tests as special cases. Connections with penalized likelihood and random effects are discussed, and the application of the proposed approach is illustrated with medical examples. Tailored functions for goodness-of-fit testing have been implemented in the R package globaltest. Copyright © 2012 John Wiley & Sons, Ltd.

  14. Delamination growth in composite materials

    NASA Technical Reports Server (NTRS)

    Gillespie, J. W., Jr.; Carlsson, L. A.; Pipes, R. B.; Rothschilds, R.; Trethewey, B.; Smiley, A.

    1986-01-01

    The Double Cantilever Beam (DCB) and the End Notched Flexure (ENF) specimens are employed to characterize Mode I and Mode II interlaminar fracture resistance of graphite/epoxy (CYCOM 982) and graphite/PEEK (APC2) composites. Sizing of test specimen geometries to achieve crack growth in the linear elastic regime is presented. Data reduction schemes based upon beam theory are derived for the ENF specimen and include the effects of shear deformation and friction between crack surfaces on compliance, C, and strain energy release rate, G sub II. Finite element (FE) analyses of the ENF geometry, including the contact problem with friction, are presented to assess the accuracy of the beam theory expressions for C and G sub II. Virtual crack closure techniques verify that the ENF specimen is a pure Mode II test. Beam theory expressions are shown to be conservative by 20 to 40 percent for typical unidirectional test specimen geometries. A FE parametric study investigating the influence of delamination length and depth, span, thickness, and material properties on G sub II is presented. Mode I and Mode II interlaminar fracture test results are presented. Important experimental parameters are isolated, such as precracking techniques, rate effects, and nonlinear load-deflection response. It is found that subcritical crack growth and inelastic material behavior, responsible for the observed nonlinearities, are highly rate-dependent phenomena, with high rates generally leading to linear elastic response.

  15. Testing and Life Prediction for Composite Rotor Hub Flexbeams

    NASA Technical Reports Server (NTRS)

    Murri, Gretchen B.

    2004-01-01

    A summary of several studies of delamination in tapered composite laminates with internal ply-drops is presented. Initial studies used 2D FE models to calculate interlaminar stresses at the ply-ending locations in linear tapered laminates under tension loading. Strain energy release rates for delamination in these laminates indicated that delamination would likely start at the juncture of the tapered and thin regions and grow unstably in both directions. Tests of glass/epoxy and graphite/epoxy linear tapered laminates under axial tension delaminated as predicted. Nonlinear tapered specimens were cut from a full-size helicopter rotor hub and were tested under combined constant axial tension and cyclic transverse bending loading to simulate the loading experienced by a rotor hub flexbeam in flight. For all the tested specimens, delamination began at the tip of the outermost dropped ply group and grew first toward the tapered region. A 2D FE model was created that duplicated the test flexbeam layup, geometry, and loading. Surface strains calculated by the model agreed very closely with the measured surface strains in the specimens. The delamination patterns observed in the tests were simulated in the model by releasing pairs of multipoint constraints (MPCs) along those interfaces. Strain energy release rates associated with the delamination growth were calculated for several configurations using two different FE analysis codes. Calculations from the codes agreed very closely. The strain energy release rate results were used with material characterization data to predict fatigue delamination onset lives for nonlinear tapered flexbeams with two different ply-dropping schemes. The predicted curves agreed well with the test data for each case studied.

  16. Three-dimensional Finite Element Formulation and Scalable Domain Decomposition for High Fidelity Rotor Dynamic Analysis

    NASA Technical Reports Server (NTRS)

    Datta, Anubhav; Johnson, Wayne R.

    2009-01-01

    This paper has two objectives. The first is to formulate a three-dimensional finite element model for the dynamic analysis of helicopter rotor blades. The second is to implement and analyze a dual-primal iterative substructuring based Krylov solver, parallel and scalable, for the solution of the 3-D FEM analysis. The numerical and parallel scalability of the solver is studied using two prototype problems, one for ideal hover (symmetric) and one for transient forward flight (non-symmetric), both carried out on up to 48 processors. In both hover and forward flight conditions, a perfect linear speed-up is observed, for a given problem size, up to the point of substructure optimality. Substructure optimality and the linear parallel speed-up range are both shown to depend on the problem size as well as on the selection of the coarse problem. With a larger problem size, linear speed-up is restored up to the new substructure optimality. The solver also scales with problem size, even though this conclusion is premature given the small prototype grids considered in this study.

  17. Simple and multiple linear regression: sample size considerations.

    PubMed

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.
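
    The closed-form variance alluded to above is, for coefficient j in a multiple regression, Var(beta_hat_j) = sigma^2 / (n * Var(x_j) * (1 - R2_j)), where sigma is the residual standard deviation and R2_j is the squared multiple correlation of x_j with the other covariates. Rearranging for n gives a quick sample-size sketch (a generic textbook calculation; the function name and interface are invented here, not taken from the article):

```python
import math

def n_for_target_se(resid_sd, x_sd, r2_with_others, target_se):
    """Sample size n such that SE(beta_hat) = resid_sd /
    sqrt(n * x_sd**2 * (1 - r2_with_others)) drops to target_se."""
    n = resid_sd ** 2 / (x_sd ** 2 * (1.0 - r2_with_others) * target_se ** 2)
    return math.ceil(n)
```

    With unit residual SD, unit exposure SD, and no collinearity, pinning SE(beta_hat) at 0.1 requires n = 100; if the exposure shares half its variance with confounders (R2 = 0.5), the requirement doubles to 200, a variance-inflation effect that per-variable rules of thumb do not capture.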

  18. Can power-law scaling and neuronal avalanches arise from stochastic dynamics?

    PubMed

    Touboul, Jonathan; Destexhe, Alain

    2010-02-11

    The presence of self-organized criticality in biology is often evidenced by a power-law scaling of event size distributions, which can be measured by linear regression on logarithmic axes. We show here that such a procedure does not necessarily mean that the system exhibits self-organized criticality. We first provide an analysis of multisite local field potential (LFP) recordings of brain activity and show that event size distributions defined as negative LFP peaks can be close to power-law distributions. However, this result is not robust to change in detection threshold, or when tested using more rigorous statistical analyses such as the Kolmogorov-Smirnov test. Similar power-law scaling is observed for surrogate signals, suggesting that power-law scaling may be a generic property of thresholded stochastic processes. We next investigate this problem analytically, and show that, indeed, stochastic processes can produce spurious power-law scaling without the presence of underlying self-organized criticality. However, this power-law is only apparent in logarithmic representations, and does not survive more rigorous analysis such as the Kolmogorov-Smirnov test. The same analysis was also performed on an artificial network known to display self-organized criticality. In this case, both the graphical representations and the rigorous statistical analysis reveal with no ambiguity that the avalanche size is distributed as a power-law. We conclude that logarithmic representations can lead to spurious power-law scaling induced by the stochastic nature of the phenomenon. This apparent power-law scaling does not constitute a proof of self-organized criticality, which should be demonstrated by more stringent statistical tests.
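
    The pitfall described above is easy to reproduce numerically: event sizes drawn from a lognormal distribution (no criticality involved) produce a convincingly straight line on log-log axes, yet a Kolmogorov-Smirnov comparison against the best-fit power law rejects it. A minimal sketch (illustrative only; the distribution, seed, and thresholds are arbitrary choices, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(42)
sizes = rng.lognormal(mean=1.0, sigma=1.0, size=20000)  # NOT a power law

# Apparent power law: least-squares line on a log-log histogram of the tail.
xmin = np.median(sizes)
tail = np.sort(sizes[sizes >= xmin])
bins = np.logspace(np.log10(tail[0]), np.log10(tail[-1]), 20)
counts, edges = np.histogram(tail, bins=bins)
centers = np.sqrt(edges[:-1] * edges[1:])          # geometric bin centers
keep = counts > 0
log_x, log_y = np.log10(centers[keep]), np.log10(counts[keep])
slope, _ = np.polyfit(log_x, log_y, 1)
r2 = np.corrcoef(log_x, log_y)[0, 1] ** 2

# More rigorous check: KS distance to the MLE-fitted Pareto tail.
alpha = 1.0 + tail.size / np.sum(np.log(tail / xmin))  # Hill/MLE exponent
ecdf = np.arange(1, tail.size + 1) / tail.size
pareto_cdf = 1.0 - (tail / xmin) ** (1.0 - alpha)
ks_distance = np.max(np.abs(ecdf - pareto_cdf))
ks_critical = 1.36 / np.sqrt(tail.size)  # approx. 5% significance level
```

    The log-log fit looks persuasive (negative slope, high r2), but ks_distance far exceeds the approximate 5% critical value, so the Pareto hypothesis is rejected, mirroring the paper's conclusion that straight lines on logarithmic axes are weak evidence of criticality.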

  19. A powerful and flexible approach to the analysis of RNA sequence count data.

    PubMed

    Zhou, Yi-Hui; Xia, Kai; Wright, Fred A

    2011-10-01

    A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean-variance relationships provides a flexible testing regimen that 'borrows' information across genes, while easily incorporating design effects and additional covariates. We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data, and (ii) an extension of an expression mean-variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternative methods for handling RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq. Supplementary data are available at Bioinformatics online.

  20. Trajectory measurements and correlations in the final focus beam line at the KEK Accelerator Test Facility

    NASA Astrophysics Data System (ADS)

    Renier, Y.; Bambade, P.; Tauchi, T.; White, G. R.; Boogert, S.

    2013-06-01

    The Accelerator Test Facility 2 (ATF2) commissioning group aims to demonstrate the feasibility of the beam delivery system of the next linear colliders (ILC and CLIC), as well as to define and test the tuning methods. As the design vertical beam sizes of the linear colliders are only a few nanometers, the stability of the trajectory as well as the control of the aberrations are very critical. ATF2 commissioning started in December 2008, and thanks to submicron-resolution beam position monitors (BPMs), it has been possible to measure the beam position fluctuation along the final focus of ATF2 during the 2009 runs. The optics was not yet the nominal one, with a lower focusing to make the tuning easier. In this paper, a method to measure the noise of each BPM on every pulse, in a model-independent way, is presented. A method to reconstruct the trajectory's fluctuations is developed which uses the previously determined BPM resolution. As this reconstruction provides a measurement of the beam energy fluctuations, it was also possible to measure the horizontal and vertical dispersion functions at each BPM parasitically. The spatial and angular dispersions can be fitted from these measurements with uncertainties comparable to those of the usual measurements.

  1. QSAR study of curcumine derivatives as HIV-1 integrase inhibitors.

    PubMed

    Gupta, Pawan; Sharma, Anju; Garg, Prabha; Roy, Nilanjan

    2013-03-01

    A QSAR study was performed on curcumine derivatives as HIV-1 integrase inhibitors using multiple linear regression. A statistically significant model was developed with a squared correlation coefficient (r(2)) of 0.891 and a cross-validated r(2) (r(2)cv) of 0.825. The developed model revealed that electronic properties, shape, size, geometry, substitution information, and hydrophilicity were important atomic properties for determining the inhibitory activity of these molecules. The model was also tested successfully for external validation (r(2)pred = 0.849), as well as with Tropsha's test for model predictability. Furthermore, a domain analysis was carried out to evaluate the prediction reliability for external-set molecules. The model was statistically robust and had good predictive power, and can be utilized for screening of new molecules.

  2. Simultaneous Determination of Piperine, Capsaicin, and Dihydrocapsaicin in Korean Instant-Noodle (Ramyun) Soup Base Using High-Performance Liquid Chromatography with Ultraviolet Detection.

    PubMed

    Shim, You-Shin; Kim, Jong-Chan; Jeong, Seung-Weon

    2016-01-01

    A simultaneous analytical method for piperine, capsaicin, and dihydrocapsaicin in Korean instant-noodle soup base using HPLC was validated in terms of precision, accuracy, sensitivity, and linearity. The HPLC separation was performed on a reversed-phase C18 column (5 μm particle size, 4.6 mm id, 250 mm length) using a UV detector fixed at 280 nm. The LOD and LOQ of the HPLC analyses ranged from 0.25 to 1.03 mg/kg. The intraday and interday precisions of the individual piperine, capsaicin, and dihydrocapsaicin were <10.55%, and the recovery values ranged from 85.43 to 94.68%. The calibration curves exhibited good linearity (r(2) = 0.999) within the tested ranges. These results suggest that the analytical method in this study can be used to classify Korean instant noodles based on their levels of spiciness.
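
    The LOD and LOQ quoted above are conventionally derived from the calibration curve (ICH-style: LOD = 3.3*sigma/S and LOQ = 10*sigma/S, with sigma the residual standard deviation and S the slope). A hedged sketch of that convention (illustrative; not necessarily the authors' exact computation, and the function name is invented):

```python
import numpy as np

def lod_loq(conc, response):
    """ICH-style detection/quantification limits from a linear calibration:
    LOD = 3.3*sigma/S, LOQ = 10*sigma/S (sigma = residual SD, S = slope)."""
    slope, intercept = np.polyfit(conc, response, 1)
    resid = response - (slope * conc + intercept)
    sigma = np.std(resid, ddof=2)  # two parameters estimated from the fit
    return 3.3 * sigma / slope, 10.0 * sigma / slope
```

    A perfectly linear calibration yields limits near zero, and LOQ always sits at 10/3.3 times LOD, so the two limits scale together as calibration noise grows.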

  3. On solving three-dimensional open-dimension rectangular packing problems

    NASA Astrophysics Data System (ADS)

    Junqueira, Leonardo; Morabito, Reinaldo

    2017-05-01

    In this article, a recently proposed three-dimensional open-dimension rectangular packing problem is considered, in which the objective is to find a minimal-volume rectangular container that packs a set of rectangular boxes. The literature has tackled small-sized instances of this problem by means of optimization solvers, position-free mixed-integer programming (MIP) formulations, and piecewise linearization approaches. In this study, the problem is alternatively addressed by means of grid-based position MIP formulations, while still considering optimization solvers and the same piecewise linearization techniques. A comparison of the computational performance of both models is then presented, when tested with benchmark problem instances and with new instances, and it is shown that the grid-based position MIP formulation can be competitive, depending on the characteristics of the instances. The grid-based position MIP formulation is also extended with real-world practical constraints, such as cargo stability, and results are additionally presented.

  4. On the analytical modeling of the nonlinear vibrations of pretensioned space structures

    NASA Technical Reports Server (NTRS)

    Housner, J. M.; Belvin, W. K.

    1983-01-01

    Pretensioned structures are receiving considerable attention as candidate large space structures. A typical example is a hoop-column antenna. The large number of preloaded members requires efficient analytical methods for concept validation and design. Validation through analysis is especially important since ground testing may be limited by gravity effects and structural size. The objective of the present investigation is to examine the analytical modeling of pretensioned members undergoing nonlinear vibrations. Two approximate nonlinear analyses are developed to model general structural arrangements which include beam-columns and pretensioned cables attached to a common nucleus, such as may occur at a joint of a pretensioned structure. Attention is given to structures undergoing nonlinear steady-state oscillations due to sinusoidal excitation forces. Three analyses (linear, quasi-linear, and nonlinear) are conducted and applied to study the response of a relatively simple cable-stiffened structure.

  5. Browndye: A software package for Brownian dynamics

    NASA Astrophysics Data System (ADS)

    Huber, Gary A.; McCammon, J. Andrew

    2010-11-01

    A new software package, Browndye, is presented for simulating the diffusional encounter of two large biological molecules. It can be used to estimate second-order rate constants and encounter probabilities, and to explore reaction trajectories. Browndye builds upon previous knowledge and algorithms from software packages such as UHBD, SDA, and Macrodox, while implementing algorithms that scale to larger systems. Program summary: Program title: Browndye; Catalogue identifier: AEGT_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGT_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: MIT license, included in distribution; No. of lines in distributed program, including test data, etc.: 143 618; No. of bytes in distributed program, including test data, etc.: 1 067 861; Distribution format: tar.gz; Programming language: C++, OCaml (http://caml.inria.fr/); Computer: PC, Workstation, Cluster; Operating system: Linux; Has the code been vectorised or parallelized?: Yes, runs on multiple processors with shared memory using pthreads; RAM: depends linearly on size of physical system; Classification: 3; External routines: uses the output of APBS [1] (http://www.poissonboltzmann.org/apbs/) as input; APBS must be obtained and installed separately. Expat 2.0.1, CLAPACK, ocaml-expat, and Mersenne Twister are included in the Browndye distribution; Nature of problem: exploration and determination of rate constants of bimolecular interactions involving large biological molecules; Solution method: Brownian dynamics with electrostatic, excluded volume, van der Waals, and desolvation forces; Running time: depends linearly on size of physical system and quadratically on precision of results. The included example executes in a few minutes.

  6. Influence of equilibrium shear flow in the parallel magnetic direction on edge localized mode crash

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y.; Xiong, Y. Y.; Chen, S. Y., E-mail: sychen531@163.com

    2016-04-15

    The influence of the parallel shear flow on the evolution of peeling-ballooning (P-B) modes is studied with the BOUT++ four-field code in this paper. The parallel shear flow has different effects in linear and nonlinear simulations. In the linear simulations, the growth rate of the edge localized mode (ELM) can be increased by the Kelvin-Helmholtz term, which can be caused by the parallel shear flow. In the nonlinear simulations, the results accord with the linear simulations in the linear phase. However, the ELM size is reduced by the parallel shear flow at the beginning of the turbulence phase, which is recognized as the P-B filaments' structure. Then, during the turbulence phase, the ELM size is decreased by the shear flow.

  7. Linear Approximation SAR Azimuth Processing Study

    NASA Technical Reports Server (NTRS)

    Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.

    1979-01-01

    A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratically varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focused processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 ICs, 1.2 cubic feet of volume, and 350 watts of power for a single-look, 4000-range-cell azimuth processor with 25 meters resolution.
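The trade-off described above can be sketched numerically (a hedged illustration in arbitrary units, not the SEASAT-A design): replacing the quadratic focusing phase with a chord-wise segmented linear approximation leaves a residual phase error that shrinks quadratically with the number of segments.

```python
import numpy as np

def quadratic_phase(t, k=1.0):
    # ideal quadratic focusing phase (arbitrary units)
    return np.pi * k * t**2

def segmented_linear(t, n_seg, k=1.0):
    # chord-wise piecewise-linear approximation on n_seg equal segments
    knots = np.linspace(t.min(), t.max(), n_seg + 1)
    return np.interp(t, knots, quadratic_phase(knots, k))

t = np.linspace(-1.0, 1.0, 2001)
exact = quadratic_phase(t)
err4 = np.abs(segmented_linear(t, 4) - exact).max()   # 4 segments
err16 = np.abs(segmented_linear(t, 16) - exact).max()  # 16 segments
```

For a quadratic, quadrupling the segment count cuts the worst-case phase error by a factor of about 16, which is why a modest number of linear segments can stand in for the exact phase while avoiding most of the complex multiplications.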

  8. Aerobic power and flight capacity in birds: a phylogenetic test of the heart-size hypothesis.

    PubMed

    Nespolo, Roberto F; González-Lagos, César; Solano-Iguaran, Jaiber J; Elfwing, Magnus; Garitano-Zavala, Alvaro; Mañosa, Santiago; Alonso, Juan Carlos; Altimiras, Jordi

    2018-01-09

    Flight capacity is one of the most important innovations in animal evolution; it only evolved in insects, birds, mammals and the extinct pterodactyls. Given that powered flight represents a demanding aerobic activity, an efficient cardiovascular system is essential for the continuous delivery of oxygen to the pectoral muscles during flight. It is well known that the limiting step in the circulation is stroke volume (the volume of blood pumped from the ventricle to the body during each beat), which is determined by the size of the ventricle. Thus, the fresh mass of the heart represents a simple and repeatable anatomical measure of the aerobic power of an animal. Although several authors have compared heart masses across bird species, a phylogenetic comparative analysis is still lacking. By compiling heart sizes for 915 species and applying several statistical procedures controlling for body size and/or testing for adaptive trends in the dataset (e.g. model selection approaches, phylogenetic generalized linear models), we found that (residuals of) heart size is consistently associated with four categories of flight capacity. In general, our results indicate that species exhibiting continuous hovering flight (i.e. hummingbirds) have substantially larger hearts than other groups, species that use flapping flight and gliding show intermediate values, and that species categorized as poor flyers show the smallest values. Our study reveals that on a broad scale, routine flight modes seem to have shaped the energetic requirements of birds sufficiently to be anatomically detected at the comparative level. © 2018. Published by The Company of Biologists Ltd.
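The "residuals of heart size" procedure mentioned above (regress log heart mass on log body mass, then compare residuals across flight categories) can be sketched on synthetic data; everything below is invented for illustration and is not the paper's 915-species dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic log10 body masses (g) and an allometric heart-mass relation;
# the first 40 "hovering" species get an extra 0.2 dex heart enlargement
body = rng.uniform(1.0, 4.0, 200)
heart = -1.0 + 0.95 * body + rng.normal(0.0, 0.05, 200)
hover = np.arange(200) < 40
heart = heart + 0.2 * hover

# size-corrected heart size = residuals from the log-log allometric fit
slope, intercept = np.polyfit(body, heart, 1)
resid = heart - (slope * body + intercept)
gap = resid[hover].mean() - resid[~hover].mean()
```

Because the fit removes the shared body-size trend, the residual gap between groups recovers the relative heart enlargement; the phylogenetic generalized linear models in the paper do the same comparison while also accounting for shared ancestry.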

  9. [Detection of linear chromosomes and plasmids among 15 genera in the Actinomycetales].

    PubMed

    Ma, Ning; Ma, Wei; Jiang, Chenglin; Fang, Ping; Qin, Zhongjun

    2003-10-01

    Bacterial chromosomes and plasmids are commonly circular; however, linear chromosomes and plasmids have been discovered in 5 genera of the Actinomycetales. Here, we used pulsed-field gel electrophoresis to study the genomes of 19 species belonging to 15 genera in the Actinomycetales. The chromosomes of all 19 species are linear DNA, and linear plasmids with different sizes and copy numbers were detected in 5 species. This work provides a basis for investigating the possible novel functions of linear replicons beyond Streptomyces and also helps in developing artificial linear chromosomes for the Actinomycetales.

  10. SU-E-T-354: Efficient and Enhanced QA Testing of Linear Accelerators Using a Real-Time Beam Monitor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, J; Farrokhkish, M; Norrlinger, B

    2015-06-15

    Purpose: To investigate the feasibility of performing routine QA tests of linear accelerators (Linacs) using the Integral Quality Monitoring (IQM) system. The system, consisting of a 1-D sensitivity-gradient large-area ion chamber mounted at the collimator, allows automatic collection and analysis of beam data. Methods: The IQM was investigated for performing several QA constancy tests of a Linac, similar to those recommended by AAPM TG-142, including: beam output, MLC calibration, beam symmetry, relative dose factor (RDF), dose linearity, and output as a function of gantry angle and dose rate. All measurements by the IQM system accompanied a reference measurement using a conventional dosimetry system and were performed on an Elekta Infinity Linac with an Agility MLC. The MLC calibration check is done using a Picket-Fence-type 2×10 cm² field positioned at different off-axis locations along the chamber gradient. Beam symmetry constancy values are established by signals from a 4×4 cm² aperture located at various off-axis positions; the sensitivity of the test was determined by the changes in the signals in response to a tilt in the beam. The data for various square field sizes were used to develop a functional relationship with RDF. Results: The IQM tracked the beam output well within 1% of the reference ion-chamber readings. The Picket-Fence-type field test detected a 1 mm shift error of one MLC bank. The system was able to detect 2.5% or greater beam asymmetry. The IQM results for all other QA tests were found to agree with the reference values to within 0.5%. Conclusion: It was demonstrated that the IQM system can effectively monitor Linac performance parameters for the purpose of routine QA constancy tests. With minimal user interaction, a comprehensive set of tests can be performed efficiently, allowing frequent monitoring of the Linac. The presenting author's salary is funded by the manufacturer of the QA device. All the other authors have financial interests with the commercialization of this QA device.

  11. Effects of ration size on preferred temperature of lake charr Salvelinus namaycush

    USGS Publications Warehouse

    Mac, Michael J.

    1985-01-01

    I tested the effects of different ration sizes on preferred temperatures of yearling lake charr, Salvelinus namaycush, by feeding them for about 2 weeks on one of four rations and then allowing them to thermoregulate in a temporal thermal gradient for 2 to 3 days. Selected temperatures and ration were directly and linearly correlated: the larger the ration, the higher the temperature selected. Mean preferred temperatures at different rations (shown in parentheses as percent of body weight per day) were as follows: 9.2°C (0.3); 10.6°C (0.8); 11.7°C (2.0); and 12.6°C (5.5). While the shift to lower temperature under restricted ration would maximize food conversion efficiency, previous growth studies indicate that an even lower selected temperature would have been more beneficial.
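A quick fit through the four reported means illustrates the direct relation (as an aside not made in the abstract, the correlation is even tighter against log ration, given the roughly geometric spacing of the rations):

```python
import numpy as np

# mean preferred temperature (°C) vs ration (% body weight per day),
# the four values reported in the abstract
ration = np.array([0.3, 0.8, 2.0, 5.5])
temp = np.array([9.2, 10.6, 11.7, 12.6])

slope, intercept = np.polyfit(ration, temp, 1)   # °C per % bw/day
r = np.corrcoef(ration, temp)[0, 1]
r_log = np.corrcoef(np.log(ration), temp)[0, 1]
```

With only four group means this is purely descriptive, but it confirms the positive, roughly linear trend the abstract describes.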

  12. Fabrication and Structural Design of Micro Pressure Sensors for Tire Pressure Measurement Systems (TPMS)

    PubMed Central

    Tian, Bian; Zhao, Yulong; Jiang, Zhuangde; Zhang, Ling; Liao, Nansheng; Liu, Yuanhao; Meng, Chao

    2009-01-01

    In this paper we describe the design and testing of a micro piezoresistive pressure sensor for a Tire Pressure Measurement System (TPMS) which has the advantages of a minimized structure, high sensitivity, linearity and accuracy. Through analysis of the stress distribution of the diaphragm using the ANSYS software, a model of the structure was established. The fabrication on a single silicon substrate utilizes the technologies of anisotropic chemical etching and packaging through glass anodic bonding. The performance of this type of piezoresistive sensor, including size, sensitivity, and long-term stability, was investigated. The results indicate that the accuracy is 0.5% FS; therefore this design meets the requirements for a TPMS, and not only has a smaller size and simplicity of preparation, but also high sensitivity and accuracy. PMID:22573960

  13. Sample size considerations for paired experimental design with incomplete observations of continuous outcomes.

    PubMed

    Zhu, Hong; Xu, Xiaohan; Ahn, Chul

    2017-01-01

    Paired experimental design is widely used in clinical and health behavioral studies, where each study unit contributes a pair of observations. Investigators often encounter incomplete observations of paired outcomes in the data collected. Some study units contribute complete pairs of observations, while the others contribute either pre- or post-intervention observations. Statistical inference for paired experimental design with incomplete observations of continuous outcomes has been extensively studied in the literature. However, sample size methods for such study designs are scarce. We derive a closed-form sample size formula based on the generalized estimating equation approach by treating the incomplete observations as missing data in a linear model. The proposed method properly accounts for the impact of the mixed structure of observed data: a combination of paired and unpaired outcomes. The sample size formula is flexible to accommodate different missing patterns, magnitude of missingness, and correlation parameter values. We demonstrate that under complete observations, the proposed generalized estimating equation sample size estimate is the same as that based on the paired t-test. In the presence of missing data, the proposed method would lead to a more accurate sample size estimate compared with the crude adjustment. Simulation studies are conducted to evaluate the finite-sample performance of the generalized estimating equation sample size formula. A real application example is presented for illustration.
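Under complete observations the proposed GEE estimate reduces to the paired-comparison sample size; a minimal sketch of the standard normal-approximation version of that formula, with illustrative inputs (not values from the paper):

```python
from statistics import NormalDist
from math import ceil

def paired_sample_size(delta, sd_diff, alpha=0.05, power=0.8):
    """Normal-approximation sample size for a paired comparison with
    complete observations: number of pairs n so that a two-sided
    level-alpha test detects a mean within-pair difference `delta`
    (with standard deviation of differences `sd_diff`) at the
    requested power: n = ((z_{1-a/2} + z_{power}) * sd / delta)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(((z_a + z_b) * sd_diff / delta) ** 2)

# detect a half-SD mean difference at 80% power, two-sided alpha = 0.05
n = paired_sample_size(delta=0.5, sd_diff=1.0)
```

The paper's GEE formula generalizes this by down-weighting the contribution of units with only a pre- or post-intervention observation.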

  14. Simulating galactic dust grain evolution on a moving mesh

    NASA Astrophysics Data System (ADS)

    McKinnon, Ryan; Vogelsberger, Mark; Torrey, Paul; Marinacci, Federico; Kannan, Rahul

    2018-05-01

    Interstellar dust is an important component of the galactic ecosystem, playing a key role in multiple galaxy formation processes. We present a novel numerical framework for the dynamics and size evolution of dust grains implemented in the moving-mesh hydrodynamics code AREPO suited for cosmological galaxy formation simulations. We employ a particle-based method for dust subject to dynamical forces including drag and gravity. The drag force is implemented using a second-order semi-implicit integrator and validated using several dust-hydrodynamical test problems. Each dust particle has a grain size distribution, describing the local abundance of grains of different sizes. The grain size distribution is discretised with a second-order piecewise linear method and evolves in time according to various dust physical processes, including accretion, sputtering, shattering, and coagulation. We present a novel scheme for stochastically forming dust during stellar evolution and new methods for sub-cycling of dust physics time-steps. Using this model, we simulate an isolated disc galaxy to study the impact of dust physical processes that shape the interstellar grain size distribution. We demonstrate, for example, how dust shattering shifts the grain size distribution to smaller sizes resulting in a significant rise of radiation extinction from optical to near-ultraviolet wavelengths. Our framework for simulating dust and gas mixtures can readily be extended to account for other dynamical processes relevant in galaxy formation, like magnetohydrodynamics, radiation pressure, and thermo-chemical processes.
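The piecewise-linear grain size discretisation can be illustrated with an MRN-like power-law distribution (illustrative only; the paper's scheme is second order and evolves the distribution in time, whereas this sketch just checks that a piecewise-linear representation on log-spaced bins reproduces the integrated grain number):

```python
import numpy as np

# MRN-like distribution n(a) ∝ a^-3.5, discretised piecewise linearly
# on logarithmically spaced bin edges (illustrative grain radii, micron)
a_min, a_max = 1e-3, 1.0
edges = np.logspace(np.log10(a_min), np.log10(a_max), 65)
n = edges**-3.5                      # n(a) sampled at bin edges

# total grain number = integral of the piecewise-linear n(a)
num_pw = (0.5 * (n[1:] + n[:-1]) * np.diff(edges)).sum()
num_exact = (a_min**-2.5 - a_max**-2.5) / 2.5   # analytic integral
rel_err = abs(num_pw - num_exact) / num_exact
```

Even for this steep power law, 64 log-spaced bins keep the integration error at the percent level; the piecewise-linear (rather than piecewise-constant) representation is what buys that accuracy at modest bin counts.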

  15. Visibility vs. biomass in flowers: exploring corolla allocation in Mediterranean entomophilous plants.

    PubMed

    Herrera, Javier

    2009-05-01

    While pollinators may in general select for large, morphologically uniform floral phenotypes, drought stress has been proposed as a destabilizing force that may favour small flowers and/or promote floral variation within species. The general validity of this concept was checked by surveying a taxonomically diverse array of 38 insect-pollinated Mediterranean species. The interplay between fresh biomass investment, linear size and percentage corolla allocation was studied. Allometric relationships between traits were investigated by reduced major-axis regression, and qualitative correlates of floral variation explored using general linear-model MANOVA. Across species, flowers were perfectly isometrical with regard to corolla allocation (i.e. larger flowers were just scaled-up versions of smaller ones and vice versa). In contrast, linear size and biomass varied allometrically (i.e. there were shape variations, in addition to variations in size). Most floral variables correlated positively and significantly across species, except corolla allocation, which was largely determined by family membership and floral symmetry. On average, species with bilateral flowers allocated more to the corolla than those with radial flowers. Plant life-form was immaterial to all of the studied traits. Flower linear size variation was in general low among conspecifics (coefficients of variation around 10 %), whereas biomass was in general less uniform (e.g. 200-400 mg in Cistus salvifolius). Significant among-population differences were detected for all major quantitative floral traits. Flower miniaturization can allow an improved use of reproductive resources under prevailingly stressful conditions. The hypothesis that flower size reflects a compromise between pollinator attraction, water requirements and allometric constraints among floral parts is discussed.
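Reduced major-axis regression, used above for the allometric relationships, has a simple closed form worth noting: slope = sign(r)·sd(y)/sd(x), with the line passing through the means. A sketch on synthetic log-log data (invented, not the paper's floral measurements):

```python
import numpy as np

def rma_fit(x, y):
    """Reduced (standardised) major-axis regression: slope is
    sign(r) * sd(y)/sd(x) and the line passes through the means.
    On log-log axes, a slope equal to the dimensional expectation
    (e.g. 1/3 for a length vs a mass) indicates isometry."""
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    return slope, np.mean(y) - slope * np.mean(x)

rng = np.random.default_rng(1)
log_biomass = rng.uniform(0.0, 3.0, 100)
# synthetic isometric relation: linear size ~ biomass^(1/3) on log axes
log_size = 0.33 * log_biomass + rng.normal(0.0, 0.02, 100)
slope, intercept = rma_fit(log_biomass, log_size)
```

Unlike ordinary least squares, RMA treats measurement error in both variables symmetrically, which is why it is preferred for testing allometry between two measured traits.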

  16. Incorporating TPC observed parameters and QuikSCAT surface wind observations into hurricane initialization using 4D-VAR approaches

    NASA Astrophysics Data System (ADS)

    Park, Kyungjeen

    This study aims to develop an objective hurricane initialization scheme which incorporates not only forecast model constraints but also observed features such as the initial intensity and size. It is based on the four-dimensional variational (4D-Var) bogus data assimilation (BDA) scheme originally proposed by Zou and Xiao (1999). The 4D-Var BDA consists of two steps: (i) specifying a bogus sea level pressure (SLP) field based on parameters observed by the Tropical Prediction Center (TPC) and (ii) assimilating the bogus SLP field under a forecast model constraint to adjust all model variables. This research focuses on improving the specification of the bogus SLP indicated in the first step. Numerical experiments are carried out for Hurricane Bonnie (1998) and Hurricane Gordon (2000) to test the sensitivity of hurricane track and intensity forecasts to the specification of the initial vortex. Major results are listed below: (1) A linear regression model is developed for determining the size of the initial vortex based on the TPC observed radius of 34 kt. (2) A method is proposed to derive a radial profile of SLP from QuikSCAT surface winds. This profile is shown to be more realistic than ideal profiles derived from Fujita's and Holland's formulae. (3) It is found that it takes about 1 h for the hurricane prediction model to develop a conceptually correct hurricane structure, featuring a dominant role of hydrostatic balance at the initial time and a dynamic adjustment in less than 30 minutes. (4) Numerical experiments suggest that track prediction is less sensitive to the specification of initial vortex structure than the intensity forecast. (5) Hurricane initialization using the QuikSCAT-derived initial vortex produced a reasonably good forecast for hurricane landfall, with a position error of 25 km and a 4-h delay at landfall.
(6) Numerical experiments using the linear regression model for the size specification considerably outperform all the other formulations tested in terms of intensity prediction for both hurricanes. For example, the maximum track error is less than 110 km during the entire three-day forecasts for both hurricanes. The simulated Hurricane Gordon using the linear regression model made a nearly perfect landfall, with no position error and only a 1-h error in landfall time. (7) Diagnosis of model output indicates that the initial vortex specified by the linear regression model produces larger surface fluxes of sensible heat, latent heat and moisture, as well as stronger downward angular momentum transport, than all the other schemes do. These enhanced energy supplies offset the energy loss caused by friction and gravity wave propagation, allowing the model to maintain a strong and realistic hurricane during the entire forward model integration.
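Holland's formula, one of the ideal profiles mentioned in result (2), gives sea-level pressure as p(r) = pc + (pn − pc)·exp(−(rmax/r)^B). A minimal sketch with illustrative parameters (not values from the study):

```python
import math

def holland_slp(r_km, pc=960.0, pn=1010.0, rmax_km=40.0, B=1.5):
    """Holland (1980) idealised sea-level pressure profile (hPa):
    central pressure pc at the core, rising monotonically toward the
    ambient pressure pn; rmax is the radius of maximum wind and B
    controls the profile's peakedness. Parameter values here are
    illustrative, not fitted to Bonnie or Gordon."""
    return pc + (pn - pc) * math.exp(-((rmax_km / r_km) ** B))

# pressure at a few radii, from inside the eyewall to far field
profile = [holland_slp(r) for r in (10.0, 40.0, 100.0, 400.0)]
```

At r = rmax the formula gives exactly pc + (pn − pc)/e, and the profile approaches the ambient pressure at large radius; the study's QuikSCAT-derived profile replaces this fixed functional shape with one constrained by observed surface winds.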

  17. Exceptionally Stable Fluorous Emulsions for the Intravenous Delivery of Volatile General Anesthetics

    PubMed Central

    Jee, Jun-Pil; Parlato, Maria C.; Perkins, Mark G.; Mecozzi, Sandro; Pearce, Robert A.

    2012-01-01

    Background Intravenous delivery of volatile fluorinated anesthetics has a number of potential advantages when compared to the current inhalation method of administration. We reported previously that the IV delivery of sevoflurane can be achieved through an emulsion composed of a linear fluorinated diblock copolymer, a stabilizer, and the anesthetic. However, this original emulsion was subject to particle size growth that would limit its potential clinical utility. We hypothesized that the use of bulkier fluorous groups and smaller poly(ethylene glycol) moieties in the polymer design would result in improved emulsion stability while maintaining anesthetic functionality. Methods The authors prepared emulsions incorporating sevoflurane, perfluorooctyl bromide as a stabilizing agent, and combinations of linear fluorinated diblock copolymer and a novel dibranched fluorinated diblock copolymer. Emulsion stability was assessed using dynamic light scattering. The ability of the emulsions to induce anesthesia was tested in vivo by administering them intravenously to fifteen male Sprague-Dawley rats and measuring loss of the forepaw righting reflex. Results 20% (volume/volume) sevoflurane emulsions incorporating mixtures of dibranched- and linear diblock copolymers had improved stability, with those containing an excess of the dibranched polymers displaying stability of particle size for over one year. The ED50s for loss of forepaw righting reflex were all similar, and ranged between 0.55 and 0.60 ml/kg body weight. Conclusions Hemifluorinated dibranched polymers can be used to generate exceptionally stable sevoflurane nanoemulsions, as required of formulations intended for clinical use. Intravenous delivery of the emulsion in rats resulted in induction of anesthesia with rapid onset and smooth and rapid recovery. PMID:22354241

  18. Structural Dynamic Analyses And Test Predictions For Spacecraft Structures With Non-Linearities

    NASA Astrophysics Data System (ADS)

    Vergniaud, Jean-Baptiste; Soula, Laurent; Newerla, Alfred

    2012-07-01

    The overall objective of the mechanical development and verification process is to ensure that the spacecraft structure is able to sustain the mechanical environments encountered during launch. In general, spacecraft structures are a priori assumed to behave linearly, i.e. the responses to a static load or dynamic excitation, respectively, will increase or decrease proportionally to the amplitude of the load or excitation induced. However, past experience has shown that various non-linearities may exist in spacecraft structures, and the consequences of their dynamic effects can significantly affect the development and verification process. Current processes are mainly adapted to linear spacecraft structure behaviour. No clear rules exist for dealing with major structural non-linearities. They are handled outside the process by individual analysis and margin policy, and by analyses after tests to justify the CLA coverage. Non-linearities can primarily affect the current spacecraft development and verification process in two respects. Prediction of flight loads by launcher/satellite coupled loads analyses (CLA): only linear satellite models are delivered for performing CLA, and no well-established rules exist for properly linearizing a model when non-linearities are present. The potential impact of the linearization on the results of the CLA has not yet been properly analyzed. It is thus difficult to ensure that CLA results will cover actual flight levels. Management of satellite verification tests: the CLA results generated with a linear satellite FEM are assumed flight representative. If internal non-linearities are present in the tested satellite, then it may be difficult to determine which input level must be passed to cover satellite internal loads. The non-linear behaviour can also disturb the shaker control, putting the satellite at risk by potentially imposing too high levels.
This paper presents the results of a test campaign performed in the frame of an ESA TRP study [1]. A breadboard including typical non-linearities has been designed, manufactured and tested through a typical spacecraft dynamic test campaign. The study has demonstrated the capability to perform non-linear dynamic test predictions on a flight-representative spacecraft, the good correlation of test results with Finite Element Model (FEM) predictions, and the possibility to identify modal behaviour and to characterize non-linearities from test results. As a synthesis of this study, overall guidelines have been derived on the mechanical verification process to improve the level of expertise for tests involving spacecraft with non-linearities.

  19. Plastic strain is a mixture of avalanches and quasireversible deformations: Study of various sizes

    NASA Astrophysics Data System (ADS)

    Szabó, Péter; Ispánovity, Péter Dusán; Groma, István

    2015-02-01

    The size dependence of plastic flow is studied by discrete dislocation dynamical simulations of systems with various amounts of interacting dislocations while the stress is slowly increased. The regions between avalanches in the individual stress curves as functions of the plastic strain were found to be nearly linear and reversible where the plastic deformation obeys an effective equation of motion with a nearly linear force. For small plastic deformation, the mean values of the stress-strain curves obey a power law over two decades. Here and for somewhat larger plastic deformations, the mean stress-strain curves converge for larger sizes, while their variances shrink, both indicating the existence of a thermodynamical limit. The converging averages decrease with increasing size, in accordance with size effects from experiments. For large plastic deformations, where steady flow sets in, the thermodynamical limit was not realized in this model system.

  20. Ensemble Weight Enumerators for Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected-graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. The derived results on ensemble weight enumerators show that the linear-minimum-distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
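A protograph ensemble member is obtained by "lifting" a small base graph: each edge of the protograph becomes an N×N permutation block in the parity-check matrix, so node degrees (and hence the degree distribution the enumerators depend on) carry over exactly. A minimal sketch using circulant permutations; the base matrix below is hypothetical, and the paper's enumerators average over all such liftings:

```python
import numpy as np

def lift_protograph(base, N, seed=0):
    """Replace each 1 in a 0/1 base (protograph) matrix by a random
    N x N circulant permutation block, and each 0 by a zero block.
    Row/column degrees of the protograph carry over to the lifted
    parity-check matrix H."""
    rng = np.random.default_rng(seed)
    m, n = base.shape
    H = np.zeros((m * N, n * N), dtype=int)
    I = np.eye(N, dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                H[i*N:(i+1)*N, j*N:(j+1)*N] = np.roll(I, rng.integers(N), axis=1)
    return H

# hypothetical 2x4 protograph: one degree-1, two degree-2, one degree-1 column
base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]])
H = lift_protograph(base, N=8)
```

The block size of the lifted code is n·N, so "block size goes to infinity" in the abstract corresponds to growing the lifting factor N over a fixed protograph.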

  1. Superconducting antennas for telecommunication applications based on dual mode cross slotted patches

    NASA Astrophysics Data System (ADS)

    Cassinese, A.; Barra, M.; Fragalà, I.; Kusunoki, M.; Malandrino, G.; Nakagawa, T.; Perdicaro, L. M. S.; Sato, K.; Ohshima, S.; Vaglio, R.

    2002-08-01

    Dual-mode devices based on high-temperature superconducting films represent an interesting class for telecommunication applications, since they combine miniaturized size with good power handling. Here we report on a novel compact antenna obtained by crossing a square patch with two or more slots. The proposed design has an antenna size reduction of about 40% as compared to conventional square-patch microstrip antennas. Single-patch antennas with both linear (LP) and circular (CP) polarization operating in the X-band have been designed and tested at the prototype level. They are realized by using double-sided YBa2Cu3O7-x (YBCO) and Tl2Ba2Ca1Cu2O8 (Tl-2212) superconducting films grown on MgO substrates and tested with a portable cryocooler. At T = 77 K they showed a return loss <25 dB and a power handling of 23 dBm. Exemplary 16-element LP array antennas operating in the X-band have also been realized by using YBCO film grown on a 2″ diameter MgO substrate.

  2. Algorithms for Efficient Computation of Transfer Functions for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1998-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large order, flexible system implemented with a linear, time invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new computational scheme is a linear function of system size, a significant improvement over traditional, full-matrix techniques whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency domain analysis of systems of much larger order than by traditional, full-matrix techniques. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, the present method was up to two orders of magnitude faster than a traditional method. The present method generally showed good to excellent accuracy throughout the range of test frequencies, while traditional methods gave adequate accuracy for lower frequencies, but generally deteriorated in performance at higher frequencies, with worst-case errors many orders of magnitude larger than the correct values.
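The structured-sparsity speedup described above can be sketched as follows (a toy single-input/single-output open-loop version, not the paper's algorithm): in normal-mode coordinates the modes decouple, so the resolvent is block-diagonal with 2×2 blocks and each frequency point costs O(n) instead of the O(n³) dense solve, while agreeing with the dense computation to machine precision.

```python
import numpy as np

def freq_response_modal(omegas, zeta, wn, b, c):
    """O(n)-per-frequency response of a normal-mode model:
    H(jw) = sum_i c_i * b_i / (wn_i^2 - w^2 + 2j*zeta_i*wn_i*w)."""
    out = []
    for w in omegas:
        g = 1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)   # per-mode gain
        out.append(np.sum(c * g * b))
    return np.array(out)

# illustrative 3-mode model: natural frequencies, damping, I/O gains
wn = np.array([1.0, 3.0, 7.0])
zeta = np.array([0.02, 0.01, 0.05])
b = np.array([1.0, 0.5, 2.0])
c = np.array([1.0, -1.0, 0.3])

# dense state-space reference: each mode is a 2x2 block of A
n = len(wn)
A = np.zeros((2*n, 2*n)); B = np.zeros(2*n); C = np.zeros(2*n)
for i in range(n):
    A[2*i, 2*i+1] = 1.0
    A[2*i+1, 2*i] = -wn[i]**2
    A[2*i+1, 2*i+1] = -2.0 * zeta[i] * wn[i]
    B[2*i+1] = b[i]
    C[2*i] = c[i]

omegas = np.linspace(0.1, 10.0, 50)
H_fast = freq_response_modal(omegas, zeta, wn, b, c)
H_dense = np.array([C @ np.linalg.solve(1j*w*np.eye(2*n) - A, B)
                    for w in omegas])
max_diff = np.abs(H_fast - H_dense).max()
```

With 703 modes the dense solve factors a 1406×1406 complex matrix per frequency point, while the modal evaluation is a single vectorized pass over the modes, which is the source of the order-of-magnitude speedups reported.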

  3. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS.

    PubMed

    Arce, Pedro; Lagares, Juan Ignacio

    2018-01-25

    We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  4. Attention induced neural response trade-off in retinotopic cortex under load.

    PubMed

    Torralbo, Ana; Kelley, Todd A; Rees, Geraint; Lavie, Nilli

    2016-09-14

    The effects of perceptual load on visual cortex response to distractors are well established and various phenomena of 'inattentional blindness' associated with elimination of visual cortex response to unattended distractors, have been documented in tasks of high load. Here we tested an account for these effects in terms of a load-induced trade-off between target and distractor processing in retinotopic visual cortex. Participants were scanned using fMRI while performing a visual-search task and ignoring distractor checkerboards in the periphery. Retinotopic responses to target and distractors were assessed as a function of search load (comparing search set-sizes two, three and five). We found that increased load not only increased activity in frontoparietal network, but also had opposite effects on retinotopic responses to target and distractors. Target-related signals in areas V2-V3 linearly increased, while distractor response linearly decreased, with increased load. Critically, the slopes were equivalent for both load functions, thus demonstrating resource trade-off. Load effects were also found in displays with the same item number in the distractor hemisphere across different set sizes, thus ruling out local intrahemispheric interactions as the cause. Our findings provide new evidence for load theory proposals of attention resource sharing between target and distractor leading to inattentional blindness.

  5. Attention induced neural response trade-off in retinotopic cortex under load

    PubMed Central

    Torralbo, Ana; Kelley, Todd A.; Rees, Geraint; Lavie, Nilli

    2016-01-01

    The effects of perceptual load on visual cortex response to distractors are well established and various phenomena of ‘inattentional blindness’ associated with elimination of visual cortex response to unattended distractors, have been documented in tasks of high load. Here we tested an account for these effects in terms of a load-induced trade-off between target and distractor processing in retinotopic visual cortex. Participants were scanned using fMRI while performing a visual-search task and ignoring distractor checkerboards in the periphery. Retinotopic responses to target and distractors were assessed as a function of search load (comparing search set-sizes two, three and five). We found that increased load not only increased activity in frontoparietal network, but also had opposite effects on retinotopic responses to target and distractors. Target-related signals in areas V2–V3 linearly increased, while distractor response linearly decreased, with increased load. Critically, the slopes were equivalent for both load functions, thus demonstrating resource trade-off. Load effects were also found in displays with the same item number in the distractor hemisphere across different set sizes, thus ruling out local intrahemispheric interactions as the cause. Our findings provide new evidence for load theory proposals of attention resource sharing between target and distractor leading to inattentional blindness. PMID:27625311

  6. Extrachromosomal genetic elements in Micrococcus.

    PubMed

    Dib, Julián Rafael; Liebl, Wolfgang; Wagenknecht, Martin; Farías, María Eugenia; Meinhardt, Friedhelm

    2013-01-01

    Micrococci are Gram-positive, G+C-rich, nonmotile, nonspore-forming actinomycetous bacteria. Micrococcus comprises ten members, with Micrococcus luteus being the type species. Representatives of the genus play important roles in the biodegradation of xenobiotics, bioremediation processes, production of biotechnologically important enzymes or bioactive compounds, as test strains in biological assays for lysozyme and antibiotics, and as infective agents in immunocompromised humans. The first description of plasmids dates back approximately 28 years, when several extrachromosomal elements ranging in size from 1.5 to 30.2 kb were found in Micrococcus luteus. Up to the present, a number of circular plasmids conferring antibiotic resistance, the ability to degrade aromatic compounds, and osmotolerance are known, as well as cryptic elements with unidentified functions. Here, we review the Micrococcus extrachromosomal traits reported thus far, including phages and the only quite recently described large linear extrachromosomal genetic elements, termed linear plasmids, which range in size from 75 kb (pJD12) to 110 kb (pLMA1) and which confer putative advantageous capabilities, such as antibiotic or heavy metal resistances (inferred from sequence analyses and curing experiments). The role of the extrachromosomal elements for the frequently proven ecological and biotechnological versatility of the genus will be addressed as well as their potential for the development and use as genetic tools.

  7. Evaluation of linearly solvable Markov decision process with dynamic model learning in a mobile robot navigation task.

    PubMed

    Kinjo, Ken; Uchibe, Eiji; Doya, Kenji

    2013-01-01

    Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problems in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space or an eigenfunction problem in a continuous state space using the knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints are controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
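
    The exponential transformation can be sketched on a toy discrete first-exit problem: with the desirability z(x) = exp(-v(x)), the Bellman equation becomes the linear relation z(x) = exp(-q(x)) * sum over x' of P(x'|x) z(x'), solvable by plain fixed-point iteration. The chain, costs, and passive dynamics below are hypothetical illustrations, not the paper's tasks:

    ```python
    import math

    # First-exit LMDP on a 4-state chain: states 0-2 pay state cost q,
    # state 3 is the goal with zero terminal cost (so z[3] = 1). Passive
    # dynamics: an unbiased random walk, with state 0 reflecting.
    P = {0: {0: 0.5, 1: 0.5},
         1: {0: 0.5, 2: 0.5},
         2: {1: 0.5, 3: 0.5}}
    q = 0.1                      # state cost at interior states
    z = [1.0, 1.0, 1.0, 1.0]

    # The exponentiated Bellman equation is linear in z; for a first-exit
    # problem the update below is a contraction, so iterate to a fixed point.
    for _ in range(200):
        z = [math.exp(-q) * sum(p * z[j] for j, p in P[i].items())
             for i in range(3)] + [1.0]

    v = [-math.log(zi) for zi in z]   # optimal cost-to-go
    # The optimal controlled transition is proportional to P(x'|x) * z(x').
    ```

    The desirability rises monotonically toward the goal, so the recovered cost-to-go v falls toward zero at the terminal state.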

  8. Continuous Flow Hygroscopicity-Resolved Relaxed Eddy Accumulation (Hy-Res REA) Method of Measuring Size-Resolved Sea-Salt Particle Fluxes

    NASA Astrophysics Data System (ADS)

    Meskhidze, N.; Royalty, T. M.; Phillips, B.; Dawson, K. W.; Petters, M. D.; Reed, R.; Weinstein, J.; Hook, D.; Wiener, R.

    2017-12-01

    The accurate representation of aerosols in climate models requires direct ambient measurement of the size- and composition-dependent particle production fluxes. Here we present the design, testing, and analysis of data collected through the first instrument capable of measuring hygroscopicity-based, size-resolved particle fluxes using a continuous-flow Hygroscopicity-Resolved Relaxed Eddy Accumulation (Hy-Res REA) technique. The different components of the instrument were extensively tested inside the US Environmental Protection Agency's Aerosol Test Facility for sea-salt and ammonium sulfate particle fluxes. The new REA system design does not require particle accumulation and therefore avoids the diffusional wall losses associated with long residence times of particles inside the air collectors of traditional REA devices. The Hy-Res REA system used in this study includes a 3-D sonic anemometer, two fast-response solenoid valves, two Condensation Particle Counters (CPCs), a Scanning Mobility Particle Sizer (SMPS), and a Hygroscopicity Tandem Differential Mobility Analyzer (HTDMA). A linear relationship was found between the sea-salt particle fluxes measured by eddy covariance and REA techniques, with comparable theoretical (0.34) and measured (0.39) proportionality constants. The sea-salt particle detection limit of the Hy-Res REA flux system is estimated to be 6×10^5 m^-2 s^-1. For the conditions of ammonium sulfate and sea-salt particles of comparable source strength and location, the continuous-flow Hy-Res REA instrument was able to achieve better than 90% accuracy in measuring the sea-salt particle fluxes. In principle, the instrument can be applied to measure fluxes of particles of variable size and distinct hygroscopic properties (i.e., mineral dust, black carbon, etc.).
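
    The REA flux estimate behind the quoted proportionality constants is F = b * sigma_w * (C_up - C_down): the standard deviation of vertical wind times the difference between mean concentrations sampled in updrafts and downdrafts. A minimal sketch with the theoretical b = 0.34 from the abstract; the wind and concentration samples are synthetic, not instrument data:

    ```python
    def rea_flux(w, c, b=0.34):
        """Relaxed eddy accumulation flux: F = b * sigma_w * (C_up - C_down).

        w: vertical wind samples (m/s); c: concentrations sampled
        synchronously; b: proportionality constant (0.34 theoretical,
        per the abstract).
        """
        n = len(w)
        mean_w = sum(w) / n
        sigma_w = (sum((wi - mean_w) ** 2 for wi in w) / n) ** 0.5
        up = [ci for wi, ci in zip(w, c) if wi > mean_w]
        down = [ci for wi, ci in zip(w, c) if wi < mean_w]
        return b * sigma_w * (sum(up) / len(up) - sum(down) / len(down))

    # Synthetic samples: updrafts carry higher particle concentration,
    # so the estimated flux is upward (positive).
    w = [1.0, -1.0, 2.0, -2.0]
    c = [10.0, 4.0, 12.0, 2.0]
    F = rea_flux(w, c)
    ```

    The continuous-flow design in the abstract changes how the up/down samples are collected, not this underlying flux relation.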

  9. An investigation of the microstructure, mechanical properties, and tribological performance of ultra high molecular weight polyethylene for applications in total joint arthroplasty

    NASA Astrophysics Data System (ADS)

    van Citters, Douglas W.

    Ultra high molecular weight polyethylene (UHMWPE) is the most common bearing material in joint arthroplasty due to its biocompatibility, its wear resistance, and its mechanical toughness. Despite the favorable properties of UHMWPE and its success as a biomaterial, billions of dollars are spent annually to revise tens of thousands of failed artificial joints. Over half of these revision procedures are related to mechanical failure of the polymer bearing or osteolysis resulting from polymer wear. Contemporary material processing steps involving thermal treatment and/or radiation treatment seek to improve outcomes through improving the tribological properties of UHMWPE. However, it is widely recognized that achieving wear resistance through radiation-induced crosslinking comes at the cost of reduced mechanical properties. Moreover, current wear theories for orthopaedic UHMWPE are incomplete in that they predict zero wear in the absence of crossing motion. Wear nonetheless occurs in linear reciprocation, necessitating an alternate theory. The present work explains the effects of thermal treatments and radiation treatments on the properties of GUR1050 UHMWPE. A test matrix allows comparisons of different treatments across different test platforms. Characterization techniques include DSC, FTIR spectroscopy, tensile testing, x-ray diffraction, and electron microscopy. A novel quantitative stereology technique is developed to quantify crystallite size in the semicrystalline material. Seven clinically relevant materials are subjected to rolling-sliding tribotesting to determine polyethylene wear behavior in linear reciprocation. The multi-station tribotester employed for this work enables high throughput testing, and the specimen geometry allows direct measurement of wear rates without a gravimetric soak control. The results of the material characterization tests can be used to accurately predict the rolling-sliding wear behavior of UHMWPE. 
Wear rate is directly related to crystallite size divided by the material yield strength. A modification of the delamination theory of wear is proposed to explain the wear mechanism. The results and conclusions of the present study can be used to specify future UHMWPE treatments that might eliminate a toughness-reducing radiation dose while improving the wear properties of the polymer. Such treatments would improve the in vivo performance of UHMWPE and hence would improve orthopaedic surgery outcomes.
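
    The stated relation, wear rate directly related to crystallite size divided by yield strength, can be written as a one-line model. The proportionality constant k and the material values below are hypothetical placeholders to be fit against tribotester data, not numbers from the study:

    ```python
    def wear_rate(crystallite_size_nm, yield_strength_mpa, k=1.0):
        """Wear-rate model from the stated relation:
        rate ~ crystallite size / yield strength.
        k is a hypothetical proportionality constant (fit to data)."""
        return k * crystallite_size_nm / yield_strength_mpa

    # Hypothetical comparison: a material with smaller crystallites and
    # similar yield strength is predicted to wear less.
    material_a = wear_rate(30.0, 22.0)
    material_b = wear_rate(25.0, 21.0)
    ```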

  10. Mistletoe Infection in an Oak Forest Is Influenced by Competition and Host Size

    PubMed Central

    Matula, Radim; Svátek, Martin; Pálková, Marcela; Volařík, Daniel; Vrška, Tomáš

    2015-01-01

    Host size and distance from an infected plant have been previously found to affect mistletoe occurrence in woody vegetation but the effect of host plant competition on mistletoe infection has not been empirically tested. For an individual tree, increasing competition from neighbouring trees decreases its resource availability, and resource availability is also known to affect the establishment of mistletoes on host trees. Therefore, competition is likely to affect mistletoe infection but evidence for such a mechanism is lacking. Based on this, we hypothesised that the probability of occurrence as well as the abundance of mistletoes on a tree would increase not only with increasing host size and decreasing distance from an infected tree but also with decreasing competition by neighbouring trees. Our hypothesis was tested using generalized linear models (GLMs) with data on Loranthus europaeus Jacq., one of the two most common mistletoes in Europe, on 1015 potential host stems collected in a large fully mapped plot in the Czech Republic. Because many trees were multi-stemmed, we ran the analyses for both individual stems and whole trees. We found that the probability of mistletoe occurrence on individual stems was affected mostly by stem size, whereas competition had the most important effects on the probability of mistletoe occurrence on whole trees as well as on mistletoe abundance. Therefore, we confirmed our hypothesis that competition among trees has a negative effect on mistletoe occurrence. PMID:25992920
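
    The GLM analysis can be sketched as a logistic regression of occurrence on host size and competition, fit by gradient ascent on the log-likelihood. The data below are synthetic, generated so that occurrence rises with stem size and falls with competition (the direction of the hypothesized effects); they are not the study's field data, and the fitting loop is a bare-bones stand-in for a statistics package:

    ```python
    import math

    # Synthetic (size, competition, occurrence) triples on centered
    # covariates; occurrence follows a hypothetical threshold rule.
    data = [(s - 3, c - 3, 1 if 0.8 * s - 1.0 * c - 0.5 > 0 else 0)
            for s in range(1, 6) for c in range(1, 6)]

    b0 = b_size = b_comp = 0.0
    lr = 0.05
    for _ in range(2000):
        g0 = gs = gc = 0.0
        for x_s, x_c, y in data:
            # Binomial GLM with logit link: p = sigmoid(linear predictor)
            p = 1.0 / (1.0 + math.exp(-(b0 + b_size * x_s + b_comp * x_c)))
            g0 += y - p          # gradient of the log-likelihood
            gs += (y - p) * x_s
            gc += (y - p) * x_c
        b0 += lr * g0
        b_size += lr * gs
        b_comp += lr * gc
    ```

    With data generated this way, the fitted coefficients recover the hypothesized signs: positive for host size, negative for competition.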

  11. Sci-Fri AM: Quality, Safety, and Professional Issues 01: CPQR Technical Quality Control Suite Development including Quality Control Workload Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malkoske, Kyle; Nielsen, Michelle; Brown, Erika

    A close partnership between the Canadian Partnership for Quality Radiotherapy (CPQR) and the Canadian Organization of Medical Physicists’ (COMP) Quality Assurance and Radiation Safety Advisory Committee (QARSAC) has resulted in the development of a suite of Technical Quality Control (TQC) Guidelines for radiation treatment equipment, which outline specific performance objectives and criteria that equipment should meet in order to assure an acceptable level of radiation treatment quality. The framework includes consolidation of existing guidelines and/or literature by expert reviewers, structured stages of public review, external field-testing and ratification by COMP. The adopted framework for the development and maintenance of the TQCs ensures the guidelines incorporate input from the medical physics community during development, measures the workload required to perform the QC tests outlined in each TQC, and remain relevant (i.e. “living documents”) through subsequent planned reviews and updates. This presentation will show the Multi-Leaf Linear Accelerator document as an example of how feedback and cross-national collaboration achieved a robust guidance document. During field-testing, each technology was tested at multiple centres in a variety of clinic environments. As part of the defined feedback, workload data were captured, yielding the average time associated with the testing defined in each TQC document. As a result, for a medium-sized centre comprising 6 linear accelerators and a comprehensive brachytherapy program, we evaluate the physics workload at 1.5 full-time equivalent physicists per year to complete all QC tests listed in this suite.

  12. Nature of bonding and cooperativity in linear DMSO clusters: A DFT, AIM and NCI analysis.

    PubMed

    Venkataramanan, Natarajan Sathiyamoorthy; Suvitha, Ambigapathy

    2018-05-01

    This study aims to cast light on the nature of interactions and cooperativity that exist in linear dimethyl sulfoxide (DMSO) clusters using dispersion-corrected density functional theory. In the linear clusters, DMSO molecules in the middle are bound more strongly than those at the terminal. Plots of the total binding energy vs cluster size and of the mean polarizability vs cluster size show excellent linearity, demonstrating the presence of a cooperativity effect. The computed incremental binding energy of the clusters remains nearly constant, implying that DMSO addition at the terminal site can continue to form an infinite chain. In the linear clusters, two σ-holes at the terminal DMSO molecules were found, and their value was found to increase with cluster size. The quantum theory of atoms in molecules topography shows the existence of hydrogen-bond and SO⋯S-type interactions in the linear tetramer and larger clusters; in the dimer and trimer, an SO⋯OS type of interaction exists. In the 2D non-covalent interactions plot, additional peaks in the regions which contribute to the stabilization of the clusters were observed; these split in the trimer and intensify in the larger clusters. In the trimer and larger clusters, in addition to the blue patches due to hydrogen bonds, light blue patches were seen between the hydrogen atoms of the methyl groups and the sulphur atom of the nearby DMSO molecule. Thus, in addition to the strong H-bonds, strong electrostatic interactions between the sulphur atom and the methyl hydrogens exist in the linear clusters. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Protograph based LDPC codes with minimum distance linearly growing with block size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance linearly increasing with block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
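
    A protograph is expanded into a full parity-check matrix by "lifting": each edge of the small base graph is copied Z times and permuted, commonly with circulant (shifted-identity) permutations so the decoder stays hardware-friendly. The base matrix, lift size, and shift values below are hypothetical toy choices, not the paper's constructions:

    ```python
    def lift_protograph(base, Z, shifts):
        """Lift a protograph base matrix into a quasi-cyclic parity-check
        matrix: each 1 in the base becomes a Z x Z circulant permutation
        (identity cyclically shifted by the given amount), each 0 becomes
        a Z x Z zero block."""
        m, n = len(base), len(base[0])
        H = [[0] * (n * Z) for _ in range(m * Z)]
        for i in range(m):
            for j in range(n):
                if base[i][j]:
                    s = shifts[i][j]
                    for r in range(Z):
                        H[i * Z + r][j * Z + (r + s) % Z] = 1
        return H

    # Toy 2x3 base matrix (note one degree-1 and two degree-2 variable
    # nodes) lifted by Z = 4 with arbitrary circulant shifts.
    base = [[1, 1, 1],
            [1, 1, 0]]
    shifts = [[0, 1, 2],
              [3, 0, 0]]
    H = lift_protograph(base, 4, shifts)
    ```

    Lifting preserves the protograph's node degrees: every lifted row and column keeps the weight of its base row or column, which is why ensemble properties such as degree-2 node proportion carry over to the full code.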

  14. Linear-sweep voltammetry of a soluble redox couple in a cylindrical electrode

    NASA Technical Reports Server (NTRS)

    Weidner, John W.

    1991-01-01

    An approach is described for using the linear sweep voltammetry (LSV) technique to study the kinetics of flooded porous electrodes by treating a porous electrode as a collection of identical, noninterconnected cylindrical pores filled with electrolyte. This assumption makes it possible to study the behavior of this ideal electrode through that of a single pore. Alternatively, for an electrode of a given pore-size distribution, it is possible to predict the performance of different pore sizes and then combine the performance values.

  15. Accuracy of cochlear implant recipients on pitch perception, melody recognition, and speech reception in noise.

    PubMed

    Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Zhang, Xuyang; Gantz, Bruce; Froman, Rebecca; Olszewski, Carol

    2007-06-01

    The purposes of this study were to (a) examine the accuracy of cochlear implant recipients who use different types of devices and signal processing strategies on pitch ranking as a function of size of interval and frequency range and (b) examine the relations between this pitch perception measure and demographic variables, melody recognition, and speech reception in background noise. One hundred fourteen cochlear implant users and 21 normal-hearing adults were tested on a pitch discrimination task (pitch ranking) that required them to determine direction of pitch change as a function of base frequency and interval size. Three groups were tested: (a) long electrode cochlear implant users (N = 101); (b) short electrode users that received acoustic plus electrical stimulation (A+E) (N = 13); and (c) a normal-hearing (NH) comparison group (N = 21). Pitch ranking was tested at standard frequencies of 131 to 1048 Hz, and the size of the pitch-change intervals ranged from 1 to 4 semitones. A generalized linear mixed model (GLMM) was fit to predict pitch ranking and to determine if group differences exist as a function of base frequency and interval size. Overall significance effects were measured with Chi-square tests and individual effects were measured with t-tests. Pitch ranking accuracy was correlated with demographic measures (age at time of testing, length of profound deafness, months of implant use), frequency difference limens, familiar melody recognition, and two measures of speech reception in noise. The long electrode recipients performed significantly more poorly on pitch discrimination than the NH and A+E groups. The A+E users performed similarly to the NH listeners as a function of interval size in the lower base frequency range, but their pitch discrimination scores deteriorated slightly in the higher frequency range. 
The long electrode recipients, although less accurate than participants in the NH and A+E groups, tended to perform with greater accuracy within the higher frequency range. There were statistically significant correlations between pitch ranking and familiar melody recognition as well as with pure-tone frequency difference limens at 200 and 400 Hz. Low-frequency acoustic hearing improves pitch discrimination as compared with traditional, electric-only cochlear implants. These findings have implications for musical tasks such as familiar melody recognition.

  16. Ostracod Body Size Change Across Space and Time

    NASA Astrophysics Data System (ADS)

    Nolen, L.; Llarena, L. A.; Saux, J.; Heim, N. A.; Payne, J.

    2014-12-01

    Many factors drive evolution, although it is not always clear which factors are more influential. Miller et al. (2009) found that there is a change in geographic disparity in diversity in marine biotas over time. We tested whether there was also geographic disparity in body size during different epochs. We used marine ostracods, which are tiny crustaceans, as a study group for this analysis. We also studied which factor is more influential in body size change: distance or time. We compared the mean body size from different geologic time intervals as well as the mean body size from different locations for each epoch. We grouped ostracod occurrences from the Paleobiology Database into 10° × 10° grid cells on a paleogeographic map. Then we calculated the difference in mean size and the distance between the grid cells containing specimens. Our size data came from the Ellis & Messina "Catalogue of Ostracoda" as well as the "Treatise on Invertebrate Paleontology". Sizes were calculated by applying the formula for the volume of an ellipsoid to three linear dimensions of the ostracod carapace (anteroposterior, dorsoventral, and right-left lengths). This analysis shows a trend in ostracods toward smaller size over time, and therefore a trend through time of decreasing difference in size between occurrences in different grid cells. However, if time is not taken into account, there is no correlation between size and geographic distance. This may be attributed to the fact that one might not expect a big size difference between locations that are far apart but still at a similar latitude (for example, at the equator). This analysis suggests that distance alone is not the main factor driving changes in ostracod size over time.
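
    The size metric described, the ellipsoid-volume formula applied to the three linear carapace dimensions, reduces to V = (pi/6) * L * H * W when the measurements are taken as ellipsoid diameters. A short sketch with hypothetical measurements:

    ```python
    import math

    def carapace_volume(length, height, width):
        """Ellipsoid-volume proxy for ostracod body size: treat the three
        linear carapace dimensions (anteroposterior, dorsoventral,
        right-left) as ellipsoid diameters, so
        V = (4/3)*pi*(L/2)*(H/2)*(W/2) = (pi/6)*L*H*W."""
        return math.pi / 6.0 * length * height * width

    # Hypothetical carapace measurements in mm
    v = carapace_volume(1.2, 0.7, 0.5)
    ```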

  17. Linear Proof-Mass Actuator

    NASA Technical Reports Server (NTRS)

    Holloway, Sidney E., III; Crossley, Edward A.; Miller, James B.; Jones, Irby W.; Davis, C. Calvin; Behun, Vaughn D.; Goodrich, Lewis R., Sr.

    1995-01-01

    Linear proof-mass actuator (LPMA) is friction-driven linear mass actuator capable of applying controlled force to structure in outer space to damp out oscillations. Capable of high accelerations and provides smooth, bidirectional travel of mass. Design eliminates gears and belts. LPMA strong enough to be used terrestrially where linear actuators needed to excite or damp out oscillations. High flexibility designed into LPMA by varying size of motors, mass, and length of stroke, and by modifying control software.

  18. Development of a J-T Micro Compressor

    NASA Astrophysics Data System (ADS)

    Champagne, P.; Olson, J. R.; Nast, T.; Roth, E.; Collaco, A.; Kaldas, G.; Saito, E.; Loung, V.

    2015-12-01

    Lockheed Martin has developed and tested a space-quality compressor capable of delivering closed-loop gas flow with a high pressure ratio, suitable for driving a Joule-Thomson cold head. The compressor is based on a traditional “Oxford style” dual-opposed piston compressor with linear drive motors and flexure-bearing clearance-seal technology for high reliability and long life. This J-T compressor retains the approximate size, weight, and cost of the ultra-compact, 200 gram Lockheed Martin Pulse Tube Micro Compressor, despite the addition of a flow-rectifying system to convert the AC pressure wave into a steady flow.

  19. High precision optical fiber Fabry-Perot sensor for gas pressure detection

    NASA Astrophysics Data System (ADS)

    Mao, Yan; Tong, Xing-lin

    2013-09-01

    An optical fiber Fabry-Perot (F-P) sensor with a quartz diaphragm for gas pressure testing was designed and fabricated. It consists of a single-mode fiber, a hollow glass tube and a quartz diaphragm. Double-peak demodulation is used to obtain the initial cavity length, and the change in cavity length after the gas pressure changes can be calculated by single-peak demodulation. The results show that the sensor is small in size, with a sensitivity of 19 pm/kPa in the range of 10 ~ 260 kPa gas pressure, and it has good linearity and repeatability.
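
    The two demodulation steps rest on the F-P interference condition 2L = m*lambda. Two adjacent reflection-spectrum peaks give the absolute cavity length (double-peak demodulation); tracking one peak of known order then gives the length change (single-peak demodulation). A sketch with hypothetical peak wavelengths near 1550 nm, not the sensor's measured spectrum:

    ```python
    def cavity_length(lam1, lam2):
        """Initial F-P cavity length from two adjacent interference peaks
        lam1 < lam2 (same units): adjacent orders satisfy
        2L = (m + 1) * lam1 = m * lam2, so L = lam1*lam2 / (2*(lam2 - lam1))."""
        return lam1 * lam2 / (2.0 * (lam2 - lam1))

    # Double-peak demodulation: hypothetical peaks at 1550 nm and 1560 nm
    L0 = cavity_length(1550e-9, 1560e-9)     # metres

    # Single-peak demodulation: recover the integer order of the tracked
    # peak, then convert its wavelength shift into a cavity-length change.
    m = round(2 * L0 / 1560e-9)
    L_after = m * 1561e-9 / 2                # tracked peak moved to 1561 nm
    dL = L_after - L0
    ```

    With a calibrated diaphragm, dL maps to pressure through the quoted 19 pm/kPa wavelength sensitivity.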

  20. Application of integration algorithms in a parallel processing environment for the simulation of jet engines

    NASA Technical Reports Server (NTRS)

    Krosel, S. M.; Milner, E. J.

    1982-01-01

    The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented, and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.
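
    The predictor-corrector pattern and its step-size sensitivity can be illustrated with one classic serial pairing, an Adams-Bashforth-2 predictor with a trapezoidal corrector in PECE mode. This is a generic sketch of the method family, not the paper's parallel implementation; the test problem y' = -y is a stand-in for the engine model:

    ```python
    import math

    def pece(f, t0, y0, h, n):
        """AB2 predictor / trapezoidal corrector in predict-evaluate-
        correct-evaluate (PECE) mode. AB2 needs two history points, so
        the startup value comes from one Heun (improved Euler) step --
        the 'algorithm startup' issue mentioned in the abstract."""
        t, y = t0, y0
        f_prev = f(t, y)
        y = y + h / 2 * (f_prev + f(t + h, y + h * f_prev))   # startup
        t += h
        for _ in range(n - 1):
            f_cur = f(t, y)
            y_pred = y + h / 2 * (3 * f_cur - f_prev)         # predict
            y = y + h / 2 * (f_cur + f(t + h, y_pred))        # correct
            f_prev = f_cur
            t += h
        return y

    # Test problem y' = -y, y(0) = 1, exact solution exp(-t); integrate
    # to t = 1 with two step sizes to see the second-order error scaling.
    f = lambda t, y: -y
    y_fine = pece(f, 0.0, 1.0, 0.01, 100)
    y_coarse = pece(f, 0.0, 1.0, 0.02, 50)
    ```

    Halving the step size cuts the error roughly fourfold for this second-order pair, the step-size effect the abstract's results examine.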

  1. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam, associated with the external tank of the U.S. space shuttle, has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.

  2. Etched optical fiber vibration sensor to monitor health condition of beam like structures

    NASA Astrophysics Data System (ADS)

    Putha, Kishore; Dantala, Dinakar; Kamineni, Srimannarayana; Pachava, Vengal Rao

    2013-06-01

    Using a center-etched single-mode optical fiber, a simple vibration sensor is designed to monitor the vibrations of a simply supported beam. The sensor has a highly linear response to axial displacement of about 0.8 mm, with a sensitivity of 32 mV/10 μm strain. The sensor is tested for periodic and suddenly released forces, and the results are found to coincide with the theoretical values. This simply designed, small-size and low-cost sensor may find applications in industry and civil engineering for monitoring the vibrations of beam structures and bridges.

  3. Caprylate Salts Based on Amines as Volatile Corrosion Inhibitors for Metallic Zinc: Theoretical and Experimental Studies

    PubMed Central

    Valente, Marco A. G.; Teixeira, Deiver A.; Azevedo, David L.; Feliciano, Gustavo T.; Benedetti, Assis V.; Fugivara, Cecílio S.

    2017-01-01

    The interaction of volatile corrosion inhibitors (VCI), caprylate salt derivatives from amines, with zinc metallic surfaces is assessed by density functional theory (DFT) computer simulations, electrochemical impedance (EIS) measurements and humid chamber tests. The results obtained by the different methods were compared, and linear correlations were obtained between theoretical and experimental data. The correlations between experimental and theoretical results showed that the molecular size is the determining factor in the inhibition efficiency. The models used and experimental results indicated that dicyclohexylamine caprylate is the most efficient inhibitor. PMID:28620602

  4. Equating Scores from Adaptive to Linear Tests

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    2006-01-01

    Two local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test…

  5. Estimating Ω from Galaxy Redshifts: Linear Flow Distortions and Nonlinear Clustering

    NASA Astrophysics Data System (ADS)

    Bromley, B. C.; Warren, M. S.; Zurek, W. H.

    1997-02-01

    We propose a method to determine the cosmic mass density Ω from redshift-space distortions induced by large-scale flows in the presence of nonlinear clustering. Nonlinear structures in redshift space, such as fingers of God, can contaminate distortions from linear flows on scales as large as several times the small-scale pairwise velocity dispersion σ_v. Following Peacock & Dodds, we work in the Fourier domain and propose a model to describe the anisotropy in the redshift-space power spectrum; tests with high-resolution numerical data demonstrate that the model is robust for both mass and biased galaxy halos on translinear scales and above. On the basis of this model, we propose an estimator of the linear growth parameter β = Ω^0.6/b, where b measures bias, derived from sampling functions that are tuned to eliminate distortions from nonlinear clustering. The measure is tested on the numerical data and found to recover the true value of β to within ~10%. An analysis of IRAS 1.2 Jy galaxies yields β = 0.8 (+0.4/−0.3) at a scale of 1000 km s^-1, which is close to optimal given the shot noise and finite size of the survey. This measurement is consistent with dynamical estimates of β derived from both real-space and redshift-space information. The importance of the method presented here is that nonlinear clustering effects are removed to enable linear correlation anisotropy measurements on scales approaching the translinear regime. We discuss implications for analyses of forthcoming optical redshift surveys in which the dispersion is more than a factor of 2 greater than in the IRAS data.
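
    In the purely linear (Kaiser) limit that the estimator tries to isolate, the redshift-space power spectrum is P_s(k, mu) = (1 + beta*mu^2)^2 P_r(k), and the quadrupole-to-monopole ratio of this anisotropy depends only on beta. A sketch of recovering beta from that ratio by bisection; this is the textbook linear-theory relation, not the paper's nonlinear-clustering-corrected estimator:

    ```python
    def quadrupole_monopole_ratio(beta):
        """Kaiser linear distortion P_s(k, mu) = (1 + beta*mu^2)^2 P_r(k):
        its Legendre monopole and quadrupole moments give
        P2/P0 = (4b/3 + 4b^2/7) / (1 + 2b/3 + b^2/5)."""
        return ((4 * beta / 3 + 4 * beta ** 2 / 7)
                / (1 + 2 * beta / 3 + beta ** 2 / 5))

    def beta_from_ratio(r, lo=0.0, hi=3.0, tol=1e-10):
        """Invert the ratio for beta by bisection; the ratio is
        monotonically increasing in beta on this range."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if quadrupole_monopole_ratio(mid) < r:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Round-trip check at beta = 0.8, the central IRAS value quoted above
    beta = beta_from_ratio(quadrupole_monopole_ratio(0.8))
    ```

    The paper's contribution is precisely that this clean relation is biased by fingers of God on translinear scales, motivating sampling functions that suppress the nonlinear contamination before such a measurement is made.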

  6. Development of semiconductor tracking: The future linear collider case

    NASA Astrophysics Data System (ADS)

    Savoy-Navarro, Aurore

    2011-04-01

An active R&D program on silicon tracking for the linear collider, SiLC, has been pursued for several years to develop the next generation of large-area silicon trackers for the future linear collider(s). The R&D objectives on new sensors, new front-end signal processing, and the related mechanical and integration challenges of building such large detectors within the proposed detector concepts are described. Synergies and differences with the LHC construction and upgrades are explained, as are the differences between the linear collider projects, namely the International Linear Collider (ILC) and the Compact Linear Collider (CLIC). Two final objectives are presented for the construction of this important sub-detector for the future linear collider experiments: a relatively short-term design based on micro-strips, combined or not with a gaseous central tracker, and a longer-term design based on an all-pixel tracker. The sensor R&D takes single-sided micro-strips as the baseline for the shorter term, with strips from large wafers (at least 6 in.), 200 μm thick with 50 μm pitch, and with edgeless and alignment-friendly options. This work is conducted by SiLC in collaboration with three technical research centers in Italy, Finland, and Spain, and with HPK. SiLC also studies advanced Si sensor technologies for higher-granularity trackers, especially short strips and pixels, all based on 3D technology. New deep sub-micron CMOS mixed-mode (analog and digital) front-end and readout electronics are being developed to fully process the detector signals, currently adapted to the ILC cycle. The result is a high-level-processing, fully programmable, and highly fault-tolerant ASIC; its latest version, handling 128 channels, will equip larger silicon tracking prototypes at test beams in the coming years. Connecting the front-end chip to the silicon detector, especially in the strip case, is a major issue; very preliminary results with an inline pitch adapter based on wire bonding have just been achieved. Bump-bonding or 3D vertical interconnection is the other SiLC R&D objective, with the goal of simplifying the overall architecture and decreasing the material budget of these devices. Three tracking concepts are briefly discussed, two of which are part of the ILC Letters of Intent of the ILD and SiD detector concepts. In recent years, SiLC has successfully performed beam tests to exercise and validate these R&D lines.

  7. Designing clinical trials to test disease-modifying agents: application to the treatment trials of Alzheimer's disease.

    PubMed

    Xiong, Chengjie; van Belle, Gerald; Miller, J Philip; Morris, John C

    2011-02-01

Therapeutic trials of disease-modifying agents for Alzheimer's disease (AD) require novel designs and analyses involving a switch of treatments for at least a portion of enrolled subjects. Randomized start and randomized withdrawal designs are two examples of such designs. Crucial design parameters such as sample size and the time of treatment switch are important to understand when designing such clinical trials. The purpose of this article is to provide methods to determine sample sizes and the time of treatment switch, as well as optimum statistical tests of treatment efficacy, for clinical trials of disease-modifying agents in AD. A general linear mixed-effects model is proposed to test the disease-modifying efficacy of novel therapeutic agents. This model links the longitudinal growth from both the placebo arm and the treatment arm at the time of treatment switch for those in the delayed-treatment or early-withdrawal arm, and incorporates the potential correlation in the rate of cognitive change before and after the switch. Sample sizes, the optimum time for treatment switch, and the optimum test statistic for treatment efficacy are determined from the model. Assuming an evenly spaced longitudinal design over a fixed duration, the optimum treatment switching time in a randomized start or randomized withdrawal trial is halfway through the trial. With the optimum test statistic for treatment efficacy, and over a wide spectrum of model parameters, the optimum sample size allocations are fairly close to the simplest design, with a sample size ratio of 1:1:1 among the treatment arm, the delayed-treatment or early-withdrawal arm, and the placebo arm. The application of the proposed methodology to AD provides evidence that much larger sample sizes are required to adequately power disease-modifying trials than trials of symptomatic agents, even when the treatment switch time and efficacy test are optimally chosen. The proposed method assumes that the only and immediate effect of the treatment switch is on the rate of cognitive change. Crucial design parameters for clinical trials of disease-modifying agents in AD can thus be chosen optimally. Government and industry officials, as well as academic researchers, should consider the optimum use of this trial design in the search for treatments with the potential to modify the underlying pathophysiology of AD.
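The randomized-start logic described above can be sketched with a toy mean-trajectory model (slopes, effect size, and trial duration are invented for illustration; this is not the authors' fitted mixed-effects model):

```python
# Toy mean trajectories for a randomized-start design (assumed slopes, not the
# authors' mixed-effects model). The delayed-start arm follows the placebo
# slope until the switch, then adopts the treatment slope; the switch is placed
# halfway through the trial, the optimum reported above.

def mean_outcome(t, total=2.0, slope_placebo=-2.0, slope_treated=-1.0, arm="delayed"):
    switch = total / 2.0  # optimum switch time: halfway through the trial
    if arm == "placebo":
        return slope_placebo * t
    if arm == "treated":
        return slope_treated * t
    # delayed-start arm: placebo slope before the switch, treated slope after
    if t <= switch:
        return slope_placebo * t
    return slope_placebo * switch + slope_treated * (t - switch)

# At trial end, a disease-modifying agent leaves the delayed-start arm
# permanently behind the continuously treated arm:
print(mean_outcome(2.0, arm="treated"))  # -> -2.0
print(mean_outcome(2.0, arm="delayed"))  # -> -3.0
```

Under a genuinely disease-modifying effect, the delayed-start arm never catches up with the continuously treated arm after the halfway switch, which is the persistent separation a test of treatment efficacy in this design targets.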

  8. Functional morphology of the bovid astragalus in relation to habitat: controlling phylogenetic signal in ecomorphology.

    PubMed

    Barr, W Andrew

    2014-11-01

Bovid astragali are among the most commonly preserved bones in the fossil record. Accordingly, astragali are an important target for studies seeking to predict the habitat preferences of fossil bovids based on bony anatomy. However, previous work has not tested functional hypotheses linking astragalar morphology with habitat while controlling for body size and phylogenetic signal. This article presents a functional framework relating the morphology of the bovid astragalus to habitat-specific locomotor ecology and tests four hypotheses emanating from this framework. Highly cursorial bovids living in structurally open habitats are hypothesized to differ from their less cursorial, closed-habitat-dwelling relatives in having (1) relatively short astragali to maintain rotational speed throughout the camming motion of the rotating astragalus, (2) a greater range of angular excursion at the hock, (3) relatively larger joint surface areas, and (4) a more pronounced "spline-and-groove" morphology promoting lateral joint stability. A diverse sample of 181 astragali from 50 extant species was scanned using a NextEngine laser scanner. Species were assigned to one of four habitat categories based on the published ecological literature. A series of 11 linear measurements and three joint surface areas were measured on each astragalus. A geometric mean body size proxy was used to size-correct the measurement data. Phylogenetic generalized least squares (PGLS) was used to test for differences between habitat categories while controlling for body size differences and phylogenetic signal. Statistically significant PGLS results support Hypotheses 1 and 2 (which are not mutually exclusive) as well as Hypothesis 3. No support was found for Hypothesis 4. These findings confirm that the morphology of the bovid astragalus is related to habitat-specific locomotor ecology, and that this relationship remains statistically significant after controlling for body size and phylogeny. Thus, this study validates the use of this bone as an ecomorphological indicator. © 2014 Wiley Periodicals, Inc.

  9. Differences in motor performance between children and adolescents in Mozambique and Portugal: impact of allometric scaling.

    PubMed

    Dos Santos, Fernanda Karina; Nevill, Allan; Gomes, Thayse Natacha Q F; Chaves, Raquel; Daca, Timóteo; Madeira, Aspacia; Katzmarzyk, Peter T; Prista, António; Maia, José A R

    2016-05-01

Children from developed and developing countries have different anthropometric characteristics, which may affect their motor performance (MP). The aim was to use an allometric approach to model the relationship between body size and MP in youth from two countries differing in socio-economic status: Portugal and Mozambique. A total of 2946 subjects, 1280 Mozambicans (688 girls) and 1666 Portuguese (826 girls), aged 10-15 years, were sampled. Height and weight were measured and the reciprocal ponderal index (RPI) was computed. MP included handgrip strength, 1-mile run/walk, curl-up, and standing long jump tests. A multiplicative allometric model was adopted to adjust for body size differences across countries. Differences in MP between Mozambican and Portuguese children exist, invariably favouring the latter. The allometric models used to adjust MP for differences in body size identified the optimal body shape as either the RPI or an even more linear term, i.e., approximately height/mass^0.25. Having adjusted the MP variables for differences in body size, the differences between Mozambican and Portuguese children were invariably reduced and, in the case of grip strength, reversed. These results reinforce the notion that significant differences in MP exist across countries, even after adjusting for differences in body size.
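The size adjustment can be sketched as follows (a hedged illustration: the shape term height/mass^0.25 follows the optimal body shape reported above, but the simple ratio scaling and the example numbers are assumptions, not the fitted multiplicative model):

```python
# Hedged sketch of an allometric size adjustment. The body-shape term
# height / mass**0.25 is the "more linear" optimal shape reported above;
# dividing raw performance by it is an illustrative simplification, not the
# paper's fitted multiplicative model.

def size_adjusted(performance: float, height_m: float, mass_kg: float) -> float:
    """Scale raw motor performance by the reciprocal-ponderal-like shape term."""
    shape = height_m / mass_kg ** 0.25
    return performance / shape

# Two children with equal raw grip strength but different body size no longer
# look identical once size-adjusted (example heights/masses are invented):
print(round(size_adjusted(20.0, 1.40, 35.0), 2))  # -> 34.75
print(round(size_adjusted(20.0, 1.60, 55.0), 2))  # -> 34.04
```

The larger child's adjusted score is slightly lower, mirroring how adjustment shrank (and for grip strength reversed) the between-country differences.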

  10. Detonation charge size versus coda magnitude relations in California and Nevada

    USGS Publications Warehouse

    Brocher, T.M.

    2003-01-01

Magnitude-charge size relations have important uses in forensic seismology and are used in Comprehensive Nuclear-Test-Ban Treaty monitoring. I derive empirical magnitude versus detonation-charge-size relationships for 322 detonations located by permanent seismic networks in California and Nevada. These detonations, used in 41 different seismic refraction or network calibration experiments, ranged in yield (charge size) between 25 and 10^6 kg; coda magnitudes reported for them ranged from 0.5 to 3.9. Almost all represent simultaneous (single-fired) detonations of one or more boreholes. Repeated detonations at the same shotpoint suggest that the reported coda magnitudes are repeatable, on average, to within 0.1 magnitude unit. An empirical linear regression for these 322 detonations yields M = 0.31 + 0.50 log10(weight [kg]). The detonations compiled here demonstrate that the Khalturin et al. (1998) relationship, developed mainly for data from large chemical explosions but which fits data from nuclear blasts, can be used to estimate the minimum charge size for coda magnitudes between 0.5 and 3.9. Drilling, loading, and shooting logs indicate that the explosive specification, loading method, and effectiveness of tamp are the primary factors determining the efficiency of a detonation. These records indicate that locating a detonation within the water table is neither a necessary nor a sufficient condition for an efficient shot.
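The empirical regression quoted above can be applied directly; a short sketch (the coefficients are from the abstract, the example charge weight is arbitrary, and the inversion ignores the scatter about the fit):

```python
import math

# The empirical regression quoted above: M = 0.31 + 0.50 * log10(weight in kg).
# Inverting it gives a rough charge-size estimate from a reported coda
# magnitude, ignoring the ~0.1-unit scatter in repeated shots.

def coda_magnitude(charge_kg: float) -> float:
    return 0.31 + 0.50 * math.log10(charge_kg)

def charge_from_magnitude(m: float) -> float:
    return 10 ** ((m - 0.31) / 0.50)

print(round(coda_magnitude(1000.0), 2))        # -> 1.81
print(round(charge_from_magnitude(1.81)))      # -> 1000
```

The regression and its inverse are exact mirrors, so a 1000 kg shot maps to coda magnitude 1.81 and back.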

  11. The roles of productivity and ecosystem size in determining food chain length in tropical terrestrial ecosystems.

    PubMed

    Young, Hillary S; McCauley, Douglas J; Dunbar, Robert B; Hutson, Michael S; Ter-Kuile, Ana Miller; Dirzo, Rodolfo

    2013-03-01

    Many different drivers, including productivity, ecosystem size, and disturbance, have been considered to explain natural variation in the length of food chains. Much remains unknown about the role of these various drivers in determining food chain length, and particularly about the mechanisms by which they may operate in terrestrial ecosystems, which have quite different ecological constraints than aquatic environments, where most food chain length studies have been thus far conducted. In this study, we tested the relative importance of ecosystem size and productivity in influencing food chain length in a terrestrial setting. We determined that (1) there is no effect of ecosystem size or productive space on food chain length; (2) rather, food chain length increases strongly and linearly with productivity; and (3) the observed changes in food chain length are likely achieved through a combination of changes in predator size, predator behavior, and consumer diversity along gradients in productivity. These results lend new insight into the mechanisms by which productivity can drive changes in food chain length, point to potential for systematic differences in the drivers of food web structure between terrestrial and aquatic systems, and challenge us to consider how ecological context may control the drivers that shape food chain length.

  12. Initial Simulations of RF Waves in Hot Plasmas Using the FullWave Code

    NASA Astrophysics Data System (ADS)

    Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo

    2017-10-01

FullWave is a simulation tool that models rf fields in hot, inhomogeneous, magnetized plasmas. The wave equations with a linearized hot-plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot-plasma dielectric response is formulated by calculating the plasma conductivity kernel from the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. In an rf field, the hot-plasma dielectric response is localized to within a few particle Larmor radii of the magnetic field line passing through the test point. This localization results in a sparse problem matrix, which significantly reduces the size of the problem and makes the simulations faster. We will present initial results of modeling rf waves with the FullWave code, including the calculation of the nonlocal conductivity kernel in 2D tokamak geometry; the interpolation of the conductivity kernel from test points to the adaptive cloud of computational points; and the results of self-consistent simulations of 2D rf fields using the calculated hot-plasma conductivity kernel in a tokamak plasma with reduced parameters. Work supported by the US DOE SBIR program.

  13. Performance of a reentrant cavity beam position monitor

    NASA Astrophysics Data System (ADS)

    Simon, Claire; Luong, Michel; Chel, Stéphane; Napoly, Olivier; Novo, Jorge; Roudier, Dominique; Rouvière, Nelly; Baboi, Nicoleta; Mildner, Nils; Nölle, Dirk

    2008-08-01

Beam-based alignment and feedback systems, essential for future colliders, require high-resolution beam position monitors (BPMs). In the framework of the European CARE/SRF program, a reentrant cavity BPM with its associated electronics was developed by CEA/DSM/Irfu in collaboration with DESY. The design, fabrication, and beam test of this monitor are detailed in this paper. The BPM is designed to be inserted in a cryomodule and to work at cryogenic temperature in a clean environment. It has achieved a resolution better than 10 μm and can perform bunch-to-bunch measurements for the x-ray free electron laser (X-FEL) and the International Linear Collider (ILC). Its other features are a small rf cavity size, a large aperture (78 mm), and excellent linearity. A first prototype of the reentrant cavity BPM was installed in the Free Electron Laser in Hamburg (FLASH) at Deutsches Elektronen-Synchrotron (DESY) and demonstrated operation at cryogenic temperature inside a cryomodule. The second, also installed in the FLASH linac for tests with beam, measured a resolution of approximately 4 μm over a dynamic range of ±5 mm in single-bunch mode.

  14. The effects of dry-rolled corn particle size on performance, carcass traits, and starch digestibility in feedlot finishing diets containing wet distiller's grains.

    PubMed

    Schwandt, E F; Wagner, J J; Engle, T E; Bartle, S J; Thomson, D U; Reinhardt, C D

    2016-03-01

Crossbred yearling steers (n = 360; 395 ± 33.1 kg initial BW) were used to evaluate the effects of dry-rolled corn (DRC) particle size in diets containing 20% wet distiller's grains plus solubles on feedlot performance, carcass characteristics, and starch digestibility. Steers were used in a randomized complete block design and allocated to 36 pens (9 pens/treatment, with 10 animals/pen). Treatments were coarse DRC (4,882 μm), medium DRC (3,760 μm), fine DRC (2,359 μm), and steam-flaked corn (0.35 kg/L; SFC). Final BW and ADG were not affected by treatment (P > 0.05). Dry matter intake was greater and G:F was lower (P < 0.05) for steers fed DRC vs. steers fed SFC. There was a linear decrease (P < 0.05) in DMI in the final 5 wk on feed with decreasing DRC particle size. Fecal starch decreased (linear, P < 0.01) as DRC particle size decreased. In situ starch disappearance was lower for DRC vs. SFC (P < 0.05) and linearly increased (P < 0.05) with decreasing particle size at 8 and 24 h. Reducing DRC particle size did not influence growth performance but increased starch digestion and influenced DMI of cattle on finishing diets. No differences (P > 0.10) were observed among treatments for any of the carcass traits measured. Results indicate improved ruminal starch digestibility, reduced fecal starch concentration, and reduced DMI with decreasing DRC particle size in feedlot diets containing 20% wet distiller's grains on a DM basis.

  15. Adsorption of Poly(methyl methacrylate) on Concave Al2O3 Surfaces in Nanoporous Membranes

    PubMed Central

    Nunnery, Grady; Hershkovits, Eli; Tannenbaum, Allen; Tannenbaum, Rina

    2009-01-01

The objective of this study was to determine the influence of polymer molecular weight and surface curvature on the adsorption of polymers onto concave surfaces. Poly(methyl methacrylate) (PMMA) of various molecular weights was adsorbed onto porous aluminum oxide membranes with pore sizes ranging from 32 to 220 nm. The surface coverage, expressed as repeat units per unit surface area, was observed to vary linearly with molecular weight for molecular weights below ~120,000 g/mol. The coverage was independent of molecular weight above this critical molar mass, as was previously reported for the adsorption of PMMA on convex surfaces. Furthermore, the coverage varied linearly with pore size. A theoretical model was developed to describe curvature-dependent adsorption by considering the density gradient that exists between the surface and the edge of the adsorption layer. According to this model, the density gradient of the adsorbed polymer segments scales inversely with particle size, while the total coverage scales linearly with particle size, in good agreement with experiment. These results show that the details of the adsorption of polymers onto concave surfaces with cylindrical geometries can be used to calculate molecular weight (below the critical molecular weight) if the pore size is known. Conversely, pore size can also be determined with similar adsorption experiments. Most significantly, for polymers above the critical molecular weight, the precise molecular weight need not be known in order to determine pore size. Moreover, the adsorption model developed and validated in this work can be used to predict coverage on surfaces with other geometries. PMID:19415910
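The reported scaling can be summarized in a schematic piecewise model (the prefactor and units are invented for illustration; only the piecewise-linear form, the ~120,000 g/mol critical mass, and the linear pore-size dependence follow the results above):

```python
# Schematic of the reported scaling (prefactor c is invented): surface coverage
# grows linearly with molecular weight up to a critical molar mass, is flat
# above it, and scales linearly with pore size.

MW_CRITICAL = 120_000.0  # g/mol, critical molar mass reported above

def coverage(mw: float, pore_nm: float, c: float = 1.0) -> float:
    """Repeat units per unit area (arbitrary units); c is an assumed constant."""
    return c * min(mw, MW_CRITICAL) * pore_nm

# Below the critical mass, coverage tracks molecular weight; above it,
# only pore size matters:
print(coverage(60_000, 100) == 0.5 * coverage(120_000, 100))   # -> True
print(coverage(150_000, 100) == coverage(200_000, 100))        # -> True
```

This is why, above the critical molecular weight, an adsorption experiment determines pore size without knowing the precise molecular weight.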

  16. Solar granulation and statistical crystallography: A modeling approach using size-shape relations

    NASA Technical Reports Server (NTRS)

    Noever, D. A.

    1994-01-01

The irregular polygonal pattern of solar granulation is analyzed for size-shape relations using statistical crystallography. In contrast to previous work, which has assumed perfectly hexagonal patterns for granulation, a more realistic accounting of cell (granule) shapes reveals a broader basis for quantitative analysis. Several features emerge as noteworthy: (1) a linear correlation between a cell's number of sides and the average number of sides of its neighbors (the Aboav-Weaire law); (2) linear correlations of both average cell area and perimeter with the number of cell sides (Lewis's law and a perimeter law, respectively); and (3) a linear correlation between cell area and squared perimeter (the convolution index). This statistical picture of granulation is consistent with a finding of no correlation in cell shapes beyond nearest neighbors. A comparative calculation between existing model predictions taken from luminosity data and the present analysis shows substantial agreement for cell-size distributions. A model for understanding grain lifetimes is proposed which links convective times to cell shape using crystallographic results.
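Lewis's law, one of the linear relations listed above, can be made concrete with a small sketch (the coefficients A6 and alpha are invented; only the linear form A(n) = A6·(1 + α·(n − 6)) is taken from the statistical-crystallography literature the abstract draws on):

```python
# Minimal sketch of Lewis's law: mean cell area linear in side number n.
# Coefficients are invented; the point is only the linear form.

def lewis_area(n: int, a6: float = 1.0, alpha: float = 0.25) -> float:
    """Mean area of an n-sided cell under Lewis's law (illustrative constants)."""
    return a6 * (1.0 + alpha * (n - 6))

def fit_slope(xs, ys):
    """Ordinary least-squares slope, enough to confirm linearity."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

ns = [4, 5, 6, 7, 8, 9]
areas = [lewis_area(n) for n in ns]
print(round(fit_slope(ns, areas), 3))  # -> 0.25 (exactly linear in n)
```

Fitting such slopes to measured granule areas and side counts is the kind of size-shape test the analysis above applies to solar granulation.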

  17. Changes in the selection differential exerted on a marine snail during the ontogeny of a predatory shore crab.

    PubMed

    Pakes, D; Boulding, E G

    2010-08-01

    Empirical estimates of selection gradients caused by predators are common, yet no one has quantified how these estimates vary with predator ontogeny. We used logistic regression to investigate how selection on gastropod shell thickness changed with predator size. Only small and medium purple shore crabs (Hemigrapsus nudus) exerted a linear selection gradient for increased shell-thickness within a single population of the intertidal snail (Littorina subrotundata). The shape of the fitness function for shell thickness was confirmed to be linear for small and medium crabs but was humped for large male crabs, suggesting no directional selection. A second experiment using two prey species to amplify shell thickness differences established that the selection differential on adult snails decreased linearly as crab size increased. We observed differences in size distribution and sex ratios among three natural shore crab populations that may cause spatial and temporal variation in predator-mediated selection on local snail populations.

  18. The Birmingham pituitary database: auditing the outcome of the treatment of acromegaly.

    PubMed

    Jenkins, D; O'Brien, I; Johnson, A; Shakespear, R; Sheppard, M C; Stewart, P M

    1995-11-01

Reduction of GH concentrations in acromegalic subjects may reduce the increased mortality associated with the condition. Audit of the biochemical outcome of the management of acromegaly is therefore important. The aims were: (1) to audit the biochemical 'cure' rate of acromegalic patients treated by surgery and/or radiotherapy under the care of the South Birmingham Endocrine Clinic; and (2) to assess the correlation of random or basal GH with IGF-I and with nadir GH during an oral glucose tolerance test. Acromegalic patients were ascertained from a pituitary database. Mode of therapy, pretreatment GH, pretreatment tumour size, post-treatment GH, post-treatment IGF-I, and post-treatment nadir GH were recorded. Biochemical cure was defined as a most recent random or basal GH < 5 mU/l. Cure rates were determined. Eighty-nine acromegalic patients were identified as having received surgery and/or radiotherapy. In 35/89 (39%) the most recent GH was < 5 mU/l. The cure rate following surgery was 26/78 (33%). This was not significantly associated with tumour size, but was associated with pretreatment GH concentration (χ² = 7.1, 2 d.f., P < 0.05). Random/basal GH showed a log-linear association with IGF-I (r = 0.72) and a linear association with nadir GH (r = 0.93). Biochemical cure of acromegaly was more strongly associated with pretreatment GH than with tumour size. Random/basal GH measurements are useful and convenient for the audit of treatment outcome in acromegaly. Ways of improving the biochemical outcome of acromegaly should be sought.

  19. Correlation between structure and compressive strength in a reticulated glass-reinforced hydroxyapatite foam.

    PubMed

    Callcut, S; Knowles, J C

    2002-05-01

Glass-reinforced hydroxyapatite (HA) foams were produced by reticulated foam technology, using a polyurethane template with two different pore size distributions. The mechanical properties were evaluated and the structure analyzed through density measurements, image analysis, X-ray diffraction (XRD), and scanning electron microscopy (SEM). For the mechanical properties, the use of a glass significantly improved the ultimate compressive strength (UCS), as did the use of a second coating. All the samples tested showed the classic three regions characteristic of an elastic brittle foam. From the density measurements, after applying a correction to compensate for closed porosity, the bulk and apparent densities showed a 1:1 correlation. When relative bulk density was plotted against UCS, a non-linear relationship was found, characteristic of an isotropic open-celled material. Image analysis showed that the pore size distribution did not change in form and that the macrostructure did not degrade when the ceramic was replicated from the initial polyurethane template during processing; however, the pore size distributions shifted to smaller sizes by about 0.5 mm due to the firing process. The ceramic foams were found to exhibit mechanical properties typical of isotropic open cellular foams.

  20. Analysis of daylight performance of solar light pipes influenced by size and shape of sunlight captures

    NASA Astrophysics Data System (ADS)

    Wu, Yanpeng; Jin, Rendong; Zhang, Wenming; Liu, Li; Zou, Dachao

    2009-11-01

Experimental investigations were carried out on three sunlight captures with diameters of 150 mm, 212 mm, and 300 mm under sunny, cloudy, and overcast conditions, and on two solar light pipes with diameters of 360 mm and 160 mm under sunny conditions. The illuminance at the centre of the sunlight capture is related to its size, but not linearly. To improve the efficiency of solar light pipes, the structure and performance of the sunlight capture must be enhanced. For example, at the University of Science and Technology Beijing Gymnasium, which hosted the Judo and Taekwondo events of the Beijing 2008 Olympics, 148 solar light pipes were installed, each with a diameter of 530 mm. Two sunlight captures of different shapes were installed and tested. Measurements of the illuminance on the work plane of the gymnasium show that the improved sunlight captures perform better when their size is increased and their internal surface is machined at the same time, so that refraction is increased and the efficiency of the solar light pipes improved. Good supplementary lighting for the gymnasium was thereby achieved.

  1. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuehlsdorff, T. J., E-mail: tjz21@cam.ac.uk; Payne, M. C.; Hine, N. D. M.

    2015-11-28

We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary, which is well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.

  2. The Non-linear Health Consequences of Living in Larger Cities.

    PubMed

    Rocha, Luis E C; Thorson, Anna E; Lambiotte, Renaud

    2015-10-01

Urbanization promotes economy, mobility, and access to and availability of resources but, on the other hand, generates higher levels of pollution, violence, crime, and mental distress. The health consequences of the agglomeration of people living close together are not fully understood. In particular, it remains unclear how variations in population size across cities impact the health of the population. We analyze deviations from linearity in the scaling of several health-related quantities, such as the incidence and mortality of diseases, external causes of death, wellbeing, and health care availability, with respect to the population size of cities in Brazil, Sweden, and the USA. We find that deaths by non-communicable diseases tend to be relatively less common in larger cities, whereas the per capita incidence of infectious diseases is relatively larger for increasing population size. Healthier lifestyles and the availability of medical support are disproportionately higher in larger cities. The results are connected with the optimization of human and physical resources and with the non-linear effects of social networks in larger populations. An urban advantage in terms of health is not evident, and using rates as indicators to compare cities with different population sizes may be insufficient.
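Deviations from linearity of this kind are conventionally quantified by fitting a power law Y = a·N^β in log-log space, with β > 1 superlinear and β < 1 sublinear; a small sketch with synthetic data (the exponent 1.15 and the city populations are invented, not the paper's Brazil/Sweden/USA indicators):

```python
import math

# Illustrative log-log fit of an urban scaling law Y = a * N**beta, the
# standard way deviations from linearity are quantified. The data here are
# synthetic, generated with an invented superlinear exponent.

def scaling_exponent(populations, quantities):
    """Least-squares slope of log(quantity) against log(population)."""
    xs = [math.log(p) for p in populations]
    ys = [math.log(q) for q in quantities]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den  # beta > 1: superlinear; beta < 1: sublinear

pops = [1e4, 1e5, 1e6, 1e7]
cases = [p ** 1.15 for p in pops]  # synthetic superlinear indicator
print(round(scaling_exponent(pops, cases), 2))  # -> 1.15
```

A β recovered above 1, as here, is the signature of a per capita rate that rises with city size, as the abstract reports for infectious disease incidence.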

  3. The Impact of the Grid Size on TomoTherapy for Prostate Cancer

    PubMed Central

    Kawashima, Motohiro; Kawamura, Hidemasa; Onishi, Masahiro; Takakusagi, Yosuke; Okonogi, Noriyuki; Okazaki, Atsushi; Sekihara, Tetsuo; Ando, Yoshitaka; Nakano, Takashi

    2017-01-01

    Discretization errors due to the digitization of computed tomography images and the calculation grid are a significant issue in radiation therapy. Such errors have been quantitatively reported for a fixed multifield intensity-modulated radiation therapy using traditional linear accelerators. The aim of this study is to quantify the influence of the calculation grid size on the dose distribution in TomoTherapy. This study used ten treatment plans for prostate cancer. The final dose calculation was performed with “fine” (2.73 mm) and “normal” (5.46 mm) grid sizes. The dose distributions were compared from different points of view: the dose-volume histogram (DVH) parameters for planning target volume (PTV) and organ at risk (OAR), the various indices, and dose differences. The DVH parameters were used Dmax, D2%, D2cc, Dmean, D95%, D98%, and Dmin for PTV and Dmax, D2%, and D2cc for OARs. The various indices used were homogeneity index and equivalent uniform dose for plan evaluation. Almost all of DVH parameters for the “fine” calculations tended to be higher than those for the “normal” calculations. The largest difference of DVH parameters for PTV was Dmax and that for OARs was rectal D2cc. The mean difference of Dmax was 3.5%, and the rectal D2cc was increased up to 6% at the maximum and 2.9% on average. The mean difference of D95% for PTV was the smallest among the differences of the other DVH parameters. For each index, whether there was a significant difference between the two grid sizes was determined through a paired t-test. There were significant differences for most of the indices. The dose difference between the “fine” and “normal” calculations was evaluated. Some points around high-dose regions had differences exceeding 5% of the prescription dose. The influence of the calculation grid size in TomoTherapy is smaller than traditional linear accelerators. However, there was a significant difference. 
We recommend calculating the final dose using the “fine” grid size. PMID:28974860
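The record cites the homogeneity index as one of its plan-evaluation metrics without defining it. A common DVH-based definition follows ICRU Report 83; the sketch below uses that convention as an assumption (the record may use a different variant), with invented example dose values:

```python
def homogeneity_index(d2, d98, d50):
    # ICRU 83-style homogeneity index for a planning target volume:
    # HI = (D2% - D98%) / D50%; values closer to 0 mean a more uniform dose.
    return (d2 - d98) / d50

# Hypothetical PTV doses in Gy: D2% = 78, D98% = 74, D50% = 76.
hi = homogeneity_index(78.0, 74.0, 76.0)
```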

  4. Wolves adapt territory size, not pack size to local habitat quality.

    PubMed

    Kittle, Andrew M; Anderson, Morgan; Avgar, Tal; Baker, James A; Brown, Glen S; Hagens, Jevon; Iwachewski, Ed; Moffatt, Scott; Mosser, Anna; Patterson, Brent R; Reid, Douglas E B; Rodgers, Arthur R; Shuter, Jen; Street, Garrett M; Thompson, Ian D; Vander Vennen, Lucas M; Fryxell, John M

    2015-09-01

    1. Although local variation in territorial predator density is often correlated with habitat quality, the causal mechanism underlying this frequently observed association is poorly understood and could stem from facultative adjustment in either group size or territory size. 2. To test between these alternative hypotheses, we used a novel statistical framework to construct a winter population-level utilization distribution for wolves (Canis lupus) in northern Ontario, which we then linked to a suite of environmental variables to determine factors influencing wolf space use. Next, we compared habitat quality metrics emerging from this analysis as well as an independent measure of prey abundance, with pack size and territory size to investigate which hypothesis was most supported by the data. 3. We show that wolf space use patterns were concentrated near deciduous, mixed deciduous/coniferous and disturbed forest stands favoured by moose (Alces alces), the predominant prey species in the diet of wolves in northern Ontario, and in proximity to linear corridors, including shorelines and road networks remaining from commercial forestry activities. 4. We then demonstrate that landscape metrics of wolf habitat quality - projected wolf use, probability of moose occupancy and proportion of preferred land cover classes - were inversely related to territory size but unrelated to pack size. 5. These results suggest that wolves in boreal ecosystems alter territory size, but not pack size, in response to local variation in habitat quality. This could be an adaptive strategy to balance trade-offs between territorial defence costs and energetic gains due to resource acquisition. That pack size was not responsive to habitat quality suggests that variation in group size is influenced by other factors such as intraspecific competition between wolf packs. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.

  5. Novel AC Servo Rotating and Linear Composite Driving Device for Plastic Forming Equipment

    NASA Astrophysics Data System (ADS)

    Liang, Jin-Tao; Zhao, Sheng-Dun; Li, Yong-Yi; Zhu, Mu-Zhi

    2017-07-01

Existing plastic forming equipment is mostly driven by traditional AC motors through long transmission chains; low efficiency, large size, low precision, and poor dynamic response are the common disadvantages. In order to realize high-performance forming processes, the driving device should be improved, especially for complicated processing motions. Based on electric servo direct-drive technology, a novel AC servo rotating and linear composite driving device is proposed, which implements both spindle rotation and feed motion without a transmission, so that a compact structure and precise control can be achieved. A flux-switching topology is employed in the rotating drive component for strong robustness, and a fractional-slot design is employed in the linear direct-drive component for large force capability. The mechanical structure for compositing the rotary and linear motions is then designed. A device prototype was manufactured; the machining of each component and the whole assembly are presented. Commercial servo amplifiers are utilized to construct the control system of the proposed device. To validate the effectiveness of the proposed composite driving device, experiments on dynamic test benches were conducted. The results indicate that the output torque can reach 420 N·m, with dynamic tracking errors of less than about 0.3 rad in the rotating drive and less than about 1.6 mm in the linear feed. The proposed research provides a method for constructing high-efficiency, high-accuracy direct driving devices for plastic forming equipment.

  6. Long bunch trains measured using a prototype cavity beam position monitor for the Compact Linear Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cullinan, F. J.; Boogert, S. T.; Farabolini, W.

    2015-11-19

The Compact Linear Collider (CLIC) requires beam position monitors (BPMs) with 50 nm spatial resolution for alignment of the beam line elements in the main linac and beam delivery system. Furthermore, the BPMs must be able to make multiple independent measurements within a single 156 ns long bunch train. A prototype cavity BPM for CLIC has been manufactured and tested on the probe beam line at the 3rd CLIC Test Facility (CTF3) at CERN. The transverse beam position is determined from the electromagnetic resonant modes excited by the beam in the two cavities of the pickup, the position cavity and the reference cavity. The mode that is measured in each cavity resonates at 15 GHz and has a loaded quality factor that is below 200. Analytical expressions for the amplitude, phase and total energy of signals from long trains of bunches have been derived and the main conclusions are discussed. The results of the beam tests are presented. The variable gain of the receiver electronics has been characterized using beam excited signals and the form of the signals for different beam pulse lengths with the 2/3 ns bunch spacing has been observed. The sensitivity of the reference cavity signal to charge and the horizontal position signal to beam offset have been measured and are compared with theoretical predictions based on laboratory measurements of the BPM pickup and the form of the resonant cavity modes as determined by numerical simulation. Lastly, the BPM was calibrated so that the beam position jitter at the BPM location could be measured. It is expected that the beam jitter scales linearly with the beam size and so the results are compared to predicted values for the latter.

  7. Discrimination of nuclear explosions and earthquakes from teleseismic distances with a local network of short period seismic stations using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Tiira, Timo

    1996-10-01

The seismic discrimination capability of artificial neural networks (ANNs) was studied using earthquakes and nuclear explosions recorded at teleseismic distances. The events were selected from two areas, which were analyzed separately. First, 23 nuclear explosions from the Semipalatinsk and Lop Nor test sites were compared with 46 earthquakes from adjacent areas. Second, 39 explosions from the Nevada test site were compared with 27 earthquakes from close-by areas. The basic discriminants were complexity, spectral ratio and third moment of frequency. The spectral discriminants were computed in five different ways to obtain all the information embedded in the signals, some of which were relatively weak. The discriminants were computed using data from six short-period stations in central and southern Finland. The spectral contents of the signals of both classes varied considerably between the stations. The 66 discriminants were formed into 65 optimum subsets of different sizes by using stepwise linear regression. A type of ANN called the multilayer perceptron (MLP) was applied to each of the subsets. As a comparison, the classification was repeated using linear discriminant analysis (LDA). Since the number of events was small, testing was done with the leave-one-out method. The ANN gave significantly better results than LDA. As a final tool for discrimination, a combination of the ten neural nets with the best performance was used. All events from Central Asia were clearly discriminated, and over 90% of the events from the Nevada region were confidently discriminated. The better performance of the ANNs was attributed to their ability to form complex decision regions between the groups and to their highly non-linear nature.
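The leave-one-out testing scheme used in this record can be sketched generically: hold out each event once, train on the remainder, and score the held-out event. The classifier below is a simple nearest-centroid stand-in (the study itself used multilayer perceptrons and LDA, not this classifier), and all names are invented for illustration:

```python
import numpy as np

def nearest_centroid_predict(X_train, y_train, x):
    # Classify x by the closest class mean, a crude stand-in for LDA
    # with identity covariance.
    labels = np.unique(y_train)
    dists = [np.linalg.norm(x - X_train[y_train == c].mean(axis=0)) for c in labels]
    return labels[int(np.argmin(dists))]

def leave_one_out_accuracy(X, y):
    # Each event is held out once; the classifier is refit on the rest.
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        pred = nearest_centroid_predict(X[mask], y[mask], X[i])
        correct += int(pred == y[i])
    return correct / len(y)
```

With only a few dozen events per region, as here, leave-one-out wastes no data on a held-out test set, at the cost of refitting the model once per event.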

  8. Long bunch trains measured using a prototype cavity beam position monitor for the Compact Linear Collider

    NASA Astrophysics Data System (ADS)

    Cullinan, F. J.; Boogert, S. T.; Farabolini, W.; Lefevre, T.; Lunin, A.; Lyapin, A.; Søby, L.; Towler, J.; Wendt, M.

    2015-11-01

    The Compact Linear Collider (CLIC) requires beam position monitors (BPMs) with 50 nm spatial resolution for alignment of the beam line elements in the main linac and beam delivery system. Furthermore, the BPMs must be able to make multiple independent measurements within a single 156 ns long bunch train. A prototype cavity BPM for CLIC has been manufactured and tested on the probe beam line at the 3rd CLIC Test Facility (CTF3) at CERN. The transverse beam position is determined from the electromagnetic resonant modes excited by the beam in the two cavities of the pickup, the position cavity and the reference cavity. The mode that is measured in each cavity resonates at 15 GHz and has a loaded quality factor that is below 200. Analytical expressions for the amplitude, phase and total energy of signals from long trains of bunches have been derived and the main conclusions are discussed. The results of the beam tests are presented. The variable gain of the receiver electronics has been characterized using beam excited signals and the form of the signals for different beam pulse lengths with the 2 /3 ns bunch spacing has been observed. The sensitivity of the reference cavity signal to charge and the horizontal position signal to beam offset have been measured and are compared with theoretical predictions based on laboratory measurements of the BPM pickup and the form of the resonant cavity modes as determined by numerical simulation. Finally, the BPM was calibrated so that the beam position jitter at the BPM location could be measured. It is expected that the beam jitter scales linearly with the beam size and so the results are compared to predicted values for the latter.

  9. Visibility vs. biomass in flowers: exploring corolla allocation in Mediterranean entomophilous plants

    PubMed Central

    Herrera, Javier

    2009-01-01

    Background and Aims While pollinators may in general select for large, morphologically uniform floral phenotypes, drought stress has been proposed as a destabilizing force that may favour small flowers and/or promote floral variation within species. Methods The general validity of this concept was checked by surveying a taxonomically diverse array of 38 insect-pollinated Mediterranean species. The interplay between fresh biomass investment, linear size and percentage corolla allocation was studied. Allometric relationships between traits were investigated by reduced major-axis regression, and qualitative correlates of floral variation explored using general linear-model MANOVA. Key Results Across species, flowers were perfectly isometrical with regard to corolla allocation (i.e. larger flowers were just scaled-up versions of smaller ones and vice versa). In contrast, linear size and biomass varied allometrically (i.e. there were shape variations, in addition to variations in size). Most floral variables correlated positively and significantly across species, except corolla allocation, which was largely determined by family membership and floral symmetry. On average, species with bilateral flowers allocated more to the corolla than those with radial flowers. Plant life-form was immaterial to all of the studied traits. Flower linear size variation was in general low among conspecifics (coefficients of variation around 10 %), whereas biomass was in general less uniform (e.g. 200–400 mg in Cistus salvifolius). Significant among-population differences were detected for all major quantitative floral traits. Conclusions Flower miniaturization can allow an improved use of reproductive resources under prevailingly stressful conditions. The hypothesis that flower size reflects a compromise between pollinator attraction, water requirements and allometric constraints among floral parts is discussed. PMID:19258340

  10. Final Report: CNC Micromachines LDRD No.10793

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    JOKIEL JR., BERNHARD; BENAVIDES, GILBERT L.; BIEG, LOTHAR F.

    2003-04-01

The three-year LDRD ''CNC Micromachines'' was successfully completed at the end of FY02. The project had four major breakthroughs in spatial motion control in MEMS: (1) A unified method for designing scalable planar and spatial on-chip motion control systems was developed. The method relies on the use of parallel kinematic mechanisms (PKMs) that, when properly designed, provide different types of motion on-chip without the need for post-fabrication assembly. (2) A new type of actuator was developed, the linear stepping track drive (LSTD), which provides open-loop linear position control that is scalable in displacement, output force and step size. Several versions of this actuator were designed, fabricated and successfully tested. (3) Different versions of XYZ translation-only and PTT motion stages were designed, successfully fabricated and successfully tested, demonstrating absolutely that on-chip spatial motion control systems are not only possible, but are a reality. (4) Control algorithms, software and infrastructure based on MATLAB were created and successfully implemented to drive the XYZ and PTT motion platforms in a controlled manner. The control software is capable of reading an M/G code machine tool language file, decoding the instructions, and correctly calculating and applying position and velocity trajectories to the motion device's linear drive inputs to position the device platform along the trajectory specified by the input file. A full and detailed account of design methodology, theory and experimental results (failures and successes) is provided.

  11. Body size and lower limb posture during walking in humans

    PubMed Central

    Hora, Martin; Soumar, Libor; Pontzer, Herman; Sládek, Vladimír

    2017-01-01

    We test whether locomotor posture is associated with body mass and lower limb length in humans and explore how body size and posture affect net joint moments during walking. We acquired gait data for 24 females and 25 males using a three-dimensional motion capture system and pressure-measuring insoles. We employed the general linear model and commonality analysis to assess the independent effect of body mass and lower limb length on flexion angles at the hip, knee, and ankle while controlling for sex and velocity. In addition, we used inverse dynamics to model the effect of size and posture on net joint moments. At early stance, body mass has a negative effect on knee flexion (p < 0.01), whereas lower limb length has a negative effect on hip flexion (p < 0.05). Body mass uniquely explains 15.8% of the variance in knee flexion, whereas lower limb length uniquely explains 5.4% of the variance in hip flexion. Both of the detected relationships between body size and posture are consistent with the moment moderating postural adjustments predicted by our model. At late stance, no significant relationship between body size and posture was detected. Humans of greater body size reduce the flexion of the hip and knee at early stance, which results in the moderation of net moments at these joints. PMID:28192522
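The commonality-analysis idea used in this record, isolating a predictor's unique contribution (e.g. the 15.8% of knee-flexion variance unique to body mass), can be sketched as the drop in R² when that predictor is removed from an ordinary least-squares model. This is a minimal illustration, not the authors' code, and the function names are invented:

```python
import numpy as np

def r_squared(X, y):
    # Ordinary least-squares R^2 with an intercept column prepended.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

def unique_variance(X, y, j):
    # Unique contribution of predictor j: drop in R^2 when j is removed.
    return r_squared(X, y) - r_squared(np.delete(X, j, axis=1), y)
```

A full commonality analysis also partitions the variance shared among predictors; the unique components above are the piece the abstract reports.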

  12. A highly efficient targeted recombination system for engineering linear chromosomes of industrial bacteria Streptomyces.

    PubMed

    Pan, Hung-Yin; Chen, Carton W; Huang, Chih-Hung

    2018-04-17

    Soil bacteria Streptomyces are the most important producers of secondary metabolites, including most known antibiotics. These bacteria and their close relatives are unique in possessing linear chromosomes, which typically harbor 20 to 30 biosynthetic gene clusters of tens to hundreds of kb in length. Many Streptomyces chromosomes are accompanied by linear plasmids with sizes ranging from several to several hundred kb. The large linear plasmids also often contain biosynthetic gene clusters. We have developed a targeted recombination procedure for arm exchanges between a linear plasmid and a linear chromosome. A chromosomal segment inserted in an artificially constructed plasmid allows homologous recombination between the two replicons at the homology. Depending on the design, the recombination may result in two recombinant replicons or a single recombinant chromosome with the loss of the recombinant plasmid that lacks a replication origin. The efficiency of such targeted recombination ranges from 9 to 83% depending on the locations of the homology (and thus the size of the chromosomal arm exchanged), essentially eliminating the necessity of selection. The targeted recombination is useful for the efficient engineering of the Streptomyces genome for large-scale deletion, addition, and shuffling.

  13. Guidance for the utility of linear models in meta-analysis of genetic association studies of binary phenotypes.

    PubMed

    Cook, James P; Mahajan, Anubha; Morris, Andrew P

    2017-02-01

    Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
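Scheme (i) from this record, effective-sample-size weighting of Z-scores, can be sketched as follows. The case-control effective-N formula and the square-root weighting are common meta-analysis conventions (as used, for example, in the METAL software) rather than details spelled out in the record, and the function names are invented:

```python
import math

def effective_n(n_cases, n_controls):
    # Effective sample size of a case-control study; equals the total N
    # only when cases and controls are balanced.
    return 4.0 / (1.0 / n_cases + 1.0 / n_controls)

def meta_z(z_scores, eff_ns):
    # Combine per-study Z-scores, weighting each by sqrt(effective N).
    weights = [math.sqrt(n) for n in eff_ns]
    num = sum(w * z for w, z in zip(weights, z_scores))
    den = math.sqrt(sum(w * w for w in weights))
    return num / den
```

Because the weights depend only on sample sizes, this scheme sidesteps the scale mismatch between linear-model effect sizes and log-odds ratios that motivates the record's recommendations.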

  14. Monte Carlo simulation of star/linear and star/star blends with chemically identical monomers

    NASA Astrophysics Data System (ADS)

    Theodorakis, P. E.; Avgeropoulos, A.; Freire, J. J.; Kosmas, M.; Vlahos, C.

    2007-11-01

The effects of chain size and architectural asymmetry on the miscibility of blends with chemically identical monomers, differing only in molecular weight and architecture, are studied via Monte Carlo simulation using the bond fluctuation model. Namely, we consider blends composed of linear/linear, star/linear and star/star chains. We found that linear/linear blends are more miscible than the corresponding star/star mixtures. In star/linear blends, increasing the volume fraction of the star chains increases the miscibility. For both star/linear and star/star blends, the miscibility decreases with increasing star functionality. When the molecular weight of the linear chains in star/linear mixtures is increased, the miscibility decreases. Our findings are compared with recent analytical and experimental results.

  15. Tailoring of physical properties in highly filled experimental nanohybrid resin composites.

    PubMed

    Pick, Bárbara; Pelka, Matthias; Belli, Renan; Braga, Roberto R; Lohbauer, Ulrich

    2011-07-01

To assess the elastic modulus (EM), volumetric shrinkage (VS), and polymerization shrinkage stress (PSS) of experimental highly filled nanohybrid composites as a function of matrix composition, filler distribution, and density. One regular-viscosity nanohybrid composite (Grandio, VOCO, Germany) and one flowable nanohybrid composite (Grandio Flow, VOCO) were tested as references along with six highly filled experimental nanohybrid composites (four Bis-GMA-based, one UDMA-based, and one Ormocer®-based). The experimental composites varied in filler size and density. EM values were obtained from the three-point bending load-displacement curve. VS was calculated with Archimedes' buoyancy principle. PSS was determined in 1-mm-thick specimens placed between two poly(methyl methacrylate) rods (Ø = 6 mm) attached to a universal testing machine. Data were analyzed using one-way ANOVA, Tukey's test (α = 0.05), and linear regression analyses. The flowable composite exhibited the highest VS and PSS but the lowest EM. The PSS was significantly lower with the Ormocer. The EM was significantly higher among the experimental composites with the highest filler levels. No significant differences were found between all other experimental composites regarding VS and PSS. Filler density and size did not influence EM, VS, or PSS. Neither the filler configuration nor the matrix composition in the investigated materials significantly influenced composite shrinkage and mechanical properties. The most highly filled experimental composite seemed to increase EM while keeping VS and PSS low; however, matrix composition seemed to be the determinant factor for shrinkage and stress development. The Ormocer, with reduced PSS, deserves further investigation. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  16. A new simplified method for measuring the permeability characteristics of highly porous media

    NASA Astrophysics Data System (ADS)

    Qin, Yinghong; Zhang, Mingyi; Mei, Guoxiong

    2018-07-01

Fluid flow through highly porous media is important in a variety of science and technology fields, including hydrology, chemical engineering, and convection in porous media. While many methods are available to measure the permeability of tight solid materials such as concrete and rock, techniques for measuring the permeability of highly porous media (such as gravel, aggregated soils, and crushed rock) are limited. This study proposes a new simplified method for measuring the permeability of highly porous media with a permeability of 10⁻⁸ to 10⁻⁴ m², using a Venturi tube to gauge the gas flow rate through the sample. Using crushed rocks and glass beads as the test media, we measure the permeability and inertial resistance factor of six types of single-size aggregate columns. We compare the testing results with published values for the permeability and inertial resistance factor of crushed rock and glass beads. We found that, in a log-log graph, the permeability and inertial resistance factor of a single-size aggregate heap increase linearly with the mean diameter of the aggregate. We speculate that the proposed simplified method is suitable for efficiently testing the permeability and inertial resistance factor of a variety of porous media with an intrinsic permeability of 10⁻⁸ to 10⁻⁴ m².
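Reporting both a permeability and an inertial resistance factor implies a Darcy-Forchheimer description of the flow, dP/L = (μ/k)·v + β·ρ·v². One common way to extract both parameters, sketched below, is to divide by v so the data fall on a straight line in v and fit it with a first-degree polynomial. The fluid constants are assumed values for air, and this is an illustration of the standard fit, not the paper's procedure:

```python
import numpy as np

MU_AIR = 1.8e-5   # Pa*s, dynamic viscosity of air (assumed)
RHO_AIR = 1.2     # kg/m^3, air density (assumed)

def fit_forchheimer(velocities, pressure_gradients):
    # Forchheimer law: dP/L = (mu/k) * v + beta * rho * v^2.
    # Dividing by v gives a line in v: (dP/L)/v = mu/k + (beta*rho) * v,
    # so a degree-1 polyfit recovers both coefficients.
    v = np.asarray(velocities, dtype=float)
    g = np.asarray(pressure_gradients, dtype=float)
    slope, intercept = np.polyfit(v, g / v, 1)
    k = MU_AIR / intercept    # intrinsic permeability, m^2
    beta = slope / RHO_AIR    # inertial resistance factor, 1/m
    return k, beta
```

At low velocities the intercept (Darcy) term dominates; the inertial term only becomes measurable once v² losses are significant, which is why both high- and low-rate measurements are needed.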

  17. Mediterranean dryland Mosaic: The effect of scale on core area metrics

    NASA Astrophysics Data System (ADS)

    Alhamad, Mohammad Noor; Alrababah, Mohammad

    2014-05-01

Quantifying landscape spatial pattern is essential to understanding the relationship between landscape structure and ecological functions and processes. Many landscape metrics have been developed to quantify spatial heterogeneity, and they have been employed to measure the impact of humans on landscapes. We examined the response of four core area metrics to a large range of grain sizes in Mediterranean dryland landscapes. The investigated metrics were (1) mean core area (CORE-MN), (2) area-weighted mean core area (CORE-AM), (3) total core area (TCA), and (4) core area percentage of landscape (CPLAND) within six land use types (urban, agriculture, olive orchards, forestry, shrubland and rangeland). Agricultural areas showed the highest minimum TCA (2779.4 ha) within the tested grain sizes, followed by rangeland (1778.3 ha) and forest (1488.5 ha); shrubland showed the lowest TCA (8.0 ha). The minimum CPLAND values ranged from 0.002 for shrubland to 0.682 for agricultural land use. The maximum CORE-MN among the tested land use types at all grain sizes was exhibited by the agricultural land use type (519.759 ha). The core area metrics showed three types of behavior in response to changing grain size across the land use types. CORE-MN showed a predictable relationship, best explained by non-linear responses to changing grain size (R² = 0.99). Both TCA and CPLAND exhibited a domain-of-scale effect in response to changing grain size, with threshold behavior at the 4 x 4 grain size (about 1.3 ha). CORE-AM, however, exhibited erratic behavior. The unique domain-of-scale-like behavior may be attributed to the characteristics of dryland Mediterranean landscapes, where both natural processes and ancient human activities play a great role in shaping the apparent pattern of the landscape.

  18. On the assessment of the added value of new predictive biomarkers.

    PubMed

    Chen, Weijie; Samuelson, Frank W; Gallas, Brandon D; Kang, Le; Sahiner, Berkman; Petrick, Nicholas

    2013-07-29

    The surge in biomarker development calls for research on statistical evaluation methodology to rigorously assess emerging biomarkers and classification models. Recently, several authors reported the puzzling observation that, in assessing the added value of new biomarkers to existing ones in a logistic regression model, statistical significance of new predictor variables does not necessarily translate into a statistically significant increase in the area under the ROC curve (AUC). Vickers et al. concluded that this inconsistency is because AUC "has vastly inferior statistical properties," i.e., it is extremely conservative. This statement is based on simulations that misuse the DeLong et al. method. Our purpose is to provide a fair comparison of the likelihood ratio (LR) test and the Wald test versus diagnostic accuracy (AUC) tests. We present a test to compare ideal AUCs of nested linear discriminant functions via an F test. We compare it with the LR test and the Wald test for the logistic regression model. The null hypotheses of these three tests are equivalent; however, the F test is an exact test whereas the LR test and the Wald test are asymptotic tests. Our simulation shows that the F test has the nominal type I error even with a small sample size. Our results also indicate that the LR test and the Wald test have inflated type I errors when the sample size is small, while the type I error converges to the nominal value asymptotically with increasing sample size as expected. We further show that the DeLong et al. method tests a different hypothesis and has the nominal type I error when it is used within its designed scope. Finally, we summarize the pros and cons of all four methods we consider in this paper. We show that there is nothing inherently less powerful or disagreeable about ROC analysis for showing the usefulness of new biomarkers or characterizing the performance of classification models. 
Each statistical method for assessing biomarkers and classification models has its own strengths and weaknesses. Investigators need to choose methods based on the assessment purpose, the biomarker development phase at which the assessment is being performed, the available patient data, and the validity of assumptions behind the methodologies.
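The AUC that these tests compare is, empirically, the Mann-Whitney statistic: the probability that a randomly chosen diseased case scores higher than a randomly chosen non-diseased case, with ties counted as one half. A minimal O(n²) sketch of that estimator (an illustration only, not the DeLong et al. variance machinery the record discusses):

```python
def auc_mann_whitney(pos_scores, neg_scores):
    # Empirical AUC: fraction of (positive, negative) pairs where the
    # positive outscores the negative; ties contribute 0.5.
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

The DeLong et al. method builds its variance estimate on exactly these pairwise comparisons, which is why, as the record notes, it must be used within its designed scope when testing nested models.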

  19. Mechanobiological induction of long-range contractility by diffusing biomolecules and size scaling in cell assemblies

    NASA Astrophysics Data System (ADS)

    Dasbiswas, K.; Alster, E.; Safran, S. A.

    2016-06-01

    Mechanobiological studies of cell assemblies have generally focused on cells that are, in principle, identical. Here we predict theoretically the effect on cells in culture of locally introduced biochemical signals that diffuse and locally induce cytoskeletal contractility which is initially small. In steady-state, both the concentration profile of the signaling molecule as well as the contractility profile of the cell assembly are inhomogeneous, with a characteristic length that can be of the order of the system size. The long-range nature of this state originates in the elastic interactions of contractile cells (similar to long-range “macroscopic modes” in non-living elastic inclusions) and the non-linear diffusion of the signaling molecules, here termed mechanogens. We suggest model experiments on cell assemblies on substrates that can test the theory as a prelude to its applicability in embryo development where spatial gradients of morphogens initiate cellular development.

  20. Effect of laser irradiation on surface hardness and structural parameters of 7178 aluminium alloy

    NASA Astrophysics Data System (ADS)

    Maryam, Siddra; Bashir, Farooq

    2018-04-01

Aluminium 7178 samples were prepared and irradiated with an Nd:YAG laser. The surfaces of the exposed samples were investigated using optical microscopy, which revealed that the surface morphology of the samples changes drastically as a function of laser shots. The micrographs show that the laser heat-affected area increases with the number of laser pulses. Furthermore, morphological and mechanical properties were studied using XRD and Vickers hardness testing. The XRD study shows an increasing trend in grain size with increasing number of laser shots. The hardness of the samples first increases and then gradually decreases as a function of the laser shots; the grain size was observed to have no pronounced effect on the hardness. The hardness profile shows a decreasing trend with increasing linear distance from the boundary of the laser heat-affected area.

  1. Cavitation erosion - scale effect and model investigations

    NASA Astrophysics Data System (ADS)

    Geiger, F.; Rutschmann, P.

    2015-12-01

The experimental work presented here contributes to the clarification of the erosive effects of hydrodynamic cavitation. Comprehensive cavitation erosion test series were conducted for transient cloud cavitation in the shear layer of prismatic bodies. The erosion patterns and erosion rates were determined both with a mineral-based volume loss technique and with a metal-based pit count system. The results clarified the underlying scale effects and revealed a strong non-linear material dependency, which indicated significantly different damage processes for the two material types. Furthermore, the size and dynamics of the cavitation clouds were assessed by optical detection. The fluctuations of the cloud sizes showed a maximum for the cavitation numbers associated with maximum erosive aggressiveness. This finding suggests the suitability of a model approach that relates the erosion process to cavitation cloud dynamics. An enhanced experimental setup is planned to further clarify these issues.

  2. Slit scan radiographic system for intermediate size rocket motors

    NASA Astrophysics Data System (ADS)

    Bernardi, Richard T.; Waters, David D.

    1992-12-01

    The development of slit-scan radiography capability for the NASA Advanced Computed Tomography Inspection System (ACTIS) computed tomography (CT) scanner at MSFC is discussed. This allows for tangential case interface (bondline) inspection at 2 MeV of intermediate-size rocket motors like the Hawk. Motorized mounting fixture hardware was designed, fabricated, installed, and tested on ACTIS. The ACTIS linear array of x-ray detectors was aligned parallel to the tangent line of a horizontal Hawk motor case. A 5 mm thick x-ray fan beam was used. Slit-scan images were produced with continuous rotation of a horizontal Hawk motor. Image features along Hawk motor case interfaces were indicated. A motorized exit cone fixture for ACTIS slit-scan inspection was also provided. The results of this SBIR have shown that slit scanning is an alternative imaging technique for case interface inspection. More data is required to qualify the technique for bondline inspection.

  3. Theory of chromatic noise masking applied to testing linearity of S-cone detection mechanisms.

    PubMed

    Giulianini, Franco; Eskew, Rhea T

    2007-09-01

    A method for testing the linearity of cone combination of chromatic detection mechanisms is applied to S-cone detection. This approach uses the concept of mechanism noise, the noise as seen by a postreceptoral neural mechanism, to represent the effects of superposing chromatic noise components in elevating thresholds and leads to a parameter-free prediction for a linear mechanism. The method also provides a test for the presence of multiple linear detectors and off-axis looking. No evidence for multiple linear mechanisms was found when using either S-cone increment or decrement tests. The results for both S-cone test polarities demonstrate that these mechanisms combine their cone inputs nonlinearly.

  4. [Features of the electronic eikonometer for the study of binocular function].

    PubMed

    Bourdy, C

    2013-05-01

    After presenting the components of this electronic eikonometer (device schematic and organizational chart) for the analysis and measurement of the perceptive effects of binocular disparity, we review its specific features (tests with incorporated magnifications viewed in polarized light) and its advantages over existing eikonometers (absence of any intermediary optical system). We list the tests available in the test library and their parametric characteristics: the Ogle spatial test for aniseikonia, a fixation disparity test (binocular nonius), and linear and random stereoscopic tests. We describe a methodology adapted to each type of test and the procedures to be performed by operators and observers. We then present some results of examinations performed with this eikonometer on a sample of observers wearing glasses, contact lenses, or implants. We propose an analysis of these various perceptive effects based on experimental and theoretical studies: the association between depth, disparity, and fusion, and a brief review of theoretical studies, by automatic matrix calculus, of retinal image size for various types of eyes: emmetropic and isometropic eyes built from various dioptric elements of Gullstrand's eye, axial anisometropia, anisometropia of conformation, and aphakia derived from these various eyes. We demonstrate the role of these studies in the analysis of subjective measurements of aniseikonia and in the choice of the best correction: variations in the amplitude and sign of the monocular components of fixation disparity as a function of viewing distance, and the complexity of depth perception according to the test used. Considering the evolution of the technology used in this prototype, we propose that this eikonometer be updated, in particular by using high-resolution flat screens, which would allow the test library to be improved and enriched (definition, contrast, and size of the observed images).

  5. Sex, eye size, and the rate of myopic eye growth due to form deprivation in outbred white leghorn chickens.

    PubMed

    Chen, Yen-Po; Prashar, Ankush; Hocking, Paul M; Erichsen, Jonathan T; To, Chi Ho; Schaeffel, Frank; Guggenheim, Jeremy A

    2010-02-01

    There is considerable variation in the degree of form-deprivation myopia (FDM) induced in chickens by a uniform treatment regimen. Sex and pretreatment eye size have been found to be predictive of the rate of FD-induced eye growth. Therefore, this study was undertaken to test whether the greater rate of myopic eye growth in males is a consequence of their larger eyes or of some other aspect of their sex. Monocular FDM was induced in 4-day-old White Leghorn chicks for 4 days. Changes in ocular component dimensions and refractive error were assessed by A-scan ultrasonography and retinoscopy, respectively. Sex identification of chicks was performed by DNA test. Relationships between traits were assessed by multiple regression. FD produced (mean ± SD) 13.47 ± 3.12 D of myopia and 0.47 ± 0.14 mm of vitreous chamber elongation. The level of induced myopia was not significantly different between the sexes, but the males had larger eyes initially and showed greater myopic eye growth than did the females. In multiple linear regression analysis, the partial correlation between sex and the degree of induced eye growth remained significant (P = 0.008) after adjustment for eye size, whereas the partial correlation between initial eye size and the degree of induced eye growth was no longer significant after adjustment for sex (P = 0.11). After adjustment for other factors, the chicks' sex accounted for 6.4% of the variation in FD-induced vitreous chamber elongation. The sex of the chick influences the rate of experimentally induced myopic eye growth, independent of its effects on eye size.
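
    The disentangling step described here, partial correlations from a multiple linear regression, can be sketched with numpy on synthetic data. The data below are invented to mimic the design (growth driven by sex, eye size correlated with sex); none of the numbers come from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: sex (0/1), initial eye size correlated with sex,
# and induced eye growth driven by sex rather than by size.
n = 80
sex = rng.integers(0, 2, n).astype(float)
eye_size = 8.0 + 0.3 * sex + rng.normal(0.0, 0.2, n)
growth = 0.4 + 0.08 * sex + rng.normal(0.0, 0.05, n)

def partial_corr(y, x, controls):
    """Correlation of y and x after regressing out the control variables."""
    Z = np.column_stack([np.ones_like(y)] + controls)
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    return float(np.corrcoef(ry, rx)[0, 1])

# Sex keeps predictive value once eye size is controlled for...
p_sex = partial_corr(growth, sex, [eye_size])
# ...while eye size loses it once sex is controlled for.
p_size = partial_corr(growth, eye_size, [sex])
```

Under this generative model the partial correlation with sex stays strong and the one with eye size collapses toward zero, mirroring the pattern the abstract reports.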

  6. Dry etching of chrome for photomasks for 100-nm technology using chemically amplified resist

    NASA Astrophysics Data System (ADS)

    Mueller, Mark; Komarov, Serguie; Baik, Ki-Ho

    2002-07-01

    Photomask etching for the 100 nm technology node places new requirements on dry etching processes. As the minimum-size features on the mask, such as assist bars and optical proximity correction (OPC) patterns, shrink down to 100 nm, it is necessary to produce etch CD biases below 20 nm in order to reproduce minimum resist features in chrome with good pattern fidelity. In addition, vertical profiles are necessary. In previous generations of photomask technology, footing and sidewall profile slope were tolerated, since the dry etch profile was an improvement over wet etching. However, as feature sizes shrink, it is extremely important to select etch processes that do not generate a foot, because a foot degrades etch linearity and also limits the smallest etched feature size. Chemically amplified resist (CAR) from TOK is patterned with a 50 keV MEBES eXara e-beam writer, allowing for patterning of small features with vertical resist profiles. This resist was developed for raster-scan 50 kV e-beam systems. It has high contrast, good coating characteristics, good dry etch selectivity, and high environmental stability. Chrome etch process development was performed using Design of Experiments to optimize parameters such as sidewall profile, etch CD bias, etch CD linearity for varying sizes of line/space patterns, etch CD linearity for varying sizes of isolated lines and spaces, loading effects, and application to contact etching.

  7. On the repeated measures designs and sample sizes for randomized controlled trials.

    PubMed

    Tango, Toshiro

    2016-04-01

    For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis of covariance type analysis using a pre-defined pair of "pre-post" data, in which pre-(baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials.
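
    The trade-off this record describes, fewer subjects in exchange for more repeated measures, can be sketched with a standard normal-approximation sample-size formula under compound symmetry. This is an illustrative calculation with assumed effect size, variance, and correlation, not the paper's actual mixed-model design formulas:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, rho, k_post, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two treatment means when the
    outcome is the average of k_post equally correlated (compound symmetry,
    correlation rho) post-randomization measures, with an ANCOVA-style
    baseline adjustment shrinking the variance by (1 - rho**2).
    Illustrative normal-approximation formula only."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    var_eff = sigma**2 * (1 + (k_post - 1) * rho) / k_post * (1 - rho**2)
    return ceil(2 * (z_a + z_b)**2 * var_eff / delta**2)

# More post-randomization measures per subject -> smaller n per group.
n_prepost = n_per_group(delta=0.5, sigma=1.0, rho=0.5, k_post=1)   # simple pre-post
n_repeated = n_per_group(delta=0.5, sigma=1.0, rho=0.5, k_post=4)  # 4 repeated measures
```

With these assumed inputs the repeated-measures design needs noticeably fewer subjects per group than the simple pre-post design, which is the qualitative point of the abstract.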

  8. MEMS earthworm: a thermally actuated peristaltic linear micromotor

    NASA Astrophysics Data System (ADS)

    Arthur, Craig; Ellerington, Neil; Hubbard, Ted; Kujath, Marek

    2011-03-01

    This paper examines the design, fabrication and testing of a bio-mimetic MEMS (micro-electro mechanical systems) earthworm motor with external actuators. The motor consists of a passive mobile shuttle with two flexible diamond-shaped segments; each segment is independently squeezed by a pair of stationary chevron-shaped thermal actuators. Applying a specific sequence of squeezes to the earthworm segments, the shuttle can be driven backward or forward. Unlike existing inchworm drives that use clamping and thrusting actuators, the earthworm actuators apply only clamping forces to the shuttle, and lateral thrust is produced by the shuttle's compliant geometry. The earthworm assembly is fabricated using the PolyMUMPs process with planar dimensions of 400 µm width by 800 µm length. The stationary actuators operate within the range of 4-9 V and provide a maximum shuttle range of motion of 350 µm (approximately half its size), a maximum shuttle speed of 17 mm s⁻¹ at 10 kHz, and a maximum dc shuttle force of 80 µN. The shuttle speed was found to vary linearly with both input voltage and input frequency. The shuttle force was found to vary linearly with the actuator voltage.

  9. Experimental and Numerical Simulation Analysis of Typical Carbon Woven Fabric/Epoxy Laminates Subjected to Lightning Strike

    NASA Astrophysics Data System (ADS)

    Yin, J. J.; Chang, F.; Li, S. L.; Yao, X. L.; Sun, J. R.; Xiao, Y.

    2017-12-01

    To clarify the evolution of damage in typical carbon woven fabric/epoxy laminates exposed to lightning strike, artificial lightning tests were conducted on carbon woven fabric/epoxy laminates, and damage was assessed by visual inspection and ply-by-ply peeling. Relationships between damage size and action integral were also elucidated. Results showed that the damage of a carbon woven fabric/epoxy laminate is distributed roughly circularly, with the center of the circle located approximately at the lightning attachment point; the projected damage areas of different layers are not dislocated relative to one another, so the visible damage territory represents the maximum damage scope. Visible damage falls into five modes: resin ablation, fiber fracture and sublimation, delamination, ablation scallops, and block-shaped ply-lift. Delamination caused by resin pyrolysis and delamination caused by internal pressure are clearly distinguishable. The projected area of total damage is linear in the action integral for specimens of the same type; the projected area of resin ablation damage is linear in the action integral regardless of specimen type; and, for all specimens, damage depth is linear in the logarithm of the action integral. A coupled thermal-electrical model was constructed and, through experimental verification, shown to be capable of simulating the ablation damage of carbon woven fabric/epoxy laminates exposed to simulated lightning current.

  10. Modeling grain size variations of aeolian gypsum deposits at White Sands, New Mexico, using AVIRIS imagery

    USGS Publications Warehouse

    Ghrefat, H.A.; Goodell, P.C.; Hubbard, B.E.; Langford, R.P.; Aldouri, R.E.

    2007-01-01

    Visible and Near-Infrared (VNIR) through Short Wavelength Infrared (SWIR) (0.4-2.5 µm) AVIRIS data, along with laboratory spectral measurements and analyses of field samples, were used to characterize grain size variations in aeolian gypsum deposits across barchan-transverse, parabolic, and barchan dunes at White Sands, New Mexico, USA. All field samples had a mineralogy of ~100% gypsum. In order to document grain size variations at White Sands, surficial gypsum samples were collected along three transects parallel to the prevailing downwind direction. Grain size analyses were carried out on the samples by sieving them into seven size fractions ranging from 45 to 621 µm, which were subjected to spectral measurements. Absorption band depths of the size fractions were determined after applying an automated continuum-removal procedure to each spectrum. Then, the relationship between absorption band depth and gypsum size fraction was established using a linear regression. Three software processing steps were carried out to measure the grain size variations of gypsum in the Dune Area using AVIRIS data. AVIRIS mapping results, field work and laboratory analysis all show that the interdune areas have lower absorption band depth values and consist of finer grained gypsum deposits. In contrast, the dune crest areas have higher absorption band depth values and consist of coarser grained gypsum deposits. Based on laboratory estimates, a representative barchan-transverse dune (Transect 1) has a mean grain size of 1.16 φ (449 µm); the error bars range from -50 to +50 µm. Mean grain size for a representative parabolic dune (Transect 2) is 1.51 φ (352 µm), and 1.52 φ (347 µm) for a representative barchan dune (Transect 3). T-test results confirm that there are differences in the grain size distributions between barchan and parabolic dunes and between interdune and dune crest areas. The t-test results also show that there are no significant differences between modeled and laboratory-measured grain size values. Hyperspectral grain size modeling can help to determine the dynamic processes shaping the formation of the dunes, such as wind directions and the relative strengths of winds through time. This has implications for studying such processes on other planetary landforms that have mineralogy with unique absorption bands in VNIR-SWIR hyperspectral data.
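
    The band-depth workflow in this record (continuum removal over an absorption feature, then a linear calibration of band depth against sieved size fractions) can be sketched in a few lines of numpy. The spectrum and the depth-vs-size calibration points below are invented for illustration; only the procedure follows the abstract:

```python
import numpy as np

# Hypothetical reflectance spectrum around a gypsum absorption band;
# wavelengths in micrometers. Values are illustrative, not AVIRIS data.
wl = np.linspace(1.6, 2.0, 81)
band_center, band_width = 1.75, 0.05
reflectance = 0.9 - 0.3 * np.exp(-0.5 * ((wl - band_center) / band_width) ** 2)

# Continuum removal: divide by the straight line joining the band shoulders.
continuum = np.interp(wl, [wl[0], wl[-1]], [reflectance[0], reflectance[-1]])
removed = reflectance / continuum
band_depth = 1.0 - removed.min()

# Calibrate band depth against grain size with a linear regression
# (sieved size fractions in micrometers; depths are made-up examples).
sizes = np.array([45, 90, 180, 250, 350, 450, 621], dtype=float)
depths = np.array([0.10, 0.14, 0.19, 0.22, 0.26, 0.29, 0.33])
slope, intercept = np.polyfit(depths, sizes, 1)
predicted_size = slope * band_depth + intercept
```

Applied pixel-by-pixel to a hyperspectral cube, the same regression turns a band-depth map into a grain-size map, which is the mapping step the abstract describes.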

  11. Recent Improvements to the Acoustical Testing Laboratory at the NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Podboy, Devin M.; Mirecki, Julius H.; Walker, Bruce E.; Sutliff, Daniel L.

    2014-01-01

    The Acoustical Testing Laboratory (ATL) consists of a 27- by 23- by 20-ft (height) convertible hemi/anechoic chamber and separate sound-attenuating test support enclosure. Absorptive fiberglass wedges in the test chamber provide an anechoic environment down to 100 Hz. A spring-isolated floor system affords vibration isolation above 3 Hz. These specifications, along with very low design background levels, enable the acquisition of accurate and repeatable acoustical measurements on test articles that produce very low sound pressures. Removable floor wedges allow the test chamber to operate in either a hemi-anechoic or anechoic configuration, depending on the size of the test article and the specific test being conducted. The test support enclosure functions as a control room during normal operations. Recently, improvements were made in support of continued use of the ATL by NASA programs, including an analysis of the facility's ultrasonic characteristics. A 3-D traverse system inside the chamber was used to acquire acoustic data for these tests. The traverse system drives a linear array of 13 1/4-in. microphones spaced 3 in. apart (36-in. span). An updated data acquisition system was also incorporated into the facility.

  12. Recent Improvements to the Acoustical Testing Laboratory at the NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Podboy, Devin M.; Mirecki, Julius H.; Walker, Bruce E.; Sutliff, Daniel L.

    2014-01-01

    The Acoustical Testing Laboratory (ATL) consists of a 27 by 23 by 20 ft (height) convertible hemi/anechoic chamber and separate sound-attenuating test support enclosure. Absorptive fiberglass wedges in the test chamber provide an anechoic environment down to 100 Hz. A spring-isolated floor system affords vibration isolation above 3 Hz. These specifications, along with very low design background levels, enable the acquisition of accurate and repeatable acoustical measurements on test articles that produce very low sound pressures. Removable floor wedges allow the test chamber to operate in either a hemi-anechoic or anechoic configuration, depending on the size of the test article and the specific test being conducted. The test support enclosure functions as a control room during normal operations. Recently, improvements were made in support of continued use of the ATL by NASA programs, including an analysis of the facility's ultrasonic characteristics. A 3-dimensional traverse system inside the chamber was used to acquire acoustic data for these tests. The traverse system drives a linear array of 13 1/4-in. microphones spaced 3 in. apart (36-in. span). An updated data acquisition system was also incorporated into the facility.

  13. Extreme value statistics analysis of fracture strengths of a sintered silicon nitride failing from pores

    NASA Technical Reports Server (NTRS)

    Chao, Luen-Yuan; Shetty, Dinesh K.

    1992-01-01

    Statistical analysis and correlation between pore-size distribution and fracture strength distribution using the theory of extreme-value statistics is presented for a sintered silicon nitride. The pore-size distribution on a polished surface of this material was characterized using an automatic optical image analyzer. The distribution measured on the two-dimensional plane surface was transformed to a population (volume) distribution using the Schwartz-Saltykov diameter method. The population pore-size distribution and the distribution of the pore size at the fracture origin were correlated by extreme-value statistics. Fracture strength distribution was then predicted from the extreme-value pore-size distribution, using a linear elastic fracture mechanics model of an annular crack around a pore and the fracture toughness of the ceramic. The predicted strength distribution was in good agreement with strength measurements in bending. In particular, the extreme-value statistics analysis explained the nonlinear trend in the linearized Weibull plot of measured strengths without postulating a lower-bound strength.
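
    The prediction chain in this record, extreme-value sampling of pore sizes followed by a linear elastic fracture mechanics strength estimate, can be sketched as below. The pore-size distribution, toughness value, and geometry factor are assumed for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

K_IC = 5.0   # fracture toughness, MPa*m**0.5 (typical for sintered Si3N4)
Y = 1.13     # crack geometry factor for an annular crack around a pore (assumed)

# Each specimen contains many pores but fails from its largest one,
# so the critical flaw size follows an extreme-value distribution.
n_specimens, pores_per_specimen = 500, 1000
pore_radii = rng.lognormal(mean=np.log(5e-6), sigma=0.4,
                           size=(n_specimens, pores_per_specimen))  # meters
critical = pore_radii.max(axis=1)

# LEFM: sigma_f = K_IC / (Y * sqrt(pi * a)); result is in MPa.
strengths = K_IC / (Y * np.sqrt(np.pi * critical))
```

Plotting `strengths` on Weibull axes shows the curvature the abstract mentions: because strength is controlled by the extreme of the pore population rather than by a simple power-law flaw distribution, the linearized Weibull plot need not be straight.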

  14. On remote sensing of small aerosol particles with polarized light

    NASA Astrophysics Data System (ADS)

    Sun, W.

    2012-12-01

    The CALIPSO satellite mission consistently measures a volume (molecular plus particulate) light depolarization ratio of ~2% for smoke, compared to ~1% for marine aerosols and ~15% for dust. The observed ~2% smoke depolarization ratio comes primarily from the nonspherical habits of the smoke particles at certain particle sizes. The depolarization of linearly polarized light by small sphere aggregates and irregular Gaussian-shaped particles is studied to reveal the relationship between the depolarization of linearly polarized light and aerosol shape and size. It is found that randomly oriented nonspherical particles share common depolarization properties as functions of scattering angle and size parameter. This may be very useful information for active remote sensing of small nonspherical aerosols using polarized light. We also show that the depolarization ratio from the CALIPSO measurements could be used to derive smoke aerosol particle size. The mean particle size of South African smoke is estimated to be about half of the 532 nm wavelength of the CALIPSO lidar.

  15. Hypothesis test for synchronization: twin surrogates revisited.

    PubMed

    Romano, M Carmen; Thiel, Marco; Kurths, Jürgen; Mergenthaler, Konstantin; Engbert, Ralf

    2009-03-01

    The method of twin surrogates has been introduced to test for phase synchronization of complex systems in the case of passive experiments. In this paper we derive new analytical expressions for the number of twins depending on the size of the neighborhood, as well as on the length of the trajectory. This allows us to determine the optimal parameters for the generation of twin surrogates. Furthermore, we determine the quality of the twin surrogates with respect to several linear and nonlinear statistics depending on the parameters of the method. In the second part of the paper we perform a hypothesis test for phase synchronization in the case of experimental data from fixational eye movements. These miniature eye movements have been shown to play a central role in neural information processing underlying the perception of static visual scenes. The high number of data sets (21 subjects and 30 trials per person) allows us to compare the generated twin surrogates with the "natural" surrogates that correspond to the different trials. We show that the generated twin surrogates reproduce very well all linear and nonlinear characteristics of the underlying experimental system. The synchronization analysis of fixational eye movements by means of twin surrogates reveals that the synchronization between the left and right eye is significant, indicating that either the centers in the brain stem generating fixational eye movements are closely linked, or, alternatively that there is only one center controlling both eyes.

  16. A powerful and flexible approach to the analysis of RNA sequence count data

    PubMed Central

    Zhou, Yi-Hui; Xia, Kai; Wright, Fred A.

    2011-01-01

    Motivation: A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean–variance relationships provides a flexible testing regimen that ‘borrows’ information across genes, while easily incorporating design effects and additional covariates. Results: We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data and (ii) an extension of an expression mean–variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternate methods to handle RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. Availability: An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21810900
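
    A minimal sketch of the beta-binomial distribution underlying the first of the two approaches, parameterized by a mean and an overdispersion parameter; this is a from-scratch illustration of the distribution, not BBSeq's actual implementation:

```python
from math import lgamma, exp

def betaln(a, b):
    """Log of the beta function, via log-gamma."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, mu, phi):
    """Beta-binomial pmf with mean parameter mu and precision phi,
    using alpha = mu*phi and beta = (1-mu)*phi. A count k out of n trials
    has success probability drawn from Beta(alpha, beta), which inflates
    the variance relative to a plain binomial (overdispersion)."""
    a, b = mu * phi, (1.0 - mu) * phi
    log_choose = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_choose + betaln(k + a, n - k + b) - betaln(a, b))
```

With this parameterization the mean is n*mu, and larger phi moves the distribution toward the plain binomial; a GLM along the abstract's lines would model mu through a link function of the design covariates and let the overdispersion vary with the mean.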

  17. Correlation of Normal Gravity Mixed Convection Blowoff Limits with Microgravity Forced Flow Blowoff Limits

    NASA Technical Reports Server (NTRS)

    Marcum, Jeremy W.; Olson, Sandra L.; Ferkul, Paul V.

    2016-01-01

    The axisymmetric rod geometry in upward axial stagnation flow provides a simple way to measure normal gravity blowoff limits to compare with microgravity Burning and Suppression of Solids - II (BASS-II) results recently obtained aboard the International Space Station. This testing utilized the same BASS-II concurrent rod geometry, but with the addition of normal gravity buoyant flow. Cast polymethylmethacrylate (PMMA) rods of diameters ranging from 0.635 cm to 3.81 cm were burned at oxygen concentrations ranging from 14 to 18% by volume. The forced flow velocity where blowoff occurred was determined for each rod size and oxygen concentration. These blowoff limits compare favorably with the BASS-II results when the buoyant stretch is included and the flow is corrected by considering the blockage factor of the fuel. From these results, the normal gravity blowoff boundary for this axisymmetric rod geometry is determined to be linear, with oxygen concentration directly proportional to flow speed. We describe a new normal gravity 'upward flame spread test' method which extrapolates the linear blowoff boundary to the zero stretch limit in order to resolve microgravity flammability limits, something current methods cannot do. This new test method can improve spacecraft fire safety for future exploration missions by providing a tractable way to obtain good estimates of material flammability in low gravity.

  18. Using eye tracking to test for individual differences in attention to attractive faces

    PubMed Central

    Valuch, Christian; Pflüger, Lena S.; Wallner, Bernard; Laeng, Bruno; Ansorge, Ulrich

    2015-01-01

    We assessed individual differences in visual attention toward faces in relation to their attractiveness via saccadic reaction times. Motivated by the aim to understand individual differences in attention to faces, we tested three hypotheses: (a) Attractive faces hold or capture attention more effectively than less attractive faces; (b) men show a stronger bias toward attractive opposite-sex faces than women; and (c) blue-eyed men show a stronger bias toward blue-eyed than brown-eyed feminine faces. The latter test was included because prior research suggested a high effect size. Our data supported hypotheses (a) and (b) but not (c). By conducting separate tests for disengagement of attention and attention capture, we found that individual differences exist at distinct stages of attentional processing but these differences are of varying robustness and importance. In our conclusion, we also advocate the use of linear mixed effects models as the most appropriate statistical approach for studying inter-individual differences in visual attention with naturalistic stimuli. PMID:25698993

  19. Finite Element Simulation for Analysing the Design and Testing of an Energy Absorption System

    PubMed Central

    Segade, Abraham; López-Campos, José A.; Fernández, José R.; Casarejos, Enrique; Vilán, José A.

    2016-01-01

    It is not uncommon to use profiles to act as energy absorption parts in vehicle safety systems. This work analyses an impact attenuator based on a simple design and discusses the use of a thermoplastic material. We present the design of the impact attenuator and a mechanical test for the prototype. We develop a simulation model using the finite element method and explicit dynamics, and we evaluate the most appropriate mesh size and integration for describing the test results. Finally, we consider the performance of different materials, metallic ones (steel AISI 4310, Aluminium 5083-O) and a thermoplastic foam (IMPAXX500™). This reflects the car industry’s interest in using new materials to make high-performance, low-mass energy absorbers. We show the strength of the models when it comes to providing reliable results for large deformations and strong non-linearities, and how they are highly correlated with respect to the test results both in value and behaviour. PMID:28773778

  20. Using eye tracking to test for individual differences in attention to attractive faces.

    PubMed

    Valuch, Christian; Pflüger, Lena S; Wallner, Bernard; Laeng, Bruno; Ansorge, Ulrich

    2015-01-01

    We assessed individual differences in visual attention toward faces in relation to their attractiveness via saccadic reaction times. Motivated by the aim to understand individual differences in attention to faces, we tested three hypotheses: (a) Attractive faces hold or capture attention more effectively than less attractive faces; (b) men show a stronger bias toward attractive opposite-sex faces than women; and (c) blue-eyed men show a stronger bias toward blue-eyed than brown-eyed feminine faces. The latter test was included because prior research suggested a high effect size. Our data supported hypotheses (a) and (b) but not (c). By conducting separate tests for disengagement of attention and attention capture, we found that individual differences exist at distinct stages of attentional processing but these differences are of varying robustness and importance. In our conclusion, we also advocate the use of linear mixed effects models as the most appropriate statistical approach for studying inter-individual differences in visual attention with naturalistic stimuli.

  1. SSE-based Thomas algorithm for quasi-block-tridiagonal linear equation systems, optimized for small dense blocks

    NASA Astrophysics Data System (ADS)

    Barnaś, Dawid; Bieniasz, Lesław K.

    2017-07-01

    We have recently developed a vectorized Thomas solver for quasi-block tridiagonal linear algebraic equation systems using Streaming SIMD Extensions (SSE) and Advanced Vector Extensions (AVX) in operations on dense blocks [D. Barnaś and L. K. Bieniasz, Int. J. Comput. Meth., accepted]. The acceleration caused by vectorization was observed for large block sizes, but was less satisfactory for small blocks. In this communication we report on another version of the solver, optimized for small blocks of size up to four rows and/or columns.
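
    For readers unfamiliar with the base algorithm: the scalar Thomas algorithm solves a tridiagonal system in O(n) with one forward elimination and one back substitution. The block variant in the paper replaces each scalar below with a small dense block (and each division with a block solve), which is where the SSE/AVX vectorization applies; the sketch below covers only the scalar case:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system. a = sub-diagonal (length n-1),
    b = diagonal (length n), c = super-diagonal (length n-1), d = rhs.
    Assumes no pivoting is needed (e.g. diagonally dominant systems)."""
    n = len(b)
    cp = np.empty(n - 1)
    dp = np.empty(n)
    # Forward elimination.
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    # Back substitution.
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In the quasi-block-tridiagonal setting each entry of `a`, `b`, `c` becomes a small dense matrix and each `/ denom` becomes a linear solve against that block, which is why performance hinges on how efficiently the small dense blocks are handled.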

  2. Infrared laser spectroscopy of the linear C13 carbon cluster

    NASA Technical Reports Server (NTRS)

    Giesen, T. F.; Van Orden, A.; Hwang, H. J.; Fellers, R. S.; Provencal, R. A.; Saykally, R. J.

    1994-01-01

    The infrared absorption spectrum of a linear, 13-atom carbon cluster (C13) has been observed by using a supersonic cluster beam-diode laser spectrometer. Seventy-six rovibrational transitions were measured near 1809 wave numbers and assigned to an antisymmetric stretching fundamental in the ¹Σg⁺ ground state of C13. This definitive structural characterization of a carbon cluster in the intermediate size range between C10 and C20 is in apparent conflict with theoretical calculations, which predict that clusters of this size should exist as planar monocyclic rings.

  3. Angular Momentum Transfer and Fractional Moment of Inertia in Pulsar Glitches

    NASA Astrophysics Data System (ADS)

    Eya, I. O.; Urama, J. O.; Chukwude, A. E.

    2017-05-01

    We use the Jodrell Bank Observatory glitch database containing 472 glitches from 165 pulsars to investigate the angular momentum transfer during rotational glitches in pulsars. Our emphasis is on pulsars with at least five glitches, of which there are 26 that exhibit 261 glitches in total. This paper identifies four pulsars in which the angular momentum transfer, after many glitches, is almost linear with time. The Lilliefors test on the cumulative distribution of glitch spin-up sizes in these glitching pulsars shows that glitch sizes in 12 pulsars are normally distributed, suggesting that their glitches originate from the same momentum reservoir. In addition, the distribution of the fractional moment of inertia (i.e., the ratio of the moments of inertia of the neutron star components that are involved in the glitch process) has a single mode, unlike the distribution of fractional glitch size (Δν/ν), which is usually bimodal. The mean fractional moment of inertia in the glitching pulsars we sampled has a very weak correlation with the pulsar spin properties, thereby supporting a neutron star interior mechanism for the glitch phenomenon.
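
    The normality test named in this record (the Lilliefors variant of Kolmogorov-Smirnov, with the normal parameters estimated from the data) reduces to computing the statistic below; assessing significance would additionally require Lilliefors' simulated critical values. A generic sketch, not tied to the glitch data:

```python
import numpy as np
from math import erf, sqrt

def lilliefors_stat(x):
    """KS distance between the empirical CDF of x and the CDF of a normal
    distribution whose mean and standard deviation are estimated from x."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=1)
    # Standard normal CDF via the error function.
    cdf = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])
    ecdf_hi = np.arange(1, n + 1) / n   # ECDF just after each point
    ecdf_lo = np.arange(0, n) / n       # ECDF just before each point
    return float(max(np.max(ecdf_hi - cdf), np.max(cdf - ecdf_lo)))
```

Because the mean and standard deviation are fitted to the same sample, the usual Kolmogorov-Smirnov critical values are too lenient, which is exactly the problem Lilliefors' tables solve.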

  4. Angular Momentum Transfer and Fractional Moment of Inertia in Pulsar Glitches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eya, I. O.; Urama, J. O.; Chukwude, A. E., E-mail: innocent.eya@unn.edu.ng, E-mail: innocent.eya@gmail.com

    We use the Jodrell Bank Observatory glitch database containing 472 glitches from 165 pulsars to investigate the angular momentum transfer during rotational glitches in pulsars. Our emphasis is on pulsars with at least five glitches, of which there are 26 that exhibit 261 glitches in total. This paper identifies four pulsars in which the angular momentum transfer, after many glitches, is almost linear with time. The Lilliefors test on the cumulative distribution of glitch spin-up sizes in these glitching pulsars shows that glitch sizes in 12 pulsars are normally distributed, suggesting that their glitches originate from the same momentum reservoir. In addition, the distribution of the fractional moment of inertia (i.e., the ratio of the moments of inertia of the neutron star components that are involved in the glitch process) has a single mode, unlike the distribution of fractional glitch size (Δν/ν), which is usually bimodal. The mean fractional moment of inertia in the glitching pulsars we sampled has a very weak correlation with the pulsar spin properties, thereby supporting a neutron star interior mechanism for the glitch phenomenon.

  5. Size dependent nanomechanics of coil spring shaped polymer nanowires

    NASA Astrophysics Data System (ADS)

    Ushiba, Shota; Masui, Kyoko; Taguchi, Natsuo; Hamano, Tomoki; Kawata, Satoshi; Shoji, Satoru

    2015-11-01

    Direct laser writing (DLW) via two-photon polymerization (TPP) has been established as a powerful technique for the fabrication and integration of nanoscale components, as it enables the production of three-dimensional (3D) micro/nano objects. This technique has indeed led to numerous applications, including micro- and nanoelectromechanical systems (MEMS/NEMS), metamaterials, mechanical metamaterials, and photonic crystals. However, as feature sizes decrease, an urgent demand has emerged to uncover the mechanics of nanosized polymer materials. Here, we fabricate coil-spring-shaped polymer nanowires using DLW via two-photon polymerization. We find that even the nanocoil springs exhibit a linear response to applied forces, obeying Hooke's law, as revealed by compression tests using an atomic force microscope. Further, the elasticity of the polymer material is found to become significantly greater as the wire radius is decreased from 550 to 350 nm. Polarized Raman spectroscopy measurements show that polymer chains are aligned in nanowires along the axis, which may be responsible for the size dependence. Our findings provide insight into the nanomechanics of polymer materials fabricated by DLW, which leads to further applications based on nanosized polymer materials.

  6. Size dependent nanomechanics of coil spring shaped polymer nanowires.

    PubMed

    Ushiba, Shota; Masui, Kyoko; Taguchi, Natsuo; Hamano, Tomoki; Kawata, Satoshi; Shoji, Satoru

    2015-11-27

    Direct laser writing (DLW) via two-photon polymerization (TPP) has been established as a powerful technique for the fabrication and integration of nanoscale components, as it enables the production of three-dimensional (3D) micro/nano objects. This technique has indeed led to numerous applications, including micro- and nanoelectromechanical systems (MEMS/NEMS), metamaterials, mechanical metamaterials, and photonic crystals. However, as feature sizes decrease, an urgent demand has emerged to uncover the mechanics of nanosized polymer materials. Here, we fabricate coil-spring-shaped polymer nanowires using DLW via two-photon polymerization. We find that even the nanocoil springs exhibit a linear response to applied forces, obeying Hooke's law, as revealed by compression tests using an atomic force microscope. Further, the elasticity of the polymer material is found to become significantly greater as the wire radius is decreased from 550 to 350 nm. Polarized Raman spectroscopy measurements show that polymer chains are aligned in nanowires along the axis, which may be responsible for the size dependence. Our findings provide insight into the nanomechanics of polymer materials fabricated by DLW, which leads to further applications based on nanosized polymer materials.
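
    The Hooke's-law behavior reported above amounts to fitting a single spring constant k in F = kx to AFM force-displacement data. A minimal least-squares sketch; the data points and the 2 nN/nm constant are invented for illustration:

```python
import numpy as np

def spring_constant(displacement_nm, force_nN):
    """Least-squares estimate of k in Hooke's law F = k*x (fit through origin)."""
    x = np.asarray(displacement_nm, dtype=float)
    f = np.asarray(force_nN, dtype=float)
    return float(np.dot(x, f) / np.dot(x, x))

# Hypothetical AFM compression data: a 2 nN/nm nanocoil plus small measurement noise
x = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
f = 2.0 * x + np.array([0.0, 0.1, -0.1, 0.05, -0.05])
k = spring_constant(x, f)
```

    Comparing k across wire radii is then the size-dependence measurement the abstract describes.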

  7. Second-order motions contribute to vection.

    PubMed

    Gurnsey, R; Fleet, D; Potechin, C

    1998-09-01

    First- and second-order motions differ in their ability to induce motion aftereffects (MAEs) and the kinetic depth effect (KDE). To test whether second-order stimuli support computations relating to motion-in-depth we examined the vection illusion (illusory self motion induced by image flow) using a vection stimulus (V, expanding concentric rings) that depicted a linear path through a circular tunnel. The set of vection stimuli contained differing amounts of first- and second-order motion energy (ME). Subjects reported the duration of the perceived MAEs and the duration of their vection percept. In Experiment 1 both MAEs and vection durations were longest when the first-order (Fourier) components of V were present in the stimulus. In Experiment 2, V was multiplicatively combined with static noise carriers having different check sizes. The amount of first-order ME associated with V increases with check size. MAEs were found to increase with check size but vection durations were unaffected. In general MAEs depend on the amount of first-order ME present in the signal. Vection, on the other hand, appears to depend on a representation of image flow that combines first- and second-order ME.

  8. Small-scale rotor test rig capabilities for testing vibration alleviation algorithms

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.; Leyland, Jane Anne

    1987-01-01

    A test was conducted to assess the capabilities of a small-scale rotor test rig for implementing higher harmonic control and stability augmentation algorithms. The test rig uses three high-speed actuators to excite the swashplate over a range of frequencies. The actuator position signals were monitored to measure the response amplitudes at several frequencies, and the ratio of response amplitude to excitation amplitude was plotted as a function of frequency. In addition to actuator performance, acceleration from six accelerometers placed on the test rig was monitored to determine whether a linear relationship exists between the harmonics of N/rev control input and the vibratory response. The least-squares error (LSE) identification technique was used to identify local and global transfer matrices for two rotor speeds at two batch sizes each. It was determined that the multicyclic control computer system interfaced very well with the rotor system and kept track of the input accelerometer signals and their phase angles. However, the current high-speed actuators were found to be incapable of providing sufficient control authority at the higher excitation frequencies.
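
    The least-squares-error identification step above reduces to an ordinary linear least-squares solve for a transfer matrix relating control-input harmonics to measured responses. The matrix sizes and noise level below are illustrative, not values from the test rig:

```python
import numpy as np

# Hypothetical batch of input harmonics U (n_samples x n_inputs) and
# measured vibratory responses Z (n_samples x n_outputs); model: Z = U @ T.
rng = np.random.default_rng(0)
T_true = np.array([[0.5, -1.0],
                   [2.0,  0.3],
                   [-0.7, 1.2]])            # 3 control harmonics -> 2 accelerometers
U = rng.normal(size=(40, 3))                # batch of 40 excitation samples
Z = U @ T_true + 0.01 * rng.normal(size=(40, 2))  # responses with sensor noise

# Least-squares-error identification of the transfer matrix
T_est, *_ = np.linalg.lstsq(U, Z, rcond=None)
```

    Larger batch sizes average out sensor noise, which is one reason identification quality was compared at two batch sizes in the test.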

  9. The influence of Monte Carlo source parameters on detector design and dose perturbation in small field dosimetry

    NASA Astrophysics Data System (ADS)

    Charles, P. H.; Crowe, S. B.; Kairn, T.; Knight, R.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.

    2014-03-01

    To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small-field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept by testing whether these parameters affect dose perturbations in general, which is important for detector design and for calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 mm FWHM). The resultant dose perturbations were large. For example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each individual machine is not required.

  10. Power analysis to detect treatment effects in longitudinal clinical trials for Alzheimer's disease.

    PubMed

    Huang, Zhiyue; Muniz-Terrera, Graciela; Tom, Brian D M

    2017-09-01

    Assessing cognitive and functional changes at the early stage of Alzheimer's disease (AD) and detecting treatment effects in clinical trials for early AD are challenging. Under the assumption that transformed versions of the Mini-Mental State Examination, the Clinical Dementia Rating Scale-Sum of Boxes, and the Alzheimer's Disease Assessment Scale-Cognitive Subscale tests'/components' scores are from a multivariate linear mixed-effects model, we calculated the sample sizes required to detect treatment effects on the annual rates of change in these three components in clinical trials for participants with mild cognitive impairment. Our results suggest that a large number of participants would be required to detect a clinically meaningful treatment effect in a population with preclinical or prodromal Alzheimer's disease. We found that the transformed Mini-Mental State Examination is more sensitive for detecting treatment effects in early AD than the transformed Clinical Dementia Rating Scale-Sum of Boxes and Alzheimer's Disease Assessment Scale-Cognitive Subscale. The use of optimal weights to construct powerful test statistics or sensitive composite scores/endpoints can reduce the required sample sizes needed for clinical trials. Consideration of the multivariate/joint distribution of components' scores rather than the distribution of a single composite score when designing clinical trials can lead to an increase in power and reduced sample sizes for detecting treatment effects in clinical trials for early AD.
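
    The kind of sample-size calculation discussed above can be illustrated with the standard two-arm normal-approximation formula for detecting a difference in annual rates of change. The effect sizes in the example are hypothetical, not the paper's estimates:

```python
import math
from scipy.stats import norm

def n_per_arm(effect, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-sample comparison:
    detect a difference `effect` in annual rate of change, residual SD `sd`."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance level
    z_b = norm.ppf(power)           # desired power
    return math.ceil(2 * ((z_a + z_b) * sd / effect) ** 2)
```

    Halving the detectable effect roughly quadruples the required sample size, which is why more sensitive (e.g., optimally weighted composite) endpoints can substantially shrink trials.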

  11. Non-Targeted Effects and the Dose Response for Heavy Ion Tumorigenesis

    NASA Technical Reports Server (NTRS)

    Chappelli, Lori J.; Cucinotta, Francis A.

    2010-01-01

    BACKGROUND: There is no human epidemiology data available to estimate the heavy ion cancer risks experienced by astronauts in space. Studies of tumor induction in mice are a necessary step to estimate risks to astronauts. Previous experimental data can be better utilized to model the dose response for heavy ion tumorigenesis and to plan future low dose studies. DOSE RESPONSE MODELS: The Harderian gland data of Alpen et al. [1-3] was re-analyzed [4] using non-linear least squares regression. The data set measured the induction of Harderian gland tumors in mice by high-energy protons, helium, neon, iron, niobium and lanthanum with LETs ranging from 0.4 to 950 keV/micron. We were able to strengthen the individual ion models by combining data for all ions into a model that relates both radiation dose and LET for the ion to tumor prevalence. We compared models based on Targeted Effects (TE) to one motivated by Non-targeted Effects (NTE) that included a bystander term that increased tumor induction at low doses non-linearly. When comparing fitted models to the experimental data, we considered the adjusted R2, the Akaike Information Criterion (AIC), and the Bayesian Information Criterion (BIC) to test for goodness of fit. In the adjusted R2 test, the model with the highest R2 value provides the better fit to the available data. In the AIC and BIC tests, the model with the smaller summary value provides the better fit. The non-linear NTE models fit the combined data better than the TE models, which are linear at low doses. We evaluated the differences in the relative biological effectiveness (RBE) and found the NTE model provides a higher RBE at low dose compared to the TE model. POWER ANALYSIS: The final NTE model estimates were used to simulate example data to consider the design of new experiments to detect NTE at low dose for validation.
    Power and sample sizes were calculated for a variety of radiation qualities, including some not considered in the Harderian gland data set, and with different background tumor incidences. We considered different experimental designs with varying numbers of doses and varying low doses dependent on the LET of the radiation. The optimal design to detect a NTE for an individual ion had 4 doses equally spaced below a maximal dose where bending due to cell sterilization was < 2%. For example, at 100 keV/micron we would irradiate at 0.03 Gy, 0.065 Gy, 0.13 Gy, and 0.26 Gy and require 850 mice, including a control dose, for the sensitivity to detect NTE with 80% power. Sample sizes could be improved by combining ions, similar to the methods used with the Harderian gland data.
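
    The TE-versus-NTE comparison via AIC can be sketched as follows. The dose-prevalence numbers and the 0.05 Gy scale of the saturating bystander term are invented for illustration; only the comparison machinery (least-squares fits, AIC computed from the residual sum of squares) mirrors the analysis described above:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-response data with a bystander-like excess at low dose
dose = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
prev = np.array([0.02, 0.08, 0.09, 0.11, 0.15, 0.22, 0.38])

def te_model(d, a, b):                # targeted effects: linear at low dose
    return a + b * d

def nte_model(d, a, b, c):            # adds a saturating bystander term
    return a + b * d + c * (1.0 - np.exp(-d / 0.05))

def aic(y, y_fit, k):
    """AIC from the residual sum of squares of a least-squares fit."""
    n = y.size
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + 2 * k

p_te, _ = curve_fit(te_model, dose, prev)
p_nte, _ = curve_fit(nte_model, dose, prev, p0=[0.02, 0.2, 0.05])
aic_te = aic(prev, te_model(dose, *p_te), 2)
aic_nte = aic(prev, nte_model(dose, *p_nte), 3)
```

    The extra parameter is penalized by the 2k term, so the NTE form is preferred only when the low-dose excess is real enough to cut the residuals substantially.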

  12. A polymer, random walk model for the size-distribution of large DNA fragments after high linear energy transfer radiation

    NASA Technical Reports Server (NTRS)

    Ponomarev, A. L.; Brenner, D.; Hlatky, L. R.; Sachs, R. K.

    2000-01-01

    DNA double-strand breaks (DSBs) produced by densely ionizing radiation are not located randomly in the genome: recent data indicate DSB clustering along chromosomes. Stochastic DSB clustering at large scales, from > 100 Mbp down to < 0.01 Mbp, is modeled using computer simulations and analytic equations. A random-walk, coarse-grained polymer model for chromatin is combined with a simple track structure model in Monte Carlo software called DNAbreak and is applied to data on alpha-particle irradiation of V-79 cells. The chromatin model neglects molecular details but systematically incorporates an increase in average spatial separation between two DNA loci as the number of base-pairs between the loci increases. Fragment-size distributions obtained using DNAbreak match data on large fragments about as well as distributions previously obtained with a less mechanistic approach. Dose-response relations, linear at small doses of high linear energy transfer (LET) radiation, are obtained. They are found to be non-linear when the dose becomes so large that there is a significant probability of overlapping or close juxtaposition, along one chromosome, for different DSB clusters from different tracks. The non-linearity is more evident for large fragments than for small. The DNAbreak results furnish an example of the RLC (randomly located clusters) analytic formalism, which generalizes the broken-stick fragment-size distribution of the random-breakage model that is often applied to low-LET data.
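
    The random-breakage (broken-stick) baseline that the RLC formalism generalizes is easy to simulate: DSBs are placed uniformly and independently along a chromosome, and fragment sizes are the gaps between adjacent breaks. A minimal sketch (lengths in Mbp are illustrative; the DSB clustering that is the paper's actual subject is deliberately not modeled here):

```python
import numpy as np

def random_breakage_fragments(length_mbp, n_breaks, rng):
    """Fragment sizes from the classical random-breakage (broken-stick) model:
    n_breaks DSBs placed uniformly along a chromosome of the given length."""
    breaks = np.sort(rng.uniform(0.0, length_mbp, size=n_breaks))
    edges = np.concatenate(([0.0], breaks, [length_mbp]))
    return np.diff(edges)

rng = np.random.default_rng(0)
frags = random_breakage_fragments(100.0, 200, rng)  # 100 Mbp, 200 DSBs
```

    For many breaks this yields an approximately exponential fragment-size distribution; clustered DSBs distort it, which is what comparing measured fragment spectra against this baseline reveals.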

  13. The Effect of Primary School Size on Academic Achievement

    ERIC Educational Resources Information Center

    Gershenson, Seth; Langbein, Laura

    2015-01-01

    Evidence on optimal school size is mixed. We estimate the effect of transitory changes in school size on the academic achievement of fourth-and fifth-grade students in North Carolina using student-level longitudinal administrative data. Estimates of value-added models that condition on school-specific linear time trends and a variety of…

  14. Correlation of spleen metabolism assessed by 18F-FDG PET with serum interleukin-2 receptor levels and other biomarkers in patients with untreated sarcoidosis.

    PubMed

    Kalkanis, Alexandros; Kalkanis, Dimitrios; Drougas, Dimitrios; Vavougios, George D; Datseris, Ioannis; Judson, Marc A; Georgiou, Evangelos

    2016-03-01

    The objective of our study was to assess the possible relationship between splenic F-18-fluorodeoxyglucose (18F-FDG) uptake and other established biochemical markers of sarcoidosis activity. Thirty treatment-naive sarcoidosis patients were prospectively enrolled in this study. They underwent biochemical laboratory tests, including serum interleukin-2 receptor (sIL-2R), serum C-reactive protein, serum angiotensin-I converting enzyme, and 24-h urine calcium levels, and a whole-body combined 18F-FDG PET/computed tomography (PET/CT) scan as a part of an ongoing study at our institute. These biomarkers were statistically compared in these patients. A statistically significant linear dependence was detected between sIL-2R and log-transformed spleen-average standard uptake value (SUV avg) (R2=0.488, P<0.0001) and log-transformed spleen-maximum standard uptake value (SUV max) (R2=0.490, P<0.0001). sIL-2R levels and splenic size correlated linearly (Pearson's r=0.373, P=0.042). Multivariate linear regression analysis revealed that this correlation remained significant after age and sex adjustment (β=0.001, SE=0.001, P=0.024). No statistically significant associations were detected between (a) any two serum biomarkers or (b) between spleen-SUV measurements and any serum biomarker other than sIL-2R. Our analysis revealed an association between sIL-2R levels and spleen 18F-FDG uptake and size, whereas all other serum biomarkers were not significantly associated with each other or with PET 18F-FDG uptake. Our results suggest that splenic inflammation may be related to the systemic inflammatory response in sarcoidosis that may be associated with elevated sIL-2R levels.
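
    The core statistics above (Pearson correlation between sIL-2R and log-transformed SUV, and the R2 of a simple linear regression) can be sketched with synthetic data; the numbers below are invented and do not reproduce the study's values:

```python
import numpy as np

# Hypothetical paired measurements: log-transformed spleen SUVavg and sIL-2R level
rng = np.random.default_rng(7)
n = 30
log_suv = rng.normal(0.5, 0.2, size=n)
sil2r = 1000.0 + 2500.0 * log_suv + rng.normal(0.0, 150.0, size=n)

# Pearson correlation coefficient
r = np.corrcoef(sil2r, log_suv)[0, 1]

# R^2 of the simple linear regression of sIL-2R on log(SUV)
X = np.column_stack([np.ones(n), log_suv])
beta, *_ = np.linalg.lstsq(X, sil2r, rcond=None)
resid = sil2r - X @ beta
r2 = 1.0 - resid.var() / sil2r.var()
```

    With a single predictor and an intercept, R2 equals the squared Pearson correlation, which is why the abstract can report either interchangeably.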

  15. A novel large thrust-weight ratio V-shaped linear ultrasonic motor with a flexible joint.

    PubMed

    Li, Xiaoniu; Yao, Zhiyuan; Yang, Mojian

    2017-06-01

    A novel large thrust-weight ratio V-shaped linear ultrasonic motor with a flexible joint is proposed in this paper. The motor is comprised of a V-shaped transducer, a slider, a clamp, and a base. The V-shaped transducer consists of two piezoelectric beams connected through a flexible joint to form an appropriate coupling angle. The V-shaped motor is operated in the coupled longitudinal-bending mode. Longitudinal and bending movements are transferred by the flexible joint between the two beams. Compared with the coupled longitudinal-bending mode of the single piezoelectric beam or the symmetrical and asymmetrical modes of the previous V-shaped transducer, the coupled longitudinal-bending mode of the V-shaped transducer with a flexible joint provides higher vibration efficiency and more convenient mode conformance adjustment. A finite element model of the V-shaped transducer is created to numerically study the influence of geometrical parameters and to determine the final geometrical parameters. In this paper, three prototypes were then fabricated and experimentally investigated. The modal test results match well with the finite element analysis. The motor mechanical output characteristics of three different coupling angles θ indicate that V-90 (θ = 90°) is the optimal angle. The mechanical output experiments conducted using the V-90 prototype (Size: 59.4 mm × 30.7 mm × 4 mm) demonstrate that the maximum unloaded speed is 1.2 m/s under a voltage of 350 Vpp, and the maximum output force is 15 N under a voltage of 300 Vpp. The proposed novel V-shaped linear ultrasonic motor has a compact size and a simple structure with a large thrust-weight ratio (0.75 N/g) and high speed.

  16. Improved Linear-Ion-Trap Frequency Standard

    NASA Technical Reports Server (NTRS)

    Prestage, John D.

    1995-01-01

    Improved design concept for linear-ion-trap (LIT) frequency-standard apparatus proposed. Apparatus contains lengthened linear ion trap, and ions processed alternately in two regions: ions prepared in upper region of trap, then transported to lower region for exposure to microwave radiation, then returned to upper region for optical interrogation. Improved design intended to increase long-term frequency stability of apparatus while reducing size, mass, and cost.

  17. Pupils' over-reliance on linearity: a scholastic effect?

    PubMed

    Van Dooren, Wim; De Bock, Dirk; Janssens, Dirk; Verschaffel, Lieven

    2007-06-01

    From upper elementary education on, children develop a tendency to over-use linearity. In particular, many pupils assume that if a figure is enlarged k times, the area is enlarged k times too. However, most research has been conducted with traditional, school-like word problems. This study examines whether pupils also over-use linearity if non-linear problems are embedded in meaningful, authentic performance tasks instead of traditional, school-like word problems, and whether this experience influences later behaviour. The participants were ninety-three sixth graders from two primary schools in Flanders, Belgium. Pupils received a pre-test with traditional word problems. Those who made a linear error on the non-linear area problem were subjected to individual interviews. They received one new non-linear problem, in the S-condition (again a traditional, scholastic word problem), D-condition (the same word problem with a drawing) or P-condition (a meaningful performance-based task). Shortly afterwards, pupils received a post-test, containing again a non-linear word problem. Most pupils from the S-condition displayed linear reasoning during the interview. Offering drawings (D-condition) had a positive effect, but presenting the problem as a performance task (P-condition) was more beneficial. Linear reasoning was nearly absent in the P-condition. Remarkably, at the post-test, most pupils from all three groups again applied linear strategies. Pupils' over-reliance on linearity seems partly elicited by the school-like word problem format of test items. Pupils perform much better if non-linear problems are offered as performance tasks. However, a single experience does not change performance on a comparable word problem test afterwards.

  18. Design, synthesis and biological evaluation of (S)-valine thiazole-derived cyclic and non-cyclic peptidomimetic oligomers as modulators of human P-glycoprotein (ABCB1)

    PubMed Central

    Singh, Satyakam; Prasad, Nagarajan Rajendra; Kapoor, Khyati; Chufan, Eduardo E.; Patel, Bhargav A.; Ambudkar, Suresh V.; Talele, Tanaji T.

    2014-01-01

    Multidrug resistance (MDR) caused by the ATP-binding cassette (ABC) transporter P-glycoprotein (P-gp), through extrusion of anticancer drugs from cells, is a major cause of failure in cancer chemotherapy. Previously, selenazole-containing cyclic peptides were reported as P-gp inhibitors, and these were also used for co-crystallization with mouse P-gp, which has 87% homology to human P-gp. It has been reported that human P-gp can simultaneously accommodate 2-3 moderately sized molecules at the drug-binding pocket. Our in silico analysis based on the homology model of human P-gp spurred our efforts to investigate the optimal size of (S)-valine-derived thiazole units that can be accommodated at the drug-binding pocket. Towards this goal, we synthesized varying lengths of linear and cyclic derivatives of (S)-valine-derived thiazole units to investigate the optimal size, lipophilicity and structural form (linear or cyclic) of valine-derived thiazole peptides that can be accommodated in the P-gp binding pocket and affect its activity, previously an unexplored concept. Among these oligomers, the lipophilic linear (13) and cyclic trimer (17) derivatives of QZ59S-SSS were found to be the most potent, and equally potent, inhibitors of human P-gp (IC50 = 1.5 μM). The cyclic trimer and linear trimer being equipotent, future studies can focus on non-cyclic counterparts of cyclic peptides maintaining the linear trimer length. A binding model of the linear trimer (13) within the drug-binding site on the homology model of human P-gp represents an opportunity for future optimization, specifically replacing valine and thiazole groups in the non-cyclic form. PMID:24288265

  19. Design, synthesis, and biological evaluation of (S)-valine thiazole-derived cyclic and noncyclic peptidomimetic oligomers as modulators of human P-glycoprotein (ABCB1).

    PubMed

    Singh, Satyakam; Prasad, Nagarajan Rajendra; Kapoor, Khyati; Chufan, Eduardo E; Patel, Bhargav A; Ambudkar, Suresh V; Talele, Tanaji T

    2014-01-03

    Multidrug resistance caused by ATP binding cassette transporter P-glycoprotein (P-gp) through extrusion of anticancer drugs from the cells is a major cause of failure in cancer chemotherapy. Previously, selenazole-containing cyclic peptides were reported as P-gp inhibitors and were also used for co-crystallization with mouse P-gp, which has 87% homology to human P-gp. It has been reported that human P-gp can simultaneously accommodate two to three moderately sized molecules at the drug binding pocket. Our in silico analysis, based on the homology model of human P-gp, spurred our efforts to investigate the optimal size of (S)-valine-derived thiazole units that can be accommodated at the drug binding pocket. Towards this goal, we synthesized varying lengths of linear and cyclic derivatives of (S)-valine-derived thiazole units to investigate the optimal size, lipophilicity, and structural form (linear or cyclic) of valine-derived thiazole peptides that can be accommodated in the P-gp binding pocket and affect its activity, previously an unexplored concept. Among these oligomers, lipophilic linear (13) and cyclic trimer (17) derivatives of QZ59S-SSS were found to be the most and equally potent inhibitors of human P-gp (IC50 = 1.5 μM). As the cyclic trimer and linear trimer compounds are equipotent, future studies should focus on noncyclic counterparts of cyclic peptides maintaining linear trimer length. A binding model of the linear trimer 13 within the drug binding site on the homology model of human P-gp represents an opportunity for future optimization, specifically replacing valine and thiazole groups in the noncyclic form. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. SU-F-207-16: CT Protocols Optimization Using Model Observer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tseng, H; Fan, J; Kupinski, M

    2015-06-15

    Purpose: To quantitatively evaluate the performance of different CT protocols using task-based measures of image quality. This work studies the task of estimating the size and contrast of rods with different iodine concentrations inserted in head- and body-sized phantoms using different imaging protocols. These protocols are designed to have the same dose level (CTDIvol) but different X-ray tube voltage settings (kVp). Methods: Iodine objects of different concentrations inserted in a head-size phantom and a body-size phantom are imaged on a 64-slice commercial CT scanner. Scanning protocols with various tube voltages (80, 100, and 120 kVp) and current settings are selected, all of which deliver the same absorbed dose level (CTDIvol). Because the phantom design (size of the iodine objects, the air gap between the inserted objects and the phantom) is not ideal for a model observer study, the acquired CT images are used to generate simulated images of iodine objects with four different sizes and five different contrasts. For each type of object, 500 images (100 x 100 pixels) are generated for the observer study. The observer selected in this study is the channelized scanning linear observer, which can be applied to estimate both size and contrast. The figure of merit used is the correct estimation ratio. The mean and the variance are estimated by the shuffle method. Results: The results indicate that the protocols with the 100 kVp tube voltage setting provide the best performance for iodine insert size and contrast estimation for both the head and body phantom cases. Conclusion: This work presents a practical and robust quantitative approach using the channelized scanning linear observer to study contrast and size estimation performance from different CT protocols. Different protocols at the same CTDIvol setting can result in different image quality performance. The relationship between the absorbed dose and the diagnostic image quality is not linear.

  1. Constrained Laboratory vs. Unconstrained Steering-Induced Rollover Crash Tests.

    PubMed

    Kerrigan, Jason R; Toczyski, Jacek; Roberts, Carolyn; Zhang, Qi; Clauser, Mark

    2015-01-01

    The goal of this study was to evaluate how well an in-laboratory rollover crash test methodology that constrains vehicle motion can reproduce the dynamics of unconstrained full-scale steering-induced rollover crash tests in sand. Data from previously published unconstrained steering-induced rollover crash tests using a full-size pickup and a mid-sized sedan were analyzed to determine vehicle-to-ground impact conditions and the kinematic response of the vehicles throughout the tests. Then, a pair of replicate vehicles were prepared to match the inertial properties of the steering-induced test vehicles and configured to record dynamic roof structure deformations and kinematic response. Both vehicles experienced greater increases in roll-axis angular velocities in the unconstrained tests than in the constrained tests; however, the increases that occurred during the trailing-side roof interaction were nearly identical between tests for both vehicles. Both vehicles experienced linear accelerations in the constrained tests that were similar to those in the unconstrained tests, but the pickup, in particular, had accelerations that matched very closely in magnitude, timing, and duration between the two test types. Deformations in the truck test were higher in the constrained test than in the unconstrained test, whereas deformations in the sedan were greater in the unconstrained test than in the constrained test, as a result of constraints of the test fixture and differences in impact velocity for the trailing side. The results of the current study suggest that in-laboratory rollover tests can be used to simulate the injury-causing portions of unconstrained rollover crashes. To date, such a demonstration has not been published in the open literature. This study did, however, show that the road surface can affect vehicle response in a way that may not be reproducible in the laboratory. Lastly, this study showed that configuring the in-laboratory tests to match the leading-side touchdown conditions could result in differences in the trailing-side impact conditions.

  2. Comparing Machine Learning Classifiers and Linear/Logistic Regression to Explore the Relationship between Hand Dimensions and Demographic Characteristics

    PubMed Central

    2016-01-01

    Understanding the relationship between physiological measurements from human subjects and their demographic data is important within both the biometric and forensic domains. In this paper we explore the relationship between measurements of the human hand and a range of demographic features. We assess the ability of linear regression and machine learning classifiers to predict demographics from hand features, thereby providing evidence on both the strength of relationship and the key features underpinning this relationship. Our results show that we are able to predict sex, height, weight and foot size accurately within various data-range bin sizes, with machine learning classification algorithms out-performing linear regression in most situations. In addition, we identify the features used to provide these relationships applicable across multiple applications. PMID:27806075

  3. Comparing Machine Learning Classifiers and Linear/Logistic Regression to Explore the Relationship between Hand Dimensions and Demographic Characteristics.

    PubMed

    Miguel-Hurtado, Oscar; Guest, Richard; Stevenage, Sarah V; Neil, Greg J; Black, Sue

    2016-01-01

    Understanding the relationship between physiological measurements from human subjects and their demographic data is important within both the biometric and forensic domains. In this paper we explore the relationship between measurements of the human hand and a range of demographic features. We assess the ability of linear regression and machine learning classifiers to predict demographics from hand features, thereby providing evidence on both the strength of relationship and the key features underpinning this relationship. Our results show that we are able to predict sex, height, weight and foot size accurately within various data-range bin sizes, with machine learning classification algorithms out-performing linear regression in most situations. In addition, we identify the features used to provide these relationships applicable across multiple applications.
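
    The comparison above, linear regression versus a machine-learning classifier for predicting a demographic label from a hand measurement, can be sketched with synthetic data. The class means, spread, and the single feature are invented, and a nearest-centroid rule stands in for the paper's classifiers:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: hand length (cm) and a binary sex label; classes differ in mean
n = 200
sex = rng.integers(0, 2, size=n)
hand = np.where(sex == 1, 19.5, 17.8) + rng.normal(0.0, 0.8, size=n)

# Linear regression of the label on the feature, thresholded at 0.5
X = np.column_stack([np.ones(n), hand])
beta, *_ = np.linalg.lstsq(X, sex.astype(float), rcond=None)
pred_lin = (X @ beta > 0.5).astype(int)

# Nearest-centroid classifier: a minimal machine-learning baseline
c0, c1 = hand[sex == 0].mean(), hand[sex == 1].mean()
pred_nc = (np.abs(hand - c1) < np.abs(hand - c0)).astype(int)

acc_lin = (pred_lin == sex).mean()
acc_nc = (pred_nc == sex).mean()
```

    With one feature and two balanced classes the two approaches are nearly equivalent; the gaps the paper reports come from multiple hand features and finer demographic bins.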

  4. Analysis of separation test for automatic brake adjuster based on linear radon transformation

    NASA Astrophysics Data System (ADS)

    Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi

    2015-01-01

    The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has strong noise and interference rejection because it fits the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to determine the separating clearance of an automatic brake adjuster. The experimental results show that the feature point extraction error of the gradient maximum optimal method is approximately ±0.100, while the feature point extraction error of the linear Radon transformation method can reach ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.
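
    As a simpler stand-in for the Radon-transform approach, the idea of locating an inflection point by fitting a curve in parts can be sketched with an exhaustive two-segment least-squares search; the piecewise-linear test curve below is synthetic:

```python
import numpy as np

def find_breakpoint(x, y):
    """Locate one break point of a noisy piecewise-linear curve by exhaustive
    two-segment least-squares fitting (a simplified stand-in for the paper's
    linear-Radon-transform line fitting)."""
    best_i, best_rss = None, np.inf
    for i in range(2, len(x) - 2):          # each segment needs >= 2 points
        rss = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            A = np.column_stack([np.ones(xs.size), xs])
            coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
            rss += np.sum((ys - A @ coef) ** 2)
        if rss < best_rss:
            best_rss, best_i = rss, i
    return best_i

x = np.linspace(0.0, 10.0, 101)
y = np.where(x < 4.0, 1.0 * x, 4.0 + 3.0 * (x - 4.0))   # slope change at x = 4
y = y + 0.01 * np.sin(7.0 * x)                          # mild deterministic "noise"
i_star = find_breakpoint(x, y)
```

    Fitting whole segments rather than differentiating point by point is what gives both this sketch and the Radon approach their robustness to noise.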

  5. The Role of Breast Size and Areolar Pigmentation in Perceptions of Women's Sexual Attractiveness, Reproductive Health, Sexual Maturity, Maternal Nurturing Abilities, and Age.

    PubMed

    Dixson, Barnaby J; Duncan, Melanie; Dixson, Alan F

    2015-08-01

    Women's breast morphology is thought to have evolved via sexual selection as a signal of maturity, health, and fecundity. While research demonstrates that breast morphology is important in men's judgments of women's attractiveness, it remains to be determined how perceptions might differ when considering a larger suite of mate relevant attributes. Here, we tested how variation in breast size and areolar pigmentation affected perceptions of women's sexual attractiveness, reproductive health, sexual maturity, maternal nurturing abilities, and age. Participants (100 men; 100 women) rated images of female torsos modeled to vary in breast size (very small, small, medium, and large) and areolar pigmentation (light, medium, and dark) for each of the five attributes listed above. Sexual attractiveness ratings increased linearly with breast size, but large breasts were not judged to be significantly more attractive than medium-sized breasts. Small and medium-sized breasts were rated as most attractive if they included light or medium colored areolae, whereas large breasts were more attractive if they had medium or dark areolae. Ratings for perceived age, sexual maturity, and nurturing ability also increased with breast size. Darkening the areolae reduced ratings of the reproductive health of medium and small breasts, whereas it increased ratings for large breasts. There were no significant sex differences in ratings of any of the perceptual measures. These results demonstrate that breast size and areolar pigmentation interact to determine ratings for a suite of sociosexual attributes, each of which may be relevant to mate choice in men and intra-sexual competition in women.

  6. Microstructure characterization of 316L deformed at high strain rates using EBSD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yvell, K., E-mail: kyv@du.se

    2016-12-15

    Specimens from split Hopkinson pressure bar experiments, at strain rates between ~1000 and 9000 s⁻¹ at room temperature and 500 °C, have been studied using electron backscatter diffraction. No significant differences in the microstructures were observed at different strain rates, but differences were observed for different strains and temperatures. The size distribution for subgrains with boundary misorientations > 2° can be described as a bimodal lognormal area distribution. The distributions were found to change with deformation: the part of the distribution describing the large subgrains decreased while the distribution for the small subgrains increased. This is in accordance with deformation being heterogeneous and successively spreading into the undeformed part of individual grains. The average size for the small subgrain distribution varies with strain but not with strain rate in the tested interval. The mean free distance for dislocation slip, interpreted here as the average size of the distribution of small subgrains, displays a variation with plastic strain which is in accordance with the different stages in the stress-strain curves. The rate of deformation hardening in the linear hardening range is accurately calculated using the variation of the small subgrain size with strain. - Highlights: •Only changes in strain, not strain rate, gave differences in the microstructure. •A bimodal lognormal size distribution was found to describe the size distribution. •Variation of the subgrain fraction sizes agrees with models for heterogeneous slip. •Variation of subgrain size with strain describes part of the stress strain curve.
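The bimodal lognormal area distribution used here can be written as a two-component lognormal mixture; a minimal sketch with hypothetical mode parameters, not values fitted to the paper's EBSD data:

```python
import math

def lognormal_pdf(x, mu, sigma):
    """Lognormal probability density with log-mean mu and log-sd sigma."""
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        x * sigma * math.sqrt(2 * math.pi))

def bimodal_lognormal_pdf(x, w, mu1, s1, mu2, s2):
    """Two-component lognormal mixture; w weighs the small-subgrain mode."""
    return w * lognormal_pdf(x, mu1, s1) + (1 - w) * lognormal_pdf(x, mu2, s2)

# Hypothetical modes: small subgrains around 0.5 um, large around 5 um
params = (0.6, math.log(0.5), 0.4, math.log(5.0), 0.5)

# The mixture integrates to ~1 (Riemann sum over a wide size range)
total = sum(bimodal_lognormal_pdf(0.01 * k, *params) * 0.01
            for k in range(1, 6001))
```

Shifting weight w from the large-subgrain mode to the small-subgrain mode mimics the change the abstract describes as deformation spreads through the grains.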

  7. Association between excess weight and beverage portion size consumed in Brazil

    PubMed Central

    Bezerra, Ilana Nogueira; de Alencar, Eudóxia Sousa

    2018-01-01

    OBJECTIVE To describe the beverage portion size consumed and to evaluate its association with excess weight in Brazil. METHODS We used data from the National Dietary Survey, which included individuals aged over 20 years with two days of food records (n = 24,527 individuals). The beverages were categorized into six groups: soft drink, 100% fruit juice, fruit drink, alcoholic beverage, milk, and coffee or tea. We estimated the average portion consumed for each group and we evaluated, using linear regression, the association between portion size per group and the variables of age, sex, income, and nutritional status. We tested the association between portion size and excess weight using Poisson regression, adjusted for age, sex, income, and total energy intake. RESULTS The most frequently consumed beverages in Brazil were coffee and tea, followed by 100% fruit juices, soft drinks, and milk. Alcoholic beverages presented the highest average portion size consumed, followed by soft drinks, 100% fruit juice, fruit drink, and milk. Portion size showed a positive association with excess weight only in the soft drink (PR = 1.19, 95%CI 1.10–1.27) and alcoholic beverage groups (PR = 1.20, 95%CI 1.11–1.29), regardless of age, sex, income, and total energy intake. CONCLUSIONS Alcoholic beverages and soft drinks presented the highest average portion sizes and a positive association with excess weight. Public health interventions should address the issue of portion sizes offered to consumers by discouraging the consumption of large portions, especially of sweetened beverages of low nutritional value. PMID:29489988
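For orientation, a crude (unadjusted) prevalence ratio can be computed directly from a 2×2 table; the paper's PRs instead come from Poisson regression with covariate adjustment, and the counts below are invented:

```python
def prevalence_ratio(exposed_cases, exposed_total,
                     unexposed_cases, unexposed_total):
    """Crude prevalence ratio: prevalence of excess weight among the
    exposed (e.g. large-portion consumers) divided by prevalence among
    the unexposed. No covariate adjustment, unlike Poisson regression."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Invented counts: 60/100 with excess weight among large-portion consumers,
# 50/100 among the rest
pr = prevalence_ratio(60, 100, 50, 100)
```

A PR of 1.19 for soft drinks, as reported, means a 19% higher prevalence of excess weight per unit of the portion-size exposure after adjustment.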

  8. Development and clinical evaluation of an ionization chamber array with 3.5 mm pixel pitch for quality assurance in advanced radiotherapy techniques.

    PubMed

    Togno, M; Wilkens, J J; Menichelli, D; Oechsner, M; Perez-Andujar, A; Morin, O

    2016-05-01

    To characterize a new air vented ionization chamber technology, suitable for building detector arrays with small pixel pitch and sensitivity independent of dose per pulse. The prototype under test is a linear array of air vented ionization chambers, consisting of 80 pixels with 3.5 mm pixel pitch and a sensitive volume of about 4 mm(3). The detector has been characterized with (60)Co radiation and MV x rays from different linear accelerators (with flattened and unflattened beam qualities). Sensitivity dependence on dose per pulse has been evaluated under MV x rays by changing both the source to detector distance and the beam quality. Bias voltage has been varied in order to evaluate the charge collection efficiency in the most critical conditions. Relative dose profiles have been measured for both flattened and unflattened distributions with different field sizes. The reference detectors were a commercial array of ionization chambers and an amorphous silicon flat panel in direct conversion configuration. Profiles of dose distribution have also been measured with intensity modulated radiation therapy (IMRT), stereotactic radiosurgery (SRS), and volumetric modulated arc therapy (VMAT) patient plans. Comparison has been done with a commercial diode array and with Gafchromic EBT3 films. Repeatability and stability under continuous gamma irradiation are within 0.3%, in spite of low active volume and sensitivity (∼200 pC/Gy). Deviation from linearity is in the range [0.3%, -0.9%] for doses of at least 20 cGy, while a worsening of linearity is observed below 10 cGy. Charge collection efficiency at 2.67 mGy/pulse is higher than 99%, leading to a ±0.9% sensitivity change in the range 0.09-2.67 mGy/pulse (covering all flattened and unflattened beam qualities). Tissue to phantom ratios show an agreement within 0.6% with the reference detector up to 34 cm depth. 
For field sizes in the range 2 × 2 to 15 × 15 cm(2), the output factors are in agreement with a thimble chamber within 2%, while with a 25 × 25 cm(2) field size, an underestimation of 4.0% was found. Agreement of field and penumbra width measurements with the flat panel is of the order of 1 mm down to a 1 × 1 cm(2) field size. Flatness and symmetry values measured with the 1D array and the reference detectors are comparable, and differences are always smaller than 1%. Angular dependence of the detector, when compared to measurements taken with a cylindrical chamber in the same phantom, is as large as 16%. This includes inhomogeneity and asymmetry of the design, which during plan verification are accounted for by the treatment planning system (TPS). The detector is capable of reproducing the dose distributions of IMRT and VMAT plans with a maximum deviation from the TPS of 3.0% in the target region. In the case of VMAT and SRS plans, an average (maximum) deviation of the order of 1% (4%) from films has been measured. The investigated technology appears to be useful both for Linac QA and patient plan verification, especially in treatments with steep dose gradients and nonuniform dose rates such as VMAT and SRS. Major limitations of the present prototype are the linearity at low dose, which can be addressed by optimizing the readout electronics, and the underestimation of output factors with large field sizes. The latter problem is presently not completely understood and will require further investigation.

  10. HiDi: an efficient reverse engineering schema for large-scale dynamic regulatory network reconstruction using adaptive differentiation.

    PubMed

    Deng, Yue; Zenil, Hector; Tegnér, Jesper; Kiani, Narsis A

    2017-12-15

    The use of ordinary differential equations (ODEs) is one of the most promising approaches to network inference. The success of ODE-based approaches has, however, been limited by the difficulty of estimating parameters and by their lack of scalability. Here, we introduce a novel method and pipeline to reverse engineer gene regulatory networks from time-series gene expression and perturbation data, based on an improved scheme for calculating derivatives and a pre-filtration step that reduces the number of possible links. The method introduces a linear differential equation model with adaptive numerical differentiation that is scalable to extremely large regulatory networks. We demonstrate the ability of this method to outperform current state-of-the-art methods applied to experimental and synthetic data, using test data from the DREAM4 and DREAM5 challenges. Our method displays greater accuracy and scalability. We benchmark the performance of the pipeline with respect to dataset size and levels of noise. We show that the computation time is linear over various network sizes. The Matlab code of the HiDi implementation is available at: www.complexitycalculator.com/HiDiScript.zip. hzenilc@gmail.com or narsis.kiani@ki.se. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
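A minimal sketch of this class of method, assuming a linear model dx_i/dt = Σ_j w_ij x_j, with derivatives estimated by central differences and one row of weights recovered by solving the least-squares normal equations (the two-gene time courses are synthetic, and this is not the HiDi code itself):

```python
import math

def gauss_solve(A, b):
    """Tiny Gauss-Jordan solver with partial pivoting for the normal
    equations of the least-squares fit."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [v - f * u for v, u in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def infer_row(X, dxi):
    """Least-squares fit of dx_i/dt = sum_j w_ij x_j for one target gene:
    solve (X X^T) w = X dxi, where X[j][t] is gene j at time t."""
    g, T = len(X), len(dxi)
    XXt = [[sum(X[a][t] * X[b][t] for t in range(T)) for b in range(g)]
           for a in range(g)]
    Xd = [sum(X[a][t] * dxi[t] for t in range(T)) for a in range(g)]
    return gauss_solve(XXt, Xd)

# Synthetic two-gene time courses x1 = e^-t, x2 = e^-2t, so the true
# regulatory weights for gene 1 are (-1, 0)
dt, T = 0.1, 21
x1 = [math.exp(-dt * t) for t in range(T)]
x2 = [math.exp(-2 * dt * t) for t in range(T)]
interior = range(1, T - 1)
X = [[x1[t] for t in interior], [x2[t] for t in interior]]
dx1 = [(x1[t + 1] - x1[t - 1]) / (2 * dt) for t in interior]  # central diff
w = infer_row(X, dx1)
```

The paper's contribution lies in doing the differentiation adaptively and pre-filtering candidate links so this per-gene fit stays tractable at very large network sizes.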

  11. The instantaneous radial growth rate of stellar discs

    NASA Astrophysics Data System (ADS)

    Pezzulli, G.; Fraternali, F.; Boissier, S.; Muñoz-Mateos, J. C.

    2015-08-01

    We present a new and simple method to measure the instantaneous mass and radial growth rates of the stellar discs of spiral galaxies, based on their star formation rate surface density (SFRD) profiles. Under the hypothesis that discs are exponential with time-varying scalelengths, we derive a universal theoretical profile for the SFRD, with a linear dependence on two parameters: the specific mass growth rate ν_M ≡ Ṁ_⋆/M_⋆ and the specific radial growth rate ν_R ≡ Ṙ_⋆/R_⋆ of the disc. We test our theory on a sample of 35 nearby spiral galaxies, for which we derive a measurement of νM and νR. 32/35 galaxies show the signature of ongoing inside-out growth (νR > 0). The typical derived e-folding time-scales for mass and radial growth in our sample are ~10 and ~30 Gyr, respectively, with some systematic uncertainties. More massive discs have a larger scatter in νM and νR, biased towards a slower growth, both in mass and size. We find a linear relation between the two growth rates, indicating that our galaxy discs grow in size at ~0.35 times the rate at which they grow in mass; this ratio is largely unaffected by systematics. Our results are in very good agreement with theoretical expectations if known scaling relations of disc galaxies are not evolving with time.
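The linear dependence on the two parameters follows from differentiating the exponential-disc surface density with respect to time; a sketch in the abstract's notation, neglecting stellar mass return:

```latex
\Sigma(R,t) = \frac{M_\star}{2\pi R_\star^2}\, e^{-R/R_\star}
\quad\Longrightarrow\quad
\dot\Sigma
= \Sigma \left[ \frac{\dot M_\star}{M_\star}
  - 2\,\frac{\dot R_\star}{R_\star}
  + \frac{R}{R_\star}\frac{\dot R_\star}{R_\star} \right]
= \Sigma \left[ \nu_M + \nu_R\left(\frac{R}{R_\star} - 2\right) \right]
```

Identifying Σ̇ with the SFRD then gives a profile that is linear in ν_M and ν_R, so both rates can be read off a galaxy's SFRD profile by a linear fit.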

  12. Selecting algorithms, sensors, and linear bases for optimum spectral recovery of skylight.

    PubMed

    López-Alvarez, Miguel A; Hernández-Andrés, Javier; Valero, Eva M; Romero, Javier

    2007-04-01

    In a previous work [Appl. Opt. 44, 5688 (2005)] we found the optimum sensors for a planned multispectral system for measuring skylight in the presence of noise by adapting a linear spectral recovery algorithm proposed by Maloney and Wandell [J. Opt. Soc. Am. A 3, 29 (1986)]. Here we continue along these lines by simulating the responses of three to five Gaussian sensors and recovering spectral information from noise-affected sensor data by trying out four different estimation algorithms, three different sizes for the training set of spectra, and various linear bases. We attempt to find the optimum combination of sensors, recovery method, linear basis, and matrix size to recover the best skylight spectral power distributions from colorimetric and spectral (in the visible range) points of view. We show how all these parameters play an important role in the practical design of a real multispectral system and how to obtain several relevant conclusions from simulating the behavior of sensors in the presence of noise.
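The Maloney-Wandell style of linear recovery can be sketched with two sensors and a two-vector basis: sensor responses r = (SB)w are inverted for the basis weights w, and Bw reconstructs the spectrum. Basis shapes, sensor positions and the test spectrum below are illustrative assumptions, not the paper's optimized choices:

```python
import math

wl = [400 + 50 * k for k in range(7)]  # wavelength samples, nm

# Hypothetical two-vector linear basis for skylight spectra
B = [[1.0, (w - 400) / 300.0] for w in wl]  # rows: [B0(wl), B1(wl)]

def gaussian(w, centre, width=60.0):
    return math.exp(-((w - centre) / width) ** 2)

# Two hypothetical Gaussian sensors
S = [[gaussian(w, 450.0) for w in wl], [gaussian(w, 620.0) for w in wl]]

def recover_weights(S, B, r):
    """Solve the 2x2 linear system (S B) w = r for the basis weights."""
    M = [[sum(S[i][k] * B[k][j] for k in range(len(B))) for j in range(2)]
         for i in range(2)]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return ((M[1][1] * r[0] - M[0][1] * r[1]) / det,
            (M[0][0] * r[1] - M[1][0] * r[0]) / det)

# A test spectrum lying exactly in the basis span: 2*B0 + 0.5*B1
spectrum = [2.0 * row[0] + 0.5 * row[1] for row in B]
responses = [sum(S[i][k] * spectrum[k] for k in range(7)) for i in range(2)]
w0, w1 = recover_weights(S, B, responses)
```

With noisy responses the inversion is no longer exact, which is why the sensor shapes, basis and training-set size all matter, as the abstract argues.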

  13. Mathematical modelling of the growth of human fetus anatomical structures.

    PubMed

    Dudek, Krzysztof; Kędzia, Wojciech; Kędzia, Emilia; Kędzia, Alicja; Derkowski, Wojciech

    2017-09-01

    The goal of this study was to present a procedure that would enable mathematical analysis of the increase in linear sizes of human anatomical structures, estimate mathematical model parameters and evaluate their adequacy. Section material consisted of 67 foetuses (rectus abdominis muscle) and 75 foetuses (biceps femoris muscle). The following methods were incorporated into the study: preparation and anthropologic methods, digital image acquisition, Image J computer system measurements and statistical analysis. We used an anthropologic method based on age determination with the use of crown-rump length (CRL, V-TUB) by Scammon and Calkins. The choice of mathematical function should be based on the real course of the curve presenting growth of an anatomical structure's linear size y in subsequent weeks t of pregnancy. Size changes can be described with a segmental-linear model or a one-function model with accuracy adequate for clinical purposes. The interdependence of size and age is described by many functions. However, the following functions are most often considered: linear, polynomial, spline, logarithmic, power, exponential, power-exponential, log-logistic I and II, Gompertz's I and II and von Bertalanffy's function. With the use of the procedures described above, mathematical model parameters were assessed for V-PL (the total length of body) and CRL body length increases, rectus abdominis total length h, its segments hI, hII, hIII, hIV, as well as biceps femoris length and width of the long head (LHL and LHW) and of the short head (SHL and SHW). The best adjustments to measurement results were observed in the exponential and Gompertz models.
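A minimal sketch of comparing a linear model against a Gompertz model on growth data, using a coarse grid search in place of proper non-linear least squares; the size-versus-week data are synthetic, generated from a known Gompertz curve rather than taken from the study:

```python
import math

def gompertz(t, a, b, c):
    """Gompertz growth curve y(t) = a * exp(-b * exp(-c * t))."""
    return a * math.exp(-b * math.exp(-c * t))

def rmse(model, data):
    return math.sqrt(sum((model(t) - y) ** 2 for t, y in data) / len(data))

def fit_linear(data):
    """Ordinary least-squares straight line y = m*t + q."""
    n = len(data)
    st = sum(t for t, _ in data)
    sy = sum(y for _, y in data)
    stt = sum(t * t for t, _ in data)
    sty = sum(t * y for t, y in data)
    m = (n * sty - st * sy) / (n * stt - st * st)
    return m, (sy - m * st) / n

def fit_gompertz(data, a_max):
    """Coarse grid search over (a, b, c) -- a stand-in for proper
    non-linear least squares."""
    best = None
    for a in [a_max * (0.9 + 0.02 * i) for i in range(11)]:
        for b in [0.5 * j for j in range(1, 13)]:
            for c in [0.02 * k for k in range(1, 16)]:
                e = rmse(lambda t: gompertz(t, a, b, c), data)
                if best is None or e < best[0]:
                    best = (e, a, b, c)
    return best

# Synthetic size-vs-gestational-week data from a known Gompertz curve
data = [(t, gompertz(t, 400.0, 4.0, 0.12)) for t in range(10, 41, 2)]
err_g = fit_gompertz(data, a_max=400.0)[0]
m, q = fit_linear(data)
err_l = rmse(lambda t: m * t + q, data)
```

Comparing residual errors across candidate models in this way is how the "best adjustment" of the exponential and Gompertz models would be established.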

  14. Non-linear scaling of oxygen consumption and heart rate in a very large cockroach species (Gromphadorhina portentosa): correlated changes with body size and temperature.

    PubMed

    Streicher, Jeffrey W; Cox, Christian L; Birchard, Geoffrey F

    2012-04-01

    Although well documented in vertebrates, correlated changes between metabolic rate and cardiovascular function of insects have rarely been described. Using the very large cockroach species Gromphadorhina portentosa, we examined oxygen consumption and heart rate across a range of body sizes and temperatures. Metabolic rate scaled positively and heart rate negatively with body size, but neither scaled linearly. The response of these two variables to temperature was similar. This correlated response to endogenous (body mass) and exogenous (temperature) variables is likely explained by a mutual dependence on similar metabolic substrate use and/or coupled regulatory pathways. The intraspecific scaling for oxygen consumption rate showed an apparent plateauing at body masses greater than about 3 g. An examination of cuticle mass across all instars revealed isometric scaling with no evidence of an ontogenetic shift towards proportionally larger cuticles. Published oxygen consumption rates of other Blattodea species were also examined and, as in our intraspecific examination of G. portentosa, the scaling relationship was found to be non-linear with a decreasing slope at larger body masses. The decreasing slope at very large body masses in both intraspecific and interspecific comparisons may have important implications for future investigations of the relationship between oxygen transport and maximum body size in insects.
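Non-linear (plateauing) scaling shows up as a local allometric exponent that falls with body mass when computed piecewise in log-log space; the masses and rates below are invented for illustration, not the study's measurements:

```python
import math

def local_exponent(m1, r1, m2, r2):
    """Local allometric exponent b in r = a * m**b between two masses."""
    return (math.log(r2) - math.log(r1)) / (math.log(m2) - math.log(m1))

# Invented oxygen-consumption rates that plateau above a few grams
masses_g = [0.5, 1.0, 2.0, 4.0, 8.0]
rates = [0.30, 0.50, 0.83, 1.05, 1.15]
slopes = [local_exponent(masses_g[i], rates[i], masses_g[i + 1], rates[i + 1])
          for i in range(4)]
```

A single power law would give a constant slope; a monotone decline in the piecewise exponents is the signature of the plateau the abstract reports above ~3 g.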

  15. Use of electrothermal atomic absorption spectrometry for size profiling of gold and silver nanoparticles.

    PubMed

    Panyabut, Teerawat; Sirirat, Natnicha; Siripinyanond, Atitaya

    2018-02-13

    Electrothermal atomic absorption spectrometry (ETAAS) was applied to investigate the atomization behaviors of gold nanoparticles (AuNPs) and silver nanoparticles (AgNPs) in order to relate them to particle size information. At atomization temperatures from 1400 °C to 2200 °C, the time-dependent atomic absorption peak profiles of AuNPs and AgNPs with sizes varying from 5 nm to 100 nm were examined. With increasing particle size, the maximum absorbance was observed at a later time. The time at maximum absorbance was found to increase linearly with increasing particle size, suggesting that ETAAS can be applied to provide size information for nanoparticles. With an atomization temperature of 1600 °C, mixtures of nanoparticles containing two particle sizes, i.e., 5 nm tannic acid-stabilized AuNPs with 60, 80, or 100 nm citrate-stabilized AuNPs, were investigated and bimodal peaks were observed. The particle-size-dependent atomization behaviors of nanoparticles show the potential application of ETAAS for providing size information on nanoparticles. The calibration plot between the time at maximum absorbance and the particle size was applied to estimate the particle size of in-house synthesized AuNPs and AgNPs, and the results obtained were in good agreement with those from flow field-flow fractionation (FlFFF) and transmission electron microscopy (TEM). Furthermore, a linear relationship between the activation energy and the particle size was observed. Copyright © 2017 Elsevier B.V. All rights reserved.
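The calibration step, a straight-line fit of time-at-maximum-absorbance against particle size that is then inverted for unknown samples, can be sketched as follows; the calibration numbers are illustrative, not the paper's measurements:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = m*x + q."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return m, (sy - m * sx) / n

# Hypothetical calibration: particle size (nm) vs time at max absorbance (s)
sizes_nm = [5, 20, 40, 60, 80, 100]
t_max_s = [1.02, 1.17, 1.41, 1.60, 1.82, 2.01]  # illustrative, not measured
m, q = fit_line(sizes_nm, t_max_s)

def size_from_tmax(t):
    """Invert the calibration line to estimate an unknown particle size."""
    return (t - q) / m
```

An unknown sample's peak time is simply mapped back through the line, which is the estimate then cross-checked against FlFFF and TEM in the paper.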

  16. Structural differences among alkali-soluble arabinoxylans from maize (Zea mays), rice (Oryza sativa), and wheat (Triticum aestivum) brans influence human fecal fermentation profiles.

    PubMed

    Rose, Devin J; Patterson, John A; Hamaker, Bruce R

    2010-01-13

    Human fecal fermentation profiles of maize, rice, and wheat bran and their dietary fiber fractions released by alkaline-hydrogen peroxide treatment (principally arabinoxylan) were obtained with the aim of identifying and characterizing fractions associated with high production of short chain fatty acids and a linear fermentation profile for possible application as a slowly fermentable dietary fiber. The alkali-soluble fraction from maize bran resulted in the highest short chain fatty acid production among all samples tested, and was linear over the 24 h fermentation period. Size-exclusion chromatography and (1)H NMR suggested that higher molecular weight and uniquely substituted arabinose side chains may contribute to these properties. Monosaccharide disappearance data suggest that maize and rice bran arabinoxylans are fermented by a debranching mechanism, while wheat bran arabinoxylans likely contain large unsubstituted xylose regions that are fermented preferentially, followed by poor fermentation of the remaining, highly branched oligosaccharides.

  17. A prototype fully polarimetric 160-GHz bistatic ISAR compact radar range

    NASA Astrophysics Data System (ADS)

    Beaudoin, C. J.; Horgan, T.; DeMartinis, G.; Coulombe, M. J.; Goyette, T.; Gatesman, A. J.; Nixon, William E.

    2017-05-01

    We present a prototype bistatic compact radar range operating at 160 GHz and capable of collecting fully polarimetric radar cross-section and electromagnetic scattering measurements in a true far-field facility. The bistatic ISAR system incorporates two 90-inch focal length, 27-inch-diameter diamond-turned mirrors fed by 160 GHz transmit and receive horns to establish the compact range. The prototype radar range with its modest-sized quiet zone serves as a precursor to a fully developed compact radar range incorporating a larger quiet zone capable of collecting X-band bistatic RCS data and 3D imagery using 1/16th scale objects. The millimeter-wave transmitter provides 20 GHz of swept bandwidth in a single linear (Horizontal/Vertical) polarization, while the millimeter-wave receiver, which is sensitive to linear Horizontal and Vertical polarization, possesses a 7 dB noise figure. We present the design of the compact radar range and report on test results collected to validate the system's performance.

  18. Rapid method for the determination of 14 isoflavones in food using UHPLC coupled to photo diode array detection.

    PubMed

    Shim, You-Shin; Yoon, Won-Jin; Hwang, Jin-Bong; Park, Hyun-Jin; Seo, Dongwon; Ha, Jaeho

    2015-11-15

    A rapid method for the determination of 14 types of isoflavones in food using ultra-high performance liquid chromatography (UHPLC) was validated in terms of precision, accuracy, sensitivity and linearity. The UHPLC separation was performed on a reverse-phase C18 column (particle size 2 μm, i.d. 2 mm, length 100 mm) using a photo diode array detector fixed at 260 nm. The limits of detection and quantification of the UHPLC analyses ranged from 0.03 to 0.33 mg kg(-1). The intra-day and inter-day precision of the individual isoflavones was less than 11.77% and calibration curves exhibited good linearity (r(2) = 0.99) within the tested ranges. These results suggest that the rapid method used in this study can be used to determine the 14 types of isoflavones in a variety of foods such as soy bean, black bean, red bean and soybean paste. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Advanced complex trait analysis.

    PubMed

    Gray, A; Stewart, I; Tenesa, A

    2012-12-01

    The Genome-wide Complex Trait Analysis (GCTA) software package can quantify the contribution of genetic variation to phenotypic variation for complex traits. However, as those datasets of interest continue to increase in size, GCTA becomes increasingly computationally prohibitive. We present an adapted version, Advanced Complex Trait Analysis (ACTA), demonstrating dramatically improved performance. We restructure the genetic relationship matrix (GRM) estimation phase of the code and introduce the highly optimized parallel Basic Linear Algebra Subprograms (BLAS) library combined with manual parallelization and optimization. We introduce the Linear Algebra PACKage (LAPACK) library into the restricted maximum likelihood (REML) analysis stage. For a test case with 8999 individuals and 279,435 single nucleotide polymorphisms (SNPs), we reduce the total runtime, using a compute node with two multi-core Intel Nehalem CPUs, from ∼17 h to ∼11 min. The source code is fully available under the GNU Public License, along with Linux binaries. For more information see http://www.epcc.ed.ac.uk/software-products/acta. a.gray@ed.ac.uk Supplementary data are available at Bioinformatics online.
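The GRM estimation phase that ACTA hands to optimised BLAS amounts to the dense product A = ZZᵀ/m over standardised genotype dosages; a pure-Python sketch on a toy 4×3 dosage matrix (real workloads delegate exactly this product to BLAS):

```python
def standardise(snp):
    """Centre and scale one SNP's dosage column to mean 0, variance 1."""
    n = len(snp)
    mu = sum(snp) / n
    var = sum((x - mu) ** 2 for x in snp) / n
    return [(x - mu) / var ** 0.5 for x in snp]

def grm(dosages):
    """Genetic relationship matrix A = Z Z^T / m from n individuals x m
    SNPs of 0/1/2 dosages."""
    n, m = len(dosages), len(dosages[0])
    cols = [standardise([dosages[i][k] for i in range(n)]) for k in range(m)]
    return [[sum(cols[k][i] * cols[k][j] for k in range(m)) / m
             for j in range(n)] for i in range(n)]

# Toy data: 4 individuals, 3 SNPs (0/1/2 minor-allele dosages)
dosages = [[0, 1, 2], [1, 1, 0], [2, 0, 1], [1, 2, 2]]
A = grm(dosages)
```

Because this is an n×m by m×n matrix product, it is exactly the kind of computation where an optimised, parallel BLAS delivers the speed-ups reported for ACTA.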

  20. Determining the effect of grain size and maximum induction upon coercive field of electrical steels

    NASA Astrophysics Data System (ADS)

    Landgraf, Fernando José Gomes; da Silveira, João Ricardo Filipini; Rodrigues-Jr., Daniel

    2011-10-01

    Although theoretical models have already been proposed, experimental data are still lacking to quantify the influence of grain size upon the coercivity of electrical steels. Some authors consider a linear inverse proportionality, while others suggest a square root inverse proportionality. Results also differ with regard to the slope of the coercive field versus reciprocal grain size relation for a given material. This paper discusses two aspects of the problem: the maximum induction used for determining coercive force and the possible effect of lurking variables such as the breadth of the grain size distribution and crystallographic texture. Electrical steel sheets containing 0.7% Si, 0.3% Al and 24 ppm C were cold-rolled and annealed in order to produce different grain sizes (ranging from 20 to 150 μm). Coercive field was measured along the rolling direction and found to depend linearly on the reciprocal of grain size, with a slope of approximately 0.9 (A/m)mm at 1.0 T induction. A general relation for coercive field as a function of grain size and maximum induction was established, yielding an average absolute error below 4%. Through measurement of B50 and image analysis of micrographs, the effects of crystallographic texture and grain size distribution breadth were qualitatively discussed.
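The reported relation can be used directly to estimate coercive field from grain size; a small worked example assuming H_c = H_0 + slope/d with d in mm, the ~0.9 (A/m)·mm slope reported at 1.0 T, and a hypothetical intercept H_0 set to zero:

```python
def coercive_field(d_mm, slope=0.9, h0=0.0):
    """H_c (A/m) from the reciprocal grain-size law H_c = H_0 + slope/d.
    The intercept h0 is a hypothetical extra parameter, zero here."""
    return h0 + slope / d_mm

# Grain sizes spanning the 20-150 micrometre range of the study
hc_20um = coercive_field(20e-3)    # fine grains -> high coercivity
hc_150um = coercive_field(150e-3)  # coarse grains -> low coercivity
```

Across the study's grain-size range this law predicts roughly a sevenfold drop in coercive field from the finest to the coarsest grains.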

  1. Snow mapping and land use studies in Switzerland

    NASA Technical Reports Server (NTRS)

    Haefner, H. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. A system was developed for operational snow and land use mapping, based on a supervised classification method using various classification algorithms and representation of the results in maplike form on color film with a photomation system. Land use mapping, under European conditions, was achieved with a stepwise linear discriminant analysis using additional ratio variables. On fall images, signatures of built-up areas were often not separable from wetlands. Two different methods were tested to correlate the size of settlements with population, with an accuracy between +2% and -12% for the densely populated Swiss Plateau.

  2. Garment Counting in a Textile Warehouse by Means of a Laser Imaging System

    PubMed Central

    Martínez-Sala, Alejandro Santos; Sánchez-Aartnoutse, Juan Carlos; Egea-López, Esteban

    2013-01-01

    Textile logistic warehouses are highly automated mechanized places where control points are needed to count and validate the number of garments in each batch. This paper proposes and describes a low cost and small size automated system designed to count the number of garments by processing an image of the corresponding hanger hooks generated using an array of phototransistors sensors and a linear laser beam. The generated image is processed using computer vision techniques to infer the number of garment units. The system has been tested on two logistic warehouses with a mean error in the estimated number of hangers of 0.13%. PMID:23628760
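Counting garments from the linear sensor array reduces to counting runs of blocked phototransistors (hook shadows) in a 1-D intensity profile; a sketch with a synthetic profile (threshold and signal levels are invented, and the real system applies computer-vision processing to a 2-D image):

```python
def count_hangers(profile, threshold):
    """Count hanger hooks: each maximal run of sensor readings below the
    threshold (a hook blocking the laser line) is one garment."""
    count, inside = 0, False
    for v in profile:
        blocked = v < threshold
        if blocked and not inside:
            count += 1
        inside = blocked
    return count

# Synthetic phototransistor profile: bright background (1.0), three hook
# shadows (0.1) of varying width
profile = ([1.0] * 5 + [0.1] * 3 + [1.0] * 4 + [0.1] * 2 +
           [1.0] * 6 + [0.1] * 3 + [1.0] * 5)
```

Run-counting of this kind is robust to hook width and spacing, which is consistent with the very low (0.13%) counting error reported.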

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin S.

    The Futility package contains the following: 1) Definition of the size of integers and real numbers; 2) A generic unit test harness; 3) Definitions for some basic extensions to the Fortran language: arbitrary-length strings, a parameter list construct, exception handlers, command line processor, timers; 4) Geometry definitions: point, line, plane, box, cylinder, polyhedron; 5) File wrapper functions: standard Fortran input/output files, Fortran binary files, HDF5 files; 6) Parallel wrapper functions: MPI and OpenMP abstraction layers, partitioning algorithms; 7) Math utilities: BLAS, matrix and vector definitions, linear solver methods and wrappers for other TPLs (PETSc, MKL, etc.), preconditioner classes; 8) Misc: random number generator, water saturation properties, sorting algorithms.

  5. Origin of optical non-linear response in TiN owing to excitation dynamics of surface plasmon resonance electronic oscillations

    NASA Astrophysics Data System (ADS)

    Divya, S.; Nampoori, V. P. N.; Radhakrishnan, P.; Mujeeb, A.

    2014-08-01

    TiN nanoparticles of average size 55 nm were investigated for their optical non-linear properties. During the experiment, the irradiated laser wavelength coincided with the surface plasmon resonance (SPR) peak of the nanoparticles. The large non-linearity of the nanoparticles was attributed to the plasmon resonance, which greatly enhanced the local field within each nanoparticle. Both open- and closed-aperture Z-scan experiments were performed and the corresponding optical constants were determined. The post-excitation absorption spectra revealed the interesting phenomenon of photofragmentation, leading to a blue shift in the band gap and a red shift in the SPR. The results are discussed in terms of enhanced interparticle interaction simultaneous with size reduction. Notably, the optical constants, although intrinsic to a particular sample, change unusually with laser power intensity. The dependence of χ(3) is discussed in terms of the size variation caused by photofragmentation. The studies show that TiN nanoparticles are promising candidates for photonics applications.
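
    The open-aperture Z-scan signal mentioned above is commonly modeled, to first order in the on-axis nonlinear absorbance q0 = βI0Leff (valid for q0 < 1), as T(z) = 1 − q0 / (2√2 (1 + z²/z0²)). The sketch below evaluates this textbook model with illustrative parameters; it is not the paper's analysis or data.

```python
import math

def open_aperture_T(z, z0, q0):
    """Normalized open-aperture Z-scan transmittance, first-order
    approximation (valid for q0 < 1): T = 1 - q0 / (2*sqrt(2)*(1 + x^2))."""
    x = z / z0
    return 1.0 - q0 / (2.0 * math.sqrt(2.0) * (1.0 + x * x))

# Illustrative parameters (not from the paper): Rayleigh range z0 = 1 mm
# and nonlinear absorbance q0 = beta * I0 * Leff = 0.4.
z0, q0 = 1.0, 0.4
curve = [(z / 10.0, open_aperture_T(z / 10.0, z0, q0)) for z in range(-30, 31)]
z_min, T_min = min(curve, key=lambda p: p[1])
# The transmittance dip sits at the focus (z = 0), the open-aperture
# signature of intensity-dependent absorption.
```

Fitting this curve to measured transmittance is how the nonlinear absorption coefficient β, and from it the imaginary part of χ(3), is usually extracted.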

  6. Improving near-infrared prediction model robustness with support vector machine regression: a pharmaceutical tablet assay example.

    PubMed

    Igne, Benoît; Drennen, James K; Anderson, Carl A

    2014-01-01

    Changes in raw materials and process wear and tear can have significant effects on the prediction error of near-infrared calibration models. When the variability that is present during routine manufacturing is not included in the calibration, test, and validation sets, the long-term performance and robustness of the model will be limited. Nonlinearity is a major source of interference. In near-infrared spectroscopy, nonlinearity can arise from light path-length differences, which can come from differences in particle size or density. The usefulness of support vector machine (SVM) regression for handling nonlinearity and improving the robustness of calibration models, in scenarios where the calibration set did not include all the variability present in the test set, was evaluated. Compared to partial least squares (PLS) regression, SVM regression was less affected by physical (particle size) and chemical (moisture) differences. The linearity of the SVM-predicted values was also improved. Nevertheless, although visualization and interpretation tools have been developed to enhance the usability of SVM-based methods, work is yet to be done to provide chemometricians in the pharmaceutical industry with a regression method that can supplement PLS-based methods.
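
    To illustrate the kind of nonlinearity a kernel method can absorb, the sketch below compares an ordinary linear least-squares calibration with kernel ridge regression using an RBF kernel, used here as a simple stand-in from the same kernel-method family as SVM regression (it is not the SVM algorithm itself, and the data are synthetic).

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf(a, b, gamma=2.0):
    return math.exp(-gamma * (a - b) ** 2)

# Synthetic calibration data with a nonlinear response.
xs = [i / 10.0 for i in range(21)]           # 0.0 .. 2.0
ys = [math.sin(2.0 * x) for x in xs]
n = len(xs)

# Linear least-squares fit y = w0 + w1 * x (normal equations).
Sx, Sy = sum(xs), sum(ys)
Sxx = sum(x * x for x in xs)
Sxy = sum(x * y for x, y in zip(xs, ys))
w1 = (n * Sxy - Sx * Sy) / (n * Sxx - Sx * Sx)
w0 = (Sy - w1 * Sx) / n
lin_mse = sum((w0 + w1 * x - y) ** 2 for x, y in zip(xs, ys)) / n

# Kernel ridge fit: (K + lam*I) alpha = y; predict via sum_j alpha_j k(x, x_j).
lam = 1e-3
K = [[rbf(a, b) + (lam if i == j else 0.0) for j, b in enumerate(xs)]
     for i, a in enumerate(xs)]
alpha = solve(K, ys)
krr_mse = sum((sum(a * rbf(x, xj) for a, xj in zip(alpha, xs)) - y) ** 2
              for x, y in zip(xs, ys)) / n
# The kernel model tracks the curvature that the linear fit cannot.
```

The design choice mirrors the abstract's point: when path-length or particle-size effects bend the response, a linear latent-variable model leaves structured residuals that a kernel method can remove.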

  7. Searching for the right word: Hybrid visual and memory search for words

    PubMed Central

    Boettcher, Sage E. P.; Wolfe, Jeremy M.

    2016-01-01

    In “Hybrid Search” (Wolfe, 2012) observers search through visual space for any of multiple targets held in memory. With photorealistic objects as stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with memory set size, even when over 100 items are committed to memory. It is well established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva, 2008). Would hybrid search performance be similar if the targets were words or phrases, where word order can be important and where the processes of memorization might be different? In Experiment One, observers memorized 2, 4, 8, or 16 words in four different blocks. After passing a memory test confirming memorization of the list, observers searched for these words in visual displays containing 2 to 16 words. Replicating Wolfe (2012), RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment One were random. In Experiment Two, words were drawn from phrases that observers reported knowing by heart (e.g., “London Bridge is falling down”). Observers were asked to provide four phrases, ranging in length from 2 words to no less than 20 words (range 21–86). Words longer than 2 characters from the phrase constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect serial position effects, perhaps reducing RTs for the first (primacy) and/or last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock, 1962). Surprisingly, we found no reliable effects of word order. Thus, in “London Bridge is falling down”, “London” and “down” are found no faster than “falling”. PMID:25788035
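
    The reported pattern, linear in visual set size and logarithmic in memory set size, can be written as RT = a + b·V + c·log2(M). The sketch below evaluates that functional form; the coefficients a, b, c are made-up illustrative values, not fitted numbers from the study.

```python
import math

def predicted_rt(visual_set, memory_set, a=500.0, b=40.0, c=120.0):
    """Hybrid-search response time (ms): linear in visual set size,
    logarithmic in memory set size. Coefficients are illustrative."""
    return a + b * visual_set + c * math.log2(memory_set)

# Doubling the memory list adds a constant increment (c per doubling)...
inc_2_to_4 = predicted_rt(8, 4) - predicted_rt(8, 2)
inc_8_to_16 = predicted_rt(8, 16) - predicted_rt(8, 8)
# ...while each added visual item adds a constant slope b.
per_item = predicted_rt(9, 4) - predicted_rt(8, 4)
```

This is why quadrupling the memorized list from 4 to 16 words costs the model only two fixed increments, while quadrupling the display size costs four times the per-item slope.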

  8. The use of computed radiography plates to determine light and radiation field coincidence.

    PubMed

    Kerns, James R; Anand, Aman

    2013-11-01

    Photo-stimulable phosphor computed radiography (CR) has characteristics that allow the output to be manipulated by both radiation and optical light. The authors have developed a method that uses these characteristics to carry out radiation field and light field coincidence quality assurance on linear accelerators. CR detectors from Kodak were used outside their cassettes to measure both radiation and light field edges from a Varian linear accelerator. The CR detector was first exposed to a radiation field and then to a slightly smaller light field. The light impinged on the detector's latent image, partially erasing the portion exposed to the light field. The detector was then digitally scanned. A MATLAB-based algorithm was developed to automatically analyze the images and determine the edges of the light and radiation fields, the vector between the field centers, and the crosshair center. Radiographic film was also used as a control to confirm the radiation field size. Analysis showed a high degree of repeatability with the proposed method. Results from the proposed method and radiographic film showed excellent agreement on the radiation field. The effect of varying monitor units and light exposure time was tested and found to be very small. Radiation and light field sizes were determined with an uncertainty of less than 1 mm, and light and crosshair centers were determined within 0.1 mm. A new method was developed to digitally determine the radiation and light field size using CR photo-stimulable phosphor plates. The method is quick and reproducible, allowing for streamlined and robust assessment of light and radiation field coincidence, with no observer interpretation needed.
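
    The edge-finding step of such an algorithm can be sketched as locating the 50%-of-maximum crossings of a one-dimensional profile, with linear interpolation between samples. The profile below is idealized and the MATLAB details of the authors' implementation are not reproduced; this is an assumed minimal version.

```python
def field_edges(profile, positions):
    """Locate field edges at 50% of the profile maximum, with linear
    interpolation between sample points."""
    half = max(profile) / 2.0
    edges = []
    for i in range(len(profile) - 1):
        lo, hi = profile[i], profile[i + 1]
        if (lo - half) * (hi - half) < 0:        # crossing between samples
            frac = (half - lo) / (hi - lo)
            edges.append(positions[i] + frac * (positions[i + 1] - positions[i]))
    return edges

# Idealized 1 mm-sampled crossline profile: ~100 mm open field with a
# one-sample soft penumbra on each side (values are illustrative).
positions = list(range(0, 141))                  # mm
profile = [100.0 if 20 <= p <= 120 else 0.0 for p in positions]
profile[19], profile[20] = 25.0, 75.0
profile[120], profile[121] = 75.0, 25.0
left, right = field_edges(profile, positions)
field_size = right - left                        # 101.0 mm
```

Running the same crossing search on the light-erased region versus the radiation-exposed region, in both in-plane and crossline directions, yields the two field outlines whose centers the vector comparison uses.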

  9. Universal Spatial Correlation Functions for Describing and Reconstructing Soil Microstructure

    PubMed Central

    Skvortsova, Elena B.; Mallants, Dirk

    2015-01-01

    Structural features of porous materials such as soil define the majority of their physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, or gas exchange between the biologically active soil root zone and the atmosphere) and solute transport. To characterize soil microstructure, conventional soil science uses metrics such as pore size, pore-size distributions and thin-section-derived morphological indicators. However, these descriptors provide only a limited amount of information about the complex arrangement of soil structure and have limited capability to reconstruct structural features or predict physical properties. We introduce three different spatial correlation functions as a comprehensive tool to characterize soil microstructure: 1) two-point probability functions, 2) linear functions, and 3) two-point cluster functions. This novel approach was tested on thin sections (2.21×2.21 cm2) representing eight soils with different pore space configurations. The two-point probability and linear correlation functions were subsequently used as part of simulated annealing optimization procedures to reconstruct soil structure. Comparison of original and reconstructed images was based on morphological characteristics, cluster correlation functions, total number of pores and pore-size distribution. Results showed excellent agreement for soils with isolated pores, but relatively poor correspondence for soils exhibiting dual-porosity features (i.e. superposition of pores and micro-cracks). Insufficient information content in the correlation function sets used for reconstruction may have contributed to the observed discrepancies. Improved reconstructions may be obtained by adding cluster and other correlation functions into reconstruction sets. Correlation functions and the associated stochastic reconstruction algorithms introduced here are universally applicable in soil science, for example for soil classification, pore-scale modelling of soil properties, soil degradation monitoring, and description of the spatial dynamics of soil microbial activity. PMID:26010779

  10. Universal spatial correlation functions for describing and reconstructing soil microstructure.

    PubMed

    Karsanina, Marina V; Gerke, Kirill M; Skvortsova, Elena B; Mallants, Dirk

    2015-01-01

    Structural features of porous materials such as soil define the majority of their physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, or gas exchange between the biologically active soil root zone and the atmosphere) and solute transport. To characterize soil microstructure, conventional soil science uses metrics such as pore size, pore-size distributions and thin-section-derived morphological indicators. However, these descriptors provide only a limited amount of information about the complex arrangement of soil structure and have limited capability to reconstruct structural features or predict physical properties. We introduce three different spatial correlation functions as a comprehensive tool to characterize soil microstructure: 1) two-point probability functions, 2) linear functions, and 3) two-point cluster functions. This novel approach was tested on thin sections (2.21×2.21 cm2) representing eight soils with different pore space configurations. The two-point probability and linear correlation functions were subsequently used as part of simulated annealing optimization procedures to reconstruct soil structure. Comparison of original and reconstructed images was based on morphological characteristics, cluster correlation functions, total number of pores and pore-size distribution. Results showed excellent agreement for soils with isolated pores, but relatively poor correspondence for soils exhibiting dual-porosity features (i.e. superposition of pores and micro-cracks). Insufficient information content in the correlation function sets used for reconstruction may have contributed to the observed discrepancies. Improved reconstructions may be obtained by adding cluster and other correlation functions into reconstruction sets. Correlation functions and the associated stochastic reconstruction algorithms introduced here are universally applicable in soil science, for example for soil classification, pore-scale modelling of soil properties, soil degradation monitoring, and description of the spatial dynamics of soil microbial activity.
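
    A minimal version of the first of these descriptors, the two-point probability function S2(r) of a binary pore/solid image, can be sketched as below. For simplicity it samples only axis-aligned lags; published implementations average over more directions and use FFTs for speed.

```python
def two_point_probability(img, r):
    """Two-point probability S2(r) of a binary image (1 = pore):
    the probability that two pixels a lag r apart, along the image
    axes, both fall in the pore phase. S2(0) equals the porosity."""
    rows, cols = len(img), len(img[0])
    hits = trials = 0
    for i in range(rows):
        for j in range(cols):
            if j + r < cols:                 # horizontal pair
                hits += img[i][j] * img[i][j + r]
                trials += 1
            if r > 0 and i + r < rows:       # vertical pair
                hits += img[i][j] * img[i + r][j]
                trials += 1
    return hits / trials

# Toy "thin section": a single 2x2 pore in a 4x4 solid matrix.
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
porosity = two_point_probability(img, 0)     # 4/16 = 0.25
```

In a simulated-annealing reconstruction, the same function is evaluated on the candidate image at each step, and pixel swaps are accepted when they move the candidate's S2(r) closer to the target's.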

  11. Pressure attenuation during high-frequency airway clearance therapy across different size endotracheal tubes: An in vitro study.

    PubMed

    Smallwood, Craig D; Bullock, Kevin J; Gouldstone, Andrew

    2016-08-01

    High-frequency airway clearance therapy is a positive-pressure secretion clearance modality used in pediatric and adult applications. However, pressure attenuation across different size endotracheal tubes (ETTs) has not been adequately described. This study quantifies attenuation in an in vitro model. The MetaNeb® System was used to deliver high-frequency pressure pulses to 3.0, 4.0, 6.0 and 8.0 mm ID ETTs connected to a test lung during mechanical ventilation. The experimental setup included a 3D-printed trachea model and embedded pressure sensors. The pressure attenuation (Patt%) was calculated as Patt% = [(Pproximal - Pdistal)/Pproximal] × 100. The effect of pulse frequency on Pdistal and Pproximal was quantified. Patt% was inversely and linearly related to ETT ID (y = -7.924x + 74.36; R(2) = 0.9917, P = .0042 for the 4.0 Hz pulse frequency, and y = -7.382x + 9.445; R(2) = 0.9964, P = .0018 for the 3.0 Hz pulse frequency). Patt% across the 3.0, 4.0, 6.0 and 8.0 mm ID ETTs was 48.88±10.25%, 40.87±5.22%, 27.97±5.29%, and 9.90±1.9%, respectively. Selecting the 4.0 Hz frequency mode yielded higher Pproximal and Pdistal than the 3.0 Hz frequency mode (P = .0049 and P = .0065). Observed Pdistal was <30 cmH2O for all experiments. In an in vitro model, pressure attenuation was linearly related to the inner diameter of the endotracheal tube, with decreasing attenuation as ETT size increased.
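
    The attenuation formula above is straightforward to apply. The sketch below wraps it in a function and back-computes an illustrative distal pressure from the reported mean attenuation for the 3.0 mm ID tube; the 40 cmH2O proximal pulse is an assumed value, not from the study.

```python
def attenuation_pct(p_proximal, p_distal):
    """Pressure attenuation across an endotracheal tube:
    Patt% = [(Pproximal - Pdistal) / Pproximal] * 100."""
    return (p_proximal - p_distal) / p_proximal * 100.0

# Example using the reported ~48.88% mean attenuation for the 3.0 mm ID
# tube: an assumed 40 cmH2O proximal pulse arrives distally at ~20.4 cmH2O.
p_prox = 40.0
p_dist = p_prox * (1.0 - 48.88 / 100.0)
print(round(attenuation_pct(p_prox, p_dist), 2))  # → 48.88
```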

  12. Importance of elastic finite-size effects: Neutral defects in ionic compounds

    DOE PAGES

    Burr, P. A.; Cooper, M. W. D.

    2017-09-15

    Small system sizes are a well-known source of error in DFT calculations, yet computational constraints frequently dictate the use of small supercells, often as small as 96 atoms in oxides and compound semiconductors. In ionic compounds, electrostatic finite-size effects have been well characterised, but self-interaction of charge-neutral defects is often discounted or assumed to follow an asymptotic behaviour and thus easily corrected with linear elastic theory. Here we show that elastic effects are also important in the description of defects in ionic compounds and can lead to qualitatively incorrect conclusions if inadequately small supercells are used; moreover, the spurious self-interaction does not follow the behaviour predicted by linear elastic theory. Considering the exemplar cases of metal oxides with fluorite structure, we show that numerous previous studies, employing 96-atom supercells, misidentify the ground-state structure of (charge-neutral) Schottky defects. We show that the error is eliminated by employing larger cells (324, 768 and 1500 atoms), and careful analysis determines that elastic effects, not electrostatic, are responsible. The spurious self-interaction was also observed in non-oxide ionic compounds and irrespective of the computational method used, thereby resolving long-standing discrepancies between DFT and force-field methods, previously attributed to the level of theory. The surprising magnitude of the elastic effects is a cautionary tale for defect calculations in ionic materials, particularly when employing computationally expensive methods (e.g. hybrid functionals) or when modelling large defect clusters. We propose two computationally practicable methods to test the magnitude of the elastic self-interaction in any ionic system. In commonly studied oxides, where electrostatic effects would be expected to be dominant, it is the elastic effects that dictate the need for larger supercells: greater than 96 atoms.

  13. Importance of elastic finite-size effects: Neutral defects in ionic compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, P. A.; Cooper, M. W. D.

    Small system sizes are a well-known source of error in DFT calculations, yet computational constraints frequently dictate the use of small supercells, often as small as 96 atoms in oxides and compound semiconductors. In ionic compounds, electrostatic finite-size effects have been well characterised, but self-interaction of charge-neutral defects is often discounted or assumed to follow an asymptotic behaviour and thus easily corrected with linear elastic theory. Here we show that elastic effects are also important in the description of defects in ionic compounds and can lead to qualitatively incorrect conclusions if inadequately small supercells are used; moreover, the spurious self-interaction does not follow the behaviour predicted by linear elastic theory. Considering the exemplar cases of metal oxides with fluorite structure, we show that numerous previous studies, employing 96-atom supercells, misidentify the ground-state structure of (charge-neutral) Schottky defects. We show that the error is eliminated by employing larger cells (324, 768 and 1500 atoms), and careful analysis determines that elastic effects, not electrostatic, are responsible. The spurious self-interaction was also observed in non-oxide ionic compounds and irrespective of the computational method used, thereby resolving long-standing discrepancies between DFT and force-field methods, previously attributed to the level of theory. The surprising magnitude of the elastic effects is a cautionary tale for defect calculations in ionic materials, particularly when employing computationally expensive methods (e.g. hybrid functionals) or when modelling large defect clusters. We propose two computationally practicable methods to test the magnitude of the elastic self-interaction in any ionic system. In commonly studied oxides, where electrostatic effects would be expected to be dominant, it is the elastic effects that dictate the need for larger supercells: greater than 96 atoms.

  14. Importance of elastic finite-size effects: Neutral defects in ionic compounds

    NASA Astrophysics Data System (ADS)

    Burr, P. A.; Cooper, M. W. D.

    2017-09-01

    Small system sizes are a well-known source of error in density functional theory (DFT) calculations, yet computational constraints frequently dictate the use of small supercells, often as small as 96 atoms in oxides and compound semiconductors. In ionic compounds, electrostatic finite-size effects have been well characterized, but self-interaction of charge-neutral defects is often discounted or assumed to follow an asymptotic behavior and thus easily corrected with linear elastic theory. Here we show that elastic effects are also important in the description of defects in ionic compounds and can lead to qualitatively incorrect conclusions if inadequately small supercells are used; moreover, the spurious self-interaction does not follow the behavior predicted by linear elastic theory. Considering the exemplar cases of metal oxides with fluorite structure, we show that numerous previous studies, employing 96-atom supercells, misidentify the ground-state structure of (charge-neutral) Schottky defects. We show that the error is eliminated by employing larger cells (324, 768, and 1500 atoms), and careful analysis determines that elastic, not electrostatic, effects are responsible. The spurious self-interaction was also observed in nonoxide ionic compounds irrespective of the computational method used, thereby resolving long-standing discrepancies between DFT and force-field methods, previously attributed to the level of theory. The surprising magnitude of the elastic effects is a cautionary tale for defect calculations in ionic materials, particularly when employing computationally expensive methods (e.g., hybrid functionals) or when modeling large defect clusters. We propose two computationally practicable methods to test the magnitude of the elastic self-interaction in any ionic system. In commonly studied oxides, where electrostatic effects would be expected to be dominant, it is the elastic effects that dictate the need for larger supercells: greater than 96 atoms.
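
    A standard way to probe such finite-size self-interaction is to compute the defect energy in successively larger supercells and extrapolate against inverse cell size. The paper's point is precisely that this asymptotic form can fail for elastic effects, so the fit below is the baseline one would check deviations against; the energies are hypothetical numbers, not the paper's results.

```python
def extrapolate_defect_energy(sizes, energies):
    """Least-squares fit of E(N) = E_inf + a / N to defect formation
    energies from supercells of N atoms; returns (E_inf, a). A strong
    residual trend against this fit signals elastic self-interaction
    beyond the linear-elastic asymptotic form."""
    xs = [1.0 / n for n in sizes]
    m = len(xs)
    sx, sy = sum(xs), sum(energies)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, energies))
    a = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    e_inf = (sy - a * sx) / m
    return e_inf, a

# Illustrative (made-up) Schottky-defect energies for the cell sizes
# used in the paper: 96, 324, 768 and 1500 atoms.
sizes = [96, 324, 768, 1500]
energies = [7.90, 7.62, 7.55, 7.52]  # eV, hypothetical
e_inf, slope = extrapolate_defect_energy(sizes, energies)
```

If the 96-atom point sits far off the line defined by the larger cells, the small supercell is contaminated by self-interaction and its relative defect-configuration energies should not be trusted.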

  15. Size-independent neural networks based first-principles method for accurate prediction of heat of formation of fuels

    NASA Astrophysics Data System (ADS)

    Yang, GuanYa; Wu, Jiang; Chen, ShuGuang; Zhou, WeiJun; Sun, Jian; Chen, GuanHua

    2018-06-01

    A neural-network-based first-principles method for predicting heat of formation (HOF) was previously demonstrated to achieve chemical accuracy in a broad spectrum of target molecules [L. H. Hu et al., J. Chem. Phys. 119, 11501 (2003)]. However, its accuracy deteriorates with increasing molecular size. A closer inspection reveals a systematic correlation between the prediction error and the molecular size, which appears correctable by further statistical analysis, calling for a more sophisticated machine learning algorithm. Despite the apparent difference between simple and complex molecules, all the essential physical information is already present in a carefully selected set of small-molecule representatives. A model that can capture the fundamental physics would be able to predict large and complex molecules from information extracted only from a small-molecule database. To this end, a size-independent, multi-step multi-variable linear regression-neural network-B3LYP method is developed in this work, which successfully improves the overall prediction accuracy by training with smaller molecules only. In particular, the calculation errors for larger molecules are drastically reduced to the same magnitude as those of the smaller molecules. Specifically, the method is based on a 164-molecule database consisting of molecules made of hydrogen and carbon elements. Four molecular descriptors were selected to encode a molecule's characteristics, among which the raw HOF calculated from B3LYP and the molecular size are included. Upon the size-independent machine learning correction, the mean absolute deviation (MAD) of the B3LYP/6-311+G(3df,2p)-calculated HOF is reduced from 16.58 to 1.43 kcal/mol and from 17.33 to 1.69 kcal/mol for the training and testing sets (small molecules), respectively. Furthermore, the MAD of the testing set (large molecules) is reduced from 28.75 to 1.67 kcal/mol.
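
    The idea of learning a size-dependent correction from small molecules only can be caricatured with a one-descriptor linear fit: regress the raw-B3LYP error on molecular size for small molecules, then subtract the predicted error for a large one. All numbers below are hypothetical; the actual method uses four descriptors and a neural network.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = w0 + w1 * x."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    w1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - w1 * sx) / n, w1

# Hypothetical (carbon count, raw-B3LYP HOF error in kcal/mol) pairs.
# Training on SMALL molecules only, mimicking the paper's protocol.
train = [(2, 1.1), (3, 1.9), (4, 2.8), (5, 3.6), (6, 4.5)]
w0, w1 = fit_line([c for c, _ in train], [e for _, e in train])

# Apply the size-dependent correction to a LARGE molecule (20 carbons):
raw_error_large = 16.9                   # hypothetical uncorrected error
corrected = raw_error_large - (w0 + w1 * 20)
```

Because the error grows systematically with size, a correction fitted on small molecules extrapolates usefully to large ones, which is the mechanism behind the MAD collapse from 28.75 to 1.67 kcal/mol reported for the large-molecule testing set.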

  16. Aerosol Size Distributions During ACE-Asia: Retrievals From Optical Thickness and Comparisons With In-situ Measurements

    NASA Astrophysics Data System (ADS)

    Kuzmanoski, M.; Box, M.; Box, G. P.; Schmidt, B.; Russell, P. B.; Redemann, J.; Livingston, J. M.; Wang, J.; Flagan, R. C.; Seinfeld, J. H.

    2002-12-01

    As part of the ACE-Asia experiment, conducted off the coasts of China, Korea and Japan in spring 2001, measurements of aerosol physical, chemical and radiative characteristics were performed aboard the Twin Otter aircraft. Of particular importance for this paper were spectral measurements of aerosol optical thickness obtained at 13 discrete wavelengths, within the 354-1558 nm wavelength range, using the AATS-14 sunphotometer. Spectral aerosol optical thickness can be used to obtain information about particle size distribution. In this paper, we use sunphotometer measurements to retrieve aerosol size distributions during ACE-Asia. We focus on four cases in which layers influenced by different air masses were identified. The aerosol optical thickness of each layer was inverted using two different techniques: constrained linear inversion and a multimodal method. In the constrained linear inversion algorithm, no assumption about the mathematical form of the distribution to be retrieved is made. Conversely, the multimodal technique assumes that the aerosol size distribution is represented as a linear combination of a few lognormal modes with predefined values of mode radii and geometric standard deviations. The amplitudes of the modes are varied to obtain the best fit of the sum of optical thicknesses due to the individual modes to the sunphotometer measurements. In this paper we compare the results of these two retrieval methods. In addition, we present comparisons of retrieved size distributions with in situ measurements taken using an aerodynamic particle sizer and a differential mobility analyzer system aboard the Twin Otter aircraft.
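
    The multimodal fitting step, solving for mode amplitudes so that the summed per-mode optical thickness matches the measured spectrum, reduces to linear least squares once the mode spectra are fixed. The sketch below uses simple power-law stand-ins for the Mie-computed per-mode spectra a real retrieval would use; wavelengths roughly follow AATS-14 channels, and the "measured" spectrum is synthetic.

```python
def fit_two_modes(wavelengths, tau, f_fine, f_coarse):
    """Least-squares amplitudes (a_fine, a_coarse) such that
    tau(w) ≈ a_fine * f_fine(w) + a_coarse * f_coarse(w),
    solving the 2x2 normal equations directly."""
    g1 = [f_fine(w) for w in wavelengths]
    g2 = [f_coarse(w) for w in wavelengths]
    a11 = sum(x * x for x in g1)
    a12 = sum(x * y for x, y in zip(g1, g2))
    a22 = sum(x * x for x in g2)
    b1 = sum(x * t for x, t in zip(g1, tau))
    b2 = sum(x * t for x, t in zip(g2, tau))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# Illustrative stand-ins for per-mode optical thickness spectra
# (real retrievals compute these from lognormal modes via Mie theory):
fine = lambda w: (w / 0.5) ** -1.5     # small particles: steep spectrum
coarse = lambda w: (w / 0.5) ** -0.2   # large particles: flat spectrum
wavelengths = [0.354, 0.5, 0.675, 0.865, 1.02, 1.558]   # micrometres
# Synthetic "measured" optical thickness: 0.3 fine + 0.1 coarse.
tau = [0.3 * fine(w) + 0.1 * coarse(w) for w in wavelengths]
a_fine, a_coarse = fit_two_modes(wavelengths, tau, fine, coarse)
```

The recovered amplitudes scale the predefined lognormal modes, so the retrieved size distribution follows directly; noise and more than two modes would call for non-negative least squares rather than the plain normal equations.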

  17. How large are the consequences of covariate imbalance in cluster randomized trials: a simulation study with a continuous outcome and a binary covariate at the cluster level.

    PubMed

    Moerbeek, Mirjam; van Schie, Sander

    2016-07-11

    The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods of randomizing clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
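
    The mechanism can be illustrated with a deliberately imbalanced toy trial: eight clusters, a strong binary cluster-level covariate concentrated in the treated arm, and a comparison of the unadjusted arm difference with a covariate-stratified estimate. The design, effect sizes and noise level are invented for illustration, and the stratified estimator is a simple stand-in for an adjusted linear mixed model.

```python
import random

def simulate_trial(effect=1.0, cov_effect=5.0, noise=0.3, seed=7):
    """Simulate a small cluster-randomized trial (8 clusters) whose binary
    cluster-level covariate is imbalanced across arms: 3 of 4 treated
    clusters carry the covariate versus 1 of 4 control clusters. Returns
    (unadjusted, adjusted) treatment-effect estimates, where "adjusted"
    averages the arm difference within each covariate stratum."""
    rng = random.Random(seed)
    design = [(1, 1), (1, 1), (1, 1), (1, 0),    # (arm, covariate)
              (0, 1), (0, 0), (0, 0), (0, 0)]
    means = [effect * a + cov_effect * c + rng.gauss(0.0, noise)
             for a, c in design]

    def avg(v):
        return sum(v) / len(v)

    def cell(a, c=None):
        return [m for (x, y), m in zip(design, means)
                if x == a and (c is None or y == c)]

    unadjusted = avg(cell(1)) - avg(cell(0))
    adjusted = avg([avg(cell(1, c)) - avg(cell(0, c)) for c in (0, 1)])
    return unadjusted, adjusted

unadj, adj = simulate_trial()
# The unadjusted difference absorbs the covariate imbalance
# (roughly effect + cov_effect * (3/4 - 1/4)); the stratified one does not.
```

With a true effect of 1.0 the unadjusted estimate lands near 3.5, a parameter bias of the same order as the abstract's "up to 100%" finding, while the stratified estimate stays near the truth.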

  18. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    NASA Astrophysics Data System (ADS)

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    We explore optimization methods for planning the placement, sizing and operation of flexible alternating current transmission system (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to series compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of linear programs (LPs) that are accelerated further using cutting-plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop, a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically sized networks that suffer congestion from a range of causes, including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically sized network.
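
    The l1 objective in such a master problem is reduced to an LP by the standard variable split |x_i| ≤ t_i; sparsity of the solution is exactly what the l1 norm promotes. The sketch below only builds the LP data (c, G, h) for min ||x||_1 subject to Ax ≤ b; it is a generic textbook reformulation, not the authors' code, and any LP solver can consume the output.

```python
def l1_min_as_lp(A, b):
    """Rewrite  min ||x||_1  s.t.  A x <= b  as a standard-form LP
    min c^T z  s.t.  G z <= h  over z = [x, t], using |x_i| <= t_i.
    Returns (c, G, h)."""
    m, n = len(A), len(A[0])
    c = [0.0] * n + [1.0] * n                 # minimize sum of t_i
    G, h = [], []
    for i in range(m):                        # original constraints on x
        G.append(A[i] + [0.0] * n)
        h.append(b[i])
    for j in range(n):                        #  x_j - t_j <= 0
        row = [0.0] * (2 * n)
        row[j], row[n + j] = 1.0, -1.0
        G.append(row)
        h.append(0.0)
    for j in range(n):                        # -x_j - t_j <= 0
        row = [0.0] * (2 * n)
        row[j], row[n + j] = -1.0, -1.0
        G.append(row)
        h.append(0.0)
    return c, G, h

# Tiny example: one thermal-limit-style constraint x1 + x2 >= 1,
# written as -x1 - x2 <= -1, with the sparsest l1 solution sought.
c, G, h = l1_min_as_lp([[-1.0, -1.0]], [-1.0])
```

In the paper's setting x would be the vector of per-line inductance modifications; the succession-of-LPs heuristic re-linearizes the power-flow constraints around each solution and re-solves this LP.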

  19. Individual-Area Relationship Best Explains Goose Species Density in Wetlands

    PubMed Central

    Prins, Herbert H. T.; Cao, Lei; de Boer, Willem Fred

    2015-01-01

    Explaining and predicting animal distributions is one of the fundamental objectives in ecology and conservation biology. Animal habitat selection can be regulated by top-down and bottom-up processes, and is mediated by species interactions. Species varying in body size respond differently to top-down and bottom-up determinants, and hence understanding these allometric responses is important for conservation. In this study, using two differently sized goose species wintering in the Yangtze floodplain, we tested predictions derived from three different hypotheses (the individual-area relationship, food resource and disturbance hypotheses) to explain the spatial and temporal variation in the densities of the two goose species. Using Generalized Linear Mixed Models with a Markov Chain Monte Carlo technique, we demonstrated that goose density was positively correlated with patch area, suggesting that the individual-area relationship best predicts differences in goose densities. The other predictions, related to food availability and disturbance, were not significant. Buffalo grazing probably facilitated greater white-fronted geese, as the number of buffalos was positively correlated with the density of this species. We conclude that patch area is the most important factor determining the density of goose species in our study area. Patch area is directly determined by water levels in the Yangtze floodplain, and hence modifying the hydrological regimes can enlarge the capacity of these wetlands for migratory birds. PMID:25996502

  20. Simulation and experiment for depth sizing of cracks in anchor bolts by ultrasonic phased array technology

    NASA Astrophysics Data System (ADS)

    Lin, Shan

    2018-04-01

    There have been many reports of cracks occurring in bolts in aging nuclear and thermal power plants. Sizing of such cracks is crucial for assessing the integrity of the bolts. Currently, hammering and visual tests are used to detect cracks in bolts; however, they are not applicable to sizing cracks. Although the tip diffraction method is well known as a crack sizing technique, reflection echoes from threads make it difficult to apply this technique to bolts. This paper addresses a method for depth sizing of cracks in bolts by means of ultrasonic phased array technology. Numerical results of wave propagation in bolts obtained by the finite element method (FEM) show that a peak associated with the vicinity of the crack tip can be observed in the curve of echo intensity versus refraction angle for deep cracks. The refraction angle corresponding to this peak decreases as crack depth increases. These numerical results are verified by experiments on bolt specimens that have electrical discharge machining notches or fatigue cracks with different depths. In the experiments, a 10-MHz linear array probe is used. Crack depths in bolts are determined using the refraction angle associated with the peak and compared to actual depths. The comparison shows that crack depth can be determined accurately from the inspection results.
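
    Computationally, the sizing step reduces to locating the refraction angle at which echo intensity peaks and then mapping that angle to depth via a calibration built from FEM results. A minimal peak-locating sketch, with an invented intensity curve, is:

```python
def peak_refraction_angle(angles, intensities):
    """Refraction angle (deg) of maximum echo intensity in a
    phased-array angular sweep; per the study, this angle decreases
    as the crack grows deeper, so it can be calibrated to depth."""
    best = max(range(len(angles)), key=lambda i: intensities[i])
    return angles[best]

# Hypothetical sweep: intensity peaks near 42 deg for a shallow crack.
angles = list(range(30, 61))                     # refraction angles, deg
intensities = [100 - (a - 42) ** 2 for a in angles]
print(peak_refraction_angle(angles, intensities))  # → 42
```

In practice the sweep would come from the 10-MHz linear array's beam steering, and the angle-to-depth mapping would interpolate an FEM-derived calibration table.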
