Reconnaissance On Chi-Square Test Procedure For Determining Two Species Association
NASA Astrophysics Data System (ADS)
Marisa, Hanifa
2008-01-01
Determining the association of two species by using the chi-square test has been published. Applying this procedure to plant species at a certain location shows that the procedure could not find an "ecological" association. Tens of sampling units have been made to record some weed species in Indralaya, South Sumatera. The chi-square test, Xt² = N[|(ad) − (bc)| − (N/2)]²/(mnrs) (Eq. 1), applied to two weed species (Cleome sp. and Eleusine indica) shows a positive association, while ecologically, in nature, there is no relationship between them. Some alternatives are proposed for this problem: simplify the chi-square test steps, make a further study to find out the ecological association, or, at last, ignore it.
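As a rough illustration of Eq. 1, the following sketch applies the Yates-corrected chi-square to a hypothetical 2×2 presence/absence table for two species; the counts are invented, and the use of scipy for the p-value is an assumption, not part of the study.

```python
from scipy.stats import chi2

# Hypothetical 2x2 presence/absence counts for two species across sampling units
a, b, c, d = 12, 3, 4, 11      # a: both present, b/c: only one present, d: both absent
N = a + b + c + d              # number of sampling units
m, n = a + b, c + d            # row (marginal) totals
r, s = a + c, b + d            # column (marginal) totals

# Eq. 1: Yates-corrected chi-square for a 2x2 association table
x2 = N * (abs(a * d - b * c) - N / 2) ** 2 / (m * n * r * s)
p = chi2.sf(x2, df=1)          # one degree of freedom
print(f"chi-square = {x2:.3f}, p = {p:.4f}")
```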
Predicting falls in older adults using the four square step test.
Cleary, Kimberly; Skornyakov, Elena
2017-10-01
The Four Square Step Test (FSST) is a performance-based balance tool involving stepping over four single-point canes placed on the floor in a cross configuration. The purpose of this study was to evaluate properties of the FSST in older adults who lived independently. Forty-five community dwelling older adults provided fall history and completed the FSST, Berg Balance Scale (BBS), Timed Up and Go (TUG), and Tinetti in random order. Future falls were recorded for 12 months following testing. The FSST accurately distinguished between non-fallers and multiple fallers, and the 15-second threshold score accurately distinguished multiple fallers from non-multiple fallers based on fall history. The FSST predicted future falls, and performance on the FSST was significantly correlated with performance on the BBS, TUG, and Tinetti. However, the test is not appropriate for older adults who use walkers. Overall, the FSST is a valid yet underutilized measure of balance performance and fall prediction tool that physical therapists should consider using in ambulatory community dwelling older adults.
Selection of lasing direction in single mode semiconductor square ring cavities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jin-Woong; Kim, Kyoung-Youm; Moon, Hee-Jong
We propose and demonstrate a selection scheme of lasing direction by imposing a loss imbalance structure into the single mode square ring cavity. The control of the traveling direction is realized by introducing a taper-step section in one of the straight waveguides of the square ring cavity. It was shown by semi-analytic calculation that the taper-step section in the cavity provides effective loss imbalance between two travelling directions as the round trip repeats. Various kinds of square cavities were fabricated using InGaAsP/InGaAs multiple quantum well semiconductor materials in order to test the direction selectivity while maintaining the single mode. We also measured the pump power dependent lasing spectra to investigate the maintenance property of the lasing direction. The experimental results demonstrated that the proposed scheme is an efficient means for unidirectional lasing in a single mode laser.
Roos, Margaret A; Reisman, Darcy S; Hicks, Gregory; Rose, William; Rudolph, Katherine S
2016-01-01
Adults with stroke have difficulty avoiding obstacles when walking, especially when a time constraint is imposed. The Four Square Step Test (FSST) evaluates dynamic balance by requiring individuals to step over canes in multiple directions while being timed, but many people with stroke are unable to complete it. The purposes of this study were to (1) modify the FSST by replacing the canes with tape so that more persons with stroke could successfully complete the test and (2) examine the reliability and validity of the modified version. Fifty-five subjects completed the Modified FSST (mFSST) by stepping over tape in all four directions while being timed. The mFSST resulted in significantly greater numbers of subjects completing the test than the FSST (39/55 [71%] and 33/55 [60%], respectively) (p < 0.04). The test-retest, intrarater, and interrater reliability of the mFSST were excellent (intraclass correlation coefficient ranges: 0.81-0.99). Construct and concurrent validity of the mFSST were also established. The minimal detectable change was 6.73 s. The mFSST, an ideal measure of dynamic balance, can identify progress in people with stroke in varied settings and can be completed by a wide range of people with stroke in approximately 5 min with the use of minimal equipment (tape, stop watch).
Water-Pressure Distribution on Seaplane Float
NASA Technical Reports Server (NTRS)
Thompson, F L
1929-01-01
The investigation presented in this report was conducted for the purpose of determining the distribution and magnitude of water pressures likely to be experienced on seaplane hulls in service. It consisted of the development and construction of apparatus for recording water pressures lasting one one-hundredth second or longer and of flight tests to determine the water pressures on a UO-1 seaplane float under various conditions of taxiing, taking off, and landing. The apparatus developed was found to operate with satisfactory accuracy and is suitable for flight tests on other seaplanes. The tests on the UO-1 showed that maximum pressures of about 6.5 pounds per square inch occur at the step for the full width of the float bottom. Proceeding forward from the step the maximum pressures decrease in magnitude uniformly toward the bow, and the region of highest pressures narrows toward the keel. Immediately abaft the step the maximum pressures are very small, but increase in magnitude toward the stern and there once reached a value of about 5 pounds per square inch. (author)
MIDAS robust trend estimator for accurate GPS station velocities without step detection
NASA Astrophysics Data System (ADS)
Blewitt, Geoffrey; Kreemer, Corné; Hammond, William C.; Gazeaux, Julien
2016-03-01
Automatic estimation of velocities from GPS coordinate time series is becoming required to cope with the exponentially increasing flood of available data, but problems detectable to the human eye are often overlooked. This motivates us to find an automatic and accurate estimator of trend that is resistant to common problems such as step discontinuities, outliers, seasonality, skewness, and heteroscedasticity. Developed here, Median Interannual Difference Adjusted for Skewness (MIDAS) is a variant of the Theil-Sen median trend estimator, for which the ordinary version is the median of slopes vij = (xj-xi)/(tj-ti) computed between all data pairs i > j. For normally distributed data, Theil-Sen and least squares trend estimates are statistically identical, but unlike least squares, Theil-Sen is resistant to undetected data problems. To mitigate both seasonality and step discontinuities, MIDAS selects data pairs separated by 1 year. This condition is relaxed for time series with gaps so that all data are used. Slopes from data pairs spanning a step function produce one-sided outliers that can bias the median. To reduce bias, MIDAS removes outliers and recomputes the median. MIDAS also computes a robust and realistic estimate of trend uncertainty. Statistical tests using GPS data in the rigid North American plate interior show ±0.23 mm/yr root-mean-square (RMS) accuracy in horizontal velocity. In blind tests using synthetic data, MIDAS velocities have an RMS accuracy of ±0.33 mm/yr horizontal, ±1.1 mm/yr up, with a 5th percentile range smaller than all 20 automatic estimators tested. Considering its general nature, MIDAS has the potential for broader application in the geosciences.
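A simplified sketch of the MIDAS idea (median of slopes over data pairs separated by about one year, with one pass of outlier trimming) is given below; it is not the authors' implementation and omits details such as gap handling and the trend uncertainty estimate. The synthetic series and its parameters are invented for illustration.

```python
import numpy as np

def midas_like_trend(t, x, pair_sep=1.0, tol=0.01):
    """Simplified MIDAS-style trend estimate: median of slopes from data pairs
    separated by ~pair_sep years, with one pass of outlier trimming.
    Illustrative only, not the authors' implementation."""
    t, x = np.asarray(t, float), np.asarray(x, float)
    slopes = []
    for i in range(len(t)):
        j = np.argmin(np.abs(t - (t[i] + pair_sep)))    # partner ~1 year later
        if j > i and abs(t[j] - t[i] - pair_sep) < tol:
            slopes.append((x[j] - x[i]) / (t[j] - t[i]))
    slopes = np.asarray(slopes)
    med = np.median(slopes)
    mad = 1.4826 * np.median(np.abs(slopes - med))      # robust spread estimate
    trimmed = slopes[np.abs(slopes - med) < 2.0 * mad]  # drop one-sided outliers
    return np.median(trimmed)

# synthetic daily series: 3 mm/yr trend + annual cycle + noise + a 10 mm step
rng = np.random.default_rng(0)
t = np.arange(0.0, 8.0, 1.0 / 365.25)
x = 3.0 * t + 2.0 * np.sin(2.0 * np.pi * t) + rng.normal(0.0, 1.0, t.size)
x[t > 4.0] += 10.0
print(f"estimated trend: {midas_like_trend(t, x):.2f} mm/yr")   # close to 3
```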
Testing of the anemometer circuit: Data report
NASA Technical Reports Server (NTRS)
Moen, Michael J.
1992-01-01
The following text discusses results from the electronic step testing and the beginning of velocity step testing in the shock tube. It should be kept in mind that frequency response is always measured as the time from the beginning of the event to the minimum (positive inflection) of the 'bucket' that immediately follows the response. This report is not a complete account of the results from square wave testing. Some data is still in the process of being analyzed and efforts are being made to fit the data to both Freymuth's third order theory and modelled responses from SPICE circuit simulation software.
NASA Technical Reports Server (NTRS)
Nemeth, Z. N.
1974-01-01
Two coatings for a Rayleigh step thrust bearing were tested when coasting down and stopping under self-acting operation in air. The thrust bearing had an outside diameter of 8.9 cm (3.5 in.), an inside diameter of 5.4 cm (2.1 in.), and nine sectors. The load was 73 N (16.4 lbf). The load pressure was 19.1 kilonewtons per square meter (2.77 lbf per square inch) on the total thrust bearing area. The chromium oxide coating was good for 150 stops without bearing deterioration, whereas the molybdenum disulfide coating was good for only four stops before bearing deterioration; the molybdenum disulfide coated bearing failed after nine stops.
Leizerowitz, Gil; Katz-Leurer, Michal
2017-01-01
To assess feasibility, test-retest reliability and validity of the Four Square Step Test (FSST) in typically developed children (TD), and children with cerebral palsy (CP) and acquired brain injury (ABI). 30 TD children, 20 with CP and 12 with ABI participated in the study. The FSST while sitting and standing, the Timed Up and Go (TUG) and the balance subtest of the Bruininks-Oseretsky Test (BOT-2) were assessed. Each child attempted the FSST twice within 1 week. Scores for the FSST were assigned according to the original criterion (two successes in four trials) and according to a more lenient criterion (one success in four trials). The original form of the FSST is not feasible for children with CP or ABI. In TD children the lenient version is feasible (93%) and has moderate stability (intraclass correlation, ICC = 0.723), with a significant, positive correlation with the TUG (rs = 0.56). In children with CP the lenient test is feasible (80%), stable (rs = 0.83) and negatively correlates with the BOT-2 (rs = -0.69). In children with ABI the test is less feasible (67%) and neither stable nor valid. The lenient form of the FSST is feasible, reliable and valid in TD children and children with CP.
Short-term Time Step Convergence in a Climate Model
Wan, Hui; Rasch, Philip J.; Taylor, Mark; ...
2015-02-11
A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4 in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides a clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
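The convergence rate quoted above is, in effect, the slope of log(RMS difference) versus log(time step). A minimal sketch of that estimate, using made-up RMS values rather than model output, might look like this:

```python
import numpy as np

# Hypothetical RMS temperature differences (K) against a 1 s reference run,
# for time steps of 1800, 900, 450, 225 s.
dt = np.array([1800.0, 900.0, 450.0, 225.0])
rms = np.array([0.40, 0.30, 0.23, 0.17])      # invented values for illustration

# observed convergence order = slope of log(error) vs. log(dt)
order, _ = np.polyfit(np.log(dt), np.log(rms), 1)
print(f"estimated convergence order ~ {order:.2f}")   # ~0.4 for these numbers
```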
Analysis of stability for stochastic delay integro-differential equations.
Zhang, Yu; Li, Longsuo
2018-01-01
In this paper, we are concerned with the stability of numerical methods applied to stochastic delay integro-differential equations. For linear stochastic delay integro-differential equations, it is shown that the split-step backward Euler method preserves mean-square stability without any restriction on the step size, while the Euler-Maruyama method reproduces mean-square stability only under a step-size constraint. We also confirm the mean-square stability of the split-step backward Euler method for nonlinear stochastic delay integro-differential equations. The numerical experiments further verify the theoretical results.
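The sketch below is only loosely related to the paper: it probes mean-square stability numerically for the explicit Euler-Maruyama method on a simplified linear stochastic delay equation (the integral term is omitted and the split-step backward Euler method is not implemented), with hypothetical coefficients.

```python
import numpy as np

def em_mean_square(a=-2.0, b=0.5, sigma=0.5, tau=1.0, h=0.5, T=50.0, paths=2000):
    """Euler-Maruyama for dX = (a*X(t) + b*X(t-tau)) dt + sigma*X(t) dW,
    a simplified linear delay test equation.  Returns an estimate of E[X(T)^2];
    a value decaying toward zero indicates mean-square stability at step h."""
    rng = np.random.default_rng(0)
    n_delay = int(round(tau / h))
    n_steps = int(round(T / h))
    X = np.ones((paths, n_delay + n_steps + 1))     # history X(t) = 1 for t <= 0
    for k in range(n_steps):
        i = n_delay + k
        dW = rng.normal(0.0, np.sqrt(h), paths)
        X[:, i + 1] = (X[:, i]
                       + (a * X[:, i] + b * X[:, i - n_delay]) * h
                       + sigma * X[:, i] * dW)
    return np.mean(X[:, -1] ** 2)

for h in (0.5, 0.25, 0.05):
    print(h, em_mean_square(h=h))
```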
Violato, Claudio; Gao, Hong; O'Brien, Mary Claire; Grier, David; Shen, E
2018-05-01
The distinction between basic sciences and clinical knowledge, which has led to a theoretical debate on how medical expertise is developed, has implications for medical school and lifelong medical education. This longitudinal, population based observational study was conducted to test the fit of three theories of the development of medical expertise (knowledge encapsulation, independent influence, distinct domains) employing structural equation modelling. Data were collected from 548 physicians (292 men, 53.3%; 256 women, 46.7%; mean age = 24.2 years on admission) who had graduated from medical school in 2009-2014. They included (1) admissions data of undergraduate grade point average and Medical College Admission Test sub-test scores, (2) course performance data from years 1, 2, and 3 of medical school, and (3) performance on the NBME exams (i.e., Step 1, Step 2 CK, and Step 3). Statistical fit indices (Goodness of Fit Index, GFI; standardized root mean squared residual, SRMR; root mean squared error of approximation, RMSEA) and comparative fit [Formula: see text] of the three theories of cognitive development of medical expertise were used to assess model fit. There is support for the knowledge encapsulation three-factor model of clinical competency (GFI = 0.973, SRMR = 0.043, RMSEA = 0.063), which had superior fit indices to both the independent influence and distinct domains theories ([Formula: see text] vs. [Formula: see text]).
Measurement invariance via multigroup SEM: Issues and solutions with chi-square-difference tests.
Yuan, Ke-Hai; Chan, Wai
2016-09-01
Multigroup structural equation modeling (SEM) plays a key role in studying measurement invariance and in group comparison. When population covariance matrices are deemed not equal across groups, the next step to substantiate measurement invariance is to see whether the sample covariance matrices in all the groups can be adequately fitted by the same factor model, called configural invariance. After configural invariance is established, cross-group equalities of factor loadings, error variances, and factor variances-covariances are then examined in sequence. With mean structures, cross-group equalities of intercepts and factor means are also examined. The established rule is that if the statistic at the current model is not significant at the level of .05, one then moves on to testing the next more restricted model using a chi-square-difference statistic. This article argues that such an established rule is unable to control either Type I or Type II errors. Analysis, an example, and Monte Carlo results show why and how chi-square-difference tests are easily misused. The fundamental issue is that chi-square-difference tests are developed under the assumption that the base model is sufficiently close to the population, and a nonsignificant chi-square statistic tells little about how good the model is. To overcome this issue, this article further proposes that null hypothesis testing in multigroup SEM be replaced by equivalence testing, which allows researchers to effectively control the size of misspecification before moving on to testing a more restricted model. R code is also provided to facilitate the applications of equivalence testing for multigroup SEM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
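For reference, the conventional chi-square-difference computation that the article critiques can be sketched as follows; the fit statistics and degrees of freedom for the base and restricted models are hypothetical.

```python
from scipy.stats import chi2

# Hypothetical nested-model comparison: configural model vs. equal loadings
chi2_base, df_base = 84.3, 48
chi2_restricted, df_restricted = 97.1, 54

delta_chi2 = chi2_restricted - chi2_base
delta_df = df_restricted - df_base
p = chi2.sf(delta_chi2, delta_df)      # conventional chi-square-difference test
print(f"delta chi-square = {delta_chi2:.1f}, delta df = {delta_df}, p = {p:.3f}")
```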
Array automated assembly, phase 2
NASA Technical Reports Server (NTRS)
Taylor, W. E.
1978-01-01
An analysis was made of cost tradeoffs for shaping modified square wafers from cylindrical crystals. Tests were conducted of the effectiveness of texture etching for removal of surface damage on sawed wafers. A single step texturing etch appeared adequate for removal of surface damage on wafers cut with multiple blade reciprocating slurry saws.
Gaussian process regression for geometry optimization
NASA Astrophysics Data System (ADS)
Denzel, Alexander; Kästner, Johannes
2018-03-01
We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a two times differentiable form of the Matérn kernel and the squared exponential kernel. The Matérn kernel performs much better. We give a detailed description of the optimization procedures. These include overshooting the step resulting from GPR in order to obtain a higher degree of interpolation vs. extrapolation. In a benchmark against the Limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
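A minimal sketch of the kernel comparison, assuming scikit-learn's GPR implementation and a toy one-dimensional surface in place of a molecular potential energy surface; the actual optimizer also uses gradients and an overshooting step, which are omitted here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, RBF

# Toy 1-D "potential energy surface" (hypothetical, not a molecular system)
def energy(x):
    return 0.1 * x**4 - x**2 + 0.3 * x

X_train = np.linspace(-3.0, 3.0, 7).reshape(-1, 1)     # sampled geometries
y_train = energy(X_train).ravel()
x_grid = np.linspace(-3.0, 3.0, 601).reshape(-1, 1)

for kernel in (Matern(nu=2.5), RBF()):                 # nu=2.5: twice-differentiable Matern
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)
    x_min = x_grid[np.argmin(gpr.predict(x_grid)), 0]  # surrogate-predicted minimum
    print(f"{type(kernel).__name__}: predicted minimum near x = {x_min:.2f}")
```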
Shape calibration of a conformal ultrasound therapy array.
McGough, R J; Cindric, D; Samulski, T V
2001-03-01
A conformal ultrasound phased array prototype with 96 elements was recently calibrated for electronic steering and focusing in a water tank. The procedure for calibrating the shape of this 2D therapy array consists of two steps. First, a least squares triangulation algorithm determines the element coordinates from a 21 x 21 grid of time delays. The triangulation algorithm also requires temperature measurements to compensate for variations in the speed of sound. Second, a Rayleigh-Sommerfeld formulation of the acoustic radiation integral is aligned to a second grid of measured pressure amplitudes in a least squares sense. This shape calibration procedure, which is applicable to a wide variety of ultrasound phased arrays, was tested on a square array panel consisting of 7- x 7-mm elements operating at 617 kHz. The simulated fields generated by an array of 96 equivalent elements are consistent with the measured data, even in the fine structure away from the primary focus and sidelobes. These two calibration steps are sufficient for the simulation model to predict successfully the pressure field generated by this conformal ultrasound phased array prototype.
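The first calibration step can be sketched as a nonlinear least-squares position fit from time-of-flight data; the geometry, sound speed and measurement grid below are hypothetical, and the temperature compensation described in the abstract is replaced by a constant sound speed.

```python
import numpy as np
from scipy.optimize import least_squares

c = 1482.0                                    # assumed speed of sound in water, m/s
# hypothetical 21 x 21 grid of measurement points 10 cm in front of the array
grid = np.array([[x, y, 0.10] for x in np.linspace(-0.02, 0.02, 21)
                              for y in np.linspace(-0.02, 0.02, 21)])
element_true = np.array([0.004, -0.003, 0.0]) # "unknown" element position
tof = np.linalg.norm(grid - element_true, axis=1) / c   # synthetic time delays

def residuals(e):
    # difference between predicted and measured time of flight for position e
    return np.linalg.norm(grid - e, axis=1) / c - tof

fit = least_squares(residuals, x0=np.zeros(3))
print("recovered element position (mm):", 1e3 * fit.x)
```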
Promoting College Students' Problem Understanding Using Schema-Emphasizing Worked Examples
ERIC Educational Resources Information Center
Yan, Jie; Lavigne, Nancy C.
2014-01-01
Statistics learners often bypass the critical step of understanding a problem before executing solutions. Worked-out examples that identify problem information (e.g., data type, number of groups, purpose of analysis) key to determining a solution (e.g., "t" test, chi-square, correlation) can address this concern. The authors examined the…
Guix-Comellas, Eva Maria; Rozas-Quesada, Librada; Velasco-Arnaiz, Eneritz; Ferres-Canals, Ariadna; Estrada-Masllorens, Joan Maria; Force-Sanmartín, Enriqueta; Noguera-Julian, Antoni
2018-05-03
To evaluate the association of a new nursing intervention on the adherence to antituberculosis treatment in a pediatric cohort (<18 years). Tuberculosis remains a public health problem worldwide. The risk of developing tuberculosis after primary infection and its severity are higher in children. Proper adherence to antituberculosis treatment is critical for disease control. Non-randomized controlled trial; Phase 1, retrospective (2011-2013), compared with Phase 2, prospective with intervention (2015-2016), in a referral center for pediatric tuberculosis in Spain (NCT03230409). A total of 359 patients who received antituberculosis drugs after close contact with a smear-positive patient (primary chemoprophylaxis) or were treated for latent tuberculosis infection or tuberculosis disease were included, 261 in Phase 1 and 98 in Phase 2. In Phase 2, a new nurse-led intervention was implemented in all patients and included two educational steps (written information in the child's native language and follow-up telephone calls) and two monitoring steps (Eidus-Hamilton test and follow-up questionnaire) that were exclusively carried out by nurses. Adherence to antituberculosis treatment increased from 74.7% in Phase 1 to 87.8% in Phase 2 (p = 0.014; chi-square test), after the implementation of the nurse-led intervention. In Phase 2, non-adherence was only associated with being born abroad (28.6% versus 7.8%; p = 0.019; chi-square test) and with foreign origin families (27.3% versus 0%; p < 0.0001; chi-square test). The nurse-led intervention was associated with an increase in adherence to antituberculosis treatment. Immigrant-related variables remained major risk factors for sub-optimal adherence in a low-endemic setting. This article is protected by copyright. All rights reserved.
STEP: Satellite Test of the Equivalence Principle. Report on the phase A study
NASA Technical Reports Server (NTRS)
Blaser, J. P.; Bye, M.; Cavallo, G.; Damour, T.; Everitt, C. W. F.; Hedin, A.; Hellings, R. W.; Jafry, Y.; Laurance, R.; Lee, M.
1993-01-01
During Phase A, the STEP Study Team identified three types of experiments that can be accommodated on the STEP satellite within the mission constraints and whose performance is orders of magnitude better than any present or planned future experiment of the same kind on the ground. The scientific objectives of the STEP mission are to: test the Equivalence Principle to one part in 10^17, six orders of magnitude better than has been achieved on the ground; search for a new interaction between quantum-mechanical spin and ordinary matter with a sensitivity of the mass-spin coupling constant g_p g_s = 6 x 10^-34 at a range of 1 mm, which represents a seven order-of-magnitude improvement over comparable ground-based measurements; and determine the constant of gravity G with a precision of one part in 10^6 and to test the validity of the inverse square law with the same precision, both two orders of magnitude better than has been achieved on the ground.
Speckle evolution with multiple steps of least-squares phase removal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Mingzhou; Dainty, Chris; Roux, Filippus S.
2011-08-15
We study numerically the evolution of speckle fields due to the annihilation of optical vortices after the least-squares phase has been removed. A process with multiple steps of least-squares phase removal is carried out to minimize both vortex density and scintillation index. Statistical results show that almost all the optical vortices can be removed from a speckle field, which finally decays into a quasiplane wave after such an iterative process.
On the Possibility of Ill-Conditioned Covariance Matrices in the First-Order Two-Step Estimator
NASA Technical Reports Server (NTRS)
Garrison, James L.; Axelrod, Penina; Kasdin, N. Jeremy
1997-01-01
The first-order two-step nonlinear estimator, when applied to a problem of orbital navigation, is found to occasionally produce first step covariance matrices with very low eigenvalues at certain trajectory points. This anomaly is the result of the linear approximation to the first step covariance propagation. The study of this anomaly begins with expressing the propagation of the first and second step covariance matrices in terms of a single matrix. This matrix is shown to have a rank equal to the difference between the number of first step states and the number of second step states. Furthermore, under some simplifying assumptions, it is found that the basis of the column space of this matrix remains fixed once the filter has removed the large initial state error. A test matrix containing the basis of this column space and the partial derivative matrix relating first and second step states is derived. This square test matrix, which has dimensions equal to the number of first step states, numerically drops rank at the same locations that the first step covariance does. It is formulated in terms of a set of constant vectors (the basis) and a matrix which can be computed from a reference trajectory (the partial derivative matrix). A simple example problem involving dynamics which are described by two states and a range measurement illustrate the cause of this anomaly and the application of the aforementioned numerical test in more detail.
Do MCAT scores predict USMLE scores? An analysis on 5 years of medical student data.
Gauer, Jacqueline L; Wolff, Josephine M; Jackson, J Brooks
2016-01-01
The purpose of this study was to determine the associations and predictive values of Medical College Admission Test (MCAT) component and composite scores prior to 2015 with U.S. Medical Licensure Exam (USMLE) Step 1 and Step 2 Clinical Knowledge (CK) scores, with a focus on whether students scoring low on the MCAT were particularly likely to continue to score low on the USMLE exams. Multiple linear regression, correlation, and chi-square analyses were performed to determine the relationship between MCAT component and composite scores and USMLE Step 1 and Step 2 CK scores from five graduating classes (2011-2015) at the University of Minnesota Medical School (N = 1,065). The multiple linear regression analyses were both significant (p < 0.001). The three MCAT component scores together explained 17.7% of the variance in Step 1 scores (p < 0.001) and 12.0% of the variance in Step 2 CK scores (p < 0.001). In the chi-square analyses, significant, albeit weak associations were observed between almost all MCAT component scores and USMLE scores (Cramer's V ranged from 0.05 to 0.24). Each of the MCAT component scores was significantly associated with USMLE Step 1 and Step 2 CK scores, although the effect size was small. Being in the top or bottom scoring range of the MCAT exam was predictive of being in the top or bottom scoring range of the USMLE exams, although the strengths of the associations were weak to moderate. These results indicate that MCAT scores are predictive of student performance on the USMLE exams, but, given the small effect sizes, should be considered as part of the holistic view of the student.
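For readers reproducing this kind of analysis, a minimal sketch of a chi-square test with Cramér's V on a hypothetical MCAT-band by USMLE-band contingency table (not the study's data) follows:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 3x3 contingency table: MCAT score band (rows: low/mid/high)
# vs. USMLE Step 1 score band (columns: low/mid/high).
table = np.array([[40, 30, 15],
                  [35, 60, 40],
                  [15, 45, 75]])

chi2_stat, p, dof, _ = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2_stat / (n * (min(table.shape) - 1)))   # effect size
print(f"chi-square = {chi2_stat:.1f}, p = {p:.4f}, Cramer's V = {cramers_v:.2f}")
```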
A clinical test of stepping and change of direction to identify multiple falling older adults.
Dite, Wayne; Temple, Viviene A
2002-11-01
To establish the reliability and validity of a new clinical test of dynamic standing balance, the Four Square Step Test (FSST), to evaluate its sensitivity, specificity, and predictive value in identifying subjects who fall, and to compare it with 3 established balance and mobility tests. A 3-group comparison performed by using 3 validated tests and 1 new test. A rehabilitation center and university medical school in Australia. Eighty-one community-dwelling adults over the age of 65 years. Subjects were age- and gender-matched to form 3 groups: multiple fallers, nonmultiple fallers, and healthy comparisons. Not applicable. Time to complete the FSST and Timed Up and Go test and the number of steps to complete the Step Test and Functional Reach Test distance. High reliability was found for interrater (n=30, intraclass correlation coefficient [ICC]=.99) and retest reliability (n=20, ICC=.98). Evidence for validity was found through correlation with other existing balance tests. Validity was supported, with the FSST showing significantly better performance scores (P<.01) for each of the healthier and less impaired groups. The FSST also revealed a sensitivity of 85%, a specificity of 88% to 100%, and a positive predictive value of 86%. As a clinical test, the FSST is reliable, valid, easy to score, quick to administer, requires little space, and needs no special equipment. It is unique in that it involves stepping over low objects (2.5cm) and movement in 4 directions. The FSST had higher combined sensitivity and specificity for identifying differences between groups in the selected sample population of older adults than the 3 tests with which it was compared. Copyright 2002 by the American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation
Vinholes, Daniele Botelho; Assunção, Maria Cecília Formoso; Neutzling, Marilda Borges
2009-04-01
This study aimed to measure frequency of healthy eating habits and associated factors using the 10 Steps to Healthy Eating score proposed by the Ministry of Health in the adult population in Pelotas, Rio Grande do Sul State, Brazil. A cross-sectional population-based survey was conducted on a cluster sample of 3,136 adult residents in Pelotas. The frequency of each step to healthy eating was collected with a pre-coded questionnaire. Data analysis consisted of descriptive analysis, followed by bivariate analysis using the chi-square test. Only 1.1% of the population followed all the recommended steps. The average number of steps was six. Step four, salt intake, showed the highest frequency, while step nine, physical activity, showed the lowest. Knowledge of the population's eating habits and their distribution according to demographic and socioeconomic variables is important to guide local and national strategies to promote healthy eating habits and thus improve quality of life.
Wing Shape Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2015-01-01
A new two-step theory is investigated for predicting the deflection and slope of an entire structure using strain measurements at discrete locations. In the first step, the measured strain is fitted using a piecewise least-squares curve fitting method together with the cubic spline technique. These fitted strains are integrated twice to obtain deflection data along the fibers. In the second step, the computed deflections along the fibers are combined with a finite element model of the structure in order to extrapolate the deflection and slope of the entire structure through the use of the System Equivalent Reduction and Expansion Process. The theory is first validated on a computational model, a cantilevered rectangular wing. It is then applied to test data from a cantilevered swept wing model.
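A sketch of the first step only, assuming simple cantilever beam kinematics (surface strain proportional to curvature) and synthetic strain data; the piecewise least-squares fit is replaced here by a plain cubic spline, and the SEREP expansion to the full structure is not shown.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import cumulative_trapezoid

# Assumes a cantilever with surface strain eps(x) = c * w''(x), where c is the
# fiber offset from the neutral axis; geometry and "measurements" are hypothetical.
L, c = 1.0, 0.01                              # span (m), fiber offset (m)
x_meas = np.linspace(0.0, L, 9)               # strain-gauge stations
eps_meas = c * (L - x_meas)                   # synthetic strain of a tip-loaded beam

strain_fit = CubicSpline(x_meas, eps_meas)    # smooth fit to the measured strain
x = np.linspace(0.0, L, 201)
curvature = strain_fit(x) / c                 # w''(x)
slope = cumulative_trapezoid(curvature, x, initial=0.0)   # w'(x), w'(0) = 0
deflection = cumulative_trapezoid(slope, x, initial=0.0)  # w(x),  w(0) = 0
print(f"tip deflection: {deflection[-1]:.4f}  (analytic L^3/3 = {L**3 / 3:.4f})")
```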
Chirality in distorted square planar Pd(O,N)2 compounds.
Brunner, Henri; Bodensteiner, Michael; Tsuno, Takashi
2013-10-01
Salicylidenimine palladium(II) complexes trans-Pd(O,N)2 adopt step and bowl arrangements. A stereochemical analysis subdivides 52 compounds into 41 step and 11 bowl types. Step complexes with chiral N-substituents and all the bowl complexes induce chiral distortions in the square planar system, resulting in Δ/Λ configuration of the Pd(O,N)2 unit. In complexes with enantiomerically pure N-substituents ligand chirality entails a specific square chirality and only one diastereomer assembles in the lattice. Dimeric Pd(O,N)2 complexes with bridging N-substituents in trans-arrangement are inherently chiral. For dimers different chirality patterns for the Pd(O,N)2 square are observed. The crystals contain racemates of enantiomers. In complex two independent molecules form a tight pair. The (RC) configuration of the ligand induces the same Δ chirality in the Pd(O,N)2 units of both molecules with varying square chirality due to the different crystallographic location of the independent molecules. In complexes and atrop isomerism induces specific configurations in the Pd(O,N)2 bowl systems. The square chirality is largest for complex [(Diop)Rh(PPh3)Cl], a catalyst for enantioselective hydrogenation. In the lattice of two diastereomers with the same (RC,RC) configuration in the ligand Diop but opposite Δ and Λ square configurations co-crystallize, a rare phenomenon in stereochemistry. © 2013 Wiley Periodicals, Inc.
Stock, Roland; Mork, Paul Jarle
2009-09-01
To investigate the effect of two weeks of intensive exercise on leg function in chronic stroke patients and to evaluate the feasibility of an intensive exercise programme in a group setting. Pilot study with one-group pre-test post-test design with two pre-tests and one-year follow-up. Inpatient rehabilitation hospital. Twelve hemiparetic patients completed the intervention. Ten patients participated at one-year follow-up. Six hours of daily intensive exercise for two weeks with focus on weight-shifting towards the affected side and increased use of the affected extremity during functional activities. An insole with nubs in the shoe of the non-paretic limb was used to reinforce weight-shift toward the affected side. Timed Up and Go, Four Square Step Test, gait velocity, gait symmetry and muscle strength in knee and ankle muscles. Maximal gait velocity (P = 0.002) and performance time (seconds) on Timed Up and Go (mean, SD; 12.2, 3.8 vs. 9.4, 3.2) and Four Square Step Test improved from pre- to post-test (P = 0.005). Improvements remained significant at follow-up. Preferred gait velocity and gait symmetry remained unchanged. Knee extensor (P ≤ 0.009) and flexor (P ≤ 0.001) strength increased bilaterally from pre- to post-test but only knee flexor strength remained significant at follow-up. Ankle dorsi flexor (P = 0.02) and plantar flexor (P < 0.001) strength increased on paretic side only (not tested at follow-up). Intensive exercise for lower extremity is feasible in a group setting and was effective in improving ambulatory function, maximal gait velocity and muscle strength in chronic stroke patients. Most improvements persisted at the one-year follow-up.
Kloos, Anne D; Fritz, Nora E; Kostyk, Sandra K; Young, Gregory S; Kegelmeyer, Deb A
2014-09-01
Individuals with Huntington's disease (HD) experience balance and gait problems that lead to falls. Clinicians currently have very little information about the reliability and validity of outcome measures to determine the efficacy of interventions that aim to reduce balance and gait impairments in HD. This study examined the reliability and concurrent validity of spatiotemporal gait measures, the Tinetti Mobility Test (TMT), Four Square Step Test (FSST), and Activities-specific Balance Confidence (ABC) Scale in individuals with HD. Participants with HD [n = 20; mean age ± SD=50.9 ± 13.7; 7 male] were tested on spatiotemporal gait measures and the TMT, FSST, and ABC Scale before and after a six week period to determine test-retest reliability and minimal detectable change (MDC) values. Linear relationships between gait and clinical measures were estimated using Pearson's correlation coefficients. Spatiotemporal gait measures, the TMT total and the FSST showed good to excellent test-retest reliability (ICC > 0.75). MDC values were 0.30 m/s and 0.17 m/s for velocity in forward and backward walking respectively, four points for the TMT, and 3s for the FSST. The TMT and FSST were highly correlated with most spatiotemporal measures. The ABC Scale demonstrated lower reliability and less concurrent validity than other measures. The high test-retest reliability over a six week period and concurrent validity between the TMT, FSST, and spatiotemporal gait measures suggest that the TMT and FSST may be useful outcome measures for future intervention studies in ambulatory individuals with HD. Copyright © 2014 Elsevier B.V. All rights reserved.
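The MDC values reported above are conventionally derived from the test-retest ICC and the sample standard deviation; a small sketch with hypothetical inputs (not the study's data):

```python
import numpy as np

def minimal_detectable_change(sd, icc, z=1.96):
    """MDC at the 95% level from a test-retest ICC and the sample SD:
    SEM = SD * sqrt(1 - ICC);  MDC95 = z * sqrt(2) * SEM."""
    sem = sd * np.sqrt(1.0 - icc)
    return z * np.sqrt(2.0) * sem

# hypothetical values for a timed test: SD = 4.0 s, ICC = 0.90
print(f"MDC95 = {minimal_detectable_change(4.0, 0.90):.2f} s")   # ~3.5 s
```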
Weighted Least Squares Fitting Using Ordinary Least Squares Algorithms.
ERIC Educational Resources Information Center
Kiers, Henk A. L.
1997-01-01
A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. The approach consists of iteratively performing steps of existing algorithms for ordinary least squares fitting of the same model and is based on maximizing a function that majorizes WLS loss function. (Author/SLD)
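A minimal sketch of this idea for linear regression, assuming the weights are scaled into (0, 1] and using the usual working-data majorization step; it is illustrative only, not Kiers's algorithm verbatim.

```python
import numpy as np

def wls_via_ols(X, y, w, iters=200):
    """Weighted least squares fitted by repeated ordinary least squares:
    at each step regress the working data z = w*y + (1 - w)*yhat on X,
    which majorizes the WLS loss when the weights are scaled to max 1."""
    w = np.asarray(w, float) / np.max(w)          # scale weights into (0, 1]
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS start
    for _ in range(iters):
        yhat = X @ beta
        z = w * y + (1.0 - w) * yhat              # working data
        beta = np.linalg.lstsq(X, z, rcond=None)[0]
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
w = rng.uniform(0.2, 1.0, 50)
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=1.0 / np.sqrt(w))  # heteroscedastic noise

beta_direct = np.linalg.solve(X.T @ np.diag(w) @ X, X.T @ np.diag(w) @ y)  # direct WLS
print(wls_via_ols(X, y, w), beta_direct)          # the two estimates agree
```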
Study on constant-step stress accelerated life tests in white organic light-emitting diodes.
Zhang, J P; Liu, C; Chen, X; Cheng, G L; Zhou, A X
2014-11-01
In order to obtain reliability information for a white organic light-emitting diode (OLED), two constant-stress tests and one step-stress test were conducted at increased working currents. The Weibull function was applied to describe the OLED life distribution, and the maximum likelihood estimation (MLE) and its iterative flow chart were used to calculate the shape and scale parameters. Furthermore, the accelerated life equation was determined using the least squares method, a Kolmogorov-Smirnov test was performed to assess whether the white OLED life follows a Weibull distribution, and self-developed software was used to predict the average and the median lifetimes of the OLED. The numerical results indicate that the white OLED life conforms to a Weibull distribution, and that the accelerated life equation completely satisfies the inverse power law. The estimated life of a white OLED may provide significant guidelines for its manufacturers and customers. Copyright © 2014 John Wiley & Sons, Ltd.
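A simplified sketch of the constant-stress part of such an analysis (Weibull fits at two hypothetical drive currents plus a log-log least-squares fit of the inverse power law); the step-stress likelihood and the Kolmogorov-Smirnov check are not reproduced, and all failure times are invented.

```python
import numpy as np
from scipy.stats import weibull_min

# hypothetical failure times (hours) at two constant drive currents (mA)
lives = {60: np.array([520., 640., 700., 810., 905., 1010.]),
         90: np.array([180., 230., 260., 300., 340., 410.])}

currents, char_lives = sorted(lives), []
for I in currents:
    shape, loc, scale = weibull_min.fit(lives[I], floc=0)   # Weibull MLE, location fixed at 0
    char_lives.append(scale)
    print(f"{I} mA: shape = {shape:.2f}, characteristic life = {scale:.0f} h")

# inverse power law  life = A * I**(-n): least-squares fit in log-log space
slope, intercept = np.polyfit(np.log(currents), np.log(char_lives), 1)
print(f"inverse power law exponent n = {-slope:.2f}")
```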
Hybrid least squares multivariate spectral analysis methods
Haaland, David M.
2002-01-01
A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following estimation or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The "hybrid" method herein means a combination of an initial classical least squares analysis calibration step with subsequent analysis by an inverse multivariate analysis method. A "spectral shape" herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The "shape" can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
Speeding Fermat's factoring method
NASA Astrophysics Data System (ADS)
McKee, James
A factoring method is presented which, heuristically, splits composite n in O(n^{1/4+epsilon}) steps. There are two ideas: an integer approximation to sqrt(q/p) provides an O(n^{1/2+epsilon}) algorithm in which n is represented as the difference of two rational squares; observing that if a prime m divides a square, then m^2 divides that square, a heuristic speed-up to O(n^{1/4+epsilon}) steps is achieved. The method is well-suited for use with small computers: the storage required is negligible, and one never needs to work with numbers larger than n itself.
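For contrast, the baseline O(n^{1/2+epsilon}) Fermat factorization looks as follows; the paper's rational-square-root approximation and the O(n^{1/4+epsilon}) speed-up are not implemented here.

```python
from math import isqrt

def fermat_factor(n):
    """Plain Fermat factorization for odd composite n: search for a with
    a^2 - n a perfect square b^2, then n = (a - b)(a + b)."""
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

print(fermat_factor(5959))      # (59, 101)
```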
Statistical Modeling of Robotic Random Walks on Different Terrain
NASA Astrophysics Data System (ADS)
Naylor, Austin; Kinnaman, Laura
Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
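A minimal simulation sketch of the mean square displacement comparison between an uncorrelated and a correlated random walk on level ground; terrain effects are omitted, and the step size and turning-angle model are assumptions for illustration.

```python
import numpy as np

def mean_square_displacement(step_len, n_walks=500, n_steps=200, corr=0.0, seed=0):
    """Simulate 2-D walks with uniform step size.  corr=0 gives an uncorrelated
    RW (turning angles uniform on the circle); larger corr concentrates turning
    angles around zero, giving a correlated random walk (CRW)."""
    rng = np.random.default_rng(seed)
    if corr == 0.0:
        turns = rng.uniform(-np.pi, np.pi, (n_walks, n_steps))
    else:
        turns = rng.normal(0.0, (1.0 - corr) * np.pi, (n_walks, n_steps))
    headings = np.cumsum(turns, axis=1)
    x = step_len * np.cumsum(np.cos(headings), axis=1)
    y = step_len * np.cumsum(np.sin(headings), axis=1)
    return np.mean(x**2 + y**2, axis=0)          # MSD as a function of step number

msd_rw = mean_square_displacement(1.0, corr=0.0)
msd_crw = mean_square_displacement(1.0, corr=0.8)
print(msd_rw[-1], msd_crw[-1])   # the CRW spreads faster than the uncorrelated walk
```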
Silva, Cristianny Miranda E; Pellegrinelli, Ana Luiza Rodrigues; Pereira, Simone Cardoso Lisboa; Passos, Ieda Ribeiro; Santos, Luana Caroline Dos
2017-05-01
This article sought to evaluate educational practices in line with the "Ten Steps to Successful Breastfeeding" in a Human Milk Bank. It involved a retrospective study using sociodemographic data about the pregnancy and the baby, obtained from a nursing mothers care protocol (2009-2012). These data were associated with the steps of the "Ten Steps" related to educational practices. Descriptive analysis, the chi-square test and Poisson regression were performed. 12,283 mothers, with a median age of 29 (range 12-54) years, were evaluated. The guidelines about breastfeeding received during prenatal care (step 3) prevailed among mothers aged 30-39 years, and skin-to-skin contact (step 4) prevailed among oriented mothers. Breastfeeding training (step 5) predominated among those who breastfed exclusively. Higher prevalences of exclusive breastfeeding (step 6), breastfeeding on demand (step 8) and use of artificial nipples (step 9) were noted among infants whose mothers were oriented. These findings indicate the important role of health professionals in training mothers and children on breastfeeding, and in encouraging skin-to-skin contact, exclusive breastfeeding and breastfeeding on demand. The findings also indicated the need for improvement in order to reduce the use of artificial nipples and enhance exclusive breastfeeding.
Santos, Kennedy Maia Dos; Tsutsui, Mario Luiz da Silva; Galvão, Patrícia Paiva de Oliveira; Mazzucchetti, Lalucha; Rodrigues, Douglas; Gimeno, Suely Godoy Agostinho
2012-12-01
This study aimed to verify the existence of an association between degree of physical activity and presence of metabolic syndrome in the Khisêdjê indigenous group. The authors evaluated 170 individuals 20 years or older, based on demographic data, physical examination, and laboratory tests. The data were analyzed with the chi-square test (p < 0.05), crude and adjusted prevalence ratios (point and 95% confidence intervals), and Student's t-test. Satisfactory results were observed in relation to cardiorespiratory endurance, flexibility, bending of arms and trunk, and measurement of physical activity according to the number of steps/day. Prevalence of metabolic syndrome was 27.8% and was higher in women, the 39-49-year and ≥ 50-year age groups, and in individuals with lower performance on the cardiorespiratory endurance test, horizontal impulse, and number of steps/day. The results indicate the need for greater surveillance in the control and prevention of risk factors for metabolic syndrome.
Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches
NASA Astrophysics Data System (ADS)
Mohammed, E.; Wang, S.; Yu, J.
2017-05-01
Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid method of VSTWPP based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS), with the aim of reducing prediction errors. The predicted values are obtained through two sub-processes: 1) transform the time-series data of actual wind power into the power ratio, and then predict the power ratio; 2) use the predicted power ratio to predict the wind power. Besides, the proposed method can include two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The WPP is tested comparatively against an auto-regressive moving average (ARMA) model in terms of predicted values and errors. The validity of the proposed hybrid method is confirmed through error analysis using the probability density function (PDF), mean absolute percent error (MAPE) and mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual values and the predicted values for different prediction times and windows confirms that the MSP approach using the hybrid model is the most accurate when compared to the SSP approach and ARMA.
Wang, Yin; Zhao, Nan-jing; Liu, Wen-qing; Yu, Yang; Fang, Li; Meng, De-shuo; Hu, Li; Zhang, Da-hai; Ma, Min-jun; Xiao, Xue; Wang, Yu; Liu, Jian-guo
2015-02-01
In recent years, laser induced breakdown spectroscopy has developed rapidly. As a new material composition detection technology, laser induced breakdown spectroscopy can detect multiple elements simultaneously, quickly and simply, without any complex sample preparation, and can realize field, in-situ composition detection of the sample to be tested. This kind of technology is very promising in many fields. Separating, fitting and extracting spectral feature lines is very important in laser induced breakdown spectroscopy, as it is the cornerstone of spectral feature recognition and of subsequent research on the inversion of element concentrations. In order to realize effective separation, fitting and extraction of spectral feature lines, the original parameters for spectral line fitting before iteration were analyzed and determined. The spectral feature line of chromium (Cr I: 427.480 nm) in fly ash gathered from a coal-fired power station, which overlaps with another line (Fe I: 427.176 nm), was separated from it and extracted by using the damped least squares method. Based on Gauss-Newton iteration, the damped least squares method adds a damping factor to the step and adjusts the step length dynamically according to the feedback information after each iteration, in order to prevent the iteration from diverging and to make sure that it converges quickly. The damped least squares method helps to obtain better separation, fitting and extraction of spectral feature lines and gives more accurate intensity values for these lines. The spectral feature lines of chromium in samples containing different concentrations of chromium were separated and extracted, and the intensity values of the corresponding spectral lines were obtained using the damped least squares method and the least squares method separately. Calibration curves showing the relationship between spectral line intensity values and chromium concentrations in the different samples were plotted, and their respective linear correlations were compared. The experimental results showed that the linear correlation between the intensity values of the spectral feature lines and the concentrations of chromium, obtained by the damped least squares method, was better than that obtained by the least squares method. The damped least squares method is therefore stable, reliable and suitable for separating, fitting and extracting spectral feature lines in laser induced breakdown spectroscopy.
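A small sketch of the fitting step, assuming two overlapping Gaussian line profiles and scipy's Levenberg-Marquardt (damped least squares) solver; the wavelengths are taken from the abstract, everything else (amplitudes, widths, noise) is synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

# Two overlapping Gaussian lines standing in for Cr I 427.480 nm and Fe I 427.176 nm
def two_gaussians(p, x):
    a1, c1, s1, a2, c2, s2 = p
    return (a1 * np.exp(-0.5 * ((x - c1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - c2) / s2) ** 2))

rng = np.random.default_rng(2)
x = np.linspace(426.9, 427.8, 300)                        # wavelength axis, nm
true = np.array([1.0, 427.176, 0.06, 0.7, 427.480, 0.06]) # synthetic line parameters
spectrum = two_gaussians(true, x) + rng.normal(0, 0.02, x.size)

p0 = np.array([0.8, 427.15, 0.05, 0.8, 427.50, 0.05])     # rough initial guess
fit = least_squares(lambda p: two_gaussians(p, x) - spectrum, p0, method="lm")
print("fitted peak heights:", fit.x[0], fit.x[3])          # separated line intensities
```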
Rosenblum, Uri; Melzer, Itshak
2017-01-01
About 90% of people with multiple sclerosis (PwMS) have gait instability and 50% fall. Reliable and clinically feasible methods of gait instability assessment are needed. The study investigated the reliability and validity of the Narrow Path Walking Test (NPWT) under single-task (ST) and dual-task (DT) conditions for PwMS. Thirty PwMS performed the NPWT on 2 different occasions, a week apart. Number of Steps, Trial Time, Trial Velocity, Step Length, Number of Step Errors, Number of Cognitive Task Errors, and Number of Balance Losses were measured. Intraclass correlation coefficients (ICC2,1) were calculated from the average values of NPWT parameters. Absolute reliability was quantified from standard error of measurement (SEM) and smallest real difference (SRD). Concurrent validity of NPWT with Functional Reach Test, Four Square Step Test (FSST), 12-item Multiple Sclerosis Walking Scale (MSWS-12), and 2 Minute Walking Test (2MWT) was determined using partial correlations. Intraclass correlation coefficients (ICCs) for most NPWT parameters during ST and DT ranged from 0.46-0.94 and 0.55-0.95, respectively. The highest relative reliability was found for Number of Step Errors (ICC = 0.94 and 0.93, for ST and DT, respectively) and Trial Velocity (ICC = 0.83 and 0.86, for ST and DT, respectively). Absolute reliability was high for Number of Step Errors in ST (SEM % = 19.53%) and DT (SEM % = 18.14%) and low for Trial Velocity in ST (SEM % = 6.88%) and DT (SEM % = 7.29%). Significant correlations for Number of Step Errors and Trial Velocity were found with FSST, MSWS-12, and 2MWT. In PwMS performing the NPWT, Number of Step Errors and Trial Velocity were highly reliable parameters. Based on correlations with other measures of gait instability, Number of Step Errors was the most valid parameter of dynamic balance under the conditions of our test. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, available at: http://links.lww.com/JNPT/A159).
Economou, Anastasios; Voulgaropoulos, Anastasios
2003-01-01
The development of a dedicated automated sequential-injection analysis apparatus for anodic stripping voltammetry (ASV) and adsorptive stripping voltammetry (AdSV) is reported. The instrument comprised a peristaltic pump, a multiposition selector valve and a home-made potentiostat and used a mercury-film electrode as the working electrode in a thin-layer electrochemical detector. Programming of the experimental sequence was performed in LabVIEW 5.1. The sequence of operations included formation of the mercury film, electrolytic or adsorptive accumulation of the analyte on the electrode surface, recording of the voltammetric current-potential response, and cleaning of the electrode. The stripping step was carried out by applying a square-wave (SW) potential-time excitation signal to the working electrode. The instrument allowed unattended operation since multiple-step sequences could be readily implemented through the purpose-built software. The utility of the analyser was tested for the determination of copper(II), cadmium(II), lead(II) and zinc(II) by SWASV and of nickel(II), cobalt(II) and uranium(VI) by SWAdSV.
Abe, Takumi; Tsuji, Taishi; Kitano, Naruki; Muraki, Toshiaki; Hotta, Kazushi; Okura, Tomohiro
2015-01-01
The purpose of this study was to investigate whether the degree of improvement in cognitive function achieved with an exercise intervention in community-dwelling older Japanese women is affected by the participant's baseline cognitive function and age. Eighty-eight women (mean age: 70.5±4.2 years) participated in a prevention program for long-term care. They completed the Square-Stepping Exercise (SSE) program once a week, 120 minutes/session, for 11 weeks. We assessed participants' cognitive function using 5 cognitive tests (5-Cog) before and after the intervention. We defined cognitive function as the 5-Cog total score and defined the change in cognitive function as the 5-Cog post-score minus the pre-score. We divided participants into four groups based on age (≤69 years or ≥70 years) and baseline cognitive function level (above vs. below the median cognitive function level). We conducted two-way analysis of variance. All 4 groups improved significantly in cognitive function after the intervention. There were no baseline cognitive function level × age interactions and no significant main effects of age, although significant main effects of baseline cognitive function level (P = 0.004, η² = 0.09) were observed. Square-Stepping Exercise is an effective exercise for improving cognitive function. These results suggest that older adults with cognitive decline are more likely to improve their cognitive function with exercise than if they start the intervention with high cognitive function. Furthermore, during an exercise intervention, baseline cognitive function level may have more of an effect than a participant's age on the degree of cognitive improvement.
Kloos, Anne D.; Fritz, Nora E.; Kostyk, Sandra K.; Young, Gregory S.; Kegelmeyer, Deb A.
2014-01-01
Background and purpose Individuals with Huntington's disease (HD) experience balance and gait problems that lead to falls. Clinicians currently have very little information about the reliability and validity of outcome measures to determine the efficacy of interventions that aim to reduce balance and gait impairments in HD. This study examined the reliability and concurrent validity of spatiotemporal gait measures, the Tinetti Mobility Test (TMT), Four Square Step Test (FSST), and Activities-specific Balance Confidence (ABC) Scale in individuals with HD. Methods Participants with HD [n = 20; mean age ± SD = 50.9 ± 13.7; 7 male] were tested on spatiotemporal gait measures the TMT, FSST, and ABC Scale before and after a six week period to determine test–retest reliability and minimal detectable change (MDC) values. Linear relationships between gait and clinical measures were estimated using Pearson's correlation coefficients. Results Spatiotemporal gait measures, the TMT total and the FSST showed good to excellent test–retest reliability (ICC > 0.75). MDC values were 0.30 m/s and 0.17 m/s for velocity in forward and backward walking respectively, four points for the TMT, and 3 s for the FSST. The TMT and FSST were highly correlated with most spatiotemporal measures. The ABC Scale demonstrated lower reliability and less concurrent validity than other measures. Conclusions The high test–retest reliability over a six week period and concurrent validity between the TMT, FSST, and spatiotemporal gait measures suggest that the TMT and FSST may be useful outcome measures for future intervention studies in ambulatory individuals with HD. PMID:25128156
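For readers who want to recompute reliability thresholds of this kind, the minimal detectable change is conventionally derived from the test-retest ICC and the between-subject standard deviation (MDC95 = 1.96 * sqrt(2) * SEM, with SEM = SD * sqrt(1 - ICC)). The sketch below applies that standard formula to hypothetical inputs; the SD and ICC values shown are placeholders, not figures from the study.

```python
import math

def mdc95(sd: float, icc: float) -> float:
    """Minimal detectable change (95% confidence) from test-retest statistics.

    SEM = SD * sqrt(1 - ICC);  MDC95 = 1.96 * sqrt(2) * SEM.
    """
    sem = sd * math.sqrt(1.0 - icc)
    return 1.96 * math.sqrt(2.0) * sem

# Hypothetical example (not study data): SD = 0.35 m/s, ICC = 0.90
print(round(mdc95(0.35, 0.90), 2))  # ~0.31 m/s
```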
Least-squares finite element methods for compressible Euler equations
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Carey, G. F.
1990-01-01
A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2 method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.
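Stated schematically, the time stepping and least-squares minimization described above can be written for a generic first-order system as follows; this is a sketch in assumed notation, not the authors' exact discretization.

```latex
% Backward time differencing of a first-order system U_t + A U_x + B U_y = 0
% (A, B and the finite element space S_h are assumed notation).
\[
R(V) \;=\; \frac{V - U^{n}}{\Delta t} \;+\; A\,V_x \;+\; B\,V_y ,
\qquad
U^{n+1} \;=\; \arg\min_{V \in S_h} \int_{\Omega} \lVert R(V) \rVert^{2} \, d\Omega .
\]
% The weighted H1 variant mentioned in the abstract augments the functional with a
% gradient term, e.g. \(\int_\Omega \lVert R \rVert^{2} + h^{2} \lVert \nabla R \rVert^{2}\, d\Omega\) (sketch only).
```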
Vieira, Marcus Fraga; de Sá E Souza, Gustavo Souto; Lehnen, Georgia Cristina; Rodrigues, Fábio Barbosa; Andrade, Adriano O
2016-10-01
The purpose of this study was to determine whether general fatigue induced by an incremental maximal exercise test (IMET) affects gait stability and variability in healthy subjects. Twenty-two young healthy male subjects walked on a treadmill at their preferred walking speed for 4 min prior to the test (PreT), which was followed by three series of 4 min of walking with 4 min of rest between them. Gait variability was assessed using the walk ratio (WR), calculated as step length normalized by step frequency, the root mean square (RMSratio) of trunk acceleration, the standard deviation of medial-lateral trunk acceleration between strides (VARML), and the coefficients of variation of step frequency (SFCV), length (SLCV) and width (SWCV). Gait stability was assessed using the margin of stability (MoS) and local dynamic stability (λs). VARML, SFCV, SLCV and SWCV increased after the test, indicating an increase in gait variability. MoS decreased and λs increased after the test, indicating a decrease in gait stability. All variables showed a trend to return to PreT values, but the 20-min post-test interval appears not to be enough for a complete recovery. The results showed that general fatigue induced by IMET negatively alters gait, and an interval of at least 20 min should be considered for injury prevention in tasks with similar demands. Copyright © 2016 Elsevier Ltd. All rights reserved.
Marques, Ana Paula C; Oliveira, Sandra Maria V L; Rezende, Grazielli R; Melo, Dayane A; Fernandes-Fitts, Sonia M; Pontes, Elenir Rose J C; Bonecini-Almeida, Maria da Glória; Camargo, Zoilo P; Mendes, Rinaldo P; Paniago, Anamaria M M
2017-10-01
We estimated the occurrence rate of the booster phenomenon by using an intradermal test with 43 kDa glycoprotein in an endemic area of paracoccidioidomycosis in the central-west region of Brazil. Individuals who had a negative result on a survey performed by using an intradermal test with 43 kDa glycoprotein in an endemic area of paracoccidioidomycosis underwent a second intradermal test after 10-15 days to determine the presence or absence of the booster phenomenon. Statistical analyses were performed using the Chi-square test, Chi-square for linear trend test, Student's t test, and binomial test; p < 0.05 was considered significant. For the first time, we reported the occurrence of the booster phenomenon to an intradermal reaction caused by 43 kDa glycoprotein at a rate of 5.8-8.4%, depending on the test's cutoff point. This suggests that a cutoff point should be considered for the booster phenomenon in intradermal tests with 43 kDa glycoprotein: a difference of 6-7 mm between readings according to the first and second tests, depending on the purpose of the evaluation. The results indicate that the prevalence of paracoccidioidal infection in endemic areas is underestimated, as the booster phenomenon has not been considered in epidemiological surveys for this infection.
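The statistical toolkit named above (chi-square test of association, t test, binomial test) is straightforward to reproduce with SciPy; the counts and readings below are illustrative placeholders rather than the survey's data, and the chi-square test for linear trend is omitted because SciPy has no direct one-call equivalent.

```python
import numpy as np
from scipy import stats

# Illustrative 2x2 table: rows = cutoff definition A/B, cols = booster yes/no
table = np.array([[12, 188],
                  [17, 183]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# t test comparing induration diameters of first vs second readings (placeholder data)
first = np.array([4.0, 5.5, 6.0, 3.5, 5.0])
second = np.array([10.0, 12.5, 13.0, 9.5, 12.0])
t_stat, p_t = stats.ttest_rel(second, first)

# Binomial test of an observed booster count against an assumed reference rate
p_binom = stats.binomtest(k=17, n=200, p=0.058).pvalue

print(p_chi2, p_t, p_binom)  # p < 0.05 would be considered significant, as in the study
```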
Balabin, Roman M; Smirnov, Sergey V
2011-04-29
During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm(-1)) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of applying other spectroscopic techniques, such as Raman, ultraviolet-visible (UV-vis), or nuclear magnetic resonance (NMR) spectroscopies, can also be greatly improved by an appropriate choice of feature selection. Copyright © 2011 Elsevier B.V. All rights reserved.
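As a hedged illustration of the interval-selection idea behind iPLS (one of the sixteen methods listed, and not the authors' implementation), a PLS model can be cross-validated on each contiguous block of wavelengths and the block with the lowest RMSECV retained; the data below are synthetic and all parameter values are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def interval_pls(X, y, n_intervals=16, n_components=3, cv=5):
    """Score each wavelength interval by cross-validated RMSE (iPLS-style sketch)."""
    n_samples, n_vars = X.shape
    edges = np.linspace(0, n_vars, n_intervals + 1, dtype=int)
    scores = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ncomp = min(n_components, hi - lo, n_samples - 1)
        pls = PLSRegression(n_components=ncomp)
        y_hat = cross_val_predict(pls, X[:, lo:hi], y, cv=cv)
        rmse = float(np.sqrt(np.mean((y - np.ravel(y_hat)) ** 2)))
        scores.append(((lo, hi), rmse))
    return min(scores, key=lambda t: t[1])    # best interval and its RMSECV

# Synthetic demo: 60 spectra x 400 wavelengths, property depends on a narrow band
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 400))
y = X[:, 120:140].sum(axis=1) + 0.1 * rng.normal(size=60)
print(interval_pls(X, y))
```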
Malliou, P; Rokka, S; Beneka, A; Gioftsidou, A; Mavromoustakos, S; Godolias, G
2014-01-01
There is limited information on injury patterns in Step Aerobic Instructors (SAI) who exclusively teach "step" aerobic classes. To record the type and the anatomical position, in relation to diagnosis, of musculoskeletal injuries in step aerobic instructors. Also, to analyse the days of absence due to chronic injury in relation to weekly working hours, height of the step platform, working experience, and working surface and footwear during the step class. The Step Aerobic Instructors Injuries Questionnaire was developed, and then validity and reliability indices were calculated. 63 SAI completed the questionnaire. For the statistical analysis of the data, frequencies were analysed and the non-parametric χ2 (chi-square) test was used.
On the Least-Squares Fitting of Correlated Data: a Priori vs a Posteriori Weighting
NASA Astrophysics Data System (ADS)
Tellinghuisen, Joel
1996-10-01
One of the methods in common use for analyzing large data sets is a two-step procedure, in which subsets of the full data are first least-squares fitted to a preliminary set of parameters, and the latter are subsequently merged to yield the final parameters. The second step of this procedure is properly a correlated least-squares fit and requires the variance-covariance matrices from the first step to construct the weight matrix for the merge. There is, however, an ambiguity concerning the manner in which the first-step variance-covariance matrices are assessed, which leads to different statistical properties for the quantities determined in the merge. The issue is one of a priori vs a posteriori assessment of weights, which is an application of what was originally called internal vs external consistency by Birge [Phys. Rev. 40, 207-227 (1932)] and Deming ("Statistical Adjustment of Data." Dover, New York, 1964). In the present work the simplest case of a merge fit, that of an average as obtained from a global fit vs a two-step fit of partitioned data, is used to illustrate that only in the case of a priori weighting do the results have the usually expected and desired statistical properties: normal distributions for residuals, t distributions for parameters assessed a posteriori, and χ2 distributions for variances.
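For the averaging example discussed in the abstract, the two assessments can be written out explicitly; this is the standard internal/external-consistency bookkeeping in assumed notation, not a transcription of the paper's equations.

```latex
% Weighted mean of partitioned estimates x_i with first-step variances sigma_i^2
\[
\bar{x} = \frac{\sum_i x_i/\sigma_i^2}{\sum_i 1/\sigma_i^2},
\qquad
\chi^2 = \sum_i \frac{(x_i - \bar{x})^2}{\sigma_i^2}.
\]
% A priori (internal consistency): the merge variance follows from the weights alone,
\[
\sigma^2_{\mathrm{int}}(\bar{x}) = \Bigl(\sum_i 1/\sigma_i^2\Bigr)^{-1}.
\]
% A posteriori (external consistency): the same quantity rescaled by the reduced chi-square,
\[
\sigma^2_{\mathrm{ext}}(\bar{x}) = \frac{\chi^2}{n-1}\,\sigma^2_{\mathrm{int}}(\bar{x}).
\]
```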
A composite step conjugate gradients squared algorithm for solving nonsymmetric linear systems
NASA Astrophysics Data System (ADS)
Chan, Tony; Szeto, Tedd
1994-03-01
We propose a new and more stable variant of the CGS method [27] for solving nonsymmetric linear systems. The method is based on squaring the Composite Step BCG method, introduced recently by Bank and Chan [1,2], which itself is a stabilized variant of BCG in that it skips over steps for which the BCG iterate is not defined, a situation that causes one kind of breakdown in BCG. By doing this, we obtain a method (Composite Step CGS or CSCGS) which not only handles the breakdowns described above, but does so with the advantages of CGS, namely, no multiplications by the transpose matrix and a faster convergence rate than BCG. Our strategy for deciding whether to skip a step does not involve any machine dependent parameters and is designed to skip near-breakdowns as well as produce smoother iterates. Numerical experiments show that the new method does produce improved performance over CGS on practical problems.
Least-squares finite element solutions for three-dimensional backward-facing step flow
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Hou, Lin-Jun; Lin, Tsung-Liang
1993-01-01
Comprehensive numerical solutions of the steady state incompressible viscous flow over a three-dimensional backward-facing step up to Re equals 800 are presented. The results are obtained by the least-squares finite element method (LSFEM) which is based on the velocity-pressure-vorticity formulation. The computed model is of the same size as that of Armaly's experiment. Three-dimensional phenomena are observed even at low Reynolds number. The calculated values of the primary reattachment length are in good agreement with experimental results.
A higher-order split-step Fourier parabolic-equation sound propagation solution scheme.
Lin, Ying-Tsong; Duda, Timothy F
2012-08-01
A three-dimensional Cartesian parabolic-equation model with a higher-order approximation to the square-root Helmholtz operator is presented for simulating underwater sound propagation in ocean waveguides. The higher-order approximation includes cross terms with the free-space square-root Helmholtz operator and the medium phase speed anomaly. It can be implemented with a split-step Fourier algorithm to solve for sound pressure in the model. Two idealized ocean waveguide examples are presented to demonstrate the performance of this numerical technique.
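The split-step Fourier machinery underlying the scheme can be illustrated with the standard narrow-angle parabolic-equation march; the sketch below is a two-dimensional (range-depth) toy with assumed parameter values, not the paper's three-dimensional higher-order square-root operator.

```python
import numpy as np

def ssf_march(psi, n_index, dz, dr, k0):
    """One standard split-step Fourier PE range step (narrow-angle sketch).

    psi     : complex field on a uniform depth grid
    n_index : index of refraction c0/c(z) on the same grid
    dz, dr  : depth grid spacing and range step
    k0      : reference wavenumber 2*pi*f/c0
    """
    kz = 2.0 * np.pi * np.fft.fftfreq(psi.size, d=dz)     # vertical wavenumbers
    diffract = np.exp(-1j * dr * kz**2 / (2.0 * k0))      # free-space diffraction (Fourier domain)
    refract = np.exp(1j * k0 * (n_index - 1.0) * dr)      # phase-speed anomaly (spatial domain)
    return np.fft.ifft(diffract * np.fft.fft(refract * psi))

# Toy usage: 100 Hz Gaussian starter in a 200 m isovelocity water column (assumed values)
c0, f = 1500.0, 100.0
k0 = 2.0 * np.pi * f / c0
z = np.linspace(0.0, 200.0, 512)
psi = np.exp(-((z - 50.0) / 5.0) ** 2).astype(complex)    # source centred at 50 m depth
for _ in range(200):                                      # march 200 range steps of 5 m
    psi = ssf_march(psi, np.ones_like(z), dz=z[1] - z[0], dr=5.0, k0=k0)
print(round(float(np.abs(psi).max()), 3))                 # peak field magnitude after 1 km
```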
March, Melissa I; Modest, Anna M; Ralston, Steven J; Hacker, Michele R; Gupta, Munish; Brown, Florence M
2016-01-01
To compare characteristics and outcomes of women diagnosed with gestational diabetes mellitus (GDM) by the newer one-step glucose tolerance test and those diagnosed with the traditional two-step method. This was a retrospective cohort study of women with GDM who delivered in 2010-2011. Data are reported as proportion or median (interquartile range) and were compared using a Chi-square, Fisher's exact or Wilcoxon rank sum test based on data type. Of 235 women with GDM, 55.7% were diagnosed using the two-step method and 44.3% with the one-step method. The groups had similar demographics and GDM risk factors. The two-step method group was diagnosed with GDM one week later [27.0 (24.0-29.0) weeks versus 26.0 (24.0-28.0) weeks; p = 0.13]. The groups had similar median weight gain per week before diagnosis. After diagnosis, women in the one-step method group had significantly higher median weight gain per week [0.67 pounds/week (0.31-1.0) versus 0.56 pounds/week (0.15-0.89); p = 0.047]. In the one-step method group more women had suspected macrosomia (11.7% versus 5.3%, p = 0.07) and more neonates had a birth weight >4000 g (13.6% versus 7.5%, p = 0.13); however, these differences were not statistically significant. Other pregnancy and neonatal complications were similar. Women diagnosed with the one-step method gained more weight per week after GDM diagnosis and had a non-statistically significant increased risk for suspected macrosomia. Our data suggest the one-step method identifies women with at least equally high risk as the two-step method.
Kasiri, Keyvan; Kazemi, Kamran; Dehghani, Mohammad Javad; Helfroush, Mohammad Sadegh
2013-01-01
In this paper, we present a new semi-automatic brain tissue segmentation method based on a hybrid hierarchical approach that combines a brain atlas as a priori information and a least-squares support vector machine (LS-SVM). The method consists of three steps. In the first two steps, the skull is removed and the cerebrospinal fluid (CSF) is extracted. These two steps are performed using the toolbox FMRIB's automated segmentation tool integrated in the FSL software (FSL-FAST) developed at the Oxford Centre for Functional MRI of the Brain (FMRIB). Then, in the third step, the LS-SVM is used to segment grey matter (GM) and white matter (WM). The training samples for the LS-SVM are selected from the registered brain atlas. The voxel intensities and spatial positions are selected as the two feature groups for training and testing. SVM as a powerful discriminator is able to handle nonlinear classification problems; however, it cannot provide posterior probability. Thus, we use a sigmoid function to map the SVM output into probabilities. The proposed method is used to segment CSF, GM and WM from simulated magnetic resonance imaging (MRI) using the Brainweb MRI simulator and real data provided by the Internet Brain Segmentation Repository. The semi-automatically segmented brain tissues were evaluated by comparing them to the corresponding ground truth. The Dice and Jaccard similarity coefficients, sensitivity and specificity were calculated for the quantitative validation of the results. The quantitative results show that the proposed method segments brain tissues accurately with respect to the corresponding ground truth. PMID:24696800
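The LS-SVM classifier used in the third step reduces to a single linear system in the dual variables. The sketch below is the standard formulation with an RBF kernel on toy two-dimensional features (standing in for voxel intensity and position); it is not the paper's pipeline, and the kernel width and regularization value are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM classification dual system (labels y in {-1, +1})."""
    n = X.shape[0]
    Omega = (y[:, None] * y[None, :]) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvm_predict(X_train, y_train, b, alpha, X_new, sigma=1.0):
    K = rbf_kernel(X_new, X_train, sigma)
    return np.sign(K @ (alpha * y_train) + b)

# Tiny two-class toy problem
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.3, (30, 2)), rng.normal(1, 0.3, (30, 2))])
y = np.concatenate([-np.ones(30), np.ones(30)])
b, alpha = lssvm_train(X, y)
print((lssvm_predict(X, y, b, alpha, X) == y).mean())   # training accuracy
```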
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100 degree polynomial. All computations in the program are carried out under Double Precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
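The degree-escalation strategy described above is easy to mimic with NumPy's polynomial least squares; this modern sketch follows AKLSQF's logic in spirit only (it is not the original Quick Basic code, and the RMS error criterion is an assumption).

```python
import numpy as np

def fit_to_tolerance(x, y, tol, max_degree=100):
    """Raise the polynomial degree until the least-squares fit error meets tol.

    A sketch of the AKLSQF strategy: start at degree 1 and report the error at every step.
    """
    for degree in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, degree)
        err = float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))
        print(f"degree {degree}: rms error {err:.3e}")
        if err <= tol:
            return coeffs, err
    return coeffs, err

# Uniformly spaced sample data, as AKLSQF expects
x = np.linspace(0.0, 2.0, 41)
y = np.sin(2.0 * x)
coeffs, err = fit_to_tolerance(x, y, tol=1e-4)
```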
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mbamalu, G.A.N.; El-Hawary, M.E.
The authors propose suboptimal least squares or IRWLS procedures for estimating the parameters of a seasonal multiplicative AR model encountered during power system load forecasting. The proposed method involves using an interactive computer environment to estimate the parameters of a seasonal multiplicative AR process. The method comprises five major computational steps. The first determines the order of the seasonal multiplicative AR process, and the second uses the least squares or the IRWLS to estimate the optimal nonseasonal AR model parameters. In the third step one obtains the intermediate series by back forecast, which is followed by using the least squares or the IRWLS to estimate the optimal seasonal AR parameters. The final step uses the estimated parameters to forecast future load. The method is applied to predict the Nova Scotia Power Corporation's 168 lead time hourly load. The results obtained are documented and compared with results based on the Box and Jenkins method.
Parameter estimation for terrain modeling from gradient data. [navigation system for Martian rover
NASA Technical Reports Server (NTRS)
Dangelo, K. R.
1974-01-01
A method is developed for modeling terrain surfaces for use on an unmanned Martian roving vehicle. The modeling procedure employs a two-step process which uses gradient as well as height data in order to improve the accuracy of the model's gradient. Least square approximation is used in order to stochastically determine the parameters which describe the modeled surface. A complete error analysis of the modeling procedure is included which determines the effect of instrumental measurement errors on the model's accuracy. Computer simulation is used as a means of testing the entire modeling process which includes the acquisition of data points, the two-step modeling process and the error analysis. Finally, to illustrate the procedure, a numerical example is included.
BrightStat.com: free statistics online.
Stricker, Daniel
2008-10-01
Powerful software for statistical analysis is expensive. Here I present BrightStat, a statistical software package running on the Internet which is free of charge. BrightStat's goals, its main capabilities and functionalities are outlined. Three different sample runs, a Friedman test, a chi-square test, and a step-wise multiple regression, are presented. The results obtained by BrightStat are compared with results computed by SPSS, one of the global leaders in providing statistical software, and VassarStats, a collection of scripts for data analysis running on the Internet. Elementary statistics is an inherent part of academic education and BrightStat is an alternative to commercial products.
Jindo, Takashi; Kitano, Naruki; Tsunoda, Kenji; Kusuda, Mikiko; Hotta, Kazushi; Okura, Tomohiro
Decreasing daily life physical activity (PA) outside an exercise program might hinder the benefit of that program on lower-extremity physical function (LEPF) in older adults. The purpose of this study was to investigate how daily life PA modulates the effects of an exercise program on LEPF. The participants were 46 community-dwelling older adults (mean age, 70.1 ± 3.5 years) in Kasama City, a rural area in Japan. All participated in a fall-prevention program called square-stepping exercise once a week for 11 weeks. We evaluated their daily life PA outside the exercise program with pedometers and calculated the average daily step counts during the early and late periods of the program. We divided participants into 2 groups on the basis of whether or not they decreased PA by more than 1000 steps per day between the early and late periods. To ascertain the LEPF benefits induced by participating in the exercise program, we measured 5 physical performance tests before and after the intervention: 1-leg stand, 5-time sit-to-stand, Timed Up and Go (TUG), habitual walking speed, and choice-stepping reaction time (CSRT). We used a 2-way analysis of variance to confirm the interaction between the 2 groups and the time effect before and after the intervention. During the exercise program, 8 participants decreased their daily life PA (early period, 6971 ± 2771; late period, 5175 ± 2132) and 38 participants maintained PA (early period, 6326 ± 2477; late period, 6628 ± 2636). Both groups significantly improved their performance in TUG and CSRT at the posttest compared with the baseline. A significant group-by-time interaction on the walking speed (P = .038) was observed: participants who maintained PA improved their performance more than those who decreased their PA. Square-stepping exercise requires and strengthens dynamic balance and agility, which contributed to the improved time effects that occurred in TUG and CSRT. On the contrary, because PA is positively associated with walking speed, maintaining daily life PA outside an exercise program may have a stronger influence on walking speed. To enhance the effectiveness of an exercise program for young-old adults, researchers and instructors should try to maintain the participant's daily life PA outside the program. Regardless of decreasing or maintaining daily life PA, the square-stepping exercise program could improve aspects of LEPF that require complex physical performance. However, a greater effect can be expected when participants maintain their daily life PA outside the exercise program.
Wang, Zhijie; Chen, Dongdong; Zheng, Liqiong; Huo, Linsheng; Song, Gangbing
2018-06-01
With the advantages of high tensile, bending, and shear strength, steel fiber concrete structures have been widely used in civil engineering. The health monitoring of concrete structures, including steel fiber concrete structures, receives increasing attention, and the Electromechanical Impedance (EMI)-based method is commonly used. Structures are often subject to changing axial load, and ignoring the effect of axial forces may introduce error into Structural Health Monitoring (SHM), including the EMI-based method. However, many concrete structure monitoring algorithms do not consider the effects of axial loading. To investigate the influence of axial load on the EMI of a steel fiber concrete structure, concrete specimens with different steel fiber contents (0, 30, 60, 90, and 120 kg/m³) were cast and a Lead Zirconate Titanate (PZT)-based Smart Aggregate (SA) was used as the EMI sensor. During tests, a step-by-step loading procedure was applied to the specimens with different steel fiber contents, and the electromechanical impedance values were measured. A Normalized root-mean-square deviation Index (NI) was developed to analyze the EMI information and evaluate the test results. The results show that the normalized root-mean-square deviation index increases with the increase of the axial load, which clearly demonstrates the influence of axial load on the EMI values for steel fiber concrete; this influence should be considered during a monitoring or damage detection procedure if the axial load changes. In addition, the test results clearly reveal that the steel fiber content, often at low mass and volume percentage, has no obvious influence on the PZT's EMI values. Furthermore, experiments to test the repeatability of the proposed method were conducted. The repeated test results show that the EMI-based indices are repeatable and there is good linearity between the NI and the applied load.
Investigation of test methods, material properties and processes for solar cell encapsulants
NASA Technical Reports Server (NTRS)
Willis, P. B.; Baum, B.
1983-01-01
The goal of the program is to identify, test, evaluate and recommend encapsulation materials and processes for the fabrication of cost-effective and long-life solar modules. Of the $18 (1948 $) per square meter allocated for the encapsulation components, approximately 50% of the cost ($9/sq m) may be taken by the load-bearing component. Due to the proportionally high cost of this element, lower-cost materials were investigated. Wood-based products were found to be the lowest-cost structural materials for module construction; however, they require protection from rainwater and humidity in order to acquire dimensional stability. The cost of a wood-product-based substrate must, therefore, include raw material costs plus the cost of additional processing to impart hygroscopic inertness. This protection is provided by a two-step, or split, process in which a flexible laminate containing the cell string is prepared first in a vacuum process and then adhesively attached with a back cover film to the hardboard in a subsequent step.
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng
2006-12-01
An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and the squared error variation into a forgetting factor. For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step size. This receiver is capable of providing both fast convergence/tracking capability as well as small steady-state misadjustment as compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform, respectively, other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB in bit-error-rate (BER) for multipath fading channels.
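As a much-simplified stand-in for the fuzzy-inference-controlled step size, the sketch below runs a variable step-size LMS in which the step grows with a smoothed squared error. It only illustrates the adaptation loop on a toy system-identification problem; it does not reproduce the fuzzy rule base or the DS-CDMA receiver structure, and all parameter values are assumptions.

```python
import numpy as np

def vss_lms(x, d, n_taps=8, mu_min=0.005, mu_max=0.05, alpha=0.95):
    """Variable step-size LMS: a larger smoothed squared error gives a larger (bounded) step."""
    w = np.zeros(n_taps)
    p = 0.0                                    # smoothed squared error
    for n in range(n_taps, len(d)):
        u = x[n - n_taps + 1:n + 1][::-1]      # regressor [x[n], x[n-1], ..., x[n-n_taps+1]]
        e = d[n] - w @ u
        p = alpha * p + (1.0 - alpha) * e * e
        mu = float(np.clip(mu_min + p, mu_min, mu_max))
        w += mu * e * u                        # LMS update with the adaptive step size
    return w

# Toy system identification: recover an unknown 8-tap FIR channel from noisy output
rng = np.random.default_rng(2)
h = rng.normal(size=8)
x = rng.normal(size=5000)
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.normal(size=len(x))
w = vss_lms(x, d)
print(np.round(w - h, 2))                      # entries should be close to zero
```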
Enhancing multi-step quantum state tomography by PhaseLift
NASA Astrophysics Data System (ADS)
Lu, Yiping; Zhao, Qing
2017-09-01
Multi-photon systems have been studied by many groups; however, the biggest challenge faced is that the number of copies of an unknown state is limited and far from sufficient for detecting quantum entanglement. The difficulty of preparing copies of the state is even more serious for quantum state tomography. One possible way to solve this problem is to use adaptive quantum state tomography, which means obtaining a preliminary density matrix in the first step and revising it in the second step. In order to improve the performance of adaptive quantum state tomography, we develop a new distribution scheme of samples and extend it to three steps, that is, we correct the estimate once again based on the density matrix obtained in traditional adaptive quantum state tomography. Our numerical results show that the mean square error of the density matrix reconstructed by our new method is improved from the 10^-4 level to the 10^-9 level for several tested states. In addition, PhaseLift is also applied to reduce the required storage space of the measurement operator.
Force Limited Random Vibration Test of TESS Camera Mass Model
NASA Technical Reports Server (NTRS)
Karlicek, Alexandra; Hwang, James Ho-Jin; Rey, Justin J.
2015-01-01
The Transiting Exoplanet Survey Satellite (TESS) is a spaceborne instrument consisting of four wide field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars. As part of the environmental testing campaign, force limiting was used to simulate a realistic random vibration launch environment. While the force limit vibration test method is a standard approach used at multiple institutions including the Jet Propulsion Laboratory (JPL), NASA Goddard Space Flight Center (GSFC), the European Space Research and Technology Centre (ESTEC), and the Japan Aerospace Exploration Agency (JAXA), it is still difficult to find an actual implementation process in the literature. This paper describes the step-by-step process of how the force limit method was developed and applied on the TESS camera mass model. The process description includes the design of special fixtures to mount the test article for properly installing force transducers, development of the force spectral density using the semi-empirical method, estimation of the fuzzy factor (C2) based on the mass ratio between the supporting structure and the test article, subsequent validation of the C2 factor during the vibration test, and calculation of the C.G. accelerations using the Root Mean Square (RMS) reaction force in the spectral domain and the peak reaction force in the time domain.
Skinner, Elizabeth H; Dinh, Tammy; Hewitt, Melissa; Piper, Ross; Thwaites, Claire
2016-11-01
Falls are associated with morbidity, loss of independence, and mortality. While land-based group exercise and Tai Chi programs reduce the risk of falls, aquatic therapy may allow patients to complete balance exercises with less pain and fear of falling; however, limited data exist. The objective of the study was to pilot the implementation of an aquatic group based on Ai Chi principles (Aquabalance) and to evaluate the safety, intervention acceptability, and intervention effect sizes. Pilot observational cohort study. Forty-two outpatients underwent a single 45-minute weekly group aquatic Ai Chi-based session for eight weeks (Aquabalance). Safety was monitored using organizational reporting systems. Patient attendance, satisfaction, and self-reported falls were also recorded. Balance measures included the Timed Up and Go (TUG) test, the Four Square Step Test (FSST), and the unilateral Step Tests. Forty-two patients completed the program. It was feasible to deliver Aquabalance, as evidenced by the median (IQR) attendance rate of 8.0 (7.8, 8.0) out of 8. No adverse events occurred and participants reported high satisfaction levels. Improvements were noted on the TUG, 10-meter walk test, the Functional Reach Test, the FSST, and the unilateral step tests (p < 0.05). The proportion of patients defined as high falls risk reduced from 38% to 21%. The study was limited by its small sample size, single-center nature, and the absence of a control group. Aquabalance was safe, well-attended, and acceptable to participants. A randomized controlled assessor-blinded trial is required.
NASA Astrophysics Data System (ADS)
Rowland, David J.; Biteen, Julie S.
2017-04-01
Single-molecule super-resolution imaging and tracking can measure molecular motions inside living cells on the scale of the molecules themselves. Diffusion in biological systems commonly exhibits multiple modes of motion, which can be effectively quantified by fitting the cumulative probability distribution of the squared step sizes in a two-step fitting process. Here we combine this two-step fit into a single least-squares minimization; this new method vastly reduces the total number of fitting parameters and increases the precision with which diffusion may be measured. We demonstrate this Global Fit approach on a simulated two-component system as well as on a mixture of diffusing 80 nm and 200 nm gold spheres to show improvements in fitting robustness and localization precision compared to the traditional Local Fit algorithm.
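The cumulative-probability fit mentioned above has a standard closed form for a two-component mixture: for squared step sizes r^2 over a lag tau, CDF(r^2) = 1 - a*exp(-r^2/(4*D1*tau)) - (1-a)*exp(-r^2/(4*D2*tau)). The sketch below fits that standard model to simulated data in a single least-squares call; it is not the authors' Global Fit code, and all numerical values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_component_cdf(r2, frac, d1, d2, tau=0.05):
    """CDF of squared displacements for a two-component 2-D diffusion mixture."""
    return 1.0 - frac * np.exp(-r2 / (4.0 * d1 * tau)) \
               - (1.0 - frac) * np.exp(-r2 / (4.0 * d2 * tau))

# Simulate squared steps from two populations (D in um^2/s, tau = 50 ms lag)
rng = np.random.default_rng(3)
tau, d1, d2, frac = 0.05, 0.05, 1.0, 0.4
n = 5000
slow = rng.random(n) < frac
d_true = np.where(slow, d1, d2)
r2 = rng.exponential(scale=4.0 * d_true * tau)     # r^2 is exponential for 2-D diffusion

# Empirical CDF and a single least-squares fit of (frac, D1, D2)
r2_sorted = np.sort(r2)
ecdf = np.arange(1, n + 1) / n
popt, _ = curve_fit(lambda r2, f, a, b: two_component_cdf(r2, f, a, b, tau),
                    r2_sorted, ecdf, p0=[0.5, 0.02, 0.5], bounds=(0, [1, 10, 10]))
print(np.round(popt, 3))   # approximately [0.4, 0.05, 1.0] (up to label swapping)
```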
Hybrid least squares multivariate spectral analysis methods
Haaland, David M.
2004-03-23
A set of hybrid least squares multivariate spectral analysis methods in which spectral shapes of components or effects not present in the original calibration step are added in a following prediction or calibration step to improve the accuracy of the estimation of the amount of the original components in the sampled mixture. The hybrid method herein means a combination of an initial calibration step with subsequent analysis by an inverse multivariate analysis method. A spectral shape herein means normally the spectral shape of a non-calibrated chemical component in the sample mixture but can also mean the spectral shapes of other sources of spectral variation, including temperature drift, shifts between spectrometers, spectrometer drift, etc. The shape can be continuous, discontinuous, or even discrete points illustrative of the particular effect.
Development of self-acting seals for helicopter engines
NASA Technical Reports Server (NTRS)
Lynwander, P.
1974-01-01
An experimental evaluation of a NASA-designed self-acting face seal for use in advanced gas turbine main shaft positions was conducted. The seal incorporated Rayleigh step pads (self-acting geometry) for lift augmentation. Satisfactory performance of the gas film seal was demonstrated in a 500-hour endurance test at speeds to 183 m/s (600 ft/sec, 54,000 rpm) and air pressure differential of 137 newtons per square centimeter (198.7 psi). Carbon wear was minor. Tests were also conducted with seal seat runout greater than that expected in engine operation and in a severe sand and dust environment. Seal operation was satisfactory in both these detrimental modes of operation.
Evidence-based dentistry skill acquisition by second-year dental students.
Marshall, T A; McKernan, S C; Straub-Morarend, C L; Guzman-Armstrong, S; Marchini, L; Handoo, N Q; Cunningham, M A
2018-05-22
Identification and assessment of evidence-based dentistry (EBD) outcomes have been elusive. Our objective was to describe EBD skill acquisition during the second (D2) year of pre-doctoral dental education and student competency at the end of the year. The first and fourth (final) curricular-required EBD Exercises (ie, application of the first 4 steps of the 5-Step evidence-based practice process to a real or hypothetical situation) completed by D2 students (n = 151) during 2014-2015 and 2015-2016 were evaluated to measure skill acquisition through use of a novel rubric with measures of performance from novice to expert. Exercises were evaluated on the performance for each step, identification of manuscript details and reflective commentary on manuscript components. Changes in performance were evaluated using the chi-square test for trend and the Wilcoxon signed-rank test. Seventy-eight per cent of students scored competent or higher on the Ask step at the beginning of the D2 year; scores improved, with 58% scoring proficient or expert on the fourth Exercise (P < .001). Most students were advanced beginners or higher in the Acquire, Appraise and Apply steps at the beginning of the D2 year, with minimal growth observed during the year. Identification of manuscript details improved between the first and fourth Exercises (P = .015); however, depth of commentary skills did not change. Unlike previous investigations evaluating EBD knowledge or behaviour in a testing situation, we evaluated skill acquisition using applied Exercises. Consistent with their clinical and scientific maturity, D2 students minimally performed as advanced beginners at the end of their D2 year. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Trammell, Scott A.; Zabetakis, Dan; Moore, Martin; Verbarg, Jasenka; Stenger, David A.
2014-01-01
Square wave voltammetry for the reduction of 2,4,6-trinitrotoluene (TNT) was measured in 100 mM potassium phosphate buffer (pH 8) at gold electrodes modified with self-assembled monolayers (SAMs) containing either an alkane thiol or aromatic ring thiol structures. At 15 Hz, the electrochemical sensitivity (µA/ppm) was similar for all SAMs tested. However, at 60 Hz, the SAMs containing aromatic structures had a greater sensitivity than the alkane thiol SAM. In fact, the alkane thiol SAM had a decrease in sensitivity at the higher frequency. When comparing the electrochemical response between simulations and experimental data, a general trend was observed in which most of the SAMs had similar heterogeneous rate constants within experimental error for the reduction of TNT. This most likely describes a rate limiting step for the reduction of TNT. However, in the case of the alkane SAM at higher frequency, the decrease in sensitivity suggests that the rate limiting step in this case may be electron tunneling through the SAM. Our results show that SAMs containing aromatic rings increased the sensitivity for the reduction of TNT when higher frequencies were employed and at the same time suppressed the electrochemical reduction of dissolved oxygen. PMID:25549081
Zheng, Wenjun; Brooks, Bernard R
2006-06-15
Recently we have developed a normal-modes-based algorithm that predicts the direction of protein conformational changes given the initial state crystal structure together with a small number of pairwise distance constraints for the end state. Here we significantly extend this method to accurately model both the direction and amplitude of protein conformational changes. The new protocol implements a multistep search in the conformational space that is driven by iteratively minimizing the error of fitting the given distance constraints and simultaneously enforcing the restraint of low elastic energy. At each step, an incremental structural displacement is computed as a linear combination of the lowest 10 normal modes derived from an elastic network model, whose eigenvectors are reoriented to correct for the distortions caused by the structural displacements in the previous steps. We test this method on a list of 16 pairs of protein structures for which relatively large conformational changes are observed (root mean square deviation >3 angstroms), using up to 10 pairwise distance constraints selected by a fluctuation analysis of the initial state structures. This method has achieved near-optimal performance in almost all cases, and in many cases the final structural models lie within a root mean square deviation of 1-2 angstroms from the native end state structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, J.; Elmore, R.; Kennedy, C.
This research is to illustrate the use of statistical inference techniques in order to quantify the uncertainty surrounding reliability estimates in a step-stress accelerated degradation testing (SSADT) scenario. SSADT can be used when a researcher is faced with a resource-constrained environment, e.g., limits on chamber time or on the number of units to test. We apply the SSADT methodology to a degradation experiment involving concentrated solar power (CSP) mirrors and compare the results to a more traditional multiple accelerated testing paradigm. Specifically, our work includes: (1) designing a durability testing plan for solar mirrors (3M's new improved silvered acrylic "Solar Reflector Film (SFM) 1100") through the ultra-accelerated weathering system (UAWS), (2) defining degradation paths of optical performance based on the SSADT model which is accelerated by high UV-radiant exposure, and (3) developing service lifetime prediction models for solar mirrors using advanced statistical inference. We use the method of least squares to estimate the model parameters and this serves as the basis for the statistical inference in SSADT. Several quantities of interest can be estimated from this procedure, e.g., mean-time-to-failure (MTTF) and warranty time. The methods allow for the estimation of quantities that may be of interest to the domain scientists.
NASA Technical Reports Server (NTRS)
Siegel, W. H.
1978-01-01
As part of NASA's continuing research into hypersonics, an 85 square foot hypersonic wing test section of a proposed hypersonic research airplane was laboratory tested. The project reported on in this paper has carried the hypersonic wing test structure project one step further by testing a single beaded panel to failure. The primary interest was focused upon the buckling characteristics of the panel under pure compression with boundary conditions similar to those found in a wing-mounted condition. Three primary phases of analysis are included in the report: experimental testing of the beaded panel to failure; finite element structural analysis of the beaded panel with the computer program NASTRAN; and a summary of the semiclassical buckling equations for the beaded panel under purely compressive loads. Comparisons between each of the analysis methods are also included.
NASA Technical Reports Server (NTRS)
Tomaine, R. L.
1976-01-01
Flight test data from a large 'crane' type helicopter were collected and processed for the purpose of identifying vehicle rigid body stability and control derivatives. The process consisted of using digital and Kalman filtering techniques for state estimation and Extended Kalman filtering for parameter identification, utilizing a least squares algorithm for initial derivative and variance estimates. Data were processed for indicated airspeeds from 0 m/sec to 152 m/sec. Pulse, doublet and step control inputs were investigated. Digital filter frequency did not have a major effect on the identification process, while the initial derivative estimates and the estimated variances had an appreciable effect on many derivative estimates. The major derivatives identified agreed fairly well with analytical predictions and engineering experience. Doublet control inputs provided better results than pulse or step inputs.
Hypoglycemia early alarm systems based on recursive autoregressive partial least squares models.
Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick
2013-01-01
Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems of potential hypoglycemia. A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Glucose prediction models based on real-time filtered data have a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm system based on RARPLS shows good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positives/day are obtained for the early alarm system based on six-step-ahead predicted glucose values, with an average early detection time of 25.25 min. The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia unless preventive action is taken far in advance. © 2012 Diabetes Technology Society.
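A stripped-down version of the alarm logic (regression on lagged CGM samples over a moving window, a six-step-ahead prediction, and a threshold check) can be sketched with scikit-learn's PLS regression. This illustrates the idea only: it is not the recursive RARPLS update, and the window length, lag count, and 70 mg/dL threshold are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def hypo_alarm(cgm, horizon=6, n_lags=6, window=60, threshold=70.0):
    """Flag times where a moving-window PLS model predicts glucose < threshold.

    cgm is assumed to be 5-min samples in mg/dL; horizon=6 means 30 min ahead.
    """
    alarms = []
    for t in range(window + n_lags + horizon, len(cgm)):
        # Lagged training pairs from the most recent `window` samples (no future leakage)
        rows, targets = [], []
        for i in range(t - window, t - horizon):
            rows.append(cgm[i - n_lags:i])
            targets.append(cgm[i + horizon])
        model = PLSRegression(n_components=2)
        model.fit(np.asarray(rows), np.asarray(targets))
        pred = float(np.ravel(model.predict(cgm[t - n_lags:t].reshape(1, -1)))[0])
        if pred < threshold:
            alarms.append((t, pred))
    return alarms

# Synthetic CGM trace drifting toward hypoglycemia (illustrative only)
t = np.arange(400)
cgm = 140.0 - 0.25 * t + 5.0 * np.sin(t / 10.0) + np.random.default_rng(4).normal(0, 2, 400)
print(hypo_alarm(cgm)[:3])     # first few (sample index, predicted value) alarms
```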
Four-point-bend fatigue of AA 2026 aluminum alloys
NASA Astrophysics Data System (ADS)
Li, J. X.; Zhai, T.; Garratt, M. D.; Bray, G. H.
2005-09-01
High-cycle fatigue tests were carried out on a newly developed high-strength AA 2026 Al alloy, which was in the form of extrusion bars with square and rectangular cross sections, using a self-aligning four-point-bend rig at room temperature, 15 Hz, and R = 0.1, in lab air. The fatigue strength of the square and rectangular bars was measured to be 85 and 90 pct of their yield strength, respectively, more than twice that of the predecessor to the 2026 alloy (the AA 2024 Al alloy). Fatigue cracks were found to be always initiated at large Θ' (Al7Cu2(Fe,Mn)) particles and to propagate predominantly in a crystallographic mode in the AA 2026 alloy. The fatigue fractographies of the square and rectangular extrusion bars were found to be markedly different, due to their different grain structures (fibril and layered, respectively). Fracture steps on the crack face were found in both of these extrusion bars. Since the 2026 alloy was purer in terms of Fe and Si content, it contained much less coarse particles than in a 2024 alloy. This partially accounted for the superior fatigue strength of the 2026 alloy.
A new algorithm for stand table projection models.
Quang V. Cao; V. Clark Baldwin
1999-01-01
The constrained least squares method is proposed as an algorithm for projecting stand tables through time. This method consists of three steps: (1) predict survival in each diameter class, (2) predict diameter growth, and (3) use the least squares approach to adjust the stand table to satisfy the constraints of future survival, average diameter, and stand basal area....
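Step (3) is an equality-constrained least-squares adjustment: perturb the projected class frequencies as little as possible while matching the predicted stand totals. The sketch below solves the corresponding Lagrange (KKT) system; the particular constraint set (total trees and basal area) and all numbers are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def adjust_stand_table(n0, dbh, total_trees, total_basal_area):
    """Least-squares adjustment of class frequencies n subject to linear constraints.

    minimize ||n - n0||^2  s.t.  sum(n) = total_trees and sum(ba_per_tree * n) = total_basal_area,
    solved via the KKT (Lagrange multiplier) linear system.
    """
    ba_per_tree = np.pi * (dbh / 200.0) ** 2          # basal area (m^2) for dbh in cm
    A = np.vstack([np.ones_like(n0), ba_per_tree])    # constraint matrix (2 x k)
    b = np.array([total_trees, total_basal_area])
    k = len(n0)
    kkt = np.block([[2.0 * np.eye(k), A.T],
                    [A, np.zeros((2, 2))]])
    rhs = np.concatenate([2.0 * n0, b])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:k]                                    # adjusted class frequencies

# Projected (unadjusted) stand table: trees/ha by 5-cm diameter class midpoints
dbh = np.array([12.5, 17.5, 22.5, 27.5, 32.5])
n0 = np.array([220.0, 180.0, 120.0, 60.0, 20.0])
n_adj = adjust_stand_table(n0, dbh, total_trees=590.0, total_basal_area=22.0)
print(np.round(n_adj, 1), round(float(n_adj.sum()), 1))   # frequencies now sum to 590
```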
A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models
NASA Astrophysics Data System (ADS)
Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.
2010-09-01
For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data is needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to the non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline interpolation of the low-resolution data, (2) a so-called 'deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested based on high-resolution model output (400 m horizontal grid spacing). A novel automatic search algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2, root mean square errors are decreased. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and for a fully coupled model system.
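Step (3), the autoregressive noise generation, can be sketched as a first-order AR process whose stationary standard deviation and lag-1 autocorrelation are tuned to the unresolved subgrid variability; the values below are placeholders, and the operational scheme is more elaborate than this.

```python
import numpy as np

def ar1_noise(n_steps, sigma, rho, rng=None):
    """Generate AR(1) noise with stationary std `sigma` and lag-1 autocorrelation `rho`."""
    rng = rng or np.random.default_rng()
    eps = np.zeros(n_steps)
    eps[0] = rng.normal(0.0, sigma)
    for t in range(1, n_steps):
        eps[t] = rho * eps[t - 1] + np.sqrt(1.0 - rho**2) * sigma * rng.normal()
    return eps

# Add subgrid-scale variability to a deterministically downscaled temperature series
noise = ar1_noise(n_steps=240, sigma=0.4, rho=0.8, rng=np.random.default_rng(5))
# Sample std and lag-1 autocorrelation should land near sigma and rho
print(round(float(noise.std()), 2), round(float(np.corrcoef(noise[:-1], noise[1:])[0, 1]), 2))
```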
An Empirical Method for Determining the Lunar Gravity Field. Ph.D. Thesis - George Washington Univ.
NASA Technical Reports Server (NTRS)
Ferrari, A. J.
1971-01-01
A method has been devised to determine the spherical harmonic coefficients of the lunar gravity field. This method consists of a two-step data reduction and estimation process. In the first step, a weighted least-squares empirical orbit determination scheme is applied to Doppler tracking data from lunar orbits to estimate long-period Kepler elements and rates. Each of the Kepler elements is represented by an independent function of time. The long-period perturbing effects of the earth, sun, and solar radiation are explicitly modeled in this scheme. Kepler element variations estimated by this empirical processor are ascribed to the non-central lunar gravitation features. Doppler data are reduced in this manner for as many orbits as are available. In the second step, the Kepler element rates are used as input to a second least-squares processor that estimates lunar gravity coefficients using the long-period Lagrange perturbation equations.
NASA Astrophysics Data System (ADS)
Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar
2011-12-01
This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. This method uses prior knowledge of the channel impulse response statistics. Accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, an optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
NASA Astrophysics Data System (ADS)
Gassara, H.; El Hajjaji, A.; Chaabane, M.
2017-07-01
This paper investigates the problem of observer-based control for two classes of polynomial fuzzy systems with time-varying delay. The first class concerns a special case where the polynomial matrices do not depend on the estimated state variables. The second one is the general case where the polynomial matrices could depend on unmeasurable system states that will be estimated. For the latter case, two design procedures are proposed. The first one gives the polynomial fuzzy controller and observer gains in two steps. In the second procedure, the designed gains are obtained using a single-step approach to overcome the drawback of the two-step procedure. The obtained conditions are presented in terms of sum of squares (SOS) constraints, which can be solved via SOSTOOLS and a semi-definite programming solver. Illustrative examples show the validity and applicability of the proposed results.
Two Improved Algorithms for Envelope and Wavefront Reduction
NASA Technical Reports Server (NTRS)
Kumfert, Gary; Pothen, Alex
1997-01-01
Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices), and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on the average, only twice the time required by the simpler Reverse Cuthill-McKee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
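SciPy ships a Reverse Cuthill-McKee implementation (though not the Sloan or hybrid spectral algorithms), which makes it easy to reproduce the kind of envelope comparison discussed above on a random sparse symmetric matrix. The envelope definition used here, the sum over rows of the distance from the first nonzero to the diagonal, is one common convention and is an assumption of this sketch.

```python
import numpy as np
from scipy.sparse import random as sparse_random, csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def envelope_size(M):
    """Sum over rows of (i - first nonzero column index) for a symmetric CSR matrix."""
    M = csr_matrix(M)
    total = 0
    for i in range(M.shape[0]):
        cols = M.indices[M.indptr[i]:M.indptr[i + 1]]
        if cols.size:
            total += i - min(i, int(cols.min()))
    return total

# Random sparse symmetric test matrix
B = sparse_random(300, 300, density=0.02, random_state=7)
A = csr_matrix(B + B.T)

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_rcm = A[perm][:, perm]
print(envelope_size(A), envelope_size(A_rcm))   # envelope shrinks after RCM reordering
```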
Nurjahan, M I; Lim, T A; Yeong, S W; Foong, A L S; Ware, J
2002-12-01
The objective of this survey was to obtain a self-reported assessment of the use of Information and Communication Technology (ICT) by medical students at the International Medical University, Malaysia. Students' perceived skills and extent of usage of ICT were evaluated using a questionnaire. Chi-square analyses were performed to ascertain the association between variables. Further statistical testing using the Chi-square test for trend was done when one of the variables was ordered, and Spearman rank correlation when both variables were ordered. Overall, 98% of students responded to the questionnaire. Twenty-seven students (5.7%) did not use a computer either in the university or at home. Most students surveyed reported adequate skills at word processing (55%), e-mailing (78%) and surfing the internet (67%). The results suggest that in order to increase the level of computer literacy among medical students, positive steps would need to be taken, for example the formal inclusion of ICT instruction in the teaching of undergraduate medicine. This will enhance medical students' ability to acquire, appraise, and use information in order to solve clinical and other problems quickly and efficiently in the course of their studies, and more importantly when they graduate.
Least-Squares Support Vector Machine Approach to Viral Replication Origin Prediction
Cruz-Cano, Raul; Chew, David S.H.; Kwok-Pui, Choi; Ming-Ying, Leung
2010-01-01
Replication of their DNA genomes is a central step in the reproduction of many viruses. Procedures to find replication origins, which are initiation sites of the DNA replication process, are therefore of great importance for controlling the growth and spread of such viruses. Existing computational methods for viral replication origin prediction have mostly been tested within the family of herpesviruses. This paper proposes a new approach by least-squares support vector machines (LS-SVMs) and tests its performance not only on the herpes family but also on a collection of caudoviruses coming from three viral families under the order of caudovirales. The LS-SVM approach provides sensitivities and positive predictive values superior or comparable to those given by the previous methods. When suitably combined with previous methods, the LS-SVM approach further improves the prediction accuracy for the herpesvirus replication origins. Furthermore, by recursive feature elimination, the LS-SVM has also helped find the most significant features of the data sets. The results suggest that the LS-SVMs will be a highly useful addition to the set of computational tools for viral replication origin prediction and illustrate the value of optimization-based computing techniques in biomedical applications. PMID:20729987
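A minimal sketch of the LS-SVM machinery the paper relies on is given below, assuming the standard Suykens-style formulation in which training reduces to a single linear system; an RBF kernel and synthetic two-class data stand in for the viral replication-origin features, and the regression-style system is applied to ±1 labels.

```python
# Hedged LS-SVM sketch: training = one linear system; synthetic data, not viral features.
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma           # ridge term from the LS-SVM objective
    rhs = np.concatenate([[0.0], y.astype(float)])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return np.sign(rbf_kernel(X_new, X_train, sigma) @ alpha + b)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (40, 2)), rng.normal(1, 0.5, (40, 2))])
y = np.array([-1] * 40 + [1] * 40)
b, alpha = lssvm_train(X, y)
print((lssvm_predict(X, b, alpha, X) == y).mean())   # training accuracy, near 1.0
```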
Parallel Nonnegative Least Squares Solvers for Model Order Reduction
2016-03-01
This report addresses the nonnegative least squares (NNLS) problems that arise when the Energy Conserving Sampling and Weighting (ECSW) hyper-reduction procedure is used to construct a reduced-order model. Parallel NNLS solvers based on ScaLAPACK and their performance results are presented, including the reduced mesh sizes produced by each solver in the ECSW hyper-reduction step. Keywords: nonnegative least squares, model order reduction, hyper-reduction, Energy Conserving Sampling and Weighting.
Process for obtaining multiple sheet resistances for thin film hybrid microcircuit resistors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norwood, D.P.
1989-01-31
A standard thin film circuit containing Ta2N (100 ohms/square) resistors is fabricated by depositing on a dielectric substrate successive layers of Ta2N, Ti and Pd, with a gold layer to provide conductors. The addition of a few simple photoprocessing steps to the standard TFN manufacturing process enables the formation of Ta2N + Ti (10 ohms/square) and Ta2N + Ti + Pd (1 ohm/square) resistors in the same otherwise standard thin film circuit structure.
Process for obtaining multiple sheet resistances for thin film hybrid microcircuit resistors
Norwood, David P.
1989-01-01
A standard thin film circuit containing Ta2N (100 ohms/square) resistors is fabricated by depositing on a dielectric substrate successive layers of Ta2N, Ti and Pd, with a gold layer to provide conductors. The addition of a few simple photoprocessing steps to the standard TFN manufacturing process enables the formation of Ta2N + Ti (10 ohms/square) and Ta2N + Ti + Pd (1 ohm/square) resistors in the same otherwise standard thin film circuit structure.
Combining Approach in Stages with Least Squares for fits of data in hyperelasticity
NASA Astrophysics Data System (ADS)
Beda, Tibi
2006-10-01
The present work concerns a method of continuous approximation by block of a continuous function; a method of approximation combining the Approach in Stages with the finite domains Least Squares. An identification procedure by sub-domains: basic generating functions are determined step-by-step permitting their weighting effects to be felt. This procedure allows one to be in control of the signs and to some extent of the optimal values of the parameters estimated, and consequently it provides a unique set of solutions that should represent the real physical parameters. Illustrations and comparisons are developed in rubber hyperelastic modeling. To cite this article: T. Beda, C. R. Mecanique 334 (2006).
Large-eddy simulation of a backward facing step flow using a least-squares spectral element method
NASA Technical Reports Server (NTRS)
Chan, Daniel C.; Mittal, Rajat
1996-01-01
We report preliminary results obtained from the large eddy simulation of a backward facing step at a Reynolds number of 5100. The numerical platform is based on a high order Legendre spectral element spatial discretization and a least squares time integration scheme. A non-reflective outflow boundary condition is in place to minimize the effect of downstream influence. Smagorinsky model with Van Driest near wall damping is used for sub-grid scale modeling. Comparisons of mean velocity profiles and wall pressure show good agreement with benchmark data. More studies are needed to evaluate the sensitivity of this method on numerical parameters before it is applied to complex engineering problems.
46 CFR 108.449 - Piping tests.
Code of Federal Regulations, 2012 CFR
2012-10-01
... square centimeter (1000 pounds per square inch), with no additional gas introduced into the system, the... of more than 10.5 kilograms per square centimeter (150 pounds per square inch) per minute for a 2 minute period. (c) When tested with CO2 or other inert gas under a pressure of 42 kilograms per square...
NASA Astrophysics Data System (ADS)
Salatino, Maria
2017-06-01
In the current submm and mm cosmology experiments the focal planes are populated by kilopixel transition edge sensors (TESes). Varying incoming power load requires frequent rebiasing of the TESes through standard current-voltage (IV) acquisition. The time required to perform IVs on such large arrays and the resulting transient heating of the bath reduces the sky observation time. We explore a bias step method that significantly reduces the time required for the rebiasing process. This exploits the detectors' responses to the injection of a small square wave signal on top of the dc bias current and knowledge of the shape of the detector transition R(T,I). This method has been tested on two detector arrays of the Atacama Cosmology Telescope (ACT). In this paper, we focus on the first step of the method, the estimate of the TES %Rn.
Lucius, Aaron L; Maluf, Nasib K; Fischer, Christopher J; Lohman, Timothy M
2003-10-01
Helicase-catalyzed DNA unwinding is often studied using "all or none" assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using "n-step" sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the "kinetic step size", m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using "n-step" sequential mechanisms has previously been limited by an inability to float the number of "unwinding steps", n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f(ss)(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f(ss)(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation.
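For the special case of n identical rate-limiting steps with no additional slow phases, the all-or-none time course mentioned above is the Erlang (gamma) CDF, which is exactly the regularized incomplete gamma function; the sketch below evaluates it with SciPy for an assumed duplex length, kinetic step size, and stepping rate.

```python
# Hedged sketch of the simplest "n-step" all-or-none unwinding time course:
# n identical steps of rate k give an Erlang CDF, i.e. a regularized incomplete gamma.
import numpy as np
from scipy.special import gammainc

def fraction_unwound(t, L, m, k):
    """f_ss(t) for duplex length L (bp), kinetic step size m (bp/step), step rate k (1/s)."""
    n = L / m                      # number of rate-limiting steps (may be non-integer)
    return gammainc(n, k * t)      # regularized lower incomplete gamma = Erlang/gamma CDF

t = np.linspace(0, 60, 7)
print(np.round(fraction_unwound(t, L=24, m=4, k=0.5), 3))   # sigmoidal rise toward 1
```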
Vanderhaeghe, F; Smolders, A J P; Roelofs, J G M; Hoffmann, M
2012-03-01
Selecting an appropriate variable subset in linear multivariate methods is an important methodological issue for ecologists. Interest often exists in obtaining general predictive capacity or in finding causal inferences from predictor variables. Because of a lack of solid knowledge on a studied phenomenon, scientists explore predictor variables in order to find the most meaningful (i.e. discriminating) ones. As an example, we modelled the response of the amphibious softwater plant Eleocharis multicaulis using canonical discriminant function analysis. We asked how variables can be selected through comparison of several methods: univariate Pearson chi-square screening, principal components analysis (PCA) and step-wise analysis, as well as combinations of some methods. We expected PCA to perform best. The selected methods were evaluated through fit and stability of the resulting discriminant functions and through correlations between these functions and the predictor variables. The chi-square subset, at P < 0.05, followed by a step-wise sub-selection, gave the best results. In contrast to expectations, PCA performed poorly, as did step-wise analysis. The different chi-square subset methods all yielded ecologically meaningful variables, while probable noise variables were also selected by PCA and step-wise analysis. We advise against the simple use of PCA or step-wise discriminant analysis to obtain an ecologically meaningful variable subset; the former because it does not take into account the response variable, the latter because noise variables are likely to be selected. We suggest that univariate screening techniques are a worthwhile alternative for variable selection in ecology. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
Moore, Martha; Barker, Karen
2017-09-11
The four square step test (FSST) was first validated in healthy older adults to provide a measure of dynamic standing balance and mobility. The FSST has since been used in a variety of patient populations. The purpose of this systematic review is to determine the validity and reliability of the FSST in these different adult patient populations. The literature search was conducted to highlight all the studies that measured validity and reliability of the FSST. Six electronic databases were searched including AMED, CINAHL, MEDLINE, PEDro, Web of Science and Google Scholar. Grey literature was also searched for any documents relevant to the review. Two independent reviewers carried out study selection and quality assessment. The methodological quality was assessed using the QUADAS-2 tool, which is a validated tool for the quality assessment of diagnostic accuracy studies, and the COSMIN four-point checklist, which contains standards for evaluating reliability studies on the measurement properties of health instruments. Fifteen studies were reviewed studying community-dwelling older adults, Parkinson's disease, Huntington's disease, multiple sclerosis, vestibular disorders, post stroke, post unilateral transtibial amputation, knee pain and hip osteoarthritis. Three of the studies were of moderate methodological quality scoring low in risk of bias and applicability for all domains in the QUADAS-2 tool. Three studies scored "fair" on the COSMIN four-point checklist for the reliability components. The concurrent validity of the FSST was measured in nine of the studies with moderate to strong correlations being found. Excellent Intraclass Correlation Coefficients were found between physiotherapists carrying out the tests (ICC = .99) with good to excellent test-retest reliability shown in nine of the studies (ICC = .73-.98). The FSST may be an effective and valid tool for measuring dynamic balance and a participants' falls risk. It has been shown to have strong correlations with other measures of balance and mobility with good reliability shown in a number of populations. However, the quality of the papers reviewed was variable with key factors, such as sample size and test set up, needing to be addressed before the tool can be confidently used in these specified populations.
NASA Astrophysics Data System (ADS)
Sikora, Grzegorz; Teuerle, Marek; Wyłomańska, Agnieszka; Grebenkov, Denis
2017-08-01
The most common way of estimating the anomalous scaling exponent from single-particle trajectories consists of a linear fit of the dependence of the time-averaged mean-square displacement on the lag time at the log-log scale. We investigate the statistical properties of this estimator in the case of fractional Brownian motion (FBM). We determine the mean value, the variance, and the distribution of the estimator. Our theoretical results are confirmed by Monte Carlo simulations. In the limit of long trajectories, the estimator is shown to be asymptotically unbiased, consistent, and with vanishing variance. These properties ensure an accurate estimation of the scaling exponent even from a single (long enough) trajectory. As a consequence, we prove that the usual way to estimate the diffusion exponent of FBM is correct from the statistical point of view. Moreover, the knowledge of the estimator distribution is the first step toward new statistical tests of FBM and toward a more reliable interpretation of the experimental histograms of scaling exponents in microbiology.
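A short sketch of the estimator under discussion: compute the time-averaged MSD of a single trajectory and read the scaling exponent off a log-log linear fit. Ordinary Brownian motion is used here as a stand-in for FBM so the expected exponent (about 1, i.e. 2H with H = 0.5) is known in advance.

```python
# Sketch: time-averaged MSD of one trajectory, exponent from a log-log linear fit.
import numpy as np

def tamsd(x, lags):
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(10000))   # ordinary Brownian motion as a stand-in for FBM
lags = np.arange(1, 50)
alpha, _ = np.polyfit(np.log(lags), np.log(tamsd(x, lags)), 1)
print(round(alpha, 2))                      # close to 1.0 (= 2H) for Brownian motion
```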
Parra-Moreno, M; Rodríguez-Juan, J J; Ruiz-Cárdenas, J D
2018-03-07
Commercial video games are considered an effective tool to improve postural balance in different populations. However, the effectiveness of these video games for patients with multiple sclerosis (MS) is unclear. To analyse existing evidence on the effects of commercial video games on postural balance in patients with MS. We conducted a systematic literature search on 11 databases (Academic-Search Complete, AMED, CENTRAL, CINAHL, WoS, IBECS, LILACS, Pubmed/Medline, Scielo, SPORTDiscus, and Science Direct) using the following terms: "multiple sclerosis", videogames, "video games", exergam*, "postural balance", posturography, "postural control", balance. Risk of bias was analysed by 2 independent reviewers. We conducted 3 fixed effect meta-analyses and calculated the difference of means (DM) and the 95% confidence interval (95% CI) for the Four Step Square Test, Timed 25-Foot Walk, and Berg Balance Scale. Five randomized controlled trials were included in the qualitative systematic review and 4 in the meta-analysis. We found no significant differences between the video game therapy group and the control group in Four Step Square Test (DM: -.74; 95% CI, -2.79-1.32; P=.48; I 2 =0%) and Timed 25-Foot Walk scores (DM: .15; 95% CI, -1.06-.76; P=.75; I 2 =0%). We did observe intergroup differences in BBS scores in favour of video game therapy (DM: 5.30; 95% CI, 3.39-7.21; P<.001; I 2 =0%), but these were not greater than the minimum detectable change reported in the literature. The effectiveness of commercial video game therapy for improving postural balance in patients with MS is limited. Copyright © 2018 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.
Zhang, Ling
2017-01-01
The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method to semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation solution converges to the analytic solution with the strong order [Formula: see text] to SLSDDEs. On the one hand, the classical stability theorem to SLSDDEs is given by the Lyapunov functions. However, in this paper we study the exponential stability in mean square of the exact solution to SLSDDEs by using the definition of logarithmic norm. On the other hand, the implicit Euler scheme to SLSDDEs is known to be exponentially stable in mean square for any step size. However, in this article we propose an explicit method to show that the exponential Euler method to SLSDDEs is proved to share the same stability for any step size by the property of logarithmic norm.
Lin, Jyh-Jiuan; Chang, Ching-Hui; Pal, Nabendu
2015-01-01
To test the mutual independence of two qualitative variables (or attributes), it is a common practice to follow the Chi-square tests (Pearson's as well as likelihood ratio test) based on data in the form of a contingency table. However, it should be noted that these popular Chi-square tests are asymptotic in nature and are useful when the cell frequencies are "not too small." In this article, we explore the accuracy of the Chi-square tests through an extensive simulation study and then propose their bootstrap versions that appear to work better than the asymptotic Chi-square tests. The bootstrap tests are useful even for small-cell frequencies as they maintain the nominal level quite accurately. Also, the proposed bootstrap tests are more convenient than the Fisher's exact test which is often criticized for being too conservative. Finally, all test methods are applied to a few real-life datasets for demonstration purposes.
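A hedged sketch of one way to bootstrap Pearson's chi-square test of independence is shown below: tables are resampled from the independence model fitted to the observed margins, and the observed statistic is compared with the resampled ones. The exact resampling scheme in the paper may differ; the 2x2 counts are invented small-cell data.

```python
# Hedged parametric-bootstrap version of Pearson's chi-square test of independence.
import numpy as np

def pearson_chi2(table):
    table = np.asarray(table, float)
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

def bootstrap_chi2_pvalue(table, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    table = np.asarray(table, float)
    n = int(table.sum())
    p0 = np.outer(table.sum(1), table.sum(0)) / table.sum() ** 2   # cell probs under H0
    stat_obs = pearson_chi2(table)
    exceed, valid = 0, 0
    for _ in range(n_boot):
        t = rng.multinomial(n, p0.ravel()).reshape(table.shape)
        if (t.sum(0) == 0).any() or (t.sum(1) == 0).any():
            continue                     # skip degenerate resamples with an empty margin
        valid += 1
        exceed += pearson_chi2(t) >= stat_obs
    return exceed / valid

print(bootstrap_chi2_pvalue([[8, 2], [3, 7]]))   # small-cell 2x2 example
```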
Vossenkuhl, Birgit; Brandt, Jörgen; Fetsch, Alexandra; Käsbohrer, Annemarie; Kraushaar, Britta; Alt, Katja; Tenhagen, Bernd-Alois
2014-01-01
The prevalence of MRSA in the turkey meat production chain in Germany was estimated within the national monitoring for zoonotic agents in 2010. In total 22/112 (19.6%) dust samples from turkey farms, 235/359 (65.5%) swabs from turkey carcasses after slaughter and 147/460 (32.0%) turkey meat samples at retail were tested positive for MRSA. The specific distributions of spa types, SCCmec types and antimicrobial resistance profiles of MRSA isolated from these three different origins were compared using chi square statistics and the proportional similarity index (Czekanowski index). No significant differences between spa types, SCCmec types and antimicrobial resistance profiles of MRSA from different steps of the German turkey meat production chain were observed using Chi-Square test statistics. The Czekanowski index which can obtain values between 0 (no similarity) and 1 (perfect agreement) was consistently high (0.79–0.86) for the distribution of spa types and SCCmec types between the different processing stages indicating high degrees of similarity. The comparison of antimicrobial resistance profiles between the different process steps revealed the lowest Czekanowski index values (0.42–0.56). However, the Czekanowski index values were substantially higher than the index when isolates from the turkey meat production chain were compared to isolates from wild boar meat (0.13–0.19), an example of a separated population of MRSA used as control group. This result indicates that the proposed statistical method is valid to detect existing differences in the distribution of the tested characteristics of MRSA. The degree of similarity in the distribution of spa types, SCCmec types and antimicrobial resistance profiles between MRSA isolates from different process stages of turkey meat production may reflect MRSA transmission along the chain. PMID:24788143
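The proportional similarity (Czekanowski) index used above has a one-line definition, PSI = Σ min(p_i, q_i) over two proportion distributions; the sketch below computes it for made-up spa-type counts from two production stages.

```python
# Sketch of the Czekanowski (proportional similarity) index; counts are hypothetical.
import numpy as np

def czekanowski_index(counts_a, counts_b):
    p = np.asarray(counts_a, float) / np.sum(counts_a)
    q = np.asarray(counts_b, float) / np.sum(counts_b)
    return np.minimum(p, q).sum()        # 1 = identical distributions, 0 = no overlap

farm      = [10, 5, 4, 3]                # hypothetical spa-type counts at the farm stage
slaughter = [60, 40, 20, 15]             # hypothetical counts on carcasses after slaughter
print(round(czekanowski_index(farm, slaughter), 2))
```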
49 CFR 231.1 - Box and other house cars built or placed in service before October 1, 1966.
Code of Federal Regulations, 2013 CFR
2013-10-01
... and passing through the inside face of knuckle when closed with coupler horn against the buffer block... brake-shaft step which will permit the brake chain to drop under the brake shaft shall not be used. U...-eighths of an inch square. Square-fit taper, nominally 2 in 12 inches. (See plate A.) (vi) Brake chain...
Kay, Cynthia; Jackson, Jeffrey L; Frank, Michael
2015-01-01
To explore the relationship between United States Medical Licensing Examination (USMLE) Step 1 scores, yearly in-service training exam (ITE) scores, and passing the American Board of Internal Medicine certifying examination (ABIM-CE). The authors conducted a retrospective database review of internal medicine residents from the Medical College of Wisconsin from 2004 through 2012. Residents' USMLE Step 1, ITE, and ABIM-CE scores were extracted. Pearson rho, chi-square, and logistic regression were used to determine whether relationships existed between the scores and if Step 1 and ITE scores correlate with passing the ABIM-CE. There were 241 residents, who participated in 728 annual ITEs. There were Step 1 scores for 195 (81%) residents and ABIM-CE scores for 183 (76%). Step 1 and ABIM-CE scores had a modest correlation (rho: 0.59), as did ITE and ABIM-CE scores (rho: 0.48-0.67). Failing Step 1 or being in the bottom ITE quartile during any year of testing markedly increased likelihood of failing the boards (Step 1: relative risk [RR]: 2.4; 95% CI: 1.0-5.9; first-year residents' RR: 1.3; 95% CI: 1.0-1.6; second-year residents' RR: 1.3; 95% CI: 1.1-1.5; third-year residents' RR: 1.3; 95% CI: 1.1-1.5). USMLE Step 1 and ITE scores have a modest correlation with board scores. Failing Step 1 or scoring in the bottom quartile of the ITE increased the risk of failing the boards. What effective intervention, if any, program directors may use with at-risk residents is a question deserving further research.
NASA Astrophysics Data System (ADS)
Mofavvaz, Shirin; Sohrabi, Mahmoud Reza; Nezamzadeh-Ejhieh, Alireza
2017-07-01
In the present study, artificial neural networks (ANNs) and least squares support vector machines (LS-SVM), as intelligent methods based on absorption spectra in the range of 230-300 nm, have been used for determination of antihistamine decongestant contents. In the first step, one type of network (feed-forward back-propagation) with two different training algorithms, Levenberg-Marquardt (LM) and gradient descent with momentum and adaptive learning rate back-propagation (GDX), was employed and its performance was evaluated. The performance of the LM algorithm was better than that of the GDX algorithm. In the second step, a radial basis network was utilized and the results were compared with those of the previous network. In the last step, another intelligent method, the least squares support vector machine, was proposed to construct the antihistamine decongestant prediction model, and the results were compared with the two aforementioned networks. The values of the statistical parameters mean square error (MSE), regression coefficient (R2), correlation coefficient (r), mean recovery (%), and relative standard deviation (RSD) were used for selecting the best model among these methods. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the comparison of the suggested and reference methods, showed that there were no significant differences between them.
Lai, Rui; Yang, Yin-tang; Zhou, Duan; Li, Yue-jin
2008-08-20
An improved scene-adaptive nonuniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPAs) is proposed. This method simultaneously estimates the infrared detectors' parameters and eliminates the nonuniformity causing fixed pattern noise (FPN) by using a neural network (NN) approach. In the learning process of neuron parameter estimation, the traditional LMS algorithm is substituted with the newly presented variable step size (VSS) normalized least-mean square (NLMS) based adaptive filtering algorithm, which yields faster convergence, smaller misadjustment, and lower computational cost. In addition, a new NN structure is designed to estimate the desired target value, which promotes the calibration precision considerably. The proposed NUC method reaches high correction performance, which is validated by the experimental results quantitatively tested with a simulative testing sequence and a real infrared image sequence.
Simulation, Design, and Test of Square, Apodized Photon Sieves for High Contrast, Exoplanet Imaging
reason, square apodized photon sieves were simulated, designed, and tested for high-contrast performance and use in an exoplanet imaging telescope...for apodizing sieves, measuring PSFs, and characterizing high-contrast performance. Tests indicated that square apodized sieves could detect
Research on spacecraft electrical power conversion
NASA Technical Reports Server (NTRS)
Wilson, T. G.
1974-01-01
The steady state characteristics and starting behavior of some widely used self-oscillating magnetically coupled square wave inverters were studied and the development of LC-tuned square wave inverters is reported. An analysis on high amplitude voltage spikes which occur in dc-to-square-wave parallel converters shows the importance of various circuit parameters for inverter design and for the suppression of spikes. A computerized simulation of an inductor energy storage dc-to-dc converter with closed loop regulators and of a preregulating current step-up converter are detailed. Work continued on the computer aided design of two-winding energy storage dc-to-dc converters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J
Purpose: Metal objects create severe artifacts in kilo-voltage (kV) CT image reconstructions due to the high attenuation coefficients of high atomic number objects. Most of the techniques devised to reduce this artifact utilize a two-step approach, which does not reliably yield high-quality reconstructed images. Thus, for accuracy and simplicity, this work presents a one-step reconstruction method based on a modified penalized weighted least-squares (PWLS) technique. Methods: Existing techniques for metal artifact reduction mostly adopt a two-step approach, which conducts an additional reconstruction with the modified projection data from the initial reconstruction. This procedure does not consistently perform well due to the uncertainties in manipulating the metal-contaminated projection data by thresholding and linear interpolation. This study proposes a one-step reconstruction process using a new PWLS operation with total-variation (TV) minimization, while not manipulating the projection. The PWLS for CT reconstruction has been investigated using a pre-defined weight, based on the variance of the projection datum at each detector bin. This works well when reconstructing CT images from metal-free projection data, but it does not appropriately penalize metal-contaminated projection data. The proposed work defines the weight at each projection element under the assumption of a Poisson random variable. This small modification using element-wise penalization has a large impact in reducing metal artifacts. For evaluation, the proposed technique was assessed with two noisy, metal-contaminated digital phantoms, against the existing PWLS with TV minimization and the two-step approach. Results: Visual inspection showed that the proposed PWLS with TV minimization greatly improved metal artifact reduction relative to the other techniques. Numerically, the new approach lowered the normalized root-mean-square error by about 30% and 60% for the two cases, respectively, compared to the two-step method. Conclusion: A new PWLS operation shows promise for improving metal artifact reduction in CT imaging, as well as simplifying the reconstruction procedure.
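The sketch below is a deliberately stripped-down illustration of the PWLS-plus-TV idea only: the CT forward model is replaced by the identity (i.e. weighted denoising of a 1-D signal), the element-wise weights follow a Poisson-variance heuristic, and a smoothed-TV gradient step is used. It is not the authors' reconstruction algorithm, and all parameters are arbitrary.

```python
# Simplified PWLS + smoothed-TV denoising sketch (forward model = identity, 1-D signal).
import numpy as np

def pwls_tv_denoise(y, beta=25.0, iters=400, step=1e-3, eps=1e-2):
    """Gradient descent on sum_i w_i (x_i - y_i)^2 + beta * sum_j sqrt((x_{j+1}-x_j)^2 + eps)."""
    w = np.maximum(y, 1.0)                 # element-wise weight ~ Poisson variance of each sample
    x = y.astype(float).copy()
    for _ in range(iters):
        grad_fid = 2.0 * w * (x - y)       # gradient of the weighted data-fidelity term
        d = np.diff(x)
        g = d / np.sqrt(d ** 2 + eps)      # derivative of smoothed TV w.r.t. each difference
        grad_tv = np.zeros_like(x)
        grad_tv[:-1] -= g
        grad_tv[1:] += g
        x -= step * (grad_fid + beta * grad_tv)
    return x

rng = np.random.default_rng(0)
truth = np.concatenate([np.full(50, 20.0), np.full(50, 60.0)])   # piecewise-constant profile
y = rng.poisson(truth).astype(float)                             # Poisson-noisy measurement
print(np.round(pwls_tv_denoise(y)[45:55], 1))                    # values around the preserved edge
```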
Blockage Testing in the NASA Glenn 225 Square Centimeter Supersonic Wind Tunnel
NASA Technical Reports Server (NTRS)
Sevier, Abigail; Davis, David O.; Schoenenberger, Mark
2017-01-01
The starting characteristics for three different model geometries were tested in the Glenn Research Center 225 Square Centimeter Supersonic Wind Tunnel. The test models were tested at Mach 2, 2.5 and 3 in a square test section and at Mach 2.5 again in an asymmetric test section. The results gathered in this study will help size the test models and inform other design features for the eventual implementation of a magnetic suspension system.
Fukui, Atsuko; Fujii, Ryuta; Yonezawa, Yorinobu; Sunada, Hisakazu
2006-08-01
In the pharmaceutical preparation of a controlled release drug, it is very important and necessary to understand the entire release properties. As the first step, the dissolution test under various conditions is selected for the in vitro test, and usually the results are analyzed following Drug Approval and Licensing Procedures. In this test, 3 time points for each release ratio, such as 0.2-0.4, 0.4-0.6, and over 0.7, respectively, should be selected in advance. These are analyzed as to whether their values are inside or outside the prescribed aims at each time point. This method is very simple and useful but the details of the release properties can not be clarified or confirmed. The validity of the dissolution test in analysis using a combination of the square-root time law and cube-root law equations to understand all the drug release properties was confirmed by comparing the simulated value with that measured in the previous papers. Dissolution tests under various conditions affecting drug release properties in the human body were then examined, and the results were analyzed by both methods to identify their strengths and weaknesses. Hereafter, the control of pharmaceutical preparation, the manufacturing process, and understanding the drug release properties will be more efficient. It is considered that analysis using the combination of the square-root time law and cube-root law equations is very useful and efficient. The accuracy of predicting drug release properties in the human body was improved and clarified.
Madigan, Michael L; Aviles, Jessica; Allin, Leigh J; Nussbaum, Maury A; Alexander, Neil B
2018-04-16
A growing number of studies are using modified treadmills to train reactive balance after trip-like perturbations that require multiple steps to recover balance. The goal of this study was thus to develop and validate a low-tech reactive balance rating method in the context of trip-like treadmill perturbations to facilitate the implementation of this training outside the research setting. Thirty-five residents of five senior congregate housing facilities participated in the study. Subjects completed a series of reactive balance tests on a modified treadmill from which the reactive balance rating was determined, along with a battery of standard clinical balance and mobility tests that predict fall risk. We investigated the strength of correlation between the reactive balance rating and reactive balance kinematics. We compared the strength of correlation between the reactive balance rating and clinical tests predictive of fall risk, with the strength of correlation between reactive balance kinematics and the same clinical tests. We also compared the reactive balance rating between subjects predicted to be at a high or low risk of falling. The reactive balance rating was correlated with reactive balance kinematics (Spearman's rho squared = .04 - .30), exhibited stronger correlations with clinical tests than most kinematic measures (Spearman's rho squared = .00 - .23), and was 42-60% lower among subjects predicted to be at a high risk for falling. The reactive balance rating method may provide a low-tech, valid measure of reactive balance kinematics, and an indicator of fall risk, after trip-like postural perturbations.
40 CFR 761.310 - Collecting the sample.
Code of Federal Regulations, 2013 CFR
2013-07-01
... standard wipe test as defined in § 761.123 to sample one 10 centimeter by 10 centimeter square (100 cm2) area to represent surface area PCB concentrations of each square meter or fraction of a square meter of... wipe test, only sample the entire area, rather than 10 centimeter by 10 centimeter squares. ...
Change Detection via Selective Guided Contrasting Filters
NASA Astrophysics Data System (ADS)
Vizilter, Y. V.; Rubis, A. Y.; Zheltov, S. Y.
2017-05-01
Change detection scheme based on guided contrasting was previously proposed. Guided contrasting filter takes two images (test and sample) as input and forms the output as filtered version of test image. Such filter preserves the similar details and smooths the non-similar details of test image with respect to sample image. Due to this the difference between test image and its filtered version (difference map) could be a basis for robust change detection. Guided contrasting is performed in two steps: at the first step some smoothing operator (SO) is applied for elimination of test image details; at the second step all matched details are restored with local contrast proportional to the value of some local similarity coefficient (LSC). The guided contrasting filter was proposed based on local average smoothing as SO and local linear correlation as LSC. In this paper we propose and implement new set of selective guided contrasting filters based on different combinations of various SO and thresholded LSC. Linear average and Gaussian smoothing, nonlinear median filtering, morphological opening and closing are considered as SO. Local linear correlation coefficient, morphological correlation coefficient (MCC), mutual information, mean square MCC and geometrical correlation coefficients are applied as LSC. Thresholding of LSC allows operating with non-normalized LSC and enhancing the selective properties of guided contrasting filters: details are either totally recovered or not recovered at all after the smoothing. These different guided contrasting filters are tested as a part of previously proposed change detection pipeline, which contains following stages: guided contrasting filtering on image pyramid, calculation of difference map, binarization, extraction of change proposals and testing change proposals using local MCC. Experiments on real and simulated image bases demonstrate the applicability of all proposed selective guided contrasting filters. All implemented filters provide the robustness relative to weak geometrical discrepancy of compared images. Selective guided contrasting based on morphological opening/closing and thresholded morphological correlation demonstrates the best change detection result.
Samitier, C Beatriz; Guirao, Lluis; Costea, Maria; Camós, Josep M; Pleguezuelos, Eulogio
2016-02-01
Lower limb amputation leads to impaired balance, ambulation, and transfers. Proper fit of the prosthesis is a determining factor for successful ambulation. Vacuum-assisted socket systems extract air from the socket, which decreases pistoning and probability of soft-tissue injuries and increases proprioception and socket comfort. To investigate the effect of vacuum-assisted socket system on transtibial amputees' performance-based and perceived balance, transfers, and gait. Quasi-experimental before-and-after study. Subjects were initially assessed using their prosthesis with the regular socket and re-evaluated 4 weeks after fitting including the vacuum-assisted socket system. We evaluated the mobility grade using Medicare Functional Classification Level, Berg Balance Scale, Four Square Step Test, Timed Up and Go Test, the 6-Min Walk Test, the Locomotor Capabilities Index, Satisfaction with Prosthesis (SAT-PRO questionnaire), and Houghton Scale. A total of 16 unilateral transtibial dysvascular amputees, mean age 65.12 (standard deviation = 10.15) years. Using the vacuum-assisted socket system, the patients significantly improved in balance, gait, and transfers: scores of the Berg Balance Scale increased from 45.75 (standard deviation = 6.91) to 49.06 (standard deviation = 5.62) (p < 0.01), Four Square Step Test decreased from 18.18 (standard deviation = 3.84) s to 14.97 (3.9) s (p < 0.01), Timed Up and Go Test decreased from 14.3 (standard deviation = 3.29) s to 11.56 (2.46) s (p < 0.01). The distance walked in the 6-Min Walk Test increased from 288.53 (standard deviation = 59.57) m to 321.38 (standard deviation = 72.81) m (p < 0.01). Vacuum-assisted socket systems are useful for improving balance, gait, and transfers in over-50-year-old dysvascular transtibial amputees. This study gives more insight into the use of vacuum-assisted socket systems to improve elderly transtibial dysvascular amputees' functionality and decrease their risk of falls. The use of an additional distal valve in the socket should be considered in patients with a lower activity level. © The International Society for Prosthetics and Orthotics 2014.
ERIC Educational Resources Information Center
Osler, James Edward
2013-01-01
This monograph provides an epistemological rationale for the design of an advanced novel analysis metric. The metric is designed to analyze the outcomes of the Tri-Squared Test. This methodology is referred to as: "Tri-Squared Mean Cross Comparative Analysis" (given the acronym TSMCCA). Tri-Squared Mean Cross Comparative Analysis involves…
Two-body potential model based on cosine series expansion for ionic materials
Oda, Takuji; Weber, William J.; Tanigawa, Hisashi
2015-09-23
We examine a method to construct a two-body potential model for ionic materials using a Fourier series basis. In this method, the coefficients of the cosine basis functions are uniquely determined by solving simultaneous linear equations that minimize the sum of weighted mean square errors in energy, force and stress, with first-principles calculation results used as the reference data. As a validation test of the method, potential models for magnesium oxide are constructed. The mean square errors converge appropriately with respect to the truncation of the cosine series. This result mathematically indicates that the constructed potential model is sufficiently close to the one that would be achieved with the non-truncated Fourier series and demonstrates that this potential virtually provides the minimum error from the reference data within the two-body representation. The constructed potential models work appropriately in both molecular statics and dynamics simulations, especially if a two-step correction to revise errors expected in the reference data is performed, and the models clearly outperform two existing Buckingham potential models that were tested. Moreover, the good agreement with first-principles calculations over a broad range of energies and forces should enable the prediction of materials behavior away from equilibrium conditions, such as a system under irradiation.
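A hedged, unweighted illustration of the fitting step is given below: a truncated cosine series is fitted to reference pair energies by ordinary linear least squares. A Lennard-Jones curve stands in for the first-principles energy/force/stress data, and the cutoff, grid, and number of terms are arbitrary.

```python
# Sketch: fit a pair potential as a truncated cosine series by linear least squares.
import numpy as np

r_cut = 6.0                                          # cutoff radius (Angstrom, arbitrary)
r = np.linspace(2.0, r_cut, 200)                     # sampled pair separations
v_ref = 4.0 * ((2.5 / r) ** 12 - (2.5 / r) ** 6)     # Lennard-Jones stand-in for reference energies

n_terms = 12
basis = np.cos(np.outer(r, np.arange(n_terms)) * np.pi / r_cut)   # columns: cos(k*pi*r/r_cut)
coeffs, *_ = np.linalg.lstsq(basis, v_ref, rcond=None)            # least-squares cosine coefficients

rms = np.sqrt(np.mean((basis @ coeffs - v_ref) ** 2))
print(round(float(rms), 4))                          # RMS error shrinks as n_terms grows
```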
A Graphic Chi-Square Test For Two-Class Genetic Segregation Ratios
A.E. Squillace; D.J. Squillace
1970-01-01
A chart is presented for testing the goodness of fit of observed two-class genetic segregation ratios against hypothetical ratios, eliminating the need of computing chi-square. Although designed mainly for genetic studies, the chart can also be used for other types of studies involving two-class chi-square tests.
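For reference, the quantity the chart is designed to avoid computing by hand is the ordinary two-class chi-square; a worked example against a hypothetical 3:1 Mendelian ratio is shown below with invented counts.

```python
# Worked two-class segregation chi-square with hypothetical counts.
observed = (290, 110)            # hypothetical dominant vs. recessive counts
ratio = (3, 1)                   # Mendelian 3:1 hypothesis
n = sum(observed)
expected = [n * r / sum(ratio) for r in ratio]
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_sq, 2))          # compare with 3.84 (chi-square, 1 df, alpha = 0.05)
```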
Your Chi-Square Test Is Statistically Significant: Now What?
ERIC Educational Resources Information Center
Sharpe, Donald
2015-01-01
Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…
ERIC Educational Resources Information Center
Hill, Kathleen
The final booklet in a series on physical education and sports for the handicapped presents ideas for teaching dance to the physically disabled. Introductory sections consider the rehabilitation role of dance, physiological and psychological benefits, and facilities for dance instruction. Step-by-step suggestions are given for teaching ballroom…
Sebastião, Emerson; McAuley, Edward; Shigematsu, Ryosuke; Motl, Robert W
2017-09-01
We propose a randomized controlled trial (RCT) examining the feasibility of square-stepping exercise (SSE) delivered as a home-based program for older adults with multiple sclerosis (MS). We will assess feasibility in the four domains of process, resources, management and scientific outcomes. The trial will recruit older adults (aged 60 years and older) with mild-to-moderate MS-related disability who will be randomized into intervention or attention control conditions. Participants will complete assessments before and after completion of the conditions delivered over a 12-week period. Participants in the intervention group will have biweekly meetings with an exercise trainer in the Exercise Neuroscience Research Laboratory and receive verbal and visual instruction on step patterns for the SSE program. Participants will receive a mat for home-based practice of the step patterns, an instruction manual, and a logbook and pedometer for monitoring compliance. Compliance will be further monitored through weekly scheduled Skype calls. This feasibility study will inform future phase II and III RCTs that determine the actual efficacy and effectiveness of a home-based exercise program for older adults with MS.
Castine Report S-15 Project: Shipbuilding Standards
1976-01-01
Standards covered include: Fixed Square Windows (Ships); Extruded Aluminium Alloy Square Windows (Ships); Foot Steps (Ships); Wooden Hand Rails; Pilot Ladders; Panama Canal Pilot Platforms; Aluminium Alloy Accommodation Ladders; Mouth Pieces for Voice Tubes; Chain Drive Type Telegraphs; Fittings for Steam Whistles; Lifeboats, Radial Type …; Cast Steel Angle Valves for Compressed Air. Standard designations: F 8001-1957, F 8002-1967, F 8003-1975, F 8004-1975, F 8011-1966, F 8013-1969, F 8101-1969, F 8401-1970.
Veraart, Jelle; Sijbers, Jan; Sunaert, Stefan; Leemans, Alexander; Jeurissen, Ben
2013-11-01
Linear least squares estimators are widely used in diffusion MRI for the estimation of diffusion parameters. Although adding proper weights is necessary to increase the precision of these linear estimators, there is no consensus on how to practically define them. In this study, the impact of the commonly used weighting strategies on the accuracy and precision of linear diffusion parameter estimators is evaluated and compared with the nonlinear least squares estimation approach. Simulation and real data experiments were done to study the performance of the weighted linear least squares estimators with weights defined by (a) the squares of the respective noisy diffusion-weighted signals; and (b) the squares of the predicted signals, which are reconstructed from a previous estimate of the diffusion model parameters. The negative effect of weighting strategy (a) on the accuracy of the estimator was surprisingly high. Multi-step weighting strategies yield better performance and, in some cases, even outperformed the nonlinear least squares estimator. If proper weighting strategies are applied, the weighted linear least squares approach shows high performance characteristics in terms of accuracy/precision and may even be preferred over nonlinear estimation methods. Copyright © 2013 Elsevier Inc. All rights reserved.
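The two weighting strategies compared above can be shown on a simpler mono-exponential diffusion fit, log S = log S0 − bD: the sketch below starts from weights equal to the squared measured signals (strategy a) and optionally reweights with squared predicted signals (strategy b). The b-values, diffusivity, and noise level are illustrative, not taken from the paper.

```python
# Sketch of WLLS weighting strategies (a) and (b) on a mono-exponential diffusion fit.
import numpy as np

def wlls(bvals, signals, n_reweight=0):
    A = np.column_stack([np.ones_like(bvals), -bvals])   # unknowns: [log S0, D]
    y = np.log(signals)
    w = signals ** 2                                      # strategy (a): squares of noisy signals
    for _ in range(n_reweight + 1):
        W = np.diag(w)
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        w = np.exp(A @ beta) ** 2                         # strategy (b): squares of predicted signals
    return beta                                           # [log S0, D]

rng = np.random.default_rng(2)
bvals = np.array([0, 200, 400, 600, 800, 1000], float)    # s/mm^2 (illustrative)
truth_S0, truth_D = 1.0, 1.5e-3                           # mm^2/s, typical tissue order of magnitude
signals = truth_S0 * np.exp(-bvals * truth_D) + 0.02 * rng.standard_normal(bvals.size)
print(wlls(bvals, signals, n_reweight=2))                 # estimated D close to 1.5e-3
```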
Shani, Guy; Shapiro, Amir; Oded, Goldstein; Dima, Kagan; Melzer, Itshak
2017-01-01
Rapid compensatory stepping plays an important role in preventing falls when balance is lost; however, these responses cannot be accurately quantified in the clinic. The Microsoft Kinect™ system provides real-time anatomical landmark position data in three dimensions (3D), which may bridge this gap. Compensatory stepping reactions were evoked in 8 young adults by a sudden horizontal motion of the platform on which the subject stood or of the treadmill on which the subject walked. The movements were recorded with both a 3D-APAS motion capture system and the Microsoft Kinect™ system. The outcome measures consisted of compensatory step times (milliseconds) and step lengths (centimeters). The average values of two standing and walking trials for the Microsoft Kinect™ and the 3D-APAS systems were compared using the t-test, Pearson's correlation, Bland-Altman plots, and the average difference of root mean square error (RMSE) of joint position. The Microsoft Kinect™ had high correlations for the compensatory step times (r = 0.75-0.78, p = 0.04) during standing and moderate correlations for walking (r = 0.53-0.63, p = 0.05). Step length, however, had very high correlations for both standing and walking (r > 0.97, p = 0.01). The RMSE showed acceptable differences during the perturbation trials, with the smallest relative error in the anterior-posterior direction (2-3%) and the largest in the vertical direction (11-13%). No systematic bias was evident in the Bland-Altman plots. The Microsoft Kinect™ system provides data comparable to a video-based 3D motion analysis system when assessing step length, and less accurate but still clinically acceptable data for step times during balance recovery when balance is lost and a fall is initiated.
Process for obtaining multiple sheet resistances for thin film hybrid microcircuit resistors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norwood, D P
1989-01-31
A standard thin film circuit containing Ta2N (100 ohms/square) resistors is fabricated by depositing on a dielectric substrate successive layers of Ta2N, Ti and Pd, with a gold layer to provide conductors. The addition of a few simple photoprocessing steps to the standard TFN (thin film network) manufacturing process enables the formation of Ta2N + Ti (10 ohms/square) and Ta2N + Ti + Pd (1 ohm/square) resistors in the same otherwise standard thin film circuit structure. All three types of resistors are temperature-stable and laser-trimmable for precise definition of resistance values.
46 CFR 32.60-40 - Construction and testing of cargo tanks and bulkheads-TB/ALL.
Code of Federal Regulations, 2010 CFR
2010-10-01
... cargo tanks vented at gage pressure of 4 pounds per square inch or less shall be constructed and tested... 4 pounds per square inch but not exceeding 10 pounds per square inch gage pressure will be given... square inch are considered to be pressure vessels and shall be of cylindrical or similar design and shall...
ERIC Educational Resources Information Center
Biermann, Carol
1988-01-01
Described is a study designed to introduce students to the behavior of common invertebrate animals, and to use of the chi-square statistical technique. Discusses activities with snails, pill bugs, and mealworms. Provides an abbreviated chi-square table and instructions for performing the experiments and statistical tests. (CW)
NASA Astrophysics Data System (ADS)
Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.
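The sketch below shows only the MMSE-FDE step that both channel estimators feed into, for a cyclic-prefixed single-carrier BPSK block over an assumed, perfectly known three-tap channel; the pilot-assisted MMSE-CE and the decision-feedback 2-step MLCE themselves are not modeled.

```python
# Minimal MMSE frequency-domain equalization sketch (perfect channel knowledge assumed).
import numpy as np

rng = np.random.default_rng(3)
N, snr_db = 64, 15
h = np.array([0.8, 0.5, 0.3])                       # illustrative channel impulse response
s = rng.choice([-1.0, 1.0], N)                      # BPSK block
noise_var = 10 ** (-snr_db / 10)

H = np.fft.fft(h, N)                                # channel frequency response
r = np.fft.ifft(H * np.fft.fft(s)) \
    + np.sqrt(noise_var / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

W = np.conj(H) / (np.abs(H) ** 2 + noise_var)       # MMSE FDE weights (unit symbol energy)
s_hat = np.sign(np.fft.ifft(W * np.fft.fft(r)).real)
print(int(np.sum(s_hat != s)), "symbol errors out of", N)
```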
Automated Array Assembly, Phase 2
NASA Technical Reports Server (NTRS)
Carbajal, B. G.
1979-01-01
The solar cell module process development activities in the areas of surface preparation are presented. The process step development was carried out on texture etching including the evolution of a conceptual process model for the texturing process; plasma etching; and diffusion studies that focused on doped polymer diffusion sources. Cell processing was carried out to test process steps and a simplified diode solar cell process was developed. Cell processing was also run to fabricate square cells to populate sample minimodules. Module fabrication featured the demonstration of a porcelainized steel glass structure that should exceed the 20 year life goal of the low cost silicon array program. High efficiency cell development was carried out in the development of the tandem junction cell and a modification of the TJC called the front surface field cell. Cell efficiencies in excess of 16 percent at AM1 have been attained with only modest fill factors. The transistor-like model was proposed that fits the cell performance and provides a guideline for future improvements in cell performance.
Inventory of forest and rangeland and detection of forest stress
NASA Technical Reports Server (NTRS)
Heller, R. C.; Aldrich, R. C.; Weber, F. P.; Driscoll, R. S. (Principal Investigator)
1973-01-01
The author has identified the following significant results. At the Atlanta site (226B) it was found that bulk color composites for October 15, 1972, and April 13, 1973, can be interpreted together to disclose the location of the perennial Kudzu vine (Pueraria lobata). Land managers concerned with Kudzu eradication could use ERTS-1 to inventory locations over 200 meters (660 feet) square. Microdensitometer data collected on ERTS-1 Bulk photographic products for the Manitou test site (226C) have shown that the 15-step gray-scale tablets are not of systematic equal values corresponding to 1/14 the maximum radiant energy incident on the MSS sensor. The gray-scale values follow a third-order polynomial function rather than a direct linear relationship. Although data collected on step tablets for precision photographic products appear more discrete, the density variation within blocks is almost as great as the variation between blocks. These system errors will cause problems when attempting to analyze radiometric variances among vegetation and land use classes.
Hierarchical Solution of the Traveling Salesman Problem with Random Dyadic Tilings
NASA Astrophysics Data System (ADS)
Kalmár-Nagy, Tamás; Bak, Bendegúz Dezső
We propose a hierarchical heuristic approach for solving the Traveling Salesman Problem (TSP) in the unit square. The points are partitioned with a random dyadic tiling and clusters are formed by the points located in the same tile. Each cluster is represented by its geometrical barycenter and a “coarse” TSP solution is calculated for these barycenters. Midpoints are placed at the middle of each edge in the coarse solution. Near-optimal (or optimal) minimum tours are computed for each cluster. The tours are concatenated using the midpoints yielding a solution for the original TSP. The method is tested on random TSPs (independent, identically distributed points in the unit square) up to 10,000 points as well as on a popular benchmark problem (att532 — coordinates of 532 American cities). Our solutions are 8-13% longer than the optimal ones. We also present an optimization algorithm for the partitioning to improve our solutions. This algorithm further reduces the solution errors (by several percent using 1000 iteration steps). The numerical experiments demonstrate the viability of the approach.
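A minimal sketch of the hierarchical idea is given below. It is not the authors' implementation: a fixed k×k grid stands in for the random dyadic tiling, and a greedy nearest-neighbour heuristic replaces both the coarse TSP solver and the near-optimal per-cluster tours; the midpoint-based concatenation is simplified to plain concatenation of cluster tours in coarse-tour order.

```python
import numpy as np

def nn_tour(pts):
    """Greedy nearest-neighbour tour over an array of 2-D points (returns indices into pts)."""
    remaining = list(range(len(pts)))
    tour = [remaining.pop(0)]
    while remaining:
        last = pts[tour[-1]]
        nxt = min(remaining, key=lambda i: np.sum((pts[i] - last) ** 2))
        remaining.remove(nxt)
        tour.append(nxt)
    return tour

rng = np.random.default_rng(1)
points = rng.random((500, 2))                       # i.i.d. points in the unit square
k = 4                                               # fixed k*k tiling (stand-in for a random dyadic tiling)
tile_id = (np.floor(points[:, 0] * k).astype(int) * k
           + np.floor(points[:, 1] * k).astype(int))
clusters = {t: np.where(tile_id == t)[0] for t in np.unique(tile_id)}
barycenters = np.array([points[idx].mean(axis=0) for idx in clusters.values()])

# "coarse" tour over cluster barycenters, then a local tour inside each visited cluster
coarse = nn_tour(barycenters)
keys = list(clusters.keys())
full_tour = []
for c in coarse:
    idx = clusters[keys[c]]
    full_tour.extend(idx[nn_tour(points[idx])])

length = sum(np.linalg.norm(points[full_tour[i]] - points[full_tour[(i + 1) % len(full_tour)]])
             for i in range(len(full_tour)))
print(f"tour length over {len(points)} points: {length:.2f}")
```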
76 FR 13580 - Bus Testing; Calculation of Average Passenger Weight and Test Vehicle Weight
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-14
... occupied per standing passenger from 1.5 to 1.75 square feet, and updating the Structural Strength and... standing passenger from 1.5 square feet of free floor space to 1.75 square feet of free floor space to... (assumed to be each 1.75 square foot of free floor space). * * * * * 3. Amend Appendix A to part 665 by...
Linear programming phase unwrapping for dual-wavelength digital holography.
Wang, Zhaomin; Jiao, Jiannan; Qu, Weijuan; Yang, Fang; Li, Hongru; Tian, Ailing; Asundi, Anand
2017-01-20
A linear programming phase unwrapping method in dual-wavelength digital holography is proposed and verified experimentally. The proposed method uses the square of height difference as a convergence standard and theoretically gives the boundary condition in a searching process. A simulation was performed by unwrapping step structures at different levels of Gaussian noise. As a result, our method is capable of recovering the discontinuities accurately. It is robust and straightforward. In the experiment, a microelectromechanical systems sample and a cylindrical lens were measured separately. The testing results were in good agreement with true values. Moreover, the proposed method is applicable not only in digital holography but also in other dual-wavelength interferometric techniques.
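For orientation, the snippet below shows only the conventional dual-wavelength relation that such methods build on: the difference of the two wrapped phases varies with the synthetic (beat) wavelength Λ = λ1λ2/|λ1−λ2|, giving a coarse height that selects the fringe order of the fine single-wavelength phase. The wavelengths, the reflection geometry and the step height are assumed for illustration; the paper's linear-programming refinement is not reproduced.

```python
import numpy as np

lam1, lam2 = 532e-9, 632.8e-9                  # assumed wavelengths
lam_synth = lam1 * lam2 / abs(lam1 - lam2)     # synthetic (beat) wavelength

def dual_wavelength_height(phi1, phi2):
    """Coarse height from the beat phase, then refinement with the lambda_1 phase.
    Reflection geometry assumed (factor 4*pi); no linear-programming step here."""
    beat = np.angle(np.exp(1j * (phi1 - phi2)))          # wrapped to (-pi, pi]
    h_coarse = beat / (4 * np.pi) * lam_synth
    m = np.round((4 * np.pi * h_coarse / lam1 - phi1) / (2 * np.pi))   # fringe order
    return (phi1 + 2 * np.pi * m) * lam1 / (4 * np.pi)

# synthetic step structure of 400 nm height
true_h = np.where(np.arange(256) < 128, 0.0, 400e-9)
phi1 = np.angle(np.exp(1j * 4 * np.pi * true_h / lam1))
phi2 = np.angle(np.exp(1j * 4 * np.pi * true_h / lam2))
print(np.max(np.abs(dual_wavelength_height(phi1, phi2) - true_h)))    # ~0
```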
MRI-based intelligence quotient (IQ) estimation with sparse learning.
Wang, Liye; Wee, Chong-Yaw; Suk, Heung-Il; Tang, Xiaoying; Shen, Dinggang
2015-01-01
In this paper, we propose a novel framework for IQ estimation using Magnetic Resonance Imaging (MRI) data. In particular, we devise a new feature selection method based on an extended dirty model for jointly considering both element-wise sparsity and group-wise sparsity. Meanwhile, due to the absence of large dataset with consistent scanning protocols for the IQ estimation, we integrate multiple datasets scanned from different sites with different scanning parameters and protocols. In this way, there is large variability in these different datasets. To address this issue, we design a two-step procedure for 1) first identifying the possible scanning site for each testing subject and 2) then estimating the testing subject's IQ by using a specific estimator designed for that scanning site. We perform two experiments to test the performance of our method by using the MRI data collected from 164 typically developing children between 6 and 15 years old. In the first experiment, we use a multi-kernel Support Vector Regression (SVR) for estimating IQ values, and obtain an average correlation coefficient of 0.718 and also an average root mean square error of 8.695 between the true IQs and the estimated ones. In the second experiment, we use a single-kernel SVR for IQ estimation, and achieve an average correlation coefficient of 0.684 and an average root mean square error of 9.166. All these results show the effectiveness of using imaging data for IQ prediction, which is rarely done in the field according to our knowledge.
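The two-step procedure (site identification followed by a site-specific regressor) can be illustrated with off-the-shelf tools. The sketch below uses synthetic stand-in features rather than MRI data and an SVC for the site classifier; the SVR settings are assumptions, not the authors' tuned models.

```python
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
# synthetic stand-in for MRI-derived features from two scanning sites
n, d = 120, 20
site = rng.integers(0, 2, n)
X = rng.standard_normal((n, d)) + site[:, None] * 0.8          # site-dependent offset
iq = 100 + 5 * X[:, 0] + 3 * site + 4 * rng.standard_normal(n)

X_tr, X_te = X[:100], X[100:]
site_tr, site_te = site[:100], site[100:]
iq_tr, iq_te = iq[:100], iq[100:]

# step 1: identify the scanning site of each testing subject
site_clf = SVC(kernel="rbf").fit(X_tr, site_tr)
site_hat = site_clf.predict(X_te)

# step 2: a site-specific SVR estimates IQ for subjects routed to that site
regs = {s: SVR(kernel="rbf", C=10.0).fit(X_tr[site_tr == s], iq_tr[site_tr == s]) for s in (0, 1)}
iq_hat = np.array([regs[s].predict(x[None, :])[0] for s, x in zip(site_hat, X_te)])

rmse = np.sqrt(np.mean((iq_hat - iq_te) ** 2))
print(f"root mean square error: {rmse:.2f}")
```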
Toward an in situ phosphate sensor in seawater using Square Wave Voltammetry.
Barus, C; Romanytsia, I; Striebig, N; Garçon, V
2016-11-01
A Square Wave Voltammetry electrochemical method is proposed to measure phosphate in seawater, as pulse techniques offer higher sensitivity than classical cyclic voltammetry. Chronoamperometry is also unsuitable for an in situ sensor, since it requires controlled convection, which is impossible in a miniaturised sensor. Tests and validation of the Square Wave Voltammetry parameters have been performed using an open cell and, for the first time, with small-volume (<400 µL) laboratory prototypes. Two prototype designs have been compared. Using a high frequency (f = 250 Hz) yields a linear response between 0.1 and 1 µmol L(-1) with a very low limit of detection of 0.05 µmol L(-1) after a 60 min complexation waiting time. To obtain a linear regression over a larger concentration range, i.e. 0.25-4 µmol L(-1), a lower frequency of 2.5 Hz is needed. In this case a limit of detection of 0.1 µmol L(-1) is obtained after a 30 min complexation waiting time for the peak measured at E = 0.12 V. Changing the position of the molybdenum electrode for the complexation step and moving the detection into another electrochemical cell decrease the reaction time to 5 min. Copyright © 2016 Elsevier B.V. All rights reserved.
A solvent- and vacuum-free route to large-area perovskite films for efficient solar modules
NASA Astrophysics Data System (ADS)
Chen, Han; Ye, Fei; Tang, Wentao; He, Jinjin; Yin, Maoshu; Wang, Yanbo; Xie, Fengxian; Bi, Enbing; Yang, Xudong; Grätzel, Michael; Han, Liyuan
2017-10-01
Recent advances in the use of organic-inorganic hybrid perovskites for optoelectronics have been rapid, with reported power conversion efficiencies of up to 22 per cent for perovskite solar cells. Improvements in stability have also enabled testing over a timescale of thousands of hours. However, large-scale deployment of such cells will also require the ability to produce large-area, uniformly high-quality perovskite films. A key challenge is to overcome the substantial reduction in power conversion efficiency when a small device is scaled up: a reduction from over 20 per cent to about 10 per cent is found when a common aperture area of about 0.1 square centimetres is increased to more than 25 square centimetres. Here we report a new deposition route for methyl ammonium lead halide perovskite films that does not rely on use of a common solvent or vacuum: rather, it relies on the rapid conversion of amine complex precursors to perovskite films, followed by a pressure application step. The deposited perovskite films were free of pin-holes and highly uniform. Importantly, the new deposition approach can be performed in air at low temperatures, facilitating fabrication of large-area perovskite devices. We reached a certified power conversion efficiency of 12.1 per cent with an aperture area of 36.1 square centimetres for a mesoporous TiO2-based perovskite solar module architecture.
Square-core bundles for astronomical imaging
NASA Astrophysics Data System (ADS)
Bryant, Julia J.; Bland-Hawthorn, Joss
2012-09-01
Optical fibre imaging bundles (hexabundles) are proving to be the next logical step for large galaxy surveys as they offer spatially-resolved spectroscopy of galaxies and can be used with conventional fibre positioners. Hexabundles have been effectively demonstrated in the Sydney-AAO Multi-object IFS (SAMI) instrument at the Anglo-Australian Telescope [5]. Based on the success of hexabundles that have circular cores, we have characterised a bundle made instead from square-core fibres. Square cores naturally pack more evenly, which reduces the interstitial holes and can increase the covering, or filling fraction. Furthermore, the regular packing simplifies the process of combining and dithering the final images. We discuss the relative issues of filling fraction, focal ratio degradation (FRD), and cross-talk, and find that square-core bundles perform well enough to warrant further development as a format for imaging fibre bundles.
Influence of the least-squares phase on optical vortices in strongly scintillated beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Mingzhou; Roux, Filippus S.; National Laser Centre, CSIR, P.O. Box 395, Pretoria 0001
2009-07-15
The optical vortices that exist in strongly scintillated beams make it difficult for conventional adaptive optics systems to remove the phase distortions. When the least-squares reconstructed phase is removed, the vortices still remain. However, we found that the removal of the least-squares phase induces a portion of the vortices to be annihilated during subsequent propagation, causing a reduction in the total number of vortices. This can be understood in terms of the restoration of equilibrium between explicit vortices, which are visible in the phase function, and vortex bound states, which are somehow encoded in the continuous phase fluctuations. Numerical simulations are provided to show that the total number of optical vortices in a strongly scintillated beam can be reduced significantly after a few steps of least-squares phase corrections.
NASA Astrophysics Data System (ADS)
Ream, Allen E.; Slattery, John C.; Cizmas, Paul G. A.
2018-04-01
This paper presents a new method for determining the Arrhenius parameters of a reduced chemical mechanism such that it satisfies the second law of thermodynamics. The strategy is to approximate the progress of each reaction in the reduced mechanism from the species production rates of a detailed mechanism by using a linear least squares method. A series of non-linear least squares curve fittings are then carried out to find the optimal Arrhenius parameters for each reaction. At this step, the molar rates of production are written such that they comply with a theorem that provides the sufficient conditions for satisfying the second law of thermodynamics. This methodology was used to modify the Arrhenius parameters for the Westbrook and Dryer two-step mechanism and the Peters and Williams three-step mechanism for methane combustion. Both optimized mechanisms showed good agreement with the detailed mechanism for species mole fractions and production rates of most major species. Both optimized mechanisms showed significant improvement over previous mechanisms in minor species production rate prediction. Both optimized mechanisms produced no violations of the second law of thermodynamics.
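The second stage, fitting Arrhenius parameters by least squares to target rate data, can be illustrated as follows. This is a generic modified-Arrhenius fit on synthetic rate constants (the entropy-consistency constraint and the reaction-progress extraction from a detailed mechanism are not included); all numerical values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # J/(mol K)

def log_modified_arrhenius(T, lnA, n, Ea):
    """ln k for the modified Arrhenius form k(T) = A * T**n * exp(-Ea / (R*T))."""
    return lnA + n * np.log(T) - Ea / (R * T)

# synthetic "target" rate constants standing in for rates extracted from a detailed mechanism
T = np.linspace(1000.0, 2200.0, 30)
k_true = 2.0e9 * T**0.5 * np.exp(-1.2e5 / (R * T))
k_obs = k_true * np.exp(0.05 * np.random.default_rng(3).standard_normal(T.size))

(lnA, n, Ea), _ = curve_fit(log_modified_arrhenius, T, np.log(k_obs), p0=(20.0, 0.0, 1.0e5))
print(f"A = {np.exp(lnA):.3e}, n = {n:.2f}, Ea = {Ea:.3e} J/mol")
```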
User error with Diskus and Turbuhaler by asthma patients and pharmacists in Jordan and Australia.
Basheti, Iman A; Qunaibi, Eyad; Bosnic-Anticevich, Sinthia Z; Armour, Carol L; Khater, Samar; Omar, Muthana; Reddel, Helen K
2011-12-01
Use of inhalers requires accurate completion of multiple steps to ensure effective medication delivery. To evaluate the most problematic steps in the use of Diskus and Turbuhaler for pharmacists and patients in Jordan and Australia. With standardized inhaler-technique checklists, we asked community pharmacists to demonstrate the use of Diskus and Turbuhaler. We asked patients with asthma to demonstrate the inhaler (Diskus or Turbuhaler) they were currently using. Forty-two community pharmacists in Jordan, and 31 in Australia, participated. In Jordan, 51 asthma patients demonstrated use of Diskus, and 40 demonstrated use of Turbuhaler. In Australia, 53 asthma patients demonstrated use of Diskus, and 42 demonstrated use of Turbuhaler. The pharmacists in Australia had received inhaler-technique education more recently than those in Jordan (P = .03). With Diskus, few pharmacists in either country demonstrated correct technique for step 3 (exhale to residual volume) or step 4 (exhale away from the device), although there were somewhat fewer errors in Australia than Jordan (16% vs 0% in step 3, P = .007, and 20% vs 0% in step 4, P = .003 via chi-square test). With Turbuhaler there were significant differences between the pharmacists from Australia and Jordan, mainly in step 2 (hold the device upright while loading, 45% vs 2% correct, P < .001). Few of the patients had received inhaler-technique education in the previous year. The patients made errors similar to those of the pharmacists in individual steps with Diskus and Turbuhaler. The essential steps with Diskus were performed correctly more often by the Jordanian patients, and with Turbuhaler by the Australian patients. Despite differences in Jordan's and Australia's health systems, pharmacists from both Australia and Jordan had difficulty with the same Diskus and Turbuhaler steps. In both countries, the errors made by the asthma patients were similar to those made by the pharmacists.
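The between-country comparisons quoted above are chi-square tests on 2×2 tables of correct versus incorrect technique. A generic example of such a test is sketched below; the counts are hypothetical, chosen only to illustrate the calculation, not taken from the study.

```python
from scipy.stats import chi2_contingency

# hypothetical 2x2 counts of correct vs incorrect technique on one inhaler step
# rows: Australia, Jordan -- numbers chosen only to illustrate the test
table = [[5, 26],
         [0, 42]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```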
Efficacy and Safety of the Once-Daily GLP-1 Receptor Agonist Lixisenatide in Monotherapy
Fonseca, Vivian A.; Alvarado-Ruiz, Ricardo; Raccah, Denis; Boka, Gabor; Miossec, Patrick; Gerich, John E.
2012-01-01
OBJECTIVE To assess efficacy and safety of lixisenatide monotherapy in type 2 diabetes. RESEARCH DESIGN AND METHODS Randomized, double-blind, 12-week study of 361 patients not on glucose-lowering therapy (HbA1c 7–10%) allocated to one of four once-daily subcutaneous dose increase regimens: lixisenatide 2-step (10 μg for 1 week, 15 μg for 1 week, and then 20 μg; n = 120), lixisenatide 1-step (10 μg for 2 weeks and then 20 μg; n = 119), placebo 2-step (n = 61), or placebo 1-step (n = 61) (placebo groups were combined for analyses). Primary end point was HbA1c change from baseline to week 12. RESULTS Once-daily lixisenatide significantly improved HbA1c (mean baseline 8.0%) in both groups (least squares mean change vs. placebo: −0.54% for 2-step, −0.66% for 1-step; P < 0.0001). Significantly more lixisenatide patients achieved HbA1c <7.0% (52.2% 2-step, 46.5% 1-step) and ≤6.5% (31.9% 2-step, 25.4% 1-step) versus placebo (26.8% and 12.5%, respectively; P < 0.01). Lixisenatide led to marked significant improvements of 2-h postprandial glucose levels and blood glucose excursions measured during a standardized breakfast test. A significant decrease in fasting plasma glucose was observed in both lixisenatide groups versus placebo. Mean decreases in body weight (∼2 kg) were observed in all groups. The most common adverse events were gastrointestinal—nausea was the most frequent (lixisenatide 23% overall, placebo 4.1%). Symptomatic hypoglycemia occurred in 1.7% of lixisenatide and 1.6% of placebo patients, with no severe episodes. Safety/tolerability was similar for the two dose regimens. CONCLUSIONS Once-daily lixisenatide monotherapy significantly improved glycemic control with a pronounced postprandial effect (75% reduction in glucose excursion) and was safe and well tolerated in type 2 diabetes. PMID:22432104
46 CFR 178.330 - Simplified stability proof test.
Code of Federal Regulations, 2010 CFR
2010-10-01
...-meters (foot-pounds); P = wind pressure of: (1) 36.6 kilograms/square meter (7.5 pounds/square foot) for operation on protected waters; (2) 48.8 kilogram/square meter (10.0 pounds/square foot) for operation on partially protected waters; or (3) 73.3 kilograms/square meter (15.0 pounds/square foot) for operation on...
NASA Astrophysics Data System (ADS)
Sharma, Dinesh Kumar; Sharma, Anurag; Tripathi, Saurabh Mani
2017-11-01
The excellent propagation properties of square-lattice microstructured optical fibers (MOFs) have been widely recognized. We generalize our recently developed analytical field model (Sharma and Sharma, 2016) to index-guiding MOFs with a square lattice of circular air-holes in the photonic crystal cladding. Using the field model, we have studied the propagation properties of the fundamental mode of index-guiding square-lattice MOFs with different hole-to-hole spacings and air-hole diameters. Results for the modal effective index, the near- and far-field patterns, and the group-velocity dispersion are included. The evolution of the mode shape in the transition from the near-field to the far-field domain has been investigated. We have also studied the splice losses between two identical square-lattice MOFs and between an MOF and a traditional step-index single-mode fiber. Comparisons with available numerical simulation results, e.g., those based on the full-vector finite element method, are also included.
NASA Astrophysics Data System (ADS)
Li, Minkang; Zhou, Changhe; Wei, Chunlong; Jia, Wei; Lu, Yancong; Xiang, Changcheng; Xiang, XianSong
2016-10-01
Large-sized gratings are essential optical elements in laser fusion and space astronomy facilities. Scanning beam interference lithography is an effective method to fabricate large-sized gratings. To minimize the nonlinear phase written into the photo-resist, the image grating must be measured to adjust the left and right beams to interfere at their waists. In this paper, we propose a new method to conduct wavefront metrology based on phase-stepping interferometry. Firstly, a transmission grating is used to combine the two beams to form an interferogram, which is recorded by a charge-coupled device (CCD). Phase steps are introduced by moving the grating with a linear stage monitored by a laser interferometer. A series of interferograms are recorded as the displacement is measured by the laser interferometer. Secondly, to eliminate the tilt and piston error during the phase stepping, the iterative least-squares phase-shift method is implemented to obtain the wrapped phase. Thirdly, we use the discrete cosine transform least-squares method to unwrap the phase map. Experimental results indicate that the measured wavefront has a nonlinear phase of around 0.05 λ at 404.7 nm. Finally, once the image grating is acquired, we simulate the print error written into the photo-resist.
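The least-squares phase-shift step can be illustrated with the standard (non-iterative) formulation: for known phase steps, each pixel's intensities are fit to I_k = a + c1·cos(δ_k) − c2·sin(δ_k) and the wrapped phase is atan2(c2, c1). The sketch below uses a synthetic tilted-fringe example; the iterative variant that also re-estimates tilt and piston per step is not reproduced.

```python
import numpy as np

def lstsq_phase(frames, deltas):
    """Least-squares phase retrieval from phase-stepped interferograms.
    frames: (K, H, W) intensities, deltas: (K,) nominal phase steps in radians.
    Per-pixel model: I_k = a + c1*cos(delta_k) - c2*sin(delta_k), phase = atan2(c2, c1)."""
    K, H, W = frames.shape
    A = np.column_stack([np.ones(K), np.cos(deltas), -np.sin(deltas)])
    coeff, *_ = np.linalg.lstsq(A, frames.reshape(K, -1), rcond=None)
    _a, c1, c2 = coeff
    return np.arctan2(c2, c1).reshape(H, W)

# synthetic check: five pi/2 steps over a tilted-fringe phase map
H, W = 64, 64
yy, xx = np.mgrid[0:H, 0:W]
phi_true = 0.15 * xx + 0.05 * yy
deltas = np.arange(5) * np.pi / 2
frames = np.array([1.0 + 0.8 * np.cos(phi_true + d) for d in deltas])
phi = lstsq_phase(frames, deltas)
print(np.max(np.abs(np.angle(np.exp(1j * (phi - phi_true))))))   # wrapped residual ~ 0
```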
Self-avoiding walks that cross a square
NASA Astrophysics Data System (ADS)
Burkhardt, T. W.; Guim, I.
1991-10-01
The authors consider self-avoiding walks that traverse an L×L square lattice. Whittington and Guttmann (1990) have proved the existence of a phase transition in the infinite-L limit at a critical value of the step fugacity. They make several finite-size scaling predictions for the critical region, using the relation between self-avoiding walks and the N-vector model of magnetism. Adsorbing as well as nonadsorbing boundaries are considered. The predictions are in good agreement with numerical data for L
NASA Astrophysics Data System (ADS)
Sturrock, P. A.
2008-01-01
Using the chi-square statistic, one may conveniently test whether a series of measurements of a variable are consistent with a constant value. However, that test is predicated on the assumption that the appropriate probability distribution function (pdf) is normal in form. This requirement is usually not satisfied by experimental measurements of the solar neutrino flux. This article presents an extension of the chi-square procedure that is valid for any form of the pdf. This procedure is applied to the GALLEX-GNO dataset, and it is shown that the results are in good agreement with the results of Monte Carlo simulations. Whereas application of the standard chi-square test to symmetrized data yields evidence significant at the 1% level for variability of the solar neutrino flux, application of the extended chi-square test to the unsymmetrized data yields only weak evidence (significant at the 4% level) of variability.
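The idea of replacing the normal-theory p-value with one computed from the actual measurement pdf can be sketched as follows. This is a generic illustration, not the article's extended statistic: it computes the usual chi-square statistic for constancy, then builds a Monte Carlo null distribution by resampling from an assumed skewed error pdf.

```python
import numpy as np
from scipy.stats import chi2

def chisq_constancy(x, sigma):
    """Standard chi-square statistic for consistency of measurements with a constant."""
    w = 1.0 / sigma ** 2
    mean = np.sum(w * x) / np.sum(w)
    return np.sum(w * (x - mean) ** 2), x.size - 1

rng = np.random.default_rng(7)
n_meas = 30
sigma = np.full(n_meas, np.sqrt(32.0))           # per-measurement standard errors

def skewed_sample():
    """Measurements around a constant 'flux' of 70 with a skewed (non-normal) error pdf."""
    return 70.0 + rng.gamma(2.0, 4.0, n_meas) - 8.0

stat, dof = chisq_constancy(skewed_sample(), sigma)
p_gauss = chi2.sf(stat, dof)                     # only valid for normally distributed errors

# Monte Carlo null distribution drawn from the actual error pdf
null = np.array([chisq_constancy(skewed_sample(), sigma)[0] for _ in range(5000)])
p_mc = np.mean(null >= stat)
print(f"chi-square p-value: {p_gauss:.3f}, Monte Carlo p-value: {p_mc:.3f}")
```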
Improvements in Dynamic Balance Using an Adaptive Snowboard with the Nintendo Wii.
Sullivan, Brendan; Harding, Alexandra G; Dingley, John; Gras, Laura Z
2012-08-01
The purpose of this case report is to see if a novel balance board could improve balance and gait of a subject with dynamic balance impairments and enjoyment of virtual rehabilitation training. A novel Adaptive Snowboard™ (developed by two of the authors, B.S. and J.D.) was used in conjunction with the Nintendo(®) (Redmond, WA) Wii™ snowboarding and wakeboarding games with a participant in a physical therapy outpatient clinic. Baseline measurements were taken for gait velocity and stride length, Four Square Step Test, Star Balance Excursion Test, Sensory Organization Test, and the Intrinsic Motivation Inventory. Two 60-90-minute sessions per week for 5 weeks included seven to nine trials of Wii snowboarding or wakeboarding games. Improvements were seen in every outcome measure. This study had comparable results to studies performed using a wobble board in that improvements in balance were made. Use of virtual snowboard simulation improved the subject's balance, gait speed, and stride length, as well as being an enjoyable activity.
NASA Technical Reports Server (NTRS)
Jones, R. A. (Inventor)
1974-01-01
The square root of the product of the thermophysical properties ρ, c and k, where ρ is density, c is specific heat and k is thermal conductivity, is determined directly on a test specimen such as a wind tunnel model. The test specimen and a reference specimen of known specific heat are positioned at a given distance from a heat source. The specimens are provided with a coating, such as a phase change coating, to visually indicate that a given temperature was reached. A shutter interposed between the heat source and the specimens is opened and a motion picture camera is actuated to provide a time record of the heating step. The temperature of the reference specimen is recorded as a function of time. The heat rate to which both the test and reference specimens were subjected is determined from the temperature time response of the reference specimen by the conventional thin-skin calorimeter equation.
Publications - GMC 365 | Alaska Division of Geological & Geophysical
Authors: FEX L.P. and Weatherford Laboratories
Samples: Ivishak Unit #1, Susie #1, Gubik Test #2, Square Lake Test Well #1
Chen, Rong; Yang, Jianhua; Cheng, Xinbing; Pan, Zilong
2017-03-01
High voltage pulse generators are widely applied in a number of fields. Defense and industrial applications have stimulated intense interest in pulsed power systems with high power, high repetition rate, solid-state characteristics, and compact structure. An all-solid-state microsecond-range quasi-square pulse generator based on a fractional-turn ratio saturable pulse transformer and anti-resonance network is proposed in this paper. This generator consists of a charging system, a step-up system, and a modulating system. In this generator, the fractional-turn ratio saturable pulse transformer is the key component since it acts as a step-up transformer and a main switch during the working process. Demonstrative experiments show that if the primary storage capacitors are charged to 400 V, a quasi-square pulse with an amplitude of about 29 kV can be achieved on a 3500 Ω resistive load, with a pulse duration (full width at half maximum) of about 1.3 μs. Preliminary repetition rate experiments are also carried out, which indicate that this pulse generator can work stably at repetition rates of 30 Hz and 50 Hz. It can be concluded that this kind of all-solid-state microsecond-range quasi-square pulse generator can not only lower both the operating voltage of the primary windings and the saturable inductance of the secondary windings, thus ideally realizing the magnetic switch function of the fractional-turn ratio saturable pulse transformer, but also achieve a quasi-square pulse with high quality and a fixed flat top after the modulation of a two-section anti-resonance network. This generator can be applied in areas such as high-power microwave sources, sterilization, disinfection, and wastewater treatment.
Measuring Differential Delays With Sine-Squared Pulses
NASA Technical Reports Server (NTRS)
Hurst, Robert N.
1994-01-01
Technique for measuring differential delays among red, green, and blue components of video signal transmitted on different parallel channels exploits sine-squared pulses that are parts of standard test signals transmitted during vertical blanking interval of frame period. Technique does not entail expense of test-signal generator. Also applicable to nonvideo signals including sine-squared pulses.
Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto
2018-03-01
High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. typhimurium) in a low-acid mamey pulp at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 °C were obtained. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R² > 0.956, root mean square error < 0.290) in the modeling and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and Bigelow-type and empirical models for the b'(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered the two- and one-step nonlinear regressions for making predictions of the survival fraction; both regression types provided an adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to the Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model integrated with the exponential-logistic and exponential decay secondary models as a function of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, where the desired 5D reductions (considering d = 5, i.e. t_5, as the criterion of a 5 log10 reduction) in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
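As an illustration of the primary-model step, the Weibull (Mafart-type) survival curve log10(N/N0) = −b·t^n can be fit by nonlinear least squares and inverted for the 5D time. The data points and starting values below are hypothetical, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, b, n):
    """Mafart-type Weibull model: log10(N/N0) = -b * t**n."""
    return -b * np.power(t, n)

# hypothetical survival data (time in min, log10 reduction) showing a tailing curve
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
logS = np.array([0.0, -1.8, -2.9, -3.6, -4.1, -4.8, -5.3])

(b, n), _ = curve_fit(weibull_log_survival, t, logS, p0=(1.0, 0.5), bounds=(0, np.inf))
t5 = (5.0 / b) ** (1.0 / n)          # time for a 5-log10 (5D) reduction: t_d = (d/b)**(1/n)
print(f"b = {b:.2f}, n = {n:.2f}, t_5D = {t5:.2f} min")
```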
Apollo 12 stereo view of lunar surface upon which astronaut had stepped
1969-11-20
AS12-57-8448 (19-20 Nov. 1969) --- An Apollo 12 stereo view showing a three-inch square of the lunar surface upon which an astronaut had stepped. Taken during extravehicular activity of astronauts Charles Conrad Jr. and Alan L. Bean, the exposure of the boot imprint was made with an Apollo 35mm stereo close-up camera. The camera was developed to get the highest possible resolution of a small area. The three-inch square is photographed with a flash illumination and at a fixed distance. The camera is mounted on a walking stick, and the astronauts use it by holding it up against the object to be photographed and pulling the trigger. While astronauts Conrad and Bean descended in their Apollo 12 Lunar Module to explore the lunar surface, astronaut Richard F. Gordon Jr. remained with the Command and Service Modules in lunar orbit.
El Mhammedi, M A; Achak, M; Bakasse, M; Chtaini, A
2009-08-01
This paper reports on the use of a platinum electrode modified with kaolin (K/Pt) and square wave voltammetry for analytical detection of trace lead(II) in pure water, orange and apple samples. The electroanalytical procedure for determination of Pb(II) comprises two steps: chemical accumulation of the analyte under open-circuit conditions followed by electrochemical detection of the preconcentrated species using square wave voltammetry. The analytical performance of the extraction method was explored by studying the incubation time and the effect of interferences due to other ions. During the preconcentration step, Pb(II) was accumulated on the surface of the kaolin. The observed detection and quantification limits in pure water were 3.6×10(-9) mol L(-1) and 1.2×10(-8) mol L(-1), respectively. The precision of the method was also determined; the result was 2.35% (n=5).
VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal
NASA Astrophysics Data System (ADS)
Satheeskumaran, S.; Sabrigiriraj, M.
2016-06-01
Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts in the electrocardiogram (ECG) because they require few computations. However, they exhibit a high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove the artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible. By using field programmable gate arrays, pipelined architectures can be used to enhance the system performance. The pipelined architecture can enhance the operating efficiency of the adaptive filter and save power consumption. This technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
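A plain fixed-step LMS noise canceller captures the core of the approach; the delayed and variable step-size variants differ only in when and by how much the coefficients are updated. The reference-noise filter, step size and signal model below are assumptions for illustration.

```python
import numpy as np

def lms_noise_canceller(d, x, n_taps=8, mu=0.01):
    """Fixed-step LMS adaptive noise canceller.
    d: primary input (ECG + noise), x: noise reference; returns the error
    signal e, which approximates the cleaned ECG after convergence."""
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for k in range(n_taps - 1, len(d)):
        xk = x[k - n_taps + 1:k + 1][::-1]    # x[k], x[k-1], ..., x[k-n_taps+1]
        y = w @ xk                            # current noise estimate
        e[k] = d[k] - y                       # cleaned sample
        w += 2 * mu * e[k] * xk               # LMS coefficient update
    return e

# toy example: a slow sinusoid stands in for the ECG, corrupted by FIR-filtered reference noise
rng = np.random.default_rng(2)
n = 2000
ecg = np.sin(2 * np.pi * 1.2 * np.arange(n) / 360.0)
ref = rng.standard_normal(n)
noise = np.convolve(ref, [0.6, 0.3, 0.1])[:n]        # causal, "unknown" noise path
cleaned = lms_noise_canceller(ecg + noise, ref)
print(f"input MSE: {np.mean(noise ** 2):.3f}, "
      f"output MSE: {np.mean((cleaned[500:] - ecg[500:]) ** 2):.3f}")
```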
Kim, Roger H; Kurtzman, Scott H; Collier, Ashley N; Shabahang, Mohsen M
Learning styles theory posits that learners have distinct preferences for how they assimilate new information. The VARK model categorizes learners based on combinations of 4 learning preferences: visual (V), aural (A), read/write (R), and kinesthetic (K). A previous single institution study demonstrated that the VARK preferences of applicants who interview for general surgery residency are different from that of the general population and that learning preferences were associated with performance on standardized tests. This multiinstitutional study was conducted to determine the distribution of VARK preferences among interviewees for general surgery residency and the effect of those preferences on United States Medical Licensing Examination (USMLE) scores. The VARK learning inventory was administered to applicants who interviewed at 3 general surgery programs during the 2014 to 2015 academic year. The distribution of VARK learning preferences among interviewees was compared with that of the general population of VARK respondents. Performance on USMLE Step 1 and Step 2 Clinical Knowledge was analyzed for associations with VARK learning preferences. Chi-square, analysis of variance, and Dunnett's test were used for statistical analysis, with p < 0.05 considered statistically significant. The VARK inventory was completed by a total of 140 residency interviewees. Sixty-four percent of participants were male, and 41% were unimodal, having a preference for a single learning modality. The distribution of VARK preferences of interviewees was different than that of the general population (p = 0.02). By analysis of variance, there were no overall differences in USMLE Step 1 and Step 2 Clinical Knowledge scores by VARK preference (p = 0.06 and 0.21, respectively). However, multiple comparison analysis using Dunnett's test revealed that interviewees with R preferences had significantly higher scores than those with multimodal preferences on USMLE Step 1 (239 vs. 222, p = 0.02). Applicants who interview for general surgery residency have a different pattern of VARK preferences than that of the general population. Interviewees with preferences for read/write learning modalities have higher scores on the USMLE Step 1 than those with multimodal preferences. Learning preferences may have impact on residency applicant selection and represents a topic that warrants further investigation. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Wei; Yao, Xinfeng; Ji, Minhe
2016-01-01
Despite recent rapid advancement in remote sensing technology, accurate mapping of the urban landscape in China still faces a great challenge due to unusually high spectral complexity in many big cities. Much of this complication comes from severe spectral confusion of impervious surfaces with polluted water bodies and bright bare soils. This paper proposes a two-step land cover decomposition method, which combines optical and thermal spectra from different seasons to cope with the issue of urban spectral complexity. First, a linear spectral mixture analysis was employed to generate fraction images for three preliminary endmembers (high albedo, low albedo, and vegetation). Seasonal change analysis on land surface temperature induced from thermal infrared spectra and coarse component fractions obtained from the first step was then used to reduce the confusion between impervious surfaces and nonimpervious materials. This method was tested with two-date Landsat multispectral data in Shanghai, one of China's megacities. The results showed that the method was capable of consistently estimating impervious surfaces in highly complex urban environments with an accuracy of R2 greater than 0.70 and both root mean square error and mean average error less than 0.20 for all test sites. This strategy seemed very promising for landscape mapping of complex urban areas.
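The first step, linear spectral mixture analysis, amounts to solving a constrained least-squares problem per pixel. The sketch below enforces non-negative fractions with NNLS and an approximate sum-to-one constraint; the six-band endmember spectra are made-up stand-ins for the high-albedo, low-albedo and vegetation endmembers.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, weight=100.0):
    """Per-pixel linear spectral mixture analysis.
    endmembers: (bands, m) matrix; non-negativity is enforced exactly by NNLS,
    sum-to-one approximately via a heavily weighted extra equation."""
    bands, m = endmembers.shape
    A = np.vstack([endmembers, weight * np.ones((1, m))])
    b = np.concatenate([pixel, [weight]])
    fractions, _ = nnls(A, b)
    return fractions

# made-up six-band endmember spectra: high albedo, low albedo, vegetation
E = np.array([[0.60, 0.05, 0.08],
              [0.62, 0.06, 0.10],
              [0.65, 0.07, 0.09],
              [0.66, 0.08, 0.35],
              [0.68, 0.09, 0.40],
              [0.70, 0.10, 0.30]])
true_f = np.array([0.5, 0.2, 0.3])
pixel = E @ true_f + 0.005 * np.random.default_rng(4).standard_normal(6)
print(unmix(pixel, E))        # approximately [0.5, 0.2, 0.3]
```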
Ambiguity resolution for satellite Doppler positioning systems
NASA Technical Reports Server (NTRS)
Argentiero, P. D.; Marini, J. W.
1977-01-01
A test for ambiguity resolution was derived which was the most powerful in the sense that it maximized the probability of a correct decision. When systematic error sources were properly included in the least squares reduction process to yield an optimal solution, the test reduced to choosing the solution which provided the smaller valuation of the least squares loss function. When systematic error sources were ignored in the least squares reduction, the most powerful test was a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudo-inverse of a reduced rank square matrix. A formula is presented for computing the power of the most powerful test. A numerical example is included in which the power of the test is computed for a situation which may occur during an actual satellite aided search and rescue mission.
NASA Astrophysics Data System (ADS)
Feurer, Denis; Planchon, Olivier; Amine El Maaoui, Mohamed; Ben Slimane, Abir; Rached Boussema, Mohamed; Pierrot-Deseilligny, Marc; Raclot, Damien
2018-06-01
Monitoring agricultural areas threatened by soil erosion often requires decimetre topographic information over areas of several square kilometres. Airborne lidar and remotely piloted aircraft system (RPAS) imagery have the ability to provide repeated decimetre-resolution and -accuracy digital elevation models (DEMs) covering these extents, which is unrealistic with ground surveys. However, various factors hamper the dissemination of these technologies in a wide range of situations, including local regulations for RPAS and the cost for airborne laser systems and medium-format RPAS imagery. The goal of this study is to investigate the ability of low-tech kite aerial photography to obtain DEMs with decimetre resolution and accuracy that permit 3-D descriptions of active gullying in cultivated areas of several square kilometres. To this end, we developed and assessed a two-step workflow. First, we used both heuristic experimental approaches in field and numerical simulations to determine the conditions that make a photogrammetric flight possible and effective over several square kilometres with a kite and a consumer-grade camera. Second, we mapped and characterised the entire gully system of a test catchment in 3-D. We showed numerically and experimentally that using a thin and light line for the kite is key for a complete 3-D coverage over several square kilometres. We thus obtained a decimetre-resolution DEM covering 3.18 km2 with a mean error and standard deviation of the error of +7 and 22 cm respectively, hence achieving decimetre accuracy. With this data set, we showed that high-resolution topographic data permit both the detection and characterisation of an entire gully system with a high level of detail and an overall accuracy of 74 % compared to an independent field survey. Kite aerial photography with simple but appropriate equipment is hence an alternative tool that has been proven to be valuable for surveying gullies with sub-metric details in a square-kilometre-scale catchment. This case study suggests that access to high-resolution topographic data on these scales can be given to the community, which may help facilitate a better understanding of gullying processes within a broader spectrum of conditions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... records on the basis of linear yards or square yards as provided in § 1631.31 persons furnishing... of square yards. At least one test shall be performed upon commencement of production, importation, or other receipt of such small carpet or rug and every 25,000 units or square yards thereafter. (Sec...
Code of Federal Regulations, 2011 CFR
2011-01-01
... records on the basis of linear yards or square yards as provided in § 1631.31 persons furnishing... of square yards. At least one test shall be performed upon commencement of production, importation, or other receipt of such small carpet or rug and every 25,000 units or square yards thereafter. (Sec...
Code of Federal Regulations, 2012 CFR
2012-01-01
... records on the basis of linear yards or square yards as provided in § 1631.31 persons furnishing... of square yards. At least one test shall be performed upon commencement of production, importation, or other receipt of such small carpet or rug and every 25,000 units or square yards thereafter. (Sec...
Code of Federal Regulations, 2014 CFR
2014-01-01
... records on the basis of linear yards or square yards as provided in § 1631.31 persons furnishing... of square yards. At least one test shall be performed upon commencement of production, importation, or other receipt of such small carpet or rug and every 25,000 units or square yards thereafter. (Sec...
Code of Federal Regulations, 2010 CFR
2010-01-01
... records on the basis of linear yards or square yards as provided in § 1631.31 persons furnishing... of square yards. At least one test shall be performed upon commencement of production, importation, or other receipt of such small carpet or rug and every 25,000 units or square yards thereafter. (Sec...
An Extension of RSS-based Model Comparison Tests for Weighted Least Squares
2012-08-22
... use the model comparison test statistic to analyze the null hypothesis. Under the null hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS^H) = 10.3040×10^6. Under the alternative hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS) = 8.8394×10^6. Thus the model
F-Test Alternatives to Fisher's Exact Test and to the Chi-Square Test of Homogeneity in 2x2 Tables.
ERIC Educational Resources Information Center
Overall, John E.; Starbuck, Robert R.
1983-01-01
An alternative to Fisher's exact test and the chi-square test for homogeneity in two-by-two tables is developed. The method provides for Type I error rates which are closer to the stated alpha level than either of the alternatives. (JKS)
Impact of user influence on information multi-step communication in a micro-blog
NASA Astrophysics Data System (ADS)
Wu, Yue; Hu, Yong; He, Xiao-Hai; Deng, Ken
2014-06-01
User influence is generally considered as one of the most critical factors that affect information cascading spreading. Based on this common assumption, this paper proposes a theoretical model to examine user influence on the information multi-step communication in a micro-blog. The multi-steps of information communication are divided into first-step and non-first-step, and user influence is classified into five dimensions. Actual data from the Sina micro-blog is collected to construct the model by means of an approach based on structural equations that uses the Partial Least Squares (PLS) technique. Our experimental results indicate that the dimensions of the number of fans and their authority significantly impact the information of first-step communication. Leader rank has a positive impact on both first-step and non-first-step communication. Moreover, global centrality and weight of friends are positively related to the information non-first-step communication, but authority is found to have much less relation to it.
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed
2016-03-01
Different chemometric models were applied for the quantitative analysis of amoxicillin (AMX) and flucloxacillin (FLX) in their binary mixtures, namely partial least squares (PLS), spectral residual augmented classical least squares (SRACLS), concentration residual augmented classical least squares (CRACLS) and artificial neural networks (ANNs). All methods were applied with and without a variable selection procedure (genetic algorithm, GA). The methods were used for the quantitative analysis of the drugs in laboratory-prepared mixtures and a real market sample by processing the UV spectral data. Robust and simpler models were obtained by applying the GA. The proposed methods were found to be rapid, simple and to require no preliminary separation steps.
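A minimal PLS calibration of a two-component mixture from UV spectra looks like the sketch below. The pure-component spectra, concentration ranges and number of latent variables are assumptions used only to show the workflow (no SRACLS/CRACLS, ANN or genetic-algorithm variable selection is included).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
# assumed Gaussian-shaped pure-component UV spectra standing in for AMX and FLX
wl = np.linspace(220, 320, 101)
pure_a = np.exp(-((wl - 245) / 12.0) ** 2)
pure_b = np.exp(-((wl - 275) / 15.0) ** 2)

conc = rng.uniform(2, 20, (25, 2))                           # training concentrations
X = conc @ np.vstack([pure_a, pure_b]) + 0.002 * rng.standard_normal((25, wl.size))

pls = PLSRegression(n_components=3).fit(X, conc)

# predict a "laboratory prepared" validation mixture
c_val = np.array([[8.0, 12.0]])
x_val = c_val @ np.vstack([pure_a, pure_b])
print(pls.predict(x_val))                                    # close to [8, 12]
```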
Computer modeling of in terferograms of flowing plasma and determination of the phase shift
NASA Astrophysics Data System (ADS)
Blažek, J.; Kříž, P.; Stach, V.
2000-03-01
Interferograms of the flowing gas contain information about the phase shift between the object and the reference beams. The determination of the phase shift is the first step in getting information about the inner distribution of the density in cylindrically symmetric discharges. Slightly modified Takeda method based on the Fourier transformation is applied to determine the phase information from the interferogram. The least squares spline approximation is used for approximation and smoothing intensity profiles. At the same time, cubic splines with their end-knots conditions naturally realize “hanning windows” eliminating unwanted edge effects. For the purpose of numerical testing of the method, we developed a code that for a density given in advance reconstructs the corresponding interferogram.
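A bare-bones version of the Takeda fringe-analysis step is shown below: isolate the positive carrier lobe in the Fourier domain, inverse-transform, and take the argument to get the wrapped phase. A simple rectangular band-pass replaces the spline smoothing and Hanning windowing described above, and the carrier frequency and object phase are synthetic.

```python
import numpy as np

def takeda_phase(intensity, carrier_freq):
    """1-D Takeda fringe analysis: keep the +carrier lobe in the Fourier domain,
    inverse-transform, take the argument, and remove the carrier ramp.
    A rectangular band-pass replaces the spline/Hanning-window smoothing."""
    n = intensity.size
    spec = np.fft.fft(intensity - intensity.mean())
    freqs = np.fft.fftfreq(n)
    band = (freqs > carrier_freq / 2) & (freqs < 3 * carrier_freq / 2)
    analytic = np.fft.ifft(np.where(band, spec, 0.0))
    wrapped = np.angle(analytic) - 2 * np.pi * carrier_freq * np.arange(n)
    return np.angle(np.exp(1j * wrapped))            # re-wrap after carrier removal

# synthetic interferogram: carrier fringes plus a smooth object phase
n = 512
x = np.arange(n)
f0 = 0.08                                            # carrier frequency, cycles/pixel
phi_obj = 1.5 * np.exp(-((x - 256) / 80.0) ** 2)
I = 1.0 + 0.7 * np.cos(2 * np.pi * f0 * x + phi_obj)
phi_rec = takeda_phase(I, f0)
print(np.max(np.abs(np.angle(np.exp(1j * (phi_rec - phi_obj))))))   # small, largest at the edges
```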
Kalron, Alon; Rosenblum, Uri; Frid, Lior; Achiron, Anat
2017-03-01
Evaluate the effects of a Pilates exercise programme on walking and balance in people with multiple sclerosis and compare this exercise approach to conventional physical therapy sessions. Randomized controlled trial. Multiple Sclerosis Center, Sheba Medical Center, Tel-Hashomer, Israel. Forty-five people with multiple sclerosis, 29 female; mean age (SD) was 43.2 (11.6) years; mean Expanded Disability Status Scale (SD) was 4.3 (1.3). Participants received 12 weekly training sessions of either Pilates (n=22) or standardized physical therapy (n=23) on an outpatient basis. Spatio-temporal parameters of walking and posturography parameters during static stance were measured. Functional tests included the Timed Up and Go Test, the 2- and 6-minute walk tests, the Functional Reach Test, the Berg Balance Scale and the Four Square Step Test. In addition, the self-report forms included the Multiple Sclerosis Walking Scale and the Modified Fatigue Impact Scale. At termination, both groups had significantly increased their walking speed (P=0.021) and mean step length (P=0.023). According to the 2-minute and 6-minute walking tests, both groups had increased their walking speed by the end of the intervention programme. Mean (SD) increases in the Pilates and physical therapy groups were 39.1 (78.3) and 25.3 (67.2) meters, respectively. There was no group × time effect in any of the instrumented and clinical balance and gait measures. Pilates is a possible treatment option for people with multiple sclerosis in order to improve their walking and balance capabilities. However, this approach does not have any significant advantage over standardized physical therapy.
Living environment and mobility of older adults.
Cress, M Elaine; Orini, Stefania; Kinsler, Laura
2011-01-01
Older adults often elect to move into smaller living environments. Smaller living space and the addition of services provided by a retirement community (RC) may make living easier for the individual, but it may also reduce the amount of daily physical activity and ultimately reduce functional ability. With home size as an independent variable, the primary purpose of this study was to evaluate daily physical activity and physical function of community dwellers (CD; n = 31) as compared to residents of an RC (n = 30). In this cross-sectional study design, assessments included: the Continuous Scale Physical Functional Performance - 10 test, with a possible range of 0-100, higher scores reflecting better function; Step Activity Monitor (StepWatch 3.1); a physical activity questionnaire, the area of the home (in square meters). Groups were compared by one-way ANOVA. A general linear regression model was used to predict the number of steps per day at home. The level of significance was p < 0.05. Of the 61 volunteers (mean age: 79 ± 6.3 years; range: 65-94 years), the RC living space (68 ± 37.7 m(2)) was 62% smaller than the CD living space (182.8 ± 77.9 m(2); p = 0.001). After correcting for age, the RC took fewer total steps per day excluding exercise (p = 0.03) and had lower function (p = 0.005) than the CD. On average, RC residents take 3,000 steps less per day and have approximately 60% of the living space of a CD. Home size and physical function were primary predictors of the number of steps taken at home, as found using a general linear regression analysis. Copyright © 2010 S. Karger AG, Basel.
A least-squares finite element method for incompressible Navier-Stokes problems
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan
1992-01-01
A least-squares finite element method, based on the velocity-pressure-vorticity formulation, is developed for solving steady incompressible Navier-Stokes problems. This method leads to a minimization problem rather than to the saddle-point problem of the classic mixed method and can thus accommodate equal-order interpolations. This method has no parameter to tune. The associated algebraic system is symmetric and positive definite. Numerical results for the cavity flow at Reynolds number up to 10,000 and the backward-facing step flow at Reynolds number up to 900 are presented.
Samsudin, Hayati; Auras, Rafael; Burgess, Gary; Dolan, Kirk; Soto-Valdez, Herlinda
2018-03-01
A two-step solution based on the boundary conditions of Crank's equations for mass transfer in a film was developed. Three driving factors, the diffusion (D), partition (K_p,f) and convective mass transfer coefficients (h), govern the sorption and/or desorption kinetics of migrants from polymer films. These three parameters were simultaneously estimated. They provide in-depth insight into the physics of a migration process. The first step was used to find the combination of D, K_p,f and h that minimized the sum of squared errors (SSE) between the predicted and actual results. In step 2, an ordinary least squares (OLS) estimation was performed by using the proposed analytical solution containing D, K_p,f and h. Three selected migration studies of PLA/antioxidant-based films were used to demonstrate the use of this two-step solution. Additional parameter estimation approaches such as sequential and bootstrap were also performed to acquire better knowledge of the kinetics of migration. The proposed model successfully provided the initial guesses for D, K_p,f and h. The h value was determined without performing a specific experiment for it. By determining h together with D, under- or overestimation issues pertaining to a migration process can be avoided since these two parameters are correlated. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jiang, Junjun; Hu, Ruimin; Han, Zhen; Wang, Zhongyuan; Chen, Jun
2013-10-01
Face superresolution (SR), or face hallucination, refers to the technique of generating a high-resolution (HR) face image from a low-resolution (LR) one with the help of a set of training examples. It aims at transcending the limitations of electronic imaging systems. Applications of face SR include video surveillance, in which the individual of interest is often far from cameras. A two-step method is proposed to infer a high-quality and HR face image from a low-quality and LR observation. First, we establish the nonlinear relationship between LR face images and HR ones, according to radial basis function and partial least squares (RBF-PLS) regression, to transform the LR face into the global face space. Then, a locality-induced sparse representation (LiSR) approach is presented to enhance the local facial details once all the global faces for each LR training face are constructed. A comparison of some state-of-the-art SR methods shows the superiority of the proposed two-step approach, RBF-PLS global face regression followed by LiSR-based local patch reconstruction. Experiments also demonstrate the effectiveness under both simulation conditions and some real conditions.
Pant, Jeevan K; Krishnan, Sridhar
2014-04-01
A new algorithm for the reconstruction of electrocardiogram (ECG) signals and a dictionary learning algorithm for the enhancement of its reconstruction performance for a class of signals are proposed. The signal reconstruction algorithm is based on minimizing the lp pseudo-norm of the second-order difference, called the lp(2d) pseudo-norm, of the signal. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative procedure wherein signal reconstruction and dictionary update steps are repeated until a convergence criterion is satisfied. The signal reconstruction step is implemented by using the proposed signal reconstruction algorithm and the dictionary update step is implemented by using the linear least-squares method. Extensive simulation results demonstrate that the proposed algorithm yields improved reconstruction performance for temporally correlated ECG signals relative to the state-of-the-art lp(1d)-regularized least-squares and Bayesian learning based algorithms. Also, for a known class of signals, the reconstruction performance of the proposed algorithm can be improved by applying it in conjunction with a dictionary obtained using the proposed dictionary learning algorithm.
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data was obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence which is a first step in applying array processing methods to the magnitude squared coherence data. The procedure also provides an estimate of the cross-spectrum phase-offset.
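The basic relationship exploited here, that a pure time delay appears as a linear trend in the cross-power spectrum phase, can be demonstrated in a few lines. The sketch below does a one-shot fit of the unwrapped cross-spectrum phase rather than the adaptive gradient search described in the report; the sampling rate, delay and coherence threshold are assumed values.

```python
import numpy as np
from scipy.signal import csd, coherence

def delay_from_cross_phase(x, y, fs, nperseg=256, coh_min=0.5):
    """Time delay between two coherent signals from the slope of the unwrapped
    cross-power spectrum phase (one-shot fit, not an adaptive gradient search)."""
    f, Pxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, Cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    keep = Cxy > coh_min                               # use only well-correlated bins
    phase = np.unwrap(np.angle(Pxy[keep]))
    slope = np.polyfit(f[keep], phase, 1)[0]           # phase ~ -2*pi*f*delay + const
    return -slope / (2 * np.pi)

fs = 1000.0
rng = np.random.default_rng(6)
s = rng.standard_normal(10000)
x = s + 0.1 * rng.standard_normal(s.size)
y = np.roll(s, 23) + 0.1 * rng.standard_normal(s.size)     # 23-sample (23 ms) delay
print(f"estimated delay: {delay_from_cross_phase(x, y, fs) * 1000:.1f} ms (true: 23 ms)")
```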
Difference magnitude is not measured by discrimination steps for order of point patterns.
Protonotarios, Emmanouil D; Johnston, Alan; Griffin, Lewis D
2016-07-01
We have shown in previous work that the perception of order in point patterns is consistent with an interval scale structure (Protonotarios, Baum, Johnston, Hunter, & Griffin, 2014). The psychophysical scaling method used relies on the confusion between stimuli with similar levels of order, and the resulting discrimination scale is expressed in just-noticeable differences (jnds). As with other perceptual dimensions, an interesting question is whether suprathreshold (perceptual) differences are consistent with distances between stimuli on the discrimination scale. To test that, we collected discrimination data, and data based on comparison of perceptual differences. The stimuli were jittered square lattices of dots, covering the range from total disorder (Poisson) to perfect order (square lattice), roughly equally spaced on the discrimination scale. Observers picked the most ordered pattern from a pair, and the pair of patterns with the greatest difference in order from two pairs. Although the judgments of perceptual difference were found to be consistent with an interval scale, like the discrimination judgments, no common interval scale that could predict both sets of data was possible. In particular, the midpattern of the perceptual scale is 11 jnds away from the ordered end, and 5 jnds from the disordered end of the discrimination scale.
NASA Astrophysics Data System (ADS)
Saad, Ahmed S.; Hamdy, Abdallah M.; Salama, Fathy M.; Abdelkawy, Mohamed
2016-10-01
The effect of data manipulation in the preprocessing step preceding construction of chemometric models was assessed. The same set of UV spectral data was used to construct PLS and PCR models directly and after mathematical manipulation according to the well-known first and second derivatives of the absorption spectra, ratio spectra, and first and second derivatives of the ratio spectra spectrophotometric methods; the optimal working wavelength ranges were carefully selected for each model before the models were constructed. Unexpectedly, the number of latent variables used for model construction varied among the different methods. The prediction power of the different models was compared using a validation set of 8 mixtures prepared according to a multilevel multifactor design, and results were statistically compared using a two-way ANOVA test. Root mean square error of prediction (RMSEP) was used for further comparison of predictability among the different constructed models. Although no significant difference was found between results obtained using Partial Least Squares (PLS) and Principal Component Regression (PCR) models, discrepancies among results were attributed to variation in the discrimination power of the adopted spectrophotometric methods on the spectral data.
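For readers unfamiliar with the two regression families being compared, the sketch below contrasts PLS and PCR on synthetic two-component "spectra" and reports RMSEP on an 8-mixture validation set; it mirrors the study design in spirit only, and the spectra, concentration ranges, and number of latent variables are made-up values.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)

# Synthetic "UV spectra": two overlapping Gaussian bands, concentrations 0-10 (arbitrary units).
wl = np.linspace(200, 400, 201)
def spectrum(c1, c2):
    return (c1 * np.exp(-((wl - 260) / 15) ** 2)
            + c2 * np.exp(-((wl - 280) / 20) ** 2))

def make_set(n):
    C = rng.uniform(0, 10, size=(n, 2))
    X = np.array([spectrum(*c) for c in C]) + 0.01 * rng.standard_normal((n, len(wl)))
    return X, C

X_cal, C_cal = make_set(25)      # calibration set
X_val, C_val = make_set(8)       # validation set of 8 mixtures, as in the study design

n_lv = 4                          # assumed number of latent variables / components
pls = PLSRegression(n_components=n_lv).fit(X_cal, C_cal)
pcr = make_pipeline(PCA(n_components=n_lv), LinearRegression()).fit(X_cal, C_cal)

for name, model in [("PLS", pls), ("PCR", pcr)]:
    rmsep = np.sqrt(mean_squared_error(C_val, model.predict(X_val)))
    print(f"{name} RMSEP: {rmsep:.4f}")
```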
Filik, Hayati; Çetintaş, Gamze; Avan, Asiye Aslıhan; Aydar, Sevda; Koç, Serkan Naci; Boz, İsmail
2013-11-15
An electrochemical sensor composed of a Nafion-graphene nanocomposite film for the voltammetric determination of caffeic acid (CA) was studied. A Nafion graphene oxide-modified glassy carbon electrode was fabricated by a simple drop-casting method and the graphene oxide was then electrochemically reduced over the glassy carbon electrode. The electrochemical analysis method was based on the adsorption of caffeic acid on Nafion/ER-GO/GCE followed by the oxidation of CA during the stripping step. The resulting electrode showed an excellent electrocatalytic response to the oxidation of caffeic acid (CA). The electrochemistry of caffeic acid on Nafion/ER-GO modified glassy carbon electrodes (GCEs) was studied by cyclic voltammetry and square-wave adsorptive stripping voltammetry (SW-AdSV). Under optimized test conditions, the calibration curve for CA showed two linear segments: the first extended from 0.1 to 1.5 µM and the second extended up to 10 µM. The detection limit was determined as 9.1×10⁻⁸ mol L⁻¹ using SW-AdSV. Finally, the proposed method was successfully used to determine CA in white wine samples. Copyright © 2013 Elsevier B.V. All rights reserved.
Analysis of Nonlinear Dynamics by Square Matrix Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Li Hua
The nonlinear dynamics of a system with periodic structure can be analyzed using a square matrix. In this paper, we show that, because of the special property of the square matrix constructed for nonlinear dynamics, we can reduce the dimension of the matrix from the original large number required for high-order calculation to a low dimension in the first step of the analysis. Then a stable Jordan decomposition is obtained with much lower dimension. The transformation to Jordan form provides an excellent action-angle approximation to the solution of the nonlinear dynamics, in good agreement with trajectories and tunes obtained from tracking. More importantly, the deviation from constancy of the new action-angle variable provides a measure of the stability of the phase space trajectories and their tunes. Thus the square matrix provides a novel method to optimize the nonlinear dynamic system. The method is illustrated by many examples of comparison between theory and numerical simulation. Finally, we show in particular that the square matrix method can be used for optimization to reduce the nonlinearity of a system.
Harmonic source wavefront aberration correction for ultrasound imaging
Dianis, Scott W.; von Ramm, Olaf T.
2011-01-01
A method is proposed which uses a lower-frequency transmit to create a known harmonic acoustical source in tissue suitable for wavefront correction without a priori assumptions of the target or requiring a transponder. The measurement and imaging steps of this method were implemented on the Duke phased array system with a two-dimensional (2-D) array. The method was tested with multiple electronic aberrators [0.39π to 1.16π radians root-mean-square (rms) at 4.17 MHz] and with a physical aberrator [0.17π radians rms at 4.17 MHz] in a variety of imaging situations. Corrections were quantified in terms of peak beam amplitude compared to the unaberrated case, with restoration between 0.6 and 36.6 dB of peak amplitude with a single correction. Standard phantom images before and after correction were obtained and showed both visible improvement and 14 dB contrast improvement after correction. This method, when combined with previous phase correction methods, may be an important step that leads to improved clinical images. PMID:21303031
Response Surface Analysis of Experiments with Random Blocks
1988-09-01
... partitioned into a lack-of-fit sum of squares, SSLOF, and a pure error sum of squares, SSPE. The latter is obtained by pooling the pure error sums of squares ... from the blocks. Tests concerning the polynomial effects can then proceed using SSPE as the error term in the denominators of the F test statistics. ... the pure error sum of squares from the center point in each of the three blocks is equal to SSPE = 2.0127 with 5 degrees of freedom. Hence, the lack-of-fit sum of squares is SSLOF ...
Effect of pH Test-Strip Characteristics on Accuracy of Readings.
Metheny, Norma A; Gunn, Emily M; Rubbelke, Cynthia S; Quillen, Terrilynn Fox; Ezekiel, Uthayashanker R; Meert, Kathleen L
2017-06-01
Little is known about characteristics of colorimetric pH test strips that are most likely to be associated with accurate interpretations in clinical situations. To compare the accuracy of 4 pH test strips with varying characteristics (ie, multiple vs single colorimetric squares per calibration, and differing calibration units [1.0 vs 0.5]). A convenience sample of 100 upper-level nursing students with normal color vision was recruited to evaluate the accuracy of the test strips. Six buffer solutions (pH range, 3.0 to 6.0) were used during the testing procedure. Each of the 100 participants performed 20 pH tests in random order, providing a total of 2000 readings. The sensitivity and specificity of each test strip was computed. In addition, the degree to which the test strips under- or overestimated the pH values was analyzed using descriptive statistics. Our criterion for correct readings was an exact match with the pH buffer solution being evaluated. Although none of the test strips evaluated in our study was 100% accurate at all of the measured pH values, those with multiple squares per pH calibration were clearly superior overall to those with a single test square. Test strips with multiple squares per calibration were associated with greater overall accuracy than test strips with a single square per calibration. However, because variable degrees of error were observed in all of the test strips, use of a pH meter is recommended when precise readings are crucial. ©2017 American Association of Critical-Care Nurses.
Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.
Yin, Guosheng; Ma, Yanyuan
2013-01-01
The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adjusts exactly the right amount of randomness to the test statistic, and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE and then constructing the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
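A minimal sketch of the procedure as read from the abstract is given below for a normal model: fit the MLE to the original data, draw a bootstrap sample and refit the MLE, form observed counts from the original data and expected counts from the bootstrap-sample MLE, and refer the Pearson statistic to a chi-squared distribution. The parametric bootstrap scheme, the quantile-based bins, and the degrees of freedom used here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Original data: test whether a normal model fits (illustrative example).
data = rng.normal(loc=2.0, scale=1.5, size=300)
n = data.size

# Step 1: MLE of the model on the original data.
mu_hat, sigma_hat = data.mean(), data.std()

# Step 2: a bootstrap sample and its MLE (parametric bootstrap assumed here).
boot = rng.normal(mu_hat, sigma_hat, size=n)
mu_b, sigma_b = boot.mean(), boot.std()

# Step 3: partition into bins; observed counts from the ORIGINAL data,
# expected counts from the bootstrap-sample MLE.
k = 10
edges = np.quantile(data, np.linspace(0, 1, k + 1))
edges[0], edges[-1] = -np.inf, np.inf
observed, _ = np.histogram(data, bins=edges)
expected = n * np.diff(stats.norm.cdf(edges, loc=mu_b, scale=sigma_b))

# Step 4: Pearson statistic referred to a chi-squared distribution.
chi2_stat = np.sum((observed - expected) ** 2 / expected)
df = k - 1                      # degrees of freedom assumed for this sketch
p_value = stats.chi2.sf(chi2_stat, df)
print(f"Pearson statistic = {chi2_stat:.2f}, p = {p_value:.3f}")
```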
Red square test for visual field screening. A sensitive and simple bedside test.
Mandahl, A
1994-12-01
A reliable bedside test for screening of visual field defects is a valuable tool in the examination of patients with a putative disease affecting the sensory visual pathways. Conventional methods such as Donders' confrontation method, counting fingers in the visual field periphery, or two-hand confrontation are not sufficiently sensitive to detect minor but nevertheless serious visual field defects. More sensitive methods requiring only simple tools are also described. In this study, a test card with four red squares surrounding a fixation target, a black dot, with a total test area of about 11 x 12.5 degrees at a distance of 30 cm, was designed for testing the experience of red colour saturation in four quadrants (the red square test). The Goldmann visual field was used as reference. 125 consecutive patients with pituitary adenoma (159 eyes), craniopharyngeoma (9 eyes), meningeoma (21 eyes), vascular hemisphere lesion (40 eyes), hemisphere tumour (10 eyes) and hemisphere abscess (2 eyes) were examined. The Goldmann visual field and red square test were pathological in pituitary adenomas in 35%, in craniopharyngeomas in 44%, in meningeomas in 52% and in hemisphere tumours or abscess in 100% of the eyes. Among these, no false-normal or false-pathological tests were found. However, in vascular hemisphere disease the corresponding figures were Goldmann visual field 90% and red square test 85%. The 5% difference (4 eyes) was due to Goldmann visual field defects strictly peripheral to the central 15 degrees. These defects were easily diagnosed with two-hand confrontation and
Verification of forecast ensembles in complex terrain including observation uncertainty
NASA Astrophysics Data System (ADS)
Dorninger, Manfred; Kloiber, Simon
2017-04-01
Traditionally, verification means to verify a forecast (ensemble) against the truth represented by observations. The observation errors are quite often neglected with the argument that they are small compared to the forecast error. In this study, carried out as part of the MesoVICT (Mesoscale Verification Inter-comparison over Complex Terrain) project, it will be shown that observation errors have to be taken into account for verification purposes. The observation uncertainty is estimated from the VERA (Vienna Enhanced Resolution Analysis) and represented via two analysis ensembles which are compared to the forecast ensemble. Throughout the study, results from COSMO-LEPS provided by Arpae-SIMC Emilia-Romagna are used as the forecast ensemble. The time period covers the MesoVICT core case from 20-22 June 2007. In a first step, all ensembles are investigated concerning their distribution. Several tests have been executed (Kolmogorov-Smirnov test, Finkelstein-Schafer test, Chi-Square test, etc.), none of which identified an exact mathematical distribution. So the main focus is on non-parametric statistics (e.g. kernel density estimation, boxplots, etc.) and also the deviation between "forced" normally distributed data and the kernel density estimates. In a next step the observational deviations due to the analysis ensembles are analysed. In a first approach, scores are calculated multiple times, with each member of the analysis ensemble in turn regarded as the "true" observation. The results are presented as boxplots for the different scores and parameters. Additionally, the bootstrapping method is also applied to the ensembles. These possible approaches to incorporating observational uncertainty into the computation of statistics will be discussed in the talk.
Blocky inversion of multichannel elastic impedance for elastic parameters
NASA Astrophysics Data System (ADS)
Mozayan, Davoud Karami; Gholami, Ali; Siahkoohi, Hamid Reza
2018-04-01
Petrophysical description of reservoirs requires proper knowledge of elastic parameters like P- and S-wave velocities (Vp and Vs) and density (ρ), which can be retrieved from pre-stack seismic data using the concept of elastic impedance (EI). We propose an inversion algorithm which recovers elastic parameters from pre-stack seismic data in two sequential steps. In the first step, using the multichannel blind seismic inversion method (exploited recently for recovering acoustic impedance from post-stack seismic data), high-resolution blocky EI models are obtained directly from partial angle-stacks. Using an efficient total-variation (TV) regularization, each angle-stack is inverted independently in a multichannel form without prior knowledge of the corresponding wavelet. The second step involves inversion of the resulting EI models for elastic parameters. Mathematically, under some assumptions, the EI's are linearly described by the elastic parameters in the logarithm domain. Thus a linear weighted least squares inversion is employed to perform this step. Accuracy of the concept of elastic impedance in predicting reflection coefficients at low and high angles of incidence is compared with that of exact Zoeppritz elastic impedance and the role of low frequency content in the problem is discussed. The performance of the proposed inversion method is tested using synthetic 2D data sets obtained from the Marmousi model and also 2D field data sets. The results confirm the efficiency and accuracy of the proposed method for inversion of pre-stack seismic data.
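The second, linear step can be illustrated with a small weighted least-squares solve in the logarithm domain, as sketched below. The angle-dependent coefficients relating log EI to log Vp, log Vs, and log ρ are placeholder values of a Connolly-type form, not the paper's exact linearization, and the weights and model values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder linearization: log EI(theta) ≈ a(theta) log Vp + b(theta) log Vs + c(theta) log rho.
# The coefficients below are illustrative, not the paper's exact expressions; K = (Vs/Vp)^2 = 0.25 assumed.
thetas = np.deg2rad([10.0, 25.0, 40.0])
K = 0.25
A = np.array([[1 + np.sin(t) ** 2, -8 * K * np.sin(t) ** 2, 1 - 4 * K * np.sin(t) ** 2]
              for t in thetas])                       # one row per angle-stack

m_true = np.log([3000.0, 1500.0, 2300.0])             # log Vp, log Vs, log rho
log_ei = A @ m_true + 0.01 * rng.standard_normal(3)   # "inverted" EI logs with noise

# Weighted least squares; weights could reflect confidence in each angle-stack.
W = np.diag([1.0, 1.0, 0.5])
m_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ log_ei)

vp, vs, rho = np.exp(m_hat)
print(f"Vp ≈ {vp:.0f} m/s, Vs ≈ {vs:.0f} m/s, rho ≈ {rho:.0f} kg/m^3")
```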
Variety Wins: Soccer-Playing Robots and Infant Walking.
Ossmy, Ori; Hoch, Justine E; MacAlpine, Patrick; Hasan, Shohan; Stone, Peter; Adolph, Karen E
2018-01-01
Although both infancy and artificial intelligence (AI) researchers are interested in developing systems that produce adaptive, functional behavior, the two disciplines rarely capitalize on their complementary expertise. Here, we used soccer-playing robots to test a central question about the development of infant walking. During natural activity, infants' locomotor paths are immensely varied. They walk along curved, multi-directional paths with frequent starts and stops. Is the variability observed in spontaneous infant walking a "feature" or a "bug?" In other words, is variability beneficial for functional walking performance? To address this question, we trained soccer-playing robots on walking paths generated by infants during free play and tested them in simulated games of "RoboCup." In Tournament 1, we compared the functional performance of a simulated robot soccer team trained on infants' natural paths with teams trained on less varied, geometric paths-straight lines, circles, and squares. Across 1,000 head-to-head simulated soccer matches, the infant-trained team consistently beat all teams trained with less varied walking paths. In Tournament 2, we compared teams trained on different clusters of infant walking paths. The team trained with the most varied combination of path shape, step direction, number of steps, and number of starts and stops outperformed teams trained with less varied paths. This evidence indicates that variety is a crucial feature supporting functional walking performance. More generally, we propose that robotics provides a fruitful avenue for testing hypotheses about infant development; reciprocally, observations of infant behavior may inform research on artificial intelligence.
NASA Astrophysics Data System (ADS)
Yehia, Ali M.; Mohamed, Heba M.
2016-01-01
Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly without any preliminary separation step and were successfully applied for pharmaceutical formulation analysis, showing no excipients' interference.
Ambiguity resolution for satellite Doppler positioning systems
NASA Technical Reports Server (NTRS)
Argentiero, P.; Marini, J.
1979-01-01
The implementation of satellite-based Doppler positioning systems frequently requires the recovery of transmitter position from a single pass of Doppler data. The least-squares approach to the problem yields conjugate solutions on either side of the satellite subtrack. It is important to develop a procedure for choosing the proper solution which is correct in a high percentage of cases. A test for ambiguity resolution which is the most powerful in the sense that it maximizes the probability of a correct decision is derived. When systematic error sources are properly included in the least-squares reduction process to yield an optimal solution the test reduces to choosing the solution which provides the smaller valuation of the least-squares loss function. When systematic error sources are ignored in the least-squares reduction, the most powerful test is a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudoinverse of a reduced-rank square matrix. A formula for computing the power of the most powerful test is provided. Numerical examples are included in which the power of the test is computed for situations that are relevant to the design of a satellite-aided search and rescue system.
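The decision rule for the optimal case, choosing the conjugate solution that gives the smaller least-squares loss, can be sketched with a toy geometry as below; the satellite track, observable, and noise model are contrived stand-ins rather than a realistic orbit model, and in practice the full least-squares reduction would be carried out for each candidate solution.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy single-pass geometry (km, km/s): satellite on a nearly straight track at 800 km altitude.
t = np.linspace(-120.0, 120.0, 25)                    # seconds
v_sat = np.array([7.0, 0.5, 0.0])                     # small cross-track component breaks the symmetry
sat = np.stack([v_sat[0] * t, v_sat[1] * t, np.full_like(t, 800.0)], axis=1)

def range_rate(tx):
    """Doppler-proportional observable: range rate from a ground transmitter to the satellite."""
    d = sat - tx
    return (d @ v_sat) / np.linalg.norm(d, axis=1)

tx_true = np.array([30.0, 250.0, 0.0])                # true transmitter position
tx_mirror = tx_true * np.array([1.0, -1.0, 1.0])      # conjugate solution on the other side of the track

sigma = 0.002
obs = range_rate(tx_true) + sigma * rng.standard_normal(t.size)

def loss(tx):
    """Least-squares loss; equal weights assumed for this sketch."""
    r = obs - range_rate(tx)
    return np.sum((r / sigma) ** 2)

losses = {"true side": loss(tx_true), "mirror side": loss(tx_mirror)}
print(losses, "-> choose", min(losses, key=losses.get))
```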
Wind-tunnel test of an articulated helicopter rotor model with several tip shapes
NASA Technical Reports Server (NTRS)
Berry, J. D.; Mineck, R. E.
1980-01-01
Six interchangeable tip shapes were tested: a square (baseline) tip, an ogee tip, a subwing tip, a swept tip, a winglet tip, and a short ogee tip. In hover at the lower rotational speeds the swept, ogee, and short ogee tips had about the same torque coefficient, and the subwing and winglet tips had a larger torque coefficient than the baseline square tip blades. The ogee and swept tip blades required less torque coefficient at lower rotational speeds and roughly equivalent torque coefficient at higher rotational speeds compared with the baseline square tip blades in forward flight. The short ogee tip required higher torque coefficient at higher lift coefficients than the baseline square tip blade in the forward flight test condition.
NASA Astrophysics Data System (ADS)
Wiley, E. O.
2010-07-01
Relative motion studies of visual double stars can be investigated using least squares regression techniques and readily accessible programs such as Microsoft Excel and a calculator. Optical pairs differ from physical pairs under most geometries in both their simple scatter plots and their regression models. A step-by-step protocol for estimating the rectilinear elements of an optical pair is presented. The characteristics of physical pairs using these techniques are discussed.
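A generic version of the least-squares step can be written in a few lines: fit the relative position components of the pair as linear functions of time and derive quantities such as the epoch and separation of closest approach. The sketch below uses synthetic measurements and is not the article's exact step-by-step protocol.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic relative positions of the secondary (arcsec), roughly linear in time,
# as expected for an optical (non-physical) pair.
t = np.array([1990.0, 1995.0, 2000.0, 2005.0, 2010.0, 2015.0, 2020.0])
x = 1.20 + 0.045 * (t - 2000) + 0.02 * rng.standard_normal(t.size)
y = -0.80 + 0.030 * (t - 2000) + 0.02 * rng.standard_normal(t.size)

# Least-squares linear fits: x(t) = x0 + vx*(t - 2000), y(t) = y0 + vy*(t - 2000).
vx, x0 = np.polyfit(t - 2000, x, 1)
vy, y0 = np.polyfit(t - 2000, y, 1)

# Epoch and separation at closest approach of the fitted rectilinear path to the primary.
tc = 2000 - (x0 * vx + y0 * vy) / (vx**2 + vy**2)
rmin = np.hypot(x0 + vx * (tc - 2000), y0 + vy * (tc - 2000))
print(f"vx={vx:.3f}, vy={vy:.3f} arcsec/yr, closest approach {rmin:.2f} arcsec at t={tc:.1f}")
```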
On higher order discrete phase-locked loops.
NASA Technical Reports Server (NTRS)
Gill, G. S.; Gupta, S. C.
1972-01-01
An exact mathematical model is developed for a discrete loop of a general order particularly suitable for digital computation. The deterministic response of the loop to the phase step and the frequency step is investigated. The design of the digital filter for the second-order loop is considered. Use is made of the incremental phase plane to study the phase error behavior of the loop. The model of the noisy loop is derived and the optimization of the loop filter for minimum mean-square error is considered.
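As a rough illustration of the kind of loop being modeled, the sketch below simulates a second-order discrete phase-locked loop (phase detector, proportional-integral loop filter, NCO) responding to a phase step; the gains and the step location are arbitrary choices, not values from the paper.

```python
import numpy as np

# Second-order discrete PLL: phase detector -> PI loop filter -> NCO phase update.
n_steps = 200
kp, ki = 0.15, 0.01        # illustrative proportional and integral gains

phase_in = np.zeros(n_steps)
phase_in[20:] = 1.0        # phase step of 1 rad at sample 20

phase_hat = 0.0            # NCO (estimated) phase
integ = 0.0                # integrator state of the loop filter
err = np.zeros(n_steps)

def wrap(p):
    """Wrap a phase difference into [-pi, pi)."""
    return (p + np.pi) % (2 * np.pi) - np.pi

for k in range(n_steps):
    e = wrap(phase_in[k] - phase_hat)   # phase detector output
    err[k] = e
    integ += ki * e                     # integral branch
    phase_hat += kp * e + integ         # NCO update (proportional + integral)

print("peak phase error:", err.max(), "  residual error after settling:", abs(err[-1]))
```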
Computer Processing Of Tunable-Diode-Laser Spectra
NASA Technical Reports Server (NTRS)
May, Randy D.
1991-01-01
Tunable-diode-laser spectrometer measuring transmission spectrum of gas operates under control of computer, which also processes measurement data. Measurements in three channels processed into spectra. Computer controls current supplied to tunable diode laser, stepping it through small increments of wavelength while processing spectral measurements at each step. Program includes library of routines for general manipulation and plotting of spectra, least-squares fitting of direct-transmission and harmonic-absorption spectra, and deconvolution for determination of laser linewidth and for removal of instrumental broadening of spectral lines.
What Determines Alumni Generosity?
ERIC Educational Resources Information Center
Baade, Robert A.; Sundberg, Jeffrey O.
1996-01-01
College alumni giving is correlated with institutional characteristics (quality and development efforts) and student characteristics (quality and wealth). This paper uses a two-step least-squares approach with data and quality/wealth variables to explore the "rich-student, quality-school" alumni generosity phenomenon. Alumni giving is…
The Chi-Square Test: Often Used and More Often Misinterpreted
ERIC Educational Resources Information Center
Franke, Todd Michael; Ho, Timothy; Christie, Christina A.
2012-01-01
The examination of cross-classified category data is common in evaluation and research, with Karl Pearson's family of chi-square tests representing one of the most utilized statistical analyses for answering questions about the association or difference between categorical variables. Unfortunately, these tests are also among the more commonly…
Principles and Practice of Scaled Difference Chi-Square Testing
ERIC Educational Resources Information Center
Bryant, Fred B.; Satorra, Albert
2012-01-01
We highlight critical conceptual and statistical issues and how to resolve them in conducting Satorra-Bentler (SB) scaled difference chi-square tests. Concerning the original (Satorra & Bentler, 2001) and new (Satorra & Bentler, 2010) scaled difference tests, a fundamental difference exists in how to compute properly a model's scaling correction…
NASA Astrophysics Data System (ADS)
Paradis, Pierre-Luc
Global energy consumption is still increasing year after year even though various initiatives have been set up to decrease fossil fuel dependency. In Canada, 80% of the energy used in the residential sector goes to space heating and domestic hot water heating. This heat could be provided by solar thermal technologies despite some difficulties originating from the cold climate. The aim of this project is to design a solar evacuated-tube thermal collector using air as the working fluid. First, the needs and specifications of the product are clearly established. Then, three collector concepts are presented. The first relies on the standard evacuated tube. The second uses a new tube technology in which both ends are open. The third uses heat pipes to extract the heat from the tubes. Using the needs and specifications as criteria, the concept based on tubes open at both ends was selected as the best option. In order to simulate the performance of the collector, a model of the heat exchanges in an evacuated tube was developed in four steps. The first step is a steady-state model intended to calculate the stagnation temperature of the tube for a fixed solar radiation, outside temperature and wind speed. In the second step, the model is generalised to transient conditions in order to validate it against an experimental setup; a root mean square error of 2% is obtained. The two remaining steps calculate the temperature of the airflow leaving the tube. In the same way, a steady-state model is first developed and then generalised to the transient mode; validation against an experimental setup gave a root mean square error of 0.2%. Finally, a preindustrial prototype intended to work in open loop for preheating fresh air is presented. During the project, the explosion of an open-ended evacuated tube under overheating conditions prevented the construction of a full prototype for testing. Several paths for further work are also identified. One concerns CFD simulation of the uniformity of the airflow inside the collector. Another is the analysis of the design using a design-of-experiments plan.
Cao, Jiguo; Huang, Jianhua Z.; Wu, Hulin
2012-01-01
Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of the nonlinear least squares where penalized splines are used to model the functional parameters and the ODE solutions are approximated also using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function that is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate a HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online. PMID:23155351
NASA Astrophysics Data System (ADS)
Shi, Jingzhi; Meng, Xiangying; Hao, Mengjian; Cao, Zhenzhu; He, Weiyan; Gao, Yanfang; Liu, Jinrong
2018-02-01
In this study, BiPO4/highly (001) facet exposed square BiOBr flake heterojunction photocatalysts with different molar ratios were fabricated via a two-step method. The synergetic effect of the heterojunction and facet engineering was systematically investigated. The physicochemical properties of the BiPO4/square BiOBr flake composites were characterized based on X-ray diffraction, field emission scanning electron microscopy, transmission electron microscopy, Brunauer-Emmett-Teller method, X-ray photoelectron spectroscopy, ultraviolet-visible diffuse reflectance spectra, photoluminescence, electrochemical impedance spectroscopy, and the photocurrent response. The BiPO4/square BiOBr flake heterojunction photocatalyst exhibited much higher photocatalytic performance compared with the individual BiPO4 and BiOBr. In particular, the BiPO4/BiOBr composite where P/Br = 1/3 exhibited the highest photocatalytic activity. The intensified separation of photoinduced charges at the p-n heterojunction between the BiPO4 nanoparticle and (001) facet of BiOBr was mainly responsible for the enhanced photoactivity.
2013-01-01
Background Falls among the elderly are a major public health concern. Therefore, the possibility of a modeling technique which could better estimate fall probability is both timely and needed. Using biomedical, pharmacological and demographic variables as predictors, latent class analysis (LCA) is demonstrated as a tool for the prediction of falls among community dwelling elderly. Methods Using a retrospective data set, a two-step LCA modeling approach was employed. First, we looked for the optimal number of latent classes for the seven medical indicators, along with the patients' prescription medication and three covariates (age, gender, and number of medications). Second, the appropriate latent class structure, with the covariates, was modeled on the distal outcome (fall/no fall). The default estimator was maximum likelihood with robust standard errors. The Pearson chi-square, likelihood ratio chi-square, BIC, Lo-Mendell-Rubin Adjusted Likelihood Ratio test and the bootstrap likelihood ratio test were used for model comparisons. Results A review of the model fit indices with covariates shows that a six-class solution was preferred. The predictive probability for latent classes ranged from 84% to 97%. Entropy, a measure of classification accuracy, was good at 90%. Specific prescription medications were found to strongly influence group membership. Conclusions In conclusion, the LCA method was effective at finding relevant subgroups within a heterogeneous population at risk of falling. This study demonstrated that LCA offers researchers a valuable tool to model medical data. PMID:23705639
Round versus rectangular: Does the plot shape matter?
NASA Astrophysics Data System (ADS)
Iserloh, Thomas; Bäthke, Lars; Ries, Johannes B.
2016-04-01
Field rainfall simulators are designed to study soil erosion processes and provide urgently needed data for various geomorphological, hydrological and pedological issues. Due to the different conditions and technologies applied, several methodological aspects are under review by the scientific community, particularly concerning design, procedures and conditions of measurement for infiltration, runoff and soil erosion. Extensive discussions at the Rainfall Simulator Workshop 2011 in Trier and the Splinter Meeting at EGU 2013 "Rainfall simulation: Big steps forward!" led to the opinion that the rectangular shape is the more suitable plot shape compared to the round plot. A horizontally edging Gerlach trough is installed for sample collection without forming the unnatural necks found on round or triangular plots. Since most research groups did and currently do work with round plots at the point scale (<1 m²), a precise analysis of the differences between the output of round and square plots is necessary. Our hypotheses are: - Round plot shapes disturb surface runoff; unnatural fluvial dynamics for the given plot size, such as pool development directly at the plot's outlet, occur. - A square plot shape prevents these problems. A first comparison between round and rectangular plots (Iserloh et al., 2015) indicates that the rectangular plot could indeed be the more suitable, but the rather ambiguous results make a more elaborate test setup necessary. The laboratory test setup includes the two plot shapes (round, square), a standardised silty substrate and three inclinations (2°, 6°, 12°). The analysis of the laboratory test provides results on the best performance concerning undisturbed surface runoff and soil/water sampling at the plot's outlet. The analysis of the plot shape concerning its influence on runoff and erosion shows that clear methodological standards are necessary in order to make rainfall simulation experiments comparable. Reference: Iserloh, T., Pegoraro, D., Schlösser, A., Thesing, H., Seeger, M., Ries, J.B. (2015): Rainfall simulation experiments: Influence of water temperature, water quality and plot design on soil erosion and runoff. Geophysical Research Abstracts, Vol. 17, EGU2015-5817.
A SIGNIFICANCE TEST FOR THE LASSO
Lockhart, Richard; Taylor, Jonathan; Tibshirani, Ryan J.; Tibshirani, Robert
2014-01-01
In the sparse linear regression setting, we consider testing the significance of the predictor variable that enters the current lasso model, in the sequence of models visited along the lasso solution path. We propose a simple test statistic based on lasso fitted values, called the covariance test statistic, and show that when the true model is linear, this statistic has an Exp(1) asymptotic distribution under the null hypothesis (the null being that all truly active variables are contained in the current lasso model). Our proof of this result for the special case of the first predictor to enter the model (i.e., testing for a single significant predictor variable against the global null) requires only weak assumptions on the predictor matrix X. On the other hand, our proof for a general step in the lasso path places further technical assumptions on X and the generative model, but still allows for the important high-dimensional case p > n, and does not necessarily require that the current lasso model achieves perfect recovery of the truly active variables. Of course, for testing the significance of an additional variable between two nested linear models, one typically uses the chi-squared test, comparing the drop in residual sum of squares (RSS) to a χ²₁ distribution. But when this additional variable is not fixed, and has been chosen adaptively or greedily, this test is no longer appropriate: adaptivity makes the drop in RSS stochastically much larger than χ²₁ under the null hypothesis. Our analysis explicitly accounts for adaptivity, as it must, since the lasso builds an adaptive sequence of linear models as the tuning parameter λ decreases. In this analysis, shrinkage plays a key role: though additional variables are chosen adaptively, the coefficients of lasso active variables are shrunken due to the ℓ1 penalty. Therefore, the test statistic (which is based on lasso fitted values) is in a sense balanced by these two opposing properties, adaptivity and shrinkage, and its null distribution is tractable and asymptotically Exp(1). PMID:25574062
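The classical nested-model test that the abstract contrasts with can be sketched directly: for a fixed additional predictor, the drop in RSS divided by σ² is referred to a χ²₁ distribution. The snippet below assumes σ² known and a predictor chosen in advance; as the abstract explains, this reference distribution is no longer valid when the predictor is selected adaptively along the lasso path.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

n, sigma = 100, 1.0
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)          # the additional, *fixed* predictor being tested
y = 2.0 + 1.5 * x1 + sigma * rng.standard_normal(n)   # x2 is truly inactive

def rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

X_small = np.column_stack([np.ones(n), x1])
X_big = np.column_stack([np.ones(n), x1, x2])

drop = rss(X_small, y) - rss(X_big, y)
stat = drop / sigma**2               # ~ chi-squared with 1 df for a fixed, pre-specified predictor
p_value = stats.chi2.sf(stat, df=1)
print(f"drop-in-RSS statistic = {stat:.3f}, p = {p_value:.3f}")
```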
Tests of Independence in Contingency Tables with Small Samples: A Comparison of Statistical Power.
ERIC Educational Resources Information Center
Parshall, Cynthia G.; Kromrey, Jeffrey D.
1996-01-01
Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
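The four tests compared in the study are all available in SciPy, as the short sketch below shows for an illustrative 2x2 table with small counts; the counts themselves are made up.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Small-sample 2x2 contingency table (illustrative counts).
table = np.array([[8, 2],
                  [3, 7]])

chi2_p, p_pearson, _, _ = chi2_contingency(table, correction=False)
chi2_y, p_yates, _, _ = chi2_contingency(table, correction=True)
g_stat, p_lr, _, _ = chi2_contingency(table, correction=False, lambda_="log-likelihood")
_, p_fisher = fisher_exact(table)

print(f"Pearson chi-square:   stat={chi2_p:.3f}, p={p_pearson:.3f}")
print(f"Yates-corrected:      stat={chi2_y:.3f}, p={p_yates:.3f}")
print(f"Likelihood ratio (G): stat={g_stat:.3f}, p={p_lr:.3f}")
print(f"Fisher's exact test:  p={p_fisher:.3f}")
```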
Sigmund, Erik; Sigmundová, Dagmar; Badura, Petr; Trhlíková, Lucie; Gecková, Andrea Madarasová
2016-07-13
To explore the time trends (2005-2015) of pedometer-determined weekday and weekend physical activity (PA) and obesity prevalence in 4-7-year-old Czech preschool children and changes in the proportion of kindergarten vs. leisure-time PA. The study compared data of two cross-sectional cohorts of preschool children (2005: 92 boys and 84 girls; 2015: 105 boys and 87 girls) in the Czech Republic, using the same measurements and procedures in both cases. PA was monitored by the Yamax Digiwalker SW-200 pedometer for at least eight continuous hours a day over seven consecutive days. Body weight and height were measured using calibrated Tanita scales and anthropometry. The analysis of variance was conducted to examine the gender and cohort effect on step counts. The t-test was used to examine the difference in step counts in kindergarten (or leisure-time) between non-obese and obese children, and the chi-square test compared the prevalence of obesity between 2005 and 2015. The steps/day (mean ± standard deviation) of preschoolers was significantly higher (p < 0.05) in 2015 (11,739 ± 4,229 steps/day) than in 2005 (10,922 ± 3,181 steps/day), and (p < 0.001) in boys (11,939 ± 3,855 steps/day) than in girls (10,668 ± 3,587 steps/day). In 2015, girls, but not boys, had a significantly (p < 0.01) greater step count on weekdays than in 2005, but not at weekends. A decline of leisure-time step counts on weekdays between 2005 and 2015 in girls (6,865 in 2005 vs. 6,059 in 2015, p < 0.01) and boys (7,861 in 2005 vs. 6,436 in 2015, p < 0.001) is compensated for by the increase of step counts in kindergarten (girls: 3,058 in 2005 vs. 5,330 in 2015, and boys: 4,003 in 2005 vs. 5,999 in 2015, p < 0.001). The prevalence of obesity did not differ significantly between 2005 and 2015 among preschool girls (7.14% in 2005 vs. 9.20% in 2015) or boys (6.52% in 2005 vs. 9.52% in 2015). The steps/day of preschoolers was higher in 2015 than in 2005; this higher level of PA was the result of increased PA in kindergartens over the last ten years, particularly among girls. Thus, the current PA program in kindergartens effectively compensates for the decline in leisure-time PA on weekdays of non-obese and obese preschoolers between 2005 and 2015. The prevalence of obesity among Czech preschool children remained relatively stable between 2005 and 2015.
Recommendation of LightSquared Subsidiary LLC
DOT National Transportation Integrated Search
2011-01-01
After a five-month effort, LightSquared, in cooperation with interested federal agencies and the commercial GPS device industry, has issued a Report on the results of intensive testing of the interaction between LightSquared's planned terrestrial o...
A Survey of Terrain Modeling Technologies and Techniques
2007-09-01
ERDC/TEC TR-08-2. Abstract: Test planning, rehearsal, and distributed test events for Future Combat Systems (FCS) require... [Figure residue: distributions of errors (by distance) for five lines of control points, with DSM errors (original data) plotted as blue circles and DTM errors (bare earth, processed by Intermap) as red squares, including line No. 729.]
Square-lashing technique in segmental spinal instrumentation: a biomechanical study.
Arlet, Vincent; Draxinger, Kevin; Beckman, Lorne; Steffen, Thomas
2006-07-01
Sublaminar wires have been used for many years for segmental spinal instrumentation in scoliosis surgery. More recently, stainless steel wires have been replaced by titanium cables. However, in rigid scoliotic curves, sublaminar wires or simple cables can either break or pull out. The square-lashing technique was devised to avoid complications such as cable breakage or lamina cutout. The purpose of the study was therefore to test biomechanically the pull-out strength and failure mode of simple sublaminar constructs versus the square-lashing technique. Individual vertebrae were subjected to pullout testing using one of two different constructs (single loop and square lashing) with either monofilament wire or multifilament cables. Four different methods of fixation were therefore tested: single wire construct, square-lashing wiring construct, single cable construct, and square-lashing cable construct. Ultimate failure load and failure mechanism were recorded. For the single wire, the construct failed 12/16 times by wire breakage with an average ultimate failure load of 793 N. For the square-lashing wire, the construct failed with pedicle fracture in 14/16, one bilateral lamina fracture, and one wire breakage; the ultimate failure load averaged 1,239 N. For the single cable, the construct failed 12/16 times due to cable breakage (average force 1,162 N); 10/12 of these breakages occurred where the cable looped over the rod. For the square-lashing cable, all of these constructs (16/16) failed by fracture of the pedicle with an average ultimate failure load of 1,388 N. The square-lashing construct had a higher pullout strength than the single loop and almost no cutting out from the lamina. The square-lashing technique with cables may therefore represent a new advance in segmental spinal instrumentation.
Power of tests for comparing trend curves with application to national immunization survey (NIS).
Zhao, Zhen
2011-02-28
Three statistical tests were developed for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and their statistical power was compared under different trend-curve data. For large sample sizes with an independent normal assumption among strata and across consecutive time points, Z and Chi-square test statistics were developed, which are functions of the outcome estimates and their standard errors at each of the study time points for the two strata. For small sample sizes with an independent normal assumption, an F-test statistic was generated, which is a function of the sample sizes of the two strata and the estimated parameters across the study period. If two trend curves are approximately parallel, the power of the Z-test is consistently higher than that of both the Chi-square and F-tests. If two trend curves cross with low interaction, the power of the Z-test is higher than or equal to the power of both the Chi-square and F-tests; however, with high interaction, the powers of the Chi-square and F-tests are higher than that of the Z-test. A measure of the interaction of two trend curves was defined. These tests were applied to the comparison of trend curves of vaccination coverage estimates for standard vaccine series using National Immunization Survey (NIS) 2000-2007 data. Copyright © 2011 John Wiley & Sons, Ltd.
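The abstract does not give the explicit test formulas, so the sketch below shows one plausible construction consistent with its description, built only from the stratum estimates and standard errors at each time point: an inverse-variance weighted Z statistic for the overall difference and a chi-square statistic summing squared standardized differences across time points. Both forms, and the example coverage numbers, are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

# Vaccination-coverage-style estimates (%) and standard errors for two strata
# across consecutive time points (illustrative numbers, not NIS data).
p1 = np.array([71.0, 73.5, 75.0, 77.2, 78.9])
se1 = np.array([1.1, 1.0, 1.0, 0.9, 0.9])
p2 = np.array([68.0, 70.0, 73.8, 76.5, 79.5])
se2 = np.array([1.2, 1.1, 1.0, 1.0, 0.9])

diff = p1 - p2
var = se1**2 + se2**2

# A plausible Z statistic: inverse-variance weighted overall difference.
w = 1.0 / var
z = np.sum(w * diff) / np.sqrt(np.sum(w))
p_z = 2 * stats.norm.sf(abs(z))

# A plausible chi-square statistic: sum of squared standardized differences.
chi2_stat = np.sum(diff**2 / var)
p_chi2 = stats.chi2.sf(chi2_stat, df=len(diff))

print(f"Z = {z:.2f} (p = {p_z:.3f}); chi-square = {chi2_stat:.2f} (p = {p_chi2:.3f})")
```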
Portillo, M; Lorenzo, M C; Moreno, P; García, A; Montero, J; Ceballos, L; Fuentes, M V; Albaladejo, A
2015-02-01
The aim of the present study was to evaluate the influence of erbium:yttrium-aluminum-garnet (Er:YAG) and Ti:sapphire laser irradiation on the microtensile bond strength (MTBS) of three different adhesive systems to dentin. Flat dentin surfaces from 27 molars were divided into three groups according to laser irradiation: control, Er:YAG (2,940 nm, 100 μs, 2.7 W, 9 Hz) and Ti:sapphire laser (795 nm, 120 fs, 1 W, 1 kHz). Each group was divided into three subgroups according to the adhesive system used: two-step total-etching adhesive (Adper Scotchbond 1 XT, from now on XT), two-step self-etching adhesive (Clearfil SE Bond, from now on CSE), and all-in-one self-etching adhesive (Optibond All-in-One, from now on OAO). After 24 h of water storage, beams with a cross-section of 1 mm² were cut longitudinally from the samples. Each beam underwent a tensile test in an Instron machine. Fifteen polished dentin specimens were used for the surface morphology analysis by scanning electron microscopy (SEM). Failure modes of representative debonded microbars were SEM-assessed. Data were analyzed by ANOVA, chi-square test, and multiple linear regression (p < 0.05). In the control group, XT obtained higher MTBS than the laser groups, which performed equally. CSE showed higher MTBS without laser irradiation than with it; among the laser groups, Er:YAG attained higher MTBS than the ultrashort-pulse laser. When OAO was used, MTBS values were equal across the three treatments. CSE obtained the highest MTBS regardless of the surface treatment applied. Er:YAG and ultrashort-pulse laser irradiation reduce the bonding effectiveness when a two-step total-etching adhesive or a two-step self-etching adhesive is used and do not affect effectiveness when an all-in-one self-etching adhesive is applied.
Latent trajectory studies: the basics, how to interpret the results, and what to report.
van de Schoot, Rens
2015-01-01
In statistics, tools have been developed to estimate individual change over time. In addition, the existence of latent trajectories, where individuals are captured by trajectories that are unobserved (latent), can be evaluated (Muthén & Muthén, 2000). The method used to evaluate such trajectories is called Latent Growth Mixture Modeling (LGMM) or Latent Class Growth Analysis (LCGA). The difference between the two models is whether variance within latent classes is allowed for (Jung & Wickrama, 2008). The default approach most often used when estimating such models begins with estimating a single-cluster model, where only a single underlying group is presumed. Next, several additional models are estimated with an increasing number of clusters (latent groups or classes). For each of these models, the software is allowed to estimate all parameters without any restrictions. A final model is chosen based on model comparison tools, for example, the BIC, the bootstrapped chi-square test, or the Lo-Mendell-Rubin test. To ease the step-by-step use of LGMM/LCGA, guidelines are presented in this symposium (Van de Schoot, 2015) which can be used by researchers applying the methods to longitudinal data, for example, the development of posttraumatic stress disorder (PTSD) after trauma (Depaoli, van de Schoot, van Loey, & Sijbrandij, 2015; Galatzer-Levy, 2015). The guidelines include how to use the software Mplus (Muthén & Muthén, 1998-2012) to run the set of models needed to answer the research question: how many latent classes exist in the data? The next step described in the guidelines is how to add covariates/predictors to predict class membership using the three-step approach (Vermunt, 2010). Lastly, the guidelines describe the essentials to report in the paper. When applying LGMM/LCGA models for the first time, the presented guidelines can be used to guide which models to run and what to report.
Time history prediction of direct-drive implosions on the Omega facility
Laffite, S.; Bourgade, J. L.; Caillaud, T.; ...
2016-01-14
We present in this article direct-drive experiments that were carried out on the Omega facility [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)]. Two different pulse shapes were tested in order to vary the implosion stability of the same target, whose parameters, dimensions and composition remained the same. The direct-drive configuration on the Omega facility allows the accurate time-resolved measurement of the scattered light. We show that, provided the laser coupling is well controlled, the implosion time history, assessed by the "bang-time" and the shell trajectory measurements, can be predicted. This conclusion is independent of the pulse shape. In contrast, we show that the pulse shape affects the implosion stability, assessed by comparing the target performances between prediction and measurement. For the 1-ns square pulse, the measured neutron number is about 80% of the prediction. For the 2-step 2-ns pulse, this ratio falls to about 20%.
A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers
Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund; ...
2018-03-28
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1 - PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected using a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved.
Multistep modeling of protein structure: application towards refinement of tyr-tRNA synthetase
NASA Technical Reports Server (NTRS)
Srinivasan, S.; Shibata, M.; Roychoudhury, M.; Rein, R.
1987-01-01
The scope of multistep modeling (MSM) is expanded by adding a least-squares minimization step to the procedure to fit a backbone reconstruction consistent with a set of C-alpha coordinates. The analytical solution for Phi and Psi angles that fits the C-alpha x-ray coordinates is used for tyr-tRNA synthetase. Phi and Psi angles for the region where the above-mentioned method fails are obtained by minimizing the difference in C-alpha distances between the computed model and the crystal structure in a least-squares sense. We present a stepwise application of this part of MSM to the determination of the complete backbone geometry of the 321 N-terminal residues of tyrosine tRNA synthetase to a root mean square deviation of 0.47 angstroms from the crystallographic C-alpha coordinates.
Field Performance of an Optimized Stack of YBCO Square “Annuli” for a Compact NMR Magnet
Hahn, Seungyong; Voccio, John; Bermond, Stéphane; Park, Dong-Keun; Bascuñán, Juan; Kim, Seok-Beom; Masaru, Tomita; Iwasa, Yukikazu
2011-01-01
The spatial field homogeneity and time stability of a trapped field generated by a stack of YBCO square plates with a center hole (square “annuli”) was investigated. By optimizing stacking of magnetized square annuli, we aim to construct a compact NMR magnet. The stacked magnet consists of 750 thin YBCO plates, each 40-mm square and 80- μm thick with a 25-mm bore, and has a Ø10 mm room-temperature access for NMR measurement. To improve spatial field homogeneity of the 750-plate stack (YP750) a three-step optimization was performed: 1) statistical selection of best plates from supply plates; 2) field homogeneity measurement of multi-plate modules; and 3) optimal assembly of the modules to maximize field homogeneity. In this paper, we present analytical and experimental results of field homogeneity and temporal stability at 77 K, performed on YP750 and those of a hybrid stack, YPB750, in which two YBCO bulk annuli, each Ø46 mm and 16-mm thick with a 25-mm bore, are added to YP750, one at the top and the other at the bottom. PMID:22081753
Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki
2014-01-01
Channel estimation is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an invariable step size cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced to the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, several selected simulation results are shown to prove that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics.
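The flavor of a sparse, variable step-size NLMS update can be conveyed with a short single-channel simulation: a zero-attracting (l1) term is added to the NLMS correction, and the step size is varied with the smoothed error power. The specific variable step-size rule and sparse penalty below are common heuristics chosen for illustration, not the algorithms proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# Sparse unknown channel and training data.
n_taps, n_samples = 64, 4000
h = np.zeros(n_taps)
h[[3, 17, 40, 55]] = rng.standard_normal(4)            # sparse impulse response

x = rng.standard_normal(n_samples)
d = np.convolve(x, h, mode="full")[:n_samples] + 0.01 * rng.standard_normal(n_samples)

# Zero-attracting NLMS with a simple variable step-size rule (illustrative, not the paper's).
w = np.zeros(n_taps)
mu_max, c, eps, rho = 1.0, 0.01, 1e-6, 1e-4
p_err = 0.0                                            # smoothed error power
for k in range(n_taps, n_samples):
    xk = x[k - n_taps + 1:k + 1][::-1]                 # regressor, most recent sample first
    e = d[k] - w @ xk                                  # a priori estimation error
    p_err = 0.95 * p_err + 0.05 * e * e                # track the error power
    mu = mu_max * p_err / (p_err + c)                  # large step while the error is big, small near the noise floor
    w += mu * (e * xk / (xk @ xk + eps) - rho * np.sign(w))   # NLMS update plus l1 (zero-attracting) term

print(f"channel estimation MSE: {np.mean((w - h) ** 2):.2e}")
```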
Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki
2014-01-01
Channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms were applied to adaptive sparse channel estimation (ACSE). It is well known that step-size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are vulnerable to cause estimation performance loss because ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced to VSS-NLMS algorithm for ASCE. In addition, difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are also derived. At last, to verify the effectiveness of the proposed algorithms for ASCE, several selected simulation results are shown to prove that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods via mean square error (MSE) and bit error rate (BER) metrics. PMID:25089286
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches for handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem is to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when adjustments are applied to sample sizes of a lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated when using the adjusted sample size function. Although there are large differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
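A sketch of the two strategies being compared, under the assumption that the model chi-square grows roughly in proportion to N - 1 for a fixed degree of misfit. The linear scaling function and the hypothetical fit_fn routine are illustrative stand-ins, not the exact procedures evaluated in the study.

```python
import numpy as np

def adjusted_chi_square(chi2_full, n_full, n_target):
    """Scale a chi-square obtained from the full sample down to a target
    sample size (chi-square ~ (N - 1) * F_min for a fixed misfit F_min)."""
    return chi2_full * (n_target - 1) / (n_full - 1)

def subsample_chi_square(data, n_target, fit_fn, n_rep=100, seed=0):
    """Alternative strategy: refit the model on random subsamples and
    average the resulting chi-squares.  fit_fn is a hypothetical routine
    returning the model chi-square for a data matrix."""
    rng = np.random.default_rng(seed)
    stats = [fit_fn(data[rng.choice(len(data), n_target, replace=False)])
             for _ in range(n_rep)]
    return float(np.mean(stats))
```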
Kuligowski, Julia; Carrión, David; Quintás, Guillermo; Garrigues, Salvador; de la Guardia, Miguel
2011-01-01
The selection of an appropriate calibration set is a critical step in multivariate method development. In this work, the effect of using different calibration sets, based on a previous classification of unknown samples, on partial least squares (PLS) regression model performance is discussed. As an example, attenuated total reflection (ATR) mid-infrared spectra of deep-fried vegetable oil samples from three botanical origins (olive, sunflower, and corn oil), with increasing polymerized triacylglyceride (PTG) content induced by a deep-frying process, were employed. The use of a one-class-classifier partial least squares-discriminant analysis (PLS-DA) and a rooted binary directed acyclic graph tree provided accurate oil classification. Oil samples fried without foodstuff could be classified correctly, independent of their PTG content. However, class separation of oil samples fried with foodstuff was less evident. The combined use of double-cross model validation with permutation testing was used to validate the obtained PLS-DA classification models, confirming the results. To assess the usefulness of selecting an appropriate PLS calibration set, the PTG content was determined by calculating a PLS model based on the previously selected classes. In comparison to a PLS model calculated using a pooled calibration set containing samples from all classes, the root mean square error of prediction could be improved significantly using PLS models based on the calibration sets selected with PLS-DA, ranging between 1.06 and 2.91% (w/w).
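A sketch of the two-stage idea with scikit-learn: a PLS-DA step (PLS regression on one-hot class labels) assigns each unknown spectrum to a class, and a class-specific PLS model then predicts the analyte, with RMSEP measuring prediction quality. The arrays, component counts, and the use of a plain multi-class PLS-DA in place of the paper's one-class classifiers and DAG tree are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def rmsep(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def fit_two_stage(X, y_class, y_ptg, n_comp=5):
    """X: spectra, y_class: botanical origin labels, y_ptg: analyte content."""
    classes = np.unique(y_class)
    # PLS-DA: regress a one-hot class matrix on the spectra
    Y_dummy = (y_class[:, None] == classes[None, :]).astype(float)
    plsda = PLSRegression(n_components=n_comp).fit(X, Y_dummy)
    # class-specific quantitative calibration models
    models = {c: PLSRegression(n_components=n_comp).fit(X[y_class == c],
                                                        y_ptg[y_class == c])
              for c in classes}
    return plsda, models, classes

def predict_two_stage(plsda, models, classes, X_new):
    cls = classes[np.argmax(plsda.predict(X_new), axis=1)]   # assign class first
    return np.array([models[c].predict(x[None, :])[0, 0]
                     for c, x in zip(cls, X_new)])
```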
A general rough-surface inversion algorithm: Theory and application to SAR data
NASA Technical Reports Server (NTRS)
Moghaddam, M.
1993-01-01
Rough-surface inversion has significant applications in the interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. Least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into the formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over some bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason, it is not limited to the inversion of rough surfaces and can be applied to any parameterized scattering process.
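The estimation step can be sketched with a stochastic nonlinear least-squares fit: the data misfit is augmented with a prior term on the parameters, reflecting their assumed statistics. The forward model spm_sigma0 below is a purely illustrative stand-in for the actual SPM expressions, and the observations, priors, and incidence angles are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def spm_sigma0(params, incidence_deg):
    # toy surrogate for the SPM backscatter model (not the real expressions)
    rms_height, corr_length, eps_r = params
    theta = np.radians(incidence_deg)
    return (rms_height ** 2 / corr_length) * np.cos(theta) ** 4 * eps_r

def residuals(params, incidence_deg, sigma0_obs, prior, prior_sd):
    # data misfit plus a simple stochastic (prior) term on the parameters
    data_term = spm_sigma0(params, incidence_deg) - sigma0_obs
    prior_term = (np.asarray(params) - prior) / prior_sd
    return np.concatenate([data_term, prior_term])

theta_obs = np.array([20.0, 30.0, 40.0, 50.0])
sigma0_obs = np.array([0.08, 0.05, 0.03, 0.02])          # synthetic observations
prior, prior_sd = np.array([0.5, 10.0, 6.0]), np.array([0.5, 10.0, 3.0])
fit = least_squares(residuals, x0=prior,
                    args=(theta_obs, sigma0_obs, prior, prior_sd))
print(fit.x)   # estimated (rms height, correlation length, permittivity)
```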
Technology of welding aluminum alloys-I
NASA Technical Reports Server (NTRS)
Harrison, J. R.; Korb, L. J.; Oleksiak, C. E.
1978-01-01
A systems approach to high-quality aluminum welding uses square-butt joints kept away from sharp contour changes. Intersecting welds are configured as T-type intersections rather than crossovers. Differences in panel thickness are accommodated with transition step areas in which the thickness increases or decreases within the weld, but never at an intersection.
NASA Astrophysics Data System (ADS)
Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca
2017-12-01
An important ingredient in numerical modelling of high temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating the parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms considerably reduces the simulation time step because of its stability-imposed dependence on the square of the grid resolution (Δx). Although an implicit scheme relaxes the stability constraint, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than 10^4 processors.
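The gain from super-time-stepping can be made concrete with a small calculation: an explicit parabolic update is stable only for dt ~ dx^2 / (2 D), while the hyperbolic step scales as dt ~ dx / v, and an s-stage RKL super-time-step covers roughly dt_par * (s^2 + s - 2) / 4 per call, so the required stage count grows only like the square root of the time-step ratio. The stage-count expression follows the standard RKL2 stability estimate; the grid spacing, diffusivity, and signal speed below are toy values.

```python
import numpy as np

def rkl_stages(dt_hyp, dt_par):
    """Smallest stage count s with dt_par * (s**2 + s - 2) / 4 >= dt_hyp."""
    ratio = dt_hyp / dt_par
    s = int(np.ceil(0.5 * (np.sqrt(9.0 + 16.0 * ratio) - 1.0)))
    return s + 1 if s % 2 == 0 else s        # odd stage counts are customary

dx, D, v = 1e-2, 1.0, 1.0
dt_par = dx ** 2 / (2.0 * D)                 # explicit diffusion limit
dt_hyp = 0.4 * dx / v                        # CFL-limited hyperbolic step
print(rkl_stages(dt_hyp, dt_par))            # stages needed to subcycle conduction
```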
Least squares regression methods for clustered ROC data with discrete covariates.
Tang, Liansheng Larry; Zhang, Wei; Li, Qizhai; Ye, Xuan; Chan, Leighton
2016-07-01
The receiver operating characteristic (ROC) curve is a popular tool to evaluate and compare the accuracy of diagnostic tests in distinguishing the diseased group from the nondiseased group when test results are continuous or ordinal. A complicated data setting occurs when multiple tests are measured on abnormal and normal locations from the same subject and the measurements are clustered within the subject. Although least squares regression methods can be used to estimate the ROC curve from correlated data, how to develop these methods for clustered data has not been studied, and the statistical properties of the least squares methods under the clustering setting are unknown. In this article, we develop least squares ROC methods that allow the baseline and link functions to differ and, more importantly, accommodate clustered data with discrete covariates. The methods can generate smooth ROC curves that satisfy the inherent continuous property of the true underlying curve. The least squares methods are shown to be more efficient than the existing nonparametric ROC methods under appropriate model assumptions in simulation studies. We apply the methods to a real example in the detection of glaucomatous deterioration. We also derive the asymptotic properties of the proposed methods. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
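A minimal sketch of a least-squares ROC fit in the simplest (unclustered, binormal) setting: the probit of the empirical true-positive fraction is regressed on the probit of the false-positive fraction over a grid of thresholds, giving a smooth curve ROC(t) = Phi(a + b * Phi^{-1}(t)). This ignores the clustering adjustment and discrete covariates that are the paper's contribution; the data are synthetic.

```python
import numpy as np
from scipy.stats import norm

def ls_binormal_roc(scores_dis, scores_nondis, grid=np.linspace(0.05, 0.95, 19)):
    thr = np.quantile(scores_nondis, 1.0 - grid)        # FPF grid -> thresholds
    tpf = np.array([(scores_dis > t).mean() for t in thr]).clip(1e-3, 1 - 1e-3)
    X = np.column_stack([np.ones_like(grid), norm.ppf(grid)])
    a, b = np.linalg.lstsq(X, norm.ppf(tpf), rcond=None)[0]
    return a, b          # smooth ROC: lambda t: norm.cdf(a + b * norm.ppf(t))

rng = np.random.default_rng(0)
a, b = ls_binormal_roc(rng.normal(1.0, 1.0, 300), rng.normal(0.0, 1.0, 300))
print(round(a, 2), round(b, 2))    # close to the true binormal values (1, 1)
```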
Dance therapy improves motor and cognitive functions in patients with Parkinson's disease.
de Natale, Edoardo Rosario; Paulus, Kai Stephan; Aiello, Elena; Sanna, Battistina; Manca, Andrea; Sotgiu, Giovanni; Leali, Paolo Tranquilli; Deriu, Franca
2017-01-01
To explore the effects of Dance Therapy (DT) and Traditional Rehabilitation (TR) on both motor and cognitive domains in Parkinson's Disease (PD) patients with postural instability. Sixteen PD patients with a recent history of falls were divided into two groups (Dance Therapy, DT, and Traditional Rehabilitation, TR); nine patients received 1-hour DT classes twice per week, completing 20 lessons within 10 weeks; seven patients received a similar cycle of 20 group sessions of 60 minutes of TR. Motor (Berg Balance Scale - BBS, Gait Dynamic Index - GDI, Timed Up and Go Test - TUG, 4 Square-Step Test - 4SST, 6-Minute Walking Test - 6MWT) and cognitive measures (Frontal Assessment Battery - FAB, Trail Making Test A & B - TMT A&B, Stroop Test) were tested at baseline, after treatment completion, and at 8-week follow-up. In the DT group, but not in the TR group, motor and cognitive outcomes significantly improved after treatment and were retained at follow-up. Significant changes were found for 6MWT (p = 0.028), TUG (p = 0.007), TMT-A (p = 0.014) and TMT-B (p = 0.036). DT is an unconventional physical therapy for PD patients that effectively improves motor functions (endurance and risk of falls) and non-motor functions (executive functions).
Design of a lightweight, tethered, torque-controlled knee exoskeleton.
Witte, Kirby Ann; Fatschel, Andreas M; Collins, Steven H
2017-07-01
Lower-limb exoskeletons show promise for improving gait rehabilitation for those with chronic gait abnormalities due to injury, stroke or other illness. We designed and built a tethered knee exoskeleton with a strong lightweight frame and comfortable, four-point contact with the leg. The device is structurally compliant in select directions, instrumented to measure joint angle and applied torque, and is lightweight (0.76 kg). The exoskeleton is actuated by two off-board motors. Closed loop torque control is achieved using classical proportional feedback control with damping injection in conjunction with iterative learning. We tested torque measurement accuracy and found root mean squared (RMS) error of 0.8 Nm with a max load of 62.2 Nm. Bandwidth was measured to be phase limited at 45 Hz when tested on a rigid test stand and 23 Hz when tested on a person's leg. During bandwidth tests peak extension torques were measured up to 50 Nm. Torque tracking was tested during walking on a treadmill at 1.25 m/s with peak flexion torques of 30 Nm. RMS torque tracking error averaged over a hundred steps was 0.91 Nm. We intend to use this knee exoskeleton to investigate robotic assistance strategies to improve gait rehabilitation and enhance human athletic ability.
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
ERIC Educational Resources Information Center
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
Trends in NRMP Data from 2007-2014 for U.S. Seniors Matching into Emergency Medicine.
Manthey, David E; Hartman, Nicholas D; Newmyer, Aileen; Gunalda, Jonah C; Hiestand, Brian C; Askew, Kim L; Lefebvre, Cedric
2017-01-01
Since 1978, the National Residency Matching Program (NRMP) has published data demonstrating characteristics of applicants who have matched into their preferred specialty in the NRMP main residency match. These data have been published approximately every two years. There is limited information about trends within these published data for students matching into emergency medicine (EM). Our objective was to investigate and describe trends in NRMP data, including the following: the ratio of applicants to available EM positions; United States Medical Licensing Examination (USMLE) Step 1 and Step 2 scores (compared to the national means); number of programs ranked; and Alpha Omega Alpha Honor Medical Society (AOA) membership among U.S. seniors matching into EM. This was a retrospective observational review of NRMP data published between 2007 and 2016. We analyzed the data using analysis of variance (ANOVA) or Kruskal-Wallis testing, and Fisher's exact or chi-squared testing, as appropriate to determine statistical significance. The ratio of applicants to available EM positions remained essentially stable from 2007 to 2014 but did increase slightly in 2016. We observed a net upward trend in overall Step 1 and Step 2 scores for EM applicants; however, this did not outpace the national trend of increasing Step 1 and Step 2 scores overall. There was an increase in the mean number of programs ranked by EM applicants over the years studied, from 7.8 (SD 4.2) to 9.2 (SD 5.0, p<0.001), driven predominantly by the cohort of U.S. students successful in the match. Among time intervals, there was a difference in the number of EM applicants with AOA membership (p=0.043) due to a drop in the number of AOA students in 2011. No sustained statistical trend in AOA membership was identified over the seven-year period studied. NRMP data demonstrate trends among EM applicants that are similar to national trends in other specialties for USMLE board scores, and a modest increase in the number of programs ranked. AOA membership was largely stable. EM does not appear to have become more competitive relative to other specialties or previous years in these categories.
Criterion Predictability: Identifying Differences Between r-squares
ERIC Educational Resources Information Center
Malgady, Robert G.
1976-01-01
An analysis of variance procedure for testing differences in r-squared, the coefficient of determination, across independent samples is proposed and briefly discussed. The principal advantage of the procedure is to minimize Type I error for follow-up tests of pairwise differences. (Author/JKS)
The chi-square test of independence.
McHugh, Mary L
2013-01-01
The Chi-square statistic is a non-parametric (distribution free) tool designed to analyze group differences when the dependent variable is measured at a nominal level. Like all non-parametric statistics, the Chi-square is robust with respect to the distribution of the data. Specifically, it does not require equality of variances among the study groups or homoscedasticity in the data. It permits evaluation of both dichotomous independent variables and of multiple-group studies. Unlike many other non-parametric and some parametric statistics, the calculations needed to compute the Chi-square provide considerable information about how each of the groups performed in the study. This richness of detail allows the researcher to understand the results and thus to derive more detailed information from this statistic than from many others. The Chi-square is a significance statistic, and should be followed with a strength statistic. Cramer's V is the most common strength test used when a significant Chi-square result has been obtained. Advantages of the Chi-square include its robustness with respect to the distribution of the data, its ease of computation, the detailed information that can be derived from the test, its use in studies for which parametric assumptions cannot be met, and its flexibility in handling data from both two-group and multiple-group studies. Limitations include its sample size requirements, difficulty of interpretation when there are large numbers of categories (20 or more) in the independent or dependent variables, and the tendency of Cramer's V to produce relatively low correlation measures, even for highly significant results.
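The test-plus-strength-statistic workflow described above can be illustrated in a few lines with SciPy; the contingency table below is made-up data for a two-group, three-category study.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Chi-square test of independence on a 2x3 contingency table, followed by
# Cramer's V as the recommended strength-of-association statistic.
table = np.array([[25, 30, 45],
                  [35, 20, 25]])
chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
k = min(table.shape) - 1                     # min(rows, cols) - 1
cramers_v = np.sqrt(chi2 / (n * k))
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}, V={cramers_v:.3f}")
```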
Thomas, Minta; De Brabanter, Kris; De Moor, Bart
2014-05-10
DNA microarrays are a potentially powerful technology for improving diagnostic classification, treatment selection, and prognostic assessment. The use of this technology to predict cancer outcome has a history of almost a decade. Disease class predictors can be designed for known disease cases and provide diagnostic confirmation or clarify abnormal cases. The main input to these class predictors is high-dimensional data with many variables and few observations. Dimensionality reduction of this feature set significantly speeds up the prediction task. Feature selection and feature transformation methods are well known preprocessing steps in the field of bioinformatics. Several prediction tools are available based on these techniques. Studies show that a well tuned Kernel PCA (KPCA) is an efficient preprocessing step for dimensionality reduction, but the available bandwidth selection method for KPCA was computationally expensive. In this paper, we propose a new data-driven bandwidth selection criterion for KPCA, which is related to least squares cross-validation for kernel density estimation. We propose a new prediction model with a well tuned KPCA and Least Squares Support Vector Machine (LS-SVM). We estimate the accuracy of the newly proposed model based on 9 case studies. Then, we compare its performance (in terms of test set Area Under the ROC Curve (AUC) and computational time) with other well known techniques such as whole data set + LS-SVM, PCA + LS-SVM, t-test + LS-SVM, Prediction Analysis of Microarrays (PAM) and Least Absolute Shrinkage and Selection Operator (Lasso). Finally, we assess the performance of the proposed strategy with an existing KPCA parameter tuning algorithm by means of two additional case studies. We propose, evaluate, and compare several mathematical/statistical techniques, which apply feature transformation/selection for subsequent classification, and consider its application in medical diagnostics. Both feature selection and feature transformation perform well on classification tasks. Due to the dynamic selection property of feature selection, it is hard to define significant features for the classifier, which predicts classes of future samples. Moreover, the proposed strategy enjoys a distinctive advantage with its relatively lower time complexity.
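The overall pipeline (tuned RBF kernel PCA as the dimensionality-reduction step feeding a downstream classifier) can be sketched with scikit-learn. Here the bandwidth (gamma) is chosen by cross-validated AUC rather than the density-estimation criterion proposed in the paper, and an ordinary SVM stands in for the LS-SVM, which is not available in scikit-learn; data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

pipe = Pipeline([
    ("kpca", KernelPCA(kernel="rbf", n_components=10)),   # feature transformation
    ("clf", SVC(kernel="linear", C=1.0)),                 # stand-in for LS-SVM
])
param_grid = {"kpca__gamma": np.logspace(-4, 1, 12)}      # candidate bandwidths
search = GridSearchCV(pipe, param_grid, cv=5, scoring="roc_auc")

X, y = make_classification(n_samples=120, n_features=500,
                           n_informative=15, random_state=0)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```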
Motalebi, Seyedeh Ameneh; Cheong, Loke Seng; Iranagh, Jamileh Amirzadeh; Mohammadi, Fatemeh
2018-01-01
Background/Study Context: Given the rapid increase in the aging population worldwide, fall prevention is of utmost importance. It is essential to establish an efficient, simple, safe, and low-cost intervention method for reducing the risk of falls. This study examined the effect of 12 weeks of progressive elastic resistance training on lower-limb muscle strength and balance in seniors living in the Rumah Seri Kenangan social welfare home in Cheras, Malaysia. A total of 51 subjects qualified to take part in this quasi-experimental study. They were assigned to either the resistance exercise group (n = 26) or the control group (n = 25). The mean age of the 45 participants who completed the program was 70.7 (SD = 6.6). The exercise group met twice per week and performed one to three sets of 8 to 10 repetitions for each of nine lower-limb elastic resistance exercises. All exercises were conducted at low to moderate intensities in sitting or standing positions. The subjects were tested at baseline and 6 and 12 weeks into the program. The results showed statistically significant improvements in lower-limb muscle strength as measured by the five times sit-to-stand test (%Δ = 22.6) and in dynamic balance quantified by the timed up-and-go test (%Δ = 18.7), four-square step test (%Δ = 14.67), and step test for the right (%Δ = 18.36) and left (%Δ = 18.80) legs. No significant changes were observed in static balance as measured using the tandem stand test (%Δ = 3.25), and the one-leg stand test with eyes opened (%Δ = 9.58) and eyes closed (%Δ = -0.61) after completion of the program. The findings support the feasibility and efficacy of a simple and inexpensive resistance training program to improve lower-limb muscle strength and dynamic balance among institutionalized older adults.
Squared eigenfunctions for the Sasa-Satsuma equation
NASA Astrophysics Data System (ADS)
Yang, Jianke; Kaup, D. J.
2009-02-01
Squared eigenfunctions are quadratic combinations of Jost functions and adjoint Jost functions which satisfy the linearized equation of an integrable equation. They are needed for various studies related to integrable equations, such as the development of its soliton perturbation theory. In this article, squared eigenfunctions are derived for the Sasa-Satsuma equation whose spectral operator is a 3×3 system, while its linearized operator is a 2×2 system. It is shown that these squared eigenfunctions are sums of two terms, where each term is a product of a Jost function and an adjoint Jost function. The procedure of this derivation consists of two steps: First is to calculate the variations of the potentials via variations of the scattering data by the Riemann-Hilbert method. The second one is to calculate the variations of the scattering data via the variations of the potentials through elementary calculations. While this procedure has been used before on other integrable equations, it is shown here, for the first time, that for a general integrable equation, the functions appearing in these variation relations are precisely the squared eigenfunctions and adjoint squared eigenfunctions satisfying, respectively, the linearized equation and the adjoint linearized equation of the integrable system. This proof clarifies this procedure and provides a unified explanation for previous results of squared eigenfunctions on individual integrable equations. This procedure uses primarily the spectral operator of the Lax pair. Thus two equations in the same integrable hierarchy will share the same squared eigenfunctions (except for a time-dependent factor). In the Appendix, the squared eigenfunctions are presented for the Manakov equations whose spectral operator is closely related to that of the Sasa-Satsuma equation.
Texas two-step: a framework for optimal multi-input single-output deconvolution.
Neelamani, Ramesh; Deffenbaugh, Max; Baraniuk, Richard G
2007-11-01
Multi-input single-output deconvolution (MISO-D) aims to extract a deblurred estimate of a target signal from several blurred and noisy observations. This paper develops a new two-step framework--Texas Two-Step--to solve MISO-D problems with known blurs. Texas Two-Step first reduces the MISO-D problem to a related single-input single-output deconvolution (SISO-D) problem by invoking the concept of sufficient statistics (SSs) and then solves the simpler SISO-D problem using an appropriate technique. The two-step framework enables new MISO-D techniques (both optimal and suboptimal) based on the rich suite of existing SISO-D techniques. In fact, the properties of SSs imply that a MISO-D algorithm is mean-squared-error optimal if and only if it can be rearranged to conform to the Texas Two-Step framework. Using this insight, we construct new wavelet- and curvelet-based MISO-D algorithms with asymptotically optimal performance. Simulated and real data experiments verify that the framework is indeed effective.
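A sketch of the two-step idea for known blurs and white Gaussian noise: step one collapses the multiple blurred observations into a single sufficient-statistic observation by matched filtering and summing in the Fourier domain, step two solves the resulting single-channel deconvolution, here with a simple Wiener-style filter standing in for the wavelet/curvelet estimators used in the paper. The noise and regularization constants are illustrative.

```python
import numpy as np

def texas_two_step(observations, blurs, noise_var=1e-2, signal_var=1.0):
    """observations: list of equal-length 1D signals; blurs: matching kernels."""
    Ys = [np.fft.fft(y) for y in observations]
    Hs = [np.fft.fft(h, n=len(y)) for h, y in zip(blurs, observations)]
    # Step 1: sufficient statistic and its effective (combined) blur
    Z = sum(np.conj(H) * Y for H, Y in zip(Hs, Ys))
    H_eff = sum(np.abs(H) ** 2 for H in Hs)
    # Step 2: SISO deconvolution of Z against H_eff (Wiener-style shrinkage)
    X_hat = Z / (H_eff + noise_var / signal_var)
    return np.real(np.fft.ifft(X_hat))
```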
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 31 2014-07-01 2014-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 32 2013-07-01 2013-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 32 2012-07-01 2012-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 31 2011-07-01 2011-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
Tavakolian, Samira; Doulabi, Mahbobeh Ahmadi; Baghban, Alireza Akbarzade; Mortazavi, Alireza; Ghorbani, Maryam
2015-01-01
Introduction: The copper IUD is a long-term, reversible contraceptive whose effectiveness is comparable to tubal ligation. One of the barriers to using this contraceptive method is the fear and pain associated with its insertion. Eutectic mixture of local anesthetics (EMLA) 5% is a local anesthetic that contains 25 mg of lidocaine and 25 mg of prilocaine per gram. Application of topical analgesic cream to the cervix is known for laser surgery, hysteroscopy, and hysterosalpingography. Aims: This study aimed to determine the effect of EMLA on IUD insertion pain. Methods: This triple-blind clinical trial was conducted on 92 women in a clinic in Hamedan in 2012. After applying the cream to the cervix, pain was assessed with a visual analog scale at three steps (after applying the tenaculum, after inserting the hysterometer, and after inserting the IUD and removing the insertion tube) and compared between the EMLA and placebo groups. Statistical analysis used independent t-tests, the Mann-Whitney U test, and repeated-measures analysis of variance to determine and compare pain; chi-square tests were used to assess the homogeneity of variables, and Fisher's exact test was used where appropriate. Results: Insertion of the hysterometer was the most painful step of IUD insertion. The mean pain at step 2 (inserting the hysterometer) was 3.11 ± 2.53 in the EMLA group and 5.23 ± 2.31 in the placebo group. EMLA cream significantly reduced pain after applying the tenaculum (P < 0.001), pain on inserting the hysterometer (P < 0.001), and pain at IUD insertion and removal of the insertion tube (P < 0.001). Conclusions: Topical application of EMLA 5% cream as a local anesthetic on the cervix before IUD insertion reduced the pain during this procedure. PMID:25946948
Image quality (IQ) guided multispectral image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik
2016-05-01
Image compression is necessary for data transportation, which saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of interested images by varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurements versus compression parameters from a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regressed models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if the IQ is specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
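A sketch of the regression-guided selection for a single method (JPEG): compress at several quality settings, regress PSNR against quality, then pick the setting predicted to meet a target PSNR. The quality grid, the quadratic regression, and the synthetic ramp image are illustrative choices rather than the parameters used in the paper.

```python
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def jpeg_quality_for_target(img, target_psnr, qualities=(20, 40, 60, 80, 95)):
    arr = np.asarray(img)
    scores = []
    for q in qualities:                       # step 1: compress and measure IQ
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        scores.append(psnr(arr, np.asarray(Image.open(buf))))
    coeffs = np.polyfit(qualities, scores, deg=2)         # step 2: regression model
    grid = np.arange(qualities[0], qualities[-1] + 1)
    feasible = grid[np.polyval(coeffs, grid) >= target_psnr]
    return int(feasible[0]) if feasible.size else None    # step 3: select parameter

ramp = np.tile(np.arange(0, 256, 2, dtype=np.uint8), (128, 1))
img = Image.fromarray(ramp, mode="L")
print(jpeg_quality_for_target(img, target_psnr=40.0))
```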
Roux, C Z
2009-05-01
Short phylogenetic distances between taxa occur, for example, in studies on ribosomal RNA genes with slow substitution rates. For consistently short distances, it is proved that, in the completely singular limit of the covariance matrix, ordinary least squares (OLS) estimates are minimum variance, or best linear unbiased (BLU), estimates of phylogenetic tree branch lengths. Although OLS estimates are in this situation equal to generalized least squares (GLS) estimates, the GLS chi-square likelihood ratio test will be inapplicable as it is associated with zero degrees of freedom. Consequently, an OLS normal distribution test or an analogous bootstrap approach will provide optimal branch length tests of significance for consistently short phylogenetic distances. As the asymptotic covariances between branch lengths will be equal to zero, it follows that the product rule can be used in tree evaluation to calculate an approximate simultaneous confidence probability that all interior branches are positive.
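OLS branch-length estimation reduces to an ordinary linear least-squares problem: each pairwise distance is the sum of the branch lengths on the path between the two taxa. The sketch below shows this for an unrooted 4-taxon tree ((A,B),(C,D)) with five branches; the distances are toy values consistent with that topology.

```python
import numpy as np

# Design matrix mapping the 5 branch lengths (a, b, c, d, internal) to the
# 6 pairwise distances for the tree ((A,B),(C,D)).
design = np.array([
    [1, 1, 0, 0, 0],   # d(A,B) = a + b
    [1, 0, 1, 0, 1],   # d(A,C) = a + c + internal
    [1, 0, 0, 1, 1],   # d(A,D) = a + d + internal
    [0, 1, 1, 0, 1],   # d(B,C) = b + c + internal
    [0, 1, 0, 1, 1],   # d(B,D) = b + d + internal
    [0, 0, 1, 1, 0],   # d(C,D) = c + d
], dtype=float)
distances = np.array([0.10, 0.19, 0.21, 0.21, 0.23, 0.12])
branches, *_ = np.linalg.lstsq(design, distances, rcond=None)
print(dict(zip(["a", "b", "c", "d", "internal"], branches.round(4))))
```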
Domain ordering of strained 5 ML SrTiO3 films on Si(001)
NASA Astrophysics Data System (ADS)
Ryan, P.; Wermeille, D.; Kim, J. W.; Woicik, J. C.; Hellberg, C. S.; Li, H.
2007-05-01
High-resolution x-ray diffraction data indicate ordered, square-shaped coherent domains, ˜1200 Å in length, coexisting with longer, ˜9500 Å correlated regions in highly strained 5 ML SrTiO3 films grown on Si(001). These long-range film structures are due to the Si substrate terraces defined by the surface step morphology. The silicon surface "step pattern" comprises an "intrinsic" terrace length from strain relaxation and a longer "extrinsic" interstep distance due to the surface miscut.
Shear Melting of a Colloidal Glass
NASA Astrophysics Data System (ADS)
Eisenmann, Christoph; Kim, Chanjoong; Mattsson, Johan; Weitz, David A.
2010-01-01
We use confocal microscopy to explore shear melting of colloidal glasses, which occurs at strains of ˜0.08, coinciding with a strongly non-Gaussian step size distribution. For larger strains, the particle mean square displacement increases linearly with strain and the step size distribution becomes Gaussian. The effective diffusion coefficient varies approximately linearly with shear rate, consistent with a modified Stokes-Einstein relationship in which thermal energy is replaced by shear energy and the length scale is set by the size of cooperatively moving regions consisting of ˜3 particles.
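The two quantities this abstract hinges on, the mean square displacement of particle steps and the non-Gaussian parameter of the step-size distribution, are easy to compute from trajectory data. The sketch below uses the 1D definition of the non-Gaussian parameter (zero for a Gaussian distribution) on synthetic step data; the heavy-tailed distribution is only a stand-in for cage-breaking dynamics.

```python
import numpy as np

def msd_and_alpha2(steps):
    """Mean square step size and 1D non-Gaussian parameter alpha_2."""
    dx2 = np.mean(steps ** 2)
    dx4 = np.mean(steps ** 4)
    alpha2 = dx4 / (3.0 * dx2 ** 2) - 1.0
    return dx2, alpha2

rng = np.random.default_rng(1)
gaussian_steps = rng.normal(0.0, 0.1, 10000)
heavy_tailed = rng.standard_t(5, 10000) * 0.1       # mimics intermittent jumps
print(msd_and_alpha2(gaussian_steps))               # alpha2 ~ 0
print(msd_and_alpha2(heavy_tailed))                 # alpha2 > 0, non-Gaussian
```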
Conductance and refraction across a Barrier in Phosphorene
NASA Astrophysics Data System (ADS)
Dahal, Dipendra; Gumbs, Godfrey
The transmission coefficient and ballistic conductance for monolayer black phosphorene are calculated in the presence of a potential step or square barrier. The Landauer-Büttiker formalism is employed in our calculations of the conductance. We obtain the refractive index for the step potential barrier when an incident electron beam travels along different paths, so as to observe what role the anisotropy of the energy bands plays. Numerical results are presented for various potential heights and barrier widths, and these are compared with those for gapless and gapped graphene.
Variety Wins: Soccer-Playing Robots and Infant Walking
Ossmy, Ori; Hoch, Justine E.; MacAlpine, Patrick; Hasan, Shohan; Stone, Peter; Adolph, Karen E.
2018-01-01
Although both infancy and artificial intelligence (AI) researchers are interested in developing systems that produce adaptive, functional behavior, the two disciplines rarely capitalize on their complementary expertise. Here, we used soccer-playing robots to test a central question about the development of infant walking. During natural activity, infants' locomotor paths are immensely varied. They walk along curved, multi-directional paths with frequent starts and stops. Is the variability observed in spontaneous infant walking a “feature” or a “bug?” In other words, is variability beneficial for functional walking performance? To address this question, we trained soccer-playing robots on walking paths generated by infants during free play and tested them in simulated games of “RoboCup.” In Tournament 1, we compared the functional performance of a simulated robot soccer team trained on infants' natural paths with teams trained on less varied, geometric paths—straight lines, circles, and squares. Across 1,000 head-to-head simulated soccer matches, the infant-trained team consistently beat all teams trained with less varied walking paths. In Tournament 2, we compared teams trained on different clusters of infant walking paths. The team trained with the most varied combination of path shape, step direction, number of steps, and number of starts and stops outperformed teams trained with less varied paths. This evidence indicates that variety is a crucial feature supporting functional walking performance. More generally, we propose that robotics provides a fruitful avenue for testing hypotheses about infant development; reciprocally, observations of infant behavior may inform research on artificial intelligence. PMID:29867427
Freitag, L E; Tyack, P L
1993-04-01
A method for localization and tracking of calling marine mammals was tested under realistic field conditions that include noise, multipath, and arbitrarily located sensors. Experiments were performed in two locations using four and six hydrophones with captive Atlantic bottlenose dolphins (Tursiops truncatus). Acoustic signals from the animals were collected in the field using a digital acoustic data acquisition system. The data were then processed off-line to determine relative hydrophone positions and the animal locations. Accurate hydrophone position estimates are achieved by pinging sequentially from each hydrophone to all the others. A two-step least-squares algorithm is then used to determine sensor locations from the calibration data. Animal locations are determined by estimating the time differences of arrival of the dolphin signals at the different sensors. The peak of a matched filter output or the first cycle of the observed waveform is used to determine arrival time of an echolocation click. Cross correlation between hydrophones is used to determine inter-sensor time delays of whistles. Calculation of source location using the time difference of arrival measurements is done using a least-squares solution to minimize error. These preliminary experimental results based on a small set of data show that realistic trajectories for moving animals may be generated from consecutive location estimates.
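The source-location step described, minimizing the mismatch between measured and modelled time differences of arrival (TDOAs) at known hydrophone positions, can be sketched as a small nonlinear least-squares problem. The geometry, sound speed, and TDOAs below are synthetic, and the real field data would of course include noise and multipath.

```python
import numpy as np
from scipy.optimize import least_squares

C = 1500.0                                    # sound speed in water, m/s
hydrophones = np.array([[0.0, 0.0], [30.0, 0.0], [30.0, 25.0], [0.0, 25.0]])
true_source = np.array([12.0, 9.0])
ranges = np.linalg.norm(hydrophones - true_source, axis=1)
tdoa = (ranges[1:] - ranges[0]) / C           # arrival times relative to sensor 0

def residuals(xy):
    r = np.linalg.norm(hydrophones - xy, axis=1)
    return (r[1:] - r[0]) / C - tdoa          # modelled minus measured TDOAs

fit = least_squares(residuals, x0=np.array([15.0, 12.0]))
print(fit.x)                                  # recovers the source position
```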
Mishra, Vishal
2015-01-01
The interchange of the protons with the cell wall-bound calcium and magnesium ions at the interface of solution/bacterial cell surface in the biosorption system at various concentrations of protons has been studied in the present work. A mathematical model for establishing the correlation between concentration of protons and active sites was developed and optimized. The sporadic limited residence time reactor was used to titrate the calcium and magnesium ions at the individual data point. The accuracy of the proposed mathematical model was estimated using error functions such as nonlinear regression, adjusted nonlinear regression coefficient, the chi-square test, P-test and F-test. The values of the chi-square test (0.042-0.017), P-test (<0.001-0.04), sum of square errors (0.061-0.016), root mean square error (0.01-0.04) and F-test (2.22-19.92) reported in the present research indicated the suitability of the model over a wide range of proton concentrations. The zeta potential of the bacterium surface at various concentrations of protons was observed to validate the denaturation of active sites.
Phillips, Steven P.; Belitz, Kenneth
1991-01-01
The occurrence of selenium in agricultural drain water from the western San Joaquin Valley, California, has focused concern on the semiconfined ground-water flow system, which is underlain by the Corcoran Clay Member of the Tulare Formation. A two-step procedure is used to calibrate a preliminary model of the system for the purpose of determining the steady-state hydraulic properties. Horizontal and vertical hydraulic conductivities are modeled as functions of the percentage of coarse sediment, hydraulic conductivities of coarse-textured (Kcoarse) and fine-textured (Kfine) end members, and averaging methods used to calculate equivalent hydraulic conductivities. The vertical conductivity of the Corcoran (Kcorc) is an additional parameter to be evaluated. In the first step of the calibration procedure, the model is run by systematically varying the following variables: (1) Kcoarse/Kfine, (2) Kcoarse/Kcorc, and (3) choice of averaging methods in the horizontal and vertical directions. Root mean square error and bias values calculated from the model results are functions of these variables. These measures of error provide a means for evaluating model sensitivity and for selecting values of Kcoarse, Kfine, and Kcorc for use in the second step of the calibration procedure. In the second step, recharge rates are evaluated as functions of Kcoarse, Kcorc, and a combination of averaging methods. The associated Kfine values are selected so that the root mean square error is minimized on the basis of the results from the first step. The results of the two-step procedure indicate that the spatial distribution of hydraulic conductivity that best produces the measured hydraulic head distribution is created through the use of arithmetic averaging in the horizontal direction and either geometric or harmonic averaging in the vertical direction. The equivalent hydraulic conductivities resulting from either combination of averaging methods compare favorably to field- and laboratory-based values.
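The texture-based conductivity model at the heart of the calibration, an equivalent hydraulic conductivity built from the coarse-sediment fraction, end-member conductivities, and a choice of averaging method, can be written out directly. The end-member values and fractions below are illustrative, not the study's calibrated values.

```python
import numpy as np

def equivalent_k(frac_coarse, k_coarse, k_fine, method):
    """Equivalent conductivity of a cell from its coarse-sediment fraction."""
    f = np.asarray(frac_coarse)
    if method == "arithmetic":                 # typically favoured horizontally
        return f * k_coarse + (1 - f) * k_fine
    if method == "geometric":
        return k_coarse ** f * k_fine ** (1 - f)
    if method == "harmonic":                   # typically favoured vertically
        return 1.0 / (f / k_coarse + (1 - f) / k_fine)
    raise ValueError(method)

frac = np.array([0.2, 0.5, 0.8])
for m in ("arithmetic", "geometric", "harmonic"):
    print(m, equivalent_k(frac, k_coarse=10.0, k_fine=0.01, method=m).round(3))
```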
Searching regional rainfall homogeneity using atmospheric fields
NASA Astrophysics Data System (ADS)
Gabriele, Salvatore; Chiaravalloti, Francesco
2013-03-01
The correct identification of homogeneous areas in regional rainfall frequency analysis is fundamental to ensure the best selection of the probability distribution and of the regional model, which together produce low bias and low root mean square error in quantile estimation. In an attempt to delineate spatially homogeneous rainfall regions, the paper explores a new approach based on meteo-climatic information. The results are verified ex-post using standard homogeneity tests applied to the annual maximum daily rainfall series. The first step of the proposed procedure selects two different types of homogeneous large regions: convective macro-regions, which contain high values of the Convective Available Potential Energy index, normally associated with convective rainfall events, and stratiform macro-regions, which are characterized by low values of the Q vector Divergence index, associated with dynamic instability and stratiform precipitation. These macro-regions are identified using Hot Spot Analysis to emphasize clusters of extreme values of the indexes. In the second step, inside each identified macro-region, homogeneous sub-regions are found using kriging interpolation on the mean direction of the Vertically Integrated Moisture Flux. To check the proposed procedure, two detailed examples of homogeneous sub-regions are examined.
NASA Astrophysics Data System (ADS)
Picot, Joris; Glockner, Stéphane
2018-07-01
We present an analytical study of discretization stencils for the Poisson problem and the incompressible Navier-Stokes problem when used with some direct forcing immersed boundary methods. This study uses, but is not limited to, second-order discretization and Ghost-Cell Finite-Difference methods. We show that the stencil size increases with the aspect ratio of rectangular cells, which is undesirable as it breaks assumptions of some linear system solvers. To circumvent this drawback, a modification of the Ghost-Cell Finite-Difference methods is proposed to reduce the size of the discretization stencil to the one observed for square cells, i.e. with an aspect ratio equal to one. Numerical results validate this proposed method in terms of accuracy and convergence, for the Poisson problem and both Dirichlet and Neumann boundary conditions. An improvement on error levels is also observed. In addition, we show that the application of the chosen Ghost-Cell Finite-Difference methods to the Navier-Stokes problem, discretized by a pressure-correction method, requires an additional interpolation step. This extra step is implemented and validated through well known test cases of the Navier-Stokes equations.
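The core ghost-cell idea can be illustrated in one dimension: for a Dirichlet condition on an immersed boundary, the ghost value is chosen so that a linear profile through a neighbouring fluid (image) value hits the prescribed boundary value exactly at the wall. This is only a minimal second-order, one-ghost-cell sketch; the methods studied in the paper generalize it to 2D/3D stencils, where the stencil-size issue discussed above arises.

```python
def ghost_value_dirichlet(u_image, u_wall, d_image, d_ghost):
    """Ghost value from a linear profile in signed distance from the wall:
    u_image sits a distance d_image inside the fluid, the ghost cell a
    distance d_ghost behind the wall, and u_wall is the boundary value."""
    return u_wall + (u_wall - u_image) * (d_ghost / d_image)

u_image, u_wall = 2.0, 0.0          # interior velocity and no-slip wall value
print(ghost_value_dirichlet(u_image, u_wall, d_image=0.5, d_ghost=0.5))  # -> -2.0
```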
Universal quantum computation with temporal-mode bilayer square lattices
NASA Astrophysics Data System (ADS)
Alexander, Rafael N.; Yokoyama, Shota; Furusawa, Akira; Menicucci, Nicolas C.
2018-03-01
We propose an experimental design for universal continuous-variable quantum computation that incorporates recent innovations in linear-optics-based continuous-variable cluster state generation and cubic-phase gate teleportation. The first ingredient is a protocol for generating the bilayer-square-lattice cluster state (a universal resource state) with temporal modes of light. With this state, measurement-based implementation of Gaussian unitary gates requires only homodyne detection. Second, we describe a measurement device that implements an adaptive cubic-phase gate, up to a random phase-space displacement. It requires a two-step sequence of homodyne measurements and consumes a (non-Gaussian) cubic-phase state.
Kampf, Günter; Reise, Gesche; James, Claudia; Gittelbauer, Kirsten; Gosch, Jutta; Alpers, Birgit
2013-01-01
Peripheral venous catheters are frequently used in hospitalized patients but increase the risk of nosocomial bloodstream infection. Evidence-based guidelines describe specific steps that are known to reduce infection risk. However, the degree of guideline implementation in clinical practice is not known. The aim of this study was to determine the use of specific steps for insertion of peripheral venous catheters in clinical practice and to implement a multimodal intervention aimed at improving both compliance and the optimum order of the steps. The study was conducted at University Hospital Hamburg. An optimum procedure for inserting a peripheral venous catheter was defined based on three evidence-based guidelines (WHO, CDC, RKI) including five steps with 1A or 1B level of evidence: hand disinfection before patient contact, skin antisepsis of the puncture site, no palpation of treated puncture site, hand disinfection before aseptic procedure, and sterile dressing on the puncture site. A research nurse observed and recorded procedures for peripheral venous catheter insertion for healthcare workers in four different departments (endoscopy, central emergency admissions, pediatrics, and dermatology). A multimodal intervention with 5 elements was established (teaching session, dummy training, e-learning tool, tablet and poster, and direct feedback), followed by a second observation period. During the last observation week, participants evaluated the intervention. In the control period, 207 insertions were observed, and 202 in the intervention period. Compliance improved significantly for four of five steps (e.g., from 11.6% to 57.9% for hand disinfection before patient contact; p<0.001, chi-square test). Compliance with skin antisepsis of the puncture site was high before and after intervention (99.5% before and 99.0% after). Performance of specific steps in the correct order also improved (e.g., from 7.7% to 68.6% when three of five steps were done; p<0.001). The intervention was described as helpful by 46.8% of the participants, as neutral by 46.8%, and as disruptive by 6.4%. A multimodal strategy to improve both compliance with safety steps for peripheral venous catheter insertion and performance of an optimum procedure was effective and was regarded helpful by healthcare workers.
Siewert, Bettina; Brook, Olga R; Hochman, Mary; Eisenberg, Ronald L
2016-03-01
The purpose of this study is to analyze the impact of communication errors on patient care, customer satisfaction, and work-flow efficiency and to identify opportunities for quality improvement. We performed a search of our quality assurance database for communication errors submitted from August 1, 2004, through December 31, 2014. Cases were analyzed regarding the step in the imaging process at which the error occurred (i.e., ordering, scheduling, performance of examination, study interpretation, or result communication). The impact on patient care was graded on a 5-point scale from none (0) to catastrophic (4). The severity of impact between errors in result communication and those that occurred at all other steps was compared. Error evaluation was performed independently by two board-certified radiologists. Statistical analysis was performed using the chi-square test and kappa statistics. Three hundred eighty of 422 cases were included in the study. One hundred ninety-nine of the 380 communication errors (52.4%) occurred at steps other than result communication, including ordering (13.9%; n = 53), scheduling (4.7%; n = 18), performance of examination (30.0%; n = 114), and study interpretation (3.7%; n = 14). Result communication was the single most common step, accounting for 47.6% (181/380) of errors. There was no statistically significant difference in impact severity between errors that occurred during result communication and those that occurred at other times (p = 0.29). In 37.9% of cases (144/380), there was an impact on patient care, including 21 minor impacts (5.5%; result communication, n = 13; all other steps, n = 8), 34 moderate impacts (8.9%; result communication, n = 12; all other steps, n = 22), and 89 major impacts (23.4%; result communication, n = 45; all other steps, n = 44). In 62.1% (236/380) of cases, no impact was noted, but 52.6% (200/380) of cases had the potential for an impact. Among 380 communication errors in a radiology department, 37.9% had a direct impact on patient care, with an additional 52.6% having a potential impact. Most communication errors (52.4%) occurred at steps other than result communication, with similar severity of impact.
An improved partial least-squares regression method for Raman spectroscopy
NASA Astrophysics Data System (ADS)
Momenpour Tehran Monfared, Ali; Anis, Hanan
2017-10-01
It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve BVSPLS based on a novel selection mechanism. The proposed method is based on sorting the weighted regression coefficients; the importance of each variable in the sorted list is then evaluated using the root mean square error of prediction (RMSEP) criterion at each iteration step. Our Improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to an improvement in the limit of detection of Raman biosensing ranging from 10% to 43% compared to PLS. Our IBVSPLS was also compared to the jack-knifing (simpler) and Genetic Algorithm (more complex) methods. Our method was consistently better than the jack-knifing method and showed either similar or better performance compared to the genetic algorithm.
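A sketch of RMSEP-guided backward variable selection for PLS: variables are ranked by the magnitude of the PLS regression coefficients and the least important fraction is dropped while cross-validated RMSEP keeps improving. The ranking criterion (plain coefficient magnitude rather than the paper's weighted coefficients), the drop fraction, and the component count are simplifying assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def rmsep(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def backward_pls(X, y, n_comp=5, drop_frac=0.1, cv=5):
    keep = np.arange(X.shape[1])
    best_keep, best = keep.copy(), np.inf
    while len(keep) > n_comp:
        pls = PLSRegression(n_components=n_comp)
        err = rmsep(y, cross_val_predict(pls, X[:, keep], y, cv=cv).ravel())
        if err > best:                           # RMSEP stopped improving
            break
        best, best_keep = err, keep.copy()
        pls.fit(X[:, keep], y)
        order = np.argsort(np.abs(pls.coef_).ravel())     # least important first
        n_drop = max(1, int(drop_frac * len(keep)))
        keep = keep[np.sort(order[n_drop:])]              # drop the weakest variables
    return best_keep, best
```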
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhang, X.; Xiao, W.
2018-04-01
As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. First, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. A sifting algorithm is used to filter the initial value of the iteration so that the initial error is as small as possible. The experimental results show that this method needs no additional equipment, can continuously update the calibration parameters, and compensates the geomagnetic sensor error better than the two-step estimation method.
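A common way to picture a nine-parameter magnetometer correction is h = A (m - b), with a 3-vector offset b and an upper-triangular 3x3 matrix A (scale plus non-orthogonality), fitted so that the calibrated field magnitude is constant over rotations. The sketch below solves this with a generic least-squares routine on synthetic data; it is an illustrative formulation, not the paper's HHT pre-processing or Newton/least-squares combination.

```python
import numpy as np
from scipy.optimize import least_squares

def unpack(p):
    a11, a12, a13, a22, a23, a33 = p[:6]
    A = np.array([[a11, a12, a13], [0.0, a22, a23], [0.0, 0.0, a33]])
    return A, p[6:]

def residuals(p, meas):
    A, b = unpack(p)
    h = (meas - b) @ A.T                    # calibrated field vectors
    return np.linalg.norm(h, axis=1) - 1.0  # magnitude should be constant (1)

rng = np.random.default_rng(0)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)          # unit true field
meas = dirs @ np.diag([1.1, 0.9, 1.05]) + np.array([0.2, -0.1, 0.05])
p0 = np.array([1, 0, 0, 1, 0, 1, 0, 0, 0], dtype=float)
fit = least_squares(residuals, p0, args=(meas,))
print(unpack(fit.x))                        # recovered correction matrix and offset
```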
Sound power and vibration levels for two different piano soundboards
NASA Astrophysics Data System (ADS)
Squicciarini, Giacomo; Valiente, Pablo Miranda; Thompson, David J.
2016-09-01
This paper compares the sound power and vibration levels for two different soundboards for upright pianos. One of them is made of laminated spruce and the other of solid spruce (tone-wood). The two also differ in the number of ribs and manufacturing procedure. The methodology used is defined in two major steps: (i) the acoustic power due to a unit force is obtained reciprocally by measuring the acceleration response of the piano soundboards when excited by acoustic waves in a reverberant field; (ii) impact tests are adopted to measure driving-point and spatially-averaged mean-square transfer mobility. The results show that, in the mid-high frequency range, the soundboard made of solid spruce has a greater vibrational and acoustic response than the laminated soundboard. The effect of string tension is also addressed, showing that it is only relevant at low frequencies.
Towards a Unified Framework for Pose, Expression, and Occlusion Tolerant Automatic Facial Alignment.
Seshadri, Keshav; Savvides, Marios
2016-10-01
We propose a facial alignment algorithm that is able to jointly deal with the presence of facial pose variation, partial occlusion of the face, and varying illumination and expressions. Our approach proceeds from sparse to dense landmarking steps using a set of specific models trained to best account for the shape and texture variation manifested by facial landmarks and facial shapes across pose and various expressions. We also propose the use of a novel l1-regularized least squares approach that we incorporate into our shape model, which is an improvement over the shape model used by several prior Active Shape Model (ASM) based facial landmark localization algorithms. Our approach is compared against several state-of-the-art methods on many challenging test datasets and exhibits a higher fitting accuracy on all of them.
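The role of an l1-regularized least-squares shape fit can be illustrated with a toy problem: observed landmark displacements are explained by a sparse combination of shape basis vectors, which tends to be more robust than a plain least-squares fit when some landmarks are occluded or noisy. The basis, coefficients, and Lasso penalty below are random stand-ins, not the paper's trained shape model or its exact regularized solver.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
basis = rng.normal(size=(136, 20))             # 68 landmarks x (x, y), 20 shape modes
true_coeffs = np.zeros(20)
true_coeffs[[1, 4, 7]] = [1.5, -0.8, 0.5]      # only a few modes truly active
obs = basis @ true_coeffs + 0.05 * rng.normal(size=136)

fit = Lasso(alpha=0.01).fit(basis, obs)        # l1-regularized least squares
print(np.nonzero(fit.coef_.round(2))[0])       # recovers a sparse set of modes
```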
Pharmacy Students' Knowledge Assessment of Naegleria fowleri Infection
Shakeel, Sadia; Iffat, Wajiha; Khan, Madeeha
2016-01-01
A cross-sectional study was conducted from April to August 2015 to assess the knowledge of pharmacy students regarding Naegleria fowleri infection. A questionnaire was distributed to senior pharmacy students in different private and public sector universities of Karachi. Descriptive statistics were used to summarize students' demographic information and their responses to the questionnaire. The Pearson chi-square test was adopted to assess the relationship between independent variables and students' responses. The study revealed that pharmacy students had adequate awareness of Naegleria fowleri infection and considered it a serious health issue that necessitates immediate steps by the government to protect the general public from this fatal neurological infection. The students recommended that appropriate awareness measures be promoted in the community from time to time to increase public awareness of the associated risk factors. PMID:26981318
Song, Jingwei; He, Jiaying; Zhu, Menghua; Tan, Debao; Zhang, Yu; Ye, Song; Shen, Dingtao; Zou, Pengfei
2014-01-01
A simulated annealing (SA) based variable-weighted forecast model is proposed to combine and weight a local chaotic model, an artificial neural network (ANN), and a partial least squares support vector machine (PLS-SVM) to build a more accurate forecast model. The hybrid model was built, and its multistep-ahead prediction ability tested, using daily municipal solid waste (MSW) generation data from Seattle, Washington, the United States. The hybrid forecast model was shown to produce more accurate and reliable results and to degrade less over longer prediction horizons than the three individual models. The average one-week-ahead prediction error was reduced from 11.21% (chaotic model), 12.93% (ANN), and 12.94% (PLS-SVM) to 9.38%. The five-week average was reduced from 13.02% (chaotic model), 15.69% (ANN), and 15.92% (PLS-SVM) to 11.27%. PMID:25301508
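The combination step can be sketched as an annealing search over the weights of a convex combination of the three individual forecasts, with the combined error as the objective. SciPy's dual_annealing stands in for the paper's SA implementation, and the forecasts and observations below are synthetic placeholders for the chaotic-model, ANN, and PLS-SVM predictions.

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(0)
obs = 100 + 10 * np.sin(np.arange(60) / 5.0)                   # synthetic series
preds = np.stack([obs + rng.normal(0, s, 60) for s in (4.0, 6.0, 6.0)])

def mape(w_raw):
    w = (np.abs(w_raw) + 1e-12)
    w /= w.sum()                              # normalise to a convex combination
    combo = w @ preds
    return np.mean(np.abs(combo - obs) / obs) * 100.0

result = dual_annealing(mape, bounds=[(0.0, 1.0)] * 3, seed=1)
weights = (np.abs(result.x) + 1e-12); weights /= weights.sum()
print(weights.round(3), round(result.fun, 2))   # learned weights and combined MAPE
```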
Sfondrini, Maria Francesca; Gatti, Sara; Scribante, Andrea
2011-07-01
Our aim was to assess the effect of blood contamination on the shear bond strength and sites of failure of orthodontic brackets and bondable buttons. We randomly divided 160 bovine permanent mandibular incisors into 8 groups of 20 specimens each. Both orthodontic brackets (Step brackets, Leone, Sesto Fiorentino, Italy) and bondable buttons (Flat orthodontic buttons, Leone, Sesto Fiorentino, Italy) were tested on four different enamel surface conditions: dry; contaminated with blood before priming; after priming; and both before and after priming. Brackets and buttons were bonded to the teeth and subsequently tested using an Instron universal testing machine. Shear bond strength and the rate of adhesive failures were recorded. Data were analysed using analysis of variance (ANOVA), Scheffé tests, and the chi-square test. Uncontaminated enamel surfaces showed the highest bond strengths for both brackets and buttons. When contaminated with blood, orthodontic brackets had significantly lower shear strengths than bondable buttons (P=0.0001). There were significant differences in sites of failure among the groups for the various enamel surfaces (P=0.001). Contamination of enamel by blood during bonding lowers the strength of the bond, more so with orthodontic brackets than with bondable buttons. Copyright © 2010 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Calibration of Self-Efficacy for Conducting a Chi-Squared Test of Independence
ERIC Educational Resources Information Center
Zimmerman, Whitney Alicia; Goins, Deborah D.
2015-01-01
Self-efficacy and knowledge, both concerning the chi-squared test of independence, were examined in education graduate students. Participants rated statements concerning self-efficacy and completed a related knowledge assessment. After completing a demographic survey, participants completed the self-efficacy and knowledge scales a second time.…
NASA Astrophysics Data System (ADS)
Bates, Paul D.; Horritt, Matthew S.; Fewtrell, Timothy J.
2010-06-01
This paper describes the development of a new set of equations derived from 1D shallow water theory for use in 2D storage cell inundation models where flows in the x and y Cartesian directions are decoupled. The new equation set is designed to be solved explicitly at very low computational cost, and is here tested against a suite of four test cases of increasing complexity. In each case the predicted water depths compare favourably to analytical solutions or to simulation results from the diffusive storage cell code of Hunter et al. (2005). For the most complex test involving the fine spatial resolution simulation of flow in a topographically complex urban area the Root Mean Squared Difference between the new formulation and the model of Hunter et al. is ˜1 cm. However, unlike diffusive storage cell codes where the stable time step scales with (1/Δx)², the new equation set developed here represents shallow water wave propagation and so the stability is controlled by the Courant-Friedrichs-Lewy condition such that the stable time step instead scales with 1/Δx. This allows use of a stable time step that is 1-3 orders of magnitude greater for typical cell sizes than that possible with diffusive storage cell models and results in commensurate reductions in model run times. For the tests reported in this paper the maximum speed up achieved over a diffusive storage cell model was 1120×, although the actual value seen will depend on model resolution and water surface gradient. Solutions using the new equation set are shown to be grid-independent for the conditions considered and to have an intuitively correct sensitivity to friction, however small instabilities and increased errors on predicted depth were noted when Manning's n = 0.01. The new equations are likely to find widespread application in many types of flood inundation modelling and should provide a useful additional tool, alongside more established model formulations, for a variety of flood risk management studies.
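A 1D sketch of an explicit, decoupled storage-cell update of the kind described: a simplified inertial momentum equation with semi-implicit friction gives the inter-cell flow, continuity updates the depths, and the stable time step obeys a CFL-type condition that scales with Δx rather than Δx². The geometry, roughness, initial condition, and the exact form of the flow-depth and friction terms are illustrative assumptions rather than the paper's full formulation.

```python
import numpy as np

g, n_mann, alpha = 9.81, 0.03, 0.7
dx = 10.0
z = np.zeros(100)                          # flat bed elevations
h = np.zeros(100); h[:10] = 2.0            # initial block of water
q = np.zeros(99)                           # flow per unit width at cell interfaces

for _ in range(200):
    dt = alpha * dx / np.sqrt(g * max(h.max(), 0.01))     # CFL-like step, ~ dx
    eta = z + h                                           # water surface elevation
    h_flow = np.maximum(np.maximum(eta[1:], eta[:-1]) -
                        np.maximum(z[1:], z[:-1]), 0.0)   # effective flow depth
    slope = (eta[1:] - eta[:-1]) / dx
    denom = 1.0 + g * dt * n_mann ** 2 * np.abs(q) / np.maximum(h_flow, 1e-6) ** (7.0 / 3.0)
    q = np.where(h_flow > 0.0, (q - g * h_flow * dt * slope) / denom, 0.0)
    h[1:-1] += dt * (q[:-1] - q[1:]) / dx                 # continuity, interior cells
    h[0] -= dt * q[0] / dx
    h[-1] += dt * q[-1] / dx
    h = np.maximum(h, 0.0)

print(round(h.sum() * dx, 3))              # total volume, conserved up to clipping
```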
ERIC Educational Resources Information Center
Pan, Tianshu; Yin, Yue
2012-01-01
In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)² and SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First,…
ERIC Educational Resources Information Center
Mooijaart, Ab; Satorra, Albert
2009-01-01
In this paper, we show that for some structural equation models (SEM), the classical chi-square goodness-of-fit test is unable to detect the presence of nonlinear terms in the model. As an example, we consider a regression model with latent variables and interactions terms. Not only the model test has zero power against that type of…
Examinations of electron temperature calculation methods in Thomson scattering diagnostics.
Oh, Seungtae; Lee, Jong Ha; Wi, Hanmin
2012-10-01
Electron temperature from the Thomson scattering diagnostic is derived through indirect calculation based on a theoretical model. A chi-square test is commonly used in the calculation, and the reliability of the calculation method depends strongly on the noise level of the input signals. In the simulations, noise effects on the chi-square test are examined and a scale factor test is proposed as an alternative method.
Sajeevan, Thara Purath; Saraswathi, Tillai Rajasekaran; Ranganathan, Kannan; Joshua, Elizabeth; Rao, Uma Devi K
2014-07-01
p53 protein is a product of the p53 gene, which is now classified as a tumor suppressor gene. The gene is a frequent target for mutation, being seen as a common step in the pathogenesis of many human cancers. Proliferating cell nuclear antigen (PCNA) is an auxiliary protein of DNA polymerase delta and plays a critical role in the initiation of cell proliferation. The aim of this study is to assess and compare the expression of p53 and PCNA in the lining epithelium of odontogenic keratocyst (OKC) and periapical cyst (PA). A total of 20 cases comprising 10 OKC and 10 PA were included in this retrospective study. Three paraffin sections of 4 μm were cut; one was used for routine hematoxylin and eosin staining, while the other two were used for immunohistochemistry. Statistical analysis was performed using the chi-square test. The level of staining and intensity were assessed in all these cases. OKC showed PCNA expression in all cases (100%), whereas in periapical cyst only 60% of cases exhibited PCNA staining. (1) OKC showed p53 expression in 6 cases (60%), whereas in PA only 10% of the cases exhibited p53 staining. The chi-square test showed that PCNA staining intensity was more significant than p53 in OKC. (2) The staining intensity of PA using p53 and PCNA revealed that PCNA staining intensity was more significant than p53. OKC shows significantly greater proliferative activity than PA using PCNA and p53. PCNA staining was more intense when compared with p53 in both OKC and PA.
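For readers unfamiliar with the test used above, a 2×2 chi-square comparison of the stated positivity counts (PCNA: 10/10 vs. 6/10; p53: 6/10 vs. 1/10) can be reproduced as in the sketch below; the counts come from the abstract, and with expected cell counts this small a Fisher exact test would often be preferred.

```python
from scipy.stats import chi2_contingency, fisher_exact

# rows: OKC, PA; columns: marker-positive, marker-negative (counts from the abstract)
pcna = [[10, 0], [6, 4]]
p53  = [[6, 4], [1, 9]]

for name, table in [("PCNA", pcna), ("p53", p53)]:
    chi2, p, dof, expected = chi2_contingency(table)   # Yates-corrected by default
    _, p_exact = fisher_exact(table)                    # safer with expected counts < 5
    print(f"{name}: chi2={chi2:.2f}, p={p:.3f}, Fisher p={p_exact:.3f}")
```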
Flight Testing of the Gulfstream Quiet Spike(TradeMark) on a NASA F-15B
NASA Technical Reports Server (NTRS)
Smolka, James W.; Cowert, Robert A.; Molzahn, Leslie M.
2007-01-01
Gulfstream Aerospace has long been interested in the development of an economically viable supersonic business jet (SBJ). A design requirement for such an aircraft is the ability for unrestricted supersonic flight over land. Although independent studies continue to substantiate that a market for a SBJ exists, regulatory and public acceptance challenges still remain for supersonic operation over land. The largest technical barrier to achieving this goal is sonic boom attenuation. Gulfstream's attention has been focused on fundamental research into sonic boom suppression for several years. This research was conducted in partnership with the NASA Aeronautics Research Mission Directorate (ARMD) supersonic airframe cruise efficiency technical challenge. The Quiet Spike, a multi-stage telescopic nose boom and a Gulfstream-patented design (references 1 and 2), was developed to address the sonic boom attenuation challenge and validate the technical feasibility of a morphing fuselage. The Quiet Spike Flight Test Program represents a major step into supersonic technology development for sonic boom suppression. The Gulfstream Aerospace Quiet Spike was designed to reduce the sonic boom signature of the forward fuselage for an aircraft flying at supersonic speeds. In 2004, the Quiet Spike Flight Test Program was conceived by Gulfstream and NASA to demonstrate the feasibility of sonic boom mitigation and centered on the structural and mechanical viability of the translating test article design. Research testing of the Quiet Spike consisted of numerous ground and flight operations. Each step in the process had unique objectives, and involved numerous test team members from the NASA Dryden Flight Research Center (DFRC) and Gulfstream Aerospace. Flight testing of the Quiet Spike was conducted at the NASA Dryden Flight Research Center on an F-15B aircraft from August, 2006, to February, 2007. During this period, the Quiet Spike was flown at supersonic speeds up to Mach 1.8 at the maximum design dynamic pressure of 685 pounds per square foot. Extension and retraction tests were conducted at speeds up to Mach 1.4. The design of the Quiet Spike to shape the forward shock wave environment of the aircraft was confirmed during near-field shock wave probing at Mach 1.4. Thirty-two flights were performed without incident and all project objectives were achieved. The success of the Quiet Spike Flight Test Program represents an important step towards developing commercial aircraft capable of supersonic flight over land within the continental United States and in international airspace.
Arend, Carlos Frederico; Arend, Ana Amalia; da Silva, Tiago Rodrigues
2014-06-01
The aim of our study was to systematically compare different methodologies to establish an evidence-based approach based on tendon thickness and structure for sonographic diagnosis of supraspinatus tendinopathy when compared to MRI. US was obtained from 164 symptomatic patients with supraspinatus tendinopathy detected at MRI and 42 asymptomatic controls with normal MRI. Diagnostic yield was calculated for either maximal supraspinatus tendon thickness (MSTT) and tendon structure as isolated criteria and using different combinations of parallel and sequential testing at US. Chi-squared tests were performed to assess sensitivity, specificity, and accuracy of different diagnostic approaches. Mean MSTT was 6.68 mm in symptomatic patients and 5.61 mm in asymptomatic controls (P<.05). When used as an isolated criterion, MSTT>6.0mm provided best results for accuracy (93.7%) when compared to other measurements of tendon thickness. Also as an isolated criterion, abnormal tendon structure (ATS) yielded 93.2% accuracy for diagnosis. The best overall yield was obtained by both parallel and sequential testing using either MSTT>6.0mm or ATS as diagnostic criteria at no particular order, which provided 99.0% accuracy, 100% sensitivity, and 95.2% specificity. Among these parallel and sequential tests that provided best overall yield, additional analysis revealed that sequential testing first evaluating tendon structure required assessment of 258 criteria (vs. 261 for sequential testing first evaluating tendon thickness and 412 for parallel testing) and demanded a mean of 16.1s to assess diagnostic criteria and reach the diagnosis (vs. 43.3s for sequential testing first evaluating tendon thickness and 47.4s for parallel testing). We found that using either MSTT>6.0mm or ATS as diagnostic criteria for both parallel and sequential testing provides the best overall yield for sonographic diagnosis of supraspinatus tendinopathy when compared to MRI. Among these strategies, a two-step sequential approach first assessing tendon structure was advantageous because it required a lower number of criteria to be assessed and demanded less time to assess diagnostic criteria and reach the diagnosis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
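The parallel and sequential strategies compared above reduce to simple decision rules over the two criteria (MSTT > 6.0 mm, abnormal tendon structure). The sketch below illustrates that logic only; the toy cohort and the way "criteria assessed" are counted are assumptions, not the authors' workload metric.

```python
def parallel_rule(mstt_mm, abnormal_structure):
    """Positive if either criterion is positive; both criteria are always assessed."""
    return (mstt_mm > 6.0) or abnormal_structure

def sequential_structure_first(mstt_mm, abnormal_structure):
    """Assess structure first; measure thickness only when structure is normal.
    Returns (diagnosis, number_of_criteria_assessed)."""
    if abnormal_structure:
        return True, 1
    return mstt_mm > 6.0, 2

# toy cohort: (MSTT in mm, abnormal structure?, tendinopathy on MRI?)
cohort = [(7.1, True, True), (6.4, False, True), (5.4, False, False), (5.9, True, True)]
hits = sum(sequential_structure_first(m, s)[0] == truth for m, s, truth in cohort)
print(f"sequential (structure first) agrees with MRI in {hits}/{len(cohort)} cases")
```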
Statistical analysis of multivariate atmospheric variables. [cloud cover
NASA Technical Reports Server (NTRS)
Tubbs, J. D.
1979-01-01
Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate to near-normal; (5) test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) test of fit for continuous distributions based upon the generalized minimum chi-square; (7) effect of correlated observations on confidence sets based upon chi-square statistics; and (8) generation of random variates from specified distributions.
Survival Position Location Using Star Sighting
1959-08-20
sky connecting Polaris and the star at the end of the handle of the Big Dipper (Alkaid). As a final step he readjusts the tie-downs (making sure the...unskilled user in reliably finding the desired stars has been added. Emphasis is placed on the Big Dipper, Orion, the Square of Pegasus, and the Northern Cross
Differential equations for loop integrals in Baikov representation
NASA Astrophysics Data System (ADS)
Bosma, Jorrit; Larsen, Kasper J.; Zhang, Yang
2018-05-01
We present a proof that differential equations for Feynman loop integrals can always be derived in Baikov representation without involving dimension-shift identities. We moreover show that in a large class of two- and three-loop diagrams it is possible to avoid squared propagators in the intermediate steps of setting up the differential equations.
Multistep hierarchical self-assembly of chiral nanopore arrays
Kim, Hanim; Lee, Sunhee; Shin, Tae Joo; Korblova, Eva; Walba, David M.; Clark, Noel A.; Lee, Sang Bok; Yoon, Dong Ki
2014-01-01
A series of simple hierarchical self-assembly steps achieve self-organization from the centimeter to the subnanometer-length scales in the form of square-centimeter arrays of linear nanopores, each one having a single chiral helical nanofilament of large internal surface area and interfacial interactions based on chiral crystalline molecular arrangements. PMID:25246585
Determination of suitable drying curve model for bread moisture loss during baking
NASA Astrophysics Data System (ADS)
Soleimani Pour-Damanab, A. R.; Jafary, A.; Rafiee, S.
2013-03-01
This study presents mathematical modelling of bread moisture loss or drying during baking in a conventional bread baking process. In order to estimate and select the appropriate moisture loss curve equation, 11 different models, semi-theoretical and empirical, were applied to the experimental data and compared according to their correlation coefficients, chi-squared test and root mean square error values obtained by nonlinear regression analysis. Consequently, of all the drying models, the Page model was selected as the best one, according to its correlation coefficient, chi-squared test, and root mean square error values and its simplicity. The mean absolute estimation error of the proposed model by linear regression analysis for natural and forced convection modes was 2.43% and 4.74%, respectively.
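A minimal sketch of the kind of fit described above, the Page thin-layer model MR(t) = exp(-k t^n) fitted by nonlinear regression and scored with R², reduced chi-square, and RMSE, is shown below; the data points are synthetic and the implementation is not the authors'.

```python
import numpy as np
from scipy.optimize import curve_fit

def page(t, k, n):
    return np.exp(-k * t**n)          # Page thin-layer drying model

# synthetic baking data: time [min], moisture ratio (dimensionless)
t = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)
mr = np.array([1.00, 0.82, 0.63, 0.47, 0.34, 0.25, 0.18])

(k, n), _ = curve_fit(page, t, mr, p0=(0.05, 1.0))
pred = page(t, k, n)
resid = mr - pred
rmse = np.sqrt(np.mean(resid**2))
r2 = 1 - np.sum(resid**2) / np.sum((mr - mr.mean())**2)
chi2 = np.sum(resid**2) / (len(t) - 2)    # reduced chi-square with 2 fitted parameters
print(f"k={k:.4f}, n={n:.3f}, R2={r2:.4f}, RMSE={rmse:.4f}, chi2={chi2:.2e}")
```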
Fisseha, Berihu; Janakiraman, Balamurugan; Yitayeh, Asmare; Ravichandran, Hariharasudhan
2017-02-01
Falls and fall-related injuries have become an emerging health problem among older adults. As a result, a review of recent evidence is needed to design a prevention strategy. The aim of this review was to determine the effect of square stepping exercise (SSE) on fall-related injury among older adults compared with walking training or other exercises. An electronic database search for relevant randomized controlled trials published in English from 2005 to 2016 was conducted. Articles with outcome measures of functional reach, perceived health status, and fear of falling were included. Quality of the included articles was rated using the Physiotherapy Evidence Database (PEDro) scale and the pooled effect of SSE was obtained with Review Manager (RevMan5) software. A significant effect of SSE over walking or no treatment was detected for improving balance as well as for reducing fear of falling and improving perceived health status. The results of this systematic review suggest that SSE is significantly better than walking or no treatment for preventing falls, reducing fear of falling, and improving perceived health status.
Hysteresis Modeling of Magnetic Shape Memory Alloy Actuator Based on Krasnosel'skii-Pokrovskii Model
Wang, Shoubin; Gao, Wei
2013-01-01
As a new type of intelligent material, magnetic shape memory alloy (MSMA) performs well in actuator manufacturing applications. Compared with traditional actuators, the MSMA actuator has the advantages of fast response and large deformation; however, the hysteresis nonlinearity of the MSMA actuator restricts further improvement of its control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed. In order to demonstrate the validity of the proposed modeling approach, simulation experiments are performed with the improved gradient correction algorithm and the variable step-size recursive least squares estimation algorithm, respectively. Simulation results of both identification algorithms demonstrate that the proposed modeling approach can establish an effective and accurate hysteresis model for the MSMA actuator, and it provides a foundation for improving the control precision of the MSMA actuator. PMID:23737730
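To illustrate the weight-identification step, a generic recursive least-squares loop with a forgetting factor is sketched below; it is not the authors' improved gradient-correction or variable step-size algorithm (whose specific step-size rule is not given here), only the standard building block that such schemes refine.

```python
import numpy as np

def rls_identify(Phi, y, lam=0.98, delta=1e3):
    """Recursive least squares with forgetting factor lam.
    Phi: (N, p) regressor matrix (e.g. KP operator outputs), y: (N,) measured output.
    Returns the identified weight vector theta of length p."""
    N, p = Phi.shape
    theta = np.zeros(p)
    P = delta * np.eye(p)                      # large initial covariance
    for k in range(N):
        phi = Phi[k]
        K = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta = theta + K * (y[k] - phi @ theta)
        P = (P - np.outer(K, phi) @ P) / lam
    return theta

# toy check: recover weights of a linear-in-parameters model from noisy data
rng = np.random.default_rng(0)
Phi = rng.normal(size=(500, 4))
true_w = np.array([0.5, -1.2, 2.0, 0.3])
y = Phi @ true_w + 0.01 * rng.normal(size=500)
print(rls_identify(Phi, y))
```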
Lucius, Aaron L.; Maluf, Nasib K.; Fischer, Christopher J.; Lohman, Timothy M.
2003-01-01
Helicase-catalyzed DNA unwinding is often studied using “all or none” assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using “n-step” sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the “kinetic step size”, m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using “n-step” sequential mechanisms has previously been limited by an inability to float the number of “unwinding steps”, n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f_ss(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f_ss(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation. PMID:14507688
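For the simplest case of n identical, irreversible steps with rate constant k, the "all or none" time course is the regularized lower incomplete gamma function, which is what lets n be treated as a continuous fitting parameter. The sketch below assumes that simplified scheme (single amplitude A, no slow initiation or dissociation steps), so it is an illustration rather than the full models analyzed in the paper.

```python
import numpy as np
from scipy.special import gammainc   # regularized lower incomplete gamma P(a, x)

def fraction_unwound(t, A, k, n):
    """Fraction of fully unwound duplex vs. time for n identical sequential
    steps of rate k (gamma/Erlang waiting time); n may be non-integer."""
    return A * gammainc(n, k * np.asarray(t, dtype=float))

t = np.linspace(0, 10, 6)            # seconds
print(fraction_unwound(t, A=0.9, k=2.0, n=4.5))
# longer duplexes imply larger n = L/m, producing the characteristic lag phase
```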
Normal versus Noncentral Chi-Square Asymptotics of Misspecified Models
ERIC Educational Resources Information Center
Chun, So Yeon; Shapiro, Alexander
2009-01-01
The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling. Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main…
21 CFR 177.1670 - Polyvinyl alcohol film.
Code of Federal Regulations, 2013 CFR
2013-04-01
... tables 1 and 2 of § 176.170(c) of this chapter, yields total extractives not to exceed 0.078 milligram per square centimeter (0.5 milligram per square inch) of food-contact surface when tested by ASTM... Materials,” which is incorporated by reference. Copies may be obtained from the American Society for Testing...
21 CFR 177.1670 - Polyvinyl alcohol film.
Code of Federal Regulations, 2012 CFR
2012-04-01
... tables 1 and 2 of § 176.170(c) of this chapter, yields total extractives not to exceed 0.078 milligram per square centimeter (0.5 milligram per square inch) of food-contact surface when tested by ASTM... Materials,” which is incorporated by reference. Copies may be obtained from the American Society for Testing...
False star detection and isolation during star tracking based on improved chi-square tests.
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Yang, Yanqiang; Su, Guohua
2017-08-01
The star sensor is a precise attitude measurement device for a spacecraft. Star tracking is the main and key working mode for a star sensor. However, during star tracking, false stars become an inevitable interference for star sensor applications, which may result in declined measurement accuracy. A false star detection and isolation algorithm in star tracking based on improved chi-square tests is proposed in this paper. Two estimations are established based on a Kalman filter and a priori information, respectively. The false star detection is operated through adopting the global state chi-square test in a Kalman filter. The false star isolation is achieved using a local state chi-square test. Semi-physical experiments under different trajectories with various false stars are designed for verification. Experiment results show that various false stars can be detected and isolated from navigation stars during star tracking, and the attitude measurement accuracy is hardly influenced by false stars. The proposed algorithm is proved to have an excellent performance in terms of speed, stability, and robustness.
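The detection step described above amounts to a chi-square consistency check on the Kalman-filter innovation. A minimal normalized-innovation-squared (NIS) gate is sketched below; the paper's second, a priori-based estimator and its isolation logic are not reproduced, and the matrices here are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def innovation_gate(y, H, x_pred, P_pred, R, alpha=0.01):
    """Return (is_outlier, nis): chi-square test on the filter innovation.
    A measurement (e.g. a tracked star) failing the gate is flagged as false."""
    r = y - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    nis = float(r @ np.linalg.solve(S, r))  # normalized innovation squared
    threshold = chi2.ppf(1.0 - alpha, df=len(y))
    return nis > threshold, nis

# toy example: 2D measurement of a 2D state
H = np.eye(2); R = 0.01 * np.eye(2); P = 0.05 * np.eye(2)
x_pred = np.array([1.0, 2.0])
print(innovation_gate(np.array([1.02, 1.97]), H, x_pred, P, R))   # consistent
print(innovation_gate(np.array([1.90, 0.50]), H, x_pred, P, R))   # flagged as false
```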
Improving the Incoherence of a Learned Dictionary via Rank Shrinkage.
Ubaru, Shashanka; Seghouane, Abd-Krim; Saad, Yousef
2017-01-01
This letter considers the problem of dictionary learning for sparse signal representation whose atoms have low mutual coherence. To learn such dictionaries, at each step, we first update the dictionary using the method of optimal directions (MOD) and then apply a dictionary rank shrinkage step to decrease its mutual coherence. In the rank shrinkage step, we first compute a rank 1 decomposition of the column-normalized least squares estimate of the dictionary obtained from the MOD step. We then shrink the rank of this learned dictionary by transforming the problem of reducing the rank to a nonnegative garrotte estimation problem and solving it using a path-wise coordinate descent approach. We establish theoretical results that show that the rank shrinkage step included will reduce the coherence of the dictionary, which is further validated by experimental results. Numerical experiments illustrating the performance of the proposed algorithm in comparison to various other well-known dictionary learning algorithms are also presented.
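The MOD dictionary update used in the first step has a simple closed form, sketched below together with a crude rank-shrinkage stand-in that merely truncates the smallest singular values; the paper's actual shrinkage uses a nonnegative garrotte solved by path-wise coordinate descent, which is not reproduced here.

```python
import numpy as np

def mod_update(Y, X, eps=1e-8):
    """Method of Optimal Directions: least-squares dictionary given codes X.
    Y: (m, N) signals, X: (K, N) sparse codes. Returns column-normalized D (m, K)."""
    D = Y @ X.T @ np.linalg.inv(X @ X.T + eps * np.eye(X.shape[0]))
    return D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), eps)

def shrink_rank(D, keep):
    """Simplified rank shrinkage: keep only the 'keep' largest singular values."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    s[keep:] = 0.0
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(1)
Y = rng.normal(size=(20, 200))
X = rng.normal(size=(40, 200)) * (rng.random((40, 200)) < 0.1)   # sparse codes
D = shrink_rank(mod_update(Y, X), keep=18)
print(D.shape)
```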
Effect of Islam-based religious program on spiritual wellbeing in elderly with hypertension.
Moeini, Mahin; Sharifi, Somaye; Kajbaf, Mohamed Bagher
2016-01-01
Lack of spiritual health in patients with hypertension leads to many mental, social, and physical effects. On the other hand, considering the prevalence of hypertension among the elderly, interventions to enhance their spiritual wellbeing are essential. Therefore, the aim of this study was to examine the effect of a religious program based on Islam on spiritual wellbeing in elderly patients with hypertension who were referred to the health centers of Isfahan in 2014. This study was a randomized clinical trial. The participants (52 elderly patients with hypertension) were randomly divided into experimental and control groups. The religious program was implemented for the experimental group in eight sessions in two Isfahan health centers. The spiritual wellbeing (SWB) questionnaire was completed in three steps, namely pretest, posttest, and follow-up (1 month), in the two groups. The chi-square test, independent t-test, and repeated-measures analysis of variance were used for analyzing the data. Before the intervention, there was no significant difference between the mean scores of spiritual wellbeing, the religious dimension, and the existential aspect of spiritual wellbeing of the two groups. However, in the posttest step and follow-up stage, the mean scores of spiritual wellbeing, the religious dimension, and the existential aspect of spiritual wellbeing in the experimental group were significantly higher than in the control group (P < 0.001). The religious program based on Islam promoted the SWB of elderly patients with hypertension; nurses can use such programs to promote the SWB of elderly patients with hypertension.
Ghoneim, Mohamed M; El-Desoky, Hanaa S; Abdel-Galeil, Mohamed M
2011-06-01
Naltrexone HCl (NAL.HCl) has been reduced at the mercury electrode in Britton-Robinson universal buffer of pH values 2-11 with a mechanism involving the quasi-reversible uptake of the first transferring electron followed by a rate-determining protonation step of its C=O double bond at position C-6. Simple, sensitive, selective and reliable linear-sweep and square-wave adsorptive cathodic stripping voltammetry methods have been described for trace quantitation of NAL.HCl in bulk form, commercial formulation and human body fluids without the necessity for sample pretreatment and/or time-consuming extraction steps prior to the analysis. Limits of quantitation of 6.0×10⁻⁹ M and 8.0×10⁻¹⁰ M NAL.HCl in bulk form or commercial formulation and of 9.0×10⁻⁹ M and 1.0×10⁻⁹ M NAL.HCl in spiked human serum samples were achieved by the described linear and square-wave stripping voltammetry methods, respectively. Furthermore, pharmacokinetic parameters of the drug in human plasma samples of healthy volunteers following the administration of an oral single dose of 50 mg NAL.HCl (one Revia(®) tablet) were estimated by means of the described square-wave stripping voltammetry method without interferences from the drug's metabolites and/or endogenous human plasma constituents. The estimated pharmacokinetic parameters were favorably compared with those reported in literature. Copyright © 2011 Elsevier B.V. All rights reserved.
Least Squares Metric, Unidimensional Scaling of Multivariate Linear Models.
ERIC Educational Resources Information Center
Poole, Keith T.
1990-01-01
A general approach to least-squares unidimensional scaling is presented. Ordering information contained in the parameters is used to transform the standard squared error loss function into a discrete rather than continuous form. Monte Carlo tests with 38,094 ratings of 261 senators, and 1,258 representatives demonstrate the procedure's…
Analysis of the Latin Square Task with Linear Logistic Test Models
ERIC Educational Resources Information Center
Zeuch, Nina; Holling, Heinz; Kuhn, Jorg-Tobias
2011-01-01
The Latin Square Task (LST) was developed by Birney, Halford, and Andrews [Birney, D. P., Halford, G. S., & Andrews, G. (2006). Measuring the influence of cognitive complexity on relational reasoning: The development of the Latin Square Task. Educational and Psychological Measurement, 66, 146-171.] and represents a non-domain specific,…
A Comparison of Lord's Chi Square and Raju's Area Measures in Detection of DIF.
ERIC Educational Resources Information Center
Cohen, Allan S.; Kim, Seock-Ho
1993-01-01
The effectiveness of two statistical tests of the area between item response functions (exact signed area and exact unsigned area) estimated in different samples, a measure of differential item functioning (DIF), was compared with Lord's chi square. Lord's chi square was found the most effective in determining DIF. (SLD)
Gillen, Alex M; Munsterman, Amelia S; Hanson, R Reid
2016-11-01
To investigate the strength, size, and holding capacity of the self-locking forwarder knot compared to surgeon's and square knots using large gauge suture. In vitro mechanical study. Knotted suture. Forwarder, surgeon's, and square knots were tested on a universal testing machine under linear tension using 2 and 3 USP polyglactin 910 and 2 USP polydioxanone. Knot holding capacity (KHC) and mode of failure were recorded and relative knot security (RKS) was calculated as a percentage of KHC. Knot volume and weight were assessed by digital micrometer and balance, respectively. ANOVA and post hoc testing were used to compare strength between number of throws, suture, suture size, and knot type. P<.05 was considered significant. Forwarder knots had a higher KHC and RKS than surgeon's or square knots for all suture types and numbers of throws. No forwarder knots unraveled, but a proportion of square and surgeon's knots with <6 throws did unravel. Forwarder knots had a smaller volume and weight than surgeon's and square knots with an equal number of throws. The forwarder knot of 4 throws using 3 USP polyglactin 910 had the highest KHC and RKS and the smallest size and weight. Forwarder knots may be an alternative for commencing continuous patterns in large gauge suture, without sacrificing knot integrity, but further in vivo and ex vivo testing is required to assess the effects of this sliding knot on tissue perfusion before clinical application. © Copyright 2016 by The American College of Veterinary Surgeons.
NASA Astrophysics Data System (ADS)
Chen, Lin-Jie; Ma, Chang-Feng
2010-01-01
This paper proposes a lattice Boltzmann model with an amending function for one-dimensional nonlinear partial differential equations (NPDEs) of the form u_t + αu u_x + βu^n u_x + γu_{xx} + δu_{xxx} + ζu_{xxxx} = 0. This model is different from existing models because it lets the time step be equivalent to the square of the space step and derives higher accuracy and nonlinear terms in NPDEs. With the Chapman-Enskog expansion, the governing evolution equation is recovered correctly from the continuous Boltzmann equation. The numerical results agree well with the analytical solutions.
Fatone, Stefania; Caldwell, Ryan
2017-06-01
Current transfemoral prosthetic sockets are problematic as they restrict function, lack comfort, and cause residual limb problems. Development of a subischial socket with lower proximal trim lines is an appealing way to address this problem and may contribute to improving quality of life of persons with transfemoral amputation. The purpose of this study was to illustrate the use of a new subischial socket in two subjects. Case series. Two unilateral transfemoral prosthesis users participated in preliminary socket evaluations comparing functional performance of the new subischial socket to ischial containment sockets. Testing included gait analysis, socket comfort score, and performance-based clinical outcome measures (Rapid-Sit-To-Stand, Four-Square-Step-Test, and Agility T-Test). For both subjects, comfort was better in the subischial socket, while gait and clinical outcomes were generally comparable between sockets. While these evaluations are promising regarding the ability to function in this new socket design, more definitive evaluation is needed. Clinical relevance: Using gait analysis, socket comfort score, and performance-based outcome measures, use of the Northwestern University Flexible Subischial Vacuum Socket was evaluated in two transfemoral prosthesis users. Socket comfort improved for both subjects with comparable function compared to ischial containment sockets.
Węgrzynowska-Teodorczyk, Kinga; Mozdzanowska, Dagmara; Josiak, Krystian; Siennicka, Agnieszka; Nowakowska, Katarzyna; Banasiak, Waldemar; Jankowska, Ewa A; Ponikowski, Piotr; Woźniewski, Marek
2016-08-01
A consequence of exercise intolerance for patients with heart failure is difficulty climbing stairs. The two-minute step test is a test that reflects the activity of climbing stairs. The aim of this study was to evaluate the applicability of the two-minute step test in an assessment of exercise tolerance in patients with heart failure and the association between the six-minute walk test and the two-minute step test. Participants in this study were 168 men with systolic heart failure (New York Heart Association (NYHA) class I-IV). In the study we used the two-minute step test, the six-minute walk test, the cardiopulmonary exercise test and an isometric dynamometer armchair. Patients who performed more steps during the two-minute step test covered a longer distance during the six-minute walk test (r = 0.45). Quadriceps strength was correlated with the two-minute step test and the six-minute walk test (r = 0.61 and r = 0.48). A greater number of steps performed during the two-minute step test was associated with higher values of peak oxygen consumption (r = 0.33), ventilatory response to exercise slope (r = -0.17) and a longer time of exercise during the cardiopulmonary exercise test (r = 0.34). Fatigue and leg fatigue were greater after the two-minute step test than the six-minute walk test, whereas dyspnoea and blood pressure responses were similar. The two-minute step test is well tolerated by patients with heart failure and may thus be considered as an alternative to the six-minute walk test. © The European Society of Cardiology 2016.
Christopher, David; Adams, Wallace P; Lee, Douglas S; Morgan, Beth; Pan, Ziqing; Singh, Gur Jai Pal; Tsong, Yi; Lyapustina, Svetlana
2007-01-19
The purpose of this article is to present the thought process, methods, and interim results of a PQRI Working Group, which was charged with evaluating the chi-square ratio test as a potential method for determining in vitro equivalence of aerodynamic particle size distribution (APSD) profiles obtained from cascade impactor measurements. Because this test was designed with the intention of being used as a tool in regulatory review of drug applications, the capability of the test to detect differences in APSD profiles correctly and consistently was evaluated in a systematic way across a designed space of possible profiles. To establish a "base line," properties of the test in the simplest case of pairs of identical profiles were studied. Next, the test's performance was studied with pairs of profiles, where some difference was simulated in a systematic way on a single deposition site using realistic product profiles. The results obtained in these studies, which are presented in detail here, suggest that the chi-square ratio test in itself is not sufficient to determine equivalence of particle size distributions. This article, therefore, introduces the proposal to combine the chi-square ratio test with a test for impactor-sized mass based on Population Bioequivalence and describes methods for evaluating discrimination capabilities of the combined test. The approaches and results described in this article elucidate some of the capabilities and limitations of the original chi-square ratio test and provide rationale for development of additional tests capable of comparing APSD profiles of pharmaceutical aerosols.
Relationship between Bruxism and Malocclusion among Preschool Children in Isfahan
Ghafournia, Maryam; Hajenourozali Tehrani, Maryam
2012-01-01
Background and aims Bruxism is defined as a habitual nonfunctional forceful contact between occlusal tooth surfaces. In younger children bruxism may be a consequence of the immaturity of the masticatory neuromuscular system. The aim of this study was to assess the prevalence of bruxism and investigate the relationship between occlusal factors and bruxism among preschool children. Materials and methods In this cross-sectional survey, 400 3-6-year-old children were selected randomly from different preschools in Isfahan, Iran. The subjects were divided into two groups of bruxers and non-bruxers as determined by clinical examination and their parents' reports. The examiner recorded the primary canine (Class I, Class II, and Class III) and molar (mesial step, distal step, flush terminal plane) relationships, the existence of anterior and posterior crossbite, and open and deep bite. Also, rotated teeth, food impaction, sharp tooth edges, high restorations, extensive tooth caries, and painful teeth (categorized as irritating tooth conditions) were evaluated. The relationship between bruxism and occlusal factors and irritating tooth conditions was evaluated with the chi-square test. Results Bruxism was seen in 12.75% of the subjects. Statistically significant relationships existed between bruxism and some occlusal factors, such as flush terminal plane (P = 0.023) and mesial step (P = 0.001), and also between food impaction, extensive tooth caries, tooth pain, sharp tooth edges and bruxism. Conclusion The results showed a significant relationship of bruxism with primary molar relationships and irritating tooth conditions among preschool children. PMID:23277860
Gold Nanoparticle Labels and Heterogeneous Immunoassays: The Case for the Inverted Substrate.
Crawford, Alexis C; Young, Colin C; Porter, Marc D
2018-06-15
This paper examines how the difference in the spatial orientation of the capture substrate influences the analytical sensitivity and limits of detection for immunoassays that use gold nanoparticle labels (AuNPs) and rely on diffusion in quiet solution in the antigen capture and labeling steps. Ideally, the accumulation of both reactants should follow a dependence governed by the rate at which diffusion delivers reactants to the capture surface. In other words, the accumulation of reactants should increase with the square root of the incubation time, i.e., t^(1/2). The work herein shows, however, that this expectation is only obeyed when the capture substrate is oriented to direct the gravity-induced sedimentation of the AuNP labels away from the substrate. Using an assay for human IgG, the results show that circumventing the sedimentation of the gold nanoparticle labels by substrate inversion enables the dependence of the labeling step on diffusion, reduces nonspecific label adsorption, and improves the estimated detection limit by ~30×. High-density maps of the signal across the two types of substrates also demonstrate that inversion in the labeling step results in a more uniform distribution of AuNP labels across the surface, which translates to a greater measurement reproducibility. These results, which are supported by model simulations via the Mason-Weaver sedimentation-diffusion equation, and their potential implications when using other nanoparticle labels and related materials in diagnostic tests and other applications, are briefly discussed.
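The competition the paper exploits, diffusive delivery growing as t^(1/2) versus gravity-driven settling of the dense gold labels, can be estimated from textbook expressions (Stokes-Einstein diffusion and Stokes sedimentation). The numbers below assume spherical 40 nm AuNPs in water at room temperature and are rough illustrations, not values from the paper.

```python
import numpy as np

kB, T, g = 1.380649e-23, 298.0, 9.81
eta = 1.0e-3            # water viscosity [Pa s]
rho_au, rho_w = 19300.0, 1000.0
r = 20e-9               # particle radius [m] (assumed 40 nm diameter label)

D = kB * T / (6 * np.pi * eta * r)                    # Stokes-Einstein diffusion coefficient
v_sed = 2 * r**2 * (rho_au - rho_w) * g / (9 * eta)   # Stokes sedimentation velocity

t = 3600.0              # 1 h incubation
diffusion_length = np.sqrt(2 * D * t)
sedimentation_drop = v_sed * t
print(f"D = {D:.2e} m^2/s, diffusion length ~ {diffusion_length*1e6:.0f} um in 1 h")
print(f"v_sed = {v_sed*1e9:.1f} nm/s, sedimentation ~ {sedimentation_drop*1e6:.0f} um in 1 h")
# On an upward-facing substrate sedimentation adds to the diffusive flux of labels;
# inverting the substrate directs it away, leaving the ~t^(1/2) diffusive behaviour.
```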
Lewin, L O; Papp, K K; Hodder, S L; Workings, M G; Wolfe, L; Glover, P; Headrick, L A
1999-01-01
In 1994, Case Western Reserve University School of Medicine established a Primary Care Track (PCT) with an integrated curriculum as part of The Robert Wood Johnson Foundation's Generalist Physician Initiative. This study compared the performance of the first cohort of students to participate in the PCT third year with that of their classmates and determined student attitudes toward their experiences. The performances of 24 PCT and 81 traditional students on the Medical College Admission Test (MCAT) and the United States Medical Licensing Examination (USMLE) Steps 1 and 2 were compared using analysis of variance. Grades on the six core clerkships were compared using chi-square analysis. Performances of the PCT students and a subset of traditional students on the generalist school's objective structured clinical examination (OSCE) were compared using multivariate analysis. The students reported their perceptions on a questionnaire. The traditional students had significantly higher scores on the physical science section of the MCAT and on USMLE Step 1, but at the end of year three, their USMLE Step 2 scores did not differ. Grade distributions in the core clerkships did not differ, except in psychiatry, where the PCT students received honors significantly more often. The PCT students had a lower mean score on the internal medicine National Board of Medical Examiners shelf exam but performed better on the generalist OSCE exam. A majority of PCT students reported that they would choose the integrated third year again and recommend it to others.
NASA Astrophysics Data System (ADS)
Dinç, Erdal; Ertekin, Zehra Ceren; Büker, Eda
2017-09-01
In this study, excitation-emission matrix datasets, which have strongly overlapping bands, were processed using four different chemometric calibration algorithms, consisting of parallel factor analysis, Tucker3, three-way partial least squares and unfolded partial least squares, for the simultaneous quantitative estimation of valsartan and amlodipine besylate in tablets. In the analyses, no preliminary separation step was used before the application of the parallel factor analysis, Tucker3, three-way partial least squares and unfolded partial least squares approaches for the analysis of the related drug substances in samples. A three-way excitation-emission matrix data array was obtained by concatenating excitation-emission matrices of the calibration set, validation set, and commercial tablet samples. The excitation-emission matrix data array was used to obtain parallel factor analysis, Tucker3, three-way partial least squares and unfolded partial least squares calibrations and to predict the amounts of valsartan and amlodipine besylate in samples. For all the methods, calibration and prediction of valsartan and amlodipine besylate were performed in the working concentration range of 0.25-4.50 μg/mL. The validity and performance of all the proposed methods were checked using validation parameters. From the analysis results, it was concluded that the described two-way and three-way algorithmic methods are very useful for the simultaneous quantitative resolution and routine analysis of the related drug substances in marketed samples.
Gauss, Julie W.; Mabiso, Athur; Williams, Karen Patricia
2013-01-01
BACKGROUND Understanding women’s psychological barriers to getting Papanicolaou (Pap) screening has potential to impact cancer disparities. This study examined pain perceptions of Pap testing among Black, Latina and Arab women and goal setting to receive Pap tests. METHODS Data on 420 women, a longitudinal study, were analyzed using Chi-square tests of differences and generalized linear mixed models. RESULTS At baseline, 30.3% of Black and 35.5% of Latina women perceived Pap tests to be very painful compared to 24.2% of Arab women. Perceptions of pain influenced goal settings, such as scheduling a first ever Pap test (Odds ratio = 0.58, 95% Confidence interval: 0.14-0.94). Immediately following the intervention, women’s perception that Pap tests are very painful significantly declined (P-value<0.001) with Arab and Black women registering the greatest improvements (20.3 and 17.3 percent reduction, respectively compared to 8.4 percent for Latina). CONCLUSIONS Having the perception that the Pap test is very painful significantly reduces the likelihood of Black, Latina and Arab women setting the goal to schedule their first ever Pap test. Latina women are the least likely to improve their perception that the Pap test is very painful, though national statistics show they have the highest rates of morbidity and mortality from cervical cancer. These findings are instructive for designing tailored interventions to break down psychological barriers to Pap screening among underserved women. PMID:23288606
Ionization tube simmer current circuit
Steinkraus, R.F. Jr.
1994-12-13
A highly efficient flash lamp simmer current circuit utilizes a fifty percent duty cycle square wave pulse generator to pass a current over a current limiting inductor to a full wave rectifier. The DC output of the rectifier is then passed over a voltage smoothing capacitor through a reverse current blocking diode to a flash lamp tube to sustain ionization in the tube between discharges via a small simmer current. An alternate embodiment of the circuit combines the pulse generator and inductor in the form of an FET off line square wave generator with an impedance limited step up output transformer which is then applied to the full wave rectifier as before to yield a similar simmer current. 6 figures.
Discrete-time state estimation for stochastic polynomial systems over polynomial observations
NASA Astrophysics Data System (ADS)
Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.
2018-07-01
This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noises. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show the effectiveness of the proposed filter compared to the extended Kalman filter.
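As a point of reference for the comparison mentioned above, a baseline extended Kalman filter for a scalar polynomial system with a polynomial measurement is sketched below; the particular third-degree dynamics and measurement polynomial are illustrative assumptions, not the system treated in the paper.

```python
import numpy as np

# assumed illustrative model: x_{k+1} = x + dt*(a1*x + a3*x**3) + w,  y = x**2 + x**3 + v
dt, a1, a3 = 0.01, -0.5, -1.0
Q, R = 1e-4, 1e-2

f  = lambda x: x + dt * (a1 * x + a3 * x**3)
fx = lambda x: 1 + dt * (a1 + 3 * a3 * x**2)      # df/dx
h  = lambda x: x**2 + x**3
hx = lambda x: 2 * x + 3 * x**2                    # dh/dx

def ekf_step(x_est, P, y):
    # time update (prediction)
    x_pred = f(x_est)
    P_pred = fx(x_est)**2 * P + Q
    # measurement update (correction)
    S = hx(x_pred)**2 * P_pred + R
    K = P_pred * hx(x_pred) / S
    x_new = x_pred + K * (y - h(x_pred))
    P_new = (1 - K * hx(x_pred)) * P_pred
    return x_new, P_new

rng = np.random.default_rng(2)
x_true, x_est, P = 1.0, 0.5, 1.0
for _ in range(200):
    x_true = f(x_true) + rng.normal(scale=np.sqrt(Q))
    y = h(x_true) + rng.normal(scale=np.sqrt(R))
    x_est, P = ekf_step(x_est, P, y)
print(x_true, x_est)
```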
Making High-Pass Filters For Submillimeter Waves
NASA Technical Reports Server (NTRS)
Siegel, Peter H.; Lichtenberger, John A.
1991-01-01
Micromachining-and-electroforming process makes rigid metal meshes with cells ranging in size from 0.002 in. to 0.05 in. square. Series of steps involving cutting, grinding, vapor deposition, and electroforming creates self-supporting, electrically thick mesh. Width of holes typically 1.2 times cutoff wavelength of dominant waveguide mode in hole. To obtain sharp frequency-cutoff characteristic, thickness of mesh made greater than one-half of guide wavelength of mode in hole. Meshes used as high-pass filters (dichroic plates) for submillimeter electromagnetic waves. Process not limited to square silicon wafers. Round wafers also used, with slightly more complication in grinding periphery. Grid in any pattern produced in electroforming mandrel. Any platable metal or alloy used for mesh.
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Sonnad, Vijay
1991-01-01
A p-version of the least squares finite element method, based on the velocity-pressure-vorticity formulation, is developed for solving steady-state incompressible viscous flow problems. The resulting system of symmetric and positive definite linear equations can be solved satisfactorily with the conjugate gradient method. In conjunction with the use of rapid operator application, which avoids the formation of either element or global matrices, it is possible to achieve a highly compact and efficient solution scheme for the incompressible Navier-Stokes equations. Numerical results are presented for two-dimensional flow over a backward-facing step. The effectiveness of simple outflow boundary conditions is also demonstrated.
Phase holograms in polymethyl methacrylate
NASA Technical Reports Server (NTRS)
Maker, P. D.; Muller, R. E.
1992-01-01
A procedure is described for the fabrication of complex computer-generated phase holograms in polymethyl methacrylate (PMMA) by means of partial-exposure e-beam lithography and subsequent carefully controlled partial development. Following the development, the pattern appears (rendered in relief) in the PMMA, which then acts as the phase-delay medium. The devices fabricated were designed with 16 equal phase steps per retardation cycle, were up to 3 mm square, and consisted of up to 10 million 0.3-2.0-micron-square pixels. Data files were up to 60 Mb long, and the exposure times ranged up to several hours. A Fresnel phase lens was fabricated with a diffraction-limited optical performance of 83-percent efficiency.
Alignment of Ge nanoislands on Si(111) by Ga-induced substrate self-patterning.
Schmidt, Th; Flege, J I; Gangopadhyay, S; Clausen, T; Locatelli, A; Heun, S; Falta, J
2007-02-09
A novel mechanism is described which enables the selective formation of three-dimensional Ge islands. Submonolayer adsorption of Ga on Si(111) at high temperature leads to a self-organized two-dimensional pattern formation by separation of the 7 × 7 substrate and Ga/Si(111)-(√3 × √3)R30° domains. The latter evolve at step edges and domain boundaries of the initial substrate reconstruction. Subsequent Ge deposition results in the growth of 3D islands which are aligned at the boundaries between bare and Ga-covered domains. This result is explained in terms of preferential nucleation conditions due to a modulation of the surface chemical potential.
NASA Technical Reports Server (NTRS)
Grimes, C. A.; Lumpp, J. K.
2000-01-01
Laser-ablated arrays of triangular and square clusters, composed of 23-micrometer-diameter circular holes, are defined upon 100 nm thick Ni81Fe19 films and used to control the rf permeability spectra. Cluster-to-cluster spacing is varied from 200 to 600 micrometers. For each geometry it is found that the loss peak frequency and permeability magnitude shift lower, in a step-wise fashion, at a cluster-to-cluster spacing between 275 and 300 micrometers. The nonlinear shift in the behavior of the permeability spectra correlates with a dramatic increase in domain wall density. © 2000 American Institute of Physics.
An Efficient Estimator for Moving Target Localization Using Multi-Station Dual-Frequency Radars.
Huang, Jiyan; Zhang, Ying; Luo, Shan
2017-12-15
Localization of a moving target in a dual-frequency radar system has now gained considerable attention. The noncoherent localization approach based on a least squares (LS) estimator has been addressed in the literature. Compared with the LS method, a novel localization method based on a two-step weighted least squares estimator is proposed in this paper to increase positioning accuracy for a multi-station dual-frequency radar system. The effects of signal-to-noise ratio and the number of samples on the performance of range estimation are also analyzed. Furthermore, both the theoretical variance and the Cramer-Rao lower bound (CRLB) are derived. The simulation results verified the proposed method.
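The core building block of such estimators is the closed-form weighted least squares solution, sketched generically below; the paper's two-step refinement (re-deriving the weighting from the first-step estimate) and its specific dual-frequency measurement model are not reproduced.

```python
import numpy as np

def wls(A, b, W):
    """Weighted least squares: argmin (A x - b)^T W (A x - b)."""
    AtW = A.T @ W
    return np.linalg.solve(AtW @ A, AtW @ b)

# toy example: fit position-like parameters from noisy linear(ized) equations
rng = np.random.default_rng(3)
A = rng.normal(size=(8, 3))
x_true = np.array([100.0, -50.0, 2.0])
sigma = np.array([1, 1, 2, 2, 5, 5, 10, 10], dtype=float)   # unequal measurement noise
b = A @ x_true + sigma * rng.normal(size=8)
W = np.diag(1.0 / sigma**2)             # inverse-covariance weighting
print(wls(A, b, W))
# a two-step estimator would rebuild W (and possibly A, b) from this first
# estimate and solve again, tightening the solution toward the CRLB
```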
ERIC Educational Resources Information Center
Osler, James Edward
2014-01-01
This monograph provides an epistemological rationale for the design of a novel post hoc statistical measure called "Tri-Center Analysis". This new statistic is designed to analyze the post hoc outcomes of the Tri-Squared Test. In Tri-Center Analysis, trichotomous parametric inferential statistical measures are calculated from…
How do cattle respond to sloped floors? An investigation using behavior and electromyograms.
Rajapaksha, E; Tucker, C B
2014-05-01
On dairy farms, flooring is often sloped to facilitate drainage. Sloped floors have been identified as a possible risk factor for lameness, but relatively little is known about how this flooring feature affects dairy cattle. Ours is the first study to evaluate the short-term effects of floor slope on skeletal muscle activity, restless behavior (measured by number of steps), and latency to lie down after 90 min of standing. Sixteen Holstein cows were exposed to floors with a 0, 3, 6, or 9% slope in a crossover design, with a minimum of 45 h between each testing session. Electromyograms were used to evaluate the activity of middle gluteal and biceps femoris muscles. Muscle activity was evaluated in 2 contexts: (1) static muscle contractions when cows continuously transferred weight to each hind leg, before and after 90 min of standing; and (2) dynamic contractions that occurred during 90 min of treatment exposure. Median power frequency and median amplitude of both static and dynamic muscle electrical signals were calculated. Total muscle activity was calculated using the root mean square of the signals. Restless behavior, the number of steps per treatment, steps and kicks in the milking parlor, and the latency to lie down after the test sessions were also measured. It was predicted that restless behavior, muscle fatigue (as measured by median power frequency and median amplitude), total muscle activity, and latency to lie down after testing would increase with floor slope. However, no treatment differences were found. Median power frequency was significantly greater for the middle gluteal muscle [35 ± 4 Hz (mean and SE)] compared with the biceps femoris muscle (24 ± 3 Hz), indicating that the contractive properties of these muscles differ. The number of steps per minute and total muscle activity increased significantly over 90 min of standing, irrespective of floor slope. Although restless behavior and muscle function did not change with slope in our study, this work demonstrates that electromyograms can be used to measure skeletal leg muscle activity in cattle. This technology, along with restless behavior, could be useful in assessing cow comfort in other situations, such as prolonged standing. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
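The two signal summaries used above, total activity as the root mean square of the electromyogram and fatigue-related shifts in median power frequency, can be computed as in the sketch below; the sampling rate, windowing, and synthetic test signal are assumptions rather than details of the study's methods.

```python
import numpy as np
from scipy.signal import welch

def emg_features(x, fs=1000.0):
    """Return (RMS amplitude, median power frequency in Hz) of an EMG segment."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                                   # remove DC offset
    rms = np.sqrt(np.mean(x**2))
    f, psd = welch(x, fs=fs, nperseg=min(1024, len(x)))
    cum = np.cumsum(psd)
    mpf = f[np.searchsorted(cum, 0.5 * cum[-1])]       # frequency splitting the power in half
    return rms, mpf

# synthetic burst: 60 Hz-dominated contraction plus broadband noise
rng = np.random.default_rng(4)
t = np.arange(0, 2.0, 1 / 1000.0)
emg = 0.3 * np.sin(2 * np.pi * 60 * t) + 0.1 * rng.normal(size=t.size)
print(emg_features(emg))
```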
NASA Astrophysics Data System (ADS)
Fewtrell, Timothy; Bates, Paul; Horritt, Matthew
2010-05-01
This abstract describes the development of a new set of equations derived from 1D shallow water theory for use in 2D storage cell inundation models. The new equation set is designed to be solved explicitly at very low computational cost, and is here tested against a suite of four analytical and numerical test cases of increasing complexity. In each case the predicted water depths compare favourably to analytical solutions or to benchmark results from the optimally stable diffusive storage cell code of Hunter et al. (2005). For the most complex test involving the fine spatial resolution simulation of flow in a topographically complex urban area the Root Mean Squared Difference between the new formulation and the model of Hunter et al. is ~1 cm. However, unlike diffusive storage cell codes where the stable time step scales with (1/Δx)², the new equation set developed here represents shallow water wave propagation and so the stability is controlled by the Courant-Friedrichs-Lewy condition such that the stable time step instead scales with 1/Δx. This allows use of a stable time step that is 1-3 orders of magnitude greater for typical cell sizes than that possible with diffusive storage cell models and results in commensurate reductions in model run times. The maximum speed up achieved over a diffusive storage cell model was 1120× in these tests, although the actual value seen will depend on model resolution and water depth and surface gradient. Solutions using the new equation set are shown to be relatively grid-independent for the conditions considered given the numerical diffusion likely at coarse model resolution. In addition, the inertial formulation appears to have an intuitively correct sensitivity to friction; however, small instabilities and increased errors on predicted depth were noted when Manning's n = 0.01. These small instabilities are likely to be a result of the numerical scheme employed, whereby friction is acting to stabilise the solution, although this scheme is still widely used in practice. The new equations are likely to find widespread application in many types of flood inundation modelling and should provide a useful additional tool, alongside more established model formulations, for a variety of flood risk management studies.
Sustainability of the whole-community project '10,000 Steps': a longitudinal study.
Van Acker, Ragnar; De Bourdeaudhuij, Ilse; De Cocker, Katrien; Klesges, Lisa M; Willem, Annick; Cardon, Greet
2012-03-05
In the dissemination and implementation literature, there is a dearth of information on the sustainability of community-wide physical activity (PA) programs in general and of the '10,000 Steps' project in particular. This paper reports a longitudinal evaluation of organizational and individual sustainability indicators of '10,000 Steps'. Among project adopters, department heads of 24 public services were surveyed 1.5 years after initially reported project implementation to assess continuation, institutionalization, sustained implementation of intervention components, and adaptations. Barriers and facilitators of project sustainability were explored. Citizens (n = 483) living near the adopting organizations were interviewed to measure maintenance of PA differences between citizens aware and unaware of '10,000 Steps'. Independent-samples t, Mann-Whitney U, and chi-square tests were used to compare organizations for representativeness and individual PA differences. Of all organizations, 50% continued '10,000 Steps' (mostly in cycles) and continuation was independent of organizational characteristics. The level of intervention institutionalization was low to moderate on evaluations of routinization and moderate for project saturation. The global implementation score (58%) remained stable and three of nine project components were continued by fewer than half of organizations (posters, street signs and variants, personalized contact). Considerable independent adaptations of the project were reported (e.g. campaign image). Citizens aware of '10,000 Steps' remained more active during leisure time than those unaware (227 ± 235 and 176 ± 198 min/week, respectively; t = -2.6; p < .05), and reported more household-related (464 ± 397 and 389 ± 346 min/week, respectively; t = -2.2; p < .05) and moderate-intensity PA (664 ± 424 and 586 ± 408 min/week, respectively; t = -2.0; p < .05). Facilitators of project sustainability included an organizational leader supporting the project, availability of funding or external support, and ready-for-use materials with ample room for adaptation. Barriers included insufficient synchronization between regional and community policy levels and preference for other PA projects. '10,000 Steps' could remain sustainable but design, organizational, and contextual barriers need consideration. Sustainability of '10,000 Steps' in organizations can occur in cycles rather than in ongoing projects. Future research should compare the sustainability of other whole-community PA projects with '10,000 Steps' to contrast the sustainability of alternative models of whole-community PA projects. This would allow optimization of project elements and methods to support decisions of choice for practitioners.
Roughness characterization of the galling of metals
NASA Astrophysics Data System (ADS)
Hubert, C.; Marteau, J.; Deltombe, R.; Chen, Y. M.; Bigerelle, M.
2014-09-01
Several kinds of tests exist to characterize the galling of metals, such as that specified in ASTM Standard G98. While the testing procedure is accurate and robust, the analysis of the specimen's surfaces (area = 1.2 cm²) for the determination of the critical pressure of galling remains subject to operator judgment. Based on analyses of the surface topography, we propose a methodology to express the probability of galling according to the macroscopic pressure load. After performing galling tests on 304L stainless steel, a two-step segmentation of the Sq parameter (root mean square of the surface amplitude) computed from local roughness maps (100 μm × 100 μm) enables us to distinguish two tribological processes. The first step represents the abrasive wear (erosion) and the second one the adhesive wear (galling). The total areas of both regions are highly relevant to quantify galling and erosion processes. Then, a one-parameter phenomenological model is proposed to objectively determine the evolution of the non-galled relative area A_e versus the pressure load P, with high accuracy (A_e = 100/(1 + aP²), with a = (0.54 ± 0.07) × 10^-3 MPa^-2 and R² = 0.98). From this model, the critical pressure of galling is found to be equal to 43 MPa. The S5V roughness parameter (the five deepest valleys in the galled region's surface) is the most relevant roughness parameter for the quantification of damage in the 'galling region'. The significant valley depths increase from 10 μm to 250 μm when the pressure increases from 11 to 350 MPa, according to a power law (S5V = 4.2 P^0.75, with R² = 0.93).
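To make the one-parameter model concrete, the following sketch fits A_e = 100/(1 + aP²) to hypothetical (pressure, non-galled area) data with scipy's curve_fit. The observations are invented to be consistent with the reported value of a, and defining the critical galling pressure as the load at which A_e falls to 50% is our assumption; it happens to reproduce a value close to the reported 43 MPa.

```python
import numpy as np
from scipy.optimize import curve_fit

def non_galled_area(P, a):
    """One-parameter phenomenological model A_e = 100 / (1 + a*P**2),
    with A_e in % and P the macroscopic pressure load in MPa."""
    return 100.0 / (1.0 + a * P ** 2)

# Hypothetical (pressure, non-galled area) observations, MPa and %
P_obs = np.array([11.0, 25.0, 50.0, 100.0, 200.0, 350.0])
A_obs = np.array([94.0, 75.0, 42.0, 16.0, 4.5, 1.5])

(a,), _ = curve_fit(non_galled_area, P_obs, A_obs, p0=[1e-3])

# Assumed definition: critical pressure where the non-galled area drops to 50 %
# a*P_c**2 = 1  =>  P_c = 1/sqrt(a)
P_c = 1.0 / np.sqrt(a)
print(f"a = {a:.2e} MPa^-2, critical galling pressure ~ {P_c:.0f} MPa")
```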
21 CFR 177.1315 - Ethylene-1, 4-cyclohexylene dimethylene terephthalate copolymers.
Code of Federal Regulations, 2013 CFR
2013-04-01
... terephthaloyl moieties/square centimeter of food-contact surface) Test for orientability Conditions of use 1... solution expressed in grams per 100 milliliters (1) 0.23 microgram per square centimeter (1.5 micrograms per square inch) of food-contact surface when extracted with water added at 82.2 °C (180 °F) and...
21 CFR 177.1315 - Ethylene-1, 4-cyclohexylene dimethylene terephthalate copolymers.
Code of Federal Regulations, 2011 CFR
2011-04-01
... terephthaloyl moieties/square centimeter of food-contact surface) Test for orientability Conditions of use 1... solution expressed in grams per 100 milliliters (1) 0.23 microgram per square centimeter (1.5 micrograms per square inch) of food-contact surface when extracted with water added at 82.2 °C (180 °F) and...
21 CFR 177.1315 - Ethylene-1, 4-cyclohexylene dimethylene terephthalate copolymers.
Code of Federal Regulations, 2012 CFR
2012-04-01
... terephthaloyl moieties/square centimeter of food-contact surface) Test for orientability Conditions of use 1... solution expressed in grams per 100 milliliters (1) 0.23 microgram per square centimeter (1.5 micrograms per square inch) of food-contact surface when extracted with water added at 82.2 °C (180 °F) and...
Additivity and maximum likelihood estimation of nonlinear component biomass models
David L.R. Affleck
2015-01-01
Since Parresol's (2001) seminal paper on the subject, it has become common practice to develop nonlinear tree biomass equations so as to ensure compatibility among total and component predictions and to fit equations jointly using multi-step least squares (MSLS) methods. In particular, many researchers have specified total tree biomass models by aggregating the...
29 CFR 1917.121 - Spiral stairways.
Code of Federal Regulations, 2010 CFR
2010-07-01
... minimum dimensions of Figure F-1; EC21OC91.020 Spiral Stairway—Minimum Dimensions A (half-tread width) B... 26.67 cm) in height; (3) Minimum loading capability shall be 100 pounds per square foot (4.79 kN/m²), and... least 6 feet, 6 inches (1.98 m) above the top step. (c) Maintenance. Spiral stairways shall be...
29 CFR 1917.121 - Spiral stairways.
Code of Federal Regulations, 2011 CFR
2011-07-01
... minimum dimensions of Figure F-1; EC21OC91.020 Spiral Stairway—Minimum Dimensions A (half-tread width) B... 26.67 cm) in height; (3) Minimum loading capability shall be 100 pounds per square foot (4.79 kN/m²), and... least 6 feet, 6 inches (1.98 m) above the top step. (c) Maintenance. Spiral stairways shall be...
Sen. Barrasso, John [R-WY
2014-05-15
Senate - 06/04/2014 Resolution agreed to in Senate without amendment and with a preamble by Unanimous Consent.
A ricin forensic profiling approach based on a complex set of biomarkers.
Fredriksson, Sten-Åke; Wunschel, David S; Lindström, Susanne Wiklund; Nilsson, Calle; Wahl, Karen; Åstot, Crister
2018-08-15
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4) ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected using a range of analytical methods, and robust orthogonal partial least squares discriminant analysis (OPLS-DA) models were constructed based on the calibration set. Using a decision tree and two OPLS-DA models, the sample preparation methods of the test set samples were determined. The model statistics of the two models were good, and a 100% rate of correct predictions on the test set was achieved. Copyright © 2018 Elsevier B.V. All rights reserved.
Quality gap in primary health care services in Isfahan: women's perspective
Sharifirad, Gholam R.; Shamsi, Mohsen; Pirzadeh, Asiyeh; Farzanegan, Parvin D.
2012-01-01
Background: The quality gap is the gap between clients' understanding and their expectations. The first step in removing this gap is to recognize clients' understanding and expectations of the services. This study aimed to determine women's viewpoint of the quality gap in primary health care centers of Isfahan. Materials and Methods: This cross-sectional study was conducted on women who came to primary health care centers in Isfahan city. The sample size was 1280 people. A service quality instrument was used to collect data covering the tangible, confidence, responsiveness, assurance and sympathy dimensions of service provision. Data were analyzed by t test and chi-square test. Results: The results showed a quality gap in all 5 dimensions. The smallest mean quality gap was seen in assurance (-11.08) and the largest in the tangible dimension (-14.41). The differences in women's viewpoints were significant in all 5 dimensions (P < 0.05). Conclusion: A negative difference means clients' expectations are much higher than their perception of the current situation, so there is considerable room to improve services and satisfy clients. PMID:23555148
Effects of specific surface area of metallic nickel particles on carbon deposition kinetics
NASA Astrophysics Data System (ADS)
Chen, Zhi-yuan; Bian, Liu-zhen; Yu, Zi-you; Wang, Li-jun; Li, Fu-shen; Chou, Kuo-Chih
2018-02-01
Carbon deposition on nickel powders in methane involves three stages in different reaction temperature ranges. Temperature-programmed oxidation tests and Raman spectra indicated the formation of complex and ordered carbon structures at high deposition temperatures. The I(D)/I(G) values of the deposited carbon reached 1.86, 1.30, and 1.22 in the first, second, and third stages, respectively. The structure of the carbon in the second stage was similar to that in the third stage. Carbon deposited in the first stage rarely contained homogeneous pyrolytic deposit layers. A kinetic model was developed to analyze the carbon deposition behavior in the first stage, whose rate-determining step is proposed to be the interfacial reaction. Based on the investigation of carbon deposition kinetics on nickel powders from different sources, the carbon deposition rate is suggested to have a linear relation with the square of the specific surface area of the nickel particles.
Updating finite element dynamic models using an element-by-element sensitivity methodology
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Hemez, Francois M.
1993-01-01
A sensitivity-based methodology for improving the finite element model of a given structure using test modal data and a few sensors is presented. The proposed method searches for both the location and sources of the mass and stiffness errors and does not interfere with the theory behind the finite element model while correcting these errors. The updating algorithm is derived from the unconstrained minimization of the squared L2 norms of the modal dynamic residuals via an iterative two-step staggered procedure. At each iteration, the measured mode shapes are first expanded assuming that the model is error free, then the model parameters are corrected assuming that the expanded mode shapes are exact. The numerical algorithm is implemented in an element-by-element fashion and is capable of 'zooming' on the detected error locations. Several simulation examples which demonstrate the potential of the proposed methodology are discussed.
Program Monitoring with LTL in EAGLE
NASA Technical Reports Server (NTRS)
Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik
2004-01-01
We briefly present a rule-based framework called EAGLE, shown to be capable of defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time and metric temporal logics (MTL), interval logics, forms of quantified temporal logics, and so on. In this paper we focus on a linear temporal logic (LTL) specialization of EAGLE. For an initial formula of size m, we establish upper bounds of O(m^2 2^m log m) and O(m^4 2^(2m) log^2 m) for the space and time complexity, respectively, of single step evaluation over an input trace. This is close to the lower bound of O(2^√m) for future-time LTL presented in the literature. EAGLE has been successfully used, in both LTL and metric LTL forms, to test a real-time controller of an experimental NASA planetary rover.
Recognition of coarse-grained protein tertiary structure.
Lezon, Timothy; Banavar, Jayanth R; Maritan, Amos
2004-05-15
A model of the protein backbone is considered in which each residue is characterized by the location of its C(alpha) atom and one of a discrete set of conformational (phi, psi) states. We investigate the key differences between a description that offers a locally precise fit to known backbone structures and one that provides a globally accurate fit to protein structures. Using a statistical scoring scheme and threading, a protein's local best-fit conformation is highly recognizable, but its global structure cannot be directly determined from an amino acid sequence. The incorporation of information about the conformational states of neighboring residues along the chain allows one to accurately translate the local structure into a global structure. We present a two-step algorithm, which recognizes up to 95% of the tested protein native-state structures to within a 2.5 Å root mean square deviation. Copyright 2004 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Tan, Yayun; Zhang, He; Zha, Bingting
2017-09-01
Underwater target detection and ranging in seawater are of interest for unmanned underwater vehicles. This study presents an underwater detection system that synchronously scans a collimated laser beam and a narrow field of view to circumferentially detect an underwater target. Hybrid methods combining range gating and a variable step-size least mean squares (VSS-LMS) adaptive filter are proposed to suppress water backscattering. The range-gated receiver eliminates the backscattering of near-field water. The VSS-LMS filter extracts the target echo from the remaining backscattering, and the constant fraction discriminator timing method is used to improve ranging accuracy. The optimal constant fraction is selected by analysing the jitter noise and slope of the target echo. A prototype of the underwater detection system was constructed and tested in coastal seawater, and the effectiveness of the backscattering suppression and the high ranging accuracy are verified through the experimental results and analysis discussed in this paper.
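The abstract does not give the exact step-size rule, so the sketch below uses a common variable step-size LMS update (a Kwong-Johnston-style rule, mu <- alpha*mu + gamma*e², with clipping) on synthetic data as a stand-in for the backscatter-suppression stage; the filter order and constants are illustrative only.

```python
import numpy as np

def vss_lms(x, d, order=8, mu_min=1e-4, mu_max=0.05, alpha=0.97, gamma=1e-3):
    """Variable step-size LMS adaptive filter.  The step size follows a
    Kwong-Johnston-style update mu <- alpha*mu + gamma*e**2, clipped to
    [mu_min, mu_max].  x is the reference input, d the received signal;
    the returned error signal e is the backscatter-suppressed residual."""
    w = np.zeros(order)
    mu = mu_max
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]              # most recent samples first
        e[n] = d[n] - w @ u                   # estimation error
        mu = np.clip(alpha * mu + gamma * e[n] ** 2, mu_min, mu_max)
        w = w + mu * e[n] * u                 # LMS coefficient update
    return e

# Toy usage: correlated "backscatter" derived from the reference, plus a
# weak echo pulse that the filter should leave in the residual.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
backscatter = np.zeros_like(x)
backscatter[3:] = 0.5 * x[2:-1] + 0.3 * x[1:-2] + 0.1 * x[:-3]
d = backscatter.copy()
d[1200:1210] += 1.0                           # weak target echo
residual = vss_lms(x, d)
print("echo-to-background ratio:",
      np.abs(residual[1200:1210]).max() / residual[500:1000].std())
```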
The effects of control-display gain on performance of race car drivers in an isometric braking task.
de Winter, J C F; de Groot, S
2012-12-01
To minimise lap times during car racing, it is important to build up brake forces rapidly and maintain precise control. We examined the effect of the amplification factor (gain) between brake pedal force and a visually represented output value on a driver's ability to track a target value. The test setup was a formula racing car cockpit fitted with an isometric brake pedal. Thirteen racing drivers performed tracking tasks with four control-display gains and two target functions: a step function (35 trials per gain) and a multisine function (15 trials per gain). The control-display gain had only minor effects on root mean-squared error between output value and target value, but it had large effects on build-up speed, overshoot, within-participants variability, and self-reported physical load. The results confirm the hypothesis that choosing an optimum gain involves balancing stability against physical effort.
Physical Function Does Not Predict Care Assessment Need Score in Older Veterans.
Serra, Monica C; Addison, Odessa; Giffuni, Jamie; Paden, Lydia; Morey, Miriam C; Katzel, Leslie
2017-01-01
The Veterans Health Administration's Care Assessment Need (CAN) score is a statistical model aimed at predicting high-risk patients. We were interested in determining if a relationship existed between physical function and CAN scores. Seventy-four older (71 ± 1 years) male Veterans underwent assessment of CAN score and subjective (Short Form-36 [SF-36]) and objective (self-selected walking speed, four square step test, short physical performance battery) assessment of physical function. Approximately 25% of participants self-reported limitations performing lower intensity activities, while 70% to 90% reported limitations with more strenuous activities. When compared with cut points indicative of functional limitations, 35% to 65% of participants had limitations for each of the objective measures. No subjective or objective measure of physical function predicted the CAN score. These data indicate that the addition of a physical function assessment may complement the CAN score in the identification of high-risk patients.
Epitaxial growth of lithium fluoride on the (1 1 1) surface of CaF 2
NASA Astrophysics Data System (ADS)
Klumpp, St; Dabringhaus, H.
1999-08-01
Growth of lithium fluoride by molecular beam epitaxy on the (1 1 1) surface of calcium fluoride crystals was studied by TEM and LEED for crystal temperatures from 400 to 773 K and impinging lithium fluoride fluxes from 3×10^11 to 3×10^14 cm^-2 s^-1. Growth usually starts at the <1 1 0> steps on the (1 1 1) surface of CaF2. For larger step distances and at later growth stages, growth on the terraces between the steps is also found. Preferably, longish, roof-like crystallites are formed, which can be interpreted by growth of LiF(2 0 1¯)[0 1 0] parallel to CaF2(1 1 1)[1¯ 0 1]. To a lesser extent square crystallites, i.e. growth with LiF(0 0 1), and, rarely, three-fold pyramidal crystallites, i.e. growth with LiF(1 1 1) parallel to CaF2(1 1 1), are observed. While the pyramidal crystallites show strict epitaxial orientation with LiF[1¯ 0 1]‖CaF2[1¯ 0 1] and LiF[1¯ 0 1]‖CaF2[1 2¯ 1], only about 80% of the square crystallites exhibit an epitaxial alignment, where LiF[1 0 0]‖CaF2[1¯ 0 1] is preferred to LiF[1 1 0]‖CaF2[1¯ 0 1]. The epitaxial relationships are discussed on the basis of theoretically calculated adsorption positions of the lithium fluoride monomer and dimer on the terrace and at the steps of the CaF2(1 1 1) surface.
NASA Astrophysics Data System (ADS)
Parise, M.
2018-01-01
A highly accurate analytical solution is derived for the electromagnetic problem of a short vertical wire antenna located on a stratified ground. The derivation consists of three steps. First, the integration path of the integrals describing the fields of the dipole is deformed and wrapped around the pole singularities and the two vertical branch cuts of the integrands located in the upper half of the complex plane. This allows the radiated field to be decomposed into its three contributions, namely the above-surface ground wave, the lateral wave, and the trapped surface waves. Next, the square root terms responsible for the branch cuts are extracted from the integrands of the branch-cut integrals. Finally, the extracted square roots are replaced with their rational representations according to Newton's square root algorithm, and the residue theorem is applied to give explicit expressions, in series form, for the fields. The rigorous integration procedure and the convergence of the square root algorithm ensure that the obtained formulas converge to the exact solution. Numerical simulations are performed to show the validity and robustness of the developed formulation, as well as its advantages in terms of time cost over standard numerical integration procedures.
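For readers unfamiliar with the square-root step mentioned above, the sketch below shows the basic Newton (Heron) iteration that generates rational approximations to a square root; in the paper the same idea is applied to the square-root terms of the integrands, which is what makes the residue evaluation possible, but that field-theoretic step is not reproduced here.

```python
import math
from fractions import Fraction

def newton_sqrt(S, x0=1, iterations=5):
    """Newton's square-root iteration x <- (x + S/x) / 2.  Starting from a
    rational guess, every iterate is a rational approximation to sqrt(S)."""
    x, S = Fraction(x0), Fraction(S)
    approximants = [x]
    for _ in range(iterations):
        x = (x + S / x) / 2
        approximants.append(x)
    return approximants

for k, x in enumerate(newton_sqrt(2)):
    print(f"iter {k}: {x} = {float(x):.12f}  "
          f"error = {abs(float(x) - math.sqrt(2)):.2e}")
# the error is roughly squared at each step (quadratic convergence)
```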
NASA Astrophysics Data System (ADS)
Wu, Fang-Xiang; Mu, Lei; Shi, Zhong-Ke
2010-01-01
The models of gene regulatory networks are often derived from the statistical thermodynamics principle or the Michaelis-Menten kinetics equation. As a result, the models contain rational reaction rates which are nonlinear in both parameters and states. It is challenging to estimate parameters that enter a model nonlinearly, although there are many traditional nonlinear parameter estimation methods, such as the Gauss-Newton iteration method and its variants. In this article, we develop a two-step method to estimate the parameters in rational reaction rates of gene regulatory networks via weighted linear least squares. This method takes the special structure of rational reaction rates into consideration; that is, in the rational reaction rates, the numerator and the denominator are linear in the parameters. By designing a special weight matrix for the linear least squares, parameters in the numerator and the denominator can be estimated by solving two linear least squares problems. The main advantage of the developed method is that it can produce analytical solutions to the estimation of parameters in rational reaction rates, which is originally a nonlinear parameter estimation problem. The developed method is applied to a couple of gene regulatory networks. The simulation results show superior performance over the Gauss-Newton method.
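As a minimal illustration of the idea (not the authors' exact algorithm or weight matrix), the sketch below estimates the parameters of a Michaelis-Menten-type rational rate by cross-multiplying to obtain a relation that is linear in the parameters, solving an ordinary least squares problem, and then re-solving a weighted least squares problem with weights based on the estimated denominator.

```python
import numpy as np

def fit_mm_two_step(s, r):
    """Two-step (weighted) linear least squares for the rational rate
    r = Vmax*s / (Km + s).  Cross-multiplying gives the linear relation
    Vmax*s - Km*r = r*s, so the parameters can be estimated by linear LS.
    A second, weighted pass (weights 1/(Km+s)**2) reduces the distortion
    introduced by multiplying through by the denominator."""
    X = np.column_stack([s, -r])
    y = r * s
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)        # step 1: plain LS
    for _ in range(2):                                    # step 2: weighted LS
        vmax, km = theta
        w = np.sqrt(1.0 / (km + s) ** 2)
        theta, *_ = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)
    return theta                                          # (Vmax, Km)

rng = np.random.default_rng(1)
s = np.linspace(0.1, 10.0, 40)                            # substrate levels
r_noisy = 2.0 * s / (0.8 + s) + 0.02 * rng.standard_normal(s.size)
print("Vmax, Km =", fit_mm_two_step(s, r_noisy))          # close to (2.0, 0.8)
```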
21 CFR 177.1670 - Polyvinyl alcohol film.
Code of Federal Regulations, 2014 CFR
2014-04-01
... extractives not to exceed 0.078 milligram per square centimeter (0.5 milligram per square inch) of food... the American Society for Testing Materials, 100 Barr Harbor Dr., West Conshohocken, Philadelphia, PA...
[Evaluation of the effect of one-step self etching adhesives applied in pit and fissure sealing].
Su, Hong-Ru; Xu, Pei-Cheng; Qian, Wen-Hao
2016-06-01
To observe the effect of three one-step self etching adhesive systems used in pit and fissure sealing and explore the feasibility of their application for caries prevention in schools. Seven hundred and twenty completely erupted mandibular first molars in 360 children aged 7 to 9 years old were chosen. A split-mouth design was used to select one side as the experimental group, divided into A1 (Easy One Adper), B1 (Adper Easy One), and C1 (iBond SE). The contralateral teeth served as the A2, B2 and C2 groups (phosphoric acid etching). The retention and caries status were reviewed regularly. The clinical effects of the two groups were compared with the Chi-square test using the SPSS 19.0 software package. At 3 and 6 months, the pit and fissure sealant retention rates in the A1 and A2, B1 and B2, and C1 and C2 groups showed no significant difference. At 12 months, sealant retention in the A1 and B1 groups was significantly lower than in the A2 and B2 groups (P<0.05). No significant difference was found between the C1 and C2 groups (P>0.05). At 24 months, the sealant retention rates in the A1, B1 and C1 groups were significantly lower than in the A2, B2 and C2 groups (P<0.05). The caries rates in the A1 and A2, B1 and B2, and C1 and C2 groups showed no significant difference at the different follow-up times (P>0.05). The clinical anticariogenic effects of the three one-step self etching adhesives and the phosphoric acid etching sealant were similar. The one-step self etching adhesive system is recommended for pit and fissure sealing to improve students' oral health. However, the long-term retention rate of the one-step self etching adhesive system was lower than that of the phosphoric acid method, so long-term observation is needed.
Mohammadi Moghaddam, Toktam; Razavi, Seyed M A; Taghizadeh, Masoud; Sazgarnia, Ameneh
2016-01-01
Roasting is an important step in the processing of pistachio nuts. The effect of hot air roasting temperature (90, 120 and 150 °C), time (20, 35 and 50 min) and air velocity (0.5, 1.5 and 2.5 m/s) on textural and sensory characteristics of pistachio nuts and kernels were investigated. The results showed that increasing the roasting temperature decreased the fracture force (82-25.54 N), instrumental hardness (82.76-37.59 N), apparent modulus of elasticity (47-21.22 N/s), compressive energy (280.73-101.18 N.s) and increased amount of bitterness (1-2.5) and the hardness score (6-8.40) of pistachio kernels. Higher roasting time improved the flavor of samples. The results of the consumer test showed that the roasted pistachio kernels have good acceptability for flavor (score 5.83-8.40), color (score 7.20-8.40) and hardness (score 6-8.40) acceptance. Moreover, Partial Least Square (PLS) analysis of instrumental and sensory data provided important information for the correlation of objective and subjective properties. The univariate analysis showed that over 93.87 % of the variation in sensory hardness and almost 87 % of the variation in sensory acceptability could be explained by instrumental texture properties.
Liang, Gaozhen; Dong, Chunwang; Hu, Bin; Zhu, Hongkai; Yuan, Haibo; Jiang, Yongwen; Hao, Guoshuang
2018-05-18
Withering is the first step in the processing of congou black tea. To address the deficiencies of traditional water content detection methods, a machine vision-based NDT (non-destructive testing) method was established to detect the moisture content of withered leaves. First, a computer vision system collected visible-light images of the tea leaf surfaces according to the withering time sequence, and color and texture characteristics were extracted from the spatial changes in color. Then, quantitative prediction models for the moisture content of withered tea leaves were established using linear PLS (Partial Least Squares) and non-linear SVM (Support Vector Machine) methods. The results showed correlation coefficients higher than 0.8 between the water content and the green component mean value (G), lightness component mean value (L*) and uniformity (U), which means that the extracted characteristics have great potential for predicting the water content. The performance parameters of the SVM prediction model, namely the correlation coefficient of the prediction set (Rp), the root-mean-square error of prediction (RMSEP) and the relative standard deviation (RPD), were 0.9314, 0.0411 and 1.8004, respectively. The non-linear modeling method better describes the quantitative analytical relation between the images and the water content. With its superior generalization and robustness, the method provides a new approach and theoretical basis for online water content monitoring in the automated production of black tea.
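A minimal sketch of the PLS branch of such a model is given below, using scikit-learn on a hypothetical feature matrix (stand-ins for the G, L* and uniformity statistics); the data are synthetic, and the SVM variant reported in the abstract is not shown.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical data: rows are withered-leaf samples, columns are image
# features (e.g. mean G, L*, uniformity and other colour/texture stats).
n_samples, n_features = 120, 9
X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
moisture = (0.55 + 0.05 * (X @ w_true) / np.linalg.norm(w_true)
            + 0.01 * rng.normal(size=n_samples))

X_tr, X_te, y_tr, y_te = train_test_split(X, moisture, test_size=0.3,
                                          random_state=0)
pls = PLSRegression(n_components=3)
pls.fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

rmsep = np.sqrt(np.mean((y_te - y_hat) ** 2))
rp = np.corrcoef(y_te, y_hat)[0, 1]
rpd = np.std(y_te) / rmsep   # one common definition: SD of reference / RMSEP
print(f"Rp={rp:.3f}  RMSEP={rmsep:.4f}  RPD={rpd:.2f}")
```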
Soler, C; García, A; Contell, J; Segervall, J; Sancho, M
2014-08-01
Over recent years, technological advances have brought innovation in assisted reproduction to agriculture. Fox species are of great economic interest in some countries, but their semen characteristics have not been studied enough. To advance knowledge of the function of fox spermatozoa, five samples were obtained by masturbation during the breeding season. Kinetic analysis was performed using the ISAS® v1 system. The usual kinematic parameters (VCL, VSL, VAP, LIN, STR, WOB, ALH and BCF) were considered. To standardize the analysis of samples, the minimum number of cells to analyse and the minimum number of fields to capture were defined. In the second step, the presence of subpopulations in blue fox semen was analysed. The minimum number of cells to test was 30, because the kinematic parameters remained constant across the analysis groups. The effectiveness of the ISAS® D4C20 counting chamber was also studied, showing that the first five squares gave equivalent results, while in squares six and seven all the kinematic parameters showed a reduction, although the concentration and motility percentage did not. The kinematic variables were grouped into two principal components (PC). A linear movement characterized PC1, while PC2 described an oscillatory movement. Three subpopulations were found, varying in structure among the different animals. © 2014 Blackwell Verlag GmbH.
Laborda, Eduardo; Gómez-Gil, José María; Molina, Angela
2017-06-28
A very general and simple theoretical solution is presented for the current-potential-time response of reversible multi-electron transfer processes complicated by homogeneous chemical equilibria (the so-called extended square scheme). The expressions presented here are applicable regardless of the number of electrons transferred and coupled chemical processes, and they are particularized for a wide variety of microelectrode geometries. The voltammetric response of very different systems presenting multi-electron transfers is considered for the most widely-used techniques (namely, cyclic voltammetry, square wave voltammetry, differential pulse voltammetry and steady state voltammetry), studying the influence of the microelectrode geometry and the number and thermodynamics of the (electro)chemical steps. Most appropriate techniques and procedures for the determination of the 'interaction' between successive transfers are discussed. Special attention is paid to those situations where homogeneous chemical processes, such as protonation, complexation or ion association, affect the electrochemical behaviour of the system by different stabilization of the oxidation states.
Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan
2017-09-01
In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aimed at identifying the faulty variables that contribute most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address this problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. Unlike traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Growth and characterization of epitaxially stabilized ceria(001) nanostructures on Ru(0001)
Flege, Jan Ingo; Hocker, Jan; Kaemena, Bjorn; ...
2016-05-03
We have studied (001) surface terminated cerium oxide nanoparticles grown on a ruthenium substrate using physical vapor deposition. Their morphology, shape, crystal structure, and chemical state are determined by low-energy electron microscopy and micro-diffraction, scanning probe microscopy, and synchrotron-based X-ray absorption spectroscopy. Square islands are identified as CeO2 nanocrystals exhibiting a (001) oriented top facet of varying size; they have a height of about 7 to 10 nm and a side length between about 50 and 500 nm, and are terminated with a p(2 × 2) surface reconstruction. Micro-illumination electron diffraction reveals the existence of a coincidence lattice at the interface to the ruthenium substrate. The orientation of the side facets of the rod-like particles is identified as (111); the square particles are most likely of cuboidal shape, exhibiting (100) oriented side facets. Lastly, the square and needle-like islands are predominantly found at step bunches and may be grown exclusively at temperatures exceeding 1000 °C.
Neiworth, Julie J; Gleichman, Amy J; Olinick, Anne S; Lamp, Kristen E
2006-11-01
This study compared adults (Homo sapiens), young children (Homo sapiens), and adult tamarins (Saguinus oedipus) while they discriminated global and local properties of stimuli. Subjects were trained to discriminate a circle made of circle elements from a square made of square elements and were tested with circles made of squares and squares made of circles. Adult humans showed a global bias in testing that was unaffected by the density of the elements in the stimuli. Children showed a global bias with dense displays but discriminated by both local and global properties with sparse displays. Adult tamarins' biases matched those of the children. The striking similarity between the perceptual processing of adult monkeys and humans diagnosed with autism and the difference between this and normatively developing human perception is discussed.
Nonmetallic Material Compatibility with Liquid Fluorine
NASA Technical Reports Server (NTRS)
Price, Harold G., Jr.; Douglass, Howard W.
1957-01-01
Static tests were made on the compatibility of liquid fluorine with several nonmetallic materials at -320° F and at pressures of 0 and 1500 pounds per square inch gage. The results are compared with those from previous work with gaseous fluorine at the same pressures, but at atmospheric temperature. In general, although environmental effects were not always consistent, reactivity was least with the low-temperature, low-pressure liquid fluorine. Reactivity was greatest with the warm, high-pressure gaseous fluorine. None of the liquids and greases tested was found to be entirely suitable for use in fluorine systems. Polytrifluorochloroethylene and N-43, the formula for which is (C4F9)3N, did not react with liquid fluorine at atmospheric pressure or 1500 pounds per square inch gage under static conditions, but they did react when injected into liquid fluorine at 1500 pounds per square inch gage; they also reacted with gaseous fluorine at 1500 pounds per square inch gage. While water did not react with liquid fluorine at 1500 pounds per square inch gage, it is known to react violently with fluorine under other conditions. The pipe-thread lubricant Q-Seal did not react with liquid fluorine, but did react with gaseous fluorine at 1500 pounds per square inch gage. Of the solids, ruby (Al2O3) and Teflon did not react under the test conditions. The results show that the compatibility of fluorine with nonmetals depends on the state of the fluorine and the system design.
Eta- and Partial Eta-Squared in L2 Research: A Cautionary Review and Guide to More Appropriate Usage
ERIC Educational Resources Information Center
Norouzian, Reza; Plonsky, Luke
2018-01-01
Eta-squared (η²) and partial eta-squared (η_p²) are effect sizes that express the amount of variance accounted for by one or more independent variables. These indices are generally used in conjunction with ANOVA, the most commonly used statistical test in second language (L2) research (Plonsky, 2013).…
NASA Astrophysics Data System (ADS)
Thangsunan, Patcharapong; Kittiwachana, Sila; Meepowpan, Puttinan; Kungwan, Nawee; Prangkio, Panchika; Hannongbua, Supa; Suree, Nuttee
2016-06-01
Improving performance of scoring functions for drug docking simulations is a challenging task in the modern discovery pipeline. Among various ways to enhance the efficiency of scoring function, tuning of energetic component approach is an attractive option that provides better predictions. Herein we present the first development of rapid and simple tuning models for predicting and scoring inhibitory activity of investigated ligands docked into catalytic core domain structures of HIV-1 integrase (IN) enzyme. We developed the models using all energetic terms obtained from flexible ligand-rigid receptor dockings by AutoDock4, followed by a data analysis using either partial least squares (PLS) or self-organizing maps (SOMs). The models were established using 66 and 64 ligands of mercaptobenzenesulfonamides for the PLS-based and the SOMs-based inhibitory activity predictions, respectively. The models were then evaluated for their predictability quality using closely related test compounds, as well as five different unrelated inhibitor test sets. Weighting constants for each energy term were also optimized, thus customizing the scoring function for this specific target protein. Root-mean-square error (RMSE) values between the predicted and the experimental inhibitory activities were determined to be <1 (i.e. within a magnitude of a single log scale of actual IC50 values). Hence, we propose that, as a pre-functional assay screening step, AutoDock4 docking in combination with these subsequent rapid weighted energy tuning methods via PLS and SOMs analyses is a viable approach to predict the potential inhibitory activity and to discriminate among small drug-like molecules to target a specific protein of interest.
Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction
Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir
2016-10-20
Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
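The "project then discretize" versus "discretize then minimize the residual" distinction can be made concrete on a small linear test problem. The sketch below (ours, not the paper's GNAT implementation) advances a backward-Euler Galerkin ROM and a backward-Euler LSPG ROM side by side; for a generic non-symmetric operator the two do not coincide, which is the point of the comparison.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, dt, steps = 200, 10, 0.01, 100

# Full-order linear model dx/dt = A x (stable A) and an orthonormal basis V
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
V, _ = np.linalg.qr(rng.standard_normal((n, k)))
x0 = rng.standard_normal(n)
q_gal = q_lspg = V.T @ x0

I_n, I_k = np.eye(n), np.eye(k)
Ar = V.T @ A @ V                                  # Galerkin reduced operator
for _ in range(steps):
    # Galerkin ROM: project the equations, then apply backward Euler
    q_gal = np.linalg.solve(I_k - dt * Ar, q_gal)
    # LSPG ROM: apply backward Euler first, then minimize the full-order
    # residual || (I - dt A) V q_{n+1} - V q_n ||_2 over the reduced coords
    q_lspg, *_ = np.linalg.lstsq((I_n - dt * A) @ V, V @ q_lspg, rcond=None)

print("Galerkin vs LSPG reduced states differ by",
      np.linalg.norm(q_gal - q_lspg))
```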
Design and manufacture a coconut milk squeezer
NASA Astrophysics Data System (ADS)
Wayan Surata, I.; Gde Tirta Nindhia, Tjokorda; Budyanto, D.; Yulianto, A. E.
2017-05-01
The production of cooking oil generally starts by grating the ripe coconut meat, then pressing the grated meat to obtain coconut milk, and finally heating the coconut milk to obtain the cooking oil. The pressing mechanism used to obtain coconut milk is a very important and decisive step in the process of producing cooking oil. The amount of milk produced depends on the pressure applied when pressing the grated coconut: the higher the pressure, the more milk is obtained. Some commercial mechanical pressing tools available in the market are not efficient because they involve too many working steps and take a long time per work cycle. The aim of this study was to design and manufacture a power screw squeezer for the collection of coconut milk. The power screw produces a compressive force in the cylinder that pushes and presses the grated coconut toward the end of the cylinder while the coconut milk and coconut dregs flow out simultaneously. The screw press was designed using a straight shaft configuration with a square profile. A performance test was done to investigate the actual capacity and the yield of milk produced. The results showed that the grated coconut squeezer worked well, with an average capacity of 13.63 kg/h and a coconut milk yield of 58%.
NASA Astrophysics Data System (ADS)
Lopez-Sanchez, Marco; Llana-Fúnez, Sergio
2016-04-01
The understanding of creep behaviour in rocks requires knowledge of 3D grain size distributions (GSD) that result from dynamic recrystallization processes during deformation. The methods to estimate directly the 3D grain size distribution -serial sectioning, synchrotron or X-ray-based tomography- are expensive, time-consuming and, in most cases and at best, challenging. This means that in practice grain size distributions are mostly derived from 2D sections. Although there are a number of methods in the literature to derive the actual 3D grain size distributions from 2D sections, the most popular in highly deformed rocks is the so-called Saltykov method. It has though two major drawbacks: the method assumes no interaction between grains, which is not true in the case of recrystallised mylonites; and uses histograms to describe distributions, which limits the quantification of the GSD. The first aim of this contribution is to test whether the interaction between grains in mylonites, i.e. random grain packing, affects significantly the GSDs estimated by the Saltykov method. We test this using the random resampling technique in a large data set (n = 12298). The full data set is built from several parallel thin sections that cut a completely dynamically recrystallized quartz aggregate in a rock sample from a Variscan shear zone in NW Spain. The results proved that the Saltykov method is reliable as long as the number of grains is large (n > 1000). Assuming that a lognormal distribution is an optimal approximation for the GSD in a completely dynamically recrystallized rock, we introduce an additional step to the Saltykov method, which allows estimating a continuous probability distribution function of the 3D grain size population. The additional step takes the midpoints of the classes obtained by the Saltykov method and fits a lognormal distribution with a trust region using a non-linear least squares algorithm. The new protocol is named the two-step method. The conclusion of this work is that both the Saltykov and the two-step methods are accurate and simple enough to be useful in practice in rocks, alloys or ceramics with near-equant grains and expected lognormal distributions. The Saltykov method is particularly suitable to estimate the volumes of particular grain fractions, while the two-step method to quantify the full GSD (mean and standard deviation in log grain size). The two-step method is implemented in a free, open-source and easy-to-handle script (see http://marcoalopez.github.io/GrainSizeTools/).
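The second step described above reduces, in essence, to a non-linear least-squares fit of a lognormal density to the unfolded class midpoints. The sketch below (not the GrainSizeTools code) illustrates that step with scipy's curve_fit, which switches to a trust-region-reflective solver when bounds are supplied; the "unfolded" frequencies here are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognorm_pdf(x, mu, sigma):
    """Lognormal probability density with log-mean mu and log-std sigma."""
    return (np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2))
            / (x * sigma * np.sqrt(2 * np.pi)))

# Hypothetical output of the Saltykov unfolding: class midpoints (microns)
# and the corresponding 3D frequency densities (with a little noise).
midpoints = np.array([5., 15., 25., 35., 45., 55., 65., 75., 85., 95.])
freqs = lognorm_pdf(midpoints, mu=3.4, sigma=0.5)
freqs += 0.0005 * np.random.default_rng(2).standard_normal(freqs.size)

# Second step of the "two-step method": fit a lognormal to the midpoints;
# supplying bounds makes curve_fit use a trust-region algorithm.
(mu_hat, sigma_hat), _ = curve_fit(lognorm_pdf, midpoints, freqs,
                                   p0=(3.0, 0.4),
                                   bounds=([0.0, 0.01], [10.0, 5.0]))
print(f"log grain size: mean = {mu_hat:.2f}, std = {sigma_hat:.2f}")
```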
Hierarchical self-assembly of actin in micro-confinements using microfluidics
Deshpande, Siddharth; Pfohl, Thomas
2012-01-01
We present a straightforward microfluidics system to achieve step-by-step reaction sequences in a diffusion-controlled manner in quasi two-dimensional micro-confinements. We demonstrate the hierarchical self-organization of actin (actin monomers—entangled networks of filaments—networks of bundles) in a reversible fashion by tuning the Mg2+ ion concentration in the system. We show that actin can form networks of bundles in the presence of Mg2+ without any cross-linking proteins. The properties of these networks are influenced by the confinement geometry. In square microchambers we predominantly find rectangular networks, whereas triangular meshes are predominantly found in circular chambers. PMID:24032070
Area law violations and quantum phase transitions in modified Motzkin walk spin chains
NASA Astrophysics Data System (ADS)
Sugino, Fumihiko; Padmanabhan, Pramod
2018-01-01
Area law violations for entanglement entropy in the form of a square root have recently been studied for one-dimensional frustration-free quantum systems based on the Motzkin walks and their variations. Here we consider a Motzkin walk with a different Hilbert space on each step of the walk spanned by the elements of a symmetric inverse semigroup with the direction of each step governed by its algebraic structure. This change alters the number of paths allowed in the Motzkin walk and introduces a ground state degeneracy that is sensitive to boundary perturbations. We study the frustration-free spin chains based on three symmetric inverse semigroups, …
Smith, Michelle D; Harvey, Elizabeth H; van den Hoorn, Wolbert; Shay, Barbara L; Pereira, Gisèle M; Hodges, Paul W
2016-04-01
Recent studies show balance impairment in subjects with chronic respiratory disease. The aim of this proof-of-concept study was to investigate clinical and quantitative measures of balance in people with chronic respiratory disease following participation in an out-patient pulmonary rehabilitation (PR) program to better understand features of balance improvement. A secondary aim was to probe possible mechanisms for balance improvement to provide the foundation for optimal design of future studies. Eleven individuals with chronic respiratory disease enrolled in an 8-week out-patient PR program participated. Standing balance, measured with a force plate, in the medial-lateral and anterior-posterior directions with eyes open and closed was assessed with linear (SD and sway path length) and non-linear (diffusion analysis) center-of-pressure measures. Balance was evaluated clinically with the Timed Up and Go and Four Square Step Test. Fear of falling and balance confidence were assessed with questionnaires. After participation in PR, medial-lateral sway path length decreased (P = .031), and center-of-pressure diffusion in the medial-lateral direction was slower (P = .02) and traveled over less distance (P = .03) with eyes closed. This suggests greater control of medial-lateral sway. There was no change in anterior-posterior balance (P > .067). Performance improved on the Timed Up and Go (median [interquartile range] pre-PR = 9.4 [7.9-12.8] vs. post-PR = 8.1 [7.3-12.2] s, P = .003) and Four Square Step Test (median [interquartile range] pre-PR = 9.3 [7.2-14.2] vs. post-PR = 8.7 [7.4-10.2] s, P = .050). There were no changes in balance confidence (P = .72) or fear of falling (P = .57). Participation in an 8-week out-patient PR program improved balance, as assessed by clinical and laboratory measures. Detailed analysis of force plate measures demonstrated improvements primarily with respect to medial-lateral balance control. These data provide a basis for the development of larger scale studies to investigate the mechanisms for medial-lateral balance improvements following PR and to determine how PR may be refined to enhance balance outcomes in this population. (ClinicalTrials.gov registration NCT00864084.). Copyright © 2016 by Daedalus Enterprises.
Frew, Paula M; Mulligan, Mark J; Hou, Su-I; Chan, Kayshin; del Rio, Carlos
2010-01-01
Objective This study examines whether men-who-have-sex-with-men (MSM) and transgender (TG) persons’ attitudes, beliefs, and risk perceptions toward human immunodeficiency virus (HIV) vaccine research have been altered as a result of the negative findings from a phase 2B HIV vaccine study. Design We conducted a cross-sectional survey among MSM and TG persons (N = 176) recruited from community settings in Atlanta from 2007 to 2008. The first group was recruited during an active phase 2B HIV vaccine trial in which a candidate vaccine was being evaluated (the “Step Study”), and the second group was recruited after product futility was widely reported in the media. Methods Descriptive statistics, t tests, and chi-square tests were conducted to ascertain differences between the groups, and ordinal logistic regressions examined the influences of the above-mentioned factors on a critical outcome, future HIV vaccine study participation. The ordinal regression outcomes evaluated the influences on disinclination, neutrality, and inclination to study participation. Results Behavioral outcomes such as future recruitment, event attendance, study promotion, and community mobilization did not reveal any differences in participants’ intentions between the groups. However, we observed greater interest in HIV vaccine study screening (t = 1.07, P < 0.05) and enrollment (t = 1.15, P < 0.05) following negative vaccine findings. Means on perceptions, attitudes, and beliefs did not differ between the groups. Before this development, only beliefs exhibited a strong relationship on the enrollment intention (β = 2.166, P = 0.002). However, the effect disappeared following negative trial results, with the positive assessment of the study-site perceptions being the only significant contributing factor on enrollment intentions (β = 1.369, P = 0.011). Conclusion Findings show greater enrollment intention among this population in the wake of negative efficacy findings from the Step Study. The resolve of this community to find an HIV vaccine is evident. Moreover, any exposure to information disseminated in the public arena did not appear to negatively influence the potential for future participation in HIV vaccine studies among this population. The results suggest that subsequent studies testing candidate vaccines could be conducted in this population. PMID:21152413
[Determination of Cu in Shell of Preserved Egg by LIBS Coupled with PLS].
Hu, Hui-qin; Xu, Xue-hong; Liu, Mu-hua; Tu, Jian-ping; Huang, Le; Huang, Lin; Yao, Ming-yin; Chen, Tian-bing; Yang, Ping
2015-12-01
In this work, the copper content in the shells of preserved eggs was determined directly by laser-induced breakdown spectroscopy (LIBS), and the characteristic lines of Cu were obtained. The eggshell samples were pretreated by wet acid digestion, and the reference Cu content was obtained by atomic absorption spectrophotometry (AAS). The test precision and accuracy of LIBS are influenced by a series of factors, for example the complex matrix effect of the sample, environmental noise, the system noise of the instrument, and the stability of the laser energy, and a conventional univariate linear calibration curve between LIBS intensity and element content, such as one based on the Scheibe-Lomakin equation, cannot meet the requirements of quantitative analysis. On that account, a multivariate calibration method is needed. In this work, the LIBS spectra were processed by partial least squares (PLS), and the precision and accuracy of the PLS model were compared for different smoothing treatments and five pretreatment methods. The results showed that the correlation coefficient and the accuracy of the PLS model were improved, and the root mean square error and the average relative error were reduced effectively, by 11-point smoothing with multiplicative scatter correction (MSC) pretreatment. The study shows that the heavy metal Cu in preserved egg shells can be detected directly and accurately by laser-induced breakdown spectroscopy. In the next step, batch tests will be conducted to find the relationship between the heavy metal Cu content in the eggshell and that in the egg white and egg yolk of preserved eggs, so that the heavy metal contents of the egg white and egg yolk can be inferred from LIBS measurements of the eggshell, providing a new method for rapid non-destructive testing of the quality and safety of agricultural products.
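Of the preprocessing steps mentioned, multiplicative scatter correction is the least self-explanatory; the sketch below shows a generic MSC implementation on synthetic spectra (the 11-point smoothing and the PLS regression itself are omitted), in which each spectrum is regressed against the mean spectrum and the fitted offset and slope are removed.

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum against a
    reference (here the mean spectrum) and remove the fitted offset a and
    slope b, i.e. x_corrected = (x - a) / b."""
    spectra = np.asarray(spectra, dtype=float)
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, x in enumerate(spectra):
        b, a = np.polyfit(ref, x, deg=1)      # fit x ~ a + b*ref
        corrected[i] = (x - a) / b
    return corrected

# Toy demo: one "true" spectrum seen with different additive/multiplicative
# scatter effects collapses back onto a single curve after MSC.
rng = np.random.default_rng(3)
wave = np.linspace(0, 1, 200)
base = np.exp(-(wave - 0.4) ** 2 / 0.01) + 0.5 * np.exp(-(wave - 0.7) ** 2 / 0.005)
raw = np.array([1.3 * base + 0.2, 0.8 * base - 0.1, 1.1 * base + 0.05])
raw += 0.01 * rng.standard_normal(raw.shape)
print("spread before MSC:", raw.std(axis=0).mean())
print("spread after  MSC:", msc(raw).std(axis=0).mean())
```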
Kampf, Günter; Reise, Gesche; James, Claudia; Gittelbauer, Kirsten; Gosch, Jutta; Alpers, Birgit
2013-01-01
Background: Peripheral venous catheters are frequently used in hospitalized patients but increase the risk of nosocomial bloodstream infection. Evidence-based guidelines describe specific steps that are known to reduce infection risk. However, the degree of guideline implementation in clinical practice is not known. The aim of this study was to determine the use of specific steps for insertion of peripheral venous catheters in clinical practice and to implement a multimodal intervention aimed at improving both compliance and the optimum order of the steps. Methods: The study was conducted at University Hospital Hamburg. An optimum procedure for inserting a peripheral venous catheter was defined based on three evidence-based guidelines (WHO, CDC, RKI) including five steps with 1A or 1B level of evidence: hand disinfection before patient contact, skin antisepsis of the puncture site, no palpation of treated puncture site, hand disinfection before aseptic procedure, and sterile dressing on the puncture site. A research nurse observed and recorded procedures for peripheral venous catheter insertion for healthcare workers in four different departments (endoscopy, central emergency admissions, pediatrics, and dermatology). A multimodal intervention with 5 elements was established (teaching session, dummy training, e-learning tool, tablet and poster, and direct feedback), followed by a second observation period. During the last observation week, participants evaluated the intervention. Results: In the control period, 207 insertions were observed, and 202 in the intervention period. Compliance improved significantly for four of five steps (e.g., from 11.6% to 57.9% for hand disinfection before patient contact; p<0.001, chi-square test). Compliance with skin antisepsis of the puncture site was high before and after intervention (99.5% before and 99.0% after). Performance of specific steps in the correct order also improved (e.g., from 7.7% to 68.6% when three of five steps were done; p<0.001). The intervention was described as helpful by 46.8% of the participants, as neutral by 46.8%, and as disruptive by 6.4%. Conclusions: A multimodal strategy to improve both compliance with safety steps for peripheral venous catheter insertion and performance of an optimum procedure was effective and was regarded helpful by healthcare workers. PMID:24327944
Rep. Butterfield, G. K. [D-NC-1
2014-07-17
Senate - 11/18/2014 Received in the Senate and Read twice and referred to the Committee on Homeland Security and Governmental Affairs.
Short-range inverse-square law experiment in space
NASA Technical Reports Server (NTRS)
Strayer, D.; Paik, H. J.; Moody, M. V.
2002-01-01
The objective of ISLES (Inverse-Square Law Experiment in Space) is to perform a null test of Newton's law on the ISS with a resolution of one part in 10^5 at ranges from 100 μm to 1 mm. ISLES will be sensitive enough to detect axions with the strongest allowed coupling and to test the string-theory prediction with R ≈ 5 μm.
NASA Technical Reports Server (NTRS)
Crutcher, H. L.; Falls, L. W.
1976-01-01
Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
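A bare-bones version of the univariate chi-square normality test discussed here can be written in a few lines; the sketch below bins the data at equal normal quantiles so that the expected counts are equal, and reduces the degrees of freedom by the two estimated parameters. It is a generic textbook construction, not the specific procedure or tables of the report.

```python
import numpy as np
from scipy import stats

def chi_square_normality_test(x, n_bins=10):
    """Pearson chi-square goodness-of-fit test of a sample against a normal
    model whose mean and variance are estimated from the data."""
    x = np.asarray(x, dtype=float)
    n = x.size
    mu, sigma = x.mean(), x.std(ddof=1)
    # interior bin edges at equal normal quantiles -> equal expected counts
    edges = stats.norm.ppf(np.arange(1, n_bins) / n_bins, loc=mu, scale=sigma)
    observed = np.bincount(np.searchsorted(edges, x), minlength=n_bins)
    expected = np.full(n_bins, n / n_bins)
    chi2 = ((observed - expected) ** 2 / expected).sum()
    dof = n_bins - 1 - 2          # two parameters estimated from the data
    return chi2, stats.chi2.sf(chi2, dof)

sample = np.random.default_rng(4).normal(loc=5.0, scale=2.0, size=500)
chi2, p = chi_square_normality_test(sample)
print(f"chi-square = {chi2:.2f}, p-value = {p:.3f}")
```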
Stencils and problem partitionings: Their influence on the performance of multiple processor systems
NASA Technical Reports Server (NTRS)
Reed, D. A.; Adams, L. M.; Patrick, M. L.
1986-01-01
Given a discretization stencil, partitioning the problem domain is an important first step for the efficient solution of partial differential equations on multiple processor systems. Partitions are derived that minimize interprocessor communication when the number of processors is known a priori and each domain partition is assigned to a different processor. This partitioning technique uses the stencil structure to select appropriate partition shapes. For square problem domains, it is shown that non-standard partitions (e.g., hexagons) are frequently preferable to the standard square partitions for a variety of commonly used stencils. This investigation is concluded with a formalization of the relationship between partition shape, stencil structure, and architecture, allowing selection of optimal partitions for a variety of parallel systems.
Asymptotic shape of the region visited by an Eulerian walker.
Kapri, Rajeev; Dhar, Deepak
2009-11-01
We study an Eulerian walker on a square lattice, starting from an initial randomly oriented background, using Monte Carlo simulations. We present evidence that, for a large number of steps N, the asymptotic shape of the set of sites visited by the walker is a perfect circle. The radius of the circle increases as N^(1/3), for large N, and the width of the boundary region grows as N^(α/3), with α = 0.40 ± 0.06. If we introduce stochasticity in the evolution rules, the mean-square displacement of the walker, …
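The model itself is simple to simulate; the sketch below implements the Eulerian (rotor-router) walker on a randomly oriented background with a plain dictionary of arrows, which is enough to watch the visited cluster grow roughly like a disc of radius ~ N^(1/3). The rotation order of the arrows is a free choice here.

```python
import numpy as np

def eulerian_walker(n_steps, seed=0):
    """Eulerian (rotor-router) walker on the square lattice.  Each site holds
    an arrow pointing to one of its 4 neighbours, initially random.  At every
    step the walker rotates the arrow at its current site by 90 degrees and
    then moves in the new arrow direction."""
    rng = np.random.default_rng(seed)
    moves = [(1, 0), (0, 1), (-1, 0), (0, -1)]   # rotation order E, N, W, S
    rotors = {}                                  # site -> arrow index
    x = y = 0
    visited = {(0, 0)}
    for _ in range(n_steps):
        k = rotors.get((x, y))
        if k is None:                            # unvisited site: random arrow
            k = int(rng.integers(4))
        k = (k + 1) % 4                          # rotate the arrow
        rotors[(x, y)] = k
        dx, dy = moves[k]
        x, y = x + dx, y + dy
        visited.add((x, y))
    return visited

sites = eulerian_walker(200_000)
radius = np.sqrt(max(i * i + j * j for i, j in sites))
print(f"visited {len(sites)} sites, max radius ~ {radius:.1f}")
```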
NASA Astrophysics Data System (ADS)
Foster, A. L.; Klofas, J. M.; Hein, J. R.; Koschinsky, A.; Bargar, J.; Dunham, R. E.; Conrad, T. A.
2011-12-01
Marine ferromanganese crusts and nodules ("Fe-Mn crusts") are considered a potential mineral resource due to their accumulation of several economically-important elements at concentrations above mean crustal abundances. They are typically composed of intergrown Fe oxyhydroxide and Mn oxide; thicker (older) crusts can also contain carbonate fluorapatite. We used X-ray absorption fine-structure (XAFS) spectroscopy, a molecular-scale structure probe, to determine the speciation of several elements (Te, Bi, Mo, Zr, Pt) in Fe-Mn crusts. As a first step in analysis of this dataset, we have conducted principal component analysis (PCA) of Te K-edge and Mo K-edge, k3-weighted XAFS spectra. The sample set consisted of 12 homogenized, ground Fe-Mn crust samples from 8 locations in the global ocean. One sample was subjected to a chemical leach to selectively remove Mn oxides and the elements associated with it. The samples in the study set contain 50-205 mg/kg Te (average = 88) and 97-802 mg/kg Mo (average = 567). PCAs of background-subtracted, normalized Te K-edge and Mo K-edge XAFS spectra were performed on a data matrix of 12 rows x 122 columns (rows = samples; columns = Te or Mo fluorescence value at each energy step) and results were visualized without rotation. The number of significant components was assessed by the Malinowski indicator function and ability of the components to reconstruct the features (minus noise) of all sample spectra. Two components were significant by these criteria for both Te and Mo PCAs and described a total of 74 and 75% of the total variance, respectively. Reconstruction of potential model compounds by the principal components derived from PCAs on the sample set ("target transformation") provides a means of ranking models in terms of their utility for subsequent linear-combination, least-squares (LCLS) fits (the next step of data analysis). Synthetic end-member models of Te4+, Te6+, and Mo adsorbed to Fe(III) oxyhydroxide and Mn oxide were tested. Te6+ sorbed to Fe oxyhydroxide and Mo sorbed to Fe oxyhydroxide were identified as the best models for Te and Mo PCAs, respectively. However, in the case of Mo, least-squares fits contradicted these results, indicating that about 80% of Mo in crust samples was associated with Mn oxides. Ultimately it was discovered that the sample from which Mn oxide had been leached was skewing the results in the Mo PCA but not in the Te PCA. When the leached sample was removed and the Mo PCA repeated (n = 11), target transformation indicated that Mo sorbed to Mn oxide was indeed the best model for the set. Our results indicate that Te and Mo are strongly partitioned into different phases in these Fe-Mn crusts, and emphasize the importance of evaluating outliers and their effects on PCA.
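As a schematic illustration of the analysis chain described above (not the authors' code), the following sketch performs PCA on a samples-by-energy spectra matrix via SVD and then carries out a "target transformation": a candidate model spectrum is reconstructed from the retained components and ranked by its reconstruction residual. The names `spectra` and `model_spectrum` are placeholders.

```python
import numpy as np

def pca_target_transform(spectra, model_spectrum, n_components=2):
    """PCA of a (samples x energy points) matrix, then target transformation."""
    mean_spectrum = spectra.mean(axis=0)
    X = spectra - mean_spectrum                      # column-center the data matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt[:n_components]                   # principal directions (loadings)
    explained = (s**2 / np.sum(s**2))[:n_components]

    # Target transformation: least-squares reconstruction of the model spectrum
    t = model_spectrum - mean_spectrum
    coeffs, *_ = np.linalg.lstsq(components.T, t, rcond=None)
    reconstruction = components.T @ coeffs + mean_spectrum
    residual = np.sqrt(np.mean((reconstruction - model_spectrum) ** 2))
    return explained, residual   # lower residual = better candidate model
```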
Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression.
Chen, Yanguang
2016-01-01
In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of Durbin-Watson's statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 China's regions. These results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test.
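A rough numerical sketch of the general idea (not the authors' exact statistics): fit an ordinary least squares regression, standardize the residuals, and compute a Moran-type autocorrelation coefficient with a row-normalized spatial weight matrix `W`. All names are placeholders.

```python
import numpy as np

def residual_spatial_autocorrelation(X, y, W):
    """OLS regression followed by a Moran-type autocorrelation of the residuals."""
    X1 = np.column_stack([np.ones(len(y)), X])        # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    e = y - X1 @ beta                                 # OLS residuals
    z = (e - e.mean()) / e.std(ddof=1)                # standardized residual vector
    Wn = W / W.sum(axis=1, keepdims=True)             # row-normalized spatial weights
    return (z @ Wn @ z) / (z @ z)                     # Moran-type index of the residuals
```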
An accurate test for homogeneity of odds ratios based on Cochran's Q-statistic.
Kulinskaya, Elena; Dollinger, Michael B
2015-06-10
A frequently used statistic for testing homogeneity in a meta-analysis of K independent studies is Cochran's Q. For a standard test of homogeneity the Q statistic is referred to a chi-square distribution with K-1 degrees of freedom. For the situation in which the effects of the studies are logarithms of odds ratios, the chi-square distribution is much too conservative for moderate size studies, although it may be asymptotically correct as the individual studies become large. Using a mixture of theoretical results and simulations, we provide formulas to estimate the shape and scale parameters of a gamma distribution to fit the distribution of Q. Simulation studies show that the gamma distribution is a good approximation to the distribution for Q. Use of the gamma distribution instead of the chi-square distribution for Q should eliminate inaccurate inferences in assessing homogeneity in a meta-analysis. (A computer program for implementing this test is provided.) This hypothesis test is competitive with the Breslow-Day test both in accuracy of level and in power.
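The following is an illustrative sketch, not the authors' formulas: it computes Cochran's Q from study-level log odds ratios with inverse-variance weights and contrasts the naive chi-square p-value with one from a gamma reference distribution. The gamma shape and scale would, in the paper, come from the authors' moment formulas; the values below are placeholders.

```python
import numpy as np
from scipy import stats

def cochran_q(log_or, var_log_or):
    """Cochran's Q for K study-level log odds ratios with known variances."""
    w = 1.0 / np.asarray(var_log_or)
    theta_hat = np.sum(w * log_or) / np.sum(w)   # fixed-effect pooled estimate
    return np.sum(w * (np.asarray(log_or) - theta_hat) ** 2)

log_or = np.array([0.2, 0.5, -0.1, 0.4])         # illustrative data
var_log_or = np.array([0.10, 0.15, 0.12, 0.20])
Q, K = cochran_q(log_or, var_log_or), len(log_or)

# Naive reference distribution: chi-square with K-1 degrees of freedom
p_chi2 = stats.chi2.sf(Q, K - 1)

# Gamma reference distribution: shape and scale are placeholders here
# (chosen so the mean equals K-1); the paper provides estimated values.
shape, scale = (K - 1) / 2.0, 2.0
p_gamma = stats.gamma.sf(Q, a=shape, scale=scale)
print(Q, p_chi2, p_gamma)
```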
NASA Technical Reports Server (NTRS)
Shykoff, Barbara E.; Swanson, Harvey T.
1987-01-01
A new method for correction of mass spectrometer output signals is described. Response-time distortion is reduced independently of any model of mass spectrometer behavior. The delay of the system is found first from the cross-correlation function of a step change and its response. A two-sided time-domain digital correction filter (deconvolution filter) is generated next from the same step response data using a regression procedure. Other data are corrected using the filter and delay. The mean squared error between a step response and a step is reduced considerably more after the use of a deconvolution filter than after the application of a second-order model correction. O2 consumption and CO2 production values calculated from data corrupted by a simulated dynamic process return to near the uncorrupted values after correction. Although a clean step response or the ensemble average of several responses contaminated with noise is needed for the generation of the filter, random noise of magnitude not above 0.5 percent added to the response to be corrected does not impair the correction severely.
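A simplified numerical sketch of the core idea (fitting a two-sided FIR correction filter to step-response data by least squares); the filter length and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_deconvolution_filter(step_response, ideal_step, half_len=10):
    """Fit a two-sided FIR filter h (length 2*half_len+1) so that the measured
    step response, filtered by h, best matches the ideal step (least squares)."""
    n = len(step_response)
    taps = 2 * half_len + 1
    padded = np.pad(step_response, half_len)          # zero-pad the edges
    # Row t holds the measured response at t+half_len .. t-half_len,
    # matching a two-sided convolution centered on sample t.
    A = np.array([padded[t:t + taps][::-1] for t in range(n)])
    h, *_ = np.linalg.lstsq(A, ideal_step, rcond=None)
    return h

def apply_filter(signal, h):
    """Apply the two-sided correction filter ('same' length output)."""
    return np.convolve(signal, h, mode="same")
```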
NASA Astrophysics Data System (ADS)
Zhou, Yali; Zhang, Qizhi; Yin, Yixin
2015-05-01
In this paper, active control of impulsive noise with a symmetric α-stable (SαS) distribution is studied. A general step-size normalized filtered-x Least Mean Square (FxLMS) algorithm is developed based on an analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm requires neither parameter selection and threshold estimation nor cost-function selection and complex gradient computation. Computer simulations suggest that the proposed algorithm is effective for attenuating SαS impulsive noise, and the algorithm was then implemented in an experimental ANC system. Experimental results show that the proposed scheme performs well for SαS impulsive noise attenuation.
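The sketch below is only a schematic of the general idea of Gaussian step-size normalization inside an FxLMS loop; it is not the authors' algorithm. It assumes a known secondary-path impulse response `s`, uses the common simplification of computing the error from the filtered reference, and shrinks the step for high-energy (impulsive) reference buffers via a Gaussian factor, an assumption on my part.

```python
import numpy as np

def gaussian_normalized_fxlms(x, d, s, L=32, mu=0.01, sigma=1.0):
    """Schematic FxLMS update with a Gaussian step-size normalization.

    x: reference noise signal, d: disturbance at the error sensor,
    s: assumed-known secondary-path impulse response, L: filter length.
    """
    w = np.zeros(L)                          # adaptive control filter weights
    xs = np.convolve(x, s)[:len(x)]          # filtered-x: reference through secondary path
    e = np.zeros(len(x))
    for n in range(L, len(x)):
        xsbuf = xs[n - L + 1:n + 1][::-1]    # filtered-reference buffer (most recent first)
        e[n] = d[n] - w @ xsbuf              # residual noise (simplified error model)
        # Gaussian normalization: impulsive, high-energy buffers get a small
        # effective step size, which protects the weights from divergence.
        g = np.exp(-(xsbuf @ xsbuf) / (2.0 * sigma**2 * L))
        w += mu * g * e[n] * xsbuf
    return w, e
```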
Using Curved Crystals to Study Terrace-Width Distributions.
NASA Astrophysics Data System (ADS)
Einstein, Theodore L.
Recent experiments on curved crystals of noble and late transition metals (Ortega and Juurlink groups) have renewed interest in terrace width distributions (TWD) for vicinal surfaces. Thus, it is timely to discuss refinements of TWD analysis that are absent from the standard reviews. Rather than by Gaussians, TWDs are better described by the generalized Wigner surmise, with a power-law rise and a Gaussian decay, thereby including effects evident for weak step repulsion: skewness and peak shifts down from the mean spacing. Curved crystals allow analysis of several mean spacings with the same substrate, so that one can check the scaling with the mean width. This is important since such scaling confirms well-established theory. Failure to scale also can provide significant insights. Complicating factors can include step touching (local double-height steps), oscillatory step interactions mediated by metallic (but not topological) surface states, short-range corrections to the inverse-square step repulsion, and accounting for the offset between adjacent layers of almost all surfaces. We discuss how to deal with these issues. For in-plane misoriented steps there are formulas to describe the stiffness but not yet the strength of the elastic interstep repulsion. Supported in part by NSF-CHE 13-05892.
Monitoring and evaluating civil structures using measured vibration
NASA Astrophysics Data System (ADS)
Straser, Erik G.; Kiremidjian, Anne S.
1996-04-01
The need for a rapid assessment of the state of critical and conventional civil structures, such as bridges, control centers, airports, and hospitals, among many, has been amply demonstrated during recent natural disasters. Research is underway at Stanford University to develop a state-of-the-art automated damage monitoring system for long term and extreme event monitoring based on both ambient and forced response measurements. Such research requires a multi-disciplinary approach harnessing the talents and expertise of civil, electrical, and mechanical engineering to arrive at a novel hardware and software solution. Recent advances in silicon micro-machining and microprocessor design allow for the economical integration of sensing, processing, and communication components. Coupling these technological advances with parameter identification algorithms allows for the realization of extreme event damage monitoring systems for civil structures. This paper addresses the first steps toward the development of a near real-time damage diagnostic and monitoring system based on structural response to extreme events. Specifically, micro-electro-mechanical- structures (MEMS) and microcontroller embedded systems (MES) are demonstrated to be an effective platform for the measurement and analysis of civil structures. Experimental laboratory tests with small scale model specimens and a preliminary sensor module are used to evaluate hardware and obtain structural response data from input accelerograms. A multi-step analysis procedure employing ordinary least squares (OLS), extended Kalman filtering (EKF), and a substructuring approach is conducted to extract system characteristics of the model. Results from experimental tests and system identification (SI) procedures as well as fundamental system design issues are presented.
Sartori, Neimar; Stolf, Sheila C; Silva, Silvana B; Lopes, Guilherme C; Carrilho, Marcela
2013-12-01
The aim of this clinical study was to evaluate the long-term clinical performance of non-carious Class V restorations with and without application of chlorhexidine digluconate to acid-etched dentine. After approval by the Ethics and Informed Consent Committee, 70 non-carious cervical lesions were selected and randomly assigned to two groups, according to a split-mouth design. The control group was restored with a two-step etch-and-rinse adhesive (Adper Single Bond 2) following the manufacturer's instructions, whereas in the experimental group a 2% chlorhexidine digluconate solution was applied to acid-etched dentine for 30 s after etching and prior to the adhesive application. All lesions were restored with a nanofilled composite resin (Filtek Supreme XT) and polymerized with a light-curing unit operating at 600 mW/cm². Clinical performance was recorded after 1 week and 6, 12, and 36 months using modified Ryge/USPHS criteria in terms of retention, marginal discoloration, marginal integrity, post-operative sensitivity, and secondary caries incidence. Data were analyzed using chi-square, Fisher's exact, and McNemar tests (α=.05). After 36 months the control group showed a success rate of 88% compared with 76% in the experimental group; however, no statistically significant difference between them was found (p=.463). Moreover, no statistical differences were observed between the two groups in the criteria of post-operative sensitivity, marginal discoloration, marginal integrity, and secondary caries incidence. The addition of a 2% chlorhexidine digluconate conditioning step does not improve the clinical durability of adhesive restorations. Copyright © 2013 Elsevier Ltd. All rights reserved.
Pipeline for effective denoising of digital mammography and digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Borges, Lucas R.; Bakic, Predrag R.; Foi, Alessandro; Maidment, Andrew D. A.; Vieira, Marcelo A. C.
2017-03-01
Denoising can be used as a tool to enhance image quality and enforce low radiation doses in X-ray medical imaging. The effectiveness of denoising techniques relies on the validity of the underlying noise model. In full-field digital mammography (FFDM) and digital breast tomosynthesis (DBT), calibration steps like the detector offset and flat-fielding can affect some assumptions made by most denoising techniques. Furthermore, quantum noise found in X-ray images is signal-dependent and can only be treated by specific filters. In this work we propose a pipeline for FFDM and DBT image denoising that considers the calibration steps and simplifies the modeling of the noise statistics through variance-stabilizing transformations (VST). The performance of a state-of-the-art denoising method was tested with and without the proposed pipeline. To evaluate the method, objective metrics such as the normalized root mean square error (N-RMSE), noise power spectrum, modulation transfer function (MTF) and the frequency signal-to-noise ratio (SNR) were analyzed. Preliminary tests show that the pipeline improves denoising. When the pipeline is not used, bright pixels of the denoised image are under-filtered and dark pixels are over-smoothed due to the assumption of a signal-independent Gaussian model. The pipeline improved denoising up to 20% in terms of spatial N-RMSE and up to 15% in terms of frequency SNR. Besides improving the denoising, the pipeline does not increase signal smoothing significantly, as shown by the MTF. Thus, the proposed pipeline can be used with state-of-the-art denoising techniques to improve the quality of DBT and FFDM images.
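An illustrative sketch of the generic VST pipeline described above, under the assumption of Poisson-dominated quantum noise: calibration (offset subtraction and flat-fielding), an Anscombe variance-stabilizing transform, denoising in the stabilized domain, and the inverse transform. The Anscombe transform and the Gaussian filter are stand-ins chosen by me, not necessarily the paper's exact VST or denoiser, and the parameters are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vst_denoise(raw, offset, flat, gain=1.0, sigma=1.0):
    """Generic VST denoising pipeline sketch for X-ray images.

    raw: raw detector image; offset: dark/offset image; flat: flat-field image;
    gain: conversion factor to approximate photon counts; sigma: strength of
    the stand-in Gaussian denoiser.
    """
    # 1) Calibration: offset subtraction and flat-fielding
    corrected = (raw - offset) / np.maximum(flat - offset, 1e-6)
    counts = np.maximum(corrected * gain, 0.0)

    # 2) Variance-stabilizing (Anscombe) transform: Poisson -> ~unit-variance Gaussian
    stabilized = 2.0 * np.sqrt(counts + 3.0 / 8.0)

    # 3) Denoise in the stabilized domain (placeholder for a state-of-the-art filter)
    denoised = gaussian_filter(stabilized, sigma=sigma)

    # 4) Inverse transform (simple algebraic inverse; exact unbiased inverses exist)
    restored_counts = (denoised / 2.0) ** 2 - 3.0 / 8.0
    return restored_counts / gain
```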
Single-Stage Step up/down Driver for Permanent-Magnet Synchronous Machines
NASA Astrophysics Data System (ADS)
Chen, T. R.; Juan, Y. L.; Huang, C. Y.; Kuo, C. T.
2017-11-01
The two-stage circuit composed of a step up/down dc converter and a three-phase voltage source inverter is usually adopted as the electric vehicle's motor driver. This conventional topology is comparatively complicated, and the additional power loss resulting from the two power-conversion stages also lowers efficiency. A single-stage step up/down driver for a brushless DC (BLDC) permanent-magnet synchronous motor is proposed in this study. The number of components and the circuit complexity are reduced. Low-frequency six-step square-wave control is used to reduce the switching losses. In the proposed topology, only one active switch is gated with a high-frequency PWM signal for adjusting the rotation speed. The rotor position signals are fed back to calculate the motor speed for digital closed-loop control in an MCU. A 600 W prototype circuit is constructed to drive a BLDC motor with a rated speed of 3000 rpm, and speed control over six sections is demonstrated.
Brown, A M
2001-06-01
The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
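The paper's method is built on Excel's SOLVER; the brief Python sketch below is an analogous iterative least-squares fit of a user-defined function y=f(x) using scipy, offered only as a comparison point. The model and data values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# User-defined model, e.g. a saturation (Michaelis-Menten-type) curve
def model(x, vmax, km):
    return vmax * x / (km + x)

# Illustrative data (would normally come from an experiment)
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([0.9, 1.6, 2.4, 3.1, 3.6, 3.8])

# Iterative least-squares fit, analogous to SOLVER minimizing the
# sum of squared residuals between data and function
params, cov = curve_fit(model, x, y, p0=[4.0, 2.0])
residuals = y - model(x, *params)
print("fitted parameters:", params)
print("sum of squared residuals:", np.sum(residuals**2))
```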
[Application of ordinary Kriging method in entomologic ecology].
Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong
2003-01-01
Geostatistics is a statistical method based on regionalized variables that uses the variogram as a tool to analyze the spatial structure and patterns of organisms. When simulating the variogram over a large range, an optimal fit cannot always be obtained directly, so an interactive (human-computer dialogue) procedure can be used to optimize the parameters of the spherical models. In this paper, that method and weighted polynomial regression were used to fit a one-step spherical model, a two-step spherical model, and a linear function model, and the available nearby samples were used in the ordinary Kriging procedure, which provides the best linear unbiased estimate under the unbiasedness constraint. The sums of squared deviations between the estimated and measured values for the different theoretical models were calculated, and the corresponding graphs are shown. The simulation based on the two-step spherical model was the best, and the one-step spherical model was better than the linear function model.
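An illustrative sketch (not the authors' code) of the two ingredients described: a spherical variogram model and the ordinary kriging system, whose solution yields the best linear unbiased estimate subject to the weights-sum-to-one (unbiasedness) constraint. Parameter names such as `nugget`, `sill`, and `a_range` are the usual variogram conventions.

```python
import numpy as np

def spherical_variogram(h, nugget, sill, a_range):
    """Spherical variogram model gamma(h), with gamma(0) = 0 by convention."""
    h = np.asarray(h, dtype=float)
    inside = nugget + (sill - nugget) * (1.5 * h / a_range - 0.5 * (h / a_range) ** 3)
    g = np.where(h < a_range, inside, sill)
    return np.where(h > 0, g, 0.0)

def ordinary_kriging(coords, values, target, nugget, sill, a_range):
    """Ordinary kriging estimate at `target` from nearby samples."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_variogram(d, nugget, sill, a_range)
    A[n, n] = 0.0                                    # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = spherical_variogram(np.linalg.norm(coords - target, axis=1),
                                nugget, sill, a_range)
    sol = np.linalg.solve(A, b)                      # kriging weights + multiplier
    weights = sol[:n]                                # weights sum to one (unbiasedness)
    return weights @ values
```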
NASA Technical Reports Server (NTRS)
Rutledge, Charles K.
1988-01-01
The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
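For reference, a brief sketch (not the report's code) of the standard chi-square based confidence interval for a power spectral estimate whose equivalent number of degrees of freedom is known; the numbers in the example are placeholders.

```python
from scipy import stats

def psd_confidence_interval(psd_estimate, dof, confidence=0.95):
    """Chi-square based confidence interval for a spectral estimate.

    For an estimate S_hat with `dof` equivalent degrees of freedom,
    dof * S_hat / S is approximately chi-square distributed with dof
    degrees of freedom, which gives the interval below for the true S.
    """
    alpha = 1.0 - confidence
    lower = dof * psd_estimate / stats.chi2.ppf(1.0 - alpha / 2.0, dof)
    upper = dof * psd_estimate / stats.chi2.ppf(alpha / 2.0, dof)
    return lower, upper

# Example: an estimate with 32 equivalent degrees of freedom
print(psd_confidence_interval(1.0e-3, dof=32))
```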
Plowes, Nicola J.R; Adams, Eldridge S
2005-01-01
Lanchester's models of attrition describe casualty rates during battles between groups as functions of the numbers of individuals and their fighting abilities. Originally developed to describe human warfare, Lanchester's square law has been hypothesized to apply broadly to social animals as well, with important consequences for their aggressive behaviour and social structure. According to the square law, the fighting ability of a group is proportional to the square of the number of individuals, but rises only linearly with fighting ability of individuals within the group. By analyzing mortality rates of fire ants (Solenopsis invicta) fighting in different numerical ratios, we provide the first quantitative test of Lanchester's model for a non-human animal. Casualty rates of fire ants were not consistent with the square law; instead, group fighting ability was an approximately linear function of group size. This implies that the relative numbers of casualties incurred by two fighting groups are not strongly affected by relative group sizes and that battles do not disproportionately favour group size over individual prowess. PMID:16096093
Electron transport in stepped Bi2Se3 thin films
NASA Astrophysics Data System (ADS)
Bauer, S.; Bobisch, C. A.
2017-08-01
We analyse the electron transport in a 16 quintuple layer thick stepped Bi2Se3 film grown on Si(1 1 1) by means of scanning tunnelling potentiometry (STP) and multi-point probe measurements. Scanning tunnelling microscopy images reveal that the local structure of the Bi2Se3 film is dominated by terrace steps and domain boundaries. From a microscopic study on the nm scale by STP, we find a mostly linear gradient of the voltage on the Bi2Se3 terraces which is interrupted by voltage drops at the position of the domain boundaries. The voltage drops indicate that the domain boundaries are scatterers for the electron transport. Macroscopic resistance measurements (2PP and in-line 4PP measurement) on the µm scale support the microscopic results. An additional rotational square 4PP measurement shows an electrical anisotropy of the sheet conductance parallel and perpendicular to the Bi2Se3 steps of about 10%. This is a result of the anisotropic step distribution at the stepped Bi2Se3 surface while domain boundaries are distributed isotropically. The determined value of the conductivity of the Bi2Se3 steps of about 1000 S cm-1 verifies the value of an earlier STP study.
Estimation of genomic breeding values for milk yield in UK dairy goats.
Mucha, S; Mrode, R; MacLaren-Lee, I; Coffey, M; Conington, J
2015-11-01
The objective of this study was to estimate genomic breeding values for milk yield in crossbred dairy goats. The research was based on data provided by 2 commercial goat farms in the UK comprising 590,409 milk yield records on 14,453 dairy goats kidding between 1987 and 2013. The population was created by crossing 3 breeds: Alpine, Saanen, and Toggenburg. In each generation the best performing animals were selected for breeding, and as a result, a synthetic breed was created. The pedigree file contained 30,139 individuals, of which 2,799 were founders. The data set contained test-day records of milk yield, lactation number, farm, age at kidding, and year and season of kidding. Data on milk composition was unavailable. In total 1,960 animals were genotyped with the Illumina 50K caprine chip. Two methods for estimation of genomic breeding value were compared-BLUP at the single nucleotide polymorphism level (BLUP-SNP) and single-step BLUP. The highest accuracy of 0.61 was obtained with single-step BLUP, and the lowest (0.36) with BLUP-SNP. Linkage disequilibrium (r(2), the squared correlation of the alleles at 2 loci) at 50 kb (distance between 2 SNP) was 0.18. This is the first attempt to implement genomic selection in UK dairy goats. Results indicate that the single-step method provides the highest accuracy for populations with a small number of genotyped individuals, where the number of genotyped males is low and females are predominant in the reference population. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Colorimetric calibration of wound photography with off-the-shelf devices
NASA Astrophysics Data System (ADS)
Bala, Subhankar; Sirazitdinova, Ekaterina; Deserno, Thomas M.
2017-03-01
Digital cameras are nowadays often used for photographic documentation in the medical sciences. However, color reproducibility of the same object suffers under different illuminations and lighting conditions. This variation in color representation is problematic when the images are used for segmentation and measurements based on color thresholds. In this paper, motivated by photographic follow-up of chronic wounds, we assess the impact of (i) gamma correction, (ii) white balancing, (iii) background unification, and (iv) reference card-based color correction. Automatic gamma correction and white balancing are applied to support the calibration procedure, where gamma correction is a nonlinear color transform. For unevenly illuminated images, non-uniform illumination correction is applied. In the last step, we apply colorimetric calibration using a reference color card of 24 patches with known colors. A lattice detection algorithm is used for locating the card. The least squares algorithm is applied for affine color calibration in the RGB model. We have tested the algorithm on images with seven different types of illumination: with and without flash using three different off-the-shelf cameras including smartphones. We analyzed the spread of the resulting color values of selected color patches before and after applying the calibration. Additionally, we checked the individual contribution of the different steps of the whole calibration process. Using all steps, we were able to achieve a maximum of 81% reduction in the standard deviation of color patch values in the resulting images compared to the original images. This supports manual as well as automatic quantitative wound assessments with off-the-shelf devices.
Testing for independence in J×K contingency tables with complex sample survey data.
Lipsitz, Stuart R; Fitzmaurice, Garrett M; Sinha, Debajyoti; Hevelone, Nathanael; Giovannucci, Edward; Hu, Jim C
2015-09-01
The test of independence of row and column variables in a (J×K) contingency table is a widely used statistical test in many areas of application. For complex survey samples, use of the standard Pearson chi-squared test is inappropriate due to correlation among units within the same cluster. Rao and Scott (1981, Journal of the American Statistical Association 76, 221-230) proposed an approach in which the standard Pearson chi-squared statistic is multiplied by a design effect to adjust for the complex survey design. Unfortunately, this test fails to exist when one of the observed cell counts equals zero. Even with the large samples typical of many complex surveys, zero cell counts can occur for rare events, small domains, or contingency tables with a large number of cells. Here, we propose Wald and score test statistics for independence based on weighted least squares estimating equations. In contrast to the Rao-Scott test statistic, the proposed Wald and score test statistics always exist. In simulations, the score test is found to perform best with respect to type I error. The proposed method is motivated by, and applied to, post surgical complications data from the United States' Nationwide Inpatient Sample (NIS) complex survey of hospitals in 2008. © 2015, The International Biometric Society.
Quality of semen: a 6-year single experience study on 5680 patients.
Cozzolino, Mauro; Coccia, Maria E; Picone, Rita
2018-02-08
The aim of our study was to evaluate semen quality in a large sample of the general healthy population living in Italy, in order to identify possible variables that could influence several parameters of the spermiogram. We conducted a cross-sectional study from February 2010 to March 2015, collecting semen samples from the general population. Semen analysis was performed according to the WHO guidelines. The collected data were entered into a database and processed using the software Stata 12. The Mann-Whitney test was used to assess the relationship of dichotomous variables with the parameters of the spermiogram, and the Kruskal-Wallis test for variables with more than two categories. We also used robust regression and Spearman correlation to analyze the relationship between age and the parameters. We collected 5680 semen samples. The mean age of our patients was 41.4 years. The Mann-Whitney test showed that citizenship (coded as "Italian/Foreign") influences some parameters (pH, vitality, number of spermatozoa, sperm concentration), with worse results for the Italian group. The Kruskal-Wallis test showed that the individual nationality influences pH, volume, sperm motility A-B-C-D, vitality, morphology, number of spermatozoa, and sperm concentration. Robust regression showed a relationship between age and several parameters: volume (p=0.04, R²=0.0007, β=-0.06); sperm motility A (p<0.01, R²=0.0051, β=0.02); sperm motility B (p<0.01, R²=0.02, β=-0.35); sperm motility C (p<0.01, R²=0.01, β=0.12); sperm motility D (p<0.01, R²=0.006, β=0.2); vitality (p<0.01, R²=0.01, β=-0.32); sperm concentration (p=0.01, R²=0.001, β=0.19). Our patients' spermiogram results were somewhat better than the standard reference values. Our study showed that the country of origin could be a factor influencing several parameters of the spermiogram in the healthy population and, through robust regression, confirmed a clear correlation between age and these parameters.
Blockage Testing in the NASA Glenn 225 Square Centimeter Supersonic Wind Tunnel
NASA Technical Reports Server (NTRS)
Sevier, Abigail; Davis, David; Schoenenberger, Mark
2017-01-01
A feasibility study is in progress at NASA Glenn Research Center to implement a magnetic suspension and balance system in the 225 sq cm Supersonic Wind Tunnel for the purpose of testing the dynamic stability of blunt bodies. An important area of investigation in this study was determining the optimum size of the model and the iron spherical core inside of it. In order to minimize the required magnetic field and thus the size of the magnetic suspension system, it was determined that the test model should be as large as possible. Blockage tests were conducted to determine the largest possible model that would allow for tunnel start at Mach 2, 2.5, and 3. Three different forebody model geometries were tested at different Mach numbers, axial locations in the tunnel, and in both a square and axisymmetric test section. Experimental results showed that different model geometries produced more varied results at higher Mach Numbers. It was also shown that testing closer to the nozzle allowed larger models to start compared with testing near the end of the test section. Finally, allowable model blockage was larger in the axisymmetric test section compared with the square test section at the same Mach number. This testing answered key questions posed by the feasibility study and will be used in the future to dictate model size and performance required from the magnetic suspension system.
Computerized Color Vision Test Based Upon Postreceptoral Channel Sensitivities
E, Miyahara; J, Pokorny; VC, Smith; E, Szewczyk; J, McCartin; K, Caldwell; A, Klerer
2006-01-01
An automated, computerized color vision test was designed to diagnose congenital red-green color vision defects. The observer viewed a yellow appearing CRT screen. The principle was to measure increment thresholds for three different chromaticities, the background yellow, a red, and a green chromaticity. Spatial and temporal parameters were chosen to favor parvocellular pathway mediation of thresholds. Thresholds for the three test stimuli were estimated by 4AFC, randomly interleaved staircases. Four 1.5°, 4.2 cd/m2 square pedestals were arranged as a 2 x 2 matrix around the center of the display with 15’ separations. A trial incremented all four squares by 1.0 cd/m2 for 133 msec. One randomly chosen square included an extra increment of a test chromaticity. The observer identified the different appearing square using the cursor. Administration time was ~5 minutes. Normal trichromats showed clear Sloan notch as defined by log (ΔY/ΔR), whereas red-green color defectives generally showed little or no Sloan notch, indicating that their thresholds were mediated by their luminance system, not by the chromatic system. Data from 107 normal trichromats showed a mean Sloan notch of 0.654 (SD = 0.123). Among 16 color vision defectives tested (2 protanopes, 1 protanomal, 6 deuteranopes, 7 deuteranomals), the Sloan notch was between −0.062 and 0.353 for deutans and was < −0.10 for protans. A sufficient number of color defective observers have not yet been tested to determine whether the test can reliably discriminate between protans and deutans. Nevertheless, the current data show that the test can work as a quick diagnostic procedure (functional trichromatism or dichromatism) of red-green color vision defect. PMID:15518231
Medical training fails to prepare providers to care for patients with chronic hepatitis B infection
Chao, Stephanie D; Wang, Bing-Mei; Chang, Ellen T; Ma, Li; So, Samuel K
2015-01-01
AIM: To investigate physicians’ knowledge including chronic hepatitis B (CHB) diagnosis, screening, and management in various stages of their training. METHODS: A voluntary 20-question survey was administered in Santa Clara County, CA where Asian and Pacific Islanders (API) account for a third of the population. Among the 219 physician participants, there were 63 interns, 60 second-year residents, 26 chief residents and 70 attending physicians. The survey asked questions regarding respondents’ demographics, general hepatitis B virus knowledge questions (i.e., transmission, prevalence, diagnostic testing, prevention, and treatment options), as well as, self-reported practice behavior and confidence in knowledge. RESULTS: Knowledge about screening and managing patients with CHB was poor: only 24% identified the correct tests to screen for CHB, 13% knew the next steps for patients testing positive for CHB, 18% knew the high prevalence rate among API, and 31% knew how to screen for liver cancer. Wald chi-square analysis determined the effect of training level on knowledge; in all cases except for knowledge of liver cancer screening (P = 0.0032), knowledge did not significantly increase with length in residency training or completion of residency. CONCLUSION: Even in a high-risk region, both medical school and residency training have not adequately prepared physicians in the screening and management of CHB. PMID:26078568
Fuzzy control of small servo motors
NASA Technical Reports Server (NTRS)
Maor, Ron; Jani, Yashvant
1993-01-01
To explore the benefits of fuzzy logic and understand the differences between the classical control methods and fuzzy control methods, the Togai InfraLogic applications engineering staff developed and implemented a motor control system for small servo motors. The motor assembly for testing the fuzzy and conventional controllers consists of servo motor RA13M and an encoder with a range of 4096 counts. An interface card was designed and fabricated to interface the motor assembly and encoder to an IBM PC. The fuzzy logic based motor controller was developed using the TILShell and Fuzzy C Development System on an IBM PC. A Proportional-Derivative (PD) type conventional controller was also developed and implemented in the IBM PC to compare the performance with the fuzzy controller. Test cases were defined to include step inputs of 90 and 180 degrees rotation, sine and square wave profiles in the 5 to 20 hertz frequency range, as well as ramp inputs. In this paper we describe our approach to develop a fuzzy as well as a PD controller, provide details of the hardware set-up and test cases, and discuss the performance results. In comparison, the fuzzy logic based controller handles the non-linearities of the motor assembly very well and provides excellent control over a broad range of parameters. Fuzzy technology, as indicated by our results, possesses inherent adaptive features.
Fatone, Stefania; Caldwell, Ryan
2017-01-01
Background: Current transfemoral prosthetic sockets are problematic as they restrict function, lack comfort, and cause residual limb problems. Development of a subischial socket with lower proximal trim lines is an appealing way to address this problem and may contribute to improving quality of life of persons with transfemoral amputation. Objectives: The purpose of this study was to illustrate the use of a new subischial socket in two subjects. Study design: Case series. Methods: Two unilateral transfemoral prosthesis users participated in preliminary socket evaluations comparing functional performance of the new subischial socket to ischial containment sockets. Testing included gait analysis, socket comfort score, and performance-based clinical outcome measures (Rapid-Sit-To-Stand, Four-Square-Step-Test, and Agility T-Test). Results: For both subjects, comfort was better in the subischial socket, while gait and clinical outcomes were generally comparable between sockets. Conclusion: While these evaluations are promising regarding the ability to function in this new socket design, more definitive evaluation is needed. Clinical relevance: Using gait analysis, socket comfort score and performance-based outcome measures, use of the Northwestern University Flexible Subischial Vacuum Socket was evaluated in two transfemoral prosthesis users. Socket comfort improved for both subjects with comparable function compared to ischial containment sockets. PMID:28132589
Breast cancer treatment and ethnicity in British Columbia, Canada
2010-01-01
Background Racial and ethnic disparities in breast cancer incidence, stage at diagnosis, survival and mortality are well documented; but few studies have reported on disparities in breast cancer treatment. This paper compares the treatment received by breast cancer patients in British Columbia (BC) for three ethnic groups and three time periods. Values for breast cancer treatments received in the BC general population are provided for reference. Methods Information on patients, tumour characteristics and treatment was obtained from BC Cancer Registry (BCCR) and BC Cancer Agency (BCCA) records. Treatment among ethnic groups was analyzed by stage at diagnosis and time period at diagnosis. Differences among the three ethnic groups were tested using chi-square tests, Fisher exact tests and a multivariate logistic model. Results There was no significant difference in overall surgery use for stage I and II disease between the ethnic groups, however there were significant differences when surgery with and without radiation were considered separately. These differences did not change significantly with time. Treatment with chemotherapy and hormone therapy did not differ among the minority groups. Conclusion The description of treatment differences is the first step to guiding interventions that reduce ethnic disparities. Specific studies need to examine reasons for the observed differences and the influence of culture and beliefs. PMID:20406489
Square2 - A Web Application for Data Monitoring in Epidemiological and Clinical Studies
Schmidt, Carsten Oliver; Krabbe, Christine; Schössow, Janka; Albers, Martin; Radke, Dörte; Henke, Jörg
2017-01-01
Valid scientific inferences from epidemiological and clinical studies require high data quality. Data-generating departments therefore aim to detect data irregularities as early as possible in order to guide quality management processes. In addition, after the completion of data collections the obtained data quality must be evaluated. This can be challenging in complex studies due to a wide scope of examinations, numerous study variables, multiple examiners, devices, and examination centers. This paper describes a Java EE web application used to monitor and evaluate data quality in institutions with complex and multiple studies, named Square2. It uses the Java libraries Apache MyFaces 2, extended by BootsFaces for layout and style. RServe and REngine manage calls to R server processes. All study data and metadata are stored in PostgreSQL. R is the statistics backend and LaTeX is used for the generation of print-ready PDF reports. A GUI manages the entire workflow. Square2 covers all steps in the data monitoring workflow, including the setup of studies and their structure, the handling of metadata for data monitoring purposes, selection of variables, upload of data, statistical analyses, and the generation as well as inspection of quality reports. To take into account data protection issues, Square2 comprises an extensive user rights and roles concept.
NASA Astrophysics Data System (ADS)
Liu, Fei; He, Yong
2008-02-01
Visible and near infrared (Vis/NIR) transmission spectroscopy and chemometric methods were utilized to predict the pH values of cola beverages. Five varieties of cola were prepared, and 225 samples (45 for each variety) were selected for the calibration set, while 75 samples (15 for each variety) were used for the validation set. Savitzky-Golay smoothing and standard normal variate (SNV) followed by first-derivative were used as the pre-processing methods. Partial least squares (PLS) analysis was employed to extract the principal components (PCs), which were used as the inputs of the least squares-support vector machine (LS-SVM) model according to their accumulative reliabilities. Then LS-SVM with a radial basis function (RBF) kernel and a two-step grid search technique was applied to build the regression model, with PLS regression as a comparison. The correlation coefficient (r), root mean square error of prediction (RMSEP) and bias were 0.961, 0.040 and 0.012 for PLS, and 0.975, 0.031 and 4.697×10⁻³ for LS-SVM, respectively. Both methods obtained a satisfying precision. The results indicated that Vis/NIR spectroscopy combined with chemometric methods could be applied as an alternative way for the prediction of the pH of cola beverages.
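A schematic sketch of the modelling chain described above (PLS latent variables feeding a grid-searched kernel regressor). Since LS-SVM is not available in scikit-learn, an RBF support vector regressor stands in for it, and the number of components, hyperparameter grid, and variable names are placeholders rather than the authors' settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error

def fit_spectra_model(X_cal, y_cal, X_val, y_val, n_pcs=8):
    # Extract PLS latent variables (scores) from the pre-processed spectra
    pls = PLSRegression(n_components=n_pcs).fit(X_cal, y_cal)
    T_cal, T_val = pls.transform(X_cal), pls.transform(X_val)

    # Grid search over RBF kernel parameters (stand-in for the two-step
    # grid search used to tune the LS-SVM hyperparameters)
    grid = GridSearchCV(SVR(kernel="rbf"),
                        {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
                        cv=5)
    grid.fit(T_cal, np.ravel(y_cal))

    y_pred = grid.predict(T_val)
    rmsep = np.sqrt(mean_squared_error(y_val, y_pred))
    r = np.corrcoef(np.ravel(y_val), y_pred)[0, 1]
    return r, rmsep
```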
Multilayer DNA Origami Packed on a Square Lattice
Ke, Yonggang; Douglas, Shawn M.; Liu, Minghui; Sharma, Jaswinder; Cheng, Anchi; Leung, Albert; Liu, Yan; Shih, William M.; Yan, Hao
2009-01-01
Molecular self-assembly using DNA as a structural building block has proven to be an efficient route to the construction of nanoscale objects and arrays of increasing complexity. Using the remarkable “scaffolded DNA origami” strategy, Rothemund demonstrated that a long single-stranded DNA from a viral genome (M13) can be folded into a variety of custom two-dimensional (2D) shapes using hundreds of short synthetic DNA molecules as staple strands. More recently, we generalized a strategy to build custom-shaped, three-dimensional (3D) objects formed as pleated layers of helices constrained to a honeycomb lattice, with precisely controlled dimensions ranging from 10 to 100 nm. Here we describe a more compact design for 3D origami, with layers of helices packed on a square lattice, that can be folded successfully into structures of designed dimensions in a one-step annealing process, despite the increased density of DNA helices. A square lattice provides a more natural framework for designing rectangular structures, the option for a more densely packed architecture, and the ability to create surfaces that are more flat than is possible with the honeycomb lattice. Thus enabling the design and construction of custom 3D shapes from helices packed on a square lattice provides a general foundational advance for increasing the versatility and scope of DNA nanotechnology. PMID:19807088
Online measurement of urea concentration in spent dialysate during hemodialysis.
Olesberg, Jonathon T; Arnold, Mark A; Flanigan, Michael J
2004-01-01
We describe online optical measurements of urea in the effluent dialysate line during regular hemodialysis treatment of several patients. Monitoring urea removal can provide valuable information about dialysis efficiency. Spectral measurements were performed with a Fourier-transform infrared spectrometer equipped with a flow-through cell. Spectra were recorded across the 5000-4000 cm(-1) (2.0-2.5 microm) wavelength range at 1-min intervals. Savitzky-Golay filtering was used to remove baseline variations attributable to the temperature dependence of the water absorption spectrum. Urea concentrations were extracted from the filtered spectra by use of partial least-squares regression and the net analyte signal of urea. Urea concentrations predicted by partial least-squares regression matched concentrations obtained from standard chemical assays with a root mean square error of 0.30 mmol/L (0.84 mg/dL urea nitrogen) over an observed concentration range of 0-11 mmol/L. The root mean square error obtained with the net analyte signal of urea was 0.43 mmol/L with a calibration based only on a set of pure-component spectra. The error decreased to 0.23 mmol/L when a slope and offset correction were used. Urea concentrations can be continuously monitored during hemodialysis by near-infrared spectroscopy. Calibrations based on the net analyte signal of urea are particularly appealing because they do not require a training step, as do statistical multivariate calibration procedures such as partial least-squares regression.
Using High Spatial Resolution Digital Imagery
2005-02-01
… digital base maps were high-resolution U.S. Geological Survey (USGS) Digital Orthophoto Quarter Quadrangles (DOQQ). The Root Mean Square Errors (RMSE) … The next step was to assign real-world coordinates to the linear image. The mosaics were geometrically registered to the panchromatic orthophotos … a useable thematic map from high-resolution imagery. A more practical approach may be to divide the Refuge into a set of smaller areas, or tiles.
McMinn, David; Rowe, David A; Murtagh, Shemane; Nelson, Norah M
2012-05-01
To investigate the effect of a school-based intervention called Travelling Green (TG) on children's walking to and from school and total daily physical activity. A quasi-experiment with 166 Scottish children (8-9 years) was conducted in 2009. One group (n=79) received TG and another group (n=87) acted as a comparison. The intervention lasted 6 weeks and consisted of educational lessons and goal-setting tasks. Steps and MVPA (daily, a.m. commute, p.m. commute, and total commute) were measured for 5 days pre- and post-intervention using accelerometers. Mean steps (daily, a.m., p.m., and total commute) decreased from pre- to post-intervention in both groups (TG by 901, 49, 222, and 271 steps/day and comparison by 2528, 205, 120, and 325 steps/day, respectively). No significant group by time interactions were found for a.m., p.m., and total commuting steps. A medium (partial eta squared=0.09) and significant (p<0.05) group by time interaction was found for total daily steps. MVPA results were similar to step results. TG has little effect on walking to and from school. However, for total daily steps and daily MVPA, TG results in a smaller seasonal decrease than for children who do not receive the intervention. Copyright © 2012 Elsevier Inc. All rights reserved.
Capillary fluctuations of surface steps: An atomistic simulation study for the model Cu(111) system
NASA Astrophysics Data System (ADS)
Freitas, Rodrigo; Frolov, Timofey; Asta, Mark
2017-10-01
Molecular dynamics (MD) simulations are employed to investigate the capillary fluctuations of steps on the surface of a model metal system. The fluctuation spectrum, characterized by the wave number (k) dependence of the mean squared capillary-wave amplitudes and associated relaxation times, is calculated for 〈110〉 and 〈112〉 steps on the {111} surface of elemental copper near the melting temperature of the classical potential model considered. Step stiffnesses are derived from the MD results, yielding values from the largest system sizes of (37 ± 1) meV/Å for the different line orientations, implying that the stiffness is isotropic within the statistical precision of the calculations. The fluctuation lifetimes are found to vary by approximately four orders of magnitude over the range of wave numbers investigated, displaying a k dependence consistent with kinetics governed by step-edge mediated diffusion. The values for step stiffness derived from these simulations are compared to step free energies for the same system and temperature obtained in a recent MD-based thermodynamic-integration (TI) study [Freitas, Frolov, and Asta, Phys. Rev. B 95, 155444 (2017), 10.1103/PhysRevB.95.155444]. Results from the capillary-fluctuation analysis and TI calculations yield statistically significant differences that are discussed within the framework of statistical-mechanical theories for configurational contributions to step free energies.
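A minimal sketch, assuming the standard equipartition form of the capillary-fluctuation relation, <|A(k)|²> = kB·T / (L·β̃·k²), of how a step stiffness β̃ could be extracted from mean squared Fourier amplitudes; this is the textbook normalization, not necessarily the authors' exact convention, and all names are placeholders.

```python
import numpy as np

KB_EV = 8.617333262e-5   # Boltzmann constant in eV/K

def step_stiffness_from_fluctuations(amp_sq_mean, k, L, T):
    """Extract a step stiffness from capillary-wave amplitudes.

    Assumes equipartition: <|A(k)|^2> = kB*T / (L * stiffness * k^2),
    where L is the step length, k the wave number, T the temperature.
    A least-squares fit of <|A(k)|^2> against 1/k^2 (through the origin)
    gives kB*T/(L*stiffness) as the slope.
    """
    x = 1.0 / np.asarray(k) ** 2
    slope = np.sum(x * amp_sq_mean) / np.sum(x * x)
    return KB_EV * T / (L * slope)    # stiffness in eV per length unit of L and 1/k
```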
Identifying elderly people at risk for cognitive decline by using the 2-step test.
Maruya, Kohei; Fujita, Hiroaki; Arai, Tomoyuki; Hosoi, Toshiki; Ogiwara, Kennichi; Moriyama, Shunnichiro; Ishibashi, Hideaki
2018-01-01
[Purpose] The purpose is to verify the effectiveness of the 2-step test in predicting cognitive decline in elderly individuals. [Subjects and Methods] One hundred eighty-two participants aged over 65 years underwent the 2-step test, cognitive function tests and higher level competence testing. Participants were classified as Robust, <1.3, and <1.1 using criteria regarding the locomotive syndrome risk stage for the 2-step test, variables were compared between groups. In addition, ordered logistic analysis was used to analyze cognitive functions as independent variables in the three groups, using the 2-step test results as the dependent variable, with age, gender, etc. as adjustment factors. [Results] In the crude data, the <1.3 and <1.1 groups were older and displayed lower motor and cognitive functions than did the Robust group. Furthermore, the <1.3 group exhibited significantly lower memory retention than did the Robust group. The 2-step test was related to the Stroop test (β: 0.06, 95% confidence interval: 0.01-0.12). [Conclusion] The finding is that the risk stage of the 2-step test is related to cognitive functions, even at an initial risk stage. The 2-step test may help with earlier detection and implementation of prevention measures for locomotive syndrome and mild cognitive impairment.
Study of Factors Related to Army Delayed-Entry Program Attrition
1985-11-01
… level, gender, and tenure in DEP. Military classification and assignment are determined almost solely on cognitive factors and physical examinations … Appendix tables (excerpted from the table of contents) include chi-square tests for independence between gender and responses to Question 13 for voluntary DEP losses and for DEP accession/voluntary active duty losses.
NASA Astrophysics Data System (ADS)
Stewart, J. B.
2018-02-01
This paper presents experimental data on incident overpressures and the corresponding impulses obtained in the test section of an explosively driven 10° (full angle) conical shock tube. Due to the shock tube's steel walls approximating the boundary conditions seen by a spherical sector cut out of a detonating sphere of energetic material, a 5.3-g pentolite shock tube driver charge produces peak overpressures corresponding to a free-field detonation from an 816-g sphere of pentolite. The four test section geometries investigated in this paper (open air, cylindrical, 10° inscribed square frustum, and 10° circumscribed square frustum) provide a variety of different time histories for the incident overpressures and impulses, with a circumscribed square frustum yielding the best approximation of the estimated blast environment that would have been produced by a free-field detonation.
49 CFR 40.251 - What are the first steps in an alcohol confirmation test?
Code of Federal Regulations, 2010 CFR
2010-10-01
Title 49 (Transportation), 2010-10-01. What are the first steps in an alcohol confirmation test? As the BAT for an alcohol confirmation test, you must follow these steps to begin the confirmation test process: (a) You must carry out a...
Sustainability of the whole-community project '10,000 Steps': a longitudinal study
2012-01-01
Background In the dissemination and implementation literature, there is a dearth of information on the sustainability of community-wide physical activity (PA) programs in general and of the '10,000 Steps' project in particular. This paper reports a longitudinal evaluation of organizational and individual sustainability indicators of '10,000 Steps'. Methods Among project adopters, department heads of 24 public services were surveyed 1.5 years after initially reported project implementation to assess continuation, institutionalization, sustained implementation of intervention components, and adaptations. Barriers and facilitators of project sustainability were explored. Citizens (n = 483) living near the adopting organizations were interviewed to measure maintenance of PA differences between citizens aware and unaware of '10,000 Steps'. Independent-samples t, Mann-Whitney U, and chi-square tests were used to compare organizations for representativeness and individual PA differences. Results Of all organizations, 50% continued '10,000 Steps' (mostly in cycles) and continuation was independent of organizational characteristics. Level of intervention institutionalization was low to moderate on evaluations of routinization and moderate for project saturation. The global implementation score (58%) remained stable and three of nine project components were continued by less than half of organizations (posters, street signs and variants, personalized contact). Considerable independent adaptations of the project were reported (e.g. campaign image). Citizens aware of '10,000 Steps' remained more active during leisure time than those unaware (227 ± 235 and 176 ± 198 min/week, respectively; t = -2.6; p < .05), and reported more household-related (464 ± 397 and 389 ± 346 min/week, respectively; t = -2.2; p < .05) and moderate-intensity-PA (664 ± 424 and 586 ± 408 min/week, respectively; t = -2.0; p < .05). Facilitators of project sustainability included an organizational leader supporting the project, availability of funding or external support, and ready-for-use materials with ample room for adaptation. Barriers included insufficient synchronization between regional and community policy levels and preference for other PA projects. Conclusions '10,000 Steps' could remain sustainable but design, organizational, and contextual barriers need consideration. Sustainability of '10,000 Steps' in organizations can occur in cycles rather than in ongoing projects. Future research should compare sustainability other whole-community PA projects with '10,000 Steps' to contrast sustainability of alternative models of whole-community PA projects. This would allow optimization of project elements and methods to support decisions of choice for practitioners. PMID:22390341
Epidermal segmentation in high-definition optical coherence tomography.
Li, Annan; Cheng, Jun; Yow, Ai Ping; Wall, Carolin; Wong, Damon Wing Kee; Tey, Hong Liang; Liu, Jiang
2015-01-01
Epidermis segmentation is a crucial step in many dermatological applications. Recently, high-definition optical coherence tomography (HD-OCT) has been developed and applied to imaging subsurface skin tissues. In this paper, a novel epidermis segmentation method using HD-OCT is proposed in which the epidermis is segmented by 3 steps: the weighted least square-based pre-processing, the graph-based skin surface detection and the local integral projection-based dermal-epidermal junction detection respectively. Using a dataset of five 3D volumes, we found that this method correlates well with the conventional method of manually marking out the epidermis. This method can therefore serve to effectively and rapidly delineate the epidermis for study and clinical management of skin diseases.
A variable-step-size robust delta modulator.
NASA Technical Reports Server (NTRS)
Song, C. L.; Garodnick, J.; Schilling, D. L.
1971-01-01
Description of an analytically obtained optimum adaptive delta modulator-demodulator configuration. The device utilizes two past samples to obtain a step size which minimizes the mean square error for a Markov-Gaussian source. The optimum system is compared, using computer simulations, with a linear delta modulator and an enhanced Abate delta modulator. In addition, the performance is compared to the rate distortion bound for a Markov source. It is shown that the optimum delta modulator is neither quantization nor slope-overload limited. The highly nonlinear equations obtained for the optimum transmitter and receiver are approximated by piecewise-linear equations in order to obtain system equations which can be transformed into hardware. The derivation of the experimental system is presented.
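As a point of comparison only, the sketch below is a highly simplified delta modulator with a step size adapted from the two most recent output bits (in the spirit of Abate-type adaptation); it is not the optimum configuration derived in the paper, and the growth/shrink factors are arbitrary placeholders.

```python
import numpy as np

def adaptive_delta_modulator(x, step0=0.1, grow=1.5, shrink=0.66):
    """Encode x into a +/-1 bit stream with a step size adapted from the last
    two output bits: equal consecutive bits grow the step (slope overload),
    alternating bits shrink it (granular noise)."""
    bits = np.zeros(len(x), dtype=int)
    est = np.zeros(len(x))                  # decoded (tracking) signal
    step, prev_est, prev_bit = step0, 0.0, 1
    for n, sample in enumerate(x):
        bit = 1 if sample >= prev_est else -1
        if n > 0:
            step *= grow if bit == prev_bit else shrink
        est[n] = prev_est + bit * step
        bits[n] = bit
        prev_est, prev_bit = est[n], bit
    return bits, est

# Example: track a low-frequency sine and report the mean squared error
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 3 * t)
bits, est = adaptive_delta_modulator(signal)
print("mean squared error:", np.mean((signal - est) ** 2))
```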
Particle acceleration in step function shear flows - A microscopic analysis
NASA Technical Reports Server (NTRS)
Jokipii, J. R.; Morfill, G. E.
1990-01-01
The transport of energetic particles in a moving, scattering fluid, which has a large shear in its velocity over a distance small compared with the scattering mean free path is discussed. The analysis is complementary to an earlier paper by Earl, Jokipii, and Morfill (1988), which considered effects of more-gradual shear in the diffusion approximation. The case in which the scattering fluid undergoes a step function change in velocity, in the direction normal to the flow is considered. An analytical, approximate calculation and a Monte Carlo analysis of particle motion are presented. It is found that particles gain energy at a rate proportional to the square of the magnitude of the velocity change.
Transforming helplessness: an approach to the therapy of "stuck" couples.
Fineberg, D E; Walter, S
1989-09-01
Therapists working with couples often find themselves frustrated when they proceed to negotiate conflicts, even when the historical antecedents and psychological dynamics seem well understood. Despite the genuine willingness of each member to negotiate a solution, they remain stuck in demands that the other take the first steps toward change. This article proposes that a crucial step--transforming helplessness--must precede efforts at fostering communication/negotiation skills. A four-square analysis of the situation helps to explain how this stalemate occurs. Also, it points the way to strategies that can transform the couple's attitudes from helpless complaining to empowered, creative action. Three case examples illustrate clinical applications of this approach.
A simple test of association for contingency tables with multiple column responses.
Decady, Y J; Thomas, D R
2000-09-01
Loughin and Scherer (1998, Biometrics 54, 630-637) investigated tests of association in two-way tables when one of the categorical variables allows for multiple-category responses from individual respondents. Standard chi-squared tests are invalid in this case, and they developed a bootstrap test procedure that provides good control of test levels under the null hypothesis. This procedure and some others that have been proposed are computationally involved and are based on techniques that are relatively unfamiliar to many practitioners. In this paper, the methods introduced by Rao and Scott (1981, Journal of the American Statistical Association 76, 221-230) for analyzing complex survey data are used to develop a simple test based on a corrected chi-squared statistic.
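As a rough companion to this abstract, the sketch below runs a standard Pearson chi-square test on a contingency table with scipy and then applies a schematic first-order design-effect correction in the spirit of Rao and Scott. The table counts and the mean design effect d_bar are invented for illustration; this is not the authors' corrected statistic for multiple-response tables.

```python
import numpy as np
from scipy.stats import chi2_contingency, chi2

# 2x3 contingency table of counts (hypothetical data)
table = np.array([[30, 15, 25],
                  [20, 35, 10]])

stat, p, dof, expected = chi2_contingency(table, correction=False)
print(f"Pearson X2 = {stat:.2f}, df = {dof}, p = {p:.4f}")

# A Rao-Scott-style first-order correction divides the statistic by an
# average design effect d_bar (assumed known here); this only illustrates
# the idea of a corrected chi-squared statistic, not the full procedure.
d_bar = 1.4                     # hypothetical mean design effect
stat_corr = stat / d_bar
print("corrected X2:", stat_corr, "p:", chi2.sf(stat_corr, dof))
```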
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
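The sketch below illustrates the general idea of an adaptive step size in an LMS-trained FIR equalizer: the update step starts large for fast convergence and shrinks as the running error power falls. It is a single-channel toy with an assumed two-tap channel, not the paper's MMSE time- or frequency-domain MIMO equalizer.

```python
import numpy as np

def lms_equalizer(rx, ref, n_taps=7, mu0=0.05, mu_min=0.005):
    """Train an FIR equalizer by LMS with a crude adaptive step size.

    mu is scaled with the running error power, loosely mimicking an
    adaptive-step-size equalizer; an illustrative sketch only."""
    w = np.zeros(n_taps)
    mu, err_pow = mu0, 1.0
    errs = []
    for n in range(n_taps - 1, len(rx)):
        x = rx[n - n_taps + 1:n + 1][::-1]   # most recent samples first
        e = ref[n] - w @ x                   # error vs. known training symbol
        err_pow = 0.99 * err_pow + 0.01 * e * e
        mu = max(mu_min, mu0 * err_pow)      # shrink the step as error power drops
        w += mu * e * x                      # LMS update
        errs.append(e * e)
    return w, np.array(errs)

# usage: equalize a signal distorted by a simple 2-tap channel
rng = np.random.default_rng(0)
sym = rng.choice([-1.0, 1.0], size=5000)
rx = np.convolve(sym, [1.0, 0.4])[:len(sym)] + 0.05 * rng.standard_normal(len(sym))
w, errs = lms_equalizer(rx, sym)
print("final MSE:", errs[-200:].mean())
```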
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
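A brute-force version of the similarity functions compared in this paper (ADF, SDF, CCF) is sketched below for a small Gaussian-spot template. The fast search algorithms (TSS, TDL, CS, OS) that provide the reported computational savings are omitted, and the spot and noise parameters are made up.

```python
import numpy as np

def similarity_maps(img, tmpl):
    """Exhaustively evaluate three similarity functions between a template
    and every shift of a sub-window in the image (small sizes only).
    ADF and SDF are minimized at the best match; CCF is maximized."""
    H, W = img.shape
    h, w = tmpl.shape
    adf = np.zeros((H - h + 1, W - w + 1))
    sdf = np.zeros_like(adf)
    ccf = np.zeros_like(adf)
    for i in range(adf.shape[0]):
        for j in range(adf.shape[1]):
            win = img[i:i + h, j:j + w]
            adf[i, j] = np.abs(win - tmpl).sum()
            sdf[i, j] = ((win - tmpl) ** 2).sum()
            ccf[i, j] = (win * tmpl).sum()
    return adf, sdf, ccf

# usage: locate a Gaussian spot template inside a noisy image
y, x = np.mgrid[0:9, 0:9]
tmpl = np.exp(-((x - 4) ** 2 + (y - 4) ** 2) / 4.0)
img = np.random.default_rng(1).normal(0, 0.05, (32, 32))
img[10:19, 14:23] += tmpl
adf, sdf, ccf = similarity_maps(img, tmpl)
print("ADF argmin:", np.unravel_index(adf.argmin(), adf.shape))
print("CCF argmax:", np.unravel_index(ccf.argmax(), ccf.shape))
```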
Properties of wavelet discretization of Black-Scholes equation
NASA Astrophysics Data System (ADS)
Finěk, Václav
2017-07-01
Using wavelet methods, the continuous problem is transformed into a well-conditioned discrete problem. Once a non-symmetric problem is given, squaring yields a symmetric positive definite formulation; however, squaring usually makes the condition number of the discrete problem substantially worse. This note is concerned with a wavelet-based numerical solution of the Black-Scholes equation for pricing European options. We show here that in wavelet coordinates the symmetric part of the discretized equation dominates over the unsymmetric part in the standard economic environment with low interest rates. This provides some justification for using a fractional step method with implicit treatment of the symmetric part of the weak form of the Black-Scholes operator and explicit treatment of its unsymmetric part. A well-conditioned discrete problem is then obtained.
Bonny, Jean Marie; Boespflug-Tanguly, Odile; Zanca, Michel; Renou, Jean Pierre
2003-03-01
A solution for discrete multi-exponential analysis of T(2) relaxation decay curves obtained in current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise ratio threshold above which a high percentage of true multi-exponential decays is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a non-linear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.
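A minimal nonlinear least-squares fit of a two-component T2 decay is sketched below with scipy's curve_fit. The paper's Rician-aware filtering and total least-squares linear-prediction initialization are not reproduced; echo times, amplitudes, and the starting guess are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(te, a1, t2_1, a2, t2_2):
    """Two-component T2 decay model."""
    return a1 * np.exp(-te / t2_1) + a2 * np.exp(-te / t2_2)

# synthetic multi-echo decay (echo times in ms)
te = np.arange(10, 330, 10, dtype=float)
rng = np.random.default_rng(2)
signal = biexp(te, 0.3, 20.0, 0.7, 80.0) + rng.normal(0, 0.005, te.size)

# crude initial guess; the paper instead seeds the fit with a total
# least-squares linear-prediction estimate
p0 = [0.5, 15.0, 0.5, 100.0]
popt, _ = curve_fit(biexp, te, signal, p0=p0, bounds=(0, np.inf))
print("fitted amplitudes and T2s:", np.round(popt, 2))
```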
Microwave response of hole and patch arrays
NASA Astrophysics Data System (ADS)
Taylor, Melita C.; Edmunds, James D.; Hendry, Euan; Hibbins, Alastair P.; Sambles, J. Roy
2010-10-01
The electromagnetic response of two-dimensional square arrays of perfectly conducting square patches, and their complementary structures, is modeled utilizing a modal matching technique and employing Babinet’s principle. This method allows for the introduction of progressively higher diffracted orders and waveguide modes to be included in the calculation, hence aiding understanding of the underlying causal mechanism for the observed response. At frequencies close to, but below, the onset of diffraction, a near-complete reflection condition is predicted, even for low filling fractions: conversely, for high filling fractions a near-complete transmission condition results. These resonance phenomena are associated with evanescent diffraction, which is sufficiently strong to reverse the step change in transmission upon establishment of electrical continuity; i.e., the connected structure demonstrates increased transmission with increasing filling fraction.
Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression
Chen, Yanguang
2016-01-01
In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of Durbin-Watson’s statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran’s index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 of China’s regions. These results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test. PMID:26800271
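A minimal Moran-style autocorrelation coefficient of least-squares residuals, built from a standardized residual vector and a row-normalized spatial weight matrix as described above, might look like the sketch below. The ring-shaped neighbour structure and the data are hypothetical, and the code is not the paper's exact pair of statistics.

```python
import numpy as np

def residual_autocorrelation(X, y, W):
    """OLS residuals and a Moran-style autocorrelation coefficient
    computed with a row-normalized spatial weight matrix W."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    e = y - Xd @ beta
    z = (e - e.mean()) / e.std()           # standardized residual vector
    Wn = W / W.sum(axis=1, keepdims=True)  # row-normalized weights
    return (z @ Wn @ z) / (z @ z)

# usage with a ring-shaped neighbour structure (hypothetical sample of 29 regions)
rng = np.random.default_rng(3)
n = 29
X = rng.normal(size=(n, 2))
y = X @ np.array([1.5, -0.7]) + rng.normal(size=n)
W = np.zeros((n, n))
for i in range(n):                         # each region's neighbours: i-1, i+1
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0
print("residual autocorrelation:", residual_autocorrelation(X, y, W))
```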
Estimating VO2max Using a Personalized Step Test
ERIC Educational Resources Information Center
Webb, Carrie; Vehrs, Pat R.; George, James D.; Hager, Ronald
2014-01-01
The purpose of this study was to develop a step test with a personalized step rate and step height to predict cardiorespiratory fitness in 80 college-aged males and females using the self-reported perceived functional ability scale and data collected during the step test. Multiple linear regression analysis yielded a model (R = 0.90, SEE = 3.43…
Features of Talbot effect on phase diffraction grating
NASA Astrophysics Data System (ADS)
Brazhnikov, Denis G.; Danko, Volodymyr P.; Kotov, Myhaylo M.; Kovalenko, Andriy V.
2018-01-01
The features of the Talbot effect using phase diffraction gratings have been considered. A phase grating, unlike an amplitude grating, gives a constant light intensity in the observation plane at distances that are multiples of half of the Talbot length ZT. In this case, the subject of interest is the so-called fractional Talbot effect, with the periodic intensity distribution observed in planes shifted from the position nZT/2 (the so-called Fresnel images). Binary phase diffraction gratings with varying phase steps have been investigated. Gratings were made photographically on holographic plates PFG-01. The phase shift was obtained by modulating the emulsion refraction index of the plates. Two types of gratings were used: a square grating with a fill factor of 0.5 and a checkerwise grating (square areas with higher and lower refractive indices alternate in a checkerboard pattern). Using these gratings, the possibility of obtaining in the observation plane an image of a set of equidistant spots with a size smaller than the size of the phase-shifting elements of the grating (so-called Talbot focusing) has been shown. Clear images of spots with a sufficient signal-to-noise ratio have been obtained for the square grating; their period was equal to the period of the grating. For the grating with a checkerwise distribution of the refractive index, the spots were located in positions corresponding to the centres of the cells. In addition, the quality of the resulting pattern strongly depended on the magnitude of the grating phase step. As a result of this work, the possibility of obtaining Talbot focusing has been shown, and the use of this effect for wavefront investigation with a gradient sensor has been demonstrated.
Step climbing capacity in patients with pulmonary hypertension.
Fox, Benjamin Daniel; Langleben, David; Hirsch, Andrew; Boutet, Kim; Shimony, Avi
2013-01-01
Patients with pulmonary hypertension (PH) typically have exercise intolerance and limitation in climbing steps. We aimed to explore the exercise physiology of step climbing in PH patients using a laboratory-based step test. We built a step oximetry system from an 'aerobics' step equipped with pressure sensors and a pulse oximeter linked to a computer. Subjects mounted and dismounted from the step until their maximal exercise capacity or 200 steps was achieved. Step count, SpO(2) and heart rate were monitored throughout exercise and recovery. We derived indices of exercise performance, desaturation and heart rate. A 6-min walk test and serum N-terminal pro-brain natriuretic peptide (NT-proBNP) level were measured. Lung function tests and hemodynamic parameters were extracted from the medical record. Eighty-six subjects [52 pulmonary arterial hypertension (PAH), 14 chronic thromboembolic PH (CTEPH), 20 controls] were recruited. Exercise performance (climbing time, height gained, velocity, energy expenditure, work rate and climbing index) on the step test was significantly worse with PH and/or worsening WHO functional class (ANOVA, p < 0.001). There was a good correlation between exercise performance on the step test and the 6-min walking distance (climb index: r = -0.77, p < 0.0001). The saturation deviation (mean of SpO(2) values <95 %) on the step test correlated with the diffusion capacity of the lung (ρ = -0.49, p = 0.001). No correlations were found between the step test indices and other lung function tests, hemodynamic parameters or NT-proBNP levels. Patients with PAH/CTEPH have significant limitation in step climbing ability that correlates with functional class and 6-min walking distance. This is a significant impediment to their daily activities.
Alves, Junia O; Botelho, Bruno G; Sena, Marcelo M; Augusti, Rodinei
2013-10-01
Direct infusion electrospray ionization mass spectrometry in the positive ion mode [ESI(+)-MS] is used to obtain fingerprints of aqueous-methanolic extracts of two types of olive oils, extra virgin (EV) and ordinary (OR), as well as of samples of EV olive oil adulterated by the addition of OR olive oil and other edible oils: corn (CO), sunflower (SF), soybean (SO) and canola (CA). The MS data is treated by the partial least squares discriminant analysis (PLS-DA) protocol aiming at discriminating the above-mentioned classes formed by the genuine olive oils, EV (1) and OR (2), as well as the EV adulterated samples, i.e. EV/SO (3), EV/CO (4), EV/SF (5), EV/CA (6) and EV/OR (7). The PLS-DA model employed is built with 190 and 70 samples for the training and test sets, respectively. For all classes (1-7), EV and OR olive oils as well as the adulterated samples (in a proportion varying from 0.5 to 20.0% w/w) are properly classified. The developed methodology required no ions identification and demonstrated to be fast, as each measurement lasted about 3 min including the extraction step and MS analysis, and reliable, because high sensitivities (rate of true positives) and specificities (rate of true negatives) were achieved. Finally, it can be envisaged that this approach has potential to be applied in quality control of EV olive oils. Copyright © 2013 John Wiley & Sons, Ltd.
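A schematic PLS-DA workflow of the kind described here can be set up with scikit-learn by regressing one-hot class labels on the fingerprint matrix. The data below are random stand-ins for the MS fingerprints, and the number of latent variables is an assumption.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# hypothetical "fingerprint" data: 300 samples x 120 m/z intensities, 3 classes
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 120))
y = rng.integers(0, 3, size=300)
X[y == 1, :10] += 1.0                      # class-specific signal
X[y == 2, 10:20] += 1.0

Y = np.eye(3)[y]                           # one-hot class membership
X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(
    X, Y, y, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
pred = pls.predict(X_te).argmax(axis=1)    # assign to the class with the largest score
print("test accuracy:", (pred == y_te).mean())
```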
Boyer, Laurent; Baumstarck, Karine; Iordanova, Teodora; Fernandez, Jessica; Jean, Philippe; Auquier, Pascal
2014-03-01
This study aimed to develop a self-administered, multidimensional, poverty-related quality of life (PQoL) questionnaire for individuals seeking care in emergency departments (EDs): the PQoL-17. The development of the PQoL was undertaken in three steps: item generation, item reduction, and validation. The content of the PQoL was derived from 80 interviews with patients seeking care in EDs. Using item response and classical test theories, item reduction was performed in 3 EDs on 300 patients and validation was completed in 10 EDs on 619 patients. The PQoL contains 17 items describing seven dimensions (self-esteem/vitality, psychological well-being, relationships with family, relationships with friends, autonomy, physical well-being/access to care, and future perception). The seven-factor structure accounted for 75.1% of the total variance. This model showed a good fit (indices from the LISREL model: root mean square error of approximation, 0.055; comparative fit index, 0.97; general fit index, 0.96; standardized root mean square residual, 0.058). Each item achieved the 0.40 standard for item internal consistency, and Cronbach α coefficients were >0.70. Significant associations with socioeconomic and clinical indicators showed good discriminant and external validity. Infit statistics ranged from 0.82 to 1.16. The PQoL-17 presents satisfactory psychometric properties and can be completed quickly, thereby fulfilling the goal of brevity sought in EDs. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Crews, Chiaki C. E.; O'Flynn, Daniel; Sidebottom, Aiden; Speller, Robert D.
2015-06-01
The prevalence of counterfeit and substandard medicines has been growing rapidly over the past decade, and fast, nondestructive techniques for their detection are urgently needed to counter this trend. In this study, energy-dispersive X-ray diffraction (EDXRD) combined with chemometrics was assessed for its effectiveness in quantitative analysis of compressed powder mixtures. Although EDXRD produces lower-resolution diffraction patterns than angular-dispersive X-ray diffraction (ADXRD), it is of interest for this application as it carries the advantage of allowing the analysis of tablets within their packaging, due to the higher energy X-rays used. A series of caffeine, paracetamol and microcrystalline cellulose mixtures were prepared with compositions between 0 - 100 weight% in 20 weight% steps (22 samples in total, including a centroid mixture), and were pressed into tablets. EDXRD spectra were collected in triplicate, and a principal component analysis (PCA) separated these into their correct positions in the ternary mixture design. A partial least-squares (PLS) regression model calibrated using this training set was validated using both segmented cross-validation, and with a test set of six samples (mixtures in 8:1:1 and 5⅓:2⅓:2⅓ ratios) - the latter giving a root-mean square error of prediction (RMSEP) of 1.30, 2.25 and 2.03 weight% for caffeine, paracetamol and cellulose respectively. These initial results are promising, with RMSEP values on a par with those reported in the ADXRD literature.
Automated retinal vessel type classification in color fundus images
NASA Astrophysics Data System (ADS)
Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.
2013-02-01
Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and toward identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted on each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method on a previously unseen test data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of the AVR measurement and an AUC of 91.5% in the ROI of the tortuosity measurement. The proposed AV classification method has the potential to assist automatic cardiovascular disease early detection and risk analysis.
Hou, X; Chen, X; Zhang, M; Yan, A
2016-01-01
Plasmodium falciparum, the most fatal parasite that causes malaria, is responsible for over one million deaths per year. P. falciparum dihydroorotate dehydrogenase (PfDHODH) has been validated as a promising drug development target for antimalarial therapy since it catalyzes the rate-limiting step for DNA and RNA biosynthesis. In this study, we investigated the quantitative structure-activity relationships (QSAR) of the antimalarial activity of PfDHODH inhibitors by generating four computational models using a multilinear regression (MLR) and a support vector machine (SVM) based on a dataset of 255 PfDHODH inhibitors. All the models display good prediction quality with a leave-one-out q(2) >0.66, a correlation coefficient (r) >0.85 on both training sets and test sets, and a mean square error (MSE) <0.32 on training sets and <0.37 on test sets, respectively. The study indicated that the hydrogen bonding ability, atom polarizabilities and ring complexity are predominant factors for inhibitors' antimalarial activity. The models are capable of predicting inhibitors' antimalarial activity and the molecular descriptors for building the models could be helpful in the development of new antimalarial drugs.
Nonlinear External Kink Computing with NIMROD
NASA Astrophysics Data System (ADS)
Bunkers, K. J.; Sovinec, C. R.
2016-10-01
Vertical displacement events (VDEs) during disruptions often include non-axisymmetric activity, including external kink modes, which are driven unstable as contact with the wall eats into the q-profile. The NIMROD code is being applied to study external-kink-unstable tokamak profiles in toroidal and cylindrical geometries. Simulations with external kinks show the plasma swallowing a vacuum bubble, similar to previously reported results. NIMROD reproduces external kinks in both geometries, using an outer vacuum region (modeled as a plasma with a large resistivity), but as the boundary between the vacuum and plasma regions becomes more 3D, the resistivity becomes a 3D function, and it becomes more difficult for the algebraic solves to converge. To help allow non-axisymmetric, nonlinear VDE calculations to proceed without restrictively small time steps, several computational algorithms have been tested. Flexible GMRES, using a Fourier and real-space representation for the toroidal angle, has shown improvements. Off-diagonal preconditioning and a multigrid approach were tested and showed little improvement. A least-squares finite element method (LSQFEM) has also helped improve the algebraic solve. This effort is supported by the U.S. Dept. of Energy, Award Numbers DE-FG02-06ER54850 and DE-FC02-08ER54975.
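Generic preconditioned GMRES usage with scipy is sketched below as a stand-in for the solver experiments described above. The ILU preconditioner and the toy convection-diffusion-like matrix are assumptions and have nothing to do with NIMROD's actual operators or its flexible GMRES implementation.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# non-symmetric sparse test system (1-D convection-diffusion-like stencil)
n = 500
A = diags([-1.2, 2.5, -0.8], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# ILU preconditioner, loosely standing in for the preconditioning strategies
# discussed above (generic scipy usage, not the NIMROD solver)
ilu = spilu(A)
M = LinearOperator((n, n), ilu.solve)

x, info = gmres(A, b, M=M, restart=30, maxiter=200)
print("converged:" if info == 0 else "not converged:",
      np.linalg.norm(A @ x - b))
```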
Detection of Natural Fractures from Observed Surface Seismic Data Based on a Linear-Slip Model
NASA Astrophysics Data System (ADS)
Chen, Huaizhen; Zhang, Guangzhi
2018-03-01
Natural fractures play an important role in migration of hydrocarbon fluids. Based on a rock physics effective model, the linear-slip model, which defines fracture parameters (fracture compliances) for quantitatively characterizing the effects of fractures on rock total compliance, we propose a method to detect natural fractures from observed seismic data via inversion for the fracture compliances. We first derive an approximate PP-wave reflection coefficient in terms of fracture compliances. Using the approximate reflection coefficient, we derive azimuthal elastic impedance as a function of fracture compliances. An inversion method to estimate fracture compliances from seismic data is presented based on a Bayesian framework and azimuthal elastic impedance, which is implemented in a two-step procedure: a least-squares inversion for azimuthal elastic impedance and an iterative inversion for fracture compliances. We apply the inversion method to synthetic and real data to verify its stability and reasonability. Synthetic tests confirm that the method can make a stable estimation of fracture compliances in the case of seismic data containing a moderate signal-to-noise ratio for Gaussian noise, and the test on real data reveals that reasonable fracture compliances are obtained using the proposed method.
Effects of a low-resistance, interval bicycling intervention in Parkinson's Disease.
Uygur, Mehmet; Bellumori, Maria; Knight, Christopher A
2017-12-01
Previous studies have shown that people with Parkinson's disease (PD) benefit from a variety of exercise modalities with respect to symptom management and function. Among the possible exercise modalities, speedwork has been identified as a promising strategy, with direct implications for the rate and amplitude of nervous system involvement. Considering that previous speed-based exercise for PD has often been equipment, personnel and/or facility dependent, and often time intensive, our purpose was to develop a population-specific exercise program that could be self-administered with equipment that is readily found in fitness centers or perhaps the home. Fourteen individuals with PD (Hoehn-Yahr (H-Y) stage of 3.0 or less) participated in twelve 30-min sessions of low-resistance interval training on a stationary recumbent bicycle. Motor examination section of the Unified Parkinson's Disease Rating Scale (UPDRS), 10-meter walk (10mW), timed-up-and-go (TUG), functional reach, four-square step test (4SST), nine-hole peg test (9HPT) and simple reaction time scores all exhibited significant improvements (p < 0.05). These results add further support to the practice of speedwork for people with PD and outline a population-amenable program with high feasibility.
Lepot, Mathieu; Aubin, Jean-Baptiste; Bertrand-Krajewski, Jean-Luc
2013-01-01
Many field investigations have used continuous sensors (turbidimeters and/or ultraviolet (UV)-visible spectrophotometers) to estimate pollutant concentrations in sewer systems with a short time step. Few, if any, publications compare the performance of various sensors for the same set of samples. Different surrogate sensors (turbidity sensors, UV-visible spectrophotometer, pH meter, conductivity meter and microwave sensor) were tested to link concentrations of total suspended solids (TSS) and total and dissolved chemical oxygen demand (COD) to the sensors' outputs. In the combined sewer at the inlet of a wastewater treatment plant, 94 samples were collected during dry weather, 44 samples were collected during wet weather, and 165 samples were collected under both dry and wet weather conditions. From these samples, triplicate standard laboratory analyses were performed and the corresponding sensor outputs were recorded. Two outlier detection methods were developed, based on the Mahalanobis and Euclidean distances, respectively. Several hundred regression models were tested, and the best ones (according to the root mean square error criterion) are presented in order of decreasing performance. No sensor appears to be the best one for all three investigated pollutants.
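One simple Mahalanobis-distance outlier screen of the kind mentioned above is sketched below; the chi-square cutoff, the turbidity/TSS data, and the contamination pattern are all invented for illustration and do not reproduce the paper's two detection methods.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(X, alpha=0.01):
    """Flag rows of X whose squared Mahalanobis distance from the sample
    mean exceeds the chi-square(1 - alpha) quantile with p degrees of
    freedom -- one simple distance-based outlier screen."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.inv(cov)
    d2 = np.einsum('ij,jk,ik->i', X - mu, inv, X - mu)
    return d2 > chi2.ppf(1 - alpha, df=X.shape[1])

# usage: sensor outputs vs. lab TSS concentrations (hypothetical pairs)
rng = np.random.default_rng(5)
turbidity = rng.normal(200, 40, 100)
tss = 1.1 * turbidity + rng.normal(0, 15, 100)
tss[::25] += 300                           # inject a few gross errors
flags = mahalanobis_outliers(np.column_stack([turbidity, tss]))
print("flagged samples:", np.where(flags)[0])
```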
HYDROSTATIC PRESSURE AND TEMPERATURE IN RELATION TO STIMULATION AND CYCLOSIS IN NITELLA FLEXILIS
Harvey, E. Newton
1942-01-01
Nitella flexilis cells are not stimulated to "shock stoppage" of cyclosis by suddenly evacuating the air over the water or on sudden readmission of air, or on suddenly striking a piston in the water-filled chamber in which they are kept with a ball whose energy is 7.6 joules, provided the Nitella cell is not moved by currents against the side of the chamber. Sudden increases in hydrostatic pressure from zero to 1000 lbs. or 0 to 5000 lbs. per square inch or 5000 to 9000 lbs. per square inch usually do not stimulate to "shock stoppage" of cyclosis, but some cells are stimulated. Sudden decreases of pressure are more likely to stimulate, again with variation depending on the cell. In the absence of stimulation, the cyclosis velocity at 23°C. slows as the pressure is increased in steps of 1000 lbs. per square inch. In some cells a regular slowing is observed, in others there is little slowing until 4000 to 6000 lbs. per square inch, when a rapid slowing appears, with only 50 per cent to 30 per cent of the original velocity at 9000 lbs. per square inch. The cyclosis does not completely stop at 10000 lbs. per square inch. The pressure effect is reversible unless the cells have been kept too long at the high pressure. At low temperatures (10°C.) and at temperatures near and above (32°–38°C.) the optimum temperature for maximum cyclosis (35–36°C.) pressures of 3000 to 6000 lbs. per square inch cause only further slowing of cyclosis, with no reversal of the temperature effect, such as has been observed in pressure-temperature studies on the luminescence of luminous bacteria. Sudden increase in temperature may cause shock stoppage of cyclosis as well as sudden decrease in temperature. PMID:19873318
Total variation superiorized conjugate gradient method for image reconstruction
NASA Astrophysics Data System (ADS)
Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.
2018-03-01
The conjugate gradient (CG) method is commonly used for the relatively-rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
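For reference, the unsuperiorized baseline, conjugate gradient applied to the least-squares normal equations (CGLS), is sketched below. The total-variation perturbation (superiorization) steps of the paper are not included, and the test system is a random stand-in rather than an image-reconstruction operator.

```python
import numpy as np

def cgls(A, b, n_iter=50):
    """Conjugate gradient on the normal equations A^T A x = A^T b,
    i.e. the unsuperiorized least-squares baseline discussed above."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r                      # negative gradient of 0.5*||Ax - b||^2
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# usage: overdetermined random system
rng = np.random.default_rng(6)
A = rng.normal(size=(120, 40))
x_true = rng.normal(size=40)
b = A @ x_true + 0.01 * rng.normal(size=120)
x = cgls(A, b)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```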
2010-05-01
Decontamination of chemical and biological agents from sensitive equipment (avionics, electronics, electrical, and environmental systems and equipment) ... To make test coupons, thin custom-fabricated 2 x 2 in. square, 3/32 in. thick aluminum shims, augmented with electrical tape for added thickness as needed, were used in these tests.
Hydrology and subsidence potential of proposed coal-lease tracts in Delta County, Colorado
Brooks, Tom
1983-01-01
Potential subsidence from underground coal mining and associated hydrologic impacts were investigated at two coal-lease tracts in Delta County, Colorado. Alteration of existing flow systems could affect water users in the surrounding area. The Mesaverde Formation transmits little ground water because of the negligible transmissivity of the 1,300 feet of fine-grained sandstone, coal, and shale comprising the formation. The transmissivities of coal beds within the lower Mesaverde Formation ranged from 1.5 to 16.7 feet squared per day, and the transmissivity of the upper Mesaverde Formation, based on a single test, was 0.33 foot squared per day. Transmissivities of the alluvium ranged from 108 to 230 feet squared per day. The transmissivity of unconsolidated Quaternary deposits, determined from an aquifer test, was about 1,900 feet squared per day. Mining beneath Stevens Gulch and East Roatcap Creek could produce surface expressions of subsidence. Subsidence fractures could partly drain alluvial valley aquifers or streamflow into these mines. (USGS)
First-Order System Least-Squares for Second-Order Elliptic Problems with Discontinuous Coefficients
NASA Technical Reports Server (NTRS)
Manteuffel, Thomas A.; McCormick, Stephen F.; Starke, Gerhard
1996-01-01
The first-order system least-squares methodology represents an alternative to standard mixed finite element methods. Among its advantages is the fact that the finite element spaces approximating the pressure and flux variables are not restricted by the inf-sup condition and that the least-squares functional itself serves as an appropriate error measure. This paper studies the first-order system least-squares approach for scalar second-order elliptic boundary value problems with discontinuous coefficients. Ellipticity of an appropriately scaled least-squares bilinear form is established independent of the size of the jumps in the coefficients, leading to adequate finite element approximation results. The occurrence of singularities at interface corners and cross-points is discussed, and a weighted least-squares functional is introduced to handle such cases. Numerical experiments are presented for two test problems to illustrate the performance of this approach.
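A representative first-order system least-squares formulation for a scalar elliptic problem is written out below; the notation and scaling are assumptions chosen for illustration rather than quoted from the paper.

```latex
% Scalar elliptic problem -\nabla\cdot(a\nabla p) = f recast as a first-order
% system with flux variable u = -a\nabla p; the least-squares functional G
% below is a representative (assumed) form:
\[
  \mathbf{u} + a\,\nabla p = \mathbf{0}, \qquad \nabla\cdot\mathbf{u} = f
  \quad\text{in } \Omega,
\]
\[
  G(\mathbf{u},p\,;f) \;=\;
  \bigl\| a^{-1/2}\bigl(\mathbf{u} + a\,\nabla p\bigr) \bigr\|_{0,\Omega}^{2}
  \;+\;
  \bigl\| \nabla\cdot\mathbf{u} - f \bigr\|_{0,\Omega}^{2},
\]
% and the value of G at the computed solution serves as an error measure.
```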
Chi-square analysis of the reduction of ATP levels in L-02 hepatocytes by hexavalent chromium.
Yuan, Yang; Peng, Li; Gong-Hua, Hu; Lu, Dai; Xia-Li, Zhong; Yu, Zhou; Cai-Gao, Zhong
2012-06-01
This study explored the reduction of adenosine triphosphate (ATP) levels in L-02 hepatocytes by hexavalent chromium (Cr(VI)) using chi-square analysis. Cells were treated with 2, 4, 8, 16, or 32 μM Cr(VI) for 12, 24, or 36 h. Methyl thiazolyl tetrazolium (MTT) experiments and measurements of intracellular ATP levels were performed by spectrophotometry or bioluminescence assays following Cr(VI) treatment. The chi-square test was used to determine the difference between cell survival rate and ATP levels. For the chi-square analysis, the results of the MTT or ATP experiments were transformed into a relative ratio with respect to the control (%). The relative ATP levels increased at 12 h, decreased at 24 h, and increased slightly again at 36 h following 4, 8, 16, 32 μM Cr(VI) treatment, corresponding to a "V-shaped" curve. Furthermore, the results of the chi-square analysis demonstrated a significant difference of the ATP level in the 32-μM Cr(VI) group (P < 0.05). The results suggest that the chi-square test can be applied to analyze the interference effects of Cr(VI) on ATP levels in L-02 hepatocytes. The decreased ATP levels at 24 h indicated disruption of mitochondrial energy metabolism and the slight increase of ATP levels at 36 h indicated partial recovery of mitochondrial function or activated glycolysis in L-02 hepatocytes.
Validation of the Narrowing Beam Walking Test in Lower Limb Prosthesis Users.
Sawers, Andrew; Hafner, Brian
2018-04-11
To evaluate the content, construct, and discriminant validity of the Narrowing Beam Walking Test (NBWT), a performance-based balance test for lower limb prosthesis users. Cross-sectional study. Research laboratory and prosthetics clinic. Unilateral transtibial and transfemoral prosthesis users (N=40). Not applicable. Content validity was examined by quantifying the percentage of participants receiving maximum or minimum scores (ie, ceiling and floor effects). Convergent construct validity was examined using correlations between participants' NBWT scores and scores or times on existing clinical balance tests regularly administered to lower limb prosthesis users. Known-groups construct validity was examined by comparing NBWT scores between groups of participants with different fall histories, amputation levels, amputation etiologies, and functional levels. Discriminant validity was evaluated by analyzing the area under each test's receiver operating characteristic (ROC) curve. No minimum or maximum scores were recorded on the NBWT. NBWT scores demonstrated strong correlations (ρ=.70‒.85) with scores/times on performance-based balance tests (timed Up and Go test, Four Square Step Test, and Berg Balance Scale) and a moderate correlation (ρ=.49) with the self-report Activities-specific Balance Confidence scale. NBWT performance was significantly lower among participants with a history of falls (P=.003), transfemoral amputation (P=.011), and a lower mobility level (P<.001). The NBWT also had the largest area under the ROC curve (.81) and was the only test to exhibit an area that was statistically significantly >.50 (ie, chance). The results provide strong evidence of content, construct, and discriminant validity for the NBWT as a performance-based test of balance ability. The evidence supports its use to assess balance impairments and fall risk in unilateral transtibial and transfemoral prosthesis users. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
1991-12-01
Heat treatment schedule: Step 1 - 850F for 2 hrs; Step 2 - 665F for 2 hrs; Step 3 - warm water quench; Step 4 - 230F for 24 hrs. Table G5: tensile results for IN905XL forging (company test data); Table G6: compression results for IN905XL forging (company test data); specimens heat treated to the schedule above.
2003-10-30
KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, workers discuss the next step in moving the orbital maneuvering system (OMS) pod behind them. The OMS pod will be installed on Atlantis. Two OMS pods are attached to the upper aft fuselage left and right sides. Fabricated primarily of graphite epoxy composite and aluminum, each pod is 21.8 feet long and 11.37 feet wide at its aft end and 8.41 feet wide at its forward end, with a surface area of approximately 435 square feet. Each pod houses the Reaction Control System propulsion components used for inflight maneuvering and is attached to the aft fuselage with 11 bolts.
Study of thermal insulation for airborne liquid hydrogen fuel tanks
NASA Technical Reports Server (NTRS)
Ruccia, F. E.; Lindstrom, R. S.; Lucas, R. M.
1978-01-01
A concept for a fail-safe thermal protection system was developed. From screening tests of approximately 30 foams, adhesives, and reinforcing fibers using a 0.3-meter-square liquid nitrogen cold plate, CPR 452 and Stafoam AA1602, both reinforced with 10 percent by weight of 1/16-inch milled OCF Style 701 Fiberglas, were selected for further tests. Cyclic tests with these materials in 2-inch thicknesses, bonded on a 0.6-meter-square cold plate with Crest 7410 adhesive systems, were successful. Zero-permeability gas barriers were identified and found to be compatible with the insulating concept.
LANDSAT demonstration/application and GIS integration in south central Alaska
NASA Technical Reports Server (NTRS)
Burns, A. W.; Derrenbacher, W.
1981-01-01
Automated geographic information systems were developed for two sites in Southcentral Alaska to serve as tests for both the process of integrating classified LANDSAT data into a comprehensive environmental data base and the process of using automated information in land capability/suitability analysis and environmental planning. The Big Lake test site, located approximately 20 miles north of the City of Anchorage, comprises an area of approximately 150 square miles. The Anchorage Hillside test site, lying approximately 5 miles southeast of the central part of the city, extends over an area of some 25 square miles. Map construction and content is described.
Wu, F; Callisaya, M; Laslett, L L; Wills, K; Zhou, Y; Jones, G; Winzenberg, T
2016-07-01
This was the first study investigating both linear associations between lower limb muscle strength and balance in middle-aged women and the potential for thresholds for the associations. There was strong evidence that even in middle-aged women, poorer LMS was associated with reduced balance. However, no evidence was found for thresholds. Decline in balance begins in middle age, yet, the role of muscle strength in balance is rarely examined in this age group. We aimed to determine the association between lower limb muscle strength (LMS) and balance in middle-aged women and investigate whether cut-points of LMS exist that might identify women at risk of poorer balance. Cross-sectional analysis of 345 women aged 36-57 years was done. Associations between LMS and balance tests (timed up and go (TUG), step test (ST), functional reach test (FRT), and lateral reach test (LRT)) were assessed using linear regression. Nonlinear associations were explored using locally weighted regression smoothing (LOWESS) and potential cut-points identified using nonlinear least-squares estimation. Segmented regression was used to estimate associations above and below the identified cut-points. Weaker LMS was associated with poorer performance on the TUG (β -0.008 (95 % CI: -0.010, -0.005) second/kg), ST (β 0.031 (0.011, 0.051) step/kg), FRT (β 0.071 (0.047, 0.096) cm/kg), and LRT (β 0.028 (0.011, 0.044) cm/kg), independent of confounders. Potential nonlinear associations were evident from LOWESS results; significant cut-points of LMS were identified for all balance tests (29-50 kg). However, excepting ST, cut-points did not persist after excluding potentially influential data points. In middle-aged women, poorer LMS is associated with reduced balance. Therefore, improving muscle strength in middle-age may be a useful strategy to improve balance and reduce falls risk in later life. Middle-aged women with low muscle strength may be an effective target group for future randomized controlled trials. Australian New Zealand Clinical Trials Registry (ANZCTR) NCT00273260.
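A grid-search version of a segmented (piecewise-linear) fit, which is one simple way to look for the kind of cut-point discussed above, is sketched below. The strength/balance data, the true breakpoint at 40 kg, and the candidate grid are all hypothetical; the paper's nonlinear least-squares estimation is not reproduced.

```python
import numpy as np

def segmented_fit(x, y, grid):
    """Fit y = b0 + b1*x + b2*max(x - c, 0) for each candidate breakpoint c
    in `grid` and return the breakpoint with the smallest residual sum of
    squares, plus its coefficients."""
    best = None
    for c in grid:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        rss = ((y - X @ beta) ** 2).sum()
        if best is None or rss < best[0]:
            best = (rss, c, beta)
    return best[1], best[2]

# usage: balance score vs. leg strength with a true change of slope at 40 kg
rng = np.random.default_rng(7)
strength = rng.uniform(20, 70, 200)
balance = 10 - 0.02 * strength - 0.15 * np.maximum(40 - strength, 0) \
          + rng.normal(0, 0.5, 200)
cut, coef = segmented_fit(strength, balance, np.linspace(25, 60, 71))
print("estimated cut-point (kg):", round(cut, 1))
```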
Real‐time monitoring and control of the load phase of a protein A capture step
Rüdt, Matthias; Brestrich, Nina; Rolinger, Laura
2016-01-01
ABSTRACT The load phase in preparative Protein A capture steps is commonly not controlled in real‐time. The load volume is generally based on an offline quantification of the monoclonal antibody (mAb) prior to loading and on a conservative column capacity determined by resin‐life time studies. While this results in a reduced productivity in batch mode, the bottleneck of suitable real‐time analytics has to be overcome in order to enable continuous mAb purification. In this study, Partial Least Squares Regression (PLS) modeling on UV/Vis absorption spectra was applied to quantify mAb in the effluent of a Protein A capture step during the load phase. A PLS model based on several breakthrough curves with variable mAb titers in the HCCF was successfully calibrated. The PLS model predicted the mAb concentrations in the effluent of a validation experiment with a root mean square error (RMSE) of 0.06 mg/mL. The information was applied to automatically terminate the load phase, when a product breakthrough of 1.5 mg/mL was reached. In a second part of the study, the sensitivity of the method was further increased by only considering small mAb concentrations in the calibration and by subtracting an impurity background signal. The resulting PLS model exhibited a RMSE of prediction of 0.01 mg/mL and was successfully applied to terminate the load phase, when a product breakthrough of 0.15 mg/mL was achieved. The proposed method has hence potential for the real‐time monitoring and control of capture steps at large scale production. This might enhance the resin capacity utilization, eliminate time‐consuming offline analytics, and contribute to the realization of continuous processing. Biotechnol. Bioeng. 2017;114: 368–373. © 2016 The Authors. Biotechnology and Bioengineering published by Wiley Periodicals, Inc. PMID:27543789
Spacecraft inertia estimation via constrained least squares
NASA Technical Reports Server (NTRS)
Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.
2006-01-01
This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
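A small cvxpy sketch of a constrained least-squares inertia estimate with a positive-definiteness LMI is given below. The regressor matrix A and data b are random stand-ins; in a real test they would come from the rigid-body dynamics of the testbed, which this sketch does not reproduce.

```python
import numpy as np
import cvxpy as cp

# hypothetical regression data: each row of A maps the 6 unique inertia
# entries [Jxx, Jyy, Jzz, Jxy, Jxz, Jyz] to a measured quantity in b
rng = np.random.default_rng(8)
theta_true = np.array([12.0, 15.0, 9.0, 0.5, -0.3, 0.8])
A = rng.normal(size=(60, 6))
b = A @ theta_true + 0.05 * rng.normal(size=60)

J = cp.Variable((3, 3), symmetric=True)          # inertia matrix variable
theta = cp.hstack([J[0, 0], J[1, 1], J[2, 2],
                   J[0, 1], J[0, 2], J[1, 2]])
objective = cp.Minimize(cp.sum_squares(A @ theta - b))
constraints = [J >> 0.1 * np.eye(3)]             # LMI: J positive definite
prob = cp.Problem(objective, constraints)
prob.solve()
print("estimated inertia matrix:\n", np.round(J.value, 2))
```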
Teren, Andrej; Zachariae, Silke; Beutner, Frank; Ubrich, Romy; Sandri, Marcus; Engel, Christoph; Löffler, Markus; Gielen, Stephan
2016-07-01
Cardiorespiratory fitness is a well-established independent predictor of cardiovascular health. However, the relevance of alternative exercise and non-exercise tests for cardiorespiratory fitness assessment in large cohorts has not been studied in detail. We aimed to evaluate the YMCA-step test and the Veterans Specific Activity Questionnaire (VSAQ) for the estimation of cardiorespiratory fitness in the general population. One hundred and five subjects answered the VSAQ, performed the YMCA-step test and a maximal cardiopulmonary exercise test (CPX) and gave BORG ratings for both exercise tests (BORGSTEP, BORGCPX). Correlations of peak oxygen uptake on a treadmill (VO2_PEAK) with VSAQ, BORGSTEP, one-minute, post-exercise heartbeat count, and peak oxygen uptake during the step test (VO2_STEP) were determined. Moreover, the incremental values of the questionnaire and the step test in addition to other fitness-related parameters were evaluated using block-wise hierarchical regression analysis. Eighty-six subjects completed the step test according to the protocol. For completers, correlations of VO2_PEAK with the age- and gender-adjusted VSAQ, heartbeat count and VO2_STEP were 0.67, 0.63 and 0.49, respectively. However, using hierarchical regression analysis, age, gender and body mass index already explained 68.8% of the variance of VO2_PEAK, while the additional benefit of VSAQ was rather low (3.4%). The inclusion of BORGSTEP, heartbeat count and VO2_STEP increased R(2) by a further 2.2%, 3.3% and 5.6%, respectively, yielding a total R(2) of 83.3%. Neither VSAQ nor the YMCA-step test contributes sufficiently to the assessment of cardiorespiratory fitness in population-based studies. © The European Society of Cardiology 2015.
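Block-wise hierarchical regression of the kind reported here amounts to comparing R² before and after adding a predictor block, as in the sketch below. The synthetic "VO2 peak" data and coefficients are invented for illustration and do not reproduce the study's variables.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
    resid = y - Xd @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# hypothetical predictors of peak oxygen uptake, added block-wise
rng = np.random.default_rng(9)
n = 100
age = rng.uniform(20, 70, n)
sex = rng.integers(0, 2, n)
bmi = rng.normal(26, 4, n)
heartbeat_count = rng.normal(110, 15, n)
vo2 = 60 - 0.3 * age - 5 * sex - 0.5 * bmi - 0.05 * heartbeat_count \
      + rng.normal(0, 3, n)

block1 = np.column_stack([age, sex, bmi])
block2 = np.column_stack([age, sex, bmi, heartbeat_count])
r2_1, r2_2 = r_squared(block1, vo2), r_squared(block2, vo2)
print(f"R2 block 1: {r2_1:.3f}, added by heartbeat count: {r2_2 - r2_1:.3f}")
```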
Chemistry and kinetics of the pyrophoric plutonium hydride-air reaction
Haschke, John M.; Dinh, Long N.
2016-12-18
The chemistry and kinetics of the pyrophoric reaction of the plutonium hydride solid solution (PuHx, 1.9 ≤ x ≤ 3) are derived from pressure-time and gas analysis data obtained after exposure of PuH2.7 to air in a closed system. The reaction is described in this paper by two sequential steps that result in reaction of all O2, partial reaction of N2, and formation of H2. Hydrogen formed by indiscriminate reaction of N2 and O2 at their 3.71:1 M ratio in air during the initial step is accommodated as PuH3 inside a product layer of Pu2O3 and PuN. H2 is formed by reaction of O2 and partial reaction of N2 with PuH3 during the second step. Both steps of reaction are described by general equations for all values of x. The rate of the first step is proportional to the square of the O2 pressure, but independent of temperature, x, and N2 pressure. The second step is a factor of ten slower than step one, with its rate controlled by diffusion of O2 through a boundary layer of product H2 and unreacted N2. Finally, rates and enthalpies of reaction are presented and anticipated effects of reactant configuration on the heat flux are discussed.
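If the first-step rate is indeed proportional to the square of the O2 pressure, a representative second-order rate law and its integrated form (with an assumed rate constant k) are:

```latex
\[
  -\frac{dP_{\mathrm{O_2}}}{dt} = k\,P_{\mathrm{O_2}}^{2}
  \qquad\Longrightarrow\qquad
  \frac{1}{P_{\mathrm{O_2}}(t)} = \frac{1}{P_{\mathrm{O_2}}(0)} + k\,t .
\]
```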
Maximum step length: relationships to age and knee and hip extensor capacities.
Schulz, Brian W; Ashton-Miller, James A; Alexander, Neil B
2007-07-01
Maximum Step Length may be used to identify older adults at increased risk for falls. Since leg muscle weakness is a risk factor for falls, we tested the hypotheses that maximum knee and hip extension speed, strength, and power capacities would significantly correlate with Maximum Step Length and also that the "step out and back" Maximum Step Length [Medell, J.L., Alexander, N.B., 2000. A clinical measure of maximal and rapid stepping in older women. J. Gerontol. A Biol. Sci. Med. Sci. 55, M429-M433.] would also correlate with the Maximum Step Length of its two sub-tasks: stepping "out only" and stepping "back only". These sub-tasks will be referred to as versions of Maximum Step Length. Unimpaired younger (N=11, age=24[3]years) and older (N=10, age=73[5]years) women performed the above three versions of Maximum Step Length. Knee and hip extension speed, strength, and power capacities were determined on a separate day and regressed on Maximum Step Length and age group. Version and practice effects were quantified and subjective impressions of test difficulty recorded. Hypotheses were tested using linear regressions, analysis of variance, and Fisher's exact test. Maximum Step Length explained 6-22% additional variance in knee and hip extension speed, strength, and power capacities after controlling for age group. Within- and between-block and test-retest correlation values were high (>0.9) for all test versions. Shorter Maximum Step Lengths are associated with reduced knee and hip extension speed, strength, and power capacities after controlling for age. A single out-and-back step of maximal length is a feasible, rapid screening measure that may provide insight into underlying functional impairment, regardless of age.
Satorra, Albert; Neudecker, Heinz
2015-12-01
This paper develops a theorem that facilitates computing the degrees of freedom of Wald-type chi-square tests for moment restrictions when there is rank deficiency of key matrices involved in the definition of the test. An if and only if (iff) condition is developed for a simple rule of difference of ranks to be used when computing the desired degrees of freedom of the test. The theorem is developed by exploiting basic tools of matrix algebra. The theorem is shown to play a key role in proving the asymptotic chi-squaredness of a goodness-of-fit test in moment structure analysis, and in finding the degrees of freedom of this chi-square statistic.
An Embedded Device for Real-Time Noninvasive Intracranial Pressure Estimation.
Matthews, Jonathan M; Fanelli, Andrea; Heldt, Thomas
2018-01-01
The monitoring of intracranial pressure (ICP) is indicated for diagnosing and guiding therapy in many neurological conditions. Current monitoring methods, however, are highly invasive, limiting their use to the most critically ill patients only. Our goal is to develop and test an embedded device that performs all necessary mathematical operations in real-time for noninvasive ICP (nICP) estimation based on a previously developed model-based approach that uses cerebral blood flow velocity (CBFV) and arterial blood pressure (ABP) waveforms. The nICP estimation algorithm along with the required preprocessing steps were implemented on an NXP LPC4337 microcontroller unit (MCU). A prototype device using the MCU was also developed, complete with display, recording functionality, and peripheral interfaces for ABP and CBFV monitoring hardware. The device produces an estimate of mean ICP once per minute and performs the necessary computations in 410 ms, on average. Real-time nICP estimates differed from the original batch-mode MATLAB implementation of the estimation algorithm by 0.63 mmHg (root-mean-square error). We have demonstrated that real-time nICP estimation is possible on a microprocessor platform, which offers the advantages of low cost, small size, and product modularity over a general-purpose computer. These attributes take a step toward the goal of real-time nICP estimation at the patient's bedside in a variety of clinical settings.
High-Spatial-Resolution OH and CH2O PLIF Visualization in a Dual-Mode Scramjet Combustor
NASA Technical Reports Server (NTRS)
Geipel, Clayton M.
2017-01-01
A high-spatial-resolution planar laser-induced fluorescence (PLIF) imaging system was constructed and used to image a cavity-stabilized, premixed ethylene-air flame. The flame was created within a continuous flow, electrically-heated supersonic combustion facility consisting of a Mach 2 nozzle, an isolator with flush-wall fuel injectors, a combustor with a cavity flameholder of height 9 mm and optical access, and an extender. Tests were conducted at total temperature 1200 K, total pressure 300 kPa, equivalence ratio near 0.4 in the combustor, and Mach number near 0.6 in the combustor. A frequency-doubled Nd:YAG laser pumped a dye laser, which produced light at 283.55 nm. The beam was shaped into a light sheet with full width half-maximum 25 microns, which illuminated a streamwise plane that bisected the cavity. An intensified camera system imaged OH in this plane with a square 6.67 mm field of view and in-plane resolution 39 microns. Images were taken between the backward-facing step and 120 mm downstream of the step. OH structures as small as 110 microns were observed. CH2O was excited using 352.48 nm light; the smallest observed CH2O structures were approximately 200 microns wide. Approximately 15,000 images per species were processed and used to compute composite images.
Shekhar, M G; Mohan, R
2011-03-01
To determine the prevalence of traumatic dental injuries to primary incisors in 3-5 year-old preschool children and to study the relationship between dental injuries and age, gender and terminal plane relation. A cross-sectional study was conducted in 1,126 preschool children aged three to five years enrolled in eleven private and public nursery schools, randomly selected in Chennai, India. Data regarding the age, gender, cause and type of trauma and terminal plane relation were recorded. Maxillary and mandibular primary incisors were examined for traumatic injuries, which were recorded according to the method described by Andreasen & Andreasen (1994). Data were analyzed through descriptive analysis and the chi-square test. Traumatic injuries to primary incisors were identified in 6.2% of children. No significant gender differences in prevalence were seen (p > 0.05). Enamel fractures (57.3%) were the dominant type of injury. The majority of children who sustained traumatic dental injuries to their primary incisors had a mesial step molar relation. A mesial step molar relation may be considered one of the possible predisposing factors for trauma in the primary dentition. Further, there is a need to intensify oral health education targeted at both parents and teachers at nursery schools to inform them about the consequences of primary teeth injuries on the permanent dentition and emphasize the importance of prevention of dental injuries in children.
Sex ratio of equine offspring is affected by the ages of the mare and stallion.
Santos, Marianna Machado; Maia, Leonardo Lara; Nobre, Daniel Magalhães; Oliveira Neto, José Ferraz; Garcia, Tiago Rezende; Lage, Maria Coeli Gomes Reis; de Melo, Maria Isabel Vaz; Viana, Walmir Santos; Palhares, Maristela Silveira; da Silva Filho, José Monteiro; Santos, Renato Lima; Valle, Guilherme Ribeiro
2015-10-15
The aim of this study was to determine the influence of parental age on the sex ratio of offspring in horses. Two trials were performed. In the first trial, the data from a randomly obtained population with a 1:1 sex ratio of 59,950 Mangalarga Marchador horses born in Brazil from 1990 to 2011 were analyzed. The sex ratios of the offspring were compared among groups according to the mare and the stallion ages (from 3 to 25 years). In the first step of the analysis, the mares and stallions were grouped according to age in 5-year intervals. In the second step, the groups were based on the parental age gap at conception. In the third step, the group of the mares and stallions with similar ages from the second step was subdivided, and the different parental age subgroups that were divided into 5-year intervals were compared. In the fourth step, the sex ratio of the offspring was determined according to the ages of the mares and the stallions at conception. The second trial was based on the data from 253 horses of several breeds that were born after natural gestation into a herd from 1989 to 2010, and the offspring of groups that were younger or older than 15 years were compared. The data from both trials were analyzed using a chi-square test (P ≤ 0.01 for the first trial; and P ≤ 0.05 for the second trial) for the comparisons of the sex ratios. In the first trial, the Spearman test (P ≤ 0.01) was used to verify the correlations between the parental age and the offspring sex ratio. In the first trial, the offspring sex ratio decreased as the mare or stallion age increased, and the decrease was more marked for the mares than for the stallions. In the second trial, the mares older than 15 years had more fillies than the younger mares, but the stallion age had no effect on the sex of the offspring. The first trial, with a large number of horses, revealed the pattern of the distribution of the sex ratios of offspring according to the parental age in horses, whereas the second trial, with a more restricted number of horses, confirmed the influence of the age of the mare on the offspring sex ratio. We concluded that the parental age affected the offspring sex ratio in horses and that this effect was stronger for the mares than for the stallions. Copyright © 2015 Elsevier Inc. All rights reserved.
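As a hedged illustration of the statistics named in this abstract (not the authors' data or code), a chi-square goodness-of-fit test of an offspring count against the expected 1:1 sex ratio, and a chi-square test of independence between parental age group and offspring sex, might look like this sketch:

    import numpy as np
    from scipy.stats import chisquare, chi2_contingency

    # Hypothetical offspring counts (colts, fillies) -- illustrative only
    observed = np.array([510, 470])
    expected = observed.sum() / 2 * np.ones(2)        # 1:1 sex ratio
    chi2, p = chisquare(observed, f_exp=expected)
    print(f"goodness of fit vs 1:1: chi2={chi2:.2f}, p={p:.3f}")

    # Hypothetical 2x2 table: rows = mare age group (<=15 y, >15 y), cols = (colts, fillies)
    table = np.array([[300, 280],
                      [210, 250]])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"age group vs sex: chi2={chi2:.2f}, dof={dof}, p={p:.3f}")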
Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Bentler, Peter M.
2000-01-01
Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
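A minimal sketch of the iteratively reweighted least squares idea applied to mean and covariance estimation, down-weighting cases by a Huber-type function of their Mahalanobis distance; this is a generic illustration under assumed weighting rules, not the Yuan-Bentler estimator itself:

    import numpy as np

    def irls_mean_cov(x, c=2.5, tol=1e-8, max_iter=100):
        """Iteratively reweighted estimate of mean and covariance.
        Cases far from the current center (large Mahalanobis distance)
        receive Huber-type weights below one -- a simplified sketch."""
        x = np.asarray(x, dtype=float)
        mu, cov = x.mean(axis=0), np.cov(x, rowvar=False)
        for _ in range(max_iter):
            xc = x - mu
            d = np.sqrt(np.einsum('ij,jk,ik->i', xc, np.linalg.inv(cov), xc))
            w = np.where(d <= c, 1.0, c / d)          # Huber-type case weights
            mu_new = np.average(x, axis=0, weights=w)
            xc = x - mu_new
            cov_new = (w[:, None] * xc).T @ xc / w.sum()
            if np.linalg.norm(mu_new - mu) < tol:
                return mu_new, cov_new
            mu, cov = mu_new, cov_new
        return mu, cov

    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 3))
    data[:5] += 10.0                                  # a few outlying cases
    print("robust mean estimate:", np.round(irls_mean_cov(data)[0], 3))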
Reducing noise component on medical images
NASA Astrophysics Data System (ADS)
Semenishchev, Evgeny; Voronin, Viacheslav; Dub, Vladimir; Balabaeva, Oksana
2018-04-01
Medical visualization and analysis of medical data is an active research direction. Medical images are used in microbiology, genetics, roentgenology, oncology, surgery, ophthalmology, etc. Initial data processing is a major step towards obtaining a good diagnostic result. The paper considers an approach that allows image filtering with preservation of object borders. The algorithm proposed in this paper is based on sequential data processing. At the first stage, local areas are determined; for this purpose the method of threshold processing, as well as the classical ICI algorithm, is applied. The second stage uses a method based on two criteria, namely, the L2 norm and the first-order square difference. To preserve the boundaries of objects, the transition boundary and local neighborhood are processed with a fixed-coefficient filtering algorithm. As examples, reconstructed images from CT, x-ray, and microbiological studies are shown. The test images show the effectiveness of the proposed algorithm. This demonstrates the applicability of the approach to many medical imaging applications.
Study of monolithic integrated solar blind GaN-based photodetectors
NASA Astrophysics Data System (ADS)
Wang, Ling; Zhang, Yan; Li, Xiaojuan; Xie, Jing; Wang, Jiqiang; Li, Xiangyang
2018-02-01
Monolithic integrated solar blind devices on a GaN-based epilayer, which can directly read out a voltage signal, were fabricated and studied. Unlike conventional GaN-based photodiodes, the integrated devices perform the following steps: generation and accumulation of carriers and conversion of carriers to voltage. In the test process, the resetting voltage was a square wave with frequencies of 15 and 110 Hz and a maximal voltage of ˜2.5 V. Under LED illumination, the maximum voltage swing is about 2.5 V, and the rise time of the voltage swing from 0 to 2.5 V is only about 1.6 ms. However, in dark conditions, the node voltage between the detector and the capacitance declines nearly to zero with time when the resetting voltage is equal to zero. It is found that the leakage current in the circuit gives rise to discharge of the integrated charge. Storage-mode operation can offer gain, which is an advantage for detection of weak photo signals.
Riahi, Siavash; Hadiloo, Farshad; Milani, Seyed Mohammad R; Davarkhah, Nazila; Ganjali, Mohammad R; Norouzi, Parviz; Seyfi, Payam
2011-05-01
The predictive accuracy of different chemometric methods was compared when applied to ordinary UV spectra and first-order derivative spectra. Principal component regression (PCR) and partial least squares with one dependent variable (PLS1) and two dependent variables (PLS2) were applied to spectral data of a pharmaceutical formula containing pseudoephedrine (PDP) and guaifenesin (GFN). The ability of the derivative to resolve the overlapping spectra of chlorpheniramine maleate was evaluated when multivariate methods are adopted for the analysis of two-component mixtures without using any chemical pretreatment. The chemometric models were tested on an external validation dataset and finally applied to the analysis of pharmaceuticals. Significant advantages were found in the analysis of the real samples when the calibration models from derivative spectra were used. It should also be mentioned that the proposed method is simple and rapid, requiring no preliminary separation steps, and can be used easily for the analysis of these compounds, especially in quality control laboratories. Copyright © 2011 John Wiley & Sons, Ltd.
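A hedged sketch of the general workflow described (first-derivative preprocessing followed by PLS2 calibration and cross-validated error assessment); the synthetic spectra, band positions, and parameter choices are assumptions for illustration, not the authors' data or settings:

    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    wavelengths = np.linspace(220, 320, 200)
    conc = rng.uniform(0.0, 1.0, size=(40, 2))        # assumed PDP and GFN levels (arbitrary units)
    band1 = np.exp(-0.5 * ((wavelengths - 257) / 8) ** 2)    # illustrative absorption bands
    band2 = np.exp(-0.5 * ((wavelengths - 274) / 10) ** 2)
    spectra = conc[:, [0]] * band1 + conc[:, [1]] * band2 + 0.01 * rng.normal(size=(40, 200))

    # First-order derivative spectra (Savitzky-Golay, window 11, polynomial order 2)
    d1 = savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)

    pls = PLSRegression(n_components=3)
    pred = cross_val_predict(pls, d1, conc, cv=5)     # PLS2: both analytes at once
    rmsecv = np.sqrt(np.mean((pred - conc) ** 2, axis=0))
    print("RMSECV per analyte:", np.round(rmsecv, 4))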
Porous Foam Based Wick Structures for Loop Heat Pipes
NASA Technical Reports Server (NTRS)
Silk, Eric A.
2012-01-01
As part of an effort to identify cost efficient fabrication techniques for Loop Heat Pipe (LHP) construction, NASA Goddard Space Flight Center's Cryogenics and Fluids Branch collaborated with the U.S. Naval Academy's Aerospace Engineering Department in Spring 2012 to investigate the viability of carbon foam as a wick material within LHPs. The carbon foam was manufactured by ERG Aerospace and machined to geometric specifications at the U.S. Naval Academy's Materials, Mechanics and Structures Machine Shop. NASA GSFC's Fractal Loop Heat Pipe (developed under SBIR contract #NAS5-02112) was used as the validation LHP platform. In a horizontal orientation, the FLHP system demonstrated a heat flux of 75 Watts per square centimeter with deionized water as the working fluid. Also, no failed start-ups occurred during the 6 week performance testing period. The success of this study validated that foam can be used as a wick structure. Furthermore, given the COTS status of foam materials, this study is one more step towards development of a low cost LHP.
Artificial neural network modelling of a large-scale wastewater treatment plant operation.
Güçlü, Dünyamin; Dursun, Sükrü
2010-11-01
Artificial Neural Networks (ANNs), a method of artificial intelligence, provide effective predictive models for complex processes. Three independent ANN models trained with the back-propagation algorithm were developed to predict effluent chemical oxygen demand (COD), suspended solids (SS) and aeration tank mixed liquor suspended solids (MLSS) concentrations of the Ankara central wastewater treatment plant. The appropriate architecture of the ANN models was determined through several steps of training and testing of the models. The ANN models yielded satisfactory predictions. Results of the root mean square error, mean absolute error and mean absolute percentage error were 3.23, 2.41 mg/L and 5.03% for COD; 1.59, 1.21 mg/L and 17.10% for SS; and 52.51, 44.91 mg/L and 3.77% for MLSS, respectively, indicating that the developed model could be efficiently used. The results overall also confirm that the ANN modelling approach may have a great implementation potential for simulation, precise performance prediction and process control of wastewater treatment plants.
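A minimal sketch of the three reported error measures (RMSE, MAE, MAPE); the COD values shown are illustrative placeholders, not data from the plant:

    import numpy as np

    def error_metrics(y_true, y_pred):
        """Root mean square error, mean absolute error and
        mean absolute percentage error between measured and predicted values."""
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        e = y_pred - y_true
        rmse = np.sqrt(np.mean(e ** 2))
        mae = np.mean(np.abs(e))
        mape = 100.0 * np.mean(np.abs(e / y_true))
        return rmse, mae, mape

    # Hypothetical effluent COD values (mg/L): measured vs ANN-predicted
    cod_meas = [48.0, 52.5, 45.3, 60.1, 55.7]
    cod_pred = [45.9, 55.0, 44.1, 63.0, 52.8]
    print("RMSE=%.2f mg/L  MAE=%.2f mg/L  MAPE=%.2f%%" % error_metrics(cod_meas, cod_pred))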
Hybrid context aware recommender systems
NASA Astrophysics Data System (ADS)
Jain, Rajshree; Tyagi, Jaya; Singh, Sandeep Kumar; Alam, Taj
2017-10-01
Recommender systems and context awareness are currently vital fields of research. Most hybrid recommendation systems implement content-based and collaborative filtering techniques, whereas this work combines context and collaborative filtering. The paper presents a hybrid context aware recommender system for books and movies that gives recommendations based on the user context as well as user or item similarity. It also addresses the issue of dimensionality reduction using weighted pre-filtering based on dynamically entered user context and preference of context. This unique step helps to reduce the size of the dataset for collaborative filtering. Bias-subtracted collaborative filtering is used so as to consider the relative rating of a particular user and not the absolute values. Cosine similarity is used as a metric to determine the similarity between users or items. The unknown ratings are calculated and evaluated using MSE (Mean Squared Error) on test and train datasets. The overall process of recommendation has helped to personalize recommendations and give more accurate results with reduced complexity in collaborative filtering.
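A small sketch of bias-subtracted collaborative filtering with cosine similarity, along the lines described above; the tiny rating matrix and the choice of user-based (rather than item-based) similarity are assumptions for illustration, not the paper's implementation:

    import numpy as np

    def cosine_sim(m):
        """Pairwise cosine similarity between the rows of m."""
        norm = np.linalg.norm(m, axis=1, keepdims=True)
        norm[norm == 0] = 1.0
        return (m @ m.T) / (norm * norm.T)

    # Hypothetical user-item rating matrix (0 = unrated)
    R = np.array([[5, 3, 0, 1],
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [0, 1, 5, 4]], dtype=float)

    mask = R > 0
    user_mean = R.sum(axis=1) / mask.sum(axis=1)
    R_centered = np.where(mask, R - user_mean[:, None], 0.0)   # bias-subtracted ratings

    S = cosine_sim(R_centered)                                  # user-user similarity
    # Predict user 0's rating for item 2 from similar users' centered ratings
    u, i = 0, 2
    neighbours = [v for v in range(R.shape[0]) if v != u and mask[v, i]]
    num = sum(S[u, v] * R_centered[v, i] for v in neighbours)
    den = sum(abs(S[u, v]) for v in neighbours) or 1.0
    print("predicted rating:", round(user_mean[u] + num / den, 2))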
Multi-Objective Control Optimization for Greenhouse Environment Using Evolutionary Algorithms
Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun
2011-01-01
This paper investigates the issue of tuning the Proportional Integral and Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures such as good static-dynamic performance specifications and a smooth control process. A model of nonlinear thermodynamic laws between numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment or rate in a simulation experiment. The results show that by tuning the gain parameters the controllers can achieve good control performance through step responses such as small overshoot, fast settling time, short rise time and small steady-state error. Moreover, the scheme can be applied to tuning systems with different properties, such as strong interactions among variables, nonlinearities and conflicting performance criteria. The results indicate that multi-objective optimization algorithms provide an effective and promising tuning method for complex greenhouse production. PMID:22163927
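A minimal sketch of the ITSE criterion used as one of the objectives; the exponentially decaying error signal below is purely illustrative, assuming the closed-loop error e(t) has already been simulated:

    import numpy as np

    def itse(t, error):
        """Integrated time square error: integral of t * e(t)^2 over the horizon."""
        t, error = np.asarray(t, float), np.asarray(error, float)
        return np.trapz(t * error ** 2, t)

    # Illustrative step-response error of a well-damped loop: e(t) = exp(-t/tau)
    t = np.linspace(0.0, 50.0, 2001)
    e = np.exp(-t / 5.0)
    print("ITSE =", round(itse(t, e), 3))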
Conducting High Cycle Fatigue Strength Step Tests on Gamma TiAl
NASA Technical Reports Server (NTRS)
Lerch, Brad; Draper, Sue; Pereira, J. Mike
2002-01-01
High cycle fatigue strength testing of gamma TiAl by the step test method is investigated. A design of experiments was implemented to determine if the coaxing effect occurred during testing. Since coaxing was not observed, step testing was deemed a suitable method to define the fatigue strength at 10^6 cycles.
NASA Astrophysics Data System (ADS)
Piretzidis, Dimitrios; Sideris, Michael G.
2017-09-01
Filtering and signal processing techniques have been widely used in the processing of satellite gravity observations to reduce measurement noise and correlation errors. The parameters and types of filters used depend on the statistical and spectral properties of the signal under investigation. Filtering is usually applied in a non-real-time environment. The present work focuses on the implementation of an adaptive filtering technique to process satellite gravity gradiometry data for gravity field modeling. Adaptive filtering algorithms are commonly used in communication systems, noise and echo cancellation, and biomedical applications. Two independent studies have been performed to introduce adaptive signal processing techniques and test the performance of the least mean-squared (LMS) adaptive algorithm for filtering satellite measurements obtained by the gravity field and steady-state ocean circulation explorer (GOCE) mission. In the first study, a Monte Carlo simulation is performed in order to gain insights about the implementation of the LMS algorithm on data with spectral behavior close to that of real GOCE data. In the second study, the LMS algorithm is implemented on real GOCE data. Experiments are also performed to determine suitable filtering parameters. Only the four accurate components of the full GOCE gravity gradient tensor of the disturbing potential are used. The characteristics of the filtered gravity gradients are examined in the time and spectral domain. The obtained filtered GOCE gravity gradients show an agreement of 63-84 mEötvös (depending on the gravity gradient component), in terms of RMS error, when compared to the gravity gradients derived from the EGM2008 geopotential model. Spectral-domain analysis of the filtered gradients shows that the adaptive filters slightly suppress frequencies in the bandwidth of approximately 10-30 mHz. The limitations of the adaptive LMS algorithm are also discussed. The tested filtering algorithm can be connected to and employed in the first computational steps of the space-wise approach, where a time-wise Wiener filter is applied at the first stage of GOCE gravity gradient filtering. The results of this work can be extended to using other adaptive filtering algorithms, such as the recursive least-squares and recursive least-squares lattice filters.
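A hedged, minimal sketch of the LMS weight-update loop at the heart of such adaptive filtering, shown here in a generic one-step-ahead (line-enhancer) configuration on synthetic data; it is not the GOCE processing chain, and the tap count, step size, and signals are all assumptions:

    import numpy as np

    def lms_filter(d, x, n_taps=16, mu=0.01):
        """Least mean-squares adaptive FIR filter: adapts weights w so that
        w . x approximates the desired signal d; returns output and error."""
        w = np.zeros(n_taps)
        y = np.zeros_like(d)
        e = np.zeros_like(d)
        for n in range(n_taps, len(d)):
            xn = x[n - n_taps:n][::-1]       # most recent past samples first
            y[n] = w @ xn
            e[n] = d[n] - y[n]
            w += 2.0 * mu * e[n] * xn        # steepest-descent weight update
        return y, e

    # Illustrative noisy measurement: slow signal plus broadband noise
    rng = np.random.default_rng(2)
    t = np.arange(4000)
    signal = np.sin(2 * np.pi * t / 400.0)
    noise = np.convolve(rng.normal(size=t.size), np.ones(5) / 5.0, mode="same")
    d = signal + 0.5 * noise
    y, e = lms_filter(d, d, n_taps=32, mu=0.001)     # predict d from its own past
    print("residual error power after adaptation:", round(float(np.mean(e[2000:] ** 2)), 4))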
Schulz, Brian W.; Jongprasithporn, Manutchanok; Hart-Hughes, Stephanie J.; Bulat, Tatjana
2017-01-01
Background: Maximum step length is a brief clinical test involving stepping out and back as far as possible with the arms folded across the chest. This test has been shown to predict fall risk, but the biomechanics of this test are not fully understood. Knee and hip kinetics (moments and powers) are greater for longer steps and for younger subjects, but younger subjects also step farther. Methods: To separate the effects of step length, age, and fall history on joint kinetics, 14 healthy younger subjects, 14 older non-fallers, and 11 older fallers (27(5), 72(5), and 75(6) years, respectively) all stepped to the same relative target distances of 20-80% of their height. Knee and hip kinetics and knee co-contraction were calculated. Findings: Hip and knee kinetics and knee co-contraction all increased with step length, but older non-fallers and fallers utilized greater stepping hip and less stepping knee extensor kinetics. Fallers had greater stepping knee co-contraction than non-fallers. Stance knee co-contraction of non-fallers was similar to that of the young for shorter steps and similar to that of fallers for longer steps. Interpretation: Age had minimal effects and fall history had no effects on joint kinetics of steps to similar distances. Effects of age and fall history on knee co-contraction may contribute to age-related kinetic differences and shorter maximal step lengths of older non-fallers and fallers, but step length correlated with every variable tested. Thus, declines in maximum step length could indicate declines in hip and knee extensor kinetics and impaired performance on similar tasks like recovering from a trip. PMID:23978310
Crack Growth of D6 Steel in Air and High Pressure Oxygen
NASA Technical Reports Server (NTRS)
Bixler, W. D.; Engstrom, W. L.
1971-01-01
Fracture and subcritical flaw growth characteristics were experimentally determined for electroless nickel plated D6 steel in dry air and high pressure oxygen environments as applicable to the Lunar Module/Environmental Control System (LM/ECS) descent gaseous oxygen (GOX) tank. The material tested included forgings, plate, and actual LM/ECS descent GOX tank material. Parent metal and TIG (tungsten inert gas) welds were tested. Tests indicate that proof testing the tanks at 4000 pounds per square inch or higher will insure safe operation at 3060 pounds per square inch. Although significant flaw growth can occur during proofing, subsequent growth of flaws during normal tank operation is negligible.
Ensuring Positiveness of the Scaled Difference Chi-square Test Statistic.
Satorra, Albert; Bentler, Peter M
2010-06-01
A scaled difference test statistic [Formula: see text] that can be computed from standard software of structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (2001). The statistic [Formula: see text] is asymptotically equivalent to the scaled difference test statistic T̄(d) introduced in Satorra (2000), which requires more involved computations beyond standard output of SEM software. The test statistic [Formula: see text] has been widely used in practice, but in some applications it is negative due to negativity of its associated scaling correction. Using the implicit function theorem, this note develops an improved scaling correction leading to a new scaled difference statistic T̄(d) that avoids negative chi-square values.
Ninio, J
1998-07-01
The capacity of visual working memory was investigated using abstract images that were slightly distorted NxN (generally N=8) square lattices of black or white randomly selected elements. After viewing an image, or a sequence of images, the subjects viewed pairs of images containing the test image and a distractor image derived from the first one by changing the black or white value of q randomly selected elements. The number q was adjusted in each experiment to the difficulty of the task and the abilities of the subject. The fraction of recognition errors, given q and N, was used to evaluate the number M of bits memorized by the subject. For untrained subjects, this number M varied in a biphasic manner as a function of the time t of presentation of the test image: it was on average 13 bits for 1 s, 16 bits for 2 to 5 s, and 20 bits for 8 s. The slow pace of acquisition, from 1 to 8 s, seems due to encoding difficulties, and not to channel capacity limitations. Beyond 8 s, M(t), accurately determined for one subject, followed a square root law, in agreement with 19th century observations on the memorization of lists of digits. When two consecutive 8x8 images were viewed and tested in the same order, the number of memorized bits was downshifted by a nearly constant amount, independent of t, and equal on average to 6-7 bits. Across the subjects, the shift was independent of M. When two consecutive test images were related, the recognition errors decreased for both images, whether the testing was performed in the presentation or the reverse order. Studies involving three subjects indicate that, when viewing m consecutive images, the average amount of information captured per image varies with m in a stepwise fashion. The first two step boundaries were around m=3 and m=9-12. The data are compatible with a model of organization of working memory in several successive layers containing increasing numbers of units; the more remote a unit, the lower the rate at which it may acquire encoded information. Copyright 1998 Elsevier Science B.V.
Cell phones to collect pregnancy data from remote areas in Liberia.
Lori, Jody R; Munro, Michelle L; Boyd, Carol J; Andreatta, Pamela
2012-09-01
To report findings on knowledge and skill acquisition following a 3-day training session in the use of short message service (SMS) texting with non- and low-literacy traditional midwives. A pre- and post-test study design was used to assess knowledge and skill acquisition with 99 traditional midwives on the use of SMS texting for real-time, remote data collection in rural Liberia, West Africa. Paired sample t-tests were conducted to establish if overall mean scores varied significantly from pre-test to immediate post-test. Analysis of variance was used to compare means across groups. The nonparametric McNemar's test was used to determine significant differences between the pre-test and post-test values of each individual step involved in SMS texting. Pearson's chi-square test of independence was used to examine the association between ownership of cell phones within a family and achievement of the seven tasks. The mean increase in cell phone knowledge scores was 3.67, with a 95% confidence interval ranging from 3.39 to 3.95. Participants with a cell phone in the family did significantly better on three of the seven tasks in the pre-test: "turns cell on without help" (χ²(1) = 9.15, p = .003); "identifies cell phone coverage" (χ²(1) = 5.37, p = .024); and "identifies cell phone is charged" (χ²(1) = 4.40, p = .042). A 3-day cell phone training session with low- and nonliterate traditional midwives in rural Liberia improved their ability to use mobile technology for SMS texting. Mobile technology can improve data collection accessibility and be used for numerous health care and public health issues. Cell phone accessibility holds great promise for collecting health data in low-resource areas of the world. © 2012 Sigma Theta Tau International.
NASA Astrophysics Data System (ADS)
Guglielmino, F.; Nunnari, G.; Puglisi, G.; Spata, A.
2009-04-01
We propose a new technique, based on elastic theory, to efficiently produce an estimate of three-dimensional surface displacement maps by integrating sparse Global Positioning System (GPS) measurements of deformations and Differential Interferometric Synthetic Aperture Radar (DInSAR) maps of movements of the Earth's surface. The previous methodologies known in the literature for combining data from GPS and DInSAR surveys require two steps: a first, in which sparse GPS measurements are interpolated in order to fill in GPS displacements at the DInSAR grid, and a second, to estimate the three-dimensional surface displacement maps by using a suitable optimization technique. One of the advantages of the proposed approach is that both these steps are unified. We propose a linear matrix equation which accounts for both GPS and DInSAR data and whose solution simultaneously provides the strain tensor, the displacement field and the rigid body rotation tensor throughout the entire investigated area. This linear matrix equation is solved by using Weighted Least Squares (WLS), thus assuring both numerical robustness and high computational efficiency. The proposed methodology was tested on both synthetic and experimental data, the latter from GPS and DInSAR measurements carried out on Mt. Etna. The goodness of the results has been evaluated by using standard errors. These tests also allow optimising the choice of specific parameters of this algorithm. The "open" structure of the method will allow, in the near future, taking into account other available data sets, such as additional interferograms or other geodetic data (e.g., levelling, tilt, etc.), in order to achieve even higher accuracy.
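A minimal sketch of the weighted least squares solve at the core of such a joint inversion, assuming the combined GPS/DInSAR observations have already been assembled into a linear system A m = b with per-observation weights; the design matrix, parameter vector, and noise levels below are synthetic placeholders, not the authors' formulation:

    import numpy as np

    def weighted_least_squares(A, b, w):
        """Solve min || W^(1/2) (A m - b) ||^2 for the parameter vector m,
        where w holds the per-observation weights (e.g. inverse variances)."""
        W = np.diag(w)
        return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

    # Illustrative mixed data set: many moderately noisy observations plus
    # a few precise point observations, both linear in the unknown parameters.
    rng = np.random.default_rng(3)
    m_true = np.array([2.0, -1.0, 0.5])
    A = rng.normal(size=(120, 3))
    sigma = np.r_[np.full(100, 0.05), np.full(20, 0.005)]   # DInSAR-like vs GPS-like noise
    b = A @ m_true + rng.normal(scale=sigma)
    m_hat = weighted_least_squares(A, b, 1.0 / sigma**2)
    print("recovered parameters:", np.round(m_hat, 3))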
Romay-Barja, Maria; Jarrin, Inma; Ncogo, Policarpo; Nseng, Gloria; Sagrado, Maria Jose; Santana-Morales, Maria A; Aparicio, Pilar; Aparcio, Pilar; Valladares, Basilio; Riloha, Matilde; Benito, Agustin
2015-01-01
Malaria remains a major cause of morbidity and mortality among children under five years old in Equatorial Guinea. However, little is known about the community management of malaria and treatment-seeking patterns. We aimed to assess symptoms of children with reported malaria and treatment-seeking behaviour of their caretakers in rural and urban areas in the Bata District. A cross-sectional study was conducted in the district of Bata, and 440 houses were selected from 18 rural villages and 26 urban neighbourhoods. Differences between rural and urban caregivers and children with reported malaria were assessed through the chi-squared test for independence of categorical variables and the Student's t test or the non-parametric Mann-Whitney test for normally or non-normally distributed continuous variables, respectively. Differences between rural and urban households were observed in caregiver treatment-seeking patterns. Fever was the main symptom associated with malaria in both areas. Malaria was treated first at home, particularly in rural areas. The second step was to seek treatment outside the home, mainly at hospital and Health Centre for rural households and at hospital and private clinic for urban ones. Artemether monotherapy was the antimalarial treatment prescribed most often. Households waited for more than 24 hours before seeking treatment outside, and delays were longest in rural areas. The total cost of treatment was higher in urban than in rural areas in Bata. The delays in seeking treatment, the type of malaria therapy received and the cost of treatment are the principal problems found in Bata District. Important steps for reducing malaria morbidity and mortality in this area are to provide sufficient supplies of effective antimalarial drugs and to improve malaria treatment skills in households and in both public and private sectors.
Evaluation of quasi-square wave inverter as a power source for induction motors
NASA Technical Reports Server (NTRS)
Guynes, B. V.; Haggard, R. L.; Lanier, J. R., Jr.
1977-01-01
The relative merits of quasi-square wave inverter-motor technology versus a sine wave inverter-motor system were investigated. The empirical results of several tests on various sizes of wye-wound induction motors are presented with mathematical analysis to support the conclusions of the study. It was concluded that, within the limitations presented, the quasi-square wave inverter-motor system is superior to the more complex sine wave system for most induction motor applications in space.
A root-mean-square approach for predicting fatigue crack growth under random loading
NASA Technical Reports Server (NTRS)
Hudson, C. M.
1981-01-01
A method for predicting fatigue crack growth under random loading which employs the concept of Barsom (1976) is presented. In accordance with this method, the loading history for each specimen is analyzed to determine the root-mean-square maximum and minimum stresses, and the predictions are made by assuming the tests have been conducted under constant-amplitude loading at the root-mean-square maximum and minimum levels. The procedure requires a simple computer program and a desk-top computer. For the eleven predictions made, the ratios of the predicted lives to the test lives ranged from 2.13 to 0.82, which is a good result, considering that the normal scatter in the fatigue-crack-growth rates may range from a factor of two to four under identical loading conditions.
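A hedged sketch of the general idea: compute root-mean-square maximum and minimum stresses from a random load history and then integrate a constant-amplitude growth law at those levels. The Paris-law form, geometry factor, and all constants below are illustrative assumptions, not the values or procedure of the report:

    import numpy as np

    def rms(x):
        """Root-mean-square of a sequence of values."""
        return np.sqrt(np.mean(np.square(x)))

    # Hypothetical random load history (ksi): per-cycle maxima and minima
    rng = np.random.default_rng(4)
    s_max = 20.0 + 4.0 * rng.random(5000)
    s_min = 5.0 + 3.0 * rng.random(5000)

    s_max_rms, s_min_rms = rms(s_max), rms(s_min)      # equivalent constant-amplitude levels
    delta_s = s_max_rms - s_min_rms

    # Constant-amplitude growth at the RMS levels using an assumed Paris-type law
    C, m = 1.0e-8, 3.0                                  # material constants (assumed)
    a, a_final, Y = 0.05, 0.5, 1.12                     # crack length (in), geometry factor (assumed)
    cycles = 0
    while a < a_final:
        dK = Y * delta_s * np.sqrt(np.pi * a)           # stress-intensity factor range
        a += C * dK ** m                                 # crack growth per cycle
        cycles += 1
    print("predicted life:", cycles, "cycles")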
Statistical hypothesis tests of some micrometeorological observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
SethuRaman, S.; Tichler, J.
Chi-square goodness-of-fit is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and the new chi-square values computed. Seventy percent of the data analyzed was either normal or approximately normal. The coefficient of skewness g1 has a good correlation with the chi-square values. Events with |g1| < 0.21 were normal to begin with and those with 0.21
Markerless EPID image guided dynamic multi-leaf collimator tracking for lung tumors
NASA Astrophysics Data System (ADS)
Rottmann, J.; Keall, P.; Berbeco, R.
2013-06-01
Compensation of target motion during the delivery of radiotherapy has the potential to improve treatment accuracy, dose conformity and sparing of healthy tissue. We implement an online image guided therapy system based on soft tissue localization (STiL) of the target from electronic portal images and treatment aperture adaptation with a dynamic multi-leaf collimator (DMLC). The treatment aperture is moved synchronously and in real time with the tumor during the entire breathing cycle. The system is implemented and tested on a Varian TX clinical linear accelerator featuring an AS-1000 electronic portal imaging device (EPID) acquiring images at a frame rate of 12.86 Hz throughout the treatment. A position update cycle for the treatment aperture consists of four steps: in the first step, at time t = ti, a frame is grabbed; in the second step, the frame is processed with the STiL algorithm to get the tumor position at t = ti; in the third step, the tumor position at t = ti + δt is predicted to overcome system latencies; and in the fourth step, the DMLC control software calculates the required leaf motions and applies them at time t = ti + δt. The prediction model is trained before the start of the treatment with data representing the tumor motion. We analyze the system latency with a dynamic chest phantom (4D motion phantom, Washington University). We estimate the average planar position deviation between target and treatment aperture in a clinical setting by driving the phantom with several lung tumor trajectories (recorded from fiducial tracking during radiotherapy delivery to the lung). DMLC tracking for lung stereotactic body radiation therapy without fiducial markers was successfully demonstrated. The inherent system latency is found to be δt = (230 ± 11) ms for a MV portal image acquisition frame rate of 12.86 Hz. The root mean square deviation between tumor and aperture position is smaller than 1 mm. We demonstrate the feasibility of real-time markerless DMLC tracking with a standard LINAC-mounted EPID.
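A minimal sketch of the latency-compensation step, using a simple straight-line extrapolation over recent samples as a stand-in for the trained prediction model described above; the breathing trace, fit window, and handling of the 230 ms latency are illustrative assumptions, not the authors' predictor:

    import numpy as np

    def predict_position(times, positions, latency, fit_window=10):
        """Predict the target position 'latency' seconds ahead by fitting a
        straight line to the most recent samples (a simple stand-in for a
        trained motion-prediction model)."""
        t = np.asarray(times[-fit_window:], float)
        p = np.asarray(positions[-fit_window:], float)
        slope, intercept = np.polyfit(t, p, 1)
        return slope * (t[-1] + latency) + intercept

    # Illustrative breathing-like superior-inferior motion sampled at 12.86 Hz
    dt, latency = 1.0 / 12.86, 0.230
    t = np.arange(0, 10, dt)
    pos = 5.0 * np.sin(2 * np.pi * t / 4.0)            # mm, 4 s breathing period
    pred = predict_position(t[:60], pos[:60], latency)
    actual = 5.0 * np.sin(2 * np.pi * (t[59] + latency) / 4.0)
    print(f"predicted {pred:.2f} mm vs actual {actual:.2f} mm")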
Geohydrology of the Winchester Subbasin, Riverside County, California
Kaehler, Charles A.; Burton, Carmen A.; Rees, Terry F.; Christensen, Allen H.
1998-01-01
Aquifer-test results indicate that the transmissivity is about 950 feet squared per day in the eastern part of the Winchester subbasin near the boundary with the Hemet subbasin and about 72 feet squared per day in the western part of the subbasin near the boundary with th
Children's Strategies in Imagining Spatio-Geometrical Transformations.
ERIC Educational Resources Information Center
McGillicuddy-De Lisi, Ann V.; De Lisi, Richard
1981-01-01
Seventy-five children, 6 to 13 years of age, were assigned to one of five groups on the basis of Piagetian tests of spatial-geometrical knowledge. Subjects imagined and executed three transformations of geometric figures: square-enlargement, diamond enlargement and transformation of a small diamond into a large square. (CM)
Wing Shape Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2015-01-01
A new two-step theory is investigated for predicting the deflection and slope of an entire structure using strain measurements at discrete locations. In the first step, a measured strain is fitted using a piecewise least-squares curve fitting method together with the cubic spline technique. These fitted strains are integrated twice to obtain deflection data along the fibers. In the second step, computed deflection along the fibers are combined with a finite element model of the structure in order to interpolate and extrapolate the deflection and slope of the entire structure through the use of the System Equivalent Reduction and Expansion Process. The theory is first validated on a computational model, a cantilevered rectangular plate wing. The theory is then applied to test data from a cantilevered swept-plate wing model. Computed results are compared with finite element results, results using another strain-based method, and photogrammetry data. For the computational model under an aeroelastic load, maximum deflection errors in the fore and aft, lateral, and vertical directions are -3.2 percent, 0.28 percent, and 0.09 percent, respectively; and maximum slope errors in roll and pitch directions are 0.28 percent and -3.2 percent, respectively. For the experimental model, deflection results at the tip are shown to be accurate to within 3.8 percent of the photogrammetry data and are accurate to within 2.2 percent in most cases. In general, excellent matching between target and computed values are accomplished in this study. Future refinement of this theory will allow it to monitor the deflection and health of an entire aircraft in real time, allowing for aerodynamic load computation, active flexible motion control, and active induced drag reduction.
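A hedged sketch of the first step's strain-to-deflection chain for a cantilever: fit the discrete strain measurements, convert to curvature, and integrate twice. The spline standing in for the piecewise least-squares fit, the strain-to-curvature relation w''(x) = eps(x)/c, and all numbers are assumptions for illustration, not the paper's method or data:

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.integrate import cumulative_trapezoid

    # Hypothetical surface strain measured at discrete stations along a cantilever
    x_meas = np.linspace(0.0, 1.0, 12)                  # m, sensor locations (assumed)
    c = 0.005                                           # m, distance from neutral axis (assumed)
    eps_meas = 1.0e-3 * (1.0 - x_meas) ** 2             # synthetic strain distribution

    # Step 1: smooth/fit the measured strain (spline stands in for the
    # piecewise least-squares + cubic spline fit described in the abstract)
    strain = CubicSpline(x_meas, eps_meas)
    x = np.linspace(0.0, 1.0, 200)

    # Step 2: integrate curvature twice with cantilever boundary conditions
    curvature = strain(x) / c                           # w''(x) = eps(x) / c
    slope = cumulative_trapezoid(curvature, x, initial=0)       # w'(x), w'(0) = 0
    deflection = cumulative_trapezoid(slope, x, initial=0)      # w(x),  w(0) = 0
    print(f"tip slope = {slope[-1]:.4f} rad, tip deflection = {deflection[-1]*1000:.2f} mm")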
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.
1990-01-01
The current work is initiated in an effort to obtain an efficient, accurate, and robust algorithm for the numerical solution of the incompressible Navier-Stokes equations in two- and three-dimensional generalized curvilinear coordinates for both steady-state and time-dependent flow problems. This is accomplished with the use of the method of artificial compressibility and a high-order flux-difference splitting technique for the differencing of the convective terms. Time accuracy is obtained in the numerical solutions by subiterating the equations in pseudo-time for each physical time step. The system of equations is solved with a line-relaxation scheme which allows the use of very large pseudo-time steps leading to fast convergence for steady-state problems as well as for the subiterations of time-dependent problems. Numerous laminar test flow problems are computed and presented with a comparison against analytically known solutions or experimental results. These include the flow in a driven cavity, the flow over a backward-facing step, the steady and unsteady flow over a circular cylinder, flow over an oscillating plate, flow through a one-dimensional inviscid channel with oscillating back pressure, the steady-state flow through a square duct with a 90 degree bend, and the flow through an artificial heart configuration with moving boundaries. An adequate comparison with the analytical or experimental results is obtained in all cases. Numerical comparisons of the upwind differencing with central differencing plus artificial dissipation indicate that the upwind differencing provides a much more robust algorithm, which requires significantly less computing time. The time-dependent problems require on the order of 10 to 20 subiterations, indicating that the elliptical nature of the problem does require a substantial amount of computing effort.
Aerobic Fitness Does Not Contribute to Prediction of Orthostatic Intolerance
NASA Technical Reports Server (NTRS)
Convertino, Victor A.; Sather, Tom M.; Goldwater, Danielle J.; Alford, William R.
1986-01-01
Several investigations have suggested that orthostatic tolerance may be inversely related to aerobic fitness (VO2max). To test this hypothesis, 18 males (age 29 to 51 yr) underwent both treadmill VO2max determination and graded lower body negative pressure (LBNP) exposure to tolerance. VO2max was measured during the last minute of a Bruce treadmill protocol. LBNP was terminated based on pre-syncopal symptoms, and LBNP tolerance (peak LBNP) was expressed as the cumulative product of LBNP and time (torr-min). Changes in heart rate, stroke volume, cardiac output, blood pressure and impedance rheographic indices of mid-thigh-leg fluid accumulation were measured at rest and during the final minute of LBNP. For all 18 subjects, the mean (plus or minus SE) fluid accumulation index and leg venous compliance index at peak LBNP were 139 plus or minus 3.9 plus or minus 0.4 ml-torr-min^-2 x 10^3, respectively. Pearson product-moment correlations and step-wise linear regression were used to investigate relationships with peak LBNP. Variables associated with endurance training, such as VO2max and percent body fat, were not found to correlate significantly (P is less than 0.05) with peak LBNP and did not add sufficiently to the prediction of peak LBNP to be included in the step-wise regression model. The step-wise regression model included only the fluid accumulation index, leg venous compliance index, and blood volume and resulted in a squared multiple correlation coefficient of 0.978. These data do not support the hypothesis that orthostatic tolerance as measured by LBNP is lower in individuals with high aerobic fitness.
Impacts of lung and tumor volumes on lung dosimetry for nonsmall cell lung cancer.
Lei, Weijie; Jia, Jing; Cao, Ruifen; Song, Jing; Hu, Liqin
2017-09-01
The purpose of this study was to determine the impacts of lung and tumor volumes on normal lung dosimetry in three-dimensional conformal radiotherapy (3DCRT), step-and-shoot intensity-modulated radiotherapy (ssIMRT), and single full-arc volumetric-modulated arc therapy (VMAT) in treatment of nonsmall cell lung cancers (NSCLC). All plans were designed to deliver a total dose of 66 Gy in 33 fractions to PTV for the 32 NSCLC patients with various total (bilateral) lung volumes, planning target volumes (PTVs), and PTV locations. The ratio of the lung volume (total lung volume excluding the PTV volume) to the PTV volume (LTR) was evaluated to represent the impacts in three steps. (a) The least squares method was used to fit mean lung doses (MLDs) to PTVs or LTRs with power-law function in the population cohort (N = 32). (b) The population cohort was divided into three groups by LTRs based on first step and then by PTVs, respectively. The MLDs were compared among the three techniques in each LTR group (LG) and each PTV group (PG). (c) The power-law correlation was tested by using the adaptive radiation therapy (ART) planning data of individual patients in the individual cohort (N = 4). Different curves of power-law function with high R 2 values were observed between averaged LTRs and averaged MLDs for 3DCRT, ssIMRT, and VMAT, respectively. In the individual cohort, high R 2 values of fitting curves were also observed in individual patients in ART, although the trend was highly patient-specific. There was a more obvious correlation between LTR and MLD than that between PTV and MLD. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
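A minimal sketch of the first analysis step, a least-squares power-law fit of mean lung dose against LTR done linearly in log-log space; the tabulated values are invented placeholders, not the study's data:

    import numpy as np

    # Hypothetical group-averaged data: LTR (lung/PTV volume ratio) vs mean lung dose (Gy)
    ltr = np.array([3.0, 6.0, 10.0, 15.0, 25.0, 40.0])
    mld = np.array([22.0, 16.5, 13.0, 10.8, 8.4, 6.7])

    # Least-squares power-law fit MLD = a * LTR^b, done linearly in log-log space
    b, log_a = np.polyfit(np.log(ltr), np.log(mld), 1)
    a = np.exp(log_a)
    pred = a * ltr ** b
    r2 = 1.0 - np.sum((mld - pred) ** 2) / np.sum((mld - mld.mean()) ** 2)
    print(f"MLD ~ {a:.1f} * LTR^{b:.2f}  (R^2 = {r2:.3f})")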
Schaefer, C; Lecomte, C; Clicq, D; Merschaert, A; Norrant, E; Fotiadu, F
2013-09-01
The final step of an active pharmaceutical ingredient (API) manufacturing synthesis process consists of a crystallization during which the API and residual solvent contents have to be quantified precisely in order to reach a predefined seeding point. A feasibility study was conducted to demonstrate the suitability of on-line NIR spectroscopy to control this step in line with the new version of the European Medicines Agency (EMA) guideline [1]. A quantitative method was developed at laboratory scale using statistical design of experiments (DOE) and multivariate data analysis such as principal component analysis (PCA) and partial least squares (PLS) regression. NIR models were built to quantify the API in the range of 9-12% (w/w) and to quantify the residual methanol in the range of 0-3% (w/w). To improve the predictive ability of the models, the development procedure encompassed outlier elimination, optimum model rank definition, and spectral range and spectral pre-treatment selection. Conventional criteria such as the number of PLS factors, R(2), and root mean square errors of calibration, cross-validation and prediction (RMSEC, RMSECV, RMSEP) enabled the selection of three model candidates. These models were tested in the industrial pilot plant during three technical campaigns. Results of the most suitable models were evaluated against the chromatographic reference methods. A maximum relative bias of 2.88% was obtained for the API target content. Absolute biases of 0.01 and 0.02% (w/w), respectively, were achieved at methanol content levels of 0.10 and 0.13% (w/w). The repeatability was assessed as sufficient for the on-line monitoring of the two analytes. The present feasibility study confirmed the possibility of using on-line NIR spectroscopy as a PAT tool to monitor in real time both the API and the residual methanol contents, in order to control the seeding of an API crystallization at industrial scale. Furthermore, the successful scale-up of the method proved its capability to be implemented in the manufacturing plant with the launch of the new API process. Copyright © 2013 Elsevier B.V. All rights reserved.
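A hedged sketch of one part of the described model development, choosing the optimum model rank (number of PLS factors) by comparing RMSEC with cross-validated RMSECV; the synthetic spectra and the 9-12% (w/w) mapping are assumptions, and sklearn's PLSRegression stands in for the chemometrics software actually used:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(5)
    X = rng.normal(size=(60, 150))                      # stand-in NIR spectra
    y = X[:, :5] @ np.array([0.4, 0.3, 0.2, 0.1, 0.05]) + 0.01 * rng.normal(size=60)
    y = 9.0 + 3.0 * (y - y.min()) / (y.max() - y.min()) # map to a 9-12 % (w/w) API range

    def rmse(a, b):
        return float(np.sqrt(np.mean((a - b) ** 2)))

    # Scan the model rank and report calibration vs cross-validation errors
    for k in range(1, 8):
        pls = PLSRegression(n_components=k).fit(X, y)
        rmsec = rmse(y, pls.predict(X).ravel())
        rmsecv = rmse(y, cross_val_predict(PLSRegression(n_components=k), X, y, cv=6).ravel())
        print(f"factors={k}  RMSEC={rmsec:.3f}  RMSECV={rmsecv:.3f}")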
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V., E-mail: truhlar@umn.edu, E-mail: candler@aem.umn.edu
2014-02-07
Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N4. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.
Novak, Richard M.; Metch, Barbara; Buchbinder, Susan; Cabello, Robinson; Donastorg, Yeycy; Figoroa, John-Peter; Adbul-Jauwad, Hend; Joseph, Patrice; Koenig, Ellen; Metzger, David; Sobieszycz, Magda; Tyndall, Mark; Zorilla, Carmen
2013-01-01
Objectives: Report of risk behavior, HIV incidence, and pregnancy rates among women participating in the Step Study, a phase IIB trial of the MRKAd5 HIV-1 gag/pol/nef vaccine in HIV-negative individuals who were at high risk of HIV-1. Design: Prospective multicenter, double-blinded, placebo-controlled trial. Methods: Women were from North American (NA) and Caribbean and South American (CSA) sites. Risk behavior was collected at screening and at 6-month intervals. Differences in characteristics between groups were tested with chi-square tests, two-sided Fisher's exact tests, and Wilcoxon rank sum tests. Generalized estimating equation models were used to assess behavioral change. Results: Among 1134 enrolled women, the median number of male partners was 18; 73.8% reported unprotected vaginal sex, 15.9% unprotected anal sex and 10.8% evidence of a sexually transmitted infection in the 6 months prior to baseline. With 3344 person-years (p-y) of follow up, there were 15 incident HIV infections: the incidence rate was 0.45 per 100 p-y (95% CI 0.25, 0.74). Crack cocaine use in both regions (relative risk [RR]=2.4 [1.7, 3.3]) and, in CSA, unprotected anal sex (RR=6.4 [3.8, 10.7]) and drug use (RR=4.1 [2.1, 8.0]) were baseline risk behaviors associated with HIV acquisition. There was a marked reduction in risk behaviors after study enrollment, with some recurrence in unprotected vaginal sex. Of 963 non-sterilized women, 304 (31.6%) became pregnant. Conclusions: Crack cocaine use and unprotected anal sex are important risk criteria to identify high-risk women for HIV efficacy trials. Pregnancy during the trial was a common occurrence and needs to be considered in trial planning for prevention trials in women. PMID:23807272
Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao
2018-06-01
To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
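A minimal sketch of a least-squares predictor trained from causal neighbours for one rectangular region, in the spirit of the region-based LS prediction described above; the neighbour set, region handling, and test image are simplifying assumptions, not the paper's algorithm:

    import numpy as np

    def ls_predict_region(img, top, left, h, w):
        """Train a least-squares predictor from the causal neighbours
        (west, north, north-west, north-east) for one rectangular region
        and return the coefficients and prediction residuals."""
        rows, targets = [], []
        for r in range(max(top, 1), top + h):
            for c in range(max(left, 1), min(left + w, img.shape[1] - 1)):
                rows.append([img[r, c-1], img[r-1, c], img[r-1, c-1], img[r-1, c+1]])
                targets.append(img[r, c])
        A, y = np.array(rows, float), np.array(targets, float)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coef
        return coef, residual

    rng = np.random.default_rng(6)
    img = np.cumsum(rng.integers(-2, 3, size=(64, 64)), axis=1) + 128.0   # smooth-ish test image
    coef, res = ls_predict_region(img, 8, 8, 16, 16)
    print("predictor coefficients:", np.round(coef, 3))
    print("residual std vs pixel std:", round(float(res.std()), 2), "vs", round(float(img.std()), 2))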
NASA Technical Reports Server (NTRS)
Merriam, Marshal L.
1986-01-01
The technique of obtaining second order, oscillation free, total variation diminishing (TVD) scalar difference schemes by adding a limited diffusion flux (smoothing) to a second order centered scheme is explored. It is shown that such schemes do not always converge to the correct physical answer. The approach presented here is to construct schemes that numerically satisfy the second law of thermodynamics on a cell by cell basis. Such schemes can only converge to the correct physical solution and in some cases can be shown to be TVD. An explicit scheme with this property and second order spatial accuracy was found to have an extremely restrictive time step limitation (Δt < Δx²). Switching to an implicit scheme removed the time step limitation.
NASA Astrophysics Data System (ADS)
Chandramohan, S.; Seo, Tae Hoon; Janardhanam, V.; Hong, Chang-Hee; Suh, Eun-Kyung
2017-10-01
Charge transfer doping is a renowned route to modify the electrical and electronic properties of graphene. Understanding the stability of potentially important charge-transfer materials for graphene doping is a crucial first step. Here we present a systematic comparison on the doping efficiency and stability of single layer graphene using molybdenum trioxide (MoO3), gold chloride (AuCl3), and bis(trifluoromethanesulfonyl)amide (TFSA). Chemical dopants proved to be very effective, but MoO3 offers better thermal stability and device fabrication compatibility. Single layer graphene films with sheet resistance values between 100 and 200 ohm/square were consistently produced by implementing a two-step growth followed by doping without compromising the optical transmittance.
Taylor, Natalie; Long, Janet C; Debono, Deborah; Williams, Rachel; Salisbury, Elizabeth; O'Neill, Sharron; Eykman, Elizabeth; Braithwaite, Jeffrey; Chin, Melvin
2016-03-12
Lynch syndrome is an inherited disorder associated with a range of cancers, and found in 2-5 % of colorectal cancers. Lynch syndrome is diagnosed through a combination of significant family and clinical history and pathology. The definitive diagnostic germline test requires formal patient consent after genetic counselling. If diagnosed early, carriers of Lynch syndrome can undergo increased surveillance for cancers, which in turn can prevent late stage cancers, optimise treatment and decrease mortality for themselves and their relatives. However, over the past decade, international studies have reported that only a small proportion of individuals with suspected Lynch syndrome were referred for genetic consultation and possible genetic testing. The aim of this project is to use behaviour change theory and implementation science approaches to increase the number and speed of healthcare professional referrals of colorectal cancer patients with a high-likelihood risk of Lynch syndrome to appropriate genetic counselling services. The six-step Theoretical Domains Framework Implementation (TDFI) approach will be used at two large, metropolitan hospitals treating colorectal cancer patients. Steps are: 1) form local multidisciplinary teams to map current referral processes; 2) identify target behaviours that may lead to increased referrals using discussion supported by a retrospective audit; 3) identify barriers to those behaviours using the validated Influences on Patient Safety Behaviours Questionnaire and TDFI guided focus groups; 4) co-design interventions to address barriers using focus groups; 5) co-implement interventions; and 6) evaluate intervention impact. Chi square analysis will be used to test the difference in the proportion of high-likelihood risk Lynch syndrome patients being referred for genetic testing before and after intervention implementation. A paired t-test will be used to assess the mean time from the pathology test results to referral for high-likelihood Lynch syndrome patients pre-post intervention. Run charts will be used to continuously monitor change in referrals over time, based on scheduled monthly audits. This project is based on a tested and refined implementation strategy (TDFI approach). Enhancing the process of identifying and referring people at high-likelihood risk of Lynch syndrome for genetic counselling will improve outcomes for patients and their relatives, and potentially save public money.
Advanced recovery systems wind tunnel test report
NASA Technical Reports Server (NTRS)
Geiger, R. H.; Wailes, W. K.
1990-01-01
Pioneer Aerospace Corporation (PAC) conducted parafoil wind tunnel testing in the NASA-Ames 80- by 120-foot test section of the National Full-Scale Aerodynamic Complex, Moffett Field, CA. The investigation was conducted to determine the aerodynamic characteristics of two scale ram air wings in support of air drop testing and full scale development of Advanced Recovery Systems for the Next Generation Space Transportation System. Two models were tested during this investigation. Both the primary test article, a 1/9 geometric scale model with a wing area of 1200 square feet, and the secondary test article, a 1/36 geometric scale model with a wing area of 300 square feet, had an aspect ratio of 3. The test results show that both models were statically stable about a model reference point at angles of attack from 2 to 10 degrees. The maximum lift-drag ratio varied between 2.9 and 2.4 with increasing wing loading.
NASA Technical Reports Server (NTRS)
Wilcox, Floyd J., Jr.; Birch, Trevor J.; Allen, Jerry M.
2004-01-01
A wind-tunnel investigation of a square cross-section missile configuration has been conducted to obtain force and moment measurements, surface pressure measurements, and vapor screen flow visualization photographs for comparison with computational fluid dynamics studies conducted under the auspices of The Technical Cooperation Program (TTCP). Tests were conducted on three configurations which included: (1) body alone, (2) body plus tail fins mounted on the missile corners, and (3) body plus tail fins mounted on the missile side. This test was conducted in test section #2 of the NASA Langley Unitary Plan Wind Tunnel at Mach numbers of 2.50 and 4.50 and at a Reynolds number of 4 million per ft. The data were obtained over an angle of attack range from -4 deg. to 24 deg. and roll angles from 0 deg. to 45 deg., i.e., from a diamond shape (as viewed from the rear) at a roll angle of 0 deg. to a square shape at 45 deg.