Consistency of Standard Setting in an Augmented State Testing System
ERIC Educational Resources Information Center
Lissitz, Robert W.; Wei, Hua
2008-01-01
In this article we address the issue of consistency in standard setting in the context of an augmented state testing program. Information gained from the external NRT scores is used to help make an informed decision on the determination of cut scores on the state test. The consistency of cut scores on the CRT across grades is maintained by forcing…
Overhead tray for cable test system
NASA Technical Reports Server (NTRS)
Saltz, K. T.
1976-01-01
System consists of overhead slotted tray, series of compatible adapter cables, and automatic test set which consists of control console and cable-switching console. System reduces hookup time and also reduces cost of fabricating and storing test cables.
Evaluation of Density Functionals and Basis Sets for Carbohydrates
USDA-ARS?s Scientific Manuscript database
Correlated ab initio wave function calculations using MP2/aug-cc-pVTZ model chemistry have been performed for three test sets of gas phase saccharide conformations to provide reference values for their relative energies. The test sets consist of 15 conformers of alpha and beta-D-allopyranose, 15 of ...
40 CFR 86.135-12 - Dynamometer procedure.
Code of Federal Regulations, 2013 CFR
2013-07-01
... control of pre-selectable power settings may be set anytime prior to the beginning of the emissions test... Heavy-Duty Vehicles; Test Procedures § 86.135-12 Dynamometer procedure. (a) Overview. The dynamometer run consists of two tests, a “cold” start test, after a minimum 12-hour and a maximum 36-hour soak...
40 CFR 86.135-12 - Dynamometer procedure.
Code of Federal Regulations, 2014 CFR
2014-07-01
... control of pre-selectable power settings may be set anytime prior to the beginning of the emissions test... Heavy-Duty Vehicles; Test Procedures § 86.135-12 Dynamometer procedure. (a) Overview. The dynamometer run consists of two tests, a “cold” start test, after a minimum 12-hour and a maximum 36-hour soak...
M-52 spray booth qualification test
NASA Technical Reports Server (NTRS)
1990-01-01
The procedures, performance, and results obtained from the M-52 spray booth qualification test are documented. The test was conducted at the Thiokol Corporation, Space Operations, M-52 Inert Parts Preparation facility. The purpose of this testing sequence was to ensure the spray booth would produce flight-qualified hardware. The testing sequence was conducted in two series: the first under CTP-0142, Revision 1, and the second in accordance with CTP-0142, Revision 2. The CTP-0142, Revision 1, series consisted of the contamination removal test and the performance test. The contamination removal test was used to assess the Teflon level in the spray booth. The performance test consisted of painting and Chemloking a forward dome inside the spray booth per flight procedures. During the performance test, two sets of witness panels (case/insulation and steel/epoxy/steel) were prepared and pull tested. The CTP-0142, Revision 2, series of testing consisted of re-testing the steel/epoxy/steel witness panels. Analysis of the pull tests indicates that the tensile test results were comparable to the systems tunnel witness panel database. The average tensile values of the exposed panel set and the control panel set were above the 1-basis lower limits established on the systems tunnel witness panel database. It is recommended that the M-52 spray booth be qualified for producing flight hardware.
NASA Astrophysics Data System (ADS)
Ishak-Boushaki, Mustapha B.
2018-06-01
Testing general relativity at cosmological scales and probing the cause of cosmic acceleration are among important objectives targeted by incoming and future astronomical surveys and experiments. I present our recent results on (in)consistency tests that can provide insights about the underlying gravity theory and cosmic acceleration using cosmological data sets. We use new statistical measures that can detect discordances between data sets when present. We use an algorithmic procedure based on these new measures that is able to identify in some cases whether an inconsistency is due to problems related to systematic effects in the data or to the underlying model. Some recent published tensions between data sets are also examined using our formalism, including the Hubble constant measurements, Planck and Large-Scale-Structure. (Work supported in part by NSF under Grant No. AST-1517768).
Line-of-Sight Data Link Test Set
1976-06-01
spheric layer model for layer refraction or a surface reflectivity model for ground reflection paths. Measurement of the channel impulse response...the model is exercised over a path consisting of only a constant direct component. The test would consist of measuring the modem demodulator bit...direct and a fading direct component. The test typically would consist of measuring the bit error-rate over a range of average signal-to-noise
2005-05-01
[Garbled report snippet: table-of-contents and list-of-figures residue. Recoverable items: Phase I test data; a wrinkling failure test article; test results for Set A "0° core", Set B "90° core", and Set E "Isotropic (Foam) Core".]
Cosmological consistency tests of gravity theory and cosmic acceleration
NASA Astrophysics Data System (ADS)
Ishak-Boushaki, Mustapha B.
2017-01-01
Testing general relativity at cosmological scales and probing the cause of cosmic acceleration are among the important objectives targeted by incoming and future astronomical surveys and experiments. I present our recent results on consistency tests that can provide insights about the underlying gravity theory and cosmic acceleration using cosmological data sets. We use statistical measures, the rate of cosmic expansion, the growth rate of large scale structure, and the physical consistency of these probes with one another.
A Comparison of Three Types of Test Development Procedures Using Classical and Latent Trait Methods.
ERIC Educational Resources Information Center
Benson, Jeri; Wilson, Michael
Three methods of item selection were used to select sets of 38 items from a 50-item verbal analogies test and the resulting item sets were compared for internal consistency, standard errors of measurement, item difficulty, biserial item-test correlations, and relative efficiency. Three groups of 1,500 cases each were used for item selection. First…
Tests of a Semi-Analytical Case 1 and Gelbstoff Case 2 SeaWiFS Algorithm with a Global Data Set
NASA Technical Reports Server (NTRS)
Carder, Kendall L.; Hawes, Steve K.; Lee, Zhongping
1997-01-01
A semi-analytical algorithm was tested with a total of 733 points of either unpackaged or packaged-pigment data, with corresponding algorithm parameters for each data type. The 'unpackaged' type consisted of data sets that were generally consistent with the Case 1 CZCS algorithm and other well calibrated data sets. The 'packaged' type consisted of data sets apparently containing somewhat more packaged pigments, requiring modification of the absorption parameters of the model consistent with the CalCOFI study area. This resulted in two equally divided data sets. A more thorough scrutiny of these and other data sets using a semianalytical model requires improved knowledge of the phytoplankton and gelbstoff of the specific environment studied. Since the semi-analytical algorithm is dependent upon 4 spectral channels including the 412 nm channel, while most other algorithms are not, a means of testing data sets for consistency was sought. A numerical filter was developed to classify data sets into the above classes. The filter uses reflectance ratios, which can be determined from space. The sensitivity of such numerical filters to measurement errors resulting from atmospheric correction and sensor noise requires further study. The semi-analytical algorithm performed superbly on each of the data sets after classification, resulting in RMS1 errors of 0.107 and 0.121, respectively, for the unpackaged and packaged data-set classes, with little bias and slopes near 1.0. In combination, the RMS1 performance was 0.114. While these numbers appear rather sterling, one must bear in mind what mis-classification does to the results. Using an average or compromise parameterization on the modified global data set yielded an RMS1 error of 0.171, while using the unpackaged parameterization on the global evaluation data set yielded an RMS1 error of 0.284. So, without classification, the algorithm performs better globally using the average parameters than it does using the unpackaged parameters. Finally, the effects of even more extreme pigment packaging must be examined in order to improve algorithm performance at high latitudes. Note, however, that the North Sea and Mississippi River plume studies contributed data to the packaged and unpackaged classes, respectively, with little effect on algorithm performance. This suggests that gelbstoff-rich Case 2 waters do not seriously degrade performance of the semi-analytical algorithm.
Cai, Bin; Dolly, Steven; Kamal, Gregory; Yaddanapudi, Sridhar; Sun, Baozhou; Goddu, S Murty; Mutic, Sasa; Li, Hua
2018-04-28
To investigate the feasibility of using the kV flat panel detector on a linac for consistency evaluations of kV X-ray generator performance. An in-house designed aluminum (Al) array phantom with six 9×9 cm² square regions of various thicknesses was proposed and used in this study. Through XML script-driven image acquisition, kV images with various acquisition settings were obtained using the kV flat panel detector. Utilizing pre-established baseline curves, the consistency of X-ray tube output characteristics including tube voltage accuracy, exposure accuracy and exposure linearity was assessed through image quality assessment metrics including ROI mean intensity, ROI standard deviation (SD) and noise power spectra (NPS). The robustness of this method was tested on two linacs over a three-month period. With the proposed method, tube voltage accuracy can be verified through a consistency check with a 2% tolerance and 2 kVp intervals for forty different kVp settings. The exposure accuracy can be tested with a 4% consistency tolerance for three mAs settings over forty kVp settings. The exposure linearity tested with three mAs settings achieved a coefficient of variation (CV) of 0.1. We propose a novel approach that uses the kV flat panel detector available on the linac for X-ray generator testing. This approach eliminates the inefficiencies and variability associated with using third-party QA detectors while enabling an automated process.
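The exposure-linearity metric reported above (a CV of 0.1 across three mAs settings) can be sketched in a few lines; the ROI values below are hypothetical, not the study's data.

```python
import numpy as np

def exposure_linearity_cv(roi_means, mas_settings):
    """Coefficient of variation of ROI-mean-intensity per mAs.

    A small CV indicates the detector signal scales linearly with exposure;
    the study reports a CV of 0.1 over three mAs settings.
    """
    ratios = np.asarray(roi_means, float) / np.asarray(mas_settings, float)
    return ratios.std() / ratios.mean()

# Hypothetical ROI mean intensities measured at three mAs settings:
print(f"CV = {exposure_linearity_cv([410.0, 821.0, 1640.0], [1, 2, 4]):.3f}")
```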
SU-E-T-468: Implementation of the TG-142 QA Process for Seven Linacs with Enhanced Beam Conformance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woollard, J; Ayan, A; DiCostanzo, D
2015-06-15
Purpose: To develop a TG-142 compliant QA process for 7 Varian TrueBeam linear accelerators (linacs) with enhanced beam conformance and dosimetrically matched beam models. To ensure consistent performance of all 7 linacs, the QA process should include a common set of baseline values for use in routine QA on all linacs. Methods: The TG-142 report provides recommended tests, tolerances and frequencies for quality assurance of medical accelerators. Based on the guidance provided in the report, measurement tests were developed to evaluate each of the applicable parameters listed for daily, monthly and annual QA. These tests were then performed on each of our 7 new linacs as they came on line at our institution. Results: The tolerance values specified in TG-142 for each QA test are either absolute tolerances (e.g., ±2 mm) or require a comparison to a baseline value. The results of our QA tests were first used to ensure that all 7 linacs were operating within the suggested tolerance values provided in TG-142 for those tests with absolute tolerances and that the performance of the linacs was adequately matched. The QA test results were then used to develop a set of common baseline values for those QA tests that require comparison to a baseline value at routine monthly and annual QA. The procedures and baseline values were incorporated into spreadsheets for use in monthly and annual QA. Conclusion: We have developed a set of procedures for daily, monthly and annual QA of our linacs that are consistent with the TG-142 report. A common set of baseline values was developed for routine QA tests. The use of this common set of baseline values for comparison at monthly and annual QA will ensure consistent performance of all 7 linacs.
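A minimal sketch (not the authors' spreadsheet) of a TG-142-style check against a common baseline; the parameter names, baseline values, and tolerance below are illustrative only.

```python
# Each measured value is compared either to an absolute tolerance or to a
# common baseline shared by all dosimetrically matched linacs.
COMMON_BASELINES = {"output_6MV_cGy_per_MU": 1.000, "flatness_pct": 1.8}

def check_against_baseline(name, measured, tolerance_pct):
    baseline = COMMON_BASELINES[name]
    deviation_pct = 100.0 * (measured - baseline) / baseline
    status = "PASS" if abs(deviation_pct) <= tolerance_pct else "FAIL"
    return f"{name}: {measured} ({deviation_pct:+.2f}% vs baseline) -> {status}"

print(check_against_baseline("output_6MV_cGy_per_MU", 1.012, 2.0))
```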
PUMP SETS NO. 5 AND NO. 4. Each pump set ...
PUMP SETS NO. 5 AND NO. 4. Each pump set consists of a Worthington Pump and a General Electric motor - Edwards Air Force Base, Air Force Rocket Propulsion Laboratory, Flame Deflector Water System, Test Area 1-120, north end of Jupiter Boulevard, Boron, Kern County, CA
Ab Initio Density Fitting: Accuracy Assessment of Auxiliary Basis Sets from Cholesky Decompositions.
Boström, Jonas; Aquilante, Francesco; Pedersen, Thomas Bondo; Lindh, Roland
2009-06-09
The accuracy of auxiliary basis sets derived by Cholesky decompositions of the electron repulsion integrals is assessed in a series of benchmarks on total ground state energies and dipole moments of a large test set of molecules. The test set includes molecules composed of atoms from the first three rows of the periodic table as well as transition metals. The accuracy of the auxiliary basis sets is tested for the 6-31G**, correlation consistent, and atomic natural orbital basis sets at the Hartree-Fock, density functional theory, and second-order Møller-Plesset levels of theory. By decreasing the decomposition threshold, a hierarchy of auxiliary basis sets is obtained with accuracies ranging from that of standard auxiliary basis sets to that of conventional integral treatments.
GOES Type III Loop Heat Pipe Life Test Results
NASA Technical Reports Server (NTRS)
Ottenstein, Laura
2011-01-01
The GOES Type III Loop Heat Pipe (LHP) was built as a life test unit for the loop heat pipes on the GOES N-Q series satellites. This propylene LHP was built by Dynatherm Corporation in 2000 and tested continuously for approximately 14 months. It was then put into storage for 3 years. Following the storage period, the LHP was tested at Swales Aerospace to verify that the loop performance hadn't changed. Most test results were consistent with earlier results. At the conclusion of testing at Swales, the LHP was transferred to NASA/GSFC for continued periodic testing. The LHP has been set up for testing in the Thermal Lab at GSFC since 2006. A group of tests consisting of start-ups, power cycles, and a heat transport limit test has been performed every six to nine months since March 2006. Test results have shown no change in the loop performance over the five years of testing. This presentation will discuss the test hardware, test set-up, and tests performed. Test results to be presented include sample plots from individual tests, along with conductance measurements for all tests performed.
Prediction of beta-turns from amino acid sequences using the residue-coupled model.
Guruprasad, K; Shukla, S
2003-04-01
We evaluated the prediction of beta-turns from amino acid sequences using the residue-coupled model with an enlarged representative protein data set selected from the Protein Data Bank. Our results show that the probability values derived from a data set comprising 425 protein chains yielded an overall beta-turn prediction accuracy of 68.74%, compared with 94.7% reported earlier on a data set of 30 proteins using the same method. However, we noted that the overall beta-turn prediction accuracy using probability values derived from the 30-protein data set reduces to 40.74% when tested on the data set comprising 425 protein chains. In contrast, using probability values derived from the 425 data set used in this analysis, the overall beta-turn prediction accuracy yielded consistent results when tested on either the 30-protein data set (64.62%) used earlier or a more recent representative data set comprising 619 protein chains (64.66%) or on a jackknife data set comprising 476 representative protein chains (63.38%). We therefore recommend the use of probability values derived from the 425 representative protein chains data set reported here, which gives more realistic and consistent predictions of beta-turns from amino acid sequences.
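A simplified sketch of window-based turn scoring follows. The actual residue-coupled model multiplies pair-coupled probabilities derived from the protein data set; the probability table and threshold here are hypothetical placeholders.

```python
# Score a four-residue window by multiplying probabilities of adjacent
# residue pairs occurring in a beta-turn; values below are illustrative.
PAIR_PROB = {("G", "P"): 0.012, ("P", "G"): 0.015, ("N", "G"): 0.011}
DEFAULT = 0.004     # fallback probability for pairs not in the table
THRESHOLD = 1e-7    # decision threshold (hypothetical)

def is_beta_turn(window):          # window: 4 residues, e.g. "NPGS"
    score = 1.0
    for a, b in zip(window, window[1:]):
        score *= PAIR_PROB.get((a, b), DEFAULT)
    return score, score > THRESHOLD

print(is_beta_turn("NPGS"))        # (2.4e-07, True)
```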
Image recognition and consistency of response
NASA Astrophysics Data System (ADS)
Haygood, Tamara M.; Ryan, John; Liu, Qing Mary A.; Bassett, Roland; Brennan, Patrick C.
2012-02-01
Purpose: To investigate the connection between conscious recognition of an image previously encountered in an experimental setting and consistency of response to the experimental question.
Materials and Methods: Twenty-four radiologists viewed 40 frontal chest radiographs and gave their opinion as to the position of a central venous catheter. One-to-three days later they again viewed 40 frontal chest radiographs and again gave their opinion as to the position of the central venous catheter. Half of the radiographs in the second set were repeated images from the first set and half were new. The radiologists were asked of each image whether it had been included in the first set. For this study, we are evaluating only the 20 repeated images. We used the Kruskal-Wallis test and Fisher's exact test to determine the relationship between conscious recognition of a previously interpreted image and consistency in interpretation of the image.
Results: There was no significant correlation between recognition of the image and consistency in response regarding the position of the central venous catheter. In fact, there was a trend in the opposite direction, with radiologists being slightly more likely to give a consistent response with respect to images they did not recognize than with respect to those they did recognize.
Conclusion: Radiologists' recognition of previously-encountered images in an observer-performance study does not noticeably color their interpretation on the second encounter.
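The two statistical tests named in the methods can be reproduced with SciPy; the counts and scores in this sketch are hypothetical, not the study's data.

```python
from scipy import stats

# Hypothetical 2x2 table: rows = image recognized?, cols = response consistent?
table = [[55, 30],   # recognized:     consistent, inconsistent
         [70, 25]]   # not recognized: consistent, inconsistent
odds_ratio, p_fisher = stats.fisher_exact(table)

# Kruskal-Wallis on consistency indicators grouped by recognition (hypothetical).
recognized = [1, 1, 0, 1, 1, 0, 1]
not_recognized = [1, 1, 1, 0, 1, 1, 1]
h_stat, p_kw = stats.kruskal(recognized, not_recognized)
print(f"Fisher p = {p_fisher:.3f}, Kruskal-Wallis p = {p_kw:.3f}")
```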
1985-11-01
User Interface that consists of a set of callable execution time routines available to an application program for form processing. IISS Function Screen...provisions for test consists of the normal testing techniques that are accomplished during the construction process. They consist of design and code...application presents a form to the user which must be filled in with information for processing by that application. The application then
Airborne Turbulence Detection System Certification Tool Set
NASA Technical Reports Server (NTRS)
Hamilton, David W.; Proctor, Fred H.
2006-01-01
A methodology and a corresponding set of simulation tools for testing and evaluating turbulence detection sensors have been presented. The tool set is available to industry and the FAA for certification of radar-based airborne turbulence detection systems. The tool set consists of simulated data sets representing convectively induced turbulence, an airborne radar simulation system, hazard tables to convert the radar observable to an aircraft load, documentation, a hazard metric "truth" algorithm, and criteria for scoring the predictions. Analysis indicates that flight test data support spatial buffers for scoring detections. Also, flight data and demonstrations with the tool set suggest the need for a magnitude buffer.
Working Memory in L2 Reading: Does Capacity Predict Performance?
ERIC Educational Resources Information Center
Harrington, Michael; Sawyer, Mark
A study was conducted at the International University of Japan to see if second language (L2) working memory capacity correlates with L2 reading ability in advanced English-as-a-Second-Language (ESL) learners. The study consisted of a set of memory tests (Simple Digit, Simple Word, and Complex Span Test) and a set of measures of reading skills given to…
The influence of storage duration on the setting time of type 1 alginate impression material
NASA Astrophysics Data System (ADS)
Rahmadina, A.; Triaminingsih, S.; Irawan, B.
2017-08-01
Alginate is one of the most commonly used dental impression materials; however, its setting time is subject to change depending on storage conditions and duration. This creates problems because consumer carelessness can affect alginate shelf life and quality. In the present study, the setting times of two groups of type I alginate with different expiry dates were tested. The first group consisted of 11 alginate specimens that had not yet passed the expiry date, and the second group consisted of alginates that had passed the expiry date. The alginate powder was mixed with distilled water, poured into a metal ring, and tested with a polished rod of poly-methyl methacrylate. Statistical analysis showed a significant difference (p<0.05) between the setting times of the alginate that had not passed the expiry date (157 ± 3 seconds) and the alginate that had passed the expiry date (144 ± 2 seconds). These findings indicate that storage duration can affect alginate setting time.
30 CFR 33.34 - Drilling test.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Drilling test. 33.34 Section 33.34 Mineral... MINING PRODUCTS DUST COLLECTORS FOR USE IN CONNECTION WITH ROCK DRILLING IN COAL MINES Test Requirements § 33.34 Drilling test. (a) A drilling test shall consist of drilling a set of 10 test holes, without...
30 CFR 33.34 - Drilling test.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Drilling test. 33.34 Section 33.34 Mineral... MINING PRODUCTS DUST COLLECTORS FOR USE IN CONNECTION WITH ROCK DRILLING IN COAL MINES Test Requirements § 33.34 Drilling test. (a) A drilling test shall consist of drilling a set of 10 test holes, without...
30 CFR 33.34 - Drilling test.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Drilling test. 33.34 Section 33.34 Mineral... MINING PRODUCTS DUST COLLECTORS FOR USE IN CONNECTION WITH ROCK DRILLING IN COAL MINES Test Requirements § 33.34 Drilling test. (a) A drilling test shall consist of drilling a set of 10 test holes, without...
NASA Astrophysics Data System (ADS)
Herrington, A. R.; Reed, K. A.
2018-02-01
A set of idealized experiments is developed using the Community Atmosphere Model (CAM) to understand the vertical velocity response to reductions in forcing scale that is known to occur when the horizontal resolution of the model is increased. The test consists of a set of rising bubble experiments, in which the horizontal radius of the bubble and the model grid spacing are simultaneously reduced. The test is performed with moisture, through incorporating moist physics routines of varying complexity, although convection schemes are not considered. Results confirm that the vertical velocity in CAM is, to first order, proportional to the inverse of the horizontal forcing scale, which is consistent with a scale analysis of the dry equations of motion. In contrast, experiments in which the coupling time step between the moist physics routines and the dynamical core (i.e., the "physics" time step) is relaxed back to more conventional values result in severely damped vertical motion at high resolution, degrading the scaling. A set of aqua-planet simulations using different physics time steps is found to be consistent with the results of the idealized experiments.
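The claimed scaling follows from a standard scale analysis of dry mass continuity; the argument below is reconstructed for reference, not quoted from the paper.

```latex
% From dry continuity, \partial_x u + \partial_z w \approx 0, the velocity
% and length scales satisfy
\[
\frac{U}{L} \sim \frac{W}{H}
\quad\Longrightarrow\quad
W \sim \frac{U H}{L} \propto \frac{1}{L},
\]
% so halving the horizontal forcing scale $L$ (with horizontal velocity
% scale $U$ and depth scale $H$ fixed) roughly doubles the vertical
% velocity scale $W$.
```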
Vibrational multiconfiguration self-consistent field theory: implementation and test calculations.
Heislbetz, Sandra; Rauhut, Guntram
2010-03-28
A state-specific vibrational multiconfiguration self-consistent field (VMCSCF) approach based on a multimode expansion of the potential energy surface is presented for the accurate calculation of anharmonic vibrational spectra. As a special case of this general approach vibrational complete active space self-consistent field calculations will be discussed. The latter method shows better convergence than the general VMCSCF approach and must be considered the preferred choice within the multiconfigurational framework. Benchmark calculations are provided for a small set of test molecules.
Handbook for Driving Knowledge Testing.
ERIC Educational Resources Information Center
Pollock, William T.; McDole, Thomas L.
Materials intended for driving knowledge test development for use by operational licensing and education agencies are presented. A pool of 1,313 multiple choice test items is included, consisting of sets of specially developed and tested items covering principles of safe driving, legal regulations, and traffic control device knowledge pertinent to…
A test of Hořava gravity: the dark energy
NASA Astrophysics Data System (ADS)
Park, Mu-In
2010-01-01
Recently Hořava proposed a renormalizable gravity theory with higher spatial derivatives in four dimensions which reduces to Einstein gravity with a non-vanishing cosmological constant in the IR but with improved UV behaviors. Here, I consider a non-trivial test of the new gravity theory in the FRW universe by considering an IR modification which "softly" breaks the detailed balance condition in the original Hořava model. I separate the dark energy parts from the usual Einstein gravity parts in the Friedmann equations and obtain a formula for the equation-of-state parameter. The IR-modified Hořava gravity seems to be consistent with current observational data, but we need more refined data sets to see whether the theory is really consistent with our universe. From the consistency of our theory, I obtain some constraints on the allowed values of w0 and wa in the Chevallier-Polarski-Linder parametrization, and this may be tested in the near future by sharpening the data sets.
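For reference, the Chevallier-Polarski-Linder (CPL) parametrization mentioned in the abstract has the standard form:

```latex
% Dark-energy equation of state as a function of the scale factor $a$:
\[
w(a) = w_0 + w_a\,(1 - a),
\]
% so $w_0$ is the present-day value ($a = 1$) and $w_0 + w_a$ the
% early-time limit ($a \to 0$); constraints on $(w_0, w_a)$ are what the
% abstract proposes as a near-future test of the Ho\v{r}ava dark-energy sector.
```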
NASA Astrophysics Data System (ADS)
Milroy, Daniel J.; Baker, Allison H.; Hammerling, Dorit M.; Jessup, Elizabeth R.
2018-02-01
The Community Earth System Model Ensemble Consistency Test (CESM-ECT) suite was developed as an alternative to requiring bitwise identical output for quality assurance. This objective test provides a statistical measurement of consistency between an accepted ensemble created by small initial temperature perturbations and a test set of CESM simulations. In this work, we extend the CESM-ECT suite with an inexpensive and robust test for ensemble consistency that is applied to Community Atmospheric Model (CAM) output after only nine model time steps. We demonstrate that adequate ensemble variability is achieved with instantaneous variable values at the ninth step, despite rapid perturbation growth and heterogeneous variable spread. We refer to this new test as the Ultra-Fast CAM Ensemble Consistency Test (UF-CAM-ECT) and demonstrate its effectiveness in practice, including its ability to detect small-scale events and its applicability to the Community Land Model (CLM). The new ultra-fast test facilitates CESM development, porting, and optimization efforts, particularly when used to complement information from the original CESM-ECT suite of tools.
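As a schematic illustration only (the actual CESM-ECT tests are statistical and PCA-based, not a per-variable check), the ensemble-consistency idea can be sketched as comparing a test run's variables at the ninth time step against the accepted ensemble's spread.

```python
import numpy as np

def consistent_with_ensemble(test_values, ensemble, n_sigma=3.0):
    """test_values: dict var -> value; ensemble: dict var -> 1-D array."""
    failures = [v for v, val in test_values.items()
                if abs(val - ensemble[v].mean()) > n_sigma * ensemble[v].std()]
    return failures  # empty list -> consistent

rng = np.random.default_rng(2)  # hypothetical accepted ensemble at step 9
ens = {"T": rng.normal(250.0, 0.01, 150), "PS": rng.normal(1.0e5, 5.0, 150)}
print(consistent_with_ensemble({"T": 250.005, "PS": 1.0e5 + 30.0}, ens))
```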
40 CFR 1066.410 - Dynamometer test procedure.
Code of Federal Regulations, 2014 CFR
2014-07-01
... drive mode. (For purposes of this paragraph (g), the term four-wheel drive includes other multiple drive... Dynamometer test procedure. (a) Dynamometer testing may consist of multiple drive cycles with both cold-start...-setting part identifies the driving schedules and the associated sample intervals, soak periods, engine...
Seal material development test program
NASA Technical Reports Server (NTRS)
1971-01-01
A program designed to characterize an experimental fluoroelastomer material, designated AF-E-124D, is examined. Tests conducted include liquid nitrogen load compression tests, flexure tests and valve seal tests, ambient and elevated temperature compression set tests, and cleaning and flushing fluid exposure tests. The results of these tests indicate that AF-E-124D is a good choice for a cryogenic seal, since it exhibits good low-temperature sealing characteristics and resistance to permanent set. The material's status as an experimental fluoroelastomer is stressed, and further activity is recommended. This activity includes definition and control of critical processing to ensure consistent material properties. Design, fabrication, and testing of this and other materials in valve and static seal applications are recommended.
Performance and Self-Consistency of the Generalized Dielectric Dependent Hybrid Functional
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brawand, Nicholas P.; Govoni, Marco; Vörös, Márton
Here, we analyze the performance of the recently proposed screened exchange constant functional (SX) on the GW100 test set, and we discuss results obtained at different levels of self-consistency. The SX functional is a generalization of dielectric dependent hybrid functionals to finite systems; it is nonempirical and depends on the average screening of the exchange interaction. We compare results for ionization potentials obtained with SX to those of CCSD(T) calculations and experiments, and we find excellent agreement, on par with recent state of the art methods based on many body perturbation theory. Applying SX perturbatively to correct PBE eigenvalues yields improved results in most cases, except for ionic molecules, for which wave function self-consistency is instead crucial. Calculations where wave functions and the screened exchange constant (αSX) are determined self-consistently, and those where αSX is fixed to the value determined within PBE, yield results of comparable accuracy. Perturbative G0W0 corrections of eigenvalues obtained with self-consistent αSX are small on average, for all molecules in the GW100 test set.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-05
..., that are designed to ensure that its stress testing processes are effective in meeting the requirements... specific methodological practices. Consistent with this approach, this guidance sets general supervisory... use any specific methodological practices for their stress tests. Companies may use various practices...
Realistic metrics and methods for testing household biomass cookstoves are required to develop standards needed by international policy makers, donors, and investors. Application of consistent test practices allows emissions and energy efficiency performance to be benchmarked and...
Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set
NASA Astrophysics Data System (ADS)
Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.
2017-05-01
A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
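For reference, the two-dimensional nonlinear shallow water equations solved by TUNA-RP are, in one standard form (friction and source terms omitted; the model's exact formulation may differ):

```latex
\begin{align}
\partial_t \eta + \partial_x\!\left[(h+\eta)\,u\right]
              + \partial_y\!\left[(h+\eta)\,v\right] &= 0,\\
\partial_t u + u\,\partial_x u + v\,\partial_y u + g\,\partial_x \eta &= 0,\\
\partial_t v + u\,\partial_x v + v\,\partial_y v + g\,\partial_y \eta &= 0,
\end{align}
% where $\eta$ is the free-surface elevation, $h$ the still-water depth,
% $(u,v)$ the depth-averaged velocities, and $g$ gravity; the wet-dry
% moving-boundary algorithm handles the shoreline during inundation.
```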
Rácz, A; Bajusz, D; Héberger, K
2015-01-01
Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
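The sum of ranking differences (SRD) statistic used for model comparison can be sketched in a few lines; the ranking transform is standard, and the data below are illustrative, not from the study.

```python
import numpy as np

def srd(values, reference):
    """Sum of absolute rank differences between values and a reference.

    Smaller SRD = closer to the reference ranking (e.g., the row-wise
    average of all models, a common SRD reference choice).
    """
    rank = lambda v: np.argsort(np.argsort(v))
    return int(np.abs(rank(values) - rank(reference)).sum())

predictions = np.array([0.2, 0.9, 0.7, 0.4, 0.1])     # one model's outputs
reference = np.array([0.25, 0.80, 0.50, 0.65, 0.05])  # hypothetical reference
print(srd(predictions, reference))  # 2
```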
40 CFR 1065.415 - Durability demonstration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... than in-use operation, subject to any pre-approval requirements established in the applicable standard.... Perform emission tests following the provisions of the standard setting part and this part, as applicable. Perform emission tests to determine deterioration factors consistent with good engineering judgment...
40 CFR 1065.415 - Durability demonstration.
Code of Federal Regulations, 2012 CFR
2012-07-01
... than in-use operation, subject to any pre-approval requirements established in the applicable standard.... Perform emission tests following the provisions of the standard setting part and this part, as applicable. Perform emission tests to determine deterioration factors consistent with good engineering judgment...
40 CFR 1065.415 - Durability demonstration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... than in-use operation, subject to any pre-approval requirements established in the applicable standard.... Perform emission tests following the provisions of the standard setting part and this part, as applicable. Perform emission tests to determine deterioration factors consistent with good engineering judgment...
Handwritten word preprocessing for database adaptation
NASA Astrophysics Data System (ADS)
Oprean, Cristina; Likforman-Sulem, Laurence; Mokbel, Chafic
2013-01-01
Handwriting recognition systems are typically trained using publicly available databases, where data have been collected in controlled conditions (image resolution, paper background, noise level,...). Since this is not often the case in real-world scenarios, classification performance can be affected when novel data is presented to the word recognition system. To overcome this problem, we present in this paper a new approach called database adaptation. It consists of processing one set (training or test) in order to adapt it to the other set (test or training, respectively). Specifically, two kinds of preprocessing, namely stroke thickness normalization and pixel intensity normalization are considered. The advantage of such approach is that we can re-use the existing recognition system trained on controlled data. We conduct several experiments with the Rimes 2011 word database and with a real-world database. We adapt either the test set or the training set. Results show that training set adaptation achieves better results than test set adaptation, at the cost of a second training stage on the adapted data. Accuracy of data set adaptation is increased by 2% to 3% in absolute value over no adaptation.
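A minimal sketch of the pixel-intensity normalization step, assuming grayscale word images stored as NumPy arrays (stroke-thickness normalization, the other preprocessing, is not shown; function names are illustrative):

```python
import numpy as np

def match_intensity(src, target_mean, target_std):
    """Shift/scale src intensities toward the other set's statistics."""
    normalized = (src - src.mean()) / (src.std() + 1e-8)
    return np.clip(normalized * target_std + target_mean, 0.0, 255.0)

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(32, 128))   # hypothetical word image
adapted = match_intensity(img, target_mean=180.0, target_std=40.0)
print(f"mean {adapted.mean():.1f}, std {adapted.std():.1f}")
```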
Comparison of University Students' Understanding of Graphs in Different Contexts
ERIC Educational Resources Information Center
Planinic, Maja; Ivanjek, Lana; Susac, Ana; Milin-Sipus, Zeljka
2013-01-01
This study investigates university students' understanding of graphs in three different domains: mathematics, physics (kinematics), and contexts other than physics. Eight sets of parallel mathematics, physics, and other context questions about graphs were developed. A test consisting of these eight sets of questions (24 questions in all) was…
Comparison of eigenvectors for coupled seismo-electromagnetic layered-Earth modelling
NASA Astrophysics Data System (ADS)
Grobbe, N.; Slob, E. C.; Thorbecke, J. W.
2016-07-01
We study the accuracy and numerical stability of three eigenvector sets for modelling the coupled poroelastic and electromagnetic layered-Earth response. We use a known eigenvector set, its flux-normalized version and a newly derived flux-normalized set. The new set is chosen such that the system is properly uncoupled when the coupling between the poroelastic and electromagnetic fields vanishes. We carry out two different numerical stability tests: the first test focuses on the internal system, eigenvector and eigenvalue consistency; the second test investigates the stability and preciseness of the flux-normalized systems by looking at identity relations. We find that the known set shows the largest deviation for both tests, whereas the new set performs best. In two additional numerical modelling experiments, these numerical inaccuracies are shown to generate numerical noise levels comparable to small signals, such as signals coming from the important interface conversion responses, especially when the coupling coefficient is small. When coupling vanishes completely, the known set does not produce proper results. The new set produces numerically stable and accurate results in all situations. We therefore strongly recommend to use this newly derived set for future layered-Earth seismo-electromagnetic modelling experiments.
Yurteri-Kaplan, Ladin A; Andriani, Leslie; Kumar, Anagha; Saunders, Pamela A; Mete, Mihriye M; Sokol, Andrew I
To develop a valid and reliable survey to measure surgical team members' perceptions regarding their institution's requirements for successful minimally invasive surgery (MIS). Questionnaire development and validation study (Canadian Task Force classification II-2). Three hospital types: rural, urban/academic, and community/academic. Minimally invasive staff (team members). Development and validation of a minimally invasive surgery survey (MISS). Using the Safety Attitudes questionnaire as a guide, we developed questions assessing study participants' attitudes regarding the requirements for successful MIS. The questions were closed-ended, with responses based on a 5-point Likert scale. The large pool of questions was then given to 4 focus groups made up of 3 to 6 individuals each. Each focus group consisted of individuals from a specific profession (e.g., surgeons, anesthesiologists, nurses, and surgical technicians). Questions were revised based on focus group recommendations, resulting in a final 52-question set. The question set was then distributed to MIS team members. Individuals were included if they had participated in >10 MIS cases and worked in the MIS setting in the past 3 months. Participants in the trial population were asked to repeat the questionnaire 4 weeks later to evaluate internal consistency. Participants' demographics, including age, gender, specialty, profession, and years of experience, were captured in the questionnaire. Factor analysis with varimax rotation was performed to determine domains (questions evaluating similar themes). For internal consistency and reliability, domains were tested using interitem correlations and Cronbach's α. Cronbach's α > .6 was considered internally consistent. A Kendall's correlation coefficient τ closer to 1 with p < .05 was considered significant for test-retest reliability. Two hundred fifty participants answered the initial question set. Of those, 53 were eliminated because they did not meet inclusion criteria or failed to answer all questions, leaving 197 participants. Most participants were women (68% vs 32%), and 42% were between the ages 30 and 39 years. Factor analysis identified 6 domains: collaboration, error reporting, job proficiency/efficiency, problem-solving, job satisfaction, and situational awareness. Interitem correlations testing for redundancy for each domain ranged from .2 to .7, suggesting similarly themed questions while avoiding redundancy. Cronbach's α, testing internal consistency, was .87. Sixty-two participants from the original cohort repeated the question set at 4 weeks. Forty-three were analyzed for test-retest reliability after excluding those who did not meet inclusion criteria. The final questions showed high test-retest reliability (τ = .3-.7, p < .05). The final questionnaire was made up of 29 questions from the original 52 question set. The MISS is a reliable and valid tool that can be used to measure how surgical team members conceptualize the requirements for successful MIS. The MISS revealed that participants identified 6 important domains of a successful work environment: collaboration, error reporting, job proficiency/efficiency, problem-solving, job satisfaction, and situational awareness. The questionnaire can be used to understand and align various surgical team members' goals and expectations and may help improve quality of care in the MIS setting.
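The internal-consistency statistic reported above (Cronbach's α = .87) is computed from an items-by-respondents score matrix; a minimal sketch with hypothetical Likert responses:

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

responses = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 3]]
print(round(cronbach_alpha(responses), 2))  # > .6 was the study's criterion
```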
Asphalt and Wood Shingling. Roofing Workbook and Tests.
ERIC Educational Resources Information Center
Brown, Arthur
This combination workbook and set of tests contains materials on asphalt and wood shingling that have been designed to be used by those studying to enter the roofing and waterproofing trade. It consists of seven instructional units and seven accompanying objective tests. Covered in the individual units are the following topics: shingling…
ATS-PD: An Adaptive Testing System for Psychological Disorders
ERIC Educational Resources Information Center
Donadello, Ivan; Spoto, Andrea; Sambo, Francesco; Badaloni, Silvana; Granziol, Umberto; Vidotto, Giulio
2017-01-01
The clinical assessment of mental disorders can be a time-consuming and error-prone procedure, consisting of a sequence of diagnostic hypothesis formulation and testing aimed at restricting the set of plausible diagnoses for the patient. In this article, we propose a novel computerized system for the adaptive testing of psychological disorders.…
Diagnostic Testing Package DX v 2.0 Technical Specification. Methodology Project.
ERIC Educational Resources Information Center
McArthur, David
This paper contains the technical specifications, schematic diagrams, and program printout for a computer software package for the development and administration of diagnostic tests. The second version of the Diagnostic Testing Package DX consists of a PASCAL-based set of modules located in two main programs: (1) EDITTEST creates, modifies, and…
NASA Technical Reports Server (NTRS)
Kessel, Kurt R.
2015-01-01
Test specimen configuration was provided by Parker Chomerics. The EMI gasket used in this project was Cho-Seal 6503E. Black oxide alloy steel socket head bolts were used to hold the plates together. Non-conductive spacers were used to control the amount of compression on the gaskets. The following test fixture specifications were provided by Parker Chomerics. The CHO-TP09 test plate sets selected for this project consist of two aluminum plates manufactured to the specifications detailed in CHO-TP09. The first plate, referred to as the test frame, is illustrated in Figure 1. The test frame is designed with a cutout in the center and two alternating bolt patterns. One pattern is used to bolt the test frame to the corresponding test cover plate (Figure 2), forming a test plate set. The second pattern accepts the hardware used to mount the fully assembled test plate set to the main adapter plate (Figure 3).
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program DEKFIS (discrete extended Kalman filter/smoother), formulated for aircraft and helicopter state estimation and data consistency, is described. DEKFIS is set up to pre-process raw test data by removing biases, correcting scale factor errors and providing consistency with the aircraft inertial kinematic equations. The program implements an extended Kalman filter/smoother using the Friedland-Duffy formulation.
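DEKFIS's core is an extended Kalman filter; the standard predict/update equations are reproduced below for reference (the Friedland-Duffy formulation additionally partitions the bias states, a detail not shown here):

```latex
\begin{align}
\hat{x}_{k|k-1} &= f(\hat{x}_{k-1|k-1}, u_k), &
P_{k|k-1} &= F_k P_{k-1|k-1} F_k^{\mathsf T} + Q_k,\\
K_k &= P_{k|k-1} H_k^{\mathsf T}
       \left(H_k P_{k|k-1} H_k^{\mathsf T} + R_k\right)^{-1}, &
\hat{x}_{k|k} &= \hat{x}_{k|k-1}
       + K_k\big(z_k - h(\hat{x}_{k|k-1})\big),
\end{align}
% with $F_k$, $H_k$ the Jacobians of the inertial kinematic model $f$ and
% measurement model $h$; a backward smoothing pass then refines the
% bias and scale-factor estimates over the whole record.
```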
Development of a grinding-specific performance test set-up.
Olesen, C G; Larsen, B H; Andresen, E L; de Zee, M
2015-01-01
The aim of this study was to develop a performance test set-up for America's Cup grinders. The test set-up had to mimic the on-boat grinding activity and be capable of collecting data for analysis and evaluation of grinding performance. This study included a literature-based analysis of grinding demands and a test protocol developed to accommodate the necessary physiological loads. This study resulted in a test protocol consisting of 10 intervals of 20 revolutions each interspersed with active resting periods of 50 s. The 20 revolutions are a combination of both forward and backward grinding and an exponentially rising resistance. A custom-made grinding ergometer was developed with computer-controlled resistance and capable of collecting data during the test. The data collected can be used to find measures of grinding performance such as peak power, time to complete and the decline in repeated grinding performance.
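A minimal sketch of the load schedule implied by the protocol, assuming the "exponentially rising resistance" grows with revolution number within each 20-revolution interval; the base load and growth rate are hypothetical.

```python
BASE_LOAD_NM = 20.0   # resistance at the first revolution (assumed)
GROWTH_RATE = 0.08    # exponential growth per revolution (assumed)

def interval_loads(revolutions=20):
    return [BASE_LOAD_NM * (1 + GROWTH_RATE) ** n for n in range(revolutions)]

# 10 intervals of 20 revolutions, separated by 50 s of active rest:
protocol = [{"interval": i + 1, "loads_nm": interval_loads(), "rest_s": 50}
            for i in range(10)]
print(f"Peak load per interval: {protocol[0]['loads_nm'][-1]:.1f} Nm")
```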
Evaluation of setting time and flow properties of self-synthesize alginate impressions
NASA Astrophysics Data System (ADS)
Halim, Calista; Cahyanto, Arief; Sriwidodo, Harsatiningsih, Zulia
2018-02-01
Alginate is an elastic hydrocolloid dental impression material used to obtain a negative reproduction of the oral mucosa, such as for recording soft-tissue and occlusal relationships. The aim of the present study was to synthesize alginate and to determine its setting time and flow properties. There were five groups of alginate, comprising fifty samples of self-synthesized alginate and a commercial alginate impression product. The fifty samples were divided between the two tests, twenty-five each for the setting time and flow tests. Setting time was recorded in seconds, and flow in mm2. The fastest setting time was in group three (148.8 s) and the slowest in group four. The highest flow was in group three (69.70 mm2) and the lowest in group one (58.34 mm2). Results were analyzed by one-way ANOVA (α = 0.05), which showed a statistically significant difference in setting time, but not in flow properties, between the self-synthesized alginate and the commercial product. In conclusion, alginate impression material was successfully self-synthesized, and variations in composition influence setting time and flow properties. Group three's setting time most closely resembles that of the control group, and group four's flow most closely resembles that of the control group.
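The analysis named above is a one-way ANOVA across the five groups; a minimal SciPy sketch (all values are hypothetical placeholders, chosen only so group three's mean matches the reported 148.8 s):

```python
from scipy import stats

group1 = [160, 158, 161, 159, 162]
group2 = [152, 154, 151, 153, 150]
group3 = [149, 148, 150, 147, 150]   # mean 148.8 s, as reported
group4 = [156, 157, 155, 158, 156]
group5 = [145, 146, 144, 147, 145]
f_stat, p_value = stats.f_oneway(group1, group2, group3, group4, group5)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```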
Consistency of response and image recognition, pulmonary nodules
Liu, M A Q; Galvan, E; Bassett, R; Murphy, W A; Matamoros, A; Marom, E M
2014-01-01
Objective: To investigate the effect of recognition of a previously encountered radiograph on consistency of response in localizing pulmonary nodules. Methods: 13 radiologists interpreted 40 radiographs each to locate pulmonary nodules. A few days later, they again interpreted 40 radiographs. Half of the images in the second set were new. We asked the radiologists whether each image had been in the first set. We used Fisher's exact test and Kruskal–Wallis test to evaluate the correlation between recognition of an image and consistency in its interpretation. We evaluated the data using all possible recognition levels (definitely, probably or possibly included vs definitely, probably or possibly not included), by collapsing the recognition levels into two, and by eliminating the "possibly included" and "possibly not included" scores. Results: With all but one of six methods of looking at the data, there was no significant correlation between consistency in interpretation and recognition of the image. When the possibly included and possibly not included scores were eliminated, there was a borderline statistical significance (p = 0.04) with slightly greater consistency in interpretation of recognized than of non-recognized images. Conclusion: We found no convincing evidence that radiologists' recognition of images in an observer performance study affects their interpretation on a second encounter. Advances in knowledge: Conscious recognition of chest radiographs did not result in a greater degree of consistency in the tested interpretation than that in the interpretation of images that were not recognized. PMID:24697724
Checking the Mathematical Consistency of Geometric Figures
ERIC Educational Resources Information Center
Yeo, Joseph
2017-01-01
In many countries, teachers often have to set their own questions for tests and examinations: some of them even set their own questions for assignments for students. These teachers do not usually select questions from textbooks used by the students because the latter would have seen the questions. If the teachers take the questions from other…
This document provides a general set of guidelines that may be consistently applied for collecting, evaluating, and reporting the costs of technologies tested under the ETV Program. Because of the diverse nature of the technologies and industries covered in this program, each ETV...
A non-parametric consistency test of the ΛCDM model with Planck CMB data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aghamousa, Amir; Shafieloo, Arman; Hamann, Jan, E-mail: amir@aghamousa.com, E-mail: jan.hamann@unsw.edu.au, E-mail: shafieloo@kasi.re.kr
Non-parametric reconstruction methods, such as Gaussian process (GP) regression, provide a model-independent way of estimating an underlying function and its uncertainty from noisy data. We demonstrate how GP-reconstruction can be used as a consistency test between a given data set and a specific model by looking for structures in the residuals of the data with respect to the model's best-fit. Applying this formalism to the Planck temperature and polarisation power spectrum measurements, we test their global consistency with the predictions of the base ΛCDM model. Our results do not show any serious inconsistencies, lending further support to the interpretation of the base ΛCDM model as cosmology's gold standard.
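The idea can be sketched with scikit-learn's GP regressor applied to synthetic residuals; this illustrates the approach, not the authors' pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 60)[:, None]          # e.g., rescaled multipole range
residuals = 0.05 * rng.standard_normal(60)  # data minus best-fit model

# Fit a GP to the residuals; reconstructed structure significantly away
# from zero would signal an inconsistency with the model.
gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(0.05**2))
gp.fit(x, residuals)
mean, std = gp.predict(x, return_std=True)
print("structure detected:", bool(np.any(np.abs(mean) > 2 * std)))
```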
Wing configuration on Wind Tunnel Testing of an Unmanned Aircraft Vehicle
NASA Astrophysics Data System (ADS)
Daryanto, Yanto; Purwono, Joko; Subagyo
2018-04-01
The control surfaces of an Unmanned Aircraft Vehicle (UAV) consist of flaps, ailerons, spoilers, rudder, and elevator. Every control surface has its own function, and particular configurations in the flight mission often depend on the wing configuration. In the wing configuration studied here, the flap deflection is set to 20° for takeoff and 40° for landing. The aim of this research is to determine the ultimate CLmax for the takeoff flap deflection setting. Wind tunnel testing shows that the 20° flap deflection gives the optimum CLmax with a moderate drag coefficient. The wind tunnel results, presented as graphic plots, show good performance as well as good stability of the UAV.
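For reference, the lift coefficient extracted from wind tunnel force measurements has the standard definition (the paper's data-reduction procedure is not given here):

```latex
\[
C_L = \frac{L}{\tfrac{1}{2}\,\rho\, V^{2} S},
\]
% where $L$ is the measured lift force, $\rho$ the air density, $V$ the
% free-stream speed, and $S$ the wing reference area; $C_{L,\max}$ is the
% peak of $C_L$ over the tested angles of attack for each flap setting.
```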
Giguère, Charles-Édouard; Potvin, Stéphane
2017-01-01
Substance use disorders (SUDs) are significant risk factors for psychiatric relapses and hospitalizations in psychiatric populations. Unfortunately, no instrument has been validated for the screening of SUDs in psychiatric emergency settings. The Drug Abuse Screening Test (DAST) is widely used in the addiction field, but it has not been validated in that particular context. The objective of the current study is to examine the psychometric properties of the DAST administered to psychiatric populations evaluated in an emergency setting. The DAST was administered to 912 psychiatric patients in an emergency setting, of which 119 had a SUD (excluding those misusing alcohol only). The internal consistency, the construct validity, the test-retest reliability and the predictive validity (using SUD diagnoses) of the DAST were examined. The convergent validity was also examined, using a validated impulsivity scale. Regarding the internal consistency of the DAST, the Cronbach's alpha was 0.88. The confirmatory factor analysis showed that the DAST has one underlying factor. The test-retest reliability analysis produced a correlation coefficient of 0.86. ROC curve analyses produced an area under the curve of 0.799. Interestingly, a sex effect was observed. Finally, the convergent validity analysis showed that the DAST total score is specifically correlated with the sensation seeking dimension of impulsivity. The results of this validation study show that the DAST preserves its excellent psychometric properties in psychiatric populations evaluated in an emergency setting. These results should encourage the use of the DAST in this unstable clinical situation.
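The predictive-validity figure (area under the ROC curve of 0.799) summarizes how well DAST totals separate patients with and without a SUD diagnosis; a minimal sketch with hypothetical scores and labels (assuming the 10-item DAST's 0-10 scoring):

```python
from sklearn.metrics import roc_auc_score

dast_scores = [9, 2, 7, 1, 0, 8, 3, 6, 1, 5]   # DAST totals (hypothetical)
has_sud =     [1, 0, 1, 0, 0, 1, 0, 1, 0, 1]   # 1 = SUD diagnosis
print(f"AUC = {roc_auc_score(has_sud, dast_scores):.3f}")
```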
ERIC Educational Resources Information Center
Papenberg, Martin; Musch, Jochen
2017-01-01
In multiple-choice tests, the quality of distractors may be more important than their number. We therefore examined the joint influence of distractor quality and quantity on test functioning by providing a sample of 5,793 participants with five parallel test sets consisting of items that differed in the number and quality of distractors.…
A Novel Technique for Detecting Antibiotic-Resistant Typhoid from Rapid Diagnostic Tests
Nic Fhogartaigh, Caoimhe; Dance, David A. B.; Davong, Viengmon; Tann, Pisey; Phetsouvanh, Rattanaphone; Turner, Paul; Newton, Paul N.
2015-01-01
Fluoroquinolone-resistant typhoid is increasing. An antigen-detecting rapid diagnostic test (RDT) can rapidly diagnose typhoid from blood cultures. A simple, inexpensive molecular technique performed with DNA from positive RDTs accurately identified gyrA mutations consistent with phenotypic susceptibility testing results. Field diagnosis combined with centralized molecular resistance testing could improve typhoid management and surveillance in low-resource settings. PMID:25762768
Larson, Bruce; Schnippel, Kathryn; Ndibongo, Buyiswa; Long, Lawrence; Fox, Matthew P; Rosen, Sydney
2012-01-01
Integrating POC CD4 testing technologies into HIV counseling and testing (HCT) programs may improve post-HIV testing linkage to care and treatment. As evaluations of these technologies in program settings continue, estimates of the costs of POC CD4 tests to the service provider will be needed and estimates have begun to be reported. Without a consistent and transparent methodology, estimates of the cost per CD4 test using POC technologies are likely to be difficult to compare and may lead to erroneous conclusions about costs and cost-effectiveness. This paper provides a step-by-step approach for estimating the cost per CD4 test from a provider's perspective. As an example, the approach is applied to one specific POC technology, the Pima Analyzer. The costing approach is illustrated with data from a mobile HCT program in Gauteng Province of South Africa. For this program, the cost per test in 2010 was estimated at $23.76 (material costs = $8.70; labor cost per test = $7.33; and equipment, insurance, and daily quality control = $7.72). Labor and equipment costs can vary widely depending on how the program operates and the number of CD4 tests completed over time. Additional costs not included in the above analysis, for on-going training, supervision, and quality control, are likely to increase further the cost per test. The main contribution of this paper is to outline a methodology for estimating the costs of incorporating POC CD4 testing technologies into an HCT program. The details of the program setting matter significantly for the cost estimate, so that such details should be clearly documented to improve the consistency, transparency, and comparability of cost estimates.
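The abstract's per-test cost decomposition can be reproduced as a simple calculation (2010 USD; component figures from the abstract, which round to the reported $23.76):

```python
# Provider-perspective cost components for one POC CD4 test (Pima Analyzer,
# mobile HCT program, Gauteng Province), as reported in the abstract.
COST_COMPONENTS = {
    "materials": 8.70,
    "labor": 7.33,
    "equipment_insurance_daily_QC": 7.72,
}
cost_per_test = sum(COST_COMPONENTS.values())
print(f"Cost per POC CD4 test: ${cost_per_test:.2f}")
# Note: training, supervision, and ongoing quality control are excluded
# and would increase this figure, as the abstract cautions.
```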
Methods for converging correlation energies within the dielectric matrix formalism
NASA Astrophysics Data System (ADS)
Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario
2018-03-01
Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. The RPA-type methods however have a significantly higher computational cost, and, similarly to correlated quantum-chemical methods, are characterized by a slow basis set convergence. In this work we analyzed two different schemes to converge the correlation energy, one based on a more traditional complete basis set extrapolation and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, for six points on the potential-energy surface of the methane-formaldehyde complex, and for reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.
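For orientation, one widely used complete basis set extrapolation model (not necessarily the exact scheme adopted in this work) assumes the correlation energy converges as the inverse cube of the cardinal number X of the basis set, which yields a two-point formula:

    E_{\mathrm{c}}(X) = E_{\mathrm{c}}^{\mathrm{CBS}} + A\,X^{-3}
    \quad\Longrightarrow\quad
    E_{\mathrm{c}}^{\mathrm{CBS}} = \frac{Y^{3}E_{\mathrm{c}}(Y) - X^{3}E_{\mathrm{c}}(X)}{Y^{3} - Y^{3}\,\frac{X^{3}}{Y^{3}}} = \frac{Y^{3}E_{\mathrm{c}}(Y) - X^{3}E_{\mathrm{c}}(X)}{Y^{3} - X^{3}}, \qquad X < Y.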
Using Biowin, Bayes, and batteries to predict ready biodegradability.
Boethling, Robert S; Lynch, David G; Jaworska, Joanna S; Tunkel, Jay L; Thom, Gary C; Webb, Simon
2004-04-01
Whether or not a given chemical substance is readily biodegradable is an important piece of information in risk screening for both new and existing chemicals. Despite the relatively low cost of Organization for Economic Cooperation and Development tests, data are often unavailable and biodegradability must be estimated. In this paper, we focus on the predictive value of selected Biowin models and model batteries using Bayesian analysis. Posterior probabilities, calculated based on performance with the model training sets using Bayes' theorem, were closely matched by actual performance with an expanded set of 374 premanufacture notice (PMN) substances. Further analysis suggested that a simple battery consisting of Biowin3 (survey ultimate biodegradation model) and Biowin5 (Ministry of International Trade and Industry [MITI] linear model) would have enhanced predictive power in comparison to individual models. Application of the battery to PMN substances showed that performance matched expectation. This approach significantly reduced both false positives for ready biodegradability and the overall misclassification rate. Similar results were obtained for a set of 63 pharmaceuticals using a battery consisting of Biowin3 and Biowin6 (MITI nonlinear model). Biodegradation data for PMNs tested in multiple ready tests or both inherent and ready biodegradation tests yielded additional insights that may be useful in risk screening.
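The Bayesian step in such a battery is a one-line application of Bayes' theorem. The Python sketch below shows the calculation with placeholder sensitivity, specificity, and prior values; the paper's posteriors were derived from the Biowin training sets, not from these numbers.

    # Posterior probability that a chemical is readily biodegradable given a
    # positive model prediction, via Bayes' theorem. Sensitivity, specificity,
    # and prior below are illustrative placeholders, not the paper's values.
    def posterior_positive(prior, sensitivity, specificity):
        """P(readily biodegradable | model predicts 'ready')."""
        p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
        return sensitivity * prior / p_pos

    print(posterior_positive(prior=0.3, sensitivity=0.85, specificity=0.75))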
An Application-Based Discussion of Construct Validity and Internal Consistency Reliability.
ERIC Educational Resources Information Center
Taylor, Dianne L.; Campbell, Kathleen T.
Several techniques for conducting studies of measurement integrity are explained and illustrated using a heuristic data set from a study of teachers' participation in decision making (D. L. Taylor, 1991). The sample consisted of 637 teachers. It is emphasized that validity and reliability are characteristics of data, and do not inure to tests as…
Functional form diagnostics for Cox's proportional hazards model.
León, Larry F; Tsai, Chih-Ling
2004-03-01
We propose a new type of residual and an easily computed functional form test for the Cox proportional hazards model. The proposed test is a modification of the omnibus test for testing the overall fit of a parametric regression model, developed by Stute, González Manteiga, and Presedo Quindimil (1998, Journal of the American Statistical Association 93, 141-149), and is based on what we call censoring consistent residuals. In addition, we develop residual plots that can be used to identify the correct functional forms of covariates. We compare our test with the functional form test of Lin, Wei, and Ying (1993, Biometrika 80, 557-572) in a simulation study. The practical application of the proposed residuals and functional form test is illustrated using both a simulated data set and a real data set.
A new test set for validating predictions of protein-ligand interaction.
Nissink, J Willem M; Murray, Chris; Hartshorn, Mike; Verdonk, Marcel L; Cole, Jason C; Taylor, Robin
2002-12-01
We present a large test set of protein-ligand complexes for the purpose of validating algorithms that rely on the prediction of protein-ligand interactions. The set consists of 305 complexes with protonation states assigned by manual inspection. The following checks have been carried out to identify unsuitable entries in this set: (1) assessing the involvement of crystallographically related protein units in ligand binding; (2) identification of bad clashes between protein side chains and ligand; and (3) assessment of structural errors, and/or inconsistency of ligand placement with crystal structure electron density. In addition, the set has been pruned to assure diversity in terms of protein-ligand structures, and subsets are supplied for different protein-structure resolution ranges. A classification of the set by protein type is available. As an illustration, validation results are shown for GOLD and SuperStar. GOLD is a program that performs flexible protein-ligand docking, and SuperStar is used for the prediction of favorable interaction sites in proteins. The new CCDC/Astex test set is freely available to the scientific community (http://www.ccdc.cam.ac.uk). Copyright 2002 Wiley-Liss, Inc.
Predicting Item Difficulty in a Reading Comprehension Test with an Artificial Neural Network.
ERIC Educational Resources Information Center
Perkins, Kyle; And Others
This paper reports the results of using a three-layer backpropagation artificial neural network to predict item difficulty in a reading comprehension test. Two network structures were developed, one with and one without a sigmoid function in the output processing unit. The data set, which consisted of a table of coded test items and corresponding…
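A minimal sketch of a comparable three-layer network, using scikit-learn's MLPRegressor rather than the authors' implementation; the feature encoding, hidden-layer size, and training settings are our illustrative assumptions.

    # Three-layer feedforward network (input, hidden, output) for predicting
    # item difficulty from coded item features. Data below are placeholders.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    X = np.random.rand(100, 8)   # placeholder: coded test-item features
    y = np.random.rand(100)      # placeholder: item difficulty values

    # sklearn's regressor uses a linear output unit, corresponding to the
    # "no output sigmoid" variant described above.
    net = MLPRegressor(hidden_layer_sizes=(5,), activation='logistic',
                       solver='sgd', learning_rate_init=0.1, max_iter=5000)
    net.fit(X, y)
    print(net.predict(X[:3]))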
ERIC Educational Resources Information Center
Mowsesian, Richard; Hays, William L.
The Graduate Record Examination (GRE) Aptitude Test has been in use since 1938. In 1975 the GRE Aptitude Test was broadened to include an experimental set of items designed to tap a respondent's recognition of logical relationships and consistency of interrelated statements, and to make inferences from abstract relationships. To test the…
Research: Testing of a Novel Portable Body Temperature Conditioner Using a Thermal Manikin.
Heller, Daniel; Heller, Alex; Moujaes, Samir; Williams, Shelley J; Hoffmann, Ryan; Sarkisian, Paul; Khalili, Kaveh; Rockenfeller, Uwe; Browder, Timothy D; Kuhls, Deborah A; Fildes, John J
2016-01-01
A battery-operated active cooling/heating device was developed to maintain thermoregulation of trauma victims in austere environments while awaiting evacuation to a hospital for further treatment. The use of a thermal manikin was adopted for this study in order to simulate load testing and evaluate the performance of this novel portable active cooling/heating device for both continuous (external power source) and battery power. The performance of the portable body temperature conditioner (PBTC) was evaluated through cooling/heating fraction tests to analyze the heat transfer between a thermal manikin and circulating water blanket to show consistent performance while operating under battery power. For the cooling/heating fraction tests, the ambient temperature was set to 15°C ± 1°C (heating) and 30°C ± 1°C (cooling). The PBTC water temperature was set to 37°C for the heating mode tests and 15°C for the cooling mode tests. The results showed consistent performance of the PBTC in terms of cooling/heating capacity while operating under both continuous and battery power. The PBTC functioned as intended and shows promise as a portable warming/cooling device for operation in the field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poon, Justin; Sabondjian, Eric; Sankreacha, Raxa
Purpose: A robust Quality Assurance (QA) program is essential for prostate brachytherapy ultrasound systems due to the importance of imaging accuracy during treatment and planning. Task Group 128 of the American Association of Physicists in Medicine has recommended a set of QA tests covering grayscale visibility, depth of penetration, axial and lateral resolution, distance measurement, area measurement, volume measurement, and template/electronic grid alignment. Making manual measurements on the ultrasound system can be slow and inaccurate, so a MATLAB program was developed for automation of the described tests. Methods: Test images were acquired using a BK Medical Flex Focus 400 ultrasound scanner and 8848 transducer with the CIRS Brachytherapy QA Phantom – Model 045A. For each test, the program automatically segments the inputted image(s), makes the appropriate measurements, and indicates if the test passed or failed. The program was tested by analyzing two sets of images, where the measurements from the first set were used as baseline values. Results: The program successfully analyzed the images for each test and determined if any action limits were exceeded. All tests passed – the measurements made by the program were consistent and met the requirements outlined by Task Group 128. Conclusions: The MATLAB program we have developed can be used for automated QA of an ultrasound system for prostate brachytherapy. The GUI provides a user-friendly way to analyze images without the need for any manual measurement, potentially removing intra- and inter-user variability for more consistent results.
Automated video-based detection of nocturnal convulsive seizures in a residential care setting.
Geertsema, Evelien E; Thijs, Roland D; Gutter, Therese; Vledder, Ben; Arends, Johan B; Leijten, Frans S; Visser, Gerhard H; Kalitzin, Stiliyan N
2018-06-01
People with epilepsy need assistance and are at risk of sudden death when having convulsive seizures (CS). Automated real-time seizure detection systems can help alert caregivers, but wearable sensors are not always tolerated. We determined algorithm settings and investigated detection performance of a video algorithm to detect CS in a residential care setting. The algorithm calculates power in the 2-6 Hz range relative to the 0.5-12.5 Hz range in group velocity signals derived from video-sequence optical flow. A detection threshold was found using a training set consisting of video-electroencephalography (EEG) recordings of 72 CS. A test set consisting of 24 full nights of 12 new subjects in residential care and additional recordings of 50 CS selected randomly was used to estimate performance. All data were analyzed retrospectively. The start and end of CS (generalized clonic and tonic-clonic seizures) and other seizures considered desirable to detect (long generalized tonic, hyperkinetic, and other major seizures) were annotated. The detection threshold was set to the value that obtained 97% sensitivity in the training set. Sensitivity, latency, and false detection rate (FDR) per night were calculated in the test set. A seizure was detected when the algorithm output exceeded the threshold continuously for 2 seconds. With the detection threshold determined in the training set, all CS were detected in the test set (100% sensitivity). Latency was ≤10 seconds in 78% of detections. Three of five hyperkinetic and six of nine other major seizures were detected. Median FDR was 0.78 per night, and no false detections occurred in 9 of 24 nights. Our algorithm could improve safety unobtrusively by automated real-time detection of CS in video registrations, with an acceptable latency and FDR. The algorithm can also detect some other motor seizures requiring assistance. © 2018 The Authors. Epilepsia published by Wiley Periodicals, Inc. on behalf of International League Against Epilepsy.
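The two quantitative ingredients of the detector, the relative band power and the 2-second duration rule, can be sketched as follows (Python; the Welch window length and the assumed frame rate are our choices, not the paper's).

    # (1) Power in the 2-6 Hz band relative to the 0.5-12.5 Hz band of a
    # movement signal, and (2) an alarm only when the ratio stays above
    # threshold for 2 s. The helper names and parameters are illustrative.
    import numpy as np
    from scipy.signal import welch

    FS = 25.0  # assumed video frame rate (Hz)

    def band_power(f, pxx, lo, hi):
        m = (f >= lo) & (f <= hi)
        return np.trapz(pxx[m], f[m])

    def relative_clonic_power(signal, fs=FS):
        f, pxx = welch(signal, fs=fs, nperseg=int(4 * fs))
        return band_power(f, pxx, 2.0, 6.0) / band_power(f, pxx, 0.5, 12.5)

    def detect(ratios, threshold, ratios_per_sec=1.0, min_secs=2.0):
        """True once the ratio exceeds threshold continuously for min_secs."""
        need = int(min_secs * ratios_per_sec)
        run = 0
        for r in ratios:
            run = run + 1 if r > threshold else 0
            if run >= need:
                return True
        return False

    print(detect([0.1] * 5 + [0.9] * 3, threshold=0.5))  # True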
DOT National Transportation Integrated Search
2006-08-01
This study consists of continued field evaluations of treatments to four pavements suffering from distress due to alkali-silica reaction (ASR). One set of treatments was evaluated on existing pavements in Delaware, California, and Nevada that already...
Examining Parallelism of Sets of Psychometric Measures Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Patelis, Thanos; Marcoulides, George A.
2011-01-01
A latent variable modeling approach that can be used to examine whether several psychometric tests are parallel is discussed. The method consists of sequentially testing the properties of parallel measures via a corresponding relaxation of parameter constraints in a saturated model or an appropriately constructed latent variable model. The…
Ultrasonic inspection of a glued laminated timber fabricated with defects
Robert Emerson; David Pollock; David McLean; Kenneth Fridley; Robert Ross; Roy Pellerin
2001-01-01
The Federal Highway Administration (FHWA) set up a validation test to compare the effectiveness of various nondestructive inspection techniques for detecting artificial defects in glulam members. The validation test consisted of a glulam beam fabricated with artificial defects known to FHWA personnel but not originally known to the scientists performing the validation...
Development and validation of a Response Bias Scale (RBS) for the MMPI-2.
Gervais, Roger O; Ben-Porath, Yossef S; Wygant, Dustin B; Green, Paul
2007-06-01
This study describes the development of a Minnesota Multiphasic Personality Inventory (MMPI-2) scale designed to detect negative response bias in forensic neuropsychological or disability assessment settings. The Response Bias Scale (RBS) consists of 28 MMPI-2 items that discriminated between persons who passed or failed the Word Memory Test (WMT), Computerized Assessment of Response Bias (CARB), and/or Test of Memory Malingering (TOMM) in a sample of 1,212 non-head-injury disability claimants. Incremental validity of the RBS was evaluated by comparing its ability to detect poor performance on four separate symptom validity tests with that of the F and F(P) scales and the Fake Bad Scale (FBS). The RBS consistently outperformed F, F(P), and FBS. Study results suggest that the RBS may be a useful addition to existing MMPI-2 validity scales and indices in detecting symptom complaints predominantly associated with cognitive response bias and overreporting in forensic neuropsychological and disability assessment settings.
The Utrecht questionnaire (U-CEP) measuring knowledge on clinical epidemiology proved to be valid.
Kortekaas, Marlous F; Bartelink, Marie-Louise E L; de Groot, Esther; Korving, Helen; de Wit, Niek J; Grobbee, Diederick E; Hoes, Arno W
2017-02-01
Knowledge on clinical epidemiology is crucial to practice evidence-based medicine. We describe the development and validation of the Utrecht questionnaire on knowledge on Clinical epidemiology for Evidence-based Practice (U-CEP); an assessment tool to be used in the training of clinicians. The U-CEP was developed in two formats: two sets of 25 questions and a combined set of 50. The validation was performed among postgraduate general practice (GP) trainees, hospital trainees, GP supervisors, and experts. Internal consistency, internal reliability (item-total correlation), item discrimination index, item difficulty, content validity, construct validity, responsiveness, test-retest reliability, and feasibility were assessed. The questionnaire was externally validated. Internal consistency was good with a Cronbach alpha of 0.8. The median item-total correlation and mean item discrimination index were satisfactory. Both sets were perceived as relevant to clinical practice. Construct validity was good. Both sets were responsive but failed on test-retest reliability. One set took 24 minutes and the other 33 minutes to complete, on average. External GP trainees had comparable results. The U-CEP is a valid questionnaire to assess knowledge on clinical epidemiology, which is a prerequisite for practicing evidence-based medicine in daily clinical practice. Copyright © 2016 Elsevier Inc. All rights reserved.
van der Stap, Djamilla K.D.; Rider, Lisa G.; Alexanderson, Helene; Huber, Adam M.; Gualano, Bruno; Gordon, Patrick; van der Net, Janjaap; Mathiesen, Pernille; Johnson, Liam G.; Ernste, Floranne C.; Feldman, Brian M.; Houghton, Kristin M.; Singh-Grewal, Davinder; Kutzbach, Abraham Garcia; Munters, Li Alemo; Takken, Tim
2015-01-01
OBJECTIVES Currently there are no evidence-based recommendations regarding which fitness and strength tests to use for patients with childhood or adult idiopathic inflammatory myopathies (IIM). This hinders clinicians and researchers in choosing the appropriate fitness- or muscle strength-related outcome measures for these patients. Through a Delphi survey, we aimed to identify a candidate core-set of fitness and strength tests for children and adults with IIM. METHODS Fifteen experts participated in a Delphi survey that consisted of five stages to achieve a consensus. Using an extensive search of published literature and through the expertise of the experts, a candidate core-set based on expert opinion and clinimetric properties was developed. Members of the International Myositis Assessment and Clinical Studies Group (IMACS) were invited to review this candidate core-set during the final stage, which led to a final candidate core-set. RESULTS A core-set of fitness- and strength-related outcome measures was identified for children and adults with IIM. For both children and adults, different tests were identified and selected for maximal aerobic fitness, submaximal aerobic fitness, anaerobic fitness, muscle strength tests and muscle function tests. CONCLUSIONS The core-set of fitness and strength-related outcome measures provided by this expert consensus process will assist practitioners and researchers in deciding which tests to use in IIM patients. This will improve the uniformity of fitness and strength tests across studies, thereby facilitating the comparison of study results and therapeutic exercise program outcomes among patients with IIM. PMID:26568594
Miller, Sarah; Fritzon, Katarina
2007-01-01
Fire-setting and self-harm behaviours among women in high security special hospitals may be understood using Shye's Action System Theory (AST), in which four functional modes are recognized: 'adaptive', 'expressive', 'integrative', and 'conservative'. We tested for relationships between different forms of fire-setting and self-harm behaviours and AST modes among women in special hospital, and for consistency within modes across the two behaviours. Clinical case files evidencing both fire-setting and self-harm behaviours (n = 50) were analysed for content, focusing on incident characteristics. A total of 29 fire-setting and 22 self-harm variables were analysed using Smallest Space Analysis (SSA). Chi-square and Spearman's rho (ρ) analyses were used to determine functional consistency across behavioural modes. Most women showed one predominant AST mode in fire-setting (n = 39) and self-harm (n = 35). Significant positive correlations were found between integrative and adaptive modes of functioning. The lack of correlation between conservative and expressive modes reflects the differing behaviours used in each activity. Despite this, significant cross-tabulations revealed that each woman had parallel fire-setting and self-harm styles. Findings suggest that, for some women, setting fires and self-harm fulfil a similar underlying function. Support is given to AST as a way of furthering understanding of damaging behaviours, whether self- or other-inflicted. Copyright 2007 John Wiley & Sons, Ltd.
Measuring wildland fire leadership: the crewmember perceived leadership scale
Alexis L. Waldron; David P. Schary; Bradley J. Cardinal
2015-01-01
The aims of this research were to develop and test a scale used to measure leadership in wildland firefighting using two samples of USA wildland firefighters. The first collection of data occurred in the spring and early summer and consisted of an online survey. The second set of data was collected towards late summer and early fall (autumn). The second set of...
ERIC Educational Resources Information Center
Lane, Kathleen Lynne; Parks, Robin J.; Kalberg, Jemma Robertson; Carter, Erik W.
2007-01-01
This article presents findings of two studies, one conducted with middle school students (n = 500) in a rural setting and a second conducted with middle school students (n = 528) in an urban setting, of the reliability and validity of the "Student Risk Screening Scale" (SRSS; Drummond, 1994). Results revealed high internal consistency, test-retest…
ERIC Educational Resources Information Center
van der Linden, Wim J.; Vos, Hans J.; Chang, Lei
In judgmental standard setting experiments, it may be difficult to specify subjective probabilities that adequately take the properties of the items into account. As a result, these probabilities are not consistent with each other in the sense that they do not refer to the same borderline level of performance. Methods to check standard setting…
Predicting future forestland area: a comparison of econometric approaches.
SoEun Ahn; Andrew J. Plantinga; Ralph J. Alig
2000-01-01
Predictions of future forestland area are an important component of forest policy analyses. In this article, we test the ability of econometric land use models to accurately forecast forest area. We construct a panel data set for Alabama consisting of county-level time-series observations for the period 1964 to 1992. We estimate models using restricted data sets, namely,...
Fostering Social Agency in Multimedia Learning: Examining the Impact of an Animated Agent's Voice
ERIC Educational Resources Information Center
Atkinson, Robert K.; Mayer, Richard E.; Merrill, Mary Margaret
2005-01-01
Consistent with social agency theory, we hypothesized that learners who studied a set of worked-out examples involving proportional reasoning narrated by an animated agent with a human voice would perform better on near and far transfer tests and rate the speaker more positively compared to learners who studied the same set of examples narrated by…
40 CFR 53.34 - Test procedure for methods for PM10 and Class I methods for PM2.5.
Code of Federal Regulations, 2010 CFR
2010-07-01
... simultaneous PM10 or PM2.5 measurements as necessary (see table C-4 of this subpart), each set consisting of...) in appendix A to this subpart). (f) Sequential samplers. For sequential samplers, the sampler shall be configured for the maximum number of sequential samples and shall be set for automatic collection...
ERIC Educational Resources Information Center
Matton, Nadine; Vautier, Stephane; Raufaste, Eric
2009-01-01
Mean gain scores for cognitive ability tests between two sessions in a selection setting are now a robust finding, yet not fully understood. Many authors do not attribute such gain scores to an increase in the target abilities. Our approach consists of testing a longitudinal SEM model suitable to this view. We propose to model the scores' changes…
The Formation of Chondrules: Petrologic Tests of the Shock Wave Model
NASA Technical Reports Server (NTRS)
Connolly, H. C., Jr.; Love, S. G.
1998-01-01
Chondrules are mm-sized spheroidal igneous components of chondritic meteorites. They consist of olivine and orthopyroxene set in a glassy mesostasis with varying minor amounts of metals, sulfides, oxides, and carbon phases.
NASA Astrophysics Data System (ADS)
Parrish, D. D.; Trainer, M.; Young, V.; Goldan, P. D.; Kuster, W. C.; Jobson, B. T.; Fehsenfeld, F. C.; Lonneman, W. A.; Zika, R. D.; Farmer, C. T.; Riemer, D. D.; Rodgers, M. O.
1998-09-01
Measurements of tropospheric nonmethane hydrocarbons (NMHCs) made in continental North America should exhibit a common pattern determined by photochemical removal and dilution acting upon the typical North American urban emissions. We analyze 11 data sets collected in the United States in the context of this hypothesis, in most cases by analyzing the geometric mean and standard deviations of ratios of selected NMHCs. In the analysis we attribute deviations from the common pattern to plausible systematic and random experimental errors. In some cases the errors have been independently verified and the specific causes identified. Thus this common pattern provides a check for internal consistency in NMHC data sets. Specific tests are presented which should provide useful diagnostics for all data sets of anthropogenic NMHC measurements collected in the United States. Similar tests, based upon the perhaps different emission patterns of other regions, presumably could be developed. The specific tests include (1) a lower limit for ethane concentrations, (2) specific NMHCs that should be detected if any are, (3) the relatively constant mean ratios of the longer-lived NMHCs with similar atmospheric lifetimes, (4) the constant relative patterns of families of NMHCs, and (5) limits on the ambient variability of the NMHC ratios. Many experimental problems are identified in the literature and the Southern Oxidant Study data sets. The most important conclusion of this paper is that a rigorous field intercomparison of simultaneous measurements of ambient NMHCs by different techniques and researchers is of crucial importance to the field of atmospheric chemistry. The tests presented here are suggestive of errors but are not definitive; only a field intercomparison can resolve the uncertainties.
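The ratio statistics used in this kind of analysis are geometric, i.e., arithmetic means and standard deviations computed in log space. A minimal sketch with synthetic concentrations (the species names and distribution parameters are placeholders):

    # Geometric mean and geometric standard deviation of an NMHC
    # concentration ratio (e.g., benzene/toluene), computed in log space.
    import numpy as np

    benzene = np.random.lognormal(mean=0.0, sigma=0.5, size=500)
    toluene = np.random.lognormal(mean=0.7, sigma=0.5, size=500)

    log_ratio = np.log(benzene / toluene)
    geo_mean = np.exp(log_ratio.mean())
    geo_sd = np.exp(log_ratio.std(ddof=1))
    print(f"geometric mean ratio = {geo_mean:.3f}, geometric SD = {geo_sd:.3f}")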
Piette, Elizabeth R; Moore, Jason H
2018-01-01
Machine learning methods and conventions are increasingly employed for the analysis of large, complex biomedical data sets, including genome-wide association studies (GWAS). Reproducibility of machine learning analyses of GWAS can be hampered by biological and statistical factors, particularly so for the investigation of non-additive genetic interactions. Application of traditional cross validation to a GWAS data set may result in poor consistency between the training and testing data set splits due to an imbalance of the interaction genotypes relative to the data as a whole. We propose a new cross validation method, proportional instance cross validation (PICV), that preserves the original distribution of an independent variable when splitting the data set into training and testing partitions. We apply PICV to simulated GWAS data with epistatic interactions of varying minor allele frequencies and prevalences and compare performance to that of a traditional cross validation procedure in which individuals are randomly allocated to training and testing partitions. Sensitivity and positive predictive value are significantly improved across all tested scenarios for PICV compared to traditional cross validation. We also apply PICV to GWAS data from a study of primary open-angle glaucoma to investigate a previously-reported interaction, which fails to significantly replicate; PICV however improves the consistency of testing and training results. Application of traditional machine learning procedures to biomedical data may require modifications to better suit intrinsic characteristics of the data, such as the potential for highly imbalanced genotype distributions in the case of epistasis detection. The reproducibility of genetic interaction findings can be improved by considering this variable imbalance in cross validation implementation, such as with PICV. This approach may be extended to problems in other domains in which imbalanced variable distributions are a concern.
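As a rough analogy under our own assumptions, scikit-learn's StratifiedKFold realizes the same distribution-preserving idea for a discrete variable: stratifying the split on the imbalanced genotype keeps its proportions equal across training and testing partitions. The sketch below is in the spirit of PICV, not the authors' code.

    # Distribution-preserving cross validation: stratify the train/test
    # split on the (imbalanced) genotype so each fold keeps the original
    # genotype proportions. Data are synthetic placeholders.
    import numpy as np
    from sklearn.model_selection import StratifiedKFold

    rng = np.random.default_rng(0)
    genotype = rng.choice([0, 1, 2], size=1000, p=[0.80, 0.18, 0.02])  # rare class
    phenotype = rng.integers(0, 2, size=1000)

    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(np.zeros(len(genotype)), genotype):
        train_frac = np.bincount(genotype[train_idx], minlength=3) / len(train_idx)
        test_frac = np.bincount(genotype[test_idx], minlength=3) / len(test_idx)
        print(np.round(train_frac, 3), np.round(test_frac, 3))  # proportions match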
Consistency of blood pressure differences between the left and right arms.
Eguchi, Kazuo; Yacoub, Mona; Jhalani, Juhee; Gerin, William; Schwartz, Joseph E; Pickering, Thomas G
2007-02-26
It is unclear to what extent interarm blood pressure (BP) differences are reproducible vs the result of random error. The present study was designed to resolve this issue. We enrolled 147 consecutive patients from a hypertension clinic. Three sets of 3 BP readings were recorded, first using 2 oscillometric devices simultaneously in the 2 arms (set 1); next, 3 readings were taken sequentially for each arm using a standard mercury sphygmomanometer (set 2); finally, the readings as performed for set 1 were repeated (set 3). The protocol was repeated at a second visit for 91 patients. Large interarm systolic BP differences were consistently seen in 2 patients with obstructive arterial disease. In the remaining patients, the systolic BP and the diastolic BP, respectively, were slightly higher in the right arm than in the left arm by 2 to 3 mm Hg and by 1 mm Hg for all 3 sets (P<.01 for all). For the systolic BP and the diastolic BP, respectively, the numbers of patients who had a mean interarm difference of more than 5 mm Hg were 11 (7.5%) and 4 (2.7%) across all 3 sets of readings. Among patients who repeated the test, none had a consistent interarm BP difference of more than 5 mm Hg across the 2 visits. The interarm BP difference was consistent only when obstructive arterial disease was present. Although BP in the right arm tended to be higher than in the left arm, clinically meaningful interarm differences were not reproducible in the absence of obstructive arterial disease and are attributable to random variation.
NASA Astrophysics Data System (ADS)
Achitouv, I.; Rasera, Y.; Sheth, R. K.; Corasaniti, P. S.
2013-12-01
The excursion set approach provides a framework for predicting how the abundance of dark matter halos depends on the initial conditions. A key ingredient of this formalism is the specification of a critical overdensity threshold (barrier) which protohalos must exceed if they are to form virialized halos at a later time. However, to make its predictions, the excursion set approach explicitly averages over all positions in the initial field, rather than the special ones around which halos form, so it is not clear that the barrier has physical motivation or meaning. In this Letter we show that once the statistical assumptions which underlie the excursion set approach are considered a drifting diffusing barrier model does provide a good self-consistent description both of halo abundance as well as of the initial overdensities of the protohalo patches.
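For background, the classic constant-barrier first-crossing distribution that such barrier models generalize is, with S the variance of the linear density field smoothed on the scale of interest and δ_c the critical overdensity (the drifting diffusing barrier adds a drift and a stochastic scatter to this threshold):

    f(S) = \frac{\delta_{c}}{\sqrt{2\pi}\, S^{3/2}} \exp\!\left(-\frac{\delta_{c}^{2}}{2S}\right).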
Reliability and validity of the Dutch pediatric Voice Handicap Index.
Veder, Laura; Pullens, Bas; Timmerman, Marieke; Hoeve, Hans; Joosten, Koen; Hakkesteegt, Marieke
2017-05-01
The pediatric voice handicap index (pVHI) has been developed to provide better insight into parents' perception of their child's voice-related quality of life. The purpose of the present study was to validate the Dutch pVHI by evaluating its internal consistency and reliability. Furthermore, we determined the optimal cut-off point for a normal pVHI score. All items of the English pVHI were translated into Dutch. Parents of children in our dysphonic and control groups were asked to fill out the questionnaire. For the test-retest analysis we used a different study group who filled out the pVHI twice as part of a large follow-up study. Internal consistency was analyzed through Cronbach's α coefficient. The test-retest reliability was assessed by determining Pearson's correlation coefficient. The Mann-Whitney test was used to compare the questionnaire scores of the control group with those of the dysphonic group. By calculating receiver operating characteristic (ROC) curves, sensitivity, and specificity, we were able to set a cut-off point. We obtained data from 122 asymptomatic children and from 79 dysphonic children. The scores of the questionnaire differed significantly between the groups. The internal consistency showed an overall Cronbach α coefficient of 0.96, and the total pVHI questionnaire showed excellent test-retest reliability, with a Pearson's correlation coefficient of 0.90. A cut-off point for the total pVHI questionnaire was set at 7 points, with a specificity of 85% and sensitivity of 100%. A cut-off point for the VAS score was set at 13, with a specificity of 93% and sensitivity of 97%. The Dutch pVHI is a valid and reliable tool for the assessment of children with voice problems. With a cut-off point of 7 points for the total pVHI score and of 13 for the VAS score, the pVHI might be used as a screening tool to assess dysphonic complaints and may be a useful complementary tool to identify children with dysphonia. Copyright © 2017 Elsevier B.V. All rights reserved.
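One common way to set such a cut-off from an ROC curve is to maximize Youden's J (sensitivity + specificity - 1). The sketch below does this on synthetic scores; the study's own weighting of sensitivity against specificity may have differed.

    # Choosing a questionnaire cut-off from an ROC curve via Youden's J.
    # The score distributions below are synthetic placeholders.
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(1)
    scores = np.concatenate([rng.normal(4, 3, 122), rng.normal(25, 10, 79)])
    labels = np.concatenate([np.zeros(122), np.ones(79)])

    fpr, tpr, thresholds = roc_curve(labels, scores)
    best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
    print(f"cut-off ~ {thresholds[best]:.1f}, "
          f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")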
McCoy, Dana Charles; Sudfeld, Christopher R; Bellinger, David C; Muhihi, Alfa; Ashery, Geofrey; Weary, Taylor E; Fawzi, Wafaie; Fink, Günther
2017-02-09
Low-cost, cross-culturally comparable measures of the motor, cognitive, and socioemotional skills of children under 3 years remain scarce. In the present paper, we aim to develop a new caregiver-reported early childhood development (ECD) scale designed to be implemented as part of household surveys in low-resourced settings. We evaluate the acceptability, test-retest reliability, internal consistency, and discriminant validity of the new ECD items, subscales, and full scale in a sample of 2481 18- to 36-month-old children from peri-urban and rural Tanzania. We also compare total and subscale scores with performance on the Bayley Scales of Infant Development (BSID-III) in a subsample of 1036 children. Qualitative interviews from 10 mothers and 10 field workers are used to inform quantitative data. Adequate levels of acceptability and internal consistency were found for the new scale and its motor, cognitive, and socioemotional subscales. Correlations between the new scale and the BSID-III were high (r > .50) for the motor and cognitive subscales, but low (r < .20) for the socioemotional subscale. The new scale discriminated between children's skills based on age, stunting status, caregiver-reported disability, and adult stimulation. Test-retest reliability scores were variable among a subset of items tested. Results of this study provide empirical support from a low-income country setting for the acceptability, reliability, and validity of a new caregiver-reported ECD scale. Additional research is needed to test these and other caregiver reported items in children in the full 0 to 3 year range across multiple cultural and linguistic settings.
Unit: Minerals and Crystals, First Trial Materials, Inspection Set.
ERIC Educational Resources Information Center
Australian Science Education Project, Toorak, Victoria.
This unit, one of a series being developed for Australian secondary school science courses, consists of a teacher's guide, two student booklets, a test booklet, a student workbook (which also contains answers to questions raised in the student booklets), and an answer sheet containing comments on the answers to the questions in the test booklet.…
NASA Technical Reports Server (NTRS)
Jaffe, P.; Weaver, R. W.; Lee, R. E.
1981-01-01
The 12 continental remote sites were decommissioned. Testing was consolidated into a five-site network consisting of the four Southern California sites and a new Florida site. 16 kW of new state-of-the-art modules were deployed at the five sites. Testing of the old modules continued at the Goldstone site but as a low-priority item. Array testing of modules is considered. Additional new testing capabilities were added. A battery-powered array data logger is discussed. A final set of failure and degradation data was obtained from the modules.
Burkey, Matthew D.; Ghimire, Lajina; Adhikari, Ramesh P.; Kohrt, Brandon A.; Jordans, Mark J. D.; Haroz, Emily; Wissow, Lawrence
2017-01-01
Systematic processes are needed to develop valid measurement instruments for disruptive behavior disorders (DBDs) in cross-cultural settings. We employed a four-step process in Nepal to identify and select items for a culturally valid assessment instrument: 1) We extracted items from validated scales and local free-list interviews. 2) Parents, teachers, and peers (n=30) rated the perceived relevance and importance of behavior problems. 3) Highly rated items were piloted with children (n=60) in Nepal. 4) We evaluated internal consistency of the final scale. We identified 49 symptoms from 11 scales, and 39 behavior problems from free-list interviews (n=72). After dropping items for low ratings of relevance and severity and for poor item-test correlation, low frequency, and/or poor acceptability in pilot testing, 16 items remained for the Disruptive Behavior International Scale—Nepali version (DBIS-N). The final scale had good internal consistency (α=0.86). A 4-step systematic approach to scale development including local participation yielded an internally consistent scale that included culturally relevant behavior problems. PMID:28093575
Measuring the Reliability of Picture Story Exercises like the TAT
Gruber, Nicole; Kreuzpointner, Ludwig
2013-01-01
As frequently reported, psychometric assessments of Picture Story Exercises, especially variations of the Thematic Apperception Test, mostly reveal inadequate scores for internal consistency. We demonstrate that the reason for this apparent shortcoming is not the coding system itself but the incorrect use of internal consistency coefficients, especially Cronbach's α. This problem can be eliminated by using the category-scores as items instead of the picture-scores. In addition to a theoretical explanation, we prove mathematically why the use of category-scores produces an adequate internal consistency estimation and examine our idea empirically with the original data set of the Thematic Apperception Test by Heckhausen and two additional data sets. We found generally higher values when using the category-scores as items instead of picture-scores. From an empirical and theoretical point of view, the resulting reliability estimate is also superior to that obtained by treating each category within a picture as an item. When comparing our suggestion with a multifaceted Rasch model we provide evidence that our procedure better fits the underlying principles of PSE. PMID:24348902
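The coefficient at issue is straightforward to compute either way. A minimal sketch of Cronbach's α with category-scores as items (the score matrix is synthetic, not the Heckhausen data):

    # Cronbach's alpha with category-scores as items, the approach argued
    # for above. Rows = respondents, columns = category-scores (synthetic).
    import numpy as np

    def cronbach_alpha(items):
        """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))
    categories = latent + rng.normal(scale=1.0, size=(200, 6))  # 6 category-scores
    print(f"alpha = {cronbach_alpha(categories):.2f}")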
Suslov, Anatoly P; Kuzin, Stanislav N; Golosova, Tatiana V; Shalunova, Nina V; Malyshev, Nikolai A; Sadikova, Natalia V; Vavilova, Lubov M; Somova, Anna V; Musina, Elena E; Ivanova, Maria V; Kipor, Tatiana T; Timonin, Igor M; Kuzina, Lubov E; Godkov, Mihail A; Bajenov, Alexei I; Nesterenko, Vladimir G
2002-07-01
When human serum samples are tested for anti-hepatitis C virus (HCV) antibodies using different ELISA kits as well as immunoblot assay kits, discrepant results often occur. As a result, the diagnosis of HCV infection in such sera remains unclear. The purpose of this investigation was to define the limits of HCV serodiagnostics. Overall, 7 different test kits from domestic and foreign manufacturers were used to test the sampled sera. A preliminary comparative study using seroconversion panels PHV905, PHV907, and PHV908 was performed, and a reference kit (Murex anti-HCV version 4) was chosen as the most sensitive kit on the basis of the results. Overall, 1640 serum samples were screened using different anti-HCV ELISA kits, and 667 of them gave discrepant results in at least two kits. These sera were then tested using three anti-HCV ELISA kits (first set of 377 samples) or four anti-HCV ELISA kits (second set of 290 samples) under reference laboratory conditions. In the first set 17.2% of samples remained discrepant, and in the second set 13.4%. "Discrepant" sera were further tested in RIBA 3.0 and INNO-LIA immunoblot confirmatory assays, but approximately 5-7% of them remained undetermined after all the tests. For samples with a signal-to-cutoff ratio higher than 3.0, a high rate of result consistency across the reference kit, routine ELISA kits, and the INNO-LIA immunoblot assay was observed. On the other hand, the results of testing 27 "problematic" sera in RIBA 3.0 and INNO-LIA were consistent in only 55.5% of cases. Analysis of the antigen spectrum reactive with antibodies in "problematic" sera demonstrated a predominance of Core, NS3, and NS4 antigens for sera positive in RIBA 3.0, and of Core and NS3 antigens for sera positive in INNO-LIA. To overcome the problem of undetermined sera, methods based on other principles, as well as alternative criteria for the diagnosis of HCV infection, are discussed.
Relative-Error-Covariance Algorithms
NASA Technical Reports Server (NTRS)
Bierman, Gerald J.; Wolff, Peter J.
1991-01-01
Two algorithms compute error covariance of difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of state of discrete linear system. Provides quantitative measure of mutual consistency or inconsistency of estimates of states. Relative-error-covariance concept applied, to determine degree of correlation between trajectories calculated from two overlapping sets of measurements and construct real-time test of consistency of state estimates based upon recently acquired data.
Pacific Marine Energy Center - South Energy Test Site, Wave Measurements
Annette von Jouanne
2016-06-06
TRIAXYS data from the NNMREC-SETS, for Nov. 2014 - Jan. 2015, and May 2015 - Dec. 2015. The data consists of: Date, Time, significant wave height (1 hour average), significant wave period (1 hour average).
W. Cohen; H. Andersen; S. Healey; G. Moisen; T. Schroeder; C. Woodall; G. Domke; Z. Yang; S. Stehman; R. Kennedy; C. Woodcock; Z. Zhu; J. Vogelmann; D. Steinwand; C. Huang
2014-01-01
The authors are developing a REDD+ MRV system that tests different biomass estimation frameworks and components. Design-based inference from a costly field plot network was compared to sampling with LiDAR strips and a smaller set of plots in combination with Landsat for disturbance monitoring. Biomass estimation uncertainties associated with these different data sets...
Ultrasonic monitoring of the setting of silicone elastomeric impression materials.
Kanazawa, Tomoe; Murayama, Ryosuke; Furuichi, Tetsuya; Imai, Arisa; Suda, Shunichi; Kurokawa, Hiroyasu; Takamizawa, Toshiki; Miyazaki, Masashi
2017-01-31
This study used an ultrasonic measurement device to monitor the setting behavior of silicone elastomeric impression materials, and the influence of temperature on setting behavior was determined. The ultrasonic device consisted of a pulser-receiver, transducers, and an oscilloscope. The two-way transit time through the mixing material was divided by two to account for the down-and-back travel path; then it was multiplied by the sonic velocity. Analysis of variance and the Tukey honest significant difference test were used. In the early stages of the setting process, most of the ultrasonic energy was absorbed by the elastomers and the second echoes were relatively weak. As the elastomers hardened, the sonic velocities increased until they plateaued. The changes in sonic velocities varied among the elastomers tested, and were affected by temperature conditions. The ultrasonic method used in this study has considerable potential for determining the setting processes of elastomeric impression materials.
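The pulse-echo arithmetic implied here is the standard one: with a known specimen thickness, the one-way transit time (half the measured down-and-back time) yields the sonic velocity. A tiny sketch with illustrative numbers; the thickness and transit time below are our assumptions, not values from the study.

    # Pulse-echo sonic velocity: v = d / (t_two_way / 2) = 2 d / t_two_way.
    def sonic_velocity(thickness_m, two_way_time_s):
        """Velocity from specimen thickness and down-and-back transit time."""
        return 2.0 * thickness_m / two_way_time_s

    # e.g. 5 mm of impression material, 9.0 microseconds down-and-back:
    print(f"{sonic_velocity(5e-3, 9.0e-6):.0f} m/s")  # ~1111 m/s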
Lim, Chun Yi; Law, Mary; Khetani, Mary; Rosenbaum, Peter; Pollock, Nancy
2018-08-01
To estimate the psychometric properties of a culturally adapted version of the Young Children's Participation and Environment Measure (YC-PEM) for use among Singaporean families. This is a prospective cohort study. Caregivers of 151 Singaporean children with (n = 83) and without (n = 68) developmental disabilities, between 0 and 7 years, completed the YC-PEM (Singapore) questionnaire with 3 participation scales (frequency, involvement, and change desired) and 1 environment scale for three settings: home, childcare/preschool, and community. Setting-specific estimates of internal consistency, test-retest reliability, and construct validity were obtained. Internal consistency estimates varied from .59 to .92 for the participation scales and .73 to .79 for the environment scale. Test-retest reliability estimates from the YC-PEM conducted on two occasions, 2-3 weeks apart, varied from .39 to .89 for the participation scales and from .65 to .80 for the environment scale. Moderate to large differences were found in participation and perceived environmental support between children with and without a disability. YC-PEM (Singapore) scales have adequate psychometric properties except for low internal consistency for the childcare/preschool participation frequency scale and low test-retest reliability for home participation frequency scale. The YC-PEM (Singapore) may be used for population-level studies involving young children with and without developmental disabilities.
Lenselink, Charlotte H.; de Bie, Roosmarie P.; van Hamont, Dennis; Bakkers, Judith M. J. E.; Quint, Wim G. V.; Massuger, Leon F. A. G.; Bekkers, Ruud L. M.; Melchers, Willem J. G.
2009-01-01
This study assesses human papillomavirus (HPV) detection and genotyping in self-sampled genital smears applied to an indicating FTA elute cartridge (FTA cartridge). The study group consisted of 96 women, divided into two sample sets. All samples were analyzed by the HPV SPF10-Line Blot 25. Set 1 consisted of 45 women attending the gynecologist; all obtained a self-sampled cervicovaginal smear, which was applied to an FTA cartridge. HPV results were compared to a cervical smear (liquid based) taken by a trained physician. Set 2 consisted of 51 women who obtained a self-sampled cervicovaginal smear at home, which was applied to an FTA cartridge and to a liquid-based medium. DNA was obtained from the FTA cartridges by simple elution as well as extraction. Of all self-obtained samples of set 1, 62.2% tested HPV positive. The overall agreement between self- and physician-obtained samples was 93.3%, in favor of the self-obtained samples. In sample set 2, 25.5% tested HPV positive. The overall agreement for high-risk HPV presence between the FTA cartridge and liquid-based medium and between DNA elution and extraction was 100%. This study shows that HPV detection and genotyping in self-obtained cervicovaginal samples applied to an FTA cartridge is highly reliable. It shows a high level of overall agreement with HPV detection and genotyping in physician-obtained cervical smears and liquid-based self-samples. DNA can be obtained by simple elution and is therefore easy, cheap, and fast. Furthermore, the FTA cartridge is a convenient medium for collection and safe transport at ambient temperatures. Therefore, this method may contribute to a new way of cervical cancer screening. PMID:19553570
Consistency in performance evaluation reports and medical records.
Lu, Mingshan; Ma, Ching-to Albert
2002-12-01
In the health care market, managed care has become the latest innovation for the delivery of services. For efficient implementation, the managed care organization relies on accurate information. So clinicians are often asked to report on patients before referrals are approved, treatments authorized, or insurance claims processed. What are clinicians' responses to solicitation for information by managed care organizations? The existing health literature has already pointed out the importance of provider gaming, sincere reporting, nudging, and dodging the rules. We assess the consistency of clinicians' reports on clients across administrative data and clinical records. For about 1,000 alcohol abuse treatment episodes, we compare clinicians' reports across two data sets. The first one, the Maine Addiction Treatment System (MATS), was an administrative data set; the state government used it for program performance monitoring and evaluation. The second was a set of medical record abstracts, taken directly from the clinical records of treatment episodes. A clinician's reporting practice exhibits an inconsistency if the information reported in MATS differs from the information reported in the medical record in a statistically significant way. We look for evidence of inconsistencies in five categories: admission alcohol use frequency, discharge alcohol use frequency, termination status, admission employment status, and discharge employment status. Chi-square tests, Kappa statistics, and sensitivity and specificity tests are used for hypothesis testing. Multiple imputation methods are employed to address the problem of missing values in the record abstract data set. For admission and discharge alcohol use frequency measures, we find, respectively, strong and supporting evidence for inconsistencies. We find equally strong evidence for consistency in reports of admission and discharge employment status, and mixed evidence on report consistency for termination status. Patterns of inconsistency may be due to both altruistic and self-interest motives. Payment contracts based on performance may be subject to provider misreporting, which could seriously undermine their purpose. However, further analysis is needed to determine how much of the inconsistencies observed are results of clinician gaming in reporting. Increasing system accountability is becoming more and more important for health care policy makers. Results of this study will lead to a better understanding of physician reporting behavior. Our work in this paper on the data sets confirms the statistical significance of strategic reporting in alcohol addiction treatment. It will be of interest to confirm our finding in other data sets. Our ongoing research will model the motives behind strategic reporting. We hypothesize that both altruistic and financial incentives are present. Our empirical identification strategy will use Maine's Performance-Based Contracting system and client insurance sources to test how these incentives affect the direction of clinicians' strategic reporting.
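Of the consistency statistics listed, Cohen's kappa is the least self-explanatory. A minimal sketch with synthetic paired reports; the labels and disagreement rate are our placeholders, not the study's data.

    # Agreement between the administrative (MATS) report and the medical-record
    # abstract for one categorical field, measured with Cohen's kappa.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(2)
    record = rng.choice(["abstinent", "using"], size=1000, p=[0.6, 0.4])
    # Administrative report mostly agrees, with ~15% of entries flipped:
    flip = rng.random(1000) < 0.15
    mats = np.where(flip, np.where(record == "using", "abstinent", "using"), record)

    print(f"kappa = {cohen_kappa_score(record, mats):.2f}")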
Noise tests of a mixer nozzle-externally blown flap system
NASA Technical Reports Server (NTRS)
Goodykoontz, J. H.; Dorsch, R. G.; Groesbeck, D. E.
1973-01-01
Noise tests were conducted on a large scale model of an externally blown flap lift augmentation system, employing a mixer nozzle. The mixer nozzle consisted of seven flow passages with a total equivalent diameter of 40 centimeters. With the flaps in the 30 - 60 deg setting, the noise level below the wing was less with the mixer nozzle than when a standard circular nozzle was used. At the 10 - 20 deg flap setting, the noise levels were about the same when either nozzle was used. With retracted flaps, the noise level was higher when the mixer nozzle was used.
A rotorcraft flight database for validation of vision-based ranging algorithms
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1992-01-01
A helicopter flight test experiment was conducted at the NASA Ames Research Center to obtain a database consisting of video imagery and accurate measurements of camera motion, camera calibration parameters, and true range information. The database was developed to allow verification of monocular passive range estimation algorithms for use in the autonomous navigation of rotorcraft during low altitude flight. The helicopter flight experiment is briefly described. Four data sets representative of the different helicopter maneuvers and the visual scenery encountered during the flight test are presented. These data sets will be made available to researchers in the computer vision community.
Establishment of Religion in Primary and Secondary Schools.
ERIC Educational Resources Information Center
Underwood, Julie K.
1989-01-01
A modified analysis of the "Lemon" test as set forth in Supreme Court opinions is explained, and relevant lower court cases are reviewed. Determines that the modified standard is heightened and consistently applied within K-12 education activities. (MLF)
Portable detection system of vegetable oils based on laser induced fluorescence
NASA Astrophysics Data System (ADS)
Zhu, Li; Zhang, Yinchao; Chen, Siying; Chen, He; Guo, Pan; Mu, Taotao
2015-11-01
Food safety, especially of edible oils, has attracted more and more attention recently. Many methods and instruments have emerged to test edible oils, covering both classification and adulteration detection. It is well known that adulteration detection builds on classification. In this paper, a portable detection system based on laser induced fluorescence is proposed and designed to classify various edible oils (olive, rapeseed, walnut, peanut, linseed, sunflower, and corn oils). 532 nm laser modules are used in this equipment, and all the components are assembled into a module (100*100*25 mm). A total of 700 sets of fluorescence data (100 sets for each type of oil) were collected. In order to classify the different edible oils, principal component analysis and a support vector machine were employed in the data analysis. The training set consisted of 560 sets of data (80 sets of each oil) and the test set consisted of 140 sets of data (20 sets of each oil). The recognition rate is up to 99%, which demonstrates the reliability of this portable system. Being nonintrusive and requiring no sample preparation, the portable system can be effectively applied to food testing.
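A minimal sketch of the described pipeline, PCA followed by an SVM with the paper's 560/140 split. The spectra below are random placeholders, so the printed accuracy will not approach the reported 99%; the component count and kernel are our assumptions.

    # PCA dimension reduction of fluorescence spectra followed by an SVM
    # classifier, with a stratified 560/140 train/test split over 7 oil types.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    X = rng.random((700, 2048))        # 700 fluorescence spectra (placeholder)
    y = np.repeat(np.arange(7), 100)   # 7 oil types, 100 spectra each

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=560,
                                              stratify=y, random_state=0)
    clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
    clf.fit(X_tr, y_tr)
    print(f"test accuracy = {clf.score(X_te, y_te):.2%}")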
Verification of the Sentinel-4 focal plane subsystem
NASA Astrophysics Data System (ADS)
Williges, Christian; Uhlig, Mathias; Hilbert, Stefan; Rossmann, Hannes; Buchwinkler, Kevin; Babben, Steffen; Sebastian, Ilse; Hohn, Rüdiger; Reulke, Ralf
2017-09-01
The Sentinel-4 payload is a multi-spectral camera system, designed to monitor atmospheric conditions over Europe from a geostationary orbit. The German Aerospace Center, DLR Berlin, conducted the verification campaign of the Focal Plane Subsystem (FPS) during the second half of 2016. The FPS consists of two Focal Plane Assemblies (FPAs), two Front End Electronics (FEEs), one Front End Support Electronic (FSE) and one Instrument Control Unit (ICU). The FPAs are designed for two spectral ranges: UV-VIS (305 nm - 500 nm) and NIR (750 nm - 775 nm). In this publication, we present in detail the set-up of the verification campaign of the Sentinel-4 Qualification Model (QM). This set-up will also be used for the upcoming Flight Model (FM) verification, planned for early 2018. The FPAs have to be operated at 215 K +/- 5 K, making it necessary to use a thermal vacuum chamber (TVC) for the tests. The test campaign consists mainly of radiometric tests. This publication focuses on the challenge of remotely illuminating both Sentinel-4 detectors as well as a reference detector homogeneously, over a distance of approximately 1 m, from outside the TVC. Selected test analyses and results are presented.
Ballistic Impact Testing of Aluminum 2024 and Titanium 6Al-4V for Material Model Development
NASA Technical Reports Server (NTRS)
Pereira, J. Michael; Revilock, Duane M.; Ruggeri, Charles R.; Emmerling, William C.; Altobelli, Donald J.
2012-01-01
An experimental program is underway to develop a consistent set of material property data, impact test data, and failure analyses for a variety of materials that can be used to develop improved impact failure and deformation models. Unique features of this data set are that all material property information and impact test results are obtained using identical materials, the test methods and procedures are extensively documented, and all of the raw data are available. This report describes ballistic impact testing that was conducted on aluminum (Al) 2024 and titanium (Ti) 6Al-4V sheet and plate samples of different thicknesses and with different types of projectiles, one a regular cylinder and one with a more complex geometry incorporating features representative of a jet engine fan blade.
Space Shuttle Orbital Drag Parachute Design
NASA Technical Reports Server (NTRS)
Meyerson, Robert E.
2001-01-01
The drag parachute system was added to the Space Shuttle Orbiter's landing deceleration subsystem beginning with flight STS-49 in May 1992. The addition of this subsystem to an existing space vehicle required a detailed set of ground tests and analyses. The aerodynamic design and performance testing of the system consisted of wind tunnel tests, numerical simulations, pilot-in-the-loop simulations, and full-scale testing. This analysis and design resulted in a fully qualified system that is deployed on every flight of the Space Shuttle.
Staged-Fault Testing of Distance Protection Relay Settings
NASA Astrophysics Data System (ADS)
Havelka, J.; Malarić, R.; Frlan, K.
2012-01-01
In order to analyze the operation of the protection system during induced fault testing in the Croatian power system, a simulation using the CAPE software has been performed. The CAPE software (Computer-Aided Protection Engineering) is expert software intended primarily for relay protection engineers, which calculates current and voltage values during faults in the power system, so that relay protection devices can be properly set up. Once the accuracy of the simulation model had been confirmed, a series of simulations were performed in order to obtain the optimal fault location to test the protection system. The simulation results were used to specify the test sequence definitions for the end-to-end relay testing using advanced testing equipment with GPS synchronization for secondary injection in protection schemes based on communication. The objective of the end-to-end testing was to perform field validation of the protection settings, including verification of the circuit breaker operation, telecommunication channel time and the effectiveness of the relay algorithms. Once the end-to-end secondary injection testing had been completed, the induced fault testing was performed with three-end lines loaded and in service. This paper describes and analyses the test procedure, consisting of CAPE simulations, end-to-end test with advanced secondary equipment and staged-fault test of a three-end power line in the Croatian transmission system.
van der Togt, Remko; Bakker, Piet J M; Jaspers, Monique W M
2011-04-01
RFID offers great opportunities to health care. Nevertheless, prior experiences also show that RFID systems have not been designed and tested in response to the particular needs of health care settings and might introduce new risks. The aim of this study is to present a framework that can be used to assess the performance of RFID systems particularly in health care settings. We developed a framework describing a systematic approach that can be used for assessing the feasibility of using an RFID technology in a particular health care setting, and more specifically for testing the impact of environmental factors on the quality of RFID-generated data and vice versa. This framework is based on our own experiences with an RFID pilot implementation in an academic hospital in The Netherlands and a literature review concerning RFID test methods and current insights into RFID implementations in health care. The implementation of an RFID system within the blood transfusion chain inside a hospital setting was used as a showcase to explain the different phases of the framework. The framework consists of nine phases, including an implementation development plan, RFID and medical equipment interference tests, and data accuracy and data completeness tests to be run in laboratory, simulated field, and real field settings. The potential risks that RFID technologies may bring to the health care setting should be thoroughly evaluated before they are introduced into a vital environment. The RFID performance assessment framework that we present can act as a reference model to start an RFID development, engineering, implementation, and testing plan and, more specifically, to assess the potential risks of interference and to test the quality of the RFID-generated data as potentially influenced by physical objects in specific health care environments. Copyright © 2010 Elsevier Inc. All rights reserved.
A new method for eliciting three speaking styles in the laboratory
Harnsberger, James D.; Wright, Richard; Pisoni, David B.
2009-01-01
In this study, a method was developed to elicit three different speaking styles, reduced, citation, and hyperarticulated, using controlled sentence materials in a laboratory setting. In the first set of experiments, the reduced style was elicited by having twelve talkers read a sentence while carrying out a distractor task that involved recalling from short-term memory an individually-calibrated number of digits. The citation style corresponded to read speech in the laboratory. The hyperarticulated style was elicited by prompting talkers (twice) to reread the sentences more carefully. The results of perceptual tests with naïve listeners and an acoustic analysis showed that six of the twelve talkers produced a reduced style of speech for the test sentences in the distractor task relative to the same sentences in the citation style condition. In addition, all talkers consistently produced sentences in the citation and hyperarticulated styles. In the second set of experiments, the reduced style was elicited by increasing the number of digits in the distractor task by one (a heavier cognitive load). The procedures for eliciting citation and hyperarticulated sentences remained unchanged. Ten talkers were recorded in the second experiment. The results showed that six out of ten talkers differentiated all three styles as predicted (70% of all sentences recorded). In addition, all talkers consistently produced sentences in the citation and hyperarticulated styles. Overall, the results demonstrate that it is possible to elicit controlled sentence stimulus materials varying in speaking style in a laboratory setting, although the method requires further refinement to elicit these styles more consistently from individual participants. PMID:19562041
A new method for eliciting three speaking styles in the laboratory.
Harnsberger, James D; Wright, Richard; Pisoni, David B
2008-04-01
In this study, a method was developed to elicit three different speaking styles, reduced, citation, and hyperarticulated, using controlled sentence materials in a laboratory setting. In the first set of experiments, the reduced style was elicited by having twelve talkers read a sentence while carrying out a distractor task that involved recalling from short-term memory an individually-calibrated number of digits. The citation style corresponded to read speech in the laboratory. The hyperarticulated style was elicited by prompting talkers (twice) to reread the sentences more carefully. The results of perceptual tests with naïve listeners and an acoustic analysis showed that six of the twelve talkers produced a reduced style of speech for the test sentences in the distractor task relative to the same sentences in the citation style condition. In addition, all talkers consistently produced sentences in the citation and hyperarticulated styles. In the second set of experiments, the reduced style was elicited by increasing the number of digits in the distractor task by one (a heavier cognitive load). The procedures for eliciting citation and hyperarticulated sentences remained unchanged. Ten talkers were recorded in the second experiment. The results showed that six out of ten talkers differentiated all three styles as predicted (70% of all sentences recorded). In addition, all talkers consistently produced sentences in the citation and hyperarticulated styles. Overall, the results demonstrate that it is possible to elicit controlled sentence stimulus materials varying in speaking style in a laboratory setting, although the method requires further refinement to elicit these styles more consistently from individual participants.
DOT National Transportation Integrated Search
2014-10-01
This research program develops and validates structural design guidelines and details for concrete bridge decks with corrosion-resistant reinforcing (CRR) bars. A two-phase experimental program was conducted where a control test set consistent wi…
A pseudoinverse deformation vector field generator and its applications
Yan, C.; Zhong, H.; Murphy, M.; Weiss, E.; Siebers, J. V.
2010-01-01
Purpose: To present, implement, and test a self-consistent pseudoinverse displacement vector field (PIDVF) generator, which preserves the location of information mapped back-and-forth between image sets. Methods: The algorithm is an iterative scheme based on nearest neighbor interpolation and a subsequent iterative search. Performance of the algorithm is benchmarked using a lung 4DCT data set with six CT images from different breathing phases and eight CT images for a single prostate patient acquired on different days. A diffeomorphic deformable image registration is used to validate our PIDVFs. Additionally, the PIDVF is used to measure the self-consistency of two nondiffeomorphic algorithms which do not use a self-consistency constraint: the ITK Demons algorithm for the lung patient images and an in-house B-Spline algorithm for the prostate patient images. Both Demons and B-Spline have been QAed through contour comparison. Self-consistency is determined by using a DIR to generate a displacement vector field (DVF) between reference image R and study image S, denoted DVF(R→S). The same DIR is used to generate DVF(S→R). Additionally, our PIDVF generator is used to create PIDVF(S→R). Back-and-forth mapping of a set of points (used as surrogates of contours) using DVF(R→S) and DVF(S→R) is compared to back-and-forth mapping performed with DVF(R→S) and PIDVF(S→R). The Euclidean distances between the original unmapped points and the mapped points are used as a self-consistency measure. Results: Test results demonstrate that the consistency error observed in back-and-forth mappings can be reduced two to nine times in point mapping and 1.5 to three times in dose mapping when the PIDVF is used in place of the B-Spline algorithm. These self-consistency improvements are not affected by exchanging R and S. It is also demonstrated that differences between DVF(S→R) and PIDVF(S→R) can be used as a criterion to check the quality of the DVF. Conclusions: Use of a DVF and its PIDVF will improve the self-consistency of point, contour, and dose mappings in image guided adaptive therapy. PMID:20384247
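The following is an illustrative fixed-point sketch of DVF inversion and the round-trip self-consistency check described above; it is not the authors' nearest-neighbor algorithm, and the 1-D field is an assumption for brevity.

```python
# Illustrative sketch (not the authors' exact algorithm): inverting a 1-D
# displacement field by fixed-point iteration, v(y) = -u(y + v(y)).
# The paper uses nearest-neighbor interpolation plus an iterative search,
# but the self-consistency idea being tested is the same: mapping a point
# forward with u and back with v should return it to its origin.
import numpy as np

x = np.linspace(0.0, 1.0, 201)            # voxel coordinates
u = 0.05 * np.sin(2 * np.pi * x)          # forward DVF: R -> S

v = np.zeros_like(u)                      # inverse DVF estimate: S -> R
for _ in range(50):
    # evaluate u at the back-mapped positions via linear interpolation
    v = -np.interp(x + v, x, u)

# self-consistency check: forward map then inverse map ~ identity
y = x + u                                 # points pushed to study image
roundtrip = y + np.interp(y, x, v)        # pulled back to reference image
print("max round-trip error:", np.abs(roundtrip - x).max())
```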
Data consistency checks for Jefferson Lab Experiment E00-002
NASA Astrophysics Data System (ADS)
Telfeyan, John; Niculescu, Gabriel; Niculescu, Ioana
2006-10-01
Jefferson Lab experiment E00-002 aims to measure inclusive electron-proton and electron-deuteron scattering cross sections at low Q² and moderately low Bjorken x. Data in this kinematic region will further our understanding of the transition between the perturbative and non-perturbative regimes of Quantum Chromodynamics (QCD). As part of the data analysis effort underway at James Madison University (JMU), a comprehensive set of checks and tests was implemented. These tests ensure the quality and consistency of the experimental data, as well as providing, where appropriate, correction factors between the experimental apparatus as used and its idealized computer-simulated representation. This contribution outlines this testing procedure as implemented in the JMU analysis, highlighting the most important features and results.
Test of mutually unbiased bases for six-dimensional photonic quantum systems
D'Ambrosio, Vincenzo; Cardano, Filippo; Karimi, Ebrahim; Nagali, Eleonora; Santamato, Enrico; Marrucci, Lorenzo; Sciarrino, Fabio
2013-01-01
In quantum information, complementarity of quantum mechanical observables plays a key role. The eigenstates of two complementary observables form a pair of mutually unbiased bases (MUBs). More generally, a set of MUBs consists of bases that are all pairwise unbiased. Except for specific dimensions of the Hilbert space, the maximal sets of MUBs are unknown in general. Even for a dimension as low as six, the identification of a maximal set of MUBs remains an open problem, although there is strong numerical evidence that no more than three simultaneous MUBs do exist. Here, by exploiting a newly developed holographic technique, we implement and test different sets of three MUBs for a single photon six-dimensional quantum state (a “qusix”), encoded exploiting polarization and orbital angular momentum of photons. A close agreement is observed between theory and experiments. Our results can find applications in state tomography, quantitative wave-particle duality, quantum key distribution. PMID:24067548
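For reference, the unbiasedness condition underlying these constructions can be stated as follows (standard definition, our notation, not quoted from the paper):

```latex
% Two orthonormal bases {|e_i>}, {|f_j>} of a d-dimensional Hilbert space
% are mutually unbiased when every overlap probability is uniform:
\[
  \left| \langle e_i | f_j \rangle \right|^2 = \frac{1}{d},
  \qquad i, j = 1, \dots, d,
\]
% so for the qusix (d = 6) each probability equals 1/6, and a maximal set
% in dimension d can contain at most d + 1 such bases.
```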
Test of mutually unbiased bases for six-dimensional photonic quantum systems.
D'Ambrosio, Vincenzo; Cardano, Filippo; Karimi, Ebrahim; Nagali, Eleonora; Santamato, Enrico; Marrucci, Lorenzo; Sciarrino, Fabio
2013-09-25
In quantum information, complementarity of quantum mechanical observables plays a key role. The eigenstates of two complementary observables form a pair of mutually unbiased bases (MUBs). More generally, a set of MUBs consists of bases that are all pairwise unbiased. Except for specific dimensions of the Hilbert space, the maximal sets of MUBs are unknown in general. Even for a dimension as low as six, the identification of a maximal set of MUBs remains an open problem, although there is strong numerical evidence that no more than three simultaneous MUBs do exist. Here, by exploiting a newly developed holographic technique, we implement and test different sets of three MUBs for a single photon six-dimensional quantum state (a "qusix"), encoded exploiting polarization and orbital angular momentum of photons. A close agreement is observed between theory and experiments. Our results can find applications in state tomography, quantitative wave-particle duality, quantum key distribution.
Modeling individualized coefficient alpha to measure quality of test score data.
Liu, Molei; Hu, Ming; Zhou, Xiao-Hua
2018-05-23
Individualized coefficient alpha is defined. It is item and subject specific and is used to measure the quality of test score data with heterogeneity among the subjects and items. A regression model is developed based on three sets of generalized estimating equations. The first set models the expectation of the responses, the second set models the response variance, and the third set is proposed to estimate the individualized coefficient alpha, defined and used to measure the individualized internal consistency of the responses. We also use different techniques to extend our method to handle missing data. The asymptotic properties of the estimators are discussed, based on which inference on the coefficient alpha is derived. The performance of our method is evaluated through a simulation study and real data analysis. The real data application is from a health literacy study in Hunan province, China. Copyright © 2018 John Wiley & Sons, Ltd.
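For orientation, the classical coefficient alpha that the individualized version generalizes is (standard form, our notation):

```latex
% Cronbach's alpha for a test score X formed from k item responses Y_i:
\[
  \alpha \;=\; \frac{k}{k-1}
  \left( 1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}} \right),
  \qquad X = \sum_{i=1}^{k} Y_i .
\]
% The paper replaces this single summary with item- and subject-specific
% values estimated from the mean and variance models of the GEEs.
```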
Category-specific semantic deficits: the role of familiarity and property type reexamined.
Bunn, E M; Tyler, L K; Moss, H E
1998-07-01
Category-specific deficits for living things have been explained variously as an artifact due to differences in the familiarity of concepts in different categories (E. Funnell & J. Sheridan, 1992) or as the result of an underlying impairment to sensory knowledge (E. K. Warrington & T. Shallice, 1984). Efforts to test these hypotheses empirically have been hindered by the shortcomings of currently available stimulus materials. A new set of stimuli, developed by the authors to overcome the limitations of existing sets, is described. The set consists of color photographs, matched across categories for familiarity and visual complexity. This set was used to test the semantic knowledge of a classic patient, J.B.R. (E. K. Warrington & T. Shallice, 1984). The results suggest that J.B.R.'s deficit for living things cannot be explained in terms of familiarity effects and that the most severely affected categories are those whose identification is most dependent on sensory information.
Psychometric evaluation of 3-set 4P questionnaire.
Akerman, Eva; Fridlund, Bengt; Samuelson, Karin; Baigi, Amir; Ersson, Anders
2013-02-01
This is a further development of a specific questionnaire, the 3-set 4P, used for measuring former ICU patients' physical and psychosocial problems after intensive care and the need for follow-up. The aim was to psychometrically test and evaluate the 3-set 4P questionnaire in a larger population. The questionnaire consists of three sets: "physical", "psychosocial" and "follow-up". The questionnaires were sent by mail to all patients with more than a 24-hour length of stay in four ICUs in Sweden. Construct validity was measured with exploratory factor analysis with Varimax rotation. This resulted in three factors for the "physical" set, five factors for the "psychosocial" set and four factors for the "follow-up" set, with strong factor loadings and a total explained variance of 62-77.5%. Thirteen questions in the SF-36 were used for concurrent validity, showing Spearman's r(s) of 0.3-0.6 in eight questions and less than 0.2 in five. Test-retest was used for stability reliability. In the follow-up set the correlations were strong to moderate, and in the physical and psychosocial sets moderate to fair; this may be because physical and psychosocial status changed rapidly during the test period. All three sets had good homogeneity. In conclusion, the 3-set 4P showed overall acceptable results, but it has to be further modified in different cultures before being considered a fully operational instrument for use in clinical practice. Copyright © 2012 Elsevier Ltd. All rights reserved.
IRT Analysis of General Outcome Measures in Grades 1-8. Technical Report # 0916
ERIC Educational Resources Information Center
Alonzo, Julie; Anderson, Daniel; Tindal, Gerald
2009-01-01
We present scaling outcomes for mathematics assessments used in the fall to screen students at risk of failing to learn the knowledge and skills described in the National Council of Teachers of Mathematics (NCTM) Focal Point Standards. At each grade level, the assessment consisted of a 48-item test with three 16-item sub-test sets aligned to the…
Bench test evaluation of adaptive servoventilation devices for sleep apnea treatment.
Zhu, Kaixian; Kharboutly, Haissam; Ma, Jianting; Bouzit, Mourad; Escourrou, Pierre
2013-09-15
Adaptive servoventilation devices are marketed to overcome sleep disordered breathing with apneas and hypopneas of both central and obstructive mechanisms often experienced by patients with chronic heart failure. The clinical efficacy of these devices is still questioned. This study challenged the detection and treatment capabilities of the three commercially available adaptive servoventilation devices in response to sleep disordered breathing events reproduced on an innovative bench test. The bench test consisted of a computer-controlled piston and a Starling resistor. The three devices were subjected to a flow sequence composed of central and obstructive apneas and hypopneas including Cheyne-Stokes respiration derived from a patient. The responses of the devices were separately evaluated with the maximum and the clinical settings (titrated expiratory positive airway pressure), and the detected events were compared to the bench-scored values. The three devices responded similarly to central events, by increasing pressure support to raise airflow. All central apneas were eliminated, whereas hypopneas remained. The three devices responded differently to the obstructive events with the maximum settings. These obstructive events could be normalized with clinical settings. The residual events of all the devices were scored lower than bench test values with the maximum settings, but were in agreement with the clinical settings. However, their mechanisms were misclassified. The tested devices reacted as expected to the disordered breathing events, but not sufficiently to normalize the breathing flow. The device-scored results should be used with caution to judge efficacy, as their validity depends upon the initial settings.
NASA Astrophysics Data System (ADS)
Zhu, Xiaoliang; Du, Li; Liu, Bendong; Zhe, Jiang
2016-06-01
We present a method based on an electrochemical sensor array and a back propagation artificial neural network for the detection and quantification of four properties of lubrication oil, namely water content (0, 500 ppm, 1000 ppm), total acid number (TAN) (13.1, 13.7, 14.4, 15.6 mg KOH g-1), soot (0, 1%, 2%, 3%) and sulfur content (1.3%, 1.37%, 1.44%, 1.51%). The sensor array, consisting of four micromachined electrochemical sensors, detects the four properties with overlapping sensitivities. A total set of 36 oil samples containing mixtures of water, soot, and sulfuric acid at different concentrations was prepared for testing. The sensor array's responses were then divided into three sets: training sets (80% of the data), validation sets (10%) and testing sets (10%). Several back propagation artificial neural network architectures were trained with the training and validation sets; one architecture, with four input neurons, 50 and 5 neurons in the first and second hidden layers, and four neurons in the output layer, was selected. The selected neural network was then tested using the testing data (10%). Test results demonstrated that the developed artificial neural network is able to quantitatively determine the four lubrication properties (water, TAN, soot, and sulfur content) with maximum prediction errors of 18.8%, 6.0%, 6.7%, and 5.4%, respectively, indicating a good match between the target and predicted values. With the developed network, the sensor array could potentially be used for online lubricant oil condition monitoring.
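A minimal sketch of the described 4-50-5-4 network follows, using scikit-learn's MLPRegressor in place of the authors' back-propagation implementation; the synthetic data, scaling, and solver settings are assumptions.

```python
# Hedged sketch of the paper's network: 4 sensor inputs, hidden layers of
# 50 and 5 neurons, 4 outputs (water, TAN, soot, sulfur). Data are random
# stand-ins, not the measured sensor responses.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(36, 4))        # 36 oil samples x 4 sensor responses
Y = rng.normal(size=(36, 4))        # 4 target properties per sample

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.1, random_state=0)

scaler = StandardScaler().fit(X_tr)
net = MLPRegressor(hidden_layer_sizes=(50, 5), max_iter=5000, random_state=0)
net.fit(scaler.transform(X_tr), Y_tr)

pred = net.predict(scaler.transform(X_te))
print("test MAE:", np.mean(np.abs(pred - Y_te)))
```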
Virtual occlusal definition for orthognathic surgery.
Liu, X J; Li, Q Q; Zhang, Z; Li, T T; Xie, Z; Zhang, Y
2016-03-01
Computer-assisted surgical simulation is being used increasingly in orthognathic surgery. However, occlusal definition is still undertaken using model surgery with subsequent digitization via surface scanning or cone beam computed tomography. A software tool has been developed and a workflow set up in order to achieve a virtual occlusal definition. The results of a validation study carried out on 60 models of normal occlusion are presented. Inter- and intra-user correlation tests were used to investigate the reproducibility of the manual point-setting procedure. The errors between the virtually set positions (test) and the digitized manually set positions (gold standard) were compared. The consistency of the virtually set positions produced by three individual users was investigated by a one-way analysis of variance test. Inter- and intra-observer correlation coefficients for manual point setting were all greater than 0.95. Overall, the median error between the test and the gold standard positions was 1.06 mm. Errors did not differ among teeth (F=0.371, P>0.05). The errors were not significantly different from 1 mm (P>0.05). There were no significant differences in the errors made by the three independent users (P>0.05). In conclusion, this workflow for virtual occlusal definition was found to be reliable and accurate. Copyright © 2015 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Device-independent tests of quantum channels
NASA Astrophysics Data System (ADS)
Dall'Arno, Michele; Brandsen, Sarah; Buscemi, Francesco
2017-03-01
We develop a device-independent framework for testing quantum channels. That is, we falsify a hypothesis about a quantum channel based only on an observed set of input-output correlations. Formally, the problem consists of characterizing the set of input-output correlations compatible with any arbitrary given quantum channel. For binary (i.e. two input symbols, two output symbols) correlations, we show that extremal correlations are always achieved by orthogonal encodings and measurements, irrespective of whether or not the channel preserves commutativity. We further provide a full, closed-form characterization of the sets of binary correlations in the case of: (i) any dihedrally covariant qubit channel (such as any Pauli and amplitude-damping channels) and (ii) any universally-covariant commutativity-preserving channel in an arbitrary dimension (such as any erasure, depolarizing, universal cloning and universal transposition channels).
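In standard notation (ours, not quoted from the paper), the correlations under test are:

```latex
% Input-output correlations of a channel \Phi probed with input states
% \rho_x and a POVM {M_y} (M_y >= 0, \sum_y M_y = 1):
\[
  p(y \mid x) \;=\; \mathrm{Tr}\!\left[ \Phi(\rho_x)\, M_y \right].
\]
% A hypothesis about \Phi is falsified device-independently when the
% observed p(y|x) lies outside the set achievable by any such encodings
% and measurements.
```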
Device-independent tests of quantum channels.
Dall'Arno, Michele; Brandsen, Sarah; Buscemi, Francesco
2017-03-01
We develop a device-independent framework for testing quantum channels. That is, we falsify a hypothesis about a quantum channel based only on an observed set of input-output correlations. Formally, the problem consists of characterizing the set of input-output correlations compatible with any arbitrary given quantum channel. For binary (i.e. two input symbols, two output symbols) correlations, we show that extremal correlations are always achieved by orthogonal encodings and measurements, irrespective of whether or not the channel preserves commutativity. We further provide a full, closed-form characterization of the sets of binary correlations in the case of: (i) any dihedrally covariant qubit channel (such as any Pauli and amplitude-damping channels) and (ii) any universally-covariant commutativity-preserving channel in an arbitrary dimension (such as any erasure, depolarizing, universal cloning and universal transposition channels).
Starlings uphold principles of economic rationality for delay and probability of reward.
Monteiro, Tiago; Vasconcelos, Marco; Kacelnik, Alex
2013-04-07
Rationality principles are the bedrock of normative theories of decision-making in biology and microeconomics, but whereas in microeconomics consistent choice underlies the notion of utility, in biology the assumption of consistent selective pressures justifies modelling decision mechanisms as if they were designed to maximize fitness. In either case, violations of consistency contradict expectations and attract theoretical interest. Reported violations of rationality in non-humans include intransitivity (i.e. circular preferences) and lack of independence of irrelevant alternatives (changes in relative preference between options when embedded in different choice sets), but the extent to which these observations truly represent breaches of rationality is debatable. We tested both principles with starlings (Sturnus vulgaris), training subjects either with five options differing in food delay (exp. 1) or with six options differing in reward probability (exp. 2), before letting them choose repeatedly one option out of several binary and trinary sets of options. The starlings conformed to economic rationality on both tests, showing strong stochastic transitivity and no violation of the independence principle. These results endorse the rational choice and optimality approaches used in behavioural ecology, and highlight the need for functional and mechanistic enquiry when apparent violations of such principles are observed.
Starlings uphold principles of economic rationality for delay and probability of reward
Monteiro, Tiago; Vasconcelos, Marco; Kacelnik, Alex
2013-01-01
Rationality principles are the bedrock of normative theories of decision-making in biology and microeconomics, but whereas in microeconomics consistent choice underlies the notion of utility, in biology the assumption of consistent selective pressures justifies modelling decision mechanisms as if they were designed to maximize fitness. In either case, violations of consistency contradict expectations and attract theoretical interest. Reported violations of rationality in non-humans include intransitivity (i.e. circular preferences) and lack of independence of irrelevant alternatives (changes in relative preference between options when embedded in different choice sets), but the extent to which these observations truly represent breaches of rationality is debatable. We tested both principles with starlings (Sturnus vulgaris), training subjects either with five options differing in food delay (exp. 1) or with six options differing in reward probability (exp. 2), before letting them choose repeatedly one option out of several binary and trinary sets of options. The starlings conformed to economic rationality on both tests, showing strong stochastic transitivity and no violation of the independence principle. These results endorse the rational choice and optimality approaches used in behavioural ecology, and highlight the need for functional and mechanistic enquiry when apparent violations of such principles are observed. PMID:23390098
NASA Astrophysics Data System (ADS)
Fisher, W. P., Jr.; Elbaum, B.; Coulter, A.
2010-07-01
Reliability coefficients indicate the proportion of total variance attributable to differences among measures separated along a quantitative continuum by a testing, survey, or assessment instrument. Reliability is usually considered to be influenced by both the internal consistency of a data set and the number of items, though textbooks and research papers rarely evaluate the extent to which these factors independently affect the data in question. Probabilistic formulations of the requirements for unidimensional measurement separate consistency from error by modelling individual response processes instead of group-level variation. The utility of this separation is illustrated via analyses of small sets of simulated data, and of subsets of data from a 78-item survey of over 2,500 parents of children with disabilities. Measurement reliability ultimately concerns the structural invariance specified in models requiring sufficient statistics, parameter separation, unidimensionality, and other qualities that historically have made quantification simple, practical, and convenient for end users. The paper concludes with suggestions for a research program aimed at focusing measurement research more on the calibration and wide dissemination of tools applicable to individuals, and less on the statistical study of inter-variable relations in large data sets.
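As a reference point, the variance-ratio definition invoked in the first sentence can be written as follows (classical test theory form, our notation):

```latex
% Reliability as the proportion of observed-score variance that is
% attributable to differences among the measures themselves:
\[
  \text{reliability}
  \;=\;
  \frac{\sigma^{2}_{\text{measure}}}
       {\sigma^{2}_{\text{measure}} + \sigma^{2}_{\text{error}}},
\]
% which probabilistic (e.g. Rasch-type) formulations reach by modelling
% individual response processes rather than group-level variation.
```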
Designing testing service at baristand industri Medan’s liquid waste laboratory
NASA Astrophysics Data System (ADS)
Kusumawaty, Dewi; Napitupulu, Humala L.; Sembiring, Meilita T.
2018-03-01
Baristand Industri Medan is a technical implementation unit under the Industrial Research and Development Agency of the Ministry of Industry. One of its most frequently used services is liquid waste testing. The company's service standard for testing is nine working days. In 2015, 89.66% of liquid waste testing jobs failed to meet this standard because many samples accumulated. The purpose of this research is to design an online service for scheduling incoming liquid waste samples. The method used is information system design, consisting of model design, output design, input design, database design and technology design. The resulting online liquid waste testing information system consists of three pages: one each for the customer, the sample recipient and the laboratory. Simulation with scheduled samples shows that the nine-working-day service standard can be met.
Type 2 Diabetes Screening Test by Means of a Pulse Oximeter.
Moreno, Enrique Monte; Lujan, Maria Jose Anyo; Rusinol, Montse Torrres; Fernandez, Paqui Juarez; Manrique, Pilar Nunez; Trivino, Cristina Aragon; Miquel, Magda Pedrosa; Rodriguez, Marife Alvarez; Burguillos, M Jose Gonzalez
2017-02-01
In this paper, we propose a method for screening for the presence of type 2 diabetes by means of the signal obtained from a pulse oximeter. The screening system consists of two parts: the first analyzes the signal obtained from the pulse oximeter, and the second is a machine-learning module. The front end extracts a set of features from the pulse oximeter signal, based on physiological considerations. This set of features is the input to a machine-learning algorithm that determines the class of the input sample, i.e., whether the subject has diabetes or not. The machine-learning algorithms were random forests and gradient boosting, with linear discriminant analysis as a benchmark. The system was tested on a database of [Formula: see text] subjects (two samples per subject) collected from five community health centers. The mean receiver operating characteristic area found was [Formula: see text]% (median value [Formula: see text]% and range [Formula: see text]%), with a specificity of [Formula: see text]% at a threshold that gave a sensitivity of [Formula: see text]%. We present a screening method for detecting diabetes that has a performance comparable to the glycated haemoglobin (HbA1c) test, does not require blood extraction, and yields results in less than 5 min.
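A hedged sketch of the machine-learning half of such a screen follows; the feature matrix stands in for the physiological front end, and the target sensitivity is an assumed example, not the paper's operating point.

```python
# Hedged sketch: pulse-oximeter features in, random forest out, evaluated
# by ROC area. The feature extraction itself is not reproduced here; the
# random data below are stand-in assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 12))              # pulse-wave features per subject
y = rng.integers(0, 2, size=400)            # 1 = type 2 diabetes, 0 = control

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
score = clf.predict_proba(X_te)[:, 1]
print("ROC area:", roc_auc_score(y_te, score))

# pick the threshold that attains a target sensitivity, report specificity
fpr, tpr, thr = roc_curve(y_te, score)
i = np.argmax(tpr >= 0.80)                  # sensitivity >= 80% (assumed)
print("sensitivity:", tpr[i], "specificity:", 1 - fpr[i])
```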
Sub-Scale Testing and Development of the J-2X Fuel Turbopump Inducer
NASA Technical Reports Server (NTRS)
Sargent, Scott R.; Becht, David G.
2011-01-01
In the early stages of the J-2X upper stage engine program, various inducer configurations proposed for use in the fuel turbopump (FTP) were tested in water. The primary objectives of this test effort were twofold. First, to obtain a more comprehensive data set than that which existed in the Pratt & Whitney Rocketdyne (PWR) historical archives from the original J-2S program, and second, to supplement that data set with information regarding the cavitation induced vibrations for both the historical J-2S configuration as well as those tested for the J-2X program. The J-2X FTP inducer, which actually consists of an inducer stage mechanically attached to a kicker stage, underwent 4 primary iterations utilizing sub-scaled test articles manufactured and tested in PWR's Engineering Development Laboratory (EDL). The kicker remained unchanged throughout the test series. The four inducer configurations tested retained many of the basic design features of the J-2S inducer, but also included variations on leading edge blade thickness and blade angle distribution, primarily aimed at improving suction performance at higher flow coefficients. From these data sets, the effects of the tested design variables on hydrodynamic performance and cavitation instabilities were discerned. A limited comparison of impact to the inducer efficiency was determined as well.
Development of a direct observation Measure of Environmental Qualities of Activity Settings.
King, Gillian; Rigby, Patty; Batorowicz, Beata; McMain-Klein, Margot; Petrenchik, Theresa; Thompson, Laura; Gibson, Michelle
2014-08-01
The aim of this study was to develop an observer-rated measure of aesthetic, physical, social, and opportunity-related qualities of leisure activity settings for young people (with or without disabilities). Eighty questionnaires were completed by sets of raters who independently rated 22 community/home activity settings. The scales of the 32-item Measure of Environmental Qualities of Activity Settings (MEQAS; Opportunities for Social Activities, Opportunities for Physical Activities, Pleasant Physical Environment, Opportunities for Choice, Opportunities for Personal Growth, and Opportunities to Interact with Adults) were determined using principal components analyses. Test-retest reliability was determined for eight activity settings, rated twice (4-6wk interval) by a trained rater. The factor structure accounted for 80% of the variance. The Kaiser-Meyer-Olkin Measure of Sampling Adequacy was 0.73. Cronbach's alphas for the scales ranged from 0.76 to 0.96, and interrater reliabilities (ICCs) ranged from 0.60 to 0.93. Test-retest reliabilities ranged from 0.70 to 0.90. Results suggest that the MEQAS has a sound factor structure and preliminary evidence of internal consistency, interrater, and test-retest reliability. The MEQAS is the first observer-completed measure of environmental qualities of activity settings. The MEQAS allows researchers to assess comprehensively qualities and affordances of activity settings, and can be used to design and assess environmental qualities of programs for young people. © 2014 Mac Keith Press.
Nosofsky, Robert M; Cox, Gregory E; Cao, Rui; Shiffrin, Richard M
2014-11-01
Experiments were conducted to test a modern exemplar-familiarity model on its ability to account for both short-term and long-term probe recognition within the same memory-search paradigm. Also, making connections to the literature on attention and visual search, the model was used to interpret differences in probe-recognition performance across diverse conditions that manipulated relations between targets and foils across trials. Subjects saw lists of from 1 to 16 items followed by a single item recognition probe. In a varied-mapping condition, targets and foils could switch roles across trials; in a consistent-mapping condition, targets and foils never switched roles; and in an all-new condition, on each trial a completely new set of items formed the memory set. In the varied-mapping and all-new conditions, mean correct response times (RTs) and error proportions were curvilinear increasing functions of memory set size, with the RT results closely resembling ones from hybrid visual-memory search experiments reported by Wolfe (2012). In the consistent-mapping condition, new-probe RTs were invariant with set size, whereas old-probe RTs increased slightly with increasing study-test lag. With appropriate choice of psychologically interpretable free parameters, the model accounted well for the complete set of results. The work provides support for the hypothesis that a common set of processes involving exemplar-based familiarity may govern long-term and short-term probe recognition across wide varieties of memory-search conditions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Evaluation of a Serum Lung Cancer Biomarker Panel.
Mazzone, Peter J; Wang, Xiao-Feng; Han, Xiaozhen; Choi, Humberto; Seeley, Meredith; Scherer, Richard; Doseeva, Victoria
2018-01-01
A panel of 3 serum proteins and 1 autoantibody has been developed to assist with the detection of lung cancer. We aimed to validate the accuracy of the biomarker panel in an independent test set and explore the impact of adding a fourth serum protein to the panel, as well as the impact of combining molecular and clinical variables. The training set of serum samples was purchased from commercially available biorepositories. The testing set was from a biorepository at the Cleveland Clinic. All lung cancer and control subjects were >50 years old and had smoked a minimum of 20 pack-years. A panel of biomarkers including CEA (carcinoembryonic antigen), CYFRA21-1 (cytokeratin-19 fragment 21-1), CA125 (carbohydrate antigen 125), HGF (hepatocyte growth factor), and NY-ESO-1 (New York esophageal cancer-1 antibody) was measured using immunoassay techniques. The multiple of the median method, multivariate logistic regression, and random forest modeling were used to analyze the results. The training set consisted of 604 patient samples (268 with lung cancer and 336 controls) and the testing set of 400 patient samples (155 with lung cancer and 245 controls). With a threshold established from the training set, the sensitivity and specificity of both the 4- and 5-biomarker panels on the testing set were 49% and 96%, respectively. Models built on the testing set using only clinical variables had an area under the receiver operating characteristic curve of 0.68, using the biomarker panel 0.81, and by combining clinical and biomarker variables 0.86. This study validates the accuracy of a panel of proteins and an autoantibody in a population relevant to lung cancer detection and suggests a benefit to combining clinical features with the biomarker results.
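A hedged sketch of the clinical-versus-biomarker model comparison follows; the column names and synthetic data are illustrative assumptions, not the study's records.

```python
# Hedged sketch: clinical-only vs. biomarker-only vs. combined logistic
# regression, scored by ROC area. All data below are synthetic stand-ins.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "age": rng.normal(65, 8, n),
    "pack_years": rng.normal(40, 15, n),
    "CEA": rng.lognormal(0, 1, n),
    "CYFRA21_1": rng.lognormal(0, 1, n),
    "CA125": rng.lognormal(0, 1, n),
    "HGF": rng.lognormal(0, 1, n),
    "NY_ESO_1": rng.lognormal(0, 1, n),
    "cancer": rng.integers(0, 2, n),
})

clinical = ["age", "pack_years"]
panel = ["CEA", "CYFRA21_1", "CA125", "HGF", "NY_ESO_1"]

for name, cols in [("clinical", clinical), ("panel", panel),
                   ("combined", clinical + panel)]:
    m = LogisticRegression(max_iter=1000).fit(df[cols], df["cancer"])
    auc = roc_auc_score(df["cancer"], m.predict_proba(df[cols])[:, 1])
    print(name, "AUC:", round(auc, 2))
```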
Evaluation of a Serum Lung Cancer Biomarker Panel
Mazzone, Peter J; Wang, Xiao-Feng; Han, Xiaozhen; Choi, Humberto; Seeley, Meredith; Scherer, Richard; Doseeva, Victoria
2018-01-01
Background: A panel of 3 serum proteins and 1 autoantibody has been developed to assist with the detection of lung cancer. We aimed to validate the accuracy of the biomarker panel in an independent test set and explore the impact of adding a fourth serum protein to the panel, as well as the impact of combining molecular and clinical variables. Methods: The training set of serum samples was purchased from commercially available biorepositories. The testing set was from a biorepository at the Cleveland Clinic. All lung cancer and control subjects were >50 years old and had smoked a minimum of 20 pack-years. A panel of biomarkers including CEA (carcinoembryonic antigen), CYFRA21-1 (cytokeratin-19 fragment 21-1), CA125 (carbohydrate antigen 125), HGF (hepatocyte growth factor), and NY-ESO-1 (New York esophageal cancer-1 antibody) was measured using immunoassay techniques. The multiple of the median method, multivariate logistic regression, and random forest modeling was used to analyze the results. Results: The training set consisted of 604 patient samples (268 with lung cancer and 336 controls) and the testing set of 400 patient samples (155 with lung cancer and 245 controls). With a threshold established from the training set, the sensitivity and specificity of both the 4- and 5-biomarker panels on the testing set was 49% and 96%, respectively. Models built on the testing set using only clinical variables had an area under the receiver operating characteristic curve of 0.68, using the biomarker panel 0.81 and by combining clinical and biomarker variables 0.86. Conclusions: This study validates the accuracy of a panel of proteins and an autoantibody in a population relevant to lung cancer detection and suggests a benefit to combining clinical features with the biomarker results. PMID:29371783
Moxnes, John F; Moen, Aina E Fossum; Leegaard, Truls Michael
2015-10-05
Objectives: To study the time development of methicillin-resistant Staphylococcus aureus (MRSA) and forecast future behaviour. The major question: is the number of MRSA isolates in Norway increasing, and will it continue to increase? Design: Time trend analysis using non-stationary γ-Poisson distributions. Setting: Two data sets were analysed. The first data set (data set I) consists of all MRSA isolates collected in Oslo County from 1997 to 2010; the study area includes the Norwegian capital of Oslo and nearby surrounding areas, covering approximately 11% of the Norwegian population. The second data set (data set II) consists of all MRSA isolates collected in Health Region East from 2002 to 2011. Health Region East consists of Oslo County and four neighbouring counties, and is the most populated area of Norway. Participants: Both data sets I and II consist of all persons in the area and time period described in the Setting, from whom MRSA was isolated. MRSA infections have been notifiable in Norway since 1995, and MRSA colonisation since 2004. In the time period studied, all bacterial samples in Norway were sent to a medical microbiological laboratory at the regional hospital for testing. In collaboration with the regional hospitals in five counties, we collected all MRSA findings in the south-eastern part of Norway over long time periods. Results: On average, a linear or exponential increase in MRSA numbers was observed in the data sets. A Poisson process with increasing intensity did not capture the dispersion of the time series, but a γ-Poisson process showed good agreement and captured the overdispersion. The numerical model showed internal consistency. Conclusions: We find that the number of MRSA isolates is increasing in the most populated area of Norway during the time period studied, and we forecast a continuous increase until the year 2017. Published by the BMJ Publishing Group Limited.
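A minimal sketch of a gamma-Poisson (negative binomial) trend fit of the kind described follows; the yearly counts are invented stand-ins, not the study's data.

```python
# Hedged sketch: a negative binomial (gamma-Poisson) GLM with a log-linear
# time trend, capturing the overdispersion a plain Poisson misses.
import numpy as np
import statsmodels.api as sm

years = np.arange(1997, 2011)
counts = np.array([12, 15, 14, 22, 25, 31, 30, 44, 52, 60, 58, 75, 90, 104])

X = sm.add_constant(years - years[0])        # intercept + linear trend
nb = sm.GLM(counts, X, family=sm.families.NegativeBinomial()).fit()
print(nb.summary())

# forecast to 2017 on the exponential (log-link) trend
future = sm.add_constant(np.arange(2011, 2018) - years[0])
print(nb.predict(future))
```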
Do wealth distributions follow power laws? Evidence from ‘rich lists’
NASA Astrophysics Data System (ADS)
Brzezinski, Michal
2014-07-01
We use data on the wealth of the richest persons taken from the 'rich lists' provided by business magazines like Forbes to verify if the upper tails of wealth distributions follow, as often claimed, a power-law behaviour. The data sets used cover the world's richest persons over 1996-2012, the richest Americans over 1988-2012, the richest Chinese over 2006-2012, and the richest Russians over 2004-2011. Using a recently introduced comprehensive empirical methodology for detecting power laws, which allows for testing the goodness of fit as well as for comparing the power-law model with rival distributions, we find that a power-law model is consistent with data only in 35% of the analysed data sets. Moreover, even if wealth data are consistent with the power-law model, they are usually also consistent with some rivals like the log-normal or stretched exponential distributions.
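A minimal sketch using the `powerlaw` package, which implements the kind of comprehensive methodology referred to above, follows; the wealth figures are synthetic stand-ins for a magazine rich list.

```python
# Hedged sketch of the Clauset-Shalizi-Newman style analysis: estimate the
# tail exponent and compare the power law against rival heavy-tailed models.
import numpy as np
import powerlaw

rng = np.random.default_rng(0)
wealth = rng.pareto(a=1.3, size=500) + 1.0   # synthetic upper-tail data

fit = powerlaw.Fit(wealth)                   # estimates x_min and alpha
print("alpha:", fit.power_law.alpha, "x_min:", fit.power_law.xmin)

# likelihood-ratio comparison with rival distributions
for rival in ("lognormal", "stretched_exponential"):
    R, p = fit.distribution_compare("power_law", rival)
    print(rival, "R:", round(R, 2), "p:", round(p, 3))
```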
Mehta, Urvakhsh M; Thirthalli, Jagadisha; Naveen Kumar, C; Mahadevaiah, Mahesh; Rao, Kiran; Subbakrishna, Doddaballapura K; Gangadhar, Bangalore N; Keshavan, Matcheri S
2011-09-01
Social cognition is a cognitive domain that is under substantial cultural influence. There are no culturally appropriate standardized tools in India to comprehensively test social cognition. This study describes the validation of tools for three social cognition constructs: theory of mind, social perception and attributional bias. Theory of mind tests included adaptations of (a) two first-order tasks [Sally-Anne and Smarties task], (b) two second-order tasks [Ice cream van and Missing cookies story], (c) two metaphor-irony tasks and (d) the faux pas recognition test. The Internal, Personal, and Situational Attributions Questionnaire (IPSAQ) and the Social Cue Recognition Test were adapted to assess attributional bias and social perception, respectively. These tests were first modified to suit the Indian cultural context without changing the constructs to be tested. A panel of experts then rated the tests on Likert scales as to (1) whether the modified tasks tested the same construct as the original and (2) whether they were culturally appropriate. The modified tests were then administered to groups of actively symptomatic and remitted schizophrenia patients as well as healthy comparison subjects. All tests of the Social Cognition Rating Tools in Indian Setting had good content validity and known-groups validity. In addition, the social cue recognition test in the Indian setting had good internal consistency and concurrent validity. Copyright © 2011 Elsevier B.V. All rights reserved.
Poljak, Mario; Oštrbenk, Anja
2013-01-01
Human papillomavirus (HPV) testing has become an essential part of current clinical practice in the management of cervical cancer and precancerous lesions. We concisely reviewed the most important validation studies of a next-generation real-time polymerase chain reaction (PCR)-based assay, the RealTime High Risk HPV test (RealTime) (Abbott Molecular, Des Plaines, IL, USA), for triage in referral population settings and for use in primary cervical cancer screening in women 30 years and older, published in peer-reviewed journals from 2009 to 2013. RealTime is designed to detect 14 high-risk HPV genotypes with concurrent distinction of HPV-16 and HPV-18 from the 12 other HPV genotypes. The test was launched on the European market in January 2009 and is currently used in many laboratories worldwide for routine detection of HPV. Eight validation studies of RealTime in referral settings showed its consistently high absolute clinical sensitivity for both CIN2+ (range 88.3-100%) and CIN3+ (range 93.0-100%), as well as comparative clinical sensitivity relative to the currently most widely used HPV test, the Qiagen/Digene Hybrid Capture 2 HPV DNA Test (HC2). Due to the significantly different composition of the referral populations, RealTime absolute clinical specificity for CIN2+ and CIN3+ varied greatly across studies, but was comparable relative to HC2. Four validation studies of RealTime performance in cervical cancer screening settings showed its consistently high absolute clinical sensitivity for both CIN2+ and CIN3+, as well as comparative clinical sensitivity and specificity relative to HC2 and GP5+/6+ PCR. RealTime has been extensively evaluated in the last 4 years and can be considered clinically validated for triage in referral population settings and for use in primary cervical cancer screening in women 30 years and older.
Development and evaluation of endurance test system for ventricular assist devices.
Sumikura, Hirohito; Homma, Akihiko; Ohnuma, Kentaro; Taenaka, Yoshiyuki; Takewa, Yoshiaki; Mukaibayashi, Hiroshi; Katano, Kazuo; Tatsumi, Eisuke
2013-06-01
We developed a novel endurance test system that can arbitrarily set various circulatory conditions and has durability and stability for long-term continuous evaluation of ventricular assist devices (VADs), and we evaluated its fundamental performance and prolonged durability and stability. The circulation circuit of the present endurance test system consisted of a pulsatile pump with a small closed chamber (SCC), a closed chamber, a reservoir and an electromagnetic proportional valve. Two duckbill valves were mounted in the inlet and outlet of the pulsatile pump. The features of the circulation circuit are as follows: (1) the components of the circulation circuit consist of optimized industrial devices, giving durability; (2) the pulsatile pump can change the heart rate and stroke length (SL), as well as its compliance using the SCC. Therefore, the endurance test system can quantitatively reproduce various circulatory conditions. The range of reproducible circulatory conditions in the endurance test circuit was examined in terms of fundamental performance. Additionally, continuous operation for 6 months was performed in order to evaluate the durability and stability. The circulation circuit was able to set up a wide range of pressure and total flow conditions using the SCC and adjusting the pulsatile pump SL. The long-term continuous operation test demonstrated that stable, continuous operation for 6 months was possible without leakage or industrial device failure. The newly developed endurance test system demonstrated a wide range of reproducible circulatory conditions, durability and stability, and is a promising approach for evaluating the basic characteristics of VADs.
Determination of HART I Blade Structural Properties by Laboratory Testing
NASA Technical Reports Server (NTRS)
Jung, Sung N.; Lau, Benton H.
2012-01-01
The structural properties of Higher Harmonic Aeroacoustic Rotor Test (HART I) blades were measured using the original set of blades tested in the German-Dutch Wind Tunnel (DNW) in 1994. The measurements include bending and torsion stiffness, geometric offsets, and mass and inertia properties of the blade. The measured properties were compared to the estimated values obtained initially from the blade manufacturer. The previously estimated blade properties showed consistently higher stiffness, up to 30 percent for flap bending in the blade inboard root section.
Local binary pattern texture-based classification of solid masses in ultrasound breast images
NASA Astrophysics Data System (ADS)
Matsumoto, Monica M. S.; Sehgal, Chandra M.; Udupa, Jayaram K.
2012-03-01
Breast cancer is one of the leading causes of cancer mortality among women. Ultrasound examination can be used to assess breast masses, complementarily to mammography. Ultrasound images reveal tissue information in their echoic patterns, so pattern recognition techniques can facilitate classification of lesions and thereby reduce the number of unnecessary biopsies. Our hypothesis was that image texture features on the boundary of a lesion and in its vicinity can be used to classify masses. We used intensity-independent and rotation-invariant texture features known as Local Binary Patterns (LBP). The classifier selected was K-nearest neighbors. Our breast ultrasound image database consisted of 100 patient images (50 benign and 50 malignant cases); whether a mass was benign or malignant was determined through biopsy and pathology assessment. The training set consisted of sixty images, randomly chosen from the database of 100 patients, and the testing set of the remaining forty images to be classified. A multi-fold cross validation of 100 iterations provided a robust evaluation. The highest performance was observed for the feature LBP with 24 symmetrically distributed neighbors over a circle of radius 3 (LBP(24,3)), with an accuracy rate of 81.0%. We also investigated an approach in which a score of malignancy was assigned to the images in the test set; this approach provided an ROC curve with an Az of 0.803. The analysis of texture features over the boundary of solid masses showed promise for malignancy classification in ultrasound breast images.
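A hedged sketch of the LBP(24,3)-plus-KNN pipeline follows; the image loading, the number of neighbors k, and the synthetic demo data are assumptions, not the authors' code.

```python
# Hedged sketch: rotation-invariant LBP histograms (P = 24 neighbors,
# radius R = 3, as in the best-performing LBP(24,3)) fed to a K-nearest-
# neighbor classifier. Demo images below are random stand-ins.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

P, R = 24, 3

def lbp_histogram(image):
    """Normalized histogram of uniform LBP codes for one grayscale image."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist

def classify(train_imgs, y_train, test_imgs, k=5):
    Xtr = np.array([lbp_histogram(im) for im in train_imgs])
    Xte = np.array([lbp_histogram(im) for im in test_imgs])
    return KNeighborsClassifier(n_neighbors=k).fit(Xtr, y_train).predict(Xte)

# tiny synthetic demo (the paper used 60 training and 40 testing images)
rng = np.random.default_rng(0)
imgs = (rng.random((10, 64, 64)) * 255).astype(np.uint8)
labels = np.array([0, 1] * 5)          # 0 = benign, 1 = malignant
print(classify(imgs[:6], labels[:6], imgs[6:]))
```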
Validation of the German version of the Ford Insomnia Response to Stress Test.
Dieck, Arne; Helbig, Susanne; Drake, Christopher L; Backhaus, Jutta
2018-06-01
The purpose of this study was to assess the psychometric properties of a German version of the Ford Insomnia Response to Stress Test with groups with and without sleep problems. Three studies were analysed. Data set 1 was based on an initial screening for a sleep training program (n = 393), data set 2 was based on a study to test the test-retest reliability of the Ford Insomnia Response to Stress Test (n = 284) and data set 3 was based on a study to examine the influence of competitive sport on sleep (n = 37). Data sets 1 and 2 were used to test internal consistency, factor structure, convergent validity, discriminant validity and test-retest reliability of the Ford Insomnia Response to Stress Test. Content validity was tested using data set 3. Cronbach's alpha of the Ford Insomnia Response to Stress Test was good (α = 0.80) and test-retest reliability was satisfactory (r = 0.72). Overall, the one-factor model showed the best fit. Furthermore, significant positive correlations between the Ford Insomnia Response to Stress Test and impaired sleep quality, depression and stress reactivity were in line with the expectations regarding the convergent validity. Subjects with sleep problems had significantly higher scores in the Ford Insomnia Response to Stress Test than subjects without sleep problems (P < 0.01). Competitive athletes with higher scores in the Ford Insomnia Response to Stress Test had significantly lower sleep quality (P = 0.01), demonstrating that vulnerability for stress-induced sleep disturbances accompanies poorer sleep quality in stressful episodes. The findings show that the German version of the Ford Insomnia Response to Stress Test is a reliable and valid questionnaire to assess the vulnerability to stress-induced sleep disturbances. © 2017 European Sleep Research Society.
Considerations for setting up an order entry system for nuclear medicine tests.
Hara, Narihiro; Onoguchi, Masahisa; Nishida, Toshihiko; Honda, Minoru; Houjou, Osamu; Yuhi, Masaru; Takayama, Teruhiko; Ueda, Jun
2007-12-01
Integrating the Healthcare Enterprise-Japan (IHE-J) was established in Japan in 2001 and has been working to standardize health information and make it accessible on the basis of the fundamental Integrating the Healthcare Enterprise (IHE) specifications. However, because specialized operations are used in nuclear medicine tests, online sharing of patient information and test order information from the order entry system as described by the scheduled workflow (SWF) is difficult, making information inconsistent throughout the facility and uniform management of patient information impossible. Therefore, we examined the basic design (subsystem design) for order entry systems, which is an important aspect of information management for nuclear medicine tests and needs to be consistent with the system used throughout the rest of the facility. Many items are required of the subsystem when setting up an order entry system for nuclear medicine tests. Among these, the most important in the order entry system are exclusion settings, which accommodate differences in the conditions for using radiopharmaceuticals and contrast agents, and appointment frame settings, which accommodate differences in imaging methods and test items. To establish uniform management of patient information for nuclear medicine tests throughout the facility, it is necessary to develop an order entry system with exclusion settings and appointment frames as standard features. Thereby, integration of health information with the Radiology Information System (RIS) or Picture Archiving and Communication System (PACS) based on Digital Imaging and Communications in Medicine (DICOM) standards and real-time health care assistance can be attained, achieving the IHE agenda of improving health care service and efficiently sharing information.
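Purely as an illustration (not from the paper), exclusion settings and appointment frames might be represented in an order entry subsystem as follows; all exam names, windows, and slots are invented examples.

```python
# Hypothetical illustration of "exclusion settings" (washout windows between
# incompatible orders) and "appointment frames" (per-exam bookable slots).
from datetime import date, timedelta

# exclusion rule: a prior order blocks a later one within a washout window
EXCLUSIONS = {
    # (earlier_exam, later_exam): minimum days between them (invented)
    ("I-131 thyroid uptake", "Tc-99m thyroid scintigraphy"): 7,
    ("contrast CT", "I-131 thyroid uptake"): 30,
}

def is_excluded(earlier_exam, earlier_date, later_exam, later_date):
    """Return True if the later order violates an exclusion window."""
    days = EXCLUSIONS.get((earlier_exam, later_exam))
    return days is not None and later_date - earlier_date < timedelta(days=days)

# appointment frames: per-exam slots reflecting the imaging workflow
FRAMES = {"bone scintigraphy": ["09:00", "09:30"], "renogram": ["13:00"]}

print(is_excluded("contrast CT", date(2007, 1, 5),
                  "I-131 thyroid uptake", date(2007, 1, 20)))  # True
```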
Test set up description and performances for HAWAII-2RG detector characterization at ESTEC
NASA Astrophysics Data System (ADS)
Crouzet, P.-E.; ter Haar, J.; de Wit, F.; Beaufort, T.; Butler, B.; Smit, H.; van der Luijt, C.; Martin, D.
2012-07-01
In the framework of the European Space Agency's Cosmic Vision program, the Euclid mission has the objective of mapping the geometry of the Dark Universe. Galaxies and clusters of galaxies will be observed in the visible and near-infrared wavelengths by an imaging and a spectroscopic channel. For the Near Infrared Spectrometer instrument (NISP), state-of-the-art HAWAII-2RG detectors will be used, associated with the SIDECAR ASIC readout electronics, which will perform the image frame acquisitions. To characterize and validate the performance of these detectors, a test bench has been designed, tested and validated. This publication describes the pre-tests performed to build the setup dedicated to dark current measurements and to tests requiring reasonably uniform light levels (such as conversion gain measurements). Successful cryogenic and vacuum tests on commercial LEDs and photodiodes are shown. An optimized stainless steel feedthrough with a V-groove to pot the flex cable connecting the SIDECAR ASIC to the room temperature board (JADE2) has been designed and tested. The test setup for quantum efficiency measurements, consisting of a lamp, a monochromator, an integrating sphere and a set of cold filters, is currently under construction and will ensure uniform illumination across the detector with variations lower than 2%. A dedicated spot projector for intra-pixel measurements has been designed and built to reach a spot diameter of 5 μm at 920 nm with 2 nm of bandwidth [1].
Segmentation and Recognition of Continuous Human Activity
2001-01-01
This paper presents a methodology for automatic segmentation and recognition of continuous human activity. We segment a continuous human activity into... commencement or termination. We use single action sequences for the training data set. The test sequences, on the other hand, are continuous sequences of human activity that consist of three or more actions in succession. The system has been tested on continuous activity sequences containing actions such as...
Test spaces and characterizations of quadratic spaces
NASA Astrophysics Data System (ADS)
Dvurečenskij, Anatolij
1996-10-01
We show that a test space consisting of the nonzero vectors of a quadratic space E and of the set of all maximal orthogonal systems in E is algebraic iff E is Dacey or, equivalently, iff E is orthomodular. In addition, we present other orthomodularity criteria for quadratic spaces, and, using the result of Solèr, we show that they can imply that E is a real, complex, or quaternionic Hilbert space.
Model of ASTM Flammability Test in Microgravity: Iron Rods
NASA Technical Reports Server (NTRS)
Steinberg, Theodore A; Stoltzfus, Joel M.; Fries, Joseph (Technical Monitor)
2000-01-01
There are extensive qualitative results from burning metallic materials in a NASA/ASTM flammability test system in normal gravity. However, these data were shown to be inconclusive for applications involving oxygen-enriched atmospheres under microgravity conditions by tests conducted in the 2.2-second Lewis Research Center (LeRC) Drop Tower. Data from neither type of test have been reduced to fundamental kinetic and dynamic system parameters. This paper reports the initial model analysis for burning iron rods under microgravity conditions, using data obtained at the LeRC tower and modeling the burning system after ignition. Under the conditions of the test, the burning mass regresses up the rod and is detached upon deceleration at the end of the drop. The model describes the burning system as a semi-batch, well-mixed reactor with product accumulation only. This model is consistent with the 2.0-second duration of the test. Transient temperature and pressure measurements are made on the chamber volume. The rod solid-liquid interface melting rate is obtained from film records. The model consists of a set of 17 non-linear, first-order differential equations, which are solved using MATLAB. This analysis confirms that a rate that is first order in oxygen concentration is consistent with the iron-oxygen kinetic reaction. An apparent activation energy of 246.8 kJ/mol is consistent with this model.
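As background to the kinetic claim above, a minimal sketch of a first-order oxygen-consumption model with an Arrhenius rate constant is shown below; only the 246.8 kJ/mol activation energy and the 2-second burn duration come from the abstract, while the pre-exponential factor, temperature and initial concentration are invented placeholders, and the authors' full 17-equation reactor model is not reproduced.

    # Sketch: first-order O2 consumption with an Arrhenius rate constant.
    # Ea and the 2 s duration are from the abstract; A, T and c0 are
    # placeholders, not the authors' values.
    import numpy as np
    from scipy.integrate import solve_ivp

    R = 8.314        # gas constant, J/(mol K)
    Ea = 246.8e3     # apparent activation energy, J/mol (from the abstract)
    A = 1.0e9        # pre-exponential factor, 1/s (placeholder)

    def rate(t, y, T):
        k = A * np.exp(-Ea / (R * T))   # Arrhenius rate constant
        return [-k * y[0]]              # first order in O2 concentration

    sol = solve_ivp(rate, (0.0, 2.0), [1.0], args=(2000.0,))
    print(sol.y[0, -1])                 # O2 fraction left after a 2 s burn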
Tabulation of data from the tip aerodynamics and acoustics test
NASA Technical Reports Server (NTRS)
Cross, Jeffrey L.; Tu, Wilson
1990-01-01
In a continuing effort to understand helicopter rotor tip aerodynamics and acoustics, researchers at Ames Research Center conducted a flight test using the NASA White Cobra and a set of highly instrumented blades. Tabular and graphic summaries of two data subsets from the Tip Aerodynamics and Acoustics Test are given. The data presented cover airloads, blade structural loads, and blade vibrations, with summary tables of the aircraft states for each test point. The tabular data consist of the first 15 harmonics only, whereas the plots contain the entire measured frequency content.
Waveform generation in the EETS
NASA Astrophysics Data System (ADS)
Wilshire, J. P.
1985-05-01
Design decisions and analysis for the waveform generation portion of an electrical equipment test set are discussed. This test set is unlike conventional ATE in that it is portable and designed to operate in forward area sites for the USMC. It is also unique in that it provides for functional testing of 32 electronic units from the AV-8B Harrier II aircraft. Specific requirements for the waveform generator are discussed, including a wide frequency range, high resolution and accuracy, and low total harmonic distortion. Several approaches to meeting these requirements are considered and a specific concept is presented in detail, consisting of a digitally produced waveform that feeds a deglitched analog conversion circuit. Rigorous mathematical analysis is presented to prove that this concept meets the requirements. Finally, design alternatives and enhancements are considered.
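For readers unfamiliar with the distortion figure mentioned above, the sketch below generates a sine from a digital lookup table (the essence of a digitally produced waveform) and estimates total harmonic distortion from its spectrum; all parameters are illustrative and none are taken from the EETS design.

    # Sketch: table-based digital sine generation and a rough THD estimate.
    # Sample rate, fundamental frequency and table size are placeholders.
    import numpy as np

    fs, f0, n = 48000, 1000, 4800
    table = np.sin(2 * np.pi * np.arange(256) / 256)      # 256-entry table
    phase = (np.arange(n) * f0 * 256 // fs) % 256
    x = table[phase]                                      # generated waveform

    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    k0 = round(f0 * n / fs)                               # fundamental bin
    harmonics = [spec[k0 * m] for m in range(2, 6)]       # 2nd-5th harmonics
    thd = np.sqrt(sum(h * h for h in harmonics)) / spec[k0]
    print(f"THD ~ {100 * thd:.3f} %")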
An algorithm for testing the efficient market hypothesis.
Boboc, Ioana-Andreea; Dinică, Mihai-Cristian
2013-01-01
The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).
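As a concrete illustration of one indicator family the study's genetic algorithm tunes, the sketch below computes an exponential moving average (EMA) crossover signal; the window lengths 12 and 26 are conventional placeholder values, not parameters reported by the study, and the price series is simulated.

    # Sketch: EMA crossover rule of the kind a GA would parameterize.
    import numpy as np

    def ema(prices, span):
        alpha = 2.0 / (span + 1.0)
        out = np.empty_like(prices)
        out[0] = prices[0]
        for i in range(1, len(prices)):
            out[i] = alpha * prices[i] + (1.0 - alpha) * out[i - 1]
        return out

    rng = np.random.default_rng(0)
    prices = 1.10 + np.cumsum(rng.normal(0.0, 0.001, 500))  # simulated rates
    fast, slow = ema(prices, 12), ema(prices, 26)
    signal = np.where(fast > slow, 1, -1)       # +1 long, -1 short
    pnl = signal[:-1] * np.diff(prices)         # rule applied to next step
    print(pnl.sum())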
Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.
Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G
2017-09-01
To investigate whether the use of ensemble learning algorithms improves physical activity recognition accuracy compared to single classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one-subject-out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. Among the conventional ensemble methods, random forest models provided consistently high activity recognition accuracy; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
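Since the best-performing custom ensemble above relies on weighted majority voting, a minimal sketch of that fusion step may be useful; the class labels and F1-based weights below are invented for illustration, not the study's fitted values.

    # Sketch: weighted majority vote across single-classifier predictions.
    import numpy as np

    def weighted_majority_vote(preds, weights, n_classes):
        # preds: (n_classifiers, n_samples) array of integer class labels
        votes = np.zeros((n_classes, preds.shape[1]))
        for row, w in zip(preds, weights):
            votes[row, np.arange(preds.shape[1])] += w
        return votes.argmax(axis=0)

    preds = np.array([[0, 1, 2, 1],     # decision tree (placeholder)
                      [0, 2, 2, 1],     # k nearest neighbor
                      [1, 1, 2, 0],     # support vector machine
                      [0, 1, 1, 1]])    # neural network
    weights = [0.71, 0.74, 0.80, 0.77]  # placeholder validation F1 scores
    print(weighted_majority_vote(preds, weights, n_classes=3))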
Parton Distributions based on a Maximally Consistent Dataset
NASA Astrophysics Data System (ADS)
Rojo, Juan
2016-04-01
The choice of data that enters a global QCD analysis can have a substantial impact on the resulting parton distributions and their predictions for collider observables. One of the main reasons for this has to do with the possible presence of inconsistencies, either internal within an experiment or external between different experiments. In order to assess the robustness of the global fit, different definitions of a conservative PDF set, that is, a PDF set based on a maximally consistent dataset, have been introduced. However, these approaches are typically affected by theory biases in the selection of the dataset. In this contribution, after a brief overview of recent NNPDF developments, we propose a new, fully objective, definition of a conservative PDF set, based on the Bayesian reweighting approach. Using the new NNPDF3.0 framework, we produce various conservative sets, which turn out to be mutually in agreement within the respective PDF uncertainties, as well as with the global fit. We explore some of their implications for LHC phenomenology, finding also good consistency with the global fit result. These results provide a non-trivial validation test of the new NNPDF3.0 fitting methodology, and indicate that possible inconsistencies in the fitted dataset do not affect substantially the global fit PDFs.
Airfoil Vibration Dampers program
NASA Technical Reports Server (NTRS)
Cook, Robert M.
1991-01-01
The Airfoil Vibration Damper program has consisted of an analysis phase and a testing phase. During the analysis phase, a state-of-the-art computer code was developed, which can be used to guide designers in the placement and sizing of friction dampers. The use of this computer code was demonstrated by performing representative analyses on turbine blades from the High Pressure Oxidizer Turbopump (HPOTP) and High Pressure Fuel Turbopump (HPFTP) of the Space Shuttle Main Engine (SSME). The testing phase of the program consisted of performing friction damping tests on two different cantilever beams. Data from these tests provided an empirical check on the accuracy of the computer code developed in the analysis phase. Results of the analysis and testing showed that the computer code can accurately predict the performance of friction dampers. In addition, a valuable set of friction damping data was generated, which can be used to aid in the design of friction dampers, as well as provide benchmark test cases for future code developers.
Olderbak, Sally; Wilhelm, Oliver; Olaru, Gabriel; Geiger, Mattis; Brenneman, Meghan W.; Roberts, Richard D.
2015-01-01
The Reading the Mind in the Eyes Test is a popular measure of individual differences in Theory of Mind that is often applied in the assessment of particular clinical populations (primarily, individuals on the autism spectrum). However, little is known about the test's psychometric properties, including factor structure, internal consistency, and convergent validity evidence. We present a psychometric analysis of the test followed by an evaluation of other empirically proposed and statistically identified structures. We identified, and cross-validated in a second sample, an adequate short-form solution that is homogeneous, with adequate internal consistency, and that is moderately related to Cognitive Empathy and Emotion Perception and strongly related to Vocabulary. We recommend the use of this short-form solution in normal adults as a more precise measure than the original version. Future revisions of the test should seek to reduce the test's reliance on one's vocabulary and evaluate the short-form structure in clinical populations. PMID:26500578
Nakagami, Katsuyuki; Yamauchi, Toyoaki; Noguchi, Hiroyuki; Maeda, Tohru; Nakagami, Tomoko
2014-06-01
This study aimed to develop a reliable and valid measure of functional health literacy in a Japanese clinical setting. Test development consisted of three phases: generation of an item pool, consultation with experts to assess content validity, and comparison with external criteria (the Japanese Health Knowledge Test) to assess criterion validity. A trial version of the test was administered to 535 Japanese outpatients. Internal consistency reliability, calculated by Cronbach's alpha, was 0.81, and concurrent validity was moderate. Receiver Operating Characteristics and Item Response Theory were used to classify patients as having adequate, marginal, or inadequate functional health literacy. Both inadequate and marginal functional health literacy were associated with older age, lower income, lower educational attainment, and poor health knowledge. The time required to complete the test was 10-15 min. This test should enable health workers to better identify patients with inadequate health literacy. © 2013 Wiley Publishing Asia Pty Ltd.
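For reference, the internal-consistency statistic quoted above (Cronbach's alpha = 0.81) can be computed from an item-response matrix as sketched below; the simulated data and item count are placeholders, not the study's responses.

    # Sketch: Cronbach's alpha from an (n_respondents, k_items) matrix.
    import numpy as np

    def cronbach_alpha(items):
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)    # variance of total score
        return k / (k - 1) * (1.0 - item_var / total_var)

    rng = np.random.default_rng(1)
    trait = rng.normal(size=(535, 1))                   # latent ability
    items = (trait + rng.normal(size=(535, 12)) > 0.0)  # 12 binary items
    print(round(cronbach_alpha(items.astype(float)), 2))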
Genetic Testing in Clinical Settings.
Franceschini, Nora; Frick, Amber; Kopp, Jeffrey B
2018-04-11
Genetic testing is used for screening, diagnosis, and prognosis of diseases consistent with a genetic cause and to guide drug therapy to improve drug efficacy and avoid adverse effects (pharmacogenomics). This In Practice review aims to inform about DNA-related genetic test availability, interpretation, and recommended clinical actions based on results using evidence from clinical guidelines, when available. We discuss challenges that limit the widespread use of genetic information in the clinical care setting, including a small number of actionable genetic variants with strong evidence of clinical validity and utility, and the need for improving the health literacy of health care providers and the public, including for direct-to-consumer tests. Ethical, legal, and social issues and incidental findings also need to be addressed. Because our understanding of genetic factors associated with disease and drug response is rapidly increasing and new genetic tests are being developed that could be adopted by clinicians in the short term, we also provide extensive resources for information and education on genetic testing. Copyright © 2018 National Kidney Foundation, Inc. All rights reserved.
Wang, Shupeng; Zhang, Zhihui; Ren, Luquan; Zhao, Hongwei; Liang, Yunhong; Zhu, Bing
2014-06-01
In this work, a miniaturized device based on a bionic piezoelectric actuator was developed to investigate the static tensile and dynamic fatigue properties of bulk materials. The device mainly consists of a bionic stepping piezoelectric actuator based on wedge block clamping, a pair of grippers, and a precise signal test system. Tensile and fatigue examinations share the same driving system and signal test system. In situ tensile and fatigue examinations under a scanning electron microscope or metallographic microscope can be carried out owing to the miniaturized dimensions of the device. The structure and working principle of the device are discussed, and the effects of output differences between the two piezoelectric stacks on the device are theoretically analyzed. Tensile and fatigue examinations on ordinary copper were carried out using this device, and its feasibility was verified through comparison tests with a commercial tensile examination instrument.
Bench Test Evaluation of Adaptive Servoventilation Devices for Sleep Apnea Treatment
Zhu, Kaixian; Kharboutly, Haissam; Ma, Jianting; Bouzit, Mourad; Escourrou, Pierre
2013-01-01
Rationale: Adaptive servoventilation devices are marketed to overcome sleep disordered breathing with apneas and hypopneas of both central and obstructive mechanisms often experienced by patients with chronic heart failure. The clinical efficacy of these devices is still questioned. Study Objectives: This study challenged the detection and treatment capabilities of the three commercially available adaptive servoventilation devices in response to sleep disordered breathing events reproduced on an innovative bench test. Methods: The bench test consisted of a computer-controlled piston and a Starling resistor. The three devices were subjected to a flow sequence composed of central and obstructive apneas and hypopneas including Cheyne-Stokes respiration derived from a patient. The responses of the devices were separately evaluated with the maximum and the clinical settings (titrated expiratory positive airway pressure), and the detected events were compared to the bench-scored values. Results: The three devices responded similarly to central events, by increasing pressure support to raise airflow. All central apneas were eliminated, whereas hypopneas remained. The three devices responded differently to the obstructive events with the maximum settings. These obstructive events could be normalized with clinical settings. The residual events of all the devices were scored lower than bench test values with the maximum settings, but were in agreement with the clinical settings. However, their mechanisms were misclassified. Conclusion: The tested devices reacted as expected to the disordered breathing events, but not sufficiently to normalize the breathing flow. The device-scored results should be used with caution to judge efficacy, as their validity depends upon the initial settings. Citation: Zhu K; Kharboutly H; Ma J; Bouzit M; Escourrou P. Bench test evaluation of adaptive servoventilation devices for sleep apnea treatment. J Clin Sleep Med 2013;9(9):861-871. PMID:23997698
Resilience at University: The Development and Testing of a New Measure
ERIC Educational Resources Information Center
Turner, Michelle; Holdsworth, Sarah; Scott-Young, Christina M.
2017-01-01
While measures of resilience have been applied in university settings, progress has been hindered by the lack of a consistent measure of resilience. Additionally, results from these measures cannot be easily translated into practical curriculum-based initiatives which support resilience development. Resilience is linked to student mental health…
The environmental psychology of shopping: assessing the value of trees
Kathleen L. Wolf
2007-01-01
A multi-study research program has investigated how consumers respond to trees in various business settings in cities and towns. Some studies focused on central business districts, others tested perceptions along freeways and arterials. Results are remarkably consistent. Trees not only positively affect judgments of visual quality but,...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witte, Jonathon; Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720; Neaton, Jeffrey B.
With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen's pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
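For reference, the counterpoise correction mentioned above takes, in its standard Boys and Bernardi form, the dimer energy minus each monomer energy recomputed in the full dimer basis (ghost functions on the partner's atoms):

    \[
      E^{\mathrm{CP}}_{\mathrm{int}} = E^{AB}_{AB} - E^{AB}_{A} - E^{AB}_{B},
    \]

where subscripts denote the system and superscripts the basis in which its energy is evaluated; because both monomers borrow the same extra basis functions as the dimer, the basis set superposition error largely cancels.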
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purdy, R.
A hierarchical model consisting of quantitative structure-activity relationships based mainly on chemical reactivity was developed to predict the carcinogenicity of organic chemicals to rodents. The model is comprised of quantitative structure-activity relationships, QSARs, based on hypothesized mechanisms of action, metabolism, and partitioning. Predictors included octanol/water partition coefficient, molecular size, atomic partial charge, bond angle strain, atomic acceptor delocalizibility, atomic radical superdelocalizibility, the lowest unoccupied molecular orbital (LUMO) energy of the hypothesized intermediate nitrenium ion of primary aromatic amines, difference in charge of ionized and unionized carbon-chlorine bonds, substituent size and pattern on polynuclear aromatic hydrocarbons, the distance between lone electron pairs over a rigid structure, and the presence of functionalities such as nitroso and hydrazine. The model correctly classified 96% of the carcinogens in the training set of 306 chemicals, and 90% of the carcinogens in the test set of 301 chemicals. The test set by chance contained 84% of the positive thio-containing chemicals. A QSAR for these chemicals was developed. This post-test-set modified model correctly predicted 94% of the carcinogens in the test set. This model was used to predict the carcinogenicity of the 25 organic chemicals the U.S. National Toxicology Program was testing at the writing of this article. 12 refs., 3 tabs.
A minimal standardization setting for language mapping tests: an Italian example.
Rofes, Adrià; de Aguiar, Vânia; Miceli, Gabriele
2015-07-01
During awake surgery, picture-naming tests are administered to identify brain structures related to language function (language mapping) and to avoid iatrogenic damage. Before and after surgery, naming tests and other neuropsychological procedures aim at charting naming abilities and at detecting which items the subject can respond to correctly. To achieve this goal, sufficiently large samples of normed and standardized stimuli must be available for preoperative and postoperative testing, and to prepare intraoperative tasks, the latter only including items named flawlessly preoperatively. We discuss the design, norming and presentation of stimuli, and describe the minimal standardization setting used to develop two sets of Italian stimuli, one for object naming and one for verb naming. The setting includes a naming study (to obtain picture-name agreement ratings), two on-line questionnaires (to acquire age-of-acquisition and imageability ratings for all test items), and the norming of other relevant language variables. The two sets of stimuli have >80% picture-name agreement and high levels of internal consistency and reliability for imageability and age-of-acquisition ratings. They are normed for psycholinguistic variables known to affect lexical access and retrieval, and are validated in a clinical population. This framework can be used to increase the probability of reliably detecting language impairments before and after surgery, to prepare intraoperative tests based on sufficient knowledge of pre-surgical language abilities in each patient, and to decrease the probability of false positives during surgery. Examples of data usage are provided. Normative data can be found in the supplementary materials.
Lippert, Christoph; Xiang, Jing; Horta, Danilo; Widmer, Christian; Kadie, Carl; Heckerman, David; Listgarten, Jennifer
2014-11-15
Set-based variance component tests have been identified as a way to increase power in association studies by aggregating weak individual effects. However, the choice of test statistic has been largely ignored, even though it may play an important role in obtaining optimal power. We compared a standard statistical test (a score test) with a recently developed likelihood ratio (LR) test. Further, when correction for hidden structure is needed, or gene-gene interactions are sought, state-of-the-art algorithms for both the score and LR tests can be computationally impractical. Thus we develop new computationally efficient methods. After reviewing theoretical differences in performance between the score and LR tests, we find empirically on real data that the LR test generally has more power. In particular, on 15 of 17 real datasets, the LR test yielded at least as many associations as the score test (up to 23 more associations), whereas the score test yielded at most one more association than the LR test in the two remaining datasets. On synthetic data, we find that the LR test yielded up to 12% more associations, consistent with our results on real data, but we also observe a regime of extremely small signal where the score test yielded up to 25% more associations than the LR test, consistent with theory. Finally, our computational speedups now enable (i) efficient LR testing when the background kernel is full rank, and (ii) efficient score testing when the background kernel changes with each test, as for gene-gene interaction tests. The latter yielded a factor of 2000 speedup on a cohort of size 13,500. Software available at http://research.microsoft.com/en-us/um/redmond/projects/MSCompBio/Fastlmm/. heckerma@microsoft.com Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
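As a generic illustration of the LR statistic discussed above (not the FaST-LMM implementation), the sketch below converts two fitted log-likelihoods into a p-value; for a single variance component tested at the boundary of its parameter space, the null distribution is commonly taken as the mixture 0.5*chi2(0) + 0.5*chi2(1). The log-likelihood values are placeholders.

    # Sketch: LR test for one variance component on the boundary.
    from scipy.stats import chi2

    def lr_pvalue(ll_alt, ll_null):
        stat = 2.0 * (ll_alt - ll_null)       # LR statistic
        if stat <= 0.0:
            return 1.0                        # chi2(0) point mass at zero
        return 0.5 * chi2.sf(stat, df=1)      # mixture-null tail probability

    print(lr_pvalue(ll_alt=-1230.4, ll_null=-1234.9))  # placeholder fits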
Ivy, Reid A; Farber, Jeffrey M; Pagotto, Franco; Wiedmann, Martin
2013-01-01
Foodborne pathogen isolate collections are important for the development of detection methods, for validation of intervention strategies, and to develop an understanding of pathogenesis and virulence. We have assembled a publicly available Cronobacter (formerly Enterobacter sakazakii) isolate set that consists of (i) 25 Cronobacter sakazakii isolates, (ii) two Cronobacter malonaticus isolates, (iii) one Cronobacter muytjensii isolate, which displays some atypical phenotypic characteristics, biochemical profiles, and colony color on selected differential media, and (iv) two nonclinical Enterobacter asburiae isolates, which show some phenotypic characteristics similar to those of Cronobacter spp. The set consists of human (n = 10), food (n = 11), and environmental (n = 9) isolates. Analysis of partial 16S rDNA sequence and seven-gene multilocus sequence typing data allowed for reliable identification of these isolates to species and identification of 14 isolates as sequence type 4, which had previously been shown to be the most common C. sakazakii sequence type associated with neonatal meningitis. Phenotypic characterization was carried out with API 20E and API 32E test strips and streaking on two selective chromogenic agars; isolates were also assessed for sorbitol fermentation and growth at 45°C. Although these strategies typically produced the same classification as sequence-based strategies, based on a panel of four biochemical tests, one C. sakazakii isolate yielded inconclusive data and one was classified as C. malonaticus. EcoRI automated ribotyping and pulsed-field gel electrophoresis (PFGE) with XbaI separated the set into 23 unique ribotypes and 30 unique PFGE types, respectively, indicating subtype diversity within the set. Subtype and source data for the collection are publicly available in the PathogenTracker database (www.pathogentracker.net), which allows for continuous updating of information on the set, including links to publications that include information on isolates from this collection.
American Alcohol Photo Stimuli (AAPS): A standardized set of alcohol and matched non-alcohol images.
Stauffer, Christopher S; Dobberteen, Lily; Woolley, Joshua D
2017-11-01
Photographic stimuli are commonly used to assess cue reactivity in the research and treatment of alcohol use disorder. The stimuli used are often non-standardized, not properly validated, and poorly controlled. There are no previously published, validated, American-relevant sets of alcohol images created in a standardized fashion. We aimed to: 1) make available a standardized, matched set of photographic alcohol and non-alcohol beverage stimuli, 2) establish face validity, the extent to which the stimuli are subjectively viewed as what they are purported to be, and 3) establish construct validity, the degree to which a test measures what it claims to be measuring. We produced a standardized set of 36 images consisting of American alcohol and non-alcohol beverages matched for basic color, form, and complexity. A total of 178 participants (95 male, 82 female, 1 genderqueer) rated each image for appetitiveness. An arrow-probe task, in which matched pairs were categorized after being presented for 200 ms, assessed face validity. Criteria for construct validity were met if variation in AUDIT scores was associated with variation in performance on tasks during alcohol image presentation. Overall, images were categorized with >90% accuracy. Participants' AUDIT scores correlated significantly with alcohol "want" and "like" ratings [r(176) = 0.27, p < 0.001; r(176) = 0.36, p < 0.001] and arrow-probe latency [r(176) = -0.22, p = 0.004], but not with non-alcohol outcomes. Furthermore, appetitive ratings and arrow-probe latency for alcohol, but not non-alcohol, differed significantly for heavy versus light drinkers. Our image set provides valid and reliable alcohol stimuli for both explicit and implicit tests of cue reactivity. The use of standardized, validated, reliable image sets may improve consistency across research and treatment paradigms.
Holman, N; Lewis-Barned, N; Bell, R; Stephens, H; Modder, J; Gardosi, J; Dornhorst, A; Hillson, R; Young, B; Murphy, H R
2011-07-01
To develop and evaluate a standardized data set for measuring pregnancy outcomes in women with Type 1 and Type 2 diabetes, and to compare recent outcomes with those of the 2002-2003 Confidential Enquiry into Maternal and Child Health. Existing regional, national and international data sets were compared for content, consistency and validity to develop a standardized data set for diabetes in pregnancy of 46 key clinical items. The data set was tested retrospectively using data from 2007-2008 pregnancies included in three regional audits (Northern, North West and East Anglia). Obstetric and neonatal outcomes of pregnancies resulting in a stillbirth or live birth were compared with those from the same regions during 2002-2003. Details of 1381 pregnancies, 812 (58.9%) in women with Type 1 diabetes and 556 (40.3%) in women with Type 2 diabetes, were available to test the proposed standardized data set. Of the 46 data items proposed, only 16 (34.8%), predominantly the delivery and neonatal items, achieved ≥ 85% completeness. Ethnic group data were available for 746 (54.0%) pregnancies and BMI for 627 (46.5%) pregnancies. Glycaemic control data were the most complete, available for 1217 pregnancies (88.1%) during the first trimester. Only 239 women (19.9%) had adequate pregnancy preparation, defined as pre-conception folic acid and first-trimester HbA(1c) ≤ 7% (≤ 53 mmol/mol). Serious adverse outcome rates (major malformation and perinatal mortality) were 55/1000 and had not improved since 2002-2003. A standardized data set for diabetes in pregnancy may improve consistency of data collection and allow for more meaningful evaluation of pregnancy outcomes in women with pregestational diabetes. © 2011 The Authors. Diabetic Medicine © 2011 Diabetes UK.
The experimental behavior of spinning pretwisted laminated composite plates
NASA Technical Reports Server (NTRS)
Kosmatka, John B.; Lapid, Alex J.
1993-01-01
The purpose of the research is to gain an understanding of the material and geometric couplings present in advanced composite turbo-propellers. Twelve pre-twisted laminated composite plates were tested. Three different ply lay-ups (two symmetric and one asymmetric) and four different geometries (flat, and 30° of pre-twist about the mid-chord, quarter-chord, and leading edge) distinguish the plates from one another. Four rotating and non-rotating tests were employed to isolate the material and geometric couplings of an advanced turbo-propeller. The first series of tests consisted of non-rotating static displacement, strain, and vibration measurements; these tests examine the effects of ply lay-up and geometry. The second series of tests consisted of rotating displacement, strain, and vibration measurements with various pitch and sweep settings. These tests utilized the Dynamic Spin Rig Facility at the NASA Lewis Research Center, which allows the plates to be spin tested in a near-vacuum environment. The tests examine how the material and plate geometry interact with the pitch and sweep geometry of an advanced turbo-propeller.
The CO₂ GAP Project--CO₂ GAP as a prognostic tool in emergency departments.
Shetty, Amith L; Lai, Kevin H; Byth, Karen
2010-12-01
To determine whether the CO₂ GAP [(a-ET)PCO₂] value differs consistently in patients presenting to the ED with shortness of breath who require ventilatory support; to determine a cut-off value of the CO₂ GAP that is consistently associated with the measured outcome; and to compare its performance against other derived variables. This prospective observational study was conducted in the ED on a convenience sample of 412 of 759 patients who underwent concurrent arterial blood gas and ETCO₂ (end-tidal CO₂) measurement. They were randomized to a training set of 312 patients and a validation set of 100 patients. The primary outcome of interest was the need for ventilatory support, and the secondary outcomes were admission to a high dependency unit or death during the stay in the ED. The training set was used to select cut-points for the possible predictors, that is, CO₂ GAP, CO₂ gradient, physiologic dead space and A-a gradient. The sensitivity, specificity and predictive values of these predictors were then validated in the set of 100 patients. Analysis of the receiver operating characteristic curves revealed that the CO₂ GAP performed significantly better than the arterial-alveolar gradient in patients requiring ventilator support (area under the curve 0.950 vs 0.726). A CO₂ GAP ≥10 was associated with assisted ventilation outcomes when applied to the validation set (100% sensitivity, 70% specificity). The CO₂ GAP [(a-ET)PCO₂] differs significantly in patients requiring assisted ventilation when presenting with shortness of breath to EDs, and further research addressing the prognostic value of the CO₂ GAP in this specific respect is required. © 2010 The Authors. EMA © 2010 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
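The derived variable at issue is simple enough to state directly: the CO₂ GAP is arterial PCO₂ minus end-tidal CO₂, and the cut-off validated above is a gap of at least 10 mmHg. A trivial sketch with placeholder values:

    # Sketch: CO2 GAP = (a-ET)PCO2, flagged against the >= 10 mmHg cut-off.
    def co2_gap(paco2, etco2, cutoff=10.0):
        gap = paco2 - etco2
        return gap, gap >= cutoff     # True suggests need for ventilatory support

    print(co2_gap(paco2=58.0, etco2=41.0))   # (17.0, True), placeholder values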
Pailian, Hrag; Halberda, Justin
2015-04-01
We investigated the psychometric properties of the one-shot change detection task for estimating visual working memory (VWM) storage capacity, and also introduced and tested an alternative flicker change detection task for estimating these limits. In three experiments, we found that the one-shot whole-display task returns estimates of VWM storage capacity (K) that are unreliable across set sizes, suggesting that the whole-display task is measuring different things at different set sizes. In two additional experiments, we found that the one-shot single-probe variant shows improvements in the reliability and consistency of K estimates. In another additional experiment, we found that a one-shot whole-display-with-click task (requiring target localization) also showed improvements in reliability and consistency. The latter results suggest that the one-shot task can return reliable and consistent estimates of VWM storage capacity (K), and they highlight the possibility that the requirement to localize the changed target is what engenders this enhancement. Through a final series of four experiments, we introduced and tested an alternative flicker change detection method that also requires the observer to localize the changing target and that generates, from response times, an estimate of VWM storage capacity (K). We found that estimates of K from the flicker task correlated with estimates from the traditional one-shot task and also had high reliability and consistency. We highlight the flicker method's ability to estimate executive functions as well as VWM storage capacity, and discuss the potential for measuring multiple abilities with the one-shot and flicker tasks.
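For context, the capacity estimates discussed above are conventionally computed from hit and false-alarm rates; Pashler's formula is the usual choice for whole-display designs and Cowan's for single-probe designs, although the paper may use variants. The rates below are placeholders.

    # Sketch: conventional VWM capacity estimators from change detection
    # with set size N, hit rate H and false-alarm rate F.
    def pashler_k(h, f, n):            # whole-display convention
        return n * (h - f) / (1.0 - f)

    def cowan_k(h, f, n):              # single-probe convention
        return n * (h - f)

    print(pashler_k(0.85, 0.15, 4), cowan_k(0.85, 0.15, 4))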
Swab Protocol for Rapid Laboratory Diagnosis of Cutaneous Anthrax
Marston, Chung K.; Bhullar, Vinod; Baker, Daniel; Rahman, Mahmudur; Hossain, M. Jahangir; Chakraborty, Apurba; Khan, Salah Uddin; Hoffmaster, Alex R.
2012-01-01
The clinical laboratory diagnosis of cutaneous anthrax is generally established by conventional microbiological methods, such as culture and directly staining smears of clinical specimens. However, these methods rely on recovery of viable Bacillus anthracis cells from swabs of cutaneous lesions and often yield negative results. This study developed a rapid protocol for detection of B. anthracis on clinical swabs. Three types of swabs, flocked-nylon, rayon, and polyester, were evaluated by 3 extraction methods: the swab extraction tube system (SETS), sonication, and vortexing. Swabs were spiked with virulent B. anthracis cells, and the methods were compared for their efficiency over time by culture and real-time PCR. Viability testing indicated that the SETS yielded greater recovery of B. anthracis from 1-day-old swabs; however, reduced viability was consistent for the 3 extraction methods after 7 days, and nonviability was consistent by 28 days. Real-time PCR analysis showed that the PCR amplification was not impacted by time for any swab extraction method and that the SETS method provided the lowest limit of detection. When evaluated using lesion swabs from cutaneous anthrax outbreaks, the SETS yielded culture-negative, PCR-positive results. This study demonstrated that swab extraction methods differ in their efficiency of recovery of viable B. anthracis cells. Furthermore, the results indicated that culture is not reliable for isolation of B. anthracis from swabs at ≥7 days. Thus, we recommend the use of the SETS method with subsequent testing by culture and real-time PCR for diagnosis of cutaneous anthrax from clinical swabs of cutaneous lesions. PMID:23035192
What the Milky Way's dwarfs tell us about the Galactic Center extended gamma-ray excess
NASA Astrophysics Data System (ADS)
Keeley, Ryan E.; Abazajian, Kevork N.; Kwa, Anna; Rodd, Nicholas L.; Safdi, Benjamin R.
2018-05-01
The Milky Way's Galactic Center harbors a gamma-ray excess that is a candidate signal of annihilating dark matter. Dwarf galaxies remain predominantly dark in their expected commensurate emission. In this work we quantify the degree of consistency between these two observations through a joint likelihood analysis. In doing so we incorporate Milky Way dark matter halo profile uncertainties, as well as an accounting of diffuse gamma-ray emission uncertainties in dark matter annihilation models for the Galactic Center extended gamma-ray excess (GCE) detected by the Fermi Gamma-Ray Space Telescope. The preferred range of annihilation rates and masses expands when including these unknowns. Even so, using two recent determinations of the Milky Way halo's local density leaves the GCE preferred region of single-channel dark matter annihilation models to be in strong tension with annihilation searches in combined dwarf galaxy analyses. A third, higher Milky Way density determination, alleviates this tension. Our joint likelihood analysis allows us to quantify this inconsistency. We provide a set of tools for testing dark matter annihilation models' consistency within this combined data set. As an example, we test a representative inverse Compton sourced self-interacting dark matter model, which is consistent with both the GCE and dwarfs.
Results of the long range position-determining system tests. [Field Army system
NASA Technical Reports Server (NTRS)
Rhode, F. W.
1973-01-01
The long range position-determining system (LRPDS) has been developed by the Corps of Engineers to provide the Field Army with a rapid and accurate positioning capability. The LRPDS consists of an airborne reference position set (RPS), up to 30 ground-based positioning sets (PS), and a position computing central (PCC). The PCC calculates the position of each PS based on the range change information provided by each set. The positions can then be relayed back to the PS via the RPS. Each PS unit contains a double-oven precise crystal oscillator. The RPS contains a Hewlett-Packard cesium beam standard. Frequency drifts and offsets of the crystal oscillators are taken into account in the data reduction process. A field test program was initiated in November 1972. A total of 54 flights were made, which included six flights for equipment testing and 48 flights utilizing the field test data reduction program. The four general types of PS layouts used were: short range, medium range, long range, and tactical configuration. The overall RMS radial error of the unknown positions varied from about 2.3 meters for the short range to about 15 meters for the long range. The corresponding elevation RMS errors varied from about 12 meters to 37 meters.
In Vitro Microbiological Analysis of Bacterial Seal in Hybrid Zirconia Abutment Tapered Connection.
Harlos, Maurício Marcelo; Bezerra da Silva, Thiago; Peruzzo, Daiane C; Napimoga, Marcelo H; Joly, Julio Cesar; Martinez, Elizabeth F
2017-04-01
The aim of this study was to evaluate the bacterial seal at the implant-hybrid zirconia abutment interface and Morse taper-type connections through in vitro microbiological analysis. Sixteen implants and their respective abutments were divided into 3 groups: test (10 sets), positive control (3 sets), and negative control (3 sets). In the test group, 10 implants were contaminated with Escherichia coli using a sterile inoculating loop to the inner portion of the implants, followed by torque application to the abutment (30 N·cm). The positive controls were also contaminated, but no torque was applied to the abutment screw. The negative control consisted of uncontaminated sets. All specimens were immersed in test tubes containing 5 mL brain heart infusion (BHI) broth, maintained in a microbiological incubator for 14 days at 37°C under aerobic conditions, and monitored every 24 hours for evidence of bacterial growth. During the 14 days of incubation, no significant increase in the number of cloudy culture media was observed in the test group (P = 0.448). No significant difference in broth turbidity ratio was observed (P > 0.05). Hybrid zirconia abutments can create an effective seal at the tapered abutment-implant interface with a 30-N·cm installation torque.
Potential metabolite markers of schizophrenia.
Yang, J; Chen, T; Sun, L; Zhao, Z; Qi, X; Zhou, K; Cao, Y; Wang, X; Qiu, Y; Su, M; Zhao, A; Wang, P; Yang, P; Wu, J; Feng, G; He, L; Jia, W; Wan, C
2013-01-01
Schizophrenia is a severe mental disorder that affects 0.5-1% of the population worldwide. Current diagnostic methods are based on psychiatric interviews, which are subjective in nature. The lack of disease biomarkers to support objective laboratory tests has been a long-standing bottleneck in the clinical diagnosis and evaluation of schizophrenia. Here we report a global metabolic profiling study, involving 112 schizophrenic patients and 110 healthy subjects divided into a training set and a test set, designed to identify metabolite markers. A panel of serum markers consisting of glycerate, eicosenoic acid, β-hydroxybutyrate, pyruvate and cystine was identified as an effective diagnostic tool, achieving an area under the receiver operating characteristic curve (AUC) of 0.945 in the training samples (62 patients and 62 controls) and 0.895 in the test samples (50 patients and 48 controls). Furthermore, a composite panel formed by adding urine β-hydroxybutyrate to the serum panel achieved even better accuracy, reaching an AUC of 1 in both the training set and the test set. Multiple fatty acids and ketone bodies were found significantly (P<0.01) elevated in both the serum and urine of patients, suggesting an upregulated fatty acid catabolism, presumably resulting from an insufficiency of glucose supply in the brains of schizophrenia patients.
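To make the quoted AUC figures concrete, the sketch below fits a logistic model on a training split and scores ROC AUC on a held-out test split; the marker names follow the serum panel above, but the data are simulated and the resulting AUC is not the study's.

    # Sketch: panel-based diagnostic AUC on simulated marker data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    # columns: glycerate, eicosenoic acid, beta-hydroxybutyrate,
    # pyruvate, cystine (simulated levels, cases shifted upward)
    X_tr = rng.normal(size=(124, 5)) + np.repeat([[0.0], [0.8]], 62, axis=0)
    y_tr = np.repeat([0, 1], 62)
    X_te = rng.normal(size=(98, 5))
    X_te[48:] += 0.8
    y_te = np.repeat([0, 1], [48, 50])

    model = LogisticRegression().fit(X_tr, y_tr)
    print(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))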
Mefford, Linda C; Alligood, Martha R
2011-11-01
To explore the influences of intensity of nursing care and consistency of nursing caregivers on health and economic outcomes using Levine's Conservation Model of Nursing as the guiding theoretical framework. Professional nursing practice models are increasingly being used although limited research is available regarding their efficacy. A structural equation modelling approach tested the influence of intensity of nursing care (direct care by professional nurses and patient-nurse ratio) and consistency of nursing caregivers on morbidity and resource utilization in a neonatal intensive care unit (NICU) setting using primary nursing. Consistency of nursing caregivers served as a powerful mediator of length of stay and the duration of mechanical ventilation, supplemental oxygen therapy and parenteral nutrition. Analysis of nursing intensity indicators revealed that a mix of professional nurses and assistive personnel was effective. Providing consistency of nursing caregivers may significantly improve both health and economic outcomes. New evidence was found to support the efficacy of the primary nursing model in the NICU. Designing nursing care delivery systems in acute inpatient settings with an emphasis on consistency of nursing caregivers could improve health outcomes, increase organizational effectiveness, and enhance satisfaction of nursing staff, patients, and families. © 2011 Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Mawardi, M.; Deyundha, D.; Zainul, R.; Zalmi P, R.
2018-04-01
A study was conducted to determine the characteristics of portland composite cement with the addition of napa soil from Sarilamak subdistrict, 50 Kota District, as an alternative additive material at PT. Semen Padang. Napa soil is a natural material rich in silica and alumina minerals, which makes it a candidate raw material for cement production. This study aims to determine the effect of napa soil on the quality of portland composite cement. Napa soil was used at compositions of 0%, 4%, 8%, 12% and 16%; the control cement contained 8% pozzolan and 0% napa soil. Cement quality was determined by testing cement characteristics, including Blaine fineness, sieving, loss on ignition (LOI), insoluble residue, normal consistency, setting time and compressive strength. The cement was characterized using XRF. The fineness of the cement decreases with the addition of napa soil. Loss on ignition decreased, while the insoluble residue increased, with the addition of napa soil. The normal consistency of the cement increased, as did the initial and final setting times. The 28-day compressive strength decreased with the addition of napa soil: 342, 325, 307, 306, and 300 kg/cm², respectively.
Gonzalo-Skok, Oliver; Tous-Fajardo, Julio; Valero-Campo, Carlos; Berzosa, César; Bataller, Ana Vanessa; Arjol-Serrano, José Luis; Moras, Gerard; Mendez-Villanueva, Alberto
2017-08-01
To analyze the effects of 2 different eccentric-overload training (EOT) programs, using a rotational conical pulley, on functional performance in team-sport players. A traditional movement paradigm (ie, squat) including several sets of 1 bilateral and vertical movement was compared with a novel paradigm including a different exercise in each set of unilateral and multidirectional movements. Forty-eight amateur or semiprofessional team-sport players were randomly assigned to an EOT program including either the same bilateral vertical (CBV, n = 24) movement (squat) or different unilateral multidirectional (VUMD, n = 24) movements. Training programs consisted of 6 sets of 1 exercise (CBV) or 1 set of 6 exercises (VUMD) × 6-10 repetitions with 3 min of passive recovery between sets and exercises, biweekly for 8 wk. Functional-performance assessment included several change-of-direction (COD) tests, a 25-m linear-sprint test, unilateral multidirectional jumping tests (ie, lateral, horizontal, and vertical), and a bilateral vertical-jump test. Within-group analysis showed substantial improvements in all tests in both groups, with VUMD showing more robust adaptations in pooled COD tests and lateral/horizontal jumping, and CBV showing more robust adaptations in linear sprinting and vertical jumping. Between-group analyses showed substantially better results in lateral jumps (ES = 0.21), left-leg horizontal jump (ES = 0.35), and 10-m COD with the right leg (ES = 0.42) in VUMD than in CBV. In contrast, the left-leg countermovement jump (ES = 0.26) was possibly better in CBV than in VUMD. Eight weeks of EOT induced substantial improvements in functional-performance tests, although the force-vector application may play a key role in developing different and specific functional adaptations.
ELRIS2D: A MATLAB Package for the 2D Inversion of DC Resistivity/IP Data
NASA Astrophysics Data System (ADS)
Akca, Irfan
2016-04-01
ELRIS2D is an open source code written in MATLAB for the two-dimensional inversion of direct current resistivity (DCR) and time domain induced polarization (IP) data. The user interface of the program is designed for functionality and ease of use. All available settings of the program can be reached from the main window. The subsurface is discretized using a hybrid mesh generated by the combination of structured and unstructured meshes, which reduces the computational cost of the whole inversion procedure. The inversion routine is based on the smoothness constrained least squares method. In order to verify the program, responses of two test models and field data sets were inverted. The models inverted from the synthetic data sets are consistent with the original test models in both DC resistivity and IP cases. A field data set acquired in an archaeological site is also used for the verification of outcomes of the program in comparison with the excavation results.
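The smoothness-constrained least-squares update at the core of such inversions can be sketched in a few lines: each iteration solves (J^T J + lambda C^T C) dm = J^T r, where J is the Jacobian, C a roughness operator and r the data residual. The matrices below are random stand-ins for a real DCR/IP forward model, not ELRIS2D's own routines.

    # Sketch: one smoothness-constrained least-squares model update.
    import numpy as np

    def constrained_step(J, r, lam):
        n = J.shape[1]
        C = np.eye(n) - np.eye(n, k=1)       # first-difference roughness
        return np.linalg.solve(J.T @ J + lam * C.T @ C, J.T @ r)

    rng = np.random.default_rng(0)
    J = rng.normal(size=(60, 40))            # placeholder sensitivities
    r = rng.normal(size=60)                  # placeholder data residual
    print(constrained_step(J, r, lam=0.1).shape)   # (40,) model update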
Routine development of objectively derived search strategies.
Hausner, Elke; Waffenschmidt, Siw; Kaiser, Thomas; Simon, Michael
2012-02-29
Over the past few years, information retrieval has become more and more professionalized, and information specialists are considered full members of a research team conducting systematic reviews. Research groups preparing systematic reviews and clinical practice guidelines have been the driving force in the development of search strategies, but open questions remain regarding the transparency of the development process and the available resources. An empirically guided approach to the development of a search strategy provides a way to increase transparency and efficiency. Our aim in this paper is to describe the empirically guided development process for search strategies as applied by the German Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, or "IQWiG"). This strategy consists of the following steps: generation of a test set, as well as the development, validation and standardized documentation of the search strategy. We illustrate our approach by means of an example, that is, a search for literature on brachytherapy in patients with prostate cancer. For this purpose, a test set was generated, including a total of 38 references from 3 systematic reviews. The development set for the generation of the strategy included 25 references. After application of textual analytic procedures, a strategy was developed that included all references in the development set. To test the search strategy on an independent set of references, the remaining 13 references in the test set (the validation set) were used. The validation set was also completely identified. Our conclusion is that an objectively derived approach similar to that used in search filter development is a feasible way to develop and validate reliable search strategies. Besides creating high-quality strategies, the widespread application of this approach will result in a substantial increase in the transparency of the development process of search strategies.
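The validation step described above reduces to a relative recall check: a candidate strategy passes if it retrieves every record in the held-out validation set. A toy sketch with invented record IDs:

    # Sketch: relative recall of a search strategy against a validation set.
    validation_set = {"111", "222", "333"}           # known relevant records
    retrieved = {"111", "222", "333", "444", "555"}  # strategy's hits

    recall = len(validation_set & retrieved) / len(validation_set)
    print(f"relative recall = {recall:.0%}")         # 100% here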
NASA Technical Reports Server (NTRS)
1976-01-01
The results of the spread spectrum despreader project are reported; three principal products were designed and tested. The products are (1) a spread spectrum despreader breadboard, (2) associated test equipment consisting of a spectrum spreader and a bit reconstruction/error counter, and (3) a paper design of a Ku-band receiver which would incorporate the despreader as a principal subsystem. The despreader and test set are designed for maximum flexibility. A choice of unbalanced quadriphase or biphase shift keyed data modulation is available. Selectable integration time and threshold voltages on the despreader further lend the delivered hardware true usefulness as laboratory test equipment.
Yang, Xiaowei; Nie, Kun
2008-03-15
Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative is to apply functional data analysis, which directly targets the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend these strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
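A rough sketch of the three-step procedure follows, using Fan's adaptive Neyman statistic in its usual form, T_AN = max over m of (1/sqrt(2m)) * sum_{j<=m} (z_j^2 - 1), applied to standardized Fourier coefficients of a group contrast; the repeated-measures data and the crude standardization are placeholders, not the smoking cessation analysis itself.

    # Sketch: Fourier transform, group contrast, adaptive Neyman statistic.
    import numpy as np

    def adaptive_neyman(z):
        # z: standardized coefficients, approx. N(0, 1) under the null
        m = np.arange(1, len(z) + 1)
        return np.max(np.cumsum(z**2 - 1.0) / np.sqrt(2.0 * m))

    rng = np.random.default_rng(0)
    group_a = rng.normal(size=(30, 64))          # simulated repeated measures
    group_b = rng.normal(size=(30, 64)) + 0.3    # second group, shifted
    contrast = np.fft.rfft(group_a.mean(0) - group_b.mean(0))
    z = contrast.real / contrast.real.std(ddof=1)  # crude standardization
    print(adaptive_neyman(z))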
Comparison of Random Forest and Support Vector Machine classifiers using UAV remote sensing imagery
NASA Astrophysics Data System (ADS)
Piragnolo, Marco; Masiero, Andrea; Pirotti, Francesco
2017-04-01
In recent years, surveying with unmanned aerial vehicles (UAVs) has attracted a great deal of attention owing to decreasing costs, higher precision, and flexibility of use. UAVs have been applied to geomorphological investigations, forestry, precision agriculture, cultural heritage assessment, and archaeological purposes, and they can also be used for land use and land cover (LULC) classification. In the literature, there are two main types of approaches to classifying remote sensing imagery: pixel-based and object-based. The pixel-based approach mostly uses training areas to define classes and their spectral signatures, whereas object-based classification considers pixels, scale, spatial information, and texture to create homogeneous objects. Machine learning methods have been applied successfully to classification, and their use is increasing with the availability of faster computing; such methods learn a model from previously labelled data. Two machine learning methods that have given good results in previous investigations are Random Forest (RF) and Support Vector Machine (SVM). The goal of this work is to compare the RF and SVM methods for classifying LULC using images collected with a fixed-wing UAV. The classification processing chain uses packages in R, an open-source scripting language for data analysis that provides all the necessary algorithms. The imagery was acquired and processed in November 2015 with cameras recording red, blue, green, and near-infrared reflectance over a testing area on the Agripolis campus in Italy. Images were elaborated and ortho-rectified through Agisoft Photoscan. The ortho-rectified image is the full data set, and test sets are derived by partial sub-setting of the full data set, using percentages from 2% to 20% of the total. Ten training sets and ten validation sets are obtained from each test set. The control data set consists of an independent visual classification done by an expert over the whole area. The classes are (i) broadleaf, (ii) building, (iii) grass, (iv) headland access path, (v) road, (vi) sowed land, and (vii) vegetable. RF and SVM are applied to each test set, and the performance of the methods is evaluated using three accuracy metrics: the Kappa index, classification accuracy, and classification error. All three are calculated in three different ways: with K-fold cross-validation, on the validation test set, and on the full test set. The analysis indicates that SVM obtains better scores under K-fold cross-validation and on the validation test set, whereas RF achieves the better result on the full test set. SVM also appears to perform better with smaller training sets, whereas RF improves as training sets get larger.
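The comparison can be sketched in a few lines. The study's processing chain is in R, but the following scikit-learn version illustrates the same three metrics under cross-validation and a held-out validation set; the band values and class labels are synthetic stand-ins for the UAV imagery.

```python
# Sketch of the RF vs. SVM comparison on synthetic 4-band pixel data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(1)
X = rng.random((1000, 4))             # per-pixel red, green, blue, NIR reflectance
y = rng.integers(0, 7, size=1000)     # seven LULC classes, as in the study

# 20% of the full set as the training portion, the rest held out for validation
X_tr, X_val, y_tr, y_val = train_test_split(X, y, train_size=0.2, random_state=1)

for name, clf in [("RF", RandomForestClassifier(n_estimators=500, random_state=1)),
                  ("SVM", SVC(kernel="rbf", C=10.0, gamma="scale"))]:
    cv_acc = cross_val_score(clf, X_tr, y_tr, cv=5).mean()    # K-fold cross-validation
    pred = clf.fit(X_tr, y_tr).predict(X_val)                 # validation set
    acc = accuracy_score(y_val, pred)
    print(f"{name}: CV={cv_acc:.3f} accuracy={acc:.3f} "
          f"error={1 - acc:.3f} kappa={cohen_kappa_score(y_val, pred):.3f}")
```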
Use of a Self-Instructional Radiographic Anatomy Module for Dental Hygiene Faculty Calibration.
Brame, Jennifer L; AlGheithy, Demah Salem; Platin, Enrique; Mitchell, Shannon H
2017-06-01
Purpose: Dental hygiene educators often provide inconsistent instruction in clinical settings, and various attempts to address this lack of consistency have been reported in the literature. The purpose of this pilot study was to determine whether the use of a self-instructional radiographic anatomy (SIRA) module improved DH faculty calibration regarding the identification of normal intraoral and extraoral radiographic anatomy, and whether its effect could be sustained over a period of four months. Methods: A convenience sample consisting of all dental hygiene faculty members involved in clinical instruction (N=23) at the University of North Carolina (UNC) was invited to complete the four parts of this online pilot study: a pre-test, review of the SIRA module, an immediate post-test, and a four-month follow-up post-test. Descriptive analyses, Friedman's ANOVA, and the exact form of the Wilcoxon signed-rank test were used to analyze the data. The level of significance was set at 0.05. Participants who did not complete all parts of the study were omitted from the analysis comparing pre- to post-test performance. Results: The pre-test response rate was 73.9% (N=17), and 88.2% (N=15) of those initial participants completed both the immediate and follow-up post-tests. Faculty completing all parts of the study consisted of 5 full-time faculty, 5 part-time faculty, and 5 graduate teaching assistants. Friedman's ANOVA revealed no statistically significant difference (P=0.179) in percentages of correct responses across the three tests (pre, post, and follow-up). The exact form of the Wilcoxon signed-rank test revealed marginal significance when comparing the percent of correct responses at pre-test and immediate post-test (P=0.054), and no statistically significant difference when comparing the immediate post-test with the follow-up post-test four months later (P=0.106). Conclusions: Use of a SIRA module did not significantly affect DH faculty test performance. The lack of statistical significance in the percentages of correct responses across the three tests may have been affected by the small number of participants completing all four parts of the study (N=15). Additional research is needed to identify and improve methods for faculty calibration. Copyright © 2017 The American Dental Hygienists' Association.
Automated spot defect characterization in a field portable night vision goggle test set
NASA Astrophysics Data System (ADS)
Scopatz, Stephen; Ozten, Metehan; Aubry, Gilles; Arquetoux, Guillaume
2018-05-01
This paper discusses a new capability developed for, and results from, a field-portable test set for Gen 2 and Gen 3 Image Intensifier (I2) tube-based Night Vision Goggles (NVG). A previous paper described the test set and the automated and semi-automated tests supported for NVGs, including a knife-edge MTF test to replace the operator's interpretation of the USAF 1951 resolution chart. The major improvement and innovation detailed in this paper is the use of image analysis algorithms to automate the characterization of spot defects in I2 tubes with the same test set hardware previously presented. The original and still common spot defect test requires the operator to look through the NVGs at a target of concentric rings, compare the size of the defects to a chart, and manually enter the results into a table based on the size and location of each defect; this is tedious and subjective. The prior semi-automated improvement captures and displays an image of the defects and the rings, allowing the operator to determine the defects with less eyestrain while electronically storing the image and the resulting table. The advanced automated spot defect test utilizes machine vision algorithms to determine the size and location of the defects, generates the result table automatically, and then records the image and the results in a computer-generated report easily usable for verification. This is inherently a more repeatable process that ensures consistent spot detection independent of the operator. Results from several NVGs are presented.
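A minimal sketch of the automated measurement step follows, assuming a simple Otsu threshold and connected-component analysis in OpenCV. The production test set's algorithms and ring chart are not reproduced here; the radial "zone" bucketing below merely stands in for the concentric-ring target.

```python
# Sketch: find dark blemishes in a tube image and report size, location, zone.
import cv2
import numpy as np

def spot_defects(gray, min_area=4):
    """Return (area_px, centroid, ring_zone) for each dark spot in a tube image."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    h, w = gray.shape
    center = np.array([w / 2.0, h / 2.0])
    rows = []
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area < min_area:
            continue
        c = centroids[i]
        zone = int(np.linalg.norm(c - center) / (min(h, w) / 8.0))  # concentric band
        rows.append((int(area), (float(c[0]), float(c[1])), zone))
    return rows

image = np.full((480, 480), 200, np.uint8)
cv2.circle(image, (300, 240), 6, 20, -1)      # synthetic dark spot defect
for area, centroid, zone in spot_defects(image):
    print(f"spot: area={area}px centroid={centroid} zone={zone}")
```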
Development, refinement, and testing of a short term solar flare prediction algorithm
NASA Technical Reports Server (NTRS)
Smith, Jesse B., Jr.
1993-01-01
During the period covered by this report, the effort toward performing the tasks and accomplishing the goals set forth in the two-year research grant proposal consisted primarily of calibration and analysis of selected data sets. The heliographic limits of 30 degrees from central meridian were retained. As previously reported, all analyses are interactive and are performed by the Principal Investigator. It should also be noted that the analysis time available to the Principal Investigator during this reporting period was limited, partially due to illness and partially due to other uncontrollable factors. The calibration technique (as developed by MSFC solar scientists) incorporates sets of constants which vary according to the wavelength of the observation data set. One input constant is then varied interactively to correct for observing conditions, etc., so as to yield a maximum magnetic field strength (in the calibrated data) based on a separate analysis. There is some insecurity in the methodology and in the selection of variables that yield the most self-consistent results across variable maximum field strengths and variable observing/atmospheric conditions. Several data sets were analyzed using differing constant sets and separate analyses to differing maximum field strengths, toward standardizing the methodology and technique for the most self-consistent results over the large number of cases. It may be necessary to recalibrate some of the analyses, but the analyses are retained on the optical disks and can still be used, with recalibration where necessary. Only the extracted parameters will be changed.
Feenstra, Heleen E M; Murre, Jaap M J; Vermeulen, Ivar E; Kieffer, Jacobien M; Schagen, Sanne B
2018-04-01
To facilitate large-scale assessment of a variety of cognitive abilities in clinical studies, we developed a self-administered online neuropsychological test battery: the Amsterdam Cognition Scan (ACS). The current studies evaluate, in a group of adult cancer patients, the test-retest reliability of the ACS, the influence of test setting (home or hospital), and the relationship between our online battery and a traditional test battery (concurrent validity). Test-retest reliability was studied in 96 cancer patients (57 female; mean age = 51.8 years) who completed the ACS twice. Intraclass correlation coefficients (ICCs) were used to assess consistency over time. The test setting was counterbalanced between home and hospital; its influence on test performance was assessed by repeated-measures analyses of variance. Concurrent validity was studied in 201 cancer patients (112 female; mean age = 53.5 years) who completed both the online and an equivalent traditional neuropsychological test battery. Spearman or Pearson correlations were used to assess consistency between online and traditional tests. ICCs of the online tests ranged from .29 to .76, with an ICC of .78 for the ACS total score. These correlations are generally comparable with the test-retest correlations of the traditional tests as reported in the literature. Correlating online and traditional test scores, we observed medium to large concurrent validity (r/ρ = .42 to .70; total score r = .78), except for a visuospatial memory test (ρ = .36). Correlations were, as expected, affected by design differences between online tests and their offline counterparts. Although development and optimization of the ACS is an ongoing process, and reliability can be improved for several tests, our results indicate that it is a highly usable tool for obtaining online measures of various cognitive abilities. The ACS is expected to facilitate efficient gathering of data on cognitive functioning in the near future.
Lippert, Christoph; Xiang, Jing; Horta, Danilo; Widmer, Christian; Kadie, Carl; Heckerman, David; Listgarten, Jennifer
2014-01-01
Motivation: Set-based variance component tests have been identified as a way to increase power in association studies by aggregating weak individual effects. However, the choice of test statistic has been largely ignored even though it may play an important role in obtaining optimal power. We compared a standard statistical test—a score test—with a recently developed likelihood ratio (LR) test. Further, when correction for hidden structure is needed, or gene–gene interactions are sought, state-of-the-art algorithms for both the score and LR tests can be computationally impractical. Thus we develop new computationally efficient methods. Results: After reviewing theoretical differences in performance between the score and LR tests, we find empirically on real data that the LR test generally has more power. In particular, on 15 of 17 real datasets, the LR test yielded at least as many associations as the score test—up to 23 more associations—whereas the score test yielded at most one more association than the LR test in the two remaining datasets. On synthetic data, we find that the LR test yielded up to 12% more associations, consistent with our results on real data, but we also observe a regime of extremely small signal where the score test yielded up to 25% more associations than the LR test, consistent with theory. Finally, our computational speedups now enable (i) efficient LR testing when the background kernel is full rank, and (ii) efficient score testing when the background kernel changes with each test, as for gene–gene interaction tests. The latter yielded a factor of 2,000 speedup on a cohort of size 13,500. Availability: Software available at http://research.microsoft.com/en-us/um/redmond/projects/MSCompBio/Fastlmm/. Contact: heckerma@microsoft.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25075117
An unusual case of random fire-setting behavior associated with lacunar stroke.
Bosshart, Herbert; Capek, Sonia
2011-06-15
A case of a 47-year-old man with a sudden onset of a bizarre and random fire-setting behavior is reported. The man, who had been arrested on felony arson charges, complained of difficulties concentrating and of recent memory impairment. Axial T1-weighted magnetic resonance imaging showed a low intensity lacunar lesion in the genu and anterior limb of the left internal capsule. A neuropsychological test battery revealed lower than normal scores for executive functions, attention and memory, consistent with frontal lobe dysfunction. The recent onset of fire-setting behavior and the chronic nature of the lacunar lesion, together with an unremarkable performance on tests measuring executive functions two years prior, suggested a causal relationship between this organic brain lesion and the fire-setting behavior. The present case describes a rare and as yet unreported association between random impulse-driven fire-setting behavior and damage to the left internal capsule and suggests a disconnection of frontal lobe structures as a possible pathogenic mechanism. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Evaluation of rules to distinguish unique female grizzly bears with cubs in Yellowstone
Schwartz, C.C.; Haroldson, M.A.; Cherry, S.; Keating, K.A.
2008-01-01
The United States Fish and Wildlife Service uses counts of unduplicated female grizzly bears (Ursus arctos) with cubs-of-the-year to establish limits of sustainable mortality in the Greater Yellowstone Ecosystem, USA. Sightings are clustered into observations of unique bears based on an empirically derived rule set. The method has never been tested or verified. To evaluate the rule set, we used data from radiocollared females obtained during 1975-2004 to simulate populations under varying densities, distributions, and sighting frequencies. We tested individual rules and rule-set performance, using custom software to apply the rule set and cluster sightings. Results indicated most rules were violated to some degree, and rule-based clustering consistently underestimated the minimum number of females and the total population size derived from a nonparametric estimator (Chao2). We conclude that the current rule set returns conservative estimates but, with minor improvements, counts of unduplicated females-with-cubs can serve as a reasonable index of population size useful for establishing annual mortality limits. For the Yellowstone population, the index is more practical and cost-effective than capture-mark-recapture using either DNA hair snagging or aerial surveys with radiomarked bears. The method has useful application in other ecosystems, but we recommend that rules used to distinguish unique females be adapted to local conditions and tested.
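The nonparametric benchmark named above, Chao2, is easy to state. Below is a minimal sketch of the bias-corrected form applied to a hypothetical occasion-by-female sighting matrix; the data are illustrative only.

```python
# Bias-corrected Chao2 estimator from an incidence (sighting) matrix.
import numpy as np

def chao2(incidence):
    """Chao2: S_obs + ((m-1)/m) * Q1*(Q1-1) / (2*(Q2+1)), where Q1 and Q2 are
    the numbers of females detected on exactly one and two occasions."""
    inc = np.asarray(incidence, dtype=bool)
    m = inc.shape[0]                          # number of sampling occasions
    detections = inc.sum(axis=0)
    s_obs = int((detections > 0).sum())
    q1 = int((detections == 1).sum())
    q2 = int((detections == 2).sum())
    return s_obs + (m - 1) / m * q1 * (q1 - 1) / (2 * (q2 + 1))

# 5 occasions x 8 females (hypothetical sighting histories)
history = np.random.default_rng(7).random((5, 8)) < 0.3
print(f"observed: {(history.sum(0) > 0).sum()}, Chao2 estimate: {chao2(history):.1f}")
```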
Spectral Analysis and Experimental Modeling of Ice Accretion Roughness
NASA Technical Reports Server (NTRS)
Orr, D. J.; Breuer, K. S.; Torres, B. E.; Hansman, R. J., Jr.
1996-01-01
A self-consistent scheme for relating wind tunnel ice accretion roughness to the resulting enhancement of heat transfer is described. First, a spectral technique of quantitative analysis of early ice roughness images is reviewed. The image processing scheme uses a spectral estimation technique (SET) which extracts physically descriptive parameters by comparing scan lines from the experimentally-obtained accretion images to a prescribed test function. Analysis using this technique for both streamwise and spanwise directions of data from the NASA Lewis Icing Research Tunnel (IRT) are presented. An experimental technique is then presented for constructing physical roughness models suitable for wind tunnel testing that match the SET parameters extracted from the IRT images. The icing castings and modeled roughness are tested for enhancement of boundary layer heat transfer using infrared techniques in a "dry" wind tunnel.
Wu, X; Lund, M S; Sun, D; Zhang, Q; Su, G
2015-10-01
One of the factors affecting the reliability of genomic prediction is the relationship among the animals of interest. This study investigated the reliability of genomic prediction in various scenarios with regard to the relationship between test and training animals, and among animals within the training data set. Different training data sets were generated from EuroGenomics data and a group of Nordic Holstein bulls (born in 2005 and afterwards) as a common test data set. Genomic breeding values were predicted using a genomic best linear unbiased prediction model and a Bayesian mixture model. The results showed that a closer relationship between test and training animals led to a higher reliability of genomic predictions for the test animals, while a closer relationship among training animals resulted in a lower reliability. In addition, the Bayesian mixture model in general led to a slightly higher reliability of genomic prediction, especially for the scenario of distant relationships between training and test animals. Therefore, to prevent a decrease in reliability, constant updates of the training population with animals from more recent generations are required. Moreover, a training population consisting of less-related animals is favourable for reliability of genomic prediction. © 2015 Blackwell Verlag GmbH.
Doig, Emmah; Prescott, Sarah; Fleming, Jennifer; Cornwell, Petrea; Kuipers, Pim
2016-01-01
To examine the internal reliability and test-retest reliability of the Client-Centeredness of Goal Setting (C-COGS) scale. The C-COGS scale was administered to 42 participants with acquired brain injury after completion of multidisciplinary goal planning. Internal reliability of scale items was examined using item-partial total correlations and Cronbach's α coefficient. The scale was readministered within a 1-mo period to a subsample of 12 participants to examine test-retest reliability by calculating exact and close percentage agreement for each item. After examination of item-partial total correlations, test items were revised. The revised items demonstrated stronger internal consistency than the original items. Preliminary evaluation of test-retest reliability was fair, with an average exact percent agreement across all test items of 67%. Findings support the preliminary reliability of the C-COGS scale as a tool to evaluate and promote client-centered goal planning in brain injury rehabilitation. Copyright © 2016 by the American Occupational Therapy Association, Inc.
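Both reliability computations reported here are standard. A minimal sketch with synthetic Likert responses follows; item counts, scale range, and stability level are illustrative assumptions, not the study's data.

```python
# Cronbach's alpha (internal consistency) and exact percent agreement (test-retest).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items). alpha = k/(k-1) * (1 - sum(var_i)/var_total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def exact_agreement(time1, time2):
    """Per-item proportion of respondents giving identical ratings at both times."""
    return (np.asarray(time1) == np.asarray(time2)).mean(axis=0)

rng = np.random.default_rng(3)
t1 = rng.integers(1, 6, size=(12, 9))          # 12 respondents x 9 Likert items
t2 = np.where(rng.random(t1.shape) < 0.7, t1,  # ~70% exact stability at retest
              rng.integers(1, 6, size=t1.shape))
print(f"alpha={cronbach_alpha(t1):.2f}, "
      f"mean exact agreement={exact_agreement(t1, t2).mean():.2f}")
```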
Antimicrobial susceptibility testing by Australian veterinary diagnostic laboratories.
Hardefeldt, L Y; Marenda, M; Crabb, H; Stevenson, M A; Gilkerson, J R; Billman-Jacobe, H; Browning, G F
2018-04-01
The national strategy for tackling antimicrobial resistance highlights the need for antimicrobial stewardship in veterinary practice and for surveillance of antimicrobial susceptibility in veterinary pathogens. Diagnostic laboratories have an important role in facilitating both of these processes, but it is unclear whether data from veterinary diagnostic laboratories are similar enough to allow for compilation and if there is consistent promotion of appropriate antimicrobial use embedded in the approaches of different laboratories to susceptibility testing. A cross-sectional study of antimicrobial susceptibility testing and reporting procedures by Australian veterinary diagnostic laboratories was conducted in 2017 using an online questionnaire. All 18 veterinary diagnostic laboratories in Australia completed the questionnaire. Kirby-Bauer disc diffusion was the method predominantly used for antimicrobial susceptibility testing and was used to evaluate 86% of all isolates, although two different protocols were used across the 18 laboratories (CLSI 15/18, CDS 3/18). Minimum inhibitory concentrations were never reported by 61% of laboratories. Common isolates were consistently reported on across all species, except for gram-negative isolates in pigs, for which there was some variation in the approach to reporting. There was considerable diversity in the panels of antimicrobials used for susceptibility testing on common isolates and no consistency was apparent between laboratories for any bacterial species. We recommend that nationally agreed and consistent antimicrobial panels for routine susceptibility testing should be developed and a uniform set of guidelines should be adopted by veterinary diagnostic laboratories in Australia. © 2018 Australian Veterinary Association.
The Chinese version of the Outcome Expectations for Exercise scale: validation study.
Lee, Ling-Ling; Chiu, Yu-Yun; Ho, Chin-Chih; Wu, Shu-Chen; Watson, Roger
2011-06-01
The reliability and validity of the English nine-item Outcome Expectations for Exercise (OEE) scale have been tested and found acceptable for use in various settings, particularly among older people, with good internal consistency and validity. Data on the use of the OEE scale among older Chinese people living in the community, and on how cultural differences might affect its administration, are limited. To test the validity and reliability of the Chinese version of the scale among older people, a cross-sectional validation study was designed to test the Chinese version of the OEE scale (OEE-C). Reliability was examined by testing both the internal consistency of the overall scale and the squared multiple correlation coefficient for the single-item measure. The validity of the scale was tested on the basis of both a traditional psychometric test and a confirmatory factor analysis using structural equation modelling. The Mokken Scaling Procedure (MSP) was used to investigate whether there were any hierarchical, cumulative sets of items in the measure. The OEE-C scale was tested in a group of older people in Taiwan (n=108, mean age=77.1). There was acceptable internal consistency (alpha=.85) and model fit in the scale. Evidence of the validity of the measure was demonstrated by the tests for criterion-related validity and construct validity. There was a statistically significant correlation between exercise outcome expectations and exercise self-efficacy (r=.34, p<.01). A Mokken Scaling Procedure analysis found that all nine items of the scale were retained and the resulting scale was reliable and statistically significant (p=.0008). The results obtained in the present study provide acceptable levels of reliability and validity evidence for the Chinese Outcome Expectations for Exercise scale when used with older people in Taiwan. Future testing of the OEE-C scale needs to be carried out to see whether these results are generalisable to older Chinese people living in urban areas. Copyright © 2010 Elsevier Ltd. All rights reserved.
Benchmarking contactless acquisition sensor reproducibility for latent fingerprint trace evidence
NASA Astrophysics Data System (ADS)
Hildebrandt, Mario; Dittmann, Jana
2015-03-01
Optical, nanometer-range, contactless, non-destructive sensor devices are promising acquisition techniques in crime scene trace forensics, e.g., for digitizing latent fingerprint traces. Before new approaches are introduced in crime investigations, innovations need to be positively tested and quality assured. In this paper we investigate sensor reproducibility by studying different scans from four sensors: two chromatic white light sensors (CWL600/CWL1mm), one confocal laser scanning microscope, and one NIR/VIS/UV reflection spectrometer. First, we perform intra-sensor reproducibility testing for the CWL600 with a privacy-conform test set of artificial-sweat-printed, computer-generated fingerprints. We use 24 different fingerprint patterns as original samples (printing samples/templates) for printing with artificial sweat (physical trace samples) and their acquisition with contactless sensors, resulting in 96 sensor images, called scans or acquired samples. The second test set, for inter-sensor reproducibility assessment, consists of the first three patterns from the first test set, acquired in two consecutive scans using each device. We suggest using a simple feature set in the spatial and frequency domains known from signal processing and test its suitability for six different classifiers that classify scan data into small differences (reproducible) and large differences (non-reproducible). Furthermore, we suggest comparing the classification results with biometric verification scores (calculated with NBIS, with a threshold of 40) as a biometric reproducibility score. The Bagging classifier is in nearly all cases the most reliable classifier in our experiments, and the results are also confirmed by the biometric matching rates.
Optical tests for using smartphones inside medical devices
NASA Astrophysics Data System (ADS)
Bernat, Amir S.; Acobas, Jennifer K.; Phang, Ye Shang; Hassan, David; Bolton, Frank J.; Levitz, David
2018-02-01
Smartphones are currently used in many medical applications and are increasingly being integrated into medical imaging devices. The regulatory requirements in existence today, however, particularly the standardization of smartphone imaging through validation and verification testing, only partially cover imaging characteristics with a smartphone. Specifically, it has been shown that smartphone camera specifications are of sufficient quality for medical imaging, and there are devices that comply with the FDA's regulatory requirements for characteristics of a medical device such as field of view, direction of view, optical resolution, and optical distortion. However, these regulatory requirements do not specifically call for color testing. Images of the same object taken with automatic settings or under different light sources can show different color composition; experimental results showing such differences are presented. Under some circumstances, such differences in color composition could lead to incorrect diagnoses. It is therefore critical to control the smartphone camera and illumination parameters properly. This paper examines the smartphone camera settings that affect image quality and color composition. To test and select the correct settings, a test methodology is proposed. It aims to evaluate and test image color correctness and white balance settings for mobile phones and LED light sources. Emphasis is placed on color consistency and deviation from gray values, specifically by evaluating ΔC values based on the CIE L*a*b* color space. Results show that such standardization minimizes differences in color composition and thus could reduce the risk of a wrong diagnosis.
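The gray-deviation check can be sketched directly from the CIE L*a*b* definitions: chroma is zero for a perfectly neutral gray, so the ΔC of a captured gray patch quantifies the color cast introduced by the camera and illumination. The patch values below are hypothetical.

```python
# Chroma deviation from gray in CIE L*a*b*: C* = sqrt(a*^2 + b*^2).
import math

def chroma(a, b):
    """CIE L*a*b* chroma."""
    return math.hypot(a, b)

def delta_c(lab_measured, lab_reference):
    """Chroma difference between a measured patch and its reference value."""
    _, a1, b1 = lab_measured
    _, a2, b2 = lab_reference
    return chroma(a1, b1) - chroma(a2, b2)

gray_reference = (50.0, 0.0, 0.0)     # ideal mid-gray: zero chroma
captured = (51.2, 2.1, -1.4)          # hypothetical smartphone reading
print(f"deviation from gray: deltaC = {delta_c(captured, gray_reference):+.2f}")
```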
Laser transit anemometer software development program
NASA Technical Reports Server (NTRS)
Abbiss, John B.
1989-01-01
Algorithms were developed for the extraction of two components of mean velocity, the standard deviations, and the associated correlation coefficient from laser transit anemometry (LTA) data ensembles. The solution method is based on an assumed two-dimensional Gaussian probability density function (PDF) model of the flow field under investigation. The procedure consists of transforming the data ensembles from the data acquisition domain (consisting of time and angle information) to the velocity space domain (consisting of velocity component information). The mean velocity results are obtained from the data ensemble centroid. Through a least-squares fit of the transformed data to an ellipse representing the intersection of a plane with the PDF, the standard deviations and correlation coefficient are obtained. A data set simulation method is presented to test the data reduction process. Results of using the simulation system with a limited test matrix of input values are also given.
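A simplified sketch of this reduction follows. The mapping from (transit time, angle) to velocity components is schematic, and second moments of the transformed ensemble stand in for the paper's least-squares ellipse fit to the 2-D Gaussian PDF; the simulated data are arbitrary.

```python
# Simplified LTA reduction: acquisition domain -> velocity domain -> moments.
import numpy as np

def reduce_lta(times, angles, beam_separation):
    """times: transit times (s); angles: beam-pair orientations (rad)."""
    speed = beam_separation / np.asarray(times)   # speed along each orientation
    u = speed * np.cos(angles)                    # velocity components
    v = speed * np.sin(angles)
    mean = np.array([u.mean(), v.mean()])         # ensemble centroid -> mean velocity
    cov = np.cov(u, v)                            # dispersion about the centroid
    sig_u, sig_v = np.sqrt(np.diag(cov))
    rho = cov[0, 1] / (sig_u * sig_v)             # correlation coefficient
    return mean, (sig_u, sig_v), rho

rng = np.random.default_rng(2)
t = rng.normal(1e-5, 4e-7, 500)                   # simulated transit times
a = rng.normal(0.1, 0.05, 500)                    # simulated angle data
mean, sig, rho = reduce_lta(t, a, beam_separation=1e-3)
print(f"mean velocity={mean}, sigma={sig}, rho={rho:.2f}")
```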
NASA Astrophysics Data System (ADS)
Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin
2016-05-01
With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen's pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
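For reference, the counterpoise correction mentioned here (the Boys-Bernardi scheme) can be written as follows, with superscripts denoting the basis set used, subscripts the system computed, and parentheses the geometry employed.

```latex
% Counterpoise-corrected interaction energy (Boys-Bernardi scheme):
% each monomer is evaluated in the full dimer basis (ghost functions on
% the partner site), removing basis set superposition error.
\begin{align*}
  E_{\mathrm{int}}^{\mathrm{CP}}
    = E_{AB}^{AB}(AB) \;-\; E_{A}^{AB}(AB) \;-\; E_{B}^{AB}(AB)
\end{align*}
```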
Solvent effects in time-dependent self-consistent field methods. I. Optical response calculations
Bjorgaard, J. A.; Kuzmenko, V.; Velizhanin, K. A.; ...
2015-01-22
In this study, we implement and examine three excited state solvent models in time-dependent self-consistent field methods using a consistent formalism which unambiguously shows their relationship. These are the linear response, state specific, and vertical excitation solvent models. Their effects on energies calculated with the equivalent of COSMO/CIS/AM1 are given for a set of test molecules with varying excited state charge transfer character. The resulting solvent effects are explained qualitatively using a dipole approximation. It is shown that the fundamental differences between these solvent models are reflected by the character of the calculated excitations.
Prototype ultrasonic instrument for quantitative testing
NASA Technical Reports Server (NTRS)
Lynnworth, L. C.; Dubois, J. L.; Kranz, P. R.
1972-01-01
A prototype ultrasonic instrument has been designed and developed for quantitative testing. The complete delivered instrument consists of a pulser/receiver which plugs into a standard oscilloscope, an rf power amplifier, a standard decade oscillator, and a set of broadband transducers for typical use at 1, 2, 5 and 10 MHz. The system provides for its own calibration, and on the oscilloscope, presents a quantitative (digital) indication of time base and sensitivity scale factors and some measurement data.
Combustion Integration Rack (CIR) Testing
2015-02-18
Fluids and Combustion Facility (FCF), Combustion Integration Rack (CIR) during testing in the Structural Dynamics Laboratory (SDL). The Fluids and Combustion Facility (FCF) is a set of two International Space Station (ISS) research facilities designed to support physical and biological experiments in support of technology development and validation in space. The FCF consists of two modular, reconfigurable racks called the Combustion Integration Rack (CIR) and the Fluids Integration Rack (FIR). The CIR and FIR were developed at NASA's Glenn Research Center.
NIR spectroscopic measurement of moisture content in Scots pine seeds.
Lestander, Torbjörn A; Geladi, Paul
2003-04-01
When tree seeds are used for seedling production, it is important that they are of high quality in order to be viable. One of the factors influencing viability is moisture content, and an ideal quality control system should be able to measure this factor quickly for each seed. Seed moisture content within the range 3-34% was determined by near-infrared (NIR) spectroscopy on Scots pine (Pinus sylvestris L.) single seeds and on bulk seed samples consisting of 40-50 seeds. The models for predicting water content from the spectra were built by partial least squares (PLS) and ordinary least squares (OLS) regression. Different conditions were simulated, involving both the use of fewer wavelengths and the move from bulk samples to single seeds. Reflectance and transmission measurements were used, and different spectral pretreatment methods were tested on the spectra. Including bias, the lowest prediction errors for PLS models based on reflectance within 780-2280 nm were 0.8% for bulk samples and 1.9% for single seeds. Reduction of the single-seed reflectance spectrum to 850-1048 nm gave higher biases and prediction errors in the test set. In transmission (850-1048 nm) the prediction error was 2.7% for single seeds. OLS models based on a simulated 4-sensor single-seed system consisting of optical filters with Gaussian transmission indicated more than 3.4% error in prediction. A practical F-test based on test sets to differentiate models is introduced.
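The PLS calibration can be sketched with synthetic "spectra"; the water-band shape, noise level, and component count below are assumptions for illustration, not the paper's data.

```python
# Sketch: PLS calibration of moisture content from synthetic NIR spectra,
# with the prediction error reported on a held-out test set.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
moisture = rng.uniform(3, 34, 200)                      # % moisture, range as in study
wavelengths = np.linspace(780, 2280, 300)
water_band = np.exp(-((wavelengths - 1450) / 60) ** 2)  # crude water absorption feature
spectra = (np.outer(moisture, water_band)               # signal grows with moisture
           + rng.normal(0, 0.5, (200, 300)))            # instrument noise

X_tr, X_te, y_tr, y_te = train_test_split(spectra, moisture, random_state=5)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
rmsep = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
print(f"RMSEP = {rmsep:.2f} % moisture")
```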
2006-09-01
Richardson, in review). Figure 1 shows the lithostratigraphic setting for Eocene through Miocene strata, and the occurrence of hydrostratigraphic units of…basal Hawthorn unit lies unconformably on lithologies informally called “Eocene limestones,” which consist of Suwannee Limestone, Ocala Limestone
Acquiescent Responding in Balanced Multidimensional Scales and Exploratory Factor Analysis
ERIC Educational Resources Information Center
Lorenzo-Seva, Urbano; Rodriguez-Fornells, Antoni
2006-01-01
Personality tests often consist of a set of dichotomous or Likert items. These response formats are known to be susceptible to an agreeing-response bias called acquiescence. The common assumption in balanced scales is that the sum of appropriately reversed responses should be reasonably free of acquiescence. However, inter-item correlation (or…
Statistical Criteria for Setting Thresholds in Medical School Admissions
ERIC Educational Resources Information Center
Albanese, Mark A.; Farrell, Philip; Dottl, Susan
2005-01-01
In 2001, Dr. Jordan Cohen, President of the AAMC, called for medical schools to consider using a Medical College Admission Test (MCAT) threshold to eliminate high-risk applicants from consideration and then to use non-academic qualifications for further consideration. This approach would seem to be consistent with the recent Supreme Court ruling…
Some Tests of Response Membership in Acquired Equivalence Classes
ERIC Educational Resources Information Center
Urcuioli, Peter J.; Lionello-DeNolf, Karen; Michalek, Sarah; Vasconcelos, Marco
2006-01-01
Pigeons were trained on many-to-one matching in which pairs of samples, each consisting of a visual stimulus and a distinctive pattern of center-key responding, occasioned the same reinforced comparison choice. Acquired equivalence between the visual and response samples then was evaluated by reinforcing new comparison choices to one set of…
Modern Managers Move Away from the Carrot and Stick Approach.
ERIC Educational Resources Information Center
Stahelski, Anthony J.; Frost, Dean E.
Studies using social power theory constructs (French and Raven, 1959) to analyze compliance attempts in field settings show that the power bases are not consistently related to any subordinate outcome variables such as job performance or attitudes. A study was undertaken to test key hypotheses derived from the social power theory concerning…
Criterion-Referenced Item Banking in Electronics: Appendix G. Final Report.
ERIC Educational Resources Information Center
Gorth, William Phillip; Swaminathan, Hariharan
This is one of the outcomes of the work of the Massachusetts Evaluation Service Center for Occupational Education (ESCOE). After an overview of the Performance Test Development Project, a summary of the major products and byproducts is presented. The major products are: (1) a set of clearly defined, well-structured, and consistent behavioral…
Comparison of modern icing cloud instruments
NASA Technical Reports Server (NTRS)
Takeuchi, D. M.; Jahnsen, L. J.; Callander, S. M.; Humbert, M. C.
1983-01-01
Intercomparison tests of Particle Measuring Systems (PMS) probes were conducted. Cloud liquid water content (LWC) measurements were also taken with a Johnson and Williams (JW) hot-wire device and an icing rate device (Leigh IDS). Tests included varying cloud LWC (0.5 to 5 g/m3), cloud median volume diameter (MVD) (15 to 26 microns), temperature (-29 to 20 C), and air speed (50 to 285 mph). Comparisons were based upon evaluating probe estimates of cloud LWC and median volume diameter for given tunnel settings. Variations of plus or minus 10% in LWC and plus or minus 5% in MVD were found between tests made at given tunnel settings (fixed LWC, MVD, and air speed), indicating that spray cloud conditions were highly reproducible. Although LWC measurements from the JW and Leigh devices were consistent with tunnel values, individual probe measurements either consistently over- or underestimated tunnel values by factors ranging from about 0.2 to 2, amounting to a factor-of-6 spread between the LWC estimates of different probes for given cloud conditions. For given cloud conditions, estimates of cloud MVD between probes were within plus or minus 3 microns in 93% of the test cases; measurements overestimated tunnel values in the 10 to 20 micron range. The need for improving currently used calibration procedures was indicated. Establishment of a test facility (or facilities), such as an icing tunnel where instruments can be calibrated against known cloud standards, would be a logical choice.
Internal consistency and stability of the CANTAB neuropsychological test battery in children.
Syväoja, Heidi J; Tammelin, Tuija H; Ahonen, Timo; Räsänen, Pekka; Tolvanen, Asko; Kankaanpää, Anna; Kantomaa, Marko T
2015-06-01
The Cambridge Neuropsychological Test Automated Battery (CANTAB) is a computer-administered test battery widely used in different populations. The internal consistency and 1-year stability of CANTAB tests were examined in school-age children. Two hundred thirty children (57% girls) from five schools in the Jyväskylä school district in Finland participated in the study in spring 2011. The children completed the following CANTAB tests: (a) visual memory (pattern recognition memory [PRM] and spatial recognition memory [SRM]), (b) executive function (spatial span [SSP], Stockings of Cambridge [SOC], and intra-extra dimensional set shift [IED]), and (c) attention (reaction time [RTI] and rapid visual information processing [RVP]). Seventy-four children (64% girls) participated in the follow-up measurements in spring 2012. Cronbach's alpha reliability coefficient was used to estimate internal consistency, and structural equation models were applied to examine the stability of the tests. The reliability and stability could not be determined for IED or SSP because of the nature of these tests. The internal consistency was acceptable only for the RTI task. The 1-year stability was moderate to good for the PRM, RTI, and RVP. The SSP and IED showed a moderate correlation between the two measurement points. The SRM and SOC tasks were not reliable or stable measures in this study population. For research purposes, we recommend using structural equation modeling to improve reliability. The results suggest that the reliability and stability of computer-based test batteries should be confirmed in the target population before using them for clinical or research purposes. (c) 2015 APA, all rights reserved.
Correlation consistent basis sets for the atoms In–Xe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahler, Andrew; Wilson, Angela K., E-mail: akwilson@unt.edu
In this work, the correlation consistent family of Gaussian basis sets has been expanded to include all-electron basis sets for In–Xe. The methodology for developing these basis sets is described, and several examples of the performance and utility of the new sets have been provided. Dissociation energies and bond lengths for both homonuclear and heteronuclear diatomics demonstrate the systematic convergence behavior with respect to increasing basis set quality expected by the family of correlation consistent basis sets in describing molecular properties. Comparison with recently developed correlation consistent sets designed for use with the Douglas-Kroll Hamiltonian is provided.
Magnetic monopole search with the MoEDAL test trapping detector
NASA Astrophysics Data System (ADS)
Katre, Akshay
2016-11-01
MoEDAL is designed to search for monopoles produced in high-energy Large Hadron Collider (LHC) collisions, based on two complementary techniques: nuclear-track detectors for high-ionisation signatures and other highly ionising avatars of new physics, and trapping volumes for direct magnetic charge measurements with a superconducting magnetometer. The MoEDAL test trapping detector array deployed in 2012, consisting of over 600 aluminium samples, was analysed and found to be consistent with zero trapped magnetic charge. Stopping acceptances are obtained from a simulation of monopole propagation in matter for a range of charges and masses, allowing model-independent and model-dependent limits to be set on monopole production cross sections. Multiples of the fundamental Dirac magnetic charge are probed for the first time at the LHC.
Phonological mismatch makes aided speech recognition in noise cognitively taxing.
Rudner, Mary; Foo, Catharina; Rönnberg, Jerker; Lunner, Thomas
2007-12-01
The working memory framework for Ease of Language Understanding predicts that speech processing becomes more effortful, thus requiring more explicit cognitive resources, when there is mismatch between speech input and phonological representations in long-term memory. To test this prediction, we changed the compression release settings in the hearing instruments of experienced users and allowed them to train for 9 weeks with the new settings. After training, aided speech recognition in noise was tested with both the trained settings and orthogonal settings. We postulated that training would lead to acclimatization to the trained setting, which in turn would involve establishment of new phonological representations in long-term memory. Further, we postulated that after training, testing with orthogonal settings would give rise to phonological mismatch, associated with more explicit cognitive processing. Thirty-two participants (mean=70.3 years, SD=7.7) with bilateral sensorineural hearing loss (pure-tone average=46.0 dB HL, SD=6.5), bilaterally fitted for more than 1 year with digital, two-channel, nonlinear signal processing hearing instruments and chosen from the patient population at the Linköping University Hospital, were randomly assigned to 9 weeks of training with new, fast (40 ms) or slow (640 ms), compression release settings in both channels. Aided speech recognition in noise was tested according to a design with three within-group factors, test occasion (T1, T2), test setting (fast, slow), and type of noise (unmodulated, modulated), and one between-group factor, experience setting (fast, slow), for two types of speech materials: the highly constrained Hagerman sentences and the less predictable Hearing in Noise Test (HINT). Complex cognitive capacity was measured using the reading span and letter monitoring tests. PREDICTION: We predicted that speech recognition in noise at T2 with mismatched experience and test settings would be associated with more explicit cognitive processing and thus stronger correlations with complex cognitive measures, as well as poorer performance if complex cognitive capacity was exceeded. Under mismatch conditions, stronger correlations were found between performance on speech recognition with the Hagerman sentences and reading span, along with poorer speech recognition for participants with low reading span scores. No consistent mismatch effect was found with HINT. The mismatch prediction generated by the working memory framework for Ease of Language Understanding is thus supported for speech recognition in noise with the highly constrained Hagerman sentences but not with the less predictable HINT.
Pyrotechnically Operated Valves for Testing and Flight
NASA Technical Reports Server (NTRS)
Conley, Edgar G.; St.Cyr, William (Technical Monitor)
2002-01-01
Pyrovalves still warrant careful description of their operating characteristics, consistent with the NASA mission to assure that both testing and flight hardware perform with the utmost reliability. So, until the development and qualification of the next generation of remotely controlled valves, in all likelihood based on shape memory alloy technology, pyrovalves will remain ubiquitous in controlling flow systems aloft and will possibly see growing use in ground-based testing facilities. In order to assist NASA in accomplishing this task, we propose a three-phase, three-year testing program. Phase I would set up an experimental facility, a 'test rig', in close cooperation with the staff located at the White Sands Test Facility in southern New Mexico.
Force Limited Vibration Test of HESSI Imager
NASA Technical Reports Server (NTRS)
Amato, Deborah; Pankow, David; Thomsen, Knud
2000-01-01
The High Energy Solar Spectroscopic Imager (HESSI) is a solar x-ray and gamma-ray observatory scheduled for launch in November 2000. Vibration testing of the HESSI imager flight unit was performed in August 1999. The HESSI imager consists of a composite metering tube, two aluminum trays mounted to the tube on titanium flexure mounts, and nine modulation grids mounted on each tray. The vibration tests were acceleration controlled and force limited, in order to prevent overtesting. The force limited strategy reduced the shaker force and notched the acceleration at resonances. The test set-up, test levels, and results are presented. The development of the force limits is also discussed. The imager successfully survived the vibration testing.
Assessing Understanding of the Learning Cycle: The ULC
NASA Astrophysics Data System (ADS)
Marek, Edmund A.; Maier, Steven J.; McCann, Florence
2008-08-01
An 18-item, multiple-choice, 2-tiered instrument designed to measure understanding of the learning cycle (ULC) was developed and field-tested from the learning cycle test (LCT) of Odom and Settlage (Journal of Science Teacher Education, 7, 123-142, 1996). All question sets of the LCT were modified to some degree and 5 new sets were added, resulting in the ULC. The ULC measures (a) understandings and misunderstandings of the learning cycle, (b) the learning cycle's association with Piaget's (Biology and Knowledge: An Essay on the Relations between Organic Regulations and Cognitive Processes, 1975) theory of mental functioning, and (c) applications of the learning cycle. The resulting ULC instrument was evaluated for internal consistency with Cronbach's alpha, yielding a coefficient of .791.
2011-01-01
Background: Cardiotocography (CTG) is the most widely used tool for fetal surveillance. The visual analysis of fetal heart rate (FHR) traces largely depends on the expertise and experience of the clinician involved, and several approaches have been proposed for the effective interpretation of FHR. In this paper, a new approach for FHR feature extraction based on empirical mode decomposition (EMD) is proposed, used along with a support vector machine (SVM) for the classification of FHR recordings as 'normal' or 'at risk'. Methods: FHR signals were recorded from 15 subjects at a sampling rate of 4 Hz, and a dataset consisting of 90 randomly selected records of 20 minutes' duration was formed from these. All records were labelled as 'normal' or 'at risk' by two experienced obstetricians. A training set was formed from 60 records, with the remaining 30 left as the testing set. The standard deviations of the EMD components are input as features to a support vector machine (SVM) to classify FHR samples. Results: For the training set, a five-fold cross-validation test resulted in an accuracy of 86%, whereas the overall geometric mean of sensitivity and specificity was 94.8%. The Kappa value for the training set was .923. Application of the proposed method to the testing set (30 records) resulted in a geometric mean of 81.5%. The Kappa value for the testing set was .684. Conclusions: Based on the overall performance of the system, the proposed methodology is a promising new approach for the feature extraction and classification of FHR signals. PMID:21244712
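The pipeline can be sketched with the PyEMD package (installable as EMD-signal). The FHR traces and labels below are synthetic stand-ins for the clinical recordings; the feature definition follows the abstract (standard deviations of the EMD components).

```python
# Sketch: EMD features from synthetic FHR traces, classified with an SVM.
import numpy as np
from PyEMD import EMD
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def emd_features(signal, n_imfs=5):
    """Standard deviation of the first few intrinsic mode functions."""
    imfs = EMD().emd(signal, max_imf=n_imfs)
    feats = imfs.std(axis=1)[: n_imfs + 1]
    return np.pad(feats, (0, n_imfs + 1 - len(feats)))  # pad if fewer IMFs found

rng = np.random.default_rng(4)
n, length = 90, 4 * 60 * 20                  # 90 records of 20 min at 4 Hz
labels = rng.integers(0, 2, n)               # 'normal' vs. 'at risk'
X = np.array([emd_features(140 + 10 * rng.standard_normal(length)
                           + 5 * labels[i] * np.sin(np.arange(length) / 50))
              for i in range(n)])

print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```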
Joseph, Saju; Kielmann, Karina; Kudale, Abhay; Sheikh, Kabir; Shinde, Swati; Porter, John; Rangan, Sheela
2010-03-01
Sex differentials in the uptake of HIV testing have been reported in a range of settings; however, men's and women's testing patterns are not consistent across these settings, suggesting the need to set sex differentials against gender norms in patient testing behaviour and provider practices. A community-based, cross-sectional survey among 347 people living with HIV in three HIV high-prevalence districts of India examined reasons for undergoing an HIV test, location of testing, and conditions under which individuals were tested. HIV testing was almost always provider-initiated for men. Men were more likely to be advised to test by a private practitioner and to test in the private sector. Women were more likely to be advised to test by a family member, and to test in the public sector. Men were more likely to receive pre-test information than women when tested in the private sector. Men were also more likely to receive direct disclosure of their HIV-positive status by a health provider, regardless of the sector in which they tested. More women than men were repeatedly tested for HIV, regardless of sector. These sex differentials in the uptake and process of HIV testing are partially explained through differences in public and private sector testing practices. However, they also reflect women's lack of awareness and agency in HIV care seeking and differential treatment by providers. Examining the gender dynamics that underpin sex differentials in HIV testing patterns and practices is essential for a realistic assessment of the challenges and implications of scaling up HIV testing and mainstreaming gender in HIV/AIDS programmes.
2013-01-01
Background: Differential gene expression (DGE) analysis is commonly used to reveal the deregulated molecular mechanisms of complex diseases. However, traditional DGE analysis (e.g., the t test or the rank sum test) tests each gene independently without considering interactions between them, so top-ranked differentially regulated genes prioritized by the analysis may not directly relate to the coherent molecular changes underlying complex diseases. Joint analyses of co-expression and DGE have been applied to reveal the deregulated molecular modules underlying complex diseases. Most of these methods consist of separate steps: first identifying gene-gene relationships under the studied phenotype and then integrating them with gene expression changes for prioritizing signature genes, or vice versa. A method is warranted that can simultaneously consider gene-gene co-expression strength and the corresponding expression level changes, so that both types of information can be leveraged optimally. Results: In this paper, we develop a gene module based method for differential gene expression analysis, named network-based differential gene expression (nDGE) analysis, a one-step integrative process for prioritizing deregulated genes and grouping them into gene modules. We demonstrate that nDGE outperforms existing methods in prioritizing deregulated genes and discovering deregulated gene modules using simulated data sets. When tested on a series of smoker and non-smoker lung adenocarcinoma data sets, the top differentially regulated genes identified by the rank sum test in different sets are not consistent, whereas the top-ranked genes defined by nDGE in different data sets significantly overlap. The nDGE results suggest that a differentially regulated gene module, enriched for cell cycle related genes and E2F1 targeted genes, plays a role in the molecular differences between smoker and non-smoker lung adenocarcinoma. Conclusions: We develop nDGE to prioritize deregulated genes and group them into gene modules by simultaneously considering gene expression level changes and gene-gene co-regulations. When applied to both simulated and empirical data, nDGE outperforms the traditional DGE method; applied to smoker and non-smoker lung cancer sets, the nDGE results illustrate the molecular differences between smoker and non-smoker lung cancer. PMID:24341432
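The one-step idea can be caricatured in a few lines: score gene pairs by combining each gene's expression change with the pair's co-expression strength, so coherent modules outrank isolated hits. The scoring function below is illustrative only, not the published nDGE statistic.

```python
# Toy module scoring: co-expression strength times average expression change.
import numpy as np
from scipy import stats

def pair_scores(case, control):
    """case/control: (samples, genes). Returns a (genes, genes) pair score matrix."""
    t, _ = stats.ttest_ind(case, control, axis=0)      # per-gene expression change
    co = np.corrcoef(np.vstack([case, control]).T)     # gene-gene co-expression
    return np.abs(co) * (np.abs(t)[:, None] + np.abs(t)[None, :]) / 2.0

rng = np.random.default_rng(8)
control = rng.normal(size=(20, 50))
case = rng.normal(size=(20, 50))
case[:, :5] += 1.0                                     # a coordinately shifted module
scores = pair_scores(case, control)
np.fill_diagonal(scores, 0)
i, j = np.unravel_index(scores.argmax(), scores.shape)
print(f"top-scoring gene pair: ({i}, {j})")            # expected within genes 0-4
```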
NASA Astrophysics Data System (ADS)
Choi, Chu Hwan
2002-09-01
Ab initio chemistry has shown great promise in reproducing experimental results and in its predictive power. Yet the many complicated computational models and methods can seem impenetrable to an inexperienced scientist, and the reliability of the results is not easily judged. Midbond orbitals are applied here to establish a general method for calculating weak intermolecular interactions, especially those involving electron-deficient systems. Using the criteria of consistency, flexibility, accuracy, and efficiency, we propose a supermolecular method of calculation using the full counterpoise (CP) method of Boys and Bernardi, coupled with Moller-Plesset (MP) perturbation theory as an efficient electron-correlation method. We also advocate the use of the highly efficient and reliable correlation-consistent polarized valence basis sets of Dunning. To these basis sets we add a general set of midbond orbitals and demonstrate greatly enhanced efficiency in the calculation. The H2-H2 dimer is taken as a benchmark test case for our method, and details of the computation are elaborated. Our method reproduces with great accuracy the dissociation energies of previous theoretical studies. The efficiency of extending the basis sets by conventional means is compared with the performance of our midbond-extended basis sets; the improvement found with midbond functions is notably superior in every case tested. Finally, a novel application of midbond functions to the BH5 complex is presented. The system is an unusual van der Waals complex. Interaction potential curves are presented for several standard basis sets and midbond-enhanced basis sets, as well as for two popular alternative correlation methods. We report that MP theory appears to be superior to coupled-cluster (CC) methods in speed, while it is more stable than B3LYP, a widely used density functional theory (DFT) method. Application of our general method yields excellent results for the midbond basis sets; again they prove superior to conventional extended basis sets. Based on these results, we recommend our general approach as a highly efficient, accurate method for calculating weakly interacting systems.
An early-biomarker algorithm predicts lethal graft-versus-host disease and survival
Hartwell, Matthew J.; Özbek, Umut; Holler, Ernst; Major-Monfried, Hannah; Reddy, Pavan; Aziz, Mina; Hogan, William J.; Ayuk, Francis; Efebera, Yvonne A.; Hexner, Elizabeth O.; Bunworasate, Udomsak; Qayed, Muna; Ordemann, Rainer; Wölfl, Matthias; Mielke, Stephan; Chen, Yi-Bin; Devine, Steven; Jagasia, Madan; Kitko, Carrie L.; Litzow, Mark R.; Kröger, Nicolaus; Locatelli, Franco; Morales, George; Nakamura, Ryotaro; Reshef, Ran; Rösler, Wolf; Weber, Daniela; Yanik, Gregory A.; Levine, John E.; Ferrara, James L.M.
2017-01-01
BACKGROUND. No laboratory test can predict the risk of nonrelapse mortality (NRM) or severe graft-versus-host disease (GVHD) after hematopoietic cellular transplantation (HCT) prior to the onset of GVHD symptoms. METHODS. Patient blood samples on day 7 after HCT were obtained from a multicenter set of 1,287 patients, and 620 samples were assigned to a training set. We measured the concentrations of 4 GVHD biomarkers (ST2, REG3α, TNFR1, and IL-2Rα) and used them to model 6-month NRM using rigorous cross-validation strategies to identify the best algorithm that defined 2 distinct risk groups. We then applied the final algorithm in an independent test set (n = 309) and validation set (n = 358). RESULTS. A 2-biomarker model using ST2 and REG3α concentrations identified patients with a cumulative incidence of 6-month NRM of 28% in the high-risk group and 7% in the low-risk group (P < 0.001). The algorithm performed equally well in the test set (33% vs. 7%, P < 0.001) and the multicenter validation set (26% vs. 10%, P < 0.001). Sixteen percent, 17%, and 20% of patients were at high risk in the training, test, and validation sets, respectively. GVHD-related mortality was greater in high-risk patients (18% vs. 4%, P < 0.001), as was severe gastrointestinal GVHD (17% vs. 8%, P < 0.001). The same algorithm can be successfully adapted to define 3 distinct risk groups at GVHD onset. CONCLUSION. A biomarker algorithm based on a blood sample taken 7 days after HCT can consistently identify a group of patients at high risk for lethal GVHD and NRM. FUNDING. The National Cancer Institute, American Cancer Society, and the Doris Duke Charitable Foundation. PMID:28194439
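As an illustration of the kind of two-biomarker rule described above, the sketch below thresholds a logistic score computed from log-transformed ST2 and REG3α concentrations. The coefficients, units, and cut point are illustrative assumptions for exposition, not the published algorithm.

```python
# Dichotomizing a logistic score over two biomarkers into risk groups.
import math

def risk_probability(st2, reg3a, b0=-2.0, b1=0.6, b2=0.4):
    """Logistic score from two concentrations; coefficients are assumed."""
    x = b0 + b1 * math.log(st2) + b2 * math.log(reg3a)
    return 1.0 / (1.0 + math.exp(-x))

def risk_group(st2, reg3a, threshold=0.8):
    """Cut point would be tuned by cross-validation, as in the study design."""
    return "high" if risk_probability(st2, reg3a) >= threshold else "low"

print(risk_group(50.0, 30.0))
```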
Patient Safety Culture Survey in Pediatric Complex Care Settings: A Factor Analysis.
Hessels, Amanda J; Murray, Meghan; Cohen, Bevin; Larson, Elaine L
2017-04-19
Children with complex medical needs are increasing in number and demanding the services of pediatric long-term care facilities (pLTC), which require a focus on patient safety culture (PSC). However, no tool to measure PSC has been tested in this unique hybrid acute care-residential setting. The objective of this study was to evaluate the psychometric properties of the Nursing Home Survey on Patient Safety Culture tool slightly modified for use in the pLTC setting. Factor analyses were performed on data collected from 239 staff at 3 pLTC in 2012. Items were screened by principal axis factoring, and the original structure was tested using confirmatory factor analysis. Exploratory factor analysis was conducted to identify the best model fit for the pLTC data, and factor reliability was assessed by Cronbach alpha. The extracted, rotated factor solution suggested items in 4 (staffing, nonpunitive response to mistakes, communication openness, and organizational learning) of the original 12 dimensions may not be a good fit for this population. Nevertheless, in the pLTC setting, both the original and the modified factor solutions demonstrated similar reliabilities to the published consistencies of the survey when tested in adult nursing homes and the items factored nearly identically as theorized. This study demonstrates that the Nursing Home Survey on Patient Safety Culture with minimal modification may be an appropriate instrument to measure PSC in pLTC settings. Additional psychometric testing is recommended to further validate the use of this instrument in this setting, including examining the relationship to safety outcomes. Increased use will yield data for benchmarking purposes across these specialized settings to inform frontline workers and organizational leaders of areas of strength and opportunity for improvement.
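Factor reliability of the kind reported above is commonly summarized with Cronbach's alpha. The sketch below computes it for one factor from an (n respondents x k items) response matrix; the simulated data mirror the study's sample size of 239 but are otherwise illustrative assumptions.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(239, 1))                     # shared latent trait
items = trait + rng.normal(scale=0.8, size=(239, 5))  # 5 correlated survey items
print(round(cronbach_alpha(items), 2))
```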
Mardirossian, Narbe; Head-Gordon, Martin
2014-03-25
The limit of accuracy for semi-empirical generalized gradient approximation (GGA) density functionals is explored in this paper by parameterizing a variety of local, global hybrid, and range-separated hybrid functionals. The training methodology employed differs from conventional approaches in 2 main ways: (1) instead of uniformly truncating the exchange, same-spin correlation, and opposite-spin correlation functional inhomogeneity correction factors, all possible fits up to fourth order are considered, and (2) instead of selecting the optimal functionals based solely on their training set performance, the fits are validated on an independent test set and ranked based on their overall performance on the training and test sets. The 3 different methods of accounting for exchange are trained both with and without dispersion corrections (DFT-D2 and VV10), resulting in a total of 491,508 candidate functionals. For each of the 9 functional classes considered, the results illustrate the trade-off between improved training set performance and diminished transferability. Since all 491,508 functionals are uniformly trained and tested, this methodology allows the relative strengths of each type of functional to be consistently compared and contrasted. Finally, the range-separated hybrid GGA functional paired with the VV10 nonlocal correlation functional emerges as the most accurate form for the present training and test sets, which span thermochemical energy differences, reaction barriers, and intermolecular interactions involving lighter main group elements.
Experimentation and evaluation of advanced integrated system concepts
NASA Astrophysics Data System (ADS)
Ross, M.; Garrigus, K.; Gottschalck, J.; Rinearson, L.; Longee, E.
1980-09-01
This final report examines the implementation of a time-phased test bed for experimentation and evaluation of advanced system concepts relative to the future Defense Switched Network (DSN). After identifying issues pertinent to the DSN, a set of experiments addressing these issues is developed. Experiments are ordered based on their immediacy and relative importance to DSN development. The set of experiments thus defined allows requirements for a time-phased implementation of a test bed to be identified, and several generic test bed architectures which meet these requirements are examined. Specific architecture implementations are costed, and cost/schedule profiles are generated as a function of experimental capability. The final recommended system consists of two separate test beds: a circuit switch test bed, configured around an off-the-shelf commercial switch and directed toward the examination of nearer-term and transitional issues raised by the evolving DSN; and a packet/hybrid test bed, featuring a discrete buildup of new hardware and software modules and directed toward examination of the more advanced integrated voice and data telecommunications issues and concepts.
Maki, Alexander; Rothman, Alexander J
2017-01-01
To better understand the consistency of people's proenvironmental intentions and behaviors, we set out to examine two sets of research questions. First, do people perform (1) different types of proenvironmental behaviors consistently, and (2) the same proenvironmental behavior consistently across settings? Second, are there consistent predictors of proenvironmental behavioral intentions across behavior and setting type? Participants reported four recycling and conservation behaviors across three settings, revealing significant variability in rates of behaviors across settings. Prior behavior, attitudes toward the behavior, and importance of the behavior consistently predicted proenvironmental intentions. However, perceived behavioral control tended to predict intentions to perform proenvironmental behavior outside the home. Future research aimed at understanding and influencing different proenvironmental behaviors should carefully consider how settings affect intentions and behavior.
A technology mapping based on graph of excitations and outputs for finite state machines
NASA Astrophysics Data System (ADS)
Kania, Dariusz; Kulisz, Józef
2017-11-01
A new, efficient technology mapping method for FSMs, dedicated to PAL-based PLDs, is proposed. The essence of the method consists in searching for the minimal set of PAL-based logic blocks that cover a set of multiple-output implicants describing the transition and output functions of an FSM. The method is based on a new concept of graph: the Graph of Excitations and Outputs. The proposed algorithm was tested using the FSM benchmarks, and the obtained results were compared with the classical technology mapping of FSMs.
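As a rough illustration of the covering problem underlying the method, the sketch below counts how many k-term PAL-type blocks are needed to realize a set of multiple-output implicants. The fixed per-output capacity model is a simplifying assumption (cascade feedback terms are ignored); it is not the paper's graph-based search.

```python
# Naive block count for PAL-based mapping: each output function needs
# ceil(implicants / k) blocks of k product terms (cascade overhead ignored).
from math import ceil

def pal_blocks_needed(implicants_per_output, k=8):
    return sum(ceil(n / k) for n in implicants_per_output.values() if n)

# Transition/output functions of a small FSM as implicant counts per output:
print(pal_blocks_needed({"Q1+": 11, "Q0+": 5, "Y": 3}, k=8))  # 2 + 1 + 1 = 4
```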
Herbst, M; Lehmhus, H; Oldenburg, B; Orlowski, C; Ohgke, H
1983-04-01
A simple experimental setup for the production and investigation of bacterially contaminated solid-state aerosols with constant concentration is described. The setup consists mainly of a fluidized-bed particle generator within a modified chamber for formaldehyde disinfection. The special conditions for producing a defined concentration of particles and microorganisms have to be determined empirically. In a first application, the aerosol sizing of an Andersen sampler is investigated. The findings of Andersen (1) are confirmed with respect to our experimental conditions.
Bobby, Zachariah; Nandeesha, H; Sridhar, M G; Soundravally, R; Setiya, Sajita; Babu, M Sathish; Niranjan, G
2014-01-01
Graduate medical students often get little opportunity to clarify their doubts and to reinforce their concepts after lecture classes. The Medical Council of India (MCI) encourages group discussions among students. We evaluated the effect of identifying mistakes in a given set of wrong statements and correcting them through small group discussion by graduate medical students as a revision exercise. At the end of a module, a pre-test consisting of multiple-choice questions (MCQs) was conducted. Later, a set of incorrect statements related to the topic was given to the students, who were asked to identify the mistakes and correct them in a small group discussion. The effects on low, medium and high achievers were evaluated by a post-test and delayed post-tests with the same set of MCQs. The mean post-test marks were significantly higher among all three groups compared to the pre-test marks. The gain from the small group discussion was equal among low, medium and high achievers, and the gain was retained in all three groups after 15 days. Identification of mistakes in statements and their correction through small group discussion is an effective, but unconventional, revision exercise in biochemistry. Copyright 2014, NMJI.
Coefficient alpha and interculture test selection.
Thurber, Steven; Kishi, Yasuhiro
2014-04-01
The internal consistency reliability of a measure can be a focal point in an evaluation of the potential adequacy of an instrument for adaptation to another cultural setting. Cronbach's alpha (α) coefficient is often used as the statistical index for such a determination. However, alpha presumes a tau-equivalent test and may constitute an inaccurate population estimate for multidimensional tests. These notions are expanded and examined with a Japanese version of a questionnaire on nursing attitudes toward suicidal patients, originally constructed in Sweden using the English language. The English measure was reported to have acceptable internal consistency (α), although the dimensionality of the questionnaire was not addressed. The Japanese scale was found to lack tau-equivalence. An alternative to alpha, "composite reliability," was computed and found to be below acceptable standards in magnitude and precision. Implications for research application of the Japanese instrument are discussed. © The Author(s) 2012.
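A minimal sketch of the "composite reliability" alternative mentioned above, computed from standardized loadings of a congeneric (non-tau-equivalent) factor model; the loadings are illustrative assumptions, not estimates from the Japanese questionnaire.

```python
# Composite reliability (McDonald-style omega) from standardized loadings:
# (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances).
def composite_reliability(loadings):
    num = sum(loadings) ** 2
    errors = sum(1.0 - l ** 2 for l in loadings)  # standardized error variances
    return num / (num + errors)

print(round(composite_reliability([0.7, 0.6, 0.5, 0.4]), 2))  # ~0.64
```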
Testing the Zimbardo Time Perspective Inventory in the Chinese context.
Wang, Ya; Chen, Xing-Jie; Cui, Ji-Fang; Liu, Lu-Lu
2015-09-01
In this study, the authors evaluated the Chinese version of the Zimbardo Time Perspective Inventory (ZTPI). The ZTPI was tested among a sample of 303 university students. A subsample of 51 participants was then asked to complete the ZTPI again along with another set of questionnaires. The five-factor model of a 20-item short version of the ZTPI showed good model fit, internal consistency, and test-retest reliability. The 20-item Chinese version of the ZTPI also provided good validity, showing correlations with other variables in expected directions. Past-Positive was positively correlated with reappraisal and negatively correlated with suppression emotion regulation strategies, and Present-Hedonistic was positively correlated with reappraisal emotion regulation strategies. These findings indicate that the ZTPI is a reliable and valid instrument for measuring time perspective in the Chinese setting. © 2015 The Institute of Psychology, Chinese Academy of Sciences and Wiley Publishing Asia Pty Ltd.
NASA Technical Reports Server (NTRS)
Rajkumar, T.; Aragon, Cecilia; Bardina, Jorge; Britten, Roy
2002-01-01
A fast, reliable way of predicting aerodynamic coefficients is produced using a neural network optimized by a genetic algorithm. Basic aerodynamic coefficients (e.g., lift, drag, pitching moment) are modelled as functions of angle of attack and Mach number. The neural network is first trained on a relatively rich set of data from wind tunnel tests or numerical simulations to learn an overall model. Most of the aerodynamic parameters can be well fitted using polynomial functions. A new set of data, which can be relatively sparse, is then supplied to the network to produce a new model consistent with the previous model and the new data. Because the new model interpolates realistically between the sparse test data points, it is suitable for use in piloted simulations. The genetic algorithm is used to choose a neural network architecture that gives the best results, avoiding over- and under-fitting of the test data.
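The remark that most aerodynamic parameters are well fitted by polynomials can be made concrete with an ordinary least-squares surface fit, sketched below on synthetic data. The polynomial order and coefficients are illustrative assumptions; the paper's actual model is the GA-optimized neural network.

```python
# Fit CL(alpha, Mach) as a low-order polynomial surface by least squares.
import numpy as np

rng = np.random.default_rng(2)
alpha = rng.uniform(-5, 15, 200)   # angle of attack, degrees
mach = rng.uniform(0.2, 0.9, 200)  # Mach number
cl = (0.1 + 0.08 * alpha - 0.05 * mach + 0.02 * alpha * mach
      + rng.normal(scale=0.01, size=200))  # synthetic "wind tunnel" data

X = np.column_stack([np.ones_like(alpha), alpha, mach,
                     alpha * mach, alpha ** 2, mach ** 2])
coef, *_ = np.linalg.lstsq(X, cl, rcond=None)
print(np.round(coef, 3))  # recovered surface coefficients
```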
Development Test 1 Advanced Attack Helicopter Competitive Evaluation Hughes YAH-64 Helicopter
1976-12-01
pilot or the copilot/gunner. The gun/rocket firing circuits were armed by selecting either guns or rockets on the armament panel (fig. 36, app B). The ... number of 30mm rounds to be fired and gun barrel positions could only be set from the gunner position for DT I testing. Once the systems were armed ... fuselage is of a semimonocoque construction of primarily aluminum alloys. It consists of 10 major bulkheads and frames and 8 major longerons and ...
Fabrication of angleply carbon-aluminum composites
NASA Technical Reports Server (NTRS)
Novak, R. C.
1974-01-01
A study was conducted to fabricate and test angleply composites consisting of NASA-Hough carbon-base monofilament in a matrix of 2024 aluminum. The effect of fabrication variables on the tensile properties was determined, and an optimum set of conditions was established. The size of the composite panels was successfully scaled up, and the material was tested to measure tensile behavior as a function of temperature, stress-rupture and creep characteristics at two elevated temperatures, bending fatigue behavior, resistance to thermal cycling, and Izod impact response.
Site characterization in densely fractured dolomite: comparison of methods.
Muldoon, Maureen; Bradbury, Ken R
2005-01-01
One of the challenges in characterizing fractured-rock aquifers is determining whether the equivalent porous medium approximation is valid at the problem scale. Detailed hydrogeologic characterization completed at a small study site in a densely fractured dolomite has yielded an extensive data set that was used to evaluate the utility of the continuum and discrete-fracture approaches to aquifer characterization. There are two near-vertical sets of fractures at the site; near-horizontal bedding-plane partings constitute a third fracture set. Eighteen boreholes, including five coreholes, were drilled to a depth of approximately 10.6 m. Borehole geophysical logs revealed several laterally extensive horizontal fractures and dissolution zones. Flowmeter and short-interval packer testing identified which of these features were hydraulically important. A monitoring system, consisting of short-interval piezometers and multilevel samplers, was designed to monitor four horizontal fractures and two dissolution zones. The resulting network consisted of >70 sampling points and allowed detailed monitoring of head distributions in three dimensions. Comparison of distributions of hydraulic head and hydraulic conductivity determined by these two approaches suggests that even in a densely fractured-carbonate aquifer, a characterization approach using traditional long-interval monitoring wells is inadequate to characterize ground water movement for the purposes of regulatory monitoring or site remediation. In addition, traditional multiwell pumping tests yield an average or bulk hydraulic conductivity that is not adequate for predicting rapid ground water travel times through the fracture network, and the pumping test response does not appear to be an adequate tool for assessing whether the porous medium approximation is valid.
Patient Core Data Set. Standard for a longitudinal health/medical record.
Renner, A L; Swart, J C
1997-01-01
Blue Chip Computers Company, in collaboration with Wright State University-Miami Valley College of Nursing and Health, with support from the Agency for Health Care Policy and Research, Public Health Service, completed Small Business Innovation Research (SBIR) work to design a comprehensive integrated patient information system. The Wright State University consultants undertook the development of a Patient Core Data Set (PCDS) in response to the lack of uniform standards for minimum data sets and the lack of standards for data transfer to support continuity of care. The purpose of the Patient Core Data Set is to develop a longitudinal patient health record and medical history using a common set of standard data elements with uniform definitions and coding consistent with the Health Level 7 (HL7) protocol and the American Society for Testing and Materials (ASTM) standards. The PCDS, intended for transfer across all patient-care settings, is essential information for clinicians, administrators, researchers, and health policy makers.
Standardizing electrofishing power for boat electrofishing: chapter 14
Miranda, L.E. (Steve); Bonar, Scott A.; Hubert, Wayne A.; Willis, David W.
2009-01-01
Standardizing boat electrofishing entails achieving an accepted level of collection consistency by managing various broad factors, including (1) the temporal and spatial distribution of sampling effort, (2) boat operation, (3) equipment configuration, (4) characteristics of the waveform and energized field, and (5) power transferred to fish. This chapter focuses exclusively on factor 5; factors 1-4 have been addressed in earlier chapters. Additionally, while the concepts covered in this chapter address boat electrofishing in general, the power settings discussed were developed from tests with primarily warmwater fish communities. Others (see Chapter 9) recommend lower power settings for communities consisting primarily of coldwater fishes. For reviews of basic concepts of electricity, electrofishing theory and systems, fish behavior relative to diverse waveforms, and injury, the reader is referred to Novotny (1990), Reynolds (1996), and Snyder (2003).
Challenges with controlling varicella in prison settings: Experience of California, 2010–2011
Leung, Jessica; Lopez, Adriana S.; Tootell, Elena; Baumrind, Nikki; Mohle-Boetani, Janet; Leistikow, Bruce; Harriman, Kathleen H.; Preas, Christopher P.; Cosentino, Giorgio; Bialek, Stephanie R.; Marin, Mona
2015-01-01
We describe the epidemiology of varicella in one state prison in California during 2010–2011, control measures implemented, and associated costs. Eleven varicella cases were reported, 9 associated with 2 outbreaks. One outbreak consisted of 3 cases and the second consisted of 6 cases with 2 generations of spread. Among exposed inmates serologically tested, 98% (643/656) were VZV sero-positive. The outbreaks resulted in >1,000 inmates exposed, 444 staff exposures, and >$160,000 in costs. We documented the challenges and costs associated with controlling and managing varicella in a prison setting. A screening policy for evidence of varicella immunity for incoming inmates and staff and vaccination of susceptible persons has the potential to mitigate the impact of future outbreaks and reduce resources necessary for managing cases and outbreaks. PMID:25201912
NASA Astrophysics Data System (ADS)
Strader, Anne; Schneider, Max; Schorlemmer, Danijel; Liukis, Maria
2016-04-01
The Collaboratory for the Study of Earthquake Predictability (CSEP) was developed to rigorously test earthquake forecasts retrospectively and prospectively through reproducible, completely transparent experiments within a controlled environment (Zechar et al., 2010). During 2006-2011, thirteen five-year time-invariant prospective earthquake mainshock forecasts developed by the Regional Earthquake Likelihood Models (RELM) working group were evaluated through the CSEP testing center (Schorlemmer and Gerstenberger, 2007). The number, spatial, and magnitude components of the forecasts were compared to the respective observed seismicity components using a set of consistency tests (Schorlemmer et al., 2007; Zechar et al., 2010). In the initial experiment, all but three forecast models passed every test at the 95% significance level, with all forecasts displaying log-likelihoods (L-test) and magnitude distributions (M-test) consistent with the observed seismicity. In the ten-year RELM experiment update, we reevaluate these earthquake forecasts over an eight-year period from 2008 to 2016 to determine the consistency of previous likelihood testing results over longer time intervals. Additionally, we test the Uniform California Earthquake Rupture Forecast (UCERF2), developed by the U.S. Geological Survey (USGS), and the earthquake rate model developed by the California Geological Survey (CGS) and the USGS for the National Seismic Hazard Mapping Program (NSHMP) against the RELM forecasts. Both the UCERF2 and NSHMP forecasts pass all consistency tests, though the Helmstetter et al. (2007) and Shen et al. (2007) models exhibit greater information gain per earthquake according to the T- and W-tests (Rhoades et al., 2011). Though all but three RELM forecasts pass the spatial likelihood test (S-test), multiple forecasts fail the M-test due to overprediction of the number of earthquakes during the target period. Though there is no significant difference between the UCERF2 and NSHMP models, residual scores show that the NSHMP model is preferred in locations with earthquake occurrence, due to the lower seismicity rates forecasted by the UCERF2 model.
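As an illustration of one consistency check in the CSEP style described above, the sketch below implements a two-sided Poisson number test (N-test) comparing an observed earthquake count with a forecast's expected count. The counts and significance level are illustrative assumptions, not the exact CSEP implementation.

```python
# Two-sided Poisson N-test: fail if the observed count sits in either tail.
from scipy.stats import poisson

def n_test(n_observed, n_forecast, alpha=0.05):
    lower = poisson.cdf(n_observed, n_forecast)     # P(N <= n_obs)
    upper = poisson.sf(n_observed - 1, n_forecast)  # P(N >= n_obs)
    return min(lower, upper) >= alpha / 2           # True = forecast passes

print(n_test(n_observed=28, n_forecast=35.2))  # mild overprediction still passes
```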
Radiation protection in dental X-ray surgeries--still rooms for improvement.
Hart, G; Dugdale, M
2013-03-01
To illustrate the authors' experience in the provision of radiation protection adviser (RPA)/medical physics expert (MPE) services and critical examination/radiation quality assurance (QA) testing, to demonstrate any continuing variability of the compliance of X-ray sets with existing guidance and of compliance of dental practices with existing legislation. Data was collected from a series of critical examination and routine three-yearly radiation QA tests on 915 intra-oral X-ray sets and 124 panoramic sets. Data are the result of direct measurements on the sets, made using a traceably calibrated Unfors Xi meter. The testing covered the measurement of peak kilovoltage (kVp); filtration; timer accuracy and consistency; X-ray beam size; and radiation output, measured as the entrance surface dose in milliGray (mGy) for intra-oral sets and dose-area product (DAP), measured in mGy.cm(2) for panoramic sets. Physical checks, including mechanical stability, were also included as part of the testing process. The Health and Safety Executive has expressed concern about the poor standards of compliance with the regulations during inspections at dental practices. Thirty-five percent of intra-oral sets exceeded the UK adult diagnostic reference level on at least one setting, as did 61% of those with child dose settings. There is a clear advantage of digital radiography and rectangular collimation in dose terms, with the mean dose from digital sets 59% that of film-based sets and a rectangular collimator 76% that of circular collimators. The data shows the unrealised potential for dose saving in many digital sets and also marked differences in dose between sets. Provision of radiation protection advice to over 150 general dental practitioners raised a number of issues on the design of surgeries with X-ray equipment and critical examination testing. There is also considerable variation in advice given on the need (or lack of need) for room shielding. Where no radiation protection adviser (RPA) or medical physics expert (MPE) appointment has been made, there is often a very low level of compliance with legislative requirements. The active involvement of an RPA/MPE and continuing education on radiation protection issues has the potential to reduce radiation doses significantly further in many dental practices.
The Immune System as a Model for Pattern Recognition and Classification
Carter, Jerome H.
2000-01-01
Objective: To design a pattern recognition engine based on concepts derived from mammalian immune systems. Design: A supervised learning system (Immunos-81) was created using software abstractions of T cells, B cells, antibodies, and their interactions. Artificial T cells control the creation of B-cell populations (clones), which compete for recognition of “unknowns.” The B-cell clone with the “simple highest avidity” (SHA) or “relative highest avidity” (RHA) is considered to have successfully classified the unknown. Measurement: Two standard machine learning data sets, consisting of eight nominal and six continuous variables, were used to test the recognition capabilities of Immunos-81. The first set (Cleveland), consisting of 303 cases of patients with suspected coronary artery disease, was used to perform a ten-way cross-validation. After completing the validation runs, the Cleveland data set was used as a training set prior to presentation of the second data set, consisting of 200 unknown cases. Results: For cross-validation runs, correct recognition using SHA ranged from a high of 96 percent to a low of 63.2 percent. The average correct classification for all runs was 83.2 percent. Using the RHA metric, 11.2 percent were labeled “too close to determine” and no further attempt was made to classify them. Of the remaining cases, 85.5 percent were correctly classified. When the second data set was presented, correct classification occurred in 73.5 percent of cases when SHA was used and in 80.3 percent of cases when RHA was used. Conclusions: The immune system offers a viable paradigm for the design of pattern recognition systems. Additional research is required to fully exploit the nuances of immune computation. PMID:10641961
Rivoirard, Romain; Duplay, Vianney; Oriol, Mathieu; Tinquaut, Fabien; Chauvin, Franck; Magne, Nicolas; Bourmaud, Aurelie
2016-01-01
Background Quality of reporting for Randomized Clinical Trials (RCTs) in oncology was analyzed in several systematic reviews, but, in this setting, there is a paucity of data on outcome definitions and on the consistency of reporting of statistical tests in RCTs and Observational Studies (OBS). The objective of this review was to describe those two reporting aspects for OBS and RCTs in oncology. Methods From a list of 19 medical journals, three were retained for analysis after a random selection: British Medical Journal (BMJ), Annals of Oncology (AoO) and British Journal of Cancer (BJC). All original articles published between March 2009 and March 2014 were screened. Only studies whose main outcome was accompanied by a corresponding statistical test were included in the analysis. Studies based on censored data were excluded. The primary outcome was to assess the quality of reporting of the primary outcome measure in RCTs and of the variables of interest in OBS. A logistic regression was performed to identify study covariates potentially associated with concordance of tests between the Methods and Results sections. Results 826 studies were included in the review, and 698 were OBS. Variables were described in the Methods section for all OBS studies, and the primary endpoint was clearly detailed in the Methods section for 109 RCTs (85.2%). 295 OBS (42.2%) and 43 RCTs (33.6%) had perfect agreement for the reported statistical test between the Methods and Results sections. In multivariable analysis, the variable "number of included patients in study" was associated with test consistency: the aOR (adjusted Odds Ratio) for the third group compared to the first group was aOR Grp3 = 0.52 [0.31–0.89] (P value = 0.009). Conclusion Variables in OBS and primary endpoints in RCTs are reported and described with high frequency. However, consistency of statistical tests between the Methods and Results sections of OBS is not always observed. Therefore, we encourage authors and peer reviewers to verify the consistency of statistical tests in oncology studies. PMID:27716793
Application of the Trend Filtering Algorithm for Photometric Time Series Data
NASA Astrophysics Data System (ADS)
Gopalan, Giri; Plavchan, Peter; van Eyken, Julian; Ciardi, David; von Braun, Kaspar; Kane, Stephen R.
2016-08-01
Detecting transient light curves (e.g., transiting planets) requires high-precision data, and thus it is important to effectively filter systematic trends affecting ground-based wide-field surveys. We apply an implementation of the Trend Filtering Algorithm (TFA) to the 2MASS calibration catalog and select Palomar Transient Factory (PTF) photometric time series data. TFA is successful at reducing the overall dispersion of light curves, however, it may over-filter intrinsic variables and increase “instantaneous” dispersion when a template set is not judiciously chosen. In an attempt to rectify these issues we modify the original TFA from the literature by including measurement uncertainties in its computation, including ancillary data correlated with noise, and algorithmically selecting a template set using clustering algorithms as suggested by various authors. This approach may be particularly useful for appropriately accounting for variable photometric precision surveys and/or combined data sets. In summary, our contributions are to provide a MATLAB software implementation of TFA and a number of modifications tested on synthetics and real data, summarize the performance of TFA and various modifications on real ground-based data sets (2MASS and PTF), and assess the efficacy of TFA and modifications using synthetic light curve tests consisting of transiting and sinusoidal variables. While the transiting variables test indicates that these modifications confer no advantage to transit detection, the sinusoidal variables test indicates potential improvements in detection accuracy.
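The core TFA step described above, modeling a target light curve as a linear combination of template light curves and subtracting the fitted trend, reduces to a least-squares problem; including measurement uncertainties as weights is one of the modifications discussed. Shapes, noise levels, and the weighting below are illustrative assumptions rather than the authors' MATLAB implementation.

```python
# Uncertainty-weighted TFA-style detrending via linear least squares.
import numpy as np

rng = np.random.default_rng(3)
n_epochs, n_templates = 500, 20
templates = rng.normal(size=(n_epochs, n_templates))     # template light curves
trend = templates @ rng.normal(size=n_templates) * 0.05  # shared systematics
target = trend + rng.normal(scale=0.01, size=n_epochs)   # target star

sigma = np.full(n_epochs, 0.01)                          # per-point uncertainties
w = 1.0 / sigma
coef, *_ = np.linalg.lstsq(templates * w[:, None], target * w, rcond=None)
detrended = target - templates @ coef
print(target.std(), detrended.std())                     # dispersion drops
```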
Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies.
David, Maria Pamela C; Concepcion, Gisela P; Padlan, Eduardo A
2010-02-08
All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis would actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set comprised of 103 AM and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented by the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study are limited, and are consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and the expansion of the training set to include not only more derivatives, but more alignments, would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered. The development of this type of classifier has significant applications in evaluating engineered antibodies, and may be adapted for evaluating engineered proteins in general.
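A minimal sketch of the evaluation protocol described above, leave-one-out cross-validation of a naive Bayes classifier and a decision tree, using scikit-learn; the random features stand in for encoded sequence properties and are purely illustrative, so the accuracies here mean nothing.

```python
# LOO cross-validation of the two classifier types compared in the study.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(143, 10))    # 143 sequences, 10 encoded features (toy)
y = rng.integers(0, 2, size=143)  # 1 = amyloidogenic (labels illustrative)

for model in (GaussianNB(), DecisionTreeClassifier(random_state=0)):
    acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
    print(type(model).__name__, round(acc, 3))
```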
A Report on the Findings of the Administration of the Institutional Goals Inventory.
ERIC Educational Resources Information Center
Brevard Community Coll., Cocoa, FL.
The Institutional Goals Inventory (I.G.I.), an integral part of the Institutional Goals-Setting Model developed at Brevard Community College during the fall of 1973, was field-tested during the period December 15, 1973 through February 1, 1974. The Inventory consists of 90 statements of possible institutional goals, to which the respondent gives…
Psychometric Inferences from a Meta-Analysis of Reliability and Internal Consistency Coefficients
ERIC Educational Resources Information Center
Botella, Juan; Suero, Manuel; Gambara, Hilda
2010-01-01
A meta-analysis of the reliability of the scores from a specific test, also called reliability generalization, allows the quantitative synthesis of its properties from a set of studies. It is usually assumed that part of the variation in the reliability coefficients is due to some unknown and implicit mechanism that restricts and biases the…
Adverse Selection in Insurance Markets: Policyholder Evidence from the U.K. Annuity Market
ERIC Educational Resources Information Center
Finkelstein, Amy; Poterba, James
2004-01-01
We use a unique data set of annuities in the United Kingdom to test for adverse selection. We find systematic relationships between ex post mortality and annuity characteristics, such as the timing of payments and the possibility of payments to the annuitant's estate. These patterns are consistent with the presence of asymmetric information.…
A Curriculum Activities Guide to Water Pollution and Environmental Studies, Volume I - Activities.
ERIC Educational Resources Information Center
Hershey, John T., Ed.; And Others
This publication, Volume I of a two volume set, consists of many tested water pollution study activities. The activities are grouped into four headings: (1) Hydrologic Cycle, (2) Human Activities, (3) Ecological Perspectives, and (4) Social and Political Factors. Three levels of activities are provided: (1) those which increase awareness, (2)…
Differential item functioning analysis of the Vanderbilt Expertise Test for cars.
Lee, Woo-Yeol; Cho, Sun-Joo; McGugin, Rankin W; Van Gulick, Ana Beth; Gauthier, Isabel
2015-01-01
The Vanderbilt Expertise Test for cars (VETcar) is a test of visual learning for contemporary car models. We used item response theory to assess the VETcar and in particular used differential item functioning (DIF) analysis to ask if the test functions the same way in laboratory versus online settings and for different groups based on age and gender. An exploratory factor analysis found evidence of multidimensionality in the VETcar, although a single dimension was deemed sufficient to capture the recognition ability measured by the test. We selected a unidimensional three-parameter logistic item response model to examine item characteristics and subject abilities. The VETcar had satisfactory internal consistency. A substantial number of items showed DIF at a medium effect size for test setting and for age group, whereas gender DIF was negligible. Because online subjects were on average older than those tested in the lab, we focused on the age groups to conduct a multigroup item response theory analysis. This revealed that most items on the test favored the younger group. DIF could be more the rule than the exception when measuring performance with familiar object categories, therefore posing a challenge for the measurement of either domain-general visual abilities or category-specific knowledge.
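For reference, the unidimensional three-parameter logistic (3PL) model selected above gives the probability of a correct response to item i for a subject at latent ability theta as

\[
P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}},
\]

where a_i is the item discrimination, b_i the item difficulty, and c_i the pseudo-guessing lower asymptote. DIF analysis then asks whether these item parameters differ between groups (e.g., laboratory versus online, or younger versus older) for subjects matched on ability.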
Ibrahim, Azianah; Singh, Devinder Kaur Ajit; Shahar, Suzana; Omar, Mohd Azahadi
2017-01-01
Background Early detection of falls risk among older adults using simple tools may assist in fall prevention strategies. The aim of this study was to identify the best parameters associated with previous falls, either the timed up and go (TUG) test combined with sociodemographic factors and a self-rated multifactorial questionnaire (SRMQ) on falls risk or the TUG on its own. Falls risk was determined based on parameters associated with previous falls. Design This was a retrospective cohort study. Setting The study was conducted in a community setting. Participants The participants were 1,086 community-dwelling older adults, with mean age of 69.6±5.6 years. Participants were categorized into fallers and nonfallers based on their history of falls in the past 12 months. Method Participants’ sociodemographic data was taken, and SRMQ consisting of five falls-related questions was administered. Participants performed the TUG test twice, and the mean was taken as the result. Results A total of 161 participants were categorized as fallers (14.8%). Multivariate logistic regression analysis showed that the model (χ2(6)=61.0, p<0.001, Nagelkerke R2=0.10) consisting of the TUG test, sociodemographic factors (gender, cataract/glaucoma and joint pain), as well as the SRMQ items “previous falls history” (Q1) and “worried of falls” (Q5), was more robust in terms of falls risk association compared to that with TUG on its own (χ2(1)=10.3, p<0.001, Nagelkerke R2=0.02). Conclusion Combination of sociodemographic factors and SRMQ with TUG is more favorable as an initial falls risk screening tool among community-dwelling older adults. Subsequently, further comprehensive falls risk assessment may be performed in clinical settings to identify the specific impairments for effective management. PMID:29138571
TDRSS system configuration study for space shuttle program
NASA Technical Reports Server (NTRS)
1978-01-01
This study was set up to assure that operation of the shuttle orbiter communications systems would meet the program requirements when subjected to electrical conditions similar to those to be encountered during the operational mission. The test program was intended to implement an integrated test bed consisting of applicable orbiter, EVA, payload simulator, STDN, and AF/SCF equipment, as well as the TDRSS equipment. The stated intention of the Task 501 Program was to configure the test bed with prototype hardware for a system development test and production hardware for a system verification test. In the case of TDRSS, where hardware was not available, simulators whose functional performance was certified to meet the appropriate end-item specifications were used.
ERIC Educational Resources Information Center
Northwest Regional Educational Lab., Portland, OR.
This document consists of 80 microcomputer software package evaluations prepared by the MicroSIFT (Microcomputer Software and Information for Teachers) Clearinghouse at the Northwest Regional Education Laboratory. Set 15 consists of 27 packages; set 16 consists of 53 packages. Each software review lists producer, time and place of evaluation,…
Marelli, Marco; Amenta, Simona; Crepaldi, Davide
2015-01-01
A largely overlooked side effect in most studies of morphological priming is a consistent main effect of semantic transparency across priming conditions. That is, participants are faster at recognizing stems from transparent sets (e.g., farm) in comparison to stems from opaque sets (e.g., fruit), regardless of the preceding primes. This suggests that semantic transparency may also be consistently associated with some property of the stem word. We propose that this property might be traced back to the consistency, throughout the lexicon, between the orthographic form of a word and its meaning, here named Orthography-Semantics Consistency (OSC), and that an imbalance in OSC scores might explain the "stem transparency" effect. We exploited distributional semantic models to quantitatively characterize OSC, and tested its effect on visual word identification relying on large-scale data taken from the British Lexicon Project (BLP). Results indicated that (a) the "stem transparency" effect is solid and reliable, insofar as it holds in BLP lexical decision times (Experiment 1); (b) an imbalance in terms of OSC can account for it (Experiment 2); and (c) more generally, OSC explains variance in a large item sample from the BLP, proving to be an effective predictor in visual word access (Experiment 3).
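A minimal sketch of an OSC-style score in the spirit described above: the mean distributional similarity between a word and the words that orthographically contain it. The toy vectors, the containment criterion, and the unweighted mean are illustrative assumptions, not necessarily the exact published formulation.

```python
# Orthography-Semantics Consistency, approximated as mean cosine similarity
# between a word's vector and the vectors of its orthographic relatives.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def osc(word, lexicon, vectors):
    relatives = [w for w in lexicon if word in w and w != word]
    if not relatives:
        return float("nan")
    return float(np.mean([cosine(vectors[word], vectors[w]) for w in relatives]))

rng = np.random.default_rng(5)
lex = ["farm", "farmer", "farming", "fruit", "fruitless"]
vecs = {w: rng.normal(size=50) for w in lex}  # stand-in distributional vectors
print(osc("farm", lex, vecs), osc("fruit", lex, vecs))
```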
Testing Spatial Symmetry Using Contingency Tables Based on Nearest Neighbor Relations
Ceyhan, Elvan
2014-01-01
We consider two types of spatial symmetry, namely, symmetry in the mixed or shared nearest neighbor (NN) structures. We use Pielou's and Dixon's symmetry tests which are defined using contingency tables based on the NN relationships between the data points. We generalize these tests to multiple classes and demonstrate that both the asymptotic and exact versions of Pielou's first type of symmetry test are extremely conservative in rejecting symmetry in the mixed NN structure and hence should be avoided or only the Monte Carlo randomized version should be used. Under RL, we derive the asymptotic distribution for Dixon's symmetry test and also observe that the usual independence test seems to be appropriate for Pielou's second type of test. Moreover, we apply variants of Fisher's exact test on the shared NN contingency table for Pielou's second test and determine the most appropriate version for our setting. We also consider pairwise and one-versus-rest type tests in post hoc analysis after a significant overall symmetry test. We investigate the asymptotic properties of the tests, prove their consistency under appropriate null hypotheses, and investigate finite sample performance of them by extensive Monte Carlo simulations. The methods are illustrated on a real-life ecological data set. PMID:24605061
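A minimal sketch of the nearest-neighbor contingency table on which Pielou's and Dixon's tests operate: cell (i, j) counts points of class i whose nearest neighbor belongs to class j, and the symmetry tests compare the off-diagonal cells. The coordinates and labels below are randomly generated stand-ins.

```python
# Build a 2-class nearest-neighbor contingency table.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)
points = rng.uniform(size=(60, 2))
labels = rng.integers(0, 2, size=60)

tree = cKDTree(points)
_, idx = tree.query(points, k=2)  # k=2: column 0 is the point itself
nn_labels = labels[idx[:, 1]]     # class of each point's nearest neighbor

table = np.zeros((2, 2), dtype=int)
for base, nn in zip(labels, nn_labels):
    table[base, nn] += 1
print(table)                      # symmetry: compare table[0,1] vs table[1,0]
```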
A test matrix sequencer for research test facility automation
NASA Technical Reports Server (NTRS)
Mccartney, Timothy P.; Emery, Edward F.
1990-01-01
The hardware and software configuration of a Test Matrix Sequencer, a general-purpose test matrix profiler that was developed for research test facility automation at the NASA Lewis Research Center, is described. The system provides set points to controllers and contact closures to data systems during the course of a test. The Test Matrix Sequencer consists of a microprocessor-controlled system which is operated from a personal computer. The software program, which is the main element of the overall system, is interactive and menu-driven, with pop-up windows and help screens. Analog and digital input/output channels can be controlled from a personal computer using the software program. The Test Matrix Sequencer provides more efficient use of aeronautics test facilities by automating repetitive tasks that were once done manually.
Soh, BaoLin Pauline; Lee, Warwick Bruce; Mello-Thoms, Claudia; Tapia, Kriscia; Ryan, John; Hung, Wai Tak; Thompson, Graham; Heard, Rob; Brennan, Patrick
2015-08-01
Test sets have been increasingly utilised to augment clinical audit in breast screening programmes; however, their relationship has never been satisfactorily understood. This study examined the relationship between mammographic test set performance and clinical audit data. Clinical audit data over a 2-year period was generated for each of 20 radiologists. Sixty mammographic examinations, consisting of 40 normal and 20 cancer cases, formed the test set. Readers located any identifiable cancer, and levels of confidence were scored from 2 to 5, where a score of 3 and above is considered a recall rating. Jackknifing free response operating characteristic (JAFROC) figure-of-merit (FOM), location sensitivity and specificity were calculated for individual readers and then compared with clinical audit values using Spearman's rho. JAFROC FOM showed significant correlations to: recall rate at a first round of screening (r = 0.51; P = 0.02); rate of small invasive cancers per 10 000 reads (r = 0.5; P = 0.02); percentage of all cancers read that were not recalled (r = -0.51; P = 0.02); and sensitivity (r = 0.51; P = 0.02). Location sensitivity demonstrated significant correlations with: rate of small invasive cancers per 10 000 reads (r = 0.46; P = 0.04); rate of DCIS (ductal carcinoma in situ) per 10 000 reads (r = 0.44; P = 0.05); detection rate of all invasive cancers and DCIS per 10 000 reads (r = 0.54; P = 0.01); percentage of all cancers read that were not recalled (r = -0.57; P = 0.009); and sensitivity (r = 0.57; P = 0.009). No other significant relationships were noted. Performance indicators from test set demonstrate significant correlations with specific aspects of clinical performance, although caution needs to be exercised when generalising test set specificity to the clinical situation. © 2015 The Royal Australian and New Zealand College of Radiologists.
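A minimal sketch of the correlation analysis reported above: Spearman's rho between a reader-level test-set figure of merit and a clinical audit indicator across 20 readers. The values are random stand-ins, not the study's data.

```python
# Spearman rank correlation between test-set and audit performance.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
jafroc_fom = rng.uniform(0.6, 0.9, size=20)                 # test-set FOM
sensitivity = jafroc_fom + rng.normal(scale=0.05, size=20)  # audit sensitivity

rho, p = spearmanr(jafroc_fom, sensitivity)
print(round(rho, 2), round(p, 3))
```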
Longitudinal Multiple Sclerosis Lesion Segmentation: Resource & Challenge
Carass, Aaron; Roy, Snehashis; Jog, Amod; Cuzzocreo, Jennifer L.; Magrath, Elizabeth; Gherman, Adrian; Button, Julia; Nguyen, James; Prados, Ferran; Sudre, Carole H.; Cardoso, Manuel Jorge; Cawley, Niamh; Ciccarelli, Olga; Wheeler-Kingshott, Claudia A. M.; Ourselin, Sébastien; Catanese, Laurence; Deshpande, Hrishikesh; Maurel, Pierre; Commowick, Olivier; Barillot, Christian; Tomas-Fernandez, Xavier; Warfield, Simon K.; Vaidya, Suthirth; Chunduru, Abhijith; Muthuganapathy, Ramanathan; Krishnamurthi, Ganapathy; Jesson, Andrew; Arbel, Tal; Maier, Oskar; Handels, Heinz; Iheme, Leonardo O.; Unay, Devrim; Jain, Saurabh; Sima, Diana M.; Smeets, Dirk; Ghafoorian, Mohsen; Platel, Bram; Birenbaum, Ariel; Greenspan, Hayit; Bazin, Pierre-Louis; Calabresi, Peter A.; Crainiceanu, Ciprian M.; Ellingsen, Lotta M.; Reich, Daniel S.; Prince, Jerry L.; Pham, Dzung L.
2017-01-01
In conjunction with the ISBI 2015 conference, we organized a longitudinal lesion segmentation challenge providing training and test data to registered participants. The training data consisted of five subjects with a mean of 4.4 time-points, and test data of fourteen subjects with a mean of 4.4 time-points. All 82 data sets had the white matter lesions associated with multiple sclerosis delineated by two human expert raters. Eleven teams submitted results using state-of-the-art lesion segmentation algorithms to the challenge, with ten teams presenting their results at the conference. We present a quantitative evaluation comparing the consistency of the two raters as well as exploring the performance of the eleven submitted results in addition to three other lesion segmentation algorithms. The challenge presented three unique opportunities: 1) the sharing of a rich data set; 2) collaboration and comparison of the various avenues of research being pursued in the community; and 3) a review and refinement of the evaluation metrics currently in use. We report on the performance of the challenge participants, as well as the construction and evaluation of a consensus delineation. The image data and manual delineations will continue to be available for download through an evaluation website as a resource for future researchers in the area. This data resource provides a platform to compare existing methods in a fair and consistent manner to each other and multiple manual raters. PMID:28087490
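As one example of the evaluation metrics such challenges review, the sketch below computes the Dice overlap between a predicted and a reference binary lesion mask; the masks are toy arrays, and the challenge's actual metric suite is broader than this single score.

```python
# Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

a = np.zeros((8, 8)); a[2:5, 2:5] = 1
b = np.zeros((8, 8)); b[3:6, 3:6] = 1
print(round(dice(a, b), 3))  # partial overlap -> 0.444
```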
The Function Biomedical Informatics Research Network Data Repository.
Keator, David B; van Erp, Theo G M; Turner, Jessica A; Glover, Gary H; Mueller, Bryon A; Liu, Thomas T; Voyvodic, James T; Rasmussen, Jerod; Calhoun, Vince D; Lee, Hyo Jong; Toga, Arthur W; McEwen, Sarah; Ford, Judith M; Mathalon, Daniel H; Diaz, Michele; O'Leary, Daniel S; Jeremy Bockholt, H; Gadde, Syam; Preda, Adrian; Wible, Cynthia G; Stern, Hal S; Belger, Aysenil; McCarthy, Gregory; Ozyurt, Burak; Potkin, Steven G
2016-01-01
The Function Biomedical Informatics Research Network (FBIRN) developed methods and tools for conducting multi-scanner functional magnetic resonance imaging (fMRI) studies. Method and tool development were based on two major goals: 1) to assess the major sources of variation in fMRI studies conducted across scanners, including instrumentation, acquisition protocols, challenge tasks, and analysis methods, and 2) to provide a distributed network infrastructure and an associated federated database to host and query large, multi-site, fMRI and clinical data sets. In the process of achieving these goals the FBIRN test bed generated several multi-scanner brain imaging data sets to be shared with the wider scientific community via the BIRN Data Repository (BDR). The FBIRN Phase 1 data set consists of a traveling subject study of 5 healthy subjects, each scanned on 10 different 1.5 to 4 T scanners. The FBIRN Phase 2 and Phase 3 data sets consist of subjects with schizophrenia or schizoaffective disorder along with healthy comparison subjects scanned at multiple sites. In this paper, we provide concise descriptions of FBIRN's multi-scanner brain imaging data sets and details about the BIRN Data Repository instance of the Human Imaging Database (HID) used to publicly share the data. Copyright © 2015 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Northwest Regional Educational Lab., Portland, OR.
This document consists of 170 microcomputer software package evaluations prepared by the MicroSIFT (Microcomputer Software and Information for Teachers) Clearinghouse at the Northwest Regional Education Laboratory. Set 11 consists of 37 packages. Set 12 consists of 34 packages. A special unnumbered set, entitled LIBRA Reviews, treats 99 packages…
The achromatic locus: Effect of navigation direction in color space
Chauhan, Tushar; Perales, Esther; Xiao, Kaida; Hird, Emily; Karatzas, Dimosthenis; Wuerger, Sophie
2014-01-01
An achromatic stimulus is defined as a patch of light that is devoid of any hue. This is usually achieved by asking observers to adjust the stimulus such that it looks neither red nor green and at the same time neither yellow nor blue. Despite the theoretical and practical importance of the achromatic locus, little is known about the variability in these settings. The main purpose of the current study was to evaluate whether achromatic settings were dependent on the task of the observers, namely the navigation direction in color space. Observers could either adjust the test patch along the two chromatic axes in the CIE u*v* diagram or, alternatively, navigate along the unique-hue lines. Our main result is that the navigation method affects the reliability of these achromatic settings. Observers are able to make more reliable achromatic settings when adjusting the test patch along the directions defined by the four unique hues as opposed to navigating along the main axes in the commonly used CIE u*v* chromaticity plane. This result holds across different ambient viewing conditions (Dark, Daylight, Cool White Fluorescent) and different test luminance levels (5, 20, and 50 cd/m2). The reduced variability in the achromatic settings is consistent with the idea that internal color representations are more aligned with the unique-hue lines than the u* and v* axes. PMID:24464164
NASA Astrophysics Data System (ADS)
Strohmeier, Dominik; Kunze, Kristina; Göbel, Klemens; Liebetrau, Judith
2013-01-01
Assessing audiovisual Quality of Experience (QoE) is a key element to ensure quality acceptance of today's multimedia products. The use of descriptive evaluation methods allows evaluating QoE preferences and the underlying QoE features jointly. From our previous evaluations on QoE for mobile 3D video we found that mainly one dimension, video quality, dominates the descriptive models. Large variations of the visual video quality in the tests may be the reason for these findings. A new study was conducted to investigate whether test sets of low QoE are described differently than those of high audiovisual QoE. Reanalysis of previous data sets seems to confirm this hypothesis. Our new study consists of a pre-test and a main test, using the Descriptive Sorted Napping method. Data sets of good-only and bad-only video quality were evaluated separately. The results show that the perception of bad QoE is mainly determined one-dimensionally by visual artifacts, whereas the perception of good quality shows multiple dimensions. Here, mainly semantic-related features of the content and affective descriptors are used by the naïve test participants. The results show that, with increasing QoE of audiovisual systems, content semantics and users' affective involvement will become important for assessing QoE differences.
Eo, Taejoon; Jun, Yohan; Kim, Taeseong; Jang, Jinseong; Lee, Ho-Joon; Hwang, Dosik
2018-04-06
To demonstrate accurate MR image reconstruction from undersampled k-space data using cross-domain convolutional neural networks (CNNs). METHODS: Cross-domain CNNs consist of 3 components: (1) a deep CNN operating on k-space (KCNN), (2) a deep CNN operating on the image domain (ICNN), and (3) interleaved data consistency operations. These components are alternately applied, and each CNN is trained to minimize the loss between the reconstructed and corresponding fully sampled k-spaces. The final reconstructed image is obtained by forward-propagating the undersampled k-space data through the entire network. Performances of K-net (KCNN with inverse Fourier transform), I-net (ICNN with interleaved data consistency), and various combinations of the 2 different networks were tested. The test results indicated that K-net and I-net have different advantages/disadvantages in terms of tissue-structure restoration. Consequently, the combination of K-net and I-net is superior to single-domain CNNs. Three MR data sets, the T2 fluid-attenuated inversion recovery (T2 FLAIR) set from the Alzheimer's Disease Neuroimaging Initiative and 2 data sets acquired at our local institute (T2 FLAIR and T1-weighted), were used to evaluate the performance of 7 conventional reconstruction algorithms and the proposed cross-domain CNNs, which hereafter is referred to as KIKI-net. KIKI-net outperforms conventional algorithms with mean improvements of 2.29 dB in peak SNR and 0.031 in structure similarity. KIKI-net exhibits superior performance over state-of-the-art conventional algorithms in terms of restoring tissue structures and removing aliasing artifacts. The results demonstrate that KIKI-net is applicable up to a reduction factor of 3 to 4 based on variable-density Cartesian undersampling. © 2018 International Society for Magnetic Resonance in Medicine.
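A minimal sketch of the cross-domain idea follows. It is not the authors' KIKI-net code; the layer sizes, toy sampling mask, and tensor shapes are invented. It alternates a k-space CNN and an image-domain CNN with a data-consistency step that re-inserts the measured k-space samples at sampled locations.

```python
import torch
import torch.nn as nn

def small_cnn():
    # toy stand-in for the paper's deep CNNs; 2 channels = real/imag parts
    return nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 2, 3, padding=1))

def ch2complex(x):   # (B,2,H,W) float -> (B,H,W) complex
    return torch.view_as_complex(x.permute(0, 2, 3, 1).contiguous())

def complex2ch(z):   # (B,H,W) complex -> (B,2,H,W) float
    return torch.view_as_real(z).permute(0, 3, 1, 2)

def data_consistency(k_est, k_meas, mask):
    # keep the network output only where k-space was NOT sampled
    return torch.where(mask, k_meas, k_est)

kcnn, icnn = small_cnn(), small_cnn()

k_meas = torch.randn(1, 2, 64, 64)                    # undersampled k-space (toy)
mask = torch.zeros(1, 2, 64, 64, dtype=torch.bool)
mask[..., ::3] = True                                 # toy stand-in sampling mask

k = data_consistency(kcnn(k_meas), k_meas, mask)      # K-net pass + DC
img = complex2ch(torch.fft.ifft2(ch2complex(k)))      # to image domain
img = icnn(img)                                       # I-net pass
k = complex2ch(torch.fft.fft2(ch2complex(img)))       # back to k-space
k = data_consistency(k, k_meas, mask)                 # final DC
recon = torch.fft.ifft2(ch2complex(k)).abs()          # magnitude image
print(recon.shape)  # torch.Size([1, 64, 64])
```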
Castañeda, Sheila F; Bharti, Balambal; Espinoza-Giacinto, Rebeca Aurora; Sanchez, Valerie; O'Connell, Shawne; Muñoz, Fatima; Mercado, Sylvia; Meza, Marie Elena; Rojas, Wendy; Talavera, Gregory A; Gupta, Samir
2017-06-20
Regular use of colorectal cancer (CRC) screening can reduce incidence and mortality, but participation rates remain low among low-income, Spanish-speaking Latino adults. We conducted two distinct pilot studies testing the implementation of evidence-based interventions to promote fecal immunochemical test (FIT) screening among Latinos aged 50-75 years who were not up-to-date with CRC screening (n = 200) at a large Federally Qualified Health Center (FQHC) in San Diego, CA. One pilot focused on an opportunistic clinic-visit "in-reach" intervention including a 30-min session with a patient navigator, review of an educational "flip-chart," and a take-home FIT kit with instructions. The second pilot was a system-level "outreach" intervention consisting of mailed materials (i.e., FIT kit, culturally and linguistically tailored instructions, and a pre-paid return envelope). Both received follow-up calls to promote screening completion and referrals for additional screening and treatment if needed. The primary outcome was FIT kit completion and return within 3 months, assessed through electronic medical records. The in-reach pilot consisted of mostly insured (85%), women (82%), and Spanish-speaking (88%) patients. The outreach pilot consisted mostly of Spanish-speaking (73%) women (64%), half of whom were insured (50%). At 3-month follow-up, screening completion was 76% for in-reach and 19% for outreach. These data demonstrate that evidence-based strategies to promote CRC screening can be implemented successfully within FQHCs, but implementation (particularly of mailed outreach) may require setting- and population-specific optimization. Patient, provider, and healthcare system related implementation approaches and lessons learned from this study may be implemented in other primary care settings.
Standard Specimen Reference Set: Pancreatic — EDRN Public Portal
The primary objective of the EDRN Pancreatic Cancer Working Group Proposal is to create a reference set consisting of well-characterized serum/plasma specimens to use as a resource for the development of biomarkers for the early detection of pancreatic adenocarcinoma. The testing of biomarkers on the same sample set permits direct comparison among them, thereby allowing the development of a biomarker panel that can be evaluated in a future validation study. Additionally, the establishment of an infrastructure with core data elements and standardized operating procedures for specimen collection, processing and storage will provide the necessary preparatory platform for larger validation studies when the appropriate marker/panel for pancreatic adenocarcinoma has been identified.
ERIC Educational Resources Information Center
Prayekti
2017-01-01
This research was aimed at developing printed teaching materials for the Atomic Physics PEFI4421 Course using the Research and Development (R & D) model, which consisted of three major sets of activities. The first set consisted of seven stages, the second set consisted of one stage, and the third set consisted of seven stages. This research study was…
Impact Testing of Aluminum 2024 and Titanium 6Al-4V for Material Model Development
NASA Technical Reports Server (NTRS)
Pereira, J. Michael; Revilock, Duane M.; Lerch, Bradley A.; Ruggeri, Charles R.
2013-01-01
One of the difficulties with developing and verifying accurate impact models is that parameters such as high strain rate material properties, failure modes, static properties, and impact test measurements are often obtained from a variety of different sources using different materials, with little control over consistency among the different sources. In addition, there is often a lack of quantitative measurements in impact tests to which the models can be compared. To alleviate some of these problems, a project is underway to develop a consistent set of material property data, impact test data, and failure analyses for a variety of aircraft materials that can be used to develop improved impact failure and deformation models. This project is jointly funded by the NASA Glenn Research Center and the FAA William J. Hughes Technical Center. Unique features of this set of data are that all material property data and impact test data are obtained using identical material, the test methods and procedures are extensively documented, and all of the raw data are available. Four parallel efforts are currently underway: measurement of material deformation and failure response over a wide range of strain rates and temperatures, and failure analysis of material property specimens and impact test articles, conducted by The Ohio State University; development of improved numerical modeling techniques for deformation and failure, conducted by The George Washington University; and impact testing of flat panels and substructures, conducted by NASA Glenn Research Center. This report describes impact testing that has been done on aluminum (Al) 2024 and titanium (Ti) 6Al-4V sheet and plate samples of different thicknesses and with different types of projectiles, one a regular cylinder and one with a more complex geometry incorporating features representative of a jet engine fan blade. Data from this testing will be used in validating material models developed under this program. The material tests and the material models developed in this program will be published in separate reports.
Nass, C; Lee, K M
2001-09-01
Would people exhibit similarity-attraction and consistency-attraction toward unambiguously computer-generated speech even when personality is clearly not relevant? In Experiment 1, participants (extrovert or introvert) heard a synthesized voice (extrovert or introvert) on a book-buying Web site. Participants accurately recognized personality cues in text to speech and showed similarity-attraction in their evaluation of the computer voice, the book reviews, and the reviewer. Experiment 2, in a Web auction context, added personality of the text to the previous design. The results replicated Experiment 1 and demonstrated consistency (voice and text personality)-attraction. To maximize liking and trust, designers should set parameters, for example, words per minute or frequency range, that create a personality that is consistent with the user and the content being presented.
Clark, S; Rose, D J
2001-04-01
To establish reliability estimates of the 75% Limits of Stability Test (75% LOS test) when administered to community-dwelling older adults with a history of falls. Generalizability theory was used to estimate both the relative contribution of identified error sources to the total measurement error and generalizability coefficients. A random effects repeated-measures analysis of variance (ANOVA) was used to assess consistency of LOS test movement variables across both days and targets. A motor control research laboratory in a university setting. Fifty community-dwelling older adults with 2 or more falls in the previous year. Spatial and temporal measures of dynamic balance derived from the 75% LOS test included average movement velocity, maximum center of gravity (COG) excursion, end-point COG excursion, and directional control. Estimated generalizability coefficients for 2 testing days ranged from .58 to .87. Total variance in LOS test measures attributable to inconsistencies in day-to-day test performance (Day and Subject x Day facets) ranged from 2.5% to 8.4%. The ANOVA results indicated that no significant differences were observed in the LOS test variables across the 2 testing days. The 75% LOS test administered to older adult fallers on 2 consecutive days provides consistent and reliable measures of dynamic balance.
Knacker, T; Schallnaß, H J; Klaschka, U; Ahlers, J
1995-11-01
The criteria for classification and labelling of substances as "dangerous for the environment" agreed upon within the European Union (EU) were applied to two sets of existing chemicals. One set (sample A) consisted of 41 randomly selected compounds listed in the European Inventory of Existing Chemical Substances (EINECS). The other set (sample B) comprised 115 substances listed in Annex I of Directive 67/548/EEC which were classified by the EU Working Group on Classification and Labelling of Existing Chemicals. The aquatic toxicity (fish mortality, Daphnia immobilisation, algal growth inhibition), ready biodegradability and n-octanol/water partition coefficient were measured for sample A by one and the same laboratory. For sample B, the available ecotoxicological data originated from many different sources and was therefore rather heterogeneous. In both samples, algal toxicity was the most sensitive effect parameter for most substances. Furthermore, it was found that classification based on a single aquatic test result differs in many cases from classification based on a complete data set, although a correlation exists between the biological end-points of the aquatic toxicity test systems.
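The gap between single-endpoint and full-data-set classification can be illustrated with a toy decision rule. The 1 and 10 mg/L cut-offs below follow the general pattern of the EU aquatic-hazard criteria, but this is a simplification for illustration, not the legal text.

```python
def classify(fish_lc50, daphnia_ec50, algae_ec50, readily_degradable):
    """Indicative label from the most sensitive endpoint; values in mg/L."""
    most_sensitive = min(fish_lc50, daphnia_ec50, algae_ec50)
    if most_sensitive <= 1 and not readily_degradable:
        return "dangerous for the environment (R50-53-like)"
    if most_sensitive <= 1:
        return "very toxic to aquatic organisms (R50-like)"
    if most_sensitive <= 10 and not readily_degradable:
        return "toxic to aquatic organisms (R51-53-like)"
    return "not classified under this simplified rule"

# Algae are the most sensitive endpoint here, so a fish-only test
# (LC50 = 12 mg/L) would have missed the classification entirely.
print(classify(fish_lc50=12.0, daphnia_ec50=8.0, algae_ec50=0.6,
               readily_degradable=False))
```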
NASA Astrophysics Data System (ADS)
Verma, Sneha K.; Chun, Sophia; Liu, Brent J.
2014-03-01
Pain is a common complication after spinal cord injury, with prevalence estimates ranging from 77% to 81%, and it strongly affects a patient's lifestyle and well-being. In the current clinical setting, paper-based forms are used to classify pain correctly; however, the accuracy of diagnoses and optimal management of pain largely depend on the expert reviewer, which in many cases is not possible because there are very few experts in this field. The need for a clinical decision support system that can be used by expert and non-expert clinicians has been cited in the literature, but such a system has not been developed. We have designed and developed a stand-alone tool for correctly classifying pain type in spinal cord injury (SCI) patients, using Bayesian decision theory. Various machine learning simulation methods are used to verify the algorithm using a pilot study data set of 48 patients. The data set consists of the paper-based forms collected at the Long Beach VA clinic, with pain classification done by an expert in the field. Using WEKA as the machine learning tool, we have tested on the 48-patient data set the hypothesis that the attributes collected on the forms and the pain location marked by patients have a very significant impact on pain type classification. This tool will be integrated with an imaging informatics system to support a clinical study that will test the effectiveness of using proton beam radiotherapy for treating SCI-related neuropathic pain as an alternative to invasive surgical lesioning.
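As a hedged illustration of the classification step, the sketch below fits a naive Bayes model to synthetic form-style attributes, standing in for the Bayesian decision-theoretic tool and WEKA workflow described above; the features and labels are invented.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(48, 6))        # 48 patients, 6 form-derived attributes (toy)
y = rng.integers(0, 3, size=48)     # pain type label, e.g., 3 hypothetical classes

clf = GaussianNB()                  # Bayes-rule classifier with Gaussian likelihoods
scores = cross_val_score(clf, X, y, cv=4)   # k-fold check, as in ML validation
print("cross-validated accuracy:", scores.mean().round(2))
```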
Psychometric Properties of the Young Children’s Participation and Environment Measure
Khetani, Mary A.; Graham, James E.; Davies, Patricia L.; Law, Mary C.; Simeonsson, Rune J.
2014-01-01
Objective: To evaluate the psychometric properties of the newly developed Young Children's Participation and Environment Measure (YC-PEM). Design: Cross-sectional study. Setting: Data were collected online and by telephone. Participants: Convenience and snowball sampling methods were used to survey caregivers of 395 children (93 children with developmental disabilities and delays, 302 without) between 0 and 5 years of age (mean = 35.33 months, SD = 20.29) and residing in North America. Interventions: Not applicable. Main Outcome Measure(s): The YC-PEM includes three participation scales and one environment scale. Each scale is assessed across three settings: home, daycare/preschool, and community. Data were analyzed to derive estimates of internal consistency, test-retest reliability, and construct validity. Results: Internal consistency ranged from .68 to .96 and from .92 to .96 for the participation and environment scales, respectively. Test-retest reliability (2-4 weeks) ranged from .31 to .93 for the participation scales and from .91 to .94 for the environment scale. One of three participation scales and the environment scale demonstrated significant group differences by disability status across all three settings, and all four scales discriminated between disability groups for the daycare/preschool setting. The participation scales exhibited small to moderate positive associations with functional performance scores. Conclusion(s): Results lend initial support for the use of the YC-PEM in research to assess the participation of young children with disabilities and delays in terms of 1) home, daycare/preschool, and community participation patterns, 2) perceived environmental supports and barriers to participation, and 3) activity-specific parent strategies to promote participation. PMID:25449189
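Of the psychometric quantities reported, internal consistency is the simplest to sketch: Cronbach's alpha computed from an items-by-respondents matrix. The data below are simulated; the YC-PEM item structure is not reproduced.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(395, 1))                      # shared trait
items = latent + rng.normal(scale=0.8, size=(395, 8))   # 8 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```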
Static Frequency Converter System Installed and Tested
NASA Technical Reports Server (NTRS)
Brown, Donald P.; Sadhukhan, Debashis
2003-01-01
A new Static Frequency Converter (SFC) system has been installed and tested at the NASA Glenn Research Center's Central Air Equipment Building to provide consistent, reduced motor start times and improved reliability for the building's 14 large exhausters and compressors. The operational start times have been consistent at around 2 min, 20 s per machine. This is at least a 3-min improvement (per machine) over the old variable-frequency motor generator sets. The SFC was designed and built by Asea Brown Boveri (ABB) and installed by Encompass Design Group (EDG) as part of a Construction of Facilities project managed by Glenn (Robert Scheidegger, project manager). The authors designed the Central Process Distributed Control Systems interface and control between the programmable logic controller, solid-state exciter, and switchgear, which was constructed by Gilcrest Electric.
NASA Technical Reports Server (NTRS)
Berry, R. L.; Tegart, J. R.; Demchak, L. J.
1979-01-01
Thirty sets of test data selected from the 89 low-g aircraft tests flown by the NASA KC-135 zero-g aircraft are listed in tables with their accompanying test conditions. The data for each test consist of time history plots of digitized data (in engineering units) and time history plots of the load cell data transformed to the tank axis system. The transformed load cell data were developed for future analytical comparisons; therefore, these data were transformed and plotted from the time at which the aircraft Z-axis acceleration passed through 1 g. There are 14 time history plots per test condition. The contents of each plot are shown in a table.
RAVE: Rapid Visualization Environment
NASA Technical Reports Server (NTRS)
Klumpar, D. M.; Anderson, Kevin; Simoudis, Avangelos
1994-01-01
Visualization is used in the process of analyzing large, multidimensional data sets. However, the selection and creation of visualizations that are appropriate for the characteristics of a particular data set and the satisfaction of the analyst's goals is difficult. The process consists of three tasks that are performed iteratively: generate, test, and refine. The performance of these tasks requires the utilization of several types of domain knowledge that data analysts do not often have. Existing visualization systems and frameworks do not adequately support the performance of these tasks. In this paper we present the RApid Visualization Environment (RAVE), a knowledge-based system that interfaces with commercial visualization frameworks and assists a data analyst in quickly and easily generating, testing, and refining visualizations. RAVE was used for the visualization of in situ measurement data captured by spacecraft.
Incredible Years parenting interventions: current effectiveness research and future directions.
Gardner, Frances; Leijten, Patty
2017-06-01
The Incredible Years parenting intervention is a social learning theory-based programme for reducing children's conduct problems. Dozens of randomized trials, many by independent investigators, find consistent effects of Incredible Years on children's conduct problems across multiple countries and settings. However, in common with other interventions, these average effects hide much variability in the responses of individual children and families. Innovative moderator research is needed to enhance scientific understanding of why individual children and parents respond differently to intervention. Additionally, research is needed to test whether there are ways to make Incredible Years more effective and accessible for families and service providers, especially in low resource settings, by developing innovative delivery systems using new media, and by systematically testing for essential components of parenting interventions. Copyright © 2017. Published by Elsevier Ltd.
Determination of Phobos' rotational parameters by an inertial frame bundle block adjustment
NASA Astrophysics Data System (ADS)
Burmeister, Steffi; Willner, Konrad; Schmidt, Valentina; Oberst, Jürgen
2018-01-01
A functional model for a bundle block adjustment in the inertial reference frame was developed, implemented and tested. This approach enables the determination of rotation parameters of planetary bodies on the basis of photogrammetric observations. Tests with a self-consistent synthetic data set showed that the implementation converges reliably toward the expected values of the introduced unknown parameters of the adjustment, e.g., spin pole orientation, and that it can cope with typical observational errors in the data. We applied the model to a data set of Phobos using images from the Mars Express and Viking missions. With Phobos being in a locked rotation, we computed a forced libration amplitude of 1.14° ± 0.03° together with a control point network of 685 points.
Grimme, Stefan; Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas
2015-08-07
A density functional theory (DFT) based composite electronic structure approach is proposed to efficiently compute structures and interaction energies in large chemical systems. It is based on the well-known and numerically robust Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation in a modified global hybrid functional with a relatively large amount of non-local Fock-exchange. The orbitals are expanded in Ahlrichs-type valence-double-zeta atomic orbital (AO) Gaussian basis sets, which are available for many elements. In order to correct for the basis set superposition error (BSSE) and to account for the important long-range London dispersion effects, our well-established atom-pairwise potentials are used. In the design of the new method, particular attention has been paid to an accurate description of structural parameters in various covalent and non-covalent bonding situations as well as in periodic systems. Together with the recently proposed three-fold corrected (3c) Hartree-Fock method, the new composite scheme (termed PBEh-3c) represents the next member in a hierarchy of "low-cost" electronic structure approaches. They are mainly free of BSSE and account for most interactions in a physically sound and asymptotically correct manner. PBEh-3c yields good results for thermochemical properties in the huge GMTKN30 energy database. Furthermore, the method shows excellent performance for non-covalent interaction energies in small and large complexes. For evaluating its performance on equilibrium structures, a new compilation of standard test sets is suggested. These consist of small (light) molecules, partially flexible, medium-sized organic molecules, molecules comprising heavy main group elements, larger systems with long bonds, 3d-transition metal systems, non-covalently bound complexes (S22 and S66×8 sets), and peptide conformations. For these sets, overall deviations from accurate reference data are smaller than for various other tested DFT methods and reach that of triple-zeta AO basis set second-order perturbation theory (MP2/TZ) level at a tiny fraction of computational effort. Periodic calculations conducted for molecular crystals to test structures (including cell volumes) and sublimation enthalpies indicate very good accuracy competitive to computationally more involved plane-wave based calculations. PBEh-3c can be applied routinely to several hundreds of atoms on a single processor and it is suggested as a robust "high-speed" computational tool in theoretical chemistry and physics.
Lunny, Carole; McKenzie, Joanne E; McDonald, Steve
2016-06-01
Locating overviews of systematic reviews is difficult because of an absence of appropriate indexing terms and inconsistent terminology used to describe overviews. Our objective was to develop a validated search strategy to retrieve overviews in MEDLINE. We derived a test set of overviews from the references of two method articles on overviews. Two population sets were used to identify discriminating terms, that is, terms that appear frequently in the test set but infrequently in two population sets of references found in MEDLINE. We used text mining to conduct a frequency analysis of terms appearing in the titles and abstracts. Candidate terms were combined and tested in MEDLINE in various permutations, and the performance of strategies measured using sensitivity and precision. Two search strategies were developed: a sensitivity-maximizing strategy, achieving 93% sensitivity (95% confidence interval [CI]: 87, 96) and 7% precision (95% CI: 6, 8), and a sensitivity-and-precision-maximizing strategy, achieving 66% sensitivity (95% CI: 58, 74) and 21% precision (95% CI: 17, 25). The developed search strategies enable users to more efficiently identify overviews of reviews compared to current strategies. Consistent language in describing overviews would aid in their identification, as would a specific MEDLINE Publication Type. Copyright © 2015 Elsevier Inc. All rights reserved.
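The performance figures quoted above are simple set arithmetic: sensitivity is the share of known overviews (the test set) that the strategy retrieves, and precision is the share of retrieved records that are overviews. A sketch with invented record IDs:

```python
def search_performance(retrieved_ids, test_set_ids):
    retrieved, relevant = set(retrieved_ids), set(test_set_ids)
    hits = retrieved & relevant
    sensitivity = len(hits) / len(relevant)   # recall of known overviews
    precision = len(hits) / len(retrieved)    # yield of the retrieved set
    return sensitivity, precision

# hypothetical IDs: the strategy returns 1000 records, 110 are known overviews
sens, prec = search_performance(retrieved_ids=range(1, 1001),
                                test_set_ids=range(950, 1060))
print(f"sensitivity = {sens:.0%}, precision = {prec:.0%}")
```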
Machado, Armando; Pata, Paulo
2005-02-01
Two theories of timing, scalar expectancy theory (SET) and learning-to-time (LeT), make substantially different assumptions about what animals learn in temporal tasks. In a test of these assumptions, pigeons learned two temporal discriminations. On Type 1 trials, they learned to choose a red key after a 1-sec signal and a green key after a 4-sec signal; on Type 2 trials, they learned to choose a blue key after a 4-sec signal and a yellow key after either an 8-sec signal (Group 8) or a 16-sec signal (Group 16). Then, the birds were exposed to signals 1 sec, 4 sec, and 16 sec in length and given a choice between novel key combinations (red or green vs. blue or yellow). The choice between the green key and the blue key was of particular significance because both keys were associated with the same 4-sec signal. Whereas SET predicted no effect of the test signal duration on choice, LeT predicted that preference for green would increase monotonically with the length of the signal but would do so faster for Group 8 than for Group 16. The results were consistent with LeT, but not with SET.
Assessment of resampling methods for causality testing: A note on the US inflation behavior.
Papana, Angeliki; Kyrtsou, Catherine; Kugiumtzis, Dimitris; Diks, Cees
2017-01-01
Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As appropriate test statistic for this setting, the partial transfer entropy (PTE), an information and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined along with the size and power of the respective tests. Additionally, we consider a seventh resampling method by contemporaneously resampling the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application. More specifically, we investigate the causal links among the growth rates for the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil cause inflation conditioning on money supply in the post-1986 period. However this relationship cannot be explained on the basis of traditional cost-push mechanisms.
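The surrogate logic above can be sketched compactly. In the toy below, a lagged cross-correlation stands in for the PTE statistic (a PTE estimator would slot into `stat` unchanged), and time-shifted surrogates are built by circularly shifting the driver, which destroys the driver-response coupling while preserving each series' own structure.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)
y = 0.5 * np.roll(x, 1) + rng.normal(size=n)   # x drives y at lag 1

def stat(driver, response):
    # simple lag-1 coupling measure standing in for PTE
    return abs(np.corrcoef(driver[:-1], response[1:])[0, 1])

observed = stat(x, y)
null = [stat(np.roll(x, rng.integers(50, n - 50)), y) for _ in range(200)]
p_value = (1 + sum(s >= observed for s in null)) / (1 + len(null))
print(f"observed = {observed:.3f}, surrogate p = {p_value:.3f}")
```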
Bedside hemoglobinometry in hemodialysis patients: lessons from point-of-care testing.
Agarwal, R; Heinz, T
2001-01-01
The HemoCue B-hemoglobin test system (HemoCue, Inc., Mission Viejo, CA) is a photometric method for rapid bedside determination of hemoglobin (Hb). We compared the performance of HemoCue-measured Hb against Coulter STK-S (CSTK) measured Hb in chronic hemodialysis (HD) patients in two different settings. In the first setting, HemoCue analysis was performed by multiple HD technicians (n = 132). In the second setting, a nurse trained in proper specimen handling performed the HemoCue analysis (n = 74). Simultaneous measurement of Hb by the CSTK method was performed. First setting: Hb was 11.1 ± 1.66 (SD) g/dl by CSTK and 11.7 ± 2.29 g/dl by HemoCue. The HemoCue method consistently overestimated Hb by an average (SD) of 0.63 (1.267) g/dl (95% CI = 0.42 to 0.85). Hb was overestimated in 25.7% and underestimated in 2.3% of the patients by 1 g/dl or more. Thus, the HemoCue system was accurate within 1 g/dl only 72% of the time. Second setting: HemoCue overestimated Hb by an average (SD) of 0.29 (0.52) g/dl (95% CI, 0.17 to 0.41). Only 4% of all patients had errors in estimation of 1 g/dl or more. Thus, HemoCue was accurate within 1 g/dl in 96% of the patients. After reviewing the two protocols, the primary difference between the two studies was the technique used to obtain the specimens. When performed properly, Hb testing using the HemoCue testing system had a high level of agreement with CSTK. Appropriate training in specimen handling, as well as test performance, will increase the accuracy and reliability of bedside hemoglobinometry.
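The agreement summary used above is easy to reproduce in outline: mean bias of the paired differences with a 95% confidence interval, plus the share of patients within 1 g/dl. The values below are simulated to resemble, not reproduce, the first setting.

```python
import numpy as np

rng = np.random.default_rng(5)
hb_ref = rng.normal(11.1, 1.66, 132)            # reference (CSTK-like) values
hb_poc = hb_ref + rng.normal(0.63, 1.27, 132)   # point-of-care values with bias

diff = hb_poc - hb_ref
bias, sd = diff.mean(), diff.std(ddof=1)
ci = 1.96 * sd / np.sqrt(len(diff))             # 95% CI for the mean bias
within_1 = np.mean(np.abs(diff) <= 1.0)         # agreement within 1 g/dl
print(f"bias = {bias:.2f} g/dl (95% CI {bias - ci:.2f} to {bias + ci:.2f}); "
      f"{within_1:.0%} within 1 g/dl")
```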
Basehore, Monica J; Marlowe, Natalia M; Jones, Julie R; Behlendorf, Deborah E; Laver, Thomas A; Friez, Michael J
2012-06-01
Most individuals with intellectual disability and/or autism are tested for Fragile X syndrome at some point in their lifetime. Greater than 99% of individuals with Fragile X have an expanded CGG trinucleotide repeat motif in the promoter region of the FMR1 gene, and diagnostic testing involves determining the size of the CGG repeat as well as methylation status when an expansion is present. Using a previously described triplet repeat-primed polymerase chain reaction, we have performed additional validation studies using two cohorts with previous diagnostic testing results available for comparison purposes. The first cohort (n=88) consisted of both males and females and had a high percentage of abnormal samples, while the second cohort (n=624) consisted of only females and was not enriched for expansion mutations. Data from each cohort were completely concordant with the results previously obtained during the course of diagnostic testing. This study further demonstrates the utility of using laboratory-developed triplet repeat-primed FMR1 testing in a clinical setting.
Initial evaluation of an interactive test of sentence gist recognition.
Tye-Murray, N; Witt, S; Castelloe, J
1996-12-01
The laser videodisc-based Sentence Gist Recognition (SGR) test consists of sets of topically related sentences that are cued by short film clips. Clients respond to test items by selecting picture illustrations and may interact with the talker by using repair strategies when they do not recognize a test item. The two experiments, involving 40 and 35 adult subjects, respectively, indicated that the SGR may better predict subjective measures of speechreading and listening performance than more traditional audiologic sentence and nonsense syllable tests. Data from cochlear implant users indicated that the SGR accounted for a greater percentage of the variance for selected items of the Communication Profile for the Hearing-Impaired and the Speechreading Questionnaire for Cochlear-Implant Users than two other audiologic tests. As in previous work, subjects were more apt to ask the talker to repeat an utterance that they did not recognize than to ask the talker to restructure it. It is suggested that the SGR may reflect the interactive nature of conversation and provide a simulated real-world listening and/or speechreading task. The principles underlying this test are consistent with the development of other computer technologies and concepts, such as compact disc-interactive and virtual reality.
Common IED exploitation target set ontology
NASA Astrophysics Data System (ADS)
Russomanno, David J.; Qualls, Joseph; Wowczuk, Zenovy; Franken, Paul; Robinson, William
2010-04-01
The Common IED Exploitation Target Set (CIEDETS) ontology provides a comprehensive semantic data model for capturing knowledge about sensors, platforms, missions, environments, and other aspects of systems under test. The ontology also includes representative IEDs, modeled as explosives, camouflage, concealment objects, and other background objects, which together comprise an overall threat scene. The ontology is represented using the Web Ontology Language and the SPARQL Protocol and RDF Query Language, which ensures portability of the acquired knowledge base across applications. The resulting knowledge base is a component of the CIEDETS application, which is intended to support the end-user sensor test and evaluation community. CIEDETS associates a system under test with a subset of cataloged threats based on the probability that the system will detect the threat. The associations between systems under test, threats, and the detection probabilities are established based on a hybrid reasoning strategy, which applies a combination of heuristics and simplified modeling techniques. Besides supporting the CIEDETS application, which is focused on efficient and consistent system testing, the ontology can be leveraged in a myriad of other applications, including serving as a knowledge source for mission planning tools.
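A hypothetical sketch of the pattern described above, using Python's rdflib; the class and property names are invented for illustration and are not those of the CIEDETS ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/ciedets#")   # hypothetical namespace
g = Graph()
g.add((EX.SensorA, RDF.type, EX.SystemUnderTest))
g.add((EX.Threat7, RDF.type, EX.ConcealedIED))
g.add((EX.SensorA, EX.detects, EX.Threat7))
g.add((EX.SensorA, EX.detectionProbability, Literal(0.82)))

# SPARQL query: threats associated with a system under test, with probability
q = """SELECT ?threat ?p WHERE {
         ?s a <http://example.org/ciedets#SystemUnderTest> ;
            <http://example.org/ciedets#detects> ?threat ;
            <http://example.org/ciedets#detectionProbability> ?p . }"""
for threat, p in g.query(q):
    print(threat, p)
```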
Use of the azimuthal resistivity technique for determination of regional azimuth of transmissivity
Carlson, D.
2010-01-01
Many bedrock units contain joint sets that commonly act as preferred paths for the movement of water, electrical charge, and possible contaminants associated with the production or transit of crude oil or refined products. To facilitate the development of remediation programs, a need exists to reliably determine regional-scale properties of these joint sets: the azimuth of the transmissivity ellipse, the dominant set, and trend(s). The surface azimuthal electrical resistivity survey method used for local in situ studies can be a noninvasive, reliable, efficient, and relatively cost-effective method for regional studies. The azimuthal resistivity survey method combines the use of standard resistivity equipment with a Wenner array rotated about a fixed center point, at selected degree intervals, which yields an apparent resistivity ellipse from which joint-set orientation can be determined. Regional application of the azimuthal survey method was tested at 17 sites in an approximately 500 km² (193 mi²) area around Milwaukee, Wisconsin, with less than 15 m (50 ft) of overburden above the dolomite. Results of 26 azimuthal surveys were compared and determined to be consistent with the results of two other methods: direct observation of joint-set orientation and transmissivity ellipses from multiple-well aquifer tests. The average joint-set trend determined by azimuthal surveys is within 2.5° of the average joint-set trend determined by direct observation of major joint sets at 24 sites. The average trend of maximum transmissivity determined by azimuthal surveys is within 5.7° of the average trend of maximum transmissivity determined for 14 multiple-well aquifer tests. Copyright © 2010 The American Association of Petroleum Geologists/Division of Environmental Geosciences. All rights reserved.
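The survey arithmetic can be sketched directly: the Wenner-array apparent resistivity is rho_a = 2*pi*a*V/I, computed at each rotation azimuth about the fixed center, and the azimuth of the polar maximum is read off. The V/I readings below are invented; relating that axis to joint strike follows the anisotropy arguments in the resistivity literature.

```python
import numpy as np

a = 10.0                                   # electrode spacing (m), assumed
azimuths = np.arange(0, 180, 15)           # survey rotated in 15-degree steps
v_over_i = np.array([1.8, 1.9, 2.2, 2.6, 2.9, 3.0, 2.8, 2.4, 2.0,
                     1.8, 1.7, 1.6])       # hypothetical V/I readings (ohms)

rho_a = 2 * np.pi * a * v_over_i           # Wenner apparent resistivity (ohm-m)
major = azimuths[np.argmax(rho_a)]         # azimuth of the ellipse's long axis
print(f"apparent-resistivity maximum at N{major}E, "
      f"rho_a = {rho_a.max():.0f} ohm-m")
```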
The General Mission Analysis Tool (GMAT) System Test Plan
NASA Technical Reports Server (NTRS)
Conway, Darrel J.; Hughes, Steven P.
2007-01-01
This document serves as the System Test Approach for the GMAT Project. Preparation for system testing consists of three major stages: 1) The Test Approach sets the scope of system testing, the overall strategy to be adopted, the activities to be completed, the general resources required, and the methods and processes to be used to test the release. 2) Test Planning details the activities, dependencies and effort required to conduct the System Test. 3) Test Cases documents the tests to be applied, the data to be processed, the automated testing coverage, and the expected results. This document covers the first two of these items and establishes the framework used for GMAT test case development. The test cases themselves exist as separate components and are managed outside of, and concurrently with, this System Test Plan.
IPv6 Test Bed for Testing Aeronautical Applications
NASA Technical Reports Server (NTRS)
Wilkins, Ryan; Zernic, Michael; Dhas, Chris
2004-01-01
Aviation industries in the United States and in Europe are undergoing a major paradigm shift with the introduction of new network technologies. In the US, NASA is actively investigating the feasibility of IPv6-based networks for the aviation needs of the United States. In Europe, the Eurocontrol-led Internet Protocol for Aviation Exchange (iPAX) Working Group is actively investigating ways of migrating the aviation authorities' backbone infrastructure from X.25-based networks to an IPv6-based network. For the last 15 years, the global aviation community has pursued the development and implementation of an industry-specific set of communications standards known as the Aeronautical Telecommunications Network (ATN). These standards are now beginning to affect the emerging military Global Air Traffic Management (GATM) community as well as the commercial air transport community. Efforts are continuing to gain a full understanding of the differences and similarities between ATN and Internet architectures as related to Communications, Navigation, and Surveillance (CNS) infrastructure choices. This research paper describes the implementation of the IPv6 test beds at NASA GRC and Computer Networks & Software, Inc.; these two test beds are interfaced to Eurocontrol over the IPv4 Internet. This research work looks into the possibility of providing QoS performance for aviation applications in an IPv6 network as is provided in an ATN-based network. The test bed consists of three autonomous systems, representing a CNS domain, a NASA domain, and a EUROCONTROL domain. The primary mode of connection between the CNS IPv6 test bed and the NASA and EUROCONTROL IPv6 test beds is initially a set of IPv6-over-IPv4 tunnels. The aviation application under test (CPDLC) consists of two processes running on different IPv6-enabled machines.
2013-02-21
telescope consists of six Mimosa tracking planes, the readout data acquisition system and the trigger hardware, and provides a ≈ 3 µm track pointing... is larger than the Mimosa sensors of the telescope, separate sets of data were taken to cover the irradiated and non-irradiated regions of the sensors
The Effects of Judgment-Based Stratum Classifications on the Efficiency of Stratum Scored CATs.
ERIC Educational Resources Information Center
Finney, Sara J.; Smith, Russell W.; Wise, Steven L.
Two operational item pools were used to investigate the performance of stratum computerized adaptive tests (CATs) when items were assigned to strata based on empirical estimates of item difficulty or human judgments of item difficulty. Items from the first data set consisted of 54 5-option multiple choice items from a form of the ACT mathematics…
Ad Hoc Categories and False Memories: Memory Illusions for Categories Created On-The-Spot
ERIC Educational Resources Information Center
Soro, Jerônimo C.; Ferreira, Mário B.; Semin, Gün R.; Mata, André; Carneiro, Paula
2017-01-01
Three experiments were designed to test whether experimentally created ad hoc associative networks evoke false memories. We used the DRM (Deese, Roediger, McDermott) paradigm with lists of ad hoc categories composed of exemplars aggregated toward specific goals (e.g., going for a picnic) that do not share any consistent set of features. Experiment…
ERIC Educational Resources Information Center
Garcia-Quintana, Roan A.; Mappus, M. Lynne
1980-01-01
Norm-referenced data were utilized for determining the mastery cutoff score on a criterion-referenced test. Once a cutoff score on the norm-referenced measure is selected, the cutoff score on the criterion-referenced measure becomes that score which maximizes the proportion of consistent classifications and the proportion of improvement beyond chance. (CP)
Horizon Detection In The Visible Spectrum
2016-09-01
techniques can also recognize star patterns in star trackers for satellite attitude determination. Horizon detection in the visible spectrum was largely...discarded for attitude determination in favor of thermal imagery, due to the greater consistency of the earth's thermal radiation. This thesis...in 85% of the tested image set.
NASA Technical Reports Server (NTRS)
Wilkie, W. Keats; Langston, Chester W.; Mirick, Paul H.; Singleton, Jeffrey D.; Wilbur, Matthew L.; Yeager, William T., Jr.
1991-01-01
The sensitivity of blade tracking in hover to variations in root pitch was examined for two rotor configurations. Tests were conducted using a four bladed articulated rotor mounted on the NASA-Army aeroelastic rotor experimental system (ARES). Two rotor configurations were tested: one consisting of a blade set with flexible fiberglass spars and one with stiffer (by a factor of five in flapwise and torsional stiffnesses) aluminum spars. Both blade sets were identical in planform and airfoil distribution and were untwisted. The two configurations were ballasted to the same Lock number so that a direct comparison of the tracking sensitivity to a gross change in blade stiffness could be made. Experimental results show no large differences between the two sets of blades in the sensitivity of the blade tracking to root pitch adjustments. However, a measurable reduction in intrack coning of the fiberglass spar blades with respect to the aluminum blades is noted at higher rotor thrust conditions.
Heycke, Tobias; Aust, Frederik; Stahl, Christoph
2017-09-01
In the field of evaluative conditioning (EC), two opposing theories (propositional single-process theory versus dual-process theory) are currently being discussed in the literature. The present set of experiments tests a crucial prediction to adjudicate between these two theories: dual-process theory postulates that evaluative conditioning can occur without awareness of the contingency between conditioned stimulus (CS) and unconditioned stimulus (US); in contrast, single-process propositional theory postulates that EC requires CS-US contingency awareness. In a set of three studies, we experimentally manipulate contingency awareness by presenting the CSs very briefly, thereby rendering it unlikely that they are processed consciously. We address potential issues with previous studies on EC with subliminal or near-threshold CSs that limited their interpretation. Across two experiments, we consistently found an EC effect for CSs presented for 1000 ms and consistently failed to find an EC effect for briefly presented CSs. In a third pre-registered experiment, we again found evidence for an EC effect with CSs presented for 1000 ms, and we found some indication of an EC effect for CSs presented for 20 ms.
Filter parameter tuning analysis for operational orbit determination support
NASA Technical Reports Server (NTRS)
Dunham, J.; Cox, C.; Niklewski, D.; Mistretta, G.; Hart, R.
1994-01-01
The use of an extended Kalman filter (EKF) for operational orbit determination support is being considered by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD). To support that investigation, analysis was performed to determine how an EKF can be tuned for operational support of a set of earth-orbiting spacecraft. The objectives of this analysis were to design and test a general purpose scheme for filter tuning, evaluate the solution accuracies, and develop practical methods to test the consistency of the EKF solutions in an operational environment. The filter was found to be easily tuned to produce estimates that were consistent, agreed with results from batch estimation, and compared well among the common parameters estimated for several spacecraft. The analysis indicates that there is not a sharply defined 'best' tunable parameter set, especially when considering only the position estimates over the data arc. The comparison of the EKF estimates for the user spacecraft showed that the filter is capable of high-accuracy results and can easily meet the current accuracy requirements for the spacecraft included in the investigation. The conclusion is that the EKF is a viable option for FDD operational support.
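The tunable quantities in such a filter are the process- and measurement-noise covariances. The toy below is the linear Kalman skeleton that an EKF linearizes to, with a 1D constant-velocity state standing in for an orbit state; it shows where the tunable Q and R enter the predict and update steps.

```python
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])            # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                 # we observe position only
Q = 1e-3 * np.eye(2)                       # tunable process-noise covariance
R = np.array([[0.5]])                      # tunable measurement-noise covariance

x, P = np.zeros(2), np.eye(2)
for z in [1.0, 2.1, 2.9, 4.2]:             # toy range-like measurements
    x, P = F @ x, F @ P @ F.T + Q          # predict
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()      # update state
    P = (np.eye(2) - K @ H) @ P            # update covariance
print("state estimate:", x.round(2))
```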
Mudge, Joseph F; Penny, Faith M; Houlahan, Jeff E
2012-12-01
Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well-considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re-assess conclusions reached by three recently published tests of the pace-of-life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory, over different scales of biological organization. While some of the conclusions reached using optimal α were consistent to those previously reported using the traditional α = 0.05 threshold, opposing conclusions were also frequently reached. The optimal α approach reduced probabilities of Type I and Type II errors, and ensured statistical significance was associated with biological relevance. Biologists should seriously consider their choice of α when conducting null hypothesis significance tests, as there are serious disadvantages with consistent reliance on the traditional but arbitrary α = 0.05 significance level. Copyright © 2012 WILEY Periodicals, Inc.
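The optimal-alpha idea lends itself to a short sketch: scan candidate significance levels and pick the one minimizing the (here equally weighted) sum of Type I and Type II error rates, for an assumed effect size and sample size. The effect size and n below are illustrative, not taken from the cited tests.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()
effect, n_per_group = 0.5, 30                  # assumed effect size and n
alphas = np.linspace(0.001, 0.30, 100)
beta = np.array([1 - power_calc.power(effect_size=effect, nobs1=n_per_group,
                                      alpha=a) for a in alphas])
best = alphas[np.argmin(alphas + beta)]        # equal weights on the two errors
print(f"optimal alpha = {best:.3f} (vs. the conventional 0.05)")
```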
NASA Technical Reports Server (NTRS)
Stefanick, M.; Jurdy, D. M.
1984-01-01
Statistical analyses are compared for two published hot spot data sets, one minimal set of 42 and another larger set of 117, using three different approaches. First, the earth's surface is divided into 16 equal-area fractions and the observed distribution of hot spots among them is analyzed using chi-square tests. Second, cumulative distributions about the principal axes of the hot spot inertia tensor are used to describe the hot spot distribution. Finally, a hot spot density function is constructed for each of the two hot spot data sets. The methods all indicate that hot spots have a nonuniform distribution, even when statistical fluctuations are considered. To first order, hot spots are concentrated on one half of the earth's surface area; within that portion, the distribution is consistent with a uniform distribution. The observed hot spot densities for neither data set are explained solely by plate speed.
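The first of the three approaches is easy to sketch: counts of hot spots in 16 equal-area cells are compared against a uniform expectation with a chi-square test. The counts below are invented to mimic the one-hemisphere concentration described in the abstract.

```python
import numpy as np
from scipy.stats import chisquare

counts = np.array([8, 7, 6, 9, 6, 5, 7, 8,      # "concentrated" hemisphere
                   1, 2, 0, 1, 2, 1, 0, 1])     # sparse hemisphere
expected = np.full(16, counts.sum() / 16)       # uniform: equal counts per cell
chi2, p = chisquare(counts, expected)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")        # small p => nonuniform
```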
Challenges with controlling varicella in prison settings: experience of California, 2010 to 2011.
Leung, Jessica; Lopez, Adriana S; Tootell, Elena; Baumrind, Nikki; Mohle-Boetani, Janet; Leistikow, Bruce; Harriman, Kathleen H; Preas, Christopher P; Cosentino, Giorgio; Bialek, Stephanie R; Marin, Mona
2014-10-01
This article describes the epidemiology of varicella in one state prison in California during 2010 and 2011, control measures implemented, and associated costs. Eleven varicella cases were reported, of which nine were associated with two outbreaks. One outbreak consisted of three cases and the second consisted of six cases with two generations of spread. Among exposed inmates serologically tested, 98% (643/656) were varicella-zoster virus seropositive. The outbreaks resulted in > 1,000 inmates exposed, 444 staff exposures, and > $160,000 in costs. The authors documented the challenges and costs associated with controlling and managing varicella in a prison setting. A screening policy for evidence of varicella immunity for incoming inmates and staff and vaccination of susceptible persons has the potential to mitigate the impact of future outbreaks and reduce resources necessary to manage cases and outbreaks. © The Author(s) 2014.
Calibration of Gimbaled Platforms: The Solar Dynamics Observatory High Gain Antennas
NASA Technical Reports Server (NTRS)
Hashmall, Joseph A.
2006-01-01
Simple parameterization of gimbaled platform pointing produces a complete set of 13 calibration parameters: 9 misalignment angles, 2 scale factors, and 2 biases. By modifying the parameter representation, redundancy can be eliminated and a minimum set of 9 independent parameters defined. These consist of 5 misalignment angles, 2 scale factors, and 2 biases. Of these, only 4 misalignment angles and 2 biases are significant for the Solar Dynamics Observatory (SDO) High Gain Antennas (HGAs). An algorithm to determine these parameters after launch has been developed and tested with simulated SDO data. The algorithm consists of a direct minimization of the root-sum-square of the differences between expected power and measured power. The results show that sufficient parameter accuracy can be attained even when time-dependent thermal distortions are present, if measurements from a pattern of intentional offset pointing positions are included.
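The estimation step can be sketched as a direct minimization of the root-sum-square of power residuals over the parameter vector. The power model below is a stand-in (a Gaussian beam pattern with bias and scale parameters), not the SDO HGA model; variable names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def predicted_power(params, az_cmd, el_cmd):
    # Stand-in pointing model: biases and scale factors map commanded gimbal
    # angles to pointing errors; a Gaussian pattern maps error to power.
    b_az, b_el, s_az, s_el = params
    az_err = s_az * az_cmd + b_az
    el_err = s_el * el_cmd + b_el
    return np.exp(-(az_err**2 + el_err**2) / 2.0)

def rss_residual(params, az_cmd, el_cmd, measured):
    return np.sqrt(np.sum((predicted_power(params, az_cmd, el_cmd) - measured) ** 2))

# az, el: intentional offset-pointing pattern; measured: telemetered power
# res = minimize(rss_residual, x0=np.zeros(4), args=(az, el, measured),
#                method='Nelder-Mead')
```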
NASA Astrophysics Data System (ADS)
Santra, Biswajit; Michaelides, Angelos; Scheffler, Matthias
2007-11-01
The ability of several density-functional theory (DFT) exchange-correlation functionals to describe hydrogen bonds in small water clusters (dimer to pentamer) in their global minimum energy structures is evaluated with reference to second order Møller-Plesset perturbation theory (MP2). Errors from basis set incompleteness have been minimized in both the MP2 reference data and the DFT calculations, thus enabling a consistent systematic evaluation of the true performance of the tested functionals. Among all the functionals considered, the hybrid X3LYP and PBE0 functionals offer the best performance and among the nonhybrid generalized gradient approximation functionals, mPWLYP and PBE1W perform best. The popular BLYP and B3LYP functionals consistently underbind and PBE and PW91 display rather variable performance with cluster size.
The Wilcoxon signed rank test for paired comparisons of clustered data.
Rosner, Bernard; Glynn, Robert J; Lee, Mei-Ling T
2006-03-01
The Wilcoxon signed rank test is a frequently used nonparametric test for paired data (e.g., consisting of pre- and posttreatment measurements) based on independent units of analysis. This test cannot be used for paired comparisons arising from clustered data (e.g., if paired comparisons are available for each of two eyes of an individual). To incorporate clustering, a generalization of the randomization test formulation for the signed rank test is proposed, where the unit of randomization is at the cluster level (e.g., person), while the individual paired units of analysis are at the subunit within cluster level (e.g., eye within person). An adjusted variance estimate of the signed rank test statistic is then derived, which can be used for either balanced (same number of subunits per cluster) or unbalanced (different number of subunits per cluster) data, with an exchangeable correlation structure, with or without tied values. The resulting test statistic is shown to be asymptotically normal as the number of clusters becomes large, if the cluster size is bounded. Simulation studies are performed based on simulating correlated ranked data from a signed log-normal distribution. These studies indicate appropriate type I error for data sets with ≥20 clusters and a superior power profile compared with either the ordinary signed rank test based on the average cluster difference score or the multivariate signed rank test of Puri and Sen. Finally, the methods are illustrated with two data sets: (i) an ophthalmologic data set involving a comparison of electroretinogram (ERG) data in retinitis pigmentosa (RP) patients before and after undergoing an experimental surgical procedure, and (ii) a nutritional data set based on a randomized prospective study of nutritional supplements in RP patients where vitamin E intake outside of study capsules is compared before and after randomization to monitor compliance with nutritional protocols.
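The randomization formulation can be sketched directly: rank the absolute paired differences across all subunits, then flip the signs of whole clusters at random to build the null distribution of the signed rank sum. This is a generic permutation sketch of that idea, not the authors' adjusted-variance implementation.

```python
import numpy as np
from scipy.stats import rankdata

def clustered_signed_rank_test(diffs, cluster_ids, n_perm=10000, seed=0):
    """Permutation signed rank test with sign flips at the cluster level.

    diffs:       paired differences for each subunit (e.g., each eye)
    cluster_ids: cluster membership (e.g., person id) for each subunit
    Returns (observed signed rank sum, two-sided permutation p-value).
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, float)
    signed_ranks = np.sign(diffs) * rankdata(np.abs(diffs))
    observed = signed_ranks.sum()
    clusters = np.unique(cluster_ids)
    null = np.empty(n_perm)
    for i in range(n_perm):
        flips = dict(zip(clusters, rng.choice([-1.0, 1.0], size=clusters.size)))
        signs = np.array([flips[c] for c in cluster_ids])
        null[i] = (signs * signed_ranks).sum()
    return observed, np.mean(np.abs(null) >= abs(observed))
```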
NASA Technical Reports Server (NTRS)
Wright, Kenneth H.; Schneider, Todd; Vaughn, Jason; Hoang, Bao; Funderburk, Victor V.; Wong, Frankie; Gardiner, George
2010-01-01
A set of multi-junction GaAs/Ge solar array test coupons was subjected to a sequence of 5-year increments of combined environmental exposure tests. The test coupons capture an integrated design intended for use in a geosynchronous (GEO) space environment. A key component of this test campaign is conducting electrostatic discharge (ESD) tests in the inverted gradient mode. The protocol of the ESD tests is based on ISO/CD 11221, the ISO standard for ESD testing on solar array panels. This standard is currently in its final review with expected approval in 2010. The test schematic in the ISO reference has been modified with Space Systems/Loral-designed circuitry to better simulate the on-orbit operational conditions of its solar array design. Part of the modified circuitry simulates a solar array panel coverglass flashover discharge. All solar array coupons used in the test campaign consist of 4 cells. The ESD tests are performed at the beginning of life (BOL) and at each 5-year environment exposure point. The environmental exposure sequence consists of UV radiation, electron/proton particle radiation, thermal cycling, and ion thruster plume. This paper discusses the coverglass flashover simulation, ESD test setup, and the importance of the electrical test design in simulating the on-orbit operational conditions. Results from 5th-year testing are compared to the baseline ESD characteristics determined at the BOL condition.
Behavioural thermoregulation is highly repeatable and unaffected by digestive status in Agama atra.
van Berkel, Jenna; Clusella-Trullas, Susana
2018-05-03
The precision and the extent of behavioral thermoregulation are likely to provide fitness benefits to ectotherms. Yet the factors driving variation in selected or preferred body temperature (Tset) and its usefulness as a proxy for optimal physiological temperature (Topt) are still debated. Although Tset is often conserved among closely related species, substantial variation at the individual, population, and species level has also been reported, but repeatability (sensu the intra-class correlation coefficient, ICC) of Tset is generally low. One factor that influences Tset is feeding status, with fed reptiles typically showing higher Tset, a process thought to aid meal digestion. Here, using experiments simulating realistic ranges of feeding and fasting regimes in Agama atra, a heliothermic lizard from southern Africa, we test if Tset and its repeatability under these two states significantly differ. Daily Tset ranged from 33.7 to 38.4 °C, with a mean (± SE) of 36.7 ± 0.1 °C for fed and 36.6 ± 0.1 °C for unfed individuals. Comparisons of repeatability showed that females tend to be more consistent in the selection of body temperature than males, but not significantly so, regardless of feeding status. We report some of the highest repeatability estimates of Tset to date (full range: 0.229-0.642), and the weak positive effects of feeding status on Tset detected here do not hinder obtaining relatively high repeatability estimates. In conclusion, one of the major prerequisites for natural selection, consistent among-individual variation, is present, making the adaptive significance of Tset considerably more plausible. This article is protected by copyright. All rights reserved.
1987-01-01
southern part of Shelby County, Tennessee (Figure 1-1). The project extends from the mouth of the creek at Lake McKellar upstream for a distance of 18.2 mi...project consists of two distinct improvement plans, each requiring a different ROW. From the mouth of the creek at Lake McKellar upstream to the confluence...in fact, consisted of a set of pier or wharf pilings situated along the north bank of the creek at its junction with Lake McKellar (Figures 5-2 and 5-3
Simon, Michael; Hausner, Elke; Klaus, Susan F; Dunton, Nancy E
2010-08-23
The identification of health services research in databases such as PubMed/Medline is a cumbersome task. This task becomes even more difficult if the field of interest involves the use of diverse methods and data sources, as is the case with nurse staffing research. This type of research investigates the association between nurse staffing parameters and nursing and patient outcomes. A comprehensively developed search strategy may help identify nurse staffing research in PubMed/Medline. A set of relevant references in PubMed/Medline was identified by means of three systematic reviews. This development set was used to detect candidate free-text and MeSH terms. The frequency of these terms was compared to a random sample from PubMed/Medline in order to identify terms specific to nurse staffing research, which were then used to develop a sensitive, precise and balanced search strategy. To determine their precision, the newly developed search strategies were tested against a) the pool of relevant references extracted from the systematic reviews, b) a reference set identified from an electronic journal screening, and c) a sample from PubMed/Medline. Finally, all newly developed strategies were compared to PubMed's Health Services Research Queries (PubMed's HSR Queries). The sensitivities of the newly developed search strategies were almost 100% in all of the three test sets applied; precision ranged from 6.1% to 32.0%. PubMed's HSR queries were less sensitive (83.3% to 88.2%) than the new search strategies. Only minor differences in precision were found (5.0% to 32.0%). As with other literature on health services research, nurse staffing studies are difficult to identify in PubMed/Medline. Depending on the purpose of the search, researchers can choose between high sensitivity, i.e. retrieval of a large number of references, and high precision, i.e. an increased risk of missing relevant references. More standardized terminology (e.g. by consistent use of the term "nurse staffing") could improve the precision of future searches in this field. Empirically selected search terms can help to develop effective search strategies. The high consistency between all test sets confirmed the validity of our approach.
Optimization of Regression Models of Experimental Data Using Confirmation Points
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2010-01-01
A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are only used to determine the coefficients of the regression model. The second subset consists of confirmation points that are exclusively used to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that was already tested at regression-model-independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
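A minimal numpy sketch of the metric: PRESS residuals of the fit subset come from the hat matrix, and the metric is the larger of the two standard deviations. The least-squares design matrices below are whatever math term combination is under test; this is an illustration, not the balance-calibration code.

```python
import numpy as np

def search_metric(X_fit, y_fit, X_conf, y_conf):
    """Return max(std of PRESS residuals, std of confirmation residuals)."""
    beta, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)
    H = X_fit @ np.linalg.pinv(X_fit.T @ X_fit) @ X_fit.T   # hat matrix
    press = (y_fit - X_fit @ beta) / (1.0 - np.diag(H))     # PRESS residuals
    conf_resid = y_conf - X_conf @ beta                     # confirmation residuals
    return max(np.std(press, ddof=1), np.std(conf_resid, ddof=1))
```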
Wieser, Stefan; Axmann, Markus; Schütz, Gerhard J.
2008-01-01
We propose here an approach for the analysis of single-molecule trajectories which is based on a comprehensive comparison of an experimental data set with multiple Monte Carlo simulations of the diffusion process. It allows quantitative data analysis, particularly whenever analytical treatment of a model is infeasible. Simulations are performed on a discrete parameter space and compared with the experimental results by a nonparametric statistical test. The method provides a matrix of p-values that assess the probability for having observed the experimental data at each setting of the model parameters. We show the testing approach for three typical situations observed in the cellular plasma membrane: i), free Brownian motion of the tracer, ii), hop diffusion of the tracer in a periodic meshwork of squares, and iii), transient binding of the tracer to slowly diffusing structures. By plotting the p-value as a function of the model parameters, one can easily identify the most consistent parameter settings but also recover mutual dependencies and ambiguities which are difficult to determine by standard fitting routines. Finally, we used the test to reanalyze previous data obtained on the diffusion of the glycosylphosphatidylinositol-protein CD59 in the plasma membrane of the human T24 cell line. PMID:18805933
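A minimal version of this scheme for case (i), free Brownian motion: simulate step-length distributions on a grid of diffusion coefficients and compare each to the experimental steps with a Kolmogorov-Smirnov test, yielding a p-value per parameter setting. All parameter values here are placeholders.

```python
import numpy as np
from scipy.stats import ks_2samp

def pvalue_grid(exp_steps, dt, d_grid, n_sim=5000, seed=1):
    """KS-test p-value for each candidate diffusion coefficient D."""
    rng = np.random.default_rng(seed)
    pvals = []
    for d in d_grid:
        # 2D Brownian step lengths: sqrt(dx^2 + dy^2), dx, dy ~ N(0, 2*D*dt)
        sim = np.hypot(rng.normal(0, np.sqrt(2 * d * dt), n_sim),
                       rng.normal(0, np.sqrt(2 * d * dt), n_sim))
        pvals.append(ks_2samp(exp_steps, sim).pvalue)
    return np.array(pvals)  # consistent settings are those with large p
```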
Nakamura, Priscila M; Papini, Camila B; Teixeira, Inaian P; Chiyoda, Alberto; Luciano, Eliete; Cordeira, Kelly Lynn; Kokubun, Eduardo
2015-01-01
Interventions in primary health care settings have been effective in increasing physical fitness. In 2001, the Programa de Exercício Físico em Unidades de Saúde (Physical Exercise in Health Primary Care Program-PEHPCP) was launched in Rio Claro City, Brazil. The intervention consisted of biweekly, 60-minute group sessions in all primary health care settings in the city. This study evaluated the effect of PEHPCP on physical fitness and on the aging process after a decade of ongoing implementation. A total of 409 women (50 ± 26 y old) and 31 men (64 ± 10 y old) were eligible for this study. Every 4 months, participants completed the American Alliance for Health, Physical Education, Recreation and Dance standardized tests. Program participation attenuated, relative to baseline, the natural decline of physical fitness caused by aging, as represented by changes in the following measures: coordination test time, -0.44 seconds; agility and dynamic balance test time, -1.81 seconds; aerobic capacity test time, 3.57 seconds; and muscle strength, +0.60 repetitions. No significant effect on flexibility was found. The PEHPCP showed potential in improving muscle strength, coordination, aerobic capacity, and agility and dynamic balance in participants and in maintaining flexibility in participants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Townsend, M., Prothro, L. B., Obi, C.
A test bed for a series of chemical explosives tests known as Source Physics Experiments (SPE) was constructed in granitic rock of the Climax stock, in northern Yucca Flat at the Nevada National Security Site in 2010-2011. These tests are sponsored by the U.S. Department of Energy, National Nuclear Security Administration's National Center for Nuclear Security. The test series is designed to study the generation and propagation of seismic waves, and will provide data that will improve the predictive capability of calculational models for detecting and characterizing underground explosions. Abundant geologic data are available for the area, primarily as a result of studies performed in conjunction with the three underground nuclear tests conducted in the Climax granite in the 1960s and a few later studies of various types. The SPE test bed was constructed at an elevation of approximately 1,524 meters (m), and consists of a 91.4-centimeter (cm) diameter source hole at its center, surrounded by two rings of three 20.3-cm diameter instrument holes. The inner ring of holes is positioned 10 m away from the source hole, and the outer ring of holes is positioned 20 m from the source hole. An initial 160-m deep core hole was drilled at the location of the source hole that provided information on the geology of the site and rock samples for later laboratory testing. A suite of geophysical logs was run in the core hole and all six instrument holes to obtain matrix and fracture properties. Detailed information on the character and density of fractures encountered was obtained from the borehole image logs run in the holes. A total of 2,488 fractures were identified in the seven boreholes, and these were ranked into six categories (0 through 5) on the basis of their degree of openness and continuity. The analysis presented here considered only the higher-ranked fractures (ranks 2 through 5), of which there were 1,215 (approximately 49 percent of all fractures identified from borehole image logs). The fractures were grouped into sets based on their orientation. The most ubiquitous fracture set (50 percent of all higher-ranked fractures) is a group of low-angle fractures (dips 0 to 30 degrees). Fractures with dips of 60 to 90 degrees account for 38 percent of high-ranked fractures, and the remaining 12 percent are fractures with moderate dips (30 to 60 degrees). The higher-angle fractures are further subdivided into three sets based on their dip direction: fractures of Set 1 dip to the north-northeast, fractures of Set 2 dip to the south-southwest, and Set 3 consists of high-angle fractures that dip to the southeast and strike northeast. The low-angle fractures (Set 4) dip eastward. Fracture frequency does not appear to change substantially with depth. True fracture spacing averages 0.9 to 1.2 m for high-angle Sets 1, 2, and 3, and 0.6 m for Set 4. Two significant faults were observed in the core, centered at the depths of 25.3 and 32.3 m. The upper of these two faults dips 80 degrees to the north-northeast and, thus, is related to the Set-1 fractures. The lower fault dips 79 degrees to the south-southwest and is related to SPE Set-2 fractures. Neither fault has an identifiable surface trace. Groundwater was encountered in all holes drilled on the SPE test bed, and the fluid level averaged about 15.2 to 18.3 m below ground surface.
An informal study of variations in the fluid level in the holes conducted during various phases of construction of the test bed concluded that groundwater flow through the fractured granitic rocks is not uniform, and appears to be controlled by variations in the orientation and degree of interconnectedness of the fractures. It is also possible that an aplite dike or quartz vein may be present in the test bed, which could act as a barrier to groundwater flow and, thus, could account for anisotropy seen in the groundwater recovery measurements.
Thellier GUI: An integrated tool for analyzing paleointensity data from Thellier-type experiments
NASA Astrophysics Data System (ADS)
Shaar, Ron; Tauxe, Lisa
2013-03-01
Thellier-type experiments are a method used to estimate the intensity of the ancient geomagnetic field from samples carrying thermoremanent magnetization. The analysis of Thellier-type experimental data is conventionally done by manually interpreting data from each specimen individually. The main limitations of this approach are: (1) manual interpretation is highly subjective and can be biased by misleading concepts, (2) the procedure is time consuming, and (3) unless the measurement data are published, the final results cannot be reproduced by readers. These issues compound when trying to combine paleointensity data from a collection of studies. Here, we address these problems by introducing the Thellier GUI: a comprehensive tool for interpreting Thellier-type experimental data. The tool presents a graphical user interface, which allows manual interpretation of the data, but also includes two new interpretation tools: (1) Thellier Auto Interpreter: an automatic interpretation procedure based on a given set of experimental requirements, and (2) Consistency Test: a self-test for the consistency of the results, assuming groups of samples that should have the same paleointensity values. We apply the new tools to data from two case studies. These demonstrate that interpretation of non-ideal Arai plots is nonunique and different selection criteria can lead to significantly different conclusions. Hence, we recommend adopting the automatic interpretation approach, as it allows a more objective interpretation, which can be easily repeated or revised by others. When the analysis is combined with a Consistency Test, the credibility of the interpretations is enhanced. We also make the case that published paleointensity studies should include the measurement data (as supplementary files or as contributions to the MagIC database) so that results based on a particular data set can be reproduced and assessed by others.
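The Consistency Test can be pictured as a within-group scatter check: specimens that should record the same field should yield interpretations with small relative spread. The sketch below is a schematic of that idea; the coefficient-of-variation threshold is an arbitrary illustration, not the Thellier GUI default.

```python
import numpy as np

def consistency_test(groups, max_cv=0.10):
    """groups: dict mapping group name -> paleointensity estimates (microtesla).

    Passes a group if its coefficient of variation is below max_cv.
    Returns {group: (cv, passed)}.
    """
    report = {}
    for name, vals in groups.items():
        vals = np.asarray(vals, float)
        cv = vals.std(ddof=1) / vals.mean()
        report[name] = (cv, cv <= max_cv)
    return report
```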
Arantes, Joana
2008-06-01
The present research tested the generality of the "context effect" previously reported in experiments using temporal double bisection tasks [e.g., Arantes, J., Machado, A. Context effects in a temporal discrimination task: Further tests of the Scalar Expectancy Theory and Learning-to-Time models. J. Exp. Anal. Behav., in press]. Pigeons learned two temporal discriminations in which all the stimuli appear successively: 1s (red) vs. 4s (green) and 4s (blue) vs. 16s (yellow). Then, two tests were conducted to compare predictions of two timing models, Scalar Expectancy Theory (SET) and the Learning-to-Time (LeT) model. In one test, two psychometric functions were obtained by presenting pigeons with intermediate signal durations (1-4s and 4-16s). Results were mixed. In the critical test, pigeons were exposed to signals ranging from 1 to 16s and followed by the green or the blue key. Whereas SET predicted that the relative response rate to each of these keys should be independent of the signal duration, LeT predicted that the relative response rate to the green key (compared with the blue key) should increase with the signal duration. Results were consistent with LeT's predictions, showing that the context effect is obtained even when subjects do not need to make a choice between two keys presented simultaneously.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armentrout, J.M.; Smith-Rouch, L.S.; Bowman, S.A.
1996-08-01
Numeric simulations based on integrated data sets enhance our understanding of depositional geometry and facilitate quantification of depositional processes. Numeric values tested against well-constrained geologic data sets can then be used in iterations testing each variable, and in predicting lithofacies distributions under various depositional scenarios using the principles of sequence stratigraphic analysis. The stratigraphic modeling software provides a broad spectrum of techniques for modeling and testing elements of the petroleum system. Using well-constrained geologic examples, variations in depositional geometry and lithofacies distributions between different tectonic settings (passive vs. active margin) and climate regimes (hothouse vs. icehouse) can provide insight to potential source rock and reservoir rock distribution, maturation timing, migration pathways, and trap formation. Two data sets are used to illustrate such variations: both include a seismic reflection profile calibrated by multiple wells. The first is a Pennsylvanian mixed carbonate-siliciclastic system in the Paradox basin, and the second a Pliocene-Pleistocene siliciclastic system in the Gulf of Mexico. Numeric simulations result in geometry and facies distributions consistent with those interpreted using the integrated stratigraphic analysis of the calibrated seismic profiles. An exception occurs in the Gulf of Mexico study where the simulated sediment thickness from 3.8 to 1.6 Ma within an upper slope minibasin was less than that mapped using a regional seismic grid. Regional depositional patterns demonstrate that this extra thickness was probably sourced from out of the plane of the modeled transect, illustrating the necessity for three-dimensional constraints on two-dimensional modeling.
Rajgaria, R.; Wei, Y.; Floudas, C. A.
2010-01-01
An integer linear optimization model is presented to predict residue contacts in β, α + β, and α/β proteins. The total energy of a protein is expressed as sum of a Cα – Cα distance dependent contact energy contribution and a hydrophobic contribution. The model selects contacts that assign lowest energy to the protein structure while satisfying a set of constraints that are included to enforce certain physically observed topological information. A new method based on hydrophobicity is proposed to find the β-sheet alignments. These β-sheet alignments are used as constraints for contacts between residues of β-sheets. This model was tested on three independent protein test sets and CASP8 test proteins consisting of β, α + β, α/β proteins and was found to perform very well. The average accuracy of the predictions (separated by at least six residues) was approximately 61%. The average true positive and false positive distances were also calculated for each of the test sets and they are 7.58 Å and 15.88 Å, respectively. Residue contact prediction can be directly used to facilitate the protein tertiary structure prediction. This proposed residue contact prediction model is incorporated into the first principles protein tertiary structure prediction approach, ASTRO-FOLD. The effectiveness of the contact prediction model was further demonstrated by the improvement in the quality of the protein structure ensemble generated using the predicted residue contacts for a test set of 10 proteins. PMID:20225257
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mardirossian, Narbe; Head-Gordon, Martin
2016-11-09
Benchmark datasets of non-covalent interactions are essential for assessing the performance of density functionals and other quantum chemistry approaches. In a recent blind test, Taylor et al. benchmarked 14 methods on a new dataset consisting of 10 dimer potential energy curves calculated using coupled cluster with singles, doubles, and perturbative triples (CCSD(T)) at the complete basis set (CBS) limit (80 data points in total). The dataset is particularly interesting because compressed, near-equilibrium, and stretched regions of the potential energy surface are extensively sampled.
Development of an integrated set of research facilities for the support of research flight test
NASA Technical Reports Server (NTRS)
Moore, Archie L.; Harney, Constance D.
1988-01-01
The Ames-Dryden Flight Research Facility (DFRF) serves as the site for high-risk flight research on many one-of-a-kind test vehicles like the X-29A advanced technology demonstrator, F-16 advanced fighter technology integration (AFTI), AFTI F-111 mission adaptive wing, and F-18 high-alpha research vehicle (HARV). Ames-Dryden is on a section of the historic Muroc Range. The facility is oriented toward the testing of high-performance aircraft, as shown by its part in the development of the X-series aircraft. Given the cost of research flight tests and the complexity of today's systems-driven aircraft, an integrated set of ground support experimental facilities is a necessity. In support of the research flight test of highly advanced test beds, the DFRF is developing a network of facilities to expedite the acquisition and distribution of flight research data to the researcher. The network consists of an array of experimental ground-based facilities and systems as nodes and the necessary telecommunications paths to pass research data and information between these facilities. This paper presents the status of the current network, an overview of current developments, and a prospectus on future major enhancements.
ETICS: the international software engineering service for the grid
NASA Astrophysics Data System (ADS)
Meglio, A. D.; Bégin, M.-E.; Couvares, P.; Ronchieri, E.; Takacs, E.
2008-07-01
The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows taking into account complex dependencies among applications and middleware components and provides a rich environment to perform static and dynamic analysis of the software and execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally a number of significant use and test cases will be described to show how ETICS can be used in particular to perform interoperability tests of grid middleware using the grid itself.
Community detection for networks with unipartite and bipartite structure
NASA Astrophysics Data System (ADS)
Chang, Chang; Tang, Chao
2014-09-01
Finding community structures in networks is important in network science, technology, and applications. To date, most algorithms that aim to find community structures only focus either on unipartite or bipartite networks. A unipartite network consists of one set of nodes and a bipartite network consists of two nonoverlapping sets of nodes with only links joining the nodes in different sets. However, a third type of network exists, defined here as the mixture network. Just like a bipartite network, a mixture network also consists of two sets of nodes, but some nodes may simultaneously belong to two sets, which breaks the nonoverlapping restriction of a bipartite network. The mixture network can be considered as a general case, with unipartite and bipartite networks viewed as its limiting cases. A mixture network can represent not only all the unipartite and bipartite networks, but also a wide range of real-world networks that cannot be properly represented as either unipartite or bipartite networks in fields such as biology and social science. Based on this observation, we first propose a probabilistic model that can find modules in unipartite, bipartite, and mixture networks in a unified framework, based on the link community model for a unipartite undirected network [B. Ball et al., Phys. Rev. E 84, 036103 (2011)]. We test our algorithm on synthetic networks (both overlapping and nonoverlapping communities) and apply it to two real-world networks: a southern women bipartite network and a human transcriptional regulatory mixture network. The results suggest that our model performs well for all three types of networks, is competitive with other algorithms for unipartite or bipartite networks, and is applicable to real-world networks.
Measurement of latent cognitive abilities involved in concept identification learning.
Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Nock, Matthew K; Naifeh, James A; Heeringa, Steven; Ursano, Robert J; Stein, Murray B
2015-01-01
We used cognitive and psychometric modeling techniques to evaluate the construct validity and measurement precision of latent cognitive abilities measured by a test of concept identification learning: the Penn Conditional Exclusion Test (PCET). Item response theory parameters were embedded within classic associative- and hypothesis-based Markov learning models and were fitted to 35,553 Army soldiers' PCET data from the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Data were consistent with a hypothesis-testing model with multiple latent abilities: abstraction and set shifting. Latent abstraction ability was positively correlated with number of concepts learned, and latent set-shifting ability was negatively correlated with number of perseverative errors, supporting the construct validity of the two parameters. Abstraction was most precisely assessed for participants with abilities ranging from 1.5 standard deviations below the mean to the mean itself. Measurement of set shifting was acceptably precise only for participants making a high number of perseverative errors. The PCET precisely measures latent abstraction ability in the Army STARRS sample, especially within the range of mildly impaired to average ability. This precision pattern is ideal for a test developed to measure cognitive impairment as opposed to cognitive strength. The PCET also measures latent set-shifting ability, but reliable assessment is limited to the impaired range of ability, reflecting that perseverative errors are rare among cognitively healthy adults. Integrating cognitive and psychometric models can provide information about construct validity and measurement precision within a single analytical framework.
Pareto fronts for multiobjective optimization design on materials data
NASA Astrophysics Data System (ADS)
Gopakumar, Abhijith; Balachandran, Prasanna; Gubernatis, James E.; Lookman, Turab
Optimizing multiple properties simultaneously is vital in materials design. Here we apply information-driven, statistical optimization strategies blended with machine learning methods to address multi-objective optimization tasks on materials data. These strategies aim to find the Pareto front consisting of non-dominated data points from a set of candidate compounds with known characteristics. The objective is to find the Pareto front in as few additional measurements or calculations as possible. We show how exploration of the data space to find the front is achieved by using uncertainties in predictions from regression models. We test our proposed design strategies on multiple, independent data sets including those from computations as well as experiments. These include data sets for MAX phases, piezoelectrics, and multicomponent alloys.
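Extracting the non-dominated set itself is straightforward; below is a minimal sketch for two properties that are both to be maximized (the candidate matrix is hypothetical, and the adaptive, uncertainty-driven selection the abstract describes is not shown).

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of `points` (higher is better in each column)."""
    pts = np.asarray(points, float)
    keep = np.ones(len(pts), dtype=bool)
    for i, p in enumerate(pts):
        if keep[i]:
            # a row is dominated by p if it is <= p everywhere and < p somewhere
            dominated = np.all(pts <= p, axis=1) & np.any(pts < p, axis=1)
            keep &= ~dominated
    return pts[keep]

candidates = np.random.default_rng(0).random((100, 2))  # two properties per compound
front = pareto_front(candidates)
```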
Correlation consistent basis sets for actinides. II. The atoms Ac and Np-Lr
NASA Astrophysics Data System (ADS)
Feng, Rulin; Peterson, Kirk A.
2017-08-01
New correlation consistent basis sets optimized using the all-electron third-order Douglas-Kroll-Hess (DKH3) scalar relativistic Hamiltonian are reported for the actinide elements Ac and Np through Lr. These complete the series of sets reported previously for Th-U [K. A. Peterson, J. Chem. Phys. 142, 074105 (2015); M. Vasiliu et al., J. Phys. Chem. A 119, 11422 (2015)]. The new sets range in size from double- to quadruple-zeta and encompass both those optimized for valence (6s6p5f7s6d) and outer-core electron correlations (valence + 5s5p5d). The final sets have been contracted for both the DKH3 and eXact 2-component (X2C) Hamiltonians, yielding cc-pVnZ-DK3/cc-pVnZ-X2C sets for valence correlation and cc-pwCVnZ-DK3/cc-pwCVnZ-X2C sets for outer-core correlation (n = D, T, Q in each case). In order to test the effectiveness of the new basis sets, both atomic and molecular benchmark calculations have been carried out. In the first case, the first three atomic ionization potentials (IPs) of all the actinide elements Ac-Lr have been calculated using the Feller-Peterson-Dixon (FPD) composite approach, primarily with the multireference configuration interaction (MRCI) method. Excellent convergence towards the respective complete basis set (CBS) limits is achieved with the new sets, leading to good agreement with experiment, where these exist, after accurately accounting for spin-orbit effects using the 4-component Dirac-Hartree-Fock method. For a molecular test, the IP and atomization energy (AE) of PuO2 have been calculated also using the FPD method but using a coupled cluster approach with spin-orbit coupling accounted for using the 4-component MRCI. The present calculations yield an IP0 for PuO2 of 159.8 kcal/mol, which is in excellent agreement with the experimental electron transfer bracketing value of 162 ± 3 kcal/mol. Likewise, the calculated 0 K AE of 305.6 kcal/mol is in very good agreement with the currently accepted experimental value of 303.1 ± 5 kcal/mol. The ground state of PuO2 is predicted to be the 5Σ(0g+) state.
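Convergence toward the CBS limit is usually checked with standard extrapolation formulas; as a hedged illustration, here is a common two-point 1/n³ scheme for correlation energies (one of several in routine use, and not necessarily the one applied in this work).

```python
def cbs_two_point(e_corr_n, e_corr_m, n, m):
    """Two-point 1/n^3 extrapolation of correlation energies from
    basis sets with cardinal numbers n and m (e.g., T=3, Q=4)."""
    return (n**3 * e_corr_n - m**3 * e_corr_m) / (n**3 - m**3)

# Example with hypothetical correlation energies (hartree):
# e_cbs = cbs_two_point(-1.2345, -1.2501, 3, 4)
```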
Computer simulation of fibrillation threshold measurements and electrophysiologic testing procedures
NASA Technical Reports Server (NTRS)
Grumbach, M. P.; Saxberg, B. E.; Cohen, R. J.
1987-01-01
A finite element model of cardiac conduction was used to simulate two experimental protocols: 1) fibrillation threshold measurements and 2) clinical electrophysiologic (EP) testing procedures. The model consisted of a cylindrical lattice whose properties were determined by four parameters: element length, conduction velocity, mean refractory period, and standard deviation of refractory periods. Different stimulation patterns were applied to the lattice under a given set of lattice parameter values and the response of the model was observed through a simulated electrocardiogram. The studies confirm that the model can account for observations made in experimental fibrillation threshold measurements and in clinical EP testing protocols.
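The model's essentials can be pictured as a probabilistic cellular automaton on a cylindrical grid: a cell fires when a neighbor fired on the previous step and its own (randomly drawn) refractory period has elapsed. The sketch below is a schematic reconstruction from the four parameters named in the abstract, not the original code; grid size and parameter values are placeholders, and conduction velocity is implicitly one element per time step.

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 40, 60                 # cylindrical lattice: the x direction wraps around
mean_rp, sd_rp = 10.0, 2.0      # mean and std of refractory periods (time steps)
refractory = rng.normal(mean_rp, sd_rp, (ny, nx))
last_fired = np.full((ny, nx), -1e9)   # time each cell last fired
fired = np.zeros((ny, nx), bool)
fired[0, :] = True                     # stimulate one end of the cylinder
last_fired[0, :] = 0.0

for t in range(1, 200):
    # a cell is excitable once its refractory period has elapsed
    excitable = (t - last_fired) > refractory
    # any 4-neighbor fired last step (wrap in x, clamp in y)
    nbr = (np.roll(fired, 1, 1) | np.roll(fired, -1, 1)
           | np.vstack([fired[1:], np.zeros((1, nx), bool)])
           | np.vstack([np.zeros((1, nx), bool), fired[:-1]]))
    new = excitable & nbr
    last_fired[new] = t
    fired = new
    # total activity per step stands in for a simulated electrocardiogram
    # print(t, new.sum())
```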
A new in silico classification model for ready biodegradability, based on molecular fragments.
Lombardo, Anna; Pizzo, Fabiola; Benfenati, Emilio; Manganaro, Alberto; Ferrari, Thomas; Gini, Giuseppina
2014-08-01
Regulations such as the European REACH (Registration, Evaluation, Authorization and restriction of Chemicals) often require chemicals to be evaluated for ready biodegradability, to assess the potential risk for environmental and human health. Because not all chemicals can be tested, there is an increasing demand for tools for quick and inexpensive biodegradability screening, such as computer-based (in silico) theoretical models. We developed an in silico model starting from a dataset of 728 chemicals with ready biodegradability data (MITI test, Ministry of International Trade and Industry). We used the novel software SARpy to automatically extract, through a structural fragmentation process, a set of substructures statistically related to ready biodegradability. Then, we analysed these substructures in order to build some general rules. The model consists of a rule-set made up of the combination of the statistically relevant fragments and of the expert-based rules. The model gives good statistical performance, with 92%, 82% and 76% accuracy on the training, test and external sets, respectively. These results are comparable with other in silico models like BIOWIN, developed by the United States Environmental Protection Agency (EPA); moreover, this new model includes an easily understandable explanation. Copyright © 2014 Elsevier Ltd. All rights reserved.
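A rule-set of this kind reduces at prediction time to substructure matching; here is a schematic sketch using RDKit, where the SMARTS patterns and their class labels are invented placeholders, not the fragments actually extracted by SARpy.

```python
from rdkit import Chem

# Illustrative rules only: (SMARTS pattern, predicted class)
RULES = [
    ("[CX3](=O)[OX2][CX4]", "ready"),   # ester-like fragment (toy rule)
    ("c1ccccc1[Cl]", "not ready"),      # chlorinated aromatic (toy rule)
]

def classify(smiles, default="not ready"):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None  # unparsable structure
    for smarts, label in RULES:
        if mol.HasSubstructMatch(Chem.MolFromSmarts(smarts)):
            return label
    return default

print(classify("CCOC(C)=O"))  # ethyl acetate -> "ready" under these toy rules
```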
TESTS OF LOW-FREQUENCY GEOMETRIC DISTORTIONS IN LANDSAT 4 IMAGES.
Batson, R.M.; Borgeson, W.T.; ,
1985-01-01
Tests were performed to investigate the geometric characteristics of Landsat 4 images. The first set of tests was designed to determine the extent of image distortion caused by the physical process of writing the Landsat 4 images on film. The second was designed to characterize the geometric accuracies inherent in the digital images themselves. Test materials consisted of film images of test targets generated by the Laser Beam Recorders at Sioux Falls, the Optronics Photowrite film writer at Goddard Space Flight Center, and digital image files of a strip 600 lines deep across the full width of band 5 of the Washington, D.C. Thematic Mapper scene. The tests were made by least-squares adjustment of an array of measured image points to a corresponding array of control points.
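The least-squares adjustment can be sketched as fitting an affine transform from measured image points to control points and inspecting the residuals; the coordinates would come from the test targets, and this generic sketch is not the original adjustment software.

```python
import numpy as np

def fit_affine(image_pts, control_pts):
    """Least-squares affine transform mapping (n, 2) image points onto control points."""
    image_pts = np.asarray(image_pts, float)
    control_pts = np.asarray(control_pts, float)
    A = np.hstack([image_pts, np.ones((len(image_pts), 1))])   # rows [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, control_pts, rcond=None)   # (3, 2) matrix
    residuals = control_pts - A @ coeffs
    rmse = np.sqrt((residuals ** 2).sum(axis=1).mean())
    return coeffs, rmse
```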
Differential item functioning analysis of the Vanderbilt Expertise Test for cars
Lee, Woo-Yeol; Cho, Sun-Joo; McGugin, Rankin W.; Van Gulick, Ana Beth; Gauthier, Isabel
2015-01-01
The Vanderbilt Expertise Test for cars (VETcar) is a test of visual learning for contemporary car models. We used item response theory to assess the VETcar and in particular used differential item functioning (DIF) analysis to ask if the test functions the same way in laboratory versus online settings and for different groups based on age and gender. An exploratory factor analysis found evidence of multidimensionality in the VETcar, although a single dimension was deemed sufficient to capture the recognition ability measured by the test. We selected a unidimensional three-parameter logistic item response model to examine item characteristics and subject abilities. The VETcar had satisfactory internal consistency. A substantial number of items showed DIF at a medium effect size for test setting and for age group, whereas gender DIF was negligible. Because online subjects were on average older than those tested in the lab, we focused on the age groups to conduct a multigroup item response theory analysis. This revealed that most items on the test favored the younger group. DIF could be more the rule than the exception when measuring performance with familiar object categories, therefore posing a challenge for the measurement of either domain-general visual abilities or category-specific knowledge. PMID:26418499
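One common way to screen items for DIF (a generic logistic-regression approach, shown here instead of the multigroup IRT analysis the paper uses) is to test whether group membership predicts an item response after conditioning on the rest of the test score. Column names below are assumptions.

```python
import statsmodels.formula.api as smf

def dif_screen(df):
    """df columns: 'item' (0/1 response), 'total' (rest score), 'group' (0/1).

    Uniform DIF shows up as a significant 'group' term; nonuniform DIF
    as a significant 'total:group' interaction.
    """
    fit = smf.logit("item ~ total + group + total:group", data=df).fit(disp=0)
    return fit.pvalues[["group", "total:group"]]
```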
Publication bias and the failure of replication in experimental psychology.
Francis, Gregory
2012-12-01
Replication of empirical findings plays a fundamental role in science. Among experimental psychologists, successful replication enhances belief in a finding, while a failure to replicate is often interpreted to mean that one of the experiments is flawed. This view is wrong. Because experimental psychology uses statistics, empirical findings should appear with predictable probabilities. In a misguided effort to demonstrate successful replication of empirical findings and avoid failures to replicate, experimental psychologists sometimes report too many positive results. Rather than strengthen confidence in an effect, too much successful replication actually indicates publication bias, which invalidates entire sets of experimental findings. Researchers cannot judge the validity of a set of biased experiments because the experiment set may consist entirely of type I errors. This article shows how an investigation of the effect sizes from reported experiments can test for publication bias by looking for too much successful replication. Simulated experiments demonstrate that the publication bias test is able to discriminate biased experiment sets from unbiased experiment sets, but it is conservative about reporting bias. The test is then applied to several studies of prominent phenomena that highlight how publication bias contaminates some findings in experimental psychology. Additional simulated experiments demonstrate that using Bayesian methods of data analysis can reduce (and in some cases, eliminate) the occurrence of publication bias. Such methods should be part of a systematic process to remove publication bias from experimental psychology and reinstate the important role of replication as a final arbiter of scientific findings.
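The core of the test can be sketched as follows: estimate the pooled effect size, compute each experiment's power to detect it, and take the product as the probability that all k experiments would be significant; values below the conventional 0.1 criterion signal suspiciously excessive success. The sketch assumes two-sample t-tests with equal group sizes and a crude sample-size-weighted pooling.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

def excess_success(effect_sizes, ns_per_group, alpha=0.05):
    """Probability that every experiment in the set rejects H0,
    given the pooled (sample-size weighted) effect size."""
    d = np.average(effect_sizes, weights=ns_per_group)  # crude pooled estimate
    solver = TTestIndPower()
    powers = [solver.power(effect_size=d, nobs1=n, alpha=alpha)
              for n in ns_per_group]
    return float(np.prod(powers))

# p_all = excess_success([0.55, 0.60, 0.48], [20, 25, 22])
# p_all < 0.1 suggests publication bias in the experiment set
```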
VizieR Online Data Catalog: Classification of 2XMM variable sources (Lo+, 2014)
NASA Astrophysics Data System (ADS)
Lo, K. K.; Farrell, S.; Murphy, T.; Gaensler, B. M.
2017-06-01
The 2XMMi-DR2 catalog (Cat. IX/40) consists of observations made with the XMM-Newton satellite between 2000 and 2008 and covers a sky area of about 420 deg². The observations were made using the European Photon Imaging Camera (EPIC) that consists of three CCD cameras - pn, MOS1, and MOS2 - and covers the energy range from 0.2 keV to 12 keV. There are 221012 unique sources in 2XMM-DR2, of which 2267 were flagged as variable by the XMM processing pipeline (Watson et al. 2009, J/A+A/493/339). The variability test used by the pipeline is a χ² test against the null hypothesis that the source flux is constant, with the probability threshold set at 10⁻⁵. (1 data file).
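The pipeline's variability flag is a standard χ² test against a constant-flux model; a minimal sketch for a binned light curve with Gaussian errors (a generic illustration, not the XMM pipeline code):

```python
import numpy as np
from scipy.stats import chi2

def is_variable(rates, errors, threshold=1e-5):
    """Chi-square test against the null hypothesis of constant source flux."""
    rates, errors = np.asarray(rates, float), np.asarray(errors, float)
    mean = np.average(rates, weights=1.0 / errors**2)   # weighted mean rate
    stat = np.sum(((rates - mean) / errors) ** 2)
    p = chi2.sf(stat, df=rates.size - 1)
    return p < threshold, p
```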
NASA Astrophysics Data System (ADS)
Hadaway, James B.; Wells, Conrad; Olczak, Gene; Waldman, Mark; Whitman, Tony; Cosentino, Joseph; Connolly, Mark; Chaney, David; Telfer, Randal
2016-07-01
The James Webb Space Telescope (JWST) primary mirror (PM) is 6.6 m in diameter and consists of 18 hexagonal segments, each 1.5 m point-to-point. Each segment has a six degree-of-freedom hexapod actuation system and a radius-of-curvature (RoC) actuation system. The full telescope will be tested at its cryogenic operating temperature at Johnson Space Center. This testing will include center-of-curvature measurements of the PM, using the Center-of-Curvature Optical Assembly (COCOA) and the Absolute Distance Meter Assembly (ADMA). The COCOA includes an interferometer, a reflective null, an interferometer-null calibration system, coarse and fine alignment systems, and two displacement measuring interferometer systems. A multiple-wavelength interferometer (MWIF) is used for alignment and phasing of the PM segments. The ADMA is used to measure, and set, the spacing between the PM and the focus of the COCOA null (i.e. the PM center-of-curvature) for determination of the RoC. The performance of these metrology systems was assessed during two cryogenic tests at JSC. This testing was performed using the JWST Pathfinder telescope, consisting mostly of engineering development and spare hardware. The Pathfinder PM consists of two spare segments. These tests provided the opportunity to assess how well the center-of-curvature optical metrology hardware, along with the software and procedures, performed using real JWST telescope hardware. This paper will describe the test setup, the testing performed, and the resulting metrology system performance. The knowledge gained and the lessons learned during this testing will be of great benefit to the accurate and efficient cryogenic testing of the JWST flight telescope.
An Update on the Lithium-Ion Cell Low-Earth-Orbit Verification Test Program
NASA Technical Reports Server (NTRS)
Reid, Concha M.; Manzo, Michelle A.; Miller, Thomas B.; McKissock, Barbara I.; Bennett, William
2007-01-01
A Lithium-Ion Cell Low-Earth-Orbit Verification Test Program is being conducted by NASA Glenn Research Center to assess the performance of lithium-ion (Li-ion) cells over a wide range of low-Earth-orbit (LEO) conditions. The data generated will be used to build an empirical model for Li-ion batteries. The goal of the modeling will be to develop a tool to predict the performance and cycle life of Li-ion batteries operating at a specified set of mission conditions. Using this tool, mission planners will be able to design operation points of the battery system while factoring in mission requirements and the expected life and performance of the batteries. Test conditions for the program were selected via a statistical design of experiments to span a range of feasible operational conditions for LEO aerospace applications. The variables under evaluation are temperature, depth-of-discharge (DOD), and end-of-charge voltage (EOCV). The baseline matrix was formed by generating combinations from a set of three values for each variable. Temperature values are 10 C, 20 C and 30 C. Depth-of-discharge values are 20%, 30% and 40%. EOCV values are 3.85 V, 3.95 V, and 4.05 V. Test conditions for individual cells may vary slightly from the baseline test matrix depending upon the cell manufacturer's recommended operating conditions. Cells from each vendor are being evaluated at each of ten sets of test conditions. Cells from four cell manufacturers are undergoing life cycle tests. Life cycling on the first sets of cells began in September 2004. These cells consist of Saft 40 ampere-hour (Ah) cells and Lithion 30 Ah cells. These cells have achieved over 10,000 cycles each, equivalent to about 20 months in LEO. In the past year, the test program has expanded to include the evaluation of Mine Safety Appliances (MSA) 50 Ah cells and ABSL battery modules. The MSA cells will begin life cycling in October 2006. The ABSL battery modules consist of commercial Sony hard carbon 18650 lithium-ion cells configured in series and parallel combinations to create nominal 14.4 volt, 3 Ah packs (4s-2p). These modules have accumulated approximately 3000 cycles. Results on the performance of the cells and modules will be presented in this paper. The life prediction and performance model for Li-ion cells in LEO will be built by analyzing the data statistically and performing regression analysis. Cells are being cycled to failure so that differences in performance trends that occur at different stages in the life of the cell can be observed and accurately modeled. Cell testing is being performed at the Naval Surface Warfare Center in Crane, IN.
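The full factorial from which the ten test conditions per vendor were drawn is just the set of combinations of the three variables; a one-liner reproduces it.

```python
from itertools import product

temps = [10, 20, 30]        # temperature, deg C
dods = [20, 30, 40]         # depth of discharge, %
eocvs = [3.85, 3.95, 4.05]  # end-of-charge voltage, V

baseline_matrix = list(product(temps, dods, eocvs))  # 27 candidate conditions
```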
Translating and validating a Training Needs Assessment tool into Greek
Markaki, Adelais; Antonakis, Nikos; Hicks, Carolyn M; Lionis, Christos
2007-01-01
Background The translation and cultural adaptation of widely accepted, psychometrically tested tools is regarded as an essential component of effective human resource management in the primary care arena. The Training Needs Assessment (TNA) is a widely used, valid instrument, designed to measure professional development needs of health care professionals, especially in primary health care. This study aims to describe the translation, adaptation and validation of the TNA questionnaire into the Greek language and discuss possibilities of its use in primary care settings. Methods A modified version of the English self-administered questionnaire consisting of 30 items was used. Internationally recommended methodology, mandating forward translation, backward translation, reconciliation and pretesting steps, was followed. Tool validation included assessing item internal consistency, using Cronbach's alpha coefficient. Reproducibility (test-retest reliability) was measured by the kappa correlation coefficient. Criterion validity was calculated for selected parts of the questionnaire by correlating respondents' research experience with relevant research item scores. An exploratory factor analysis highlighted how the items group together, using a Varimax (orthogonal) rotation and subsequent Cronbach's alpha assessment. Results The psychometric properties of the Greek version of the TNA questionnaire for nursing staff employed in primary care were good. Internal consistency of the instrument was very good, Cronbach's alpha was found to be 0.985 (p < 0.001) and the kappa coefficient for reproducibility was found to be 0.928 (p < 0.0001). Significant positive correlations were found between respondents' current performance levels on each of the research items and the amount of research involvement, indicating good criterion validity in the areas tested. Factor analysis revealed seven factors with eigenvalues of > 1.0, KMO (Kaiser-Meyer-Olkin) measure of sampling adequacy = 0.680 and Bartlett's test of sphericity, p < 0.001. Conclusion The translated and adapted Greek version is comparable with the original English instrument in terms of validity and reliability and is suitable to assess professional development needs of nursing staff in Greek primary care settings. PMID:17474989
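The internal-consistency statistic reported above, Cronbach's alpha, is straightforward to compute from an item-score matrix. A minimal sketch, with hypothetical data rather than the TNA responses:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                               # number of items
    item_var = scores.var(axis=0, ddof=1).sum()       # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical 5 respondents x 4 items (Likert-style scores)
scores = np.array([[4, 5, 4, 4],
                   [2, 3, 2, 3],
                   [5, 5, 4, 5],
                   [3, 3, 3, 2],
                   [4, 4, 5, 4]])
print(round(cronbach_alpha(scores), 3))
```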
Progress of soil radionuclide distribution studies for the Nevada Applied Ecology Group: 1981
DOE Office of Scientific and Technical Information (OSTI.GOV)
Essington, E.H.
Two nuclear sites have been under intensive study by the Nevada Applied Ecology Group (NAEG) during 1980 and 1981, NS201 in area 18 and NS219,221 in area 20. In support of the various studies Los Alamos National Laboratory (Group LS-6) has provided consultation and evaluations relative to radionuclide distributions in soils inundated with radioactive debris from those tests. In addition, a referee effort was also conducted, both in analysis of replicate samples and in evaluating various data sets for consistency of results. This report summarizes results of several of the data sets collected to test certain hypotheses relative to radionuclide distributions and factors affecting calculations of radionuclide inventories, and covers the period February 1980 to May 1981.
Bloomfield, Sally F.; Carling, Philip C.; Exner, Martin
2017-01-01
Hygiene procedures for hands, surfaces and fabrics are central to preventing the spread of infection in settings including healthcare, food production, catering, agriculture, public settings, and home and everyday life. They are used in situations including hand hygiene, clinical procedures, decontamination of environmental surfaces, respiratory hygiene, food handling, laundry hygiene, toilet hygiene and so on. Although the principles are common to all, the approaches currently used in different settings are inconsistent. A particular concern is the use of inconsistent terminology, which is misleading, especially to the people we need to communicate with, such as the public or cleaning professionals. This paper reviews the data on current approaches, alongside new insights into developing hygiene procedures. Using this data, we propose a more scientifically grounded framework for developing procedures that maximize protection against infection, based on consistent principles and terminology and applicable across all settings. A key feature is the use of test models which assess the state of surfaces after treatment rather than product performance alone. This allows procedures that rely on removal of microbes to be compared with those employing chemical or thermal inactivation. This makes it possible to ensure that a consistent “safety target level” is achieved regardless of the type of procedure used, and allows us to deliver maximum health benefit whilst ensuring prudent usage of antimicrobial agents, detergents, water and energy. PMID:28670508
Huang, Yun-Hsin; Wu, Chih-Hsun; Chen, Hsiu-Jung; Cheng, Yih-Ru; Hung, Fu-Chien; Leung, Kai-Kuan; Lue, Bee-Horng; Chen, Ching-Yu; Chiu, Tai-Yuan; Wu, Yin-Chang
2018-01-16
Severe negative emotional reactions to chronic illness are maladaptive and need to be addressed in a primary care setting. The psychometric properties of a quick screening tool, the Negative Emotions due to Chronic Illness Screening Test (NECIS), for general emotional problems among patients with chronic illness treated in a primary care setting were investigated. Three studies including 375 patients with chronic illness were used to assess and analyze internal consistency, test-retest reliability, criterion-related validity, a cut-off point for distinguishing maladaptive emotions, and the clinical application validity of the NECIS. Self-report questionnaires were used. Internal consistency (Cronbach's α) ranged from 0.78 to 0.82, and the test-retest reliability was 0.71 (P < 0.001). Criterion-related validity was 0.51 (P < 0.001). Based on the 'severe maladaptation' and 'moderate maladaptation' groups defined by using the 'Worsening due to Chronic Illness' index as the analysis reference, the receiver-operating characteristic curve analysis revealed an area under the curve of 0.81 and 0.82 (ps < 0.001), and a cut-off point of 19/20 was the most satisfactory for distinguishing those with overly negative emotions, with sensitivity and specificity of 83.3 and 69.0%, and 68.5 and 83.0%, respectively. The clinical application validity analysis revealed that the low-NECIS group showed significantly better adaptation to chronic illness on the scales of subjective health, general satisfaction with life, self-efficacy of self-care for disease, illness perception and stressors in everyday life. The NECIS has satisfactory psychometric properties for use in the primary care setting. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
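For illustration, a cut-off like the 19/20 above can be located on a receiver-operating characteristic curve. A minimal sketch using scikit-learn, with hypothetical scores and reference labels; Youden's J is one common selection criterion, not necessarily the one used in the study:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: 1 = maladaptive per the reference index, 0 = not.
y_true = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1])
necis_total = np.array([12, 15, 18, 21, 14, 25, 19, 16, 27, 22, 13, 20])

fpr, tpr, thresholds = roc_curve(y_true, necis_total)
print("AUC:", round(roc_auc_score(y_true, necis_total), 2))

# Youden's J = sensitivity + specificity - 1; its maximum suggests a cut-off.
j = tpr - fpr
print("suggested cut-off:", thresholds[np.argmax(j)])
```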
NASA Technical Reports Server (NTRS)
Held, Louis F.; Pritchard, Ernest I.
1946-01-01
An investigation was conducted to evaluate the possibilities of utilizing the high-performance characteristics of triptane and xylidines blended with 28-R fuel in order to increase fuel economy by the use of high compression ratios and maximum-economy spark setting. Full-scale single-cylinder knock tests were run with 20 deg B.T.C. and maximum-economy spark settings at compression ratios of 6.9, 8.0, and 10.0, and with two inlet-air temperatures. The fuels tested consisted of triptane, four blends of triptane with 28-R, one blend of xylidines with 28-R, and 28-R fuel alone. Indicated specific fuel consumption at lean mixtures was decreased approximately 17 percent at a compression ratio of 10.0 and maximum-economy spark setting, as compared to that obtained with a compression ratio of 6.9 and normal spark setting. When compression ratio was increased from 6.9 to 10.0 at an inlet-air temperature of 150 F, normal spark setting, and a fuel-air ratio of 0.065, 55-percent triptane was required with 28-R fuel to maintain the knock-limited brake power level obtained with 28-R fuel at a compression ratio of 6.9. Brake specific fuel consumption was decreased 17.5 percent at a compression ratio of 10.0 relative to that obtained at a compression ratio of 6.9. Approximately similar results were noted at an inlet-air temperature of 250 F. For concentrations up through at least 20 percent, triptane can be more efficiently used at normal than at maximum-economy spark setting to maintain a constant knock-limited power output over the range of compression ratios tested.
Reliability Measure of a Clinical Test: Appreciation of Music in Cochlear Implantees (AMICI)
Cheng, Min-Yu; Spitzer, Jaclyn B.; Shafiro, Valeriy; Sheft, Stanley; Mancuso, Dean
2014-01-01
Purpose The goals of this study were (1) to investigate the reliability of a clinical music perception test, Appreciation of Music in Cochlear Implantees (AMICI), and (2) to examine associations between the perception of music and speech. AMICI was developed as a clinical instrument for assessing music perception in persons with cochlear implants (CIs). The test consists of four subtests: (1) music versus environmental noise discrimination, (2) musical instrument identification (closed-set), (3) musical style identification (closed-set), and (4) identification of musical pieces (open-set). To be clinically useful, it is crucial for AMICI to demonstrate high test-retest reliability, so that CI users can be assessed and retested after changes in maps or programming strategies. Research Design Thirteen CI subjects were tested with AMICI at the initial visit and retested 10–14 days later. Two speech perception tests (consonant-nucleus-consonant [CNC] and Bamford-Kowal-Bench Speech-in-Noise [BKB-SIN]) were also administered. Data Analysis Test-retest reliability and equivalence of the test's three forms were analyzed using paired t-tests and correlation coefficients, respectively. Correlation analysis was also conducted between results from the music and speech perception tests. Results Results showed no significant difference between test and retest (p > 0.05) with adequate power (0.9), as well as high correlations between the three forms (Forms A and B, r = 0.91; Forms A and C, r = 0.91; Forms B and C, r = 0.95). Correlation analysis showed a high correlation between AMICI and BKB-SIN (r = -0.71) and a moderate correlation between AMICI and CNC (r = 0.4). Conclusions The study showed AMICI is highly reliable for assessing music perception in CI users. PMID:24384082
Ng, Shamay S. M.; Ng, Gabriel Y. F.
2014-01-01
Objectives. To (1) translate and culturally adapt the English version Community Integration Measure into Chinese (Cantonese), (2) report the results of initial validation of the Chinese (Cantonese) version of CIM (CIM-C) including the content validity, internal consistency, test-retest reliability, and factor structure of CIM-C for use in stroke survivors in a Chinese community setting, and (3) investigate the level of community integration of stroke survivors living in Hong Kong. Design. Cross-sectional study. Setting. University-based rehabilitation centre. Participants. 62 (n = 62) subjects with chronic stroke. Methods. The CIM-C was produced after forward-backward translation, expert panel review, and pretesting. 25 (n = 25) of the same subjects were reassessed after a 1-week interval. Results. The items of the CIM-C demonstrated high internal consistency with a Cronbach's α of 0.84. The CIM-C showed good test-retest reliability with an intraclass correlation coefficient (ICC) of 0.84 (95% confidence interval, 0.64–0.93). A 3-factor structure of the CIM-C including “relationship and engagement,” “sense of knowing,” and “independent living,” was consistent with the original theoretical model. Hong Kong stroke survivors revealed a high level of community integration as measured by the CIM-C (mean (SD): 43.48 (5.79)). Conclusions. The CIM-C is a valid and reliable measure for clinical use. PMID:24995317
Balsamo, Sandor; Tibana, Ramires Alsamir; Nascimento, Dahan da Cunha; de Farias, Gleyverton Landim; Petruccelli, Zeno; de Santana, Frederico dos Santos; Martins, Otávio Vanni; de Aguiar, Fernando; Pereira, Guilherme Borges; de Souza, Jéssica Cardoso; Prestes, Jonato
2012-01-01
The super-set is a widely used resistance training method consisting of exercises for agonist and antagonist muscles with limited or no rest interval between them – for example, bench press followed by bent-over rows. In this sense, the aim of the present study was to compare the effects of different super-set exercise sequences on the total training volume. A secondary aim was to evaluate the ratings of perceived exertion and fatigue index in response to different exercise order. On separate testing days, twelve resistance-trained men, aged 23.0 ± 4.3 years, height 174.8 ± 6.75 cm, body mass 77.8 ± 13.27 kg, body fat 12.0% ± 4.7%, were submitted to a super-set method by using two different exercise orders: quadriceps (leg extension) + hamstrings (leg curl) (QH) or hamstrings (leg curl) + quadriceps (leg extension) (HQ). Sessions consisted of three sets with a ten-repetition maximum load with 90 seconds rest between sets. Results revealed that the total training volume was higher for the HQ exercise order (P = 0.02) with lower perceived exertion than the inverse order (P = 0.04). These results suggest that HQ exercise order involving lower limbs may benefit practitioners interested in reaching a higher total training volume with lower ratings of perceived exertion compared with the leg extension plus leg curl order. PMID:22371654
The role of competing knowledge structures in undermining learning: Newton's second and third laws
NASA Astrophysics Data System (ADS)
Low, David J.; Wilson, Kate F.
2017-01-01
We investigate the development of student understanding of Newton's laws using a pre-instruction test (the Force Concept Inventory), followed by a series of post-instruction tests and interviews. While some students' somewhat naive, pre-existing models of Newton's third law are largely eliminated following a semester of teaching, we find that a particular inconsistent model is highly resilient to, and may even be strengthened by, instruction. If test items contain words that cue students to think of Newton's second law, then students are more likely to apply a "net force" approach to solving problems, even if it is inappropriate to do so. Additional instruction, reinforcing physical concepts in multiple settings and from multiple sources, appears to help students develop a more connected and consistent level of understanding. We recommend explicitly encouraging students to check their work for consistency with physical principles, along with the standard checks for dimensionality and order of magnitude, to encourage reflective and rigorous problem solving.
Plasma Accelerator and Energy Conversion Research
1982-10-29
performance tests have been accomplished. A self-contained recirculating AMTEC device with a thermal-to-electric conversion efficiency of 19% has been... combined efficiency. These two match up particularly well, because thermionic conversion is a high-temperature technique, whereas AMTEC is limited to... EXPERIMENTAL: Samples: The samples were prepared with a high-rate DC magnetron sputtering apparatus (SFI model 1). The sample set consisted of four
ERIC Educational Resources Information Center
Novakovic, Nadezda
2008-01-01
The Angoff method is a widely used procedure for setting pass scores in vocational examinations, in which the awarders estimate the performance of minimally competent candidates (MCCs) on each test item. Within the context of some UK vocational examinations, the procedure consists of two stages: after making the first round of estimates, awarders…
ERIC Educational Resources Information Center
Carlfjord, Siw; Johansson, Kjell; Bendtsen, Preben; Nilsen, Per; Andersson, Agneta
2010-01-01
Objective: The aim of this study was to evaluate staff experiences of the use of a computer-based concept for lifestyle testing and tailored advice implemented in routine primary health care (PHC). Design: The design of the study was a cross-sectional, retrospective survey. Setting: The study population consisted of staff at nine PHC units in the…
Reasoning, Problem Solving, and Intelligence.
1980-04-01
designed to test the validity of their model of response choice in analogical reasoning. In the first experiment, they set out to demonstrate that... second experiment were somewhat consistent with the prediction. The third experiment used a concept-formation design in which subjects were required to... designed to show interrelationships between various forms of inductive reasoning. Their model fits were highly comparable to those of Rumelhart and
ERIC Educational Resources Information Center
Gorlewski, Julie A., Ed.; Porfilio, Brad J., Ed.; Gorlewski, David A., Ed.
2012-01-01
This book overturns the typical conception of standards, empowering educators by providing concrete examples of how top-down models of assessment can be embraced and used in ways that are consistent with critical pedagogies. Although standards, as broad frameworks for setting learning targets, are not necessarily problematic, when they are…
Platelet Aggregometry Testing: Molecular Mechanisms, Techniques and Clinical Implications
Koltai, Katalin; Kesmarky, Gabor; Feher, Gergely; Tibold, Antal
2017-01-01
Platelets play a fundamental role in normal hemostasis, while their inherited or acquired dysfunctions are involved in a variety of bleeding disorders or thrombotic events. Several laboratory methodologies or point-of-care testing methods are currently available for clinical and experimental settings. These methods describe different aspects of platelet function based on platelet aggregation, platelet adhesion, the viscoelastic properties during clot formation, the evaluation of thromboxane metabolism or certain flow cytometry techniques. Platelet aggregometry is applied in different clinical settings, such as monitoring response to antiplatelet therapies, assessing perioperative bleeding risk, diagnosing inherited bleeding disorders, and in transfusion medicine. The rationale for platelet function-driven antiplatelet therapy was based on the results of several studies on patients undergoing percutaneous coronary intervention (PCI), where an association between high platelet reactivity despite P2Y12 inhibition and ischemic events such as stent thrombosis or cardiovascular death was found. However, recent large-scale randomized, controlled trials have consistently failed to demonstrate a benefit of personalised antiplatelet therapy based on platelet function testing. PMID:28820484
Need for cognition moderates paranormal beliefs and magical ideation in inconsistent-handers.
Prichard, Eric C; Christman, Stephen D
2016-01-01
A growing literature suggests that degree of handedness predicts gullibility and magical ideation. Inconsistent-handers (people who use their non-dominant hand for at least one common manual activity) report more magical ideation and are more gullible. The current study tested whether this effect is moderated by need for cognition. One hundred eighteen university students completed questionnaires assessing handedness, self-reported paranormal beliefs, and self-reported need for cognition. Handedness (Inconsistent vs. Consistent Right) and Need for Cognition (High vs. Low) were treated as categorical predictors. Both paranormal beliefs and magical ideation served as dependent variables in separate analyses. Neither set of tests yielded main effects for handedness or need for cognition. However, there was a significant handedness-by-need-for-cognition interaction. Post-hoc comparisons revealed that low, but not high, need for cognition inconsistent-handers reported relatively elevated levels of paranormal belief and magical ideation. A secondary set of tests treating the predictor variables as continuous instead of categorical obtained the same overall pattern.
Thinking within the box: The relational processing style elicited by counterfactual mind-sets.
Kray, Laura J; Galinsky, Adam D; Wong, Elaine M
2006-07-01
By comparing reality to what might have been, counterfactuals promote a relational processing style characterized by a tendency to consider relationships and associations among a set of stimuli. As such, counterfactual mind-sets were expected to improve performance on tasks involving the consideration of relationships and associations but to impair performance on tasks requiring novel ideas that are uninfluenced by salient associations. The authors conducted several experiments to test this hypothesis. In Experiments 1a and 1b, the authors determined that counterfactual mind-sets increase mental states and preferences for thinking styles consistent with relational thought. Experiment 2 demonstrated a facilitative effect of counterfactual mind-sets on an analytic task involving logical relationships; Experiments 3 and 4 demonstrated that counterfactual mind-sets structure thought and imagination around salient associations and therefore impaired performance on creative generation tasks. In Experiment 5, the authors demonstrated that the detrimental effect of counterfactual mind-sets is limited to creative tasks involving novel idea generation; in a creative association task involving the consideration of relationships between task stimuli, counterfactual mind-sets improved performance. Copyright 2006 APA, all rights reserved.
Foster, Paul M D
2014-12-01
The National Toxicology Program (NTP) has developed a new flexible study design, termed the modified one-generation (MOG) reproduction study. The MOG study will encompass measurements of developmental and reproductive toxicity parameters as well as enable the setting of appropriate dose levels for a cancer bioassay through evaluation of target organ toxicity, based on test article exposure that starts during gestation. This study design is compared and contrasted with the new Organisation for Economic Co-operation and Development (OECD) 443 test guideline, the extended one-generation reproduction study. The MOG study has a number of advantages: a focus on F1 animals; the generation of adequately powered, robust data sets that include both pre- and postnatal developmental toxicity information; and the measurement of effects on reproductive structure and function in the same animals. This new study design does not employ internal triggers for the use of animals already on test and is also consistent with the principles of the 3Rs. © 2014 by The Author(s).
Garway-Heath, David F
2008-01-01
This chapter reviews the evidence for the clinical application of vision function tests and imaging devices to identify early glaucoma, and sets out a scheme for the appropriate use and interpretation of test results in screening/case-finding and clinic settings. In early glaucoma, signs may be equivocal and the diagnosis is often uncertain. Either structural damage or vision function loss may be the first sign of glaucoma; neither one is consistently apparent before the other. Quantitative tests of visual function and measurements of optic-nerve head and retinal nerve fiber layer anatomy are useful to either raise or lower the probability that glaucoma is present. The posttest probability for glaucoma may be calculated from the pretest probability and the likelihood ratio of the diagnostic criterion, and the output of several diagnostic devices may be combined to achieve a final probability. However, clinicians need to understand how these diagnostic devices make their measurements, so that the validity of each test result can be adequately assessed. Only then should the result be used, together with the patient history and clinical examination, to derive a diagnosis.
Late summer sea ice segmentation with multi-polarisation SAR features in C- and X-band
NASA Astrophysics Data System (ADS)
Fors, A. S.; Brekke, C.; Doulgeris, A. P.; Eltoft, T.; Renner, A. H. H.; Gerland, S.
2015-09-01
In this study we investigate the potential of sea ice segmentation by C- and X-band multi-polarisation synthetic aperture radar (SAR) features during late summer. Five high-resolution satellite SAR scenes were recorded in the Fram Strait covering iceberg-fast first-year and old sea ice during a week with air temperatures varying around zero degrees Celsius. In situ data consisting of sea ice thickness, surface roughness and aerial photographs were collected during a helicopter flight at the site. Six polarimetric SAR features were extracted for each of the scenes. The ability of the individual SAR features to discriminate between sea ice types and their temporal consistency were examined. All SAR features were found to add value to sea ice type discrimination. Relative kurtosis, geometric brightness, cross-polarisation ratio and co-polarisation correlation angle were found to be temporally consistent in the investigated period, while co-polarisation ratio and co-polarisation correlation magnitude were found to be temporally inconsistent. An automatic feature-based segmentation algorithm was tested both for a full SAR feature set and for a reduced SAR feature set limited to temporally consistent features. In general, the algorithm produces a good late summer sea ice segmentation. Excluding temporally inconsistent SAR features improved the segmentation at air temperatures above zero degrees Celsius.
Integrated cosmological probes: concordance quantified
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicola, Andrina; Amara, Adam; Refregier, Alexandre, E-mail: andrina.nicola@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch
2017-10-01
Assessing the consistency of parameter constraints derived from different cosmological probes is an important way to test the validity of the underlying cosmological model. In an earlier work [1], we computed constraints on cosmological parameters for ΛCDM from an integrated analysis of CMB temperature anisotropies and CMB lensing from Planck, galaxy clustering and weak lensing from SDSS, weak lensing from DES SV, as well as Type Ia supernovae and Hubble parameter measurements. In this work, we extend this analysis and quantify the concordance between the derived constraints and those derived by the Planck Collaboration as well as WMAP9, SPT and ACT. As a measure for consistency, we use the Surprise statistic [2], which is based on the relative entropy. In the framework of a flat ΛCDM cosmological model, we find all data sets to be consistent with one another at a level of less than 1σ. We highlight that the relative entropy is sensitive to inconsistencies in the models that are used in different parts of the analysis. In particular, inconsistent assumptions for the neutrino mass break its invariance on the parameter choice. When consistent model assumptions are used, the data sets considered in this work all agree with each other and ΛCDM, without evidence for tensions.
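The Surprise statistic referenced above is built on the relative entropy between posterior distributions. As a minimal sketch, the relative entropy between two Gaussian posterior approximations has a closed form; the toy parameter values below are illustrative, not the paper's constraints:

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """Relative entropy D(N0 || N1) between two multivariate Gaussians, in nats."""
    mu0, mu1 = np.atleast_1d(mu0), np.atleast_1d(mu1)
    cov0, cov1 = np.atleast_2d(cov0), np.atleast_2d(cov1)
    d = mu0.size
    inv1 = np.linalg.inv(cov1)
    dm = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + dm @ inv1 @ dm - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Toy two-parameter posteriors (e.g. Omega_m, sigma_8) from two probes
print(kl_gaussian([0.31, 0.81], np.diag([0.010**2, 0.020**2]),
                  [0.30, 0.83], np.diag([0.015**2, 0.025**2])))
```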
Setsirichok, Damrongrit; Tienboon, Phuwadej; Jaroonruang, Nattapong; Kittichaijaroen, Somkit; Wongseree, Waranyu; Piroonratana, Theera; Usavanarong, Touchpong; Limwongse, Chanin; Aporntewan, Chatchawit; Phadoongsidhi, Marong; Chaiyaratana, Nachol
2013-01-01
This article presents the ability of an omnibus permutation test on ensembles of two-locus analyses (2LOmb) to detect pure epistasis in the presence of genetic heterogeneity. The performance of 2LOmb is evaluated in various simulation scenarios covering two independent causes of complex disease where each cause is governed by a purely epistatic interaction. Different scenarios are set up by varying the number of available single nucleotide polymorphisms (SNPs) in data, number of causative SNPs and ratio of case samples from two affected groups. The simulation results indicate that 2LOmb outperforms multifactor dimensionality reduction (MDR) and random forest (RF) techniques in terms of a low number of output SNPs and a high number of correctly-identified causative SNPs. Moreover, 2LOmb is capable of identifying the number of independent interactions in tractable computational time and can be used in genome-wide association studies. 2LOmb is subsequently applied to a type 1 diabetes mellitus (T1D) data set, which is collected from a UK population by the Wellcome Trust Case Control Consortium (WTCCC). After screening for SNPs that locate within or near genes and exhibit no marginal single-locus effects, the T1D data set is reduced to 95,991 SNPs from 12,146 genes. The 2LOmb search in the reduced T1D data set reveals that 12 SNPs, which can be divided into two independent sets, are associated with the disease. The first SNP set consists of three SNPs from MUC21 (mucin 21, cell surface associated), three SNPs from MUC22 (mucin 22), two SNPs from PSORS1C1 (psoriasis susceptibility 1 candidate 1) and one SNP from TCF19 (transcription factor 19). A four-locus interaction between these four genes is also detected. The second SNP set consists of three SNPs from ATAD1 (ATPase family, AAA domain containing 1). Overall, the findings indicate the detection of pure epistasis in the presence of genetic heterogeneity and provide an alternative explanation for the aetiology of T1D in the UK population.
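2LOmb itself is an omnibus permutation test over ensembles of two-locus analyses; as a much simpler illustration of the underlying idea, a generic label-permutation test for a single statistic can be sketched as follows (all data simulated):

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(stat_fn, group_a, group_b, n_perm=10_000):
    """p-value for stat_fn(group_a, group_b) under random label permutation."""
    observed = stat_fn(group_a, group_b)
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if stat_fn(pooled[:n_a], pooled[n_a:]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Toy example: allele-count difference at one SNP between cases and controls
mean_diff = lambda a, b: abs(a.mean() - b.mean())
cases = rng.binomial(2, 0.35, size=200).astype(float)
controls = rng.binomial(2, 0.25, size=200).astype(float)
print(permutation_test(mean_diff, cases, controls))
```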
Ihnen, S.K.Z.; Petersen, Steven E.; Schlaggar, Bradley L.
2015-01-01
Attentional control is important both for learning to read and for performing difficult reading tasks. A previous study invoked 2 mechanisms to explain reaction time (RT) differences between reading tasks with variable attentional demands. The present study combined behavioral and neuroimaging measures to test the hypotheses that there are 2 mechanisms of interaction between attentional control and reading; that these mechanisms are dissociable both behaviorally and neuro-anatomically; and that the 2 mechanisms involve functionally separable control systems. First, RT evidence was found in support of the 2-mechanism model, corroborating the previous study. Next, 2 sets of brain regions were identified as showing functional magnetic resonance imaging blood oxygen level-dependent activity that maps onto the 2-mechanism distinction. One set included bilateral Cingulo-opercular regions and mostly right-lateralized Dorsal Attention regions (CO/DA+). This CO/DA+ region set showed response properties consistent with a role in reporting which processing pathway (phonological or lexical) was biased for a particular trial. A second set was composed primarily of left-lateralized Frontal-parietal (FP) regions. Its signal properties were consistent with a role in response checking. These results demonstrate how the subcomponents of attentional control interact with subcomponents of reading processes in healthy young adults. PMID:24275830
Modeling ramp-hold indentation measurements based on Kelvin-Voigt fractional derivative model
NASA Astrophysics Data System (ADS)
Zhang, Hongmei; zhe Zhang, Qing; Ruan, Litao; Duan, Junbo; Wan, Mingxi; Insana, Michael F.
2018-03-01
Interpretation of experimental data from micro- and nano-scale indentation testing is highly dependent on the constitutive model selected to relate measurements to mechanical properties. The Kelvin-Voigt fractional derivative model (KVFD) offers a compact set of viscoelastic features appropriate for characterizing soft biological materials. This paper provides a set of KVFD solutions for converting indentation testing data acquired for different geometries and scales into viscoelastic properties of soft materials. These solutions, which are mostly in closed-form, apply to ramp-hold relaxation, load-unload and ramp-load creep-testing protocols. We report on applications of these model solutions to macro- and nano-indentation testing of hydrogels, gastric cancer cells and ex vivo breast tissue samples using an atomic force microscope (AFM). We also applied KVFD models to clinical ultrasonic breast data using a compression plate as required for elasticity imaging. Together the results show that KVFD models fit a broad range of experimental data with a correlation coefficient typically R² > 0.99. For hydrogel samples, estimation of KVFD model parameters from test data using spherical indentation versus plate compression as well as ramp relaxation versus load-unload compression all agree within one standard deviation. Results from measurements made using macro- and nano-scale indentation agree in trend. For gastric cell and ex vivo breast tissue measurements, KVFD moduli are, respectively, 1/3-1/2 and 1/6 of the elasticity modulus found from the Sneddon model. In vivo breast tissue measurements yield model parameters consistent with literature results. The consistency of results found for a broad range of experimental parameters suggests the KVFD model is a reliable tool for exploring intrinsic features of the cell/tissue microenvironments.
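For reference, a common three-parameter form of the KVFD constitutive law (the notation E0, τ, α is assumed here; the fractional derivative is usually taken in the Caputo sense):

```latex
% Kelvin-Voigt fractional derivative (KVFD) model: elastic term plus a
% fractional-order viscous term with relaxation-time parameter \tau.
\sigma(t) = E_0\left[\,\varepsilon(t)
          + \tau^{\alpha}\,\frac{\mathrm{d}^{\alpha}\varepsilon(t)}{\mathrm{d}t^{\alpha}}\right],
\qquad 0 < \alpha \le 1
```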
Simulated Space Environmental Effects on Thin Film Solar Array Components
NASA Technical Reports Server (NTRS)
Finckenor, Miria; Carr, John; SanSoucie, Michael; Boyd, Darren; Phillips, Brandon
2017-01-01
The Lightweight Integrated Solar Array and Transceiver (LISA-T) experiment consists of thin-film, low mass, low volume solar panels. Given the variety of thin solar cells and cover materials and the lack of environmental protection typically afforded by thick coverglasses, a series of tests were conducted in Marshall Space Flight Center's Space Environmental Effects Facility to evaluate the performance of these materials. Candidate thin polymeric films and nitinol wires used for deployment were also exposed. Simulated space environment exposures were selected based on SSP 30425 rev. B, "Space Station Program Natural Environment Definition for Design" or AIAA Standard S-111A-2014, "Qualification and Quality Requirements for Space Solar Cells." One set of candidate materials were exposed to 5 eV atomic oxygen and concurrent vacuum ultraviolet (VUV) radiation for low Earth orbit simulation. A second set of materials were exposed to 1 MeV electrons. A third set of samples were exposed to 50, 100, 500, and 700 keV energy protons, and a fourth set were exposed to >2,000 hours of near ultraviolet (NUV) radiation. A final set was rapidly thermal cycled between -55 and +125 C. This test series provides data on enhanced power generation, particularly for small satellites with reduced mass and volume resources. Performance versus mass and cost per Watt is discussed.
IPO: a tool for automated optimization of XCMS parameters.
Libiseller, Gunnar; Dvorzak, Michaela; Kleb, Ulrike; Gander, Edgar; Eisenberg, Tobias; Madeo, Frank; Neumann, Steffen; Trausinger, Gert; Sinner, Frank; Pieber, Thomas; Magnes, Christoph
2015-04-16
Untargeted metabolomics generates a huge amount of data. Software packages for automated data processing are crucial to successfully process these data. A variety of such software packages exist, but the outcome of data processing strongly depends on algorithm parameter settings. If they are not carefully chosen, suboptimal parameter settings can easily lead to biased results. Therefore, parameter settings also require optimization. Several parameter optimization approaches have already been proposed, but a software package for parameter optimization which is free of intricate experimental labeling steps, fast and widely applicable is still missing. We implemented the software package IPO ('Isotopologue Parameter Optimization'), which is fast, free of labeling steps, and applicable to data from different kinds of samples, from different methods of liquid chromatography coupled to high-resolution mass spectrometry, and from different instruments. IPO optimizes XCMS peak picking parameters by using natural, stable ¹³C isotopic peaks to calculate a peak picking score. Retention time correction is optimized by minimizing relative retention time differences within peak groups. Grouping parameters are optimized by maximizing the number of peak groups that show one peak from each injection of a pooled sample. The different parameter settings are achieved by design of experiments, and the resulting scores are evaluated using response surface models. IPO was tested on three different data sets, each consisting of a training set and a test set. IPO resulted in an increase of reliable groups (146-361%), a decrease of non-reliable groups (3-8%) and a decrease of the retention time deviation to one third. IPO was successfully applied to data derived from liquid chromatography coupled to high-resolution mass spectrometry from three studies with different sample types and different chromatographic methods and devices. We were also able to show the potential of IPO to increase the reliability of metabolomics data. The source code is implemented in R, tested on Linux and Windows, and is freely available for download at https://github.com/glibiseller/IPO. The training sets and test sets can be downloaded from https://health.joanneum.at/IPO.
Fundamentals of endoscopic surgery: creation and validation of the hands-on test.
Vassiliou, Melina C; Dunkin, Brian J; Fried, Gerald M; Mellinger, John D; Trus, Thadeus; Kaneva, Pepa; Lyons, Calvin; Korndorffer, James R; Ujiki, Michael; Velanovich, Vic; Kochman, Michael L; Tsuda, Shawn; Martinez, Jose; Scott, Daniel J; Korus, Gary; Park, Adrian; Marks, Jeffrey M
2014-03-01
The Fundamentals of Endoscopic Surgery™ (FES) program consists of online materials and didactic and skills-based tests. All components were designed to measure the skills and knowledge required to perform safe flexible endoscopy. The purpose of this multicenter study was to evaluate the reliability and validity of the hands-on component of the FES examination, and to establish the pass score. Expert endoscopists identified the critical skill set required for flexible endoscopy. These skills were then modeled in a virtual reality simulator (GI Mentor™ II, Simbionix™ Ltd., Airport City, Israel) to create five tasks and metrics. Scores were designed to measure both speed and precision. Validity evidence was assessed by correlating performance with self-reported endoscopic experience (surgeons and gastroenterologists [GIs]). Internal consistency of each test task was assessed using Cronbach's alpha. Test-retest reliability was determined by having the same participant perform the test a second time and comparing their scores. Passing scores were determined by a contrasting-groups methodology and use of receiver operating characteristic curves. A total of 160 participants (17% GIs) performed the simulator test. Scores on the five tasks showed good internal consistency reliability, and all had significant correlations with endoscopic experience. Total FES scores correlated 0.73 with participants' level of endoscopic experience, providing evidence of their validity, and their internal consistency reliability (Cronbach's alpha) was 0.82. Test-retest reliability was assessed in 11 participants, and the intraclass correlation was 0.85. The passing score was determined and is estimated to have a sensitivity (true positive rate) of 0.81 and a 1-specificity (false positive rate) of 0.21. The FES hands-on skills test examines the basic procedural components required to perform safe flexible endoscopy. It meets rigorous standards of reliability and validity required for high-stakes examinations and, together with the knowledge component, may help contribute to the definition and determination of competence in endoscopy.
Xie, Dan; Li, Ao; Wang, Minghui; Fan, Zhewen; Feng, Huanqing
2005-01-01
Subcellular location of a protein is one of its key functional characteristics, as proteins must be localized correctly at the subcellular level to have normal biological function. In this paper, a novel method named LOCSVMPSI has been introduced, which is based on the support vector machine (SVM) and the position-specific scoring matrix generated from profiles of PSI-BLAST. With a jackknife test on the RH2427 data set, LOCSVMPSI achieved a high overall prediction accuracy of 90.2%, which is higher than the prediction results of SubLoc and ESLpred on this data set. In addition, the prediction performance of LOCSVMPSI was evaluated with a 5-fold cross-validation test on the PK7579 data set, and the prediction results were consistently better than those of the previous method based on several SVMs using the composition of both amino acids and amino acid pairs. A further test on the SWISSPROT new-unique data set showed that LOCSVMPSI also performed better than some widely used prediction methods, such as PSORTII, TargetP and LOCnet. All these results indicate that LOCSVMPSI is a powerful tool for the prediction of eukaryotic protein subcellular localization. An online web server (current version 1.3) based on this method has been developed and is freely available to both academic and commercial users. PMID:15980436
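A minimal sketch of the general approach (an SVM over fixed-length, PSSM-derived feature vectors, scored by cross-validation) using scikit-learn; the random feature matrix and four-way labels below are stand-ins, not the LOCSVMPSI pipeline or its data sets:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Stand-in data: one 20-dimensional vector per protein, e.g. PSSM column
# averages; labels encode four subcellular locations.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 20))
y = rng.integers(0, 4, size=400)   # 0=cytoplasm, 1=nucleus, 2=mito, 3=extracell

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```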
Development and validation of an all-cause mortality risk score in type 2 diabetes.
Yang, Xilin; So, Wing Yee; Tong, Peter C Y; Ma, Ronald C W; Kong, Alice P S; Lam, Christopher W K; Ho, Chung Shun; Cockram, Clive S; Ko, Gary T C; Chow, Chun-Chung; Wong, Vivian C W; Chan, Juliana C N
2008-03-10
Diabetes reduces life expectancy by 10 to 12 years, but whether death can be predicted in type 2 diabetes mellitus remains uncertain. A prospective cohort of 7583 type 2 diabetic patients enrolled since 1995 were censored on July 30, 2005, or after 6 years of follow-up, whichever came first. A restricted cubic spline model was used to check data linearity and to develop linear-transforming formulas. Data were randomly assigned to a training data set and to a test data set. A Cox model was used to develop risk scores in the training data set. Calibration and discrimination were assessed in the test data set. A total of 619 patients died during a median follow-up period of 5.51 years, resulting in a mortality rate of 18.69 per 1000 person-years. Age, sex, peripheral arterial disease, cancer history, insulin use, blood hemoglobin levels, linear-transformed body mass index, random spot urinary albumin-creatinine ratio, and estimated glomerular filtration rate at enrollment were predictors of all-cause death. A risk score for all-cause mortality was developed using these predictors. The predicted and observed death rates in the test data set were similar (P > .70). The area under the receiver operating characteristic curve was 0.85 for 5 years of follow-up. Using the risk score in ranking cause-specific deaths, the area under the receiver operating characteristic curve was 0.95 for genitourinary death, 0.85 for circulatory death, 0.85 for respiratory death, and 0.71 for neoplasm death. Death in type 2 diabetes mellitus can be predicted using a risk score consisting of commonly measured clinical and biochemical variables. Further validation is needed before clinical use.
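A minimal sketch of the modeling step (a Cox proportional hazards fit, with the linear predictor then used as a risk score), here with the lifelines package and simulated stand-ins for a few of the predictors named above; column names and data are hypothetical:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "hemoglobin": rng.normal(13.0, 1.5, n),
    "insulin_use": rng.integers(0, 2, n),
    "followup_years": rng.exponential(5.0, n).clip(0.1, 6.0),
    "died": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="died")
# The log partial hazard is a linear combination of the covariates and can
# serve as an all-cause mortality risk score for ranking patients.
df["risk_score"] = cph.predict_log_partial_hazard(df)
```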
Luechtefeld, Thomas; Maertens, Alexandra; McKim, James M; Hartung, Thomas; Kleensang, Andre; Sá-Rocha, Vanessa
2015-11-01
Supervised learning methods promise to improve integrated testing strategies (ITS), but must be adjusted to handle high dimensionality and dose-response data. ITS approaches are currently fueled by the increasing mechanistic understanding of adverse outcome pathways (AOP) and the development of tests reflecting these mechanisms. Simple approaches to combine skin sensitization data sets, such as weight of evidence, fail due to problems in information redundancy and high dimensionality. The problem is further amplified when potency information (dose/response) of hazards would be estimated. Skin sensitization currently serves as the foster child for AOP and ITS development, as legislative pressures combined with a very good mechanistic understanding of contact dermatitis have led to test development and relatively large high-quality data sets. We curated such a data set and combined a recursive variable selection algorithm to evaluate the information available through in silico, in chemico and in vitro assays. Chemical similarity alone could not cluster chemicals' potency, and in vitro models consistently ranked high in recursive feature elimination. This allows reducing the number of tests included in an ITS. Next, we analyzed with a hidden Markov model that takes advantage of an intrinsic inter-relationship among the local lymph node assay classes, i.e. the monotonous connection between local lymph node assay and dose. The dose-informed random forest/hidden Markov model was superior to the dose-naive random forest model on all data sets. Although balanced accuracy improvement may seem small, this obscures the actual improvement in misclassifications as the dose-informed hidden Markov model strongly reduced "false-negatives" (i.e. extreme sensitizers as non-sensitizer) on all data sets. Copyright © 2015 John Wiley & Sons, Ltd.
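A minimal sketch of recursive feature elimination over an assay battery with a random forest, the kind of variable selection the abstract describes; scikit-learn's RFECV is one possible implementation, and the assay matrix and potency labels below are simulated placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

# Stand-in data: 15 columns mixing in silico descriptors with in chemico
# and in vitro readouts; labels are coarse LLNA potency classes.
rng = np.random.default_rng(3)
X = rng.normal(size=(120, 15))
y = rng.integers(0, 3, size=120)   # 0=non, 1=weak, 2=strong sensitizer

selector = RFECV(RandomForestClassifier(n_estimators=300, random_state=0),
                 step=1, cv=5)
selector.fit(X, y)
print("assays retained:", int(selector.support_.sum()))
print("feature ranking:", selector.ranking_)
```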
NASA Astrophysics Data System (ADS)
Grudinin, Sergei; Kadukova, Maria; Eisenbarth, Andreas; Marillet, Simon; Cazals, Frédéric
2016-09-01
The 2015 D3R Grand Challenge provided an opportunity to test our new model for the binding free energy of small molecules, as well as to assess our protocol to predict binding poses for protein-ligand complexes. Our pose predictions were ranked 3-9 for the HSP90 dataset, depending on the assessment metric. For the MAP4K dataset the ranks are very dispersed and equal to 2-35, depending on the assessment metric, which does not provide any insight into the accuracy of the method. The main success of our pose prediction protocol was the re-scoring stage using the recently developed Convex-PL potential. We make a thorough analysis of our docking predictions made with AutoDock Vina and discuss the effect of the choice of rigid receptor templates, the number of flexible residues in the binding pocket, the binding pocket size, and the benefits of re-scoring. However, the main challenge was to predict experimentally determined binding affinities for two blind test sets. Our affinity prediction model consisted of two terms: a pairwise-additive enthalpy and a non-pairwise-additive entropy. We trained the free parameters of the model with a regularized regression using affinity and structural data from the PDBBind database. Our model performed very well on the training set, however, failed on the two test sets. We explain the drawbacks and pitfalls of our model, in particular in terms of relative coverage of the test set by the training set and missed dynamical properties from crystal structures, and discuss different routes to improve it.
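A minimal sketch of the training step described above (regularized regression of binding free energies on structural descriptors); ridge regression stands in for whichever regularizer the authors used, and all data below are synthetic:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: rows are protein-ligand complexes; columns mix
# pairwise enthalpy-like descriptors with entropy-like terms.
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 40))
dG = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=300)  # toy target

model = Ridge(alpha=1.0)   # the L2 penalty regularizes the fitted parameters
print("CV R^2:", cross_val_score(model, X, dG, cv=5, scoring="r2").mean())
```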
Characteristics of respiratory outbreaks in care homes during four influenza seasons, 2011-2015.
Gallagher, N; Johnston, J; Crookshanks, H; Nugent, C; Irvine, N
2018-06-01
Influenza and other respiratory infections can spread rapidly and cause severe morbidity and mortality in care home settings. This study describes the characteristics of respiratory outbreaks in care homes in Northern Ireland during a four-year period, and aims to identify factors that predict which respiratory outbreaks are more likely to be positively identified as influenza. Epidemiological, virological, and clinical characteristics of outbreaks during the study period were described. Variables collected at notification were compared to identify predictors for an outbreak testing positive for influenza. t-tests and χ²-tests were used to compare means and proportions, respectively; significance was assessed at the 95% confidence level. During the four seasons, 95 respiratory outbreaks were reported in care homes, 70 of which were confirmed as influenza. More than 1000 cases were reported, with 135 associated hospitalizations and 22 deaths. Vaccination uptake in residents was consistently high (mean: 86%); in staff, however, it was poorly reported and, when reported, consistently low (mean: 14%). Time to notification and number of cases at notification were both higher than expected according to national recommendations for reporting outbreaks. No clinically significant predictors of a positive influenza outbreak were identified. Respiratory outbreaks in care homes were associated with significant morbidity and mortality, despite high vaccination uptake. The absence of indicators at notification of an outbreak to accurately predict influenza infection highlights the need for prompt reporting and laboratory testing. Raising staff awareness, training in the management of respiratory outbreaks in accordance with national guidance, and improvement of staff vaccination uptake are recommended. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
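The proportion comparisons above are χ²-tests on contingency tables; a minimal sketch with scipy (the counts are hypothetical, not the study's):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = outbreak influenza-positive/negative,
# columns = staff vaccination uptake reported / not reported.
table = [[40, 30],
         [10, 15]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```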
Statistical analysis of content of Cs-137 in soils in Bansko-Razlog region
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobilarov, R. G., E-mail: rkobi@tu-sofia.bg
Statistical analysis of the data set consisting of the activity concentrations of ¹³⁷Cs in soils in the Bansko–Razlog region is carried out in order to establish the dependence of the deposition and the migration of ¹³⁷Cs on the soil type. The descriptive statistics and the test of normality show that the data set does not have a normal distribution. A positively skewed distribution and possible outlying values of the activity of ¹³⁷Cs in soils were observed. After reduction of the effects of outliers, the data set is divided into two parts, depending on the soil type. A test of normality of the two new data sets shows that they have a normal distribution. The ordinary kriging technique is used to characterize the spatial distribution of the activity of ¹³⁷Cs over an area covering 40 km² (the whole Razlog valley). The result (a map of the spatial distribution of the activity concentration of ¹³⁷Cs) can be used as a reference point for future studies on the assessment of radiological risk to the population and the erosion of soils in the study area.
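A minimal sketch of ordinary kriging onto a grid, here using the pykrige package; the coordinates, values, and spherical variogram below are illustrative assumptions, not the study's settings:

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Stand-in sampling points: easting/northing in km and log-transformed
# Cs-137 activity concentrations (the log of lognormal values is normal).
rng = np.random.default_rng(5)
x, y = rng.uniform(0, 8, 60), rng.uniform(0, 5, 60)
z = np.log(rng.lognormal(mean=3.0, sigma=0.5, size=60))

ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
gridx, gridy = np.linspace(0, 8, 80), np.linspace(0, 5, 50)
z_map, kriging_variance = ok.execute("grid", gridx, gridy)
print(z_map.shape)   # interpolated activity map on the 50 x 80 grid
```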
Protein contact prediction using patterns of correlation.
Hamilton, Nicholas; Burrage, Kevin; Ragan, Mark A; Huber, Thomas
2004-09-01
We describe a new method for using neural networks to predict residue contact pairs in a protein. The main inputs to the neural network are a set of 25 measures of correlated mutation between all pairs of residues in two "windows" of size 5 centered on the residues of interest. While the individual pair-wise correlations are a relatively weak predictor of contact, by training the network on windows of correlation the accuracy of prediction is significantly improved. The neural network is trained on a set of 100 proteins and then tested on a disjoint set of 1033 proteins of known structure. An average predictive accuracy of 21.7% is obtained taking the best L/2 predictions for each protein, where L is the sequence length. Taking the best L/10 predictions gives an average accuracy of 30.7%. The predictor is also tested on a set of 59 proteins from the CASP5 experiment. The accuracy is found to be relatively consistent across different sequence lengths, but to vary widely according to the secondary structure. Predictive accuracy is also found to improve by using multiple sequence alignments containing many sequences to calculate the correlations. Copyright 2004 Wiley-Liss, Inc.
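A minimal sketch of assembling the correlation inputs for a residue pair from two windows of size 5; one plausible reading of the 25 measures is the 5 x 5 window-pair correlations sketched here (boundary handling omitted, and the correlation matrix is random stand-in data):

```python
import numpy as np

def window_features(corr: np.ndarray, i: int, j: int, w: int = 5) -> np.ndarray:
    """Correlation inputs for residue pair (i, j): all 5 x 5 = 25 pairings
    between a window centered on i and a window centered on j."""
    half = w // 2
    return np.array([corr[r, c]
                     for r in range(i - half, i + half + 1)
                     for c in range(j - half, j + half + 1)])

# corr would be an L x L correlated-mutation matrix computed from a
# multiple sequence alignment; random values stand in here.
L = 60
corr = np.random.default_rng(6).uniform(-1.0, 1.0, size=(L, L))
print(window_features(corr, 20, 45).shape)   # (25,) -> neural network input
```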
Evaluating the uniformity of color spaces and performance of color difference formulae
NASA Astrophysics Data System (ADS)
Lian, Yusheng; Liao, Ningfang; Wang, Jiajia; Tan, Boneng; Liu, Zilong
2010-11-01
Using small color difference data sets (the MacAdam ellipses data set and the RIT-DuPont suprathreshold color difference ellipses data set) and large color difference data sets (the Munsell Renovation Data and the OSA Uniform Color Scales data set), the uniformity of several color spaces and the performance of color difference formulae based on these color spaces are evaluated. The color spaces used are CIELAB, DIN99d, IPT, and CIECAM02-UCS. It is found that the uniformity of lightness is better than that of saturation and hue. Overall, for all these color spaces, the uniformity in the blue region is inferior to that in other regions. The uniformity of CIECAM02-UCS is superior to the other color spaces over the whole color-difference range from small to large. The uniformity of CIELAB and IPT is better for the large color difference data sets than for the small ones, whereas DIN99d shows the opposite trend. Two common performance factors (PF/3 and STRESS) and the statistical F-test are calculated to test the performance of the color difference formulae. The results show that the performance of the color difference formulae based on these four color spaces is consistent with the uniformity of the corresponding color spaces.
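The STRESS measure mentioned above has a standard closed form; here is a small Python sketch, with placeholder arrays for computed (dE) and visual (dV) color differences, assuming the conventional definition:

    import numpy as np

    def stress(dE, dV):
        dE, dV = np.asarray(dE, float), np.asarray(dV, float)
        F = (dE * dV).sum() / (dV ** 2).sum()   # optimal scaling factor
        return 100.0 * np.sqrt(((dE - F * dV) ** 2).sum()
                               / ((F * dV) ** 2).sum())

A STRESS value near zero indicates good agreement between a formula and the visual data; comparing two formulae statistically is done with an F-test on the ratio of their squared STRESS values.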
Huang, Heng-Tsung Danny
2016-08-01
This research explored the test-taking strategies associated with the Test of English for International Communication Speaking Test (TOEIC-S) and their relationship with test performance. Capitalizing on two sets of TOEIC-S and a custom-made strategy inventory, the researcher collected data from a total of 215 Taiwanese English learners consisting of 84 males and 131 females with an average age of 20.1 years (SD = 2.6). Quantitative data analysis gave rise to three major findings. First, TOEIC-S test-taking strategy use constituted a multi-faceted construct that involved multiple types of strategic behaviors. Second, these strategic behaviors matched those allowing test-takers to communicate both in real life and in the workplace. Third, communication strategy use and cognitive strategy use both contributed significantly to TOEIC-S performance. © The Author(s) 2016.
Thermal-Structural Analysis of PICA Tiles for Solar Tower Test
NASA Technical Reports Server (NTRS)
Agrawal, Parul; Empey, Daniel M.; Squire, Thomas H.
2009-01-01
Thermal protection materials used in spacecraft heatshields are subjected to severe thermal and mechanical loading environments during re-entry into the earth atmosphere. In order to investigate the reliability of PICA tiles in the presence of high thermal gradients as well as mechanical loads, the authors designed and conducted solar-tower tests. This paper presents the design and analysis work for this test series. Coupled non-linear thermal-mechanical finite element analyses were conducted to estimate in-depth temperature distributions and stress contours for various cases. The first set of analyses, performed on an isolated PICA tile, showed that stresses generated during the tests were below the PICA allowable limit and should not lead to any catastrophic failure during the test. The test results were consistent with analytical predictions. The temperature distribution and magnitude of the measured strains were also consistent with predicted values. The second test series is designed to test arrayed PICA tiles with various gap-filler materials. A nonlinear contact method is used to model the complex geometry with various tiles. The analyses for these coupons predict the stress contours in PICA and inside the gap fillers. Suitable mechanical loads for this architecture will be predicted, which can be applied during the test to exceed the allowable limits and demonstrate failure modes. Thermocouple and strain-gauge data obtained from the solar tower tests will be used for subsequent analyses and validation of the FEM models.
Ingvertsen, Simon Toft; Jensen, Marina Bergen; Magid, Jakob
2011-01-01
Urban stormwater runoff is often of poor quality, impacting aquatic ecosystems and limiting the use of stormwater runoff for recreational purposes. Several stormwater treatment facilities (STFs) are in operation or at the pilot testing stage, but their efficiencies are neither well documented nor easily compared due to the complex contaminant profile of stormwater and the highly variable runoff hydrograph. On the basis of a review of available data sets on urban stormwater quality and environmental contaminant behavior, we suggest a few carefully selected contaminant parameters (the minimum data set) to be obligatory when assessing and comparing the efficiency of STFs. Consistent use of the minimum data set in all future monitoring schemes for STFs will ensure broad-spectrum testing at low costs and strengthen comparability among facilities. The proposed minimum data set includes: (i) fine fraction of suspended solids (<63 μm), (ii) total concentrations of zinc and copper, (iii) total concentrations of phenanthrene, fluoranthene, and benzo(b,k)fluoranthene, and (iv) total concentrations of phosphorus and nitrogen. Indicator pathogens and other specific contaminants (i.e., chromium, pesticides, phenols) may be added if recreational or certain catchment-scale objectives are to be met. Issues that need further investigation have been identified during the iterative process of developing the minimum data set. Copyright © the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
NASA Technical Reports Server (NTRS)
Mantus, M.; Pardo, H.
1973-01-01
Computer programming, data processing, and a correlation study that employed data collected in the first phase test were used to demonstrate that standard test procedures and equipment could be used to collect a significant number of transfer functions from tests of the Lunar Module test article LTA-11. The testing consisted of suspending the vehicle from the apex fittings of the outrigger trusses through a set of air springs to simulate the free-free state. Impulsive loadings were delivered, one at a time, at each of the landing gear's attachment points, in three mutually perpendicular directions; thus a total of 36 impulses were applied to the vehicle. Time histories of each pulse were recorded on magnetic tape along with 40 channels of strain gage response and 28 channels of accelerometer response. Since an automated data processing system was not available, oscillograph playbacks were made of all 2400 time histories as a check on the validity of the data taken. In addition, one channel of instrumentation was processed to determine its response to a set of forcing functions from a prior LTA-11 drop test. This prediction was compared with drop test results as a first measure of accuracy.
Activate/Inhibit KGCS Gateway via Master Console EIC Pad-B Display
NASA Technical Reports Server (NTRS)
Ferreira, Pedro Henrique
2014-01-01
My internship consisted of two major projects for the Launch Control System. The purpose of the first project was to implement the Application Control Language (ACL) to Activate Data Acquisition (ADA) and Inhibit Data Acquisition (IDA) for the Kennedy Ground Control Sub-Systems (KGCS) Gateway, to update the existing Pad-B End Item Control (EIC) Display to program the ADA and IDA buttons with the new ACL, and to test and release the ACL Display. The second project consisted of unit testing all of the Application Services Framework (ASF) by March 21st. The XmlFileReader was unit tested and reached 100% coverage. The XmlFileReader class is used to read information from XML files and use it to initialize elements in the other framework components by means of the Xerces C++ XML Parser, which is open-source, commercial off-the-shelf software. The ScriptThread was also tested. ScriptThread manages the creation and activation of script threads. A large amount of the time was spent initializing the environment, learning how to set up unit tests, and getting familiar with the specific segments of the project that were assigned to us.
Questionnaire-based assessment of executive functioning: Psychometrics.
Castellanos, Irina; Kronenberger, William G; Pisoni, David B
2018-01-01
The psychometric properties of the Learning, Executive, and Attention Functioning (LEAF) scale were investigated in an outpatient clinical pediatric sample. As a part of clinical testing, the LEAF scale, which broadly measures neuropsychological abilities related to executive functioning and learning, was administered to parents of 118 children and adolescents referred for psychological testing at a pediatric psychology clinic; 85 teachers also completed LEAF scales to assess reliability across different raters and settings. Scores on neuropsychological tests of executive functioning and academic achievement were abstracted from charts. Psychometric analyses of the LEAF scale demonstrated satisfactory internal consistency, parent-teacher inter-rater reliability in the small to large effect size range, and test-retest reliability in the large effect size range, similar to values for other executive functioning checklists. Correlations between corresponding subscales on the LEAF and other behavior checklists were large, while most correlations with neuropsychological tests of executive functioning and achievement were significant but in the small to medium range. Results support the utility of the LEAF as a reliable and valid questionnaire-based assessment of delays and disturbances in executive functioning and learning. Applications and advantages of the LEAF and other questionnaire measures of executive functioning in clinical neuropsychology settings are discussed.
Attributional style and the generality of learned helplessness.
Alloy, L B; Peterson, C; Abramson, L Y; Seligman, M E
1984-03-01
According to the logic of the attribution reformulation of learned helplessness, the interaction of two factors influences whether helplessness experienced in one situation will transfer to a new situation. The model predicts that people who exhibit a style of attributing negative outcomes to global factors will show helplessness deficits in new situations that are either similar or dissimilar to the original situation in which they were helpless. In contrast, people who exhibit a style of attributing negative outcomes to only specific factors will show helplessness deficits in situations that are similar, but not dissimilar, to the original situation in which they were helpless. To test these predictions, we conducted two studies in which undergraduates with either a global or specific attributional style for negative outcomes were given one of three pretreatments in the typical helplessness triadic design: controllable bursts of noise, uncontrollable bursts of noise, or no noise. In Experiment 1, students were tested for helplessness deficits in a test situation similar to the pretreatment setting, whereas in Experiment 2, they were tested in a test situation dissimilar to the pretreatment setting. The findings were consistent with predictions of the reformulated helplessness theory.
Dynamic MTF, an innovative test bench for detector characterization
NASA Astrophysics Data System (ADS)
Emmanuel, Rossi; Raphaël, Lardière; Delmonte, Stephane
2017-11-01
PLEIADES HR are High Resolution satellites for Earth observation. Placed at 695 km, they reach a 0.7 m spatial resolution. To allow such performance, the detectors work in a TDI mode (Time and Delay Integration), which consists of a continuous charge transfer from one line to the consecutive one while the image passes over the detector. The spatial resolution, one of the most important parameters to test, is characterized by the MTF (Modulation Transfer Function). Usually, detectors are tested in a staring mode. For a higher level of performance assessment, a dedicated bench has been set up, allowing characterization of the detectors' MTF in the TDI mode. Accuracy and reproducibility are impressive, opening the door to new perspectives in terms of HR imaging system testing.
NASA Astrophysics Data System (ADS)
Del Seppia, C.; Mezzasalma, L.; Messerotti, M.; Cordelli, A.; Ghione, S.
2009-01-01
We have previously reported that the exposure to an abnormal magnetic field simulating the one encountered by the International Space Station (ISS) orbiting around the Earth may enhance autonomic response to emotional stimuli. Here we report the results of the second part of that study which tested whether this field also affects cognitive functions. Twenty-four volunteers participated in the study, 12 exposed to the natural geomagnetic field and 12 to the magnetic field encountered by ISS. The test protocol consisted of a set of eight tests chosen from a computerized test battery for the assessment of attentional performance. The duration of exposure was 90 min. No effect of exposure to ISS magnetic field was observed on attentional performance.
NASA Astrophysics Data System (ADS)
Vergino, Eileen S.
Soviet seismologists have published descriptions of 96 nuclear explosions conducted from 1961 through 1972 at the Semipalatinsk test site, in Kazakhstan, central Asia [Bocharov et al., 1989]. With the exception of releasing news about some of their peaceful nuclear explosions (PNEs), the Soviets have never before published such a body of information. To estimate the seismic yield of a nuclear explosion it is necessary to obtain a calibrated magnitude-yield relationship based on events with known yields and with a consistent set of seismic magnitudes. U.S. estimation of Soviet test yields has been done through application of relationships to the Soviet sites based on the U.S. experience at the Nevada Test Site (NTS), making some correction for differences due to attenuation and near-source coupling of seismic waves.
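The calibration referred to above is conventionally expressed as a linear relation between body-wave magnitude and the logarithm of yield; a generic hedged form is shown below, where the constants a and b are site-dependent, fitted to events of known yield, and are not taken from the article.

    \[
      m_b = a + b\,\log_{10} Y,
      \qquad
      \log_{10}\hat{Y} = \frac{m_b - a}{b}
    \]

Here Y is the yield in kilotons, and the fitted constants absorb site effects such as attenuation and near-source coupling of seismic waves.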
The Parachute System Recovery of the Orion Pad Abort Test 1
NASA Technical Reports Server (NTRS)
Machin, Ricardo; Evans, Carol; Madsen, Chris; Morris, Aaron
2011-01-01
The Orion Pad Abort Test 1 was conducted at the US Army White Sands Missile Range in May 2010. The capsule was successfully recovered using the original design for the parachute recovery system, referred to as the CEV Parachute Assembly System (CPAS). The CPAS was designed to a set of requirements identified prior to the development of the PA-1 test; these requirements were not entirely consistent with the design of the PA-1 test. This presentation will describe the original CPAS design, how the system was modified to accommodate the PA-1 requirements, and what special analysis had to be performed to demonstrate positive margins for the CPAS. The presentation will also discuss the post-test analysis and how it compares to the models that were used to design the system.
NASA Astrophysics Data System (ADS)
Simonis, I.; Alameh, N.; Percivall, G.
2012-04-01
The GEOSS Architecture Implementation Pilots (AIP) develop and pilot new process and infrastructure components for the GEOSS Common Infrastructure (GCI) and the broader GEOSS architecture through an evolutionary development process consisting of a set of phases. Each phase addresses a set of Societal Benefit Areas (SBA) and geoinformatic topics. The first three phases consisted of architecture refinements based on interactions with users; component interoperability testing; and SBA-driven demonstrations. The fourth phase (AIP-4) documented here focused on fostering interoperability arrangements and common practices for GEOSS by facilitating access to priority earth observation data sources and by developing and testing specific clients and mediation components to enable such access. Additionally, AIP-4 supported the development of a thesaurus for earth observation parameters and tutorials to guide data providers in making their data available through GEOSS. The results of AIP-4 are documented in two engineering reports and captured in a series of videos posted online. Led by the Open Geospatial Consortium (OGC), AIP-4 built on contributions from over 60 organizations. This wide portfolio helped test interoperability arrangements in a highly heterogeneous environment. AIP-4 participants cooperated closely to test available data sets, access services, and client applications in multiple workflows and set-ups. Eventually, AIP-4 improved the accessibility of GEOSS datasets identified as supporting Critical Earth Observation Priorities by the GEO User Interface Committee (UIC), and increased the use of the data by promoting the availability of new data services, clients, and applications. During AIP-4, a number of key earth observation data sources were made available online at standard service interfaces, discovered using brokered search approaches, and processed and visualized in generalized client applications. AIP-4 demonstrated the level of interoperability that can be achieved using currently available standards and corresponding products and implementations. The AIP-4 integration testing process proved that the integration of heterogeneous data resources available via interoperability arrangements such as WMS, WFS, WCS, and WPS indeed works. However, the integration often required various levels of customization on the client side to accommodate variations in the service implementations. Those variations seem to stem from both malfunctioning service implementations and varying interpretations of, or inconsistencies in, existing standards. Other interoperability issues identified revolve around missing metadata or the use of unrecognized identifiers in the descriptions of GEOSS resources. Once such issues are resolved, continuous compliance testing is necessary to minimize variability among implementations. Once data providers can choose from a set of enhanced implementations for offering their data using consistent interoperability arrangements, the barrier to client and decision-support developers will be lowered, leading to true leveraging of earth observation data through GEOSS. AIP-4 results, lessons learnt from the previous AIPs 1-3, and close coordination with the Infrastructure Implementation Board (IIB), the successor of the Architecture and Data Committee (ADC), form the basis of the current preparation phase for the next Architecture Implementation Pilot, AIP-5.
The Call For Participation will be launched in February and the pilot will be conducted from May to November 2012. The current planning foresees a scenario-oriented approach, with possible scenarios coming from the domains of disaster management, health (including air quality and waterborne diseases), water resource observations, energy, biodiversity and climate change, and agriculture.
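As an illustration of the kind of interoperability arrangement exercised in AIP-4, the following Python sketch retrieves a map from a WMS endpoint with OWSLib; the service URL and layer name are hypothetical, not actual GEOSS resources.

    from owslib.wms import WebMapService

    wms = WebMapService("https://example.org/geoss/wms", version="1.1.1")
    img = wms.getmap(layers=["land_surface_temperature"],  # hypothetical layer
                     styles=[""],                          # default style
                     srs="EPSG:4326",
                     bbox=(-180, -90, 180, 90),
                     size=(1024, 512),
                     format="image/png")
    with open("lst.png", "wb") as f:
        f.write(img.read())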
Performances of a HGCDTE APD Based Detector with Electric Cooling for 2-μm DIAL/IPDA Applications
NASA Astrophysics Data System (ADS)
Dumas, A.; Rothman, J.; Gibert, F.; Lasfargues, G.; Zanatta, J.-P.; Edouart, D.
2016-06-01
In this work we report on the design and testing of an HgCdTe Avalanche Photodiode (APD) detector assembly for lidar applications in the Short Wavelength Infrared Region (SWIR: 1.5-2 μm). This detector consists of a set of diodes connected in parallel, forming a 200 μm sensitive area, coupled to a custom high-gain TransImpedance Amplifier (TIA). A commercial four-stage Peltier cooler is used to reach an operating temperature of 185 K. Performance characteristics crucial for lidar use are investigated: linearity, dynamic range, spatial homogeneity, noise, and resistance to intense illumination.
Detecting Abrupt Changes in a Piecewise Locally Stationary Time Series
Last, Michael; Shumway, Robert
2007-01-01
Non-stationary time series arise in many settings, such as seismology, speech-processing, and finance. In many of these settings we are interested in points where a model of local stationarity is violated. We consider the problem of how to detect these change-points, which we identify by finding sharp changes in the time-varying power spectrum. Several different methods are considered, and we find that the symmetrized Kullback-Leibler information discrimination performs best in simulation studies. We derive asymptotic normality of our test statistic, and consistency of estimated change-point locations. We then demonstrate the technique on the problem of detecting arrival phases in earthquakes. PMID:19190715
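A sketch of the discrimination statistic described above: the symmetrized Kullback-Leibler divergence between power spectra estimated on two windows, which spikes when the windows straddle a change-point. The spectral estimator and window length are illustrative choices, not necessarily those of the paper.

    import numpy as np
    from scipy.signal import welch

    def sym_kl(x1, x2, fs=1.0, nperseg=256):
        _, s1 = welch(x1, fs=fs, nperseg=nperseg)   # spectrum of window 1
        _, s2 = welch(x2, fs=fs, nperseg=nperseg)   # spectrum of window 2
        r = s1 / s2
        # J(s1, s2) = I(s1; s2) + I(s2; s1), averaged over frequency
        return np.mean((r - np.log(r) - 1) + (1 / r + np.log(r) - 1))

Scanning this statistic over pairs of adjacent windows and flagging local maxima gives candidate change-point locations, such as seismic phase arrivals.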
Gapinski, Mary Ann; Sheetz, Anne H
2014-10-01
The National Association of School Nurses' research priorities include the recommendation that data reliability, quality, and availability be addressed to advance research in child and school health. However, identifying a national school nursing data set has remained a challenge for school nurses, school nursing leaders, school nurse professional organizations, and state school nurse consultants. While there is much agreement that school nursing data (with associated data integrity) is an incredibly powerful tool for multiple uses, the content of a national data set must be developed. In 1993, recognizing the unique power of data, Massachusetts began addressing the need for consistent school nurse data collection. With more than 20 years' experience--and much experimentation, pilot testing, and system modification--Massachusetts is now ready to share its data collection system and certain key indicators with other states, thus offering a beginning foundation for a national school nursing data set. © The Author(s) 2014.
Shah, S N R; Sulong, N H Ramli; Shariati, Mahdi; Jumaat, M Z
2015-01-01
Steel pallet rack (SPR) beam-to-column connections (BCCs) are largely responsible for preventing sway failure of frames in the down-aisle direction. The overall geometry of beam end connectors commercially used in SPR BCCs varies and does not allow a generalized analytical approach for all types of beam end connectors; however, identifying the effects of the configuration, profile, and sizes of the connection components could be a suitable approach for practising design engineers to predict the generalized behavior of any SPR BCC. This paper describes the experimental behavior of SPR BCCs tested using a double cantilever test set-up. Eight sets of specimens were identified based on variation in column thickness, beam depth, and number of tabs in the beam end connector in order to investigate the factors most influencing connection performance. Each set was tested four times to bring uniformity to the results, taking the total number of tests to thirty-two. The moment-rotation (M-θ) behavior, load-strain relationship, major failure modes, and the influence of the selected parameters on connection performance were investigated. A comparative study to calculate the connection stiffness was carried out using the initial stiffness method, the slope to half-ultimate moment method, and the equal area method. To identify the most appropriate method, the mean stiffness of all the tested connections and the variance in the mean stiffness values according to all three methods were calculated. The initial stiffness method is considered to overestimate the stiffness values when compared to the other two methods. The equal area method provided the most consistent values of stiffness and the lowest variance in the data set as compared to the other two methods.
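Two of the three stiffness measures compared above have simple definitions in terms of the measured M-θ curve; the sketch below implements them (the equal area method is omitted), with placeholder arrays standing in for test data.

    import numpy as np

    def initial_stiffness(theta, M, n=3):
        # slope of a line fitted through the first few points of the curve
        return np.polyfit(theta[:n], M[:n], 1)[0]

    def half_ultimate_secant_stiffness(theta, M):
        # slope of the secant from the origin to the point at half the
        # ultimate moment (assumes M increases monotonically to its peak)
        m_half = 0.5 * M.max()
        th_half = np.interp(m_half, M, theta)
        return m_half / th_half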
Ngo-Malabo, Elodie Teclaire; Ngoupo, Paul Alain; Sadeuh-Mba, Serge Alain; Akongnwi, Emmanuel; Banaï, Robert; Ngono, Laure; Bilong-Bilong, Charles Felix; Kfutwah, Anfumbom; Njouom, Richard
2017-01-01
First line antiretroviral therapy in a resource-limited setting consists of nucleoside/nucleotide and non-nucleoside reverse transcriptase inhibitors. Protease inhibitors are the hub of second line therapy. The decision to change antiretroviral therapy for a patient is frequently presumptive because of the lack of genotypic resistance tests in routine follow-up. We describe here the resistance profiles observed in patients with varying durations of antiretroviral therapy in Cameroon after implementation of HIV genotypic resistance testing in routine practice. HIV genotypic resistance testing was carried out on consecutive samples received between August 2013 and November 2015. The protease (Prot) and reverse transcriptase (Rt) genes of the HIV genome were amplified, sequenced, and analyzed for drug resistance mutations following the algorithm set up by the French National Agency for Research on HIV/AIDS and Viral Hepatitis. Specimens from a total of 167 patients infected with non-B HIV subtypes were received during the study period. Overall, 61.7% of patients had viral loads of more than 3 log copies/ml, suggesting treatment failure. Among the 72 patients on first line therapy, 56 (77.8%) were resistant to Lamivudine, 57 (79.1%) to Efavirenz, and 58 (80.6%) to Nevirapine. Overall, more patients on first line antiretroviral therapy (75.0%) harbored multi-drug resistance compared to their counterparts on second line (25.8%). This study revealed that a group of patients with antiretroviral therapy failure harbored multi-drug resistance mutations related to the majority of drugs in the first line regimen. Therefore, HIV resistance testing could be a useful tool to improve HIV care in resource-limited settings like Cameroon where treatment options are limited. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Lubell, Yoel; Althaus, Thomas; Blacksell, Stuart D.; Paris, Daniel H.; Mayxay, Mayfong; Pan-Ngum, Wirichada; White, Lisa J.; Day, Nicholas P. J.; Newton, Paul N.
2016-01-01
Background: Malaria accounts for a small fraction of febrile cases in increasingly large areas of the malaria endemic world. Point-of-care tests to improve the management of non-malarial fevers appropriate for primary care are few, consisting of either diagnostic tests for specific pathogens or testing for biomarkers of host response that indicate whether antibiotics might be required. The impact and cost-effectiveness of these approaches are relatively unexplored and methods to do so are not well-developed. Methods: We model the ability of dengue and scrub typhus rapid tests to inform antibiotic treatment, as compared with testing for elevated C-Reactive Protein (CRP), a biomarker of host inflammation. Using data on causes of fever in rural Laos, we estimate the proportion of outpatients that would be correctly classified as requiring an antibiotic and the likely cost-effectiveness of the approaches. Results: Use of either pathogen-specific test slightly increased the proportion of patients correctly classified as requiring antibiotics. CRP testing was consistently superior to the pathogen-specific tests, despite heterogeneity in causes of fever. All testing strategies are likely to result in higher average costs, but only the scrub typhus and CRP tests are likely to be cost-effective when considering direct health benefits, with median costs per disability-adjusted life year averted of approximately $48 USD and $94 USD, respectively. Conclusions: Testing for viral infections is unlikely to be cost-effective when considering only direct health benefits to patients. Testing for prevalent bacterial pathogens can be cost-effective, having the benefit of informing not only whether treatment is required, but also the most appropriate antibiotic; this advantage, however, varies widely in response to heterogeneity in causes of fever. Testing for biomarkers of host inflammation is likely to be consistently cost-effective despite high heterogeneity, and can also offer substantial reductions in over-use of antimicrobials in viral infections. PMID:27027303
NASA Astrophysics Data System (ADS)
Rawles, Christopher; Thurber, Clifford
2015-08-01
We present a simple, fast, and robust method for automatic detection of P- and S-wave arrivals using a nearest neighbours-based approach. The nearest neighbour algorithm is one of the most popular time-series classification methods in the data mining community and has been applied to time-series problems in many different domains. Specifically, our method is based on the non-parametric time-series classification method developed by Nikolov. Instead of building a model by estimating parameters from the data, the method uses the data itself to define the model. Potential phase arrivals are identified based on their similarity to a set of reference data consisting of positive and negative sets, where the positive set contains examples of analyst identified P- or S-wave onsets and the negative set contains examples that do not contain P waves or S waves. Similarity is defined as the square of the Euclidean distance between vectors representing the scaled absolute values of the amplitudes of the observed signal and a given reference example in time windows of the same length. For both P waves and S waves, a single pass is done through the bandpassed data, producing a score function defined as the ratio of the sum of similarity to positive examples over the sum of similarity to negative examples for each window. A phase arrival is chosen as the centre position of the window that maximizes the score function. The method is tested on two local earthquake data sets, consisting of 98 known events from the Parkfield region in central California and 32 known events from the Alpine Fault region on the South Island of New Zealand. For P-wave picks, using a reference set containing two picks from the Parkfield data set, 98 per cent of Parkfield and 94 per cent of Alpine Fault picks are determined within 0.1 s of the analyst pick. For S-wave picks, 94 per cent and 91 per cent of picks are determined within 0.2 s of the analyst picks for the Parkfield and Alpine Fault data set, respectively. For the Parkfield data set, our method picks 3520 P-wave picks and 3577 S-wave picks out of 4232 station-event pairs. For the Alpine Fault data set, the method picks 282 P-wave picks and 311 S-wave picks out of a total of 344 station-event pairs. For our testing, we note that the vast majority of station-event pairs have analyst picks, although some analyst picks are excluded based on an accuracy assessment. Finally, our tests suggest that the method is portable, allowing the use of a reference set from one region on data from a different region using relatively few reference picks.
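A compact sketch of the picker described above: windows of the scaled absolute trace are compared against positive and negative reference sets, and the pick is the centre of the window maximizing the score. Similarity is implemented here as an inverse squared Euclidean distance, one plausible reading of the text; the reference sets are placeholders.

    import numpy as np

    def pick(trace, positives, negatives, win):
        x = np.abs(trace)
        best_i, best_score = None, -np.inf
        for i in range(len(x) - win):
            w = x[i:i + win]
            w = w / (np.linalg.norm(w) + 1e-12)       # scale the window
            sim = lambda refs: sum(1.0 / (np.sum((w - r) ** 2) + 1e-12)
                                   for r in refs)
            score = sim(positives) / sim(negatives)   # the score function
            if score > best_score:
                best_i, best_score = i + win // 2, score
        return best_i                                 # sample index of the pick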
Zebrowitz, Leslie A.; White, Benjamin; Wieneke, Kristin
2009-01-01
White participants were exposed to other-race or own-race faces to test the generalized mere exposure hypothesis in the domain of face perception, namely that exposure to a set of faces yields increased liking for similar faces that have never been seen. In Experiment 1, rapid supraliminal exposures to Asian faces increased White participants' subsequent liking for a different set of Asian faces. In Experiment 2, subliminal exposures to Black faces increased White participants' subsequent liking for a different set of Black faces. The findings are consistent with prominent explanations for mere exposure effects as well as with the familiar face overgeneralization hypothesis that prejudice derives in part from negative reactions to faces that deviate from the familiar own-race prototype. PMID:19584948
NASA Technical Reports Server (NTRS)
Rakoczy, John; Heater, Daniel; Lee, Ashley
2013-01-01
Marshall Space Flight Center's (MSFC) Small Projects Rapid Integration and Test Environment (SPRITE) is a Hardware-In-The-Loop (HWIL) facility that provides rapid development, integration, and testing capabilities for small projects (CubeSats, payloads, spacecraft, and launch vehicles). This facility environment focuses on efficient processes and modular design to support rapid prototyping, integration, testing and verification of small projects at an affordable cost, especially compared to larger type HWIL facilities. SPRITE (Figure 1) consists of a "core" capability or "plant" simulation platform utilizing a graphical programming environment capable of being rapidly re-configured for any potential test article's space environments, as well as a standard set of interfaces (i.e. Mil-Std 1553, Serial, Analog, Digital, etc.). SPRITE also allows this level of interface testing of components and subsystems very early in a program, thereby reducing program risk.
NASA Technical Reports Server (NTRS)
Dankanich, John W.; Swiatek, Michael W.; Yim, John T.
2012-01-01
The electric propulsion community has been implored to establish and implement a set of universally applicable test standards during the research, development, and qualification of electric propulsion systems. Existing practices are fallible and result in testing variations, which lead to suspicious results, large margins in application, or aversion to mission infusion. Performance measurements and life testing under appropriate conditions can be costly and lengthy. Measurement practices must be consistent, accurate, and repeatable. Additionally, the measurements must be universally transportable across facilities throughout development, qualification, spacecraft integration, and on-orbit performance. A preliminary step towards universally applicable testing standards is outlined for facility pressure measurements and effective pumping speed calculations. The standard has been applied to multiple facilities at the NASA Glenn Research Center. Test results and analyses of the universality of the measurements are presented herein.
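The effective pumping speed calculation mentioned above reduces, at steady state, to dividing a known gas throughput by the pressure rise above base pressure; a minimal sketch with illustrative numbers (the values are not from the report, and the units must simply be consistent):

    def effective_pumping_speed(Q, P, P_base):
        # S_eff = Q / (P - P_base); e.g. Q in Torr*L/s and P in Torr gives L/s
        return Q / (P - P_base)

    print(effective_pumping_speed(Q=1.9e-3, P=5.2e-6, P_base=1.0e-7))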
NASA Astrophysics Data System (ADS)
Wilson, Christopher David
Despite the emphasis in modern zoos and aquaria on conservation and environmental education, we know very little about what people learn in these settings, and even less about how they learn it. Research on informal learning in settings such as zoos has suffered from a lack of theory, with few connections being made to theories of learning in formal settings, or to theories regarding the nature of the educational goals. This dissertation consists of three parts: the development and analysis of a test instrument designed to measure constructs of environmental learning in zoos; the application of the test instrument along with qualitative data collection in an evaluation designed to measure the effectiveness of a zoo's education programs; and the analysis of individually matched pre- and post-test data to examine how environmental learning takes place, with respect to the constructivist view of learning, as well as theories of environmental learning and the barriers to pro-environmental behavior. The test instrument consisted of 40 items split into four scales: environmental knowledge, attitudes toward the environment, support for conservation, and environmentally responsible behavior. A model-driven approach was used to develop the instrument, which was analyzed using Item Response Theory and the Rasch dichotomous measurement model. After removal of two items with extremely high difficulty, the instrument was found to be unidimensional and sufficiently reliable. The results of the IRT analyses are interpreted with respect to a modern validity framework. The evaluation portion of this study applied this test instrument to measuring the impact of zoo education programs on 750 fourth through seventh grade students. Qualitative data was collected from program observations and teacher surveys, and a comparison was also made between programs that took place at the zoo, and those that took place in the school classroom, thereby asking questions regarding the role of setting in environmental education. It was found that students in both program types significantly increased their environmental knowledge as a result of the program, but only students in the school-based programs significantly improved their attitudes towards the environment. Analyzing by grade, seventh grade students scored significantly lower on all aspects of the test than the younger students, suggesting a detrimental effect of novel settings on learning in adolescents. Teacher survey data suggests that teachers place great importance on how the education program would fit in with their school-based curriculum, but did little to integrate the program into their classroom teaching. Observations of the programs revealed some logistical issues, and some concerns regarding the zoo instructors' use of curriculum materials. Analyzing the test data from a constructivist perspective revealed that students with high incoming environmental attitudes had significant increases in environmental knowledge. That is, students with positive attitudes towards the environment are predisposed to engage in learning about the environment. Some gender-specific findings are also discussed.
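For reference, the Rasch dichotomous model used in the instrument analysis above gives the probability of a correct response as a logistic function of person ability theta and item difficulty b; a minimal sketch with illustrative values:

    import numpy as np

    def rasch_p(theta, b):
        # probability that a person of ability theta answers an item of
        # difficulty b correctly
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    print(rasch_p(0.0, -1.0))  # average person, easy item -> ~0.73
    print(rasch_p(0.0,  2.0))  # average person, hard item -> ~0.12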
Ruan, W June; Goldstein, Risë B; Chou, S Patricia; Smith, Sharon M; Saha, Tulshi D; Pickering, Roger P; Dawson, Deborah A; Huang, Boji; Stinson, Frederick S; Grant, Bridget F
2008-01-01
This study presents test-retest reliability statistics and information on internal consistency for new diagnostic modules and risk factors for alcohol, drug, and psychiatric disorders from the Alcohol Use Disorder and Associated Disabilities Interview Schedule-IV (AUDADIS-IV). Test-retest statistics were derived from a random sample of 1899 adults selected from 34,653 respondents who participated in the 2004-2005 Wave 2 National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). Internal consistency of continuous scales was assessed using the entire Wave 2 NESARC. Both test and retest interviews were conducted face-to-face. Test-retest and internal consistency results for diagnoses and symptom scales associated with posttraumatic stress disorder, attention-deficit/hyperactivity disorder, and borderline, narcissistic, and schizotypal personality disorders were predominantly good (kappa>0.63; ICC>0.69; alpha>0.75) and reliability for risk factor measures fell within the good to excellent range (intraclass correlations=0.50-0.94; alpha=0.64-0.90). The high degree of reliability found in this study suggests that new AUDADIS-IV diagnostic measures can be useful tools in research settings. The availability of highly reliable measures of risk factors for alcohol, drug, and psychiatric disorders will contribute to the validity of conclusions drawn from future research in the domains of substance use disorder and psychiatric epidemiology.
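Test-retest agreement of the kind reported above (kappa for dichotomous diagnoses) can be computed as in this sketch; the rating vectors are hypothetical, not NESARC data.

    from sklearn.metrics import cohen_kappa_score

    test   = [1, 0, 1, 1, 0, 0, 1, 0]   # diagnosis at the first interview
    retest = [1, 0, 1, 0, 0, 0, 1, 0]   # diagnosis at the retest interview
    print(cohen_kappa_score(test, retest))  # 0.75, 'good' on the study's scale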
Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms
Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas
2016-01-01
Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context specific reconstruction based on generic genome scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Besides other reasons, this might be due to problems arising from the limitation to only one metabolic target function or arbitrary thresholding. This review describes and analyses common validation methods used for testing model building algorithms. Two major methods can be distinguished: consistency testing and comparison based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility of distinguishing between the signal and the background of non-specific binding of probes in a microarray experiment, and with whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks or additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640
Robotic Range Clearance Competition (R2C2)
2011-10-01
unexploded ordnance (UXO). A large part of the debris field consists of ferrous metal objects that magnetic... was set at 7 degrees above horizontal based on terrain around the Base station. We used the BSUBR file for all fields except the Subsurface... and subsurface clearance test areas had numerous pieces of simulated unexploded ordnance (SUXO) buried at random locations around the field.
[The importance of using the computer in treating children with strabismus and amblyopia].
Tatarinov, S A; Amel'ianova, S G; Kashchenko, T P; Lakomkin, V I; Avuchenkova, T N; Galich, V I
1993-01-01
A method for the therapy of strabismus and amblyopia using an IBM PC AT type computer is suggested. It consists of active interaction of the patient with various test objects on the monitor and is implemented via a special set of programs. Clinical indications for the use of the new method are defined. Its use yielded good results in 82 of 97 children.
Castiello, Luciano; Sabatino, Marianna; Zhao, Yingdong; Tumaini, Barbara; Ren, Jiaqiang; Ping, Jin; Wang, Ena; Wood, Lauren V; Marincola, Francesco M; Puri, Raj K; Stroncek, David F
2013-02-01
Cell-based immunotherapies are among the most promising approaches for developing effective and targeted immune response. However, their clinical usefulness and the evaluation of their efficacy rely heavily on complex quality control assessment. Therefore, rapid systematic methods are urgently needed for the in-depth characterization of relevant factors affecting newly developed cell product consistency and the identification of reliable markers for quality control. Using dendritic cells (DCs) as a model, we present a strategy to comprehensively characterize manufactured cellular products in order to define factors affecting their variability, quality and function. After generating clinical grade human monocyte-derived mature DCs (mDCs), we tested by gene expression profiling the degrees of product consistency related to the manufacturing process and variability due to intra- and interdonor factors, and how each factor affects single gene variation. Then, by calculating for each gene an index of variation we selected candidate markers for identity testing, and defined a set of genes that may be useful comparability and potency markers. Subsequently, we confirmed the observed gene index of variation in a larger clinical data set. In conclusion, using high-throughput technology we developed a method for the characterization of cellular therapies and the discovery of novel candidate quality assurance markers.
IND - THE IND DECISION TREE PACKAGE
NASA Technical Reports Server (NTRS)
Buntine, W.
1994-01-01
A common approach to supervised classification and prediction in artificial intelligence and statistical pattern recognition is the use of decision trees. A tree is "grown" from data using a recursive partitioning algorithm to create a tree which has good prediction of classes on new data. Standard algorithms are CART (by Breiman, Friedman, Olshen, and Stone) and ID3 and its successor C4 (by Quinlan). As well as reimplementing parts of these algorithms and offering experimental control suites, IND also introduces Bayesian and MML methods and more sophisticated search in growing trees. These produce more accurate class probability estimates that are important in applications like diagnosis. IND is applicable to most data sets consisting of independent instances, each described by a fixed length vector of attribute values. An attribute value may be a number, one of a set of attribute specific symbols, or it may be omitted. One of the attributes is designated the "target" and IND grows trees to predict the target. Prediction can then be done on new data or the decision tree printed out for inspection. IND provides a range of features and styles with convenience for the casual user as well as fine-tuning for the advanced user or those interested in research. IND can be operated in a CART-like mode (but without regression trees, surrogate splits or multivariate splits), and in a mode like the early version of C4. Advanced features allow more extensive search, interactive control and display of tree growing, and Bayesian and MML algorithms for tree pruning and smoothing. These often produce more accurate class probability estimates at the leaves. IND also comes with a comprehensive experimental control suite. IND consists of four basic kinds of routines: data manipulation routines, tree generation routines, tree testing routines, and tree display routines. The data manipulation routines are used to partition a single large data set into smaller training and test sets. The generation routines are used to build classifiers. The test routines are used to evaluate classifiers and to classify data using a classifier. And the display routines are used to display classifiers in various formats. IND is written in C for Sun4 series computers. It consists of several programs with controlling shell scripts. Extensive UNIX man entries are included. IND is designed to be used on any UNIX system, although it has only been thoroughly tested on SUN platforms. The standard distribution medium for IND is a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation in PostScript format is included on the distribution medium. IND was developed in 1992.
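IND itself is a C package driven by shell scripts; as a rough modern analogue of its CART-like mode, the sketch below grows a pruned decision tree on a training partition and evaluates it on a held-out test set. Class probability estimates at the leaves, the quantity IND's Bayesian smoothing targets, come from predict_proba. This uses scikit-learn, not IND.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    tree = DecisionTreeClassifier(ccp_alpha=0.01).fit(X_tr, y_tr)  # pruned
    print(tree.score(X_te, y_te))          # accuracy on the test partition
    print(tree.predict_proba(X_te[:1]))    # class probability estimates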
DOE Office of Scientific and Technical Information (OSTI.GOV)
Im, Piljae; Bhandari, Mahabir S.; New, Joshua Ryan
This document describes the Oak Ridge National Laboratory (ORNL) multiyear experimental plan for validation and uncertainty characterization of whole-building energy simulation for a multi-zone research facility using a traditional rooftop unit (RTU) as a baseline heating, ventilating, and air conditioning (HVAC) system. The project's overarching objective is to increase the accuracy of energy simulation tools by enabling empirical validation of key inputs and algorithms. Doing so is required to inform the design of increasingly integrated building systems and to enable accountability for performance gaps between design and operation of a building. The project will produce documented data sets that can be used to validate key functionality in different energy simulation tools and to identify errors and inadequate assumptions in simulation engines so that developers can correct them. ASHRAE Standard 140, Method of Test for the Evaluation of Building Energy Analysis Computer Programs (ASHRAE 2004), currently consists primarily of tests to compare different simulation programs with one another. This project will generate sets of measured data to enable empirical validation, incorporate these test data sets in an extended version of Standard 140, and apply these tests to the Department of Energy's (DOE) EnergyPlus software (EnergyPlus 2016) to initiate the correction of any significant deficiencies. The fitness-for-purpose of the key algorithms in EnergyPlus will be established and demonstrated, and vendors of other simulation programs will be able to demonstrate the validity of their products. The data set will be equally applicable to validation of other simulation engines as well.
Measurement of shower development and its Molière radius with a four-plane LumiCal test set-up
NASA Astrophysics Data System (ADS)
Abramowicz, H.; Abusleme, A.; Afanaciev, K.; Benhammou, Y.; Bortko, L.; Borysov, O.; Borysova, M.; Bozovic-Jelisavcic, I.; Chelkov, G.; Daniluk, W.; Dannheim, D.; Elsener, K.; Firlej, M.; Firu, E.; Fiutowski, T.; Ghenescu, V.; Gostkin, M.; Hempel, M.; Henschel, H.; Idzik, M.; Ignatenko, A.; Ishikawa, A.; Kananov, S.; Karacheban, O.; Klempt, W.; Kotov, S.; Kotula, J.; Kozhevnikov, D.; Kruchonok, V.; Krupa, B.; Kulis, Sz.; Lange, W.; Leonard, J.; Lesiak, T.; Levy, A.; Levy, I.; Lohmann, W.; Lukic, S.; Moron, J.; Moszczynski, A.; Neagu, A. T.; Nuiry, F.-X.; Pandurovic, M.; Pawlik, B.; Preda, T.; Rosenblat, O.; Sailer, A.; Schumm, B.; Schuwalow, S.; Smiljanic, I.; Smolyanskiy, P.; Swientek, K.; Terlecki, P.; Uggerhoj, U. I.; Wistisen, T. N.; Wojton, T.; Yamamoto, H.; Zawiejski, L.; Zgura, I. S.; Zhemchugov, A.
2018-02-01
A prototype of a luminometer, designed for a future e^+e^- collider detector, and consisting at present of a four-plane module, was tested in the CERN PS accelerator T9 beam. The objective of this beam test was to demonstrate a multi-plane tungsten/silicon operation, to study the development of the electromagnetic shower and to compare it with MC simulations. The Molière radius has been determined to be 24.0 ± 0.6 (stat.) ± 1.5 (syst.) mm using a parametrization of the shower shape. Very good agreement was found between data and a detailed Geant4 simulation.
NASA Astrophysics Data System (ADS)
Knapkiewicz, P.
2013-03-01
The technology and preliminary qualitative tests of silicon-glass microreactors with embedded pressure and temperature sensors are presented. The concept of microreactors for carrying out highly exothermic reactions, e.g. nitration of hydrocarbons, and the design process, including computer-aided simulations, are described in detail. A silicon-glass microreactor chip consisting of two micromixers (a multistream micromixer), reaction channels, and cooling/heating chambers has been proposed. The microreactor chip was equipped with a set of pressure and temperature sensors and packaged. Tests of mixing quality, pressure drops in channels, heat exchange efficiency, and the dynamic behavior of the pressure and temperature sensors were documented. Finally, two applications were described.
Electromagnetic interference in electrical systems of motor vehicles
NASA Astrophysics Data System (ADS)
Dziubiński, M.; Drozd, A.; Adamiec, M.; Siemionek, E.
2016-09-01
The electronic ignition system affects the electronic equipment of the vehicle through electric and magnetic fields. The measurement of radio electromagnetic interference originating from the ignition system and affecting the audiovisual test bench was carried out at variable ignition system speeds. The paper presents measurements of radio electromagnetic interference in automobiles. In order to determine the level of electromagnetic interference, the audiovisual test bench was equipped with a set of meters for power consumption and assessment of the level of electromagnetic interference. Measurements of the electromagnetic interference level within the audiovisual system were performed on an experimental test bench consisting of the ignition system, the starting system, and the charging system with an alternator and regulator.
NASA Technical Reports Server (NTRS)
Dankanich, John W.; Walker, Mitchell; Swiatek, Michael W.; Yim, John T.
2013-01-01
The electric propulsion community has been implored to establish and implement a set of universally applicable test standards during the research, development, and qualification of electric propulsion systems. Variability from facility to facility and, more importantly, between ground and flight performance can result in large margins in application or aversion to mission infusion. Performance measurements and life testing under appropriate conditions can be costly and lengthy. Measurement practices must be consistent, accurate, and repeatable. Additionally, the measurements must be universally transportable across facilities throughout development, qualification, spacecraft integration, and on-orbit performance. A recommended practice for making pressure measurements and pressure diagnostics, and for calculating effective pumping speeds, is presented with justification.
Development of advanced lightweight containment systems
NASA Technical Reports Server (NTRS)
Stotler, C.
1981-01-01
Parametric type data were obtained on advanced lightweight containment systems. These data were used to generate design methods and procedures necessary for the successful development of such systems. The methods were then demonstrated through the design of a lightweight containment system for a CF6 size engine. The containment concept evaluated consisted basically of a lightweight structural sandwich shell wrapped with dry Kevlar cloth. The initial testing was directed towards the determination of the amount of Kevlar required to result in threshold containment for a specific set of test conditions. A relationship was then developed between the thickness required and the energy of the released blade so that the data could be used to design for conditions other than those tested.
Quantum adiabatic machine learning
NASA Astrophysics Data System (ADS)
Pudenz, Kristen L.; Lidar, Daniel A.
2013-05-01
We develop an approach to machine learning and anomaly detection via quantum adiabatic evolution. This approach consists of two quantum phases, with some amount of classical preprocessing to set up the quantum problems. In the training phase we identify an optimal set of weak classifiers, to form a single strong classifier. In the testing phase we adiabatically evolve one or more strong classifiers on a superposition of inputs in order to find certain anomalous elements in the classification space. Both the training and testing phases are executed via quantum adiabatic evolution. All quantum processing is strictly limited to two-qubit interactions so as to ensure physical feasibility. We apply and illustrate this approach in detail to the problem of software verification and validation, with a specific example of the learning phase applied to a problem of interest in flight control systems. Beyond this example, the algorithm can be used to attack a broad class of anomaly detection problems.
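The training phase described above can be written as a binary optimization over inclusion weights for the weak classifiers. The toy sketch below states a quadratic (QUBO-style) objective with a fixed 1/N scaling and a sparsity penalty, and minimizes it by classical brute force, which stands in for what the adiabatic evolution would do natively; the loss form and penalty weight are illustrative, not the paper's exact formulation.

    import itertools
    import numpy as np

    # rows: weak classifier outputs (+/-1) on four samples; the last
    # classifier is deliberately poor
    C = np.array([[ 1,  1, -1, -1],
                  [ 1, -1,  1, -1],
                  [-1,  1,  1, -1],
                  [-1, -1, -1, -1]])
    y = np.array([1, 1, 1, -1])        # true labels
    N, lam = C.shape[0], 0.1           # scaling and sparsity penalty

    best_w, best_E = None, np.inf
    for bits in itertools.product([0, 1], repeat=N):
        w = np.array(bits)
        E = np.sum((w @ C / N - y / 2) ** 2) + lam * w.sum()  # quadratic in w
        if E < best_E:
            best_w, best_E = w, E
    print(best_w, round(best_E, 3))    # selects the three good classifiers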
Willemse, Elias J; Joubert, Johan W
2016-09-01
In this article we present benchmark datasets for the Mixed Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities (MCARPTIF). The problem is a generalisation of the Capacitated Arc Routing Problem (CARP), and closely represents waste collection routing. Four different test sets are presented, each consisting of multiple instance files, which can be used to benchmark different solution approaches for the MCARPTIF. An in-depth description of the datasets can be found in "Constructive heuristics for the Mixed Capacity Arc Routing Problem under Time Restrictions with Intermediate Facilities" (Willemse and Joubert, 2016) [2] and "Splitting procedures for the Mixed Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities" (Willemse and Joubert, in press) [4]. The datasets are publicly available from "Library of benchmark test sets for variants of the Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities" (Willemse and Joubert, 2016) [3].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schambach, Joachim; Rossewij, M. J.; Sielewicz, K. M.
The ALICE Collaboration is preparing a major detector upgrade for the LHC Run 3, which includes the construction of a new silicon pixel based Inner Tracking System (ITS). The ITS readout system consists of 192 readout boards to control the sensors and their power system, receive triggers, and deliver sensor data to the DAQ. To prototype various aspects of this readout system, an FPGA based carrier board and an associated FMC daughter card containing the CERN Gigabit Transceiver (GBT) chipset have been developed. Furthermore, this contribution describes laboratory and radiation testing results with this prototype board set.
Improvement of analytical dynamic models using modal test data
NASA Technical Reports Server (NTRS)
Berman, A.; Wei, F. S.; Rao, K. V.
1980-01-01
A method developed to determine minimum changes in analytical mass and stiffness matrices to make them consistent with a set of measured normal modes and natural frequencies is presented. The corrected model will be an improved base for studies of physical changes, boundary condition changes, and for prediction of forced responses. The method features efficient procedures not requiring solutions of the eigenvalue problem, and the ability to have more degrees of freedom than the test data. In addition, modal displacements are obtained for all analytical degrees of freedom, and the frequency dependence of the coordinate transformations is properly treated.
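A sketch of the mass-matrix step of this kind of method: Berman's minimum-change correction that makes an analytical mass matrix consistent with (orthogonal to) a set of measured, mass-normalized mode shapes. This is one published form of the update, shown here under that assumption; the stiffness correction, which also enforces the measured frequencies, is omitted.

    import numpy as np

    def berman_mass_update(Ma, Phi):
        # Ma: analytical mass matrix (n x n); Phi: measured modes (n x m)
        ma = Phi.T @ Ma @ Phi                 # analytical modal mass matrix
        inv = np.linalg.inv(ma)
        I = np.eye(ma.shape[0])
        return Ma + Ma @ Phi @ inv @ (I - ma) @ inv @ Phi.T @ Ma

    # the corrected M satisfies Phi.T @ M @ Phi = I (orthogonality)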
Teaching calculus using module based on cooperative learning strategy
NASA Astrophysics Data System (ADS)
Arbin, Norazman; Ghani, Sazelli Abdul; Hamzah, Firdaus Mohamad
2014-06-01
The purpose of the research is to evaluate the effectiveness of a module which utilizes cooperative learning for teaching limits, derivatives and integrals in Calculus. The sample consists of 50 semester 1 students from the Science Programme (AT 16) at Sultan Idris Education University. A set of questions on the related topics (pre- and post-test) was used as the instrument to collect data. The data were analyzed using inferential statistics, namely the paired sample t-test and the independent t-test. The result shows that students have a positive inclination towards the module in terms of understanding.
Investigation of ground reflection and impedance from flyover noise measurements
NASA Technical Reports Server (NTRS)
Chapkis, R. L.; Marsh, A. H.
1978-01-01
An extensive series of flyover noise tests was conducted for the primary purpose of studying meteorological effects on the propagation of aircraft noise. The test airplane, a DC-9-10, flew several level-flight passes at various heights over a taxiway. Two microphone stations were located under the flight path. A total of 37 runs was selected for analysis and processed to obtain a consistent set of 1/3-octave-band sound pressure levels at half-second intervals. The goal of the present study was to use the flyover noise data to deduce acoustical reflection coefficients and hence acoustical impedances.
A Study of Reflected Sonic Booms Using Airborne Measurements
NASA Technical Reports Server (NTRS)
Kantor, Samuel R.; Cliatt, Larry J.
2017-01-01
In support of ongoing efforts to bring commercial supersonic flight to the public, the Sonic Booms in Atmospheric Turbulence (SonicBAT) flight test was conducted at NASA Armstrong Flight Research Center. During this test, airborne sonic boom measurements were made using an instrumented TG-14 motor glider, called the Airborne Acoustic Measurement Platform (AAMP). During the flight program, the AAMP was consistently able to measure the sonic boom wave reflected off of the ground, in addition to the incident wave, resulting in a unique data set of airborne sonic boom reflection measurements.
NASA Astrophysics Data System (ADS)
Schambach, J.; Rossewij, M. J.; Sielewicz, K. M.; Aglieri Rinella, G.; Bonora, M.; Ferencei, J.; Giubilato, P.; Vanat, T.
2016-12-01
The ALICE Collaboration is preparing a major detector upgrade for the LHC Run 3, which includes the construction of a new silicon pixel based Inner Tracking System (ITS). The ITS readout system consists of 192 readout boards to control the sensors and their power system, receive triggers, and deliver sensor data to the DAQ. To prototype various aspects of this readout system, an FPGA based carrier board and an associated FMC daughter card containing the CERN Gigabit Transceiver (GBT) chipset have been developed. This contribution describes laboratory and radiation testing results with this prototype board set.
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2005-04-01
We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×10¹⁹ eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.
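The essence of such a per-event (unbinned) likelihood ratio test can be sketched with a Gaussian point-spread function for the signal and a uniform background over a small patch of sky; every number below (patch size, PSF width, event counts) is invented:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
sigma, R = 0.6, 5.0    # PSF width and patch radius in degrees (invented)

# Toy events: 180 background events uniform on the patch (pdf 2r/R^2 in
# offset r from the candidate source) plus 20 signal events with a
# Rayleigh-distributed offset (2-D Gaussian PSF).
offsets = np.concatenate([R * np.sqrt(rng.uniform(size=180)),
                          rng.rayleigh(sigma, size=20)])
N = offsets.size

norm = 1.0 - np.exp(-R**2 / (2 * sigma**2))           # PSF mass inside R
S = (offsets / sigma**2) * np.exp(-offsets**2 / (2 * sigma**2)) / norm
B = 2 * offsets / R**2

def neg_log_L(ns):
    # Per-event mixture of signal and background densities.
    f = ns / N
    return -np.sum(np.log(f * S + (1 - f) * B))

fit = minimize_scalar(neg_log_L, bounds=(0.0, N), method="bounded")
TS = 2 * (neg_log_L(0.0) - fit.fun)   # likelihood ratio test statistic
print(f"best-fit n_s = {fit.x:.1f}, TS = {TS:.1f}")
```

Because each event contributes its own probability density, no angular bin size ever has to be chosen, which is the point made in the abstract.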
Frisch, Stefan A.; Pisoni, David B.
2012-01-01
Objective: Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. The simulations also investigated the interrelations of commonly used measures of closed-set and open-set tests of speech perception. Design: A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In one, early phoneme decisions are used in a lexical search to find the best matching candidate. In the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies. Results: Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted in the model in which phoneme decisions were delayed until lexical access. Conclusions: Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition. Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users and children and adults with normal hearing. PMID:11132784
Implications of crater distributions on Venus
NASA Technical Reports Server (NTRS)
Kaula, W. M.
1993-01-01
The horizontal locations of craters on Venus are consistent with randomness. However, (1) randomness does not make crater counts useless for age indications; (2) consistency does not imply necessity or optimality; and (3) horizontal location is not the only reference frame against which to test models. Re (1), the apparent smallness of resurfacing areas means that a region on the order of one percent of the planet with a typical number of craters, 5-15, will have a range of feature ages of several 100 My. Re (2), models of resurfacing somewhat similar to Earth's can be found that are also consistent and more optimal than random: i.e., resurfacing occurring in clusters that arise and die away in time intervals on the order of 50 My. These agree with the observation that there are more areas of high crater density, and fewer of moderate density, than expected for random. Re (3), 799 crater elevations were tested; there are more at low elevations and fewer at high elevations than expected for random: i.e., 54.6 percent below the median. Only one of 40 random sets of 799 was as extreme.
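The elevation test in (3) is easy to restate as a Monte Carlo experiment: if crater sites were random, the number of the 799 elevations falling below the planetary median would be binomial with p = 0.5. A small sketch using only the figures quoted above:

```python
import numpy as np

rng = np.random.default_rng(2)

# Under randomness, the count of the 799 crater elevations below the
# planetary median is Binomial(799, 0.5).  Draw 40 random sets (as in the
# text) and compare with the observed 54.6% below the median.
n_craters, n_sets = 799, 40
observed = 0.546

fracs = rng.binomial(n_craters, 0.5, size=n_sets) / n_craters
print("random sets as extreme as observed:", np.sum(fracs >= observed))

# Large-sample check: 54.6% of 799 sits about 2.6 sigma above the null
# mean, so roughly 1 random set in 200 should be as extreme.
sigma = 0.5 / np.sqrt(n_craters)
print("z-score of observed fraction:", (observed - 0.5) / sigma)
```

The z-score of about 2.6 is consistent with the abstract's report that only one of 40 random sets was as extreme as the data.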
Garcia, Ediberto; Newfang, Daniel; Coyle, Jayme P; Blake, Charles L; Spencer, John W; Burrelli, Leonard G; Johnson, Giffe T; Harbison, Raymond D
2018-07-01
Three independent asbestos exposure evaluations were conducted using wire gauze pads, following standard practice in the laboratory setting. All testing occurred in a controlled atmosphere inside an enclosed chamber simulating a laboratory setting. Separate teams consisting of a laboratory technician, or a technician and an assistant, simulated common tasks involving wire gauze pads, including heating and direct wire gauze manipulation. Area and personal air samples were collected and evaluated for asbestos consistent with National Institute for Occupational Safety and Health methods 7400 and 7402, and the Asbestos Hazard Emergency Response Act (AHERA) method. Bulk gauze pad samples were analyzed by Polarized Light Microscopy and Transmission Electron Microscopy to determine asbestos content. Among air samples, chrysotile asbestos was the only fiber found in the first and third experiments, and tremolite asbestos in the second experiment. None of the air samples contained asbestos in concentrations above the current permissible regulatory levels promulgated by OSHA. These findings indicate that the level of asbestos exposure when working with wire gauze pads in the laboratory setting is much lower than levels associated with asbestosis or asbestos-related lung cancer and mesothelioma. Copyright © 2018. Published by Elsevier Inc.
Distribution of verbal and physical violence for same and opposite genders among adolescents.
Winstok, Zeev; Enosh, Guy
2008-09-01
The present study was set up to test the perceived distribution of verbal and physical violent behaviors among same- and opposite-gender peers. More specifically, those perceived violent behaviors are examined as the outcome of adolescents' cost-risk goals. The study assumes two conflicting social goals: whereas the goal of risk reduction may motivate withdrawal from conflict and decrease the prevalence of violent events, the goal of pursuing social status may motivate initiation and/or retaliation, thus increasing the prevalence of violence. The study is based on a sample of 155 high-school students who recorded the frequency of observing violent events in their peer group over a one-week period. Findings demonstrate that for males, opponent gender had a primary effect on violence distribution. Males exhibited violence against males more frequently than against females. This result is consistent with the assumption that males set a higher priority on pursuing social status. For females, verbal violence was more frequent than physical forms of aggression. This is consistent with the assumption that females set a higher priority on avoiding risk. These results are discussed from an evolutionary cost-risk perspective.
Multi-Beam Approach for Accelerating Alignment and Calibration of HyspIRI-Like Imaging Spectrometers
NASA Technical Reports Server (NTRS)
Eastwood, Michael L.; Green, Robert O.; Mouroulis, Pantazis; Hochberg, Eric B.; Hein, Randall C.; Kroll, Linley A.; Geier, Sven; Coles, James B.; Meehan, Riley
2012-01-01
A paper describes an optical stimulus that produces more consistent results, and can be automated for unattended, routine generation of data analysis products needed by the integration and testing team assembling a high-fidelity imaging spectrometer system. One key attribute of the system is an arrangement of pick-off mirrors that provides multiple input beams (five in this implementation) to simultaneously provide stimulus light to several field angles along the field of view of the sensor under test, allowing one data set to contain all the information that previously required five data sets to be separately collected. This stimulus can also be fed by quickly reconfigured sources that ultimately provide three data set types that would previously be collected separately using three different setups: Spectral Response Function (SRF), Cross-track Response Function (CRF), and Along-track Response Function (ARF), respectively. This method also lends itself to expansion of the number of field points if less interpolation across the field of view is desirable. An absolute minimum of three is required at the beginning stages of imaging spectrometer alignment.
Taming parallel I/O complexity with auto-tuning
Behzad, Babak; Luu, Huong Vu Thanh; Huchette, Joseph; ...
2013-11-17
We present an auto-tuning system for optimizing I/O performance of HDF5 applications and demonstrate its value across platforms, applications, and at scale. The system uses a genetic algorithm to search a large space of tunable parameters and to identify effective settings at all layers of the parallel I/O stack. The parameter settings are applied transparently by the auto-tuning system via dynamically intercepted HDF5 calls. To validate our auto-tuning system, we applied it to three I/O benchmarks (VPIC, VORPAL, and GCRM) that replicate the I/O activity of their respective applications. We tested the system with different weak-scaling configurations (128, 2048, and 4096 CPU cores) that generate 30 GB to 1 TB of data, and executed these configurations on diverse HPC platforms (Cray XE6, IBM BG/P, and Dell Cluster). In all cases, the auto-tuning framework identified tunable parameters that substantially improved write performance over default system settings. In conclusion, we consistently demonstrate I/O write speedups between 2x and 100x for test configurations.
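The search loop of such an auto-tuner can be sketched as a plain genetic algorithm over a space of tunable parameters. The parameter names below are illustrative of typical parallel I/O stack knobs (not taken from the paper), and the fitness function is a stand-in for actually running the benchmark with each configuration applied:

```python
import random

random.seed(3)

# Hypothetical tunables at several layers of the parallel I/O stack.
SPACE = {
    "stripe_count":   [4, 8, 16, 32, 64],       # file system layer
    "stripe_size_mb": [1, 2, 4, 8, 16, 32],
    "cb_nodes":       [1, 2, 4, 8],             # MPI-IO collective buffering
    "alignment_kb":   [64, 256, 1024, 4096],    # HDF5 layer
}
KEYS = list(SPACE)

def random_config():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(cfg):
    # Stand-in for "run the I/O kernel and measure write bandwidth";
    # a real auto-tuner would execute the benchmark with cfg applied.
    return (cfg["stripe_count"] * cfg["stripe_size_mb"]) ** 0.5 \
        + cfg["cb_nodes"] - abs(cfg["alignment_kb"] - 1024) / 1024.0

def crossover(a, b):
    return {k: (a if random.random() < 0.5 else b)[k] for k in KEYS}

def mutate(cfg, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in cfg.items()}

pop = [random_config() for _ in range(20)]
for gen in range(15):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:6]                       # keep the best configurations
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(14)]

print("best configuration:", max(pop, key=fitness))
```

The appeal of the genetic search in this setting is that the parameter space is discrete, layered, and full of interactions, so gradient-free recombination of good configurations is a natural fit.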
Shock characterization of TOAD pins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weirick, L.J.; Navarro, N.J.
1995-08-01
The purpose of this program was to characterize the response of Time Of Arrival Detector (TOAD) pins to shock loading with respect to risetime, amplitude, repeatability and consistency. TOAD pins were subjected to impacts of 35 to 420 kilobars amplitude and approximately 1 ms pulse width to investigate the timing spread of four pins and the voltage output profile of the individual pins. Sets of pins were also aged at 45°, 60°, and 80°C for approximately nine weeks before shock testing at 315 kilobars impact stress. Four sets of pins were heated to 50.2°C (125°F) for approximately two hours and then impacted at either 50 or 315 kilobars. Also, four sets of pins were aged at 60°C for nine weeks and then heated to 50.2°C before shock testing at 50 and 315 kilobars impact stress, respectively. Particle velocity measurements at the contact point between the stainless steel targets and TOAD pins were made using a Velocity Interferometer System for Any Reflector (VISAR) to monitor both the amplitude and profile of the shock waves.
Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.
Götz, Andreas W; Kollmar, Christian; Hess, Bernd A
2005-09-01
We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li–F, and Na–Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to an uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods. (c) 2005 Wiley Periodicals, Inc.
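The standard cure for near-linear dependence in an expansion basis works through the eigenvalues of the overlap matrix. Whether or not it matches this paper's exact scheme, a canonical-orthogonalization sketch (with an invented toy basis) conveys the idea:

```python
import numpy as np

def canonical_orthogonalize(S, tol=1e-6):
    """Drop the near-null space of the overlap matrix S: keep eigenvectors
    with eigenvalues above tol and scale them to unit overlap.  Canonical
    orthogonalization is the standard remedy for near-linear dependence;
    Lowdin's symmetric S^(-1/2) is its untruncated relative."""
    evals, evecs = np.linalg.eigh(S)
    keep = evals > tol
    X = evecs[:, keep] / np.sqrt(evals[keep])
    return X                              # satisfies X.T @ S @ X = I

# Toy basis of 5 vectors where the 5th is almost a combination of two others.
rng = np.random.default_rng(4)
C = rng.normal(size=(5, 4))
dep = C[:, :1] + C[:, 1:2] + 1e-8 * rng.normal(size=(5, 1))
S = np.hstack([C, dep]).T @ np.hstack([C, dep])   # overlap (Gram) matrix

X = canonical_orthogonalize(S)
print("functions kept:", X.shape[1])                    # 4 of 5
print(np.allclose(X.T @ S @ X, np.eye(X.shape[1])))     # True
```

Discarding the near-singular directions both stabilizes the fit and shrinks the expansion, consistent with the reduced computational effort reported above.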
Once-through integral system (OTIS): Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gloudemans, J R
1986-09-01
A scaled experimental facility, designated the once-through integral system (OTIS), was used to acquire post-small break loss-of-coolant accident (SBLOCA) data for benchmarking system codes. OTIS was also used to investigate the application of the Abnormal Transient Operating Guidelines (ATOG) used in the Babcock and Wilcox (B and W) designed nuclear steam supply system (NSSS) during the course of an SBLOCA. OTIS was a single-loop facility with a plant to model power scale factor of 1686. OTIS maintained the key elevations, approximate component volumes, and loop flow resistances, and simulated the major component phenomena of a B and W raised-loop nuclear plant. A test matrix consisting of 15 tests divided into four categories was performed. The largest group contained 10 tests and was defined to parametrically obtain an extensive set of plant-typical experimental data for code benchmarking. Parameters such as leak size, leak location, and high-pressure injection (HPI) shut-off head were individually varied. The remaining categories were specified to study the impact of the ATOGs (2 tests), to note the effect of guard heater operation on observed phenomena (2 tests), and to provide a data set for comparison with previous test experience (1 test). A summary of the test results and a detailed discussion of Test 220100 is presented. Test 220100 was the nominal or reference test for the parametric studies. This test was performed with a scaled 10-cm² leak located in the cold leg suction piping.
The influence of shyness on children's test performance.
Crozier, W Ray; Hostettler, Kirsten
2003-09-01
Research has shown that shy children differ from their peers not only in their use of language in routine social encounters but also in formal assessments of their language development, including psychometric tests of vocabulary. There has been little examination of factors contributing to these individual differences. The aims of this study were to investigate cognitive-competence and social anxiety interpretations of differences in children's performance on tests of vocabulary, and to examine the performance of shy and less shy children under different conditions of test administration: individually with an examiner, or among their peers within the familiar classroom setting. The sample consisted of 240 Year 5 pupils (122 male, 118 female) from 24 primary schools. Shy and less shy children, identified by teacher nomination and checklist ratings, completed vocabulary and mental arithmetic tests in one of three conditions, in a between-subjects design. The conditions varied individual and group administration, and oral and written responses. The conditions of test administration influenced the vocabulary test performance of shy children: they performed significantly more poorly than their peers in the two face-to-face conditions but not in the group test condition. A comparable trend for the arithmetic test was not statistically significant. Across the sample as a whole, shyness correlated significantly with test scores. Shyness does influence children's cognitive test performance, and its impact is larger when children are tested face-to-face rather than in a more anonymous group setting. The results are of significance for theories of shyness and have implications for the assessment of schoolchildren.
Identifying genetic variants that affect viability in large cohorts
Berisa, Tomaz; Day, Felix R.; Perry, John R. B.
2017-01-01
A number of open questions in human evolutionary genetics would become tractable if we were able to directly measure evolutionary fitness. As a step towards this goal, we developed a method to examine whether individual genetic variants, or sets of genetic variants, currently influence viability. The approach consists in testing whether the frequency of an allele varies across ages, accounting for variation in ancestry. We applied it to the Genetic Epidemiology Research on Adult Health and Aging (GERA) cohort and to the parents of participants in the UK Biobank. Across the genome, we found only a few common variants with large effects on age-specific mortality: those tagging the APOE ε4 allele and near CHRNA3. These results suggest that large effects of this kind, even late-onset ones, are kept at low frequency by purifying selection. Testing viability effects of sets of genetic variants that jointly influence one of 42 traits, we detected a number of strong signals. In participants of the UK Biobank of British ancestry, we found that variants that delay puberty timing are associated with a longer parental life span (P ≈ 6.2 × 10⁻⁶ for fathers and P ≈ 2.0 × 10⁻³ for mothers), consistent with epidemiological studies. Similarly, variants associated with later age at first birth are associated with a longer maternal life span (P ≈ 1.4 × 10⁻³). Signals are also observed for variants influencing cholesterol levels, risk of coronary artery disease (CAD), body mass index, as well as risk of asthma. These signals exhibit consistent effects in the GERA cohort and among participants of the UK Biobank of non-British ancestry. We also found marked differences between males and females, most notably at the CHRNA3 locus, and for variants associated with risk of CAD and cholesterol levels. Beyond our findings, the analysis serves as a proof of principle for how upcoming biomedical data sets can be used to learn about selection effects in contemporary humans. PMID:28873088
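Stripped of the ancestry adjustment and careful mortality modeling, the core test, whether an allele's frequency varies across ages, reduces to a regression of allele dosage on age. A toy sketch with simulated genotypes (all numbers invented; the published method is considerably more elaborate):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Toy cohort: ages and genotype dosages (0/1/2 copies) for one variant
# whose carriers are slightly depleted at older ages (a viability effect).
n = 20_000
age = rng.uniform(40, 90, size=n)
freq = 0.20 - 0.0005 * (age - 40)      # allele frequency declining with age
dosage = rng.binomial(2, freq)

# Simplest version of the test: regress dosage on age and ask whether the
# slope differs from zero.  (The real analysis additionally adjusts for
# ancestry and models age-specific mortality.)
slope, intercept, r, p, se = stats.linregress(age, dosage)
print(f"slope per year = {slope:.5f}, P = {p:.2e}")
```

A flat slope is what one expects for almost all variants; the abstract's point is that strong exceptions such as APOE ε4 are rare precisely because selection removes them.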
ERIC Educational Resources Information Center
Tannenbaum, Richard J.; Kannan, Priya
2015-01-01
Angoff-based standard setting is widely used, especially for high-stakes licensure assessments. Nonetheless, some critics have claimed that the judgment task is too cognitively complex for panelists, whereas others have explicitly challenged the consistency in (replicability of) standard-setting outcomes. Evidence of consistency in item judgments…
Monitoring and Acquisition Real-time System (MARS)
NASA Technical Reports Server (NTRS)
Holland, Corbin
2013-01-01
MARS is a graphical user interface (GUI) written in MATLAB and Java that allows the user to configure and control the Scalable Parallel Architecture for Real-Time Acquisition and Analysis (SPARTAA) data acquisition system. SPARTAA not only acquires data, but also allows complex algorithms to be applied to the acquired data in real time. The MARS client allows the user to set up and configure all settings for the data channels attached to the system, and gives complete control over starting and stopping data acquisition. It provides a unique "Test" programming environment, allowing the user to create tests consisting of a series of alarms, each of which contains any number of data channels. Each alarm is configured with a particular algorithm, which determines the type of processing applied to each data channel and tests the result against a defined threshold. Tests can be uploaded to SPARTAA, thereby teaching it how to process the data. MARS is unique in how easily it adapts to many test configurations: it sends and receives messages via TCP/IP, which allows for quick integration into almost any test environment. The use of MATLAB and Java as the programming languages allows developers to integrate the software across multiple operating platforms.
A Repeated Power Training Enhances Fatigue Resistance While Reducing Intraset Fluctuations.
Gonzalo-Skok, Oliver; Tous-Fajardo, Julio; Moras, Gerard; Arjol-Serrano, José Luis; Mendez-Villanueva, Alberto
2018-04-04
Oliver, GS, Julio, TF, Moras, G, José Luis, AS, and Alberto, MV. A repeated power training enhances fatigue resistance while reducing intraset fluctuations. J Strength Cond Res XX(X): 000-000, 2018-The present study analyzed the effects of adding an upper-body repeated power ability (RPA) training to habitual strength training sessions. Twenty young elite male basketball players were randomly allocated into a control group (CON, n = 10) or repeated power group (RPG, n = 10) and evaluated by 1 repetition maximum (1RM), incremental load, and RPA tests in the bench press exercise before and after a 7-week period and a 4-week cessation period. The repeated power group performed 1-3 blocks of 5 sets of 5 repetitions using the load that maximized power output, with 30 seconds and 3 minutes of passive recovery between sets and blocks, respectively. Between-group analysis showed substantial greater improvements in RPG compared with CON in: best set (APB), last set (APL), mean power over 5 sets (APM), percentage of decrement, fluctuation decrease during APL and RPA index (APLpost/APBpre) during the RPA test (effect size [ES] = 0.64-1.86), and 1RM (ES = 0.48) and average power at 80% of 1RM (ES = 1.11) in the incremental load test. The improvements of APB and APM were almost perfectly correlated. In conclusion, RPA training represents an effective method to mainly improve fatigue resistance together with the novel finding of a better consistency in performance (measured as reduced intraset power fluctuations) at the end of a dynamic repeated effort.
Munigala, Satish; Jackups, Ronald R; Poirier, Robert F; Liang, Stephen Y; Wood, Helen; Jafarzadeh, S Reza; Warren, David K
2018-01-20
Urinalysis and urine culture are commonly ordered tests in the emergency department (ED). We evaluated the impact of removing urine tests from the 'frequently ordered test' list in the computerised physician order entry (CPOE) system on urine testing practices. We conducted a before (1 September to 20 October 2015) and after (21 October to 30 November 2015) study of ED patients. The intervention consisted of retaining 'urinalysis with reflex to microscopy' as the only urine test in a highly accessible list of frequently ordered tests in the CPOE system. All other urine tests required use of additional order screens via additional mouse clicks. The frequency of urine testing before and after the intervention was compared, adjusting for temporal trends. During the study period, 6499 (28.2%) of 22,948 ED patients had ≥1 urine test ordered. Urine testing rates for all ED patients decreased in the post-intervention period for urinalysis (291.5 pre intervention vs 278.4 per 1000 ED visits post intervention, P=0.03), urine microscopy (196.5 vs 179.5, P=0.001) and urine culture (54.3 vs 29.7, P<0.001). When adjusted for temporal trends, the daily culture rate per 1000 ED visits decreased by 46.6% (-46.6%, 95% CI -66.2% to -15.6%), but urinalysis (0.4%, 95% CI -30.1 to 44.4%), microscopy (-6.5%, 95% CI -36.0% to 36.6%) and catheterised urine culture rates (17.9%, 95% CI -16.9 to 67.4) were unchanged. A simple intervention of retaining only 'urinalysis with reflex to microscopy' and removing all other urine tests from the 'frequently ordered' window of the ED electronic order set decreased urine cultures ordered by 46.6% after accounting for temporal trends. Given the injudicious use of antimicrobial therapy for asymptomatic bacteriuria, findings from our study suggest that proper design of electronic order sets plays a vital role in reducing excessive ordering of urine cultures. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Fitzgerald, J.J.; Detwiler, C.G. Jr.
1960-05-24
A description is given of a personnel neutron dosimeter capable of indicating the complete spectrum of the neutron dose received as well as the dose for each neutron energy range therein. The device consists of three sets of indium foils supported in an aluminum case. The first set consists of three foils of indium, the second set consists of a similar set of indium foils sandwiched between layers of cadmium, whereas the third set is similar to the second set but is sandwiched between layers of polyethylene. By analysis of all the foils the neutron spectrum and the total dose from neutrons of all energy levels can be ascertained.
A multicenter study benchmarks software tools for label-free proteome quantification.
Navarro, Pedro; Kuharev, Jörg; Gillet, Ludovic C; Bernhardt, Oliver M; MacLean, Brendan; Röst, Hannes L; Tate, Stephen A; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I; Aebersold, Ruedi; Tenzer, Stefan
2016-11-01
Consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH 2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from sequential window acquisition of all theoretical fragment-ion spectra (SWATH)-MS, which uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test data sets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation-window setups. For consistent evaluation, we developed LFQbench, an R package, to calculate metrics of precision and accuracy in label-free quantitative MS and report the identification performance, robustness and specificity of each software tool. Our reference data sets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics.
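Benchmarks of this design mix proteomes at known ratios, so every protein has an expected log-ratio between samples, and accuracy and precision metrics fall out directly. The original work provides these metrics as an R package; a Python stand-in with synthetic quantities (the 1:1 / 2:1 / 1:4 species design is an assumption for illustration) might look like:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hybrid proteome idea: samples A and B mix human/yeast/E. coli proteins
# at known ratios, so each species has an expected log2(A/B).
expected = {"human": 0.0, "yeast": 1.0, "ecoli": -2.0}   # assumed design

measured = {
    sp: exp + rng.normal(0.0, 0.35, size=500)            # synthetic data
    for sp, exp in expected.items()
}

for sp, log_ratios in measured.items():
    accuracy = np.median(log_ratios) - expected[sp]      # bias vs. truth
    precision = np.std(log_ratios)                       # ratio dispersion
    print(f"{sp:6s}  median error = {accuracy:+.3f}  "
          f"dispersion = {precision:.3f}")
```

Comparing these two numbers per species across software tools is essentially what allows such a study to rank identification and quantification performance on a common footing.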
Tests of neutrino interaction models with the MicroBooNE detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rafique, Aleena
2018-01-01
I measure a large set of observables in inclusive charged current muon neutrino scattering on argon with the MicroBooNE liquid argon time projection chamber operating at Fermilab. I evaluate three neutrino interaction models based on the widely used GENIE event generator using these observables. The measurement uses a data set consisting of neutrino interactions with a final state muon candidate fully contained within the MicroBooNE detector. These data were collected in 2016 with the Fermilab Booster Neutrino Beam, which has an average neutrino energy of 800 MeV, using an exposure corresponding to 5.0×10¹⁹ protons-on-target. The analysis employs fully automatic event selection and charged particle track reconstruction and uses a data-driven technique to separate neutrino interactions from cosmic ray background events. I find that GENIE models consistently describe the shapes of a large number of kinematic distributions for fixed observed multiplicity, but I show an indication that the observed multiplicity fractions deviate from GENIE expectations.
Assessment of tautomer distribution using the condensed reaction graph approach
NASA Astrophysics Data System (ADS)
Gimadiev, T. R.; Madzhidov, T. I.; Nugmanov, R. I.; Baskin, I. I.; Antipin, I. S.; Varnek, A.
2018-03-01
We report the first direct QSPR modeling of equilibrium constants of tautomeric transformations (log K_T) in different solvents and at different temperatures, which does not require intermediate assessment of acidity (basicity) constants for all tautomeric forms. The key step of the modeling consisted in merging the two tautomers into a single molecular graph (a "condensed reaction graph"), which makes it possible to compute molecular descriptors characterizing the entire equilibrium. The support vector regression method was used to build the models. The training set consisted of 785 transformations belonging to 11 types of tautomeric reactions, with equilibrium constants measured in different solvents and at different temperatures. The models obtained perform well both in cross-validation (Q² = 0.81, RMSE = 0.7 log K_T units) and on two external test sets. Benchmarking studies demonstrate that our models outperform results obtained with DFT B3LYP/6-311++G(d,p) and the ChemAxon Tautomerizer, applicable only in water at room temperature.
Computer-aided detection of breast masses: Four-view strategy for screening mammography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei Jun; Chan Heangping; Zhou Chuan
2011-04-15
Purpose: To improve the performance of a computer-aided detection (CAD) system for mass detection by using four-view information in screening mammography. Methods: The authors developed a four-view CAD system that emulates radiologists' reading by using the craniocaudal and mediolateral oblique views of the ipsilateral breast to reduce false positives (FPs) and the corresponding views of the contralateral breast to detect asymmetry. The CAD system consists of four major components: (1) initial detection of breast masses on individual views, (2) information fusion of the ipsilateral views of the breast (referred to as two-view analysis), (3) information fusion of the corresponding views of the contralateral breast (referred to as bilateral analysis), and (4) fusion of the four-view information with a decision tree. The authors collected two data sets for training and testing of the CAD system: a mass set containing 389 patients with 389 biopsy-proven masses and a normal set containing 200 normal subjects. All cases had four-view mammograms. The true locations of the masses on the mammograms were identified by an experienced MQSA radiologist. The authors randomly divided the mass set into two independent sets for cross validation training and testing. The overall test performance was assessed by averaging the free response receiver operating characteristic (FROC) curves of the two test subsets. The FP rates during the FROC analysis were estimated by using the normal set only. The jackknife free-response ROC (JAFROC) method was used to estimate the statistical significance of the difference between the test FROC curves obtained with the single-view and the four-view CAD systems. Results: Using the single-view CAD system, the breast-based test sensitivities were 58% and 77% at the FP rates of 0.5 and 1.0 per image, respectively. With the four-view CAD system, the breast-based test sensitivities were improved to 76% and 87% at the corresponding FP rates, respectively. The improvement was found to be statistically significant (p<0.0001) by JAFROC analysis. Conclusions: The four-view information fusion approach that emulates radiologists' reading strategy significantly improves the performance of breast mass detection of the CAD system in comparison with the single-view approach.
Relative Humidity in Limited Streamer Tubes for Stanford Linear Accelerator Center's BaBar Detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, M.I.; /MIT; Convery, M.
2005-12-15
The BABAR Detector at the Stanford Linear Accelerator Center studies the decay of B mesons created in e⁺e⁻ collisions. The outermost layer of the detector, used to detect muons and neutral hadrons created during this process, is being upgraded from Resistive Plate Chambers (RPCs) to Limited Streamer Tubes (LSTs). The standard-size LST tube consists of eight cells, with a silver-plated wire running down the center of each. A large potential difference is placed between the wires and ground. Gas flows through a series of modules connected with tubing, typically four. LSTs must be carefully tested before installation, as it will be extremely difficult to repair any damage once installed in the detector. In the testing process, the count rate in most modules was stable and consistent with the cosmic ray rate over an approximately 500 V operating range between 5400 and 5900 V. The count rate in some modules, however, was shown to unexpectedly spike near the operating point. In general, the modules through which the gas first flows did not show this problem, but those further along the gas chain were much more likely to do so. The suggestion was that this spike was due to higher humidity in the modules furthest from the fresh, dry inflowing gas, and that the water molecules in more humid modules were adversely affecting the modules' performance. This project studied the effect of humidity in the modules, using a small capacitive humidity sensor (Honeywell). The sensor provided a humidity-dependent output voltage, as well as a temperature measurement from a thermistor. A full-size hygrometer (Panametrics) was used for testing and calibrating the Honeywell sensors. First the relative humidity of the air was measured. For the full calibration, a special gas-mixing setup was used, in which the relative humidity of the LST gas mixture could be varied from almost dry to almost fully saturated. With the sensor calibrated, a set of sensors was used to measure humidity vs. time in the LSTs. The sensors were placed in two sets of LST modules, one gas line flowing through each set. These modules were tested for count rate vs. voltage while simultaneously measuring relative humidity in each module. One set produced expected readings, while the other showed the spike in count rate. The relative humidity in the two sets of modules looked very similar, but it rose significantly for modules further along the gas chain.
Natural science modules with SETS approach to improve students’ critical thinking ability
NASA Astrophysics Data System (ADS)
Budi, A. P. S.; Sunarno, W.; Sugiyarto
2018-05-01
The SETS (Science, Environment, Technology and Society) approach to learning is important to develop for middle school, since it can improve students' critical thinking ability. This research aimed to determine the feasibility and effectiveness of a Natural Science module with the SETS approach for increasing critical thinking ability. The module was developed through invitation, exploration, explanation, concept fortifying, and assessment. A questionnaire and tests (a pretest and posttest with a control group design) were used as the data collection techniques. Two classes of 32 students each were selected randomly as samples. Descriptive data analysis was used to assess the module's feasibility, and a t-test was used to analyze critical thinking ability. The results showed that the module's feasibility was rated very good by experts, practitioners and peers. The t-test showed a significant difference between the control class and the experiment class (0.004), with n-gain scores of 0.270 (low) for the control class and 0.470 (medium) for the experiment class. This shows that the module was more effective than the textbook: it was able to improve students' critical thinking ability and is appropriate for use in the learning process.
Kalderstam, Jonas; Edén, Patrik; Bendahl, Pär-Ola; Strand, Carina; Fernö, Mårten; Ohlsson, Mattias
2013-06-01
The concordance index (c-index) is the standard way of evaluating the performance of prognostic models in the presence of censored data. Constructing prognostic models using artificial neural networks (ANNs) is commonly done by training on error functions which are modified versions of the c-index. Our objective was to demonstrate the capability of training directly on the c-index and to evaluate our approach compared to the Cox proportional hazards model. We constructed a prognostic model using an ensemble of ANNs which were trained using a genetic algorithm. The individual networks were trained on a non-linear artificial data set divided into a training and test set both of size 2000, where 50% of the data was censored. The ANNs were also trained on a data set consisting of 4042 patients treated for breast cancer spread over five different medical studies, 2/3 used for training and 1/3 used as a test set. A Cox model was also constructed on the same data in both cases. The two models' c-indices on the test sets were then compared. The ranking performance of the models is additionally presented visually using modified scatter plots. Cross validation on the cancer training set did not indicate any non-linear effects between the covariates. An ensemble of 30 ANNs with one hidden neuron was therefore used. The ANN model had almost the same c-index score as the Cox model (c-index=0.70 and 0.71, respectively) on the cancer test set. Both models identified similarly sized low risk groups with at most 10% false positives, 49 for the ANN model and 60 for the Cox model, but repeated bootstrap runs indicate that the difference was not significant. A significant difference could however be seen when applied on the non-linear synthetic data set. In that case the ANN ensemble managed to achieve a c-index score of 0.90 whereas the Cox model failed to distinguish itself from the random case (c-index=0.49). We have found empirical evidence that ensembles of ANN models can be optimized directly on the c-index. Comparison with a Cox model indicates that near identical performance is achieved on a real cancer data set while on a non-linear data set the ANN model is clearly superior. Copyright © 2013 Elsevier B.V. All rights reserved.
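The c-index itself is straightforward to compute for right-censored data; a direct O(n²) sketch (my own illustration, not the authors' code):

```python
import numpy as np

def c_index(time, event, risk):
    """Concordance index for right-censored data: among comparable pairs
    (the earlier time must be an observed event), the fraction where the
    model assigns higher risk to the subject who failed earlier; ties in
    risk count as 1/2."""
    n_comparable = 0.0
    n_concordant = 0.0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue                      # i must be an observed event
        for j in range(n):
            if time[j] > time[i]:         # j outlived i's event time
                n_comparable += 1
                if risk[i] > risk[j]:
                    n_concordant += 1
                elif risk[i] == risk[j]:
                    n_concordant += 0.5
    return n_concordant / n_comparable

# Toy check: risk perfectly ordered against survival time -> c-index 1.0.
t = np.array([2.0, 5.0, 3.0, 8.0, 6.0])
e = np.array([1, 0, 1, 1, 0])             # 0 = censored
r = -t                                    # shorter survival = higher risk
print(c_index(t, e, r))                   # 1.0
```

Because the c-index is a step function of the model outputs, it has no useful gradient, which is why the study above resorts to a genetic algorithm rather than backpropagation to optimize it directly.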
Jezova, D; Hlavacova, N; Dicko, I; Solarikova, P; Brezina, I
2016-07-01
Repeated or chronic exposure to stressors is associated with changes in neuroendocrine responses depending on the type, intensity, number and frequency of stress exposure as well as previous stress experience. The aim of the study was to test the hypothesis that salivary cortisol and cardiovascular responses to real-life psychosocial stressors related to public performance can cross-adapt with responses to psychosocial stress induced by public speech in a laboratory setting. The sample consisted of 22 healthy male volunteers, who were either actors (more precisely, students of dramatic arts) or non-actors (students of other fields). The stress task consisted of a 15 min anticipatory preparation phase and 15 min of public speech on an emotionally charged topic. The actors, who were accustomed to public speaking, responded with a rise in salivary cortisol as well as blood pressure to the laboratory public speech. The values of salivary cortisol, systolic blood pressure and state anxiety were lower in actors compared to non-actors. Unlike non-actors, subjects with experience in public speaking did not show a stress-induced rise in heart rate. Evaluation of personality traits revealed that actors scored significantly higher in extraversion than subjects in the non-actor group. In conclusion, neuroendocrine responses to real-life stressors in actors can partially cross-adapt with responses to psychosocial stress in a laboratory setting. The most evident adaptation was at the level of heart rate responses. Public speech tasks may be of help in evaluating, by simple laboratory testing, the ability of artists to cope with stress in real life.
NASA Astrophysics Data System (ADS)
Skinner, Ellen; Saxton, Emily; Currie, Cailin; Shusterman, Gwen
2017-11-01
As part of long-standing efforts to promote undergraduates' success in science, researchers have investigated the instructional strategies and motivational factors that promote student learning and persistence in science coursework and majors. This study aimed to create a set of brief measures that educators and researchers can use as tools to examine the undergraduate motivational experience in science classes. To identify key motivational processes, we drew on self-determination theory (SDT), which holds that students have fundamental needs - to feel competent, related, and autonomous - that fuel their intrinsic motivation. When educational experiences meet these needs, students engage more energetically and learn more, cumulatively contributing to a positive identity as a scientist. Based on information provided by 1013 students from 8 classes in biology, chemistry, and physics, we constructed conceptually focused and psychometrically sound survey measures of three sets of motivational factors: (1) students' appraisals of their own competence, autonomy, and relatedness; (2) the quality of students' behavioural and emotional engagement in academic work; and (3) students' emerging identities as scientists, including their science identity, purpose in science, and science career plans. Using an iterative confirmatory process, we tested short item sets for unidimensionality and internal consistency, and then cross-validated them. Tests of measurement invariance showed that scales were generally comparable across disciplines. Most importantly, scales and final course grades showed correlations consistent with predictions from SDT. These measures may provide a window on the student motivational experience for educators, researchers, and interventionists who aim to improve the quality of undergraduate science teaching and learning.
NASA Astrophysics Data System (ADS)
Gentilucci, Matteo; Bisci, Carlo; Fazzini, Massimiliano; Tognetti, Danilo
2016-04-01
The analysis is focused on more than 100 meteorological recording stations located in the Province of Macerata (Marche region, Adriatic side of Central Italy) and its neighbours; it aims to check the time series of their climatological data (temperatures and precipitation), covering about one century of observations, in order to remove or rectify any errors. This small area (about 2,800 km²) features many different climate types because of its varied topography, ranging, moving westward, from the Adriatic coast to the Apennines (over 2,100 m of altitude). In this irregular context it is difficult to establish a common procedure for each sector; therefore, the general guidelines of the WMO have been followed, with some important differences (mostly in the method). Data are classified on the basis of validation codes (VC): missing datum (VC=-1), correct or verified datum (VC=0), datum under investigation (VC=1), datum removed after the analysis (VC=2), datum reconstructed through interpolation or by estimating the errors of digitization (VC=3). The first step was the "Logical Control", consisting of the investigation of gross digitization errors: the data found in this phase of the analysis have been removed without any other control (VC=2). The second step, the "Internal Consistency Check", leads to the elimination (VC=2) of all data out of range, with the range estimated on the basis of the climate zone for each investigated variable. The third is the "Tolerance Test", carried out by comparing each datum with the historical record it belongs to; in order to apply this test, the normal distribution of the data has been evaluated. The "Tolerance Test" usually defines only suspect data (VC=1) to be verified with further tests, such as the "Temporal Consistency" and the "Spatial Consistency". The "Temporal Consistency" evaluates the time sequence of the data, setting a specified range for each station based upon its historical records; data out of range are placed under investigation (VC=1). Data are finally compared with those contemporaneously recorded by a set of neighbouring meteorological stations through the "Spatial Consistency" test, thus resolving every suspicious datum (recoded to VC=2 or VC=0, depending upon the results of this analysis). This procedure uses a series of different statistical steps to avoid uncertainties: at its end, all the investigated data are either accepted (VC=0) or refused (VC=2). Refused and missing data (VC=-1 and VC=2) have been reconstructed through interpolation using co-kriging techniques (assigning VC=3), when necessary, in the final stage of the process. All the above procedure has been developed using database-management software in a GIS (ESRI ArcGIS®) environment. The refused data amount to 1,286 of 77,021 (1.67%) for precipitation and 375 of 1,821,054 (0.02%) for temperature.
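A compressed sketch of how such a screening chain might assign the validation codes to one station's series follows; the thresholds are invented, and the spatial comparison against neighbouring stations is omitted for brevity:

```python
import numpy as np

# Validation codes from the text: -1 missing, 0 verified, 1 suspect, 2 removed.
MISSING, OK, SUSPECT, REMOVED = -1, 0, 1, 2

def quality_control(series, plausible=(-30.0, 45.0), z_max=4.0, step_max=15.0):
    """Toy screening chain for one station's daily temperatures: internal
    consistency (plausible range), tolerance test (z-score against the
    station's own record), temporal consistency (day-to-day step)."""
    t = np.asarray(series, dtype=float)
    vc = np.where(np.isnan(t), MISSING, OK)

    # Internal consistency: values outside the plausible climate range.
    vc[(vc == OK) & ((t < plausible[0]) | (t > plausible[1]))] = REMOVED

    # Tolerance test: flag outliers relative to the station's own record.
    good = t[vc == OK]
    z = np.abs(t - good.mean()) / good.std()
    vc[(vc == OK) & (z > z_max)] = SUSPECT

    # Temporal consistency: flag implausible jumps between valid neighbours.
    ok_idx = np.flatnonzero(vc == OK)
    jump = np.abs(np.diff(t[ok_idx])) > step_max
    vc[ok_idx[1:][jump]] = SUSPECT
    return vc

print(quality_control([12.1, 13.0, np.nan, 55.0, 12.4, 30.9, 12.8]))
# -> [ 0  0 -1  2  0  1  1]
```

In the procedure described above, the suspect values (VC=1) would then go to the spatial consistency test, and anything refused or missing would be filled by co-kriging (VC=3).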
Williams, Cory A.; Leib, Kenneth J.
2005-01-01
In 2003, the U.S. Geological Survey, in cooperation with Delta County, initiated a study to characterize streamflow gain-loss in a reach of Terror Creek, in the vicinity of a mine-permit area planned for future coal mining. This report describes the methods of the study and includes results from a comparison of two sets of streamflow measurements using tracer techniques following the constant-rate injection method. Two measurement sets were used to characterize the streamflow gain-loss associated with reservoir-supplemented streamflow conditions and with natural base-flow conditions. A comparison of the measurement sets indicates that the streamflow gain-loss characteristics of the Terror Creek study reach are consistent between the two hydrologic conditions evaluated. A substantial streamflow gain occurs between measurement locations 4 and 5 in both measurement sets, and streamflow is lost between measurement locations 5 and 7 (measurement set 1, measurement location 6 not visited) and 5 and 6 (measurement set 2). A comparison of the measurement sets above and below the mine-permit area (measurement locations 3 and 7) shows a consistent loss of 0.37 and 0.31 cubic foot per second (representing 5- and 12-percent streamflow losses normalized to measurement location 3) for measurement sets 1 and 2, respectively. This indicates that similar streamflow losses occur both during reservoir-supplemented and natural base-flow conditions, with a mean streamflow loss of 0.34 cubic foot per second for measurement sets 1 and 2. Findings from a previous investigation support the observed streamflow loss between measurement locations 3 and 7 in this study. The findings from the previous investigation indicate a streamflow loss of 0.59 cubic foot per second occurs between these measurement locations. Statistical testing of the differences in streamflow between measurement locations 3 and 7 indicates that there is a discernible streamflow loss. The p-value of 0.0236 for the parametric paired t-test indicates that there is a 2.36-percent probability of observing a sample mean difference of 0.34 cubic foot per second if the population mean is zero. The p-value of 0.125 for the nonparametric exact Wilcoxon signed rank test indicates that there is a 12.5-percent probability of observing a sample mean difference this large if the population mean is zero. The similarity in streamflow gain-loss between measurement sets indicates that the process controlling streamflow may be the same between the two hydrologic conditions evaluated. Gains between measurement locations 4 and 5 may be related to hyporheic flow from tributaries that were dry during the study. No other obvious sources of surface water were identified during the investigation. The cause for the observed streamflow loss between measurement locations 5 and 6 is unknown but may be related to mapped local faulting, 100 years of coal mining in the area, and aquifer recharge.
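Both quoted significance tests are one-liners in scipy. The four paired streamflow values below are invented stand-ins for the upstream/downstream (location 3 / location 7) measurements, chosen only so the arithmetic resembles the reported mean loss:

```python
import numpy as np
from scipy import stats

# Hypothetical paired streamflows (ft^3/s) above and below the mine-permit
# area; the question is whether downstream values are systematically lower.
above = np.array([7.10, 2.55, 6.80, 2.60])
below = np.array([6.55, 2.45, 6.35, 2.34])

t_res = stats.ttest_rel(above, below)      # parametric paired t-test
w_res = stats.wilcoxon(above - below)      # exact signed-rank test
print(f"mean loss = {np.mean(above - below):.2f} ft^3/s")
print(f"paired t-test  P = {t_res.pvalue:.4f}")
print(f"signed-rank    P = {w_res.pvalue:.4f}")
```

With four pairs that all show a loss, the exact Wilcoxon test cannot go below P = 2/16 = 0.125, which is exactly the value quoted above; the small sample, not the effect size, limits that test.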
Fernandes, Linda; Storheim, Kjersti; Lochting, Ida; Grotle, Margreth
2012-06-22
Pain catastrophizing has been found to be an important predictor of disability and days lost from work in patients with low back pain. The most commonly used outcome measure to identify pain catastrophizing is the Pain Catastrophizing Scale (PCS). To enable the use of the PCS in clinical settings and research in Norwegian-speaking patients, the PCS had to be translated. The purpose of this study was therefore to translate and cross-culturally adapt the PCS into Norwegian and to test internal consistency, construct validity and reproducibility of the PCS. The PCS was translated before it was tested for psychometric properties. Patients with subacute or chronic non-specific low back pain aged 18 years or more were recruited from primary and secondary care. Validity of the PCS was assessed by evaluating data quality (missing, floor and ceiling effects), principal components analysis, internal consistency (Cronbach's alpha), and construct validity (Spearman's rho). Reproducibility analyses included standard error of measurement, minimum detectable change, limits of agreement, and intraclass correlation coefficients. A total of 38 men and 52 women (n = 90), with a mean (SD) age of 47.6 (11.7) years, were included for baseline testing. A subgroup of 61 patients was included for test-retest assessments. The Norwegian PCS was easy to comprehend. The principal components analysis supported a three-factor structure, internal consistency was satisfactory for the PCS total score (α 0.90) and the subscales rumination (α 0.83) and helplessness (α 0.86), but not for the subscale magnification (α 0.53). In total, 86% of the correlation analyses were in accordance with predefined hypotheses. The reliability analyses showed intraclass correlation coefficients of 0.74-0.87 for the PCS total score and subscales. The PCS total score (range 0-52 points) showed a standard error of measurement of 4.6 points and a 95% minimum detectable change estimate of 12.8 points. The Norwegian PCS total score showed acceptable psychometric properties in terms of comprehensibility, consistency, construct validity, and reproducibility when applied to patients with subacute or chronic LBP from different clinical settings. Our study supports the use of the PCS total score for clinical or research purposes in identifying or evaluating pain catastrophizing.
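Internal consistency figures like the α values quoted come from Cronbach's formula; a compact sketch with invented ratings:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Toy 0-4 ratings from 6 respondents on a 4-item subscale (invented).
scores = np.array([[4, 4, 3, 4],
                   [1, 0, 1, 1],
                   [3, 3, 2, 3],
                   [2, 2, 2, 1],
                   [0, 1, 0, 0],
                   [3, 4, 4, 3]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Items that move together inflate the variance of the total relative to the item variances, which is why strongly intercorrelated subscales such as rumination and helplessness score high while the short magnification subscale does not.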
Irisin and exercise training in humans - results from a randomized controlled training trial.
Hecksteden, Anne; Wegmann, Melissa; Steffen, Anke; Kraushaar, Jochen; Morsch, Arne; Ruppenthal, Sandra; Kaestner, Lars; Meyer, Tim
2013-11-05
The recent discovery of a new myokine (irisin) potentially involved in health-related training effects has gained great attention, but evidence for a training-induced increase in irisin remains preliminary. Therefore, the present study aimed to determine whether irisin concentration is increased after regular exercise training in humans. In a randomized controlled design, two guideline conforming training interventions were studied. Inclusion criteria were age 30 to 60 years, <1 hour/week regular activity, non-smoker, and absence of major diseases. 102 participants could be included in the analysis. Subjects in the training groups exercised 3 times per week for 26 weeks. The minimum compliance was defined at 70%. Aerobic endurance training (AET) consisted of 45 minutes of walking/running at 60% heart rate reserve. Strength endurance training (SET) consisted of 8 machine-based exercises (2 sets of 15 repetitions with 100% of the 20 repetition maximum). Serum irisin concentrations in frozen serum samples were determined in a single blinded measurement immediately after the end of the training study. Physical performance provided positive control for the overall efficacy of training. Differences between groups were tested for significance using analysis of variance. For post hoc comparisons with the control group, Dunnett's test was used. Maximum performance increased significantly in the training groups compared with controls (controls: ±0.0 ± 0.7 km/h; AET: 1.1 ± 0.6 km/h, P < 0.01; SET: +0.5 ± 0.7 km/h, P = 0.01). Changes in irisin did not differ between groups (controls: 101 ± 81 ng/ml; AET: 44 ± 93 ng/ml; SET: 60 ± 92 ng/ml; in both cases: P = 0.99 (one-tailed testing), 1-β error probability = 0.7). The general upward trend was mainly accounted for by a negative association of irisin concentration with the storage duration of frozen serum samples (P < 0.01, β = -0.33). After arithmetically eliminating this confounder, the differences between groups remained non-significant. A training-induced increase in circulating irisin could not be confirmed, calling into question its proposed involvement in health-related training effects. Because frozen samples are prone to irisin degradation over time, positive results from uncontrolled trials might exclusively reflect the longer storage of samples from initial tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uehara, Takeki, E-mail: takeki.uehara@shionogi.co.jp; Toxicogenomics Informatics Project, National Institute of Biomedical Innovation, 7-6-8 Asagi, Ibaraki, Osaka 567-0085; Minowa, Yohsuke
2011-09-15
The present study was performed to develop a robust gene-based prediction model for early assessment of potential hepatocarcinogenicity of chemicals in rats by using our toxicogenomics database, TG-GATEs (Genomics-Assisted Toxicity Evaluation System developed by the Toxicogenomics Project in Japan). The positive training set consisted of high- or middle-dose groups that received 6 different non-genotoxic hepatocarcinogens during a 28-day period. The negative training set consisted of high- or middle-dose groups of 54 non-carcinogens. Support vector machine combined with wrapper-type gene selection algorithms was used for modeling. Consequently, our best classifier yielded prediction accuracies for hepatocarcinogenicity of 99% sensitivity and 97% specificity in the training data set, and false positive prediction was almost completely eliminated. Pathway analysis of feature genes revealed that the mitogen-activated protein kinase p38- and phosphatidylinositol-3-kinase-centered interactome and the v-myc myelocytomatosis viral oncogene homolog-centered interactome were the 2 most significant networks. The usefulness and robustness of our predictor were further confirmed in an independent validation data set obtained from the public database. Interestingly, similar positive predictions were obtained in several genotoxic hepatocarcinogens as well as non-genotoxic hepatocarcinogens. These results indicate that the expression profiles of our newly selected candidate biomarker genes might be common characteristics in the early stage of carcinogenesis for both genotoxic and non-genotoxic carcinogens in the rat liver. Our toxicogenomic model might be useful for the prospective screening of hepatocarcinogenicity of compounds and prioritization of compounds for carcinogenicity testing. Highlights: We developed a toxicogenomic model to predict hepatocarcinogenicity of chemicals. The optimized model consisting of 9 probes had 99% sensitivity and 97% specificity. This model enables us to detect genotoxic as well as non-genotoxic hepatocarcinogens.
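The abstract does not give implementation details, but the general shape of such a pipeline can be sketched with scikit-learn; recursive feature elimination stands in here for the paper's wrapper-type gene selection, and the synthetic expression matrix is an invented stand-in for TG-GATEs data.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: 60 treatment groups x 2000 probes.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))
y = rng.integers(0, 2, size=60)   # 1 = hepatocarcinogen
X[y == 1, :9] += 1.5              # make 9 probes informative

# Wrapper-style selection down to 9 probes, then a linear SVM.
model = make_pipeline(
    RFE(LinearSVC(dual=False), n_features_to_select=9, step=100),
    LinearSVC(dual=False),
)
print(cross_val_score(model, X, y, cv=5).mean())
```

Because the selection step sits inside the pipeline, feature selection is re-run within each cross-validation fold, which avoids the optimistic bias of selecting genes on the full data set first.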
NASA Astrophysics Data System (ADS)
Jatzeck, Bernhard Michael
2000-10-01
The application of the Luus-Jaakola direct search method to the optimization of stand-alone hybrid energy systems consisting of wind turbine generators (WTG's), photovoltaic (PV) modules, batteries, and an auxiliary generator was examined. The loads for these systems were for agricultural applications, with the optimization conducted on the basis of minimum capital, operating, and maintenance costs. Five systems were considered: two near Edmonton, Alberta, and one each near Lethbridge, Alberta, Victoria, British Columbia, and Delta, British Columbia. The optimization algorithm used hourly data for the load demand, WTG output power/area, and PV module output power. These hourly data were in two sets: seasonal (summer and winter values separated) and total (summer and winter values combined). The costs for the WTG's, PV modules, batteries, and auxiliary generator fuel were full market values. To examine the effects of price discounts or tax incentives, these values were lowered to 25% of the full costs for the energy sources and two-thirds of the full cost for agricultural fuel. Annual costs for a renewable energy system depended upon the load, location, component costs, and which data set (seasonal or total) was used. For one Edmonton load, the cost for a renewable energy system consisting of 27.01 m2 of WTG area, 14 PV modules, and 18 batteries (full price, total data set) was $6873/year. For Lethbridge, a system with 22.85 m2 of WTG area, 47 PV modules, and 5 batteries (reduced prices, seasonal data set) cost $2913/year. The performance of renewable energy systems based on the obtained results was tested in a simulation using load and weather data for selected days. Test results for one Edmonton load showed that the simulations for most of the systems examined ran for at least 17 hours per day before failing due to either an excessive load on the auxiliary generator or a battery constraint being violated. Additional testing indicated that increasing the generator capacity, and reducing the maximum allowed battery charge current at the times of day when these failures occurred, allowed the simulations to operate successfully.
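As a rough illustration of the optimization method named here, below is a minimal Luus-Jaakola direct search in Python: sample uniformly in a region around the incumbent, keep improvements, and contract the region each pass. The toy quadratic cost is invented and stands in for the study's hourly-simulation-based capital-plus-operating-cost objective.

```python
import numpy as np

def luus_jaakola(f, lo, hi, iters=200, samples=50, shrink=0.95, seed=0):
    """Minimal Luus-Jaakola direct search for box-constrained minimization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi)        # incumbent solution
    r = (hi - lo) / 2              # current search radius per dimension
    fx = f(x)
    for _ in range(iters):
        for _ in range(samples):
            y = np.clip(x + rng.uniform(-r, r), lo, hi)
            fy = f(y)
            if fy < fx:            # keep only improvements
                x, fx = y, fy
        r *= shrink                # contract the search region
    return x, fx

# Toy surrogate cost over (WTG area in m2, number of PV modules).
cost = lambda z: (z[0] - 25) ** 2 + 10 * (z[1] - 14) ** 2
print(luus_jaakola(cost, [0, 0], [100, 60]))
```

In the study itself, evaluating f(x) would mean running the hourly load/weather simulation for a candidate system and returning its annualized cost, with infeasible configurations penalized.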
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
An uncommon case of random fire-setting behavior associated with Todd paralysis: a case report.
Kanehisa, Masayuki; Morinaga, Katsuhiko; Kohno, Hisae; Maruyama, Yoshihiro; Ninomiya, Taiga; Ishitobi, Yoshinobu; Tanaka, Yoshihiro; Tsuru, Jusen; Hanada, Hiroaki; Yoshikawa, Tomoya; Akiyoshi, Jotaro
2012-08-31
The association between fire-setting behavior and psychiatric or medical disorders remains poorly understood. Although a link between fire-setting behavior and various organic brain disorders has been established, associations between fire setting and focal brain lesions have not yet been reported. Here, we describe the case of a 24-year-old first-time arsonist who suffered Todd's paralysis prior to the onset of a bizarre and random fire-setting behavior. The man, who had been arrested on felony arson charges, complained of difficulties concentrating and of recent memory disturbances with leg weakness. A video-EEG recording demonstrated a close relationship between the focal motor impairment and a clear-cut epileptic ictal discharge involving the bilateral motor cortical areas. The SPECT result was statistically analyzed by comparison with standard SPECT images obtained at our institute (easy Z-score imaging system; eZIS). eZIS revealed hypoperfusion in the cingulate cortex and basal ganglia and hyperperfusion in the frontal cortex. A neuropsychological test battery revealed lower than normal scores for executive function, attention, and memory, consistent with frontal lobe dysfunction. The fire-setting behavior and Todd's paralysis, together with an unremarkable performance on tests measuring executive function fifteen months prior, suggested a causal relationship between this organic brain lesion and the fire-setting behavior. The case describes a rare and as yet unreported association between random, impulse-driven fire-setting behavior and damage to the brain, and suggests a disconnection of frontal lobe structures as a possible pathogenic mechanism.
Zhao, Chunyu; Burge, James H
2007-12-24
Zernike polynomials provide a well known, orthogonal set of scalar functions over a circular domain, and are commonly used to represent wavefront phase or surface irregularity. A related set of orthogonal functions is given here which represent vector quantities, such as mapping distortion or wavefront gradient. These functions are generated from gradients of Zernike polynomials, made orthonormal using the Gram-Schmidt technique. This set provides a complete basis for representing vector fields that can be defined as a gradient of some scalar function. It is then efficient to transform from the coefficients of the vector functions to the scalar Zernike polynomials that represent the function whose gradient was fit. These new vector functions have immediate application for fitting data from a Shack-Hartmann wavefront sensor or for fitting mapping distortion for optical testing. A subsequent paper gives an additional set of vector functions consisting only of rotational terms with zero divergence. The two sets together provide a complete basis that can represent all vector distributions in a circular domain.
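The construction described (gradients of Zernike polynomials, orthonormalized by Gram-Schmidt) can be sketched numerically. The grid resolution, the three low-order Cartesian Zernike terms, and the discrete disk inner product below are illustrative assumptions, not the paper's closed-form basis.

```python
import numpy as np

# Grid over the unit disk (assumed resolution; illustrative only).
n = 129
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
mask = X**2 + Y**2 <= 1.0

# A few low-order Zernike polynomials in Cartesian form.
zernikes = [
    2 * X * Y,                  # Z(2,-2), oblique astigmatism
    2 * X**2 + 2 * Y**2 - 1,    # Z(2,0), defocus
    X**2 - Y**2,                # Z(2,2), vertical astigmatism
]

def disk_inner(u, v):
    """Discrete inner product of two 2-component vector fields on the disk."""
    return np.sum((u * v)[..., mask]) / mask.sum()

# Numerical gradients, stacked as (2, n, n) vector fields.
h = x[1] - x[0]
fields = []
for Z in zernikes:
    gy, gx = np.gradient(Z, h)
    fields.append(np.stack([gx, gy]))

# Gram-Schmidt orthonormalization of the gradient fields.
basis = []
for f in fields:
    for b in basis:
        f = f - disk_inner(b, f) * b
    basis.append(f / np.sqrt(disk_inner(f, f)))

print(round(disk_inner(basis[0], basis[1]), 6))  # ~0: orthogonal
```

Fitting Shack-Hartmann slope data would then amount to projecting the measured gradient field onto this orthonormal basis and mapping the coefficients back to scalar Zernike terms.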
TIE: an ability test of emotional intelligence.
Śmieja, Magdalena; Orzechowski, Jarosław; Stolarski, Maciej S
2014-01-01
The Test of Emotional Intelligence (TIE) is a new ability scale based on a theoretical model that defines emotional intelligence as a set of skills responsible for the processing of emotion-relevant information. Participants are provided with descriptions of emotional problems, and asked to indicate which emotion is most probable in a given situation, or to suggest the most appropriate action. Scoring is based on the judgments of experts: professional psychotherapists, trainers, and HR specialists. The validation study showed that the TIE is a reliable and valid test, suitable for both scientific research and individual assessment. Its internal consistency measures were as high as .88. In line with the theoretical model of emotional intelligence, the results of the TIE shared about 10% of common variance with a general intelligence test, and were independent of major personality dimensions.
NASA Technical Reports Server (NTRS)
Miller, D. P.; Prahst, P. S.
1994-01-01
An axial compressor test rig has been designed for the operation of small turbomachines. The inlet consisted of a long flowpath region with two series of support struts and a flapped inlet guide vane (IGV). A flow test was run to calibrate the inlet and to determine the source and magnitudes of its loss mechanisms for a highly loaded two-stage axial compressor test. Several flow conditions and IGV angle settings were established in which detailed surveys were completed. Boundary layer bleed was also provided along the casing of the inlet behind the support struts and ahead of the IGV. A detailed discussion of the flowpath design, along with a summary of the experimental results, is provided in Part 1.
Del Seppia, Cristina; Mezzasalma, Lorena; Messerotti, Mauro; Cordelli, Alessandro; Ghione, Sergio
2009-01-01
We have previously reported that the exposure to an abnormal magnetic field simulating the one encountered by the International Space Station (ISS) orbiting around the Earth may enhance autonomic response to emotional stimuli. Here we report the results of the second part of that study which tested whether this field also affects cognitive functions. Twenty-four volunteers participated in the study, 12 exposed to the natural geomagnetic field and 12 to the magnetic field encountered by ISS. The test protocol consisted of a set of eight tests chosen from a computerized test battery for the assessment of attentional performance. The duration of exposure was 90 min. No effect of exposure to ISS magnetic field was observed on attentional performance. (c) 2008 Wiley-Liss, Inc.
Shokouhi, Parisa; Rivière, Jacques; Lake, Colton R; Le Bas, Pierre-Yves; Ulrich, T J
2017-11-01
The use of nonlinear acoustic techniques in solids consists in measuring wave distortion arising from compliant features such as cracks, soft intergrain bonds and dislocations. As such, they provide very powerful nondestructive tools to monitor the onset of damage within materials. In particular, a recent technique called dynamic acousto-elasticity testing (DAET) gives unprecedented details on the nonlinear elastic response of materials (classical and non-classical nonlinear features including hysteresis, transient elastic softening and slow relaxation). Here, we provide a comprehensive set of linear and nonlinear acoustic responses on two prismatic concrete specimens; one intact and one pre-compressed to about 70% of its ultimate strength. The two linear techniques used are Ultrasonic Pulse Velocity (UPV) and Resonance Ultrasound Spectroscopy (RUS), while the nonlinear ones include DAET (fast and slow dynamics) as well as Nonlinear Resonance Ultrasound Spectroscopy (NRUS). In addition, the DAET results correspond to a configuration where the (incoherent) coda portion of the ultrasonic record is used to probe the samples, as opposed to a (coherent) first arrival wave in standard DAET tests. We find that the two visually identical specimens are indistinguishable based on parameters measured by linear techniques (UPV and RUS). On the contrary, the extracted nonlinear parameters from NRUS and DAET are consistent and orders of magnitude greater for the damaged specimen than those for the intact one. This compiled set of linear and nonlinear ultrasonic testing data including the most advanced technique (DAET) provides a benchmark comparison for their use in the field of material characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
Sensory stimulation augments the effects of massed practice training in persons with tetraplegia.
Beekhuizen, Kristina S; Field-Fote, Edelle C
2008-04-01
To compare functional changes and cortical neuroplasticity associated with hand and upper extremity use after massed (repetitive task-oriented practice) training, somatosensory stimulation, massed practice training combined with somatosensory stimulation, or no intervention, in persons with chronic incomplete tetraplegia. Participants were randomly assigned to 1 of 4 groups: massed practice training combined with somatosensory peripheral nerve stimulation (MP+SS), somatosensory peripheral nerve stimulation only (SS), massed practice training only (MP), and no intervention (control). University medical school setting. Twenty-four subjects with chronic incomplete tetraplegia. Intervention sessions were 2 hours per session, 5 days a week for 3 weeks. Massed practice training consisted of repetitive practice of functional tasks requiring skilled hand and upper-extremity use. Somatosensory stimulation consisted of median nerve stimulation with intensity set below motor threshold. Pre- and post-testing assessed changes in functional hand use (Jebsen-Taylor Hand Function Test), functional upper-extremity use (Wolf Motor Function Test), pinch grip strength (key pinch force), sensory function (monofilament testing), and changes in cortical excitation (motor evoked potential threshold). The 3 groups showed significant improvements in hand function after training. The MP+SS and SS groups had significant improvements in upper-extremity function and pinch strength compared with the control group, but only the MP+SS group had a significant change in sensory scores compared with the control group. The MP+SS and MP groups had greater change in threshold measures of cortical excitability. People with chronic incomplete tetraplegia obtain functional benefits from massed practice of task-oriented skills. Somatosensory stimulation appears to be a valuable adjunct to training programs designed to improve hand and upper-extremity function in these subjects.
Chen, Hongda; Werner, Simone; Butt, Julia; Zörnig, Inka; Knebel, Phillip; Michel, Angelika; Eichmüller, Stefan B; Jäger, Dirk; Waterboer, Tim; Pawlita, Michael; Brenner, Hermann
2016-03-29
Novel blood-based screening tests are strongly desirable for early detection of colorectal cancer (CRC). We aimed to identify and evaluate autoantibodies against tumor-associated antigens as biomarkers for early detection of CRC. 380 clinically identified CRC patients and samples of participants with selected findings from a cohort of screening colonoscopy participants in 2005-2013 (N=6826) were included in this analysis. Sixty-four serum autoantibody markers were measured by multiplex bead-based serological assays. A two-step approach with selection of biomarkers in a training set, and validation of findings in a validation set, the latter exclusively including participants from the screening setting, was applied. Anti-MAGEA4 exhibited the highest sensitivity for detecting early stage CRC and advanced adenoma. Multi-marker combinations substantially increased sensitivity at the price of a moderate loss of specificity. Anti-TP53, anti-IMPDH2, anti-MDM2 and anti-MAGEA4 were consistently included in the best-performing 4-, 5-, and 6-marker combinations. This four-marker panel yielded a sensitivity of 26% (95% CI, 13-45%) for early stage CRC at a specificity of 90% (95% CI, 83-94%) in the validation set. Notably, it also detected 20% (95% CI, 13-29%) of advanced adenomas. Taken together, the identified biomarkers could contribute to the development of a useful multi-marker blood-based test for CRC early detection.
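A minimal sketch of the two-step logic (per-marker cutoffs chosen on a training set, performance reported on an independent validation set) is shown below with simulated marker scores. The "panel positive if any marker exceeds its cutoff" rule and the 97.5th-percentile cutoffs are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def panel_positive(scores, cutoffs):
    """Panel is positive if any marker exceeds its cutoff."""
    return (scores > cutoffs).any(axis=1)

# Simulated autoantibody scores for a 4-marker panel.
controls_train = rng.normal(0.0, 1, size=(300, 4))
cases_val = rng.normal(0.8, 1, size=(100, 4))   # shifted in cases
controls_val = rng.normal(0.0, 1, size=(300, 4))

# Step 1: fix per-marker cutoffs on training controls.
cutoffs = np.percentile(controls_train, 97.5, axis=0)

# Step 2: report sensitivity/specificity on the validation set only.
sens = panel_positive(cases_val, cutoffs).mean()
spec = 1 - panel_positive(controls_val, cutoffs).mean()
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

The simulation also reproduces the trade-off noted in the abstract: adding markers to an any-positive rule raises sensitivity but necessarily erodes specificity.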
2011-01-01
Background: Network inference methods reconstruct mathematical models of molecular or genetic networks directly from experimental data sets. We have previously reported a mathematical method which is exclusively data-driven, does not involve any heuristic decisions within the reconstruction process, and delivers all possible alternative minimal networks in terms of simple place/transition Petri nets that are consistent with a given discrete time series data set. Results: We fundamentally extended the previously published algorithm to consider catalysis and inhibition of the reactions that occur in the underlying network. The results of the reconstruction algorithm are encoded in the form of an extended Petri net involving control arcs. This allows the consideration of processes involving mass flow and/or regulatory interactions. As a non-trivial test case, the phosphate regulatory network of enterobacteria was reconstructed using in silico-generated time-series data sets on wild-type and in silico mutants. Conclusions: The new exact algorithm reconstructs extended Petri nets from time series data sets by finding all alternative minimal networks that are consistent with the data. It suggested alternative molecular mechanisms for certain reactions in the network. The algorithm is useful to combine data from wild-type and mutant cells and may potentially integrate physiological, biochemical, pharmacological, and genetic data in the form of a single model. PMID:21762503
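To make the consistency criterion concrete, here is a brute-force toy check in Python: a candidate place/transition net is consistent with a discrete time series if every successive state difference can be produced by some nonnegative integer combination of transition effects. The three-place net and the series are invented; the published algorithm enumerates all minimal such nets and extends them with control arcs, which this sketch does not attempt.

```python
import numpy as np
from itertools import product

# Hypothetical 3-place net; each row is a transition's net effect on
# the token counts (consumption negative, production positive).
T = np.array([[-1, +1,  0],   # t1: p1 -> p2
              [ 0, -1, +1],   # t2: p2 -> p3
              [+1,  0, -1]])  # t3: p3 -> p1

series = np.array([[2, 0, 0], [1, 1, 0], [0, 2, 0], [0, 1, 1]])

def explains(d, max_fires=2):
    """Is there a nonnegative integer firing vector v with v @ T == d?"""
    return any((np.array(v) @ T == d).all()
               for v in product(range(max_fires + 1), repeat=len(T)))

consistent = all(explains(d) for d in np.diff(series, axis=0))
print(consistent)  # True for this toy example
```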
Diagnosis of asthma: diagnostic testing.
Brigham, Emily P; West, Natalie E
2015-09-01
Asthma is a heterogeneous disease, encompassing both atopic and non-atopic phenotypes. Diagnosis of asthma is based on the combined presence of typical symptoms and objective tests of lung function. Objective diagnostic testing consists of 2 components: (1) demonstration of airway obstruction, and (2) documentation of variability in degree of obstruction. A review of current guidelines and literature was performed regarding diagnostic testing for asthma. Spirometry with bronchodilator reversibility testing remains the mainstay of asthma diagnostic testing for children and adults. Repetition of the test over several time points may be necessary to confirm airway obstruction and variability thereof. Repeated peak flow measurement is relatively simple to implement in clinical and home settings. Bronchial challenge testing is reserved for patients in whom the aforementioned testing has been unrevealing but clinical suspicion remains, though it is associated with low specificity. Demonstration of eosinophilic inflammation, via fractional exhaled nitric oxide measurement, or of atopy may be supportive of atopic asthma, though diagnostic utility is limited, particularly in non-atopic asthma. All efforts should be made to confirm the diagnosis of asthma in those who are being presumptively treated but have not had objective measurements of variability in the degree of obstruction. Multiple testing modalities are available for objective confirmation of airway obstruction and variability thereof, consistent with a diagnosis of asthma in the appropriate clinical context. Providers should be aware that both these characteristics may be present in other disease states and may not be specific to a diagnosis of asthma. © 2015 ARS-AAOA, LLC.
Hayward, Christopher S; Salamonsen, Robert; Keogh, Anne M; Woodard, John; Ayre, Peter; Prichard, Roslyn; Kotlyar, Eugene; Macdonald, Peter S; Jansz, Paul; Spratt, Phillip
2015-09-01
Left ventricular assist devices are crucial in the rehabilitation of patients with end-stage heart failure. Whether cardiopulmonary function is enhanced with higher pump output is unknown. Ten patients (aged 39±16 years, mean±SD) underwent monitored adjustment of pump speed to determine the minimum safe low speed and maximum safe high speed at rest. Patients were then randomized to these speed settings and underwent three 6-minute walk tests (6MWT) and symptom-limited cardiopulmonary stress tests (CPX) on separate days. Pump speed settings (low, normal, and high) resulted in significantly different resting pump flows of 4.43±0.6, 5.03±0.94, and 5.72±1.2 l/min (P<.001). There was a significant enhancement of pump flows (greater at higher speed settings) with exercise (P<.05). Increased pump speed was associated with a trend to increased 6MWT distance (P=.10) and CPX exercise time (P=.27). Maximum workload achieved and peak oxygen consumption were significantly different only when comparing the low with the high pump speed setting (P<.05). N-terminal pro-B-type natriuretic peptide release was significantly reduced at the higher pump speed with exercise (P<.01). We found that alteration of the pump speed setting resulted in significant variation in estimated pump flow. The high-speed setting was associated with lower natriuretic hormone release, consistent with lower myocardial wall stress. This did not, however, improve exercise tolerance.
Intelligent Predictor of Energy Expenditure with the Use of Patch-Type Sensor Module
Li, Meina; Kwak, Keun-Chang; Kim, Youn-Tae
2012-01-01
This paper is concerned with an intelligent predictor of energy expenditure (EE) using a developed patch-type sensor module for wireless monitoring of heart rate (HR) and movement index (MI). For this purpose, an intelligent predictor is designed by an advanced linguistic model (LM) with interval prediction based on fuzzy granulation that can be realized by context-based fuzzy c-means (CFCM) clustering. The system components consist of a sensor board, the rubber case, and the communication module with a built-in analysis algorithm. This sensor is patched onto the user's chest to obtain physiological data in indoor and outdoor environments. Prediction performance was quantified by root mean square error (RMSE) and evaluated as the number of contexts and the number of clusters were each increased from 2 to 6. Thirty participants were recruited from Chosun University to take part in this study. The data sets were recorded during normal walking, brisk walking, slow running, and jogging in an outdoor environment and treadmill running in an indoor environment. We randomly divided the data set into training (60%) and test (40%) sets in the normalized space over 10 iterations. The training set is used for model construction, while the test set is used for model validation. The experimental results revealed that the prediction error in the treadmill running simulation was improved by about 51% and 12% in comparison to the conventional LM for the training and checking data sets, respectively. PMID:23202166
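Context-based fuzzy c-means is a conditioned variant of standard fuzzy c-means; the sketch below implements only the plain FCM core in NumPy on synthetic HR/MI-like features. The context-weighting step that CFCM adds (conditioning memberships on granules of the output variable, here energy expenditure) is noted in a comment but omitted.

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means. CFCM would additionally rescale each column
    of U by a context membership induced by the output variable."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                         # columns sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None] - centers[:, None], axis=2)
        inv = (d + 1e-12) ** (-2.0 / (m - 1))  # u_ik ∝ d_ik^(-2/(m-1))
        U = inv / inv.sum(axis=0)
    return centers, U

# Synthetic 2-D feature vectors standing in for (HR, MI) samples.
X = np.random.default_rng(1).normal(size=(200, 2))
centers, U = fcm(X)
print(centers)
```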
Computer-assisted instruction in programming: AID
NASA Technical Reports Server (NTRS)
Friend, J.; Atkinson, R. C.
1971-01-01
Lessons for training students to program and operate computers in the AID language are given. The course consists of a set of 50 lessons, plus summaries, reviews, tests, and extra-credit problems. No prior knowledge is needed for the course, the only requirement being a strong background in algebra. A student manual, which includes instructions for operating the instructional program and a glossary of terms used in the course, is included in the appendices.
Free-electron laser simulations on the MPP
NASA Technical Reports Server (NTRS)
Vonlaven, Scott A.; Liebrock, Lorie M.
1987-01-01
Free electron lasers (FELs) are of interest because they provide high power, high efficiency, and broad tunability. FEL simulations can make efficient use of computers of the Massively Parallel Processor (MPP) class because most of the processing consists of applying a simple equation to a set of identical particles. A test version of the KMS Fusion FEL simulation, which resides mainly in the MPP's host computer and only partially in the MPP, has run successfully.
Low-level Laser Therapy for Traumatic Brain Injury
2014-10-01
performance and consists of a photodiode power sensor within a black plastic housing unit to confirm the output of each LED matrix in the helmet. We tested... could be significantly reversed by low-level light therapy (LLLT) in an in vitro study. The effect of LLLT was furthered by a combination with metabolic... it will both indicate a mechanism of action and provide a strategy for monitoring the effect of LLLT in clinical settings (for example, using
Bocquet, S.; Saro, A.; Mohr, J. J.; ...
2015-01-30
Here, we present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg^2 of the survey along with 63 velocity dispersion (σ v) and 16 X-ray Y X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ v and Y X are consistent at the 0.6σ level, with the σ v calibration preferring ~16% higher masses. We use the full SPTCL data set (SZ clusters+σ v+Y X) to measure σ8(Ωm/0.27)^0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is Σm ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger Σm ν further reconciles the results. When we combine the SPTCL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y X calibration and 0.8σ higher than the σ v calibration. Given the scale of these shifts (~44% and ~23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ωm = 0.299 ± 0.009 and σ8 = 0.829 ± 0.011. Within a νCDM model we find Σm ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters. Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = -1.007 ± 0.065, demonstrating that the expansion and the growth histories are consistent with a ΛCDM universe (γ = 0.55; w = -1).
NASA Astrophysics Data System (ADS)
Bocquet, S.; Saro, A.; Mohr, J. J.; Aird, K. A.; Ashby, M. L. N.; Bautz, M.; Bayliss, M.; Bazin, G.; Benson, B. A.; Bleem, L. E.; Brodwin, M.; Carlstrom, J. E.; Chang, C. L.; Chiu, I.; Cho, H. M.; Clocchiatti, A.; Crawford, T. M.; Crites, A. T.; Desai, S.; de Haan, T.; Dietrich, J. P.; Dobbs, M. A.; Foley, R. J.; Forman, W. R.; Gangkofner, D.; George, E. M.; Gladders, M. D.; Gonzalez, A. H.; Halverson, N. W.; Hennig, C.; Hlavacek-Larrondo, J.; Holder, G. P.; Holzapfel, W. L.; Hrubes, J. D.; Jones, C.; Keisler, R.; Knox, L.; Lee, A. T.; Leitch, E. M.; Liu, J.; Lueker, M.; Luong-Van, D.; Marrone, D. P.; McDonald, M.; McMahon, J. J.; Meyer, S. S.; Mocanu, L.; Murray, S. S.; Padin, S.; Pryke, C.; Reichardt, C. L.; Rest, A.; Ruel, J.; Ruhl, J. E.; Saliwanchik, B. R.; Sayre, J. T.; Schaffer, K. K.; Shirokoff, E.; Spieler, H. G.; Stalder, B.; Stanford, S. A.; Staniszewski, Z.; Stark, A. A.; Story, K.; Stubbs, C. W.; Vanderlinde, K.; Vieira, J. D.; Vikhlinin, A.; Williamson, R.; Zahn, O.; Zenteno, A.
2015-02-01
We present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg^2 of the survey along with 63 velocity dispersion (σ v) and 16 X-ray Y X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ v and Y X are consistent at the 0.6σ level, with the σ v calibration preferring ~16% higher masses. We use the full SPTCL data set (SZ clusters+σ v+Y X) to measure σ8(Ωm/0.27)^0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is ∑m ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger ∑m ν further reconciles the results. When we combine the SPTCL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y X calibration and 0.8σ higher than the σ v calibration. Given the scale of these shifts (~44% and ~23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ωm = 0.299 ± 0.009 and σ8 = 0.829 ± 0.011. Within a νCDM model we find ∑m ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters. Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = -1.007 ± 0.065, demonstrating that the expansion and the growth histories are consistent with a ΛCDM universe (γ = 0.55; w = -1).
Development and psychometric testing of the Clinical Learning Organisational Culture Survey (CLOCS).
Henderson, Amanda; Creedy, Debra; Boorman, Rhonda; Cooke, Marie; Walker, Rachel
2010-10-01
This paper describes the development and psychometric testing of the Clinical Learning Organisational Culture Survey (CLOCS), which measures prevailing beliefs and assumptions important for learning to occur in the workplace. Items from a tool that measured motivation in workplace learning were adapted to the nursing practice context. The tool was tested in the clinical setting, and then further modified to enhance face and content validity. Registered nurses (329) across three major Australian health facilities were surveyed between June 2007 and September 2007. An exploratory factor analysis identified five concepts: recognition, dissatisfaction, affiliation, accomplishment, and influence. Validity and reliability: internal consistency analyses revealed good reliability for four concepts: recognition (alpha=.914), dissatisfaction (alpha=.771), affiliation (alpha=.801), and accomplishment (alpha=.664); influence was weaker (alpha=.529). This tool effectively measures recognition, affiliation and accomplishment, three concepts important for learning in practice situations, and also identifies staff dissatisfaction across these domains. Testing of additional influence items indicated that this concept is difficult to delineate. The CLOCS can effectively inform leaders about concepts inherent in the culture that are important for maximising learning by staff. Crown Copyright © 2009. Published by Elsevier Ltd. All rights reserved.
Centaur Standard Shroud (CSS) static ultimate load structural tests
NASA Technical Reports Server (NTRS)
1975-01-01
A series of tests was conducted on the jettisonable metallic shroud used on the Titan/Centaur launch vehicle to verify its structural capabilities and to evaluate its structural interaction with the Centaur stage. A flight-configured shroud and the interfacing Titan/Centaur structural assemblies were subjected to tests consisting of combinations of applied axial and shear loads to design ultimate values, including a set of tests under thermal conditions and two dynamic response tests to verify the analytical stiffness model. The strength capabilities were demonstrated at ultimate (125 percent of design limit) loads. It was also verified that the spring rate of the flight-configured shroud-to-Centaur forward structure, as reflected in the structural deflections of the specimen, became nonlinear, as expected, above limit load values. This qualification test series verified that the Titan/Centaur shroud and the Centaur and Titan interface components are structurally qualified at design ultimate loads.
Contextual interference effects on the acquisition of skill and strength of the bench press.
Naimo, Marshall A; Zourdos, Michael C; Wilson, Jacob M; Kim, Jeong-Su; Ward, Emery G; Eccles, David W; Panton, Lynn B
2013-06-01
The purpose of this study was to investigate contextual interference effects on skill acquisition and strength gains during the learning of the bench press movement. Twenty-four healthy, college-aged males and females were stratified into control, high contextual interference (HCI), and low contextual interference (LCI) groups. Treatment groups were provided with written and visual instruction on proper bench press form and practiced the bench press and dart throwing for four weeks. Within each session, LCI performed all bench press sets before undertaking dart throws. HCI undertook dart throws immediately following each set of bench press. The control group completed testing only. Measurements, including one repetition maximum (1RM), checklist scores based on video recordings of participants' 1RMs, and dart-throw test scores, were taken at pre-test, 1 week, 2 weeks, post-test, and retention test. Results were consistent with the basic premise of the contextual interference effect. LCI showed significant improvements in percent 1RM and checklist scores during training, but these gains were mostly absent after training (post-test and retention test). HCI showed significant improvements in percent 1RM and checklist scores both during and after training. Thus, HCI may augment strength and movement skill on the bench press, since proper technique is an important component of resistance exercise movements. Copyright © 2013 Elsevier B.V. All rights reserved.
Lee, Won Jun; Kim, Sang Cheol; Lee, Seul Ji; Lee, Jeongmi; Park, Jeong Hill; Yu, Kyung-Sang; Lim, Johan; Kwon, Sung Won
2014-01-01
Based on the process of carcinogenesis, carcinogens are classified as either genotoxic or non-genotoxic. In contrast to non-genotoxic carcinogens, many genotoxic carcinogens have been reported to cause tumors in carcinogenic bioassays in animals. Thus evaluating the genotoxicity potential of chemicals is important to discriminate genotoxic from non-genotoxic carcinogens for health care and pharmaceutical industry safety. Additionally, investigating the difference between the mechanisms of genotoxic and non-genotoxic carcinogens could provide the foundation for a mechanism-based classification for unknown compounds. In this study, we investigated the gene expression of HepG2 cells treated with genotoxic or non-genotoxic carcinogens and compared their mechanisms of action. To enhance our understanding of the differences in the mechanisms of genotoxic and non-genotoxic carcinogens, we implemented a gene set analysis using 12 compounds for the training set (12, 24, 48 h) and validated significant gene sets using 22 compounds for the test set (24, 48 h). For a direct biological translation, we conducted a gene set analysis using Globaltest and selected significant gene sets. To validate the results, training and test compounds were predicted by the significant gene sets using a prediction analysis for microarrays (PAM). Finally, we obtained 6 gene sets, including sets enriched for genes involved in the adherens junction, bladder cancer, p53 signaling pathway, pathways in cancer, peroxisome and RNA degradation. Among the 6 gene sets, the bladder cancer and p53 signaling pathway sets were significant at 12, 24 and 48 h. We also found that the DDB2, RRM2B and GADD45A genes, which are related to the repair and damage prevention of DNA, were consistently up-regulated for genotoxic carcinogens. Our results suggest that a gene set analysis could provide a robust tool in the investigation of the different mechanisms of genotoxic and non-genotoxic carcinogens and help construct a more detailed understanding of the perturbation of significant pathways. PMID:24497971
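PAM here refers to the nearest-shrunken-centroids classifier, an idea scikit-learn exposes through NearestCentroid with a shrink_threshold, so the validation step might be sketched as follows; the expression matrix, gene counts, and threshold value are all invented stand-ins for the study's data.

```python
import numpy as np
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)

# Synthetic expression of pathway genes: 12 training compounds,
# 22 test compounds, 50 genes; label 1 = genotoxic carcinogen.
X_train = rng.normal(size=(12, 50))
y_train = np.array([0, 1] * 6)
X_train[y_train == 1, :5] += 2.0   # DDB2/RRM2B/GADD45A-like up-regulation
X_test = rng.normal(size=(22, 50))
y_test = rng.integers(0, 2, size=22)
X_test[y_test == 1, :5] += 2.0

clf = NearestCentroid(shrink_threshold=0.5)  # shrunken centroids (PAM-style)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Shrinking the class centroids toward the overall centroid zeroes out uninformative genes, which is why PAM doubles as an implicit gene-selection step when restricted to a candidate gene set.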
Are cosmological data sets consistent with each other within the Λ cold dark matter model?
NASA Astrophysics Data System (ADS)
Raveri, Marco
2016-02-01
We use a complete and rigorous statistical indicator to measure the level of concordance between cosmological data sets, without relying on the inspection of the marginal posterior distribution of selected parameters. We apply this test to state-of-the-art cosmological data sets, to assess their agreement within the Λ cold dark matter model. We find that there is a good level of concordance between all the experiments, with one noticeable exception. There is substantial evidence of tension between the cosmic microwave background temperature and polarization measurements of the Planck satellite and the data from the CFHTLenS weak lensing survey, even when applying ultraconservative cuts. These results robustly point toward the possibility of unaccounted-for systematic effects in the data, incomplete modeling of the cosmological predictions, or hints of new physical phenomena.
Static investigation of two STOL nozzle concepts with pitch thrust-vectoring capability
NASA Technical Reports Server (NTRS)
Mason, M. L.; Burley, J. R., II
1986-01-01
A static investigation of the internal performance of two short take-off and landing (STOL) nozzle concepts with pitch thrust-vectoring capability has been conducted. An axisymmetric nozzle concept and a nonaxisymmetric nozzle concept were tested at dry and afterburning power settings. The axisymmetric concept consisted of a circular approach duct with a convergent-divergent nozzle. Pitch thrust vectoring was accomplished by vectoring the approach duct without changing the nozzle geometry. The nonaxisymmetric concept consisted of a two-dimensional convergent-divergent nozzle. Pitch thrust vectoring was implemented by blocking the nozzle exit and deflecting a door in the lower nozzle flap. The test nozzle pressure ratio was varied up to 10.0, depending on model geometry. Results indicate that both pitch-vectoring concepts produced resultant pitch vector angles nearly equal to the geometric pitch deflection angles. The axisymmetric nozzle concept had only small thrust losses at the largest pitch deflection angle of 70 deg., but the two-dimensional convergent-divergent nozzle concept had large performance losses at both pitch deflection angles tested, 60 deg. and 70 deg.
Testing interconnected VLSI circuits in the Big Viterbi Decoder
NASA Technical Reports Server (NTRS)
Onyszchuk, I. M.
1991-01-01
The Big Viterbi Decoder (BVD) is a powerful error-correcting hardware device for the Deep Space Network (DSN), in support of the Galileo and Comet Rendezvous Asteroid Flyby (CRAF)/Cassini Missions. Recently, a prototype was completed and run successfully at 400,000 or more decoded bits per second. This prototype is a complex digital system whose core arithmetic unit consists of 256 identical very large scale integration (VLSI) gate-array chips, 16 on each of 16 identical boards which are connected through a 28-layer, printed-circuit backplane using 4416 wires. Special techniques were developed for debugging, testing, and locating faults inside individual chips, on boards, and within the entire decoder. The methods are based upon hierarchical structure in the decoder, and require that chips or boards be wired themselves as Viterbi decoders. The basic procedure consists of sending a small set of known, very noisy channel symbols through a decoder, and matching observables against values computed by a software simulation. Also, tests were devised for finding open and short-circuited wires which connect VLSI chips on the boards and through the backplane.
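The test idea (feed known, deliberately corrupted symbols through the decoder and match the output against a software simulation) can be illustrated with a tiny software Viterbi decoder. The rate-1/2, constraint-length-3 code below is a standard textbook example, not the BVD's actual much longer constraint-length code.

```python
G = [0b111, 0b101]   # generator polynomials of a rate-1/2, K=3 code
NSTATES = 4

def encode(bits, state=0):
    """Convolutionally encode an input bit sequence."""
    out = []
    for u in bits:
        reg = (u << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(symbols):
    """Hard-decision Viterbi decoding with register-exchange paths."""
    INF = 10**9
    metric = [0] + [INF] * (NSTATES - 1)   # start in the all-zero state
    paths = [[] for _ in range(NSTATES)]
    for i in range(0, len(symbols), 2):
        rx = symbols[i:i + 2]
        new_m, new_p = [INF] * NSTATES, [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] == INF:
                continue
            for u in (0, 1):
                reg = (u << 2) | s
                ns = reg >> 1
                expected = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[s] + sum(a != b for a, b in zip(expected, rx))
                if m < new_m[ns]:           # keep the survivor path
                    new_m[ns], new_p[ns] = m, paths[s] + [u]
        metric, paths = new_m, new_p
    return paths[min(range(NSTATES), key=metric.__getitem__)]

# Known input with one injected channel error: output must match input.
msg = [1, 0, 1, 1, 0, 0, 1, 0]
noisy = encode(msg)
noisy[3] ^= 1
print(viterbi(noisy) == msg)  # True: the error is corrected
```

The hardware test described in the abstract follows the same pattern at scale: a small set of very noisy known symbols goes through the chip, board, or full decoder, and every observable is checked against the corresponding value from the software model.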
NASA Technical Reports Server (NTRS)
Mintz, Toby; Maslowski, Edward A.; Colozza, Anthony; McFarland, Willard; Prokopius, Kevin P.; George, Patrick J.; Hussey, Sam W.
2010-01-01
The Lunar Surface Power Distribution Network Study team worked to define, breadboard, build and test an electrical power distribution system consistent with NASA's goal of providing electrical power to sustain life and power equipment used to explore the lunar surface. A testbed was set up to simulate the connection of different power sources and loads together to form a mini-grid and gain an understanding of how the power systems would interact. Within the power distribution scheme, each power source contributes to the grid in an independent manner without communication among the power sources and without a master-slave scenario. The grid consisted of four separate power sources and the accompanying power conditioning equipment. Overall system design and testing was performed. The tests were performed to observe the output and interaction of the different power sources as some sources are added and others are removed from the grid connection. The loads on the system were also varied from no load to maximum load to observe the power source interactions.
Face Recognition by Metropolitan Police Super-Recognisers.
Robertson, David J; Noyes, Eilidh; Dowsett, Andrew J; Jenkins, Rob; Burton, A Mike
2016-01-01
Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability, a group that has come to be known as 'super-recognisers'. The Metropolitan Police Force (London) recruits 'super-recognisers' from within its ranks for deployment on various identification tasks. Here we test four working super-recognisers from within this police force, and ask whether they are really able to perform at levels above control groups. We consistently find that the police 'super-recognisers' perform well above normal levels on tests of unfamiliar and familiar face matching, with degraded as well as high-quality images. Recruiting employees with high levels of skill in these areas, and allocating them to relevant tasks, is an efficient way to overcome some of the known difficulties associated with unfamiliar face recognition.
Xu, Hui; Gong, Weiliang; Syltebo, Larry; Lutze, Werner; Pegg, Ian L
2014-08-15
The binary furnace slag-metakaolin DuraLith geopolymer waste form, which has been considered one of the candidate waste forms for immobilization of certain Hanford secondary wastes (HSW) from the vitrification of nuclear wastes at the Hanford Site, Washington, was extended to a ternary fly ash-furnace slag-metakaolin system to improve workability, reduce hydration heat, and evaluate high HSW waste loading. A concentrated HSW simulant, consisting of more than 20 chemicals with a sodium concentration of 5 mol/L, was employed to prepare the alkaline activating solution. Fly ash was incorporated at up to 60 wt% into the binder materials, whereas metakaolin was kept constant at 26 wt%. The fresh waste form pastes were subjected to isothermal calorimetry and setting time measurement, and the cured samples were further characterized by compressive strength and TCLP leach tests. This study is the first to establish quantitative linear relationships between hydration heat and both initial and final setting times, relationships not previously reported for any cementitious waste form or geopolymeric material. These correlations between setting times and hydration heat may make it possible to efficiently design and optimize cementitious waste forms and industrial-waste-based geopolymers using limited testing results. Copyright © 2014 Elsevier B.V. All rights reserved.
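The reported correlations are simple linear fits; with hypothetical calorimetry data (the values below are invented for illustration), the regression might be set up as follows.

```python
import numpy as np

# Hypothetical data: cumulative hydration heat at 24 h (J/g) versus
# measured initial setting time (h) for several candidate mixes.
heat = np.array([35.0, 42.0, 50.0, 58.0, 66.0])
t_initial = np.array([14.5, 12.0, 9.8, 7.9, 6.1])

slope, intercept = np.polyfit(heat, t_initial, 1)  # least-squares line
r = np.corrcoef(heat, t_initial)[0, 1]
print(f"t_init ~ {slope:.3f}*Q + {intercept:.2f}  (r = {r:.3f})")
```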
Measurement of acoustic attenuation in South Pole ice
NASA Astrophysics Data System (ADS)
IceCube Collaboration; Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Böser, S.; Botner, O.; Bradley, L.; Braun, J.; Buitink, S.; Carson, M.; Chirkin, D.; Christy, B.; Clem, J.; Clevermann, F.; Cohen, S.; Colnard, C.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; de Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; Deyoung, T.; Díaz-Vélez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Geisler, M.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Goodman, J. A.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Gustafsson, L.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Homeier, A.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Imlay, R. L.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kemming, N.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Knops, S.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Koskinen, D. J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Krings, T.; Kroll, G.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Lauer, R.; Lehmann, R.; Lennarz, D.; Lünemann, J.; Madsen, J.; Majumdar, P.; Maruyama, R.; Mase, K.; Matis, H. S.; Matusik, M.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Middell, E.; Milke, N.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Naumann, U.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Paul, L.; Pérez de Los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Porrata, R.; Posselt, J.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Ruhe, T.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Schatto, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schukraft, A.; Schultes, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tamburro, A.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Tilav, S.; Toale, P. A.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; van Overloop, A.; van Santen, J.; Voigt, B.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, C.; Xu, X. W.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; IceCube Collaboration
2011-01-01
Using the South Pole Acoustic Test Setup (SPATS) and a retrievable transmitter deployed in holes drilled for the IceCube experiment, we have measured the attenuation of acoustic signals by South Pole ice at depths between 190 m and 500 m. Three data sets, using different acoustic sources, have been analyzed and give consistent results. The method with the smallest systematic uncertainties yields an amplitude attenuation coefficient α = 3.20 ± 0.57 km^-1 between 10 and 30 kHz, considerably larger than previous theoretical estimates. Expressed as an attenuation length, the analyses give a consistent result for λ ≡ 1/α of ~300 m with 20% uncertainty. No significant depth or frequency dependence has been found.
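Schematically, an amplitude attenuation coefficient like the one quoted can be extracted by correcting received amplitudes for geometric spreading and fitting the log-amplitude against distance; the numbers below are synthetic, chosen only so the fit recovers an α near 3.2 km^-1.

```python
import numpy as np

# Synthetic transmitter-receiver data: distances (km) and received
# amplitudes following A(d) = A0 * exp(-alpha*d) / d, plus noise.
rng = np.random.default_rng(0)
d = np.array([0.125, 0.25, 0.4, 0.55, 0.7])
A0, alpha_true = 1.0, 3.2
A = A0 * np.exp(-alpha_true * d) / d * rng.lognormal(0, 0.05, d.size)

# ln(A*d) = ln(A0) - alpha*d, so the slope of a line fit is -alpha.
slope, _ = np.polyfit(d, np.log(A * d), 1)
print("alpha ~", round(-slope, 2), "km^-1")
```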
Development of the Attitudes to Domestic Violence Questionnaire for Children and Adolescents.
Fox, Claire L; Gadd, David; Sim, Julius
2015-09-01
To provide a more robust assessment of the effectiveness of a domestic abuse prevention education program, a questionnaire was developed to measure children's attitudes to domestic violence. The aim was to develop a short questionnaire that would be easy to use for practitioners but, at the same time, sensitive enough to pick up on subtle changes in young people's attitudes. We therefore chose to ask children about different situations in which they might be willing to condone domestic violence. In Study 1, we tested a set of 20 items, which we reduced by half to a set of 10 items. The factor structure of the scale was explored and its internal consistency was calculated. In Study 2, we tested the factor structure of the 10-item Attitudes to Domestic Violence (ADV) Scale in a separate calibration sample. Finally, in Study 3, we then assessed the test-retest reliability of the 10-item scale. The ADV Questionnaire is a promising tool to evaluate the effectiveness of domestic abuse education prevention programs. However, further development work is necessary. © The Author(s) 2014.
Load Transmission Through Artificial Hip Joints due to Stress Wave Loading
NASA Astrophysics Data System (ADS)
Tanabe, Y.; Uchiyama, T.; Yamaoka, H.; Ohashi, H.
Since wear of the polyethylene (Ultra High Molecular Weight Polyethylene, or UHMWPE) acetabular cup is considered to be the main cause of loosening of the artificial hip joint, cross-linked UHMWPE with high durability to wear has been developed. This paper deals with impact load transmission through the complex of an artificial hip joint consisting of a UHMWPE acetabular cup (or liner), a metallic femoral head, and a stem. Impact compressive tests on the complex were performed using the split-Hopkinson pressure bar apparatus. To investigate the effects of material (conventional or cross-linked UHMWPE), size and setting angle of the liner, and test temperature on force transmission, the impact load transmission ratio (ILTR) was experimentally determined. The ILTR decreased with an increase of the setting angle, independent of material and size of the liner and of test temperature. The ILTR values at 37°C were larger than those at 24°C and 60°C. The ILTR also appeared to be affected by the type of material as well as the size of the liner.
NASA Technical Reports Server (NTRS)
Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Stiles, J. A.; Moore, R. K.; Holtzman, J. C.
1981-01-01
Image simulation techniques were employed to generate synthetic aperture radar (SAR) images of a 17.7 km x 19.3 km test site located east of Lawrence, Kansas. The simulations were performed for a spaceborne SAR at an orbital altitude of 600 km, with the following sensor parameters: frequency = 4.75 GHz, polarization = HH, and angle of incidence = 7 deg to 22 deg from nadir. Three sets of images were produced, corresponding to three different spatial resolutions: 20 m x 20 m with 12 looks, 100 m x 100 m with 23 looks, and 1 km x 1 km with 1000 looks. Each set consisted of images for four different soil moisture distributions across the test site. Results indicate that, for the agricultural portion of the test site, the soil moisture in about 90% of the pixels can be predicted with an accuracy of ± 20% of field capacity. Among the three spatial resolutions, the 1 km x 1 km resolution gave the best results in most cases; however, for very dry soil conditions, the 100 m x 100 m resolution was slightly superior.
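The reported accuracy criterion can be read as the fraction of pixels whose predicted moisture lies within ±20% of field capacity of the true value. A short sketch of that metric on synthetic pixel data (the error model and field-capacity normalization are assumptions for illustration only):

```python
import numpy as np

# Synthetic per-pixel soil moisture, expressed as percent of field capacity.
rng = np.random.default_rng(1)
field_capacity = 100.0
true_moisture = rng.uniform(10, 90, size=10_000)
predicted = true_moisture + rng.normal(scale=10, size=true_moisture.size)

# Accuracy metric: share of pixels with |error| <= 20% of field capacity.
within = np.abs(predicted - true_moisture) <= 0.20 * field_capacity
print(f"{100 * within.mean():.1f}% of pixels within +/-20% of field capacity")
```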
Finding and testing network communities by lumped Markov chains.
Piccardi, Carlo
2011-01-01
Identifying communities (or clusters), namely groups of nodes with comparatively strong internal connectivity, is a fundamental task for deeply understanding the structure and function of a network. Yet, there is a lack of formal criteria for defining communities and for testing their significance. We propose a sharp definition that is based on a quality threshold. By means of a lumped Markov chain model of a random walker, a quality measure called the "persistence probability" is associated with each cluster, which is then defined as an "α-community" if this probability is not smaller than α. Consistently, a partition composed of α-communities is an "α-partition." These definitions turn out to be very effective for finding and testing communities. If a set of candidate partitions is available, setting the desired α-level allows one to immediately select the α-partition with the finest decomposition. Simultaneously, the persistence probabilities quantify the quality of each individual community. Given its ability to assess each cluster individually, this approach can also disclose well-defined communities even in networks that, overall, do not possess a definite clusterized structure.
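Under the usual random-walk construction, the persistence probability of a cluster c is u_c = Σ_{i,j∈c} π_i P_ij / Σ_{i∈c} π_i, where P is the row-stochastic transition matrix and π the stationary distribution (proportional to degree on an undirected graph). A minimal sketch on a toy graph, assuming that construction (not the paper's code):

```python
import numpy as np

def persistence_probabilities(A: np.ndarray, labels: np.ndarray) -> dict:
    """Per-cluster probability that a random walker inside the cluster
    stays inside it at the next step (stationary regime, undirected graph)."""
    degrees = A.sum(axis=1)
    P = A / degrees[:, None]      # row-stochastic transition matrix
    pi = degrees / degrees.sum()  # stationary distribution for an undirected graph
    out = {}
    for c in np.unique(labels):
        members = labels == c
        # u_c = sum_{i,j in c} pi_i * P_ij  /  sum_{i in c} pi_i
        stay = (pi[members, None] * P[np.ix_(members, members)]).sum()
        out[int(c)] = stay / pi[members].sum()
    return out

# Toy graph: two triangles joined by a single edge -> two strong communities.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
labels = np.array([0, 0, 0, 1, 1, 1])
print(persistence_probabilities(A, labels))  # both clusters near 1, i.e. alpha-communities
```

On this toy graph both clusters have persistence probability 6/7 ≈ 0.86, so they qualify as α-communities for any α ≤ 0.86.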
Exploring the general motor ability construct.
Ibrahim, Halijah; Heard, N Paul; Blanksby, Brian
2011-10-01
Malaysian students ages 12 to 15 years (N = 330; 165 girls, 165 boys) took the Australian Institute of Sport Talent Identification Test (AIST) and the Balance and Movement Coordination Test (BMC), developed specifically to identify sport talent in Malaysian adolescents. To investigate evidence for general aptitude ("g") in motor ability, a higher-order factor analysis was applied to the motor skills subtests from the AIST and BMC. First-order principal components analysis indicated that scores for the adolescent boys and girls could be described by similar sets of specific motor abilities. In particular, sets of skills identified as Movement Coordination and Postural Control were found, with Balancing Ability also emerging. For the girls, a factor labeled Static Balance was indicated. However, for the boys a more general balance ability labeled Kinesthetic Integration was found, along with an ability labeled Explosive Power. These first-order analyses accounted for 45% to 60% of the variance in the scores on the motor skills tests for the boys and girls, respectively. Separate second-order factor analyses for the boys and girls extracted a single higher-order factor, which was consistent with the existence of a motoric "g".
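The higher-order logic, stripped to essentials: if the first-order abilities themselves correlate because of a shared general factor, a second-stage analysis of those correlations should yield one dominant factor. A simplified sketch using simulated subtests and composite scores in place of formal factor scoring (all loadings and the sample structure are invented, not taken from the study):

```python
import numpy as np

# Simulate subtest scores in which three first-order motor abilities all load
# on one general factor "g", then check whether a single component dominates
# the correlations among the first-order composites.
rng = np.random.default_rng(42)
n = 330
g = rng.normal(size=n)  # hypothetical general motor factor
first_order = [0.7 * g + 0.7 * rng.normal(size=n) for _ in range(3)]

# Four observed subtests per first-order ability, each with unique noise.
subtests = np.column_stack([0.8 * f + 0.6 * rng.normal(size=n)
                            for f in first_order for _ in range(4)])

# First-order "factors" approximated as composites of their four subtests.
composites = np.column_stack([subtests[:, i*4:(i+1)*4].mean(axis=1)
                              for i in range(3)])

# Second-order step: eigendecomposition of the composite correlation matrix.
eigvals = np.linalg.eigvalsh(np.corrcoef(composites, rowvar=False))[::-1]
print("eigenvalues:", np.round(eigvals, 2))  # one dominant eigenvalue ~ motoric "g"
```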