NASA Technical Reports Server (NTRS)
Leblanc, Thierry; McDermid, Iain S.; McGee, Thomas G.; Twigg, Laurence W.; Sumnicht, Grant K.; Whiteman, David N.; Rush, Kurt D.; Cadirola, Martin P.; Venable, Demetrius D.; Connell, R.;
2008-01-01
The Measurements of Humidity in the Atmosphere and Validation Experiments (MOHAVE, MOHAVE-II) inter-comparison campaigns took place at the Jet Propulsion Laboratory (JPL) Table Mountain Facility (TMF, 34.5°N) in October 2006 and 2007, respectively. Both campaigns aimed at evaluating the capability of three Raman lidars to measure water vapor in the upper troposphere and lower stratosphere (UT/LS). During each campaign, more than 200 hours of lidar measurements were compared to balloon-borne measurements obtained from 10 Cryogenic Frost-point Hygrometer (CFH) flights and over 50 Vaisala RS92 radiosonde flights. During MOHAVE, fluorescence was identified in all three lidar receivers, causing a significant wet bias above 10-12 km in the lidar profiles as compared to the CFH. All three lidars were reconfigured after MOHAVE, and no such bias was observed during the MOHAVE-II campaign. The lidar profiles agreed very well with the CFH up to 13-17 km altitude, where the lidar measurements become noise-limited. The MOHAVE-II results show that water vapor Raman lidar will be an appropriate technique for the long-term monitoring of water vapor in the UT/LS, given a modest increase in power-aperture product as well as careful calibration.
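As a concrete illustration of the kind of lidar-versus-sonde comparison described in this abstract, the sketch below interpolates a CFH profile onto a lidar altitude grid and reports the mean relative difference above 10 km. It is a minimal sketch using hypothetical synthetic arrays, not the campaign's actual analysis code.

```python
# Minimal sketch of a lidar-vs-CFH water vapor comparison on a common grid.
# Profile arrays and altitude grids here are hypothetical placeholders.
import numpy as np

def percent_difference(alt_lidar, wv_lidar, alt_cfh, wv_cfh):
    """Interpolate the CFH profile onto the lidar altitude grid and return
    the relative difference (lidar - CFH) / CFH in percent."""
    wv_cfh_on_lidar = np.interp(alt_lidar, alt_cfh, wv_cfh)
    return 100.0 * (wv_lidar - wv_cfh_on_lidar) / wv_cfh_on_lidar

# Hypothetical example: mixing ratio in ppmv on altitude grids in km.
alt_lidar = np.arange(3.0, 18.0, 0.1)
wv_lidar = 5000.0 * np.exp(-alt_lidar / 2.0) + 4.0   # synthetic lidar profile
alt_cfh = np.arange(0.0, 25.0, 0.05)
wv_cfh = 5000.0 * np.exp(-alt_cfh / 2.0) + 4.0       # synthetic CFH profile

diff = percent_difference(alt_lidar, wv_lidar, alt_cfh, wv_cfh)
upper = alt_lidar >= 10.0                            # e.g., check for a wet bias above 10-12 km
print("mean lidar-CFH difference above 10 km: %.1f %%" % diff[upper].mean())
```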
NASA Technical Reports Server (NTRS)
Leblanc, Thierry; McDermid, I. S.; Vomel, H.; Whiteman, D.; Twigg, Larry; McGee, T. G.
2008-01-01
1. MOHAVE and MOHAVE-II were very successful. 2. MOHAVE: fluorescence was found to be inherent to all three participating lidars. 3. MOHAVE-II: the fluorescence was removed, and agreement with the CFH was extremely good up to 16-18 km altitude. 4. MOHAVE-II: calibration tests revealed unsuspected shortfalls of widely used techniques, with important implications for their applicability to long-term measurements. 5. A factor of 5 increase in future lidar signal-to-noise ratio is reasonably achievable; once this level is achieved, water vapor Raman lidar will become a key instrument for the long-term monitoring of water vapor in the UT/LS.
33 CFR 162.220 - Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Mohave (Colorado River), Ariz.-Nev. 162.220 Section 162.220 Navigation and Navigable Waters COAST GUARD... REGULATIONS § 162.220 Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev. (a) Lake Mead and... the axis of Hoover Dam and that portion of Lake Mohave (Colorado River) extending 4,500 feet...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-17
... generated by the Project. The approved Project includes up to 243 wind turbine generators and associated..., operation, maintenance, and decommissioning of the Project to BP Wind Energy; and for the BLM to issue a ROW...; AZA32315AA] Notice of Availability of the Record of Decision for the Mohave County Wind Farm Project, Mohave...
John L. Anderson
2001-01-01
The white-margined penstemon (Penstemon albomarginatus Jones) is a rare Mohave Desert species with an unusual tripartite distribution, with disjunct localities in Arizona, California, and Nevada. The Arizona population is the largest single population, occupying a range of about 15 by 5 miles in Dutch Flat near Yucca, Arizona, in Mohave County. The land ownership...
The project MOHAVE tracer study: study design, data quality, and overview of results
NASA Astrophysics Data System (ADS)
Green, Mark C.
In the winter and summer of 1992, atmospheric tracer studies were conducted in support of project MOHAVE, a visibility study in the southwestern United States. The primary goal of project MOHAVE is to determine the effects of the Mohave power plant and other sources upon visibility at Grand Canyon National Park. Perfluorocarbon tracers (PFTs) were released from the Mohave power plant and other locations and monitored at about 30 sites. The tracer data are being used for source attribution analysis and for evaluation of transport and dispersion models and receptor models. Collocated measurements showed the tracer data to be of high quality and suitable for source attribution analysis and model evaluation. The results showed strong influences of channeling by the Colorado River canyon during both winter and summer. Flow from the Mohave power plant was usually to the south, away from the Grand Canyon in winter and to the northeast, toward the Grand Canyon in summer. Tracer released at Lake Powell in winter was found to often travel downstream through the entire length of the Grand Canyon. Data from summer tracer releases in southern California demonstrated the existence of a convergence zone in the western Mohave Desert.
Airborne and Ground-Based Measurements Using a High-Performance Raman Lidar
NASA Technical Reports Server (NTRS)
Whiteman, David N.; Rush, Kurt; Rabenhorst, Scott; Welch, Wayne; Cadirola, Martin; McIntire, Gerry; Russo, Felicita; Adam, Mariana; Venable, Demetrius; Connell, Rasheen;
2010-01-01
A high-performance Raman lidar operating in the UV portion of the spectrum has been used to acquire, for the first time using a single lidar, simultaneous airborne profiles of the water vapor mixing ratio, aerosol backscatter, aerosol extinction, aerosol depolarization, and research-mode measurements of cloud liquid water, cloud droplet radius, and number density. The Raman Airborne Spectroscopic Lidar (RASL) system was installed in a Beechcraft King Air B200 aircraft and was flown over the mid-Atlantic United States during July-August 2007 at altitudes ranging between 5 and 8 km. During these flights, despite suboptimal laser performance and subaperture use of the telescope, all RASL measurement expectations were met, except that of aerosol extinction. Following the Water Vapor Validation Experiment Satellite/Sondes (WAVES_2007) field campaign in the summer of 2007, RASL was installed in a mobile trailer for ground-based use during the Measurements of Humidity and Validation Experiment (MOHAVE-II) field campaign held during October 2007 at the Jet Propulsion Laboratory's Table Mountain Facility in southern California. This ground-based configuration of the lidar hardware is called Atmospheric Lidar for Validation, Interagency Collaboration and Education (ALVICE). During the MOHAVE-II field campaign, during which only nighttime measurements were made, ALVICE demonstrated significant sensitivity to lower-stratospheric water vapor. Numerical simulation and comparisons with a cryogenic frost-point hygrometer are used to demonstrate that a system with the performance characteristics of RASL/ALVICE should indeed be able to quantify water vapor well into the lower stratosphere with extended averaging from an elevated location like Table Mountain. The same design considerations that optimize Raman lidar for airborne use on a small research aircraft are therefore shown to yield significant dividends in the quantification of lower-stratospheric water vapor. The MOHAVE-II measurements, along with numerical simulation, were used to determine that the likely reason for the suboptimal airborne aerosol extinction performance during the WAVES_2007 campaign was a misaligned interference filter. With full laser power and a properly tuned interference filter, RASL is shown to be capable of measuring the main water vapor and aerosol parameters with temporal resolutions of between 2 and 45 s and spatial resolutions ranging from 30 to 330 m from a flight altitude of 8 km, with precision of generally less than 10%, providing performance that is competitive with some airborne Differential Absorption Lidar (DIAL) water vapor and High Spectral Resolution Lidar (HSRL) aerosol instruments. The use of diode-pumped laser technology would improve the performance of an airborne Raman lidar and permit additional instrumentation to be carried on board a small research aircraft. The combined airborne and ground-based measurements presented here demonstrate a level of versatility in Raman lidar that may be impossible to duplicate with any other single lidar technique.
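The water vapor product of a Raman lidar such as the one described above is commonly derived from the ratio of the background-subtracted H2O and N2 Raman channel signals, scaled by a calibration constant. The sketch below illustrates that standard retrieval step with hypothetical photon-count arrays and a hypothetical calibration constant; it is not RASL/ALVICE code, and a real retrieval would also correct for differential atmospheric transmission and overlap.

```python
# Minimal sketch of the usual Raman lidar water vapor retrieval: mixing ratio
# proportional to the ratio of background-subtracted H2O and N2 Raman signals.
# Counts and the calibration constant are hypothetical placeholders.
import numpy as np

def mixing_ratio(counts_h2o, counts_n2, bkg_h2o, bkg_n2, calib_const):
    """Water vapor mixing ratio (g/kg) from Raman channel photon counts."""
    s_h2o = counts_h2o - bkg_h2o          # background-subtracted H2O signal
    s_n2 = counts_n2 - bkg_n2             # background-subtracted N2 signal
    with np.errstate(divide="ignore", invalid="ignore"):
        return calib_const * s_h2o / s_n2
    # (A real retrieval also corrects for differential transmission and overlap.)

# Hypothetical nighttime profiles (photon counts per range bin).
counts_n2 = np.linspace(2.0e5, 1.0e3, 500)
counts_h2o = 0.02 * counts_n2 + 50.0
q = mixing_ratio(counts_h2o, counts_n2, bkg_h2o=50.0, bkg_n2=20.0, calib_const=120.0)
```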
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-26
... Dam. The project will consist of up to 335 wind turbine generators (WTGs). Construction may consist of... County Wind Farm Project, Mohave County, AZ AGENCY: Bureau of Land Management, Interior. ACTION: Notice....gov/az/st/en/prog/energy/wind/mohave.html . In order to be included in the Draft EIS, all comments...
33 CFR 162.220 - Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Hoover Dam, Lake Mead, and Lake... REGULATIONS § 162.220 Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev. (a) Lake Mead and... the axis of Hoover Dam and that portion of Lake Mohave (Colorado River) extending 4,500 feet...
33 CFR 162.220 - Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Hoover Dam, Lake Mead, and Lake... REGULATIONS § 162.220 Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev. (a) Lake Mead and... the axis of Hoover Dam and that portion of Lake Mohave (Colorado River) extending 4,500 feet...
33 CFR 162.220 - Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Hoover Dam, Lake Mead, and Lake... REGULATIONS § 162.220 Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev. (a) Lake Mead and... the axis of Hoover Dam and that portion of Lake Mohave (Colorado River) extending 4,500 feet...
33 CFR 162.220 - Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Hoover Dam, Lake Mead, and Lake... REGULATIONS § 162.220 Hoover Dam, Lake Mead, and Lake Mohave (Colorado River), Ariz.-Nev. (a) Lake Mead and... the axis of Hoover Dam and that portion of Lake Mohave (Colorado River) extending 4,500 feet...
ERIC Educational Resources Information Center
Northern Arizona Univ., Flagstaff. Educational Resources Management Center.
The major purpose of this study was to accumulate and organize pertinent information relative to the future status and goals of Mohave County Community College. The major objectives of the survey were to: (1) describe the growth and development of the Mohave County Community College, (2) describe the present educational program of the college, (3)…
Geologic map of the Mohave Mountains area, Mohave County, western Arizona
Howard, K.A.; Nielson, J.E.; Wilshire, H.G.; Nakata, J.K.; Goodge, J.W.; Reneau, Steven L.; John, Barbara E.; Hansen, V.L.
1999-01-01
Introduction The Mohave Mountains area surrounds Lake Havasu City, Arizona, in the Basin and Range physiographic province. The Mohave Mountains and the Aubrey Hills form two northwest-trending ranges adjacent to Lake Havasu (elevation 137 m; 448 ft) on the Colorado River. The low Buck Mountains lie northeast of the Mohave Mountains in the alluviated valley of Dutch Flat. Lowlands at Standard Wash separate the Mohave Mountains from the Bill Williams Mountains to the southeast. The highest point in the area is Crossman Peak in the Mohave Mountains, at an elevation of 1569 m (5148 ft). Arizona Highway 95 is now rerouted in the northwestern part of the map area from its position portrayed on the base map; it now also passes through the southern edge of the map area. Geologic mapping was begun in 1980 as part of a program to assess the mineral resource potential of Federal lands under the jurisdiction of the U.S. Bureau of Land Management (Light and others, 1983). Mapping responsibilities were as follows: Proterozoic and Mesozoic rocks, K.A. Howard; dikes, J.K. Nakata; Miocene section, J.E. Nielson; and surficial deposits, H.G. Wilshire. Earlier geologic mapping includes reconnaissance mapping by Wilson and Moore (1959). The present series of investigations has resulted in reports on the crystalline rocks and structure (Howard and others, 1982a), dikes (Nakata, 1982), Tertiary stratigraphy (Pike and Hansen, 1982; Nielson, 1986; Nielson and Beratan, 1990), surficial deposits (Wilshire and Reneau, 1992), tectonics (Howard and John, 1987; Beratan and others, 1990), geophysics (Simpson and others, 1986), mineralization (Light and McDonnell, 1983; Light and others, 1983), field guides (Nielson, 1986; Howard and others, 1987), and geochronology (Nakata and others, 1990; Foster and others, 1990).
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-17
.../energy/wind/mohave.html . The Project is proposed to consist of up to 283 turbines, access roads, and... be supplemented with internal access/service roads to each wind turbine. Proposed ancillary... all action alternatives, Project features within the wind-farm site would include turbines aligned...
Sen. Reid, Harry [D-NV]
2011-09-23
Senate - 09/23/2011 Submitted in the Senate, considered, and agreed to without amendment and with a preamble by Unanimous Consent. (All Actions) Tracker: This bill has the status Agreed to in Senate.
NASA Astrophysics Data System (ADS)
Leblanc, T.; McDermid, I. S.; Pérot, K.
2010-12-01
Ozone and water vapor signatures of a stratospheric intrusion were simultaneously observed by the Jet Propulsion Laboratory lidars located at Table Mountain Facility, California (TMF, 34.4°N, 117.7°W) during the Measurements of Humidity in the Atmosphere and Validation Experiments (MOHAVE-2009) campaign in October 2009. These observations are placed in the context of the meridional displacement and folding of the tropopause, and the resulting contrast in the properties of the air masses sampled by lidar. The lidar observations are supported by model data, specifically potential vorticity fields advected by the high-resolution transport model MIMOSA, and by 10-day backward isentropic trajectories. The ozone and water vapor anomalies measured by lidar were largely anti-correlated, consistent with the assumption of a wet and ozone-poor subtropical upper troposphere and a dry and ozone-rich extra-tropical lowermost stratosphere. However, it is shown that this anti-correlation collapsed just after the stratospheric intrusion event of October 20, suggesting mixed air embedded along the subtropical jet stream and sampled by lidar during its displacement south of TMF (tropopause fold). The expected positive ozone-PV correlation held strongly throughout the measurement period, including when a lower polar stratospheric filament passed over TMF just after the stratospheric intrusion. The numerous highly correlated signatures observed during this event demonstrate the strong capability of the water vapor and ozone lidars at TMF, and provide new confidence in the future detection by lidar of long-term variability of water vapor and ozone in the Upper Troposphere-Lower Stratosphere (UTLS).
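The anti-correlation test described in this abstract amounts to computing a correlation coefficient between co-located ozone and water vapor anomaly series. The sketch below is a minimal illustration with hypothetical synthetic anomalies, not the study's data or code.

```python
# Minimal sketch of the ozone / water vapor anomaly anti-correlation check,
# using hypothetical co-located anomaly time series.
import numpy as np

rng = np.random.default_rng(0)
wv_anom = rng.normal(size=200)                            # hypothetical water vapor anomalies
o3_anom = -0.8 * wv_anom + 0.3 * rng.normal(size=200)     # roughly anti-correlated ozone anomalies

r = np.corrcoef(wv_anom, o3_anom)[0, 1]
print(f"Pearson correlation: {r:.2f}")                    # strongly negative -> anti-correlated air masses
```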
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-27
... squirrel was found to consume leaves of annual and perennial plants, their fruits and seeds, fungi, and... mechanisms; or (e) Other natural or manmade factors affecting its continued existence. (4) Information on... affect the conservation of the Mohave ground squirrel. (6) Information on the population status of...
NASA Technical Reports Server (NTRS)
Stiller, Gabrielle; Kiefer, M.; Eckert, E.; von Clarmann, T.; Kellmann, S.; Garcia-Comas, M.; Funke, B.; Leblanc, T.; Fetzer, E.; Froidevaux, L.;
2012-01-01
MIPAS observations of temperature, water vapor, and ozone in October 2009, derived with the scientific level-2 processor run by the Karlsruhe Institute of Technology (KIT) Institute for Meteorology and Climate Research (IMK) and the CSIC Instituto de Astrofisica de Andalucia (IAA) from version 4.67 level-1b data, have been compared to co-located field campaign observations obtained during the MOHAVE-2009 campaign at the Table Mountain Facility near Pasadena, California, in October 2009. The MIPAS measurements were validated with respect to potential profile biases and to their precision estimates. The MOHAVE-2009 measurement campaign provided measurements of atmospheric profiles of temperature, water vapor/relative humidity, and ozone from the ground to the mesosphere by a suite of instruments including radiosondes, ozonesondes, frost point hygrometers, lidars, microwave radiometers, and Fourier transform infrared (FTIR) spectrometers. For MIPAS temperatures (version V4O_T_204), no significant bias was detected in the middle stratosphere; between 22 km and the tropopause MIPAS temperatures were found to be biased low by up to 2 K, while below the tropopause they were found to be too high by the same amount. These findings confirm earlier comparisons of MIPAS temperatures to ECMWF data, which revealed similar differences. From 12 km up to 45 km, MIPAS water vapor (version V4O_H2O_203) is well within 10% of the data of all correlative instruments. The well-known dry bias of MIPAS water vapor above 50 km, due to neglect of non-LTE effects in the current retrievals, has been confirmed. Some instruments indicate that MIPAS water vapor might be biased high by 20 to 40% around 10 km (or 5 km below the tropopause), but a consistent picture from all comparisons could not be derived. MIPAS ozone (version V4O_O3_202) has a high bias of up to +0.9 ppmv around 37 km, which is due to an unidentified continuum-like radiance contribution. No further significant biases have been detected. Cross-comparison to co-located observations of other satellite instruments (Aura/MLS, ACE-FTS, AIRS) is provided as well.
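Bias assessments of this kind typically reduce to the mean relative difference between the satellite and a correlative instrument within an altitude layer, together with a simple significance check. The sketch below illustrates that calculation with hypothetical arrays; it is not the MIPAS validation code.

```python
# Minimal sketch of a layer-mean relative bias with a 2-sigma standard-error
# significance check. All arrays are hypothetical placeholders.
import numpy as np

def layer_bias(sat, ref, alt, zmin, zmax):
    """Mean relative difference (%) of sat vs ref within [zmin, zmax] km,
    returned with twice its standard error of the mean."""
    sel = (alt >= zmin) & (alt <= zmax)
    rel = 100.0 * (sat[:, sel] - ref[:, sel]) / ref[:, sel]   # shape: (profiles, levels)
    return rel.mean(), 2.0 * rel.std(ddof=1) / np.sqrt(rel.size)

rng = np.random.default_rng(1)
alt = np.arange(10.0, 46.0, 1.0)                                        # km
ref = 4.5 + 0.02 * (alt - 10.0) + rng.normal(0, 0.1, (30, alt.size))    # ppmv, "correlative" profiles
sat = ref * 1.03 + rng.normal(0, 0.2, ref.shape)                        # ~3% high-biased "satellite"
bias, err = layer_bias(sat, ref, alt, 12.0, 45.0)
print(f"bias = {bias:.1f}% +/- {err:.1f}%  (significant if |bias| > error)")
```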
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-02
...; AZA32315] Notice of Segregation of Public Lands in the State of Arizona Associated With the Proposed Mohave... material sales acts, for a period of 2 years. This segregation is being made in connection with the BLM's... County Wind Farm Project (Proposed Project). This segregation covers approximately 38,016.60 acres of BLM...
Pearthree, Philip; House, P. Kyle
2014-01-01
Geologic investigations of late Miocene–early Pliocene deposits in Mohave and Cottonwood valleys provide important insights into the early evolution of the lower Colorado River system. In the latest Miocene these valleys were separate depocenters; the floor of Cottonwood Valley was ∼200 m higher than the floor of Mohave Valley. When Colorado River water arrived from the north after 5.6 Ma, a shallow lake in Cottonwood Valley spilled into Mohave Valley, and the river then filled both valleys to ∼560 m above sea level (asl) and overtopped the bedrock divide at the southern end of Mohave Valley. Sediment-starved water spilling to the south gradually eroded the outlet as siliciclastic Bouse deposits filled the lake upstream. When sediment accumulation reached the elevation of the lowering outlet, continued erosion of the outlet resulted in recycling of stored lacustrine sediment into downstream basins; depth of erosion of the outlet and upstream basins was limited by the water levels in downstream basins. The water level in the southern Bouse basin was ∼300 m asl (modern elevation) at 4.8 Ma. It must have drained and been eroded to a level <150 m asl soon after that to allow for deep erosion of bedrock divides and basins upstream, leading to removal of large volumes of Bouse sediment prior to massive early Pliocene Colorado River aggradation. Abrupt lowering of regional base level due to spilling of a southern Bouse lake to the Gulf of California could have driven observed upstream river incision without uplift. Rapid uplift of the entire region immediately after 4.8 Ma would have been required to drive upstream incision if the southern Bouse was an estuary.
Lake Mohave Geophysical Survey 2002: GIS Data Release
Cross, VeeAnn A.; Foster, David S.; Twichell, David C.
2005-01-01
This CD-ROM contains sidescan-sonar imagery, sub-bottom reflection profiles, and an interpretive map derived from these data. The data were collected in Lake Mohave, a reservoir behind Davis Dam and below Hoover Dam on the Colorado River. The data are viewable within an Environmental Systems Research Institute, Inc. (ESRI) Geographic Information System (GIS) ArcView 3.2 project file stored on this CD-ROM.
Cantú, Esteban; Mallela, Sahiti; Nyguen, Matthew; Báez, Raúl; Parra, Victoria; Johnson, Rachel; Wilson, Kyle; Suntravat, Montamas; Lucena, Sara; Rodríguez-Acosta, Alexis; Sánchez, Elda E.
2016-01-01
Snake venoms are known to differ in composition and toxicity, but differences can also be found within populations of the same species, contributing to the complexity of treating envenomated victims. One of the first well-documented intraspecies venom variations comes from the Mohave rattlesnake (Crotalus scutulatus scutulatus). Initially, three venom types were described: type A venom is the most toxic, owing to ~45% Mojave toxin in its composition; type B lacks the Mojave toxin but contains over 50% snake venom metalloproteases (SVMPs); and type A+B venom contains a combination of Mojave toxin and SVMPs. An anti-disintegrin antibody in a simple Enzyme-Linked Immunosorbent Assay (ELISA) can be used to distinguish the venoms of type A, B, and A+B Mohave rattlesnakes. This study uses an anti-recombinant disintegrin polyclonal antibody (ARDPA) for the detection of disintegrins and ADAMs (a disintegrin and metalloproteinase) in individual crude snake venoms of Mohave rattlesnakes (Crotalus scutulatus scutulatus) from varying geographical locations. After correlation with Western blots, coagulation activity, and LD50 data, it was determined that the antibody allows for quick and cost-efficient identification of venom types. PMID:27989783
A synthesis of aquatic science for management of Lakes Mead and Mohave
Rosen, Michael R.; Turner, Kent; Goodbred, Steven L.; Miller, Jennell M.
2012-01-01
Lakes Mead and Mohave, which are the centerpieces of Lake Mead National Recreation Area, provide many significant benefits that have made the modern development of the Southwestern United States possible. Lake Mead is the largest reservoir by volume in the nation and it supplies critical storage of water supplies for more than 25 million people in three Western States (California, Arizona, and Nevada). Storage within Lake Mead supplies drinking water and the hydropower to provide electricity for major cities including Las Vegas, Phoenix, Los Angeles, Tucson, and San Diego, and irrigation of more than 2.5 million acres of croplands. Lake Mead is arguably the most important reservoir in the nation because of its size and the services it delivers to the Western United States. This Circular includes seven chapters. Chapter 1 provides a short summary of the overall findings and management implications for Lakes Mead and Mohave that can be used to guide the reader through the rest of the Circular. Chapter 2 introduces the environmental setting and characteristics of Lakes Mead and Mohave and provides a brief management context of the lakes within the Colorado River system as well as overviews of the geological bedrock and sediment accumulations of the lakes. Chapter 3 contains summaries of the operational and hydrologic characteristics of Lakes Mead and Mohave. Chapter 4 provides information on water quality, including discussion on the monitoring of contaminants and sediments within the reservoirs. Chapter 5 describes aquatic biota and wildlife, including food-web dynamics, plankton, invertebrates, fish, aquatic birds, and aquatic vegetation. Chapter 6 outlines threats and stressors to the health of Lake Mead aquatic ecosystems that include a range of environmental contaminants, invasive species, and climate change. Chapter 7 provides a more detailed summary of overall findings that are presented in Chapter 1; and it contains a more detailed discussion on associated management implications, additional research, and monitoring needs.
Geologic map of the Callville Bay Quadrangle, Clark County, Nevada, and Mohave County, Arizona
Anderson, R. Ernest
2003-01-01
Report: 139 Map Scale: 1:24,000 Map Type: colored geologic map A 1:24,000-scale, full-color geologic map and four cross sections of the Callville Bay 7.5-minute quadrangle in Clark County, Nevada, and Mohave County, Arizona. An accompanying text describes 21 stratigraphic units of Paleozoic and Mesozoic sedimentary rocks and 40 units of Cenozoic sedimentary, volcanic, and intrusive rocks. It also discusses the structural setting, framework, and history of the quadrangle and presents a model for its tectonic development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, Elda E.; Galan, Jacob A.; Russell, William K.
2006-04-01
Disintegrins and disintegrin-like proteins are molecules found in the venom of four snake families (Atractaspididae, Elapidae, Viperidae, and Colubridae). The disintegrins are nonenzymatic proteins that inhibit cell-cell interactions, cell-matrix interactions, and signal transduction, and may have potential in the treatment of strokes, heart attacks, cancers, and osteoporosis. Prior to 1983, the venom of Crotalus scutulatus scutulatus (Mohave Rattlesnake) was known to be only neurotoxic; however, now there is evidence that these snakes can contain venom with: (1) neurotoxins; (2) hemorrhagins; and (3) both neurotoxins and hemorrhagins. In this study, two disintegrins, mojastin 1 and mojastin 2, from the venom of a Mohave rattlesnake collected in central Arizona (Pinal County), were isolated and characterized. The disintegrins in these venoms were identified by matrix-assisted laser desorption ionization/time-of-flight/time-of-flight (MALDI/TOF/TOF) mass spectrometry as having masses of 7.436 and 7.636 kDa. Their amino acid sequences are similar to crotatroxin, a disintegrin isolated from the venom of the western diamondback rattlesnake (C. atrox). The amino acid sequence of mojastin 1 was identical to the amino acid sequence of a disintegrin isolated from the venom of the Timber rattlesnake (C. horridus). The disintegrins from the Mohave rattlesnake venom were able to inhibit ADP-induced platelet aggregation in whole human blood, both having IC50 values of 13.8 nM, but were not effective in inhibiting the binding of human urinary bladder carcinoma cells (T24) to fibronectin.
Gasoline-Related Compounds in Lakes Mead and Mohave, Nevada, 2004-06
Lico, Michael S.; Johnson, B. Thomas
2007-01-01
The distribution of man-made organic compounds, specifically gasoline-derived compounds, was investigated from 2004 to 2006 in Lakes Mead and Mohave and one of their tributary streams, Las Vegas Wash. Compounds contained in raw gasoline (benzene, toluene, ethylbenzene, and xylenes; also known as BTEX compounds) and those produced during combustion of gasoline (polycyclic aromatic hydrocarbon compounds; also known as PAH compounds) were detected at every site sampled in Lakes Mead and Mohave. Water-quality analyses of samples collected during 2004-06 indicate that motorized watercraft are the major source of these organic compounds to the lakes. Concentrations of BTEX increase as the boating season progresses and decrease to less than detectable levels during the winter when few boats are on the water. Volatilization and microbial degradation most likely are the primary removal mechanisms for BTEX compounds in the lakes. Concentrations of BTEX compounds were highest at sampling points near marinas or popular launching areas. Methyl tert-butyl ether (MTBE) was detected during 2004, but concentrations decreased to less than the detection level during the latter part of the study, most likely due to the removal of MTBE from gasoline purchased in California. Distribution of PAH compounds was similar to that of BTEX compounds, in that concentrations were highest at popular boating areas and lowest in areas where fewer boats traveled. PAH concentrations were highest at Katherine Landing and North Telephone Cove in Lake Mohave, where many personal watercraft with carbureted two-stroke engines ply the waters. Lake-bottom sediment is not a sink for PAH compounds, as indicated by the low concentrations detected in sediment samples from both lakes. PAH compounds most likely are removed from the lakes by photochemical degradation. PAH compounds in Las Vegas Wash, which drains the greater Las Vegas metropolitan area, were present in relatively high concentrations in sediment from the upstream reaches. Concentrations of PAH compounds were low in water and sediment samples collected farther downstream, thus the bottom sediment in the upstream part of the wash may be an effective trap for these compounds. Bioavailable PAH compounds were present in all samples as determined using the Fluoroscan method. Microtox acute toxicity profiles indicated that Callville Bay in Lake Mead and the two Lake Mohave sites had only minor evidence that toxic compounds are present.
Numerical Simulations of Airflows and Tracer Transport in the Southwestern United States.
NASA Astrophysics Data System (ADS)
Yamada, Tetsuji
2000-03-01
Project MOHAVE (Measurement of Haze and Visual Effects) produced a unique set of tracer data over the southwestern United States. During the summer of 1992, a perfluorocarbon tracer gas was released from the Mohave Power Project (MPP), a large coal-fired facility in southern Nevada. Three-dimensional atmospheric models, the Higher-Order Turbulence Model for Atmospheric Circulation-Random Puff Transport and Diffusion (HOTMAC-RAPTAD), were used to simulate the concentrations of tracer gas that were observed during a portion of the summer intensive period of Project MOHAVE. The study area extended from northwestern Arizona to southern Nevada and included Lake Mead, the Colorado River Valley, the Grand Canyon National Park, and MPP. The computational domain was 368 km in the east-west direction by 252 km in the north-south direction. Rawinsonde and radar wind profiler data were used to provide initial and boundary conditions to HOTMAC simulations. HOTMAC with a horizontal grid spacing of 4 km was able to simulate the diurnal variations of drainage and upslope flows along the Grand Canyon and Colorado River Valley. HOTMAC also captured the diurnal variations of turbulence, which played important roles for the transport and diffusion simulations by RAPTAD. The modeled tracer gas concentrations were compared with observations. The model's performance was evaluated statistically.
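Statistical evaluation of modeled tracer concentrations against observations, as described above, typically relies on a few standard dispersion-model performance metrics. The sketch below computes three common ones (fractional bias, normalized mean square error, and the fraction of predictions within a factor of two) from hypothetical paired values; it is not the HOTMAC-RAPTAD evaluation code.

```python
# Minimal sketch of common dispersion-model performance statistics for paired
# observed/modeled tracer concentrations. The arrays are hypothetical.
import numpy as np

def performance_stats(obs, mod):
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    fb = 2.0 * (obs.mean() - mod.mean()) / (obs.mean() + mod.mean())   # fractional bias
    nmse = np.mean((obs - mod) ** 2) / (obs.mean() * mod.mean())       # normalized mean square error
    valid = (obs > 0) & (mod > 0)
    ratio = mod[valid] / obs[valid]
    fac2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))                    # fraction within a factor of 2
    return fb, nmse, fac2

obs = [1.2, 0.8, 2.5, 0.3, 1.9]   # hypothetical observed tracer concentrations
mod = [1.0, 1.1, 2.0, 0.5, 2.4]   # hypothetical modeled concentrations
print("FB=%.2f  NMSE=%.2f  FAC2=%.2f" % performance_stats(obs, mod))
```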
NASA Technical Reports Server (NTRS)
Whiteman, D. N.; Cadirola, M.; Venable, D.; Calhoun, M.; Miloshevich, L; Vermeesch, K.; Twigg, L.; Dirisu, A.; Hurst, D.; Hall, E.;
2012-01-01
The MOHAVE-2009 campaign brought together diverse instrumentation for measuring atmospheric water vapor. We report on the participation of the ALVICE (Atmospheric Laboratory for Validation, Interagency Collaboration and Education) mobile laboratory in the MOHAVE-2009 campaign. In appendices we also report on the performance of the corrected Vaisala RS92 radiosonde measurements during the campaign, on a new radiosonde-based calibration algorithm that reduces the influence of atmospheric variability on the derived calibration constant, and on other results of the ALVICE deployment. The MOHAVE-2009 campaign permitted the participating Raman lidar systems to discover and address measurement biases in the upper troposphere and lower stratosphere. The ALVICE lidar system was found to possess a wet bias, which was attributed to fluorescence of insect material deposited on the telescope early in the mission. Other sources of wet biases are discussed and data from other Raman lidar systems are investigated, revealing that wet biases in upper tropospheric (UT) and lower stratospheric (LS) water vapor measurements appear to be quite common in Raman lidar systems. Lower stratospheric climatology of water vapor is investigated both as a means to check for the existence of these wet biases in Raman lidar data and as a source of correction for the bias. A correction technique is derived and applied to the ALVICE lidar water vapor profiles. Good agreement is found between corrected ALVICE lidar measurements and those of the RS92, frost-point hygrometer, and total column water. The correction is offered as a general method both to quality-control Raman water vapor lidar data and to correct data that have a signal-dependent bias. The influence of the correction is shown to be small in regions of the upper troposphere where recent work indicates detection of trends in atmospheric water vapor may be most robust. The correction shown here holds promise for permitting useful upper tropospheric water vapor profiles to be consistently measured by Raman lidar within NDACC (Network for the Detection of Atmospheric Composition Change) and elsewhere, despite the prevalence of instrumental and atmospheric effects that can contaminate the very low signal-to-noise measurements in the UT.
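One generic way to remove a signal-dependent wet bias of the kind discussed above is to model the contamination as a constant spurious count offset in the H2O channel, estimate that offset where the true water vapor is approximately known (e.g., a climatological lower-stratospheric value), and subtract it at all altitudes. The sketch below illustrates that idea under those stated assumptions with hypothetical inputs; it is not the correction algorithm derived in the paper.

```python
# Minimal sketch of removing a constant spurious count offset from a Raman
# lidar water vapor profile, anchored to a climatological lower-stratospheric
# mixing ratio. All inputs are hypothetical placeholders.
import numpy as np

def correct_wet_bias(q_lidar, s_n2, calib_const, q_clim_ls, ls_mask):
    """q_lidar = C * s_h2o / s_n2; assume s_h2o = s_true + offset with a constant
    spurious offset. Solve for the offset in the lower stratosphere, remove it."""
    offset = np.mean((q_lidar[ls_mask] - q_clim_ls) * s_n2[ls_mask]) / calib_const
    return q_lidar - calib_const * offset / s_n2

alt = np.arange(3.0, 20.0, 0.15)                          # km, hypothetical grid
s_n2 = 2.0e5 * np.exp(-alt / 7.0)                         # hypothetical N2 channel counts
q_true = 5000.0 * np.exp(-alt / 2.0) + 4.5                # ppmv
calib = 150.0
q_lidar = q_true + calib * 30.0 / s_n2                    # constant 30-count wet contamination
q_corr = correct_wet_bias(q_lidar, s_n2, calib, q_clim_ls=4.5, ls_mask=alt > 17.0)
```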
NASA Astrophysics Data System (ADS)
White, W. H.; Farber, R. J.; Malm, W. C.; Nuttall, M.; Pitchford, M. L.; Schichtel, B. A.
2012-08-01
Few electricity generating stations received more environmental scrutiny during the last quarter of the twentieth century than did the Mohave Power Project (MPP), a coal-fired facility near Grand Canyon National Park. Terhorst and Berkman (2010) examine regional aerosol monitoring data collected before and after the plant's 2006 retirement for retrospective evidence of MPP's impact on visibility in the Park. The authors' technical analysis is thoughtfully conceived and executed, but is misleadingly presented as discrediting previous studies and their interpretation by regulators. In reality the Terhorst-Berkman analysis validates a consensus on MPP's visibility impact that was established years before its closure, in a collaborative assessment undertaken jointly by Federal regulators and MPP's owners.
NASA Astrophysics Data System (ADS)
Leblanc, T.; Walsh, T. D.; McDermid, I. S.; Toon, G. C.; Blavier, J.-F.; Haines, B.; Read, W. G.; Herman, B.; Fetzer, E.; Sander, S.; Pongetti, T.; Whiteman, D. N.; McGee, T. G.; Twigg, L.; Sumnicht, G.; Venable, D.; Calhoun, M.; Dirisu, A.; Hurst, D.; Jordan, A.; Hall, E.; Miloshevich, L.; Vömel, H.; Straub, C.; Kampfer, N.; Nedoluha, G. E.; Gomez, R. M.; Holub, K.; Gutman, S.; Braun, J.; Vanhove, T.; Stiller, G.; Hauchecorne, A.
2011-05-01
The Measurements of Humidity in the Atmosphere and Validation Experiment (MOHAVE) 2009 campaign took place on 11-27 October 2009 at the JPL Table Mountain Facility in California (TMF). The main objectives of the campaign were to (1) validate the water vapor measurements of several instruments, including three Raman lidars, two microwave radiometers, two Fourier-Transform spectrometers, and two GPS receivers (column water), (2) cover water vapor measurements from the ground to the mesopause without gaps, and (3) study upper tropospheric humidity variability at timescales varying from a few minutes to several days. A total of 58 radiosondes and 20 Frost-Point hygrometer sondes were launched. Two types of radiosondes were used during the campaign. Non-negligible differences in the readings between the two radiosonde types used (Vaisala RS92 and InterMet iMet-1) made a small but measurable impact on the derivation of water vapor mixing ratio by the Frost-Point hygrometers. As observed in previous campaigns, the RS92 humidity measurements remained within 5% of the Frost-point in the lower and mid-troposphere, but were too dry in the upper troposphere. Over 270 h of water vapor measurements from three Raman lidars (JPL and GSFC) were compared to RS92, CFH, and NOAA-FPH. The JPL lidar profiles reached 20 km when integrated all night, and 15 km when integrated for 1 h. Excellent agreement between this lidar and the frost-point hygrometers was found throughout the measurement range, with only a 3% (0.3 ppmv) mean wet bias for the lidar in the upper troposphere and lower stratosphere (UTLS). The other two lidars provided satisfactory results in the lower and mid-troposphere (2-5% wet bias over the range 3-10 km), but suffered from contamination by fluorescence (wet bias ranging from 5 to 50% between 10 km and 15 km), preventing their use as an independent measurement in the UTLS. The comparison between all available stratospheric sounders allowed only the largest biases to be identified, in particular a 10% dry bias of the Water Vapor Millimeter-wave Spectrometer compared to the Aura Microwave Limb Sounder. No other large, or at least statistically significant, biases could be observed. Total Precipitable Water (TPW) measurements from six different co-located instruments were available. Several retrieval groups provided their own TPW retrievals, resulting in the comparison of 10 different datasets. Agreement within 7% (0.7 mm) was found between all datasets. Such good agreement illustrates the maturity of these measurements and raises confidence in their use as an alternate or complementary source of calibration for the Raman lidars. Tropospheric and stratospheric ozone and temperature measurements were also available during the campaign. The water vapor and ozone lidar measurements, together with the advected potential vorticity results from the high-resolution transport model MIMOSA, allowed the identification and study of a deep stratospheric intrusion over TMF. These observations demonstrated the strong potential of lidar for future long-term monitoring of water vapor in the UTLS.
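The suggestion above that TPW measurements could serve as a calibration source for the Raman lidars amounts to scaling the lidar profile so that its column-integrated precipitable water matches a co-located TPW reference. The sketch below illustrates that idea with hypothetical arrays and a hypothetical GPS TPW value; it is a simplified illustration, not the campaign's calibration procedure.

```python
# Minimal sketch of tying a Raman lidar water vapor profile to a co-located
# TPW measurement: scale the uncalibrated profile so its integrated
# precipitable water matches the reference. Inputs are hypothetical.
import numpy as np

def precipitable_water_mm(dz_m, q_kg_per_kg, rho_air):
    """Column precipitable water (mm) from a mixing ratio profile on a uniform grid."""
    return np.sum(q_kg_per_kg * rho_air) * dz_m   # kg m-2 is equivalent to mm of liquid water

alt = np.arange(0.0, 12000.0, 100.0)              # m, uniform 100 m grid
rho_air = 1.2 * np.exp(-alt / 8500.0)             # kg m-3, crude scale-height model
q_uncal = 0.008 * np.exp(-alt / 2000.0)           # uncalibrated lidar mixing ratio shape (kg/kg)

tpw_gps = 14.2                                    # mm, hypothetical GPS TPW value
scale = tpw_gps / precipitable_water_mm(100.0, q_uncal, rho_air)
q_cal = scale * q_uncal                           # lidar profile tied to the TPW reference
```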
Mohave Valley Land Conveyance Act of 2011
Sen. McCain, John [R-AZ]
2011-03-09
Senate - 01/13/2012 Placed on Senate Legislative Calendar under General Orders. Calendar No. 274. (All Actions) Tracker: This bill has the status Introduced.
NASA Technical Reports Server (NTRS)
Beratan, K. K.; Blom, R. G.; Crippen, R. E.; Nielson, J. E.
1990-01-01
Enhanced Landsat TM images were used in conjunction with field work to investigate the regional correlation of Miocene rocks in the Colorado River extensional corridor of California and Arizona. Based on field investigations, four sequences of sedimentary and volcanic strata could be recognized in the Mohave Mountains (Arizona) and the eastern Whipple Mountains (California), which display significantly different relative volumes and organization of lithologies. The four sequences were also found to have distinctive appearances on the TM image. The recognition criteria derived from field mapping and image interpretation in the Mohave Mountains and Whipple Mountains were applied to an adjacent area in which stratigraphic affinities were less well known. The results of subsequent field work confirmed the stratigraphic and structural relations suggested by the TM image analysis.
40 CFR 52.121 - Classification of regions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Intrastate (Mohave, Yuma) I III III III III Central Arizona Intrastate (Gila, Pinal) I IA III III III Southeast Arizona Intrastate (Cochise, Graham, Greenlee, Santa Cruz) I IA III III III [45 FR 67345, Oct. 10...
40 CFR 52.121 - Classification of regions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Intrastate (Mohave, Yuma) I III III III III Central Arizona Intrastate (Gila, Pinal) I IA III III III Southeast Arizona Intrastate (Cochise, Graham, Greenlee, Santa Cruz) I IA III III III [45 FR 67345, Oct. 10...
Mohave Valley Land Conveyance Act of 2010
Sen. McCain, John [R-AZ]
2010-07-12
Senate - 09/29/2010 Committee on Energy and Natural Resources Subcommittee on Public Lands and Forests. Hearings held. (All Actions) Tracker: This bill has the status Introduced.
Sun Glint from Solar Electric Generating Stations
2004-05-26
These images, from 8 April 2003, show that, depending upon the position of the Sun, the solar power stations in California's Mohave Desert can reflect solar energy from their large, mirror-like surfaces directly toward one of NASA's Terra cameras.
Williams, Van S.
1996-01-01
Original geologic data mapped by the author in 1995 and 1996 with emphasis on structures in Miocene basin-fill deposits of the Muddy Creek Formation that may control availability and quality of groundwater.
ACHP | Federal Register Notice
Historic Preservation Formal Comments Regarding the Bureau of Land Management's Mohave Valley Shooting Disposal near Bullhead City, Arizona. SUMMARY: The Advisory Council on Historic Preservation is soliciting public comment in preparation for issuing formal comments, under the National Historic Preservation Act
Enright, Michael
1996-01-01
The hydrologic data in this report were collected in Beaver Dam Wash and adjacent areas of Washington County, Utah, Lincoln County, Nevada, and Mohave County, Arizona, from 1991 to 1995; some historical data from as far back as 1932 are included for comparative purposes. The data include records of about 100 wells, drillers' and geologic logs of selected wells, and results of chemical analyses of water from wells, springs, and surface-water sites. Discharge, water temperature, and specific-conductance measurements are reported for 33 surface-water and spring sites. Daily mean discharge data are reported for two U.S. Geological Survey streamflow-gaging stations on Beaver Dam Wash (1992-95). The data were collected as part of a study done by the U.S. Geological Survey in cooperation with the Utah Department of Natural Resources, Division of Water Resources; the Nevada Department of Conservation and Natural Resources; and the Arizona Department of Water Resources.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-16
... DEPARTMENT OF ENERGY Western Area Power Administration Notice of Cancellation of Environmental Impact Statement for the Interconnection of the Hualapai Valley Solar Project, Mohave County, AZ (DOE/EIS... Impact Statement. SUMMARY: The U.S. Department of Energy (DOE), Western Area Power Administration...
78 FR 29131 - Environmental Impacts Statements; Notice of Availability
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-17
...-202- 3960. EIS No. 20130126, Final EIS, BLM, AZ, Mohave County Wind Farm Project, Review Period Ends..., Contact: Doug Grupenhoff 406-827-0741. EIS No. 20130129, Draft EIS, USA, TX, Implementation of Energy, Water, and Solid Waste Sustainability Initiatives at Fort Bliss, Texas and New Mexico, Comment Period...
Interactive Televised Instruction: Factors To Consider.
ERIC Educational Resources Information Center
Hall, Charles W.
For the first 2 years of operation, the Instructional Television Services (ITS) at Mohave Community College, in Arizona, operated in a very traditional manner, utilizing two cameras and an operator at each site. To increase the efficiency of the television services, surveillance cameras were installed at sites and were operated from the district…
Native Americans: 23 Indian Biographies.
ERIC Educational Resources Information Center
Axford, Roger W.
The lives and careers of 24 contemporary American Indians, including Dr. Louis W. Ballard (musician and composer, Cherokee and Sioux); Charles Banks Wilson (artist and historian); Veronica L. Murdock (President of the National Congress of American Indians, Mohave); Peter MacDonald, Sr. (Chairman of the Navajo Tribal Council, Navajo); and Jim…
Processes and Planning Structure Required for Implementing a Collegewide Area Network.
ERIC Educational Resources Information Center
Lapenta, Susan; Lutz, Todd
Since 1984, Arizona's Mohave Community College (MCC) has implemented innovative educational technology to better serve students, including an instructional television system to serve remote locations and a distance learning program. In 1993, the college initiated a project to upgrade its technological capabilities through the establishment of a…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-04
... be supplemented with internal access/service roads to each wind turbine. Proposed ancillary... action alternatives, project features within the wind-farm site would include turbines aligned within... maximum of 283 turbines. The Alternative B wind-farm site would encompass approximately 30,872 acres of...
Chapter 2. The Intermountain setting
E. Durant McArthur; Sherel K. Goodrich
2004-01-01
This book is intended to assist range managers throughout the Intermountain West (fig. 1). The areas of greatest applicability are the Middle and Southern Rocky Mountains, Wyoming Basin, Columbia and Colorado Plateaus, and much of the basin and range physiographic provinces of Fenneman (1981) or about 14° latitude, from the Mohave, Sonoran, and Chihuahuan...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-20
... DEPARTMENT OF THE INTERIOR Bureau of Land Management [LLAZC01000.L14300000.ES0000.241A, AZA 34298... Public Land; Arizona AGENCY: Bureau of Land Management, Interior. ACTION: Notice of Realty Action. SUMMARY: The Mohave County Community College District (College) filed an application to lease/purchase...
Rep. Franks, Trent [R-AZ-2]
2010-02-26
House - 04/26/2010 Referred to the Subcommittee on Immigration, Citizenship, Refugees, Border Security, and International Law. (All Actions) Tracker: This bill has the status Introduced.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-25
..., recreational support facilities, visitors' center, and hiking trails. The Bureau of Land Management (BLM) has... DEPARTMENT OF THE INTERIOR Bureau of Land Management [LLAZC03000 L14300000.ES0000.241A; AZA-34593... Public Land, Mohave County, AZ AGENCY: Bureau of Land Management, Interior. ACTION: Notice of realty...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-08
.... 1715 (c), and thus made subject to BLM classification and planning requirements. The BLM Kingman RMP...] Notice of Realty Action; Recreation and Public Purposes Act Classification; and Notice of Intent To... classification approximately 1.31 acres of public land located in Mohave County, Arizona, and has found the...
A Proximate Biological Survey of San Diego Bay, California
1975-01-01
Ulothrix sp. (green algae) Ulva latissima (sea lettuce) Yucca schidigera (Mohave yucca) Zostera marina (eelgrass) B. Marine Invertebrates Porifera...Technical Director ADMINISTRATIVE INFORMATION The work reported here was performed by the Marine Environmental Management Office of the Naval...from military sources, will be eliminated by 1980, (4) A number of marine organisms, including commercially and recreationally important species, are
Mineral resources of the Mount Tipton Wilderness Study Area, Mohave County, Arizona
Greene, Robert C.; Turner, Robert L.; Jachens, Robert C.; Lawson, William A.; Almquist, Carl L.
1989-01-01
The Mount Tipton Wilderness Study Area (AZ-020-012/ 042) comprises 33,950 acres in Mohave County, Ariz. At the request of the U.S. Bureau of Land Management, this area was evaluated for identified mineral resources (known) and mineral resource potential (undiscovered). This work was carried out by the U.S. Bureau of Mines and the U.S. Geological Survey in 1984-87. In this report, the area studied is referred to as the "wilderness study area" or simply "the study area." There are no identified mineral resources in the study area. The southernmost part of the study area is adjacent to the Wallapai (Chloride) mining district and has low mineral resource potential for gold, silver, copper, lead, zinc, and molybdenum in hydrothermal veins. This area also has a low mineral resource potential for tungsten in vein deposits and for uranium in vein deposits or pegmatites. In the central part of the wilderness study area, one small area has low mineral resource potential for uranium in vein deposits or pegmatites and another small area has low resource potential for thorium in vein deposits. The entire study area has low resource potential for geothermal energy but no potential for oil or gas resources.
ERIC Educational Resources Information Center
Charles, Kayla D.; Sheaff, Shannon; Woods, Jann; Downey, Lisa
2016-01-01
Burgeoning student debt and the ability of programs to adequately prepare students for jobs that will allow them to repay that debt comprise a topic of great interest in the current higher education policy environment. A key accountability measure used by the Department of Education for more than two decades has been the student loan cohort…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-06
... state that includes a reduced frequency of breathing, or apnea, reduced oxygen consumption, reduced body... Vol. 76 Thursday, No. 194 October 6, 2011 Part III Department of the Interior Fish and Wildlife... INTERIOR Fish and Wildlife Service 50 CFR Part 17 [Docket No. FWS-R8-ES-2010-0006; 92210-1111-0000-B2...
ERIC Educational Resources Information Center
Fay, George E., Comp.
The Museum of Anthropology of the University of Northern Colorado (formerly known as Colorado State College) has assembled a large number of Indian tribal charters, constitutions, and by-laws to be reproduced as a series of publications. Included in this volume are the amended charter and constitution of the Jicarilla Apache Tribe, Dulce, New…
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-06-12
This article is a review of efforts to site a low-level waste repository in California. To resolve a long-running dispute, the US Government has agreed to deed to the state 1,000 acres of land in the Mohave Desert. It was noted that environmentalists are unhappy with this action, and the prime contractor (US Ecology) noted that the facility can be completed six to nine months after the legal challenges are resolved.
Pease, V.; Hillhouse, J.W.; Wells, R.E.
2005-01-01
Paleomagnetic data from Miocene (~20 Ma) volcanic rocks and dikes of west central Arizona reveal the tilt history of Proterozoic crystalline rocks in the hanging wall of the Chemehuevi-Whipple Mountains detachment fault. We obtained magnetization data from dikes and flows in two structural blocks encompassing Crossman Peak and Standard Wash in the Mohave Mountains. In the Crossman block the dike swarm records two components of primary magnetization: (1) CNH, a normal polarity, high-unblocking-temperature or high-coercivity component (inclination, I = 48.5°, declination, D = 6.4°), and (2) CRHm, a reversed polarity, high-temperature or high-coercivity component (I = -33.6°, D = 197.5°). Argon age spectra imply that the dikes have not been reheated above 300°C since their emplacement, and a baked-contact test suggests that the magnetization is likely to be Miocene in age. CRHm deviates from the expected direction of the Miocene axial dipole field and is best explained as a result of progressive tilting about the strike of the overlying andesite flows. These data suggest that the Crossman block was tilted 60° to the southwest prior to intrusion of the vertical dike swarm, and the block continued to tilt during a magnetic field reversal to normal polarity (CNH). Miocene dikes in the Crossman block are roughly coplanar, so the younger dikes with normal polarity magnetization intruded along planes of weakness parallel to the earlier reversed polarity swarm. An alternative explanation involves CNH magnetization being acquired later during hydrothermal alteration associated with the final stages of dike emplacement. In the Standard Wash block, the primary component of magnetization is a dual-polarity, high-temperature or high-coercivity component (SWHl, I = 7.2°, D = 0.7°). To produce agreement between the expected Miocene magnetic direction and the SWH component requires (1) correcting for a 56° tilt about the strike of flow bedding and (2) removing a counterclockwise vertical-axis rotation of 20°. The two rotations restore the Standard Wash dikes to vertical, make parallel the dike layering in the Crossman and Standard Wash blocks, and align the strikes of bedding in both blocks. Geologic mapping, geochemical evidence, and paleomagnetic data are consistent with the upper plate of the Mohave Mountains having tilted in response to formation of the underlying detachment fault.
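The structural corrections described above (untilting a direction about the strike of bedding, then removing a vertical-axis rotation) are standard vector rotations. The sketch below is a generic illustration of those two steps using Rodrigues' rotation formula; the strike azimuth and rotation signs are assumptions for illustration only, not values from the study.

```python
# Minimal sketch (not the authors' code) of paleomagnetic structural
# corrections: untilt a magnetization direction about a bedding-strike axis,
# then remove a vertical-axis rotation.
import numpy as np

def dir_to_vec(dec_deg, inc_deg):
    """Declination/inclination (degrees) -> unit vector (north, east, down)."""
    d, i = np.radians(dec_deg), np.radians(inc_deg)
    return np.array([np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)])

def vec_to_dir(v):
    """Unit vector (north, east, down) -> declination/inclination in degrees."""
    dec = np.degrees(np.arctan2(v[1], v[0])) % 360.0
    inc = np.degrees(np.arcsin(v[2] / np.linalg.norm(v)))
    return dec, inc

def rotate(v, axis, angle_deg):
    """Rodrigues rotation of vector v about a unit axis (right-hand rule)."""
    k = axis / np.linalg.norm(axis)
    a = np.radians(angle_deg)
    return v * np.cos(a) + np.cross(k, v) * np.sin(a) + k * np.dot(k, v) * (1.0 - np.cos(a))

# Illustration with the SWH numbers quoted above; strike direction is assumed
# to be due north here, and the untilt sign depends on the dip direction.
v = dir_to_vec(0.7, 7.2)                        # observed direction
strike_axis = dir_to_vec(0.0, 0.0)              # horizontal axis along an assumed north strike
v_untilted = rotate(v, strike_axis, -56.0)      # restore bedding to horizontal (sign is an assumption)
down = np.array([0.0, 0.0, 1.0])
v_restored = rotate(v_untilted, down, 20.0)     # rotate clockwise (viewed from above) to undo a 20 deg CCW rotation
print(vec_to_dir(v_restored))
```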
An Archaeological Sample Survey of the Alamo Reservoir Mohave and Yuma Counties, Arizona,
1977-09-01
large number may promote ecological degradation. Burro trails are numerous and well-used. The Bureau of Land Management has initiated a program...cultural ecological ftmmrk Pine! ly, the 3msa of Land Managment has pmepasd geal si rpots dealing with the archaeological resweas of the Knswar, Aqmusr...staghorn cholla, and ocotillo are also present. The site has been partially destroyed W highway contruction . An old jeep trail also crosses pen of the
Carson, Evan W; Turner, Thomas F; Saltzgiver, Melody J; Adams, Deborah; Kesner, Brian R; Marsh, Paul C; Pilger, Tyler J; Dowling, Thomas E
2016-11-01
As with many endangered, long-lived iteroparous fishes, survival of razorback sucker depends on a management strategy that circumvents recruitment failure that results from predation by non-native fishes. In Lake Mohave, AZ-NV, management of razorback sucker centers on capture of larvae spawned in the lake, rearing them in off-channel habitats, and subsequent release ("repatriation") to the lake when adults are sufficiently large to resist predation. The effects of this strategy on genetic diversity, however, remained uncertain. After correction for differences in sample size among groups, metrics of mitochondrial DNA (mtDNA; number of haplotypes, N_H, and haplotype diversity, H_D) and microsatellite (number of alleles, N_A, and expected heterozygosity, H_E) diversity did not differ significantly between annual samples of repatriated adults and larval year-classes or among pooled samples of repatriated adults, larvae, and wild fish. These findings indicate that the current management program has thus far maintained the historical genetic variation of razorback sucker in the lake. Because effective population size, N_e, is closely tied to the small census population size (N_c = ~1500-3000) of razorback sucker in Lake Mohave, this population will remain at genetic as well as demographic risk of extinction unless N_c is increased substantially. © The American Genetic Association 2016.
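The diversity metrics compared above (haplotype diversity H_D and expected heterozygosity H_E) are both forms of Nei's unbiased gene diversity. A minimal sketch with made-up haplotype counts follows; the study's sample-size correction (commonly done by rarefaction) is not shown.

```python
import numpy as np

def gene_diversity(counts):
    """Unbiased haplotype diversity H_D (or expected heterozygosity H_E for one
    locus): H = n/(n-1) * (1 - sum(p_i^2)), where p_i are haplotype/allele
    frequencies and n is the sample size (Nei 1987)."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p = counts / n
    return n / (n - 1.0) * (1.0 - np.sum(p ** 2))

# Hypothetical haplotype counts for a repatriated-adult sample and a larval
# year-class (illustrative numbers only, not values from the study).
print(round(gene_diversity([12, 7, 5, 3, 1]), 3))   # repatriated adults
print(round(gene_diversity([10, 9, 4, 4, 1]), 3))   # larvae
```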
Recovery of perennial vegetation in military target sites in the eastern Mohave Desert, Arizona
Steiger, John W.; Webb, Robert H.
2000-01-01
The effect of the age of geomorphic surfaces on the recovery of desert vegetation in military target sites was studied in the Mohave and Cerbat Mountains of northwestern Arizona. The target sites were cleared of all vegetation during military exercises in 1942-1943 and have not been subsequently disturbed. The degree of recovery was measured by calculating percentage-similarity (PS) and correlation-coefficient indices on the basis of differences in cover, density, and volume of species growing in and out of each target site. PS values, ranging from 22.7 to 95.1 percent (100 percent = identical composition), indicate a wide range of recovery that is partially controlled by the edaphic properties of the geomorphic surfaces. Statistical analyses show a strong pattern that indicates a greater variability in the degree of recovery for sites on older surfaces than on younger surfaces and a weak pattern that indicates an inverse relation between the degree of recovery and geomorphic age. Comparisons of the different effects of target-site construction on the edaphic characteristics of each target site provide an explanation for these patterns and suggest which soil properties are critical to the recovery process. Statistically significant negative or positive responses to disturbance for most species are independent of the age of the geomorphic surfaces; however, there is strong evidence for a shift in response for the common perennial species Acamptopappus sphaerocephalus, and to a lesser extent, Salazaria mexicana, Encelia farinosa, and Coldenia canescens, among different geomorphic surfaces.
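One common formulation of a percentage-similarity index is the Renkonen index, which compares relative abundances inside and outside a disturbed site; whether this is the exact PS formula used in the study is an assumption, and the cover values below are hypothetical.

```python
import numpy as np

def percentage_similarity(inside, outside):
    """Renkonen percentage similarity between two communities:
    100 * sum over species of min(p_i, q_i), where p and q are relative
    abundances (100 = identical composition)."""
    p = np.asarray(inside, dtype=float)
    q = np.asarray(outside, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return 100.0 * np.minimum(p, q).sum()

# Hypothetical per-species cover values inside vs. outside a target site.
inside  = [5.0, 2.0, 0.5, 0.0, 1.5]
outside = [4.0, 3.0, 1.0, 0.8, 1.2]
print(round(percentage_similarity(inside, outside), 1))
```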
Laney, R.L.
1981-01-01
The study is a geohydrologic reconnaissance of about 170 square miles in the Lake Mead National Recreation Area from Las Vegas Wash to Opal Mountain, Nevada. The study is one of a series that describes the geohydrology of the recreation area and that identifies areas where water supplies can be developed. Precipitation in this arid area is about 5 inches per year. Streamflow is seasonal and extremely variable except for that in the Colorado River, which adjoins the area. Pan evaporation is more than 20 times greater than precipitation; therefore, regional ground-water supplies are meager except near the Colorado River, Lake Mead, and Lake Mohave. Large ground-water supplies can be developed near the river and lakes, and much smaller supplies may be obtained in a few favorable locations farther from the river and lakes. Ground water in most of the areas probably contains more than 1,000 milligrams per liter of dissolved solids, but water that contains less than 1,000 milligrams per liter of dissolved solids can be obtained within about 1 mile of the lakes. Crystalline rocks of metamorphic, intrusive and volcanic origin crop out in the area. These rocks are overlain by conglomerate and mudstone of the Muddy Creek Formation, gravel and conglomerate of the older alluvium, and sand and gravel of the Chemehuevi Formation and younger alluvium. The crystalline rocks, where sufficiently fractured, yield water to springs and would yield small amounts of water to favorably located wells. The poorly cemented and more permeable beds of the older alluvium, Chemehuevi Formation, and younger alluvium are the better potential aquifers, particularly along the Colorado River and Lakes Mead and Mohave. Thermal springs in the gorge of the Colorado River south of Hoover Dam discharge at least 2,580 acre-feet per year of water from the volcanic rocks and metamorphic and plutonic rocks. The discharge is much greater than could be infiltrated in the drainage basin above the springs. Transbasin movement of ground water probably occurs, and perhaps the larger part of the spring discharge is underflow from Eldorado Valley. The more favorable sites for ground-water development are along the shores of Lakes Mead and Mohave and are the Fire Mountain, Opal Mountain to Aztec Wash, and Hemenway Wash sites. Wells yielding several hundred gallons per minute of water of acceptable chemical quality can be developed at these sites. (USGS)
Beryl-bearing pegmatites in the Ruby Mountains and other areas in Nevada and northwestern Arizona
Olson, Jerry C.; Hinrichs, E. Neal
1960-01-01
Pegmatite occurs widely in Nevada and northwestern Arizona, but little mining has been done for such pegmatite minerals as mica, feldspar, beryl, and lepidolite. Reconnaissance for beryl-bearing pegmatite in Nevada and in part of Mohave County, Ariz., and detailed studies in the Dawley Canyon area, Elko County, Nev., have shown that beryl occurs in at least 11 districts in the region. Muscovite has been prospected or mined in the Ruby and Virgin Mountains, Nev., and in Mohave County, Ariz. Feldspar has been mined in the southern part of the region near Kingman, Ariz., and in Clark County, Nev. The pegmatites in the region range in age from Precambrian to late Mesozoic or Tertiary. Among the pegmatite minerals found or reported in the districts studied are beryl, chrysoberyl, scheelite, wolframite, garnet, tourmaline, fluorite, apatite, sphene, allanite, samarskite, euxenite, gadolinite, monazite, autunite, columbite-tantalite, lepidolite, molybdenite, and pyrite and other sulfide minerals. The principal beryl-bearing pegmatites examined are in the Oreana and Lakeview (Humboldt Canyon) areas, Pershing County; the Dawley Canyon area in the Ruby Mountains, Elko County, Nev.; and on the Hummingbird claims in the Virgin Mountains, Mohave County, Ariz. Beryl has also been reported in the Marietta district, Mineral County; the Sylvania district, Esmeralda County; near Crescent Peak and near Searchlight, Clark County, Nev.; and in the Painted Desert near Hoover Dam, Mohave County, Ariz. Pegmatites are abundant in the Ruby Mountains, chiefly north of the granite stock at Harrison Pass. In the Dawley Canyon area of 2.6 square miles at least 350 pegmatite dikes more than 1 foot thick were mapped, and beryl was found in small quantities in at least 100 of these dikes. Four of these dikes exceed 20 feet in thickness, and 1 is 55 feet thick. A few pegmatites were also examined in the Corral Creek, Gilbert Canyon, and Hankins Canyon areas in the Ruby Mountains. The pegmatite dikes in the Dawley Canyon area intrude granite and metamorphic rocks which consist chiefly of quartzite and schist of probable Early Cambrian age. The granite is of two types: a biotite-muscovite granite that forms the main mass of the stock and albite granite that occurs in the metamorphic rocks near the borders of the stock. The pegmatites were emplaced chiefly along fractures in the granite and along schistosity or bedding planes in the metamorphic rocks. Many of the Dawley Canyon pegmatite dikes are zoned, having several rock units of contrasting mineralogy or grain size formed successively from the walls inward. Aplitic units occur either as zones or in irregular positions in the pegmatite dikes and are a distinctive feature of the Dawley Canyon pegmatites. Some of the aplitic and fine-grained pegmatite units are characterized by thin layers of garnet crystals, forming many parallel bands on outcrop surfaces. The occurrence of aplitic and pegmatitic textures in the same dike presumably indicates abrupt changes in physical-chemical conditions during crystallization, such as changes in viscosity and in content of volatile constituents. Concentrations of 0.1 percent or more beryl, locally more than 1 percent, occur in certain zones in the Dawley Canyon pegmatites.
Spectrographic analyses of 23 samples indicate that the BeO content ranges from 0.0017 to 0.003 percent in the albite granite, from 0.0013 to 0.039 percent in aplitic units in pegmatite, from 0.0005 to 0.10 percent in coarse-grained pegmatite, and from less than 0.0001 to 0.0004 percent in massive quartz veins. The scheelite-beryl deposits at Oreana and in Humboldt Canyon, Pershing County, are rich in beryllium. Twelve samples from the Lakeview (Humboldt Canyon) deposit range from 0.018 to 0.11 percent BeO, but underground crosscuts have failed to intersect similar rock at depth. Beryl locally constitutes as much as 10 percent of the pegmatitic ore at Oreana. The beryl was not recovered during tungsten mining at Oreana and is now in the tailings of the mill at Toulon, Nev. The percentage of beryl is lower than in the Oreana ore because of dilution by tailings from other ores milled at Toulon. Beryl has been found in many pegmatite dikes in the Virgin Mountains. Both beryl and chrysoberyl occur in dikes on the Hummingbird claims, north of Virgin Peak, in Mohave County, Ariz. Spectrographic analyses of 5 representative samples of the principal dike on the Hummingbird claims range from 0.055 to 0.11 percent BeO.
House, P.K.; Pearthree, P.A.; Perkins, M.E.
2008-01-01
Late Miocene and early Pliocene sediments exposed along the lower Colorado River near Laughlin, Nevada, contain evidence that establishment of this reach of the river after 5.6 Ma involved flooding from lake spillover through a bedrock divide between Cottonwood Valley to the north and Mohave Valley to the south. Lacustrine marls interfingered with and conformably overlying a sequence of post-5.6 Ma fine-grained valley-fill deposits record an early phase of intermittent lacustrine inundation restricted to Cottonwood Valley. Limestone, mud, sand, and minor gravel of the Bouse Formation were subsequently deposited above an unconformity. At the north end of Mohave Valley, a coarse-grained, lithologically distinct fluvial conglomerate separates subaerial, locally derived fan deposits from subaqueous deposits of the Bouse Formation. We interpret this key unit as evidence for overtopping and catastrophic breaching of the paleodivide immediately before deep lacustrine inundation of both valleys. Exposures in both valleys reveal a substantial erosional unconformity that records drainage of the lake and predates the arrival of sediment of the through-going Colorado River. Subsequent river aggradation culminated in the Pliocene between 4.1 and 3.3 Ma. The stratigraphic associations and timing of this drainage transition are consistent with geochemical evidence linking lacustrine conditions to the early Colorado River, the timings of drainage integration and canyon incision on the Colorado Plateau, the arrival of Colorado River sand at its terminus in the Salton Trough, and a downstream-directed mode of river integration common in areas of crustal extension. © 2008 The Geological Society of America.
Spangler, Lawrence E.; Angeroth, Cory E.; Walton, Sarah J.
2008-01-01
Relations between the elevation of the static water level in wells and the elevation of the accounting surface within the Colorado River aquifer in the vicinity of Vidal, California, the Chemehuevi Indian Reservation, California, and on Mohave Mesa, Arizona, were used to determine which wells outside the flood plain of the Colorado River are presumed to yield water that will be replaced by water from the Colorado River. Wells that have a static water-level elevation equal to or below the elevation of the accounting surface are presumed to yield water that will be replaced by water from the Colorado River. Geographic Information System (GIS) interpolation tools were used to produce maps of areas where water levels are above, below, and near (within ±0.84 foot) the accounting surface. Calculated water-level elevations and interpolated accounting-surface elevations were determined for 33 wells in the vicinity of Vidal, 16 wells in the Chemehuevi area, and 35 wells on Mohave Mesa. Water-level measurements generally were taken in the last 10 years with steel and electrical tapes accurate to within hundredths of a foot. A Differential Global Positioning System (DGPS) was used to determine land-surface elevations to within an operational accuracy of ±0.43 foot, resulting in calculated water-level elevations having a 95-percent confidence interval of ±0.84 foot. In the Vidal area, differences in elevation between the accounting surface and measured water levels range from 2.7 feet below to as much as 17.6 feet above the accounting surface. Relative differences between the elevation of the water level and the elevation of the accounting surface decrease from west to east and from north to south. In the Chemehuevi area, differences in elevation range from 3.7 feet below to as much as 8.7 feet above the accounting surface, which is established at 449.6 feet in the vicinity of Lake Havasu. In all of the Mohave Mesa area, the water-level elevation is near or below the elevation of the accounting surface. Differences in elevation between water levels and the accounting surface range from -0.2 to -11.3 feet, with most values exceeding -7.0 feet. In general, the ArcGIS Triangulated Irregular Network (TIN) Contour and Natural Neighbor tools reasonably represent areas where the elevation of water levels in wells is above, below, and near (within ±0.84 foot) the elevation of the accounting surface in the Vidal and Chemehuevi study areas and accurately delineate areas around outlying wells and where anomalies exist. The TIN Contour tool provides a strict linear interpolation while the Natural Neighbor tool provides a smoothed interpolation. Using the default options in ArcGIS, the Inverse Distance Weighted (IDW) and Spline tools also reasonably represent areas above, below, and near the accounting surface in the Vidal and Chemehuevi areas. However, the spatial extent of and boundaries between areas above, below, and near the accounting surface vary among the GIS methods, which results largely from the fundamentally different mathematical approaches used by these tools. The limited number and spatial distribution of wells in comparison to the size of the areas, and the locations and relative differences in elevation between water levels and the accounting surface of wells with anomalous water levels, also influence the contouring by each of these methods.
Qualitatively, the Natural Neighbor tool appears to provide the best representation of the difference between water-level and accounting-surface elevations in the study areas, on the basis of available well data.
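A minimal sketch of the well classification described above, assuming the report's ±0.84-foot 95-percent confidence band as the "near" criterion; well names and elevations are hypothetical.

```python
# Each well's water-level elevation is compared with the interpolated
# accounting-surface elevation and labeled above / near / below.

TOLERANCE_FT = 0.84

def classify_well(water_level_ft, accounting_surface_ft, tol=TOLERANCE_FT):
    """Return 'above', 'near', or 'below' relative to the accounting surface."""
    diff = water_level_ft - accounting_surface_ft
    if abs(diff) <= tol:
        return "near"
    return "above" if diff > 0 else "below"

wells = {               # (water-level elevation, accounting-surface elevation), ft
    "well_A": (452.1, 449.6),
    "well_B": (449.2, 449.6),
    "well_C": (441.8, 449.6),
}
for name, (wl, acct) in wells.items():
    print(name, classify_well(wl, acct))
```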
Billingsley, G.H.
2000-01-01
This digital map database, compiled from previously published and unpublished data as well as new mapping by the author, represents the general distribution of bedrock and surficial deposits in the map area. Together with the accompanying pamphlet, it provides current information on the geologic structure and stratigraphy of the Grand Canyon area. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:100,000 or smaller.
Tillman, Fred D.; Garner, Bradley D.; Truini, Margot
2013-01-01
Preliminary numerical models were developed to simulate groundwater flow in the basin-fill alluvium in Detrital, Hualapai, and Sacramento Valleys in northwestern Arizona. The purpose of this exercise was to gather and evaluate available information and data, to test natural‑recharge concepts, and to indicate directions for improving future regional groundwater models of the study area. Both steady-state and transient models were developed with a single layer incorporating vertically averaged hydraulic properties over the model layer. Boundary conditions for the models were constant-head cells along the northern and western edges of the study area, corresponding to the location of the Colorado River, and no-flow boundaries along the bedrock ridges that bound the rest of the study area, except for specified flow where Truxton Wash enters the southern end of Hualapai Valley. Steady-state conditions were simulated for the pre-1935 period, before the construction of Hoover Dam in the northwestern part of the model area. Two recharge scenarios were investigated using the steady-state model—one in which natural aquifer recharge occurs directly in places where water is available from precipitation, and another in which natural aquifer recharge from precipitation occurs in the basin-fill alluvium that drains areas of available water. A transient model with 31 stress periods was constructed to simulate groundwater flow for the period 1935–2010. The transient model incorporates changing Colorado River, Lake Mead, and Lake Mohave water levels and includes time-varying groundwater withdrawals and aquifer recharge. Both the steady-state and transient models were calibrated to available water-level observations in basin-fill alluvium, and simulations approximate observed water-level trends throughout most of the study area.
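For orientation, the sketch below sets up a toy single-layer, steady-state groundwater-flow solve with the same kinds of boundary conditions the abstract describes: constant-head cells along two edges (a stand-in for the Colorado River and lake boundary), no-flow cells elsewhere, one specified-inflow cell (a stand-in for Truxton Wash), and uniform areal recharge. It is a block-centered finite-difference calculation in plain Python; all grid dimensions, hydraulic properties, and rates are hypothetical placeholders, not values from the report.

```python
import numpy as np

nrow, ncol = 30, 20
dx = dy = 1000.0          # m
T = 500.0                 # transmissivity, m^2/d (vertically averaged layer)
recharge = 5.0e-5         # m/d, uniform areal recharge
wash_inflow = 5000.0      # m^3/d specified inflow at one cell
river_head = 200.0        # m, constant head along north and west edges

h = np.full((nrow, ncol), river_head)
const_head = np.zeros_like(h, dtype=bool)
const_head[0, :] = True   # north edge (river/lake stand-in)
const_head[:, 0] = True   # west edge (river/lake stand-in)

# Source term per cell (m^3/d): recharge everywhere plus wash inflow at one cell.
q = np.full((nrow, ncol), recharge * dx * dy)
q[nrow - 1, ncol // 2] += wash_inflow

for _ in range(2000):                     # Gauss-Seidel sweeps to convergence
    for i in range(nrow):
        for j in range(ncol):
            if const_head[i, j]:
                continue
            # No-flow boundaries: faces outside the grid carry zero flux,
            # so missing neighbors are simply omitted from the cell balance.
            nbrs = [h[ii, jj]
                    for ii, jj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= ii < nrow and 0 <= jj < ncol]
            h[i, j] = (sum(nbrs) + q[i, j] / T) / len(nbrs)

print("maximum simulated head (m):", round(h.max(), 2))
```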
Villalobos, Hector A.; Hamm, Louis W.
1981-01-01
Several areas in the Paiute Instant Study Area are judged to have at best a low mineral potential. These include areas of copper, lead, manganese, molybdenum, nickel, silver, tungsten, and zinc mineralization, as well as occurrences of dumortierite, beryllium, arsenic, barium, gypsum, gem minerals, sand, gravel, and limestone. The metallic deposits and dumortierite, beryllium, and arsenic occur over small surface areas. Significant production has not resulted from mining activity in mineralized areas. Sand, gravel, limestone, gem minerals, gypsum, and barium occurrences are far from major markets. Currently, there are no active mining operations in the study area.
Geologic map of the Hiller Mountain Quadrangle, Clark County, Nevada, and Mohave County, Arizona
Howard, Keith A.; Hook, Simon; Phelps, Geoffrey A.; Block, Debra L.
2003-01-01
Map Scale: 1:24,000. Map Type: colored geologic map. The Hiller Mountains Quadrangle straddles Virgin Canyon in the eastern part of Lake Mead. Proterozoic gneisses and granitoid rocks underlie much of the quadrangle. They are overlain by upper Miocene basin-filling deposits of arkosic conglomerate, basalt, and the overlying Hualapai Limestone. Inception of the Colorado River followed deposition of the Hualapai Limestone and caused incision of the older rocks. Fluvial gravel deposits indicate various courses of the early river across passes through highlands of the Gold Butte-Hiller Mountains-White Hills structural block. Faults and tilted rocks in the quadrangle record tectonic extension that climaxed in middle Miocene time.
Spring-summer movements of bonytail in a Colorado River reservoir, Lake Mohave, Arizona and Nevada
Marsh, Paul C.; Mueller, Gordon
1999-01-01
Bonytail can move substantial distances in a short time (10s of km in a few days). Fish in both years apparently favored the same areas, where they may remain for weeks. Unmarked bonytail were observed or captured by setting nets in places favored by tagged fish, a significant result since future use of the technique may enhance our ability to monitor reintroductions, locate and document spawning, examine habitat use, and acquire desperately needed brood stock for this critically imperiled species. External tagging techniques developed for juvenile razorback sucker may provide a method of minimizing telemetry induced stress while allowing us to focus sampling on congregation sites.
Spencer, J.E.; Pearthree, P.A.; House, P.K.
2008-01-01
The upper Miocene to lower Pliocene Bouse Formation in the lower Colorado River trough of the American Southwest was deposited in three basins - from north to south, the Mohave, Havasu, and Blythe Basins - that were formed by extensional faulting in the early to middle Miocene. Fossils of marine, brackish, and freshwater organisms in the Bouse Formation have been interpreted to indicate an estuarine environment associated with early opening of the nearby Gulf of California. Regional uplift since 5 Ma is required to position the estuarine Bouse Formation at present elevations as high as 555 m, where greater uplift is required in the north. We present a compilation of Bouse Formation elevations that is consistent with Bouse deposition in lakes, with an abrupt 225 m northward increase in maximum Bouse elevations at Topock gorge north of Lake Havasu. Within Blythe and Havasu Basins, maximum Bouse elevations are 330 m above sea level in three widely spaced areas and reveal no evidence of regional tilting. To the north in Mohave Basin, numerous Bouse outcrops above 480 m elevation include three widely spaced sites where the Bouse Formation is exposed at 536-555 m. Numerical simulations of initial Colorado River inflow to a sequence of closed basins along the lower Colorado River corridor model a history of lake filling, spilling, evaporation and salt concentration, and outflow-channel incision. The simulations support the plausibility of evaporative concentration of Colorado River water to seawater-level salinities in Blythe Basin and indicate that such salinities could have remained stable for as long as 20-30 k.y. We infer that fossil marine organisms in the Bouse Formation, restricted to the southern (Blythe) basin, reflect colonization of a salty lake by a small number of species that were transported by birds.
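The fill-spill-evaporate simulations summarized above can be illustrated with a single-basin box model: river inflow adds water and salt, evaporation removes only water, and overflow begins once the lake reaches the divide. The sketch below uses entirely hypothetical numbers (chosen so evaporation removes roughly 95 percent of inflow, giving about a 20-fold evaporative concentration); the published simulations used real basin hypsometry, discharge, and evaporation estimates.

```python
inflow = 2.0e10          # m^3/yr river inflow
inflow_tds = 0.6         # kg/m^3 (~600 mg/L) river salinity
evap_rate = 2.0          # m/yr lake-surface evaporation
lake_area = 9.5e9        # m^2, treated as constant for simplicity
spill_volume = 2.0e11    # m^3 lake volume at the spillway (divide) elevation

volume, salt = 0.0, 0.0  # m^3 of water, kg of dissolved salt
for year in range(1, 401):
    volume += inflow
    salt += inflow * inflow_tds
    evap = min(evap_rate * lake_area, volume)   # evaporation removes water only
    volume -= evap
    if volume > spill_volume:                   # lake overtops the divide
        spilled = volume - spill_volume
        salt -= salt * spilled / volume         # outflow carries lake-salinity water
        volume = spill_volume
    if year % 100 == 0:
        print(f"year {year}: lake salinity ~ {salt / volume:.1f} kg/m^3")
```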
NASA Astrophysics Data System (ADS)
Zhou, Gang
A continuous occurrence of catastrophic failures, leaks and cracks of Cr-Mo steam piping has created widespread utility concern for the integrity and serviceability of the seam-welded piping systems in power plants across the USA. Cr-Mo steels are the materials widely used for elevated temperature service in fossil-fired generating stations. A large percentage of the power plant units with Cr-Mo seam-welded steam piping have been in operation for a long duration, such that the critical components of the units have been employed beyond the design life (30 or 40 years). This percentage will increase even more significantly in the near future. There is a strong desire to extend the service life of these units and thus a need to assess their remaining life. Understanding the metallurgical causes of the failures and damage in Cr-Mo seam-welded piping therefore plays a major role in estimating possible life extension and in deciding whether to operate, repair, or replace. In this study, an optical metallographic method and a Cryo-Crack fractographic method have been developed for characterization and quantification of the damage in seam-welded steam piping. More than 500 metallographic assessments, from more than 25 power plants, have been accomplished using the optical metallographic method, and more than 200 fractographic specimens from 10 power plants have been evaluated using the "Cryo-Crack" fractographic technique. For comparison, "virgin" SA welds were fabricated using the Mohave welding procedure with re-N&T Mohave base metal with both "acid" and "basic" fluxes. The damage mechanism, damage distribution pattern, damage classification, correlation of the damage with the microstructural features of these SA welds, and the impurity segregation patterns have been determined. A physical model for cavitation (leading to failure) in Cr-Mo SA weld metals and evaluation methodologies for high energy piping are proposed.
Hunter, K.L.; Betancourt, J.L.; Riddle, B.R.; Van Devender, T. R.; Cole, K.L.; Geoffrey, Spaulding W.
2000-01-01
1 A classic biogeographic pattern is the alignment of diploid, tetraploid and hexaploid races of creosote bush (Larrea tridentata) across the Chihuahuan, Sonoran and Mohave Deserts of western North America. We used statistically robust differences in guard cell size of modern plants and fossil leaves from packrat middens to map current and past distributions of these ploidy races since the Last Glacial Maximum (LGM). 2 Glacial/early Holocene (26-10 14C kyr BP or thousands of radiocarbon years before present) populations included diploids along the lower Rio Grande of west Texas, 650 km removed from sympatric diploids and tetraploids in the lower Colorado River Basin of south-eastern California/south-western Arizona. Diploids migrated slowly from lower Rio Grande refugia with expansion into the northern Chihuahuan Desert sites forestalled until after ~4.0 14C kyr BP. Tetraploids expanded from the lower Colorado River Basin into the northern limits of the Sonoran Desert in central Arizona by 6.4 14C kyr BP. Hexaploids appeared by 8.5 14C kyr BP in the lower Colorado River Basin, reaching their northernmost limits (~37°N) in the Mohave Desert between 5.6 and 3.9 14C kyr BP. 3 Modern diploid isolates may have resulted from both vicariant and dispersal events. In central Baja California and the lower Colorado River Basin, modern diploids probably originated from relict populations near glacial refugia. Founder events in the middle and late Holocene established diploid outposts on isolated limestone outcrops in areas of central and southern Arizona dominated by tetraploid populations. 4 Geographic alignment of the three ploidy races along the modern gradient of increasingly drier and hotter summers is clearly a postglacial phenomenon, but evolution of both higher ploidy races must have happened before the Holocene. The exact timing and mechanism of polyploidy evolution in creosote bush remains a matter of conjecture. © 2001 Blackwell Science Ltd.
Hunter, Kimberly L.; Betancourt, Julio L.; Riddle, Brett R.; Van Devender, Thomas R.; Cole, K.L.; Spaulding, W.G.
2001-01-01
1. A classic biogeographic pattern is the alignment of diploid, tetraploid and hexaploid races of creosote bush (Larrea tridentata) across the Chihuahuan, Sonoran and Mohave Deserts of western North America. We used statistically robust differences in guard cell size of modern plants and fossil leaves from packrat middens to map current and past distributions of these ploidy races since the Last Glacial Maximum (LGM). 2. Glacial/early Holocene (26-10 14C kyr BP or thousands of radiocarbon years before present) populations included diploids along the lower Rio Grande of west Texas, 650 km removed from sympatric diploids and tetraploids in the lower Colorado River Basin of south-eastern California/south-western Arizona. Diploids migrated slowly from lower Rio Grande refugia with expansion into the northern Chihuahuan Desert sites forestalled until after ~4.0 14C kyr BP. Tetraploids expanded from the lower Colorado River Basin into the northern limits of the Sonoran Desert in central Arizona by 6.4 14C kyr BP. Hexaploids appeared by 8.5 14C kyr BP in the lower Colorado River Basin, reaching their northernmost limits (~37°N) in the Mohave Desert between 5.6 and 3.9 14C kyr BP. 3. Modern diploid isolates may have resulted from both vicariant and dispersal events. In central Baja California and the lower Colorado River Basin, modern diploids probably originated from relict populations near glacial refugia. Founder events in the middle and late Holocene established diploid outposts on isolated limestone outcrops in areas of central and southern Arizona dominated by tetraploid populations. 4. Geographic alignment of the three ploidy races along the modern gradient of increasingly drier and hotter summers is clearly a postglacial phenomenon, but evolution of both higher ploidy races must have happened before the Holocene. The exact timing and mechanism of polyploidy evolution in creosote bush remains a matter of conjecture.
The Plate Boundary Observatory Student Field Assistant Program in Southern California
NASA Astrophysics Data System (ADS)
Seider, E. L.
2007-12-01
Each summer, UNAVCO hires students as part of the Plate Boundary Observatory (PBO) Student Field Assistant Program. PBO, the geodetic component of the NSF-funded EarthScope project, involves the reconnaissance, permitting, installation, documentation, and maintenance of 880 permanent GPS stations in five years. During the summer 2007, nine students from around the US and Puerto Rico were hired to assist PBO engineers during the busy summer field season. From June to September, students worked closely with PBO field engineers to install and maintain permanent GPS stations in all regions of PBO, including Alaska. The PBO Student Field Assistant Program provides students with professional hands-on field experience as well as continuing education in the geosciences. It also gives students a glimpse into the increasing technologies available to the science community, the scope of geophysical research utilizing these technologies, and the field techniques necessary to complete this research. Students in the PBO Field Assistant Program are involved in all aspects of GPS support, including in-warehouse preparation and in-field installations and maintenance. Students are taught practical skills such as drilling, wiring, welding, hardware configuration, documentation, and proper field safety procedures needed to construct permanent GPS stations. These real world experiences provide the students with technical and professional skills that are not always available to them in a classroom, and will benefit them greatly in their future studies and careers. The 2007 summer field season in Southern California consisted of over 35 GPS permanent station installations. To date, the Southern California region of PBO has installed over 190 GPS stations. This poster presentation will highlight the experiences gained by the Southern California student field assistants, while supporting PBO- Southern California GPS installations in the Mohave Desert and the Inyo National Forest.
Groundwater budgets for Detrital, Hualapai, and Sacramento Valleys, Mohave County, Arizona, 2007-08
Garner, Bradley D.; Truini, Margot
2011-01-01
Figures 9, 10, and 11 from this report present water budgets for Detrital, Hualapai, and Sacramento Valleys in northwestern Arizona. These figures show average values for each water-budget component. Uncertainty is discussed but not shown on these report figures. As an aid to readers, these figures have been implemented as interactive, web-based figures here. Water-budget parameters can be varied within reasonable bounds of uncertainty, and the effects those changes have on the water budget will be shown as they are varied. This can aid in understanding sensitivity (which parameters most or least affect the water budgets) and also could provide a generally improved sense of the hydrologic cycle represented in these water budgets.
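A minimal sketch of the interactive sensitivity idea described above: each water-budget component carries a range, and the budget imbalance is recomputed as individual components are varied one at a time. Component names and all values are hypothetical placeholders, not numbers from the report figures.

```python
budget = {                      # acre-ft/yr: (low, best, high) estimates
    "natural_recharge":   (1500.0, 3000.0, 6000.0),
    "subsurface_inflow":  (200.0,  500.0,  1000.0),
    "well_withdrawals":   (-9000.0, -7000.0, -5000.0),
    "evapotranspiration": (-2000.0, -1200.0, -600.0),
}

def imbalance(values):
    """Sum of all components; a negative result implies storage loss."""
    return sum(values.values())

best = {k: v[1] for k, v in budget.items()}
print("best-estimate imbalance:", imbalance(best))

# One-at-a-time sensitivity: swing each component across its range while
# holding the others at their best estimates.
for name, (lo, mid, hi) in budget.items():
    low  = imbalance({**best, name: lo})
    high = imbalance({**best, name: hi})
    print(f"{name:>18}: imbalance ranges {min(low, high):8.0f} to {max(low, high):8.0f}")
```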
Billingsley, George H.; Wellmeyer, Jessica L.
2003-01-01
The geologic map of the Mount Trumbull 30' x 60' quadrangle is a cooperative product of the U.S. Geological Survey, the National Park Service, and the Bureau of Land Management that provides geologic map coverage and regional geologic information for visitor services and resource management of Grand Canyon National Park, Lake Mead Recreational Area, and Grand Canyon Parashant National Monument, Arizona. This map is a compilation of previous and new geologic mapping that encompasses the Mount Trumbull 30' x 60' quadrangle of Arizona. This digital database, a compilation of previous and new geologic mapping, contains geologic data used to produce the 100,000-scale Geologic Map of the Mount Trumbull 30' x 60' Quadrangle, Mohave and Coconino Counties, Northwestern Arizona. The geologic features that were mapped as part of this project include: geologic contacts and faults, bedrock and surficial geologic units, structural data, fold axes, karst features, mines, and volcanic features. This map was produced using 1:24,000-scale 1976 infrared aerial photographs followed by extensive field checking. Volcanic rocks were mapped as separate units when identified on aerial photographs as mappable and distinctly separate units associated with one or more pyroclastic cones and flows. Many of the Quaternary alluvial deposits that have similar lithology but different geomorphic characteristics were mapped almost entirely by photogeologic methods. Stratigraphic position and amount of erosional degradation were used to determine relative ages of alluvial deposits having similar lithologies. Each map unit and structure was investigated in detail in the field to ensure accuracy of description. Punch-registered mylar sheets were scanned at the Flagstaff Field Center using an Optronics 5040 raster scanner at a resolution of 50 microns (508 dpi). The scans were output in .rle format, converted to .rlc, and then converted to ARC/INFO grids. A tic file was created in geographic coordinates and projected into the base map projection (Polyconic) using a central meridian of -113.500. The tic file was used to transform the grid into Universal Transverse Mercator projection. The linework was vectorized using gridline. Scanned lines were edited interactively in ArcEdit. Polygons were attributed in ArcEdit and all artifacts and scanning errors visible at 1:100,000 were removed. Point data were digitized onscreen. Due to the discovery of digital and geologic errors on the original files, the ARC/INFO coverages were converted to a personal geodatabase and corrected in ArcMap. The feature classes which define the geologic units, lines and polygons, are topologically related and maintained in the geodatabase by a set of validation rules. The internal database structure and feature attributes were then modified to match other geologic map databases being created for the Grand Canyon region. Faults were edited with the downthrown block, if known, on the 'right side' of the line. The 'right' and 'left' sides of a line are determined from 'starting' at the line's 'from node' and moving to the line's end or 'to node'.
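For readers reproducing the projection steps with current open-source tools, the sketch below shows roughly equivalent transformations using the pyproj library (an assumption on my part; the original workflow used ARC/INFO). The NAD27 datum choice and the sample tic coordinate are also assumptions made for illustration.

```python
from pyproj import Transformer

lon, lat = -113.0, 36.75   # a hypothetical tic location within the map area

# Geographic (NAD27) -> American Polyconic base-map projection,
# central meridian -113.5, as described in the abstract.
to_polyconic = Transformer.from_crs(
    "EPSG:4267", "+proj=poly +lon_0=-113.5 +datum=NAD27 +units=m", always_xy=True)
px, py = to_polyconic.transform(lon, lat)

# Geographic (NAD27) -> UTM zone 12N (NAD27), the final map projection.
to_utm = Transformer.from_crs("EPSG:4267", "EPSG:26712", always_xy=True)
ux, uy = to_utm.transform(lon, lat)

print(f"polyconic: ({px:.1f}, {py:.1f})   UTM 12N: ({ux:.1f}, {uy:.1f})")
```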
STROZ Lidar Results at the MOHAVE III Campaign, October, 2009, Table Mountain, CA
NASA Technical Reports Server (NTRS)
McGee, T. J.; Twigg, L.; Sumnicht, G.; Whiteman, D.; Leblanc, T.; Voemel, H.; Gutman, S.
2010-01-01
During October 2009 the GSFC STROZ Lidar participated in a campaign at the JPL Table Mountain Facility (Wrightwood, CA, 2285 m elevation) to measure vertical profiles of water vapor from near the ground to the lower stratosphere. On eleven nights, water vapor, aerosol, temperature, and ozone profiles were measured by the STROZ lidar and two other similar lidars; frost-point hygrometer sondes and ground-based microwave instruments also made measurements. Results from these measurements and an evaluation of the performance of the STROZ lidar during the campaign will be presented in this paper. The STROZ lidar was able to measure water vapor up to 13-14 km ASL during the campaign. We will present results from all the STROZ data products and comparisons with the other instruments. Implications for instrumental changes will be discussed.
Sanger, H.W.; Littin, G.R.
1982-01-01
INTRODUCTION: The Bill Williams area includes about 3,200 mi² in Mohave, Yavapai, and Yuma Counties in west-central Arizona. The west half of the area is in the Basin and Range lowlands water province, and the east half is in the Central highlands water province (see index map). The Basin and Range lowlands province generally is characterized by high mountains separated by broad valleys filled with deposits that commonly store large amounts of ground water. The Central highlands province consists mostly of rugged mountain masses made up of igneous, metamorphic, and well-consolidated sedimentary rocks that contain little space for the storage of ground water except where highly fractured or faulted. A few small valleys between the mountains contain varying thicknesses of water-bearing deposits. The area is drained by the Bill Williams River and its major tributaries, the Big Sandy River and the Santa Maria River. Many reaches of the Big Sandy and Santa Maria Rivers and their major tributaries are perennial; the flow is sustained by ground-water discharge (Brown and others, 1978, sheet 2). In the Bill Williams area most of the water used is from ground water, although a small amount of surface water also may be diverted. About 18,000 acre-ft of ground water was withdrawn in 1979 (U.S. Geological Survey, 1981). About 17,000 acre-ft was used for the irrigation of 5,200 acres, and the rest was used for domestic, stock, and public supplies. Most of the irrigated land is in Skull Valley and along lower Kirkland Creek and the Bill Williams River. Only selected wells are shown on the maps in areas of high well density. The hydrologic data on which these maps are based are available, for the most part, in computer-printout form and may be consulted at the Arizona Department of Water Resources, 99 East Virginia, Phoenix, and at U.S. Geological Survey offices in: Federal Building, 301 West Congress Street, Tucson, and Valley Center, Suite 1880, Phoenix. Material from which copies can be made at private expense is available at the Tucson and Phoenix offices of the U.S. Geological Survey.
Predictive modeling of terrestrial radiation exposure from geologic materials
NASA Astrophysics Data System (ADS)
Haber, Daniel A.
Aerial gamma ray surveys are an important tool for national security, scientific, and industrial interests in determining locations of both anthropogenic and natural sources of radioactivity. There is a relationship between radioactivity and geology, and in the past this relationship has been used to predict geology from an aerial survey. The purpose of this project is to develop a method to predict the radiologic exposure rate of the geologic materials in an area by creating a model using geologic data, images from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), geochemical data, and pre-existing low spatial resolution aerial surveys from the National Uranium Resource Evaluation (NURE) Survey. Using these data, geospatial areas, referred to as background radiation units, homogeneous in terms of K, U, and Th are defined and the gamma ray exposure rate is predicted. The prediction is compared to data collected via detailed aerial survey by our partner National Security Technologies, LLC (NSTec), allowing for the refinement of the technique. High resolution radiation exposure rate models have been developed for two study areas in Southern Nevada that include the alluvium on the western shore of Lake Mohave, and Government Wash north of Lake Mead; both of these areas are arid with little soil moisture and vegetation. We determined that by using geologic units to define radiation background units of exposed bedrock and ASTER visualizations to subdivide radiation background units of alluvium, regions of homogeneous geochemistry can be defined, allowing for the exposure rate to be predicted. Soil and rock samples have been collected at Government Wash and Lake Mohave as well as a third site near Cameron, Arizona. K, U, and Th concentrations of these samples have been determined using inductively coupled plasma mass spectrometry (ICP-MS) and laboratory counting using radiation detection equipment. In addition, many sample locations also have concentrations determined via in situ radiation measurements with high purity germanium detectors (HPGe) and aerial survey measurements. These various measurement techniques have been compared and found to produce consistent results. Finally, modeling using the Monte Carlo N-Particle Transport Code (MCNP), a particle transport code, has allowed us to derive concentration to exposure rate coefficients. These simulations also have shown that differences in major element chemistry have little impact on the gamma ray emissions of geologic materials.
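The final step described above, turning unit-average K, U, and Th concentrations into a predicted exposure (dose) rate with concentration-to-dose coefficients, can be sketched as follows. The coefficients shown are generic literature-style values for illustration only; the study derives its own coefficients from MCNP simulations, and the sample concentrations are hypothetical.

```python
DOSE_COEFF = {          # nGy/h per unit concentration (assumed placeholder values)
    "K_pct":  13.1,     # per weight-percent K
    "U_ppm":  5.7,      # per ppm U (assuming secular equilibrium)
    "Th_ppm": 2.5,      # per ppm Th
}

def predicted_dose_rate(k_pct, u_ppm, th_ppm, coeff=DOSE_COEFF):
    """Air absorbed-dose rate (nGy/h) over a homogeneous source region."""
    return (coeff["K_pct"] * k_pct
            + coeff["U_ppm"] * u_ppm
            + coeff["Th_ppm"] * th_ppm)

# Hypothetical unit-average concentrations for an alluvial background unit.
print(round(predicted_dose_rate(k_pct=2.1, u_ppm=2.8, th_ppm=9.5), 1), "nGy/h")
```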
Geologic map of the Fredonia 30' x 60' quadrangle, Mohave and Coconino counties, northern Arizona
Billingsley, George H.; Priest, Susan S.; Felger, Tracey J.
2008-01-01
This geologic map is the result of a cooperative effort of the U.S. Geological Survey, the National Park Service, the U.S. Forest Service, the Bureau of Land Management (BLM), and the Kaibab-Paiute Tribe to provide a regional geologic database for resource management officials of all government agencies, city municipalities, private enterprises, and individuals of this part of the Arizona Strip. The Arizona Strip is the part of northwestern Arizona north of the Colorado River and bounded by the States of Nevada and Utah. Field work on the Kaibab-Paiute Indian Reservation was conducted from 2002 to 2005 with permission from the Kaibab-Paiute Tribal Government of that administration, which granted permission to publish a geologic map of 4 quadrangles online (Billingsley and others, 2004). The Kaibab-Paiute Tribal government of 2006 to 2008 requested that all geologic information within the Kaibab-Paiute Indian Reservation not be published as part of the Fredonia 30' x 60' quadrangle (this publication). For further information, contact the Kaibab-Paiute Tribal government at HC 65 Box 2, Fredonia, Arizona, 86022, telephone (928) 643-7245. Visitors to the Kaibab-Paiute Indian Reservation are required to obtain a permit and permission for access from the Tribal Offices at the junction of State Highway 389 and the paved road leading to Pipe Spring National Monument. The Fredonia 30' x 60' quadrangle encompasses approximately 5,018 km² (1,960 mi²) within Mohave and Coconino Counties, northern Arizona, and is bounded by longitude 112° to 113° W. and latitude 36°30' to 37° N. The map area lies within the southern Colorado Plateaus geologic province (herein Colorado Plateau). The map area is locally subdivided into seven physiographic parts: the Grand Canyon (Kanab Canyon and its tributaries), Kanab Plateau, Uinkaret Plateau, Kaibab Plateau, Paria Plateau, House Rock Valley, and Moccasin Mountains, as defined by Billingsley and others (1997) (fig. 1). Elevations range from 2,737 m (8,980 ft) just west of State Highway 67 on the Kaibab Plateau, southeast corner of the map area, to about 927 m (3,040 ft) in Kanab Canyon, south-central edge of the map area.
NASA Astrophysics Data System (ADS)
Balam Matagamon, Chan; Pawa Matagamon, Sagamo
2004-03-01
Certain Native Americans of the past seem to have correctly deduced that significant survival information for their tradition-respecting cultures resided in EMF-based phenomena that they were monitoring. This is based upon their myths and the place or cult-hero names they bequeathed us. The sites we have located in FL have been detectable by us visually, usually by faint blue light, or by the elicitation of pin-like prickings, by somewhat intense nervous-system response, by EMF interactions with aural electrochemical systems that can elicit tinnitus, and in other ways. In the northeast, Cautantowit served as a harbinger of Indian summer, and appears to be another alter ego of the EMF. The Miami, FL Tequesta site along the river clearly correlates with tornado, earthquake, and hurricane locations. Sites like the Mohave Desert's giant man may have had similar significance.
Dobson, James; Yang, Daryl C.; den Brouw, Bianca op; Cochran, Chip; Huynh, Tam; Kurrupu, Sanjaya; Sánchez, Elda E.; Massey, Daniel J.; Baumann, Kate; Jackson, Timothy N.W.; Nouwens, Amanda; Josh, Peter; Neri-Castro, Edgar; Alagón, Alejandro; Hodgson, Wayne C.; Fry, Bryan G.
2017-01-01
While some US populations of the Mohave rattlesnake (Crotalus scutulatus scutulatus) are infamous for being potently neurotoxic, the Mexican subspecies C. s. salvini (Huamantlan rattlesnake) has been largely unstudied beyond crude lethality testing upon mice. In this study we show that at least some populations of this snake are as potently neurotoxic as its northern cousin. Testing of the Mexican antivenom Antivipmyn showed a complete lack of neutralisation for the neurotoxic effects of C. s. salvini venom, while the neurotoxic effects of the US subspecies C. s. scutulatus were time-delayed but ultimately not eliminated. These results document unrecognised potent neurological effects of a Mexican snake and highlight the medical importance of this subspecies, a finding augmented by the ineffectiveness of the Antivipmyn antivenom. These results also influence our understanding of the venom evolution of Crotalus scutulatus, suggesting that neurotoxicity is the ancestral feature of this species, with the US populations which lack neurotoxicity being derived states. PMID:29074260
Is there room for all of us? Renewable energy and Xerospermophilus mohavensis
Inman, Richard D.; Esque, Todd C.; Nussear, Kenneth E.; Leitner, Philip; Matocq, Marjorie D.; Weisberg, Peter J.; Dilts, Tomas E.; Vandergast, Amy G.
2013-01-01
Mohave ground squirrels Xerospermophilus mohavensis Merriam are small ground-dwelling rodents that have a highly restricted range in the northwest Mojave Desert, California, USA. Their small natural range is further reduced by habitat loss from agriculture, urban development, military training and recreational activities. Development of wind and solar resources for renewable energy has the potential to further reduce existing habitat. We used maximum entropy habitat models with observation data to describe current potential habitat in the context of future renewable energy development in the region. While 16% of historic habitat has been impacted by, or lost to, urbanization at present, an additional 10% may be affected by renewable energy development in the near future. Our models show that X. mohavensis habitat suitability is higher in areas slated for renewable energy development than in surrounding areas. We provide habitat maps that can be used to develop sampling designs, evaluate conservation corridors and inform development planning in the region.
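As a rough illustration of the presence/background habitat modelling described above, the sketch below uses logistic regression as a simplified stand-in for MaxEnt (a deliberate substitution, not the authors' method), with entirely synthetic covariates and occurrence data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic environmental covariates over a landscape of grid cells.
n_cells = 5000
env = rng.normal(size=(n_cells, 2))

# Synthetic "presence" cells concentrated where covariate 0 is high, plus a
# random sample of background cells (the presence/background design MaxEnt uses).
suitability_true = 1.0 / (1.0 + np.exp(-(2.0 * env[:, 0] - 1.0)))
presence_idx = rng.choice(n_cells, size=300,
                          p=suitability_true / suitability_true.sum())
background_idx = rng.choice(n_cells, size=1000, replace=False)

X = np.vstack([env[presence_idx], env[background_idx]])
y = np.concatenate([np.ones(len(presence_idx)), np.zeros(len(background_idx))])

model = LogisticRegression().fit(X, y)
relative_suitability = model.predict_proba(env)[:, 1]   # relative habitat suitability map
print("top-decile suitability threshold:",
      round(np.quantile(relative_suitability, 0.9), 3))
```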
Jorquera, Milko A; Shaharoona, Baby; Nadeem, Sajid M; de la Luz Mora, María; Crowley, David E
2012-11-01
Plant growth-promoting rhizobacteria (PGPR) are common components of the rhizosphere, but their role in the adaptation of plants to extreme environments is not yet understood. Here, we examined rhizobacteria associated with ancient clones of Larrea tridentata in the Mohave Desert, including the 11,700-year-old King Clone, which is the oldest known specimen of this species. Analysis of the unculturable and culturable bacterial communities by PCR-DGGE revealed taxa that have previously been described on agricultural plants. These taxa included species of Proteobacteria, Bacteroidetes, and Firmicutes that commonly carry traits associated with plant growth promotion, including genes encoding aminocyclopropane carboxylate deaminase and β-propeller phytase. The PGPR activities of three representative isolates from L. tridentata were further confirmed using cucumber plants to screen for plant growth promotion. This study provides an intriguing first view of the mutualistic bacteria that are associated with some of the world's oldest living plants and suggests that PGPR likely contribute to the adaptation of L. tridentata and other plant species to harsh environmental conditions in desert habitats.
Dobson, James; Yang, Daryl C; Op den Brouw, Bianca; Cochran, Chip; Huynh, Tam; Kurrupu, Sanjaya; Sánchez, Elda E; Massey, Daniel J; Baumann, Kate; Jackson, Timothy N W; Nouwens, Amanda; Josh, Peter; Neri-Castro, Edgar; Alagón, Alejandro; Hodgson, Wayne C; Fry, Bryan G
2018-02-01
While some US populations of the Mohave rattlesnake (Crotalus scutulatus scutulatus) are infamous for being potently neurotoxic, the Mexican subspecies C. s. salvini (Huamantlan rattlesnake) has been largely unstudied beyond crude lethality testing upon mice. In this study we show that at least some populations of this snake are as potently neurotoxic as its northern cousin. Testing of the Mexican antivenom Antivipmyn showed a complete lack of neutralisation for the neurotoxic effects of C. s. salvini venom, while the neurotoxic effects of the US subspecies C. s. scutulatus were time-delayed but ultimately not eliminated. These results document unrecognised potent neurological effects of a Mexican snake and highlight the medical importance of this subspecies, a finding augmented by the ineffectiveness of the Antivipmyn antivenom. These results also influence our understanding of the venom evolution of Crotalus scutulatus, suggesting that neurotoxicity is the ancestral feature of this species, with the US populations which lack neurotoxicity being derived states. Copyright © 2017 Elsevier Inc. All rights reserved.
Native Fish Sanctuary Project - Sanctuary Development Phase, 2007 Annual Report
Mueller, Gordon A.
2007-01-01
Notable progress was made in 2007 toward the development of native fish facilities in the Lower Colorado River Basin. More than a dozen facilities are, or soon will be, online to benefit native fish. When this study began in 2005 no self-supporting communities of either bonytail or razorback sucker existed. Razorback suckers were removed from Rock Tank in 1997 and the communities at High Levee Pond had been compromised by largemouth bass in 2004. This project reversed that trend with the establishment of the Davis Cove native fish community in 2005. Bonytail and razorback sucker successfully produced young in Davis Cove in 2006. Bonytail successfully produced young in Parker Dam Pond in 2007, representing the first successful sanctuary established solely for bonytail. This past year, Three Fingers Lake received 135 large razorback suckers, and Federal and State agencies have agreed to develop a cooperative management approach dedicating a portion of that lake toward grow-out and (or) the establishment of another sanctuary. Two ponds at River's Edge Golf Course in Needles, California, were renovated in June and soon will be stocked with bonytail. Similar activities are taking place at Mohave Community College, Cerbat Cliffs Golf Course, Cibola High Levee Pond, Office Cove, Emerald Canyon Golf Course, and Bulkhead Cove. Recruitment can be expected as fish become sexually mature at these facilities. Flood-plain facilities have the potential to support 6,000 adult razorback suckers and nearly 20,000 bonytail if native fish management is aggressively pursued. This sanctuary project has assisted agencies in developing 15 native fish communities by identifying specific resource objectives for those sites, listing and prioritizing research opportunities and needs, and strategizing on management approaches through the use of resource-management plans. Such documents have been developed for Davis Cove, Cibola High Levee Pond, Parker Dam Pond, and Three Fingers Lake. We anticipate similar documents will be developed in the near future for River's Edge Golf Course Ponds, Office Cove, Emerald Canyon Golf Course Ponds, Bulkhead Cove, Mohave Community College, and Cerbat Cliffs Golf Course ponds as these facilities come on line or are developed in the future. The following report discusses the process that went into the development of these facilities. Sites were visited, assessed as to their suitability based on the control of nonnative predators, habitat suitability, conversion cost, logistics, geographical location, and willingness of landowners. They were then prioritized according to their suitability, cost, timely conversion, and willingness of landowners. Existing native fish facilities were included in this evaluation for their value in helping to determine physical and biological parameter ranges. This report describes the approaches that led to success, those leading to failure, and some of the biological, institutional, and management issues of implementing native fish sanctuary development.
USDA-ARS?s Scientific Manuscript database
The NASA SMAP (Soil Moisture Active Passive) mission conducted the SMAP Validation Experiment 2015 (SMAPVEX15) in order to support the calibration and validation activities of the SMAP soil moisture data product. The main goals of the experiment were to address issues regarding the spatial disaggregation...
CFD validation experiments for hypersonic flows
NASA Technical Reports Server (NTRS)
Marvin, Joseph G.
1992-01-01
A roadmap for CFD code validation is introduced. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation data base are given, and gaps are identified where future experiments could provide new validation data.
Late quaternary environmental changes in the upper Las Vegas valley, Nevada
NASA Astrophysics Data System (ADS)
Quade, Jay
1986-11-01
Five stratigraphic units and five soils of late Pleistocene to Holocene age crop out in dissected badlands on Corn Creek Flat, 30 km northwest of Las Vegas, Nevada, and at Tule Springs, nearer to Las Vegas. The record is dominantly fluvial but contains evidence of several moister, marsh-forming periods: the oldest (Unit B) dates perhaps to the middle Wisconsin, and the more widespread Unit D falls between 30,000 and 15,000 yr B.P. Unit D therefore correlates with pluvial maximum lacustrine deposits elsewhere in the Great Basin. Standing water was not of sufficient depth or extent during either period to form lake strandlines. Between 14,000 and 7200 yr B.P. (Unit E), standing surface water gradually decreased, a trend also apparent in Great Basin pluvial lake chronologies during the same period. Groundwater carbonate cementation and burrowing by cicadas (Cicadae) accompany the moist-phase units. After 7200 yr B.P., increased wind action, decreased biotic activity, and at least 25 m of water-table lowering accompanied widespread erosion of older fine-grained deposits. Based on pack-rat midden and pollen evidence, this coincides with major vegetation changes in the valley, from sagebrush-dominated steppe to lower Mohave desertscrub.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-03
...We, the U.S. Fish and Wildlife Service (Service), designate revised critical habitat for the southwestern willow flycatcher (Empidonax traillii extimus) (flycatcher) under the Endangered Species Act. In total, approximately 1,975 stream kilometers (1,227 stream miles) are being designated as critical habitat. These areas are designated as stream segments, with the lateral extent including the riparian areas and streams that occur within the 100-year floodplain or flood-prone areas encompassing a total area of approximately 84,569 hectares (208,973 acres). The critical habitat is located on a combination of Federal, State, tribal, and private lands in Inyo, Kern, Los Angeles, Riverside, Santa Barbara, San Bernardino, San Diego, and Ventura Counties in California; Clark, Lincoln, and Nye Counties in southern Nevada; Kane, San Juan, and Washington Counties in southern Utah; Alamosa, Conejos, Costilla, and La Plata Counties in southern Colorado; Apache, Cochise, Gila, Graham, Greenlee, La Paz, Maricopa, Mohave, Pima, Pinal, Santa Cruz, and Yavapai Counties in Arizona; and Catron, Grant, Hidalgo, Mora, Rio Arriba, Socorro, Taos, and Valencia Counties in New Mexico. The effect of this regulation is to conserve the flycatcher's habitat under the Endangered Species Act.
Jenkins, Jill A.; Goodbred, Steven L.
2005-01-01
To contribute to an investigation on possible endocrine impacts in three sites along the lower Colorado River in Arizona, especially in male fishes, this study addressed the null hypothesis that aquatic species in southern sites did not exhibit evidence of endocrine disruption as compared with those in nonimpacted sites. The results presented are intended to provide managers with science-based information and interpretations about the reproductive condition of biota in their habitat along the lower Colorado River to minimize any potential adverse effects to trust fish and wildlife resources and to identify water resources of acceptable quality. In particular, these data can inform decision making about wastewater discharges into the Colorado River that directly supplies water to Arizona refuges located along the river. These data are integral to the USFWS proposal entitled 'AZ - Endocrine Disruption in Razorback Sucker and Common Carp on National Wildlife Refuges along the Lower Colorado River' that was proposed to assess evidence of endocrine disruption in carp and razorback suckers downstream of Hoover Dam.
Development and Validation of an Internet Use Attitude Scale
ERIC Educational Resources Information Center
Zhang, Yixin
2007-01-01
This paper describes the development and validation of a new 40-item Internet Attitude Scale (IAS), a one-dimensional inventory for measuring Internet attitudes. The first experiment initiated a generic Internet attitude questionnaire, ensured construct validity, and examined factorial validity and reliability. The second experiment further…
Teaching "Instant Experience" with Graphical Model Validation Techniques
ERIC Educational Resources Information Center
Ekstrøm, Claus Thorn
2014-01-01
Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
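To make the "instant experience" idea concrete, the sketch below (Python, with invented data) builds a simple lineup: the real residual-versus-fitted plot is hidden among panels simulated from the fitted model, so a reader can practice judging whether the real plot stands out. This is only an illustration of the general technique, not the authors' teaching materials.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Observed data: a linear trend with deliberately heteroscedastic noise.
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 0.2 * x, 100)

def residuals_vs_fitted(x, y):
    b, a = np.polyfit(x, y, 1)          # slope, intercept
    fitted = a + b * x
    return fitted, y - fitted

fitted, res = residuals_vs_fitted(x, y)

# Lineup: eight panels of residuals simulated under the fitted model (assumptions
# hold) plus the real residual plot hidden at a random position.
fig, axes = plt.subplots(3, 3, figsize=(9, 9), sharex=True, sharey=True)
real_panel = rng.integers(0, 9)
sigma = res.std(ddof=2)
for k, ax in enumerate(axes.flat):
    if k == real_panel:
        ax.scatter(fitted, res, s=10)
    else:
        y_sim = fitted + rng.normal(0, sigma, len(x))
        f_sim, r_sim = residuals_vs_fitted(x, y_sim)
        ax.scatter(f_sim, r_sim, s=10)
    ax.axhline(0, lw=0.5)
plt.show()
print("The real data were in panel", real_panel + 1)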
The Role of Structural Models in the Solar Sail Flight Validation Process
NASA Technical Reports Server (NTRS)
Johnston, John D.
2004-01-01
NASA is currently soliciting proposals via the New Millennium Program ST-9 opportunity for a potential Solar Sail Flight Validation (SSFV) experiment to develop and operate in space a deployable solar sail that can be steered and provides measurable acceleration. The approach planned for this experiment is to test and validate models and processes for solar sail design, fabrication, deployment, and flight. These models and processes would then be used to design, fabricate, and operate scaleable solar sails for future space science missions. There are six validation objectives planned for the ST9 SSFV experiment: 1) Validate solar sail design tools and fabrication methods; 2) Validate controlled deployment; 3) Validate in space structural characteristics (focus of poster); 4) Validate solar sail attitude control; 5) Validate solar sail thrust performance; 6) Characterize the sail's electromagnetic interaction with the space environment. This poster presents a top-level assessment of the role of structural models in the validation process for in-space structural characteristics.
NASA Astrophysics Data System (ADS)
Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors (aligned with the EURO-CORDEX experiment) and 3) pseudo-reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a Europe-wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contributions to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including both data (downscaled values) and metadata (characterizing different aspects of the downscaling methods). This constitutes the largest and most comprehensive intercomparison of statistical downscaling methods to date. Here, we present an overall validation, analyzing marginal and temporal aspects to assess the intrinsic performance and added value of statistical downscaling methods at both annual and seasonal levels. This validation takes into account the different properties/limitations of different approaches and techniques (as reported in the provided metadata) in order to perform a fair comparison. It is pointed out that this experiment alone is not sufficient to evaluate the limitations of (MOS) bias correction techniques. Moreover, it does not fully validate PP either, since we do not learn whether we have the right predictors or whether the PP assumption is valid. These problems will be analyzed in the subsequent community-open VALUE experiments 2) and 3), which will be open for participation throughout the present year.
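As an illustration of the Experiment 1 layout, the following Python sketch splits 1979-2008 into five consecutive 6-year folds and computes a couple of simple marginal indices per fold; obs and downscale are hypothetical placeholders for station observations and a downscaling method, not part of the VALUE portal.

import numpy as np
from scipy.stats import spearmanr

years = np.arange(1979, 2009)                        # 1979-2008 inclusive
folds = [years[i:i + 6] for i in range(0, 30, 6)]    # five consecutive 6-year blocks

def validate(obs_values, pred_values):
    """Two simple marginal indices, used here purely for illustration."""
    bias = np.mean(pred_values) - np.mean(obs_values)
    rho, _ = spearmanr(obs_values, pred_values)
    return {"bias": bias, "spearman": rho}

def cross_validate(obs, downscale):
    """obs: dict mapping year -> observed daily values at a station (placeholder).
    downscale(train_years, test_years): returns predictions for the test years
    (placeholder for an actual statistical downscaling method)."""
    scores = []
    for test_years in folds:
        train_years = np.setdiff1d(years, test_years)
        pred = downscale(train_years, test_years)
        obs_test = np.concatenate([obs[y] for y in test_years])
        scores.append(validate(obs_test, pred))
    return scores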
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricci, Paolo; Theiler, C.; Fasoli, A.
A methodology for plasma turbulence code validation is discussed, focusing on quantitative assessment of the agreement between experiments and simulations. The present work extends the analysis carried out in a previous paper [P. Ricci et al., Phys. Plasmas 16, 055703 (2009)] where the validation observables were introduced. Here, it is discussed how to quantify the agreement between experiments and simulations with respect to each observable, how to define a metric to evaluate this agreement globally, and - finally - how to assess the quality of a validation procedure. The methodology is then applied to the simulation of the basic plasma physics experiment TORPEX [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], considering both two-dimensional and three-dimensional simulation models.
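One way to turn per-observable comparisons into a single global figure of merit is sketched below in Python: each observable contributes a distance normalized by the combined experimental and simulation uncertainties, weighted by how directly it is measured. The weighting and normalization here are illustrative assumptions, not the exact metric of the cited methodology.

import numpy as np

def observable_agreement(exp, sim, err_exp, err_sim):
    """Normalized distance between experiment and simulation for one observable."""
    d = np.abs(np.asarray(exp) - np.asarray(sim)) / np.sqrt(
        np.asarray(err_exp) ** 2 + np.asarray(err_sim) ** 2)
    return np.mean(d)

def composite_metric(observables):
    """observables: list of dicts with keys 'exp', 'sim', 'err_exp', 'err_sim',
    and a hierarchy 'weight' (smaller for quantities that are measured or
    modelled more indirectly). Smaller return value = better global agreement."""
    num = sum(o["weight"] * observable_agreement(o["exp"], o["sim"],
                                                 o["err_exp"], o["err_sim"])
              for o in observables)
    den = sum(o["weight"] for o in observables)
    return num / den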
Experience with Aero- and Fluid-Dynamic Testing for Engineering and CFD Validation
NASA Technical Reports Server (NTRS)
Ross, James C.
2016-01-01
Ever since computations have been used to simulate aerodynamics, the need to ensure that the computations adequately represent real life has followed. Many experiments have been performed specifically for validation and, as computational methods have improved, so have the validation experiments. Validation is also a moving target because computational methods improve, requiring validation for the new aspects of flow physics that the computations aim to capture. Concurrently, new measurement techniques are being developed that can help capture more detailed flow features; pressure-sensitive paint (PSP) and particle image velocimetry (PIV) come to mind. This paper will present various wind-tunnel tests the author has been involved with and how they were used for validation of various kinds of CFD. A particular focus is the application of advanced measurement techniques to flow fields (and geometries) that had proven to be difficult to predict computationally. Many of these difficult flow problems arose from engineering and development problems that needed to be solved for a particular vehicle or research program. In some cases the experiments required to solve the engineering problems were refined to provide valuable CFD validation data in addition to the primary engineering data. All of these experiments have provided physical insight and validation data for a wide range of aerodynamic and acoustic phenomena for vehicles ranging from tractor-trailers to crewed spacecraft.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jernigan, Dann A.; Blanchat, Thomas K.
It is necessary to improve understanding and develop temporally- and spatially-resolved integral scale validation data of the heat flux incident to a complex object in addition to measuring the thermal response of said object located within the fire plume for the validation of the SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE codes. To meet this objective, a complex calorimeter with sufficient instrumentation to allow validation of the coupling between FUEGO/SYRINX/CALORE has been designed, fabricated, and tested in the Fire Laboratory for Accreditation of Models and Experiments (FLAME) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparison between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. This report presents the data validation steps and processes, the results of the penlight radiant heat experiments (for the purpose of validating the CALORE heat transfer modeling of the complex calorimeter), and the results of the fire tests in FLAME.
Electrolysis Performance Improvement and Validation Experiment
NASA Technical Reports Server (NTRS)
Schubert, Franz H.
1992-01-01
Viewgraphs on electrolysis performance improvement and validation experiment are presented. Topics covered include: water electrolysis: an ever increasing need/role for space missions; static feed electrolysis (SFE) technology: a concept developed for space applications; experiment objectives: why test in microgravity environment; and experiment description: approach, hardware description, test sequence and schedule.
A CFD validation roadmap for hypersonic flows
NASA Technical Reports Server (NTRS)
Marvin, Joseph G.
1992-01-01
A roadmap for computational fluid dynamics (CFD) code validation is developed. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation database are given, and gaps are identified where future experiments would provide the needed validation data.
Validation of the revised Mystical Experience Questionnaire in experimental sessions with psilocybin
Barrett, Frederick S; Johnson, Matthew W; Griffiths, Roland R
2016-01-01
The 30-item revised Mystical Experience Questionnaire (MEQ30) was previously developed within an online survey of mystical-type experiences occasioned by psilocybin-containing mushrooms. The rated experiences occurred on average eight years before completion of the questionnaire. The current paper validates the MEQ30 using data from experimental studies with controlled doses of psilocybin. Data were pooled and analyzed from five laboratory experiments in which participants (n=184) received a moderate to high oral dose of psilocybin (at least 20 mg/70 kg). Results of confirmatory factor analysis demonstrate the reliability and internal validity of the MEQ30. Structural equation models demonstrate the external and convergent validity of the MEQ30 by showing that latent variable scores on the MEQ30 positively predict persisting change in attitudes, behavior, and well-being attributed to experiences with psilocybin while controlling for the contribution of the participant-rated intensity of drug effects. These findings support the use of the MEQ30 as an efficient measure of individual mystical experiences. A method to score a “complete mystical experience” that was used in previous versions of the mystical experience questionnaire is validated in the MEQ30, and a stand-alone version of the MEQ30 is provided for use in future research. PMID:26442957
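For readers less familiar with the statistics invoked above, the measurement part of a confirmatory factor model and the structural regression used to test convergent validity can be written generically as follows (symbols are generic placeholders, not the published MEQ30 estimates):

\[
x_{ij} \;=\; \tau_j + \lambda_j\,\eta_{f(j),i} + \varepsilon_{ij},
\qquad \operatorname{Cov}\bigl(\varepsilon_{ij},\,\eta_{f(j),i}\bigr) = 0,
\]
\[
y_i \;=\; \beta_0 + \beta_1\,\eta_i + \beta_2\,\mathrm{intensity}_i + \zeta_i,
\]

where \(x_{ij}\) is the response of participant \(i\) to item \(j\), \(\eta_{f(j),i}\) is the latent factor to which item \(j\) is assigned, and \(y_i\) is a persisting-change outcome regressed on the latent score while controlling for the rated intensity of drug effects.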
Wong, Eliza L. Y.; Coulter, Angela; Hewitson, Paul; Cheung, Annie W. L.; Yam, Carrie H. K.; Lui, Siu fai; Tam, Wilson W. S.; Yeoh, Eng-kiong
2015-01-01
Patient experience reflects quality of care from the patients’ perspective; therefore, patients’ experiences are important data in the evaluation of the quality of health services. The development of an abbreviated, reliable and valid instrument for measuring inpatients’ experience would reflect the key aspect of inpatient care from patients’ perspective as well as facilitate quality improvement by cultivating patient engagement and allow the trends in patient satisfaction and experience to be measured regularly. The study developed a short-form inpatient instrument and tested its ability to capture a core set of inpatients’ experiences. The Hong Kong Inpatient Experience Questionnaire (HKIEQ) was established in 2010; it is an adaptation of the General Inpatient Questionnaire of the Care Quality Commission created by the Picker Institute in United Kingdom. This study used a consensus conference and a cross-sectional validation survey to create and validate a short-form of the Hong Kong Inpatient Experience Questionnaire (SF-HKIEQ). The short-form, the SF-HKIEQ, consisted of 18 items derived from the HKIEQ. The 18 items mainly covered relational aspects of care under four dimensions of the patient’s journey: hospital staff, patient care and treatment, information on leaving the hospital, and overall impression. The SF-HKIEQ had a high degree of face validity, construct validity and internal reliability. The validated SF-HKIEQ reflects the relevant core aspects of inpatients’ experience in a hospital setting. It provides a quick reference tool for quality improvement purposes and a platform that allows both healthcare staff and patients to monitor the quality of hospital care over time. PMID:25860775
Dilt, Thomas E; Weisberg, Peter J; Leitner, Philip; Matocq, Marjorie D; Inman, Richard D; Nussear, Kenneth E; Esque, Todd C
2016-06-01
Conservation planning and biodiversity management require information on landscape connectivity across a range of spatial scales from individual home ranges to large regions. Reduction in landscape connectivity due to changes in land use or development is expected to act synergistically with alterations to habitat mosaic configuration arising from climate change. We illustrate a multiscale connectivity framework to aid habitat conservation prioritization in the context of changing land use and climate. Our approach, which builds upon the strengths of multiple landscape connectivity methods, including graph theory, circuit theory, and least-cost path analysis, is here applied to the conservation planning requirements of the Mohave ground squirrel. The distribution of this threatened Californian species, as for numerous other desert species, overlaps with the proposed placement of several utility-scale renewable energy developments in the American southwest. Our approach uses information derived at three spatial scales to forecast potential changes in habitat connectivity under various scenarios of energy development and climate change. By disentangling the potential effects of habitat loss and fragmentation across multiple scales, we identify priority conservation areas for both core habitat and critical corridor or stepping stone habitats. This approach is a first step toward applying graph theory to analyze habitat connectivity for species with continuously distributed habitat and should be applicable across a broad range of taxa.
Forensic Uncertainty Quantification of Explosive Dispersal of Particles
NASA Astrophysics Data System (ADS)
Hughes, Kyle; Park, Chanyoung; Haftka, Raphael; Kim, Nam-Ho
2017-06-01
In addition to the numerical challenges of simulating the explosive dispersal of particles, validation of the simulation is often plagued with poor knowledge of the experimental conditions. The level of experimental detail required for validation is beyond what is usually included in the literature. This presentation proposes the use of forensic uncertainty quantification (UQ) to investigate validation-quality experiments to discover possible sources of uncertainty that may have been missed in the initial design of experiments or under-reported. The current experience of the authors has found that by making an analogy to crime scene investigation when looking at validation experiments, valuable insights may be gained. One examines all the data and documentation provided by the validation experimentalists, corroborates evidence, and quantifies large sources of uncertainty a posteriori with empirical measurements. In addition, it is proposed that forensic UQ may benefit from an independent investigator to help remove possible implicit biases and increase the likelihood of discovering unrecognized uncertainty. Forensic UQ concepts will be discussed and then applied to a set of validation experiments performed at Eglin Air Force Base. This work was supported in part by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program.
ERIC Educational Resources Information Center
Michael, William B.; Colson, Kenneth R.
1979-01-01
The construction and validation of the Life Experience Inventory (LEI) for the identification of creative electrical engineers are described. Using the number of patents held or pending as a criterion measure, the LEI was found to have high concurrent validity. (JKS)
CFD Modeling Needs and What Makes a Good Supersonic Combustion Validation Experiment
NASA Technical Reports Server (NTRS)
Gaffney, Richard L., Jr.; Cutler, Andrew D.
2005-01-01
If a CFD code/model developer is asked what experimental data he wants to validate his code or numerical model, his answer will be: "Everything, everywhere, at all times." Since this is not possible, practical, or even reasonable, the developer must understand what can be measured within the limits imposed by the test article, the test location, the test environment and the available diagnostic equipment. At the same time, it is important for the experimentalist/diagnostician to understand what the CFD developer needs (as opposed to wants) in order to conduct a useful CFD validation experiment. If these needs are not known, it is possible to neglect easily measured quantities at locations needed by the developer, rendering the data set useless for validation purposes. It is also important for the experimentalist/diagnostician to understand what the developer is trying to validate so that the experiment can be designed to isolate (as much as possible) the effects of a particular physical phenomenon that is associated with the model to be validated. The probability of a successful validation experiment can be greatly increased if the two groups work together, each understanding the needs and limitations of the other.
ERIC Educational Resources Information Center
Zhang, Yi
2016-01-01
Objective: Guided by validation theory, this study aims to better understand the role that academic advising plays in international community college students' adjustment. More specifically, this study investigated how academic advising validates or invalidates their academic and social experiences in a community college context. Method: This…
Validation Experiences and Persistence among Urban Community College Students
ERIC Educational Resources Information Center
Barnett, Elisabeth A.
2007-01-01
The purpose of this research was to examine the extent to which urban community college students' experiences with validation by faculty contributed to their sense of integration in college and whether this, in turn, contributed to their intent to persist in college. This study focused on urban community college students' validating experiences…
Progress Towards a Microgravity CFD Validation Study Using the ISS SPHERES-SLOSH Experiment
NASA Technical Reports Server (NTRS)
Storey, Jedediah M.; Kirk, Daniel; Marsell, Brandon (Editor); Schallhorn, Paul (Editor)
2017-01-01
Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and improving performance of space missions when a significant percentage of the spacecraft's mass is a liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many CFD programs have been validated by slosh experiments using various fluids in earth gravity, but prior to the ISS SPHERES-Slosh experiment [1], little experimental data for long-duration, zero-gravity slosh existed. This paper presents the current status of an ongoing CFD validation study using the ISS SPHERES-Slosh experimental data.
Progress Towards a Microgravity CFD Validation Study Using the ISS SPHERES-SLOSH Experiment
NASA Technical Reports Server (NTRS)
Storey, Jed; Kirk, Daniel (Editor); Marsell, Brandon (Editor); Schallhorn, Paul (Editor)
2017-01-01
Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and improving performance of space missions when a significant percentage of the spacecraft's mass is a liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many CFD programs have been validated by slosh experiments using various fluids in earth gravity, but prior to the ISS SPHERES-Slosh experiment, little experimental data for long-duration, zero-gravity slosh existed. This paper presents the current status of an ongoing CFD validation study using the ISS SPHERES-Slosh experimental data.
Richter, Tobias; Schroeder, Sascha; Wöhrmann, Britta
2009-03-01
In social cognition, knowledge-based validation of information is usually regarded as relying on strategic and resource-demanding processes. Research on language comprehension, in contrast, suggests that validation processes are involved in the construction of a referential representation of the communicated information. This view implies that individuals can use their knowledge to validate incoming information in a routine and efficient manner. Consistent with this idea, Experiments 1 and 2 demonstrated that individuals are able to reject false assertions efficiently when they have validity-relevant beliefs. Validation processes were carried out routinely even when individuals were put under additional cognitive load during comprehension. Experiment 3 demonstrated that the rejection of false information occurs automatically and interferes with affirmative responses in a nonsemantic task (epistemic Stroop effect). Experiment 4 also revealed complementary interference effects of true information with negative responses in a nonsemantic task. These results suggest the existence of fast and efficient validation processes that protect mental representations from being contaminated by false and inaccurate information.
Validation of a dye stain assay for vaginally inserted HEC-filled microbicide applicators
Katzen, Lauren L.; Fernández-Romero, José A.; Sarna, Avina; Murugavel, Kailapuri G.; Gawarecki, Daniel; Zydowsky, Thomas M.; Mensch, Barbara S.
2011-01-01
Background The reliability and validity of self-reports of vaginal microbicide use are questionable given the explicit understanding that participants are expected to comply with study protocols. Our objective was to optimize the Population Council's previously validated dye stain assay (DSA) and related procedures, and establish predictive values for the DSA's ability to identify vaginally inserted single-use, low-density polyethylene microbicide applicators filled with hydroxyethylcellulose gel. Methods Applicators, inserted by 252 female sex workers enrolled in a microbicide feasibility study in Southern India, served as positive controls for optimization and validation experiments. Prior to validation, optimal dye concentration and staining time were ascertained. Three validation experiments were conducted to determine sensitivity, specificity, negative predictive values and positive predictive values. Results The dye concentration of 0.05% (w/v) FD&C Blue No. 1 Granular Food Dye and staining time of five seconds were determined to be optimal and were used for the three validation experiments. There were a total of 1,848 possible applicator readings across validation experiments; 1,703 (92.2%) applicator readings were correct. On average, the DSA performed with 90.6% sensitivity, 93.9% specificity, and had a negative predictive value of 93.8% and a positive predictive value of 91.0%. No statistically significant differences between experiments were noted. Conclusions The DSA was optimized and successfully validated for use with single-use, low-density polyethylene applicators filled with hydroxyethylcellulose (HEC) gel. We recommend including the DSA in future microbicide trials involving vaginal gels in order to identify participants who have low adherence to dosing regimens. In doing so, we can develop strategies to improve adherence as well as investigate the association between product use and efficacy. PMID:21992983
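The reported sensitivity, specificity, and predictive values follow directly from a 2x2 table of DSA readings against true insertion status; a minimal Python sketch with made-up counts (not the study's raw data):

def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and predictive values from a 2x2 table of
    dye-stain readings vs. true insertion status."""
    sensitivity = tp / (tp + fn)   # inserted applicators correctly read positive
    specificity = tn / (tn + fp)   # non-inserted applicators correctly read negative
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# Example with invented counts of the same order as the ~1,848 readings reported:
print(diagnostic_metrics(tp=850, fp=85, tn=800, fn=88))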
Goals and Status of the NASA Juncture Flow Experiment
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Morrison, Joseph H.
2016-01-01
The NASA Juncture Flow experiment is a new effort whose focus is attaining validation data in the juncture region of a wing-body configuration. The experiment is designed specifically for the purpose of CFD validation. Current turbulence models routinely employed by Reynolds-averaged Navier-Stokes CFD are inconsistent in their prediction of corner flow separation in aircraft juncture regions, so experimental data in the near-wall region of such a configuration will be useful both for assessment as well as for turbulence model improvement. This paper summarizes the Juncture Flow effort to date, including preliminary risk-reduction experiments already conducted and planned future experiments. The requirements and challenges associated with conducting a quality validation test are discussed.
CFD validation experiments at the Lockheed-Georgia Company
NASA Technical Reports Server (NTRS)
Malone, John B.; Thomas, Andrew S. W.
1987-01-01
Information is given in viewgraph form on computational fluid dynamics (CFD) validation experiments at the Lockheed-Georgia Company. Topics covered include validation experiments on a generic fighter configuration, a transport configuration, and a generic hypersonic vehicle configuration; computational procedures; surface and pressure measurements on wings; laser velocimeter measurements of a multi-element airfoil system; the flowfield around a stiffened airfoil; laser velocimeter surveys of a circulation control wing; circulation control for high lift; and high angle of attack aerodynamic evaluations.
Observations on CFD Verification and Validation from the AIAA Drag Prediction Workshops
NASA Technical Reports Server (NTRS)
Morrison, Joseph H.; Kleb, Bil; Vassberg, John C.
2014-01-01
The authors provide observations from the AIAA Drag Prediction Workshops that have spanned over a decade and from a recent validation experiment at NASA Langley. These workshops provide an assessment of the predictive capability of forces and moments, focused on drag, for transonic transports. It is very difficult to manage the consistency of results in a workshop setting to perform verification and validation at the scientific level, but it may be sufficient to assess it at the level of practice. Observations thus far: 1) due to simplifications in the workshop test cases, wind tunnel data are not necessarily the “correct” results that CFD should match, 2) an average of core CFD data are not necessarily a better estimate of the true solution as it is merely an average of other solutions and has many coupled sources of variation, 3) outlier solutions should be investigated and understood, and 4) the DPW series does not have the systematic build up and definition on both the computational and experimental side that is required for detailed verification and validation. Several observations regarding the importance of the grid, effects of physical modeling, benefits of open forums, and guidance for validation experiments are discussed. The increased variation in results when predicting regions of flow separation and increased variation due to interaction effects, e.g., fuselage and horizontal tail, point out the need for validation data sets for these important flow phenomena. Experiences with a recent validation experiment at NASA Langley are included to provide guidance on validation experiments.
Validation of Skills, Knowledge and Experience in Lifelong Learning in Europe
ERIC Educational Resources Information Center
Ogunleye, James
2012-01-01
The paper examines systems of validation of skills and experience as well as the main methods/tools currently used for validating skills and knowledge in lifelong learning. The paper uses mixed methods--a case study research and content analysis of European Union policy documents and frameworks--as a basis for this research. The selection of the…
Changes and Issues in the Validation of Experience
ERIC Educational Resources Information Center
Triby, Emmanuel
2005-01-01
This article analyses the main changes in the rules for validating experience in France and of what they mean for society. It goes on to consider university validation practices. The way in which this system is evolving offers a chance to identify the issues involved for the economy and for society, with particular attention to the expected…
Reliability and validity of the neurorehabilitation experience questionnaire for inpatients.
Kneebone, Ian I; Hull, Samantha L; McGurk, Rhona; Cropley, Mark
2012-09-01
Patient-centered measures of the inpatient neurorehabilitation experience are needed to assess services. The objective of this study was to develop a valid and reliable Neurorehabilitation Experience Questionnaire (NREQ) to assess whether neurorehabilitation inpatients experience service elements important to them. Based on the themes established in prior qualitative research, adopting questions from established inventories and using a literature review, a draft version of the NREQ was generated. Focus groups and interviews were conducted with 9 patients and 26 staff from neurological rehabilitation units to establish face validity. Then, 70 patients were recruited to complete the NREQ to ascertain reliability (internal and test-retest) and concurrent validity. On the basis of the face validity testing, several modifications were made to the draft version of the NREQ. Subsequently, internal reliability (time 1 α = .76, time 2 α = .80), test-retest reliability (r = 0.70), and concurrent validity (r = 0.32 and r = 0.56) were established for the revised version. Whereas responses were associated with positive mood (r = 0.30), they appeared not to be influenced by negative mood, age, education, length of stay, sex, functional independence, or whether a participant had been a patient on a unit previously. Preliminary validation of the NREQ suggests promise for use with its target population.
Bayesian cross-entropy methodology for optimal design of validation experiments
NASA Astrophysics Data System (ADS)
Jiang, X.; Mahadevan, S.
2006-07-01
An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
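A minimal sketch of the idea in Python, assuming (for tractability) that both the model prediction and the anticipated experimental observation are summarized as univariate Gaussians, so the cross entropy has a closed form; model and experiment_forecast are hypothetical stand-ins, and the annealing loop is a generic implementation rather than the paper's algorithm.

import numpy as np

def gaussian_cross_entropy(mu_p, s_p, mu_q, s_q):
    """H(p, q) for univariate Gaussians: p = model prediction, q = anticipated
    experimental observation."""
    return 0.5 * np.log(2 * np.pi * s_q**2) + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2)

def anneal(objective, x0, bounds, n_iter=2000, t0=1.0, seed=0):
    """Minimal simulated-annealing search over a scalar design variable."""
    rng = np.random.default_rng(seed)
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    for i in range(n_iter):
        t = t0 * (1 - i / n_iter) + 1e-6
        cand = np.clip(x + rng.normal(0, 0.1 * (bounds[1] - bounds[0])), *bounds)
        fc = objective(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / t):
            x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# model(x) and experiment_forecast(x) are hypothetical functions returning
# (mean, std) of the prediction / anticipated measurement at input setting x;
# the sign choice (maximize or minimize the expected cross entropy) follows the
# design goal described in the abstract.
# best_x, _ = anneal(lambda x: -gaussian_cross_entropy(*model(x), *experiment_forecast(x)),
#                    x0=0.5, bounds=(0.0, 1.0))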
The Grand Banks ERS-1 SAR wave spectra validation experiment
NASA Technical Reports Server (NTRS)
Vachon, P. W.; Dobson, F. W.; Smith, S. D.; Anderson, R. J.; Buckley, J. R.; Allingham, M.; Vandemark, D.; Walsh, E. J.; Khandekar, M.; Lalbeharry, R.
1993-01-01
As part of the ERS-1 validation program, the ERS-1 Synthetic Aperture Radar (SAR) wave spectra validation experiment was carried out over the Grand Banks of Newfoundland (Canada) in Nov. 1991. The principal objective of the experiment was to obtain complete sets of wind and wave data from a variety of calibrated instruments to validate SAR measurements of ocean wave spectra. The field program activities are described and the rather complex wind and wave conditions which were observed are summarized. Spectral comparisons with ERS-1 SAR image spectra are provided. The ERS-1 SAR is shown to have measured swell and range traveling wind seas, but did not measure azimuth traveling wind seas at any time during the experiment. Results of velocity bunching forward mapping and new measurements of the relationship between wind stress and sea state are also shown.
Munkácsy, Gyöngyi; Sztupinszki, Zsófia; Herman, Péter; Bán, Bence; Pénzváltó, Zsófia; Szarvas, Nóra; Győrffy, Balázs
2016-09-27
No independent cross-validation of success rate for studies utilizing small interfering RNA (siRNA) for gene silencing has been completed before. To assess the influence of experimental parameters like cell line, transfection technique, validation method, and type of control, these parameters must be assessed across a large set of studies. We utilized gene chip data published for siRNA experiments to assess success rate and to compare methods used in these experiments. We searched NCBI GEO for samples with whole transcriptome analysis before and after gene silencing and evaluated the efficiency for the target and off-target genes using the array-based expression data. The Wilcoxon signed-rank test was used to assess silencing efficacy, and Kruskal-Wallis tests and Spearman rank correlation were used to evaluate study parameters. Altogether, 1,643 samples representing 429 experiments published in 207 studies were evaluated. The fold change (FC) of down-regulation of the target gene was above 0.7 in 18.5% and was above 0.5 in 38.7% of experiments. Silencing efficiency was lowest in MCF7 and highest in SW480 cells (FC = 0.59 and FC = 0.30, respectively, P = 9.3E-06). Studies utilizing Western blot for validation performed better than those with quantitative polymerase chain reaction (qPCR) or microarray (FC = 0.43, FC = 0.47, and FC = 0.55, respectively, P = 2.8E-04). There was no correlation between type of control, transfection method, publication year, and silencing efficiency. Although gene silencing is a robust feature successfully cross-validated in the majority of experiments, efficiency remained insufficient in a significant proportion of studies. Selection of cell line model and validation method had the highest influence on silencing proficiency.
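The per-experiment fold change and the Wilcoxon signed-rank test described above can be reproduced in a few lines; the expression values below are illustrative, not GEO data.

import numpy as np
from scipy.stats import wilcoxon

def silencing_fold_change(expr_before, expr_after):
    """Per-experiment fold change of the target gene (after / before);
    FC < 0.5 indicates more than 50% knock-down on the linear scale."""
    return expr_after / expr_before

# Paired before/after expression of the target gene across experiments (invented).
before = np.array([1020., 870., 1500., 640., 980.])
after = np.array([410., 620., 700., 600., 300.])

fc = silencing_fold_change(before, after)
stat, p = wilcoxon(after, before, alternative="less")   # test for down-regulation
print(fc.round(2), p)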
ERIC Educational Resources Information Center
de Blignieres-Legeraud, Anne; Bjornavold, Jens; Charraud, Anne-Marie; Gerard, Francoise; Diamanti, Stamatina; Freundlinger, Alfred; Bjerknes, Ellen; Covita, Horacio
A workshop aimed to clarify under what conditions the validation of knowledge gained through experience can be considered a professionalizing factor for European Union teachers and trainers by creating a better link between experience and training and between vocational training and qualifications. Seven papers were presented in addition to an…
USDA-ARS?s Scientific Manuscript database
The purpose of SMAP (Soil Moisture Active Passive) Validation Experiment 2012 (SMAPVEX12) campaign was to collect data for the pre-launch development and validation of SMAP soil moisture algorithms. SMAP is a National Aeronautics and Space Administration’s (NASA) satellite mission designed for the m...
SeaSat-A Satellite Scatterometer (SASS) Validation and Experiment Plan
NASA Technical Reports Server (NTRS)
Schroeder, L. C. (Editor)
1978-01-01
This plan was generated by the SeaSat-A satellite scatterometer experiment team to define the pre- and post-launch activities necessary to conduct sensor validation and geophysical evaluation. Details include an instrument and experiment description, performance requirements, success criteria, constraints, mission requirements, data processing requirements, and data analysis responsibilities.
The inventory for déjà vu experiences assessment. Development, utility, reliability, and validity.
Sno, H N; Schalken, H F; de Jonghe, F; Koeter, M W
1994-01-01
In this article the development, utility, reliability, and validity of the Inventory for Déjà vu Experiences Assessment (IDEA) are described. The IDEA is a 23-item self-administered questionnaire consisting of a general section of nine questions and qualitative section of 14 questions. The latter questions comprise 48 topics. The questionnaire appeared to be a user-friendly instrument with satisfactory to good reliability and validity. The IDEA permits the study of quantitative and qualitative characteristics of déjà vu experiences.
Results from SMAP Validation Experiments 2015 and 2016
NASA Astrophysics Data System (ADS)
Colliander, A.; Jackson, T. J.; Cosh, M. H.; Misra, S.; Crow, W.; Powers, J.; Wood, E. F.; Mohanty, B.; Judge, J.; Drewry, D.; McNairn, H.; Bullock, P.; Berg, A. A.; Magagi, R.; O'Neill, P. E.; Yueh, S. H.
2017-12-01
NASA's Soil Moisture Active Passive (SMAP) mission was launched in January 2015. The objective of the mission is global mapping of soil moisture and freeze/thaw state. Well-characterized sites with calibrated in situ soil moisture measurements are used to determine the quality of the soil moisture data products; these sites are designated as core validation sites (CVS). To support the CVS-based validation, airborne field experiments are used to provide high-fidelity validation data and to improve the SMAP retrieval algorithms. The SMAP project and NASA coordinated airborne field experiments at three CVS locations in 2015 and 2016. SMAP Validation Experiment 2015 (SMAPVEX15) was conducted around the Walnut Gulch CVS in Arizona in August 2015. SMAPVEX16 was conducted at the South Fork CVS in Iowa and Carman CVS in Manitoba, Canada from May to August 2016. The airborne PALS (Passive Active L-band Sensor) instrument mapped all experiment areas several times, resulting in 30 coincident measurements with SMAP. The experiments included an intensive ground sampling regime consisting of manual sampling and augmentation of the CVS soil moisture measurements with temporary networks of soil moisture sensors. Analyses using the data from these experiments have produced various results regarding the SMAP validation and related science questions. The SMAPVEX15 data set has been used for calibration of a hyper-resolution model for soil moisture product validation; development of a multi-scale parameterization approach for surface roughness; and validation of disaggregation of SMAP soil moisture with the optical thermal signal. The SMAPVEX16 data set has already been used for studying the spatial upscaling within a pixel with highly heterogeneous soil texture distribution; for understanding the process of radiative transfer at plot scale in relation to field scale and SMAP footprint scale over highly heterogeneous vegetation distribution; for testing a data-fusion-based soil moisture downscaling approach; and for investigating soil moisture impact on estimation of vegetation fluorescence from airborne measurements. The presentation will describe the collected data and showcase some of the most important results achieved so far.
NASA Astrophysics Data System (ADS)
Nir, A.; Doughty, C.; Tsang, C. F.
Validation methods that were developed in the context of the deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. There is no attempt to validate a specific model, but several models of increasing complexity are compared with experimental results. The outcome is interpreted as a demonstration of the paradigm proposed by van der Heijde [26], that different constituencies have different objectives for the validation process and therefore their acceptance criteria differ also.
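As an example of the kind of inverse estimation mentioned for the soil thermal parameters, the Python sketch below fits the analytic damped-sinusoid solution of 1-D heat conduction to temperature-depth-time data to recover a thermal diffusivity; the variable names and the commented-out call are placeholders, and the actual study may have used a different forward model and estimation scheme.

import numpy as np
from scipy.optimize import curve_fit

OMEGA = 2 * np.pi / 365.0   # annual surface-temperature forcing, 1/day

def soil_temperature(zt, t_mean, amp, alpha):
    """Analytic 1-D conduction solution for a sinusoidal surface temperature;
    alpha is the thermal diffusivity (m^2/day), zt = (depth_m, time_day)."""
    z, t = zt
    d = np.sqrt(2 * alpha / OMEGA)            # damping depth
    return t_mean + amp * np.exp(-z / d) * np.sin(OMEGA * t - z / d)

# depths, times, temps would come from the buried sensor array (placeholders):
# popt, _ = curve_fit(soil_temperature, (depths, times), temps, p0=(15.0, 10.0, 0.03))
# t_mean_hat, amp_hat, alpha_hat = popt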
Development and Validation of the Caring Loneliness Scale.
Karhe, Liisa; Kaunonen, Marja; Koivisto, Anna-Maija
2016-12-01
The Caring Loneliness Scale (CARLOS) includes 5 categories derived from earlier qualitative research. This article assesses the reliability and construct validity of a scale designed to measure patient experiences of loneliness in a professional caring relationship. Statistical analysis with 4 different sample sizes included Cronbach's alpha and exploratory factor analysis with principal axis factoring extraction. The sample size of 250 gave the most useful and comprehensible structure, but all 4 samples yielded underlying content of loneliness experiences. The initial 5 categories were reduced to 4 factors with 24 items and Cronbach's alpha ranging from .77 to .90. The findings support the reliability and validity of CARLOS for the assessment of Finnish breast cancer and heart surgery patients' experiences, but, as with all instruments, further validation is needed.
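Cronbach's alpha, the internal-consistency statistic reported above, is simple to compute from an item-score matrix; a minimal Python sketch (the factor-assignment variable is hypothetical).

import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores for one factor."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# e.g. alpha for the items loading on one factor, with `scores` a
# respondents-by-items array and `factor1_items` a list of column indices:
# print(cronbach_alpha(scores[:, factor1_items]))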
ERIC Educational Resources Information Center
Tipton, Elizabeth
2013-01-01
As a result of the use of random assignment to treatment, randomized experiments typically have high internal validity. However, units are very rarely randomly selected from a well-defined population of interest into an experiment; this results in low external validity. Under nonrandom sampling, this means that the estimate of the sample average…
PSI-Center Simulations of Validation Platform Experiments
NASA Astrophysics Data System (ADS)
Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.
2013-10-01
The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with extended MHD simulations. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), FRX-L (Los Alamos National Laboratory), HIT-SI (U Wash - UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), PHD/ELF (UW/MSNW), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). Modifications have been made to the NIMROD, HiFi, and PSI-Tet codes to specifically model these experiments, including mesh generation/refinement, non-local closures, appropriate boundary conditions (external fields, insulating BCs, etc.), and kinetic and neutral particle interactions. The PSI-Center is exploring application of validation metrics between experimental data and simulations results. Biorthogonal decomposition is proving to be a powerful method to compare global temporal and spatial structures for validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status will be presented.
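Biorthogonal decomposition of a space-time data matrix is essentially a singular value decomposition; the Python sketch below extracts temporal and spatial modes and compares the leading spatial structures from experiment and simulation, assuming both are sampled at the same probe locations. This is a generic illustration, not the PSI-Center's implementation.

import numpy as np

def biorthogonal_decomposition(data):
    """data: (n_times, n_probes) space-time matrix of a fluctuating field.
    Returns temporal modes (chronos, as columns), singular weights, and
    spatial modes (topos, as rows)."""
    chronos, weights, topos = np.linalg.svd(data - data.mean(axis=0),
                                            full_matrices=False)
    return chronos, weights, topos

def mode_overlap(topos_exp, topos_sim, k=3):
    """|inner product| between the first k unit-norm spatial modes from
    experiment and simulation (1.0 means identical structure, up to sign)."""
    return np.abs(np.einsum("ij,ij->i", topos_exp[:k], topos_sim[:k]))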
Validation Experiments for Spent-Fuel Dry-Cask In-Basket Convection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Barton L.
2016-08-16
This work consisted of the following major efforts: 1. Literature survey on validation of external natural convection; 2. Design the experiment; 3. Build the experiment; 4. Run the experiment; 5. Collect results; 6. Disseminate results; and 7. Perform a CFD validation study using the results. We note that while all tasks are complete, some deviations from the original plan were made. Specifically, geometrical changes in the parameter space were skipped in favor of flow condition changes, which were found to be much more practical to implement. Changing the geometry required new as-built measurements, which proved extremely costly and impractical given the time and funds available.
[Ethic review on clinical experiments of medical devices in medical institutions].
Shuai, Wanjun; Chao, Yong; Wang, Ning; Xu, Shining
2011-07-01
Clinical experiments are always used to evaluate the safety and validity of medical devices. The experiments are of two types: clinical trials and clinical tests. Ethics review must be carried out by the ethics committee of a medical institution qualified for clinical research, and approval must be obtained before the experiments begin. In order to ensure the safety and validity of clinical experiments of medical devices in medical institutions, the contents, process, and approval criteria of the ethics review were analyzed and discussed.
Sánchez, Elda E.; Lucena, Sara E.; Reyes, Steven; Soto, Julio G.; Cantu, Esteban; Lopez-Johnston, Juan Carlos; Guerrero, Belsy; Salazar, Ana Maria; Rodríguez-Acosta, Alexis; Galán, Jacob A.; Tao, W. Andy; Pérez, John C.
2012-01-01
Interactions with exposed subendothelial extracellular proteins and cellular integrins (endothelial cells, platelets and lymphocytes) can cause alterations in the hemostatic system associated with atherothrombotic processes. Many molecules found in snake venoms induce pathophysiological changes in humans, causing edema, hemorrhage, and necrosis. Disintegrins are low molecular weight, non-enzymatic proteins found in snake venom that mediate changes by binding to integrins of platelets or other cells and prevent binding of the natural ligands such as fibrinogen, fibronectin or vitronectin. Disintegrins are of great biomedical importance due to their binding affinities resulting in the inhibition of platelet aggregation, adhesion of cancer cells, and induction of signal transduction pathways. RT-PCR was used to obtain a 216 bp disintegrin cDNA from a C. s. scutulatus snake venom gland. The cloned recombinant disintegrin, called r-mojastin 1, codes for 71 amino acids, including 12 cysteines, and an RGD binding motif. r-Mojastin 1 inhibited platelet adhesion to fibronectin with an IC50 of 58.3 nM and ADP-induced platelet aggregation in whole blood with an IC50 of 46 nM. r-Mojastin 1 was also tested for its ability to inhibit platelet ATP release using PRP, resulting in an IC50 of 95.6 nM. MALDI-TOF mass spectrum analysis showed that r-mojastin has a mass of 7.9509 kDa. PMID:20598348
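IC50 values such as those quoted above are typically obtained by fitting a four-parameter logistic curve to dose-response data; a Python sketch with made-up aggregation values (not the published measurements).

import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (response vs. inhibitor conc.)."""
    return bottom + (top - bottom) / (1 + (conc / ic50) ** hill)

# Illustrative % platelet aggregation vs. inhibitor concentration (nM).
conc = np.array([5., 10., 25., 50., 100., 200.])
resp = np.array([92., 85., 65., 45., 22., 10.])

popt, _ = curve_fit(four_pl, conc, resp, p0=(5., 95., 50., 1.))
print("fitted IC50 ~ %.1f nM" % popt[2])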
Dilts, Thomas E.; Weisberg, Peter J.; Leitner, Phillip; Matocq, Marjorie D.; Inman, Richard D.; Nussear, Ken E.; Esque, Todd C.
2016-01-01
Conservation planning and biodiversity management require information on landscape connectivity across a range of spatial scales from individual home ranges to large regions. Reduction in landscape connectivity due to changes in land-use or development is expected to act synergistically with alterations to habitat mosaic configuration arising from climate change. We illustrate a multi-scale connectivity framework to aid habitat conservation prioritization in the context of changing land use and climate. Our approach, which builds upon the strengths of multiple landscape connectivity methods including graph theory, circuit theory and least-cost path analysis, is here applied to the conservation planning requirements of the Mohave ground squirrel. The distribution of this California threatened species, as for numerous other desert species, overlaps with the proposed placement of several utility-scale renewable energy developments in the American Southwest. Our approach uses information derived at three spatial scales to forecast potential changes in habitat connectivity under various scenarios of energy development and climate change. By disentangling the potential effects of habitat loss and fragmentation across multiple scales, we identify priority conservation areas for both core habitat and critical corridor or stepping stone habitats. This approach is a first step toward applying graph theory to analyze habitat connectivity for species with continuously-distributed habitat, and should be applicable across a broad range of taxa.
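To illustrate the graph-theoretic ingredient of such a framework, the Python sketch below scores hypothetical stepping-stone patches by how much the least-cost distance between core areas increases when each patch is removed; patch names and costs are invented, not Mohave ground squirrel data.

import networkx as nx

# Hypothetical habitat patches (nodes) with pairwise movement costs (edge weights).
edges = [("core_A", "stepping_stone_1", 4.0),
         ("stepping_stone_1", "core_B", 6.0),
         ("core_B", "stepping_stone_2", 3.0),
         ("stepping_stone_2", "core_C", 5.0),
         ("core_A", "core_B", 15.0)]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Least-cost style distance between two core areas, and a simple importance score
# for each stepping-stone patch (cost increase when the patch is removed).
base = nx.shortest_path_length(G, "core_A", "core_C", weight="weight")
for patch in [n for n in G.nodes if not n.startswith("core")]:
    H = G.copy()
    H.remove_node(patch)
    try:
        alt = nx.shortest_path_length(H, "core_A", "core_C", weight="weight")
    except nx.NetworkXNoPath:
        alt = float("inf")
    print(patch, "removal raises least-cost distance by", alt - base)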
Merging of an EET CInSAR DEM with the SRTM DEM
NASA Astrophysics Data System (ADS)
Wegmuller, Urs; Wiesmann, Andreas; Santoro, Maurizio
2010-03-01
Cross-interferometry (CInSAR) using ERS-2 and ENVISAT ASAR SAR data acquired in the ERS-like mode IS2 at VV-polarization with perpendicular baselines of approximately 2 kilometers permits generation of digital elevation models (DEMs). Thanks to the long perpendicular baselines, CInSAR has a good potential to generate accurate DEMs over relatively flat terrain. Over sloped terrain the topographic phase gradients get very high and the signals decorrelate if the carrier frequency difference and the baseline effects no longer compensate each other. As a result, phase unwrapping gets very difficult, so that often no reliable solution is obtained for hilly terrain, resulting in DEMs with significant spatial gaps. Spatial gaps in ERS-2 ENVISAT Tandem (EET) CInSAR DEMs over hilly terrain are clearly an important limitation to the utility of these DEMs. On the other hand, the high quality achieved over relatively flat terrain is of high interest. As an attempt to significantly improve the utility of the "good information" contained in the CInSAR DEM, we developed a methodology to merge a CInSAR DEM with another available DEM, e.g. the SRTM DEM. The methodology was applied to an area in California, USA, including relatively flat terrain belonging to the Mohave desert as well as hilly to mountainous terrain of the San Gabriel and Tehachapi Mountains.
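A deliberately simple version of the gap-filling step might look like the following Python sketch, which assumes the two DEMs are already co-registered on the same grid and removes only a mean vertical offset before substituting SRTM heights into the CInSAR gaps; the published methodology is more elaborate.

import numpy as np

def merge_dems(cinsar, srtm):
    """Fill gaps (NaNs) in a CInSAR DEM with SRTM heights after removing the mean
    vertical offset estimated over the valid overlap. Both arrays are assumed to
    be co-registered rasters of identical shape (a simplifying assumption)."""
    valid = ~np.isnan(cinsar) & ~np.isnan(srtm)
    offset = np.mean(cinsar[valid] - srtm[valid])        # vertical datum/bias
    return np.where(np.isnan(cinsar), srtm + offset, cinsar)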
Hanauer, David I.; Bauerle, Cynthia
2015-01-01
Science, technology, engineering, and mathematics education reform efforts have called for widespread adoption of evidence-based teaching in which faculty members attend to student outcomes through assessment practice. Awareness about the importance of assessment has illuminated the need to understand what faculty members know and how they engage with assessment knowledge and practice. The Faculty Self-Reported Assessment Survey (FRAS) is a new instrument for evaluating science faculty assessment knowledge and experience. Instrument validation was composed of two distinct studies: an empirical evaluation of the psychometric properties of the FRAS and a comparative known-groups validation to explore the ability of the FRAS to differentiate levels of faculty assessment experience. The FRAS was found to be highly reliable (α = 0.96). The dimensionality of the instrument enabled distinction of assessment knowledge into categories of program design, instrumentation, and validation. In the known-groups validation, the FRAS distinguished between faculty groups with differing levels of assessment experience. Faculty members with formal assessment experience self-reported higher levels of familiarity with assessment terms, higher frequencies of assessment activity, increased confidence in conducting assessment, and more positive attitudes toward assessment than faculty members who were novices in assessment. These results suggest that the FRAS can reliably and validly differentiate levels of expertise in faculty knowledge of assessment. PMID:25976653
CFD validation experiments at McDonnell Aircraft Company
NASA Technical Reports Server (NTRS)
Verhoff, August
1987-01-01
Information is given in viewgraph form on computational fluid dynamics (CFD) validation experiments at McDonnell Aircraft Company. Topics covered include a high speed research model, a supersonic persistence fighter model, a generic fighter wing model, surface grids, force and moment predictions, surface pressure predictions, forebody models with 65 degree clipped delta wings, and the low aspect ratio wing/body experiment.
Ribeiro, João Carlos; Simões, João; Silva, Filipe; Silva, Eduardo D.; Hummel, Cornelia; Hummel, Thomas; Paiva, António
2016-01-01
The cross-cultural adaptation and validation of the Sniffin' Sticks test for the Portuguese population is described. Over 270 people participated in four experiments. In Experiment 1, 67 participants rated the familiarity of presented odors and seven descriptors of the original test were adapted to a Portuguese context. In Experiment 2, the Portuguese version of the Sniffin' Sticks test was administered to 203 healthy participants. Older age, male gender and active smoking status were confirmed as confounding factors. The third experiment showed the validity of the Portuguese version of the Sniffin' Sticks test in discriminating healthy controls from patients with olfactory dysfunction. In Experiment 4, the test-retest reliability for both the composite score (r(71) = 0.86) and the identification test (r(71) = 0.62) was established (p<0.001). Normative data for the Portuguese version of the Sniffin' Sticks test is provided, showing good validity and reliability and effectively distinguishing patients from healthy controls with high sensitivity and specificity. The identification test of the Portuguese version of the Sniffin' Sticks test is a clinically suitable screening tool in routine outpatient Portuguese settings. PMID:26863023
Threats to the Internal Validity of Experimental and Quasi-Experimental Research in Healthcare.
Flannelly, Kevin J; Flannelly, Laura T; Jankowski, Katherine R B
2018-01-01
The article defines, describes, and discusses the seven threats to the internal validity of experiments discussed by Donald T. Campbell in his classic 1957 article: history, maturation, testing, instrument decay, statistical regression, selection, and mortality. These concepts are said to be threats to the internal validity of experiments because they pose alternate explanations for the apparent causal relationship between the independent variable and dependent variable of an experiment if they are not adequately controlled. A series of simple diagrams illustrate three pre-experimental designs and three true experimental designs discussed by Campbell in 1957 and several quasi-experimental designs described in his book written with Julian C. Stanley in 1966. The current article explains why each design controls for or fails to control for these seven threats to internal validity.
DC-8 and ER-2 in Sweden for the Sage III Ozone Loss and Validation Experiment (SOLVE)
NASA Technical Reports Server (NTRS)
2000-01-01
This 48 second video shows Dryden's Airborne Science aircraft in Kiruna Sweden in January 2000. The DC-8 and ER-2 conducted atmospheric studies for the Sage III Ozone Loss and Validation Experiment (SOLVE).
Kirsch, Monika; Mitchell, Sandra A; Dobbels, Fabienne; Stussi, Georg; Basch, Ethan; Halter, Jorg P; De Geest, Sabina
2015-02-01
The aim of this sequential mixed methods study was to develop a PRO-CTCAE (Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events)-based measure of the symptom experience of late effects in German-speaking long-term survivors of allogeneic stem cell transplantation (SCT), and to examine its content validity. The US National Cancer Institute's PRO-CTCAE item library was translated into German and linguistically validated. PRO-CTCAE symptoms prevalent in ≥50% of survivors (n = 15) and recognized as important by SCT experts (n = 9) were identified. Additional concepts relevant to the symptom experience and its consequences were elicited. Content validity of the PROVIVO (Patient-Reported Outcomes of long-term survivors after allogeneic SCT) instrument was assessed through an additional round of cognitive debriefing in 15 patients, and item and scale content validity indices by 9 experts. PROVIVO comprises a total of 49 items capturing the experience of physical, emotional and cognitive symptoms. To improve the instrument's utility for clinical decision-making, questions soliciting limitations in activities of daily living, frequent infections, and overall well-being were added. Cognitive debriefings demonstrated that items were well understood and relevant to the SCT survivor experience. Scale Content Validity Index (CVI) (0.94) and item CVI (median = 1; range 0.75-1) were very high. Qualitative and quantitative data provide preliminary evidence supporting the content validity of PROVIVO and identify a PRO-CTCAE item bundle for use in SCT survivors. A study to evaluate the measurement properties of PROVIVO and to examine its capacity to improve survivorship care planning is underway. Copyright © 2014 Elsevier Ltd. All rights reserved.
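The content validity indices reported above follow the usual expert-rating convention: the item CVI is the proportion of experts rating an item relevant, and the scale CVI averages the item CVIs. The sketch below uses hypothetical ratings from nine experts, not the PROVIVO panel's data, and assumes the common 4-point relevance scale with ratings of 3 or 4 counted as relevant.

```python
import numpy as np

# Hypothetical relevance ratings (1-4 scale) from 9 experts for 5 items.
ratings = np.array([
    [4, 4, 3, 4, 4, 4, 3, 4, 4],   # item 1
    [4, 3, 4, 4, 4, 4, 4, 4, 4],   # item 2
    [3, 4, 4, 2, 4, 4, 4, 3, 4],   # item 3
    [4, 4, 4, 4, 4, 4, 4, 4, 4],   # item 4
    [4, 4, 3, 4, 4, 3, 4, 4, 2],   # item 5
])

relevant = ratings >= 3                 # ratings of 3 or 4 count as "relevant"
item_cvi = relevant.mean(axis=1)        # I-CVI: proportion of experts per item
scale_cvi = item_cvi.mean()             # S-CVI/Ave: mean of the I-CVIs
print(item_cvi, round(scale_cvi, 2))
```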
Cloud computing and validation of expandable in silico livers.
Ropella, Glen E P; Hunt, C Anthony
2010-12-03
In Silico Livers (ISLs) are works in progress. They are used to challenge multilevel, multi-attribute, mechanistic hypotheses about the hepatic disposition of xenobiotics coupled with hepatic responses. To enhance ISL-to-liver mappings, we added discrete time metabolism, biliary elimination, and bolus dosing features to a previously validated ISL and initiated re-validation experiments that required scaling the experiments to use more simulated lobules than previously, more than could be achieved using the local cluster technology. Rather than dramatically increasing the size of our local cluster, we undertook the re-validation experiments using the Amazon EC2 cloud platform. Doing so required demonstrating the efficacy of scaling a simulation to use more cluster nodes and assessing the scientific equivalence of local cluster validation experiments with those executed using the cloud platform. The local cluster technology was duplicated in the Amazon EC2 cloud platform. Synthetic modeling protocols were followed to identify a successful parameterization. Experiment sample sizes (number of simulated lobules) on both platforms were 49, 70, 84, and 152 (cloud only). Experimental indistinguishability was demonstrated for ISL outflow profiles of diltiazem using both platforms for experiments consisting of 84 or more samples. The process was analogous to demonstration of results equivalency from two different wet-labs. The results provide additional evidence that disposition simulations using ISLs can cover the behavior space of liver experiments in distinct experimental contexts (there is in silico-to-wet-lab phenotype similarity). The scientific value of experimenting with multiscale biomedical models has been limited to research groups with access to computer clusters. The availability of cloud technology coupled with the evidence of scientific equivalency has lowered the barrier and will greatly facilitate model sharing as well as provide straightforward tools for scaling simulations to encompass greater detail with no extra investment in hardware.
Chang, Yuanhan; Tambe, Abhijit Anil; Maeda, Yoshinobu; Wada, Masahiro; Gonda, Tomoya
2018-03-08
A literature review of finite element analysis (FEA) studies of dental implants and their model validation processes was performed to establish criteria for evaluating validation methods with respect to their similarity to biological behavior. An electronic literature search of PubMed was conducted up to January 2017 using the Medical Subject Headings "dental implants" and "finite element analysis." After accessing the full texts, the context of each article was searched using the words "valid" and "validation", and articles in which these words appeared were read to determine whether they met the inclusion criteria for the review. Of 601 articles published from 1997 to 2016, 48 that met the eligibility criteria were selected. The articles were categorized according to their validation method as follows: in vivo experiments in humans (n = 1) and other animals (n = 3), model experiments (n = 32), others' clinical data and past literature (n = 9), and other software (n = 2). Validation techniques with a high level of sufficiency and efficiency are still rare in FEA studies of dental implants. High-level validation, especially using in vivo experiments tied to an accurate finite element method, needs to become an established part of FEA studies. The recognition of a validation process should be considered when judging the practicality of an FEA study.
A statistical approach to selecting and confirming validation targets in -omics experiments
2012-01-01
Background Genomic technologies are, by their very nature, designed for hypothesis generation. In some cases, the hypotheses that are generated require that genome scientists confirm findings about specific genes or proteins. But one major advantage of high-throughput technology is that global genetic, genomic, transcriptomic, and proteomic behaviors can be observed. Manual confirmation of every statistically significant genomic result is prohibitively expensive. This has led researchers in genomics to adopt the strategy of confirming only a handful of the most statistically significant results, a small subset chosen for biological interest, or a small random subset. But there is no standard approach for selecting and quantitatively evaluating validation targets. Results Here we present a new statistical method and approach for statistically validating lists of significant results based on confirming only a small random sample. We apply our statistical method to show that the usual practice of confirming only the most statistically significant results does not statistically validate result lists. We analyze an extensively validated RNA-sequencing experiment to show that confirming a random subset can statistically validate entire lists of significant results. Finally, we analyze multiple publicly available microarray experiments to show that statistically validating random samples can both (i) provide evidence to confirm long gene lists and (ii) save thousands of dollars and hundreds of hours of labor over manual validation of each significant result. Conclusions For high-throughput -omics studies, statistical validation is a cost-effective and statistically valid approach to confirming lists of significant results. PMID:22738145
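The abstract does not reproduce the authors' statistical machinery. As a loose illustration of the underlying idea, the sketch below puts a Wilson lower confidence bound on the confirmation rate of a result list after manually validating only a random subset of its entries; the sample counts are hypothetical and the bound is not the paper's specific method.

```python
from statistics import NormalDist

def validation_lower_bound(n_sampled, n_confirmed, confidence=0.95):
    """Wilson lower bound on the true confirmation rate of a result list,
    based on confirming a random sample of its entries."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    p = n_confirmed / n_sampled
    denom = 1 + z**2 / n_sampled
    centre = p + z**2 / (2 * n_sampled)
    margin = z * ((p * (1 - p) + z**2 / (4 * n_sampled)) / n_sampled) ** 0.5
    return (centre - margin) / denom

# Hypothetical: 20 randomly chosen significant hits validated, 18 confirmed.
print(round(validation_lower_bound(20, 18), 2))  # ~0.70 at 95% confidence
```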
Truini, Margot; Beard, L. Sue; Kennedy, Jeffrey; Anning, Dave W.
2013-01-01
We have investigated the hydrogeology of the Hualapai Valley, Detrital Valley, and Sacramento Valley basins of Mohave County in northwestern Arizona to develop a better understanding of groundwater storage within the basin fill aquifers. In our investigation we used geologic maps, well-log data, and geophysical surveys to delineate the sedimentary textures and lithology of the basin fill. We used gravity data to construct a basin geometry model that defines smaller subbasins within the larger basins, and airborne transient-electromagnetic modeled results along with well-log lithology data to infer the subsurface distribution of basin fill within the subbasins. Hydrogeologic units (HGUs) are delineated within the subbasins on the basis of the inferred lithology of saturated basin fill. We used the extent and size of HGUs to estimate groundwater storage to depths of 400 meters (m) below land surface (bls). The basin geometry model for the Hualapai Valley basin consists of three subbasins: the Kingman, Hualapai, and southern Gregg subbasins. In the Kingman subbasin, which is estimated to be 1,200 m deep, saturated basin fill consists of a mixture of fine- to coarse-grained sedimentary deposits. The Hualapai subbasin, which is the largest of the subbasins, contains a thick halite body from about 400 m to about 4,300 m bls. Saturated basin fill overlying the salt body consists predominantly of fine-grained older playa deposits. In the southern Gregg subbasin, which is estimated to be 1,400 m deep, saturated basin fill is interpreted to consist primarily of fine- to coarse-grained sedimentary deposits. Groundwater storage to 400 m bls in the Hualapai Valley basin is estimated to be 14.1 cubic kilometers (km3). The basin geometry model for the Detrital Valley basin consists of three subbasins: northern Detrital, central Detrital, and southern Detrital subbasins. The northern and central Detrital subbasins are characterized by a predominance of playa evaporite and fine-grained clastic deposits; evaporite deposits in the northern Detrital subbasin include halite. The northern Detrital subbasin is estimated to be 600 m deep and the central Detrital subbasin is estimated to be 700 m deep. The southern Detrital subbasin, which is estimated to be 1,500 m deep, is characterized by a mixture of fine- to coarse-grained basin fill deposits. Groundwater storage to 400 m bls in the Detrital Valley basin is estimated to be 9.8 km3. The basin geometry model for the Sacramento Valley basin consists of three subbasins: the Chloride, Golden Valley, and Dutch Flat subbasins. The Chloride subbasin, which is estimated to be 900 m deep, is characterized by fine- to coarse-grained basin fill deposits. In the Golden Valley subbasin, which is elongated north-south and is estimated to be 1,300 m deep, basin fill includes fine-grained sedimentary deposits overlain by coarse-grained sedimentary deposits in much of the subbasin. The Dutch Flat subbasin is estimated to be 2,600 m deep, and well-log lithologic data suggest that the basin fill consists of interlayers of gravel, sand, and clay. Groundwater storage to 400 m bls in the Sacramento Valley basin is estimated to be 35.1 km3.
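The storage figures quoted above are volumes of recoverable groundwater. Although the report's exact procedure is not given in the abstract, such estimates are typically the product of the saturated basin-fill volume (here, to 400 m below land surface) and a specific yield assigned to each hydrogeologic unit; the sketch below uses entirely hypothetical areas, thicknesses, and specific yields, not values from the study.

```python
# Hypothetical hydrogeologic units: (name, area km^2, saturated thickness m, specific yield)
units = [
    ("coarse-grained basin fill", 800.0, 250.0, 0.15),
    ("fine-grained playa deposits", 400.0, 300.0, 0.05),
]

total_km3 = 0.0
for name, area_km2, thickness_m, specific_yield in units:
    saturated_volume_km3 = area_km2 * (thickness_m / 1000.0)  # km^3 of saturated fill
    storage_km3 = saturated_volume_km3 * specific_yield       # recoverable water
    total_km3 += storage_km3
    print(f"{name}: {storage_km3:.1f} km^3")

print(f"total storage: {total_km3:.1f} km^3")
```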
NASA Technical Reports Server (NTRS)
Anderson, James G.
2001-01-01
This grant provided partial support for participation in the SAGE III Ozone Loss and Validation Experiment. The NASA-sponsored SOLVE mission was conducted jointly with the European Commission-sponsored Third European Stratospheric Experiment on Ozone (THESEO 2000). Researchers examined processes that control ozone amounts at mid to high latitudes during the Arctic winter and acquired correlative data needed to validate the Stratospheric Aerosol and Gas Experiment (SAGE) III satellite measurements that are used to quantitatively assess high-latitude ozone loss. The campaign began in September 1999 with intercomparison flights out of NASA Dryden Flight Research Center in Edwards, CA, and continued through March 2000, with midwinter deployments out of Kiruna, Sweden. SOLVE was co-sponsored by the Upper Atmosphere Research Program (UARP), Atmospheric Effects of Aviation Project (AEAP), Atmospheric Chemistry Modeling and Analysis Program (ACMAP), and Earth Observing System (EOS) of NASA's Earth Science Enterprise (ESE) as part of the validation program for the SAGE III instrument.
Earth Radiation Budget Experiment (ERBE) validation
NASA Technical Reports Server (NTRS)
Barkstrom, Bruce R.; Harrison, Edwin F.; Smith, G. Louis; Green, Richard N.; Kibler, James F.; Cess, Robert D.
1990-01-01
During the past 4 years, data from the Earth Radiation Budget Experiment (ERBE) have been undergoing detailed examination. There is no direct source of ground truth for the radiation budget. Thus, this validation effort has had to rely heavily upon intercomparisons between different types of measurements. The ERBE Science Team chose 10 measures of agreement as validation criteria. Late in August 1988, the Team agreed that the data met these conditions. As a result, the final, monthly averaged data products are being archived. These products, their validation, and some results for January 1986 are described. Information is provided on obtaining the data from the archive.
Initial Retrieval Validation from the Joint Airborne IASI Validation Experiment (JAIVEx)
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Liu, Xu; Smith, William L.; Larar, Allen M.; Taylor, Jonathan P.; Revercomb, Henry E.; Mango, Stephen A.; Schluessel, Peter; Calbet, Xavier
2007-01-01
The Joint Airborne IASI Validation Experiment (JAIVEx) was conducted during April 2007 mainly for validation of the Infrared Atmospheric Sounding Interferometer (IASI) on the MetOp satellite, but also included a strong component focusing on validation of the Atmospheric InfraRed Sounder (AIRS) aboard the AQUA satellite. The cross validation of IASI and AIRS is important for the joint use of their data in the global Numerical Weather Prediction process. Initial inter-comparisons of geophysical products have been conducted from different aspects, such as using different measurements from airborne ultraspectral Fourier transform spectrometers (specifically, the NPOESS Airborne Sounder Testbed Interferometer (NAST-I) and the Scanning High-resolution Interferometer Sounder (S-HIS) aboard the NASA WB-57 aircraft), UK Facility for Airborne Atmospheric Measurements (FAAM) BAe146-301 aircraft in situ instruments, dedicated dropsondes, radiosondes, and ground-based Raman lidar. An overview of the JAIVEx retrieval validation plan and some initial results of this field campaign are presented.
Further Validation of the Coach Identity Prominence Scale
ERIC Educational Resources Information Center
Pope, J. Paige; Hall, Craig R.
2014-01-01
This study was designed to examine select psychometric properties of the Coach Identity Prominence Scale (CIPS), including the reliability, factorial validity, convergent validity, discriminant validity, and predictive validity. Coaches (N = 338) who averaged 37 (SD = 12.27) years of age, had a mean of 13 (SD = 9.90) years of coaching experience,…
Modeling the effects of argument length and validity on inductive and deductive reasoning.
Rotello, Caren M; Heit, Evan
2009-09-01
In an effort to assess models of inductive reasoning and deductive reasoning, the authors, in 3 experiments, examined the effects of argument length and logical validity on evaluation of arguments. In Experiments 1a and 1b, participants were given either induction or deduction instructions for a common set of stimuli. Two distinct effects were observed: Induction judgments were more affected by argument length, and deduction judgments were more affected by validity. In Experiment 2, fluency was manipulated by displaying the materials in a low-contrast font, leading to increased sensitivity to logical validity. Several variants of 1-process and 2-process models of reasoning were assessed against the results. A 1-process model that assumed the same scale of argument strength underlies induction and deduction was not successful. A 2-process model that assumed separate, continuous informational dimensions of apparent deductive validity and associative strength gave the more successful account. (c) 2009 APA, all rights reserved.
Nuclear Energy Knowledge and Validation Center (NEKVaC) Needs Workshop Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gougar, Hans
2015-02-01
The Department of Energy (DOE) has made significant progress in developing simulation tools to predict the behavior of nuclear systems with greater accuracy and in increasing our capability to predict the behavior of these systems outside of the standard range of applications. These analytical tools require a more complex array of validation tests to accurately simulate the physics and multiple length and time scales. Results from modern simulations will allow experiment designers to narrow the range of conditions needed to bound system behavior and to optimize the deployment of instrumentation to limit the breadth and cost of the campaign. Modern validation, verification and uncertainty quantification (VVUQ) techniques enable analysts to extract information from experiments in a systematic manner and provide the users with a quantified uncertainty estimate. Unfortunately, the capability to perform experiments that would enable taking full advantage of the formalisms of these modern codes has progressed relatively little (with some notable exceptions in fuels and thermal-hydraulics); the majority of the experimental data available today is the "historic" data accumulated over the last decades of nuclear systems R&D. A validated code-model is a tool for users. An unvalidated code-model is useful for code developers to gain understanding, publish research results, attract funding, etc. As nuclear analysis codes have become more sophisticated, so have the measurement and validation methods and the challenges that confront them. A successful yet cost-effective validation effort requires expertise possessed only by a few, resources possessed only by the well-capitalized (or a willing collective), and a clear, well-defined objective (validating a code that is developed to satisfy the need(s) of an actual user). To that end, the Idaho National Laboratory established the Nuclear Energy Knowledge and Validation Center (NEKVaC or the 'Center') to address the challenges of modern code validation and to manage the knowledge from past, current, and future experimental campaigns. By pulling together the best minds involved in code development, experiment design, and validation to establish and disseminate best practices and new techniques, the Center will be a resource for industry, DOE programs, and academic validation efforts.
Stefanidis, Dimitrios; Hope, William W; Scott, Daniel J
2011-07-01
The value of robotic assistance for intracorporeal suturing is not well defined. We compared robotic suturing with laparoscopic suturing on the FLS model with a large cohort of surgeons. Attendees (n=117) at the SAGES 2006 Learning Center robotic station placed intracorporeal sutures on the FLS box-trainer model using conventional laparoscopic instruments and the da Vinci® robot. Participant performance was recorded using a validated objective scoring system, and a questionnaire regarding demographics, task workload, and suturing modality preference was completed. Construct validity for both tasks was assessed by comparing the performance scores of subjects with various levels of experience. A validated questionnaire was used for workload measurement. Of the participants, 84% had prior laparoscopic and 10% prior robotic suturing experience. Within the allotted time, 83% of participants completed the suturing task laparoscopically and 72% with the robot. Construct validity was demonstrated for both simulated tasks according to the participants' advanced laparoscopic experience, laparoscopic suturing experience, and self-reported laparoscopic suturing ability (p<0.001 for all) and according to prior robotic experience, robotic suturing experience, and self-reported robotic suturing ability (p<0.001 for all), respectively. While participants achieved higher suturing scores with standard laparoscopy compared with the robot (84±75 vs. 56±63, respectively; p<0.001), they found the laparoscopic task more physically demanding (NASA score 13±5 vs. 10±5, respectively; p<0.001) and favored the robot as their method of choice for intracorporeal suturing (62 vs. 38%, respectively; p<0.01). Construct validity was demonstrated for robotic suturing on the FLS model. Suturing scores were higher using standard laparoscopy likely as a result of the participants' greater experience with laparoscopic suturing versus robotic suturing. Robotic assistance decreases the physical demand of intracorporeal suturing compared with conventional laparoscopy and, in this study, was the preferred suturing method by most surgeons. Curricula for robotic suturing training need to be developed.
In-Space Structural Validation Plan for a Stretched-Lens Solar Array Flight Experiment
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Woods-Vedeler, Jessica A.; Jones, Thomas W.
2001-01-01
This paper summarizes in-space structural validation plans for a proposed Space Shuttle-based flight experiment. The test article is an innovative, lightweight solar array concept that uses pop-up, refractive stretched-lens concentrators to achieve a power/mass density of at least 175 W/kg, which is more than three times greater than current capabilities. The flight experiment will validate this new technology to retire the risk associated with its first use in space. The experiment includes structural diagnostic instrumentation to measure the deployment dynamics, static shape, and modes of vibration of the 8-meter-long solar array and several of its lenses. These data will be obtained by photogrammetry using the Shuttle payload-bay video cameras and miniature video cameras on the array. Six accelerometers are also included in the experiment to measure base excitations and small-amplitude tip motions.
Zimmermann, Karin; Cignacco, Eva; Eskola, Katri; Engberg, Sandra; Ramelet, Anne-Sylvie; Von der Weid, Nicolas; Bergstraesser, Eva
2015-12-01
To develop and test the Parental PELICAN Questionnaire, an instrument to retrospectively assess parental experiences and needs during their child's end-of-life care. To offer appropriate care for dying children, healthcare professionals need to understand the illness experience from the family perspective. A questionnaire specific to the end-of-life experiences and needs of parents losing a child is needed to evaluate the perceived quality of paediatric end-of-life care. This is an instrument development study applying mixed methods based on recommendations for questionnaire design and validation. The Parental PELICAN Questionnaire was developed in four phases between August 2012-March 2014: phase 1: item generation; phase 2: validity testing; phase 3: translation; phase 4: pilot testing. Psychometric properties were assessed after applying the Parental PELICAN Questionnaire in a sample of 224 bereaved parents in April 2014. Validity testing covered the evidence based on tests of content, internal structure and relations to other variables. The Parental PELICAN Questionnaire consists of approximately 90 items in four slightly different versions accounting for particularities of the four diagnostic groups. The questionnaire's items were structured according to six quality domains described in the literature. Evidence of initial validity and reliability could be demonstrated with the involvement of healthcare professionals and bereaved parents. The Parental PELICAN Questionnaire holds promise as a measure to assess parental experiences and needs and is applicable to a broad range of paediatric specialties and settings. Future validation is needed to evaluate its suitability in different cultures. © 2015 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricks, Allen; Blanchat, Thomas K.; Jernigan, Dann A.
2006-06-01
It is necessary to improve understanding and to develop validation data for the heat flux incident to an object located within a fire plume, in support of the validation of SIERRA/FUEGO/SYRINX fire simulations and SIERRA/CALORE. One key aspect of the validation data sets is the determination of the relative contribution of the radiative and convective heat fluxes. To meet this objective, a cylindrical calorimeter with sufficient instrumentation to measure total and radiative heat flux has been designed and fabricated. This calorimeter will be tested both in the controlled radiative environment of the Penlight facility and in a fire environment in the FLAME/Radiant Heat (FRH) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparisons between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. A significant question of interest when modeling heat flux incident to an object in or near a fire is the contribution of the radiation and convection modes of heat transfer. The series of experiments documented in this test plan is designed to provide data on the radiation partitioning, defined as the fraction of the total heat flux that is due to radiation.
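The quantity of interest above, radiation partitioning, reduces to a ratio once co-located total and radiative heat-flux measurements are available. A minimal sketch follows; the gauge readings are hypothetical, not data from the Penlight or FRH tests.

```python
def radiative_fraction(q_total, q_radiative):
    """Fraction of the incident heat flux carried by radiation.

    q_total and q_radiative are gauge readings in the same units
    (e.g. kW/m^2); the convective part is the remainder.
    """
    q_convective = q_total - q_radiative
    return q_radiative / q_total, q_convective

# Hypothetical readings from a calorimeter station in the fire plume.
frac, q_conv = radiative_fraction(q_total=95.0, q_radiative=70.0)
print(f"radiative fraction = {frac:.2f}, convective flux = {q_conv:.1f} kW/m^2")
```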
In-Trail Procedure Air Traffic Control Procedures Validation Simulation Study
NASA Technical Reports Server (NTRS)
Chartrand, Ryan C.; Hewitt, Katrin P.; Sweeney, Peter B.; Graff, Thomas J.; Jones, Kenneth M.
2012-01-01
In August 2007, Airservices Australia (Airservices) and the United States National Aeronautics and Space Administration (NASA) conducted a validation experiment of the air traffic control (ATC) procedures associated with the Automatic Dependent Surveillance-Broadcast (ADS-B) In-Trail Procedure (ITP). ITP is an Airborne Traffic Situation Awareness (ATSA) application designed for near-term use in procedural airspace, in which ADS-B data are used to facilitate climb and descent maneuvers. NASA and Airservices conducted the experiment in Airservices' simulator in Melbourne, Australia. Twelve current operational air traffic controllers participated in the experiment, which identified aspects of the ITP that could be improved (mainly in the communication and controller approval process). Results showed that controllers viewed the ITP as valid and acceptable. This paper describes the experiment design and results.
Zhou, Bailing; Zhao, Huiying; Yu, Jiafeng; Guo, Chengang; Dou, Xianghua; Song, Feng; Hu, Guodong; Cao, Zanxia; Qu, Yuanxu; Yang, Yuedong; Zhou, Yaoqi; Wang, Jihua
2018-01-04
Long non-coding RNAs (lncRNAs) play important functional roles in various biological processes. Early databases were utilized to deposit all lncRNA candidates produced by high-throughput experimental and/or computational techniques to facilitate classification, assessment and validation. As more lncRNAs are validated by low-throughput experiments, several databases were established for experimentally validated lncRNAs. However, these databases are small in scale (with only a few hundred lncRNAs) and specific in their focus (plants, diseases or interactions). Thus, it is highly desirable to have a comprehensive dataset for experimentally validated lncRNAs as a central repository for all of their structures, functions and phenotypes. Here, we established EVLncRNAs by curating lncRNAs validated by low-throughput experiments (up to 1 May 2016) and integrating specific databases (lncRNAdb, LncRNADisease, Lnc2Cancer and PLNIncRBase) with additional functional and disease-specific information not covered previously. The current version of EVLncRNAs contains 1543 lncRNAs from 77 species, which is 2.9 times larger than the current largest database for experimentally validated lncRNAs. Seventy-four percent of the lncRNA entries are partially or completely new compared with all existing databases of experimentally validated lncRNAs. The established database allows users to browse, search and download entries, as well as to submit experimentally validated lncRNAs. The database is available at http://biophy.dzu.edu.cn/EVLncRNAs. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Validation of an Instrument to Measure Community College Student Satisfaction
ERIC Educational Resources Information Center
Zhai, Lijuan
2012-01-01
This article reports the development and validation of a survey instrument to measure community college students' satisfaction with their educational experiences. The initial survey included 95 questions addressing community college student experiences. Data were collected from 558 community college students during spring of 2001. An exploratory…
Validation Experiences and Persistence among Community College Students
ERIC Educational Resources Information Center
Barnett, Elisabeth A.
2011-01-01
The purpose of this correlational research was to examine the extent to which community college students' experiences with validation by faculty (Rendon, 1994, 2002) predicted: (a) their sense of integration, and (b) their intent to persist. The research was designed as an elaboration of constructs within Tinto's (1993) Longitudinal Model of…
Reconceptualising the external validity of discrete choice experiments.
Lancsar, Emily; Swait, Joffre
2014-10-01
External validity is a crucial but under-researched topic when considering using discrete choice experiment (DCE) results to inform decision making in clinical, commercial or policy contexts. We present the theory and tests traditionally used to explore external validity that focus on a comparison of final outcomes and review how this traditional definition has been empirically tested in health economics and other sectors (such as transport, environment and marketing) in which DCE methods are applied. While an important component, we argue that the investigation of external validity should be much broader than a comparison of final outcomes. In doing so, we introduce a new and more comprehensive conceptualisation of external validity, closely linked to process validity, that moves us from the simple characterisation of a model as being or not being externally valid on the basis of predictive performance, to the concept that external validity should be an objective pursued from the initial conceptualisation and design of any DCE. We discuss how such a broader definition of external validity can be fruitfully used and suggest innovative ways in which it can be explored in practice.
Preparing for the Validation Visit--Guidelines for Optimizing the Experience.
ERIC Educational Resources Information Center
Osborn, Hazel A.
2003-01-01
Urges child care programs to seek accreditation from NAEYC's National Academy of Early Childhood Programs to increase program quality and provides information on the validation process. Includes information on the validation visit and the validator's role and background. Offers suggestions for preparing the director, staff, children, and families…
Ego-Dissolution and Psychedelics: Validation of the Ego-Dissolution Inventory (EDI).
Nour, Matthew M; Evans, Lisa; Nutt, David; Carhart-Harris, Robin L
2016-01-01
The experience of a compromised sense of "self", termed ego-dissolution, is a key feature of the psychedelic experience. This study aimed to validate the Ego-Dissolution Inventory (EDI), a new 8-item self-report scale designed to measure ego-dissolution. Additionally, we aimed to investigate the specificity of the relationship between psychedelics and ego-dissolution. Sixteen items relating to altered ego-consciousness were included in an internet questionnaire; eight relating to the experience of ego-dissolution (comprising the EDI), and eight relating to the antithetical experience of increased self-assuredness, termed ego-inflation. Items were rated using a visual analog scale. Participants answered the questionnaire for experiences with classical psychedelic drugs, cocaine and/or alcohol. They also answered the seven questions from the Mystical Experiences Questionnaire (MEQ) relating to the experience of unity with one's surroundings. Six hundred and ninety-one participants completed the questionnaire, providing data for 1828 drug experiences (1043 psychedelics, 377 cocaine, 408 alcohol). Exploratory factor analysis demonstrated that the eight EDI items loaded exclusively onto a single common factor, which was orthogonal to a second factor comprised of the items relating to ego-inflation (rho = -0.110), demonstrating discriminant validity. The EDI correlated strongly with the MEQ-derived measure of unitive experience (rho = 0.735), demonstrating convergent validity. EDI internal consistency was excellent (Cronbach's alpha 0.93). Three analyses confirmed the specificity of ego-dissolution for experiences occasioned by psychedelic drugs. Firstly, EDI score correlated with drug-dose for psychedelic drugs (rho = 0.371), but not for cocaine (rho = 0.115) or alcohol (rho = -0.055). Secondly, the linear regression line relating the subjective intensity of the experience to ego-dissolution was significantly steeper for psychedelics (unstandardized regression coefficient = 0.701) compared with cocaine (0.135) or alcohol (0.144). Ego-inflation, by contrast, was specifically associated with cocaine experiences. Finally, a binary Support Vector Machine classifier identified experiences occasioned by psychedelic drugs vs. cocaine or alcohol with over 85% accuracy using ratings of ego-dissolution and ego-inflation alone. Our results demonstrate the psychometric structure, internal consistency and construct validity of the EDI. Moreover, we demonstrate the close relationship between ego-dissolution and the psychedelic experience. The EDI will facilitate the study of the neuronal correlates of ego-dissolution, which is relevant for psychedelic-assisted psychotherapy and our understanding of psychosis.
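As a rough illustration of the final analysis step described above (a binary support vector machine separating psychedelic from cocaine/alcohol experiences using only ego-dissolution and ego-inflation ratings), the scikit-learn sketch below trains and cross-validates such a classifier on synthetic ratings; the distributions, sample sizes, and resulting accuracy are hypothetical, not the study's data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic ratings (0-100): psychedelic experiences -> high ego-dissolution,
# cocaine/alcohol experiences -> lower ego-dissolution, higher ego-inflation.
psychedelic = np.column_stack([rng.normal(70, 15, 300), rng.normal(20, 10, 300)])
other = np.column_stack([rng.normal(25, 15, 300), rng.normal(45, 15, 300)])
X = np.vstack([psychedelic, other]).clip(0, 100)
y = np.r_[np.ones(300), np.zeros(300)]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())  # accuracy on synthetic data
```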
van der Veer, Sabine N; Jager, Kitty J; Visserman, Ella; Beekman, Robert J; Boeschoten, Els W; de Keizer, Nicolette F; Heuveling, Lara; Stronks, Karien; Arah, Onyebuchi A
2012-08-01
Patient experience is an established indicator of quality of care. Validated tools that measure both experiences and priorities are lacking for chronic dialysis care, hampering identification of negative experiences that patients actually rate important. We developed two Consumer Quality (CQ) index questionnaires, one for in-centre haemodialysis (CHD) and the other for peritoneal dialysis and home haemodialysis (PHHD) care. The instruments were validated using exploratory factor analyses, reliability analysis of identified scales and assessing the association between reliable scales and global ratings. We investigated opportunities for improvement by combining suboptimal experience with patient priority. Sixteen dialysis centres participated in our study. The pilot CQ index for CHD care consisted of 71 questions. Based on data of 592 respondents, we identified 42 core experience items in 10 scales with Cronbach's α ranging from 0.38 to 0.88; five were reliable (α ≥ 0.70). The instrument identified information on centres' fire procedures as the aspect of care exhibiting the biggest opportunity for improvement. The pilot CQ index PHHD comprised 56 questions. The response of 248 patients yielded 31 core experience items in nine scales with Cronbach's α ranging between 0.53 and 0.85; six were reliable. Information on kidney transplantation during pre-dialysis showed most room for improvement. However, for both types of care, opportunities for improvement were mostly limited. The CQ index reliably and validly captures dialysis patient experience. Overall, most care aspects showed limited room for improvement, mainly because patients participating in our study rated their experience to be optimal. To evaluate items with high priority, but with which relatively few patients have experience, more qualitative instruments should be considered.
ERIC Educational Resources Information Center
Kohn, Paul M.; Milrose, Jill A.
1993-01-01
A decontaminated measure of exposures to hassles for adolescents, the Inventory of High-School Students' Recent Life Experiences (IHSSRLE), was developed and validated with 94 male and 82 female Canadian high school students. The IHSSRLE shows adequate internal consistency reliability and validity against the criterion of subjectively appraised…
Pathways to Engineering: The Validation Experiences of Transfer Students
ERIC Educational Resources Information Center
Zhang, Yi; Ozuna, Taryn
2015-01-01
Community college engineering transfer students are a critical student population of engineering degree recipients and technical workforce in the United States. Focusing on this group of students, we adopted Rendón's (1994) validation theory to explore the students' experiences in community colleges prior to transferring to a four-year…
Validity of Adult Retrospective Reports of Adverse Childhood Experiences: Review of the Evidence
ERIC Educational Resources Information Center
Hardt, Jochen; Rutter, Michael
2004-01-01
Background: Influential studies have cast doubt on the validity of retrospective reports by adults of their own adverse experiences in childhood. Accordingly, many researchers view retrospective reports with scepticism. Method: A computer-based search, supplemented by hand searches, was used to identify studies reported between 1980 and 2001 in…
NASA Astrophysics Data System (ADS)
Andromeda, A.; Lufri; Festiyed; Ellizar, E.; Iryani, I.; Guspatni, G.; Fitri, L.
2018-04-01
This Research & Development study aims to produce a valid and practical experiment-integrated, guided-inquiry-based module on the topic of colloidal chemistry. The 4D instructional design model was selected for this study. A limited trial of the product was conducted at SMAN 7 Padang. The instruments used were validity and practicality questionnaires. Validity and practicality data were analyzed using the Kappa moment. Analysis of the data shows that the Kappa moment for validity was 0.88, indicating a very high degree of validity. Kappa moments for practicality, as rated by students and teachers, were 0.89 and 0.95 respectively, indicating a high degree of practicality. Analysis of the modules filled in by students shows that 91.37% of students could correctly answer the critical thinking, exercise, prelab, postlab and worksheet questions asked in the module. These findings indicate that the experiment-integrated, guided-inquiry-based module on the topic of colloidal chemistry was valid and practical for chemistry learning in senior high school.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixon, David A.; Hughes, Henry Grady
In this paper, we expand on previous validation work by Dixon and Hughes. That is, we present a more complete suite of validation results with respect to the well-known Lockwood energy deposition experiment. Lockwood et al. measured energy deposition in materials including beryllium, carbon, aluminum, iron, copper, molybdenum, tantalum, and uranium, for both single- and multi-layer 1-D geometries. Source configurations included mono-energetic, mono-directional electron beams with energies of 0.05 MeV, 0.1 MeV, 0.3 MeV, 0.5 MeV, and 1 MeV, at both normal and off-normal angles of incidence. These experiments are particularly valuable for validating electron transport codes, because they are closely represented by simulating pencil beams incident on 1-D semi-infinite slabs with and without material interfaces. Herein, we include total energy deposition and energy deposition profiles for the single-layer experiments reported by Lockwood et al. (a more complete multi-layer validation will follow in another report).
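One common way to summarize agreement in this kind of code-versus-experiment comparison is the calculated-to-experimental (C/E) ratio at each depth point of the energy-deposition profile; whether the paper uses this exact metric is not stated in the abstract. The sketch below shows that bookkeeping on hypothetical profile values, not Lockwood's measurements or the simulation results.

```python
import numpy as np

# Hypothetical energy-deposition profiles (arbitrary units) vs. fractional depth.
depth = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
measured = np.array([1.95, 2.40, 2.10, 1.30, 0.45])
simulated = np.array([1.90, 2.45, 2.05, 1.35, 0.42])

c_over_e = simulated / measured
print("C/E per depth point:", np.round(c_over_e, 3))
print("mean |C/E - 1| (%):", round(100 * np.mean(np.abs(c_over_e - 1)), 1))
```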
Kim, Eun-Mi; Kim, Sun-Aee; Lee, Ju-Ry; Burlison, Jonathan D; Oh, Eui Geum
2018-02-13
"Second victims" are defined as healthcare professionals whose wellness is influenced by adverse clinical events. The Second Victim Experience and Support Tool (SVEST) was used to measure the second-victim experience and quality of support resources. Although the reliability and validity of the original SVEST have been established, those of the Korean version have not. The aim of the study was to evaluate the psychometric properties of the Korean version of the SVEST. The study included 305 clinical nurses as participants. The SVEST was translated into Korean via back translation. Content validity was assessed by seven experts, and test-retest reliability was evaluated by 30 clinicians. Internal consistency and construct validity were assessed via confirmatory factor analysis. The analyses were performed using SPSS 23.0 and STATA 13.0 software. The content validity index value demonstrated validity; item- and scale-level content validity index values were both 0.95. Test-retest reliability and internal consistency reliability were satisfactory: the intraclass correlation coefficient was 0.71, and Cronbach α values ranged from 0.59 to 0.87. The CFA showed a good fit for an eight-factor structure (χ² = 578.21, df = 303, comparative fit index = 0.92, Tucker-Lewis index = 0.90, root mean square error of approximation = 0.05). The K-SVEST demonstrated good psychometric properties and adequate validity and reliability. The results show that the Korean version of the SVEST captures the extent of second-victim experiences and support resources among Korean healthcare workers and could aid in the development of support programs and the evaluation of their effectiveness.
The Environmental Reward Observation Scale (EROS): development, validity, and reliability.
Armento, Maria E A; Hopko, Derek R
2007-06-01
Researchers acknowledge a strong association between the frequency and duration of environmental reward and affective mood states, particularly in relation to the etiology, assessment, and treatment of depression. Given behavioral theories that outline environmental reward as a strong mediator of affect and the unavailability of an efficient, reliable, and valid self-report measure of environmental reward, we developed the Environmental Reward Observation Scale (EROS) and examined its psychometric properties. In Experiment 1, exploratory factor analysis supported a unidimensional 10-item measure with strong internal consistency and test-retest reliability. When administered to a replication sample, confirmatory factor analysis suggested an excellent fit to the 1-factor model and convergent/discriminant validity data supported the construct validity of the EROS. In Experiment 2, further support for the convergent validity of the EROS was obtained via moderate correlations with the Pleasant Events Schedule (PES; MacPhillamy & Lewinsohn, 1976). In Experiment 3, hierarchical regression supported the ecological validity of the EROS toward predicting daily diary reports of time spent in highly rewarding behaviors and activities. Above and beyond variance accounted for by depressive symptoms (BDI), the EROS was associated with significant incremental variance in accounting for time spent in both low and high reward behaviors. The EROS may represent a brief, reliable and valid measure of environmental reward that may improve the psychological assessment of negative mood states such as clinical depression.
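The incremental-validity analysis described above (EROS predicting time in rewarding activities above and beyond BDI scores) is a two-step hierarchical regression. A minimal statsmodels sketch on synthetic data follows; all variable values and effect sizes are hypothetical, not the study's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
bdi = rng.normal(10, 6, n)                        # depressive symptoms
eros = 30 - 0.5 * bdi + rng.normal(0, 3, n)       # environmental reward
reward_time = 2 + 0.15 * eros - 0.05 * bdi + rng.normal(0, 1, n)

# Step 1: BDI only.  Step 2: BDI + EROS.
m1 = sm.OLS(reward_time, sm.add_constant(np.column_stack([bdi]))).fit()
m2 = sm.OLS(reward_time, sm.add_constant(np.column_stack([bdi, eros]))).fit()

print(f"delta R^2 for adding EROS = {m2.rsquared - m1.rsquared:.3f}")
```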
Garcia-Perez, Isabel; Angulo, Santiago; Utzinger, Jürg; Holmes, Elaine; Legido-Quigley, Cristina; Barbas, Coral
2010-07-01
Metabonomic and metabolomic studies are increasingly utilized for biomarker identification in different fields, including biology of infection. The confluence of improved analytical platforms and the availability of powerful multivariate analysis software have rendered the multiparameter profiles generated by these omics platforms a user-friendly alternative to the established analysis methods where the quality and practice of a procedure is well defined. However, unlike traditional assays, validation methods for these new multivariate profiling tools have yet to be established. We propose a validation for models obtained by CE fingerprinting of urine from mice infected with the blood fluke Schistosoma mansoni. We have analysed urine samples from two sets of mice infected in an inter-laboratory experiment where different infection methods and animal husbandry procedures were employed in order to establish the core biological response to a S. mansoni infection. CE data were analysed using principal component analysis. Validation of the scores consisted of permutation scrambling (100 repetitions) and a manual validation method, using a third of the samples (not included in the model) as a test or prediction set. The validation yielded 100% specificity and 100% sensitivity, demonstrating the robustness of these models with respect to deciphering metabolic perturbations in the mouse due to a S. mansoni infection. A total of 20 metabolites across the two experiments were identified that significantly discriminated between S. mansoni-infected and noninfected control samples. Only one of these metabolites, allantoin, was identified as manifesting different behaviour in the two experiments. This study shows the reproducibility of CE-based metabolic profiling methods for disease characterization and screening and highlights the importance of much needed validation strategies in the emerging field of metabolomics.
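The validation scheme described above combines permutation scrambling of class labels (100 repetitions) with a held-out prediction set. The sketch below mimics the permutation part on synthetic fingerprints; the nearest-centroid classifier used here to score group separation in principal-component space is an assumption, since the abstract does not name the statistic that was permuted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(2)

# Synthetic "metabolic fingerprints": 30 infected vs 30 control, 50 variables.
X = np.vstack([rng.normal(0.0, 1.0, (30, 50)), rng.normal(0.8, 1.0, (30, 50))])
y = np.r_[np.ones(30), np.zeros(30)]

scores = PCA(n_components=2).fit_transform(X)
observed = NearestCentroid().fit(scores, y).score(scores, y)

# Permutation scrambling: recompute the separation statistic under shuffled labels.
perm = [NearestCentroid().fit(scores, rng.permutation(y)).score(scores, y)
        for _ in range(100)]
p_value = (np.sum(np.array(perm) >= observed) + 1) / (len(perm) + 1)
print(round(observed, 2), round(p_value, 3))
```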
Premberg, Åsa; Taft, Charles; Hellström, Anna-Lena; Berg, Marie
2012-05-17
A father's experience of the birth of his first child is important not only for his birth-giving partner but also for the father himself, his relationship with the mother and the newborn. No validated questionnaire assessing first-time fathers' experiences during childbirth is currently available. Hence, the aim of this study was to develop and validate an instrument to assess first-time fathers' experiences of childbirth. Domains and items were initially derived from interviews with first-time fathers, and supplemented by a literature search and a focus group interview with midwives. The comprehensibility, comprehension and relevance of the items were evaluated by four paternity research experts and a preliminary questionnaire was pilot tested in eight first-time fathers. A revised questionnaire was completed by 200 first-time fathers (response rate = 81%). Exploratory factor analysis using principal component analysis with varimax rotation was performed and multitrait scaling analysis was used to test scaling assumptions. External validity was assessed by means of known-groups analysis. Factor analysis yielded four factors comprising 22 items and accounting for 48% of the variance. The domains found were Worry, Information, Emotional support and Acceptance. Multitrait analysis confirmed the convergent and discriminant validity of the domains; however, Cronbach's alpha did not meet conventional reliability standards in two domains. The questionnaire was sensitive to differences between groups of fathers hypothesized to differ on important sociodemographic or clinical variables. The questionnaire adequately measures important dimensions of first-time fathers' childbirth experience and may be used to assess aspects of fathers' experiences during childbirth. To obtain the FTFQ and permission for its use, please contact the corresponding author.
Moving to Capture Children's Attention: Developing a Methodology for Measuring Visuomotor Attention.
Hill, Liam J B; Coats, Rachel O; Mushtaq, Faisal; Williams, Justin H G; Aucott, Lorna S; Mon-Williams, Mark
2016-01-01
Attention underpins many activities integral to a child's development. However, methodological limitations currently make large-scale assessment of children's attentional skill impractical, costly and lacking in ecological validity. Consequently, we developed a measure of 'Visual Motor Attention' (VMA), a construct defined as the ability to sustain and adapt visuomotor behaviour in response to task-relevant visual information. In a series of experiments, we evaluated the capability of our method to measure attentional processes and their contributions in guiding visuomotor behaviour. Experiment 1 established the method's core features (ability to track stimuli moving on a tablet-computer screen with a hand-held stylus) and demonstrated its sensitivity to principled manipulations in adults' attentional load. Experiment 2 standardised a format suitable for use with children and showed construct validity by capturing developmental changes in executive attention processes. Experiment 3 tested the hypothesis that children with and without coordination difficulties would show qualitatively different response patterns, finding an interaction between the cognitive and motor factors underpinning responses. Experiment 4 identified associations between VMA performance and existing standardised attention assessments and thereby confirmed convergent validity. These results establish a novel approach to measuring childhood attention that can produce meaningful functional assessments that capture how attention operates in an ecologically valid context (i.e. attention's specific contribution to visuomanual action).
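The core VMA measure above is the ability to keep a stylus on a moving target. One plausible summary statistic, sketched below on synthetic trajectories, is the root-mean-square distance between the stylus and target paths; both the metric and the data are assumptions for illustration, not the instrument's published scoring rule.

```python
import numpy as np

def rms_tracking_error(stylus_xy, target_xy):
    """Root-mean-square distance (same units as input) between two
    (n_samples, 2) trajectories sampled at the same times."""
    d = np.linalg.norm(np.asarray(stylus_xy) - np.asarray(target_xy), axis=1)
    return np.sqrt(np.mean(d ** 2))

# Synthetic 10-second trial sampled at 60 Hz: the target moves on a circle,
# the stylus follows with a small lag and some jitter.
t = np.linspace(0, 10, 600)
target = np.column_stack([np.cos(t), np.sin(t)]) * 100            # mm
stylus = np.column_stack([np.cos(t - 0.1), np.sin(t - 0.1)]) * 100
stylus += np.random.default_rng(3).normal(0, 2, stylus.shape)      # jitter in mm

print(round(rms_tracking_error(stylus, target), 1))                # error in mm
```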
Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A
2011-01-01
The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provides unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of the IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Schubert, Siegfried; Molod, Andrea; Houser, Paul R.
1999-01-01
Land-surface processes in a data assimilation system influence the lower troposphere and must be properly represented. With the recent incorporation of the Mosaic Land-surface Model (LSM) into the GEOS Data Assimilation System (DAS), the detailed land-surface processes require strict validation. While global data sources can identify large-scale systematic biases at the monthly timescale, the diurnal cycle is difficult to validate. Moreover, global data sets rarely include variables such as evaporation, sensible heat and soil water. Intensive field experiments, on the other hand, can provide high temporal resolution energy budget and vertical profile data for sufficiently long periods, without global coverage. Here, we evaluate the GEOS DAS against several intensive field experiments. The field experiments are First ISLSCP Field Experiment (FIFE, Kansas, summer 1987), Cabauw (as used in PILPS, Netherlands, summer 1987), Atmospheric Radiation Measurement (ARM, Southern Great Plains, winter and summer 1998) and the Surface Heat Budget of the Arctic Ocean (SHEBA, Arctic ice sheet, winter and summer 1998). The sites provide complete surface energy budget data for periods of at least one year, and some periods of vertical profiles. This comparison provides a detailed validation of the Mosaic LSM within the GEOS DAS for a variety of climatologic and geographic conditions.
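A comparison of this kind typically reduces both the assimilation output and the field-experiment record to mean diurnal cycles and summary error statistics; a sketch with hypothetical hourly latent heat flux data (the file and column names are assumptions):

import numpy as np
import pandas as pd

df = pd.read_csv("fife_vs_geos_hourly.csv", parse_dates=["time"])  # hypothetical paired model/observation series

# Mean diurnal cycle: average each hour of day over the full record.
diurnal = df.groupby(df["time"].dt.hour)[["model_le", "obs_le"]].mean()
bias = (diurnal["model_le"] - diurnal["obs_le"]).mean()
rmse = np.sqrt(((diurnal["model_le"] - diurnal["obs_le"]) ** 2).mean())
print(diurnal)
print(f"diurnal-cycle bias: {bias:.1f} W/m2, RMSE: {rmse:.1f} W/m2")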
ERIC Educational Resources Information Center
Hamsatu, Pur; Yusufu, Gambo; Mohammed, Habib A.
2016-01-01
This study was conducted to explore teachers' perceptions, and students' experiences in e-Examination in University of Maiduguri. Questionnaires were distributed to 30 teachers and 50 students, and the 80 collated instruments were valid for data analysis, representing a response rate of 100%. The validity of the questionnaire was approved by some…
A Validation Study of the Adolescent Dissociative Experiences Scale
ERIC Educational Resources Information Center
Keck Seeley, Susan. M.; Perosa, Sandra, L.; Perosa, Linda, M.
2004-01-01
Objective: The purpose of this study was to further the validation process of the Adolescent Dissociative Experiences Scale (A-DES). In this study, a 6-item Likert response format with descriptors was used when responding to the A-DES rather than the 11-item response format used in the original A-DES. Method: The internal reliability and construct…
ERIC Educational Resources Information Center
Räisänen, Milla; Tuononen, Tarja; Postareff, Liisa; Hailikari, Telle; Virtanen, Viivi
2016-01-01
This case study explores the assessment of students' learning outcomes in a second-year lecture course in biosciences. The aim is to deeply explore the teacher's and the students' experiences of the validity and reliability of assessment and to compare those perspectives. The data were collected through stimulated recall interviews. The results…
An Examination and Validation of an Adapted Youth Experience Scale for University Sport
ERIC Educational Resources Information Center
Rathwell, Scott; Young, Bradley W.
2016-01-01
Limited tools assess positive development through university sport. Such a tool was validated in this investigation using two independent samples of Canadian university athletes. In Study 1, 605 athletes completed 99 survey items drawn from the Youth Experience Scale (YES 2.0), and separate a priori measurement models were evaluated (i.e., 99…
(In)validation in the Minority: The Experiences of Latino Students Enrolled in an HBCU
ERIC Educational Resources Information Center
Allen, Taryn Ozuna
2016-01-01
This qualitative, phenomenological study examined the academic and interpersonal validation experiences of four female and four male Latino students who were enrolled in their second- to fifth-year at an HBCU in Texas. Using interviews, campus observations, a questionnaire, and analytic memos, this study sought to understand the role of in- and…
Casper, T. A.; Meyer, W. H.; Jackson, G. L.; ...
2010-12-08
We are exploring characteristics of ITER startup scenarios in similarity experiments conducted on the DIII-D Tokamak. In these experiments, we have validated scenarios for the ITER current ramp up to full current and developed methods to control the plasma parameters to achieve stability. Predictive simulations of ITER startup using 2D free-boundary equilibrium and 1D transport codes rely on accurate estimates of the electron and ion temperature profiles that determine the electrical conductivity and pressure profiles during the current rise. Here we present results of validation studies that apply the transport model used by the ITER team to DIII-D discharge evolution and comparisons with data from our similarity experiments.
Modeling background radiation in Southern Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haber, Daniel A.; Burnley, Pamela C.; Adcock, Christopher T.
Aerial gamma ray surveys are an important tool for national security, scientific, and industrial interests in determining locations of both anthropogenic and natural sources of radioactivity. There is a relationship between radioactivity and geology and in the past this relationship has been used to predict geology from an aerial survey. The purpose of this project is to develop a method to predict the radiologic exposure rate of the geologic materials by creating a high resolution background model. The intention is for this method to be used in an emergency response scenario where the background radiation environment is unknown. Two study areas in Southern Nevada have been modeled using geologic data, images from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), geochemical data, and pre-existing low resolution aerial surveys from the National Uranium Resource Evaluation (NURE) Survey. Using these data, geospatial areas that are homogeneous in terms of K, U, and Th, referred to as background radiation units, are defined and the gamma ray exposure rate is predicted. The prediction is compared to data collected via detailed aerial survey by the Department of Energy's Remote Sensing Lab - Nellis, allowing for the refinement of the technique. By using geologic units to define radiation background units of exposed bedrock and ASTER visualizations to subdivide and define radiation background units within alluvium, successful models have been produced for Government Wash, north of Lake Mead, and for the western shore of Lake Mohave, east of Searchlight, NV.
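The final step, turning a unit's representative K, U, and Th concentrations into a predicted exposure rate, is a linear combination with radioelement conversion coefficients; the coefficients and unit values below are illustrative placeholders rather than the figures used in the study:

# Predicted exposure rate for each background radiation unit (illustrative values only).
UNITS = {
    # unit name: (K in %, eU in ppm, eTh in ppm), hypothetical representative values
    "alluvium_unit_A": (1.8, 2.5, 9.0),
    "bedrock_unit_B": (0.9, 1.1, 4.2),
}
C_K, C_U, C_TH = 1.5, 0.65, 0.29   # assumed conversion factors (uR/h per %K, per ppm eU, per ppm eTh)

for name, (k, u, th) in UNITS.items():
    exposure = C_K * k + C_U * u + C_TH * th
    print(f"{name}: predicted exposure rate {exposure:.1f} uR/h")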
Beard, L.S.; Anderson, R.E.; Block, D.L.; Bohannon, R.G.; Brady, R.J.; Castor, S.B.; Duebendorfer, E.M.; Faulds, J.E.; Felger, T.J.; Howard, K.A.; Kuntz, M.A.; Williams, V.S.
2007-01-01
Introduction The geologic map of the Lake Mead 30' x 60' quadrangle was completed for the U.S. Geological Survey's Las Vegas Urban Corridor Project and the National Parks Project, National Cooperative Geologic Mapping Program. Lake Mead, which occupies the northern part of the Lake Mead National Recreation Area (LAME), mostly lies within the Lake Mead quadrangle and provides recreation for about nine million visitors annually. The lake was formed by damming of the Colorado River by Hoover Dam in 1939. The recreation area and surrounding Bureau of Land Management lands face increasing public pressure from rapid urban growth in the Las Vegas area to the west. This report provides baseline earth science information that can be used in future studies of hazards, groundwater resources, mineral and aggregate resources, and of soils and vegetation distribution. The preliminary report presents a geologic map and GIS database of the Lake Mead quadrangle and a description and correlation of map units. The final report will include cross-sections and interpretive text. The geology was compiled from many sources, both published and unpublished, including significant new mapping that was conducted specifically for this compilation. Geochronologic data from published sources, as well as preliminary unpublished 40Ar/39Ar ages that were obtained for this report, have been used to refine the ages of formal Tertiary stratigraphic units and define new informal Tertiary sedimentary and volcanic units.
Billingsley, George H.; Block, Debra L.; Dyer, Helen C.
2006-01-01
This map is a product of a cooperative project of the U.S. Geological Survey, the U.S. National Park Service, and the Bureau of Land Management to provide geologic map coverage and regional geologic information for visitor services and resource management of Grand Canyon National Park, Lake Mead National Recreation Area, Grand Canyon-Parashant-National Monument, and adjacent lands in northwestern Arizona. This map is a synthesis of previous and new geologic mapping that encompasses the Peach Springs 30' x 60' quadrangle, Arizona. The geologic data will support future geologic, biologic, hydrologic, and other science resource studies of this area conducted by the National Park Service, the Hualapai Indian Tribe, the Bureau of Land Management, the State of Arizona, and private organizations. The Colorado River and its tributaries have dissected the southwestern Colorado Plateau into what is now the southwestern part of Grand Canyon. The erosion of Grand Canyon has exposed about 426 m (1,400 ft) of Proterozoic crystalline metamorphic rocks and granite, about 1,450 m (4,760 ft) of Paleozoic strata, and about 300 m (1,000 ft) of Tertiary sedimentary rocks. Outcrops of Proterozoic crystalline rocks are exposed at the bottom of Grand Canyon at Granite Park from Colorado River Mile 207 to 209, at Mile 212, and in the Lower Granite Gorge from Colorado River Mile 216 to 262, and along the Grand Wash Cliffs in the southwest corner of the map area.
Asian fish tapeworm Bothriocephalus acheilognathi in the desert southwestern United States.
Archdeacon, Thomas P; Iles, Alison; Kline, S Jason; Bonar, Scott A
2010-12-01
The Asian fish tapeworm Bothriocephalus acheilognathi (Cestoda: Bothriocephalidea) is an introduced fish parasite in the southwestern United States and is often considered a serious threat to native desert fishes. Determining the geographic distribution of nonnative fish parasites is important for recovery efforts of native fishes. We examined 1,140 individuals belonging to nine fish species from southwestern U.S. streams and springs between January 2005 and April 2007. The Asian fish tapeworm was present in the Gila River, Salt River, Verde River, San Pedro River, Aravaipa Creek, and Fossil Creek, Arizona, and in Lake Tuendae at Zzyzx Springs and Afton Canyon of the Mojave River, California. Overall prevalence of the Asian fish tapeworm in Arizona fish populations was 19% (range = 0-100%) and varied by location, time, and fish species. In California, the prevalence, abundance, and intensity of the Asian fish tapeworm in Mohave tui chub Gila bicolor mohavensis were higher during warmer months than during cooler months. Three new definitive host species--Yaqui chub G. purpurea, headwater chub G. nigra, and longfin dace Agosia chrysogaster--were identified. Widespread occurrence of the Asian fish tapeworm in southwestern U.S. waters suggests that the lack of detection in other systems where nonnative fishes occur is due to a lack of effort as opposed to true absence of the parasite. To limit further spread of diseases to small, isolated systems, we recommend treatment for both endo- and exoparasites when management actions include translocation of fishes.
DSMC Simulations of Hypersonic Flows and Comparison With Experiments
NASA Technical Reports Server (NTRS)
Moss, James N.; Bird, Graeme A.; Markelov, Gennady N.
2004-01-01
This paper presents computational results obtained with the direct simulation Monte Carlo (DSMC) method for several biconic test cases in which shock interactions and flow separation-reattachment are key features of the flow. Recent ground-based experiments have been performed for several biconic configurations, and surface heating rate and pressure measurements have been proposed for code validation studies. The present focus is to expand on the current validating activities for a relatively new DSMC code called DS2V that Bird (second author) has developed. Comparisons with experiments and other computations help clarify the agreement currently being achieved between computations and experiments and to identify the range of measurement variability of the proposed validation data when benchmarked with respect to the current computations. For the test cases with significant vibrational nonequilibrium, the effect of the vibrational energy surface accommodation on heating and other quantities is demonstrated.
Real-time remote scientific model validation
NASA Technical Reports Server (NTRS)
Frainier, Richard; Groleau, Nicolas
1994-01-01
This paper describes flight results from the use of a CLIPS-based validation facility to compare analyzed data from a space life sciences (SLS) experiment to an investigator's preflight model. The comparison, performed in real-time, either confirms or refutes the model and its predictions. This result then becomes the basis for continuing or modifying the investigator's experiment protocol. Typically, neither the astronaut crew in Spacelab nor the ground-based investigator team is able to react to their experiment data in real time. This facility, part of a larger science advisor system called Principal Investigator in a Box, was flown on the space shuttle in October 1993. The software system aided the conduct of a human vestibular physiology experiment and was able to outperform humans in the tasks of data integrity assurance, data analysis, and scientific model validation. Of twelve preflight hypotheses associated with the investigator's model, seven were confirmed and five were rejected or compromised.
Anderson, P. S. L.; Rayfield, E. J.
2012-01-01
Computational models such as finite-element analysis offer biologists a means of exploring the structural mechanics of biological systems that cannot be directly observed. Validated against experimental data, a model can be manipulated to perform virtual experiments, testing variables that are hard to control in physical experiments. The relationship between tooth form and the ability to break down prey is key to understanding the evolution of dentition. Recent experimental work has quantified how tooth shape promotes fracture in biological materials. We present a validated finite-element model derived from physical compression experiments. The model shows close agreement with strain patterns observed in photoelastic test materials and reaction forces measured during these experiments. We use the model to measure strain energy within the test material when different tooth shapes are used. Results show that notched blades deform materials for less strain energy cost than straight blades, giving insights into the energetic relationship between tooth form and prey materials. We identify a hypothetical ‘optimal’ blade angle that minimizes strain energy costs and test alternative prey materials via virtual experiments. Using experimental data and computational models offers an integrative approach to understand the mechanics of tooth morphology. PMID:22399789
NASA Astrophysics Data System (ADS)
Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Sutherland, D. A.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.
2014-10-01
The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with 3D extended MHD simulations using the NIMROD, HiFi, and PSI-TET codes. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), HBT-EP (Columbia), HIT-SI (U Wash-UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). The PSI-Center is exploring application of validation metrics between experimental data and simulation results. Biorthogonal decomposition (BOD) is used to compare experiments with simulations. BOD separates data sets into spatial and temporal structures, giving greater weight to dominant structures. Several BOD metrics are being formulated with the goal of quantitative validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status, will be presented.
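Biorthogonal decomposition of a space-time data matrix is, in practice, a singular value decomposition whose left and right singular vectors give the spatial and temporal structures; a minimal sketch of comparing dominant structures from experiment and simulation, with hypothetical probe-array data:

import numpy as np

exp_data = np.load("probe_array_experiment.npy")   # hypothetical (n_probes, n_times) matrix
sim_data = np.load("probe_array_simulation.npy")   # same probes and times from the simulation

def bod(data):
    # Spatial modes (columns of u), mode weights (s), temporal modes (rows of vt).
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    return u, s, vt

u_e, s_e, _ = bod(exp_data)
u_s, s_s, _ = bod(sim_data)

# One possible metric: overlap of the dominant spatial structures and their energy fractions.
print(f"dominant-mode spatial overlap: {abs(u_e[:, 0] @ u_s[:, 0]):.3f}")
print(f"energy fraction in mode 1 (exp, sim): "
      f"{s_e[0]**2 / np.sum(s_e**2):.2f}, {s_s[0]**2 / np.sum(s_s**2):.2f}")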
HBOI Underwater Imaging and Communication Research - Phase 1
2012-04-19
validation of one-way pulse stretching radiative transfer code The objective was to develop and validate time-resolved radiative transfer models that...and validation of one-way pulse stretching radiative transfer code The models were subjected to a series of validation experiments over 12.5 meter...about the theoretical basis of the model together with validation results can be found in Dalgleish et al. (2010). Forward scattering Mueller
Study design elements for rigorous quasi-experimental comparative effectiveness research.
Maciejewski, Matthew L; Curtis, Lesley H; Dowd, Bryan
2013-03-01
Quasi-experiments are likely to be the workhorse study design used to generate evidence about the comparative effectiveness of alternative treatments, because of their feasibility, timeliness, affordability and external validity compared with randomized trials. In this review, we outline potential sources of discordance in results between quasi-experiments and experiments, review study design choices that can improve the internal validity of quasi-experiments, and outline innovative data linkage strategies that may be particularly useful in quasi-experimental comparative effectiveness research. There is an urgent need to resolve the debate about the evidentiary value of quasi-experiments since equal consideration of rigorous quasi-experiments will broaden the base of evidence that can be brought to bear in clinical decision-making and governmental policy-making.
Ego-Dissolution and Psychedelics: Validation of the Ego-Dissolution Inventory (EDI)
Nour, Matthew M.; Evans, Lisa; Nutt, David; Carhart-Harris, Robin L.
2016-01-01
Aims: The experience of a compromised sense of “self”, termed ego-dissolution, is a key feature of the psychedelic experience. This study aimed to validate the Ego-Dissolution Inventory (EDI), a new 8-item self-report scale designed to measure ego-dissolution. Additionally, we aimed to investigate the specificity of the relationship between psychedelics and ego-dissolution. Method: Sixteen items relating to altered ego-consciousness were included in an internet questionnaire; eight relating to the experience of ego-dissolution (comprising the EDI), and eight relating to the antithetical experience of increased self-assuredness, termed ego-inflation. Items were rated using a visual analog scale. Participants answered the questionnaire for experiences with classical psychedelic drugs, cocaine and/or alcohol. They also answered the seven questions from the Mystical Experiences Questionnaire (MEQ) relating to the experience of unity with one’s surroundings. Results: Six hundred and ninety-one participants completed the questionnaire, providing data for 1828 drug experiences (1043 psychedelics, 377 cocaine, 408 alcohol). Exploratory factor analysis demonstrated that the eight EDI items loaded exclusively onto a single common factor, which was orthogonal to a second factor comprised of the items relating to ego-inflation (rho = −0.110), demonstrating discriminant validity. The EDI correlated strongly with the MEQ-derived measure of unitive experience (rho = 0.735), demonstrating convergent validity. EDI internal consistency was excellent (Cronbach’s alpha 0.93). Three analyses confirmed the specificity of ego-dissolution for experiences occasioned by psychedelic drugs. Firstly, EDI score correlated with drug-dose for psychedelic drugs (rho = 0.371), but not for cocaine (rho = 0.115) or alcohol (rho = −0.055). Secondly, the linear regression line relating the subjective intensity of the experience to ego-dissolution was significantly steeper for psychedelics (unstandardized regression coefficient = 0.701) compared with cocaine (0.135) or alcohol (0.144). Ego-inflation, by contrast, was specifically associated with cocaine experiences. Finally, a binary Support Vector Machine classifier identified experiences occasioned by psychedelic drugs vs. cocaine or alcohol with over 85% accuracy using ratings of ego-dissolution and ego-inflation alone. Conclusion: Our results demonstrate the psychometric structure, internal consistency and construct validity of the EDI. Moreover, we demonstrate the close relationship between ego-dissolution and the psychedelic experience. The EDI will facilitate the study of the neuronal correlates of ego-dissolution, which is relevant for psychedelic-assisted psychotherapy and our understanding of psychosis. PMID:27378878
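The classifier reported in the Results above (separating psychedelic from cocaine/alcohol experiences using only ego-dissolution and ego-inflation ratings) corresponds to a standard two-feature support vector machine; a sketch with hypothetical data files:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.load("edi_features.npy")   # hypothetical (n_experiences, 2): EDI and ego-inflation scores
y = np.load("drug_labels.npy")    # hypothetical labels: 1 = psychedelic, 0 = cocaine or alcohol

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.1%}")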
Garfjeld Roberts, Patrick; Guyver, Paul; Baldwin, Mathew; Akhtar, Kash; Alvand, Abtin; Price, Andrew J; Rees, Jonathan L
2017-02-01
To assess the construct and face validity of ArthroS, a passive haptic VR simulator. A secondary aim was to evaluate the novel performance metrics produced by this simulator. Two groups of 30 participants, each divided into novice, intermediate or expert based on arthroscopic experience, completed three separate tasks on either the knee or shoulder module of the simulator. Performance was recorded using 12 automatically generated performance metrics and video footage of the arthroscopic procedures. The videos were blindly assessed using a validated global rating scale (GRS). Participants completed a survey about the simulator's realism and training utility. This new simulator demonstrated construct validity of its tasks when evaluated against a GRS (p ≤ 0.003 in all cases). Regarding its automatically generated performance metrics, established outputs such as time taken (p ≤ 0.001) and instrument path length (p ≤ 0.007) also demonstrated good construct validity. However, two-thirds of the proposed 'novel metrics' the simulator reports could not distinguish participants based on arthroscopic experience. Face validity assessment rated the simulator as a realistic and useful tool for trainees, but the passive haptic feedback (a key feature of this simulator) was rated as less realistic. The ArthroS simulator has good task construct validity based on established objective outputs, but some of the novel performance metrics could not distinguish between levels of surgical experience. The passive haptic feedback of the simulator also needs improvement. If simulators could offer automated and validated performance feedback, this would facilitate improvements in the delivery of training by allowing trainees to practise and self-assess.
NASA Technical Reports Server (NTRS)
Starr, David
1999-01-01
The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include ASTER, CERES, MISR, MODIS and MOPITT. In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities, including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra mission will be described with emphasis on derived geophysical parameters of most relevance to the atmospheric radiation community. Detailed information about the EOS Terra Validation Program can be found on the EOS Validation Program homepage (http://ospso.gsfc.nasa.gov/validation/valpage.html).
A user-targeted synthesis of the VALUE perfect predictor experiment
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Widmann, Martin; Gutierrez, Jose; Kotlarski, Sven; Hertig, Elke; Wibig, Joanna; Rössler, Ole; Huth, Radan
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. We consider different aspects: (1) marginal aspects such as mean, variance and extremes; (2) temporal aspects such as spell length characteristics; (3) spatial aspects such as the de-correlation length of precipitation extremes; and (4) multivariate aspects such as the interplay of temperature and precipitation or scale interactions. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur. Experiment 1 (perfect predictors): what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Experiment 2 (Global climate model predictors): how is the overall representation of regional climate, including errors inherited from global climate models? Experiment 3 (pseudo reality): do methods fail in representing regional climate change? Here, we present a user-targeted synthesis of the results of the first VALUE experiment. In this experiment, downscaling methods are driven with ERA-Interim reanalysis data to eliminate global climate model errors, over the period 1979-2008. As reference data we use, depending on the question addressed, (1) observations from 86 meteorological stations distributed across Europe; (2) gridded observations at the corresponding 86 locations; or (3) gridded spatially extended observations for selected European regions. With more than 40 contributing methods, this study is the most comprehensive downscaling inter-comparison project so far. The results clearly indicate that for several aspects, the downscaling skill varies considerably between different methods. For specific purposes, some methods can therefore clearly be excluded.
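Marginal and temporal validation indices of the kind listed above are straightforward to compute from paired observed and downscaled series; the sketch below uses hypothetical daily precipitation and illustrative index definitions rather than the official VALUE index list:

import numpy as np
import pandas as pd

df = pd.read_csv("station_daily_precip.csv")   # hypothetical columns: obs, downscaled (mm/day)

def dry_spell_lengths(precip, wet_threshold=1.0):
    # Lengths of consecutive runs of dry days (precipitation below the wet-day threshold).
    lengths, run = [], 0
    for p in precip:
        if p < wet_threshold:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return np.array(lengths)

for name in ("obs", "downscaled"):
    x = df[name].to_numpy()
    print(name,
          "mean:", round(float(x.mean()), 2),
          "98th percentile:", round(float(np.percentile(x, 98)), 1),
          "mean dry-spell length:", round(float(dry_spell_lengths(x).mean()), 1))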
The Development and Validation of the Game User Experience Satisfaction Scale (GUESS).
Phan, Mikki H; Keebler, Joseph R; Chaparro, Barbara S
2016-12-01
The aim of this study was to develop and psychometrically validate a new instrument that comprehensively measures video game satisfaction based on key factors. Playtesting is often conducted in the video game industry to help game developers build better games by providing insight into the players' attitudes and preferences. However, quality feedback is difficult to obtain from playtesting sessions without a quality gaming assessment tool. There is a need for a psychometrically validated and comprehensive gaming scale that is appropriate for playtesting and game evaluation purposes. The process of developing and validating this new scale followed current best practices of scale development and validation. As a result, a mixed-method design that consisted of item pool generation, expert review, questionnaire pilot study, exploratory factor analysis (N = 629), and confirmatory factor analysis (N = 729) was implemented. A new instrument measuring video game satisfaction, called the Game User Experience Satisfaction Scale (GUESS), with nine subscales emerged. The GUESS was demonstrated to have content validity, internal consistency, and convergent and discriminant validity. The GUESS was developed and validated based on the assessments of over 450 unique video game titles across many popular genres. Thus, it can be applied across many types of video games in the industry both as a way to assess what aspects of a game contribute to user satisfaction and as a tool to aid in debriefing users on their gaming experience. The GUESS can be administered to evaluate user satisfaction of different types of video games by a variety of users. © 2016, Human Factors and Ergonomics Society.
ERIC Educational Resources Information Center
Reddy, Linda A.; Dudek, Christopher M.; Kettler, Ryan J.; Kurz, Alexander; Peters, Stephanie
2016-01-01
This study presents the reliability and validity of the Teacher Evaluation Experience Scale--Teacher Form (TEES-T), a multidimensional measure of educators' attitudes and beliefs about teacher evaluation. Confirmatory factor analyses of data from 583 teachers were conducted on the TEES-T hypothesized five-factor model, as well as on alternative…
ERIC Educational Resources Information Center
Hoi, Cathy Ka Weng; Zhou, Mingming; Teo, Timothy; Nie, Youyan
2017-01-01
The aim of the current study is to develop and validate an instrument to measure the four sources of teacher efficacy among Chinese primary school teachers. A 26-item Sources of Teacher Efficacy Questionnaire (STEQ) was proposed with four subscales: mastery experience, vicarious experience, social persuasion, and physiological arousal. The results…
ERIC Educational Resources Information Center
Coelho, Francisco Antonio, Jr.; Ferreira, Rodrigo Rezende; Paschoal, Tatiane; Faiad, Cristiane; Meneses, Paulo Murce
2015-01-01
The purpose of this study was twofold: to assess evidences of construct validity of the Brazilian Scale of Tutors Competences in the field of Open and Distance Learning and to examine if variables such as professional experience, perception of the student´s learning performance and prior experience influence the development of technical and…
Modeling the Effects of Argument Length and Validity on Inductive and Deductive Reasoning
ERIC Educational Resources Information Center
Rotello, Caren M.; Heit, Evan
2009-01-01
In an effort to assess models of inductive reasoning and deductive reasoning, the authors, in 3 experiments, examined the effects of argument length and logical validity on evaluation of arguments. In Experiments 1a and 1b, participants were given either induction or deduction instructions for a common set of stimuli. Two distinct effects were…
ERIC Educational Resources Information Center
Zhan, Ying; Wan, Zhi Hong
2016-01-01
Test takers' beliefs or experiences have been overlooked in most validation studies in language education. Meanwhile, a mutual exclusion has been observed in the literature, with little or no dialogue between validation studies and studies concerning the uses and consequences of testing. To help fill these research gaps, a group of Senior III…
ERIC Educational Resources Information Center
Liu, Juhong Christie; St. John, Kristen; Courtier, Anna M. Bishop
2017-01-01
Identifying instruments and surveys to address geoscience education research (GER) questions is among the high-ranked needs in a 2016 survey of the GER community (St. John et al., 2016). The purpose of this study was to develop and validate a student-centered assessment instrument to measure course experience in a general education integrated…
ERIC Educational Resources Information Center
Özenç, Emine Gül; Dogan, M. Cihangir
2014-01-01
This study aims to perform a validity-reliability test by developing the Functional Literacy Experience Scale based upon Ecological Theory (FLESBUET) for primary education students. The study group includes 209 fifth grade students at Sabri Taskin Primary School in the Kartal District of Istanbul, Turkey during the 2010-2011 academic year.…
Predeployment validation of fault-tolerant systems through software-implemented fault insertion
NASA Technical Reports Server (NTRS)
Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.
1989-01-01
The fault injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey is presented of validation methodologies. The need for fault insertion based on validation methodologies is demonstrated. The origins and models of faults, and motivation for the FIAT concept are reviewed. FIAT employs a validation methodology which builds confidence in the system through first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults or the manifestation of faults to be inserted by either seeding faults into memory or triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving insertion of faults. There is a common system interface which allows ease of use to decrease experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses which parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments, each using a different fault-tolerance strategy.
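The basic fault-insertion loop (seed a fault into memory, then check whether an error-detection mechanism fires) can be illustrated with a toy harness; this is a generic sketch in Python, not the FIAT interface or its fault models:

import random
import zlib

def flip_random_bit(buf: bytearray) -> None:
    # Inject a fault by flipping one randomly chosen bit of the buffer.
    bit = random.randrange(len(buf) * 8)
    buf[bit // 8] ^= 1 << (bit % 8)

def run_trial() -> bool:
    # Record a fault-free baseline checksum, inject a fault, report whether detection fires.
    data = bytearray(b"baseline workload state " * 8)
    reference = zlib.crc32(data)
    flip_random_bit(data)
    return zlib.crc32(data) != reference

trials = [run_trial() for _ in range(1000)]
print(f"detection coverage over {len(trials)} injected faults: {sum(trials) / len(trials):.1%}")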
Convergent Validity of the Early Memory Index in Two Primary Care Samples.
Porcerelli, John H; Cogan, Rosemary; Melchior, Katherine A; Jasinski, Matthew J; Richardson, Laura; Fowler, Shannon; Morris, Pierre; Murdoch, William
2016-01-01
Karliner, Westrich, Shedler, and Mayman (1996) developed the Early Memory Index (EMI) to assess mental health, narrative coherence, and traumatic experiences in reports of early memories. We assessed the convergent validity of EMI scales with data from 103 women from an urban primary care clinic (Study 1) and data from 48 women and 24 men from a suburban primary care clinic (Study 2). Patients provided early memory narratives and completed self-report measures of psychopathology, trauma, and health care utilization. In both studies, lower scores on the Mental Health scale and higher scores on the Traumatic Experiences scale were related to higher scores on measures of psychopathology and childhood trauma. Less consistent associations were found between the Mental Health and Traumatic Experiences scores and measures of health care utilization. The Narrative Coherence scale showed inconsistent relationships across measures in both samples. In analyses assessing the overall fit between hypothesized and actual correlations between EMI scores and measures of psychopathology, severity of trauma symptoms, and health care utilization, the Mental Health scale of the EMI demonstrated stronger convergent validity than the EMI Traumatic Experiences scale. The results provide support for the convergent validity of the Mental Health scale of the EMI.
Panamanian women's experience of vaginal examination in labour: A questionnaire validation.
Bonilla-Escobar, Francisco J; Ortega-Lenis, Delia; Rojas-Mirquez, Johanna C; Ortega-Loubon, Christian
2016-05-01
To validate a tool that allows healthcare providers to obtain accurate information regarding Panamanian women's thoughts and feelings about vaginal examination during labour, and that can be used in other Latin-American countries. Validation study based on a database from a cross-sectional study carried out in two tertiary care hospitals in Panama City, Panama. Women in the immediate postpartum period who had spontaneous labour onset and uncomplicated deliveries were included in the study from April to August 2008. Researchers used a survey designed by Lewin et al. that included 20 questions related to a patient's experience during a vaginal examination. Five constructs (factors) related to a patient's experience of vaginal examination during labour were identified: Approval (Cronbach's alpha 0.72), Perception (0.67), Rejection (0.40), Consent (0.51), and Stress (0.20). The validity of the scale and its constructs for obtaining information related to vaginal examination during labour, including patients' experiences with examination and healthcare staff performance, was demonstrated. Utilisation of the scale will allow institutions to identify items that need improvement and address these areas in order to promote the best care for patients in labour. Copyright © 2016 Elsevier Ltd. All rights reserved.
Gawlik, Stephanie; Müller, Mitho; Hoffmann, Lutz; Dienes, Aimée; Reck, Corinna
2015-01-01
A validated questionnaire assessment of fathers' experiences during childbirth is lacking in routine clinical practice. Salmon's Item List is a short, validated method used for the assessment of birth experience in mothers in both English- and German-speaking communities. With little to no validated data available for fathers, this pilot study aimed to assess the applicability of the German version of Salmon's Item List, including a multidimensional birth experience concept, in fathers. Longitudinal study. Data were collected by questionnaires. University hospital in Germany. The birth experiences of 102 fathers were assessed four to six weeks post partum using the German version of Salmon's Item List. Construct validity testing with exploratory factor analysis using principal component analysis with varimax rotation was performed to identify the dimensions of childbirth experiences. Internal consistency was also analysed. Factor analysis yielded a four-factor solution comprising 17 items that accounted for 54.5% of the variance. The main domain was 'fulfilment', and the secondary domains were 'emotional distress', 'physical discomfort' and 'emotional adaption'. For fulfilment, Cronbach's α met conventional reliability standards (0.87). Salmon's Item List is an appropriate instrument to assess birth experience in fathers in terms of fulfilment. Larger samples need to be examined in order to prove the stability of the factor structure before this can be extended to routine clinical assessment. A reduced version of Salmon's Item List may be useful as a screening tool for general assessment. Copyright © 2014 Elsevier Ltd. All rights reserved.
Bell, Vaughan
2017-01-01
Background The experience of ‘sensed presence’—a feeling or sense that another entity, individual or being is present despite no clear sensory or perceptual evidence—is known to occur in the general population, appears more frequently in religious or spiritual contexts, and seems to be prominent in certain psychiatric or neurological conditions and may reflect specific functions of social cognition or body-image representation systems in the brain. Previous research has relied on ad-hoc measures of the experience and no specific psychometric scale to measure the experience exists to date. Methods Based on phenomenological description in the literature, we created the 16-item Sensed Presence Questionnaire (SenPQ). We recruited participants from (i) a general population sample, and; (ii) a sample including specific selection for religious affiliation, to complete the SenPQ and additional measures of well-being, schizotypy, social anxiety, social imagery, and spiritual experience. We completed an analysis to test internal reliability, the ability of the SenPQ to distinguish between religious and non-religious participants, and whether the SenPQ was specifically related to positive schizotypical experiences and social imagery. A factor analysis was also conducted to examine underlying latent variables. Results The SenPQ was found to be reliable and valid, with religious participants significantly endorsing more items than non-religious participants, and the scale showing a selective relationship with construct relevant measures. Principal components analysis indicates two potential underlying factors interpreted as reflecting ‘benign’ and ‘malign’ sensed presence experiences. Discussion The SenPQ appears to be a reliable and valid measure of sensed presence experience although further validation in neurological and psychiatric conditions is warranted. PMID:28367379
Virtual Model Validation of Complex Multiscale Systems: Applications to Nonlinear Elastostatics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oden, John Tinsley; Prudencio, Ernest E.; Bauman, Paul T.
We propose a virtual statistical validation process as an aid to the design of experiments for the validation of phenomenological models of the behavior of material bodies, with focus on those cases in which knowledge of the fabrication process used to manufacture the body can provide information on the micro-molecular-scale properties underlying macroscale behavior. One example is given by models of elastomeric solids fabricated using polymerization processes. We describe a framework for model validation that involves Bayesian updates of parameters in statistical calibration and validation phases. The process enables the quantification of uncertainty in quantities of interest (QoIs) and the determination of model consistency using tools of statistical information theory. We assert that microscale information drawn from molecular models of the fabrication of the body provides a valuable source of prior information on parameters as well as a means for estimating model bias and designing virtual validation experiments to provide information gain over calibration posteriors.
Fernandes, Tânia; Araújo, Susana; Sucena, Ana; Reis, Alexandra; Castro, São Luís
2017-02-01
Reading is a central cognitive domain, but little research has been devoted to standardized tests for adults. We thus examined the psychometric properties of the 1-min version of Teste de Idade de Leitura (Reading Age Test; 1-min TIL), the Portuguese version of the Lobrot L3 test, in three experiments with college students: typical readers in Experiments 1A and 1B, dyslexic readers and chronological age controls in Experiment 2. In Experiment 1A, test-retest reliability and convergent validity were evaluated in 185 students. Reliability was >.70, and phonological decoding underpinned 1-min TIL. In Experiment 1B, internal consistency was assessed by presenting two 45-s versions of the test to 19 students, and performance in these versions was significantly associated (r = .78). In Experiment 2, construct validity, criterion validity and clinical utility of 1-min TIL were investigated. A multiple regression analysis corroborated construct validity; both phonological decoding and listening comprehension were reliable predictors of 1-min TIL scores. Logistic regression and receiver operating characteristics analyses revealed the high accuracy of this test in distinguishing dyslexic from typical readers. Therefore, the 1-min TIL, which assesses reading comprehension and potential reading difficulties in college students, has the necessary psychometric properties to become a useful screening instrument in neuropsychological assessment and research. Copyright © 2017 John Wiley & Sons, Ltd.
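The diagnostic-accuracy step described above (logistic regression plus a receiver operating characteristic analysis to separate dyslexic from typical readers) follows a standard pattern; a sketch with hypothetical score and label files:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

scores = np.load("til_scores.npy").reshape(-1, 1)   # hypothetical 1-min TIL scores
group = np.load("group_labels.npy")                 # hypothetical labels: 1 = dyslexic, 0 = typical reader

model = LogisticRegression().fit(scores, group)
prob = model.predict_proba(scores)[:, 1]
print(f"ROC AUC: {roc_auc_score(group, prob):.2f}")

# Cut-off that maximises Youden's J (sensitivity + specificity - 1).
fpr, tpr, thresholds = roc_curve(group, prob)
best = int(np.argmax(tpr - fpr))
print(f"suggested probability cut-off: {thresholds[best]:.2f}")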
Examining students' views about validity of experiments: From introductory to Ph.D. students
NASA Astrophysics Data System (ADS)
Hu, Dehui; Zwickl, Benjamin M.
2018-06-01
We investigated physics students' epistemological views on measurements and validity of experimental results. The roles of experiments in physics have been underemphasized in previous research on students' personal epistemology, and there is a need for a broader view of personal epistemology that incorporates experiments. An epistemological framework incorporating the structure, methodology, and validity of scientific knowledge guided the development of an open-ended survey. The survey was administered to students in algebra-based and calculus-based introductory physics courses, upper-division physics labs, and physics Ph.D. students. Within our sample, we identified several differences in students' ideas about validity and uncertainty in measurement. The majority of introductory students justified the validity of results through agreement with theory or with results from others. Alternatively, Ph.D. students frequently justified the validity of results based on the quality of the experimental process and repeatability of results. When asked about the role of uncertainty analysis, introductory students tended to focus on the representational roles (e.g., describing imperfections, data variability, and human mistakes). However, advanced students focused on the inferential roles of uncertainty analysis (e.g., quantifying reliability, making comparisons, and guiding refinements). The findings suggest that lab courses could emphasize a variety of approaches to establish validity, such as by valuing documentation of the experimental process when evaluating the quality of student work. In order to emphasize the role of uncertainty in an authentic way, labs could provide opportunities to iterate, make repeated comparisons, and make decisions based on those comparisons.
ERIC Educational Resources Information Center
Lievens, Filip; Patterson, Fiona
2011-01-01
In high-stakes selection among candidates with considerable domain-specific knowledge and experience, investigations of whether high-fidelity simulations (assessment centers; ACs) have incremental validity over low-fidelity simulations (situational judgment tests; SJTs) are lacking. Therefore, this article integrates research on the validity of…
Construction and Initial Validation of the Multiracial Experiences Measure (MEM)
Yoo, Hyung Chol; Jackson, Kelly; Guevarra, Rudy P.; Miller, Matthew J.; Harrington, Blair
2015-01-01
This article describes the development and validation of the Multiracial Experiences Measure (MEM): a new measure that assesses uniquely racialized risks and resiliencies experienced by individuals of mixed racial heritage. Across two studies, there was evidence for the validation of the 25-item MEM with 5 subscales including Shifting Expressions, Perceived Racial Ambiguity, Creating Third Space, Multicultural Engagement, and Multiracial Discrimination. The 5-subscale structure of the MEM was supported by a combination of exploratory and confirmatory factor analyses. Evidence of criterion-related validity was partially supported with MEM subscales correlating with measures of racial diversity in one’s social network, color-blind racial attitude, psychological distress, and identity conflict. Evidence of discriminant validity was supported with MEM subscales not correlating with impression management. Implications for future research and suggestions for utilization of the MEM in clinical practice with multiracial adults are discussed. PMID:26460977
Construction and initial validation of the Multiracial Experiences Measure (MEM).
Yoo, Hyung Chol; Jackson, Kelly F; Guevarra, Rudy P; Miller, Matthew J; Harrington, Blair
2016-03-01
This article describes the development and validation of the Multiracial Experiences Measure (MEM): a new measure that assesses uniquely racialized risks and resiliencies experienced by individuals of mixed racial heritage. Across 2 studies, there was evidence for the validation of the 25-item MEM with 5 subscales including Shifting Expressions, Perceived Racial Ambiguity, Creating Third Space, Multicultural Engagement, and Multiracial Discrimination. The 5-subscale structure of the MEM was supported by a combination of exploratory and confirmatory factor analyses. Evidence of criterion-related validity was partially supported with MEM subscales correlating with measures of racial diversity in one's social network, color-blind racial attitude, psychological distress, and identity conflict. Evidence of discriminant validity was supported with MEM subscales not correlating with impression management. Implications for future research and suggestions for utilization of the MEM in clinical practice with multiracial adults are discussed. (c) 2016 APA, all rights reserved).
NASA Technical Reports Server (NTRS)
Generazio, Edward R. (Inventor)
2012-01-01
A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
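Before any of the directed-DOE steps claimed above, hit/miss data of this kind are commonly summarised as a logistic probability-of-detection curve versus flaw size; the sketch below shows only that generic step, with hypothetical data and the usual log-size link:

import numpy as np
from sklearn.linear_model import LogisticRegression

size = np.load("flaw_sizes_mm.npy").reshape(-1, 1)   # hypothetical flaw sizes
hit = np.load("hit_miss.npy")                        # hypothetical outcomes: 1 = hit, 0 = miss

pod_model = LogisticRegression().fit(np.log(size), hit)

# POD as a function of flaw size; a90 is the size detected 90% of the time
# (assumes POD is increasing and reaches 90% within the sampled size range).
grid = np.linspace(size.min(), size.max(), 500).reshape(-1, 1)
pod = pod_model.predict_proba(np.log(grid))[:, 1]
a90 = grid[np.searchsorted(pod, 0.90), 0]
print(f"a90 estimate: {a90:.2f} mm")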
Observing System Simulation Experiments
NASA Technical Reports Server (NTRS)
Prive, Nikki
2015-01-01
This presentation gives an overview of Observing System Simulation Experiments (OSSEs). The components of an OSSE are described, along with discussion of the process for validating, calibrating, and performing experiments.
The Play Experience Scale: development and validation of a measure of play.
Pavlas, Davin; Jentsch, Florian; Salas, Eduardo; Fiore, Stephen M; Sims, Valerie
2012-04-01
A measure of play experience in video games was developed through literature review and two empirical validation studies. Despite the considerable attention given to games in the behavioral sciences, play experience remains empirically underexamined. One reason for this gap is the absence of a scale that measures play experience. In Study 1, the initial Play Experience Scale (PES) was tested through an online validation that featured three different games (N = 203). In Study 2, a revised PES was assessed with a serious game in the laboratory (N = 77). Through principal component analysis of the Study 1 data, the initial 20-item PES was revised, resulting in the 16-item PES-16. Study 2 showed the PES-16 to be a robust instrument with the same patterns of correlations as in Study 1 via (a) internal consistency estimates, (b) correlations with established scales of motivation, (c) distributions of PES-16 scores in different game conditions, and (d) examination of the average variance extracted of the PES and the Intrinsic Motivation Scale. We suggest that the PES is appropriate for use in further validation studies. Additional examinations of the scale are required to determine its applicability to other contexts and its relationship with other constructs. The PES is potentially relevant to human factors undertakings involving video games, including basic research into play, games, and learning; prototype testing; and exploratory learning studies.
Hayes, Brett K; Stephens, Rachel G; Ngo, Jeremy; Dunn, John C
2018-02-01
Three experiments examined the number of qualitatively different processing dimensions needed to account for inductive and deductive reasoning. In each study, participants were presented with arguments that varied in logical validity and consistency with background knowledge (believability), and evaluated them according to deductive criteria (whether the conclusion was necessarily true given the premises) or inductive criteria (whether the conclusion was plausible given the premises). We examined factors including working memory load (Experiments 1 and 2), individual working memory capacity (Experiments 1 and 2), and decision time (Experiment 3), which, according to dual-processing theories, modulate the contribution of heuristic and analytic processes to reasoning. A number of empirical dissociations were found. Argument validity affected deduction more than induction. Argument believability affected induction more than deduction. Lower working memory capacity reduced sensitivity to argument validity and increased sensitivity to argument believability, especially under induction instructions. Reduced decision time led to decreased sensitivity to argument validity. State-trace analyses of each experiment, however, found that only a single underlying dimension was required to explain patterns of inductive and deductive judgments. These results show that the dissociations, which have traditionally been seen as supporting dual-processing models of reasoning, are consistent with a single-process model that assumes a common evidentiary scale for induction and deduction. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
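State-trace analysis asks whether the induction and deduction condition means can be ordered along a single latent dimension, i.e. whether a plot of one against the other is monotonic; the sketch below shows that basic monotonicity check with hypothetical condition means (the paper's formal model-based analysis is more involved):

import numpy as np
from scipy.stats import spearmanr

induction = np.load("induction_condition_means.npy")   # hypothetical endorsement rates per condition
deduction = np.load("deduction_condition_means.npy")    # same conditions under deduction instructions

rho, p = spearmanr(induction, deduction)
order = np.argsort(induction)
monotone = bool(np.all(np.diff(deduction[order]) >= 0))
print(f"Spearman rho = {rho:.2f} (p = {p:.3f}); monotone ordering: {monotone}")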
Validation results of satellite mock-up capturing experiment using nets
NASA Astrophysics Data System (ADS)
Medina, Alberto; Cercós, Lorenzo; Stefanescu, Raluca M.; Benvenuto, Riccardo; Pesce, Vincenzo; Marcon, Marco; Lavagna, Michèle; González, Iván; Rodríguez López, Nuria; Wormnes, Kjetil
2017-05-01
The PATENDER activity (Net parametric characterization and parabolic flight), funded by the European Space Agency (ESA) via its Clean Space initiative, aimed to validate a simulation tool for designing nets for capturing space debris. This validation has been performed through a set of different experiments under microgravity conditions where a net was launched, capturing and wrapping a satellite mock-up. This paper presents the architecture of the thrown-net dynamics simulator together with the set-up of the deployment experiment and its trajectory reconstruction results on a parabolic flight (Novespace A-310, June 2015). The simulator has been implemented within the Blender framework in order to provide a highly configurable tool, able to reproduce different scenarios for Active Debris Removal missions. The experiment has been performed over thirty parabolas offering around 22 s of zero-g conditions. A flexible meshed fabric structure (the net), ejected from a container and propelled by corner masses (the bullets) arranged around its circumference, was launched at different initial velocities and launching angles using a pneumatic-based dedicated mechanism (representing the chaser satellite) against a target mock-up (the target satellite). High-speed motion cameras recorded the experiment, allowing 3D reconstruction of the net motion. The net knots were coloured to allow post-processing of the images using colour segmentation, stereo matching and iterative closest point (ICP) for knot tracking. The final objective of the activity was the validation of the net deployment and wrapping simulator using images recorded during the parabolic flight. The high-resolution images acquired have been post-processed to accurately determine the initial conditions and generate the reference data (position and velocity of all knots of the net along its deployment and wrapping of the target mock-up) for the simulator validation. The simulator has been properly configured according to the parabolic flight scenario and executed in order to generate the validation data. Both datasets have been compared according to different metrics in order to perform the validation of the PATENDER simulator.
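The knot-tracking step mentioned above rests on standard 3D registration machinery. Purely as a rough illustration, and not the PATENDER processing pipeline, the sketch below implements a basic point-to-point iterative closest point (ICP) alignment between a reconstructed knot cloud and a reference cloud; the synthetic data, tolerance, and iteration count are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, target, max_iter=50, tol=1e-6):
    """Align 'source' knot positions to 'target' by iterating nearest-neighbour matching."""
    src = source.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    err = prev_err
    for _ in range(max_iter):
        dist, idx = tree.query(src)               # closest target point for each source point
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src, err

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.uniform(-1, 1, size=(200, 3))    # stand-in for reconstructed knot positions
    theta = np.deg2rad(10)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    source = target @ R_true.T + np.array([0.05, -0.02, 0.03])
    aligned, err = icp(source, target)
    print(f"mean residual after ICP: {err:.4f}")
```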
Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui
2017-12-01
Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Due to the high stakes in the clinical setting, it is critical to calculate the effect of these assumptions on the CFD simulation results. However, existing CFD validation approaches do not quantify error in the simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software STAR-CCM+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the STAR-CCM+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset, and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.
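The idea of parsing out error sources can be illustrated with a simple numerical comparison in the spirit of ASME V&V 20-style validation; this is not the authors' exact methodology, and all velocity values and uncertainty estimates below are assumptions.

```python
import numpy as np

# Hypothetical velocity magnitudes (m/s) sampled along an intersecting validation line.
u_cfd = np.array([0.42, 0.55, 0.61, 0.58, 0.47])   # CFD solution (assumed)
u_piv = np.array([0.40, 0.52, 0.57, 0.56, 0.44])   # PIV measurement (assumed)

u_num = 0.01    # estimated numerical (discretization) uncertainty, m/s (assumed)
u_exp = 0.02    # estimated PIV measurement uncertainty, m/s (assumed)

# Comparison error and combined validation uncertainty.
E = u_cfd - u_piv
u_val = np.sqrt(u_num**2 + u_exp**2)

# Pointwise and average relative discrepancy.
rel_err = 100 * np.abs(E) / np.abs(u_piv)
print(f"average relative discrepancy: {rel_err.mean():.2f}% +/- {rel_err.std():.2f}%")
print(f"comparison error E: {E}")
print(f"combined validation uncertainty u_val: {u_val:.3f} m/s")
# Where |E| clearly exceeds u_val, the remaining discrepancy points to model-form error.
print("model-form error indicated at points:", np.where(np.abs(E) > u_val)[0])
```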
Disruption Tolerant Networking Flight Validation Experiment on NASA's EPOXI Mission
NASA Technical Reports Server (NTRS)
Wyatt, Jay; Burleigh, Scott; Jones, Ross; Torgerson, Leigh; Wissler, Steve
2009-01-01
In October and November of 2008, the Jet Propulsion Laboratory installed and tested essential elements of Delay/Disruption Tolerant Networking (DTN) technology on the Deep Impact spacecraft. This experiment, called Deep Impact Network Experiment (DINET), was performed in close cooperation with the EPOXI project which has responsibility for the spacecraft. During DINET some 300 images were transmitted from the JPL nodes to the spacecraft. Then they were automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. All transmitted bundles were successfully received, without corruption. The DINET experiment demonstrated DTN readiness for operational use in space missions. This activity was part of a larger NASA space DTN development program to mature DTN to flight readiness for a wide variety of mission types by the end of 2011. This paper describes the DTN protocols, the flight demo implementation, validation metrics which were created for the experiment, and validation results.
McConnell, Bridget L.; Urushihara, Kouji; Miller, Ralph R.
2009-01-01
Three conditioned suppression experiments with rats investigated contrasting predictions made by the extended comparator hypothesis and acquisition-focused models of learning, specifically, modified SOP and the revised Rescorla-Wagner model, concerning retrospective revaluation. Two target cues (X and Y) were partially reinforced using a stimulus relative validity design (i.e., AX-Outcome/ BX-No outcome/ CY-Outcome/ DY-No outcome), and subsequently one of the companion cues for each target was extinguished in compound (BC-No outcome). In Experiment 1, which used spaced trials for relative validity training, greater suppression was observed to target cue Y for which the excitatory companion cue had been extinguished relative to target cue X for which the nonexcitatory companion cue had been extinguished. Experiment 2 replicated these results in a sensory preconditioning preparation. Experiment 3 massed the trials during relative validity training, and the opposite pattern of data was observed. The results are consistent with the predictions of the extended comparator hypothesis. Furthermore, this set of experiments is unique in being able to differentiate between these models without invoking higher-order comparator processes. PMID:20141324
ERIC Educational Resources Information Center
St.Clair, Travis; Cook, Thomas D.; Hallberg, Kelly
2014-01-01
Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment…
A Performance Management Framework for Civil Engineering
1990-09-01
cultural change. A non-equivalent control group design was chosen to augment the case analysis. Figure 3.18 shows the form of the quasi-experiment. The...The non-equivalent control group design controls the following obstacles to internal validity: history, maturation, testing, and instrumentation. The...and Stanley, 1963:48,50) Table 7. Validity of Quasi-Experiment The non-equivalent control group experimental design controls the following obstacles to
De Pasquale, Concetta; Sciacca, Federica; Hichy, Zira
2016-01-01
The Dissociative Experience Scale for adolescents (A-DES), a 30-item, multidimensional, self-administered questionnaire, was originally validated using a large sample of American young people. Here we report the linguistic validation process and the metric validity of the Italian version of the A-DES in Italy. A set of questionnaires was provided to a total of 633 participants from March 2015 to April 2016. The participants consisted of 282 boys and 351 girls, and their ages ranged between 18 and 24 years. The translation process consisted of two consecutive steps: forward-backward translation and acceptability testing. The psychometric testing was applied to Italian students who were recruited from Italian public schools and universities in Sicily. Informed consent was obtained from all participants in the research. All individuals completed the A-DES. Reliability and validity were tested. The translated version was validated on a total of 633 Italian students. The reliability of the A-DES total score is .926. The scale is composed of 4 subscales: Dissociative amnesia, Absorption and imaginative involvement, Depersonalization and derealization, and Passive influence. The reliability of each subscale is .756 for dissociative amnesia, .659 for absorption and imaginative involvement, .850 for depersonalization and derealization, and .743 for passive influence. The Italian version of the A-DES constitutes a useful instrument to measure dissociative experience in adolescents and young adults in Italy.
Remote Patron Validation: Posting a Proxy Server at the Digital Doorway.
ERIC Educational Resources Information Center
Webster, Peter
2002-01-01
Discussion of remote access to library services focuses on proxy servers as a method for remote access, based on experiences at Saint Mary's University (Halifax). Topics include Internet protocol user validation; browser-directed proxies; server software proxies; vendor alternatives for validating remote users; and Internet security issues. (LRW)
Empirical Validation and Application of the Computing Attitudes Survey
ERIC Educational Resources Information Center
Dorn, Brian; Elliott Tew, Allison
2015-01-01
Student attitudes play an important role in shaping learning experiences. However, few validated instruments exist for measuring student attitude development in a discipline-specific way. In this paper, we present the design, development, and validation of the computing attitudes survey (CAS). The CAS is an extension of the Colorado Learning…
Rivard, Justin D; Vergis, Ashley S; Unger, Bertram J; Hardy, Krista M; Andrew, Chris G; Gillman, Lawrence M; Park, Jason
2014-06-01
Computer-based surgical simulators capture a multitude of metrics based on different aspects of performance, such as speed, accuracy, and movement efficiency. However, without rigorous assessment, it may be unclear whether all, some, or none of these metrics actually reflect technical skill, which can compromise educational efforts on these simulators. We assessed the construct validity of individual performance metrics on the LapVR simulator (Immersion Medical, San Jose, CA, USA) and used these data to create task-specific summary metrics. Medical students with no prior laparoscopic experience (novices, N = 12), junior surgical residents with some laparoscopic experience (intermediates, N = 12), and experienced surgeons (experts, N = 11) all completed three repetitions of four LapVR simulator tasks. The tasks included three basic skills (peg transfer, cutting, clipping) and one procedural skill (adhesiolysis). We selected 36 individual metrics on the four tasks that assessed six different aspects of performance, including speed, motion path length, respect for tissue, accuracy, task-specific errors, and successful task completion. Four of seven individual metrics assessed for peg transfer, six of ten metrics for cutting, four of nine metrics for clipping, and three of ten metrics for adhesiolysis discriminated between experience levels. Time and motion path length were significant on all four tasks. We used the validated individual metrics to create summary equations for each task, which successfully distinguished between the different experience levels. Educators should maintain some skepticism when reviewing the plethora of metrics captured by computer-based simulators, as some but not all are valid. We showed the construct validity of a limited number of individual metrics and developed summary metrics for the LapVR. The summary metrics provide a succinct way of assessing skill with a single metric for each task, but require further validation.
Development and psychometric testing of the rural pregnancy experience scale (RPES).
Kornelsen, Jude; Stoll, Kathrin; Grzybowski, Stefan
2011-01-01
Rural pregnant women who lack local access to maternity care due to their remote living circumstances may experience stress and anxiety related to pregnancy and parturition. The Rural Pregnancy Experience Scale (RPES) was designed to assess the unique worry and concerns reflective of the stress and anxiety of rural pregnant women related to pregnancy and parturition. The items of the scale were designed based on the results of a qualitative study of the experiences of pregnant rural women, thereby building a priori content validity into the measure. The relevancy content validity index (CVI) for this instrument was 1.0 and the clarity CVI was .91, as rated by maternity care specialists. A field test of the RPES with 187 pregnant rural women from British Columbia indicated that it had two factors: financial worries and worries/concerns about maternity care services, which were consistent with the conceptual base of the tool. Cronbach's alpha for the total RPES was .91; for the financial worries subscale and the worries/concerns about maternity care services subscale, alphas were .89 and .88, respectively. Construct validity was supported by significant correlations between the total scores of the RPES and the Depression Anxiety Stress Scales (DASS; r = .39, p < .01), and subscale scores on the RPES were significantly correlated and converged with the depression, anxiety, and stress subscales of the DASS, supporting convergent validity (correlations ranged between .20, p < .05, and .43, p < .01). Construct validity was also supported by findings that the level of access and availability of maternity care services were significantly associated with RPES scores. It was concluded that the RPES is a reliable and valid measure of worries and concerns reflective of rural pregnant women's stress and anxiety related to pregnancy and parturition.
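Internal-consistency figures like the Cronbach's alpha values reported above can be reproduced from raw item scores in a few lines. The sketch below applies the standard formula to a made-up item-response matrix; the simulated data and item count are assumptions, not the RPES data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Simulated 5-point Likert responses: a common latent factor plus item noise.
    n, k = 187, 10
    latent = rng.normal(size=(n, 1))
    scores = np.clip(np.rint(3 + latent + 0.8 * rng.normal(size=(n, k))), 1, 5)
    print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```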
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oldenburg, C.M.
2011-06-01
The need for risk-driven field experiments for CO2 geologic storage processes to complement ongoing pilot-scale demonstrations is discussed. These risk-driven field experiments would be aimed at understanding the circumstances under which things can go wrong with a CO2 capture and storage (CCS) project and cause it to fail, as distinguished from accomplishing this end using demonstration and industrial scale sites. Such risk-driven tests would complement risk-assessment efforts that have already been carried out by providing opportunities to validate risk models. In addition to experimenting with high-risk scenarios, these controlled field experiments could help validate monitoring approaches to improve performance assessment and guide development of mitigation strategies.
Hanauer, David I; Bauerle, Cynthia
2015-01-01
Science, technology, engineering, and mathematics education reform efforts have called for widespread adoption of evidence-based teaching in which faculty members attend to student outcomes through assessment practice. Awareness about the importance of assessment has illuminated the need to understand what faculty members know and how they engage with assessment knowledge and practice. The Faculty Self-Reported Assessment Survey (FRAS) is a new instrument for evaluating science faculty assessment knowledge and experience. Instrument validation was composed of two distinct studies: an empirical evaluation of the psychometric properties of the FRAS and a comparative known-groups validation to explore the ability of the FRAS to differentiate levels of faculty assessment experience. The FRAS was found to be highly reliable (α = 0.96). The dimensionality of the instrument enabled distinction of assessment knowledge into categories of program design, instrumentation, and validation. In the known-groups validation, the FRAS distinguished between faculty groups with differing levels of assessment experience. Faculty members with formal assessment experience self-reported higher levels of familiarity with assessment terms, higher frequencies of assessment activity, increased confidence in conducting assessment, and more positive attitudes toward assessment than faculty members who were novices in assessment. These results suggest that the FRAS can reliably and validly differentiate levels of expertise in faculty knowledge of assessment. © 2015 D. I. Hanauer and C. Bauerle. CBE—Life Sciences Education © 2015 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Thylén, Ingela; Wenemark, Marika; Fluur, Christina; Strömberg, Anna; Bolse, Kärstin; Årestedt, Kristofer
2014-04-01
Due to extended indications and resynchronization therapy, many implantable cardioverter defibrillator (ICD) recipients will experience progressive co-morbid conditions and will be more likely to die of causes other than cardiac death. It is therefore important to elucidate the ICD patients' preferences when nearing end-of-life. Instead of avoiding the subject of end-of-life, a validated questionnaire may be helpful to explore patients' experiences and attitudes about end-of-life concerns and to assess knowledge of the function of the ICD in end-of-life. Validated instruments assessing patients' perspective concerning end-of-life issues are scarce. The purpose of this study was to develop and evaluate respondent satisfaction and measurement properties of the 'Experiences, Attitudes and Knowledge of End-of-Life Issues in Implantable Cardioverter Defibrillator Patients' Questionnaire' (EOL-ICDQ). The instrument was tested for validity, respondent satisfaction, and for homogeneity and stability in the Swedish language. An English version of the EOL-ICDQ was validated, but has not yet been pilot tested. The final instrument contained three domains, which were clustered into 39 items measuring: experiences (10 items), attitudes (18 items), and knowledge (11 items) of end-of-life concerns in ICD patients. In addition, the questionnaire also contained items on socio-demographic background (six items) and ICD-specific background (eight items). The validity and reliability properties were considered sufficient. The EOL-ICDQ has the potential to be used in clinical practice and future research. Further studies are needed using this instrument in an Anglo-Saxon context with a sample of English-speaking ICD recipients.
Scale for positive aspects of caregiving experience: development, reliability, and factor structure.
Kate, N; Grover, S; Kulhara, P; Nehra, R
2012-06-01
OBJECTIVE. To develop an instrument (Scale for Positive Aspects of Caregiving Experience [SPACE]) that evaluates positive caregiving experience and assess its psychometric properties. METHODS. Available scales which assess some aspects of positive caregiving experience were reviewed and a 50-item questionnaire with a 5-point rating was constructed. In all, 203 primary caregivers of patients with severe mental disorders were asked to complete the questionnaire. Internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity were evaluated. Principal component factor analysis was run to assess the factorial validity of the scale. RESULTS. The scale developed as part of the study was found to have good internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity. Principal component factor analysis yielded a 4-factor structure, which also had good test-retest reliability and cross-language reliability. There was a strong correlation between the 4 factors obtained. CONCLUSION. The SPACE developed as part of this study has good psychometric properties.
Validation of multiprocessor systems
NASA Technical Reports Server (NTRS)
Siewiorek, D. P.; Segall, Z.; Kong, T.
1982-01-01
Experiments that can be used to validate fault-free performance of multiprocessor systems in aerospace systems integrating flight controls and avionics are discussed. Engineering prototypes for two fault-tolerant multiprocessors are tested.
Maclean, Katherine A.; Leoutsakos, Jeannie-Marie S.; Johnson, Matthew W.; Griffiths, Roland R.
2012-01-01
A large body of historical evidence describes the use of hallucinogenic compounds, such as psilocybin mushrooms, for religious purposes. But few scientific studies have attempted to measure or characterize hallucinogen-occasioned spiritual experiences. The present study examined the factor structure of the Mystical Experience Questionnaire (MEQ), a self-report measure that has been used to assess the effects of hallucinogens in laboratory studies. Participants (N=1602) completed the 43-item MEQ in reference to a mystical or profound experience they had had after ingesting psilocybin. Exploratory factor analysis of the MEQ retained 30 items and revealed a 4-factor structure covering the dimensions of classic mystical experience: unity, noetic quality, sacredness (F1); positive mood (F2); transcendence of time/space (F3); and ineffability (F4). MEQ factor scores showed good internal reliability and correlated with the Hood Mysticism Scale, indicating convergent validity. Participants who endorsed having had a mystical experience on psilocybin, compared to those who did not, had significantly higher factor scores, indicating construct validity. The 4-factor structure was confirmed in a second sample (N=440) and demonstrated superior fit compared to alternative models. The results provide initial evidence of the validity, reliability, and factor structure of a 30-item scale for measuring single, hallucinogen-occasioned mystical experiences, which may be a useful tool in the scientific study of mysticism. PMID:23316089
The influence of cueing on attentional focus in perceptual decision making.
Yang, Cheng-Ta; Little, Daniel R; Hsu, Ching-Chun
2014-11-01
Selective attention has been known to play an important role in decision making. In the present study, we combined a cueing paradigm with a redundant-target detection task to examine how attention affects the decision process when detecting the redundant targets. Cue validity was manipulated in two experiments. The results showed that when the cue was 50 % valid in one experiment, the participants adopted a parallel self-terminating processing strategy, indicative of a diffuse attentional focus on both target locations. When the cue was 100 % valid in the second experiment, all of the participants switched to a serial self-terminating processing strategy, which in our study indicated focused attention to a single target location. This study demonstrates the flexibility of the decision mechanism and highlights the importance of top-down control in selecting a decision strategy.
SCALE TSUNAMI Analysis of Critical Experiments for Validation of 233U Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Don; Rearden, Bradley T
2009-01-01
Oak Ridge National Laboratory (ORNL) staff used the SCALE TSUNAMI tools to provide a demonstration evaluation of critical experiments considered for use in validation of current and anticipated operations involving 233U at the Radiochemical Development Facility (RDF). This work was reported in ORNL/TM-2008/196 issued in January 2009. This paper presents the analysis of two representative safety analysis models provided by RDF staff.
Implicit attitudes towards homosexuality: reliability, validity, and controllability of the IAT.
Banse, R; Seise, J; Zerbes, N
2001-01-01
Two experiments were conducted to investigate the psychometric properties of an Implicit Association Test (IAT; Greenwald, McGhee, & Schwartz, 1998) that was adapted to measure implicit attitudes towards homosexuality. In a first experiment, the validity of the Homosexuality-IAT was tested using a known group approach. Implicit and explicit attitudes were assessed in heterosexual and homosexual men and women (N = 101). The results provided compelling evidence for the convergent and discriminant validity of the Homosexuality-IAT as a measure of implicit attitudes. No evidence was found for two alternative explanations of IAT effects (familiarity with stimulus material and stereotype knowledge). The internal consistency of IAT scores was satisfactory (alphas > .80), but retest correlations were lower. In a second experiment (N = 79) it was shown that uninformed participants were able to fake positive explicit but not implicit attitudes. Discrepancies between implicit and explicit attitudes towards homosexuality could be partially accounted for by individual differences in the motivation to control prejudiced behavior, thus providing independent evidence for the validity of the implicit attitude measure. Neither explicit nor implicit attitudes could be changed by persuasive messages. The results of both experiments are interpreted as evidence for a single construct account of implicit and explicit attitudes towards homosexuality.
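IAT effects in such analyses are typically derived from block-level latency differences. The snippet below computes a simple standardized latency-difference score, broadly in the spirit of the later D-score of Greenwald et al. (2003); it is not necessarily the scoring used in this 2001 study, and the latencies are invented.

```python
import numpy as np

def iat_d_score(compatible_rts, incompatible_rts):
    """Standardized IAT effect: mean latency difference over the pooled SD of all trials."""
    compatible_rts = np.asarray(compatible_rts, dtype=float)
    incompatible_rts = np.asarray(incompatible_rts, dtype=float)
    pooled_sd = np.concatenate([compatible_rts, incompatible_rts]).std(ddof=1)
    return (incompatible_rts.mean() - compatible_rts.mean()) / pooled_sd

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    compatible = rng.normal(750, 150, size=60)      # ms, hypothetical attitude-congruent block
    incompatible = rng.normal(880, 180, size=60)    # ms, hypothetical attitude-incongruent block
    print(f"IAT D-score = {iat_d_score(compatible, incompatible):.2f}")
```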
Crazy like a fox. Validity and ethics of animal models of human psychiatric disease.
Rollin, Michael D H; Rollin, Bernard E
2014-04-01
Animal models of human disease play a central role in modern biomedical science. Developing animal models for human mental illness presents unique practical and philosophical challenges. In this article we argue that (1) existing animal models of psychiatric disease are not valid, (2) attempts to model syndromes are undermined by current nosology, (3) models of symptoms are rife with circular logic and anthropomorphism, (4) any model must make unjustified assumptions about subjective experience, and (5) any model deemed valid would be inherently unethical, for if an animal adequately models human subjective experience, then there is no morally relevant difference between that animal and a human.
Mayer, B; Muche, R
2013-01-01
Animal studies are highly relevant for basic medical research, although their use is publicly controversial. From a biometrical point of view, an optimal sample size should therefore be the aim of such projects. Statistical sample size calculation is usually the appropriate methodology for planning medical research projects. However, the required information is often not valid or becomes available only during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.
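For context, the conventional calculation the authors scrutinize often takes the form of the textbook normal-approximation formula for comparing two group means. The sketch below is generic, not specific to animal studies, and the effect size, alpha, and power values are placeholder assumptions.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sided, two-sample comparison of means,
    using the normal approximation: n = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

if __name__ == "__main__":
    # Assumed standardized effect size d = 1.0 (difference in means / common SD).
    print("animals per group:", n_per_group(effect_size=1.0, alpha=0.05, power=0.80))
```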
Scaglione, John M.; Mueller, Don E.; Wagner, John C.
2014-12-01
One of the most important remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation—in particular, the availability and use of applicable measured data to support validation, especially for fission products (FPs). Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of clear technical basis or approach for use of the data. This paper describes a validation approach for commercial spent nuclear fuel (SNF) criticality safety (keff) evaluations based on best-available data and methods and applies the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The criticality validation approach utilizes not only available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion program to support validation of the principal actinides but also calculated sensitivities, nuclear data uncertainties, and limited available FP LCE data to predict and verify individual biases for relevant minor actinides and FPs. The results demonstrate that (a) sufficient critical experiment data exist to adequately validate keff calculations via conventional validation approaches for the primary actinides, (b) sensitivity-based critical experiment selection is more appropriate for generating accurate application model bias and uncertainty, and (c) calculated sensitivities and nuclear data uncertainties can be used for generating conservative estimates of bias for minor actinides and FPs. Results based on the SCALE 6.1 and the ENDF/B-VII.0 cross-section libraries indicate that a conservative estimate of the bias for the minor actinides and FPs is 1.5% of their worth within the application model. Finally, this paper provides a detailed description of the approach and its technical bases, describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models, and provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data.
Validating presupposed versus focused text information.
Singer, Murray; Solar, Kevin G; Spear, Jackie
2017-04-01
There is extensive evidence that readers continually validate discourse accuracy and congruence, but that they may also overlook conspicuous text contradictions. Validation may be thwarted when the inaccurate ideas are embedded sentence presuppositions. In four experiments, we examined readers' validation of presupposed ("given") versus new text information. Throughout, a critical concept, such as a truck versus a bus, was introduced early in a narrative. Later, a character stated or thought something about the truck, which therefore matched or mismatched its antecedent. Furthermore, truck was presented as either given or new information. Mismatch target reading times uniformly exceeded the matching ones by similar magnitudes for given and new concepts. We obtained this outcome using different grammatical constructions and with different antecedent-target distances. In Experiment 4, we examined only given critical ideas, but varied both their matching and the main verb's factivity (e.g., factive know vs. nonfactive think). The Match × Factivity interaction closely resembled that previously observed for new target information (Singer, 2006). Thus, readers can successfully validate given target information. Although contemporary theories tend to emphasize either deficient or successful validation, both types of theory can accommodate the discourse and reader variables that may regulate validation.
Janssen, Ellen M; Marshall, Deborah A; Hauber, A Brett; Bridges, John F P
2017-12-01
The recent endorsement of discrete-choice experiments (DCEs) and other stated-preference methods by regulatory and health technology assessment (HTA) agencies has placed a greater focus on demonstrating the validity and reliability of preference results. Areas covered: We present a practical overview of tests of validity and reliability that have been applied in the health DCE literature and explore other study qualities of DCEs. From the published literature, we identify a variety of methods to assess the validity and reliability of DCEs. We conceptualize these methods to create a conceptual model with four domains: measurement validity, measurement reliability, choice validity, and choice reliability. Each domain consists of three categories that can be assessed using one to four procedures (for a total of 24 tests). We present how these tests have been applied in the literature and direct readers to applications of these tests in the health DCE literature. Based on a stakeholder engagement exercise, we consider the importance of study characteristics beyond traditional concepts of validity and reliability. Expert commentary: We discuss study design considerations to assess the validity and reliability of a DCE, consider limitations to the current application of tests, and discuss future work to consider the quality of DCEs in healthcare.
Rakotonarivo, O Sarobidy; Schaafsma, Marije; Hockley, Neal
2016-12-01
While discrete choice experiments (DCEs) are increasingly used in the field of environmental valuation, they remain controversial because of their hypothetical nature and the contested reliability and validity of their results. We systematically reviewed evidence on the validity and reliability of environmental DCEs from the past thirteen years (January 2003-February 2016). A total of 107 articles met our inclusion criteria. These studies provide limited and mixed evidence of the reliability and validity of DCE. Valuation results were susceptible to small changes in survey design in 45% of outcomes reporting reliability measures. DCE results were generally consistent with those of other stated preference techniques (convergent validity), but hypothetical bias was common. Evidence supporting theoretical validity (consistency with assumptions of rational choice theory) was limited. In content validity tests, 2-90% of respondents protested against a feature of the survey, and a considerable proportion found DCEs to be incomprehensible or inconsequential (17-40% and 10-62% respectively). DCE remains useful for non-market valuation, but its results should be used with caution. Given the sparse and inconclusive evidence base, we recommend that tests of reliability and validity are more routinely integrated into DCE studies and suggest how this might be achieved. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Validation of the Vanderbilt Holistic Face Processing Test.
Wang, Chao-Chih; Ross, David A; Gauthier, Isabel; Richler, Jennifer J
2016-01-01
The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the same construct as the composite task, which is a group-based measure at the center of the large literature on holistic face processing. In Experiment 1, we found a significant correlation between holistic processing measured in the VHPT-F and the composite task. Although this correlation was small, it was comparable to the correlation between holistic processing measured in the composite task with the same faces, but different target parts (top or bottom), which represents a reasonable upper limit for correlations between the composite task and another measure of holistic processing. These results confirm the validity of the VHPT-F by demonstrating shared variance with another measure of holistic processing based on the same operational definition. These results were replicated in Experiment 2, but only when the demographic profile of our sample matched that of Experiment 1.
Prognostics of Power Electronics, Methods and Validation Experiments
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan S.; Celaya, Jose R.; Biswas, Gautam; Goebel, Kai
2012-01-01
Failure of electronic devices is a concern for future electric aircraft, which will see an increase in electronics to drive and control safety-critical equipment throughout the aircraft. As a result, investigation of precursors to failure in electronics and prediction of the remaining life of electronic components are of key importance. DC-DC power converters are power electronics systems typically employed as sourcing elements for avionics equipment. Current research efforts in prognostics for these power systems focus on the identification of failure mechanisms and the development of accelerated aging methodologies and systems to accelerate the aging process of test devices while continuously measuring key electrical and thermal parameters. Preliminary model-based prognostics algorithms have been developed that make use of empirical degradation models and physics-inspired degradation models, with a focus on key components such as electrolytic capacitors and power MOSFETs (metal-oxide-semiconductor field-effect transistors). This paper presents current results on the development of validation methods for prognostics algorithms for power electrolytic capacitors, particularly the use of accelerated aging systems for algorithm validation. Validation of prognostics algorithms presents difficulties in practice due to the lack of run-to-failure experiments in deployed systems. By using accelerated experiments, we circumvent this problem in order to define initial validation activities.
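To make the empirical-degradation idea concrete, here is a rough, generic sketch, not the authors' algorithm, that fits an exponential capacitance-loss model to hypothetical accelerated-aging measurements and extrapolates a remaining-useful-life estimate against an assumed 20% degradation threshold.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical accelerated-aging data: aging time (h) vs. measured capacitance (uF).
t = np.array([0, 50, 100, 150, 200, 250, 300], dtype=float)
cap = np.array([2200, 2150, 2115, 2070, 2040, 2005, 1980], dtype=float)

def degradation_model(t, c0, k):
    """Empirical exponential capacitance-loss model C(t) = c0 * exp(-k * t)."""
    return c0 * np.exp(-k * t)

(c0, k), _ = curve_fit(degradation_model, t, cap, p0=[cap[0], 1e-4])

# End-of-life threshold: 20% capacitance loss (assumed failure criterion).
c_eol = 0.8 * c0
t_eol = np.log(c0 / c_eol) / k           # time at which the fitted model crosses the threshold
rul = t_eol - t[-1]
print(f"fitted c0 = {c0:.0f} uF, k = {k:.2e} 1/h")
print(f"predicted end of life at ~{t_eol:.0f} h of aging; RUL ~{rul:.0f} h beyond last measurement")
```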
ERIC Educational Resources Information Center
Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.
2018-01-01
Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…
Demonstrating Experimenter "Ineptitude" as a Means of Teaching Internal and External Validity
ERIC Educational Resources Information Center
Treadwell, Kimberli R.H.
2008-01-01
Internal and external validity are key concepts in understanding the scientific method and fostering critical thinking. This article describes a class demonstration of a "botched" experiment to teach validity to undergraduates. Psychology students (N = 75) completed assessments at the beginning of the semester, prior to and immediately following…
ERIC Educational Resources Information Center
Acevedo-Gil, Nancy; Solorzano, Daniel G.; Santos, Ryan E.
2014-01-01
This qualitative study examines the experiences of Latinas/os in community college English and math developmental education courses. Critical race theory in education and the theory of validation serve as guiding frameworks. The authors find that institutional agents provide academic validation by emphasizing high expectations, focusing on social…
ERIC Educational Resources Information Center
Acevedo-Gil, Nancy; Santos, Ryan E.; Alonso, LLuliana; Solorzano, Daniel G.
2015-01-01
This qualitative study examines the experiences of Latinas/os in community college English and math developmental education courses. Critical race theory in education and the theory of validation serve as guiding frameworks. The authors find that institutional agents provide academic validation by emphasizing high expectations, focusing on social…
Ganna, Andrea; Lee, Donghwan; Ingelsson, Erik; Pawitan, Yudi
2015-07-01
It is common and advised practice in biomedical research to validate experimental or observational findings in a population different from the one where the findings were initially assessed. This practice increases the generalizability of the results and decreases the likelihood of reporting false-positive findings. Validation becomes critical when dealing with high-throughput experiments, where the large number of tests increases the chance to observe false-positive results. In this article, we review common approaches to determine statistical thresholds for validation and describe the factors influencing the proportion of significant findings from a 'training' sample that are replicated in a 'validation' sample. We refer to this proportion as rediscovery rate (RDR). In high-throughput studies, the RDR is a function of false-positive rate and power in both the training and validation samples. We illustrate the application of the RDR using simulated data and real data examples from metabolomics experiments. We further describe an online tool to calculate the RDR using t-statistics. We foresee two main applications. First, if the validation study has not yet been collected, the RDR can be used to decide the optimal combination between the proportion of findings taken to validation and the size of the validation study. Secondly, if a validation study has already been done, the RDR estimated using the training data can be compared with the observed RDR from the validation data; hence, the success of the validation study can be assessed. © The Author 2014. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
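Under simple independence assumptions, an expected RDR can be computed directly from the per-test error rates and power in the two samples. The sketch below is only an illustrative approximation of that relationship; the numbers and the factorization through the positive predictive value are my simplification, not the authors' estimator or their online tool.

```python
def expected_rdr(pi1, power_train, power_valid, alpha_train=0.05, alpha_valid=0.05):
    """Approximate expected rediscovery rate: among findings significant in the training
    sample, the fraction expected to also reach significance in the validation sample,
    assuming independent samples and a proportion pi1 of truly non-null hypotheses."""
    pi0 = 1 - pi1
    # Positive predictive value in the training sample.
    ppv = (pi1 * power_train) / (pi1 * power_train + pi0 * alpha_train)
    # True discoveries replicate with probability power_valid, false ones with alpha_valid.
    return ppv * power_valid + (1 - ppv) * alpha_valid

if __name__ == "__main__":
    # Assumed scenario: 2% of tested features truly associated, well-powered training study.
    print(f"expected RDR = {expected_rdr(pi1=0.02, power_train=0.8, power_valid=0.6):.2f}")
```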
Taylor, Rachel M; Fern, Lorna A; Solanki, Anita; Hooker, Louise; Carluccio, Anna; Pye, Julia; Jeans, David; Frere-Smith, Tom; Gibson, Faith; Barber, Julie; Raine, Rosalind; Stark, Dan; Feltbower, Richard; Pearce, Susie; Whelan, Jeremy S
2015-07-28
Patient experience is increasingly used as an indicator of high quality care in addition to more traditional clinical end-points. Surveys are generally accepted as appropriate methodology to capture patient experience. No validated patient experience surveys exist specifically for adolescents and young adults (AYA) aged 13-24 years at diagnosis with cancer. This paper describes early work undertaken to develop and validate a descriptive patient experience survey for AYA with cancer that encompasses both their cancer experience and age-related issues. We aimed to develop, with young people, an experience survey meaningful and relevant to AYA to be used in a longitudinal cohort study (BRIGHTLIGHT), ensuring high levels of acceptability to maximise study retention. A three-stage approach was employed: Stage 1 involved developing a conceptual framework, conducting literature/Internet searches and establishing content validity of the survey; Stage 2 confirmed the acceptability of methods of administration and consisted of four focus groups involving 11 young people (14-25 years), three parents and two siblings; and Stage 3 established survey comprehension through telephone-administered cognitive interviews with a convenience sample of 23 young people aged 14-24 years. Stage 1: Two hundred and thirty-eight questions were developed from qualitative reports of young people's cancer and treatment-related experience. Stage 2: The focus groups identified three core themes: (i) issues directly affecting young people, e.g. impact of treatment-related fatigue on ability to complete the survey; (ii) issues relevant to the actual survey, e.g. ability to answer questions anonymously; (iii) administration issues, e.g. confusing format in some supporting documents. Stage 3: Cognitive interviews indicated high levels of comprehension requiring minor survey amendments. Collaborating with young people with cancer has enabled a survey to be developed that is not only meaningful to young people but also examines patient experience and outcomes associated with specialist cancer care. Engagement of young people throughout the survey development has ensured the content appropriately reflects their experience and is easily understood. The BRIGHTLIGHT survey was developed for a specific research project but has the potential to be used as a TYA cancer survey to assess patient experience and the care they receive.
The SCALE Verified, Archived Library of Inputs and Data - VALID
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Rearden, Bradley T
The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated with model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional experiments from the IHECSBE, to include experiments from areas beyond criticality safety, such as reactor physics and shielding, and to include application models. In the future, external SCALE users may also obtain qualification under the VALID procedure and be involved in expanding the library. The VALID library provides a pathway for the criticality safety community to leverage modeling and analysis expertise at ORNL.
Anke, Audny; Manskow, Unn Sollid; Friborg, Oddgeir; Røe, Cecilie; Arntzen, Cathrine
2016-11-28
Family members are important for support and care of their close relative after severe traumas, and their experiences are vital health care quality indicators. The objective was to describe the development of the Family Experiences of in-hospital Care Questionnaire for family members of patients with severe Traumatic Brain Injury (FECQ-TBI), and to evaluate its psychometric properties and validity. The design of the study is a Norwegian multicentre study inviting 171 family members. The questionnaire developmental process included a literature review, use of an existing instrument (the parent experience of paediatric care questionnaire), focus group with close family members, as well as expert group judgments. Items asking for family care experiences related to acute wards and rehabilitation were included. Several items of the paediatric care questionnaire were removed or the wording of the items was changed to comply with the present purpose. Questions covering experiences with the inpatient rehabilitation period, the discharge phase, the family experiences with hospital facilities, the transfer between departments and the economic needs of the family were added. The developed questionnaire was mailed to the participants. Exploratory factor analyses were used to examine scale structure, in addition to screening for data quality, and analyses of internal consistency and validity. The questionnaire was returned by 122 (71%) of family members. Principal component analysis extracted six dimensions (eigenvalues > 1.0): acute organization and information (10 items), rehabilitation organization (13 items), rehabilitation information (6 items), discharge (4 items), hospital facilities-patients (4 items) and hospital facilities-family (2 items). Items related to the acute phase were comparable to items in the two dimensions of rehabilitation: organization and information. All six subscales had high Cronbach's alpha coefficients >0.80. The construct validity was confirmed. The FECQ-TBI assesses important aspects of in-hospital care in the acute and rehabilitation phases, as seen from a family perspective. The psychometric properties and the construct validity of the questionnaire were good, hence supporting the use of the FECQ-TBI to assess quality of care in rehabilitation departments.
Suhr, Anna Catharina; Vogeser, Michael; Grimm, Stefanie H
2016-05-30
For reliable quantitative analysis of endogenous analytes in complex biological samples by isotope dilution LC-MS/MS, the creation of appropriate calibrators is a challenge, since analyte-free authentic material is in general not available. Thus, surrogate matrices are often used to prepare calibrators and controls. However, currently employed validation protocols do not include specific experiments to verify the suitability of a surrogate matrix calibration for quantification of authentic matrix samples. The aim of the study was the development of a novel validation experiment to test whether surrogate matrix based calibrators enable correct quantification of authentic matrix samples. The key element of the novel validation experiment is the inversion of nonlabelled analytes and their stable isotope labelled (SIL) counterparts with respect to their functions, i.e. the SIL compound is the analyte and the nonlabelled substance is employed as the internal standard. As a consequence, both surrogate and authentic matrix are analyte-free regarding SIL analytes, which allows a comparison of both matrices. We called this approach the Isotope Inversion Experiment. As a figure of merit, we defined the accuracy of inverse quality controls in authentic matrix quantified by means of a surrogate matrix calibration curve. As a proof-of-concept application, an LC-MS/MS assay addressing six corticosteroids (cortisol, cortisone, corticosterone, 11-deoxycortisol, 11-deoxycorticosterone, and 17-OH-progesterone) was chosen. The integration of the Isotope Inversion Experiment into the validation protocol for the steroid assay was successfully realized. The accuracy results of the inverse quality controls were altogether very satisfactory. As a consequence, the suitability of a surrogate matrix calibration for quantification of the targeted steroids in human serum as authentic matrix could be successfully demonstrated. The Isotope Inversion Experiment fills a gap in the validation process for LC-MS/MS assays quantifying endogenous analytes. We consider it a valuable and convenient tool to evaluate the correct quantification of authentic matrix samples based on a calibration curve in surrogate matrix. Copyright © 2016 Elsevier B.V. All rights reserved.
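The figure of merit described above, the accuracy of inverse QCs in authentic matrix read off a surrogate-matrix calibration, amounts to simple bookkeeping. The sketch below fits an ordinary least-squares calibration line to invented response ratios and concentrations; it only illustrates the arithmetic, not the published assay or its calibration model.

```python
import numpy as np

# Surrogate-matrix calibrators: nominal SIL-analyte concentrations (ng/mL) and
# measured response ratios (SIL analyte area / nonlabelled internal-standard area).
cal_conc = np.array([1, 5, 10, 50, 100, 250], dtype=float)        # assumed levels
cal_ratio = np.array([0.021, 0.101, 0.198, 1.02, 1.99, 5.05])     # assumed responses

slope, intercept = np.polyfit(cal_ratio, cal_conc, 1)   # concentration as a function of ratio

# Inverse QCs prepared in authentic matrix (e.g. human serum), spiked with the SIL analyte.
qc_nominal = np.array([8.0, 80.0, 200.0])                # assumed nominal concentrations
qc_ratio = np.array([0.162, 1.58, 4.01])                 # assumed measured ratios

qc_measured = slope * qc_ratio + intercept
accuracy = 100 * qc_measured / qc_nominal
for nom, acc in zip(qc_nominal, accuracy):
    print(f"inverse QC {nom:6.1f} ng/mL -> accuracy {acc:5.1f}%")
# Accuracies near 100% would support using the surrogate-matrix calibration for
# authentic-matrix samples; large deviations would flag a matrix mismatch.
```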
Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay
2016-04-01
Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
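A greedy version of a Fisher-information D-optimality criterion for choosing which charts to validate can be sketched as follows. This is a generic illustration with simulated predictors, a logistic model, and arbitrary pilot coefficients; it is not the authors' DSCVR algorithm or their specific optimization.

```python
import numpy as np

def greedy_d_optimal(X, beta, n_select):
    """Greedily pick rows of X that maximize log det of the logistic-regression
    Fisher information sum_i w_i x_i x_i^T, with w_i = p_i (1 - p_i)."""
    n, p = X.shape
    probs = 1.0 / (1.0 + np.exp(-X @ beta))        # working probabilities from a pilot model
    w = probs * (1 - probs)
    selected = []
    M = 1e-6 * np.eye(p)                           # small ridge so the determinant is defined
    for _ in range(n_select):
        best_gain, best_i = -np.inf, None
        for i in range(n):
            if i in selected:
                continue
            cand = M + w[i] * np.outer(X[i], X[i])
            _, logdet = np.linalg.slogdet(cand)
            if logdet > best_gain:
                best_gain, best_i = logdet, i
        selected.append(best_i)
        M += w[best_i] * np.outer(X[best_i], X[best_i])
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = np.column_stack([np.ones(500), rng.normal(size=(500, 3))])  # intercept + 3 predictors
    beta_pilot = np.array([-2.0, 0.8, -0.5, 0.3])                   # assumed pilot coefficients
    to_validate = greedy_d_optimal(X, beta_pilot, n_select=25)
    print("chart-review candidates (row indices):", to_validate[:10], "...")
```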
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernard, David; Leconte, Pierre; Destouches, Christophe
2015-07-01
Two recent papers justified a new experimental program to give a new basis for the validation of 238U nuclear data, namely neutron-induced inelastic scattering, and of transport codes at neutron fission energies. The general idea is to perform a neutron transmission experiment through natural uranium material. As shown by Hans Bethe, neutron transmissions measured by dosimetric responses are linked to inelastic cross sections. This paper describes the principle and the results of such an experiment, called EXCALIBUR, performed recently (January and October 2014) at the CALIBAN reactor facility. (authors)
A Framework for Understanding Experiments
2008-06-01
operations. Experiments that emphasize free play and uncertainty in scenarios reflect conditions found in existent operations and satisfy external validity Requirement 4, the ability to relate results. Conversely, experiments emphasizing similar conditions with diminished free play across multiple
Lievaart, Marien; Franken, Ingmar H A; Hovens, Johannes E
2016-03-01
The most commonly used instrument for measuring anger is the State-Trait Anger Expression Inventory-2 (STAXI-2; Spielberger, 1999). This study further examines the validity of the STAXI-2 and compares anger scores between several clinical and nonclinical samples. Reliability, concurrent, and construct validity were investigated in Dutch undergraduate students (N = 764), a general population sample (N = 1211), and psychiatric outpatients (N = 226). The results support the reliability and validity of the STAXI-2. Concurrent validity was strong, with meaningful correlations between the STAXI-2 scales and anger-related constructs in both clinical and nonclinical samples. Importantly, patients showed higher experience and expression of anger than the general population sample. Additionally, forensic outpatients with addiction problems reported higher Anger Expression-Out than general psychiatric outpatients. Our conclusion is that the STAXI-2 is a suitable instrument to measure both the experience and the expression of anger in both general and clinical populations. © 2016 Wiley Periodicals, Inc.
Wei, Meifen; Russell, Daniel W; Mallinckrodt, Brent; Vogel, David L
2007-04-01
We developed a 12-item, short form of the Experiences in Close Relationship Scale (ECR; Brennan, Clark, & Shaver, 1998) across 6 studies. In Study 1, we examined the reliability and factor structure of the measure. In Studies 2 and 3, we cross-validated the reliability, factor structure, and validity of the short form measure; whereas in Study 4, we examined test-retest reliability over a 1-month period. In Studies 5 and 6, we further assessed the reliability, factor structure, and validity of the short version of the ECR when administered as a stand-alone instrument. Confirmatory factor analyses indicated that 2 factors, labeled Anxiety and Avoidance, provided a good fit to the data after removing the influence of response sets. We found validity to be equivalent for the short and the original versions of the ECR across studies. Finally, the results were comparable when we embedded the short form within the original version of the ECR and when we administered it as a stand-alone measure.
CFD Validation Experiment of a Mach 2.5 Axisymmetric Shock-Wave/Boundary-Layer Interaction
NASA Technical Reports Server (NTRS)
Davis, David O.
2015-01-01
Experimental investigations of specific flow phenomena, e.g., Shock Wave Boundary-Layer Interactions (SWBLI), provide great insight to the flow behavior but often lack the necessary details to be useful as CFD validation experiments. Reasons include: (1) undefined boundary conditions and inconsistent results, (2) undocumented 3D effects (centerline-only measurements), and (3) lack of uncertainty analysis. While there are a number of good subsonic experimental investigations that are sufficiently documented to be considered test cases for CFD and turbulence model validation, the number of supersonic and hypersonic cases is much smaller. This was highlighted by Settles and Dodson's [1] comprehensive review of available supersonic and hypersonic experimental studies. In all, several hundred studies were considered for their database. Of these, over a hundred were subjected to rigorous acceptance criteria. Based on their criteria, only 19 (12 supersonic, 7 hypersonic) were considered of sufficient quality to be used for validation purposes. Aeschliman and Oberkampf [2] recognized the need to develop a specific methodology for experimental studies intended specifically for validation purposes.
1989-07-21
…formulation of physiologically-based pharmacokinetic models. Adult male Sprague-Dawley rats and male beagle dogs will be administered equal doses… experiments in the dog. Physiologically-based pharmacokinetic models will be developed and validated for oral and inhalation exposures to halocarbons… of conducting experiments in dogs. The original physiologic model for the rat will be scaled up to predict halocarbon pharmacokinetics in the dog. The…
Quality Control and Analysis of Microphysical Data Collected in TRMM Aircraft Validation Experiments
NASA Technical Reports Server (NTRS)
Heymsfield, Andrew J.
2004-01-01
This report summarizes our efforts on the funded project 'Quality Control and Analysis of Microphysical Data Collected in TRMM Airborne Validation Experiments', NASA NAG5-9663, Andrew Heymsfield, P. I. We begin this report by summarizing our activities in FY2000-FY2004. We then present some highlights of our work. The last part of the report lists the publications that have resulted from our funding through this grant.
A Validation Framework for the Long Term Preservation of High Energy Physics Data
NASA Astrophysics Data System (ADS)
Ozerov, Dmitri; South, David M.
2014-06-01
The study group on data preservation in high energy physics, DPHEP, is moving to a new collaboration structure, which will focus on the implementation of preservation projects, such as those described in the group's large scale report published in 2012. One such project is the development of a validation framework, which checks the compatibility of evolving computing environments and technologies with the experiments' software for as long as possible, with the aim of substantially extending the lifetime of the analysis software, and hence of the usability of the data. The framework is designed to automatically test and validate the software and data of an experiment against changes and upgrades to the computing environment, as well as changes to the experiment software itself. Technically, this is realised using a framework capable of hosting a number of virtual machine images, built with different configurations of operating systems and the relevant software, including any necessary external dependencies.
Utilization of sounding rockets and balloons in the German Space Programme
NASA Astrophysics Data System (ADS)
Preu, Peter; Friker, Achim; Frings, Wolfgang; Püttmann, Norbert
2005-08-01
Sounding rockets and balloons are important tools of Germany's Space Programme. DLR manages these activities and promotes scientific experiments and validation programmes within (1) Space Science, (2) Earth Observation, (3) Microgravity Research and (4) Re-entry Technologies (SHEFEX). In Space Science the present focus is on atmospheric research. Concerning Earth Observation, balloon-borne measurements play a key role in the validation of atmospheric satellite sounders (ENVISAT). TEXUS and MAXUS sounding rockets are successfully used for short duration microgravity experiments. The Sharp Edge Flight Experiment SHEFEX will deliver data from a hypersonic flight for the validation of a new Thermal Protection System (TPS), wind tunnel testing and numerical analysis of aerothermodynamics. By signing the Revised Esrange and Andøya Special Project (EASP) Agreement 2006-2010 in June 2004, Germany has made an essential contribution to the long-term availability of the Scandinavian ranges for the European science community.
Relations between inductive reasoning and deductive reasoning.
Heit, Evan; Rotello, Caren M
2010-05-01
One of the most important open questions in reasoning research is how inductive reasoning and deductive reasoning are related. In an effort to address this question, we applied methods and concepts from memory research. We used 2 experiments to examine the effects of logical validity and premise-conclusion similarity on evaluation of arguments. Experiment 1 showed 2 dissociations: For a common set of arguments, deduction judgments were more affected by validity, and induction judgments were more affected by similarity. Moreover, Experiment 2 showed that fast deduction judgments were like induction judgments-in terms of being more influenced by similarity and less influenced by validity, compared with slow deduction judgments. These novel results pose challenges for a 1-process account of reasoning and are interpreted in terms of a 2-process account of reasoning, which was implemented as a multidimensional signal detection model and applied to receiver operating characteristic data. PsycINFO Database Record (c) 2010 APA, all rights reserved.
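As a loose illustration of the signal-detection framing mentioned above (not the authors' multidimensional model), the snippet below computes the standard equal-variance sensitivity index d′ from hit and false-alarm rates; the example rates are made up.

```python
from scipy.stats import norm

def dprime(hit_rate, false_alarm_rate):
    """Equal-variance signal-detection sensitivity index d'."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Hypothetical rates: discrimination of valid vs. invalid arguments
# under deduction and under induction instructions.
print(dprime(0.85, 0.30), dprime(0.70, 0.45))
```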
Logical fallacies in animal model research.
Sjoberg, Espen A
2017-02-15
Animal models of human behavioural deficits involve conducting experiments on animals with the hope of gaining new knowledge that can be applied to humans. This paper aims to address risks, biases, and fallacies associated with drawing conclusions when conducting experiments on animals, with focus on animal models of mental illness. Researchers using animal models are susceptible to a fallacy known as false analogy, where inferences based on assumptions of similarities between animals and humans can potentially lead to an incorrect conclusion. There is also a risk of false positive results when evaluating the validity of a putative animal model, particularly if the experiment is not conducted double-blind. It is further argued that animal model experiments are reconstructions of human experiments, and not replications per se, because the animals cannot follow instructions. This leads to an experimental setup that is altered to accommodate the animals, and typically involves a smaller sample size than a human experiment. Researchers on animal models of human behaviour should increase focus on mechanistic validity in order to ensure that the underlying causal mechanisms driving the behaviour are the same, as relying on face validity makes the model susceptible to logical fallacies and a higher risk of Type 1 errors. We discuss measures to reduce bias and risk of making logical fallacies in animal research, and provide a guideline that researchers can follow to increase the rigour of their experiments.
Integrated Disposal Facility FY 2016: ILAW Verification and Validation of the eSTOMP Simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freedman, Vicky L.; Bacon, Diana H.; Fang, Yilin
2016-05-13
This document describes two sets of simulations carried out to further verify and validate the eSTOMP simulator. In this report, a distinction is made between verification and validation, and the focus is on verifying eSTOMP through a series of published benchmarks on cementitious wastes, and validating eSTOMP based on a lysimeter experiment for the glassified waste. These activities are carried out within the context of a scientific view of validation that asserts that models can only be invalidated, and that model validation (and verification) is a subjective assessment.
Learning to recognize rat social behavior: Novel dataset and cross-dataset application.
Lorbach, Malte; Kyriakou, Elisavet I; Poppe, Ronald; van Dam, Elsbeth A; Noldus, Lucas P J J; Veltkamp, Remco C
2018-04-15
Social behavior is an important aspect of rodent models. Automated measuring tools that make use of video analysis and machine learning are an increasingly attractive alternative to manual annotation. Because machine learning-based methods need to be trained, it is important that they are validated using data from different experiment settings. To develop and validate automated measuring tools, there is a need for annotated rodent interaction datasets. Currently, the availability of such datasets is limited to two mouse datasets. We introduce the first, publicly available rat social interaction dataset, RatSI. We demonstrate the practical value of the novel dataset by using it as the training set for a rat interaction recognition method. We show that behavior variations induced by the experiment setting can lead to reduced performance, which illustrates the importance of cross-dataset validation. Consequently, we add a simple adaptation step to our method and improve the recognition performance. Most existing methods are trained and evaluated in one experimental setting, which limits the predictive power of the evaluation to that particular setting. We demonstrate that cross-dataset experiments provide more insight in the performance of classifiers. With our novel, public dataset we encourage the development and validation of automated recognition methods. We are convinced that cross-dataset validation enhances our understanding of rodent interactions and facilitates the development of more sophisticated recognition methods. Combining them with adaptation techniques may enable us to apply automated recognition methods to a variety of animals and experiment settings. Copyright © 2017 Elsevier B.V. All rights reserved.
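A minimal sketch (not the authors' pipeline) of what cross-dataset validation of a behavior classifier looks like: train on one annotated dataset and evaluate on another recorded under a different experimental setting. The classifier choice and all names are illustrative assumptions.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score

def cross_dataset_validation(X_a, y_a, X_b, y_b):
    """Compare within-dataset (cross-validated) accuracy on dataset A with
    cross-dataset accuracy on dataset B."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    within = cross_val_score(clf, X_a, y_a, cv=5).mean()
    across = accuracy_score(y_b, clf.fit(X_a, y_a).predict(X_b))
    # A large within-minus-across gap signals setting-induced behavior variation.
    return within, across
```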
Anning, David W.; Truini, Margot; Flynn, Marilyn E.; Remick, William H.
2007-01-01
Ground-water levels for water year 2006 and their change over time in Detrital, Hualapai, and Sacramento Valley Basins of northwestern Arizona were investigated to improve the understanding of current and past ground-water conditions in these basins. The potentiometric surface for ground water in the Basin-Fill aquifer of each basin is generally parallel to topography. Consequently, ground-water movement is generally from the mountain front toward the basin center and then along the basin axis toward the Colorado River or Lake Mead. Observed water levels in Detrital, Hualapai, and Sacramento Valley Basins have fluctuated during the period of historic water-level records (1943 through 2006). In Detrital Valley Basin, water levels in monitored areas have either remained the same, or have steadily increased as much as 3.5 feet since the 1980s. Similar steady conditions or water-level rises were observed for much of the northern and central parts of Hualapai Valley Basin. During the period of historic record, steady water-level declines as large as 60 feet were found in wells penetrating the Basin-Fill aquifer in areas near Kingman, northwest of Hackberry, and northeast of Dolan Springs within the Hualapai Valley Basin. Within the Sacramento Valley Basin, during the period of historic record, water-level declines as large as 55 feet were observed in wells penetrating the Basin-Fill aquifer in the Kingman and Golden Valley areas; whereas small, steady rises were observed in Yucca and in the Dutch Flat area.
Summary, synthesis, and significance: Chapter 6
Esque, Todd C.; Nussear, Kenneth E.; Inman, Richard D.; Matocq, Marjorie D.; Weisberg, Peter J.; Dilts, Thomas E.; Leitner, Phillip
2013-01-01
The initial habitat suitability model estimates pre‐European suitable habitat of the Mohave ground squirrel (MGS, Xerospermophilus mohavensis) covering 19,023 km2. Impact scenarios predicted that between 10 percent and 16 percent of suitable habitat has been lost to historical human disturbances, and up to an additional 10 percent may be affected by renewable energy development in the near future. These figures are the result of analyses conducted solely on public lands. State and private lands in the region also have pending proposals for renewable energy on 260 km2, and an additional 3,500 km2 may be available for renewable energy. The sum of potential habitat disturbance on public, State, and private lands could equal up to a quarter of historic suitable habitat from pre‐European settlement levels. While the analyses conducted here consider direct impacts from the footprint of renewable energy and associated transmission corridors, there are many indirect sources of environmental disturbance related to renewable energy development (Lovich and Ennen 2011). Some of those potentially important to the MGS include: increased fugitive dust and the release of chemicals such as dust suppressants, insulating fluids, and herbicides throughout the operational life of facilities, auditory interference from the sound and vibrations of turbines, increases in predators and invasive species that further alter system processes, and changes in surface flow of water that also influence vegetation that is important in these habitats. However, there is little research in the broader context of these topics for the Mojave Desert ecosystem, and less, if any, about the MGS.
[Validation of a Japanese version of the Experience in Close Relationship- Relationship Structure].
Komura, Kentaro; Murakami, Tatsuya; Toda, Koji
2016-08-01
The purpose of this study was to translate the Experience in Close Relationship-Relationship Structure (ECR-RS) and evaluate its validity. In Study 1 (N = 982), evidence based on internal structure (factor structure, internal consistency, and correlations among sub-scales) and evidence based on relations to other variables (depression, reassurance seeking, and self-esteem) were confirmed. In Study 2 (N = 563), evidence based on internal structure was reconfirmed, and evidence based on relations to other variables (IWMS, RQ, and ECR-GO) was confirmed. In Study 3 (N = 342), evidence based on internal structure (test-retest reliability) was confirmed. Based on these results, we concluded that the ECR-RS is valid for measuring adult attachment style.
Use of integral experiments in support to the validation of JEFF-3.2 nuclear data evaluation
NASA Astrophysics Data System (ADS)
Leclaire, Nicolas; Cochet, Bertrand; Jinaphanh, Alexis; Haeck, Wim
2017-09-01
For many years now, IRSN has developed its own Monte Carlo continuous-energy capability, which allows testing various nuclear data libraries. To that end, a validation database of 1136 experiments was built from cases used for the validation of the APOLLO2-MORET 5 multigroup route of the CRISTAL V2.0 package. In this paper, the keff obtained for more than 200 benchmarks using the JEFF-3.1.1 and JEFF-3.2 libraries are compared to benchmark keff values, and the main discrepancies are analyzed with respect to the neutron spectrum. Special attention is paid to benchmarks for which the results changed significantly between the two JEFF-3 versions.
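For readers unfamiliar with how such benchmark comparisons are usually summarized, here is a small generic sketch (not IRSN's tooling) that converts calculated and benchmark keff values into reactivity differences expressed in pcm; the example values are fabricated.

```python
import numpy as np

def reactivity_difference_pcm(calculated_keff, benchmark_keff):
    """Delta-rho = 1/E - 1/C = (C - E)/(C*E), expressed in pcm (1e-5)."""
    c = np.asarray(calculated_keff, dtype=float)
    e = np.asarray(benchmark_keff, dtype=float)
    return 1e5 * (c - e) / (c * e)

# Example: three hypothetical benchmarks, all with a benchmark keff of 1.0000
print(reactivity_difference_pcm([1.0012, 0.9987, 1.0003], [1.0, 1.0, 1.0]))
```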
Development and validation of a Chinese music quality rating test.
Cai, Yuexin; Zhao, Fei; Zheng, Yiqing
2013-09-01
The present study aims to develop and validate a Chinese music quality rating test (MQRT). In Experiment 1, 22 music pieces were initially selected and paired as a 'familiar music piece' and 'unfamiliar music piece' based on familiarities amongst the general public in the categories of classical music (6), Chinese folk music (8), and pop music (8). Following the selection criteria, one pair of music pieces from each music category was selected and used for the MQRT in Experiment 2. In Experiment 2, the MQRT was validated using these music pieces in the categories 'Pleasantness', 'Naturalness', 'Fullness', 'Roughness', and 'Sharpness'. Seventy-two adult participants and 30 normal-hearing listeners were recruited in Experiments 1 and 2, respectively. Significant differences between the familiar and unfamiliar music pieces were found in respect of pleasantness rating for folk and pop music pieces as well as in sharpness rating for pop music pieces. The comparison of music category effect on MQRT found significant differences in pleasantness, fullness, and sharpness ratings. The Chinese MQRT developed in the present study is an effective tool for assessing music quality.
Development and validation of the crew-station system-integration research facility
NASA Technical Reports Server (NTRS)
Nedell, B.; Hardy, G.; Lichtenstein, T.; Leong, G.; Thompson, D.
1986-01-01
The various issues associated with the use of integrated flight management systems in aircraft were discussed. To address these issues a fixed base integrated flight research (IFR) simulation of a helicopter was developed to support experiments that contribute to the understanding of design criteria for rotorcraft cockpits incorporating advanced integrated flight management systems. A validation experiment was conducted that demonstrates the main features of the facility and the capability to conduct crew/system integration research.
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.; Shivarama, Ravishankar
2004-01-01
The hybrid particle-finite element method of Fahrenthold and Horban, developed for the simulation of hypervelocity impact problems, has been extended to include new formulations of the particle-element kinematics, additional constitutive models, and an improved numerical implementation. The extended formulation has been validated in three dimensional simulations of published impact experiments. The test cases demonstrate good agreement with experiment, good parallel speedup, and numerical convergence of the simulation results.
The Canadian Experiment for Freeze/Thaw in 2012 or 2013 CanEx-FT12 or FT13
NASA Technical Reports Server (NTRS)
Belair, Stephane; Bernier, Monique; Colliander, Andreas; Jackson, Thomas; McDonald, Kyle; Walker, Anne
2011-01-01
The general objectives of the experiment are pre-launch calibration/validation of Soil Moisture Active-Passive (SMAP) Freeze/Thaw products and retrieval algorithms, and a rehearsal for SMAP post-launch validation. The basis of the radar freeze/thaw measurement is the large shift in dielectric constant and backscatter (dB) between predominantly frozen and thawed conditions. The dielectric constant of liquid water varies with frequency, whereas that of pure ice is constant.
Rover-based visual target tracking validation and mission infusion
NASA Technical Reports Server (NTRS)
Kim, Won S.; Steele, Robert D.; Ansar, Adnan I.; Ali, Khaled; Nesnas, Issa
2005-01-01
The Mars Exploration Rovers (MER'03), Spirit and Opportunity, represent the state of the art in rover operations on Mars. This paper presents validation experiments of different visual tracking algorithms using the rover's navigation camera.
Validation of a wireless modular monitoring system for structures
NASA Astrophysics Data System (ADS)
Lynch, Jerome P.; Law, Kincho H.; Kiremidjian, Anne S.; Carryer, John E.; Kenny, Thomas W.; Partridge, Aaron; Sundararajan, Arvind
2002-06-01
A wireless sensing unit for use in a Wireless Modular Monitoring System (WiMMS) has been designed and constructed. Drawing upon advanced technological developments in the areas of wireless communications, low-power microprocessors and micro-electro mechanical system (MEMS) sensing transducers, the wireless sensing unit represents a high-performance yet low-cost solution to monitoring the short-term and long-term performance of structures. A sophisticated reduced instruction set computer (RISC) microcontroller is placed at the core of the unit to accommodate on-board computations, measurement filtering and data interrogation algorithms. The functionality of the wireless sensing unit is validated through various experiments involving multiple sensing transducers interfaced to the sensing unit. In particular, MEMS-based accelerometers are used as the primary sensing transducer in this study's validation experiments. A five degree of freedom scaled test structure mounted upon a shaking table is employed for system validation.
NASA Technical Reports Server (NTRS)
Carr, Peter C.; Mckissick, Burnell T.
1988-01-01
A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.
The New Millennium Program: Validating Advanced Technologies for Future Space Missions
NASA Technical Reports Server (NTRS)
Minning, Charles P.; Luers, Philip
1999-01-01
This presentation reviews the activities of the New Millennium Program (NMP) in validating advanced technologies for space missions. The focus of these breakthrough technologies is to enable new capabilities to fulfill the science needs, while reducing the costs of future missions. There is a broad spectrum of NMP partners, including government agencies, universities, and private industry. DS-1 was launched on October 24, 1998. Amongst the technologies validated by the NMP on DS-1 are: a Low Power Electronics Experiment, the Power Activation and Switching Module, and Multi-Functional Structures. The first two of these technologies are operational and the data analysis is still ongoing. The third is also operational, and its performance parameters have been verified. The second mission, DS-2, was launched on January 3, 1999. It is expected to impact near Mars' southern polar region on December 3, 1999. The technologies used on this mission awaiting validation are an advanced microcontroller, a power microelectronics unit, an evolved water experiment and soil thermal conductivity experiment, Lithium-Thionyl Chloride batteries, a flexible cable interconnect, an aeroshell/entry system, and a compact telecom system. EO-1, on schedule for launch in December 1999, carries several technologies to be validated. Amongst these are: a Carbon-Carbon Radiator, an X-band Phased Array Antenna, a pulsed plasma thruster, a wideband advanced recorder processor, an atmospheric corrector, lightweight flexible solar arrays, the Advanced Land Imager, and the Hyperion instrument.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Don; Rearden, Bradley T; Reed, Davis Allan
2010-01-01
One of the challenges associated with implementation of burnup credit is the validation of criticality calculations used in the safety evaluation; in particular the availability and use of applicable critical experiment data. The purpose of the validation is to quantify the relationship between reality and calculated results. Validation and determination of bias and bias uncertainty require the identification of sets of critical experiments that are similar to the criticality safety models. A principal challenge for crediting fission products (FP) in a burnup credit safety evaluation is the limited availability of relevant FP critical experiments for bias and bias uncertainty determination. This paper provides an evaluation of the available critical experiments that include FPs, along with bounding, burnup-dependent estimates of FP biases generated by combining energy-dependent sensitivity data for a typical burnup credit application with the nuclear data uncertainty information distributed with SCALE 6. A method for determining separate bias and bias uncertainty values for individual FPs and illustrative results is presented. Finally, a FP bias calculation method based on data adjustment techniques and reactivity sensitivity coefficients calculated with the SCALE sensitivity/uncertainty tools and some typical results is presented. Using the methods described in this paper, the cross-section bias for a representative high-capacity spent fuel cask associated with the ENDF/B-VII nuclear data for the 16 most important stable or near-stable FPs is predicted to be no greater than 2% of the total worth of the 16 FPs, or less than 0.13% Δk/k.
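The bounding bias estimates described above combine energy-dependent sensitivities with nuclear-data covariances; a generic form of that propagation is the "sandwich rule" sketched below. This is not the SCALE implementation, and the array shapes are assumptions.

```python
import numpy as np

def keff_uncertainty_sandwich(sensitivities, covariance):
    """Relative k-eff uncertainty from the sandwich rule sqrt(S^T C S),
    where S is the relative sensitivity vector over nuclide-reaction-energy
    groups and C is the corresponding relative covariance matrix."""
    s = np.asarray(sensitivities, dtype=float)
    return float(np.sqrt(s @ covariance @ s))
```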
Probing eukaryotic cell mechanics via mesoscopic simulations
NASA Astrophysics Data System (ADS)
Pivkin, Igor V.; Lykov, Kirill; Nematbakhsh, Yasaman; Shang, Menglin; Lim, Chwee Teck
2017-11-01
We developed a new mesoscopic particle-based eukaryotic cell model which takes into account the cell membrane, cytoskeleton, and nucleus. Breast epithelial cells were used in our studies. To estimate the viscoelastic properties of cells and to calibrate the computational model, we performed micropipette aspiration experiments. The model was then validated using data from microfluidic experiments. Using the validated model, we probed contributions of sub-cellular components to whole-cell mechanics in micropipette aspiration and microfluidics experiments. We believe that the new model will allow us to study in silico numerous problems in the context of cell biomechanics in flows in complex domains, such as capillary networks and microfluidic devices.
Evans, Rand B
2017-01-01
Beginning in 1 9a0, a major thread of research was added to E. B. Titchener's Cornell laboratory: the synthetic experiment. Titchener and his graduate students used introspective analysis to reduce a perception, a complex experience, into its simple sensory constituents. To test the validity of that analysis, stimulus patterns were selected to reproduce the patterns of sensations found in the introspective analyses. If the original perception could be reconstructed in this way, then the analysis was considered validated. This article reviews the development of the synthetic method in E. B. Titchener's laboratory at Cornell University and examines its impact on psychological research.
ERIC Educational Resources Information Center
St. Clair, Travis; Hallberg, Kelly; Cook, Thomas D.
2016-01-01
We explore the conditions under which short, comparative interrupted time-series (CITS) designs represent valid alternatives to randomized experiments in educational evaluations. To do so, we conduct three within-study comparisons, each of which uses a unique data set to test the validity of the CITS design by comparing its causal estimates to…
ERIC Educational Resources Information Center
Chen, Jin; Lin, Jianghao; Li, Xinguang
2015-01-01
This article aims to find out the validity of rhythm measurements to capture the rhythmic features of Chinese English. Besides, the reliability of the valid rhythm measurements applied in automatically scoring the English rhythm proficiency of Chinese EFL learners is also explored. Thus, two experiments were carried out. First, thirty students of…
Methodology and issues of integral experiments selection for nuclear data validation
NASA Astrophysics Data System (ADS)
Tatiana, Ivanova; Ivanov, Evgeny; Hill, Ian
2017-09-01
Nuclear data validation involves a large suite of Integral Experiments (IEs) for criticality, reactor physics and dosimetry applications. [1] Often benchmarks are taken from international Handbooks. [2, 3] Depending on the application, IEs have different degrees of usefulness in validation, and usually the use of a single benchmark is not advised; indeed, it may lead to erroneous interpretation and results. [1] This work aims at quantifying the importance of benchmarks used in application dependent cross section validation. The approach is based on well-known General Linear Least Squared Method (GLLSM) extended to establish biases and uncertainties for given cross sections (within a given energy interval). The statistical treatment results in a vector of weighting factors for the integral benchmarks. These factors characterize the value added by a benchmark for nuclear data validation for the given application. The methodology is illustrated by one example, selecting benchmarks for 239Pu cross section validation. The studies were performed in the framework of Subgroup 39 (Methods and approaches to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data files) established at the Working Party on International Nuclear Data Evaluation Cooperation (WPEC) of the Nuclear Science Committee under the Nuclear Energy Agency (NEA/OECD).
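A bare-bones sketch of the GLLSM machinery referred to above (not the Subgroup 39 codes): given a sensitivity matrix, a prior cross-section covariance, and a benchmark covariance, it returns the data adjustment, the posterior covariance, and the gain matrix, whose entries indicate how strongly each benchmark pulls on each parameter. All symbols and shapes are generic assumptions.

```python
import numpy as np

def glls_adjustment(S, M, V, residual):
    """One generalized linear least-squares step.
    S        -- (n_benchmarks, n_parameters) relative sensitivity matrix
    M        -- prior relative covariance of the cross-section parameters
    V        -- covariance of the benchmark (E - C)/C residuals
    residual -- vector of (E - C)/C values
    """
    G = M @ S.T @ np.linalg.inv(S @ M @ S.T + V)   # gain matrix
    delta = G @ residual                           # relative parameter adjustments
    M_post = M - G @ S @ M                         # posterior covariance
    return delta, M_post, G
```

Benchmark weighting factors of the kind described in the abstract can be read off the gain matrix: benchmarks whose columns contribute little to the adjustment of a given cross section carry little weight for that validation.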
Wood, Lisa; Burke, Eilish; Byrne, Rory; Enache, Gabriela; Morrison, Anthony P
2016-10-01
Stigma is a significant difficulty for people who experience psychosis. To date, there have been no outcome measures developed to examine stigma exclusively in people with psychosis. The aim of this study was to develop and validate a semi-structured interview measure of stigma (SIMS) in psychosis. The SIMS is an eleven-item measure of stigma developed in consultation with service users who have experienced psychosis. 79 participants with experience of psychosis were recruited for the purposes of this study. They were administered the SIMS alongside a battery of other relevant outcome measures to examine reliability and validity. A one-factor solution was identified for the SIMS which encompassed all ten rateable items. The measure met all reliability and validity criteria and illustrated good internal consistency, inter-rater reliability, test-retest reliability, criterion validity, construct validity, and sensitivity to change, and had no floor or ceiling effects. The SIMS is a reliable and valid measure of stigma in psychosis. It may be more engaging and acceptable than other stigma measures due to its semi-structured interview format. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
Sperandio, Naiara; Morais, Dayane de Castro; Priore, Silvia Eloiza
2018-02-01
The scope of this systematic review was to compare the food insecurity scales validated and used in the countries of Latin America and the Caribbean, and to analyze the methods used in the validation studies. A search was conducted in the Lilacs, SciELO, and Medline electronic databases. The publications were pre-selected by titles and abstracts, and subsequently by a full reading. Of the 16,325 studies reviewed, 14 were selected. Twelve validated scales were identified for the following countries: Venezuela, Brazil, Colombia, Bolivia, Ecuador, Costa Rica, Mexico, Haiti, the Dominican Republic, Argentina, and Guatemala. In addition, there is the Latin American and Caribbean scale, whose scope is regional. The scales differed in the reference standard used, the number of questions, and the diagnosis of insecurity. The methods used by the studies for internal validation were the calculation of Cronbach's alpha and the Rasch model; for external validation, the authors calculated association and/or correlation with socioeconomic and food consumption variables. The successful experience of Latin America and the Caribbean in the development of national and regional scales can be an example for other countries that do not have this important indicator capable of measuring the phenomenon of food insecurity.
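Several of the internal-validation studies cited in this collection report Cronbach's alpha; for reference, a minimal implementation of that coefficient is sketched below (the item-score matrix in the example is fabricated).

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Example with fabricated responses to a 3-item scale
print(cronbach_alpha([[1, 2, 2], [3, 3, 4], [2, 2, 3], [4, 4, 4]]))
```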
Replicating the Z iron opacity experiments on the NIF
NASA Astrophysics Data System (ADS)
Perry, T. S.; Heeter, R. F.; Opachich, Y. P.; Ross, P. W.; Kline, J. L.; Flippo, K. A.; Sherrill, M. E.; Dodd, E. S.; DeVolder, B. G.; Cardenas, T.; Archuleta, T. N.; Craxton, R. S.; Zhang, R.; McKenty, P. W.; Garcia, E. M.; Huffman, E. J.; King, J. A.; Ahmed, M. F.; Emig, J. A.; Ayers, S. L.; Barrios, M. A.; May, M. J.; Schneider, M. B.; Liedahl, D. A.; Wilson, B. G.; Urbatsch, T. J.; Iglesias, C. A.; Bailey, J. E.; Rochau, G. A.
2017-06-01
X-ray opacity is a crucial factor of all radiation-hydrodynamics calculations, yet it is one of the least validated of the material properties in the simulation codes. Recent opacity experiments at the Sandia Z-machine have shown up to factors of two discrepancies between theory and experiment, casting doubt on the validity of the opacity models. Therefore, a new experimental opacity platform is being developed on the National Ignition Facility (NIF) not only to verify the Z-machine experimental results but also to extend the experiments to other temperatures and densities. The first experiments will be directed towards measuring the opacity of iron at a temperature of ∼160 eV and an electron density of ∼7 × 10²¹ cm⁻³. Preliminary experiments on NIF have demonstrated the ability to create a sufficiently bright point backlighter using an imploding plastic capsule and also a hohlraum that can heat the opacity sample to the desired conditions. The first of these iron opacity experiments is expected to be performed in 2017.
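As background on how such measurements are typically reduced (this is the textbook relation, not the NIF analysis chain), the opacity follows from the measured spectral transmission through a sample of known areal density via T = exp(−κ·ρL). The areal density and transmission values in the example are hypothetical.

```python
import numpy as np

def opacity_from_transmission(transmission, areal_density_g_cm2):
    """Opacity kappa [cm^2/g] from T = exp(-kappa * rho * L), given rho*L in g/cm^2."""
    t = np.asarray(transmission, dtype=float)
    return -np.log(t) / areal_density_g_cm2

# Example: hypothetical transmissions through a 3e-4 g/cm^2 iron sample
print(opacity_from_transmission([0.6, 0.4, 0.25], 3e-4))
```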
Kühn, Simone; Fernyhough, Charles; Alderson-Day, Benjamin; Hurlburt, Russell T.
2014-01-01
To provide full accounts of human experience and behavior, research in cognitive neuroscience must be linked to inner experience, but introspective reports of inner experience have often been found to be unreliable. The present case study aimed at providing proof of principle that introspection using one method, descriptive experience sampling (DES), can be reliably integrated with fMRI. A participant was trained in the DES method, followed by nine sessions of sampling within an MRI scanner. During moments where the DES interview revealed ongoing inner speaking, fMRI data reliably showed activation in classic speech processing areas including left inferior frontal gyrus. Further, the fMRI data validated the participant’s DES observations of the experiential distinction between inner speaking and innerly hearing her own voice. These results highlight the precision and validity of the DES method as a technique of exploring inner experience and the utility of combining such methods with fMRI. PMID:25538649
SDG and qualitative trend based model multiple scale validation
NASA Astrophysics Data System (ADS)
Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike
2017-09-01
Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness, are carried out at a single scale, and depend on human experience. A multiple-scale validation method based on SDG (Signed Directed Graph) models and qualitative trends is proposed. First, the SDG model is built and qualitative trends are added to the model. Then, complete testing scenarios are produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the approach is demonstrated by validating a reactor model.
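A minimal sketch, under assumed data structures, of the comparison step described above: simulated outputs are reduced to qualitative trends and matched against the trends predicted by positive inference on the SDG model. All names are illustrative and this is not the authors' implementation.

```python
def qualitative_trend(series, tol=1e-6):
    """Reduce a numeric time series to a qualitative trend: +1, -1, or 0."""
    delta = series[-1] - series[0]
    if abs(delta) < tol:
        return 0
    return 1 if delta > 0 else -1

def scenario_agreement(predicted_trends, simulated_series):
    """Fraction of testing scenarios whose simulated trend matches the
    trend predicted from the SDG model."""
    hits = sum(qualitative_trend(sim) == pred
               for pred, sim in zip(predicted_trends, simulated_series))
    return hits / len(predicted_trends)

# Example with fabricated scenarios
print(scenario_agreement([1, -1, 0], [[0.0, 0.5, 1.2], [2.0, 1.1], [1.0, 1.0]]))
```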
Hagen, Inger Hilde; Svindseth, Marit Følsvik; Nesset, Erik; Orner, Roderick; Iversen, Valentina Cabral
2018-03-27
For parents, the experience of having their newborns admitted to a neonatal intensive care unit (NICU) can be extremely distressing. The subsequent risk of post-incident adjustment difficulties is increased for parents, siblings, and affected families. Patient and next-of-kin satisfaction surveys provide key indicators of quality in health care. Methodically constructed and validated survey tools are in short supply, and parents' experiences of care in Neonatal Intensive Care Units are under-researched. This paper reports a validation of the Neonatal Satisfaction Survey (NSS-8) in six Norwegian NICUs. Parents' survey returns were collected using the Neonatal Satisfaction Survey (NSS-13). Data quality and psychometric properties were systematically assessed using exploratory factor analysis and tests of internal consistency, reliability, construct, convergent and discriminant validity. Each set of hospital returns was subjected to an attrition analysis before an overall satisfaction rate was calculated. The survey sample of 568 parents represents 45% of the total eligible population for the period of the study. Missing data accounted for 1.1% of all returns. Attrition analysis shows congruence between the sample and the total population. Exploratory factor analysis identified eight factors of concern to parents: "Care and Treatment", "Doctors", "Visits", "Information", "Facilities", "Parents' Anxiety", "Discharge" and "Sibling Visits". All factors showed satisfactory internal consistency and good reliability (Cronbach's alpha ranged from 0.70 to 0.94; for the whole 51-item scale, α = 0.95). Convergent validity, assessed using Spearman's rank correlation between the eight factors and a question measuring overall satisfaction, was significant for all factors. Discriminant validity was established for all factors. Overall satisfaction rates ranged from 86 to 90%, while for each of the eight factors measures of satisfaction varied between 64 and 86%. The NSS-8 questionnaire is a valid and reliable scale for measuring parents' assessment of quality of care in the NICU. Statistical analysis confirms the instrument's capacity to gauge parents' experiences of the NICU. Further research is indicated to validate the survey questionnaire in other Nordic countries and beyond.
Kim, Su Yeong; Hou, Yang; Shen, Yishan; Zhang, Minyu
2016-01-01
Objectives: Language brokering occurs frequently in immigrant families and can have significant implications for the well-being of family members involved. The present study aimed to develop and validate a measure that can be used to assess multiple dimensions of subjective language brokering experiences among Mexican American adolescents. Methods: Participants were 557 adolescent language brokers (54.2% female; mean age at Wave 1 = 12.96 years, SD = 0.94) in Mexican American families. Results: Using exploratory and confirmatory factor analyses, we were able to identify seven reliable subscales of language brokering: linguistic benefits, socio-emotional benefits, efficacy, positive parent-child relationships, parental dependence, negative feelings, and centrality. Tests of factorial invariance show that these subscales demonstrate, at minimum, partial strict invariance across time and across experiences of translating for mothers and fathers, and in most cases, also across adolescent gender, nativity, and translation frequency. Thus, in general, the means of the subscales and the relations among the subscales with other variables can be compared across these different occasions and groups. Tests of criterion-related validity demonstrated that these subscales correlated, concurrently and longitudinally, with parental warmth and hostility, parent-child alienation, adolescent family obligation, depressive symptoms, resilience, and life meaning. Conclusions: This reliable and valid subjective language brokering experiences scale will be helpful for gaining a better understanding of adolescents' language brokering experiences with their mothers and fathers, and how such experiences may influence their development. PMID:27362872
Reliability, validity and sensitivity of a computerized visual analog scale measuring state anxiety.
Abend, Rany; Dan, Orrie; Maoz, Keren; Raz, Sivan; Bar-Haim, Yair
2014-12-01
Assessment of state anxiety is frequently required in clinical and research settings, but its measurement using standard multi-item inventories entails practical challenges. Such inventories are increasingly complemented by paper-and-pencil, single-item visual analog scales measuring state anxiety (VAS-A), which allow rapid assessment of current anxiety states. Computerized versions of the VAS-A offer additional advantages, including facilitated and accurate data collection and analysis, and applicability to computer-based protocols. Here, we establish the psychometric properties of a computerized VAS-A. Experiment 1 assessed the reliability, convergent validity, and discriminant validity of the computerized VAS-A in a non-selected sample. Experiment 2 assessed its sensitivity to increases in state anxiety following social stress induction, in participants with high levels of social anxiety. Experiment 1 demonstrated the computerized VAS-A's test-retest reliability (r = .44, p < .001); convergent validity with the State-Trait Anxiety Inventory's state subscale (STAI-State; r = .60, p < .001); and discriminant validity, as indicated by significantly lower correlations between the VAS-A and different psychological measures relative to the correlation between the VAS-A and STAI-State. Experiment 2 demonstrated the VAS-A's sensitivity to changes in state anxiety via a significant pre- to during-stressor rise in VAS-A scores (F(1,48) = 25.13, p < .001). Limitations include the set-order administration of measures, the absence of a clinically anxious population, and gender-unbalanced samples. The adequate psychometric characteristics, combined with simple and rapid administration, make the computerized VAS-A a valuable self-rating tool for state anxiety. It may prove particularly useful for clinical and research settings where multi-item inventories are less applicable, including computer-based treatment and assessment protocols. The VAS-A is freely available: http://people.socsci.tau.ac.il/mu/anxietytrauma/visual-analog-scale/. Copyright © 2014 Elsevier Ltd. All rights reserved.
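The convergent- and discriminant-validity figures above are plain correlations; a small generic sketch (not the authors' analysis scripts) of how they could be computed is given below, with every variable name hypothetical.

```python
from scipy.stats import pearsonr

def convergent_and_discriminant(vas_a, stai_state, unrelated_measures):
    """Convergent validity: r(VAS-A, STAI-State).
    Discriminant validity: the VAS-A should correlate more weakly with
    conceptually unrelated measures."""
    r_convergent, _ = pearsonr(vas_a, stai_state)
    r_discriminant = [pearsonr(vas_a, m)[0] for m in unrelated_measures]
    return r_convergent, r_discriminant
```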
Radiative transfer model validations during the First ISLSCP Field Experiment
NASA Technical Reports Server (NTRS)
Frouin, Robert; Breon, Francois-Marie; Gautier, Catherine
1990-01-01
Two simple radiative transfer models, the 5S model based on Tanre et al. (1985, 1986) and the wide-band model of Morcrette (1984), are validated by comparing their outputs with results obtained during the First ISLSCP Field Experiment from concomitant radiosonde, aerosol turbidity, and radiation measurements and sky photographs. Results showed that the 5S model overestimated the short-wave irradiance by 13.2 W/sq m, whereas the Morcrette model underestimated the long-wave irradiance by 7.4 W/sq m.
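The 13.2 W/sq m and 7.4 W/sq m figures are mean model-minus-observation biases; a trivial sketch of that kind of comparison (illustrative only, with fabricated irradiance samples) is:

```python
import numpy as np

def bias_and_rmse(modeled, observed):
    """Mean bias (model - observation) and RMSE, e.g. in W/m^2 for irradiance."""
    d = np.asarray(modeled, dtype=float) - np.asarray(observed, dtype=float)
    return d.mean(), np.sqrt((d ** 2).mean())

# Example with fabricated irradiance samples
print(bias_and_rmse([452.0, 610.3, 505.1], [440.2, 598.8, 490.0]))
```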
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchat, Thomas K.; Jernigan, Dann A.
A set of experiments and test data are outlined in this report that provide radiation intensity data for the validation of models for the radiative transfer equation. The experiments were performed with lightly sooting liquid hydrocarbon fuels that yielded fully turbulent fires (2 m diameter). In addition, supplemental measurements of air flow and temperature, fuel temperature and burn rate, flame surface emissive power, wall heat, and flame height and width provide a complete set of boundary condition data needed for validation of models used in fire simulations.
Corno, Giulia; Molinari, Guadalupe; Baños, Rosa Maria
2016-01-01
The aim of this study is to explore the psychometric properties of an affect scale, the Scale of Positive and Negative Experience (SPANE), in an Italian-speaking population. The results demonstrate that the Italian version of the SPANE has psychometric properties similar to those shown by the original and previous versions, and it presents satisfactory reliability and factorial validity. The results of the Confirmatory Factor Analysis support the expected two-factor structure, positive and negative feelings, which characterized the previous versions. As expected, measures of negative affect, anxiety, negative future expectancies, and depression correlated positively with the negative experiences SPANE subscale, and negatively with the positive experiences SPANE subscale. The use of this instrument provides clinically useful information about a person's overall emotional experience and is an indicator of well-being. Although further studies are required to confirm the psychometric characteristics of the scale, the SPANE Italian version is expected to improve theoretical and empirical research on the well-being of the Italian population.
Waghorn, Geoff; Chant, David; King, Robert
2005-04-01
To develop a self-report scale of subjective experiences of illness perceived to impact on employment functioning, as an alternative to a diagnostic perspective, for anticipating the vocational assistance needs of people with schizophrenia or schizoaffective disorders. A repeated-measures pilot study (n1 = 26, n2 = 21) of community residents with schizophrenia identified a set of work-related subjective experiences perceived to impact on employment functioning. Items with the best psychometric properties were applied in a 12-month longitudinal survey of urban residents with schizophrenia or schizoaffective disorder (n1 = 104, n2 = 94, n3 = 94). Construct validity, factor structure, responsiveness, internal consistency, stability, and criterion validity investigations produced favourable results. Work-related subjective experiences provide information about the intersection of the person, the disorder, and expectations of employment functioning, which suggests new opportunities for vocational professionals to explore and discuss individual assistance needs. Further psychometric investigations of test-retest reliability, discriminant and predictive validity, and research applications in supported employment and vocational rehabilitation are recommended. Subject to adequate psychometric properties, the new measure promises to facilitate exploring: individuals' specific subjective experiences; how each is perceived to contribute to employment restrictions; and the corresponding implications for specialized treatment, vocational interventions, and workplace accommodations.
Design and Validation of an Augmented Reality System for Laparoscopic Surgery in a Real Environment
López-Mir, F.; Naranjo, V.; Fuertes, J. J.; Alcañiz, M.; Bueno, J.; Pareja, E.
2013-01-01
Purpose. This work presents the protocol carried out in the development and validation of an augmented reality system which was installed in an operating theatre to help surgeons with trocar placement during laparoscopic surgery. The purpose of this validation is to demonstrate the improvements that this system can provide to the field of medicine, particularly surgery. Method. Two experiments that were noninvasive for both the patient and the surgeon were designed. In one of these experiments the augmented reality system was used; the other was the control experiment, in which the system was not used. The type of operation selected for all cases was a cholecystectomy, due to its low degree of complexity and complications before, during, and after the surgery. The technique used in the placement of trocars was the French technique, but the results can be extrapolated to any other technique and operation. Results and Conclusion. Four clinicians and ninety-six measurements obtained from twenty-four patients (randomly assigned to each experiment) were involved in these experiments. The final results show an improvement in accuracy and variability of 33% and 63%, respectively, in comparison to traditional methods, demonstrating that the use of an augmented reality system offers advantages for trocar placement in laparoscopic surgery. PMID:24236293
A Preliminary Assessment of the SURF Reactive Burn Model Implementation in FLAG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Carl Edward; McCombe, Ryan Patrick; Carver, Kyle
Properly validated and calibrated reactive burn models (RBM) can be useful engineering tools for assessing high explosive performance and safety. Experiments with high explosives are expensive, so inexpensive RBM calculations are increasingly relied on for predictive analysis of performance and safety. This report discusses the validation of Menikoff and Shaw's SURF reactive burn model, which has recently been implemented in the FLAG code. The LANL Gapstick experiment is discussed, as is its utility in reactive burn model validation. Data obtained from pRad for the LT-63 series are also presented, along with FLAG simulations using SURF for both PBX 9501 and PBX 9502. Calibration parameters for both explosives are presented.
Piest, Benjamin A; Isberner, Maj-Britt; Richter, Tobias
2018-04-05
Previous research has shown that the validation of incoming information during language comprehension is a fast, efficient, and routine process (epistemic monitoring). Previous research on this topic has focused on epistemic monitoring during reading. The present study extended this research by investigating epistemic monitoring of audiovisual information. In a Stroop-like paradigm, participants (Experiment 1: adults; Experiment 2: 10-year-old children) responded to the probe words correct and false by keypress after the presentation of auditory assertions that could be either true or false with respect to concurrently presented pictures. Results provide evidence for routine validation of audiovisual information. Moreover, the results show a stronger and more stable interference effect for children compared with adults.
Large-scale experimental technology with remote sensing in land surface hydrology and meteorology
NASA Technical Reports Server (NTRS)
Brutsaert, Wilfried; Schmugge, Thomas J.; Sellers, Piers J.; Hall, Forrest G.
1988-01-01
Two field experiments to study atmospheric and land surface processes and their interactions are summarized. The Hydrologic-Atmospheric Pilot Experiment, which tested techniques for measuring evaporation, soil moisture storage, and runoff at scales of about 100 km, was conducted over a 100 X 100 km area in France from mid-1985 to early 1987. The first International Satellite Land Surface Climatology Program field experiment was conducted in 1987 to develop and use relationships between current satellite measurements and hydrologic, climatic, and biophysical variables at the earth's surface and to validate these relationships with ground truth. This experiment also validated surface parameterization methods for simulation models that describe surface processes from the scale of vegetation leaves up to scales appropriate to satellite remote sensing.
Validation of a unique concept for a low-cost, lightweight space-deployable antenna structure
NASA Technical Reports Server (NTRS)
Freeland, R. E.; Bilyeu, G. D.; Veal, G. R.
1993-01-01
An experiment conducted in the framework of a NASA In-Space Technology Experiments Program, based on a concept of inflatable deployable structures, is described. The concept utilizes very low inflation pressure to maintain the required geometry on orbit; gravity-induced deflection of the structure precludes any meaningful ground-based demonstration of functional performance. The experiment is aimed at validating and characterizing the mechanical functional performance of a 14-m-diameter inflatable deployable reflector antenna structure in the orbital operational environment. Results of the experiment are expected to significantly reduce the user risk associated with using large space-deployable antennas by demonstrating the functional performance of a concept that meets the criteria for low-cost, lightweight, and highly reliable space-deployable structures.
NASA Technical Reports Server (NTRS)
LaBel, Kenneth A.; Barth, Janet L.; Brewer, Dana A.
2003-01-01
This viewgraph presentation provides information on flight validation experiments for technologies to determine solar effects. The experiments are intended to demonstrate tolerance to a solar variant environment. The technologies tested are microelectronics, photonics, materials, and sensors.
Validity of clinical color vision tests for air traffic control specialists.
DOT National Transportation Integrated Search
1992-10-01
An experiment on the relationship between aeromedical color vision screening test performance and performance on color-dependent tasks of Air Traffic Control Specialists was replicated to expand the data base supporting the job-related validity of th...
Bröder, A
2000-09-01
The boundedly rational "Take-The-Best" heuristic (TTB) was proposed by G. Gigerenzer, U. Hoffrage, and H. Kleinbölting (1991) as a model of fast and frugal probabilistic inferences. Although the simple lexicographic rule proved to be successful in computer simulations, direct empirical demonstrations of its adequacy as a psychological model are lacking because of several methodological problems. In 4 experiments with a total of 210 participants, this question was addressed. Whereas Experiment 1 showed that TTB is not valid as a universal hypothesis about probabilistic inferences, up to 28% of participants in Experiment 2 and 53% of participants in Experiment 3 were classified as TTB users. Experiment 4 revealed that investment costs for information seem to be a relevant factor leading participants to switch to a noncompensatory TTB strategy. The observed individual differences in strategy use imply the recommendation of an idiographic approach to decision-making research.
Preliminary Results from the GPS-Reflections Mediterranean Balloon Experiment (GPSR-MEBEX)
NASA Technical Reports Server (NTRS)
Garrison, James L.; Ruffini, Giulio; Rius, Antonio; Cardellach, Estelle; Masters, Dallas; Armatys, Michael; Zavorotny, Valery; Bauer, Frank H. (Technical Monitor)
2000-01-01
An experiment to collect bistatically scattered GPS signals from a balloon at 37 km altitude has been conducted. This experiment represented the highest altitude to date at which such signals were successfully recorded. The flight took place in August 1999 over the Mediterranean Sea, between a launch in Sicily and recovery near Nerpio, a town in the Sierra de Segura, Albacete province, Spain. Results from this experiment are presented, showing the waveform shape as compared to theoretical calculations. These results will be used to validate analytical models which form the basis of wind vector retrieval algorithms. These algorithms are already being validated from aircraft altitudes, but may be applied to data from future spaceborne GPS receivers. Surface wind data from radiosondes were used for comparison. This experiment was a cooperative project between NASA, the IEEC in Barcelona, and the University of Colorado at Boulder.
ERIC Educational Resources Information Center
Adelman, Clifford
Information is presented on the use of transcripts to validate institutional mission, proposing that transcript archives can serve as grounds against which the validity of an institution's claimed mission with respect to its primary beneficiaries can be measured. This is done with a focus on the community college. The National Longitudinal Study…
Fast Sampling Gas Chromatography (GC) System for Speciation in a Shock Tube
2016-10-31
Results are reported for cold shock experiments, and both techniques capture similar ethylene decomposition rates for temperature-dependent shock experiments.
Barnhardt, Terrence M; Geraci, Lisa
2008-01-01
Two experiments--one employing a perceptual implicit memory test and the other a conceptual implicit memory test--investigated the validity of posttest questionnaires for determining the incidence of awareness in implicit memory tests. In both experiments, a condition in which none of the studied words could be used as test responses (i.e., the none-studied condition) was compared with a standard implicit test condition. Results showed that reports of awareness on the posttest questionnaire were much less frequent in the none-studied condition than in the standard condition. This was especially true after deep processing at study. In both experiments, 83% of the participants in the none-studied condition stated they were unaware even though there were strong demands for claiming awareness. Although there was a small bias in the questionnaire (i.e., 17% of the participants in the none-studied condition stated they were aware), overall, there was strong support for the validity of awareness questionnaires.
Digital Fly-By-Wire Flight Control Validation Experience
NASA Technical Reports Server (NTRS)
Szalai, K. J.; Jarvis, C. R.; Krier, G. E.; Megna, V. A.; Brock, L. D.; Odonnell, R. N.
1978-01-01
The experience gained in digital fly-by-wire technology through a flight test program being conducted by the NASA Dryden Flight Research Center in an F-8C aircraft is described. The system requirements are outlined, along with the requirements for flight qualification. The system is described, including the hardware components, the aircraft installation, and the system operation. The flight qualification experience is emphasized. The qualification process included the theoretical validation of the basic design, laboratory testing of the hardware and software elements, systems level testing, and flight testing. The most productive testing was performed on an iron bird aircraft, which used the actual electronic and hydraulic hardware and a simulation of the F-8 characteristics to provide the flight environment. The iron bird was used for sensor and system redundancy management testing, failure modes and effects testing, and stress testing in many cases with the pilot in the loop. The flight test program confirmed the quality of the validation process by achieving 50 flights without a known undetected failure and with no false alarms.
EAQUATE: An International Experiment for Hyper-Spectral Atmospheric Sounding Validation
NASA Technical Reports Server (NTRS)
Taylor, J. P.; Smith, W.; Cuomo, V.; Larar, A.; Zhou, D.; Serio, C.; Maestri, T.; Rizzi, R.; Newman, S.; Antonelli, P.;
2008-01-01
The international experiment called EAQUATE (European AQUA Thermodynamic Experiment) was held in September 2004 in Italy and the United Kingdom to demonstrate certain ground-based and airborne systems useful for validating hyperspectral satellite sounding observations. A range of flights over land and marine surfaces were conducted to coincide with overpasses of the AIRS instrument on the EOS Aqua platform. Direct radiance evaluation of AIRS using NAST-I and SHIS has shown excellent agreement. Comparisons of level 2 retrievals of temperature and water vapor from AIRS and NAST-I validated against high quality lidar and drop sonde data show that the 1K/1km and 10%/1km requirements for temperature and water vapor (respectively) are generally being met. The EAQUATE campaign has proven the need for synergistic measurements from a range of observing systems for satellite cal/val and has paved the way for future cal/val activities in support of IASI on the European Metop platform and CrIS on the US NPP/NPOESS platform.
Lankreijer, K; D'Hooghe, T; Sermeus, W; van Asseldonk, F P M; Repping, S; Dancet, E A F
2016-08-01
Can a valid and reliable questionnaire be developed to assess patients' experiences with all of the characteristics of hormonal fertility medication valued by them? The FertiMed questionnaire is a valid and reliable tool that assesses patients' experiences with all medication characteristics valued by them and that can be used for all hormonal fertility medications, irrespective of their route of administration. Hormonal fertility medications cause emotional strain and differ in their dosage regime and route of administration, although they often have comparable effectiveness. Medication experiences of former patients would be informative for medication choices. A recent literature review showed that there is no trustworthy tool to compare patients' experiences of medications with differing routes of administration, regarding all medication characteristics which patients value. The items of the new FertiMed questionnaire were generated by literature review and 23 patient interviews. In 2013, 411 IVF-patients were asked to retrospectively complete the FertiMed questionnaire to assess 1 out of the 8 different medications used for ovarian stimulation, induction of pituitary quiescence, ovulation triggering or luteal support. In total, 276 patients (on average 35 per medication) from 2 university fertility clinics (Belgium, the Netherlands) completed the FertiMed questionnaire (67% response rate). The FertiMed questionnaire questioned whether items were valued by patients and whether these items were experienced while using the assessed medication. Hence, the final outcome 'Experiences with Valued Aspects Scores' (EVAS) combined importance and experience ratings. The content and face validity, reliability, feasibility and discriminative potential of the FertiMed questionnaire were tested and changes were made accordingly. Patient interviews defined 51 items relevant to seven medication characteristics previously proved to be important to patients. Item analysis deleted 10 items. The combined results from the reliability and content validity analysis identified 10 characteristics instead of 7. The final FertiMed questionnaire was valid (Adapted Goodness of Fit Index = 0.95) and all but one characteristic ('ease of use: disturbance') could be assessed reliably (Cronbach's α > 0.60). The EVAS per characteristic differed between the medications inducing pituitary quiescence (P = 0.001). As all eight medications prescribed in the recruiting clinics were questioned, sample sizes per medication were rather small for presenting EVAS per medication and for testing the discriminative potential of the FertiMed questionnaire. The FertiMed questionnaire can be used for all hormonal fertility medications to assess in a valid and reliable way whether patients experience what they value regarding 10 medication characteristics (e.g. side effects and ease of use). Future randomized controlled trials (RCT) comparing medications could include the FertiMed questionnaire as a Patient Reported Experience Measure (PREM). Insights from these RCTs could be used to develop evidence-based decision aids aiming to facilitate shared physician-patient medication choices. Funding was received from the University of Leuven and Amsterdam University Medical Centre. E.A.F.D. holds a postdoctoral fellowship of the Research Foundation of Flanders. T.D. was appointed Vice-President and Head Global Medical Affairs Fertility at Merck (Darmstadt, Germany) on 1 October 2015. 
The reported project was initiated and finished before this date. The other authors had no conflicts of interest to declare. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Fransson, Boel A; Chen, Chi-Ya; Noyes, Julie A; Ragle, Claude A
2016-11-01
To determine the construct and concurrent validity of instrument motion metrics for laparoscopic skills assessment in virtual reality and augmented reality simulators. Evaluation study. Veterinary students (novice, n = 14) and veterinarians (experienced, n = 11) with no or variable laparoscopic experience. Participants' minimally invasive surgery (MIS) experience was determined by hospital records of MIS procedures performed in the Teaching Hospital. Basic laparoscopic skills were assessed by 5 tasks using a physical box trainer. Each participant completed 2 tasks for assessments in each type of simulator (virtual reality: bowel handling and cutting; augmented reality: object positioning and a pericardial window model). Motion metrics such as instrument path length, angle or drift, and economy of motion of each simulator were recorded. None of the motion metrics in a virtual reality simulator showed correlation with experience or with the basic laparoscopic skills score. All metrics in augmented reality were significantly correlated with experience (time, instrument path, and economy of movement), except for the hand dominance metric. The basic laparoscopic skills score was correlated with all performance metrics in augmented reality. The augmented reality motion metrics differed between American College of Veterinary Surgeons diplomates and residents, whereas basic laparoscopic skills score and virtual reality metrics did not. Our results provide construct validity and concurrent validity for motion analysis metrics for an augmented reality system, whereas a virtual reality system was validated only for the time score. © Copyright 2016 by The American College of Veterinary Surgeons.
Yanos, Philip T; Vayshenker, Beth; DeLuca, Joseph S; O'Connor, Lauren K
2017-10-01
Mental health professionals who work with people with serious mental illnesses are believed to experience associative stigma. Evidence suggests that associative stigma could play an important role in the erosion of empathy among professionals; however, no validated measure of the construct currently exists. This study examined the convergent and discriminant validity and factor structure of a new scale assessing the associative stigma experiences of clinicians working with people with serious mental illnesses. A total of 473 clinicians were recruited from professional associations in the United States and participated in an online study. Participants completed the Clinician Associative Stigma Scale (CASS) and measures of burnout, quality of care, expectations about recovery, and self-efficacy. Associative stigma experiences were commonly endorsed; eight items on the 18-item scale were endorsed as being experienced "sometimes" or "often" by over 50% of the sample. The new measure demonstrated a logical four-factor structure: "negative stereotypes about professional effectiveness," "discomfort with disclosure," "negative stereotypes about people with mental illness," and "stereotypes about professionals' mental health." The measure had good internal consistency. It was significantly related to measures of burnout and quality of care, but it was not related to measures of self-efficacy or expectations about recovery. Findings suggest that the CASS is internally consistent and shows evidence of convergent validity and that associative stigma is commonly experienced by mental health professionals who work with people with serious mental illnesses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, Amy B.; Stauffer, Philip H.; Reed, Donald T.
The primary objective of the experimental effort described here is to aid in understanding the complex nature of liquid, vapor, and solid transport occurring around heated nuclear waste in bedded salt. In order to gain confidence in the predictive capability of numerical models, experimental validation must be performed to ensure that (a) hydrological and physiochemical parameters and (b) processes are correctly simulated. The experiments proposed here are designed to study aspects of the system that have not been satisfactorily quantified in prior work. In addition to exploring the complex coupled physical processes in support of numerical model validation, lessons learned from these experiments will facilitate preparations for larger-scale experiments that may utilize similar instrumentation techniques.
Simulating Small-Scale Experiments of In-Tunnel Airblast Using STUN and ALE3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neuscamman, Stephanie; Glenn, Lewis; Schebler, Gregory
2011-09-12
This report details continuing validation efforts for the Sphere and Tunnel (STUN) and ALE3D codes. STUN has been validated previously for blast propagation through tunnels using several sets of experimental data with varying charge sizes and tunnel configurations, including the MARVEL nuclear driven shock tube experiment (Glenn, 2001). The DHS-funded STUNTool version is compared to experimental data and the LLNL ALE3D hydrocode. In this particular study, we compare the performance of the STUN and ALE3D codes in modeling an in-tunnel airblast to experimental results obtained by Lunderman and Ohrt in a series of small-scale high explosive experiments (1997).
Jutte, Lisa S; Long, Blaine C; Knight, Kenneth L
2010-01-01
Thermocouples' leads are often too short, necessitating the use of an extension lead. To determine if temperature measures were influenced by extension-lead use or lead temperature changes. Descriptive laboratory study. Laboratory. Experiment 1: 10 IT-21 thermocouples and 5 extension leads. Experiment 2: 5 IT-21 and PT-6 thermocouples. In experiment 1, temperature data were collected on 10 IT-21 thermocouples in a stable water bath with and without extension leads. In experiment 2, temperature data were collected on 5 IT-21 and PT-6 thermocouples in a stable water bath before, during, and after ice-pack application to extension leads. In experiment 1, extension leads did not influence IT-21 validity (P = .45) or reliability (P = .10). In experiment 2, postapplication IT-21 temperatures were greater than preapplication and application measures (P < .05). Extension leads had no influence on temperature measures. Ice application to leads may increase measurement error.
A novel cell culture model as a tool for forensic biology experiments and validations.
Feine, Ilan; Shpitzen, Moshe; Roth, Jonathan; Gafny, Ron
2016-09-01
To improve and advance DNA forensic casework investigation outcomes, extensive field and laboratory experiments are carried out in a broad range of relevant branches, such as touch and trace DNA, secondary DNA transfer and contamination confinement. Moreover, the development of new forensic tools, for example new sampling appliances, by commercial companies requires ongoing validation and assessment by forensic scientists. A frequent challenge in these kinds of experiments and validations is the lack of a stable, reproducible and flexible biological reference material. As a possible solution, we present here a cell culture model based on skin-derived human dermal fibroblasts. Cultured cells were harvested, quantified and dried on glass slides. These slides were used in adhesive tape-lifting experiments and tests of DNA crossover confinement by UV irradiation. The use of this model enabled a simple and concise comparison between four adhesive tapes, as well as a straightforward demonstration of the effect of UV irradiation intensities on DNA quantity and degradation. In conclusion, we believe this model has great potential to serve as an efficient research tool in forensic biology. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
CFD Validation Studies for Hypersonic Flow Prediction
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2001-01-01
A series of experiments to measure pressure and heating for code validation involving hypersonic, laminar, separated flows was conducted at the Calspan-University at Buffalo Research Center (CUBRC) in the Large Energy National Shock (LENS) tunnel. The experimental data serves as a focus for a code validation session but are not available to the authors until the conclusion of this session. The first set of experiments considered here involve Mach 9.5 and Mach 11.3 N2 flow over a hollow cylinder-flare with 30 degree flare angle at several Reynolds numbers sustaining laminar, separated flow. Truncated and extended flare configurations are considered. The second set of experiments, at similar conditions, involves flow over a sharp, double cone with fore-cone angle of 25 degrees and aft-cone angle of 55 degrees. Both sets of experiments involve 30 degree compressions. Location of the separation point in the numerical simulation is extremely sensitive to the level of grid refinement in the numerical predictions. The numerical simulations also show a significant influence of Reynolds number on extent of separation. Flow unsteadiness was easily introduced into the double cone simulations using aggressive relaxation parameters that normally promote convergence.
Sánchez, Renata; Rodríguez, Omaira; Rosciano, José; Vegas, Liumariel; Bond, Verónica; Rojas, Aram; Sanchez-Ismayel, Alexis
2016-09-01
The objective of this study is to determine the ability of the GEARS scale (Global Evaluative Assessment of Robotic Skills) to differentiate individuals with different levels of experience in robotic surgery, as a fundamental validation. This is a cross-sectional study that included three groups of individuals with different levels of experience in robotic surgery (expert, intermediate, novice); their performance was assessed with GEARS by two reviewers. The difference between groups was determined by the Mann-Whitney test, and the consistency between the reviewers was studied by Kendall's W coefficient. The agreement between the reviewers of the GEARS scale was 0.96. The score was 29.8 ± 0.4 for experts, 24 ± 2.8 for intermediates and 16 ± 3 for novices, with a statistically significant difference between all of them (p < 0.05). All parameters from the scale allow discriminating between different levels of experience, with the exception of the depth perception item. We conclude that the GEARS scale was able to differentiate between individuals with different levels of experience in robotic surgery and, therefore, is a validated and useful tool to evaluate surgeons in training.
Miller, Kathleen E.; Dermen, Kurt H.; Lucke, Joseph F.
2017-01-01
BACKGROUND Young adult use of alcohol mixed with energy drinks (AmEDs) has been linked with elevated risks for a constellation of problem behaviors. These risks may be conditioned by expectancies regarding the effects of caffeine in conjunction with alcohol consumption. The aim of this study was to describe the construction and psychometric evaluation of the Intoxication-Related AmED Expectancies Scale (AmED_EXPI), 15 self-report items measuring beliefs about how the experience of AmED intoxication differs from the experience of noncaffeinated alcohol (NCA) intoxication. METHODS Scale development and testing were conducted using data from a U.S. national sample of 3,105 adolescents and emerging adults aged 13–25. Exploratory and confirmatory factor analyses were conducted to evaluate the factor structure and establish factor invariance across gender, age, and prior experience with AmED use. Cross-sectional and longitudinal analyses examining correlates of AmED use were used to assess construct and predictive validity. RESULTS In confirmatory factor analyses, fit indices for the hypothesized four-factor structure (i.e., Intoxication Management [IM], Alertness [AL], Sociability [SO], and Jitters [JT]) revealed a moderately good fit to the data. Together, these factors accounted for 75.3% of total variance. The factor structure was stable across male/female, teen/young adult, and AmED experience/no experience subgroups. The resultant unit-weighted subscales showed strong internal consistency and satisfactory convergent validity. Baseline scores on the IM, SO, and JT subscales predicted changes in AmED use over a subsequent three-month period. CONCLUSIONS The AmED_EXPI appears to be a reliable and valid tool for measuring expectancies about the effects of caffeine during alcohol intoxication. PMID:28421613
Reactivity loss validation of high burn-up PWR fuels with pile-oscillation experiments in MINERVE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leconte, P.; Vaglio-Gaudard, C.; Eschbach, R.
2012-07-01
The ALIX experimental program relies on the experimental validation of the spent fuel inventory, by chemical analysis of samples irradiated in a PWR between 5 and 7 cycles, and also on the experimental validation of the spent fuel reactivity loss with burn-up, obtained by pile-oscillation measurements in the MINERVE reactor. These latter experiments provide an overall validation of both the fuel inventory and of the nuclear data responsible for the reactivity loss. This program also offers unique experimental data for fuels with a burn-up reaching 85 GWd/t, as spent fuels in French PWRs have never exceeded 70 GWd/t up to now. The analysis of these experiments is done in two steps with the APOLLO2/SHEM-MOC/CEA2005v4 package. In the first one, the fuel inventory of each sample is obtained by assembly calculations. The calculation route consists in the self-shielding of cross sections on the 281 energy group SHEM mesh, followed by the flux calculation by the Method Of Characteristics in a 2D-exact heterogeneous geometry of the assembly, and finally a depletion calculation by an iterative resolution of the Bateman equations. In the second step, the fuel inventory is used in the analysis of pile-oscillation experiments in which the reactivity of the ALIX spent fuel samples is compared to the reactivity of fresh fuel samples. The comparison between Experiment and Calculation shows satisfactory results with the JEFF3.1.1 library, which predicts the reactivity loss within 2% for burn-up of approximately 75 GWd/t and within 4% for burn-up of approximately 85 GWd/t. (authors)
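The depletion step mentioned here amounts to integrating the coupled Bateman equations over each irradiation interval. The toy Python sketch below illustrates that idea for an invented three-nuclide chain under a constant one-group flux; the nuclides, cross sections, flux and solver choice are all assumptions for illustration and bear no relation to the APOLLO2/SHEM-MOC calculation route itself.

```python
import numpy as np
from scipy.linalg import expm

# Toy depletion problem: chain A -> B -> C under a constant one-group flux.
# dN/dt = M N, where M collects flux*cross-section and decay terms.
phi = 3.0e14        # neutron flux (n/cm^2/s), invented
sigma_a = 50e-24    # absorption cross section of A (cm^2), invented
sigma_b = 20e-24    # absorption cross section of B (cm^2), invented
lam_b = 1.0e-9      # decay constant of B (1/s), invented

M = np.array([
    [-sigma_a * phi,                      0.0, 0.0],
    [ sigma_a * phi, -(sigma_b * phi + lam_b), 0.0],
    [           0.0,    sigma_b * phi + lam_b, 0.0],
])

N0 = np.array([1.0e21, 0.0, 0.0])   # initial number densities (atoms/cm^3)
t = 3.15e7                          # one year of irradiation (s)
N = expm(M * t) @ N0                # matrix-exponential solution of the linear system
print(dict(zip("ABC", N)))
```

Production codes iterate this kind of step with re-evaluated fluxes and cross sections over many burn-up intervals; the sketch only shows the core linear-system solve.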
Roberts, Tawna L; Kester, Kristi N; Hertle, Richard W
2018-04-01
This study presents test-retest reliability of optotype visual acuity (OVA) across 60° of horizontal gaze position in patients with infantile nystagmus syndrome (INS). Also, the validity of the metric gaze-dependent functional vision space (GDFVS) is shown in patients with INS. In experiment 1, OVA was measured twice in seven horizontal gaze positions from 30° left to right in 10° steps in 20 subjects with INS and 14 without INS. Test-retest reliability was assessed using intraclass correlation coefficient (ICC) in each gaze. OVA area under the curve (AUC) was calculated with horizontal eye position on the x-axis and logMAR visual acuity on the y-axis, and then converted to GDFVS. In experiment 2, validity of GDFVS was determined over 40° horizontal gaze by applying the 95% limits of agreement from experiment 1 to pre- and post-treatment GDFVS values from 85 patients with INS. In experiment 1, test-retest reliability for OVA was high (ICC ≥ 0.88) as the difference in test-retest was on average less than 0.1 logMAR in each gaze position. In experiment 2, as a group, INS subjects had a significant increase (P < 0.001) in the size of their GDFVS that exceeded the 95% limits of agreement found during test-retest. OVA is a reliable measure in INS patients across 60° of horizontal gaze position. GDFVS is a valid clinical method to be used to quantify OVA as a function of eye position in INS patients. This method captures the dynamic nature of OVA in INS patients and may be a valuable measure to quantify visual function in patients with INS, particularly in quantifying change as part of clinical studies.
Miller, Kathleen E; Dermen, Kurt H; Lucke, Joseph F
2017-06-01
Young adult use of alcohol mixed with energy drinks (AmEDs) has been linked with elevated risks of a constellation of problem behaviors. These risks may be conditioned by expectancies regarding the effects of caffeine in conjunction with alcohol consumption. The aim of this study was to describe the construction and psychometric evaluation of the Intoxication-Related AmED Expectancies Scale (AmED_EXPI), 15 self-report items measuring beliefs about how the experience of AmED intoxication differs from the experience of noncaffeinated alcohol (NCA) intoxication. Scale development and testing were conducted using data from a U.S. national sample of 3,105 adolescents and emerging adults aged 13 to 25. Exploratory and confirmatory factor analyses were conducted to evaluate the factor structure and establish factor invariance across gender, age, and prior experience with AmED use. Cross-sectional and longitudinal analyses examining correlates of AmED use were used to assess construct and predictive validity. In confirmatory factor analyses, fit indices for the hypothesized 4-factor structure (i.e., Intoxication Management [IM], Alertness [AL], Sociability [SO], and Jitters [JT]) revealed a moderately good fit to the data. Together, these factors accounted for 75.3% of total variance. The factor structure was stable across male/female, teen/young adult, and AmED experience/no experience subgroups. The resultant unit-weighted subscales showed strong internal consistency and satisfactory convergent validity. Baseline scores on the IM, SO, and JT subscales predicted changes in AmED use over a subsequent 3-month period. The AmED_EXPI appears to be a reliable and valid tool for measuring expectancies about the effects of caffeine during alcohol intoxication. Copyright © 2017 by the Research Society on Alcoholism.
Crespo-Maraver, Mariacruz; Doval, Eduardo; Fernández-Castro, Jordi; Giménez-Salinas, Jordi; Prat, Gemma; Bonet, Pere
2018-04-04
To adapt and to validate the Experience of Caregiving Inventory (ECI) in a Spanish population, providing empirical evidence of its internal consistency, internal structure and validity. Psychometric validation of the adapted version of the ECI. One hundred and seventy-two caregivers (69.2% women), mean age 57.51 years (range: 21-89) participated. Demographic and clinical data, standardized measures (ECI, suffering scale of SCL-90-R, Zarit burden scale) were used. The two scales of negative evaluation of the ECI most related to serious mental disorders (disruptive behaviours [DB] and negative symptoms [NS]) and the two scales of positive appreciation (positive personal experiences [PPE], and good aspects of the relationship [GAR]) were analyzed. Exploratory structural equation modelling was used to analyze the internal structure. The relationship between the ECI scales and the SCL-90-R and Zarit scores was also studied. The four-factor model presented a good fit. Cronbach's alpha (DB: 0.873; NS: 0.825; PPE: 0.720; GAR: 0.578) showed a higher homogeneity in the negative scales. The SCL-90-R scores correlated with the negative ECI scales, and none of the ECI scales correlated with the Zarit scale. The Spanish version of the ECI can be considered a valid, reliable, understandable and feasible self-report measure for its administration in the health and community context. Copyright © 2018 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
Marable, Brian R; Maurissen, Jacques P J
2004-01-01
Neurotoxicity regulatory guidelines mandate that automated test systems be validated using chemicals. However, in some cases, chemicals may not necessarily be needed to prove test system validity. To examine this issue, two independent experiments were conducted to validate an automated auditory startle response (ASR) system. In Experiment 1, we used adult (PND 63) and weanling (PND 22) Sprague-Dawley rats (10/sex/dose) to determine the effect of either d-amphetamine (4.0 or 8.0 mg/kg) or clonidine (0.4 or 0.8 mg/kg) on the ASR peak amplitude (ASR PA). The startle response of each rat to a short burst of white noise (120 dB SPL) was recorded over 50 consecutive trials. The ASR PA was significantly decreased (by clonidine) and increased (by d-amphetamine) compared to controls in PND 63 rats. In PND 22 rats, the response to clonidine was similar to adults, but d-amphetamine effects were not significant. Neither drug affected the rate of the decrease in ASR PA over time (habituation). In Experiment 2, PND 31 Sprague-Dawley rats (8/sex) were presented with 150 trials consisting of either white noise bursts of variable intensity (70-120 dB SPL in 10 dB increments, presented in random order) or null (0 dB SPL) trials. Statistically significant sex- and intensity-dependent differences were detected in the ASR PA. These results suggest that in some cases, parametric modulation may be an alternative to using chemicals for test system validation.
Performance-based comparison of neonatal intubation training outcomes: simulator and live animal.
Andreatta, Pamela B; Klotz, Jessica J; Dooley-Hash, Suzanne L; Hauptman, Joe G; Biddinger, Bea; House, Joseph B
2015-02-01
The purpose of this article was to establish psychometric validity evidence for competency assessment instruments and to evaluate the impact of 2 forms of training on the abilities of clinicians to perform neonatal intubation. To inform the development of assessment instruments, we conducted comprehensive task analyses including each performance domain associated with neonatal intubation. Expert review confirmed content validity. Construct validity was established using the instruments to differentiate between the intubation performance abilities of practitioners (N = 294) with variable experience (novice through expert). Training outcomes were evaluated using a quasi-experimental design to evaluate performance differences between 294 subjects randomly assigned to 1 of 2 training groups. The training intervention followed American Heart Association Pediatric Advanced Life Support and Neonatal Resuscitation Program protocols with hands-on practice using either (1) live feline or (2) simulated feline models. Performance assessment data were captured before and directly following the training. All data were analyzed using analysis of variance with repeated measures and statistical significance set at P < .05. Content validity, reliability, and consistency evidence were established for each assessment instrument. Construct validity for each assessment instrument was supported by significantly higher scores for subjects with greater levels of experience, as compared with those with less experience (P = .000). Overall, subjects performed significantly better in each assessment domain, following the training intervention (P = .000). After controlling for experience level, there were no significant differences among the cognitive, performance, and self-efficacy outcomes between clinicians trained with live animal model or simulator model. Analysis of retention scores showed that simulator trained subjects had significantly higher performance scores after 18 weeks (P = .01) and 52 weeks (P = .001) and cognitive scores after 52 weeks (P = .001). The results of this study demonstrate the feasibility of using valid, reliable assessment instruments to assess clinician competency and self-efficacy in the performance of neonatal intubation. We demonstrated the relative equivalency of live animal and simulation-based models as tools to support acquisition of neonatal intubation skills. Retention of performance abilities was greater for subjects trained using the simulator, likely because it afforded greater opportunity for repeated practice. Outcomes in each assessment area were influenced by the previous intubation experience of participants. This suggests that neonatal intubation training programs could be tailored to the level of provider experience to make efficient use of time and educational resources. Future research focusing on the uses of assessment in the applied clinical environment, as well as identification of optimal training cycles for performance retention, is merited.
Chan, Wallace Chi Ho; Tin, Agnes Fong; Wong, Karen Lok Yi
2015-07-01
Palliative care professionals often are confronted by death in their work. They may experience challenges to self, such as aroused emotions and queries about life's meaningfulness. Assessing their level of "self-competence" in coping with these challenges is crucial in understanding their needs in death work. This study aims to develop and validate the Self-Competence in Death Work Scale (SC-DWS). Development of this scale involved three steps: (1) items generated from a qualitative study with palliative care professionals, (2) expert panel review, and (3) pilot test. Analysis was conducted to explore the factor structure and examine the reliability and validity of the scale. Helping professionals involved in death work were recruited to complete questionnaires comprising the SC-DWS and other scales. A total of 151 participants were recruited. Both one-factor and two-factor structures were found. Emotional and existential coping were identified as subscales in the two-factor structure. Correlations of the whole scale and subscales with measures of death attitudes, meaning in life, burnout and depression provided evidence for the construct validity. Discriminative validity was supported by showing participants with bereavement experience and longer experience in the profession and death work possessed a significantly higher level of self-competence. Reliability analyses showed that the entire scale and subscales were internally consistent. The SC-DWS was found to be valid and reliable. This scale may facilitate helping professionals' understanding of their self-competence in death work, so appropriate professional support and training may be obtained. Copyright © 2015 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Starr, David
2000-01-01
The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and Earth's Radiant Energy System (CERES), Multi-Angle Imaging Spectroradiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS) and Measurements of Pollution in the Troposphere (MOPITT). In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities, including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low-altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra mission will be described with emphasis on derived geophysical parameters of most relevance to the atmospheric radiation community.
Eriksen, Anne Haahr Mellergaard; Andersen, Rikke Fredslund; Pallisgaard, Niels; Sørensen, Flemming Brandt; Jakobsen, Anders; Hansen, Torben Frøstrup
2016-01-01
MicroRNAs (miRNAs) play important roles in regulating biological processes at the post-transcriptional level. Deregulation of miRNAs has been observed in cancer, and miRNAs are being investigated as potential biomarkers regarding diagnosis, prognosis and prediction in cancer management. Real-time quantitative polymerase chain reaction (RT-qPCR) is commonly used when measuring miRNA expression. Appropriate normalisation of RT-qPCR data is important to ensure reliable results. The aim of the present study was to identify stably expressed miRNAs applicable as normaliser candidates in future studies of miRNA expression in rectal cancer. We performed high-throughput miRNA profiling (OpenArray®) on ten pairs of laser micro-dissected rectal cancer tissue and adjacent stroma. A global mean expression normalisation strategy was applied to identify the most stably expressed miRNAs for subsequent validation. In the first validation experiment, a panel of miRNAs were analysed on 25 pairs of micro-dissected rectal cancer tissue and adjacent stroma. Subsequently, the same miRNAs were analysed in 28 pairs of rectal cancer tissue and normal rectal mucosa. From the miRNA profiling experiment, miR-645, miR-193a-5p, miR-27a and let-7g were identified as stably expressed, both in malignant and stromal tissue. In addition, NormFinder confirmed high expression stability for the four miRNAs. In the RT-qPCR based validation experiments, no significant difference between tumour and stroma/normal rectal mucosa was detected for the mean of the normaliser candidates miR-27a, miR-193a-5p and let-7g (first validation P = 0.801, second validation P = 0.321). MiR-645 was excluded from the data analysis, because it was undetected in 35 of 50 samples (first validation) and in 24 of 56 samples (second validation), respectively. A significant difference in expression level of RNU6B was observed between tumour and adjacent stroma (first validation), and between tumour and normal rectal mucosa (second validation). We recommend the mean expression of miR-27a, miR-193a-5p and let-7g as the normalisation factor when performing miRNA expression analyses by RT-qPCR on rectal cancer tissue.
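Using the mean expression of several reference miRNAs as the normalisation factor typically means subtracting the mean reference Cq from the target Cq and converting with 2^-dCq. The short Python sketch below illustrates that arithmetic with invented Cq values and an assumed ~100% PCR efficiency; it is not the authors' analysis pipeline.

```python
import statistics

def relative_expression(target_cq, reference_cqs):
    """Relative quantity 2**(-dCq) of a target miRNA, normalised to the
    mean Cq of the reference miRNAs (assumes ~100% PCR efficiency)."""
    norm_factor = statistics.mean(reference_cqs)   # mean of miR-27a, miR-193a-5p, let-7g
    d_cq = target_cq - norm_factor
    return 2 ** (-d_cq)

# Invented Cq values for one tumour sample.
reference_cqs = {'miR-27a': 24.1, 'miR-193a-5p': 27.8, 'let-7g': 23.5}
target_cq = 29.4   # hypothetical target miRNA
print(relative_expression(target_cq, list(reference_cqs.values())))
```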
ERIC Educational Resources Information Center
Romine, William L.; Schaffer, Dane L.; Barrow, Lloyd
2015-01-01
We describe the development and validation of a three-tiered diagnostic test of the water cycle (DTWC) and use it to evaluate the impact of prior learning experiences on undergraduates' misconceptions. While most approaches to instrument validation take a positivist perspective using singular criteria such as reliability and fit with a measurement…
Increased importance of the documented development stage in process validation.
Mohammed-Ziegler, Ildikó; Medgyesi, Ildikó
2012-07-01
Current trends in pharmaceutical quality assurance shifted when the US Food and Drug Administration (FDA) published its new guideline on process validation in 2011. This guidance introduced the lifecycle approach to process validation. In this short communication, some typical changes in the practice of API production are addressed in the light of inspection experience. Some details are compared with the European regulations.
Validating the AIRS Version 5 CO Retrieval with DACOM In Situ Measurements During INTEX-A and -B
NASA Technical Reports Server (NTRS)
McMillan, Wallace W.; Evans, Keith D.; Barnet, Christopher D.; Maddy, Eric; Sachse, Glen W.; Diskin, Glenn S.
2011-01-01
Herein we provide a description of the atmospheric infrared sounder (AIRS) version 5 (v5) carbon monoxide (CO) retrieval algorithm and its validation with the DACOM in situ measurements during the INTEX-A and -B campaigns. All standard and support products in the AIRS v5 CO retrieval algorithm are documented. Building on prior publications, we describe the convolution of in situ measurements with the AIRS v5 CO averaging kernel and first-guess CO profile as required for proper validation. Validation is accomplished through comparison of AIRS CO retrievals with convolved in situ CO profiles acquired during the NASA Intercontinental Chemical Transport Experiments (INTEX) in 2004 and 2006. From 143 profiles in the northern mid-latitudes during these two experiments, we find AIRS v5 CO retrievals are biased high by 6%-10% between 900 and 300 hPa with a root-mean-square error of 8%-12%. No significant differences were found between validation using spiral profiles coincident with AIRS overpasses and in-transit profiles under the satellite track but up to 13 h off in time. Similarly, no significant differences in validation results were found for ocean versus land, day versus night, or with respect to retrieved cloud top pressure or cloud fraction.
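The "convolution" of in situ measurements with the averaging kernel referred to above is conventionally written, for a linearised comparison, as

```latex
\hat{x}_{\mathrm{in\ situ}} \;=\; x_a \;+\; \mathbf{A}\,\bigl(x_{\mathrm{in\ situ}} - x_a\bigr)
```

where x_a is the retrieval first guess, A the averaging-kernel matrix, and the hatted quantity the smoothed in situ profile that is directly comparable to the AIRS retrieval. For AIRS CO this operation is often applied to the logarithm of the mixing ratio; that detail is an assumption here rather than something stated in the abstract.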
Symbolic control of visual attention: semantic constraints on the spatial distribution of attention.
Gibson, Bradley S; Scheutz, Matthias; Davis, Gregory J
2009-02-01
Humans routinely use spatial language to control the spatial distribution of attention. In so doing, spatial information may be communicated from one individual to another across opposing frames of reference, which in turn can lead to inconsistent mappings between symbols and directions (or locations). These inconsistencies may have important implications for the symbolic control of attention because they can be translated into differences in cue validity, a manipulation that is known to influence the focus of attention. This differential validity hypothesis was tested in Experiment 1 by comparing spatial word cues that were predicted to have high learned spatial validity ("above/below") and low learned spatial validity ("left/right"). Consistent with this prediction, when two measures of selective attention were used, the results indicated that attention was less focused in response to "left/right" cues than in response to "above/below" cues, even when the actual validity of each of the cues was equal. In addition, Experiment 2 predicted that spatial words such as "left/right" would have lower spatial validity than would other directional symbols that specify direction along the horizontal axis, such as "<--/-->" cues. The results were also consistent with this hypothesis. Altogether, the present findings demonstrate important semantic-based constraints on the spatial distribution of attention.
Szerkus, Oliwia; Struck-Lewicka, Wiktoria; Kordalewska, Marta; Bartosińska, Ewa; Bujak, Renata; Borsuk, Agnieszka; Bienert, Agnieszka; Bartkowska-Śniatkowska, Alicja; Warzybok, Justyna; Wiczling, Paweł; Nasal, Antoni; Kaliszan, Roman; Markuszewski, Michał Jan; Siluk, Danuta
2017-02-01
The purpose of this work was to develop and validate a rapid and robust LC-MS/MS method for the determination of dexmedetomidine (DEX) in plasma, suitable for analysis of a large number of samples. A systematic Design of Experiments approach was applied to optimize ESI source parameters and to evaluate method robustness, yielding a rapid, stable and cost-effective assay. The method was validated according to US FDA guidelines. The LLOQ was determined at 5 pg/ml, and the assay was linear over the examined concentration range (5-2500 pg/ml; R² > 0.98). The accuracies and intra- and interday precisions were within 15%. The stability data confirmed reliable behavior of DEX under the tested conditions. Application of the Design of Experiments approach allowed for fast and efficient analytical method development and validation, as well as for reduced usage of the chemicals necessary for regular method optimization. The proposed method was applied to the determination of DEX pharmacokinetics in pediatric patients undergoing long-term sedation in the intensive care unit.
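As a rough illustration of the linearity and accuracy checks quoted above (5-2500 pg/ml range, R² > 0.98, accuracy within 15%), the sketch below fits a weighted calibration line to invented peak-area ratios; it is not the validated method itself, and the 1/x weighting scheme is an assumption.

```python
import numpy as np

# Invented calibration standards (pg/ml) and analyte/internal-standard peak-area ratios.
conc = np.array([5, 25, 100, 500, 1000, 2500], dtype=float)
ratio = np.array([0.011, 0.052, 0.21, 1.04, 2.05, 5.20])

# 1/x-weighted least-squares fit (a common choice for wide bioanalytical ranges).
w = 1.0 / conc
slope, intercept = np.polyfit(conc, ratio, 1, w=np.sqrt(w))

pred = slope * conc + intercept
ss_res = np.sum(w * (ratio - pred) ** 2)
ss_tot = np.sum(w * (ratio - np.average(ratio, weights=w)) ** 2)
r_squared = 1.0 - ss_res / ss_tot

# Back-calculated concentrations and their accuracy relative to nominal.
back_calc = (ratio - intercept) / slope
accuracy_pct = 100.0 * (back_calc - conc) / conc

print(f"R^2 = {r_squared:.4f}")
print("all standards within +/-15%:", bool(np.all(np.abs(accuracy_pct) <= 15.0)))
```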
NASA Technical Reports Server (NTRS)
Emmons, L. K.; Pfister, G. G.; Edwards, D. P.; Gille, J. C.; Sachse, G.; Blake, D.; Wofsy, S.; Gerbig, C.; Matross, D.; Nedelec, P.
2007-01-01
Measurements of carbon monoxide (CO) made as part of three aircraft experiments during the summer of 2004 over North America have been used for the continued validation of the CO retrievals from the Measurements of Pollution in the Troposphere (MOPITT) instrument on board the Terra satellite. Vertical profiles measured during the NASA INTEX-A campaign, designed to be coincident with MOPITT overpasses, as well as measurements made during the COBRA-2004 and MOZAIC experiments, provided valuable validation comparisons. On average, the MOPITT CO retrievals are biased slightly high for these North America locations. While the mean bias differs between the different aircraft experiments (e.g., 7.0 ppbv for MOZAIC to 18.4 ppbv for COBRA at 700 hPa), the standard deviations are quite large, so the results for the three data sets can be considered consistent. On average, it is estimated that MOPITT is 7-14% high at 700 hPa and 0-3% high at 350 hPa. These results are consistent with the validation results for the Carr, Colorado, Harvard Forest, Massachusetts, and Poker Flats, Alaska, aircraft profiles for "phase 2" presented by Emmons et al. (2004) and are generally within the design criteria of 10% accuracy.
Observations with the ROWS instrument during the Grand Banks calibration/validation experiments
NASA Technical Reports Server (NTRS)
Vandemark, D.; Chapron, B.
1994-01-01
As part of a global program to validate the ocean surface sensors on board ERS-1, a joint experiment on the Grand Banks of Newfoundland was carried out in Nov. 1991. The principal objective was to provide a field validation of ERS-1 Synthetic Aperture Radar (SAR) measurement of ocean surface structure. The NASA-P3 aircraft measurements made during this experiment provide independent measurements of the ocean surface along the validation swath. The Radar Ocean Wave Spectrometer (ROWS) is a radar sensor designed to measure the direction of the long-wave components using spectral analysis of the tilt-induced radar backscatter modulation. This technique differs greatly from SAR and thus provides a unique set of measurements for use in evaluating SAR performance. Also, an altimeter channel in the ROWS gives simultaneous information on the surface wave height and radar mean square slope parameter. The sets of geophysical parameters (wind speed, significant wave height, directional spectrum) are used to study the SAR's ability to accurately measure ocean gravity waves. The known distortion imposed on the true directional spectrum by the SAR imaging mechanism is discussed in light of the direct comparisons between ERS-1 SAR, airborne Canadian Center for Remote Sensing (CCRS) SAR, and ROWS spectra and the use of the nonlinear ocean SAR transform.
Individual Differences Methods for Randomized Experiments
ERIC Educational Resources Information Center
Tucker-Drob, Elliot M.
2011-01-01
Experiments allow researchers to randomly vary the key manipulation, the instruments of measurement, and the sequences of the measurements and manipulations across participants. To date, however, the advantages of randomized experiments to manipulate both the aspects of interest and the aspects that threaten internal validity have been primarily…
Style preference survey: a report on the psychometric properties and a cross-validation experiment.
Smith, Sherri L; Ricketts, Todd; McArdle, Rachel A; Chisolm, Theresa H; Alexander, Genevieve; Bratt, Gene
2013-02-01
Several self-report measures exist that target different aspects of outcomes for hearing aid use. Currently, no comprehensive questionnaire specifically assesses factors that may be important for differentiating outcomes pertaining to hearing aid style. The goal of this work was to develop the Style Preference Survey (SPS), a questionnaire aimed at outcomes associated with hearing aid style differences. Two experiments were conducted. After initial item development, Experiment 1 was conducted to refine the items and to determine its psychometric properties. Experiment 2 was designed to cross-validate the findings from the initial experiment. An observational design was used in both experiments. Participants who wore traditional, custom-fitted (TC) or open-canal (OC) style hearing aids from 3 mo to 3 yr completed the initial experiment. One-hundred and eighty-four binaural hearing aid users (120 of whom wore TC hearing aids and 64 of whom wore OC hearing aids) participated. A new sample of TC and OC users (n = 185) participated in the cross-validation experiment. Currently available self-report measures were reviewed to identify items that might differentiate between hearing aid styles, particularly preference for OC versus TC hearing aid styles. A total of 15 items were selected and modified from available self-report measures. An additional 55 items were developed through consensus of six audiologists for the initial version of the SPS. In the first experiment, the initial SPS version was mailed to 550 veterans who met the inclusion criteria. A total of 184 completed the SPS. Approximately three weeks later, a subset of participants (n = 83) completed the SPS a second time. Basic analyses were conducted to evaluate the psychometric properties of the SPS including subscale structure, internal consistency, test-retest reliability, and responsiveness. Based on the results of Experiment 1, the SPS was revised. A cross-validation experiment was then conducted using the revised version of the SPS to confirm the subscale structure, internal consistency, and responsiveness of the questionnaire in a new sample of participants. The final factor analysis led to the ultimate version of the SPS, which had a total of 35 items encompassing five subscales: (1) Feedback, (2) Occlusion/Own Voice Effects, (3) Localization, (4) Fit, Comfort, and Cosmetics, and (5) Ease of Use. The internal consistency of the total SPS (Cronbach's α = .92) and of the subscales (each Cronbach's α > .75) was high. Intraclass correlations (ICCs) showed that the test-retest reliability of the total SPS (ICC = .93) and of the subscales (each ICC > .80) also was high. TC hearing aid users had significantly poorer outcomes than OC hearing aid users on 4 of the 5 subscales, suggesting that the SPS largely is responsive to factors related to style-specific differences. The results suggest that the SPS has good psychometric properties and is a valid and reliable measure of outcomes related to style-specific, hearing aid preference. American Academy of Audiology.
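For readers unfamiliar with the reliability statistics quoted for the SPS, the sketch below computes Cronbach's alpha for a hypothetical item-response matrix using the standard formula; the data are invented and the code is not the authors'.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()    # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total scores
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

# Hypothetical 5-point responses from six respondents to four items of one subscale.
responses = [[4, 5, 4, 5],
             [3, 3, 4, 3],
             [5, 5, 5, 4],
             [2, 3, 2, 2],
             [4, 4, 5, 4],
             [3, 2, 3, 3]]
print(round(cronbach_alpha(responses), 3))
```

Test-retest reliability, quoted as intraclass correlations in the abstract, would be computed from repeated administrations rather than from a single item matrix like this one.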
Validating a Geographical Image Retrieval System.
ERIC Educational Resources Information Center
Zhu, Bin; Chen, Hsinchun
2000-01-01
Summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. Describes an experiment to validate the performance of this image retrieval system against that of human subjects by examining similarity analysis…
MODELS FOR SUBMARINE OUTFALL - VALIDATION AND PREDICTION UNCERTAINTIES
This address reports on some efforts to verify and validate dilution models, including those found in Visual Plumes. This is done in the context of problem experience: a range of problems, including different pollutants such as bacteria; scales, including near-field and far-field...
NASA Astrophysics Data System (ADS)
Zhang, Lei; Li, Dong; Liu, Yu; Liu, Jingxiao; Li, Jingsong; Yu, Benli
2017-11-01
We demonstrate the validity of the simultaneous reverse optimization reconstruction (SROR) algorithm in circular subaperture stitching interferometry (CSSI); the algorithm was previously proposed for non-null aspheric annular subaperture stitching interferometry (ASSI). The merits of the modified SROR algorithm in CSSI, such as automatic retrace error correction, no need for overlap and even tolerance of missed coverage, are analyzed in detail in simulations and experiments. Meanwhile, a practical CSSI system is proposed for this demonstration. An optical wedge is employed to deflect the incident beam for subaperture scanning by its rotation and shift, instead of a six-axis motion-control system. The reference path can also provide variable Zernike defocus for each subaperture test, which decreases the fringe density. Experiments validating the SROR algorithm in this CSSI system were carried out, with cross-validation by testing a paraboloidal mirror, a flat mirror and an astigmatism mirror. This is an indispensable supplement to the application of SROR in general subaperture stitching interferometry.
Philips, Zoë; Whynes, David K; Avis, Mark
2006-02-01
This paper describes an experiment to test the construct validity of contingent valuation, by eliciting women's valuations for the NHS cervical cancer screening programme. It is known that, owing to low levels of knowledge of cancer and screening in the general population, women both over-estimate the risk of disease and the efficacy of screening. The study is constructed as a randomised experiment, in which one group is provided with accurate information about cervical cancer screening, whilst the other is not. The first hypothesis supporting construct validity, that controls who perceive greater benefits from screening will offer higher valuations, is substantiated. Both groups are then provided with objective information on an improvement to the screening programme, and are asked to value the improvement as an increment to their original valuations. The second hypothesis supporting construct validity, that controls who perceive the benefits of the programme to be high already will offer lower incremental valuations, is also substantiated. Copyright 2005 John Wiley & Sons, Ltd.
The marketing implications of affective product design.
Seva, Rosemary R; Duh, Henry Been-Lirn; Helander, Martin G
2007-11-01
Emotions are compelling human experiences and product designers can take advantage of this by conceptualizing emotion-engendering products that sell well in the market. This study hypothesized that product attributes influence users' emotions and that the relationship is moderated by the adherence of these product attributes to purchase criteria. It was further hypothesized that the emotional experience of the user influences purchase intention. A laboratory study was conducted to validate the hypotheses using mobile phones as test products. Sixty-two participants were asked to assess eight phones from a display of 10 phones and indicate their emotional experiences after assessment. Results suggest that some product attributes can cause intense emotional experience. The attributes relate to the phone's dimensions and the relationship between these dimensions. The study validated the notion of integrating affect in designing products that convey users' personalities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Paul A.; Liao, Chang-hsien
2007-11-15
A passive flow disturbance has been proven to enhance the conversion of fuel in a methanol-steam reformer. This study presents a statistical validation of the experiment based on a standard 2^k factorial experiment design and the resulting empirical model of the enhanced hydrogen producing process. A factorial experiment design was used to statistically analyze the effects and interactions of various input factors in the experiment. Three input factors, including the number of flow disturbers, catalyst size, and reactant flow rate were investigated for their effects on the fuel conversion in the steam-reformation process. Based on the experimental results, an empirical model was developed and further evaluated with an uncertainty analysis and interior point data.
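As a brief illustration of the 2^k factorial analysis described above, the sketch below estimates main and interaction effects for a 2^3 design. The factor labels mirror the abstract's three inputs, but the coded levels and conversion values are invented placeholders rather than the study's data; only the standard contrast arithmetic is shown.

```python
# Minimal 2^3 full factorial effect estimation (illustrative data only).
from itertools import product

import numpy as np

# Coded levels (-1, +1) for three hypothetical factors:
# A = number of flow disturbers, B = catalyst size, C = reactant flow rate.
runs = np.array(list(product([-1, 1], repeat=3)))        # 8 runs in standard order
conversion = np.array([62, 70, 65, 74, 60, 69, 66, 78])  # hypothetical fuel conversion (%)

labels = ["A", "B", "C", "AB", "AC", "BC", "ABC"]
columns = [runs[:, 0], runs[:, 1], runs[:, 2],
           runs[:, 0] * runs[:, 1], runs[:, 0] * runs[:, 2],
           runs[:, 1] * runs[:, 2], runs[:, 0] * runs[:, 1] * runs[:, 2]]

for name, col in zip(labels, columns):
    # Effect = mean response at the +1 level minus mean response at the -1 level.
    effect = conversion[col == 1].mean() - conversion[col == -1].mean()
    print(f"{name:>3}: {effect:+.2f}")
```

In a real analysis, effect estimates of this kind would feed the empirical model and the subsequent uncertainty analysis.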
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passamai, V.; Saravia, L.
1997-05-01
Drying of red pepper under solar radiation was investigated, and a simple model related to water evaporation was developed. Drying experiments at constant laboratory conditions were undertaken where solar radiation was simulated by a 1,000 W lamp. In this first part of the work, water evaporation under radiation is studied and laboratory experiments are presented with two objectives: to verify Penman's model of evaporation under radiation, and to validate the laboratory experiments. Modifying Penman's model of evaporation by introducing two drying conductances as a function of water content allows the development of a drying model under solar radiation. In the second part of this paper, the model is validated by applying it to red pepper open air solar drying experiments.
Harman, Elena; Azzam, Tarek
2018-02-01
This exploratory study examines a novel tool for validating program theory through crowdsourced qualitative analysis. It combines a quantitative pattern matching framework traditionally used in theory-driven evaluation with crowdsourcing to analyze qualitative interview data. A sample of crowdsourced participants are asked to read an interview transcript and identify whether program theory components (Activities and Outcomes) are discussed and to highlight the most relevant passage about that component. The findings indicate that using crowdsourcing to analyze qualitative data can differentiate between program theory components that are supported by a participant's experience and those that are not. This approach expands the range of tools available to validate program theory using qualitative data, thus strengthening the theory-driven approach. Copyright © 2017 Elsevier Ltd. All rights reserved.
Soil Moisture Retrieval with Airborne PALS Instrument over Agricultural Areas in SMAPVEX16
NASA Technical Reports Server (NTRS)
Colliander, Andreas; Jackson, Thomas J.; Cosh, Mike; Misra, Sidharth; Bindlish, Rajat; Powers, Jarrett; McNairn, Heather; Bullock, P.; Berg, A.; Magagi, A.;
2017-01-01
NASA's SMAP (Soil Moisture Active Passive) calibration and validation program revealed that the soil moisture products are experiencing difficulties in meeting the mission requirements in certain agricultural areas. Therefore, the mission organized airborne field experiments at two core validation sites to investigate these anomalies. The SMAP Validation Experiment 2016 included airborne observations with the PALS (Passive Active L-band Sensor) instrument and intensive ground sampling. The goal of the PALS measurements is to investigate the soil moisture retrieval algorithm formulation and parameterization under the varying (spatially and temporally) conditions of the agricultural domains and to obtain high resolution soil moisture maps within the SMAP pixels. In this paper the soil moisture retrieval using the PALS brightness temperature observations in SMAPVEX16 is presented.
Unforgiveness: Refining theory and measurement of an understudied construct.
Stackhouse, Madelynn R D; Jones Ross, Rachel W; Boon, Susan D
2018-01-01
This research presents a multidimensional conceptualization of unforgiveness and the development and validation of the unforgiveness measure (UFM). The scale was developed based on a qualitative study of people's experiences of unforgiven interpersonal offences (Study 1). Three dimensions of unforgiveness emerged (Study 2): emotional-ruminative unforgiveness, cognitive-evaluative unforgiveness, and offender reconstrual. We supported the scale's factor structure, reliability, and validity (Study 3). We also established the convergent and discriminant validity of the UFM with measures of negative affect, rumination, forgiveness, cognitive reappraisal, and emotional suppression (Study 4). Together, our results suggest that the UFM can capture variability in victims' unforgiving experiences in the aftermath of interpersonal transgressions. Implications for understanding the construct of unforgiveness and directions for future research are discussed. © 2017 The British Psychological Society.
Torabinia, Mansour; Mahmoudi, Sara; Dolatshahi, Mojtaba; Abyaz, Mohamad Reza
2017-01-01
Background: Considering the overall tendency in psychology, researchers in the field of work and organizational psychology have become progressively interested in employees' positive experiences at work, such as work engagement. This study was conducted with 2 main purposes: assessing the psychometric properties of the Utrecht Work Engagement Scale (UWES), and finding any association between work engagement and burnout in nurses. Methods: The present methodological study was conducted in 2015 and included 248 females and 34 males with 6 months to 30 years of job experience. After the translation process, face and content validity were calculated by qualitative and quantitative methods. Moreover, the content validation ratio, scale-level content validity index, and item-level content validity index were measured for this scale. Construct validity was determined by factor analysis. Moreover, internal consistency and stability reliability were assessed. Factor analysis, test-retest, Cronbach's alpha, and association analysis were used as statistical methods. Results: Face and content validity were acceptable. Exploratory factor analysis suggested a new 3-factor model. In this new model, some items were relocated relative to the construct model of the original version, while the same 17 items were retained. Divergent validity of the new model, as the Persian version of the UWES, was supported against the Copenhagen Burnout Inventory. Internal consistency reliability for the total scale and the subscales was 0.76 to 0.89. Results from the Pearson correlation test indicated a high degree of test-retest reliability (r = 0.89). The ICC was also 0.91. Engagement was negatively related to burnout and overtime per month, whereas it was positively related to age and job experience. Conclusion: The Persian 3-factor model of the Utrecht Work Engagement Scale is a valid and reliable instrument to measure work engagement in Iranian nurses as well as in other medical professionals. PMID:28955665
Design and validation of instruments to measure knowledge.
Elliott, T E; Regal, R R; Elliott, B A; Renier, C M
2001-01-01
Measuring health care providers' learning after they have participated in educational interventions that use experimental designs requires valid, reliable, and practical instruments. A literature review was conducted. In addition, experience gained from designing and validating instruments for measuring the effect of an educational intervention informed this process. The eight main steps for designing, validating, and testing the reliability of instruments for measuring learning outcomes are presented. The key considerations and rationale for this process are discussed. Methods for critiquing and adapting existent instruments and creating new ones are offered. This study may help other investigators in developing valid, reliable, and practical instruments for measuring the outcomes of educational activities.
Replicating the Z iron opacity experiments on the NIF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perry, T. S.; Heeter, R. F.; Opachich, Y. P.
Here, X-ray opacity is a crucial factor of all radiation-hydrodynamics calculations, yet it is one of the least validated of the material properties in the simulation codes. Recent opacity experiments at the Sandia Z-machine have shown up to factors of two discrepancies between theory and experiment, casting doubt on the validity of the opacity models. Therefore, a new experimental opacity platform is being developed on the National Ignition Facility (NIF) not only to verify the Z-machine experimental results but also to extend the experiments to other temperatures and densities. The first experiments will be directed towards measuring the opacity of iron at a temperature of ~160 eV and an electron density of ~7 x 10^21 cm^-3. Preliminary experiments on NIF have demonstrated the ability to create a sufficiently bright point backlighter using an imploding plastic capsule and also a hohlraum that can heat the opacity sample to the desired conditions. The first of these iron opacity experiments is expected to be performed in 2017.
Coderre, Sylvain; Woloschuk, Wayne; McLaughlin, Kevin
2009-04-01
Content validity is a requirement of every evaluation and is achieved when the evaluation content is congruent with the learning objectives and the learning experiences. Congruence between these three pillars of education can be facilitated by blueprinting. Here we describe an efficient process for creating a blueprint and explain how to use this tool to guide all aspects of course creation and evaluation. A well constructed blueprint is a valuable tool for medical educators. In addition to validating evaluation content, a blueprint can also be used to guide selection of curricular content and learning experiences.
Supersonic Coaxial Jet Experiment for CFD Code Validation
NASA Technical Reports Server (NTRS)
Cutler, A. D.; Carty, A. A.; Doerner, S. E.; Diskin, G. S.; Drummond, J. P.
1999-01-01
A supersonic coaxial jet facility has been designed to provide experimental data suitable for the validation of CFD codes used to analyze high-speed propulsion flows. The center jet is of a light gas and the coflow jet is of air, and the mixing layer between them is compressible. Various methods have been employed in characterizing the jet flow field, including schlieren visualization, pitot, total temperature and gas sampling probe surveying, and RELIEF velocimetry. A Navier-Stokes code has been used to calculate the nozzle flow field and the results compared to the experiment.
Barrett, Frederick S; Bradstreet, Matthew P; Leoutsakos, Jeannie-Marie S; Johnson, Matthew W; Griffiths, Roland R
2016-12-01
Acute adverse psychological reactions to classic hallucinogens ("bad trips" or "challenging experiences"), while usually benign with proper screening, preparation, and support in controlled settings, remain a safety concern in uncontrolled settings (such as illicit use contexts). Anecdotal and case reports suggest potential adverse acute symptoms including affective (panic, depressed mood), cognitive (confusion, feelings of losing sanity), and somatic (nausea, heart palpitation) symptoms. Responses to items from several hallucinogen-sensitive questionnaires (Hallucinogen Rating Scale, the States of Consciousness Questionnaire, and the Five-Dimensional Altered States of Consciousness questionnaire) in an Internet survey of challenging experiences with the classic hallucinogen psilocybin were used to construct and validate a Challenging Experience Questionnaire. The stand-alone Challenging Experience Questionnaire was then validated in a separate sample. Seven Challenging Experience Questionnaire factors (grief, fear, death, insanity, isolation, physical distress, and paranoia) provide a phenomenological profile of challenging aspects of experiences with psilocybin. Factor scores were associated with difficulty, meaningfulness, spiritual significance, and change in well-being attributed to the challenging experiences. The factor structure did not differ based on gender or prior struggle with anxiety or depression. The Challenging Experience Questionnaire provides a basis for future investigation of predictors and outcomes of challenging experiences with classic hallucinogens. © The Author(s) 2016.
The Question of Education Science: "Experiment"ism Versus "Experimental"ism
ERIC Educational Resources Information Center
Howe, Kenneth R.
2005-01-01
The ascendant view in the current debate about education science -- experimentism -- is a reassertion of the randomized experiment as the methodological gold standard. Advocates of this view have ignored, not answered, long-standing criticisms of the randomized experiment: its frequent impracticality, its lack of external validity, its confinement…
Identifying Attrition Risk Based on the First Year Experience
ERIC Educational Resources Information Center
Naylor, Ryan; Baik, Chi; Arkoudis, Sophia
2018-01-01
Using data collected from a recent national survey of Australian first-year students, this paper defines and validates four scales--belonging, feeling supported, intellectual engagement and workload stress--to measure the student experience of university. These scales provide insights into the university experience for both groups and individual…
Review and assessment of turbulence models for hypersonic flows
NASA Astrophysics Data System (ADS)
Roy, Christopher J.; Blottner, Frederick G.
2006-10-01
Accurate aerodynamic prediction is critical for the design and optimization of hypersonic vehicles. Turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating for these systems. The first goal of this article is to update the previous comprehensive review of hypersonic shock/turbulent boundary-layer interaction experiments published in 1991 by Settles and Dodson (Hypersonic shock/boundary-layer interaction database. NASA CR 177577, 1991). In their review, Settles and Dodson developed a methodology for assessing experiments appropriate for turbulence model validation and critically surveyed the existing hypersonic experiments. We limit the scope of our current effort by considering only two-dimensional (2D)/axisymmetric flows in the hypersonic flow regime where calorically perfect gas models are appropriate. We extend the prior database of recommended hypersonic experiments (on four 2D and two 3D shock-interaction geometries) by adding three new geometries. The first two geometries, the flat plate/cylinder and the sharp cone, are canonical, zero-pressure gradient flows which are amenable to theory-based correlations, and these correlations are discussed in detail. The third geometry added is the 2D shock impinging on a turbulent flat plate boundary layer. The current 2D hypersonic database for shock-interaction flows thus consists of nine experiments on five different geometries. The second goal of this study is to review and assess the validation usage of various turbulence models on the existing experimental database. Here we limit the scope to one- and two-equation turbulence models where integration to the wall is used (i.e., we omit studies involving wall functions). A methodology for validating turbulence models is given, followed by an extensive evaluation of the turbulence models on the current hypersonic experimental database. A total of 18 one- and two-equation turbulence models are reviewed, and results of turbulence model assessments for the six models that have been extensively applied to the hypersonic validation database are compiled and presented in graphical form. While some of the turbulence models do provide reasonable predictions for the surface pressure, the predictions for surface heat flux are generally poor, and often in error by a factor of four or more. In the vast majority of the turbulence model validation studies we review, the authors fail to adequately address the numerical accuracy of the simulations (i.e., discretization and iterative error) and the sensitivities of the model predictions to freestream turbulence quantities or near-wall y+ mesh spacing. We recommend new hypersonic experiments be conducted which (1) measure not only surface quantities but also mean and fluctuating quantities in the interaction region and (2) provide careful estimates of both random experimental uncertainties and correlated bias errors for the measured quantities and freestream conditions. For the turbulence models, we recommend that a wide-range of turbulence models (including newer models) be re-examined on the current hypersonic experimental database, including the more recent experiments. Any future turbulence model validation efforts should carefully assess the numerical accuracy and model sensitivities. In addition, model corrections (e.g., compressibility corrections) should be carefully examined for their effects on a standard, low-speed validation database. 
Finally, as new experiments or direct numerical simulation data become available with information on mean and fluctuating quantities, they should be used to improve the turbulence models and thus increase their predictive capability.
Hedlund, Lena; Gyllensten, Amanda Lundvik; Waldegren, Tomas; Hansson, Lars
2016-05-01
Motor disturbances and disturbed self-recognition are common features that affect mobility in persons with schizophrenia spectrum disorder and bipolar disorder. Physiotherapists in Scandinavia assess and treat movement difficulties in persons with severe mental illness. The Body Awareness Scale Movement Quality and Experience (BAS MQ-E) is a new and shortened version of the commonly used Body Awareness Scale-Health (BAS-H). The purpose of this study was to investigate the inter-rater reliability and the concurrent validity of BAS MQ-E in persons with severe mental illness. The concurrent validity was examined by investigating the relationships between neurological soft signs, alexithymia, fatigue, anxiety, and mastery. Sixty-two persons with severe mental illness participated in the study. The results showed a satisfactory inter-rater reliability (n = 53) and a concurrent validity (n = 62) with neurological soft signs, especially cognitive and perceptual based signs. There was also a concurrent validity linked to physical fatigue and aspects of alexithymia. The scores of BAS MQ-E were in general higher for persons with schizophrenia compared to persons with other diagnoses within the schizophrenia spectrum disorders and bipolar disorder. The clinical implications are presented in the discussion.
Mahler, H I; Kulik, J A
1995-02-01
The purpose of this study was to demonstrate the validation of videotape interventions that were designed to prepare patients for coronary artery bypass graft (CABG) surgery. First, three videotapes were developed. Two of the tapes featured the experiences of three actual CABG patients and were constructed to present either an optimistic portrayal of the recovery period (mastery tape) or a portrayal designed to inoculate patients against potential problems (coping tape). The third videotape contained the more general nurse scenes and narration used in the other two tapes, but did not include the experiences of particular patients. We then conducted a study to establish the convergent and discriminant validity of the three tapes. That is, we sought to demonstrate both that the tapes did differ along the mastery-coping dimension, and that they did not differ in other respects (such as in the degree of information provided or the perceived credibility of the narrator). The validation study, conducted with 42 males who had previously undergone CABG, demonstrated that the intended equivalences and differences between the tapes were achieved. The importance of establishing the validity of health-related interventions is discussed.
2011-12-02
construction and validation of predictive computer models such as those used in Time-domain Analysis Simulation for Advanced Tracking (TASAT), a... characterization data, successful construction and validation of predictive computer models was accomplished. And an investigation in pose determination from...
Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2
NASA Technical Reports Server (NTRS)
Muylaert, Jean (Editor); Kumar, Ajay (Editor); Dujarric, Christian (Editor)
1998-01-01
The results of the phase 2 effort conducted under AGARD Working Group 18 on Hypersonic Experimental and Computational Capability, Improvement and Validation are presented in this report. The first volume, published in May 1996, mainly focused on the design methodology, plans and some initial results of experiments that had been conducted to serve as validation benchmarks. The current volume presents the detailed experimental and computational data base developed during this effort.
Comparative assessment of three standardized robotic surgery training methods.
Hung, Andrew J; Jayaratna, Isuru S; Teruya, Kara; Desai, Mihir M; Gill, Inderbir S; Goh, Alvin C
2013-10-01
To evaluate three standardized robotic surgery training methods, inanimate, virtual reality and in vivo, for their construct validity. To explore the concept of cross-method validity, where the relative performance of each method is compared. Robotic surgical skills were prospectively assessed in 49 participating surgeons who were classified as follows: 'novice/trainee': urology residents, previous experience <30 cases (n = 38) and 'experts': faculty surgeons, previous experience ≥30 cases (n = 11). Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool. A Kruskal-Wallis test was used to evaluate performance differences between novices and experts (construct validity). Spearman's correlation coefficient (ρ) was used to measure the association of performance across inanimate, simulation and in vivo methods (cross-method validity). Novice and expert surgeons had previously performed a median (range) of 0 (0-20) and 300 (30-2000) robotic cases, respectively (P < 0.001). Construct validity: experts consistently outperformed residents with all three methods (P < 0.001). Cross-method validity: overall performance of inanimate tasks significantly correlated with virtual reality robotic performance (ρ = -0.7, P < 0.001) and in vivo robotic performance based on GEARS (ρ = -0.8, P < 0.0001). Virtual reality performance and in vivo tissue performance were also found to be strongly correlated (ρ = 0.6, P < 0.001). We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment. We externally confirmed the construct validity of each featured training tool. © 2013 BJU International.
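A minimal sketch of the two statistical checks named in this abstract, assuming invented scores: a Kruskal-Wallis test comparing novices with experts (construct validity) and Spearman's rho between paired performances on two training methods (cross-method validity). The numbers below are placeholders, not the study's data.

```python
# Construct and cross-method validity checks (hypothetical scores).
from scipy.stats import kruskal, spearmanr

# Hypothetical GEARS-style scores for novices and experts.
novice_scores = [14, 16, 15, 13, 17, 12, 16, 15]
expert_scores = [22, 24, 23, 25, 21]

h_stat, p_construct = kruskal(novice_scores, expert_scores)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_construct:.4f}")

# Hypothetical paired performance metrics for the same surgeons on two methods
# (e.g., inanimate task completion time vs. virtual-reality overall score).
inanimate = [310, 295, 280, 240, 220, 210, 190, 185]
simulator = [55, 58, 62, 70, 74, 78, 83, 85]

rho, p_cross = spearmanr(inanimate, simulator)
print(f"Spearman rho = {rho:.2f}, p = {p_cross:.4f}")
```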
Benej, Martin; Bendlova, Bela; Vaclavikova, Eliska; Poturnajova, Martina
2011-10-06
Reliable and effective primary screening of mutation carriers is the key condition for common diagnostic use. The objective of this study is to validate high resolution melting (HRM) analysis for routine primary mutation screening and to accomplish its optimization and evaluation. Due to their heterozygous nature, germline point mutations of the c-RET proto-oncogene, associated with multiple endocrine neoplasia type 2 (MEN2), are suitable for HRM analysis. Early identification of mutation carriers has a major impact on patients' survival due to early onset of medullary thyroid carcinoma (MTC) and resistance to conventional therapy. The authors performed a series of validation assays according to International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) guidelines for validation of analytical procedures, along with appropriate design and optimization experiments. After validated evaluation of HRM, the method was utilized for primary screening of 28 pathogenic c-RET mutations distributed among nine exons of the c-RET gene. Validation experiments confirmed the repeatability, robustness, accuracy and reproducibility of HRM. All c-RET gene pathogenic variants were detected with no occurrence of false-positive/false-negative results. The data provide basic information about the design, establishment and validation of HRM for primary screening of genetic variants in order to distinguish heterozygous point mutation carriers among wild-type sequence carriers. HRM analysis is a powerful and reliable tool for rapid and cost-effective primary screening, e.g., of c-RET gene germline and/or sporadic mutations, and can be used as a first line potential diagnostic tool.
NASA Astrophysics Data System (ADS)
Banica, M. C.; Chun, J.; Scheuermann, T.; Weigand, B.; Wolfersdorf, J. v.
2009-01-01
Scramjet-powered vehicles can decrease costs for access to space, but substantial obstacles still exist in their realization. For example, experiments in the relevant Mach number regime are difficult to perform and flight testing is expensive. Therefore, numerical methods are often employed for system layout, but they require validation against experimental data. Here, we validate the commercial code CFD++ against experimental results for hydrogen combustion in the supersonic combustion facility of the Institute of Aerospace Thermodynamics (ITLR) at the Universität Stuttgart. Fuel is injected through a lobed strut injector, which provides rapid mixing. Our numerical data show reasonable agreement with experiments. We further investigate the effects of varying equivalence ratios on several important performance parameters.
Lievens, Filip; Sanchez, Juan I
2007-05-01
A quasi-experiment was conducted to investigate the effects of frame-of-reference training on the quality of competency modeling ratings made by consultants. Human resources consultants from a large consulting firm were randomly assigned to either a training or a control condition. The discriminant validity, interrater reliability, and accuracy of the competency ratings were significantly higher in the training group than in the control group. Further, the discriminant validity and interrater reliability of competency inferences were highest among an additional group of trained consultants who also had competency modeling experience. Together, these results suggest that procedural interventions such as rater training can significantly enhance the quality of competency modeling. 2007 APA, all rights reserved
Loeb, Danielle F; Crane, Lori A; Leister, Erin; Bayliss, Elizabeth A; Ludman, Evette; Binswanger, Ingrid A; Kline, Danielle M; Smith, Meredith; deGruy, Frank V; Nease, Donald E; Dickinson, L Miriam
Develop and validate self-efficacy scales for primary care provider (PCP) mental illness management and team-based care participation. We developed three self-efficacy scales: team-based care (TBC), mental illness management (MIM), and chronic medical illness (CMI). We developed the scales using Bandura's Social Cognitive Theory as a guide. The survey instrument included items from previously validated scales on team-based care and mental illness management. We administered a mail survey to 900 randomly selected Colorado physicians. We conducted exploratory principal factor analysis with oblique rotation. We constructed self-efficacy scales and calculated standardized Cronbach's alpha coefficients to test internal consistency. We calculated correlation coefficients between the MIM and TBC scales and previously validated measures related to each scale to evaluate convergent validity. We tested correlations between the TBC and the measures expected to correlate with the MIM scale and vice versa to evaluate discriminant validity. PCPs (n=402, response rate=49%) from diverse practice settings completed surveys. Items grouped into factors as expected. Cronbach's alphas were 0.94, 0.88, and 0.83 for the TBC, MIM, and CMI scales, respectively. In convergent validity testing, the TBC scale was correlated as predicted with scales assessing communications strategies, attitudes toward teams, and other teamwork indicators (r=0.25 to 0.40, all statistically significant). Likewise, the MIM scale was significantly correlated with several items about knowledge and experience managing mental illness (r=0.24 to 0.41, all statistically significant). As expected in discriminant validity testing, the TBC scale had only very weak correlations with the mental illness knowledge and experience managing mental illness items (r=0.03 to 0.12). Likewise, the MIM scale was only weakly correlated with measures of team-based care (r=0.09 to 0.17). This validation study of MIM and TBC self-efficacy scales showed high internal validity and good construct validity. Copyright © 2016 Elsevier Inc. All rights reserved.
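As a small illustration of the internal-consistency step reported here, the sketch below computes Cronbach's alpha from a hypothetical respondent-by-item matrix. The data are invented, and the formula shown is the common raw-score alpha rather than the standardized coefficient reported in the abstract.

```python
# Cronbach's alpha for a hypothetical Likert-type item matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented responses to a 5-item subscale from 8 respondents (1-5 Likert).
responses = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 2, 3],
    [4, 4, 5, 4, 4],
    [3, 2, 3, 3, 2],
    [5, 4, 4, 5, 5],
    [2, 2, 3, 2, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```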
ERIC Educational Resources Information Center
Rutkiene, Ausra; Tereseviciene, Margarita
2010-01-01
The article presents the stages of the experiment planning that are necessary to ensure the validity and reliability of it. The research data reveal that doctoral students of Educational Research approach the planning of the experiment as the planning of the whole dissertation research; and the experiment as a research method is often confused…
NASA Astrophysics Data System (ADS)
Catanzarite, Joseph; Burke, Christopher J.; Li, Jie; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division
2016-06-01
The Kepler Mission is developing an Analytic Completeness Model (ACM) to estimate detection completeness contours as a function of exoplanet radius and period for each target star. Accurate completeness contours are necessary for robust estimation of exoplanet occurrence rates. The main components of the ACM for a target star are: detection efficiency as a function of SNR, the window function (WF) and the one-sigma depth function (OSDF). (Ref. Burke et al. 2015). The WF captures the falloff in transit detection probability at long periods that is determined by the observation window (the duration over which the target star has been observed). The OSDF is the transit depth (in parts per million) that yields SNR of unity for the full transit train. It is a function of period, and accounts for the time-varying properties of the noise and for missing or deweighted data. We are performing flux-level transit injection (FLTI) experiments on selected Kepler target stars with the goal of refining and validating the ACM. “Flux-level” injection machinery inserts exoplanet transit signatures directly into the flux time series, as opposed to “pixel-level” injection, which inserts transit signatures into the individual pixels using the pixel response function. See Jie Li's poster: ID #2493668, "Flux-level transit injection experiments with the NASA Pleiades Supercomputer" for details, including performance statistics. Since FLTI is affordable for only a small subset of the Kepler targets, the ACM is designed to apply to most Kepler target stars. We validate this model using “deep” FLTI experiments, with ~500,000 injection realizations on each of a small number of targets and “shallow” FLTI experiments with ~2000 injection realizations on each of many targets. From the results of these experiments, we identify anomalous targets, model their behavior and refine the ACM accordingly. In this presentation, we discuss progress in validating and refining the ACM, and we compare our detection efficiency curves with those derived from the associated pixel-level transit injection experiments. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
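The one-sigma depth function lends itself to a rough back-of-the-envelope sketch. The version below assumes a uniform per-transit depth uncertainty and no missing or deweighted data, so the function name and all numbers are simplifying assumptions for illustration only, not the pipeline's actual ACM computation.

```python
# Simplified one-sigma depth function: depth (ppm) giving total SNR = 1 for a transit train.
import numpy as np

def osdf_ppm(period_days, baseline_days, sigma_single_transit_ppm):
    """Assumes every transit in the observation window is seen with equal noise."""
    n_transits = np.maximum(np.floor(baseline_days / period_days), 1)
    # Total SNR = (depth / sigma_single) * sqrt(N)  =>  depth at SNR = 1:
    return sigma_single_transit_ppm / np.sqrt(n_transits)

periods = np.array([10.0, 50.0, 100.0, 300.0])  # orbital periods in days
print(osdf_ppm(periods, baseline_days=1400.0, sigma_single_transit_ppm=120.0))
```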
ERIC Educational Resources Information Center
Sørlie, Mari-Anne; Ogden, Terje
2014-01-01
This paper reviews literature on the rationale, challenges, and recommendations for choosing a nonequivalent comparison (NEC) group design when evaluating intervention effects. After reviewing frequently addressed threats to validity, the paper describes recommendations for strengthening the research design and how the recommendations were…
Culture Training: Validation Evidence for the Culture Assimilator.
ERIC Educational Resources Information Center
Mitchell, Terence R.; And Others
The culture assimilator, a programed self-instructional approach to culture training, is described and a series of laboratory experiments and field studies validating the culture assimilator are reviewed. These studies show that the culture assimilator is an effective method of decreasing some of the stress experienced when one works with people…
29 CFR 1607.5 - General standards for validity studies.
Code of Federal Regulations, 2014 CFR
2014-07-01
... experience on the job. J. Interim use of selection procedures. Users may continue the use of a selection... studies. A. Acceptable types of validity studies. For the purposes of satisfying these guidelines, users... which has an adverse impact and which selection procedure has an adverse impact, each user should...
29 CFR 1607.5 - General standards for validity studies.
Code of Federal Regulations, 2013 CFR
2013-07-01
... experience on the job. J. Interim use of selection procedures. Users may continue the use of a selection... studies. A. Acceptable types of validity studies. For the purposes of satisfying these guidelines, users... which has an adverse impact and which selection procedure has an adverse impact, each user should...
29 CFR 1607.5 - General standards for validity studies.
Code of Federal Regulations, 2012 CFR
2012-07-01
... experience on the job. J. Interim use of selection procedures. Users may continue the use of a selection... studies. A. Acceptable types of validity studies. For the purposes of satisfying these guidelines, users... which has an adverse impact and which selection procedure has an adverse impact, each user should...
Students' Self-Evaluation and Reflection (Part 1): "Measurement"
ERIC Educational Resources Information Center
Cambra-Fierro, Jesus; Cambra-Berdun, Jesus
2007-01-01
Purpose: The objective of the paper is the development and validation of scales to assess reflective learning. Design/methodology/approach: The research is based on a literature review plus in-classroom experience. For the scale validation process, exploratory and confirmatory analyses were conducted, following proposals made by Anderson and…
Examining the Cultural Validity of a College Student Engagement Survey for Latinos
ERIC Educational Resources Information Center
Hernandez, Ebelia; Mobley, Michael; Coryell, Gayle; Yu, En-Hui; Martinez, Gladys
2013-01-01
Using critical race theory and quantitative criticalist stance, this study examines the construct validity of an engagement survey, "Student Experiences in the Research University" (SERU) for Latino college students through exploratory factor analysis. Results support the principal seven-factor SERU model. However subfactors exhibited…
Gayzur, Nora D.; Langley, Linda K.; Kelland, Chris; Wyman, Sara V.; Saville, Alyson L.; Ciernia, Annie T.; Padmanabhan, Ganesh
2013-01-01
Shifting visual focus based on the perceived gaze direction of another person is one form of joint attention. The present study investigated if this socially-relevant form of orienting is reflexive and whether it is influenced by age. Green and Woldorff (2012) argued that rapid cueing effects (faster responses to validly-cued targets than to invalidly-cued targets) were limited to conditions in which a cue overlapped in time with a target. They attributed slower responses following invalid cues to the time needed to resolve incongruent spatial information provided by the concurrently-presented cue and target. The present study examined orienting responses of young (18-31 years), young-old (60-74 years), and old-old adults (75-91 years) following uninformative central gaze cues that overlapped in time with the target (Experiment 1) or that were removed prior to target presentation (Experiment 2). When the cue and target overlapped, all three groups localized validly-cued targets faster than invalidly-cued targets, and validity effects emerged earlier for the two younger groups (at 100 ms post cue onset) than for the old-old group (at 300 ms post cue onset). With a short duration cue (Experiment 2), validity effects developed rapidly (by 100 ms) for all three groups, suggesting that validity effects resulted from reflexive orienting based on gaze cue information rather than from cue-target conflict. Thus, although old-old adults may be slow to disengage from persistent gaze cues, attention continues to be reflexively guided by gaze cues late in life. PMID:24170377
Design of experiments in medical physics: Application to the AAA beam model validation.
Dufreneix, S; Legrand, C; Di Bartolo, C; Bremaud, M; Mesgouez, J; Tiplica, T; Autret, D
2017-09-01
The purpose of this study is to evaluate the usefulness of the design of experiments in the analysis of multiparametric problems related to the quality assurance in radiotherapy. The main motivation is to use this statistical method to optimize the quality assurance processes in the validation of beam models. Considering the Varian Eclipse system, eight parameters with several levels were selected: energy, MLC, depth, X, Y1 and Y2 jaw dimensions, wedge and wedge jaw. A Taguchi table was used to define 72 validation tests. Measurements were conducted in water using a CC04 on a TrueBeam STx, a TrueBeam Tx, a Trilogy and a 2300IX accelerator matched by the vendor. Dose was computed using the AAA algorithm. The same raw data was used for all accelerators during the beam modelling. The mean difference between computed and measured doses was 0.1±0.5% for all beams and all accelerators with a maximum difference of 2.4% (under the 3% tolerance level). For all beams, the measured doses were within 0.6% for all accelerators. The energy was found to be an influencing parameter but the deviations observed were smaller than 1% and not considered clinically significant. Designs of experiment can help define the optimal measurement set to validate a beam model. The proposed method can be used to identify the prognostic factors of dose accuracy. The beam models were validated for the 4 accelerators which were found dosimetrically equivalent even though the accelerator characteristics differ. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
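The dose-comparison summary quoted above (mean difference, spread, and maximum against a 3% tolerance) can be reproduced in outline as follows. The paired doses and the single grouping factor are invented placeholders; a full analysis would span the 72-test Taguchi table for all four accelerators.

```python
# Percent dose differences, overall and grouped by a single factor (illustrative data).
import numpy as np

# Hypothetical paired doses (Gy) for 12 of the validation tests.
measured = np.array([1.00, 0.98, 1.02, 0.95, 1.01, 0.99, 1.03, 0.97, 1.00, 0.96, 1.02, 0.98])
computed = np.array([1.01, 0.98, 1.01, 0.96, 1.00, 1.00, 1.02, 0.98, 1.01, 0.95, 1.03, 0.97])
energy   = np.array(["6X", "6X", "6X", "6X", "10X", "10X", "10X", "10X",
                     "15X", "15X", "15X", "15X"])   # hypothetical factor levels

diff_pct = 100.0 * (computed - measured) / measured
print(f"overall: {diff_pct.mean():+.2f} +/- {diff_pct.std(ddof=1):.2f} %  "
      f"(max |diff| = {np.abs(diff_pct).max():.2f} %, tolerance 3 %)")

for level in np.unique(energy):
    sel = diff_pct[energy == level]
    print(f"energy {level}: {sel.mean():+.2f} +/- {sel.std(ddof=1):.2f} %")
```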
Rubashkin, Nicholas; Szebik, Imre; Baji, Petra; Szántó, Zsuzsa; Susánszky, Éva; Vedam, Saraswathi
2017-11-16
Instruments to assess quality of maternity care in Central and Eastern European (CEE) region are scarce, despite reports of poor doctor-patient communication, non-evidence-based care, and informal cash payments. We validated and tested an online questionnaire to study maternity care experiences among Hungarian women. Following literature review, we collated validated items and scales from two previous English-language surveys and adapted them to the Hungarian context. An expert panel assessed items for clarity and relevance on a 4-point ordinal scale. We calculated item-level Content Validation Index (CVI) scores. We designed 9 new items concerning informal cash payments, as well as 7 new "model of care" categories based on mode of payment. The final questionnaire (N = 111 items) was tested in two samples of Hungarian women, representative (N = 600) and convenience (N = 657). We conducted bivariate analysis and thematic analysis of open-ended responses. Experts rated pre-existing English-language items as clear and relevant to Hungarian women's maternity care experiences with an average CVI for included questions of 0.97. Significant differences emerged across the model of care categories in terms of informal payments, informed consent practices, and women's perceptions of autonomy. Thematic analysis (N = 1015) of women's responses identified 13 priority areas of the maternity care experience, 9 of which were addressed by the questionnaire. We developed and validated a comprehensive questionnaire that can be used to evaluate respectful maternity care, evidence-based practice, and informal cash payments in CEE region and beyond.
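The item-level Content Validation Index used here has a simple definition: the proportion of expert panelists rating an item 3 or 4 on the 4-point relevance scale. A minimal sketch with invented expert ratings is shown below; the averaging method for the scale-level CVI is one common convention, not necessarily the one the authors used.

```python
# Item-level and scale-average Content Validity Index (invented expert ratings).
import numpy as np

# Rows = items, columns = expert panelists, ratings on a 4-point relevance scale.
ratings = np.array([
    [4, 4, 3, 4, 4],
    [3, 4, 4, 3, 4],
    [4, 2, 4, 4, 3],
    [2, 3, 2, 3, 4],
])

relevant = ratings >= 3                 # a rating of 3 or 4 counts as "relevant"
i_cvi = relevant.mean(axis=1)           # proportion of experts agreeing, per item
s_cvi_ave = i_cvi.mean()                # scale-level CVI, averaging method

for idx, value in enumerate(i_cvi, start=1):
    print(f"item {idx}: I-CVI = {value:.2f}")
print(f"S-CVI/Ave = {s_cvi_ave:.2f}")
```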
A self-report measure of legal and administrative aggression within intimate relationships.
Hines, Denise A; Douglas, Emily M; Berger, Joshua L
2015-01-01
Although experts agree that intimate partner violence (IPV) is a multidimensional phenomenon comprised of both physical and non-physical acts, there is no measure of legal and administrative (LA) forms of IPV. LA aggression is when one partner manipulates the legal and other administrative systems to the detriment of his/her partner. Our measure was developed using the qualitative literature on male IPV victims' experiences. We tested the reliability and validity of our LA aggression measure on two samples of men: 611 men who sustained IPV and sought help, and 1,601 men in a population-based sample. Construct validity of the victimization scale was supported through factor analyses, correlations with other forms of IPV victimization, and comparisons of the rates of LA aggression between the two samples; reliability was established through Cronbach's alpha. Evidence for the validity and reliability of the perpetration scale was mixed and therefore needs further analyses and revisions before we can recommend its use in empirical work. There is initial support for the victimization scale as a valid and reliable measure of LA aggression victimization among men, but work is needed using women's victimization experiences to establish the reliability and validity of this measure for women. An LA aggression measure should be developed using LGBTQ victims' experiences, and for couples who are well into the divorce and child custody legal process. Legal personnel and practitioners should be educated on this form of IPV so that they can appropriately work with clients who have been victimized or perpetrate LA aggression. © 2014 Wiley Periodicals, Inc.
Inter-Disciplinary Collaboration in Support of the Post-Standby TREAT Mission
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeHart, Mark; Baker, Benjamin; Ortensi, Javier
Although analysis methods have advanced significantly in the last two decades, high fidelity multi-physics methods for reactor systems have been under development for only a few years and are not presently mature nor deployed. Furthermore, very few methods provide the ability to simulate rapid transients in three dimensions. Data for validation of advanced time-dependent multi-physics is sparse; at TREAT, historical data were not collected for the purpose of validating three-dimensional methods, let alone multi-physics simulations. Existing data continues to be collected to attempt to simulate the behavior of experiments and calibration transients, but it will be insufficient for the complete validation of analysis methods used for TREAT transient simulations. Hence, a 2018 restart will most likely occur without the direct application of advanced modeling and simulation methods. At present, the current INL modeling and simulation team plans to work with TREAT operations staff in performing reactor simulations with MAMMOTH, in parallel with the software packages currently being used in preparation for core restart (e.g., MCNP5, RELAP5, ABAQUS). The TREAT team has also requested specific measurements to be performed during startup testing, currently scheduled to run from February to August of 2018. These startup measurements will be crucial in validating the new analysis methods in preparation for ultimate application for TREAT operations and experiment design. This document describes the collaboration between modeling and simulation staff and restart, operations, instrumentation and experiment development teams to be able to effectively interact and achieve successful validation work during restart testing.
Padmanabhanunni, Anita; Edwards, David
2016-05-01
This article examines the experiences of nine rape survivors who participated in the Silent Protest, an annual protest march at Rhodes University that aims to highlight the sexual abuse of women, validate the harm done, and foster solidarity among survivors. Participants responded to a semi-structured interview focusing on the context of their rape and its impact, and their experiences of participation in the Protest. In the first phase of data analysis, synoptic case narratives were written. In the second, themes from participants' experience were identified using interpretative phenomenological analysis. In the third, the data were examined in light of questions around the extent to which participation contributed to healing. Participants reported experiences of validation and empowerment, but the majority were suffering from posttraumatic stress disorder. In some cases, participation had exacerbated self-blame and avoidant coping. Recommendations are made about the provision of psychoeducation and counseling at such events. © The Author(s) 2015.
External Standards or Standard Addition? Selecting and Validating a Method of Standardization
NASA Astrophysics Data System (ADS)
Harvey, David T.
2002-05-01
A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and gain practical experience of the difference between performing an external standardization and a standard addition.
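To make the contrast between the two standardization strategies concrete, the sketch below fits an external calibration line and, separately, extrapolates a standard-addition series to its x-intercept, then computes a spike recovery. The signals and concentrations are invented teaching numbers, not data from this experiment.

```python
# External standardization vs. standard addition, plus spike recovery (illustrative).
import numpy as np

# --- External standards: fit signal = m*C + b, then invert for the sample signal.
c_std = np.array([0.0, 1.0, 2.0, 4.0, 8.0])        # standard concentration (ppm)
s_std = np.array([0.02, 0.51, 1.00, 1.98, 3.97])   # instrument signal
m, b = np.polyfit(c_std, s_std, 1)
sample_signal = 1.26
c_external = (sample_signal - b) / m
print(f"external calibration: C = {c_external:.2f} ppm")

# --- Standard addition: spike the sample itself and extrapolate to the x-intercept.
c_added = np.array([0.0, 1.0, 2.0, 3.0])           # added concentration (ppm)
s_added = np.array([1.40, 1.95, 2.52, 3.06])       # signal of the spiked sample
m2, b2 = np.polyfit(c_added, s_added, 1)
c_std_add = b2 / m2                                 # |x-intercept| = analyte in sample
print(f"standard addition:    C = {c_std_add:.2f} ppm")

# --- Spike recovery: (found in spiked sample - found in sample) / amount added * 100.
found_unspiked, found_spiked, spike = 2.50, 4.42, 2.00
print(f"spike recovery = {100 * (found_spiked - found_unspiked) / spike:.0f} %")
```

A noticeable gap between the two concentration estimates would itself be evidence of a matrix effect, which is the point the experiment is designed to illustrate.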
Multi-Evaporator Miniature Loop Heat Pipe for Small Spacecraft Thermal Control
NASA Technical Reports Server (NTRS)
Ku, Jentung; Ottenstein, Laura; Douglas, Donya
2008-01-01
This paper presents the development of the Thermal Loop experiment under NASA's New Millennium Program Space Technology 8 (ST8) Project. The Thermal Loop experiment was originally planned for validating in space an advanced heat transport system consisting of a miniature loop heat pipe (MLHP) with multiple evaporators and multiple condensers. Details of the thermal loop concept, technical advances and benefits, Level 1 requirements and the technology validation approach are described. An MLHP breadboard has been built and tested in the laboratory and thermal vacuum environments, and has demonstrated excellent performance that met or exceeded the design requirements. The MLHP retains all features of state-of-the-art loop heat pipes and offers additional advantages to enhance the functionality, performance, versatility, and reliability of the system. In addition, an analytical model has been developed to simulate the steady state and transient operation of the MLHP, and the model predictions agreed very well with experimental results. A protoflight MLHP has been built and is being tested in a thermal vacuum chamber to validate its performance and technical readiness for a flight experiment.
Continued Development and Validation of Methods for Spheromak Simulation
NASA Astrophysics Data System (ADS)
Benedett, Thomas
2015-11-01
The HIT-SI experiment has demonstrated stable sustainment of spheromaks; determining how the underlying physics extrapolate to larger, higher-temperature regimes is of prime importance in determining the viability of the inductively-driven spheromak. It is thus prudent to develop and validate a computational model that can be used to study current results and provide an intermediate step between theory and future experiments. A zero-beta Hall-MHD model has shown good agreement with experimental data at 14.5 kHz injector operation. Experimental observations at higher frequency, where the best performance is achieved, indicate pressure effects are important and likely required to attain quantitative agreement with simulations. Efforts to extend the existing validation to high frequency (~ 36-68 kHz) using an extended MHD model implemented in the PSI-TET arbitrary-geometry 3D MHD code will be presented. Results from verification of the PSI-TET extended MHD model using the GEM magnetic reconnection challenge will also be presented along with investigation of injector configurations for future SIHI experiments using Taylor state equilibrium calculations. Work supported by DoE.
Zig-zag tape influence in NREL Phase VI wind turbine
NASA Astrophysics Data System (ADS)
Gomez-Iradi, Sugoi; Munduate, Xabier
2014-06-01
A two-bladed, 10 metre diameter wind turbine was tested in the 24.4m × 36.6m NASA-Ames wind tunnel (Phase VI). These experiments have been extensively used for validation purposes for CFD and other engineering tools. The free transition case (S) has been, and is, the most employed one for validation purposes, and consists of a 3° pitch case with a rotational speed of 72rpm in upwind configuration, with and without yaw misalignment. However, there is another less visited case (M) where an identical configuration was tested but with the inclusion of a zig-zag tape. This was called the transition fixed sequence. This paper shows the differences between the free and the fixed transition cases; the latter should be more appropriate for comparison with fully turbulent simulations. Steady k-ω SST fully turbulent computations performed with the WMB CFD method are compared with the experiments, showing better predictions in the attached flow region when compared with the transition fixed experiments. This work aims to prove the utility of the M case (transition fixed) and show its differences with respect to the S case (free transition) for validation purposes.
NASA Technical Reports Server (NTRS)
Cayeux, P.; Raballand, F.; Borde, J.; Berges, J.-C.; Meyssignac, B.
2007-01-01
Within the framework of a partnership agreement, EADS ASTRIUM has worked since June 2006 for the CNES formation flying experiment on the PRISMA mission. EADS ASTRIUM is responsible for the anti-collision function. This responsibility covers the design and the development of the function as a Matlab/Simulink library, as well as its functional validation and performance assessment. PRISMA is a technology in-orbit testbed mission from the Swedish National Space Board, mainly devoted to formation flying demonstration. PRISMA is made of two micro-satellites that will be launched in 2009 on a quasi-circular SSO at about 700 km of altitude. The CNES FFIORD experiment embedded on PRISMA aims at flight validating an FFRF sensor designed for formation control, and assessing its performances, in preparation to future formation flying missions such as Simbol X; FFIORD aims as well at validating various typical autonomous rendezvous and formation guidance and control algorithms. This paper presents the principles of the collision avoidance function developed by EADS ASTRIUM for FFIORD; three kinds of maneuvers were implemented and are presented in this paper with their performances.
Validation and Continued Development of Methods for Spheromak Simulation
NASA Astrophysics Data System (ADS)
Benedett, Thomas
2016-10-01
The HIT-SI experiment has demonstrated stable sustainment of spheromaks. Determining how the underlying physics extrapolate to larger, higher-temperature regimes is of prime importance in determining the viability of the inductively-driven spheromak. It is thus prudent to develop and validate a computational model that can be used to study current results and study the effect of possible design choices on plasma behavior. A zero-beta Hall-MHD model has shown good agreement with experimental data at 14.5 kHz injector operation. Experimental observations at higher frequency, where the best performance is achieved, indicate pressure effects are important and likely required to attain quantitative agreement with simulations. Efforts to extend the existing validation to high frequency (36-68 kHz) using an extended MHD model implemented in the PSI-TET arbitrary-geometry 3D MHD code will be presented. An implementation of anisotropic viscosity, a feature observed to improve agreement between NIMROD simulations and experiment, will also be presented, along with investigations of flux conserver features and their impact on density control for future SIHI experiments. Work supported by DoE.
A verification and validation effort for high explosives at Los Alamos National Lab (u)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scovel, Christina A; Menikoff, Ralph S
2009-01-01
We have started a project to verify and validate ASC codes used to simulate detonation waves in high explosives. Since there are no non-trivial analytic solutions, we are going to compare simulated results with experimental data that cover a wide range of explosive phenomena. The intent is to compare both different codes and different high explosives (HE) models. The first step is to test the products equation of state used for the HE models. For this purpose, the cylinder test, flyer plate and plate-push experiments are being used. These experiments sample different regimes in thermodynamic phase space: the CJ isentrope for the cylinder tests, the isentrope behind an overdriven detonation wave for the flyer plate experiment, and expansion following a reflected CJ detonation for the plate-push experiment, which is sensitive to the Gruneisen coefficient. The results of our findings for PBX 9501 are presented here.
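Because the first step above is testing the products equation of state, a compact evaluation of the widely used JWL form is sketched below. The coefficients are generic placeholders chosen only to make the code run, not PBX 9501 calibration values, and no claim is made that the cited codes use this exact parameterization.

```python
# JWL products equation of state: P(V, E) with V the relative volume (illustrative coefficients).
import numpy as np

def jwl_pressure(v_rel, e_vol, a, b, r1, r2, omega):
    """Standard JWL form: P = A(1 - w/(R1 V))e^{-R1 V} + B(1 - w/(R2 V))e^{-R2 V} + w E / V."""
    return (a * (1.0 - omega / (r1 * v_rel)) * np.exp(-r1 * v_rel)
            + b * (1.0 - omega / (r2 * v_rel)) * np.exp(-r2 * v_rel)
            + omega * e_vol / v_rel)

# Placeholder coefficients (A, B in GPa; E as energy per unit initial volume, GPa).
A, B, R1, R2, W = 800.0, 15.0, 4.5, 1.3, 0.35
for v in (0.6, 1.0, 2.0, 4.0, 7.0):
    p = jwl_pressure(v, e_vol=8.0, a=A, b=B, r1=R1, r2=R2, omega=W)
    print(f"V = {v:>3.1f}  P = {p:7.2f} GPa")
```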
Introspection of subjective feelings is sensitive and specific.
Questienne, Laurence; van Dijck, Jean-Philippe; Gevers, Wim
2018-02-01
Contrary to behaviorist ideas, recent studies suggest that introspection can be accurate and reliable. However, an unresolved question is whether people are able to report specific aspects of their phenomenal experience, or whether they report more general nonspecific experiences. To address this question, we investigated the sensitivity and validity of our introspection for different types of conflict. Taking advantage of the congruency sequence effect, we dissociated response conflict while keeping visual conflict unchanged in a Stroop and in a priming task. Participants were subsequently asked to report on either their experience of urge to err or on their feeling of visual conflict. Depending on the focus of the introspection, subjective reports specifically followed either the response conflict or the visual conflict. These results demonstrate that our introspective reports can be sensitive and that we are able to dissociate specific aspects of our phenomenal experiences in a valid manner. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Further cross-cultural validation of the theory of mental self-government.
Zhang, L F
1999-03-01
This study was designed to achieve two objectives. The 1st was to investigate the cross-cultural validity of the Thinking Styles Inventory (TSI; R. J. Sternberg & R. K. Wagner, 1992), which is based on the theory of mental self-government (R. J. Sternberg, 1988, 1990, 1997). The 2nd was to examine the relationships between thinking styles as assessed by the TSI and a number of student characteristics, including age, gender, college class level, work experience, and travel experience. One hundred fifty-one students from the University of Hong Kong participated in the study. Results indicated that the thinking styles evaluated by the TSI could be identified among the participants. Moreover, there were significant relationships between certain thinking styles, especially creativity-relevant styles and 3 student characteristics: age, work experience, and travel experience. Implications of these findings for teaching and learning in and outside the classroom are discussed.
Takase, Miyuki; Imai, Takiko; Uemura, Chizuru
2016-06-01
This paper examines the psychometric properties of the Learning Experience Scale. A survey method was used to collect data from a total of 502 nurses. Data were analyzed by factor analysis and the known-groups technique to examine the construct validity of the scale. In addition, internal consistency was evaluated by Cronbach's alpha, and stability was examined by test-retest correlation. Factor analysis showed that the Learning Experience Scale consisted of five factors: learning from practice, others, training, feedback, and reflection. The scale also had the power to discriminate between nurses with high and low levels of nursing competence. The internal consistency and the stability of the scale were also acceptable. The Learning Experience Scale is a valid and reliable instrument, and helps organizations to effectively design learning interventions for nurses. © 2015 Wiley Publishing Asia Pty Ltd.
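For readers unfamiliar with the internal-consistency statistic cited here, Cronbach's alpha can be computed directly from an items-by-respondents score matrix; a minimal sketch with made-up Likert data (not the study's data):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses of 6 nurses to a 5-item subscale (1-5 Likert)
scores = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [4, 4, 4, 5, 4],
    [3, 4, 3, 3, 3],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```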
Traverse Planning Experiments for Future Planetary Surface Exploration
NASA Technical Reports Server (NTRS)
Hoffman, Stephen J.; Voels, Stephen A.; Mueller, Robert P.; Lee, Pascal C.
2012-01-01
The purpose of the investigation is to evaluate methodology and data requirements for a remotely assisted robotic traverse of an extraterrestrial planetary surface in support of a human exploration program, to assess opportunities for in-transit science operations, and to validate landing site survey and selection techniques during a planetary surface exploration mission analog demonstration at Haughton Crater on Devon Island, Nunavut, Canada. Additional objectives are to: 1) identify the quality of remote observation data sets (i.e., surface imagery from orbit) required for effective pre-traverse route planning, and determine whether surface-level data (i.e., onboard robotic imagery or other sensor data) are required for a successful traverse and whether additional surface-level data can improve traverse efficiency or probability of success (TRPF Experiment); 2) evaluate the feasibility of, and techniques for, conducting opportunistic science investigations during this type of traverse (OSP Experiment); and 3) assess the utility of a remotely assisted robotic vehicle for landing site validation surveys (LSV Experiment).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scaglione, John M; Mueller, Don; Wagner, John C
2011-01-01
One of the most significant remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation - in particular, the availability and use of applicable measured data to support validation, especially for fission products. Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of clear technical basis or approach for use of the data. U.S. Nuclear Regulatory Commission (NRC) staff have noted that the rationale for restricting their Interim Staff Guidance on burnup credit (ISG-8) to actinide-only is based largely on the lack of clear, definitive experiments that can be used to estimate the bias and uncertainty for computational analyses associated with using burnup credit. To address the issue of validation, the NRC initiated a project with the Oak Ridge National Laboratory to (1) develop and establish a technically sound validation approach (both depletion and criticality) for commercial spent nuclear fuel (SNF) criticality safety evaluations based on best-available data and methods and (2) apply the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The purpose of this paper is to describe the criticality (k_eff) validation approach, and resulting observations and recommendations. Validation of the isotopic composition (depletion) calculations is addressed in a companion paper at this conference. For criticality validation, the approach is to utilize (1) available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion (HTC) program to support validation of the principal actinides and (2) calculated sensitivities, nuclear data uncertainties, and the limited available fission product LCE data to predict and verify individual biases for relevant minor actinides and fission products. This paper (1) provides a detailed description of the approach and its technical bases, (2) describes the application of the approach for representative pressurized water reactor and boiling water reactor safety analysis models to demonstrate its usage and applicability, (3) provides reference bias results based on the prerelease SCALE 6.1 code package and ENDF/B-VII nuclear cross-section data, and (4) provides recommendations for application of the results and methods to other code and data packages.
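For orientation, the criticality bias referred to above is conventionally derived from a suite of benchmark critical experiments along the following lines; this is a simplified sketch, and the actual approach applies trending analyses and statistical tolerance treatments that are not reproduced here:

```latex
% Bias from N benchmark experiments, as a (possibly weighted) mean of the
% difference between calculated and benchmark k_eff; positive bias is
% normally not credited.
\beta = \frac{\sum_{i=1}^{N} w_i\,\bigl(k_{\mathrm{calc},i} - k_{\mathrm{bench},i}\bigr)}{\sum_{i=1}^{N} w_i},
\qquad
\mathrm{USL} = 1 + \beta - \Delta\beta - \Delta_{\mathrm{adm}},
```

where Δβ is the bias uncertainty and Δ_adm an administrative margin; the calculated k_eff of the application, including its own uncertainties, is then required to stay below the upper subcritical limit (USL).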
Design and validation of the Health Professionals' Attitudes Toward the Homeless Inventory (HPATHI).
Buck, David S; Monteiro, F Marconi; Kneuper, Suzanne; Rochon, Donna; Clark, Dana L; Melillo, Allegra; Volk, Robert J
2005-01-10
Recent literature has called for humanistic care of patients and for medical schools to begin incorporating humanism into medical education. To assess the attitudes of health-care professionals toward homeless patients and to demonstrate how those attitudes might impact optimal care, we developed and validated a new survey instrument, the Health Professional Attitudes Toward the Homeless Inventory (HPATHI). An instrument that measures providers' attitudes toward the homeless could offer meaningful information for the design and implementation of educational activities that foster more compassionate homeless health care. Our intention was to describe the process of designing and validating the new instrument and to discuss the usefulness of the instrument for assessing the impact of educational experiences that involve working directly with the homeless on the attitudes, interest, and confidence of medical students and other health-care professionals. The study consisted of three phases: identifying items for the instrument; pilot testing the initial instrument with a group of 72 third-year medical students; and modifying and administering the instrument in its revised form to 160 health-care professionals and third-year medical students. The instrument was analyzed for reliability and validity throughout the process. A 19-item version of the HPATHI had good internal consistency with a Cronbach's alpha of 0.88 and a test-retest reliability coefficient of 0.69. The HPATHI showed good concurrent validity, and respondents with more than one year of experience with homeless patients scored significantly higher than did those with less experience. Factor analysis yielded three subscales: Personal Advocacy, Social Advocacy, and Cynicism. The HPATHI demonstrated strong reliability for the total scale and satisfactory test-retest reliability. Extreme group comparisons suggested that experience with the homeless rather than medical training itself could affect health-care professionals' attitudes toward the homeless. This could have implications for the evaluation of medical school curricula.
NASA Technical Reports Server (NTRS)
Sebok, Angelia; Wickens, Christopher; Sargent, Robert
2015-01-01
One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. Two concerns with modeling operator performance are that models need to be realistic and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models developed from a review of the existing human performance literature and targeted experimental studies, and by performing an empirical validation of key model predictions.
NASA Astrophysics Data System (ADS)
Sutherland, Herbert J.
1988-08-01
Sandia National Laboratories has erected a research-oriented, 34-meter-diameter Darrieus vertical-axis wind turbine near Bushland, Texas. This machine, designated the Sandia 34-m VAWT Test Bed, is equipped with a large array of strain gauges placed at critical positions on the blades. This manuscript details a series of four-point bend experiments conducted to validate the output of the blade strain gauge circuits. The output of a particular gauge circuit is validated by comparing it to equivalent gauge circuits (in this stress state) and to theoretical predictions. With only a few exceptions, the difference between measured and predicted strain values for a gauge circuit was found to be of the order of the estimated repeatability of the measurement system.
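The theoretical predictions for such a comparison typically come from elementary beam theory; for a four-point bend with total load P applied through two points, each a distance a from the nearer support, the bending moment between the load points is constant and the surface strain at the gauge follows (a minimal sketch; the real blade sections are built-up extrusions, so the section properties are those of the actual cross section):

```latex
M = \frac{P}{2}\,a,
\qquad
\varepsilon = \frac{M\,c}{E\,I},
```

where c is the distance from the neutral axis to the gauge, E the elastic modulus, and I the area moment of inertia of the section.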
Experimental validation of predicted cancer genes using FRET
NASA Astrophysics Data System (ADS)
Guala, Dimitri; Bernhem, Kristoffer; Ait Blal, Hammou; Jans, Daniel; Lundberg, Emma; Brismar, Hjalmar; Sonnhammer, Erik L. L.
2018-07-01
Huge amounts of data are generated in genome-wide experiments designed to investigate diseases with complex genetic causes. Follow-up of all potential leads produced by such experiments is currently cost-prohibitive and time-consuming. Gene prioritization tools alleviate these constraints by directing further experimental efforts towards the most promising candidate targets. Recently a gene prioritization tool called MaxLink was shown to outperform other widely used state-of-the-art prioritization tools in a large-scale in silico benchmark. An experimental validation of predictions made by MaxLink has, however, been lacking. In this study we used Fluorescence Resonance Energy Transfer, an established experimental technique for detection of protein-protein interactions, to validate potential cancer genes predicted by MaxLink. Our results provide confidence in the use of MaxLink for selection of new targets in the battle with polygenic diseases.
Aben, Ilse; Tanzi, Cristina P; Hartmann, Wouter; Stam, Daphne M; Stammes, Piet
2003-06-20
A method is presented for in-flight validation of space-based polarization measurements based on approximation of the direction of polarization of scattered sunlight by the Rayleigh single-scattering value. This approximation is verified by simulations of radiative transfer calculations for various atmospheric conditions. The simulations show locations along an orbit where the scattering geometries are such that the intensities of the parallel and orthogonal polarization components of the light are equal, regardless of the observed atmosphere and surface. The method can be applied to any space-based instrument that measures the polarization of reflected solar light. We successfully applied the method to validate the Global Ozone Monitoring Experiment (GOME) polarization measurements. The error in the GOME's three broadband polarization measurements appears to be approximately 1%.
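The Rayleigh single-scattering quantities invoked by the method have a simple closed form; for scattering angle Θ and neglecting depolarization, the standard results are (stated here only for orientation):

```latex
I_{\parallel}(\Theta) \propto \cos^{2}\Theta,
\qquad
I_{\perp}(\Theta) \propto 1,
\qquad
P(\Theta) = \frac{I_{\perp}-I_{\parallel}}{I_{\perp}+I_{\parallel}}
          = \frac{1-\cos^{2}\Theta}{1+\cos^{2}\Theta},
```

with ∥ and ⊥ taken relative to the scattering plane and the single-scattering polarization direction perpendicular to that plane. The special geometries exploited in the validation are those where this polarization direction lies at 45° to the instrument's polarization axes, so the two measured components become equal regardless of the atmosphere or surface below.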
Fun and games in reviewing neonatal emergency care.
Gordon, D W; Brown, H N
1995-04-01
To develop a game-based review instrument for use by newborn caregivers in preparing for emergency situations. One hundred and one test questions covering pathophysiology, resuscitation, and medications were developed. The questions then underwent expert and peer review, psychometric testing for content validity and test-retest reliability, and a game trial. The needs of adult learners are different from those of other learners. The gaming format uses knowledge gained through experience and provides an avenue for validating knowledge and sharing experiences. This format has been found effective for review and reinforcement of facts. Twelve nurses participated in a trial game and completed a written evaluation using a Likert scale. The Neonatal Emergency Trivia Game is an effective tool for reviewing material related to neonatal emergency care decisions. Additional testing with a larger group would strengthen validity and reliability data.
NASA Astrophysics Data System (ADS)
Hilmy, N.; Febrida, A.; Basril, A.
2007-11-01
The problem with applying International Standard (ISO) 11137 to validation of the radiation sterilization dose (RSD) for tissue allografts is the limited and low number of uniform samples per production batch, i.e., products obtained from one donor. An allograft is a graft transplanted between two different individuals of the same species. The minimum number of uniform samples needed for a verification dose (VD) experiment at the selected sterility assurance level (SAL) per production batch according to the IAEA Code is 20, i.e., 10 for bioburden determination and the remaining 10 for the sterilization test. Three methods of the IAEA Code have been used for validation of the RSD: method A1, which is a modification of method 1 of ISO 11137:1995; method B (ISO 13409:1996); and method C (AAMI TIR 27:2001). This paper describes VD experiments using uniform products obtained from one cadaver donor, i.e., cancellous bones and demineralized bone powders, and amnion grafts obtained from one living donor. Results of the verification dose experiments show that the RSD is 15.4 kGy for cancellous and demineralized bone grafts and 19.2 kGy for amnion grafts according to method A1, and 25 kGy according to methods B and C.
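For context, dose-setting and verification-dose arguments of this kind rest on the assumption of exponential microbial inactivation with dose; for an initial bioburden N0 with resistance characterized by a D10 value, the expected surviving population at dose D is (an illustrative relation only; the ISO/IAEA methods themselves work with tabulated standard distributions of resistances):

```latex
N(D) = N_{0}\cdot 10^{-D/D_{10}},
```

and the sterility assurance level at a given dose is this expected survivor count interpreted as the probability of a non-sterile unit (commonly 10^{-6} for terminally sterilized medical products, with verification-dose experiments demonstrating a less stringent SAL on a small sample).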
Dimension-based attention in visual short-term memory.
Pilling, Michael; Barrett, Doug J K
2016-07-01
We investigated how dimension-based attention influences visual short-term memory (VSTM). This was done through examining the effects of cueing a feature dimension in two perceptual comparison tasks (change detection and sameness detection). In both tasks, a memory array and a test array consisting of a number of colored shapes were presented successively, interleaved by a blank interstimulus interval (ISI). In Experiment 1 (change detection), the critical event was a feature change in one item across the memory and test arrays. In Experiment 2 (sameness detection), the critical event was the absence of a feature change in one item across the two arrays. Auditory cues indicated the feature dimension (color or shape) of the critical event with 80 % validity; the cues were presented either prior to the memory array, during the ISI, or simultaneously with the test array. In Experiment 1, the cue validity influenced sensitivity only when the cue was given at the earliest position; in Experiment 2, the cue validity influenced sensitivity at all three cue positions. We attributed the greater effectiveness of top-down guidance by cues in the sameness detection task to the more active nature of the comparison process required to detect sameness events (Hyun, Woodman, Vogel, Hollingworth, & Luck, Journal of Experimental Psychology: Human Perception and Performance, 35; 1140-1160, 2009).
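Sensitivity in change/sameness detection tasks of this kind is commonly summarized with a signal-detection measure such as d′ computed from hit and false-alarm rates; a minimal sketch with hypothetical trial counts (not the study's data):

```python
from scipy.stats import norm

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Signal-detection sensitivity, with a small correction so that
    hit/false-alarm rates of exactly 0 or 1 do not give infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for validly vs. invalidly cued trials
print(d_prime(42, 8, 6, 44))   # valid-cue condition
print(d_prime(33, 17, 9, 41))  # invalid-cue condition
```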
Sharma, Ram C; Hara, Keitarou; Hirayama, Hidetake
2017-01-01
This paper presents the evaluation of a number of machine learning classifiers for the discrimination of vegetation physiognomic classes using satellite-based time series of surface reflectance data. Six vegetation physiognomic classes were considered: Evergreen Coniferous Forest, Evergreen Broadleaf Forest, Deciduous Coniferous Forest, Deciduous Broadleaf Forest, Shrubs, and Herbs. Rich feature data were prepared from the satellite time series for the discrimination and cross-validation of the vegetation physiognomic types using a machine learning approach. A set of machine learning experiments comprising a number of supervised classifiers with different model parameters was conducted to assess how the discrimination of vegetation physiognomic classes varies with classifiers, input features, and ground truth data size. The performance of each experiment was evaluated using the 10-fold cross-validation method. The experiment using the Random Forests classifier provided the highest overall accuracy (0.81) and kappa coefficient (0.78). However, accuracy metrics did not vary much across experiments. Accuracy metrics were found to be very sensitive to input features and the size of the ground truth data. The results obtained in this research are expected to be useful for improving vegetation physiognomic mapping in Japan.
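A minimal sketch of the kind of evaluation described (10-fold cross-validation of a Random Forests classifier, scored with overall accuracy and Cohen's kappa); the feature matrix X and label vector y below are random placeholders standing in for the time-series reflectance features and physiognomic classes, which are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Placeholder data: 600 samples, 24 time-series reflectance features, 6 classes
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 24))
y = rng.integers(0, 6, size=600)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=cv)   # out-of-fold predictions

print("overall accuracy:", accuracy_score(y, y_pred))
print("kappa:", cohen_kappa_score(y, y_pred))
```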
Kinetic modelling of anaerobic hydrolysis of solid wastes, including disintegration processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
García-Gen, Santiago; Sousbie, Philippe; Rangaraj, Ganesh
2015-01-15
Highlights:
• Fractionation of solid wastes into readily and slowly biodegradable fractions.
• Kinetic coefficients estimation from mono-digestion batch assays.
• Validation of kinetic coefficients with a co-digestion continuous experiment.
• Simulation of batch and continuous experiments with an ADM1-based model.
Abstract: A methodology to estimate disintegration and hydrolysis kinetic parameters of solid wastes and validate an ADM1-based anaerobic co-digestion model is presented. Kinetic parameters of the model were calibrated from batch reactor experiments treating fruit and vegetable wastes individually (among other residues) following a new protocol for batch tests. In addition, decoupled disintegration kinetics for readily and slowly biodegradable fractions of solid wastes was considered. Calibrated parameters from batch assays of individual substrates were used to validate the model for a semi-continuous co-digestion operation treating 5 fruit and vegetable wastes simultaneously. The semi-continuous experiment was carried out in a lab-scale CSTR reactor for 15 weeks at an organic loading rate ranging between 2.0 and 4.7 g VS/L d. The model (built in Matlab/Simulink) fit the experimental results in both batch and semi-continuous mode to a large extent and served as a powerful tool to simulate the digestion or co-digestion of solid wastes.
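A minimal sketch of the decoupled disintegration/hydrolysis idea described above, with separate first-order rate constants for readily and slowly biodegradable particulate fractions; parameter values are hypothetical, and the actual model is a full ADM1 implementation in Matlab/Simulink:

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order breakdown of readily (Xr) and slowly (Xs) biodegradable particulate
# fractions into soluble substrate S, which is then consumed (uptake rate k_up).
k_r, k_s, k_up = 0.8, 0.15, 1.2   # 1/d, hypothetical rate constants

def rhs(t, y):
    Xr, Xs, S = y
    return [-k_r * Xr,
            -k_s * Xs,
            k_r * Xr + k_s * Xs - k_up * S]

sol = solve_ivp(rhs, (0, 30), [10.0, 10.0, 0.0])   # 30-day batch, g VS/L initial
print(sol.y[:, -1])   # remaining fractions and soluble substrate at day 30
```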
An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle
NASA Astrophysics Data System (ADS)
Wang, Yue; Gao, Dan; Mao, Xuming
2018-03-01
A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that “requirement is the origin, design is the basis”. So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method for the requirements are introduced in detail, with the hope of providing experience applicable to other civil jet product designs.
THE VALIDITY OF HUMAN AND COMPUTERIZED WRITING ASSESSMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring
2005-09-01
This paper summarizes an experiment designed to assess the validity of essay grading between holistic and analytic human graders and a computerized grader based on latent semantic analysis. The validity of a grade was gauged by the extent to which the student's knowledge of the topic correlated with the grader's expert knowledge. To assess knowledge, Pathfinder networks were generated by the student essay writers, the holistic and analytic graders, and the computerized grader. It was found that the computer-generated grades more closely matched this definition of valid grading than did the human-generated grades.
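A minimal sketch of an LSA-style grader of the sort described: essays and a reference text are embedded in a reduced semantic space and scored by cosine similarity to the reference. The toy corpus below is invented, and the actual grading system and Pathfinder-network analysis are not reproduced here:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reference = "plate tectonics describes the movement of lithospheric plates"
essays = [
    "the lithosphere is broken into plates that slowly move over the mantle",
    "volcanoes are mountains that sometimes erupt with lava and ash",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform([reference] + essays)

svd = TruncatedSVD(n_components=2, random_state=0)   # reduced "semantic" space
Z = svd.fit_transform(X)

scores = cosine_similarity(Z[0:1], Z[1:])            # similarity of each essay to reference
print(scores.round(3))
```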
e-Learning Continuance Intention: Moderating Effects of User e-Learning Experience
ERIC Educational Resources Information Center
Lin, Kan-Min
2011-01-01
This study explores the determinants of the e-learning continuance intention of users with different levels of e-learning experience and examines the moderating effects of e-learning experience on the relationships among the determinants. The research hypotheses are empirically validated using the responses received from a survey of 256 users. The…
77 FR 52131 - FY 2012 Discretionary Funding Opportunity: Paul S. Sarbanes Transit in Parks Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-28
... application was validated or rejected by the system. If interested parties experience difficulties at any... experience and the protection of sensitive natural and cultural resources. Since 2006, the Transit in Parks... impacts of automobile traffic congestion, improve the safety and recreational experience of visitors, and...
Item generation in the development of an inpatient experience questionnaire: a qualitative study
2013-01-01
Background Patient experience is a key feature of quality improvement in modern health-care delivery. Measuring patient experience is one of several tools used to assess and monitor the quality of health services. This study aims to develop a tool for assessing patient experience with inpatient care in public hospitals in Hong Kong. Methods Based on the General Inpatient Questionnaire (GIQ) framework of the Care Quality Commission as a discussion guide, a qualitative study involving focus group discussions and in-depth individual interviews with patients was employed to develop a tool for measuring inpatient experience in Hong Kong. Results All participants agreed that a patient satisfaction survey is an important platform for collecting patients’ views on improving the quality of health-care services. Findings of the focus group discussions and in-depth individual interviews identified nine key themes as important hospital quality indicators: prompt access, information provision, care and involvement in decision making, physical and emotional needs, coordination of care, respect and privacy, environment and facilities, handling of patient feedback, and overall care from health-care professionals and quality of care. Privacy, complaint mechanisms, patient involvement, and information provision were further highlighted as particularly important areas for item revision by the in-depth individual interviews. Thus, the initial version of the Hong Kong Inpatient Experience Questionnaire (HKIEQ), comprising 58 core items under nine themes, was developed. Conclusions A set of dimensions and core items of the HKIEQ was developed and the instrument will undergo validity and reliability tests through a validation survey. A valid and reliable tool is important in accurately assessing patient experience with care delivery in hospitals to improve the quality of health-care services. PMID:23835186
2003-03-01
Different?," Jour. of Experimental & Theoretical Artificial Intelligence, Special Issue on Al for Systems Validation and Verification, 12(4), 2000, pp...Hamilton, D., " Experiences in Improving the State of Practice in Verification and Validation of Knowledge-Based Systems," Workshop Notes of the AAAI...Unsuspected Power of the Standard Turing Test," Jour. of Experimental & Theoretical Artificial Intelligence., 12, 2000, pp3 3 1-3 4 0 . [30] Gaschnig
The Use of Virtual Reality in the Study of People's Responses to Violent Incidents.
Rovira, Aitor; Swapp, David; Spanlang, Bernhard; Slater, Mel
2009-01-01
This paper reviews experimental methods for the study of the responses of people to violence in digital media, and in particular considers the issues of internal validity and ecological validity or generalisability of results to events in the real world. Experimental methods typically involve a significant level of abstraction from reality, with participants required to carry out tasks that are far removed from violence in real life, and hence their ecological validity is questionable. On the other hand studies based on field data, while having ecological validity, cannot control multiple confounding variables that may have an impact on observed results, so that their internal validity is questionable. It is argued that immersive virtual reality may provide a unification of these two approaches. Since people tend to respond realistically to situations and events that occur in virtual reality, and since virtual reality simulations can be completely controlled for experimental purposes, studies of responses to violence within virtual reality are likely to have both ecological and internal validity. This depends on a property that we call 'plausibility' - including the fidelity of the depicted situation with prior knowledge and expectations. We illustrate this with data from a previously published experiment, a virtual reprise of Stanley Milgram's 1960s obedience experiment, and also with pilot data from a new study being developed that looks at bystander responses to violent incidents.
Validation of gamma irradiator controls for quality and regulatory compliance
NASA Astrophysics Data System (ADS)
Harding, Rorry B.; Pinteric, Francis J. A.
1995-09-01
Since 1978 the U.S. Food and Drug Administration (FDA) has had both the legal authority and the Current Good Manufacturing Practice (CGMP) regulations in place to require irradiator owners who process medical devices to produce evidence of Irradiation Process Validation. One of the key components of Irradiation Process Validation is the validation of the irradiator controls. However, it is only recently that FDA audits have focused on this component of the process validation. What is Irradiator Control System Validation? What constitutes evidence of control? How do owners obtain evidence? What is the irradiator supplier's role in validation? How does the ISO 9000 Quality Standard relate to the FDA's CGMP requirement for evidence of Control System Validation? This paper presents answers to these questions based on the recent experiences of Nordion's engineering and product management staff who have worked with several US-based irradiator owners. This topic — Validation of Irradiator Controls — is a significant regulatory compliance and operations issue within the irradiator suppliers' and users' community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mace, Gerald
The Small Particles in Cirrus (SPartICus) campaign took place from January through June 2010, and the Storm Peak Lab Cloud Property Validation Experiment (StormVEx) took place from November 2010 through April 2011. The PI of this project, Dr. Gerald Mace, had the privilege of being the lead on both of these campaigns. The essence of the project reported on here was to conduct the preliminary work necessary to bring the field data sets to a point where they could be used for their intended science purposes.
Analytic Modeling of Pressurization and Cryogenic Propellant Conditions for Lunar Landing Vehicle
NASA Technical Reports Server (NTRS)
Corpening, Jeremy
2010-01-01
This slide presentation reviews the development, validation, and application of the model to the Lunar Landing Vehicle. The model, named the Computational Propellant and Pressurization Program -- One Dimensional (CPPPO), is used here to model cryogenic propellant conditions in the Altair lunar lander. Validation of CPPPO was accomplished via comparison to an existing analytic model (i.e., ROCETS), a flight experiment, and ground experiments. The model was used to perform a parametric analysis of pressurant conditions for the Lunar Landing Vehicle and to examine the results of unequal tank pressurization and draining for multiple tank designs.
NASA Technical Reports Server (NTRS)
Chien, Steve; Doubleday, Joshua; Ortega, Kevin; Tran, Daniel; Bellardo, John; Williams, Austin; Puig-Suari, Jordi; Crum, Gary; Flatley, Thomas
2012-01-01
The Intelligent Payload Experiment (IPEX) is a cubesat manifested for launch in October 2013 that will flight validate autonomous operations for onboard instrument processing and product generation for the Intelligent Payload Module (IPM) of the Hyperspectral Infra-red Imager (HyspIRI) mission concept. We first describe the ground and flight operations concept for HyspIRI IPM operations. We then describe the ground and flight operations concept for the IPEX mission and how that will validate HyspIRI IPM operations. We then detail the current status of the mission and outline the schedule for future development.
A Surrogate Approach to the Experimental Optimization of Multielement Airfoils
NASA Technical Reports Server (NTRS)
Otto, John C.; Landman, Drew; Patera, Anthony T.
1996-01-01
The incorporation of experimental test data into the optimization process is accomplished through the use of Bayesian-validated surrogates. In the surrogate approach, a surrogate for the experiment (e.g., a response surface) serves in the optimization process. The validation step of the framework provides a qualitative assessment of the surrogate quality and bounds the surrogate-for-experiment error on designs "near" surrogate-predicted optimal designs. The utility of the framework is demonstrated through its application to the experimental selection of the trailing-edge flap position to achieve a design lift coefficient for a three-element airfoil.
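A minimal sketch of the surrogate idea: fit a simple response surface to experimental (flap position, lift coefficient) pairs and optimize on the surrogate rather than on the experiment itself. The data are hypothetical, and the cited framework additionally bounds the surrogate-for-experiment error near the predicted optimum, which is not shown here:

```python
import numpy as np

# Hypothetical experimental data: flap deflection (deg) vs. lift coefficient
x = np.array([20.0, 25.0, 30.0, 35.0, 40.0])
cl = np.array([2.61, 2.78, 2.86, 2.83, 2.70])

# Quadratic response surface as the surrogate for the experiment
a, b, c = np.polyfit(x, cl, deg=2)
x_opt = -b / (2 * a)                    # stationary point of the quadratic
cl_opt = np.polyval([a, b, c], x_opt)

print(f"surrogate-predicted optimum flap setting: {x_opt:.1f} deg, CL = {cl_opt:.3f}")
```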
Bor, Jacob; Geldsetzer, Pascal; Venkataramani, Atheendar; Bärnighausen, Till
2015-11-01
Randomized, population-representative trials of clinical interventions are rare. Quasi-experiments have been used successfully to generate causal evidence on the cascade of HIV care in a broad range of real-world settings. Quasi-experiments exploit exogenous, or quasi-random, variation occurring naturally in the world or because of an administrative rule or policy change to estimate causal effects. Well designed quasi-experiments have greater internal validity than typical observational research designs. At the same time, quasi-experiments may also have potential for greater external validity than experiments and can be implemented when randomized clinical trials are infeasible or unethical. Quasi-experimental studies have established the causal effects of HIV testing and initiation of antiretroviral therapy on health, economic outcomes and sexual behaviors, as well as indirect effects on other community members. Recent quasi-experiments have evaluated specific interventions to improve patient performance in the cascade of care, providing causal evidence to optimize clinical management of HIV. Quasi-experiments have generated important data on the real-world impacts of HIV testing and treatment and on interventions to improve the cascade of care. With the growth in large-scale clinical and administrative data, quasi-experiments enable rigorous evaluation of policies implemented in real-world settings.
ERIC Educational Resources Information Center
Leadbitter, Kathy; Aldred, Catherine; McConachie, Helen; Le Couteur, Ann; Kapadia, Dharmi; Charman, Tony; Macdonald, Wendy; Salomone, Erica; Emsley, Richard; Green, Jonathan; Barrett, Barbara; Barron, Sam; Beggs, Karen; Blazey, Laura; Bourne, Katy; Byford, Sarah; Cole-Fletcher, Rachel; Collino, Julia; Colmer, Ruth; Cutress, Anna; Gammer, Isobel; Harrop, Clare; Houghton, Tori; Howlin, Pat; Hudry, Kristelle; Leach, Sue; Maxwell, Jessica; Parr, Jeremy; Pickles, Andrew; Randles, Sarah; Slonims, Vicky; Taylor, Carol; Temple, Kathryn; Tobin, Hannah; Vamvakas, George; White, Lydia
2018-01-01
There is a lack of measures that reflect the intervention priorities of parents of children with autism spectrum disorder (ASD) and that assess the impact of interventions on family experience and quality of life. The Autism Family Experience Questionnaire (AFEQ) was developed through focus groups and online consultation with parents, and…
Blast Load Simulator Experiments for Computational Model Validation: Report 2
2017-02-01
... repeatability. The uncertainty in the experimental pressures and impulses was evaluated by computing 95% confidence intervals on the results. DISCLAIMER: The ... Experiment uncertainty: The uncertainty in the experimental pressure and impulse was evaluated for the five replicate experiments for which, as closely as ... comparisons were made among the replicated experiments to evaluate repeatability. The uncertainty in the experimental pressures and impulses was ...
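A minimal sketch of the uncertainty treatment described (a 95% confidence interval on the mean of replicate measurements, using the Student t distribution); the values are hypothetical, not the report's:

```python
import numpy as np
from scipy import stats

# Hypothetical peak reflected pressures (kPa) from five replicate experiments
peak_pressure = np.array([412.0, 398.5, 405.2, 417.8, 401.1])

mean = peak_pressure.mean()
sem = stats.sem(peak_pressure)   # standard error of the mean
ci = stats.t.interval(0.95, len(peak_pressure) - 1, loc=mean, scale=sem)

print(f"mean = {mean:.1f} kPa, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f}) kPa")
```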
The Validity of Communication Experiments Using Human Subjects: A Review
ERIC Educational Resources Information Center
Rossiter, Charles M.
1976-01-01
Reviews sixty-eight experiments published in various journals in 1973 and 1974 and concludes that communication experimentation may be severely limited by the nature of the subjects studied and the inappropriate handling of experimental reactivity. (MH)
NASA Technical Reports Server (NTRS)
Benard, Doug; Dorais, Gregory A.; Gamble, Ed; Kanefsky, Bob; Kurien, James; Millar, William; Muscettola, Nicola; Nayak, Pandu; Rouquette, Nicolas; Rajan, Kanna;
2000-01-01
Remote Agent (RA) is a model-based, reusable artificial intelligence (AI) software system that enables goal-based spacecraft commanding and robust fault recovery. RA was flight-validated during an experiment on board DS1 between May 17th and May 21st, 1999.
Development of a questionnaire for assessing the childbirth experience (QACE).
Carquillat, Pierre; Vendittelli, Françoise; Perneger, Thomas; Guittier, Marie-Julia
2017-08-30
Because of its potential impact on women's psychological health, assessing women's perceptions of their childbirth experience is important. The aim of this study was to develop a multidimensional self-report questionnaire to evaluate the childbirth experience. Factors influencing the childbirth experience were identified from a literature review and the results of a previous qualitative study. A total of 25 items were combined from existing instruments or created de novo. A draft version was pilot-tested for face validity with 30 women and submitted for evaluation of its construct validity to 477 primiparous women at one month post-partum. Recruitment took place in two obstetric clinics at Swiss and French university hospitals. To evaluate the content validity, we compared item responses to general childbirth experience assessments on a numeric 0 to 10 rating scale, dichotomizing the assessment scores into two groups: "0 to 7" and "8 to 10". We performed an exploratory factor analysis to identify underlying dimensions. In total, 291 women completed the questionnaire (response rate = 61%). The responses to 22 items differed significantly between the 0 to 7 and 8 to 10 groups of the general childbirth experience assessments. The exploratory factor analysis yielded four sub-scales, labelled "relationship with staff" (4 items), "emotional status" (3 items), "first moments with the newborn" (3 items), and "feelings at one month postpartum" (3 items). All 4 scales had satisfactory internal consistency (alpha coefficients from 0.70 to 0.85). The full 25-item version can be used to analyse each item by itself, and the short 4-dimension version can be scored to summarize the general assessment of the childbirth experience. The Questionnaire for Assessing the Childbirth Experience (QACE) could be useful as a screening instrument to identify women with negative childbirth experiences. It can be used both as a research instrument in its short version and as a questionnaire for clinical practice in its full version.
Emotionality in growing pigs: is the open field a valid test?
Donald, Ramona D; Healy, Susan D; Lawrence, Alistair B; Rutherford, Kenneth M D
2011-10-24
The ability to assess emotionality is important within animal welfare research. Yet, for farm animals, few tests of emotionality have been well validated. Here we investigated the construct validity of behavioural measures of pig emotionality in an open-field test by manipulating the experiences of pigs in three ways. In Experiment One (pharmacological manipulation), pigs pre-treated with Azaperone, a drug used to reduce stress in commercial pigs, were more active, spent more time exploring and vocalised less than control pigs. In Experiment Two (social manipulation), pigs that experienced the open-field arena with a familiar companion were also more exploratory, spent less time behaviourally idle, and were less vocal than controls although to a lesser degree than in Experiment One. In Experiment Three (novelty manipulation), pigs experiencing the open field for a second time were less active, explored less and vocalised less than they had done in the first exposure to the arena. A principal component analysis was conducted on data from all three trials. The first two components could be interpreted as relating to the form (cautious to exploratory) and magnitude (low to high arousal) of the emotional response to open-field testing. Based on these dimensions, in Experiment One, Azaperone pigs appeared to be less fearful than saline-treated controls. However, in Experiment Two, exposure to the arena with a conspecific did not affect the first two dimensions but did affect a third behavioural dimension, relating to oro-nasal exploration of the arena floor. In Experiment Three, repeat exposure altered the form but not the magnitude of emotional response: pigs were less exploratory in the second test. In conclusion, behavioural measures taken from pigs in an open-field test are sensitive to manipulations of their prior experience in a manner that suggests they reflect underlying emotionality. Behavioural measures taken during open-field exposure can be useful for making assessments of both pig emotionality and of their welfare. Copyright © 2011 Elsevier Inc. All rights reserved.
Hariharan, Prasanna; D’Souza, Gavin A.; Horner, Marc; Morrison, Tina M.; Malinauskas, Richard A.; Myers, Matthew R.
2017-01-01
A “credible” computational fluid dynamics (CFD) model has the potential to provide a meaningful evaluation of safety in medical devices. One major challenge in establishing “model credibility” is to determine the required degree of similarity between the model and experimental results for the model to be considered sufficiently validated. This study proposes a “threshold-based” validation approach that provides a well-defined acceptance criterion, which is a function of how close the simulation and experimental results are to the safety threshold, for establishing the model validity. The validation criterion developed following the threshold approach is not only a function of the Comparison Error, E (the difference between experiments and simulations) but also takes into account the risk to patient safety because of E. The method is applicable for scenarios in which a safety threshold can be clearly defined (e.g., the viscous shear-stress threshold for hemolysis in blood-contacting devices). The applicability of the new validation approach was tested on the FDA nozzle geometry. The context of use (COU) was to evaluate whether the instantaneous viscous shear stress in the nozzle geometry at Reynolds numbers (Re) of 3500 and 6500 was below the commonly accepted threshold for hemolysis. The CFD results (“S”) of velocity and viscous shear stress were compared with inter-laboratory experimental measurements (“D”). The uncertainties in the CFD and experimental results due to input parameter uncertainties were quantified following the ASME V&V 20 standard. The CFD models for both Re = 3500 and 6500 could not be sufficiently validated by performing a direct comparison between CFD and experimental results using the Student’s t-test. However, following the threshold-based approach, a Student’s t-test comparing |S-D| and |Threshold-S| showed that, relative to the threshold, the CFD and experimental datasets for Re = 3500 were statistically similar and the model could be considered sufficiently validated for the COU. However, for Re = 6500, at certain locations where the shear stress is close to the hemolysis threshold, the CFD model could not be considered sufficiently validated for the COU. Our analysis showed that the model could be sufficiently validated either by reducing the uncertainties in the experiments, simulations, and threshold, or by increasing the sample size for the experiments and simulations. The threshold approach can be applied to all types of computational models and provides an objective way of determining model credibility and evaluating medical devices. PMID:28594889
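A minimal sketch of the threshold-based comparison described above: at matched measurement locations, |S-D| is tested against |Threshold-S| with a Student's t-test. The shear-stress values and the threshold below are hypothetical, the uncertainty propagation per ASME V&V 20 is omitted, and the exact form of the test used in the paper (paired vs. grouped, one- vs. two-sided) is not reproduced here:

```python
import numpy as np
from scipy import stats

# Hypothetical viscous shear stress (Pa) at matched locations
S = np.array([48.0, 95.0, 143.0, 160.0, 122.0])    # simulation results
D = np.array([52.0, 101.0, 150.0, 171.0, 131.0])   # inter-laboratory experimental means
threshold = 600.0                                   # assumed hemolysis threshold (illustrative)

comparison_error = np.abs(S - D)             # |S - D|
margin_to_threshold = np.abs(threshold - S)  # |Threshold - S|

# Is the comparison error small relative to the margin to the safety threshold?
t_stat, p_value = stats.ttest_rel(comparison_error, margin_to_threshold)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```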
Using plastic instability to validate and test the strength law of a material under pressure
NASA Astrophysics Data System (ADS)
Bolis, Cyril; Counilh, Denis; Savale, Brice
2015-09-01
In dynamic experiments (pressures higher than 10 GPa, strain rates around 10⁴-10⁶ s⁻¹), metals are classically described using an equation of state and a strength law, the latter usually calibrated with data from compression or tension tests at low pressure (a few MPa) and low strain rates (less than 10³ s⁻¹). It consequently has to be extrapolated for dynamic experiments. Classical shock experiments do not allow a fine validation of the strength law because of its interaction with the equation of state. To achieve this aim, we propose to use a dedicated experiment. We started from the works of Barnes et al. (1974 and 1980), in which plastic instabilities initiated by a sinusoidal perturbation at the surface of the metal grow with the pressure. We adapted this principle to a new shape of initial perturbation and performed several experiments. We will present the setup and its use on a simple material: gold. We will detail how the interpretation of these experiments, coupled with previous characterization experiments, helps us to test the strength law of this material at high pressure and high strain rate.
Integrated multiscale biomaterials experiment and modelling: a perspective
Buehler, Markus J.; Genin, Guy M.
2016-01-01
Advances in multiscale models and computational power have enabled a broad toolset to predict how molecules, cells, tissues and organs behave and develop. A key theme in biological systems is the emergence of macroscale behaviour from collective behaviours across a range of length and timescales, and a key element of these models is therefore hierarchical simulation. However, this predictive capacity has far outstripped our ability to validate predictions experimentally, particularly when multiple hierarchical levels are involved. The state of the art represents careful integration of multiscale experiment and modelling, and yields not only validation, but also insights into deformation and relaxation mechanisms across scales. We present here a sampling of key results that highlight both challenges and opportunities for integrated multiscale experiment and modelling in biological systems. PMID:28981126
STORMVEX: The Storm Peak Lab Cloud Property Validation Experiment Science and Operations Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mace, J; Matrosov, S; Shupe, M
2010-09-29
During the Storm Peak Lab Cloud Property Validation Experiment (STORMVEX), a substantial correlative data set of remote sensing observations and direct in situ measurements from fixed and airborne platforms will be created in a winter season, mountainous environment. This will be accomplished by combining mountaintop observations at Storm Peak Laboratory and the airborne National Science Foundation-supported Colorado Airborne Multi-Phase Cloud Study campaign with collocated measurements from the second ARM Mobile Facility (AMF2). We describe in this document the operational plans and motivating science for this experiment, which includes deployment of AMF2 to Steamboat Springs, Colorado. The intensive STORMVEX field phase will begin nominally on 1 November 2010 and extend to approximately early April 2011.
NASA Technical Reports Server (NTRS)
Livingston, John M.
2004-01-01
NASA Cooperative Agreement NCC2-1251 provided funding from April 2001 through December 2003 for Mr. John Livingston of SRI International to collaborate with NASA Ames Research Center scientists and engineers in the acquisition and analysis of airborne sunphotometer measurements during various atmospheric field studies. Mr. Livingston participated in instrument calibrations at Mauna Loa Observatory, pre-mission hardware and software preparations, acquisition and analysis of sunphotometer measurements during the missions, and post-mission analysis of data and reporting of scientific findings. The atmospheric field missions included the spring 2001 Intensive of the Asian Pacific Regional Aerosol Characterization Experiment (ACE-Asia), the Asian Dust Above Monterey-2003 (ADAM-2003) experiment, and the winter 2003 Second SAGE III Ozone Loss and Validation Experiment (SOLVE II).
ERIC Educational Resources Information Center
Pike, Gary R.
1989-01-01
A study investigated the appropriateness of the American College Testing Program's College Outcome Measures Program, conducted at the University of Tennessee, Knoxville, by applying the criterion of construct validity. Results indicated that while the test primarily measures individual differences, it is also sensitive to the effects of higher…
ERIC Educational Resources Information Center
Regmi, Kapil Dev
2009-01-01
This study was an exploration on the various issues related to recognition, accreditation and validation of non-formal and informal learning to open up avenues for lifelong learning and continuing education in Nepal. The perceptions, experiences, and opinions of Nepalese Development Activists, Educational Administrators, Policy Actors and…
Development and Validation of a Mathematics Anxiety Scale for Students
ERIC Educational Resources Information Center
Ko, Ho Kyoung; Yi, Hyun Sook
2011-01-01
This study developed and validated a Mathematics Anxiety Scale for Students (MASS) that can be used to measure the level of mathematics anxiety that students experience in school settings and help them overcome anxiety and perform better in mathematics achievement. We conducted a series of preliminary analyses and panel reviews to evaluate quality…
ERIC Educational Resources Information Center
Shakoor, Sania; Jaffee, Sara R.; Andreou, Penelope; Bowes, Lucy; Ambler, Antony P.; Caspi, Avshalom; Moffitt, Terrie E.; Arseneault, Louise
2011-01-01
Stressful events early in life can affect children's mental health problems. Collecting valid and reliable information about children's bad experiences is important for research and clinical purposes. This study aimed to (1) investigate whether mothers and children provide valid reports of bullying victimization, (2) examine the inter-rater…
The Development and Validation of the Student Response System Benefit Scale
ERIC Educational Resources Information Center
Hooker, J. F.; Denker, K. J.; Summers, M. E.; Parker, M.
2016-01-01
Previous research into the benefits of student response systems (SRS) brought into the classroom revealed that SRS can contribute positively to student experiences. However, while the benefits of SRS have been conceptualized and operationalized into a widely cited scale, the validity of this scale had not been tested. Furthermore,…
Validation of Automated Scoring of Oral Reading
ERIC Educational Resources Information Center
Balogh, Jennifer; Bernstein, Jared; Cheng, Jian; Van Moere, Alistair; Townshend, Brent; Suzuki, Masanori
2012-01-01
A two-part experiment is presented that validates a new measurement tool for scoring oral reading ability. Data collected by the U.S. government in a large-scale literacy assessment of adults were analyzed by a system called VersaReader that uses automatic speech recognition and speech processing technologies to score oral reading fluency. In the…
Reliability and Validity of a Spanish Version of the Posttraumatic Growth Inventory
ERIC Educational Resources Information Center
Weiss, Tzipi; Berger, Roni
2006-01-01
Objectives. This study was designed to adapt and validate a Spanish translation of the Posttraumatic Growth Inventory (PTGI) for the assessment of positive life changes following the stressful experiences of immigration. Method. A cross-cultural equivalence model was used to pursue semantic, content, conceptual, and technical equivalence.…
Voices from Test-Takers: Further Evidence for Language Assessment Validation and Use
ERIC Educational Resources Information Center
Cheng, Liying; DeLuca, Christopher
2011-01-01
Test-takers' interpretations of validity as related to test constructs and test use have been widely debated in large-scale language assessment. This study contributes further evidence to this debate by examining 59 test-takers' perspectives in writing large-scale English language tests. Participants wrote about their test-taking experiences in…
ERIC Educational Resources Information Center
Rivera, Jennifer E.
2011-01-01
The State of New York Agriculture Science Education secondary program is required to have a certification exam for students to assess their agriculture science education experience as a Regent's requirement towards graduation. This paper focuses on the procedure used to develop and validate two content sub-test questions within a…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-24
... Program (NFJP), and Senior Community Service Employment Program (SCSEP). The current expiration date is May 31, 2014. Please note that the data submission processes within the new data validation software..., 2014). ETA believes the software will be completed and states will have experience with using it by the...
Targeting Low Career Confidence Using the Career Planning Confidence Scale
ERIC Educational Resources Information Center
McAuliffe, Garrett; Jurgens, Jill C.; Pickering, Worth; Calliotte, James; Macera, Anthony; Zerwas, Steven
2006-01-01
The authors describe the development and validation of a test of career planning confidence that makes possible the targeting of specific problem issues in employment counseling. The scale, developed using a rational process and the authors' experience with clients, was tested for criterion-related validity against 2 other measures. The scale…
ERIC Educational Resources Information Center
Raynor, Samantha L.
2017-01-01
Investigating the validity and applicability of student success theories for minority students uncovers the nuance and context of student experiences. This study examines the validity and applicability of student engagement and involvement for Latino students. Specifically, this study employs a critical quantitative lens to question current…
ERIC Educational Resources Information Center
Fan, Weiqiao; Zhang, Li-Fang; Watkins, David
2010-01-01
The study examined the incremental validity of thinking styles in predicting academic achievement after controlling for personality and achievement motivation in the hypermedia-based learning environment. Seventy-two Chinese college students from Shanghai, the People's Republic of China, took part in this instructional experiment. The…
Achieving external validity in home advantage research: generalizing crowd noise effects
Myers, Tony D.
2014-01-01
Different factors have been postulated to explain the home advantage phenomenon in sport. One plausible explanation investigated has been the influence of a partisan home crowd on sports officials' decisions. Different types of studies have tested the crowd influence hypothesis, including purposefully designed experiments. However, while experimental studies investigating crowd influences have high levels of internal validity, they suffer from a lack of external validity; decision-making in a laboratory setting bears little resemblance to decision-making in live sports settings. This focused review initially considers threats to external validity in applied and theoretical experimental research. It then discusses how such threats can be addressed using representative design, focusing on a recently published study that arguably provides the first experimental evidence of the impact of live crowd noise on officials in sport. The findings of this controlled experiment conducted in a real tournament setting offer a level of confirmation of the findings of laboratory studies in the area. Finally, directions for future research and the future conduct of crowd noise studies are discussed. PMID:24917839
Validation of spatial variability in downscaling results from the VALUE perfect predictor experiment
NASA Astrophysics Data System (ADS)
Widmann, Martin; Bedia, Joaquin; Gutiérrez, Jose Manuel; Maraun, Douglas; Huth, Radan; Fischer, Andreas; Keller, Denise; Hertig, Elke; Vrac, Mathieu; Wibig, Joanna; Pagé, Christian; Cardoso, Rita M.; Soares, Pedro MM; Bosshard, Thomas; Casado, Maria Jesus; Ramos, Petra
2016-04-01
VALUE is an open European network to validate and compare downscaling methods for climate change research. Within VALUE, a systematic validation framework has been developed to enable the assessment and comparison of both dynamical and statistical downscaling methods. In the first validation experiment the downscaling methods are validated in a setup with perfect predictors taken from the ERA-Interim reanalysis for the period 1997 - 2008. This makes it possible to investigate the isolated skill of downscaling methods without further error contributions from the large-scale predictors. One aspect of the validation is the representation of spatial variability. As part of the VALUE validation we have compared various properties of the spatial variability of downscaled daily temperature and precipitation with the corresponding properties in observations. We have used two validation datasets: one Europe-wide set of 86 stations and one higher-density network of 50 stations in Germany. Here we present results based on three approaches, namely the analysis of i.) correlation matrices, ii.) pairwise joint threshold exceedances, and iii.) regions of similar variability. We summarise the information contained in correlation matrices by calculating the dependence of the correlations on distance and deriving decorrelation lengths, as well as by determining the independent degrees of freedom. Probabilities for joint threshold exceedances and (where appropriate) non-exceedances are calculated for various user-relevant thresholds related, for instance, to extreme precipitation or frost and heat days. The dependence of these probabilities on distance is again characterised by calculating typical length scales that separate dependent from independent exceedances. Regionalisation is based on rotated Principal Component Analysis. The results indicate which downscaling methods are preferable if the dependence of variability at different locations is relevant for the user.
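As an illustration of the decorrelation-length diagnostic described in this abstract, the sketch below (Python; not part of the VALUE framework, and the station layout, series, and 300 km length scale are invented) fits an exponential decay, corr(d) ≈ exp(-d/L), to pairwise station correlations and returns the e-folding distance L.

import numpy as np
from scipy.optimize import curve_fit

def decorrelation_length(series, coords):
    """series: (n_times, n_stations) array; coords: (n_stations, 2) positions in km."""
    corr = np.corrcoef(series, rowvar=False)                 # station-by-station correlations
    iu = np.triu_indices(corr.shape[0], k=1)                 # unique station pairs
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)[iu]

    def decay(d, L):                                         # exponential decay model
        return np.exp(-d / L)

    (L,), _ = curve_fit(decay, dist, corr[iu], p0=[200.0])   # fit the e-folding distance
    return L

# Synthetic example: 50 stations in a 1000 km box with a true decorrelation length of 300 km.
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 1000.0, size=(50, 2))
D = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
series = rng.multivariate_normal(np.zeros(50), np.exp(-D / 300.0), size=4000)
print(decorrelation_length(series, coords))                  # should recover roughly 300 km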
A Hardware Model Validation Tool for Use in Complex Space Systems
NASA Technical Reports Server (NTRS)
Davies, Misty Dawn; Gundy-Burlet, Karen L.; Limes, Gregory L.
2010-01-01
One of the many technological hurdles that must be overcome in future missions is the challenge of validating as-built systems against the models used for design. We propose a technique composed of intelligent parameter exploration in concert with automated failure analysis as a scalable method for the validation of complex space systems. The technique is impervious to discontinuities and linear dependencies in the data, and can handle dimensionalities consisting of hundreds of variables over tens of thousands of experiments.
2016-08-15
HLA ISSN 2059-2302 A comparative reference study for the validation of HLA-matching algorithms in the search for allogeneic hematopoietic stem cell...from different international donor registries by challenging them with simulated input data and subsequently comparing the output. This experiment...original work is properly cited, the use is non-commercial and no modifications or adaptations are made. Comparative reference validation of HLA
1975-07-01
AD-A016 282 ASSESSING THE RELIABILITY AND VALIDITY OF MULTI-ATTRIBUTE UTILITY PROCEDURES: AN...more complicated and use data from actual experiments. Example 1: Analysis of raters making importance judgments about attributes. In MAU studies...generalizability of JUDGE as contrasted to ÜASC. To do this, we will reanalyze the data for each system separately. This is valid since the initial
Validation Results for LEWICE 2.0. [Supplement
NASA Technical Reports Server (NTRS)
Wright, William B.; Rutkowski, Adam
1999-01-01
Two CD-ROMs contain the experimental ice shapes and code predictions used for validation of LEWICE 2.0 (see NASA/CR-1999-208690, CASI ID 19990021235). The data include ice shapes from both the experiments and LEWICE, all of the input and output files for the LEWICE cases, JPG files of all plots generated, an electronic copy of the text of the validation report, and a Microsoft Excel(R) spreadsheet containing all of the quantitative measurements taken. The LEWICE source code and executable are not contained on the discs.
Hybrid Soft Soil Tire Model (HSSTM). Part 1: Tire Material and Structure Modeling
2015-04-28
commercially available vehicle simulation packages. Model parameters are obtained using a validated finite element tire model, modal analysis, and other...design of experiment matrix. This data, in addition to modal analysis data, were used to validate the tire model. Furthermore, to study the validity...The applied forces to the rim center consist of the axle forces and suspension forces (Eq. 78).
Coral Reef Early Warning System (CREWS) RPC Experiment
NASA Technical Reports Server (NTRS)
Estep, Leland; Spruce, Joseph P.; Hall, Callie
2007-01-01
This viewgraph document reviews the background, objectives, methodology, validation, and present status of the Coral Reef Early Warning System (CREWS) Rapid Prototyping Capability (RPC) experiment. The potential NASA contribution to CREWS Decision Support Tool (DST) centers on remotely sensed imagery products.
INL Experimental Program Roadmap for Thermal Hydraulic Code Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glenn McCreery; Hugh McIlroy
2007-09-01
Advanced computer modeling and simulation tools and protocols will be heavily relied on for a wide variety of system studies, engineering design activities, and other aspects of the Next Generation Nuclear Plant (NGNP) Very High Temperature Reactor (VHTR), the DOE Global Nuclear Energy Partnership (GNEP), and light-water reactors. The goal is for all modeling and simulation tools to be demonstrated accurate and reliable through a formal Verification and Validation (V&V) process, especially where such tools are to be used to establish safety margins and support regulatory compliance, or to design a system in a manner that reduces the role of expensive mockups and prototypes. Recent literature identifies specific experimental principles that must be followed in order to ensure that experimental data meet the standards required for a “benchmark” database. Even for well-conducted experiments, missing experimental details, such as geometrical definition, data reduction procedures, and manufacturing tolerances, have led to poor benchmark calculations. The INL has a long and deep history of research in thermal hydraulics, especially in the 1960s through 1980s when many programs such as LOFT and Semiscale were devoted to light-water reactor safety research, the EBR-II fast reactor was in operation, and a strong geothermal energy program was established. The past can serve as a partial guide for reinvigorating thermal hydraulic research at the laboratory. However, new research programs need to fully incorporate modern experimental methods such as measurement techniques using the latest instrumentation, computerized data reduction, and scaling methodology. The path forward for establishing experimental research for code model validation will require benchmark experiments conducted in suitable facilities located at the INL. This document describes thermal hydraulic facility requirements and candidate buildings and presents examples of suitable validation experiments related to VHTRs, sodium-cooled fast reactors, and light-water reactors. These experiments range from relatively low-cost benchtop experiments for investigating individual phenomena to large electrically-heated integral facilities for investigating reactor accidents and transients.
Fundamental arthroscopic skill differentiation with virtual reality simulation.
Rose, Kelsey; Pedowitz, Robert
2015-02-01
The purpose of this study was to investigate the use and validity of virtual reality modules as part of the educational approach to mastering arthroscopy in a safe environment by assessing the ability to distinguish between experience levels. Additionally, the study aimed to evaluate whether experts have greater ambidexterity than do novices. Three virtual reality modules (Swemac/Augmented Reality Systems, Linkoping, Sweden) were created to test fundamental arthroscopic skills. Thirty participants (10 experts consisting of faculty, 10 intermediate participants consisting of orthopaedic residents, and 10 novices consisting of medical students) performed each exercise. Steady and Telescope was designed to train centering and image stability. Steady and Probe was designed to train basic triangulation. Track a Moving Target was designed to train coordinated motions of arthroscope and probe. Metrics reflecting speed, accuracy, and efficiency of motion were used to measure construct validity. Steady and Probe and Track a Moving Target both exhibited construct validity, with better performance by experts and intermediate participants than by novices (P < .05), whereas Steady and Telescope did not show validity. There was an overall trend toward better ambidexterity as a function of greater surgical experience, with experts consistently more proficient than novices throughout all 3 modules. This study represents a new way to assess basic arthroscopy skills using virtual reality modules developed through task deconstruction. Participants with the most arthroscopic experience performed better and were more consistent than novices on all 3 virtual reality modules. Greater arthroscopic experience correlates with more symmetry of ambidextrous performance. However, further adjustment of the modules may better simulate fundamental arthroscopic skills and discriminate between experience levels. Arthroscopy training is a critical element of orthopaedic surgery resident training. Developing techniques to safely and effectively train these skills is critical for patient safety and resident education. Copyright © 2015 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Explicating Experience: Development of a Valid Scale of Past Hazard Experience for Tornadoes.
Demuth, Julie L
2018-03-23
People's past experiences with a hazard theoretically influence how they approach future risks. Yet, past hazard experience has been conceptualized and measured in wide-ranging, often simplistic, ways, resulting in mixed findings about its relationship with risk perception. This study develops a scale of past hazard experiences, in the context of tornadoes, that is content and construct valid. A conceptual definition was developed, a set of items was created to measure one's most memorable and multiple tornado experiences, and the measures were evaluated through two surveys of the public who reside in tornado-prone areas. Four dimensions emerged of people's most memorable experience, reflecting their awareness of the tornado risk that day, their personalization of the risk, the intrusive impacts on them personally, and impacts experienced vicariously through others. Two dimensions emerged of people's multiple experiences, reflecting common types of communication received and negative emotional responses. These six dimensions are novel in that they capture people's experience across the timeline of a hazard as well as intangible experiences that are both direct and indirect. The six tornado experience dimensions were correlated with tornado risk perceptions measured as cognitive-affective and as perceived probability of consequences. The varied experience-risk perception results suggest that it is important to understand the nuances of these concepts and their relationships. This study provides a foundation for future work to continue explicating past hazard experience, across different risk contexts, and for understanding its effect on risk assessment and responses. © 2018 Society for Risk Analysis.
Openness to Experience as a Basic Dimension of Personality.
ERIC Educational Resources Information Center
McCrae, Robert R.
This paper opens by describing research since 1975 (McCrae and Costa) on a set of related traits identified as aspects of Openness to Experience. The historic roots of the concept of Openness to Experience are traced. Data are provided on the convergent and discriminant validity of the six Revised NEO-Personality Inventory facets of Fantasy,…
How Generalizable Is Your Experiment? An Index for Comparing Samples and Populations
ERIC Educational Resources Information Center
Tipton, Elizabeth
2013-01-01
Recent research on the design of social experiments has highlighted the effects of different design choices on research findings. Since experiments rarely collect their samples using random selection, recent research addressing these external validity problems and design choices has focused on two areas. The first area is on methods for…
2009-03-01
applications. RIGEX was an Air Force Institute of Technology graduate-student-built Space Shuttle cargo bay experiment intended to heat and inflate...suggestions for future experiments and applications are provided. RIGEX successfully accomplished its mission statement by validating the heating and...
ERIC Educational Resources Information Center
Huber, Martin
2012-01-01
Like any empirical method used for causal analysis, social experiments are prone to attrition, which may compromise the validity of the results. This article considers the problem of partially missing outcomes in experiments. First, it systematically reveals under which forms of attrition--in terms of its relation to observable and/or unobservable…
ERIC Educational Resources Information Center
Gambescia, Stephen F.; Lysoby, Linda; Perko, Michael; Sheu, Jiunn-Jye
2016-01-01
The purpose of this article is to demonstrate how one profession used an "experience documentation process" to grant advanced certification to qualified certified health education specialists. The competency validation process approved by the certifying organization serves as an example of an additional method, aside from traditional…
Observing System Simulation Experiments: An Overview
NASA Technical Reports Server (NTRS)
Prive, Nikki C.; Errico, Ronald M.
2016-01-01
An overview of Observing System Simulation Experiments (OSSEs) will be given, with a focus on the calibration and validation of OSSE frameworks. Pitfalls and practices will be discussed, including observation error characteristics, incestuousness, and experimental design. The potential use of OSSEs for investigating the behaviour of data assimilation systems will be explored, including some results from experiments using the NASA GMAO OSSE.
NASA Technical Reports Server (NTRS)
Moes, Timothy R.
2009-01-01
The principal objective of the Supersonics Project is to develop and validate multidisciplinary physics-based predictive design, analysis and optimization capabilities for supersonic vehicles. For aircraft, the focus will be on eliminating the efficiency, environmental and performance barriers to practical supersonic flight. Previous flight projects found that a shaped sonic boom could propagate all the way to the ground (F-5 SSBD experiment) and validated design tools for forebody shape modifications (F-5 SSBD and Quiet Spike experiments). The current project, Lift and Nozzle Change Effects on Tail Shock (LaNCETS) seeks to obtain flight data to develop and validate design tools for low-boom tail shock modifications. Attempts will be made to alter the shock structure of NASA's NF-15B TN/837 by changing the lift distribution by biasing the canard positions, changing the plume shape by under- and over-expanding the nozzles, and changing the plume shape using thrust vectoring. Additional efforts will measure resulting shocks with a probing aircraft (F-15B TN/836) and use the results to validate and update predictive tools. Preliminary flight results are presented and are available to provide truth data for developing and validating the CFD tools required to design low-boom supersonic aircraft.
van Dongen, Koen W; Ahlberg, Gunnar; Bonavina, Luigi; Carter, Fiona J; Grantcharov, Teodor P; Hyltander, Anders; Schijven, Marlies P; Stefani, Alessandro; van der Zee, David C; Broeders, Ivo A M J
2011-01-01
Virtual reality (VR) simulators have been demonstrated to improve basic psychomotor skills in endoscopic surgery. The exercise configuration settings used for validation in studies published so far are default settings or are based on the personal choice of the tutors. The purpose of this study was to establish consensus on exercise configurations and on a validated training program for a virtual reality simulator, based on the experience of international experts, in order to set criterion levels for a proficiency-based training program. A consensus meeting was held with eight European teams, all extensively experienced in using the VR simulator. Construct validity of the training program was tested by 20 experts and 60 novices. The data were analyzed by using the t test for equality of means. Consensus was achieved on training designs, exercise configuration, and examination. Almost all exercises (7/8) showed construct validity. In total, 50 of 94 parameters (53%) showed a significant difference. A European, multicenter, validated training program was constructed according to the general consensus of a large international team with extended experience in virtual reality simulation. Therefore, a proficiency-based training program can be offered to training centers that use this simulator for training in basic psychomotor skills in endoscopic surgery.
Kawata, Ariane K; Wilson, Hilary; Ong, Siew Hwa; Kulich, Karoly; Coyne, Karin
2016-10-01
The aim of this study was to evaluate the factor structure and psychometric characteristics of the Hypoglycemia Perspectives Questionnaire (HPQ), which assesses the experience and perceptions of hypoglycemia in patients with type 2 diabetes mellitus (T2DM). The HPQ was administered to adults with T2DM in a clinical sample from Cyprus (HYPO-Cyprus, n = 500) and a community sample in the United States (US, n = 1257) from the 2011 US National Health and Wellness Survey. Demographic and clinical data were collected. Analysis of HPQ data from the two convenience samples examined item performance, factor structure, and HPQ measurement properties (reliability, convergent validity, known-groups validity). Analyses supported three HPQ domains: symptom concern (six items), compensatory behavior (five items), and worry (five items). Internal consistency was high for all three domains (all ≥0.75), supporting reliability. Convergent validity was supported by moderate Spearman correlations between HPQ domain scores and the Audit of Diabetes-Dependent Quality of Life (ADDQoL-19) total score. Patients with recent hypoglycemia events had significantly higher HPQ scores, supporting known-groups validity. The HPQ may be a valid and reliable measure capturing the experience and impact of hypoglycemia and may be useful in clinical trials and community-based settings.
A French validation study of the Coma Recovery Scale-Revised (CRS-R).
Schnakers, Caroline; Majerus, Steve; Giacino, Joseph; Vanhaudenhuyse, Audrey; Bruno, Marie-Aurelie; Boly, Melanie; Moonen, Gustave; Damas, Pierre; Lambermont, Bernard; Lamy, Maurice; Damas, Francois; Ventura, Manfredi; Laureys, Steven
2008-09-01
The aim of the present study was to explore the concurrent validity, inter-rater agreement and diagnostic sensitivity of a French adaptation of the Coma Recovery Scale-Revised (CRS-R) as compared to other coma scales such as the Glasgow Coma Scale (GCS), the Full Outline of UnResponsiveness scale (FOUR) and the Wessex Head Injury Matrix (WHIM). Multi-centric prospective study. To test concurrent validity and diagnostic sensitivity, the four behavioural scales were administered in a randomized order in 77 vegetative and minimally conscious patients. Twenty-four clinicians with different professional backgrounds, levels of expertise and CRS-R experience were recruited to assess inter-rater agreement. Good concurrent validity was obtained between the CRS-R and the three other standardized behavioural scales. Inter-rater reliability for the CRS-R total score and sub-scores was good, indicating that the scale yields reproducible findings across examiners and does not appear to be systematically biased by profession, level of expertise or CRS-R experience. Finally, the CRS-R demonstrated a significantly higher sensitivity to detect MCS patients, as compared to the GCS, the FOUR and the WHIM. The results show that the French version of the CRS-R is a valid and sensitive scale which can be used in severely brain damaged patients by all members of the medical staff.
Smirnova, Alina; Lombarts, Kiki M J M H; Arah, Onyebuchi A; van der Vleuten, Cees P M
2017-10-01
Evaluation of patients' health care experiences is central to measuring patient-centred care. However, different instruments tend to be used at the hospital or departmental level but rarely both, leading to a lack of standardization of patient experience measures. To validate the Consumer Quality Index (CQI) Inpatient Hospital Care for use at both department and hospital levels. Using cross-sectional observational data, we investigated the internal validity of the questionnaire using confirmatory factor analyses (CFA), and the generalizability of the questionnaire for use at the department and hospital levels using generalizability theory. 22,924 adults hospitalized for ≥24 hours between 1 January 2013 and 31 December 2014 in 23 Dutch hospitals (515 department evaluations). CQI Inpatient Hospital Care questionnaire. CFA results showed a good fit at the individual level (CFI=0.96, TLI=0.95, RMSEA=0.04), which was comparable between specialties. When scores were aggregated to the department level, the fit was less desirable (CFI=0.83, TLI=0.81, RMSEA=0.06), and there was a significant overlap between the 'communication with doctors' and 'explanation of treatment' subscales. Departments and hospitals explained ≤5% of total variance in subscale scores. In total, 4-8 departments and 50 respondents per department are needed to reliably evaluate subscales rated on a 4-point scale, and 10 departments with 100-150 respondents per department for binary subscales. The CQI Inpatient Hospital Care is a valid and reliable questionnaire to evaluate inpatient experiences in Dutch hospitals provided sufficient sampling is done. Results can facilitate meaningful comparisons and guide quality improvement activities in individual departments and hospitals. © 2017 The Authors Health Expectations Published by John Wiley & Sons Ltd.
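The sample-size statement above follows the usual generalizability-theory logic for the reliability of a department mean; the sketch below (Python) only illustrates that logic with made-up variance components, not the study's estimates.

def reliability(var_dept, var_resid, n):
    # Reliability (generalizability) of a department mean based on n respondents.
    return var_dept / (var_dept + var_resid / n)

def respondents_needed(var_dept, var_resid, target=0.80):
    # Smallest n with reliability(var_dept, var_resid, n) >= target.
    return target / (1.0 - target) * var_resid / var_dept

# Illustrative: departments explain 5% of the variance, the rest is residual.
print(reliability(0.05, 0.95, n=50))        # ~0.72 with 50 respondents per department
print(respondents_needed(0.05, 0.95))       # ~76 respondents for a reliability of 0.80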
Validating the BISON fuel performance code to integral LWR experiments
Williamson, R. L.; Gamble, K. A.; Perez, D. M.; ...
2016-03-24
BISON is a modern finite element-based nuclear fuel performance code that has been under development at the Idaho National Laboratory (INL) since 2009. The code is applicable to both steady and transient fuel behavior and has been used to analyze a variety of fuel forms in 1D spherical, 2D axisymmetric, or 3D geometries. Code validation is underway and is the subject of this study. A brief overview of BISON’s computational framework, governing equations, and general material and behavioral models is provided. BISON code and solution verification procedures are described, followed by a summary of the experimental data used to date for validation of Light Water Reactor (LWR) fuel. Validation comparisons focus on fuel centerline temperature, fission gas release, and rod diameter both before and following fuel-clad mechanical contact. Comparisons for 35 LWR rods are consolidated to provide an overall view of how the code is predicting physical behavior, with a few select validation cases discussed in greater detail. Our results demonstrate that 1) fuel centerline temperature comparisons through all phases of fuel life are very reasonable, with deviations between predictions and experimental data within ±10% for early life through high burnup fuel and only slightly out of these bounds for power ramp experiments, 2) accuracy in predicting fission gas release appears to be consistent with state-of-the-art modeling and with the involved uncertainties, and 3) comparison of rod diameter results indicates a tendency to overpredict clad diameter reduction early in life, when clad creepdown dominates, and more significantly overpredict the diameter increase late in life, when fuel expansion controls the mechanical response. The initial rod diameter comparisons were unsatisfactory and have led to consideration of additional separate-effects experiments to better understand and predict clad and fuel mechanical behavior. Results from this study are being used to define priorities for ongoing code development and validation activities.
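A minimal sketch of the ±10% comparison metric quoted above (Python); the rod identifiers and temperatures are invented, not BISON validation data.

predicted = {"rod_A": 1180.0, "rod_B": 950.0, "rod_C": 1410.0}   # predicted centerline temperature, K
measured  = {"rod_A": 1122.0, "rod_B": 1071.0, "rod_C": 1385.0}  # measured centerline temperature, K

for rod, t_meas in measured.items():
    deviation = (predicted[rod] - t_meas) / t_meas * 100.0       # percent deviation from experiment
    band = "within" if abs(deviation) <= 10.0 else "outside"
    print(f"{rod}: {deviation:+.1f}% ({band} the ±10% band)")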
Hu, Yinhuan; Zhang, Zixia; Xie, Jinzhu; Wang, Guanping
2017-02-01
The objective of this study is to describe the development of the Outpatient Experience Questionnaire (OPEQ) and to assess the validity and reliability of the scale. Literature review, patient interviews, Delphi method and cross-sectional validation survey. Six comprehensive public hospitals in China. The survey was carried out on a sample of 600 outpatients. Acceptability of the questionnaire was assessed according to the overall response rate, item non-response rate and the average completion time. Correlation coefficients and confirmatory factor analysis were used to test construct validity. The Delphi method was used to assess the content validity of the questionnaire. Cronbach's coefficient alpha and the split-half reliability coefficient were used to estimate the internal reliability of the questionnaire. The overall response rate was 97.2% and the item non-response rate ranged from 0% to 0.3%. The mean completion time was 6 min. The Spearman correlations of item-total score ranged from 0.466 to 0.765. The results of confirmatory factor analysis showed that all items had factor loadings above 0.40 and the dimension intercorrelation ranged from 0.449 to 0.773; the goodness of fit of the questionnaire was reasonable. The overall authority grade of expert consultation was 0.80 and Kendall's coefficient of concordance W was 0.186. The Cronbach's coefficients alpha of the six dimensions ranged from 0.708 to 0.895, and the split-half reliability coefficient (Spearman-Brown coefficient) was 0.969. The OPEQ is a promising instrument covering the most important aspects which influence outpatient experiences of comprehensive public hospitals in China. It has good evidence for acceptability, validity and reliability. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
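For reference, the two reliability statistics reported above can be computed as in the sketch below (Python, with toy data standing in for the OPEQ responses): Cronbach's alpha from an item-score matrix and the Spearman-Brown coefficient from the correlation between two half-scores.

import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

def spearman_brown(half_a, half_b):
    """Split-half reliability stepped up from the correlation between two half-scores."""
    r = np.corrcoef(half_a, half_b)[0, 1]
    return 2.0 * r / (1.0 + r)

# Toy data: 8 respondents answering 5 correlated items.
rng = np.random.default_rng(1)
true_score = rng.normal(size=(8, 1))
items = true_score + 0.5 * rng.normal(size=(8, 5))
print(cronbach_alpha(items))
print(spearman_brown(items[:, ::2].sum(axis=1), items[:, 1::2].sum(axis=1)))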
Haugum, Mona; Iversen, Hilde Hestad; Bjertnaes, Oyvind; Lindahl, Anne Karin
2017-02-20
Patient experiences are an important aspect of health care quality, but there is a lack of validated instruments for their measurement in the substance dependence literature. A new questionnaire to measure inpatients' experiences of interdisciplinary treatment for substance dependence has been developed in Norway. The aim of this study was to psychometrically test the new questionnaire, using data from a national survey in 2013. The questionnaire was developed based on a literature review, qualitative interviews with patients, expert group discussions and pretesting. Data were collected in a national survey covering all residential facilities with inpatients in treatment for substance dependence in 2013. Data quality and psychometric properties were assessed, including ceiling effects, item-level missing data, exploratory factor analysis, and tests of internal consistency reliability, test-retest reliability and construct validity. The sample included 978 inpatients present at 98 residential institutions. After correcting for excluded patients (n = 175), the response rate was 91.4%. 28 out of 33 items had less than 20.5% of missing data or replies in the "not applicable" category. All but one item met the ceiling effect criterion of less than 50.0% of the responses in the most favorable category. Exploratory factor analysis resulted in three scales: "treatment and personnel", "milieu" and "outcome". All scales showed satisfactory internal consistency reliability (Cronbach's alpha ranged from 0.75-0.91) and test-retest reliability (ICC ranged from 0.82-0.85). 17 of 18 significant associations between single variables and the scales supported the construct validity of the PEQ-ITSD. The content validity of the PEQ-ITSD was secured by a literature review, consultations with an expert group and qualitative interviews with patients. The PEQ-ITSD was used in a national survey in Norway in 2013 and psychometric testing showed that the instrument had satisfactory internal consistency reliability and construct validity.
Graafland, Maurits; Bok, Kiki; Schreuder, Henk W R; Schijven, Marlies P
2014-06-01
Untrained laparoscopic camera assistants in minimally invasive surgery (MIS) may cause suboptimal view of the operating field, thereby increasing risk for errors. Camera navigation is often performed by the least experienced member of the operating team, such as inexperienced surgical residents, operating room nurses, and medical students. The operating room nurses and medical students are currently not included as key user groups in structured laparoscopic training programs. A new virtual reality laparoscopic camera navigation (LCN) module was specifically developed for these key user groups. This multicenter prospective cohort study assesses face validity and construct validity of the LCN module on the Simendo virtual reality simulator. Face validity was assessed through a questionnaire on resemblance to reality and perceived usability of the instrument among experts and trainees. Construct validity was assessed by comparing scores of groups with different levels of experience on outcome parameters of speed and movement proficiency. The results obtained show uniform and positive evaluation of the LCN module among expert users and trainees, signifying face validity. Experts and intermediate experience groups performed significantly better in task time and camera stability during three repetitions, compared to the less experienced user groups (P < .007). Comparison of learning curves showed significant improvement of proficiency in time and camera stability for all groups during three repetitions (P < .007). The results of this study show face validity and construct validity of the LCN module. The module is suitable for use in training curricula for operating room nurses and novice surgical trainees, aimed at improving team performance in minimally invasive surgery. © The Author(s) 2013.
Development and preliminary validation of an interactive remote physical therapy system.
Mishra, Anup K; Skubic, Marjorie; Abbott, Carmen
2015-01-01
In this paper, we present an interactive physical therapy system (IPTS) for remote quantitative assessment of clients in the home. The system consists of two different interactive interfaces connected through a network for a real-time, low-latency video conference using audio, video, skeletal, and depth data streams from a Microsoft Kinect. To test the potential of IPTS, experiments were conducted with 5 independent living senior subjects in Kansas City, MO. Also, experiments were conducted in the lab to validate the real-time biomechanical measures calculated using the skeletal data from the Microsoft Xbox 360 Kinect and Microsoft Xbox One Kinect, with ground truth data from a Vicon motion capture system. Good agreement was found in the validation tests. The results show the potential of the IPTS to provide remote physical therapy to clients, especially older adults, who may find it difficult to visit the clinic.
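One simple way to express the agreement between the Kinect-derived measures and the Vicon ground truth is a root-mean-square error over matched samples; the sketch below (Python) uses hypothetical joint-angle values, not IPTS data.

import numpy as np

def rmse(estimate, reference):
    estimate, reference = np.asarray(estimate, float), np.asarray(reference, float)
    return float(np.sqrt(np.mean((estimate - reference) ** 2)))

kinect_knee_angle = [92.1, 95.4, 101.0, 110.3, 118.7]   # degrees, hypothetical Kinect estimates
vicon_knee_angle  = [90.8, 96.0, 102.5, 109.1, 117.9]   # degrees, hypothetical Vicon reference
print(rmse(kinect_knee_angle, vicon_knee_angle))         # ~1.1 degrees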
A Possible Tool for Checking Errors in the INAA Results, Based on Neutron Data and Method Validation
NASA Astrophysics Data System (ADS)
Cincu, Em.; Grigore, Ioana Manea; Barbos, D.; Cazan, I. L.; Manu, V.
2008-08-01
This work presents preliminary results of a new type of possible application of INAA elemental analysis experiments, useful for checking errors that occur during the investigation of unknown samples; it relies on the INAA method validation experiments and on the accuracy of the neutron data from the literature. The paper comprises two sections. The first briefly presents the steps of the experimental tests carried out for INAA method validation and for establishing the 'ACTIVA-N' laboratory performance, which also illustrates the laboratory's progress toward proficiency. Section 2 presents our recent INAA results on CRMs, whose interpretation opens a discussion about the usefulness of a tool for checking possible errors that differs from the usual statistical procedures. The questionable aspects and the requirements for developing a practical checking tool are discussed.
Validation of SAM 2 and SAGE satellite
NASA Technical Reports Server (NTRS)
Kent, G. S.; Wang, P.-H.; Farrukh, U. O.; Yue, G. K.
1987-01-01
Presented are the results of a validation study of data obtained by the Stratospheric Aerosol and Gas Experiment I (SAGE I) and Stratospheric Aerosol Measurement II (SAM II) satellite experiments. The study includes the entire SAGE I data set (February 1979 - November 1981) and the first four and one-half years of SAM II data (October 1978 - February 1983). These data sets have been validated by their use in the analysis of dynamical, physical and chemical processes in the stratosphere. They have been compared with other existing data sets and the SAGE I and SAM II data sets intercompared where possible. The study has shown the data to be of great value in the study of the climatological behavior of stratospheric aerosols and ozone. Several scientific publications and user-oriented data summaries have appeared as a result of the work carried out under this contract.
The Multidimensional Loss Scale: validating a cross-cultural instrument for measuring loss.
Vromans, Lyn; Schweitzer, Robert D; Brough, Mark
2012-04-01
The Multidimensional Loss Scale (MLS) represents the first instrument designed specifically to index Experience of Loss Events and Loss Distress across multiple domains (cultural, social, material, and intrapersonal) relevant to refugee settlement. Recently settled Burmese adult refugees (N = 70) completed a questionnaire battery, including MLS items. Analyses explored MLS internal consistency, convergent and divergent validity, and factor structure. Cronbach alphas indicated satisfactory internal consistency for Experience of Loss Events (0.85) and Loss Distress (0.92), reflecting a unitary construct of multidimensional loss. Loss Distress did not correlate with depression or anxiety symptoms and correlated moderately with interpersonal grief and trauma symptoms, supporting divergent and convergent validity. Factor analysis provided preliminary support for a five-factor model: Loss of Symbolic Self, Loss of Interdependence, Loss of Home, Interpersonal Loss, and Loss of Intrapersonal Integrity. Received well by participants, the new scale shows promise for application in future research and practice.
Validation and Continued Development of Methods for Spheromak Simulation
NASA Astrophysics Data System (ADS)
Benedett, Thomas
2017-10-01
The HIT-SI experiment has demonstrated stable sustainment of spheromaks. Determining how the underlying physics extrapolates to larger, higher-temperature regimes is of prime importance in assessing the viability of the inductively driven spheromak. It is thus prudent to develop and validate a computational model that can be used to study current results and the effect of possible design choices on plasma behavior. An extended MHD model has shown good agreement with experimental data at 14 kHz injector operation. Efforts to extend the existing validation to a range of higher frequencies (36, 53, 68 kHz) using the PSI-Tet 3D extended MHD code will be presented, along with simulations of potential combinations of flux conserver features and helicity injector configurations and their impact on current drive performance, density control, and temperature for future SIHI experiments. Work supported by USDoE.
Outsourcing bioanalytical services at Janssen Research and Development: the sequel anno 2017.
Dillen, Lieve; Verhaeghe, Tom
2017-08-01
The strategy of outsourcing bioanalytical services at Janssen has been evolving over recent years, and an update will be given on the recent changes in our processes. In 2016, all internal GLP-related activities were phased out, and this decision led to the re-orientation of the in-house bioanalytical activities. As a consequence, in-depth experience with the validated bioanalytical assays for new drug candidates is currently gained together with the external partner, since development and validation of the assay and execution of GLP preclinical studies are now transferred to the CRO. The evolution to externalize more bioanalytical support has created opportunities to build even stronger partnerships with the CROs and to refocus internal resources. Case studies are presented illustrating challenges encountered during method development and validation at preferred partners when limited internal experience is available or when new technology is introduced.
Autonomous rendezvous and docking: A commercial approach to on-orbit technology validation
NASA Technical Reports Server (NTRS)
Tchoryk, Peter, Jr.; Whitten, Raymond P.
1991-01-01
SpARC, in conjunction with its corporate affiliates, is planning an on-orbit validation of autonomous rendezvous and docking (ARD) technology. The emphasis in this program is to utilize existing technology and commercially available components wherever possible. The primary subsystems to be validated by this demonstration include GPS receivers for navigation, a video-based sensor for proximity operations, a fluid connector mechanism to demonstrate fluid resupply capability, and a compliant, single-point docking mechanism. The focus for this initial experiment will be ELV based and will make use of two residual Commercial Experiment Transporter (COMET) service modules. The first COMET spacecraft will be launched in late 1992 and will serve as the target vehicle. After the second COMET spacecraft has been launched in late 1994, the ARD demonstration will take place. The service module from the second COMET will serve as the chase vehicle.
NASA Astrophysics Data System (ADS)
Nishida, R. T.; Beale, S. B.; Pharoah, J. G.; de Haart, L. G. J.; Blum, L.
2018-01-01
This work is among the first where the results of an extensive experimental research programme are compared to performance calculations of a comprehensive computational fluid dynamics model for a solid oxide fuel cell stack. The model, which combines electrochemical reactions with momentum, heat, and mass transport, is used to obtain results for an established industrial-scale fuel cell stack design with complex manifolds. To validate the model, comparisons with experimentally gathered voltage and temperature data are made for the Jülich Mark-F, 18-cell stack operating in a test furnace. Good agreement is obtained between the model and experiment results for cell voltages and temperature distributions, confirming the validity of the computational methodology for stack design. The transient effects during ramp up of current in the experiment may explain a lower average voltage than model predictions for the power curve.
Ensuring the validity of calculated subcritical limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, H.K.
1977-01-01
The care taken at the Savannah River Laboratory and Plant to ensure the validity of calculated subcritical limits is described. Close attention is given to ANSI N16.1-1975, "Validation of Calculational Methods for Nuclear Criticality Safety." The computer codes used for criticality safety computations, which are listed and are briefly described, have been placed in the SRL JOSHUA system to facilitate calculation and to reduce input errors. A driver module, KOKO, simplifies and standardizes input and links the codes together in various ways. For any criticality safety evaluation, correlations of the calculational methods are made with experiment to establish bias. Occasionally subcritical experiments are performed expressly to provide benchmarks. Calculated subcritical limits contain an adequate but not excessive margin to allow for uncertainty in the bias. The final step in any criticality safety evaluation is the writing of a report describing the calculations and justifying the margin.
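The bias-and-margin bookkeeping described in this abstract is commonly expressed as an upper subcritical limit; the sketch below (Python) follows that general practice rather than SRL's specific procedure, and every number is illustrative.

def upper_subcritical_limit(bias, bias_uncertainty, margin_of_subcriticality):
    # Only a negative bias (calculation underpredicts k-eff) is credited; a positive
    # bias is conservatively ignored.
    credited_bias = min(bias, 0.0)
    return 1.0 + credited_bias - bias_uncertainty - margin_of_subcriticality

def is_acceptable(k_calc, k_sigma, usl):
    # A calculated configuration is acceptable if k_calc plus its uncertainty stays below the limit.
    return k_calc + 2.0 * k_sigma <= usl

usl = upper_subcritical_limit(bias=-0.005, bias_uncertainty=0.010, margin_of_subcriticality=0.050)
print(usl)                                                  # ~0.935
print(is_acceptable(k_calc=0.920, k_sigma=0.002, usl=usl))  # True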
Oldaker, Teri; Whitby, Liam; Saber, Maryam; Holden, Jeannine; Wallace, Paul K; Litwin, Virginia
2018-01-01
Over the past six years, a diverse group of stakeholders have put forth recommendations regarding the analytical validation of flow cytometric methods and described in detail the differences between cell-based and traditional soluble analyte assay validations. This manuscript is based on these general recommendations as well as the published experience of experts in the area of PNH testing. The goal is to provide practical assay-specific guidelines for the validation of high-sensitivity flow cytometric PNH assays. Examples of the reports and validation data described herein are provided in Supporting Information. © 2017 International Clinical Cytometry Society.
NASA's In-Space Technology Experiments Program
NASA Technical Reports Server (NTRS)
Levine, J.; Prusha, S. L.
1992-01-01
The objective of the In-Space Technology Experiments Program is to evaluate and validate innovative space technologies and to provide better knowledge of the effects of microgravity and the space environment. The history, organization, methodology, and current program characteristics are presented. Results of the tank pressure control experiment and the middeck zero-gravity dynamics experiment are described to demonstrate the types of technologies that have flown and the experimental results obtained from these low-cost space flight experiments.
Burston, Adam; Eley, Robert; Parker, Deborah; Tuckett, Anthony
2017-06-01
The aim of this study was to gain insight into the experience of moral distress within the aged care workforce. The objective of this study was to use and validate an existing instrument to measure moral distress within the aged care setting. Moral distress, a phenomenon associated with worker satisfaction and retention, is common within nursing. Instruments to measure moral distress exist; however, there are no validated instruments to measure moral distress within an aged care setting. An existing instrument, the Moral Distress Scale (Revised) was identified and amended. Amendments were subject to expert review for face and content validity. Data were collected from aged care nurses working in residential and community aged care, in Australia. Reliability was assessed using Cronbach's alpha with exploratory factor analysis undertaken for construct validity. 106 participants completed the survey, 93 (87.7%) identified as female and 13 (12.3%) male. Participants ranged in age from 21 to 73 years, with a mean time working in nursing of 20.6 years. The frequency component of the instrument demonstrated an alpha of 0.89, the intensity component 0.95 and the instrument as a whole 0.94. Three factors were identified and labelled as: Quality of Care, Capacity of Team and Professional Practice. Mean scores indicate a low occurrence of moral distress, but this distress, when experienced, was felt with a moderate level of intensity. Primary causes of moral distress were insufficient staff competency levels, poor quality care because of poor communication and delays in implementing palliation. The instrument demonstrates validity and reliability within the Australian aged care setting. Further analysis with larger populations is required to support these findings. Australian aged care workers do experience moral distress. They suffer adverse consequences of this distress and quality of care is negatively impacted. This newly validated instrument can be used to quantify the occurrence of moral distress and to inform targeted interventions to reduce the occurrence and intensity of the experience. © 2016 John Wiley & Sons Ltd.
Soler, Joaquim; Franquesa, Alba; Feliu-Soler, Albert; Cebolla, Ausias; García-Campayo, Javier; Tejedor, Rosa; Demarzo, Marcelo; Baños, Rosa; Pascual, Juan Carlos; Portella, Maria J
2014-11-01
Decentering is defined as the ability to observe one's thoughts and feelings in a detached manner. The Experiences Questionnaire (EQ) is a self-report instrument that originally assessed decentering and rumination. The purpose of this study was to evaluate the psychometric properties of the Spanish version of EQ-Decentering and to explore its clinical usefulness. The 11-item EQ-Decentering subscale was translated into Spanish and psychometric properties were examined in a sample of 921 adult individuals, 231 with psychiatric disorders and 690 without. The subsample of nonpsychiatric participants was also split according to their previous meditative experience (meditative participants, n=341; and nonmeditative participants, n=349). Additionally, differences among these three subgroups were explored to determine clinical validity of the scale. Finally, EQ-Decentering was administered twice in a group of borderline personality disorder, before and after a 10-week mindfulness intervention. Confirmatory factor analysis indicated acceptable model fit, sbχ(2)=243.8836 (p<.001), CFI=.939, GFI=.936, SRMR=.040, and RMSEA=.06 (.060-.077), and psychometric properties were found to be satisfactory (reliability: Cronbach's α=.893; convergent validity: r>.46; and divergent validity: r<-.35). The scale detected changes in decentering after a 10-session intervention in mindfulness (t=-4.692, p<.00001). Differences among groups were significant (F=134.8, p<.000001), where psychiatric participants showed the lowest scores compared to nonpsychiatric meditative and nonmeditative participants. The Spanish version of the EQ-Decentering is a valid and reliable instrument to assess decentering either in clinical and nonclinical samples. In addition, the findings show that EQ-Decentering seems an adequate outcome instrument to detect changes after mindfulness-based interventions. Copyright © 2014. Published by Elsevier Ltd.
Herzog, Annabel; Voigt, Katharina; Meyer, Björn; Wollburg, Eileen; Weinmann, Nina; Langs, Gernot; Löwe, Bernd
2015-06-01
The new DSM-5 Somatic Symptom Disorder (SSD) emphasizes the importance of psychological processes related to somatic symptoms in patients with somatoform disorders. To address this, the Somatic Symptoms Experiences Questionnaire (SSEQ), the first self-report scale that assesses a broad range of psychological and interactional characteristics relevant to patients with a somatoform disorder or SSD, was developed. This prospective study was conducted to validate the SSEQ. The 15-item SSEQ was administered along with a battery of self-report questionnaires to psychosomatic inpatients. Patients were assessed with the Structured Clinical Interview for DSM-IV to confirm a somatoform, depressive, or anxiety disorder. Confirmatory factor analyses, tests of internal consistency and tests of validity were performed. Patients (n=262) with a mean age of 43.4 years, 60.3% women, were included in the analyses. The previously observed four-factor model was replicated and internal consistency was good (Cronbach's α=.90). Patients with a somatoform disorder had significantly higher scores on the SSEQ (t=4.24, p<.001) than patients with a depressive/anxiety disorder. Construct validity was shown by high correlations with other instruments measuring related constructs. Hierarchical multiple regression analyses showed that the questionnaire predicted health-related quality of life. Sensitivity to change was shown by significantly higher effect sizes of the SSEQ change scores for improved patients than for patients without improvement. The SSEQ appears to be a reliable, valid, and efficient instrument to assess a broad range of psychological and interactional features related to the experience of somatic symptoms. Copyright © 2015 Elsevier Inc. All rights reserved.
English, Devin; Bowleg, Lisa; del Río-González, Ana Maria; Tschann, Jeanne M.; Agans, Robert; Malebranche, David J
2017-01-01
Objectives: Although social science research has examined police and law enforcement-perpetrated discrimination against Black men using policing statistics and implicit bias studies, there is little quantitative evidence detailing this phenomenon from the perspective of Black men. Consequently, there is a dearth of research detailing how Black men's perspectives on police and law enforcement-related stress predict negative physiological and psychological health outcomes. This study addresses these gaps with the qualitative development and quantitative test of the Police and Law Enforcement (PLE) scale. Methods: In Study 1, we employed thematic analysis on transcripts of individual qualitative interviews with 90 Black men to assess key themes and concepts and develop quantitative items. In Study 2, we used 2 focus groups comprising 5 Black men each (n=10), intensive cognitive interviewing with a separate sample of Black men (n=15), and piloting with another sample of Black men (n=13) to assess the ecological validity of the quantitative items. For Study 3, we analyzed data from a sample of 633 Black men between the ages of 18 and 65 to test the factor structure of the PLE, as well as its concurrent validity and convergent/discriminant validity. Results: Qualitative analyses and confirmatory factor analyses suggested that a 5-item, 1-factor measure appropriately represented respondents' experiences of police/law enforcement discrimination. As hypothesized, the PLE was positively associated with measures of racial discrimination and depressive symptoms. Conclusions: Preliminary evidence suggests that the PLE is a reliable and valid measure of Black men's experiences of discrimination with police/law enforcement. PMID:28080104