Lu, Hong-Fa; Du, Li-Na; Li, Zhi-Qiang; Chen, Xiao-Yong; Yang, Jun-Xing
2014-11-18
Viviparidae are widely distributed around the globe, but there are considerable gaps in the taxonomic record. To date, 18 species of the viviparid genus Cipangopaludina have been recorded in China, but there is substantial disagreement on the validity of this taxonomy. In this study, we described the shell and internal traits of these species to better assess the validity of related species. We found that C. ampulliformis is a synonym of C. lecythis and C. wingatei is a synonym of C. chinensis, while C. ampullacea and C. fluminalis are subspecies of C. lecythis and C. chinensis, respectively. C. dianchiensis should be placed in the genus Margarya, while C. menglaensis and C. yunnanensis belong to the genus Mekongia. In total, this leaves 11 species and 2 subspecies recorded in China. Based on whether the spiral whorl depth exceeds the aperture depth, these species and subspecies can be further divided into two groups, viz. the chinensis group and the cathayensis group, which can be distinguished from one another by the ratio of spiral depth to aperture depth, the vas deferens, and the number of secondary branches of the vas deferens. Additionally, Principal Component Analysis indicated that body whorl depth, shell width, aperture width and aperture length were the main variables discriminating species of Cipangopaludina. A key to all valid Chinese Cipangopaludina species is given. PMID:25465086
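The morphometric PCA step described above can be sketched as follows; the specimen measurements below are hypothetical illustrations driven by a single latent "size" factor, not the study's data.

```python
import numpy as np

# Hypothetical shell measurements (rows: specimens; columns:
# body whorl depth, shell width, aperture width, aperture length, in mm).
rng = np.random.default_rng(0)
size = rng.uniform(20, 40, 30)                 # latent overall-size factor
X = np.column_stack([
    size * 0.9 + rng.normal(0, 1, 30),         # body whorl depth
    size * 0.8 + rng.normal(0, 1, 30),         # shell width
    size * 0.5 + rng.normal(0, 1, 30),         # aperture width
    size * 0.6 + rng.normal(0, 1, 30),         # aperture length
])

def pca(X):
    """Principal Component Analysis via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / (len(X) - 1)       # component variances
    ratio = var / var.sum()           # explained-variance ratios
    scores = Xc @ Vt.T                # specimen coordinates on the PCs
    return scores, Vt, ratio

scores, loadings, ratio = pca(X)
print("explained variance ratios:", np.round(ratio, 3))
```

With correlated size-driven measurements, the first component dominates, which is the sense in which a few shell variables are the "main variables" separating specimens.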
Quantitative subsurface analysis using frequency modulated thermal wave imaging
NASA Astrophysics Data System (ADS)
Subhani, S. K.; Suresh, B.; Ghali, V. S.
2018-01-01
Quantitative depth analysis of an anomaly with enhanced depth resolution is a challenging task in the thermographic estimation of subsurface anomaly depth. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and then analyzing the resulting thermal response with a suitable post-processing approach to resolve subsurface details. However, the conventional Fourier-transform-based methods used for post-processing unscramble the frequencies with limited frequency resolution and therefore yield only finite depth resolution. The spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which can further improve the depth resolution and allow the finest subsurface features to be explored axially. Quantitative depth analysis with this augmented depth resolution is proposed to provide a close estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first solution for quantitative depth estimation in frequency modulated thermal wave imaging.
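The spectral zooming idea can be sketched with a minimal Bluestein chirp z-transform: evaluate the spectrum on an arbitrarily fine grid inside a chosen band, far finer than the FFT bin spacing. The signal and band below are hypothetical.

```python
import numpy as np

def czt(x, m, w, a):
    """Chirp z-transform via Bluestein's algorithm: evaluates the z-transform
    of x at the m points z_k = a * w**(-k), allowing arbitrarily fine
    frequency spacing (spectral zooming), unlike the plain FFT."""
    x = np.asarray(x, complex)
    n = len(x)
    k = np.arange(max(m, n))
    chirp = w ** (k.astype(float) ** 2 / 2.0)
    nfft = 2 ** int(np.ceil(np.log2(n + m - 1)))
    xp = np.zeros(nfft, complex)
    xp[:n] = x * a ** (-np.arange(n)) * chirp[:n]
    v = np.zeros(nfft, complex)
    v[:m] = 1.0 / chirp[:m]                      # kernel for k - n >= 0
    v[nfft - n + 1:] = 1.0 / chirp[1:n][::-1]    # kernel for k - n < 0
    y = np.fft.ifft(np.fft.fft(xp) * np.fft.fft(v))
    return y[:m] * chirp[:m]

# Zoom into the 100-150 Hz band of a 123 Hz tone with ~0.25 Hz spacing,
# much finer than the 256-point FFT's ~3.9 Hz bin width.
fs = 1000.0
x = np.sin(2 * np.pi * 123.0 * np.arange(256) / fs)
M = 200
f = np.linspace(100.0, 150.0, M)
a = np.exp(2j * np.pi * f[0] / fs)
w = np.exp(-2j * np.pi * (f[1] - f[0]) / fs)
X = czt(x, M, w, a)
peak = f[np.argmax(np.abs(X))]
print(f"zoomed spectral peak at {peak:.2f} Hz")
```

The same zooming applied to a thermal phase spectrum is what sharpens the depth resolution the abstract refers to.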
SU-E-T-50: Automatic Validation of Megavoltage Beams Modeled for Clinical Use in Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melchior, M; Salinas Aranda, F; 21st Century Oncology, Ft. Myers, FL
2014-06-01
Purpose: To automatically validate megavoltage beams modeled in the XiO™ 4.50 (Elekta, Stockholm, Sweden) and Varian Eclipse™ treatment planning systems (TPS) (Varian Associates, Palo Alto, CA, USA), reducing validation time before beam-on for clinical use. Methods: A software application that automatically reads and analyzes DICOM RT Dose and W2CAD files was developed in the MatLab integrated development environment. TPS-calculated dose distributions, in DICOM RT Dose format, were compared with dose values measured in different Varian Clinac beams, in W2CAD format. The experimental beam data were those acquired for beam commissioning, collected on a water phantom with a 2D automatic beam scanning system. Two methods were chosen to evaluate the fit of the dose distributions: gamma analysis and the point tests described in Appendix E of IAEA TECDOC-1583. Depth dose curves and beam profiles were evaluated for both open and wedged beams. The tolerance parameters chosen for gamma analysis were 3% dose and 3 mm distance. Absolute dose was measured independently at the points proposed in Appendix E of TECDOC-1583 to validate the software results. Results: TPS-calculated depth dose distributions agree with measured beam data within fixed precision values at all depths analyzed. Measured beam dose profiles match TPS-calculated doses with high accuracy in both open and wedged beams. Analysis of depth and profile dose distributions shows gamma values < 1. Relative errors at the points proposed in Appendix E of TECDOC-1583 meet the tolerances recommended therein. Independent absolute dose measurements at those points confirm the software results. Conclusion: Automatic validation of megavoltage beams modeled for clinical use was accomplished. The software tool proved efficient, giving users a convenient and reliable environment in which to decide whether to accept a beam model for clinical use. Validation time before beam-on was reduced to a few hours.
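The 3%/3 mm gamma analysis used above can be sketched in one dimension; the depth-dose curves below are synthetic illustrations, not commissioning data.

```python
import numpy as np

def gamma_index(x_ref, d_ref, x_eval, d_eval, dta=3.0, dd=0.03):
    """1-D global gamma index (3%/3 mm by default): for each reference point,
    the minimum combined distance-to-agreement / dose-difference metric over
    all evaluated points.  A point passes when gamma <= 1."""
    d_norm = dd * d_ref.max()                   # global dose criterion
    gammas = np.empty(len(x_ref))
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta) ** 2
        dose2 = ((d_eval - dr) / d_norm) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

# Synthetic percentage-depth-dose curves (hypothetical, for illustration).
z = np.linspace(0, 200, 401)                    # depth [mm]
tps = 100 * np.exp(-0.004 * z)                  # "TPS-calculated" curve
meas = tps * (1 + 0.005 * np.sin(z / 20))       # "measured" curve, small deviation
g = gamma_index(z, meas, z, tps)
print(f"pass rate: {100 * np.mean(g <= 1):.1f}%")
```

A full validation tool would run this over every scanned profile and depth-dose curve and report curves with any gamma > 1.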
NASA Astrophysics Data System (ADS)
Xu, Yue-Ping; Yu, Chaofeng; Zhang, Xujie; Zhang, Qingqing; Xu, Xiao
2012-02-01
Hydrological predictions in ungauged basins are of significant importance for water resources management. In hydrological frequency analysis, regional methods are regarded as useful tools for estimating design rainfall/flood in areas with little data available. The purpose of this paper is to investigate the performance of two regional methods, namely Hosking's approach and the cokriging approach, in hydrological frequency analysis. The two methods are employed to estimate 24-h design rainfall depths in the Hanjiang River Basin, one of the largest tributaries of the Yangtze River, China. Validation is carried out by comparing the results with those calculated from the provincial handbook approach, which uses hundreds of rainfall gauge stations. Five hypothetically ungauged sites from the middle basin are also chosen for validation. The final results show that, compared to the provincial handbook approach, Hosking's approach often overestimated the 24-h design rainfall depths while the cokriging approach usually underestimated them. Overall, Hosking's approach produced more accurate results than the cokriging approach.
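Hosking's regional approach is built on sample L-moments. A minimal sketch of the first three sample L-moments via unbiased probability-weighted moments follows; the annual-maximum rainfall depths are hypothetical.

```python
import numpy as np

def l_moments(x):
    """First three sample L-moments from unbiased probability-weighted
    moments, the building blocks of Hosking's regional frequency approach."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0                      # L-location (mean)
    l2 = 2 * b1 - b0             # L-scale
    l3 = 6 * b2 - 6 * b1 + b0    # third L-moment
    return l1, l2, l3

# Hypothetical annual-maximum 24-h rainfall depths (mm) at one gauge.
rain = np.array([62., 71., 55., 90., 84., 66., 120., 75., 58., 103.,
                 69., 88., 77., 95., 61.])
l1, l2, l3 = l_moments(rain)
print(f"L-mean {l1:.1f}  L-CV {l2 / l1:.3f}  L-skewness {l3 / l2:.3f}")
```

Regionalization then pools the L-moment ratios of many gauges to fit a regional growth curve for the design quantiles.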
Assessing Temporal Stability for Coarse Scale Satellite Moisture Validation in the Maqu Area, Tibet
Bhatti, Haris Akram; Rientjes, Tom; Verhoef, Wouter; Yaseen, Muhammad
2013-01-01
This study evaluates whether the temporal stability concept is applicable to a time series of satellite soil moisture images, so as to extend the common procedure of satellite image validation. The study area is the Maqu area, located in the northeastern part of the Tibetan plateau. The network serves validation purposes for coarse scale (25–50 km) satellite soil moisture products and comprises 20 stations with probes installed at depths of 5, 10, 20, 40 and 80 cm. The study period is 2009. The temporal stability concept is applied to all five depths of the soil moisture measuring network and to a time series of satellite-based moisture products from the Advanced Microwave Scanning Radiometer (AMSR-E). The in-situ network is also assessed by Pearson's correlation analysis. Assessments by the temporal stability concept proved to be useful, and results suggest that probe measurements at 10 cm depth best match the satellite observations. The Mean Relative Difference plot for satellite pixels shows that a Representative Mean Soil Moisture (RMSM) pixel can be identified, but in our case this pixel does not overlay any in-situ station. Nor does the RMSM pixel overlay any of the RMSM stations of the five probe depths. Pearson's correlation analysis on in-situ measurements suggests that moisture patterns are more persistent over time than over space. Since this study presents first results on the application of the temporal stability concept to a series of satellite images, we recommend further tests to be more conclusive about its effectiveness in broadening the procedure of satellite validation. PMID:23959237
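The Mean Relative Difference analysis at the heart of the temporal stability concept can be sketched as below; the network values are hypothetical illustrations, and the simple ranking rule for the representative station is one common choice, not the paper's exact criterion.

```python
import numpy as np

def temporal_stability(theta):
    """Mean Relative Difference analysis: theta is a (time, station) array of
    soil moisture.  Returns the per-station MRD and its standard deviation;
    a representative station has MRD near zero with a small spread."""
    spatial_mean = theta.mean(axis=1, keepdims=True)   # field mean per date
    rd = (theta - spatial_mean) / spatial_mean         # relative difference
    return rd.mean(axis=0), rd.std(axis=0, ddof=1)

# Hypothetical network: 20 stations observed on 50 dates.
rng = np.random.default_rng(1)
offsets = rng.normal(0, 0.04, 20)                      # persistent wet/dry bias
field = rng.uniform(0.15, 0.35, (50, 1))               # field-mean dynamics
theta = np.clip(field + offsets + rng.normal(0, 0.01, (50, 20)), 0.02, 0.5)
mrd, sd = temporal_stability(theta)
rmsm_station = int(np.argmin(np.abs(mrd) + sd))        # one simple ranking rule
print("representative station:", rmsm_station)
```

Applying the same computation with satellite pixels in place of stations gives the RMSM pixel discussed in the abstract.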
Hashim, Hairul Anuar; Shaharuddin, Saidatin Sabiyah; Hamidan, Shazarina; Grove, J Robert
2017-02-01
This study examined psychometric properties of a Malaysian-language Sport Anxiety Scale-2 (SAS-2) in three separate studies. Study 1 examined the criterion validity and internal consistency of SAS-2 among 119 developmental hockey players. Measures of trait anxiety and mood states along with digit vigilance, choice reaction time, and depth perception tests were administered. Regression analysis revealed that somatic anxiety and concentration disruption were significantly associated with sustained attention. Worry was significantly associated with depth perception but not sustained attention. Pearson correlation coefficients also revealed significant relationships between SAS-2 subscales and negative mood state dimensions. Study 2 examined the convergent and discriminant validity of SAS-2 by correlating it with state anxiety measured by the CSAI-2R. Significant positive relationships were obtained between SAS-2 subscales and somatic and cognitive state anxiety. Conversely, state self-confidence was negatively related to SAS-2 subscales. In addition, significant differences were observed between men and women in somatic anxiety. Study 3 examined the factorial validity of the Malaysian SAS-2 using confirmatory factor analysis in a sample of 539 young athletes. Confirmatory factor analysis results provided strong support for the SAS-2 factor structure. Path loadings exceeding 0.5 indicated convergent validity among the subscales, and low to moderate subscale intercorrelations provided evidence of discriminant validity. Overall, the results supported the criterion and construct validity of this Malaysian-language SAS-2 instrument.
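Internal consistency of the kind examined in Study 1 is conventionally summarized by Cronbach's alpha; a minimal sketch follows, with simulated Likert-style responses driven by one latent factor (illustration only, not the study's data).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency: items is a
    (respondents, items) array of scale scores."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical 5-item anxiety subscale answered by 100 respondents,
# driven by one latent factor plus item noise.
rng = np.random.default_rng(2)
latent = rng.normal(0, 1, (100, 1))
scores = latent + rng.normal(0, 0.7, (100, 5))
alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}")
```

Values above roughly 0.7 are usually read as acceptable internal consistency for a subscale.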
Validation of MODIS Aerosol Optical Depth Retrieval Over Land
NASA Technical Reports Server (NTRS)
Chu, D. A.; Kaufman, Y. J.; Ichoku, C.; Remer, L. A.; Tanre, D.; Holben, B. N.; Einaudi, Franco (Technical Monitor)
2001-01-01
Aerosol optical depths are derived operationally for the first time over land in the visible wavelengths by MODIS (Moderate Resolution Imaging Spectroradiometer) onboard the EOS Terra spacecraft. More than 300 Sun photometer data points from more than 30 AERONET (Aerosol Robotic Network) sites worldwide were used to validate the aerosol optical depths obtained during July - September 2000. Excellent agreement is found, with retrieval errors within Δτ = ±0.05 ± 0.20τ, as predicted, over (partially) vegetated surfaces, consistent with pre-launch theoretical analysis and aircraft field experiments. In coastal and semi-arid regions, larger errors are caused predominantly by the uncertainty in evaluating the surface reflectance. The excellent fit was achieved despite ongoing improvements in instrument characterization and calibration. These results show that MODIS-derived aerosol optical depths can be used quantitatively in many applications, with caution regarding residual cloud, snow/ice, and water contamination.
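Checking retrievals against the Δτ = ±0.05 ± 0.20τ envelope can be sketched directly; the matched Sun-photometer/satellite pairs below are simulated illustrations, not AERONET data.

```python
import numpy as np

def within_envelope(tau_ref, tau_sat, abs_err=0.05, rel_err=0.20):
    """Fraction of satellite aerosol optical depth retrievals falling inside
    the expected-error envelope dtau = +/-(abs_err + rel_err * tau)."""
    bound = abs_err + rel_err * tau_ref
    return np.mean(np.abs(tau_sat - tau_ref) <= bound)

# Hypothetical matched Sun-photometer ("truth") / satellite pairs.
rng = np.random.default_rng(3)
tau_true = rng.uniform(0.05, 0.8, 300)
tau_sat = tau_true + rng.normal(0, 0.03, 300) \
          + 0.05 * tau_true * rng.normal(0, 1, 300)
frac = within_envelope(tau_true, tau_sat)
print(f"{100 * frac:.0f}% of retrievals within the envelope")
```

A validation study reports this fraction per surface type, which is how the vegetated vs. coastal/semi-arid contrast above is quantified.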
ERIC Educational Resources Information Center
Pan, Jia-Yan; Wong, Daniel Fu Keung; Chan, Kin Sun; Chan, Cecilia Lai Wan
2008-01-01
Objective: The objective of this study is to develop and validate the Chinese Making Sense of Adversity Scale (CMSAS) to measure the cognitive coping strategies that Chinese people adopt to make sense of adversity. Method: A 12-item CMSAS was developed by in-depth interview and item analysis. The scale was validated with a sample of 627 Chinese…
Classification of river water pollution using Hyperion data
NASA Astrophysics Data System (ADS)
Kar, Soumyashree; Rathore, V. S.; Champati ray, P. K.; Sharma, Richa; Swain, S. K.
2016-06-01
A novel attempt is made to use hyperspectral remote sensing to identify the spatial variability of metal pollutants present in river water. We also attempted to classify a hyperspectral image - Earth Observation-1 (EO-1) Hyperion data of an 8 km stretch of the river Yamuna near Allahabad city in India - according to its chemical composition. To validate the image analysis results, 10 water samples were collected and chemically analyzed using Inductively Coupled Plasma-Optical Emission Spectroscopy (ICP-OES). Two spectral libraries, from field and image data, were generated for the 10 sample locations. Advanced per-pixel supervised classifications, namely Spectral Angle Mapper (SAM), SAM target finder using BandMax, and Support Vector Machine (SVM), were carried out along with the unsupervised clustering procedure, Iterative Self-Organizing Data Analysis Technique (ISODATA). The results were compared and assessed with respect to ground data. An Analytical Spectral Devices (ASD), Inc. FieldSpec 4 spectroradiometer was used to generate the spectra of the water samples, which were compiled into a spectral library and used for Spectral Absorption Depth (SAD) analysis. The spectral depth patterns of the image and field spectral libraries were found to be highly correlated (R² = 0.99), which validated the image analysis results with respect to the ground data. Further, we carried out a multivariate regression analysis to assess the varying concentrations of metal ions present in water based on the spectral depth of the corresponding absorption feature. SAD analysis together with metal analysis of the field data revealed the order in which the metals affected the river pollution, in conformity with the findings of the Central Pollution Control Board (CPCB). Therefore, it is concluded that hyperspectral imaging provides an opportunity for satellite-based remote monitoring of water quality from space.
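The SAM classifier named above reduces to a minimum-spectral-angle rule; a sketch follows, with hypothetical four-band endmember spectra standing in for the Hyperion library.

```python
import numpy as np

def spectral_angle(pixel, ref):
    """Spectral angle (radians) between a pixel spectrum and a reference
    spectrum; SAM assigns each pixel to the class of minimum angle.  The
    angle is invariant to positive scaling (illumination differences)."""
    cos = np.dot(pixel, ref) / (np.linalg.norm(pixel) * np.linalg.norm(ref))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(image, library):
    """image: (pixels, bands); library: (classes, bands) endmember spectra."""
    angles = np.array([[spectral_angle(p, r) for r in library] for p in image])
    return angles.argmin(axis=1), angles

# Hypothetical endmembers for two water-quality classes (illustration only).
clean = np.array([0.02, 0.05, 0.04, 0.01])
polluted = np.array([0.06, 0.09, 0.12, 0.08])
library = np.vstack([clean, polluted])
pixels = np.vstack([clean * 1.7, polluted * 0.6, clean * 0.9])  # scaled copies
labels, angles = sam_classify(pixels, library)
print(labels)
```

Because brightness scaling does not change the angle, the scaled copies are classified back to their own endmembers.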
Sadek, H.S.; Rashad, S.M.; Blank, H.R.
1984-01-01
If proper account is taken of the constraints of the method, it is capable of providing depth estimates to within an accuracy of about 10 percent under suitable circumstances. The estimates are unaffected by source magnetization and are relatively insensitive to assumptions as to source shape or distribution. The validity of the method is demonstrated by analyses of synthetic profiles and profiles recorded over Harrat Rahat, Saudi Arabia, and Diyur, Egypt, where source depths have been proved by drilling.
NASA Astrophysics Data System (ADS)
Hao, Ling; Greer, Tyler; Page, David; Shi, Yatao; Vezina, Chad M.; Macoska, Jill A.; Marker, Paul C.; Bjorling, Dale E.; Bushman, Wade; Ricke, William A.; Li, Lingjun
2016-08-01
Lower urinary tract symptoms (LUTS) are a range of irritative or obstructive symptoms that commonly afflict the aging population. Diagnosis is mostly based on patient-reported symptoms, and current medication often fails to completely eliminate them. There is a pressing need for objective, non-invasive approaches to measure symptoms and understand disease mechanisms. We developed an in-depth workflow combining urine metabolomics analysis and machine learning bioinformatics to characterize metabolic alterations and support objective diagnosis of LUTS. Machine learning feature selection and statistical tests were combined to identify candidate biomarkers, which were statistically validated with leave-one-patient-out cross-validation and absolutely quantified by selected reaction monitoring assays. Receiver operating characteristic analysis showed the highly accurate prediction power of the candidate biomarkers in stratifying patients into diseased or non-diseased categories. The key metabolites and pathways may be correlated with smooth muscle tone changes, increased collagen content, and inflammation, which have been identified as potential contributors to urinary dysfunction in humans and rodents. Periurethral tissue staining revealed a significant increase in collagen content and tissue stiffness in men with LUTS. Together, our study provides the first characterization and validation of LUTS urinary metabolites and pathways to support the future development of a urine-based diagnostic test for LUTS.
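The ROC analysis used for biomarker evaluation can be sketched with the rank-sum identity for the area under the curve; the marker concentrations below are hypothetical, not the study's measurements.

```python
import numpy as np

def roc_auc(y_true, score):
    """Area under the ROC curve via the Mann-Whitney identity: the
    probability that a random diseased case scores above a random control,
    counting ties as half."""
    pos = score[y_true == 1]
    neg = score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical biomarker concentrations for 10 controls and 10 patients.
y = np.array([0] * 10 + [1] * 10)
marker = np.concatenate([np.linspace(1, 5, 10), np.linspace(4, 12, 10)])
auc = roc_auc(y, marker)
print(f"AUC = {auc:.2f}")
```

In a leave-one-patient-out scheme, the score for each held-out patient comes from a model trained without that patient, and the AUC is computed over the pooled held-out scores.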
NASA Astrophysics Data System (ADS)
Babaeian, E.; Tuller, M.; Sadeghi, M.; Franz, T.; Jones, S. B.
2017-12-01
Soil Moisture Active Passive (SMAP) soil moisture products are commonly validated based on point-scale reference measurements, despite the exorbitant spatial scale disparity. The difference between the measurement depth of point-scale sensors and the penetration depth of SMAP further complicates evaluation efforts. Cosmic-ray neutron probes (CRNP) with an approximately 500-m radius footprint provide an appealing alternative for SMAP validation. This study is focused on the validation of SMAP level-4 root zone soil moisture products with 9-km spatial resolution based on CRNP observations at twenty U.S. reference sites with climatic conditions ranging from semiarid to humid. The CRNP measurements are often biased by additional hydrogen sources such as surface water, atmospheric vapor, or mineral lattice water, which sometimes yield unrealistic moisture values in excess of the soil water storage capacity. These effects were removed during CRNP data analysis. Comparison of SMAP data with corrected CRNP observations revealed a very high correlation for most of the investigated sites, which opens new avenues for validation of current and future satellite soil moisture products.
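Converting a corrected CRNP count rate to soil moisture is commonly done with a Desilets-type calibration function; the sketch below uses the standard shape and commonly cited default constants, which should be treated as assumptions and recalibrated per site, and the count rates are hypothetical.

```python
import numpy as np

def crnp_moisture(N, N0, a0=0.0808, a1=0.372, a2=0.115, rho_b=1.4):
    """Convert a corrected cosmic-ray neutron count rate N to volumetric
    soil moisture with the standard Desilets-type calibration shape.
    N0 is the count rate over perfectly dry soil; a0, a1, a2 and the bulk
    density rho_b [g/cm^3] are assumed defaults, not site calibrations."""
    theta_g = a0 / (N / N0 - a1) - a2   # gravimetric water content [g/g]
    return theta_g * rho_b              # volumetric [m^3/m^3] (rho_w = 1)

N0 = 2500.0
counts = np.array([2400.0, 2000.0, 1600.0, 1300.0])  # wetter soil -> fewer neutrons
theta = crnp_moisture(counts, N0)
print(np.round(theta, 3))
```

The additional hydrogen sources mentioned in the abstract (surface water, vapor, lattice water) are handled as corrections to N before this conversion, which is why uncorrected counts can imply unrealistically high moisture.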
Assessment of Reinforced Concrete Surface Breaking Crack Using Rayleigh Wave Measurement.
Lee, Foo Wei; Chai, Hwa Kian; Lim, Kok Sing
2016-03-05
An improved single-sided Rayleigh wave (R-wave) measurement is proposed to characterize surface breaking cracks in steel reinforced concrete structures. Numerical simulations were performed to clarify the behavior of R-waves interacting with surface breaking cracks of different depths and degrees of inclination. Through analysis of the simulation results, correlations between the R-wave parameters of interest and the crack characteristics (depth and degree of inclination) were obtained; these were then validated by experimental measurement of concrete specimens with vertical and inclined artificial cracks of different depths. Wave parameters including velocity and amplitude attenuation were studied for each case. The correlations allowed us to estimate the depth and inclination of experimentally measured cracks with acceptable discrepancies, particularly for cracks which are relatively shallow and whose depth is smaller than the wavelength.
Calibration and accuracy analysis of a focused plenoptic camera
NASA Astrophysics Data System (ADS)
Zeller, N.; Quint, F.; Stilla, U.
2014-08-01
In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and of how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression for the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve-fitting approach based on Taylor series approximation. Both model-based methods show significant advantages over the curve-fitting method: they need fewer reference points for calibration and, moreover, supply a function which remains valid beyond the calibration range. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.
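The advantage of a model-based depth calibration over curve fitting can be sketched with a deliberately simple projection model: assume, thin-lens-like, that inverse metric depth is affine in the camera's raw "virtual depth" estimate, 1/z = p·v + q. The model, parameters, and values below are hypothetical illustrations, not the paper's actual camera model.

```python
import numpy as np

# Simulate reference measurements from the assumed model 1/z = p*v + q.
rng = np.random.default_rng(4)
p_true, q_true = 0.5, 0.1
z_ref = np.array([0.5, 0.8, 1.2, 2.0])        # calibration depths [m]
v_ref = (1 / z_ref - q_true) / p_true         # simulated virtual depths
v_ref += rng.normal(0, 0.01, v_ref.size)      # measurement noise

# Fit p, q by linear least squares: only a few reference points needed.
A = np.column_stack([v_ref, np.ones_like(v_ref)])
(p, q), *_ = np.linalg.lstsq(A, 1 / z_ref, rcond=None)

def depth_from_virtual(v):
    """Metric depth from virtual depth under the fitted projection model."""
    return 1.0 / (p * v + q)

# Because the model encodes the projection, it extrapolates beyond the
# calibrated 0.5-2.0 m range, unlike a local polynomial (Taylor) fit.
z_test = 3.0
v_test = (1 / z_test - q_true) / p_true
print(f"predicted {depth_from_virtual(v_test):.2f} m vs true {z_test} m")
```

A Taylor-series curve fit through the same four points would typically diverge quickly outside the calibration interval, which is the contrast the abstract draws.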
Computer aided manual validation of mass spectrometry-based proteomic data.
Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M
2013-06-15
Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold-standard approach to confirming the accuracy of database identifications, but it is extremely time-intensive. To reduce the growing time burden of manually validating large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.
Validation of luminescent source reconstruction using spectrally resolved bioluminescence images
NASA Astrophysics Data System (ADS)
Virostko, John M.; Powers, Alvin C.; Jansen, E. D.
2008-02-01
This study examines the accuracy of the Living Image® Software 3D Analysis Package (Xenogen, Alameda, CA) in reconstruction of light source depth and intensity. Constant intensity light sources were placed in an optically homogeneous medium (chicken breast). Spectrally filtered images were taken at 560, 580, 600, 620, 640, and 660 nanometers. The Living Image® Software 3D Analysis Package was employed to reconstruct source depth and intensity using these spectrally filtered images. For sources shallower than the mean free path of light there was proportionally higher inaccuracy in reconstruction. For sources deeper than the mean free path, the average error in depth and intensity reconstruction was less than 4% and 12%, respectively. The ability to distinguish multiple sources decreased with increasing source depth and typically required a spatial separation of twice the depth. The constant intensity light sources were also implanted in mice to examine the effect of optical inhomogeneity. The reconstruction accuracy suffered in inhomogeneous tissue with accuracy influenced by the choice of optical properties used in reconstruction.
Gamma Ray Observatory (GRO) OBC attitude error analysis
NASA Technical Reports Server (NTRS)
Harman, R. R.
1990-01-01
This analysis involves an in-depth look into the onboard computer (OBC) attitude determination algorithm. A review of the TRW error analysis and the ground simulations necessary to understand the onboard attitude determination process is performed. In addition, a plan is generated for in-flight calibration and validation of OBC-computed attitudes. Pre-mission expected accuracies are summarized, and the sensitivity of the onboard algorithms to sensor anomalies and filter tuning parameters is addressed.
National Defense Center of Excellence for Industrial Metrology and 3D Imaging
2012-10-18
…validation rather than mundane data-reduction/analysis tasks. Indeed, the new financial and technical resources being brought to bear by integrating CT…
…of extremely fast axial scanners. By replacing the single-spot detector by a detector array, a three-dimensional image is acquired by one depth scan…
…the number of acquired voxels per complete two-dimensional or three-dimensional image, the axial and lateral resolution, the depth range, the…
ERIC Educational Resources Information Center
Francis, Leslie John
The purpose of this study was to provide an in-depth analysis of vision-screening programs in relation to their efficacy, appropriateness, and feasibility for public school use. Twenty-two vision-screening programs were analyzed for reliability, validity, efficiency of identification and referral cost, and required testing time. Findings are that…
Color difference threshold of chromostereopsis induced by flat display emission.
Ozolinsh, Maris; Muizniece, Kristine
2015-01-01
The study of chromostereopsis has gained attention against the backdrop of the use of computer displays in daily life. In this context, we analyze the illusory depth sense produced by planar color images presented on a computer screen. We determine the color difference threshold required to induce an illusory sense of depth psychometrically, using a constant-stimuli paradigm. Isoluminant stimuli aligned along the blue-red line of the display's CIE xyY color space are presented on the screen. Stereo disparity is generated by increasing the color difference between the central and surrounding areas of the stimuli, both of which consist of random dots on a black background. The observed alteration of the illusory depth sense, and thus of the stereo disparity, is validated using the "center-of-gravity" model. The induced illusory depth effect undergoes color reversal upon varying the binocular lateral eye-pupil covering conditions (lateral or medial). Analysis of the retinal image point spread function for the display's red and blue pixel radiation validates both the alteration of chromostereopsis retinal disparity achieved by increasing the color difference and the chromostereopsis color reversal caused by varying the eye-pupil covering conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tougaard, Sven
The author reports a systematic study of the range of validity of a previously developed algorithm for automated x-ray photoelectron spectroscopy analysis, which takes into account the variation in both the peak intensity and the intensity of the background of inelastically scattered electrons. The test was done by first simulating spectra for the Au4d peak with gold atoms distributed in a wide range of nanostructures, including overlayers of varying thickness, a 5 Å layer of atoms buried at varying depths, and a substrate covered with an overlayer of varying thickness. Next, the algorithm was applied to analyze these spectra. The algorithm determines the number of atoms within the outermost 3λ of the surface, an amount of substance denoted AOS_3λ (where λ is the electron inelastic mean free path). In general, the determined AOS_3λ is accurate to within ~10-20%, depending on the depth distribution of the atoms. The algorithm also determines a characteristic length L, which was found to give unambiguous information on the depth distribution of the atoms in practically all studied cases. A set of rules for this parameter, relating the value of L to the depths at which the atoms are distributed, was tested, and these rules were found to be generally valid with only a few exceptions. The results were rather independent of the spectral energy range (from 20 to 40 eV below the peak energy) used in the analysis.
NASA Astrophysics Data System (ADS)
Adams, L. M.; LeVeque, R. J.
2015-12-01
The ability to measure, predict, and compute tsunami flow velocities is of importance in risk assessment and hazard mitigation. Until recently, few direct measurements of tsunami velocities existed to compare with model results. During the 11 March 2011 Tohoku Tsunami, 328 current meters were in place around the Hawaiian Islands, USA, capturing time series of water velocity in 18 locations, in both harbors and deep channels, at a series of depths. Arcos and LeVeque [1] compared these records against numerical simulations performed using the GeoClaw numerical tsunami model, which is based on the depth-averaged shallow water equations. They confirmed that GeoClaw can accurately predict velocities at nearshore locations, and that tsunami current velocity is more spatially variable than wave form or height and potentially more sensitive for model validation. We present a new approach to detiding this sensitive current data. The approach can be applied separately to the data at each depth of a current gauge. When averaged across depths, the GeoClaw results in [1] are validated. Without averaging, the results should be useful to researchers wishing to validate their 3D codes; they can be downloaded from the project website below. The approach decomposes the pre-tsunami component of the data into three parts: a tidal component, a fast component (noise), and a slow component (not matched by the harmonic analysis). Each part is extended to the time when the tsunami is present and subtracted from the current data to give the "tsunami current" that can be compared with 2D or 3D codes that do not model currents in the pre-tsunami regime. [1] "Validating Velocities in the GeoClaw Tsunami Model using Observations Near Hawaii from the 2011 Tohoku Tsunami", M.E.M. Arcos and Randall J. LeVeque, arXiv:1410.2884v1 [physics.geo-ph], 10 Oct. 2014. Project website: http://faculty.washington.edu/lma3/research.html
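The tidal part of the decomposition above is conventionally removed by least-squares harmonic fitting over the pre-tsunami window. A minimal sketch follows; the sampling, amplitudes, and tsunami pulse are hypothetical, and only the M2 and K1 constituents (standard periods) are fitted.

```python
import numpy as np

dt = 60.0                                        # s, sampling interval
t = np.arange(0, 5 * 86400, dt)                  # five days of current data
periods = np.array([12.4206, 23.9345]) * 3600.0  # M2, K1 constituent periods [s]
omega = 2 * np.pi / periods

# Synthetic record: mean flow + tide + noise + a late-arriving tsunami pulse.
rng = np.random.default_rng(5)
tide = 0.30 * np.sin(omega[0] * t + 0.4) + 0.12 * np.cos(omega[1] * t)
tsunami = 0.5 * np.exp(-(((t - 4.2 * 86400) / 3600.0) ** 2))
u = 0.05 + tide + rng.normal(0, 0.02, t.size) + tsunami

# Fit mean + harmonics on the pre-tsunami window only, then subtract the
# fitted signal over the whole record.
pre = t < 4.0 * 86400
cols = [np.ones_like(t)]
for w in omega:
    cols += [np.sin(w * t), np.cos(w * t)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A[pre], u[pre], rcond=None)
detided = u - A @ coef                           # "tsunami current" + residual noise
```

The residual in the pre-tsunami window is what the abstract splits further into fast and slow components; the pulse survives the subtraction essentially intact.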
NASA Astrophysics Data System (ADS)
Basu, Biswajit
2017-12-01
Bounds on estimates of wave heights (valid for large amplitudes) from pressure and flow measurements at an arbitrary intermediate depth are provided. Two-dimensional irrotational steady water waves over a flat bed of finite depth, in the presence of underlying uniform currents, are considered in the analysis. Five different upper bounds based on combinations of pressure and velocity field measurements are derived, though only one lower bound on the wave height is available, for the cases of current speed greater than or less than the wave speed. This article is part of the theme issue 'Nonlinear water waves'.
Martín-Campos, Trinidad; Mylonas, Roman; Masselot, Alexandre; Waridel, Patrice; Petricevic, Tanja; Xenarios, Ioannis; Quadroni, Manfredo
2017-08-04
Mass spectrometry (MS) has become the tool of choice for the large scale identification and quantitation of proteins and their post-translational modifications (PTMs). This development has been enabled by powerful software packages for the automated analysis of MS data. While data on PTMs of thousands of proteins can nowadays be readily obtained, fully deciphering the complexity and combinatorics of modification patterns even on a single protein often remains challenging. Moreover, functional investigation of PTMs on a protein of interest requires validation of the localization and the accurate quantitation of its changes across several conditions, tasks that often still require human evaluation. Software tools for large scale analyses are highly efficient but are rarely conceived for interactive, in-depth exploration of data on individual proteins. We here describe MsViz, a web-based and interactive software tool that supports manual validation of PTMs and their relative quantitation in small- and medium-size experiments. The tool displays sequence coverage information, peptide-spectrum matches, tandem MS spectra and extracted ion chromatograms through a single, highly intuitive interface. We found that MsViz greatly facilitates manual data inspection to validate PTM location and quantitate modified species across multiple samples.
Validation of a computerized algorithm to quantify fetal heart rate deceleration area.
Gyllencreutz, Erika; Lu, Ke; Lindecrantz, Kaj; Lindqvist, Pelle G; Nordstrom, Lennart; Holzmann, Malin; Abtahi, Farhad
2018-05-16
Reliability in visual cardiotocography interpretation is unsatisfactory, which has led to the development of computerized cardiotocography. Computerized analysis is well established for antenatal fetal surveillance, but has not yet performed sufficiently well during labor. We aimed to investigate the capacity of a new computerized algorithm compared to visual assessment in identifying intrapartum fetal heart rate baseline and decelerations. Three hundred and twelve intrapartum cardiotocography tracings with variable decelerations were analysed by the computerized algorithm and visually examined by two observers, blinded to each other and to the computer analysis. The width, depth and area of each deceleration were measured. Four cases (>100 variable decelerations) were subject to in-depth detailed analysis. The outcome measures were bias in seconds (width), beats per minute (depth), and beats (area) between computer and observers, using Bland-Altman analysis. Interobserver reliability was determined by calculating intraclass correlation and Spearman rank analysis. The analysis (312 cases) showed excellent intraclass correlation (0.89-0.95) and very strong Spearman correlation (0.82-0.91). The detailed analysis of >100 decelerations in 4 cases revealed low bias between the computer and the two observers: width 1.4 and 1.4 seconds, depth 5.1 and 0.7 beats per minute, and area 0.1 and -1.7 beats. This was comparable to the bias between the two observers: 0.3 seconds (width), 4.4 beats per minute (depth), and 1.7 beats (area). The intraclass correlation was excellent (0.90-0.98). A novel computerized algorithm for intrapartum cardiotocography analysis is as accurate as gold-standard visual assessment, with high correlation and low bias. This article is protected by copyright. All rights reserved.
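The bias values reported here are Bland-Altman statistics: the mean of the paired differences, with limits of agreement at plus or minus 1.96 standard deviations of those differences. A minimal sketch with invented deceleration-depth values (not the study's data):

```python
import numpy as np

# Hypothetical paired deceleration-depth measurements (beats per minute)
# from the computer algorithm and one human observer.
computer = np.array([32.0, 41.5, 28.0, 55.2, 47.8, 36.1])
observer = np.array([30.5, 43.0, 27.2, 54.0, 49.1, 35.0])

# Bland-Altman statistics: bias is the mean difference, and the limits
# of agreement are bias +/- 1.96 * SD of the differences.
diff = computer - observer
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

print(f"bias = {bias:.2f} bpm, limits of agreement = "
      f"({loa[0]:.2f}, {loa[1]:.2f}) bpm")
```

A small bias with narrow limits of agreement is what supports the abstract's claim that the algorithm matches visual assessment.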
Structural and reliability analysis of quality of relationship index in cancer patients.
Cousson-Gélie, Florence; de Chalvron, Stéphanie; Zozaya, Carole; Lafaye, Anaïs
2013-01-01
Among psychosocial factors affecting emotional adjustment and quality of life, social support is one of the most important and widely studied in cancer patients, but little is known about the perception of support in specific significant relationships in patients with cancer. This study examined the psychometric properties of the Quality of Relationship Inventory (QRI) by evaluating its factor structure and its convergent and discriminant validity in a sample of cancer patients. A total of 388 patients completed the QRI. Convergent validity was evaluated by testing the correlations between the QRI subscales and measures of general social support, anxiety and depression symptoms. Discriminant validity was examined by testing group comparison. The QRI's longitudinal invariance across time was also tested. Principal axis factor analysis with promax rotation identified three factors accounting for 42.99% of variance: perceived social support, depth, and interpersonal conflict. Estimates of reliability with McDonald's ω coefficient were satisfactory for all the QRI subscales (ω ranging from 0.75 to 0.85). Satisfaction from general social support was negatively correlated with the interpersonal conflict subscale and positively with the depth subscale. The interpersonal conflict and social support scales were correlated with depression and anxiety scores. We also found a relative stability of QRI subscales (measured 3 months after the first evaluation) and differences between partner status and gender groups. The Quality of Relationship Inventory is a valid tool for assessing the quality of social support in a particular relationship in cancer patients.
Implementations of the Navy Coupled Ocean Data Assimilation System at the Naval Oceanographic Office
2010-06-01
Fig. 3. NCODA analysis flow: GDEM climatology (±2 std = 95.4%), GDEM POE at depth, a MODAS synthetic T,S profile with satellite SST, local OI of nearby valid data, and a global 3D analysis. Fields compared include the observation (Obs), the NCODA analysis (Anal), the RNCOM nowcast (NCST) for today, the RNCOM 24-hour forecast (FCST) from yesterday, and GDEM climatology (Clim).
Schwegmann, Alexander; Lindemann, Jens P.; Egelhaaf, Martin
2014-01-01
Knowing the depth structure of the environment is crucial for moving animals in many behavioral contexts, such as collision avoidance, targeting objects, or spatial navigation. An important source of depth information is motion parallax. This powerful cue is generated on the eyes during translatory self-motion with the retinal images of nearby objects moving faster than those of distant ones. To investigate how the visual motion pathway represents motion-based depth information we analyzed its responses to image sequences recorded in natural cluttered environments with a wide range of depth structures. The analysis was done on the basis of an experimentally validated model of the visual motion pathway of insects, with its core elements being correlation-type elementary motion detectors (EMDs). It is the key result of our analysis that the absolute EMD responses, i.e., the motion energy profile, represent the contrast-weighted nearness of environmental structures during translatory self-motion at a roughly constant velocity. In other words, the output of the EMD array highlights contours of nearby objects. This conclusion is largely independent of the scale over which EMDs are spatially pooled and was corroborated by scrutinizing the motion energy profile after eliminating the depth structure from the natural image sequences. Hence, the well-established dependence of correlation-type EMDs on both velocity and textural properties of motion stimuli appears to be advantageous for representing behaviorally relevant information about the environment in a computationally parsimonious way. PMID:25136314
Forage quantity estimation from MERIS using band depth parameters
NASA Astrophysics Data System (ADS)
Ullah, Saleem; Yali, Si; Schlerf, Martin
Forage quantity is an important factor influencing the feeding pattern and distribution of wildlife. The main objective of this study was to evaluate the predictive performance of vegetation indices and band depth analysis parameters for the estimation of green biomass using MERIS data. Green biomass was best predicted by the NBDI (normalized band depth index), which yielded a calibration R2 of 0.73 and an accuracy (independent validation dataset, n=30) of 136.2 g/m2 (47% of the measured mean), compared to a much lower accuracy obtained by the soil-adjusted vegetation index SAVI (444.6 g/m2, 154% of the mean) and by other vegetation indices. This study will contribute to mapping and monitoring foliar biomass over the year at regional scale, which in turn can aid the understanding of bird migration patterns. Keywords: Biomass, Nitrogen density, Nitrogen concentration, Vegetation indices, Band depth analysis parameters. Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, The Netherlands
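Band depth parameters derive from continuum removal: reflectance within an absorption feature is ratioed against a straight-line continuum fitted between the feature's shoulders, giving a depth of 1 - R/Rc at each band. A sketch of that common formulation (the precise NBDI normalization used by the authors may differ, and the MERIS-like wavelengths and reflectances below are invented):

```python
import numpy as np

def band_depth(wavelengths, reflectance, left, right):
    """Continuum-removed band depth, 1 - R/Rc, where Rc is the straight-line
    continuum drawn between the two absorption shoulders at `left`/`right`."""
    wl = np.asarray(wavelengths, dtype=float)
    r = np.asarray(reflectance, dtype=float)
    # Continuum: linear interpolation between the shoulder reflectances.
    r_left = np.interp(left, wl, r)
    r_right = np.interp(right, wl, r)
    mask = (wl >= left) & (wl <= right)
    continuum = r_left + (r_right - r_left) * (wl[mask] - left) / (right - left)
    depth = 1.0 - r[mask] / continuum
    return wl[mask], depth

# Toy absorption feature on MERIS-like bands (nm); values are illustrative.
wl = [660, 680, 700, 720, 740, 760]
refl = [0.45, 0.30, 0.28, 0.35, 0.42, 0.46]
band_wl, depth = band_depth(wl, refl, 660, 760)
print(f"maximum band depth = {depth.max():.3f} at {band_wl[depth.argmax()]} nm")
```

Indices such as NBDI then normalize these per-band depths before regressing them against field-measured biomass.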
Vasli, Parvaneh; Dehghan-Nayeri, Nahid; Khosravi, Laleh
2018-01-01
Despite the emphasis placed on the implementation of continuing professional education programs in Iran, researchers and practitioners have not developed an instrument for assessing the factors that affect the transfer of knowledge from such programs to clinical practice. The aim of this study was to design and validate such an instrument for the Iranian context. The research used a three-stage mixed-methods design. In the first stage, in-depth interviews with nurses and content analysis were conducted, after which themes were extracted from the data. In the second stage, the findings of the content analysis and a literature review were examined, and preliminary instrument items were developed. In the third stage, qualitative content validity, face validity, content validity ratio, content validity index, and construct validity (using exploratory factor analysis) were assessed. The reliability of the instrument was measured before and after the determination of construct validity. The preliminary instrument comprised 53 items, and its content validity index was 0.86. In the multi-stage factor analysis, eight questions were excluded, reducing the initial 11 factors first to five and finally to four. The final instrument, with 43 items, consists of the following dimensions: structure and organizational climate, personal characteristics, nature and status of professionals, and nature of educational programs. Managers can use the Iranian instrument to identify factors affecting the transfer of knowledge from continuing professional education to clinical practice. Copyright © 2017. Published by Elsevier Ltd.
Luyckx, Koen; Goossens, Luc; Soenens, Bart; Beyers, Wim
2006-06-01
A model of identity formation comprising four structural dimensions (Commitment Making, Identification with Commitment, Exploration in Depth, and Exploration in Breadth) was developed through confirmatory factor analysis. In a sample of 565 emerging adults, this model provided a better fit than did alternative two- and three-dimensional models, thereby validating the unpacking of both exploration and commitment. Regression analyses indicated that Commitment Making was significantly related to family context in accordance with hypotheses. Identification with Commitment and both exploration dimensions were significantly related to adjustment and family context, again in accordance with hypotheses. Identification with Commitment was positively related to positive adjustment indicators and negatively to depressive symptoms, whereas Exploration in Breadth was positively related to depressive symptoms and substance use. Exploration in Depth, on the other hand, was positively related to academic adjustment and negatively to substance use. Implications and suggestions for future research are discussed.
Screening for trace explosives by AccuTOF™-DART®: an in-depth validation study.
Sisco, Edward; Dake, Jeffrey; Bridge, Candice
2013-10-10
Ambient ionization mass spectrometry is finding increasing utility as a rapid analysis technique in a number of fields. In forensic science specifically, analysis of many types of samples, including drugs, explosives, inks, bank dye, and lotions, has been shown to be possible using these techniques [1]. This paper focuses on one type of ambient ionization mass spectrometry, Direct Analysis in Real Time Mass Spectrometry (DART-MS or DART), and its viability as a screening tool for trace explosives analysis. In order to assess viability, a validation study was completed which focused on the analysis of trace amounts of nitro and peroxide based explosives. Topics which were studied, and are discussed, include method optimization, reproducibility, sensitivity, development of a search library, discrimination of mixtures, and blind sampling. Advantages and disadvantages of this technique over other similar screening techniques are also discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Leightley, Daniel; Yap, Moi Hoon
2018-03-02
The aim of this study was to compare the performance between young adults ( n = 15), healthy old people ( n = 10), and masters athletes ( n = 15) using a depth sensor and automated digital assessment framework. Participants were asked to complete a clinically validated assessment of the sit-to-stand technique (five repetitions), which was recorded using a depth sensor. A feature encoding and evaluation framework to assess balance, core, and limb performance using time- and speed-related measurements was applied to markerless motion capture data. The associations between the measurements and participant groups were examined and used to evaluate the assessment framework suitability. The proposed framework could identify phases of sit-to-stand, stability, transition style, and performance between participant groups with a high degree of accuracy. In summary, we found that a depth sensor coupled with the proposed framework could identify performance subtleties between groups.
Measuring stress variation with depth using Barkhausen signals
NASA Astrophysics Data System (ADS)
Kypris, O.; Nlebedim, I. C.; Jiles, D. C.
2016-06-01
Magnetic Barkhausen noise analysis (BNA) is an established technique for the characterization of stress in ferromagnetic materials. An important application is the evaluation of residual stress in aerospace components, where shot-peening is used to strengthen the part by inducing compressive residual stresses on its surface. However, the evaluation of the resulting stress-depth gradients cannot be achieved by conventional BNA methods, where signals are interpreted in the time domain. The immediate alternative of using x-ray diffraction stress analysis is less than ideal, as the use of electropolishing to remove surface layers renders the part useless after inspection. Thus, a need for advancing the current BNA techniques prevails. In this work, it is shown how a parametric model for the frequency spectrum of Barkhausen emissions can be used to detect variations of stress along depth in ferromagnetic materials. Proof of concept is demonstrated by inducing linear stress-depth gradients using four-point bending, and fitting the model to the frequency spectra of measured Barkhausen signals, using a simulated annealing algorithm to extract the model parameters. Validation of our model suggests that in bulk samples the Barkhausen frequency spectrum can be expressed by a multi-exponential function with a dependence on stress and depth. One practical application of this spectroscopy method is the non-destructive evaluation of residual stress-depth profiles in aerospace components, thus helping to prevent catastrophic failures.
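The fitting step can be sketched as follows: a two-term multi-exponential model of the Barkhausen frequency spectrum is fitted to a synthetic noisy spectrum. The authors used simulated annealing; a coarse grid search over the decay constants, with linear least squares for the amplitudes, stands in for it here, and every parameter value is illustrative:

```python
import numpy as np

# Illustrative two-layer multi-exponential spectrum model: each depth layer i
# contributes A_i * exp(-f / f_i), with the decay constant f_i governed by
# eddy-current attenuation of emissions originating at that layer's depth.
def model(f, a1, f1, a2, f2):
    return a1 * np.exp(-f / f1) + a2 * np.exp(-f / f2)

f = np.linspace(1e3, 200e3, 400)                       # Hz
rng = np.random.default_rng(0)
measured = model(f, 1.0, 20e3, 0.4, 80e3) + rng.normal(0, 0.002, f.size)

best = None
for g1 in np.linspace(5e3, 50e3, 46):                  # candidate decay consts
    for g2 in np.linspace(55e3, 150e3, 96):
        basis = np.column_stack([np.exp(-f / g1), np.exp(-f / g2)])
        amps, *_ = np.linalg.lstsq(basis, measured, rcond=None)
        err = np.sum((basis @ amps - measured) ** 2)
        if best is None or err < best[0]:
            best = (err, amps[0], g1, amps[1], g2)

_, a1, f1, a2, f2 = best
print(f"fitted: a1={a1:.2f}, f1={f1/1e3:.0f} kHz, a2={a2:.2f}, f2={f2/1e3:.0f} kHz")
```

In the paper's setting, the recovered amplitude/decay pairs are what encode the stress-depth profile; a global optimizer such as simulated annealing simply replaces the grid search.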
Reliable assessment of laparoscopic performance in the operating room using videotape analysis.
Chang, Lily; Hogle, Nancy J; Moore, Brianna B; Graham, Mark J; Sinanan, Mika N; Bailey, Robert; Fowler, Dennis L
2007-06-01
The Global Operative Assessment of Laparoscopic Skills (GOALS) is a valid assessment tool for objectively evaluating the technical performance of laparoscopic skills in surgery residents. We hypothesized that GOALS would reliably differentiate between an experienced (expert) and an inexperienced (novice) laparoscopic surgeon (construct validity) based on a blinded videotape review of a laparoscopic cholecystectomy procedure. Ten board-certified surgeons actively engaged in the practice and teaching of laparoscopy reviewed and evaluated the videotaped operative performance of one novice and one expert laparoscopic surgeon using GOALS. Each reviewer recorded a score for both the expert and the novice videotape reviews in each of the 5 domains in GOALS (depth perception, bimanual dexterity, efficiency, tissue handling, and overall competence). The scores for the expert and the novice were compared and statistically analyzed using single-factor analysis of variance (ANOVA). The expert scored significantly higher than the novice in the domains of depth perception (p = .005), bimanual dexterity (p = .001), efficiency (p = .001), and overall competence (p = .001); there was no difference between the two for tissue handling. Interrater reliability was Cronbach's alpha = .93 for reviewers of the novice tape and alpha = .87 for the expert tape. The Global Operative Assessment of Laparoscopic Skills is a valid, objective assessment tool for evaluating technical surgical performance when used to blindly evaluate an intraoperative videotape recording of a laparoscopic procedure.
Validation of a Thermo-Ablative Model of Elastomeric Internal Insulation Materials
NASA Technical Reports Server (NTRS)
Martin, Heath T.
2017-01-01
In thermo-ablative material modeling, as in many fields of analysis, the quality of the existing models significantly exceeds that of the experimental data required for their validation. In an effort to narrow this gap, a laboratory-scale internal insulation test bed was developed that exposes insulation samples to realistic solid rocket motor (SRM) internal environments while being instrumented to record real-time rates of both model inputs (i.e., chamber pressure, total surface heat flux, and radiative heat flux) as well as model outputs (i.e., material decomposition depths (MDDs) and in-depth material temperatures). In this work, the measured SRM internal environment parameters were used in conjunction with equilibrium thermochemistry codes as inputs to one-dimensional thermo-ablative models of the PBINBR and CFEPDM insulation samples used in the lab-scale test firings. The computed MDD histories were then compared with those deduced from real-time X-ray radiography of the insulation samples, and the calculated in-depth temperatures were compared with those measured by embedded thermocouples. The results of this exercise emphasize the challenges of modeling and testing elastomeric materials in SRM environments while illuminating the path forward to improved fidelity.
Kogo, Haruki; Murata, Jun; Murata, Shin; Higashi, Toshio
2017-01-01
This study examined the validity of a practical evaluation method for pitting edema by comparing it to other methods, including circumference measurements and ultrasound image measurements. Fifty-one patients (102 legs) from a convalescent ward in Maruyama Hospital were recruited for study 1, and 47 patients (94 legs) from a convalescent ward in Morinaga Hospital were recruited for study 2. The relationship between the depth of the surface imprint and circumferential measurements, as well as the relationship between the depth of the surface imprint and the thickness of the subcutaneous soft tissue on an ultrasonogram, were analyzed using Spearman's rank correlation coefficient. There was no significant relationship between the surface imprint depth and circumferential measurements. However, there was a significant relationship between the depth of the surface imprint and the thickness of the subcutaneous soft tissue as measured on an ultrasonogram (correlation coefficient 0.736). Our findings suggest that our novel evaluation method for pitting edema, based on a measurement of the surface imprint depth, is both valid and useful.
Cognitive Task Analysis of En Route Air Traffic Control: Model Extension and Validation.
ERIC Educational Resources Information Center
Redding, Richard E.; And Others
Phase II of a project extended data collection and analytic procedures to develop a model of expertise and skill development for en route air traffic control (ATC). New data were collected by recording the Dynamic Simulator (DYSIM) performance of five experts with a work overload problem. Expert controllers were interviewed in depth for mental…
In vitro burn model illustrating heat conduction patterns using compressed thermal papers.
Lee, Jun Yong; Jung, Sung-No; Kwon, Ho
2015-01-01
To date, heat conduction from heat sources to tissue has been estimated by complex mathematical modeling. In the present study, we developed an intuitive in vitro skin burn model that illustrates heat conduction patterns inside the skin. This was composed of tightly compressed thermal papers with compression frames. Heat flow through the model left a trace by changing the color of thermal papers. These were digitized and three-dimensionally reconstituted to reproduce the heat conduction patterns in the skin. For standardization, we validated K91HG-CE thermal paper using a printout test and bivariate correlation analysis. We measured the papers' physical properties and calculated the estimated depth of heat conduction using Fourier's equation. Through contact burns of 5, 10, 15, 20, and 30 seconds on porcine skin and our burn model using a heated brass comb, and comparing the burn wound and heat conduction trace, we validated our model. The heat conduction pattern correlation analysis (intraclass correlation coefficient: 0.846, p < 0.001) and the heat conduction depth correlation analysis (intraclass correlation coefficient: 0.93, p < 0.001) showed statistically significant high correlations between the porcine burn wound and our model. Our model showed good correlation with porcine skin burn injury and replicated its heat conduction patterns. © 2014 by the Wound Healing Society.
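The depth estimate from Fourier's equation can be approximated, to first order, by the transient-conduction penetration depth, delta of roughly 2*sqrt(alpha*t). A back-of-the-envelope sketch, assuming a generic thermal diffusivity for the paper stack (the value below is an assumption, not a property measured in the study):

```python
import math

# Rough penetration-depth estimate from transient conduction theory:
# delta ~ 2 * sqrt(alpha * t), where alpha is the thermal diffusivity.
alpha = 1.0e-7   # m^2/s, assumed diffusivity for compressed thermal paper
for t in (5, 10, 15, 20, 30):  # contact times used in the study, seconds
    delta_mm = 2 * math.sqrt(alpha * t) * 1000
    print(f"t = {t:2d} s -> estimated conduction depth ~ {delta_mm:.2f} mm")
```

Comparing such estimates against the depth of the color-change trace in the stacked papers is what the study's heat-conduction-depth correlation analysis quantifies.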
Falling head ponded infiltration in the nonlinear limit
NASA Astrophysics Data System (ADS)
Triadis, D.
2014-12-01
The Green and Ampt infiltration solution represents only an extreme example of behavior within a larger class of very nonlinear, delta function diffusivity soils. The mathematical analysis of these soils is greatly simplified by the existence of a sharp wetting front below the soil surface. Solutions for more realistic delta function soil models have recently been presented for infiltration under surface saturation without ponding. After general formulation of the problem, solutions for a full suite of delta function soils are derived for ponded surface water depleted by infiltration. Exact expressions for the cumulative infiltration as a function of time, or the drainage time as a function of the initial ponded depth may take implicit or parametric forms, and are supplemented by simple asymptotic expressions valid for small times, and small and large initial ponded depths. As with surface saturation without ponding, the Green-Ampt model overestimates the effect of the soil hydraulic conductivity. At the opposing extreme, a low-conductivity model is identified that also takes a very simple mathematical form and appears to be more accurate than the Green-Ampt model for larger ponded depths. Between these two, the nonlinear limit of Gardner's soil is recommended as a physically valid first approximation. Relative discrepancies between different soil models are observed to reach a maximum for intermediate values of the dimensionless initial ponded depth, and in general are smaller than for surface saturation without ponding.
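For orientation, the classical Green-Ampt end-member mentioned above gives cumulative infiltration I(t) only implicitly, via K*t = I - S*ln(1 + I/S) with S = (psi + h0)*dtheta. A sketch that solves the constant-head form by fixed-point iteration; the paper treats the harder falling-head case, where h0 itself is depleted by infiltration, and the parameters below are assumed loam-like values:

```python
import math

def green_ampt_cumulative(t, K, psi, dtheta, h0=0.0, tol=1e-10):
    """Cumulative infiltration I(t) [m] from the implicit Green-Ampt relation
    K*t = I - S*ln(1 + I/S), with S = (psi + h0)*dtheta, solved by
    fixed-point iteration (the map is a contraction for I, S > 0)."""
    S = (psi + h0) * dtheta
    I = max(K * t, tol)                      # initial guess
    while True:
        I_new = K * t + S * math.log(1.0 + I / S)
        if abs(I_new - I) < tol:
            return I_new
        I = I_new

# Assumed loam-like parameters (not values from the paper):
K, psi, dtheta = 3.0e-6, 0.11, 0.3           # m/s, m, dimensionless
for t_hr in (0.5, 1.0, 2.0):
    I = green_ampt_cumulative(t_hr * 3600, K, psi, dtheta, h0=0.05)
    print(f"t = {t_hr} h -> I = {I * 1000:.1f} mm")
```

The delta-function-diffusivity soils analyzed in the paper replace this single end-member relation with a family of implicit or parametric expressions, of which Green-Ampt is one extreme.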
Reeves, Todd D.; Marbach-Ad, Gili
2016-01-01
Most discipline-based education researchers (DBERs) were formally trained in the methods of scientific disciplines such as biology, chemistry, and physics, rather than social science disciplines such as psychology and education. As a result, DBERs may have never taken specific courses in the social science research methodology—either quantitative or qualitative—on which their scholarship often relies so heavily. One particular aspect of (quantitative) social science research that differs markedly from disciplines such as biology and chemistry is the instrumentation used to quantify phenomena. In response, this Research Methods essay offers a contemporary social science perspective on test validity and the validation process. The instructional piece explores the concepts of test validity, the validation process, validity evidence, and key threats to validity. The essay also includes an in-depth example of a validity argument and validation approach for a test of student argument analysis. In addition to DBERs, this essay should benefit practitioners (e.g., lab directors, faculty members) in the development, evaluation, and/or selection of instruments for their work assessing students or evaluating pedagogical innovations. PMID:26903498
NASA Technical Reports Server (NTRS)
Witek, Marcin L.; Garay, Michael J.; Diner, David J.; Smirnov, Alexander
2013-01-01
In this study, aerosol optical depths over oceans are analyzed from satellite and surface perspectives. Multiangle Imaging SpectroRadiometer (MISR) aerosol retrievals are investigated and validated primarily against Maritime Aerosol Network (MAN) observations. Furthermore, AErosol RObotic NETwork (AERONET) data from 19 island and coastal sites are incorporated in this study. A total of 270 MISR-MAN comparison points scattered across all oceans were identified. MISR on average overestimates aerosol optical depths (AODs) by 0.04 as compared to MAN; the correlation coefficient and root-mean-square error are 0.95 and 0.06, respectively. A new screening procedure based on retrieval region characterization is proposed, which is capable of substantially reducing MISR retrieval biases. Over 1000 additional MISR-AERONET comparison points are added to the analysis to confirm the validity of the method. The bias reduction is effective within all AOD ranges. Setting a clear-flag fraction threshold to 0.6 reduces the bias to below 0.02, which is close to a typical ground-based measurement uncertainty. Twelve years of MISR data are analyzed with the new screening procedure. The average over-ocean AOD is reduced by 0.03, from 0.15 to 0.12. The largest AOD decrease is observed in high latitudes of both hemispheres, regions with climatologically high cloud cover. It is postulated that the screening procedure eliminates spurious retrieval errors associated with cloud contamination and cloud adjacency effects. The proposed filtering method can be used for validating aerosol and chemical transport models.
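The screening step described above (retain only matchups whose retrieval region is predominantly flagged clear, then recompute the comparison statistics) can be sketched with toy numbers; clear_frac and all values below are illustrative stand-ins, not MISR product fields:

```python
import numpy as np

# Toy satellite-vs-ship AOD matchups, each with a per-retrieval fraction of
# the retrieval region flagged clear (names and values are invented).
sat_aod = np.array([0.18, 0.12, 0.30, 0.09, 0.45, 0.22])
ship_aod = np.array([0.14, 0.11, 0.21, 0.08, 0.40, 0.15])
clear_frac = np.array([0.9, 0.8, 0.3, 0.95, 0.7, 0.4])

def stats(s, g):
    """Mean bias and RMSE of satellite minus ground AOD."""
    bias = np.mean(s - g)
    rmse = np.sqrt(np.mean((s - g) ** 2))
    return bias, rmse

print("all matchups:     bias=%.3f rmse=%.3f" % stats(sat_aod, ship_aod))
keep = clear_frac >= 0.6          # screening threshold, as in the study
print("screened (>=0.6): bias=%.3f rmse=%.3f" % stats(sat_aod[keep], ship_aod[keep]))
```

Dropping the cloud-contaminated matchups lowers both the bias and the RMSE, which is the qualitative effect the study reports at the 0.6 threshold.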
NASA Astrophysics Data System (ADS)
Letort, Jean; Guilbert, Jocelyn; Cotton, Fabrice; Bondár, István; Cano, Yoann; Vergoz, Julien
2015-06-01
The depth of an earthquake is difficult to estimate because of the trade-off between depth and origin time estimations, and because it can be biased by lateral Earth heterogeneities. To face this challenge, we have developed a new, blind and fully automatic teleseismic depth analysis. The results of this new method do not depend on epistemic uncertainties due to depth-phase picking and identification. The method consists of a modification of the cepstral analysis from Letort et al. and Bonner et al., which aims to detect surface-reflected (pP, sP) waves in a signal at teleseismic distances (30°-90°) through the study of the spectral holes in the shape of the signal spectrum. The ability of our automatic method to improve depth estimations is shown by relocation of the recent moderate seismicity of the Guerrero subduction area (Mexico). We have therefore estimated the depth of 152 events using teleseismic data from the IRIS stations and arrays. One advantage of this method is that it can be applied for single stations (from IRIS) as well as for classical arrays. In the Guerrero area, our new cepstral analysis efficiently clusters event locations and provides an improved view of the geometry of the subduction. Moreover, we have also validated our method through relocation of the same events using the new International Seismological Centre (ISC)-locator algorithm, as well as comparing our cepstral depths with the available Harvard-Centroid Moment Tensor (CMT) solutions and the three available ground truth (GT5) events (where lateral localization is assumed to be well constrained with uncertainty <5 km) for this area. These comparisons indicate an overestimation of focal depths in the ISC catalogue for deeper parts of the subduction, and they show a systematic bias between the estimated cepstral depths and the ISC-locator depths.
Using information from the CMT catalogue relating to the predominant focal mechanism for this area, this bias can be explained as a misidentification of sP phases by pP phases, which shows the greater interest for the use of this new automatic cepstral analysis, as it is less sensitive to phase identification.
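The cepstral idea is that a surface-reflected phase delayed by tau imprints periodic ripples (the spectral holes mentioned above) on the amplitude spectrum, and the cepstrum converts that periodicity into a peak at lag tau. A minimal illustration on a synthetic two-spike trace, not the authors' algorithm:

```python
import numpy as np

# A direct P spike plus a surface-reflected "pP"-like echo delayed by tau.
n, fs = 4096, 40.0            # samples, sampling rate (Hz)
tau, amp = 3.0, -0.6          # pP-P delay (s) and relative echo amplitude
trace = np.zeros(n)
trace[200] = 1.0                          # direct P arrival
trace[200 + int(tau * fs)] += amp         # surface-reflected echo

# |spectrum| = |1 + amp * exp(-i*omega*tau)|: rippled, but never zero here.
spectrum = np.abs(np.fft.rfft(trace))
cepstrum = np.fft.irfft(np.log(spectrum))  # real cepstrum
lags = np.arange(cepstrum.size) / fs

window = (lags > 1.0) & (lags < 10.0)      # plausible pP-P delay range
tau_est = lags[window][np.argmax(np.abs(cepstrum[window]))]
print(f"estimated echo delay: {tau_est:.2f} s")
```

Given tau and a velocity model, the depth follows roughly as h of about v*tau/2 for a near-vertical takeoff, with v the P velocity above the source; distinguishing pP from sP delays, which this simple peak-picking cannot do, is exactly the misidentification issue the abstract discusses.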
NASA Astrophysics Data System (ADS)
Jethva, Hiren; Torres, Omar; Remer, Lorraine; Redemann, Jens; Livingston, John; Dunagan, Stephen; Shinozuka, Yohei; Kacenelenbogen, Meloe; Segal Rosenheimer, Michal; Spurr, Rob
2016-10-01
We present the validation analysis of above-cloud aerosol optical depth (ACAOD) retrieved from the "color ratio" method applied to MODIS cloudy-sky reflectance measurements using the limited direct measurements made by NASA's airborne Ames Airborne Tracking Sunphotometer (AATS) and Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR) sensors. A thorough search of the airborne database collection revealed a total of five significant events in which an airborne sun photometer, coincident with the MODIS overpass, observed partially absorbing aerosols emitted from agricultural biomass burning, dust, and wildfires over a low-level cloud deck during SAFARI-2000, ACE-ASIA 2001, and SEAC4RS 2013 campaigns, respectively. The co-located satellite-airborne matchups revealed a good agreement (root-mean-square difference < 0.1), with most matchups falling within the estimated uncertainties associated with the MODIS retrievals (about -10 to +50 %). The co-retrieved cloud optical depth was comparable to that of the MODIS operational cloud product for ACE-ASIA and SEAC4RS, however, higher by 30-50 % for the SAFARI-2000 case study. The reason for this discrepancy could be attributed to the distinct aerosol optical properties encountered during respective campaigns. A brief discussion on the sources of uncertainty in the satellite-based ACAOD retrieval and co-location procedure is presented. Field experiments dedicated to making direct measurements of aerosols above cloud are needed for the extensive validation of satellite-based retrievals.
NASA Technical Reports Server (NTRS)
Jethva, Hiren; Torres, Omar; Remer, Lorraine; Redemann, Jens; Livingston, John; Dunagan, Stephen; Shinozuka, Yohei; Kacenelenbogen, Meloe; Segal Rozenhaimer, Michal; Spurr, Rob
2016-01-01
We present the validation analysis of above-cloud aerosol optical depth (ACAOD) retrieved from the color ratio method applied to MODIS cloudy-sky reflectance measurements using the limited direct measurements made by NASA's airborne Ames Airborne Tracking Sunphotometer (AATS) and Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR) sensors. A thorough search of the airborne database collection revealed a total of five significant events in which an airborne sun photometer, coincident with the MODIS overpass, observed partially absorbing aerosols emitted from agricultural biomass burning, dust, and wildfires over a low-level cloud deck during SAFARI-2000, ACE-ASIA 2001, and SEAC4RS 2013 campaigns, respectively. The co-located satellite-airborne matchups revealed a good agreement (root-mean-square difference less than 0.1), with most matchups falling within the estimated uncertainties associated with the MODIS retrievals (about -10 to +50%). The co-retrieved cloud optical depth was comparable to that of the MODIS operational cloud product for ACE-ASIA and SEAC4RS, however, higher by 30-50% for the SAFARI-2000 case study. The reason for this discrepancy could be attributed to the distinct aerosol optical properties encountered during respective campaigns. A brief discussion on the sources of uncertainty in the satellite-based ACAOD retrieval and co-location procedure is presented. Field experiments dedicated to making direct measurements of aerosols above cloud are needed for the extensive validation of satellite-based retrievals.
Ciceri, E; Recchia, S; Dossi, C; Yang, L; Sturgeon, R E
2008-01-15
The development and validation of a method for the determination of mercury in sediments using a sector field inductively coupled plasma mass spectrometer (SF-ICP-MS) for detection is described. The utilization of isotope dilution (ID) calibration is shown to solve analytical problems related to matrix composition. Mass bias is corrected using an internal mass bias correction technique, validated against the traditional standard bracketing method. The overall analytical protocol is validated against NRCC PACS-2 marine sediment CRM. The estimated limit of detection is 12 ng/g. The proposed procedure was applied to the analysis of a real sediment core sampled to a depth of 160 m in Lake Como, where Hg concentrations ranged from 66 to 750 ng/g.
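The isotope dilution calibration described above rests on a standard blend equation. The sketch below is illustrative only: the isotope abundances, masses, and concentrations are invented, not the Hg/PACS-2 values from the study. It shows how a sample concentration is recovered from the isotope ratio measured in a sample-spike blend.

```python
def id_concentration(c_spike, m_spike, m_sample,
                     fa_spike, fb_spike, fa_sample, fb_sample, r_mix):
    """Isotope dilution: analyte concentration in the sample from the
    measured blend ratio r_mix = (isotope b)/(isotope a).
    fa_*/fb_* are abundance fractions of isotopes a and b."""
    r_spike = fb_spike / fa_spike
    r_sample = fb_sample / fa_sample
    return (c_spike * (m_spike / m_sample) * (fa_spike / fa_sample)
            * (r_spike - r_mix) / (r_mix - r_sample))

# forward simulation with hypothetical numbers: a spike enriched in isotope a
c_x, m_x = 5.0, 1.0          # true sample concentration (amount/g) and mass (g)
c_sp, m_sp = 2.0, 1.0        # spike concentration and mass
fa_x, fb_x = 0.10, 0.90      # natural-like sample abundances
fa_sp, fb_sp = 0.95, 0.05    # enriched spike abundances

# isotope amounts in the blend, and the ratio a mass spectrometer would see
a_tot = c_x * m_x * fa_x + c_sp * m_sp * fa_sp
b_tot = c_x * m_x * fb_x + c_sp * m_sp * fb_sp
r_mix = b_tot / a_tot

recovered = id_concentration(c_sp, m_sp, m_x, fa_sp, fb_sp, fa_x, fb_x, r_mix)
```

The round trip recovers the assumed sample concentration exactly, which is the appeal of ID calibration: the result depends on a ratio measurement rather than on absolute signal intensity, so matrix effects largely cancel.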
Assessment of vertical excursions and open-sea psychological performance at depths to 250 fsw.
Miller, J W; Bachrach, A J; Walsh, J M
1976-12-01
A series of 10 two-man descending vertical excursion dives was carried out in the open sea from an ocean-floor habitat off the coast of Puerto Rico by four aquanauts saturated on a normoxic-nitrogen breathing mixture at a depth of 106 fsw. The purpose of these dives was two-fold: to validate laboratory findings with respect to decompression schedules and to determine whether such excursions would produce evidence of adaptation to nitrogen narcosis. For the latter, tests designed to measure time estimation, short-term memory, and auditory vigilance were used. The validation of the experimental excursion tables was carried out without incidence of decompression sickness. Although no signs of nitrogen narcosis were noted during testing, all subjects made significantly longer time estimates in the habitat and during the excursions than on the surface. Variability and incomplete data prevented a statistical analysis of the short-term memory results, and the auditory vigilance test proved unusable in the water.
Czuba, J.A.; Oberg, K.
2008-01-01
Previous work by Oberg and Mueller of the U.S. Geological Survey in 2007 concluded that exposure time (total time spent sampling the flow) is a critical factor in reducing measurement uncertainty. In a subsequent paper, Oberg and Mueller validated these conclusions using one set of data to show that the effect of exposure time on the uncertainty of the measured discharge is independent of stream width, depth, and range of boat speeds. Analysis of eight StreamPro acoustic Doppler current profiler (ADCP) measurements indicates that they fall within and show a similar trend to the Rio Grande ADCP data previously reported. Four special validation measurements were made for the purpose of verifying the conclusions of Oberg and Mueller regarding exposure time for Rio Grande and StreamPro ADCPs. Analysis of these measurements confirms that exposure time is a critical factor in reducing measurement uncertainty and is independent of stream width, depth, and range of boat speeds. Furthermore, it appears that the relation between measured discharge uncertainty and exposure time is similar for both Rio Grande and StreamPro ADCPs. These results are applicable to ADCPs that make use of broadband technology using bottom-tracking to obtain the boat velocity. Based on this work, a minimum of two transects should be collected with an exposure time for all transects greater than or equal to 720 seconds in order to achieve an uncertainty of ±5 percent when using bottom-tracking ADCPs. © 2008 IEEE.
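The exposure-time effect can be illustrated with a toy simulation. The model here, independent Gaussian noise on one-second discharge samples, is an assumption and not the real ADCP error structure, but it shows why a longer exposure time averages down random error roughly as 1/sqrt(T).

```python
import random
import statistics

def simulated_discharge(true_q, noise_sd, exposure_s, rng):
    # one measured discharge = mean of one-second noisy samples
    # collected over the exposure time
    return statistics.fmean(rng.gauss(true_q, noise_sd)
                            for _ in range(exposure_s))

rng = random.Random(42)
# repeat each measurement 500 times to see its scatter
short = [simulated_discharge(100.0, 20.0, 60, rng) for _ in range(500)]
long_ = [simulated_discharge(100.0, 20.0, 720, rng) for _ in range(500)]

sd_short = statistics.stdev(short)   # scatter of 60 s measurements
sd_long = statistics.stdev(long_)    # scatter of 720 s measurements
```

Under this independence assumption the 720 s measurements scatter about sqrt(720/60) ≈ 3.5 times less than the 60 s ones; real transect data carry correlated errors, so the paper's empirical 720 s guidance is the safer benchmark.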
Olson, Scott A.; Song, Donald L.
1996-01-01
Contraction scour for all modelled flows ranged from 0.0 to 0.8 ft. Abutment scour ranged from 6.6 to 14.9 ft, with the worst-case scenario occurring at the 500-year discharge. Additional information on scour depths and depths to armoring is included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1993, p. 48). Many factors, including historical performance during flood events, the geomorphic assessment, scour protection measures, and the results of the hydraulic analyses, must be considered to properly assess the validity of abutment scour results. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein, based on the consideration of additional contributing factors and experienced engineering judgement.
NASA Astrophysics Data System (ADS)
Castillo-López, Elena; Dominguez, Jose Antonio; Pereda, Raúl; de Luis, Julio Manuel; Pérez, Ruben; Piña, Felipe
2017-10-01
Accurate determination of water depth is indispensable in multiple aspects of civil engineering (dock construction, dikes, submarine outfalls, trench control, etc.). Accuracy in bathymetric information is highly dependent on the atmospheric correction made to the imagery, and different accuracies are required to determine the type of atmospheric correction most appropriate for depth estimation. The reduction of effects such as glint and cross-track illumination in homogeneous shallow-water areas improves the results of the depth estimations. The aim of this work is to assess the best atmospheric correction method for the estimation of depth in shallow waters, considering that reflectance values cannot be greater than 1.5 % because otherwise the background would not be seen. This paper addresses the use of hyperspectral imagery for quantitative bathymetric mapping and explores one of the most common problems when attempting to extract depth information in conditions of variable water types and bottom reflectances. The current work assesses the accuracy of some classical bathymetric algorithms (Polcyn-Lyzenga, Philpot, Benny-Dawson, Hamilton, principal component analysis) when four different atmospheric correction methods are applied and water depth is derived. No atmospheric correction is valid for all types of coastal waters, but in heterogeneous shallow water the 6S atmospheric correction model offers good results.
Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R
2018-05-21
Three dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system in imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r² = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable - low typical error (0.192 L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
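The repeatability statistics quoted above (typical error, technical error of measurement) follow standard test-retest definitions. A minimal sketch, using invented repeat volume measurements rather than the study's data:

```python
import math
import statistics

def typical_error(trial1, trial2):
    # typical error = SD of test-retest differences / sqrt(2)
    diffs = [b - a for a, b in zip(trial1, trial2)]
    return statistics.stdev(diffs) / math.sqrt(2)

def tem_percent(trial1, trial2):
    # technical error of measurement, expressed as % of the grand mean
    n = len(trial1)
    tem = math.sqrt(sum((b - a) ** 2 for a, b in zip(trial1, trial2)) / (2 * n))
    return 100.0 * tem / statistics.fmean(trial1 + trial2)

# hypothetical mid-trunk volumes (L) from two repeat scans of four participants
scan1 = [30.1, 28.4, 35.0, 31.2]
scan2 = [30.3, 28.2, 35.4, 31.0]

te = typical_error(scan1, scan2)
tem_pct = tem_percent(scan1, scan2)
```

With these invented scans the typical error is about 0.21 L and the %TEM about 0.6%, i.e. the same order as the values reported in the abstract.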
NASA Astrophysics Data System (ADS)
Cronin, Nigel J.; Clegg, Peter J.
2005-04-01
Microwave Endometrial Ablation (MEA) is a technique that can be used for the treatment of abnormal uterine bleeding. The procedure involves sweeping a specially designed microwave applicator throughout the uterine cavity to achieve an ideally uniform depth of tissue necrosis of between 5 and 6 mm. We have performed a computer analysis of the MEA procedure in which finite element analysis was used to determine the SAR pattern around the applicator. This was followed by a Green's function based solution of the bioheat equation to determine the resulting induced temperatures. The method developed is applicable to situations involving a moving microwave source, as used in MEA. The validity of the simulation was verified by measurements in a tissue phantom material using a purpose-built applicator and a calibrated pulling device. From the calculated temperatures the depth of necrosis was assessed through integration of the resulting rates of cell death estimated using the Arrhenius equation. The Arrhenius parameters used were derived from published data on BHK cells. Good agreement was seen between the calculated depths of cell necrosis and those found in human in vivo testing.
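The Arrhenius damage integral mentioned above accumulates cell death over a temperature history. A sketch follows; the rate parameters used here are commonly quoted protein-denaturation values, chosen for illustration, and are not the BHK-cell parameters derived in the paper.

```python
import math

R_GAS = 8.314  # J/(mol*K)

def arrhenius_damage(times_s, temps_c, a=7.39e39, ea=2.577e5):
    """Thermal damage integral Omega = int A*exp(-Ea/(R*T)) dt (trapezoidal).
    Omega >= 1 is conventionally taken as the necrosis threshold."""
    rates = [a * math.exp(-ea / (R_GAS * (t + 273.15))) for t in temps_c]
    omega = 0.0
    for i in range(1, len(times_s)):
        omega += 0.5 * (rates[i] + rates[i - 1]) * (times_s[i] - times_s[i - 1])
    return omega

t = list(range(11))                      # 10 s exposure, 1 s steps
hot = arrhenius_damage(t, [60.0] * 11)   # tissue held at 60 C
body = arrhenius_damage(t, [37.0] * 11)  # body temperature
```

The strong temperature sensitivity is the point: ten seconds at 60°C exceeds the Omega = 1 threshold, while the same interval at 37°C accumulates negligible damage, which is how a sharp necrosis boundary emerges from a smooth temperature field.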
Parametric Study of Shear Strength of Concrete Beams Reinforced with FRP Bars
NASA Astrophysics Data System (ADS)
Thomas, Job; Ramadass, S.
2016-09-01
Fibre Reinforced Polymer (FRP) bars have been widely used as internal reinforcement in structural elements over the last decade. The corrosion resistance of FRP bars qualifies them for use in severe and marine exposure conditions. A total of eight concrete beams longitudinally reinforced with FRP bars were cast and tested at shear span to depth ratios of 0.5 and 1.75. Shear strength test data for 188 beams published in the literature were also used. The model originally proposed by the Indian Standard code of practice for the prediction of shear strength of concrete beams reinforced with steel bars, IS:456 (Plain and reinforced concrete, code of practice, fourth revision. Bureau of Indian Standards, New Delhi, 2000), is considered, and a modification to account for the influence of FRP bars is proposed based on regression analysis. Of the 196 test data, 110 are used for the regression analysis and 86 for the validation of the model. In addition, the shear strength of the 86 validation data is assessed using eleven models proposed by various researchers. The proposed model accounts for the compressive strength of concrete (f_ck), the modulus of elasticity of the FRP rebar (E_f), the longitudinal reinforcement ratio (ρ_f), the shear span to depth ratio (a/d) and the size effect of beams. The shear strength predicted by the proposed model and by the 11 models proposed by other researchers is compared with the corresponding experimental results. The mean ratio of predicted to experimental shear strength for the 86 beams used for validation of the proposed model is found to be 0.93. The statistical analysis indicates that predictions based on the proposed model corroborate the corresponding experimental data.
Funane, Tsukasa; Sato, Hiroki; Yahata, Noriaki; Takizawa, Ryu; Nishimura, Yukika; Kinoshita, Akihide; Katura, Takusige; Atsumori, Hirokazu; Fukuda, Masato; Kasai, Kiyoto; Koizumi, Hideaki; Kiguchi, Masashi
2015-01-01
It has been reported that a functional near-infrared spectroscopy (fNIRS) signal can be contaminated by extracerebral contributions. Many algorithms using multidistance separations to address this issue have been proposed, but their spatial separation performance has rarely been validated with simultaneous measurements of fNIRS and functional magnetic resonance imaging (fMRI). We previously proposed a method for discriminating between deep and shallow contributions in fNIRS signals, referred to as the multidistance independent component analysis (MD-ICA) method. In this study, to validate the MD-ICA method from the spatial aspect, multidistance fNIRS, fMRI, and laser-Doppler-flowmetry signals were simultaneously obtained for 12 healthy adult males during three tasks. The fNIRS signal was separated into deep and shallow signals by using the MD-ICA method, and the correlation between the waveforms of the separated fNIRS signals and the gray matter blood oxygenation level-dependent signals was analyzed. A three-way analysis of variance (signal depth × Hb kind × task) indicated that the main effect of fNIRS signal depth on the correlation is significant [F(1,1286) = 5.34, p < 0.05]. This result indicates that the MD-ICA method successfully separates fNIRS signals into spatially deep and shallow signals, and the accuracy and reliability of the fNIRS signal will be improved with the method. PMID:26157983
Detection of collaborative activity with Kinect depth cameras.
Sevrin, Loic; Noury, Norbert; Abouchi, Nacer; Jumel, Fabrice; Massot, Bertrand; Saraydaryan, Jacques
2016-08-01
The health status of elderly subjects is highly correlated with their activities and social interactions. Long-term in-home monitoring of their health status should therefore also address the analysis of collaborative activities. This paper proposes a preliminary approach to such a system, which can detect the simultaneous presence of several subjects in a common area using Kinect depth cameras. Since most areas in the home are dedicated to specific tasks, localization enables the classification of tasks as collaborative or not. A scenario of a 24-hour day compressed into 24 minutes was used to validate our approach. It highlighted the need for artifact removal to reach high specificity and good sensitivity.
van de Geijn, J; Fraass, B A
1984-01-01
The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.
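The definition in the abstract, FDD corrected for inverse square law, can be written down directly. The SSD and reference depth below are illustrative values, and the exact normalization the authors use may differ from this sketch.

```python
def net_fractional_depth_dose(fdd, depth_cm, ssd_cm, d_ref_cm):
    # divide out the inverse-square divergence between the reference
    # depth and the measurement depth, leaving attenuation and scatter
    return fdd * ((ssd_cm + depth_cm) / (ssd_cm + d_ref_cm)) ** 2

# sanity check: a purely diverging "beam" with no attenuation has an
# FDD set entirely by inverse square, so its NFD should be flat (100%)
ssd, d_ref = 100.0, 1.5
fdd_10cm = ((ssd + d_ref) / (ssd + 10.0)) ** 2 * 100.0
nfd_10cm = net_fractional_depth_dose(fdd_10cm, 10.0, ssd, d_ref)
```

Removing the purely geometric falloff is what makes the remaining depth dependence simple enough to capture with a seven-parameter analytical model, and it is also why the same quantity can regenerate TAR, TMR, and TPR tables.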
Net fractional depth dose: a basis for a unified analytical description of FDD, TAR, TMR, and TPR
DOE Office of Scientific and Technical Information (OSTI.GOV)
van de Geijn, J.; Fraass, B.A.
NASA Technical Reports Server (NTRS)
Kent, G. S.; Mccormick, M. P.; Wang, P.-H.
1994-01-01
The Stratospheric Aerosol Measurement 2, Stratospheric Aerosol and Gas Experiment (SAGE) 1, and SAGE 2 series of solar occultation satellite instruments were designed for the study of stratospheric aerosols and gases and have been extensively validated in the stratosphere. They are also capable, under cloud-free conditions, of measuring the extinction due to aerosols in the troposphere. Such tropospheric extinction measurements have yet to be validated by appropriate lidar and in situ techniques. In this paper published atmospheric aerosol optical depth measurements, made from high-altitude observatories during volcanically quiet periods, have been compared with optical depths calculated from local SAGE 1 and SAGE 2 extinction profiles. Surface measurements from three such observatories have been used, one located in Hawaii and two within the continental United States. Data have been intercompared on a seasonal basis at wavelengths between 0.5 and 1.0 micron and found to agree within the range of measurement errors and expected atmospheric variation. The mean rms difference between the optical depths for corresponding satellite and surface measured data sets is 29%, and the mean ratio of the optical depths is 1.09.
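Converting a satellite extinction profile to an optical depth comparable with a surface sun-photometer reading is a column integral above the observatory altitude. A minimal sketch with an invented, constant extinction profile:

```python
def column_optical_depth(alts_km, ext_per_km):
    # trapezoidal integration of an extinction profile to column optical depth
    tau = 0.0
    for i in range(1, len(alts_km)):
        tau += 0.5 * (ext_per_km[i] + ext_per_km[i - 1]) \
               * (alts_km[i] - alts_km[i - 1])
    return tau

# hypothetical profile above a 3 km observatory: constant 0.01 /km up to 13 km
alts = [3, 5, 7, 9, 11, 13]
ext = [0.01] * 6
tau = column_optical_depth(alts, ext)
```

A constant 0.01 /km over a 10 km column gives an optical depth of 0.1; real SAGE profiles decrease with altitude and must be terminated or extrapolated where clouds block the occultation, which is one source of the differences discussed in the paper.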
CREST-SAFE: Snow LST validation, wetness profiler creation, and depth/SWE product development
NASA Astrophysics Data System (ADS)
Perez Diaz, C. L.; Lakhankar, T.; Romanov, P.; Khanbilvardi, R.; Munoz Barreto, J.; Yu, Y.
2017-12-01
The Field Snow Research Station (also referred to as Snow Analysis and Field Experiment, SAFE) is operated by the NOAA Center for Earth System Sciences and Remote Sensing Technologies (CREST) at the City University of New York (CUNY). The field station is located within the premises of the Caribou Municipal Airport (46°52'59'' N, 68°01'07'' W), in close proximity to the National Weather Service (NWS) Regional Forecast Office. The station was established in 2010 to support studies in snow physics and snow remote sensing. The Visible Infrared Imager Radiometer Suite (VIIRS) Land Surface Temperature (LST) Environmental Data Record (EDR) and the Moderate Resolution Imaging Spectroradiometer (MODIS) LST product (provided by the Terra and Aqua Earth Observing System satellites) were validated using in situ LST (T-skin) and near-surface air temperature (T-air) observations recorded at CREST-SAFE for the winters of 2013 and 2014. Results indicate that T-air correlates better than T-skin with VIIRS LST data and that the accuracy of nighttime LST retrievals is considerably better than that of daytime. Several trends in the MODIS LST data were observed, including the underestimation of daytime and nighttime values. Although all the data sets showed high correlation with ground measurements, day values yielded slightly higher accuracy (~1°C). Additionally, we created a liquid water content (LWC)-profiling instrument using time-domain reflectometry (TDR) at CREST-SAFE and tested it during the snowmelt period (February-April) immediately after installation in 2014. Results displayed high agreement with LWC estimates obtained using empirical formulas developed in previous studies, and minor improvement over wet-snow LWC estimates.
Lastly, to improve global snow cover mapping, a snow product capable of estimating snow depth and snow water equivalent (SWE) using microwave remote sensing and the CREST Snow Depth Regression Tree Model (SDRTM) was developed. Data from AMSR2 onboard the JAXA GCOM-W1 satellite are used to produce daily global snow depth and SWE maps in an automated fashion at a 10-km resolution.
Improving Focal Depth Estimates: Studies of Depth Phase Detection at Regional Distances
NASA Astrophysics Data System (ADS)
Stroujkova, A.; Reiter, D. T.; Shumway, R. H.
2006-12-01
The accurate estimation of the depth of small, regionally recorded events continues to be an important and difficult explosion monitoring research problem. Depth phases (free surface reflections) are the primary tool that seismologists use to constrain the depth of a seismic event. When depth phases from an event are detected, an accurate source depth is easily found by using the delay times of the depth phases relative to the P wave and a velocity profile near the source. Cepstral techniques, including cepstral F-statistics, represent a class of methods designed for depth-phase detection and identification; however, they offer only a moderate level of success at epicentral distances less than 15°. This is due to complexities in the Pn coda, which can lead to numerous false detections in addition to the true phase detection. Therefore, cepstral methods cannot be used independently to reliably identify depth phases. Other evidence, such as apparent velocities, amplitudes and frequency content, must be used to confirm whether the phase is truly a depth phase. In this study we used a variety of array methods to estimate apparent phase velocities and arrival azimuths, including beam-forming, semblance analysis, MUltiple SIgnal Classification (MUSIC) (e.g., Schmidt, 1979), and cross-correlation (e.g., Cansi, 1995; Tibuleac and Herrin, 1997). To facilitate the processing and comparison of results, we developed a MATLAB-based processing tool, which allows application of all of these techniques (i.e., augmented cepstral processing) in a single environment. The main objective of this research was to combine the results of three focal-depth estimation techniques and their associated standard errors into a statistically valid unified depth estimate. The three techniques include: 1. Direct focal depth estimate from the depth-phase arrival times picked via augmented cepstral processing. 2. Hypocenter location from direct and surface-reflected arrivals observed on sparse networks of regional stations using a Grid-search, Multiple-Event Location method (GMEL; Rodi and Toksöz, 2000; 2001). 3. Surface-wave dispersion inversion for event depth and focal mechanism (Herrmann and Ammon, 2002). To validate our approach and provide quality control for our solutions, we applied the techniques to moderate-sized events (mb between 4.5 and 6.0) with known focal mechanisms. We illustrate the techniques using events observed at regional distances from the KSAR (Wonju, South Korea) teleseismic array and other nearby broadband three-component stations. Our results indicate that the techniques can produce excellent agreement between the various depth estimates. In addition, combining the techniques into a "unified" estimate greatly reduced location errors and improved the robustness of the solution, even when results from the individual methods yielded large standard errors.
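The reason cepstral methods can flag depth phases is that an echo delayed by τ imprints a ripple on the log spectrum whose cepstral peak sits at quefrency τ. A minimal synthetic illustration follows, using a white-noise source and a single scaled echo, which is far simpler than a real Pn coda and is not the F-statistic variant used in the study.

```python
import numpy as np

def real_cepstrum(x):
    # real cepstrum: inverse FFT of the log magnitude spectrum
    return np.fft.irfft(np.log(np.abs(np.fft.rfft(x)) + 1e-12))

rng = np.random.default_rng(0)
source = rng.standard_normal(1024)

delay = 37                          # echo lag in samples (stand-in for pP-P delay)
signal = source.copy()
signal[delay:] += 0.5 * source[:-delay]   # add the "depth phase" echo

cep = real_cepstrum(signal)
lag = int(np.argmax(cep[5:512]) + 5)      # skip low quefrencies, positive lags only
```

In this clean setting the cepstral peak lands exactly at the echo lag; with a real coda, scattered arrivals produce competing peaks, which is why the paper supplements the cepstrum with apparent velocity, azimuth, and amplitude evidence.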
NASA Astrophysics Data System (ADS)
Coopersmith, Evan J.; Cosh, Michael H.; Bell, Jesse E.; Boyles, Ryan
2016-12-01
Surface soil moisture is a critical parameter for understanding the energy flux at the land-atmosphere boundary. Weather modeling, climate prediction, and remote sensing validation are some of the applications for surface soil moisture information. The most common in situ measurements for these purposes come from sensors installed at depths of approximately 5 cm. There are, however, sensor technologies and network designs that do not provide an estimate at this depth. If soil moisture estimates at deeper depths could be extrapolated to the near surface, in situ networks providing estimates at other depths would see their value enhanced. Soil moisture sensors from the U.S. Climate Reference Network (USCRN) were used to generate models of 5 cm soil moisture, with 10 cm soil moisture measurements and antecedent precipitation as inputs, via machine learning techniques. Validation was conducted with the available in situ 5 cm resources. It was shown that a 5 cm estimate extrapolated from a 10 cm sensor and antecedent local precipitation produced a root-mean-squared error (RMSE) of 0.0215 m3/m3. Next, these machine-learning-generated 5 cm estimates were compared to AMSR-E estimates at these locations. These results were then compared with the performance of the actual in situ readings against the AMSR-E data. The machine learning estimates at 5 cm produced an RMSE of approximately 0.03 m3/m3 when an optimized gain and offset were applied. This is necessary considering the performance of AMSR-E in locations characterized by high vegetation water content, which is present across North Carolina. Lastly, this extrapolation technique is applied to the ECONet in North Carolina, which provides a 10 cm depth measurement as its shallowest soil moisture estimate. A raw RMSE of 0.028 m3/m3 was achieved, and with a linear gain and offset applied at each ECONet site, an RMSE of 0.013 m3/m3 was possible.
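The paper used machine-learning techniques; as a simplified stand-in, an ordinary least-squares fit on synthetic data illustrates the same extrapolation idea. The model form, coefficients, and noise levels below are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
sm10 = rng.uniform(0.05, 0.40, n)        # 10 cm soil moisture (m3/m3)
precip = rng.exponential(2.0, n)         # antecedent precipitation (mm), invented

# synthetic "truth": 5 cm moisture tracks 10 cm moisture plus a rain wetting term
sm5 = 0.02 + 0.85 * sm10 + 0.004 * precip + rng.normal(0, 0.01, n)

# fit sm5 ~ b0 + b1*sm10 + b2*precip by least squares
X = np.column_stack([np.ones(n), sm10, precip])
coef, *_ = np.linalg.lstsq(X, sm5, rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - sm5) ** 2)))
```

With the assumed 0.01 m3/m3 noise the fitted RMSE sits near 0.01, the same order as the 0.0215 m3/m3 the study reports against independent 5 cm sensors; the gap in practice comes from real soils departing from any simple functional relationship.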
On the accuracy potential of focused plenoptic camera range determination in long distance operation
NASA Astrophysics Data System (ADS)
Sardemann, Hannes; Maas, Hans-Gerd
2016-04-01
Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted their development and led to several commercially available ready-to-use cameras. Beyond the popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single camera imagery represents a very beneficial option for certain applications. The paper first presents some fundamentals on the design and history of plenoptic cameras and describes depth determination from plenoptic camera image data. It then presents an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close range applications, we focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors on the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for the application fields of real-time robotics like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
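The deterioration of accuracy with range follows from first-order triangulation error propagation, dz ≈ z²·dd/(b·f); plenoptic cameras are often analyzed with the same model using a small effective baseline across the micro-lens array. The baseline, focal length, and disparity error below are hypothetical, not the paper's calibration values.

```python
def depth_error_m(z_m, baseline_m, focal_px, disparity_err_px):
    # first-order propagation of a disparity error through triangulation:
    # dz = z^2 * dd / (b * f)
    return z_m ** 2 * disparity_err_px / (baseline_m * focal_px)

# hypothetical sensor: 10 cm effective baseline, 1000 px focal length,
# 0.1 px disparity uncertainty
err_30 = depth_error_m(30.0, 0.1, 1000.0, 0.1)
err_100 = depth_error_m(100.0, 0.1, 1000.0, 0.1)
```

The quadratic growth means tripling the range roughly multiplies the depth error by nine, which is consistent with the paper finding meter-scale errors at 30-100 m for a single compact camera.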
Phakthongsuk, Pitchaya
2009-04-01
To test the construct validity of the Thai version of the job content questionnaire (TJCQ), the present descriptive study recruited 10,415 participants from all occupations according to the International Standard Classification of Occupations. The instrument consisted of the 48-item job content questionnaire. Eight items newly developed by the authors from in-depth interviews were added. Exploratory factor analysis showed six factor models of work hazards, decision latitude, psychological demand, social support, physical demand, and job security. However, supervisor and co-worker support were not distinguished into two factors and some items distributed differently along the factors extracted. Confirmatory factor analysis supported the construct of six latent factors, although the overall fit was moderately acceptable. Cronbach's alpha coefficients higher than 0.7 supported the internal consistency of TJCQ scales except for job security (0.55). These findings suggest that TJCQ is valid and reliable for assessing job stress among Thai populations.
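Cronbach's alpha, the internal-consistency statistic reported above, is straightforward to compute from item variances and the variance of the summed scale. The item scores below are invented for illustration.

```python
import statistics

def cronbach_alpha(items):
    # items: one inner list of respondent scores per questionnaire item
    k = len(items)
    item_var_sum = sum(statistics.variance(scores) for scores in items)
    total_var = statistics.variance([sum(resp) for resp in zip(*items)])
    return k / (k - 1) * (1 - item_var_sum / total_var)

# two correlated (but not identical) items answered by four respondents
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 1, 4, 3]])
```

Perfectly parallel items drive alpha to 1, while weakly correlated items pull it down; that is why the 0.55 reported for the job security scale signals weaker internal consistency than the other scales, which exceeded 0.7.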
NASA Astrophysics Data System (ADS)
Chen, Jinlei; Wen, Jun; Tian, Hui
2016-02-01
Soil moisture plays an increasingly important role in the cycle of energy-water exchange, climate change, and hydrologic processes. It is usually measured at a point site, but regional soil moisture is essential for validating remote sensing products and numerical modeling results. In the study reported in this paper, the minimal number of required sites (NRS) for establishing a research observational network and the representative single sites for regional soil moisture estimation are discussed using soil moisture data derived from the "Maqu soil moisture observational network" (101°40′-102°40′E, 33°30′-35°45′N), which is supported by the Chinese Academy of Sciences. Furthermore, the best up-scaling method suitable for this network has been studied by evaluating four commonly used up-scaling methods. The results showed that (1) under a given accuracy requirement R ⩾ 0.99, RMSD ⩽ 0.02 m3/m3, NRS at both 5 and 10 cm depth is 10. (2) Representativeness of the sites has been validated by time stability analysis (TSA), time sliding correlation analysis (TSCA) and optimal combination of sites (OCS). NST01 is the most representative site at 5 cm depth for the first two methods; NST07 and NST02 are the most representative sites at 10 cm depth. The optimum combination sites at 5 cm depth are NST01, NST02, and NST07. NST05, NST08, and NST13 are the best group at 10 cm depth. (3) Linear fitting, compared with the other three methods, is the best up-scaling method for all types of representative sites obtained above, and linear regression equations between a single site and regional soil moisture are established accordingly. The "single site" obtained by OCS has the greatest up-scaling effect, and TSCA takes the second place. (4) Linear fitting equations show good practicability in estimating the variation of regional soil moisture from July 3, 2013 to July 3, 2014, even when a large number of observed soil moisture data are missing.
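Time stability analysis, one of the site-selection methods above, ranks sites by their mean relative difference from the network spatial mean; the site closest to zero best represents the network. A sketch with an invented three-site, two-date network (not the Maqu data):

```python
import statistics

def mean_relative_difference(network):
    # network: {site: [soil moisture at each common time step]}
    sites = list(network)
    ntimes = len(next(iter(network.values())))
    spatial_mean = [statistics.fmean(network[s][t] for s in sites)
                    for t in range(ntimes)]
    return {s: statistics.fmean((network[s][t] - spatial_mean[t]) / spatial_mean[t]
                                for t in range(ntimes))
            for s in sites}

network = {"A": [0.20, 0.30], "B": [0.10, 0.20], "C": [0.30, 0.40]}
mrd = mean_relative_difference(network)
best = min(mrd, key=lambda s: abs(mrd[s]))   # most representative site
```

In practice the standard deviation of the relative differences is examined alongside the mean, so that a site is preferred only if it tracks the network mean consistently over time, not just on average.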
Snodgrass, Melinda R; Chung, Moon Y; Meadan, Hedda; Halle, James W
2018-03-01
Single-case research (SCR) has been a valuable methodology in special education research. Montrose Wolf (1978), an early pioneer in single-case methodology, coined the term "social validity" to refer to the social importance of the goals selected, the acceptability of procedures employed, and the effectiveness of the outcomes produced in applied investigations. Since 1978, many contributors to SCR have included social validity as a feature of their articles and several authors have examined the prevalence and role of social validity in SCR. We systematically reviewed all SCR published in six highly-ranked special education journals from 2005 to 2016 to establish the prevalence of social validity assessments and to evaluate their scientific rigor. We found relatively low, but stable prevalence with only 28 publications addressing all three factors of the social validity construct (i.e., goals, procedures, outcomes). We conducted an in-depth analysis of the scientific rigor of these 28 publications. Social validity remains an understudied construct in SCR, and the scientific rigor of social validity assessments is often lacking. Implications and future directions are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Moussavi, Mahsa S.; Abdalati, Waleed; Pope, Allen; Scambos, Ted; Tedesco, Marco; MacFerrin, Michael; Grigsby, Shane
2016-01-01
Supraglacial meltwater lakes on the western Greenland Ice Sheet (GrIS) are critical components of its surface hydrology and surface mass balance, and they also affect its ice dynamics. Estimates of lake volume, however, are limited by the availability of in situ measurements of water depth, which in turn also limits the assessment of remotely sensed lake depths. Given the logistical difficulty of collecting physical bathymetric measurements, methods relying upon in situ data are generally restricted to small areas, and thus their application to large-scale studies is difficult to validate. Here, we produce and validate spaceborne estimates of supraglacial lake volumes across a relatively large area (1250 km²) of west Greenland's ablation region using data acquired by the WorldView-2 (WV-2) sensor, making use of both its stereo-imaging capability and its meter-scale resolution. We employ spectrally derived depth retrieval models, which are based either on absolute reflectance (single-channel model) or on a ratio of spectral reflectances in two bands (dual-channel model). These models are calibrated using WV-2 multispectral imagery acquired early in the melt season and depth measurements from a high-resolution WV-2 DEM over the same lake basins when devoid of water. The calibrated models are then validated with different lakes in the area, for which we determined depths. Lake depth estimates based on measurements recorded in WV-2's blue (450-510 nm), green (510-580 nm), and red (630-690 nm) bands and dual-channel modes (blue/green, blue/red, and green/red band combinations) had near-zero bias, an average root-mean-squared deviation of 0.4 m (relative to post-drainage DEMs), and an average volumetric error of <1%. The approach outlined in this study - image-based calibration of depth-retrieval models - significantly improves spaceborne supraglacial bathymetry retrievals, which are completely independent from in situ measurements.
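A common dual-channel form is the Stumpf-style log-ratio model, where depth is regressed linearly on ln(n·R_blue)/ln(n·R_green); whether this matches the authors' exact model is an assumption, and the exponential reflectance-decay coefficients below are synthetic rather than taken from WV-2 data.

```python
import numpy as np

def band_ratio(blue, green, n=1000.0):
    # Stumpf-style ratio of log-transformed reflectances
    return np.log(n * blue) / np.log(n * green)

# synthetic "calibration lake": reflectance decays with depth in both bands,
# faster in green than in blue, with 1% multiplicative noise
rng = np.random.default_rng(3)
depth = rng.uniform(0.5, 8.0, 200)
blue = 0.12 * np.exp(-0.10 * depth) * (1 + rng.normal(0, 0.01, 200))
green = 0.10 * np.exp(-0.18 * depth) * (1 + rng.normal(0, 0.01, 200))

# calibrate the linear model depth ~ m1 * ratio + m0, as one would against
# pre-melt DEM depths, then check the in-sample fit
x = band_ratio(blue, green)
m1, m0 = np.polyfit(x, depth, 1)
pred = m1 * x + m0
rmse = float(np.sqrt(np.mean((pred - depth) ** 2)))
```

The ratio increases with depth because the green signal decays faster than the blue, and the two calibration constants absorb bottom albedo to first order; calibrating them against a dry-basin DEM, as the study does, is what removes the need for in situ depths.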
NASA Astrophysics Data System (ADS)
Bak, S.; Smith, J. M.; Hesser, T.; Bryant, M. A.
2016-12-01
Near-coast wave models are generally validated with relatively small data sets that focus on analytical solutions, specialized experiments, or intense storms. Prior studies have compiled testbeds that include a few dozen experiments or storms to validate models (e.g., Ris et al. 2002), but few examples exist that allow for continued model evaluation in the nearshore and surf zone in near-real-time. The limited nature of these validation sets is driven by a lack of high spatial and temporal resolution in-situ wave measurements and the difficulty of maintaining these instruments on the active profile over long periods of time. The US Army Corps of Engineers Field Research Facility (FRF) has initiated a Coastal Model Test-Bed (CMTB), an automated system that continually validates wave models (with morphological and circulation models to follow) utilizing the rich data set of oceanographic and bathymetric measurements collected at the FRF. The FRF's cross-shore wave array provides wave measurements along a cross-shore profile from 26 m of water depth to the shoreline, utilizing various instruments including wave-rider buoys, AWACs, Aquadopps, pressure gauges, and a dune-mounted lidar (Brodie et al. 2015). This work uses the CMTB to evaluate the performance of a phase-averaged numerical wave model, STWAVE (Smith 2007, Massey et al. 2011), over the course of a year at the FRF in Duck, NC. Additionally, for the BathyDuck Experiment in October 2015, the CMTB was used to determine the impact of applying the model's depth boundary condition from monthly acoustic bathymetric surveys in comparison to hourly estimates using a video-based inversion method (e.g., cBathy, Holman et al. 2013). The modeled wave parameters using both bathymetric boundary conditions are evaluated using the FRF's cross-shore wave array and two additional cross-shore arrays of wave measurements in 2 to 4 m water depth from BathyDuck in fall 2015.
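Continuous model evaluation in a test-bed like this reduces to bulk skill statistics between modeled and observed wave parameters. A minimal sketch with illustrative numbers (not CMTB output):

```python
import math

def wave_skill(modeled, observed):
    """Bias and root-mean-squared deviation between modeled and
    observed significant wave heights (m) - the kind of bulk
    statistics an automated test-bed reports per model run."""
    n = len(modeled)
    bias = sum(m - o for m, o in zip(modeled, observed)) / n
    rmsd = math.sqrt(sum((m - o) ** 2 for m, o in zip(modeled, observed)) / n)
    return bias, rmsd

hs_model = [1.2, 0.8, 2.1, 1.5]  # illustrative hourly Hs values
hs_obs   = [1.0, 0.9, 2.0, 1.6]
bias, rmsd = wave_skill(hs_model, hs_obs)
```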
Establishing a 'Physician's Spiritual Well-being Scale' and testing its reliability and validity.
Fang, C K; Li, P Y; Lai, M L; Lin, M H; Bridge, D T; Chen, H W
2011-01-01
The purpose of this study was to develop a Physician's Spiritual Well-Being Scale (PSpWBS). The significance of a physician's spiritual well-being was explored through in-depth interviews and qualitative data collection from focus groups. Based on the results of the qualitative analysis and related literature, a PSpWBS consisting of 25 questions was established. Reliability and validity tests were performed on 177 subjects. Four domains of the PSpWBS were devised: physician's characteristics; medical practice challenges; response to changes; and overall well-being. The total explained variance was 65.65%. Cronbach's α was 0.864 when the internal consistency of the whole scale was calculated. Factor analysis showed that the internal consistency Cronbach's α value for each factor was between 0.625 and 0.794, and the split-half reliability was 0.865. The scale has satisfactory reliability and validity and could serve as the basis for assessment of the spiritual well-being of a physician.
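The internal-consistency statistic reported here, Cronbach's α, can be computed directly from item scores. A minimal sketch with made-up data, not the study's:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a k-item scale.

    `items` is a list of k lists, one per item, each holding the
    scores of the same respondents in the same order.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Perfectly correlated items give α = 1; uncorrelated items drive it toward 0, which is why values above roughly 0.7, as reported for the PSpWBS, are taken as acceptable internal consistency.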
Franzen, Lutz; Anderski, Juliane; Windbergs, Maike
2015-09-01
For rational development and evaluation of dermal drug delivery, knowledge of the rate and extent of substance penetration into human skin is essential. However, current analytical procedures are destructive and labor-intensive and lack a defined spatial resolution. In this context, confocal Raman microscopy bears the potential to overcome current limitations in drug depth profiling. Confocal Raman microscopy has already proved its suitability for the acquisition of qualitative penetration profiles, but a comprehensive investigation regarding its suitability for quantitative measurements inside human skin is still missing. In this work, we present a systematic validation study to deploy confocal Raman microscopy for quantitative drug depth profiling in human skin. After validating our Raman microscopic setup, we successfully established an experimental procedure that allows correlating the Raman signal of a model drug with its controlled concentration in human skin. To overcome current drawbacks in drug depth profiling, we evaluated different modes of peak correlation for quantitative Raman measurements and offer a suitable operating procedure for quantitative drug depth profiling in human skin. In conclusion, we successfully demonstrate the potential of confocal Raman microscopy for quantitative drug depth profiling in human skin as a valuable alternative to destructive state-of-the-art techniques. Copyright © 2015 Elsevier B.V. All rights reserved.
Anderson, Ruth A.; Hsieh, Pi-Ching; Su, Hui Fang; Landerman, Lawrence R.; McDaniel, Reuben R.
2013-01-01
Objectives. To (1) describe participation in decision-making as a systems-level property of complex adaptive systems and (2) present empirical evidence of the reliability and validity of a corresponding measure. Method. Study 1 was a mail survey of a single respondent (administrators or directors of nursing) in each of 197 nursing homes. Study 2 was a field study using a random, proportionally stratified sampling procedure that included 195 organizations with 3,968 respondents. Analysis. In Study 1, we analyzed the data to reduce the number of scale items and establish initial reliability and validity. In Study 2, we strengthened the psychometric test using a large sample. Results. Results demonstrated the validity and reliability of the participation in decision-making instrument (PDMI) while measuring participation of workers in two distinct job categories (RNs and CNAs). We established reliability at the organizational level using aggregated item scores. We established validity of the multidimensional properties using convergent and discriminant validity and confirmatory factor analysis. Conclusions. Participation in decision making, when modeled as a systems-level property of an organization, has multiple dimensions and is more complex than is traditionally measured. Managers can use this model to form decision teams that maximize the depth and breadth of expertise needed and to foster connection among them. PMID:24349771
Anderson, Ruth A; Plowman, Donde; Corazzini, Kirsten; Hsieh, Pi-Ching; Su, Hui Fang; Landerman, Lawrence R; McDaniel, Reuben R
2013-01-01
Objectives. To (1) describe participation in decision-making as a systems-level property of complex adaptive systems and (2) present empirical evidence of the reliability and validity of a corresponding measure. Method. Study 1 was a mail survey of a single respondent (administrators or directors of nursing) in each of 197 nursing homes. Study 2 was a field study using a random, proportionally stratified sampling procedure that included 195 organizations with 3,968 respondents. Analysis. In Study 1, we analyzed the data to reduce the number of scale items and establish initial reliability and validity. In Study 2, we strengthened the psychometric test using a large sample. Results. Results demonstrated the validity and reliability of the participation in decision-making instrument (PDMI) while measuring participation of workers in two distinct job categories (RNs and CNAs). We established reliability at the organizational level using aggregated item scores. We established validity of the multidimensional properties using convergent and discriminant validity and confirmatory factor analysis. Conclusions. Participation in decision making, when modeled as a systems-level property of an organization, has multiple dimensions and is more complex than is traditionally measured. Managers can use this model to form decision teams that maximize the depth and breadth of expertise needed and to foster connection among them.
Bispectral Index Monitoring: validity and utility in pediatric dentistry.
Goyal, Ashima; Mittal, Neeti; Mittal, Parteek; Gauba, K
2014-01-01
Reliable and safe provision of sedation and general anesthesia depends on continuous vigilance of the patient's sedation depth. Failure to maintain this may result in unintended oversedation or undersedation. It is common practice to observe sedation depth by applying subjective sedation scales, and in the case of general anesthesia, the practitioner depends on vital sign assessment. The Bispectral Index System (BIS) is a recently introduced objective, quantitative, easy-to-use, and clinically useful tool for assessing sedation depth that is free from observer bias, and it precludes the need to stimulate the patient to assess his or her sedation level. The present article is an attempt to orient readers toward the utility and validity of BIS for sedation and general anesthesia in pediatric dentistry. In this article, we attempt to help readers understand the principle of BIS, its variation across the sedation continuum, and its validity across different age groups and for a variety of sedative drugs.
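The BIS condenses EEG activity into a 0-100 index, and the sedation continuum it tracks is commonly described in bands. A sketch of those commonly cited bands, purely as an illustration of how the index is read, not a clinical decision tool:

```python
def bis_category(bis):
    """Map a BIS value (0-100) to the commonly cited depth-of-hypnosis
    bands. Teaching illustration only; thresholds vary by source and
    this is not intended for clinical use."""
    if not 0 <= bis <= 100:
        raise ValueError("BIS is defined on 0-100")
    if bis >= 80:
        return "awake / light sedation"
    if bis >= 60:
        return "moderate sedation"
    if bis >= 40:
        return "general anesthesia"
    return "deep hypnotic state"

print(bis_category(50))  # general anesthesia
```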
NASA Astrophysics Data System (ADS)
DSouza, Alisha V.; Flynn, Brendan P.; Gunn, Jason R.; Samkoe, Kimberley S.; Anand, Sanjay; Maytin, Edward V.; Hasan, Tayyaba; Pogue, Brian W.
2014-03-01
Treatment monitoring of aminolevulinic acid (ALA) - photodynamic therapy (PDT) of basal cell carcinoma (BCC) calls for superficial and subsurface imaging techniques. While superficial imagers exist for this purpose, their ability to assess PpIX levels in thick lesions is poor; additionally, few treatment centers have the capability to measure ALA-induced PpIX production. An area of active research is to improve treatment of deeper and nodular BCCs, because treatment is least effective in these. The goal of this work was to understand the logistics and technical capabilities needed to quantify PpIX at depths over 1 mm, using a novel hybrid ultrasound-guided, fiber-based fluorescence molecular spectroscopic tomography system. This system utilizes a 633 nm excitation laser and detection using filtered spectrometers. Source and detection fibers are collinear so that their imaging plane matches that of the ultrasound transducer. Validation with phantoms and tumor-simulating fluorescent inclusions in mice showed sensitivity to fluorophore concentrations as low as 0.025 μg/ml at 4 mm depth from the surface, as presented in previous years. Image-guided quantification of ALA-induced PpIX production was completed in the subcutaneous xenograft epidermoid cancer tumor model A431 in nude mice. A total of 32 animals were imaged in vivo at several time points, including pre-ALA, 4 hours post-ALA, and 24 hours post-ALA administration. On average, PpIX production in tumors increased by over 10-fold at 4 hours post-ALA. Statistical analysis of PpIX fluorescence showed significant differences among all groups (p<0.05). Results were validated by ex vivo imaging of resected tumors. Details of imaging, analysis and results will be presented to illustrate variability and the potential for imaging these values at depth.
Wei, Wenli; Bai, Yu; Liu, Yuling
2016-01-01
This paper is concerned with the simulation and experimental study of hydraulic characteristics in a pilot Carrousel oxidation ditch for the optimization of the submerged depth ratio of surface aerators. The simulation was based on large eddy simulation with the Smagorinsky model, and the velocity was monitored in the ditches with an acoustic Doppler velocimeter. Comparisons of the simulated and experimental velocities show good agreement, which validates the accuracy of the simulation. The best submerged depth ratio of 2/3 for the surface aerators was obtained from the analysis of the flow field structure, the ratio of gas to liquid in the bottom layer of the ditch, the average velocity of the mixture, and the flow region with velocities that easily cause sludge deposition, under the four operating conditions with submerged depth ratios of 1/3, 1/2, 2/3 and 3/4. The research results can provide a reference for the design of Carrousel oxidation ditches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karthikeyan, N; Bharathiya University, Coimbatore, Tamilnadu; Ganesh, KM
Purpose: To validate the Monaco Monte Carlo beam model for a range of small fields in heterogeneous media. Methods: An in-house phantom with three different media of foam, PMMA and Delrin, resembling the densities of lung, soft tissue, and bone, was used for the study. Field sizes of 8, 16, 24, 32 and 48 mm were studied for the validation of the Monte Carlo algorithm using a 0.01 cc ion chamber and Gafchromic films. A 6 MV photon beam from an Elekta Beam Modulator was used with a 100 cm SAD setup. The outputs were measured at depths of 5, 10 and 20 mm in every second medium with 3 cm buildup of the first medium for the interfaces of lung-bone, lung-soft tissue, soft tissue-bone, bone-lung and soft tissue-lung. Similarly, 2D dose analysis with a gamma criterion of 2%/2 mm was done at the same depths using Gafchromic film. For all the measurements, 10.4 × 10.4 cm was taken as the reference to which the other field sizes were compared. Monaco TPS v3.20 was used to calculate the dose distribution for all the simulated measurement setups. Results: The average maximum differences among the field sizes of 8, 16, 24, 32 and 48 mm at the depth of 5 mm in the second medium with the interfaces of lung-bone, lung-soft tissue, soft tissue-bone, bone-lung and soft tissue-lung were 1.29±0.14%, 0.49±0.16%, 0.87±0.23%, 0.92±0.11% and 1.01±0.19%, respectively. The minimum and maximum variations of dose among different materials for the smallest field size of 8 mm were 0.23% and 1.67%, respectively. The 2D analysis showed an average gamma passing rate of 98.9±0.5%. The calculated two-tailed P-values showed insignificance, with values of 0.562 and 0.452 for the ion chamber and film measurements, respectively. Conclusion: The accuracy of dose calculation for small fields in the Monaco Monte Carlo TPS algorithm was validated in different inhomogeneous media, and the results correlated well with the measurement data.
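The 2%/2 mm gamma criterion used for the film analysis combines a dose-difference tolerance with a distance-to-agreement tolerance. A simple 1-D global gamma computation sketches the idea (an illustration, not the clinical software used in the study):

```python
import math

def gamma_pass_rate(ref, evl, spacing_mm=1.0, dd=0.02, dta_mm=2.0):
    """1-D global gamma analysis (2%/2 mm by default) between a
    reference dose profile and an evaluated one on the same grid.
    Returns the fraction of reference points with gamma <= 1."""
    d_max = max(ref)  # global normalization dose
    passed = 0
    for i, dr in enumerate(ref):
        # gamma at point i: minimum over all evaluated points of the
        # combined (distance, dose-difference) metric
        gamma = min(
            math.sqrt(((i - j) * spacing_mm / dta_mm) ** 2
                      + ((de - dr) / (dd * d_max)) ** 2)
            for j, de in enumerate(evl)
        )
        if gamma <= 1.0:
            passed += 1
    return passed / len(ref)
```

Identical profiles pass everywhere; a point fails only if no nearby evaluated dose lies within both tolerances, which is what makes gamma forgiving in steep gradients.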
Contrast Analysis for Side-Looking Sonar
2013-09-30
bound for shadow depth that can be used to validate modeling tools such as SWAT (Shallow Water Acoustics Toolkit). • Adaptive Postprocessing: Tune image...
Enhanced Missing Proteins Detection in NCI60 Cell Lines Using an Integrative Search Engine Approach.
Guruceaga, Elizabeth; Garin-Muga, Alba; Prieto, Gorka; Bejarano, Bartolomé; Marcilla, Miguel; Marín-Vicente, Consuelo; Perez-Riverol, Yasset; Casal, J Ignacio; Vizcaíno, Juan Antonio; Corrales, Fernando J; Segura, Victor
2017-12-01
The Human Proteome Project (HPP) aims to decipher the complete map of the human proteome. In the past few years, significant efforts of the HPP teams have been dedicated to the experimental detection of the missing proteins, which lack reliable mass spectrometry evidence of their existence. In this endeavor, an in-depth analysis of shotgun experiments might represent a valuable resource for selecting a biological matrix when designing validation experiments. In this work, we used all the proteomic experiments from the NCI60 cell lines and applied an integrative approach based on the results obtained from Comet, Mascot, OMSSA, and X!Tandem. This workflow benefits from the complementarity of these search engines to increase the proteome coverage. Five missing proteins compliant with C-HPP guidelines were identified, although further validation is needed. Moreover, 165 missing proteins were detected with only one unique peptide, and their functional analysis supported their participation in cellular pathways, as was also proposed in other studies. Finally, we performed a combined analysis of the gene expression levels and the proteomic identifications from the common cell lines between the NCI60 and the CCLE project to suggest alternatives for further validation of missing protein observations.
Enhanced Missing Proteins Detection in NCI60 Cell Lines Using an Integrative Search Engine Approach
2017-01-01
The Human Proteome Project (HPP) aims to decipher the complete map of the human proteome. In the past few years, significant efforts of the HPP teams have been dedicated to the experimental detection of the missing proteins, which lack reliable mass spectrometry evidence of their existence. In this endeavor, an in-depth analysis of shotgun experiments might represent a valuable resource for selecting a biological matrix when designing validation experiments. In this work, we used all the proteomic experiments from the NCI60 cell lines and applied an integrative approach based on the results obtained from Comet, Mascot, OMSSA, and X!Tandem. This workflow benefits from the complementarity of these search engines to increase the proteome coverage. Five missing proteins compliant with C-HPP guidelines were identified, although further validation is needed. Moreover, 165 missing proteins were detected with only one unique peptide, and their functional analysis supported their participation in cellular pathways, as was also proposed in other studies. Finally, we performed a combined analysis of the gene expression levels and the proteomic identifications from the common cell lines between the NCI60 and the CCLE project to suggest alternatives for further validation of missing protein observations. PMID:28960077
Mbinze, J K; Sacré, P-Y; Yemoa, A; Mavar Tayey Mbay, J; Habyalimana, V; Kalenda, N; Hubert, Ph; Marini, R D; Ziemons, E
2015-01-01
Poor-quality antimalarial drugs are one of the major public health problems in Africa. The depth of this problem may be explained in part by the lack of effective enforcement and the lack of efficient local drug analysis laboratories. To tackle part of this issue, two spectroscopic methods able to detect and quantify quinine dihydrochloride in children's oral drops formulations were developed and validated. Raman and near-infrared (NIR) spectroscopy were selected for the drug analysis due to their low cost and their non-destructive and rapid characteristics. Both methods were successfully validated using the total error approach in the range of 50-150% of the target concentration (20% w/v) within the 10% acceptance limits. Samples collected on the Congolese pharmaceutical market were analyzed by both techniques to detect potentially substandard drugs. After a comparison of the analytical performance of both methods, it was decided to implement the method based on NIR spectroscopy for the routine analysis of quinine oral drop samples in the Quality Control Laboratory of Drugs at the University of Kinshasa (DRC). Copyright © 2015 Elsevier B.V. All rights reserved.
Development and validation of a habitat suitability model for ...
We developed a spatially-explicit, flexible 3-parameter habitat suitability model that can be used to identify and predict areas at higher risk for non-native dwarf eelgrass (Zostera japonica) invasion. The model uses simple environmental parameters (depth, nearshore slope, and salinity) to quantitatively describe habitat suitable for Z. japonica invasion based on ecology and physiology from the primary literature. Habitat suitability is defined with values ranging from zero to one, where one denotes areas most conducive to Z. japonica and zero denotes areas not likely to support Z. japonica growth. The model was applied to Yaquina Bay, Oregon, USA, an area that has well documented Z. japonica expansion over the last two decades. The highest suitability values for Z. japonica occurred in the mid to upper portions of the intertidal zone, with larger expanses occurring in the lower estuary. While the upper estuary did contain suitable habitat, most areas were not as large as in the lower estuary, due to inappropriate depth, a steeply sloping intertidal zone, and lower salinity. The lowest suitability values occurred below the lower intertidal zone, within the Yaquina River channel. The model was validated by comparison to a multi-year time series of Z. japonica maps, revealing a strong predictive capacity. Sensitivity analysis performed to evaluate the contribution of each parameter to the model prediction revealed that depth was the most important factor.
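A 3-parameter suitability index of this kind is often built as a product of per-parameter suitability curves, each mapping an environmental value to [0, 1]. A sketch with illustrative breakpoints, not the published model's calibrated values:

```python
def z_japonica_suitability(depth_m, slope, salinity_psu):
    """Toy 3-parameter habitat suitability index in [0, 1].

    Each factor gets a triangular suitability curve (0 outside
    [lo, hi], 1 at the optimum) and the overall index is their
    product, so any single unsuitable factor drives the index to 0.
    Breakpoints below are illustrative placeholders.
    """
    def tri(x, lo, opt, hi):
        if x <= lo or x >= hi:
            return 0.0
        return (x - lo) / (opt - lo) if x < opt else (hi - x) / (hi - opt)

    s_depth = tri(depth_m, -0.5, 1.0, 2.5)      # mid/upper intertidal
    s_slope = tri(slope, -0.001, 0.01, 0.05)    # gentle nearshore slope
    s_salt  = tri(salinity_psu, 10.0, 28.0, 35.0)
    return s_depth * s_slope * s_salt
```

The multiplicative form matches the behavior described above: a steep slope or low salinity in the upper estuary zeroes out otherwise suitable depths.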
Development and validation of the pro-environmental behaviour scale for women's health.
Kim, HyunKyoung
2017-05-01
This study aimed to develop and test the Pro-Environmental Behaviour Scale for Women's Health. Women adopt sustainable behaviours and alter their lifestyles to protect the environment and their health from environmental pollution. The conceptual framework of pro-environmental behaviours was based on Rogers' protection motivation theory and Weinstein's precaution adoption process model. A cross-sectional design was used for instrument development. The instrument development process consisted of a literature review, personal in-depth interviews and focus group interviews. The sample comprised 356 adult women recruited in April-May 2012 in South Korea using quota sampling. For construct validity, exploratory factor analysis was conducted to examine the factor structure, after which convergent and discriminant validity and known-group comparisons were tested. Principal component analysis yielded 17 items with four factors, including 'women's health protection,' 'chemical exposure prevention,' 'alternative consumption,' and 'community-oriented behaviour'. The Cronbach's α was 0.81. Convergent and discriminant validity were supported by correlations with other environmental-health and health-behaviour measures. Nursing professionals can reliably use the instrument to assess women's behaviours that protect their health and the environment. © 2016 John Wiley & Sons Ltd.
Analysis and synthesis of (SAR) waveguide phased array antennas
NASA Astrophysics Data System (ADS)
Visser, H. J.
1994-02-01
This report describes work performed under ESA contract No. 101 34/93/NL/PB. It begins with a literature study on dual-polarized waveguide radiators, which resulted in the choice of the open-ended square waveguide. After a thorough description of the mode-matching infinite waveguide array analysis method - including finiteness effects - that forms the basis for all further described analysis and synthesis methods, the accuracy of the analysis software is validated by comparison with measurements on two realized antennas. These antennas have centered irises in the waveguide apertures and a dielectric wide-angle impedance matching sheet in front of the antenna. A synthesis method, using simulated annealing and downhill simplex, is described next, and different antenna designs, based on the analysis of a single element in an infinite array environment, are presented. Next, designs of subarrays are presented, showing the paramount importance of including the array environment in the design of a subarray. A microstrip patch waveguide exciter and a subarray feeding network are discussed, and the depth of the waveguide radiator is estimated. A rectangular grid array with waveguides of 2.5 cm depth, without irises and without a dielectric sheet, grouped in linear 8-element subarrays, was chosen.
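The synthesis step pairs simulated annealing (global search) with downhill simplex (local refinement). The annealing half can be sketched on a toy 1-D cost function; this is illustrative only and stands in for the report's array-excitation cost:

```python
import math
import random

def anneal(cost, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, seed=1):
    """Bare-bones simulated annealing over a single design variable.

    Candidate moves are random perturbations; worse moves are accepted
    with probability exp(-delta/T), and T decays geometrically, so the
    search transitions from exploration to greedy descent."""
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        delta = cost(cand) - cost(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
            if cost(x) < cost(best):
                best = x
        t *= cooling
    return best

# Toy cost with its global minimum at x = 2
best = anneal(lambda x: (x - 2.0) ** 2, x0=10.0)
```

In an antenna-synthesis setting the cost would instead score a computed pattern (sidelobe level, gain) against the specification, and the annealing result would seed the simplex refinement.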
Psychometric Testing of a Religious Belief Scale.
Chiang, Yi-Chien; Lee, Hsiang-Chun; Chu, Tsung-Lan; Han, Chin-Yen; Hsiao, Ya-Chu
2017-12-01
Nurses account for a significant percentage of staff in the healthcare system. The religious beliefs of nurses may affect their competence to provide spiritual care to patients. No reliable and valid instruments are currently available to measure the religious beliefs of nurses in Taiwan. The aims of this study were to develop a religious belief scale (RBS) for Taiwanese nurses and to evaluate the psychometric properties of this scale. A cross-sectional study design was used, and 24 RBS items were generated from in-depth interviews, a literature review, and expert recommendations. The RBS self-administered questionnaire was provided to 619 clinical nurses, who were recruited from two medical centers and one local hospital in Taiwan during 2011-2012. A calibration sample was used to explore the factor structure, whereas a validation sample was used to validate the factor structure constructed from the calibration sample. Known-group validity and criterion-related validity were also assessed. An exploratory factor analysis resulted in an 18-item RBS with four factors, including "religious effects," "divine," "religious query," and "religious stress." A confirmatory factor analysis recommended the deletion of one item, resulting in a final RBS of 17 items. The convergent validity and discriminant validity of the RBS were acceptable. The RBS correlated positively with spiritual health, supporting concurrent validity. Known-group validity was supported by showing that the difference in mean RBS between nurses with and without a religious affiliation was significant. The 17-item RBS developed in this study is a reliable, valid, and useful scale for measuring the religious beliefs of nurses in Taiwan. This scale may help measure the religious beliefs of nurses and clarify the relationship between these beliefs and spirituality.
Reiner, Iris; Beutel, Manfred; Skaletz, Christian; Brähler, Elmar; Stöbel-Richter, Yve
2012-01-01
Research on psychosocial influences such as relationship characteristics has received increased attention in the clinical as well as social-psychological field. Several studies demonstrated that the quality of relationships, in particular with respect to the perceived support within intimate relationships, profoundly affects individuals' mental and physical health. There is, however, a limited choice of valid and internationally known assessments of relationship quality in Germany. We report the validation of the German version of the Quality of Relationships Inventory (QRI). First, we evaluated its factor structure in a representative German sample of 1,494 participants by means of confirmatory factor analysis. Our findings support the previously proposed three-factor structure. Second, importance of and satisfaction with different relationship domains (family/children and relationship/sexuality) were linked with the QRI scales, demonstrating high construct validity. Finally, we report sex and age differences regarding perceived relationship support, conflict and depth in our German sample. In conclusion, the QRI is a reliable and valid measure for assessing social support in romantic relationships in the German population. PMID:22662151
Influence of Wind Pressure on the Carbonation of Concrete
Zou, Dujian; Liu, Tiejun; Du, Chengcheng; Teng, Jun
2015-01-01
Carbonation is one of the major deteriorations that accelerate steel corrosion in reinforced concrete structures. Many mathematical/numerical models of the carbonation process, primarily diffusion-reaction models, have been established to predict the carbonation depth. However, the mass transfer of carbon dioxide in porous concrete includes molecular diffusion and convection mass transfer. In particular, the convection mass transfer induced by pressure difference is called penetration mass transfer. This paper presents the influence of penetration mass transfer on the carbonation. A penetration-reaction carbonation model was constructed and validated by accelerated test results under high pressure. Then the characteristics of wind pressure on the carbonation were investigated through finite element analysis considering steady and fluctuating wind flows. The results indicate that the wind pressure on the surface of concrete buildings results in deeper carbonation depth than that just considering the diffusion of carbon dioxide. In addition, the influence of wind pressure on carbonation tends to increase significantly with carbonation depth. PMID:28793462
Influence of Wind Pressure on the Carbonation of Concrete.
Zou, Dujian; Liu, Tiejun; Du, Chengcheng; Teng, Jun
2015-07-24
Carbonation is one of the major deteriorations that accelerate steel corrosion in reinforced concrete structures. Many mathematical/numerical models of the carbonation process, primarily diffusion-reaction models, have been established to predict the carbonation depth. However, the mass transfer of carbon dioxide in porous concrete includes molecular diffusion and convection mass transfer. In particular, the convection mass transfer induced by pressure difference is called penetration mass transfer. This paper presents the influence of penetration mass transfer on the carbonation. A penetration-reaction carbonation model was constructed and validated by accelerated test results under high pressure. Then the characteristics of wind pressure on the carbonation were investigated through finite element analysis considering steady and fluctuating wind flows. The results indicate that the wind pressure on the surface of concrete buildings results in deeper carbonation depth than that just considering the diffusion of carbon dioxide. In addition, the influence of wind pressure on carbonation tends to increase significantly with carbonation depth.
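Diffusion-only carbonation follows the familiar square-root-of-time law. The pressure-driven convective enhancement the paper models can be hinted at with a simple multiplicative factor; the factor here is an illustrative placeholder, not the paper's fitted penetration-reaction term:

```python
import math

def carbonation_depth(k_mm_per_sqrt_yr, years, pressure_factor=1.0):
    """Carbonation depth x = k * sqrt(t) for diffusion-controlled
    ingress, scaled by a factor > 1 standing in for wind-pressure-
    driven convective transport (illustrative assumption)."""
    return pressure_factor * k_mm_per_sqrt_yr * math.sqrt(years)

# 25 years at k = 4 mm/yr^0.5, with a 10% wind-pressure enhancement
depth = carbonation_depth(4.0, 25.0, pressure_factor=1.1)  # 22.0 mm
```

The qualitative conclusion above maps onto this form: any positive surface pressure raises the effective ingress rate, and the absolute gap over the diffusion-only prediction grows with depth.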
Olondo, C; Legarda, F; Herranz, M; Idoeta, R
2017-04-01
This paper shows the procedure performed to validate the migration equation and the migration parameter values presented in a previous paper (Legarda et al., 2011) regarding the migration of 137Cs in Spanish mainland soils. In this paper, the model validation has been carried out by checking experimentally obtained activity concentration values against those predicted by the model. The experimental data come from the measured vertical activity profiles of 8 new sampling points located in northern Spain. Before testing the predicted values of the model, the uncertainty of those values was assessed with an appropriate uncertainty analysis. Once the uncertainty of the model was established, the two sets of activity concentration values, experimental versus model-predicted, were compared. Model validation was performed by analyzing the model's accuracy, studying it as a whole and also at different depth intervals. As a result, this model has been validated as a tool to predict 137Cs behaviour in a Mediterranean environment. Copyright © 2017 Elsevier Ltd. All rights reserved.
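The accuracy check described, predicted versus measured activity compared within uncertainty, can be sketched as follows. The exponential profile is a generic stand-in for the paper's migration equation, and all numbers are illustrative:

```python
import math

def within_uncertainty(predicted, u_pred, measured, u_meas, k=2.0):
    """Compatibility test used in model validation: the prediction
    agrees with the measurement if the difference lies within the
    combined expanded (coverage factor k) uncertainty."""
    return abs(predicted - measured) <= k * math.hypot(u_pred, u_meas)

def profile(a0_bq_kg, z_cm, relax_cm):
    """Simple exponential depth profile A(z) = A0 * exp(-z / lambda),
    a common first approximation for fallout radionuclides; NOT the
    paper's migration equation."""
    return a0_bq_kg * math.exp(-z_cm / relax_cm)

pred = profile(120.0, 5.0, 5.0)              # ~44.1 Bq/kg at 5 cm
ok = within_uncertainty(pred, 4.0, 40.0, 3.0)
```

Repeating such a check per depth interval, as the paper does, separates overall accuracy from depth-dependent bias.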
Lei, Pingguang; Lei, Guanghe; Tian, Jianjun; Zhou, Zengfen; Zhao, Miao; Wan, Chonghua
2014-10-01
This paper aims to develop the irritable bowel syndrome (IBS) scale of the system of Quality of Life Instruments for Chronic Diseases (QLICD-IBS) by the modular approach and to validate it by both classical test theory and generalizability theory. The QLICD-IBS was developed based on programmed decision procedures with multiple nominal and focus group discussions, in-depth interviews, and quantitative statistical procedures. One hundred twelve inpatients with IBS provided data measuring QOL three times before and after treatment. The psychometric properties of the scale were evaluated with respect to validity, reliability, and responsiveness, employing correlation analysis, factor analyses, multi-trait scaling analysis, t tests, and G studies and D studies of generalizability theory analysis. Multi-trait scaling analysis, correlation, and factor analyses confirmed good construct validity and criterion-related validity when using the SF-36 as a criterion. Test-retest reliability coefficients (Pearson r and intra-class correlation (ICC)) for the overall score and all domains were higher than 0.80; the internal consistency α for all domains at the two measurements was higher than 0.70 except for the social domain (0.55 and 0.67, respectively). The overall score and scores for all domains/facets had statistically significant changes after treatment, with moderate or larger effect sizes (standardized response mean, SRM) ranging from 0.72 to 1.02 at the domain level. G coefficients and indexes of dependability (Φ coefficients) further confirmed the reliability of the scale with more exact variance components. The QLICD-IBS has good validity, reliability, and responsiveness, together with some distinctive features, and can be used as a quality of life instrument for patients with IBS.
Refaat, Tamer F; Singh, Upendra N; Yu, Jirong; Petros, Mulugeta; Remus, Ruben; Ismail, Syed
2016-05-20
Field experiments were conducted to test and evaluate the initial atmospheric carbon dioxide (CO2) measurement capability of an airborne, high-energy, double-pulsed, 2-μm integrated path differential absorption (IPDA) lidar. This IPDA was designed, integrated, and operated at the NASA Langley Research Center on board the NASA B-200 aircraft. The IPDA was tuned to the CO2 strong absorption line at 2050.9670 nm, which is optimal for lower-tropospheric weighted column measurements. Flights were conducted over land and ocean under different conditions. The first validation experiments of the IPDA for atmospheric CO2 remote sensing, focusing on low-reflectivity oceanic surface returns under full daytime background conditions, are presented. In these experiments, the IPDA measurements were validated by comparison with airborne flask air-sampling measurements conducted by the NOAA Earth System Research Laboratory. IPDA performance modeling was conducted to evaluate measurement sensitivity and bias errors. The IPDA signals and their variation with altitude compare well with predicted model results. In addition, off-off-line testing was conducted, with fixed instrument settings, to evaluate the IPDA systematic and random errors. Analysis shows an altitude-independent differential optical depth offset of 0.0769. An optical depth measurement uncertainty of 0.0918 compares well with the predicted value of 0.0761. The IPDA CO2 column measurement compares well with model-driven, near-simultaneous air-sampling measurements from the NOAA aircraft at different altitudes. With a 10-s shot average, a CO2 differential optical depth measurement of 1.0054±0.0103 was retrieved from a 6-km altitude with 4-GHz on-line operation. Compared to the CO2 weighted-average column dry-air volume mixing ratio of 404.08 ppm derived from air sampling, the IPDA measurement yielded a value of 405.22±4.15 ppm, with 1.02% uncertainty and 0.28% additional bias. Sensitivity analysis of environmental systematic errors attributes the additional bias to water vapor. IPDA ranging resulted in a measurement uncertainty of <3 m.
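Differential optical depth figures like the 1.0054 value above follow from the standard IPDA relation, DAOD = ½·ln[(E_on·P_off)/(E_off·P_on)], where P are the surface-return powers and E the transmitted pulse energies. A minimal sketch of that relation only; the instrument's actual retrieval chain (averaging, range gating, bias corrections) is not reproduced here:

```python
import math

def differential_optical_depth(p_on, p_off, e_on, e_off):
    # One-way differential absorption optical depth from energy-normalized
    # on-line / off-line return powers (standard IPDA relation).
    return 0.5 * math.log((e_on * p_off) / (e_off * p_on))
```

For equal pulse energies, an on-line return attenuated by exp(-2·DAOD) relative to the off-line return recovers the DAOD exactly.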
NASA Astrophysics Data System (ADS)
Jaspers, Mariëlle E.; Maltha, Ilse M.; Klaessens, John H.; Vet, Henrica C.; Verdaasdonk, Rudolf M.; Zuijlen, Paul P.
2016-02-01
In burn wounds, early discrimination between the different depths plays an important role in the treatment strategy. The remaining vasculature in the wound determines its healing potential. Non-invasive measurement tools that can identify the vascularization are therefore considered to be of high diagnostic importance. Thermography is a non-invasive technique that can accurately measure the temperature distribution over a large skin or tissue area; the temperature is a measure of the perfusion of that area. The aim of this study was to investigate the clinimetric properties (i.e. reliability and validity) of thermography for measuring burn wound depth. In a cross-sectional study of 50 burn wounds in 35 patients, the inter-observer reliability and the validity of thermography against Laser Doppler Imaging were studied. With ROC curve analyses, the ΔT cut-off points for different burn wound depths were determined. The inter-observer reliability, expressed by an intra-class correlation coefficient of 0.99, was found to be excellent. In terms of validity, a ΔT cut-off point of 0.96°C (sensitivity 71%; specificity 79%) differentiates between a superficial partial-thickness and a deep partial-thickness burn. A ΔT cut-off point of -0.80°C (sensitivity 70%; specificity 74%) could differentiate between a deep partial-thickness and a full-thickness burn wound. This study demonstrates that thermography is a reliable method in the assessment of burn wound depth. In addition, thermography was reasonably able to discriminate among different burn wound depths, indicating its potential use as a diagnostic tool in clinical burn practice.
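ROC-derived cut-offs such as the 0.96°C value above are typically chosen by maximizing sensitivity + specificity over candidate thresholds (the Youden index). A toy sketch under that assumption, with invented ΔT data, not the study's dataset:

```python
def youden_cutoff(pos, neg, candidates):
    # pos / neg: ΔT values for the deeper / shallower burn class.
    # A value >= cut-off is called "deep"; pick the candidate threshold
    # maximizing sensitivity + specificity (Youden's J + 1).
    def j(c):
        sens = sum(x >= c for x in pos) / len(pos)
        spec = sum(x < c for x in neg) / len(neg)
        return sens + spec
    return max(candidates, key=j)
```

On perfectly separable toy data the selected threshold falls between the two groups; on overlapping data it trades sensitivity against specificity exactly as the 71%/79% figures above do.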
Light field geometry of a Standard Plenoptic Camera.
Hahne, Christopher; Aggoun, Amar; Haxha, Shyqyri; Velisavljevic, Vladan; Fernández, Juan Carlos Jácome
2014-11-03
The Standard Plenoptic Camera (SPC) is an innovation in photography that allows two-dimensional images focused at different depths to be acquired from a single exposure. In contrast to conventional cameras, the SPC consists of a micro lens array and a main lens projecting virtual lenses into object space. For the first time, the present research provides an approach to estimate the distance and depth of refocused images extracted from captures obtained by an SPC. Furthermore, estimates for the position and baseline of the virtual lenses, which correspond to an equivalent camera array, are derived. On the basis of the paraxial approximation, a ray tracing model employing linear equations has been developed and implemented in Matlab. The optics simulation tool Zemax is utilized for validation purposes. By designing a realistic SPC, experiments demonstrate that a predicted image refocusing distance at 3.5 m deviates by less than 11% from the simulation in Zemax, whereas baseline estimations indicate no significant difference. The proposed methodology offers an alternative to traditional depth map acquisition by disparity analysis.
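A paraxial ray-tracing model of the kind described above reduces to linear operations on a (height, angle) ray vector. A minimal sketch of the two basic operations, free-space propagation and thin-lens refraction, not the authors' Matlab implementation:

```python
def propagate(ray, d):
    # Free-space transfer over distance d (paraxial: y' = y + d*u).
    y, u = ray
    return (y + d * u, u)

def refract(ray, f):
    # Thin lens of focal length f (paraxial: u' = u - y/f).
    y, u = ray
    return (y, u - y / f)
```

Chaining these operations across the main lens and each micro lens is what yields the refocusing distances and virtual-lens baselines estimated in the paper; as a sanity check, a collimated ray crosses the axis one focal length behind a thin lens.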
Flash Infrared Thermography Contrast Data Analysis Technique
NASA Technical Reports Server (NTRS)
Koshti, Ajay
2014-01-01
This paper provides information on an IR Contrast technique that involves extracting normalized contrast versus time evolutions from the flash thermography inspection infrared video data. The analysis calculates thermal measurement features from the contrast evolution. In addition, simulation of the contrast evolution is achieved through calibration on measured contrast evolutions from many flat-bottom holes in the subject material. The measurement features and the contrast simulation are used to evaluate flash thermography data in order to characterize delamination-like anomalies. The thermal measurement features relate to the anomaly characteristics. The contrast evolution simulation is matched to the measured contrast evolution over an anomaly to provide an assessment of the anomaly depth and width which correspond to the depth and diameter of the equivalent flat-bottom hole (EFBH) similar to that used as input to the simulation. A similar analysis, in terms of diameter and depth of an equivalent uniform gap (EUG) providing a best match with the measured contrast evolution, is also provided. An edge detection technique called the half-max is used to measure width and length of the anomaly. Results of the half-max width and the EFBH/EUG diameter are compared to evaluate the anomaly. The information provided here is geared towards explaining the IR Contrast technique. Results from a limited amount of validation data on reinforced carbon-carbon (RCC) hardware are included in this paper.
2014-01-01
Introduction Intensive care unit (ICU) patients are known to experience severely disturbed sleep, with possible detrimental effects on short- and long-term outcomes. Investigation into the exact causes and effects of disturbed sleep has been hampered by cumbersome and time-consuming methods of measuring and staging sleep. We introduce a novel method for ICU depth-of-sleep analysis, the ICU depth of sleep index (IDOS index), using single-channel electroencephalography (EEG), and apply it to outpatient recordings. A proof of concept is shown in non-sedated ICU patients. Methods Polysomnographic (PSG) recordings of five ICU patients and 15 healthy outpatients were analyzed using the IDOS index, based on the ratio between gamma and delta band power. Manual selection of thresholds was used to classify data as wake, sleep or slow wave sleep (SWS). This classification was compared to visual sleep scoring by Rechtschaffen & Kales criteria in normal outpatient recordings and ICU recordings to illustrate the face validity of the IDOS index. Results When reduced to two or three classes, scoring of sleep by the IDOS index and manual scoring show high agreement for normal sleep recordings. The obtained overall agreements, as quantified by the kappa coefficient, were 0.84 for sleep/wake classification and 0.82 for classification into three classes (wake, non-SWS and SWS). Sensitivity and specificity were highest for the wake state (93% and 93%, respectively) and lowest for SWS (82% and 76%, respectively). For ICU recordings, agreement was similar to the agreement between visual scorers previously reported in the literature. Conclusions In addition to a satisfying visual resemblance to manually scored normal PSG recordings, the established face validity of the IDOS index as an estimator of depth of sleep was excellent. This technique enables real-time, automated, single-channel visualization of depth of sleep, facilitating the monitoring of sleep in the ICU. PMID:24716479
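The IDOS index is defined above as the ratio between gamma and delta band power of a single EEG channel. A naive periodogram sketch illustrating the idea; the band limits used here (30-48 Hz for gamma, 0.5-4 Hz for delta) and the lack of windowing are assumptions, not the paper's exact signal processing:

```python
import cmath, math

def band_power(x, fs, lo, hi):
    # Naive periodogram: power summed over DFT bins with lo <= f < hi (Hz).
    n = len(x)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f < hi:
            c = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            total += abs(c) ** 2
    return total

def idos_index(eeg, fs, gamma=(30.0, 48.0), delta=(0.5, 4.0)):
    # Gamma/delta band-power ratio: low during SWS, high during wake.
    return band_power(eeg, fs, *gamma) / band_power(eeg, fs, *delta)
```

A delta-dominated epoch yields a small ratio and a gamma-dominated epoch a large one, which is why fixed thresholds on this ratio can separate wake, non-SWS and SWS.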
NASA Astrophysics Data System (ADS)
Gupta, Mousumi; Chatterjee, Somenath
2018-04-01
Surface texture is an important issue for understanding the nature (crests and troughs) of surfaces. Atomic force microscopy (AFM) imaging is a key technique for surface topography analysis. At the nano-scale, however, both the nature (i.e., deflection or crack) and the quantification (i.e., height or depth) of deposited layers are essential information for materials scientists. In this paper, a gradient-based K-means algorithm is used to differentiate the layered surfaces based on their color contrast in the as-obtained AFM images. A transformation using wavelet decomposition is applied to extract information about deflections or cracks on the material surfaces from the same images. Z-axis depth analysis of the wavelet coefficients provides information about cracks present in the material. Using the above methods, the corresponding surface information for the material is obtained. In addition, a Gaussian filter is applied to remove the unwanted lines that occur during AFM scanning. A few known samples are taken as input, and the validity of the above approaches is shown.
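The clustering step can be illustrated with a toy 1-D K-means on per-pixel values (e.g. gradient magnitudes or intensities); the paper's gradient-based variant and its AFM preprocessing are not reproduced here:

```python
def kmeans_1d(values, k=2, iters=20):
    # Toy 1-D K-means, e.g. on per-pixel gradient magnitudes of an AFM scan:
    # alternate between nearest-centroid assignment and centroid update.
    cents = [min(values), max(values)] if k == 2 else list(values[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - cents[i]))].append(v)
        cents = [sum(c) / len(c) if c else cents[i]
                 for i, c in enumerate(clusters)]
    return sorted(cents)
```

Pixels then inherit the label of their nearest centroid, separating, for instance, a flat layer from a high-contrast layer boundary.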
Spatial prediction of ground subsidence susceptibility using an artificial neural network.
Lee, Saro; Park, Inhye; Choi, Jong-Kuk
2012-02-01
Ground subsidence in abandoned underground coal mine areas can result in loss of life and property. We analyzed ground subsidence susceptibility (GSS) around abandoned coal mines in Jeong-am, Gangwon-do, South Korea, using artificial neural network (ANN) and geographic information system approaches. Spatial data of subsidence area, topography, and geology, as well as various ground-engineering data, were collected and used to create a raster database of relevant factors for a GSS map. Eight major factors causing ground subsidence were extracted from the existing ground subsidence area: slope, depth of coal mine, distance from pit, groundwater depth, rock-mass rating, distance from fault, geology, and land use. Areas of ground subsidence were randomly divided into a training set to analyze GSS using the ANN and a test set to validate the predicted GSS map. Weights of each factor's relative importance were determined by the back-propagation training algorithm and applied to the input factors. The GSS was then calculated using the weights, and GSS maps were created. The process was repeated ten times with different training data sets to check the stability of the analysis model. The map was validated using area-under-the-curve analysis with the ground subsidence areas that had not been used to train the model. The validation showed prediction accuracies between 94.84 and 95.98%, representing overall satisfactory agreement. Among the input factors, "distance from fault" had the highest average weight (i.e., 1.5477), indicating that this factor was most important. The generated maps can be used to estimate hazards to people, property, and existing infrastructure, such as the transportation network, and as part of land-use and infrastructure planning.
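The area-under-the-curve validation quoted above (94.84-95.98%) is equivalent to the rank-based (Mann-Whitney) probability that a true subsidence cell receives a higher susceptibility score than a non-subsidence cell. A minimal sketch with invented scores:

```python
def auc(pos, neg):
    # Rank-based AUC: fraction of (positive, negative) pairs where the
    # positive (subsidence) cell outscores the negative (stable) cell;
    # ties count one half.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means the susceptibility map ranks every held-out subsidence cell above every stable cell; 0.5 means the ranking is no better than chance.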
NASA Astrophysics Data System (ADS)
Singleton, V. L.; Gantzer, P.; Little, J. C.
2007-02-01
An existing linear bubble plume model was improved, and data collected from a full-scale diffuser installed in Spring Hollow Reservoir, Virginia, were used to validate the model. The depth of maximum plume rise was simulated well for two of the three diffuser tests. Temperature predictions deviated from measured profiles near the maximum plume rise height, but predicted dissolved oxygen profiles compared very well with observations. A sensitivity analysis was performed. The gas flow rate had the greatest effect on predicted plume rise height and induced water flow rate, both of which were directly proportional to gas flow rate. Oxygen transfer within the hypolimnion was independent of all parameters except initial bubble radius and was inversely proportional for radii greater than approximately 1 mm. The results of this work suggest that plume dynamics and oxygen transfer can successfully be predicted for linear bubble plumes using the discrete-bubble approach.
Li, Jie; Stroebe, Magaret; Chan, Cecilia L W; Chow, Amy Y M
2017-06-01
The rationale, development, and validation of the Bereavement Guilt Scale (BGS) are described in this article. The BGS was based on a theoretically grounded, multidimensional conceptualization of guilt. Part 1 describes the generation of the item pool, derived from in-depth interviews and a review of the scientific literature. Part 2 details the statistical analyses for further item selection (Sample 1, N = 273). Part 3 covers the psychometric properties of the emergent BGS (Sample 2, N = 600, and Sample 3, N = 479). Confirmatory factor analysis indicated that a five-factor model fit the data best. Correlations of BGS scores with depression, anxiety, self-esteem, self-forgiveness, and mode of death were consistent with theoretical predictions, supporting the construct validity of the measure. The internal consistency and test-retest reliability were also supported. Thus, initial testing suggests that the BGS is a valid tool for assessing multiple components of bereavement guilt. Further psychometric testing across cultures is recommended.
Khalil, Georges E; Calabro, Karen S; Crook, Brittani; Machado, Tamara C; Perry, Cheryl L; Prokhorov, Alexander V
2018-02-01
In the United States, young adults have the highest prevalence of tobacco use. The dissemination of mobile phone text messages is a growing strategy for tobacco risk communication among young adults. However, little has been done concerning the design and validation of such text messages. The Texas Tobacco Center of Regulatory Science (Texas-TCORS) has developed a library of messages based on framing (gain- or loss-framed), depth (simple or complex) and appeal (emotional or rational). This study validated the library with respect to depth and appeal, identified text messages that may need improvement, and explored new themes. The library formed the study sample (N=976 messages). The Linguistic Inquiry and Word Count (LIWC) software of 2015 was used to code for word count, word length and frequency of emotional and cognitive words. Analyses of variance, logistic regression and scatter plots were conducted for validation. In all, 874 messages agreed with the LIWC coding. Among those that did not, ten messages designed to be complex were coded as simple, while 51 messages designed to be rational exhibited no cognitive words. New relevant themes were identified, such as health (e.g. 'diagnosis', 'cancer'), death (e.g. 'dead', 'lethal') and social connotations (e.g. 'parents', 'friends'). Nicotine and tobacco researchers can safely use messages from the Texas-TCORS library to convey information to young adults in the intended style. Future work may expand upon the new themes. Findings will be utilized to develop new campaigns, so that the risks of nicotine and tobacco products can be widely disseminated.
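LIWC-style coding of the kind used for validation reduces to counting dictionary hits per message. A toy sketch with an invented mini-lexicon; the real LIWC 2015 dictionaries are proprietary and far larger:

```python
# Toy stand-ins for LIWC category dictionaries (assumed, not the real lists).
EMOTION = {"risk", "fear", "worry", "deadly"}
COGNITIVE = {"think", "because", "know", "reason"}

def code_message(text):
    # Count total words plus emotional and cognitive dictionary hits,
    # mirroring the word-count / category-frequency outputs described above.
    words = [w.strip(".,!?").lower() for w in text.split()]
    return {
        "word_count": len(words),
        "emotional": sum(w in EMOTION for w in words),
        "cognitive": sum(w in COGNITIVE for w in words),
    }
```

A message designed to be rational but scoring zero on the cognitive dictionary is exactly the kind of mismatch the validation above flagged in 51 messages.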
Unit Hydrograph Peaking Analysis for Goose Creek Watershed in Virginia: A Case Study
2017-05-01
increment would not exceed 1.5 times the designed unit peak. The purpose of this study is to analyze the validity of this UHPF range of the Goose...confidence interval precipitation depths to the watershed in addition to the 50% value. This study concluded that a design event with a return period greater...In this study , the physically based GSSHA model was deployed to obtain corresponding design discharge from probable rainfall events. 3.2.1 GSSHA
Liu, H; Puangmali, P; Zbyszewski, D; Elhage, O; Dasgupta, P; Dai, J S; Seneviratne, L; Althoefer, K
2010-01-01
This paper presents a novel wheeled probe for aiding a surgeon in soft tissue abnormality identification during minimally invasive surgery (MIS), compensating for the loss of haptic feedback commonly associated with MIS. Initially, a prototype for validating the concept was developed. The wheeled probe consists of an indentation depth sensor employing an optic fibre sensing scheme and a force/torque sensor. The two sensors work in unison, allowing the wheeled probe to measure the tool-tissue interaction force and the rolling indentation depth concurrently. The indentation depth sensor was developed and initially tested on a homogeneous silicone phantom, a good model for a soft tissue organ; the results show that the sensor can accurately measure the indentation depths occurring during rolling indentation and has good repeatability. To validate the ability of the wheeled probe to identify abnormalities located in the tissue, the device was tested on a silicone phantom containing embedded hard nodules. The experimental data demonstrate that, by recording the tissue reaction force and the rolling indentation depth signals during rolling indentation, the wheeled probe can rapidly identify the distribution of tissue stiffness and accurately locate the embedded hard nodules.
Al-Eidan, Fahad; Baig, Lubna Ansari; Magzoub, Mohi-Eldin; Omair, Aamir
2016-04-01
To assess the reliability and validity of an evaluation tool, using a Haematology course as an example. The cross-sectional study was conducted at King Saud Bin Abdul Aziz University of Health Sciences, Riyadh, Saudi Arabia, in 2012, while data analysis was completed in 2013. The 27-item block evaluation instrument was developed by a multidisciplinary faculty after a comprehensive literature review. Validity of the questionnaire was confirmed using principal component analysis with varimax rotation and Kaiser normalisation. Identified factors were combined to obtain the internal consistency reliability of each factor. Student's t-test was used to compare mean ratings between male and female students for the faculty and block evaluation. Of the 116 subjects in the study, 80 (69%) were males and 36 (31%) were females. Reliability of the questionnaire was Cronbach's alpha 0.91. Factor analysis yielded a logically coherent 7-factor solution that explained 75% of the variation in the data. The factors were group dynamics in problem-based learning (alpha 0.92), block administration (alpha 0.89), quality of the objective structured clinical examination (alpha 0.86), block coordination (alpha 0.81), structure of problem-based learning (alpha 0.84), quality of the written exam (alpha 0.91), and difficulty of exams (alpha 0.41). Female students' opinion on depth of analysis and critical thinking was significantly higher than that of the males (p=0.03). The faculty evaluation tool used was found to be reliable, but its validity, as assessed through factor analysis, has to be interpreted with caution, as the number of respondents was below the minimum required for factor analysis.
The U.S. Navy in the World (1981-1990): Context for U.S. Navy Capstone Strategies and Concepts
2011-12-01
CNA is a not-for-profit organization whose professional staff of over 700 provides in-depth analysis and results-oriented solutions
Data collection handbook to support modeling the impacts of radioactive material in soil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; Cheng, J.J.; Jones, L.G.
1993-04-01
A pathway analysis computer code called RESRAD has been developed for implementing US Department of Energy Residual Radioactive Material Guidelines. Hydrogeological, meteorological, geochemical, geometrical (size, area, depth), and material-related (soil, concrete) parameters are used in the RESRAD code. This handbook discusses parameter definitions, typical ranges, variations, measurement methodologies, and input screen locations. Although this handbook was developed primarily to support the application of RESRAD, the discussions and values are valid for other model applications.
Dwyer, Tim; Martin, C Ryan; Kendra, Rita; Sermer, Corey; Chahal, Jaskarndip; Ogilvie-Harris, Darrell; Whelan, Daniel; Murnaghan, Lucas; Nauth, Aaron; Theodoropoulos, John
2017-06-01
To determine the interobserver reliability of the International Cartilage Repair Society (ICRS) grading system for chondral lesions in cadavers, to determine the intraobserver reliability of the ICRS grading system comparing arthroscopy and video assessment, and to compare the arthroscopic ICRS grading system with histological grading of lesion depth. Eighteen lesions in 5 cadaveric knee specimens were arthroscopically graded by 7 fellowship-trained arthroscopic surgeons using the ICRS classification system. The arthroscopic video of each lesion was sent to the surgeons 6 weeks later for repeat grading and determination of intraobserver reliability. Lesions were then biopsied, and the depth of each cartilage lesion was assessed. Reliability was calculated using intraclass correlations. The interobserver reliability was 0.67 (95% confidence interval, 0.5-0.89) for the arthroscopic grading, and the intraobserver reliability with the video grading was 0.8 (95% confidence interval, 0.67-0.9). A high correlation was seen between the arthroscopic grading of depth and the histological grading of depth (0.91); on average, surgeons graded lesions arthroscopically a mean of 0.37 (range, 0-0.86) deeper than the histological grade. The arthroscopic ICRS classification system has good interobserver and intraobserver reliability. A high correlation with histological assessment of depth provides evidence of validity for this classification system. As cartilage lesions are treated on the basis of the arthroscopic ICRS classification, it is important to ascertain the reliability and validity of this method. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
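Intraclass correlations like the 0.67 and 0.8 values above are computed from a lesions-by-observers table of grades. A minimal one-way ICC(1,1) sketch; the paper does not state which ICC form was used, so this particular form is an assumption:

```python
def icc_oneway(ratings):
    # ratings: one list per lesion, one grade per observer; one-way
    # random-effects ICC(1,1) = (MSB - MSW) / (MSB + (k-1)*MSW).
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    msb = k * sum((sum(r) / k - grand) ** 2 for r in ratings) / (n - 1)
    msw = sum((x - sum(r) / k) ** 2 for r in ratings for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement gives 1.0; within-lesion disagreement between observers lowers the coefficient toward 0.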
Social Validity: Perceptions of Check and Connect with Early Literacy Support
ERIC Educational Resources Information Center
Miltich Lyst, Aimee; Gabriel, Stacey; O'Shaughnessy, Tam E.; Meyers, Joel; Meyers, Barbara
2005-01-01
This article underscores the potential advantages of qualitative methods to illustrate the depth and complexity of social validity. This investigation evaluates the social validity of Check and Connect with Early Literacy Support (CCEL), through the perspectives of teachers and caregivers whose children participated in the intervention. Teachers…
Pandey, Ramakant; Premalatha, M
2017-03-01
Open raceway ponds are widely adopted for cultivating microalgae on a large scale. The working depth of the raceway pond is the major parameter to be analysed for increasing the volume to surface area ratio. The working depth is limited to 5-15 cm in conventional ponds, but in this analysis the working depth of the raceway pond is taken as 25 cm. In this work, the positioning of the paddle wheel is analysed and the corresponding Vertical Mixing Index values are calculated using CFD. Flow patterns along the length of the raceway pond at three different paddle wheel speeds are analysed for L/W ratios of 6, 8 and 10, respectively. The effect of clearance (C) between the rotor blade tip and the bottom surface is also analysed for four clearance conditions, i.e. C = 2, 5, 10 and 15. The moving reference frame method in Fluent is used for modeling the six-blade paddle wheel, and the realizable k-ε model is used to capture turbulence characteristics. The overall objective of this work is to analyse the geometry required to maintain a minimum flow velocity that avoids settling of algae at the 25 cm working depth. The geometry given in [13] is designed using ANSYS DesignModeler, and CFD results are generated using ANSYS FLUENT for validation purposes. Good agreement is observed between the CFD results and the experimental particle image velocimetry results, with a deviation of 7.23%.
Chang, A.T.C.; Kelly, R.E.J.; Josberger, E.G.; Armstrong, R.L.; Foster, J.L.; Mognard, N.M.
2005-01-01
Accurate estimation of snow mass is important for the characterization of the hydrological cycle at different space and time scales. For effective water resources management, accurate estimation of snow storage is needed. Conventionally, snow depth is measured at a point, and in order to monitor snow depth in a temporally and spatially comprehensive manner, optimum interpolation of the points is undertaken. Yet the spatial representation of point measurements at a basin or larger distance scale is uncertain. Spaceborne scanning sensors, which cover a wide swath and can provide rapid repeat global coverage, are ideally suited to augment the global snow information. Satellite-borne passive microwave sensors have been used to derive snow depth (SD) with some success. The uncertainties in point SD and areal SD of natural snowpacks need to be understood if comparisons are to be made between a point SD measurement and satellite SD. In this paper three issues are addressed relating to satellite derivation of SD and ground measurements of SD in the northern Great Plains of the United States from 1988 to 1997. First, it is shown that in comparing samples of ground-measured point SD data with satellite-derived 25 × 25 km² pixels of SD from the Defense Meteorological Satellite Program Special Sensor Microwave Imager, there are significant differences in yearly SD values even though the accumulated datasets showed similarities. Second, from variogram analysis, the spatial variability of SD from each dataset was comparable. Third, for a sampling grid cell domain of 1° × 1° in the study terrain, 10 distributed snow depth measurements per cell are required to produce a sampling error of 5 cm or better. This study has important implications for validating SD derivations from satellite microwave observations. © 2005 American Meteorological Society.
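The variogram analysis mentioned above rests on the empirical semivariance of station pairs at a given separation distance. A minimal sketch with invented stations, not the study's data:

```python
import math

def semivariogram(points, lag, tol):
    # points: (x, y, snow_depth) tuples; empirical gamma(h) averaged over
    # all station pairs whose separation lies within lag +/- tol.
    sq, m = 0.0, 0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            x1, y1, v1 = points[i]
            x2, y2, v2 = points[j]
            if abs(math.hypot(x2 - x1, y2 - y1) - lag) <= tol:
                sq += (v1 - v2) ** 2
                m += 1
    return sq / (2 * m) if m else None
```

Plotting gamma(h) against lag h for both the ground and satellite datasets is how their spatial variability was judged comparable.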
Programmable stream prefetch with resource optimization
Boyle, Peter; Christ, Norman; Gara, Alan; Mawhinney, Robert; Ohmacht, Martin; Sugavanam, Krishnan
2013-01-08
A stream prefetch engine performs data retrieval in a parallel computing system. The engine receives a load request from at least one processor. The engine evaluates whether a first memory address requested in the load request is present and valid in a table. The engine checks whether valid data corresponding to the first memory address exist in an array if the first memory address is present and valid in the table. The engine increments the prefetching depth of the first stream that the first memory address belongs to and fetches a cache line associated with the first memory address from the at least one cache memory device if there is not yet valid data corresponding to the first memory address in the array. The engine determines whether prefetching of additional data is needed for the first stream within its prefetching depth. The engine prefetches the additional data if the prefetching is needed.
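The table-driven logic described above can be sketched as a toy stream table keyed by cache-line index, with a per-stream prefetch depth that grows on each confirming sequential hit. The confirmation and replacement policies below are simplifying assumptions for illustration, not the patented design:

```python
class StreamPrefetcher:
    # Toy sketch of stream prefetch with adaptive depth: a sequential hit
    # deepens the stream (up to max_depth) and issues that many line fetches.
    def __init__(self, line=64, max_depth=8):
        self.line, self.max_depth = line, max_depth
        self.streams = {}  # last-seen line index -> current prefetch depth

    def load(self, addr):
        base = addr // self.line
        for s in list(self.streams):
            if base == s + 1:  # load continues a tracked stream
                depth = min(self.streams.pop(s) + 1, self.max_depth)
                self.streams[base] = depth
                return [(base + d) * self.line for d in range(1, depth + 1)]
        self.streams[base] = 1  # new candidate stream, prefetch one line ahead
        return [(base + 1) * self.line]
```

Each successive sequential access both advances the stream and issues prefetches one line deeper, mirroring the "increment the prefetching depth, then prefetch within it" behaviour of the abstract.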
Roy, Gilles; Roy, Nathalie
2008-03-20
A multiple-field-of-view (MFOV) lidar is used to characterize the size and optical depth of low-concentration bioaerosol clouds. The concept relies on measuring the light forward-scattered by the cloud using the background aerosols at various distances behind a subvisible cloud. It also relies on subtracting the background aerosol forward-scattering contribution and on the partial attenuation of the first-order backscattering. The validity of the concept, developed to retrieve the effective diameter and the optical depth of low-concentration bioaerosol clouds with good precision, is demonstrated using simulation results and experimental MFOV lidar measurements. Calculations also show that the method can be extended to the retrieval of clouds of small optical depth.
Using Content Maps to Measure Content Development in Physical Education: Validation and Application
ERIC Educational Resources Information Center
Ward, Phillip; Dervent, Fatih; Lee, Yun Soo; Ko, Bomna; Kim, Insook; Tao, Wang
2017-01-01
Purpose: This study reports on our efforts toward extending the conceptual understanding of content development in physical education by validating content maps as a measurement tool, examining new categories of instructional tasks to describe content development and validating formulae that can be used to evaluate depth of content development.…
Utilizing Depth of Colonization of Seagrasses to Develop ...
US EPA is working with state and local partners in Florida to develop numeric water quality criteria to protect estuaries from nutrient pollution. Similar to other nutrient management programs in Florida, EPA is considering status of seagrass habitats as an indicator of biological integrity, with depth of colonization of seagrasses used to relate potential seagrass extent to water quality requirements (especially water clarity). We developed and validated an automated methodology for evaluating depth of colonization and applied it to generate 228 estimates of seagrass colonization depth for coverage years spanning 67 years (1940-2007) in a total of 100 segments within 19 estuarine and coastal areas in Florida. A validation test showed that two parameters that were computed, Zc50 and ZcMax, approximated the average and 95th percentile depth at the deep-water margin of seagrass beds. Zc50 was estimated separately for continuous seagrass vs. all seagrass. Average values for Zc50 as well as long-term trends were evaluated for the entire state, illustrating a decline on average from early years (e.g., 1940-1953) to a middle period (1982-1999) and a variable degree of recovery since 2000. The largest decrease in Zc50 occurred in Florida panhandle estuaries. Extensive water quality data compiled in the Florida DEP’s Impaired Waters Rule database was evaluated to characterize Secchi depth, CDOM, TSS, and chlorophyll-a in relation to depth of colonization estima
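Zc50 and ZcMax, which the validation test above found to approximate the average and 95th percentile depths at the deep-water margin of seagrass beds, can be sketched as order statistics of the deep-edge depth sample. The EPA methodology itself is an automated GIS analysis and is not reproduced here; this is a minimal stdlib illustration with linear interpolation between order statistics:

```python
def colonization_depths(depths):
    # Zc50 / ZcMax analogues: median and 95th percentile of deep-edge
    # depths, with linear interpolation between sorted sample values.
    s = sorted(depths)
    def pct(p):
        i = (len(s) - 1) * p
        lo = int(i)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (i - lo)
    return pct(0.5), pct(0.95)
```

Tracking these two statistics per segment and coverage year is what allows the long-term decline and partial recovery described above to be quantified.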
Validation of Pooled Whole-Genome Re-Sequencing in Arabidopsis lyrata.
Fracassetti, Marco; Griffin, Philippa C; Willi, Yvonne
2015-01-01
Sequencing pooled DNA of multiple individuals from a population instead of sequencing individuals separately has become popular due to its cost-effectiveness and simple wet-lab protocol, although some criticism of this approach remains. Here we validated a protocol for pooled whole-genome re-sequencing (Pool-seq) of Arabidopsis lyrata libraries prepared with low amounts of DNA (1.6 ng per individual). The validation was based on comparing single nucleotide polymorphism (SNP) frequencies obtained by pooling with those obtained by individual-based Genotyping By Sequencing (GBS). Furthermore, we investigated the effect of sample number, sequencing depth per individual and variant caller on population SNP frequency estimates. For Pool-seq data, we compared frequency estimates from two SNP callers, VarScan and Snape; the former employs a frequentist SNP calling approach while the latter uses a Bayesian approach. Results revealed concordance correlation coefficients well above 0.8, confirming that Pool-seq is a valid method for acquiring population-level SNP frequency data. Higher accuracy was achieved by pooling more samples (25 compared to 14) and working with higher sequencing depth (4.1× per individual compared to 1.4× per individual), which increased the concordance correlation coefficient to 0.955. The Bayesian-based SNP caller produced somewhat higher concordance correlation coefficients, particularly at low sequencing depth. We recommend pooling at least 25 individuals combined with sequencing at a depth of 100× to produce satisfactory frequency estimates for common SNPs (minor allele frequency above 0.05).
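The concordance correlation coefficient used above to compare Pool-seq and GBS allele frequencies is Lin's CCC, which penalizes both poor correlation and systematic offset between the two series. A stdlib sketch of the standard formula:

```python
from statistics import mean, pvariance

def lin_ccc(x, y):
    # Lin's concordance correlation coefficient between two aligned series
    # of SNP frequency estimates: 2*cov / (var_x + var_y + (mean_x - mean_y)^2).
    mx, my = mean(x), mean(y)
    sx, sy = pvariance(x), pvariance(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return 2 * cov / (sx + sy + (mx - my) ** 2)
```

Identical frequency estimates give 1.0, and a constant bias lowers the coefficient even when the Pearson correlation stays at 1, which is why CCC is the appropriate agreement measure here.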
NASA Astrophysics Data System (ADS)
Wang, Wenjing; Qiu, Rui; Ren, Li; Liu, Huan; Wu, Zhen; Li, Chunyan; Li, Junli
2017-09-01
Mean glandular dose (MGD) is determined not only by the compressed breast thickness (CBT) and the glandular content, but also by the distribution of glandular tissues in the breast. Depth dose inside the breast in mammography has received wide attention, as glandular dose decreases rapidly with increasing depth. In this study, an experiment using thermoluminescent dosimeters (TLDs) was carried out to validate Monte Carlo simulations of mammography. Percent depth doses (PDDs) at different depth values were measured inside simple breast phantoms of different thicknesses. The experimental values agreed well with the values calculated by Geant4. Then a detailed breast model with a CBT of 4 cm and a glandular content of 50%, which had been constructed in previous work, was used to study the effects of the distribution of glandular tissues in the breast with Geant4. The breast model was reversed in the direction of compression to obtain a reverse model with a different distribution of glandular tissues. Depth dose distributions and glandular tissue dose conversion coefficients were calculated. The results revealed that the conversion coefficients were about 10% larger when the breast model was reversed, because glandular tissues in the reverse model are concentrated in the upper part of the model.
NASA Astrophysics Data System (ADS)
Buongiorno Nardelli, B.; Guinehut, S.; Verbrugge, N.; Cotroneo, Y.; Zambianchi, E.; Iudicone, D.
2017-12-01
The depth of the upper ocean mixed layer provides fundamental information on the amount of seawater that directly interacts with the atmosphere. Its space-time variability modulates water mass formation and carbon sequestration processes related to both the physical and biological pumps. These processes are particularly relevant in the Southern Ocean, where surface mixed-layer depth estimates are generally obtained either as climatological fields derived from in situ observations or through numerical simulations. Here we demonstrate that weekly observation-based reconstructions can be used to describe the variations of the mixed-layer depth in the upper ocean over a range of space and time scales. We compare and validate four different products obtained by combining satellite measurements of the sea surface temperature, salinity, and dynamic topography and in situ Argo profiles. We also compute an ensemble mean and use the corresponding spread to estimate mixed-layer depth uncertainties and to identify the more reliable products. The analysis points out the advantage of synergistic approaches that include as input the sea surface salinity observations obtained through a multivariate optimal interpolation. The corresponding data allow assessment of the seasonal and interannual variability of the mixed-layer depth. Specifically, the maximum correlations between mixed-layer anomalies and the Southern Annular Mode are found at different time lags, related to distinct summer/winter responses in the Antarctic Intermediate Water and Sub-Antarctic Mode Waters main formation areas.
NASA Technical Reports Server (NTRS)
Morris, W. D.; Witte, W. G.; Whitlock, C. H.
1980-01-01
Remote sensing of water quality is discussed. Remote sensing penetration depth is a function both of water type and wavelength. Results of three tests to help demonstrate the magnitude of this dependence are presented. The water depth to which the remote-sensor data were valid was always less than the Secchi disk depth, although not always the same fraction of that depth. The penetration depths were wavelength dependent and showed the greatest variation for the water type with the largest Secchi depth. The presence of a reflective plate, simulating a reflective subsurface, increased the apparent depth of light penetration from that calculated for water of infinite depth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belz, J.; Cao, Z.; Huentemeyer, P.
Measurements are reported on the fluorescence of air as a function of depth in electromagnetic showers initiated by bunches of 28.5 GeV electrons. The light yield is compared with the expected and observed depth profiles of ionization in the showers. It validates the use of atmospheric fluorescence profiles in measuring ultra high energy cosmic rays.
Validation of TOMS Aerosol Products using AERONET Observations
NASA Technical Reports Server (NTRS)
Bhartia, P. K.; Torres, O.; Sinyuk, A.; Holben, B.
2002-01-01
The Total Ozone Mapping Spectrometer (TOMS) aerosol algorithm uses measurements of radiances at two near UV channels in the range 331-380 nm to derive aerosol optical depth and single scattering albedo. Because of the low near UV surface albedo of all terrestrial surfaces (between 0.02 and 0.08), the TOMS algorithm has the capability of retrieving aerosol properties over the oceans and the continents. The Aerosol Robotic Network (AERONET) routinely derives spectral aerosol optical depth and single scattering albedo at a large number of sites around the globe. We have performed comparisons of both aerosol optical depth and single scattering albedo derived from TOMS and AERONET. In general, the TOMS aerosol products agree well with the ground-based observations. Results of this validation will be discussed.
ERIC Educational Resources Information Center
Santelices, Maria Veronica; Taut, Sandy
2011-01-01
This paper describes convergent validity evidence regarding the mandatory, standards-based Chilean national teacher evaluation system (NTES). The study examined whether NTES identifies--and thereby rewards or punishes--the "right" teachers as high- or low-performing. We collected in-depth teaching performance data on a sample of 58…
NASA Astrophysics Data System (ADS)
Xu, Jianhui; Shu, Hong
2014-09-01
This study assesses the analysis performance of assimilating the Moderate Resolution Imaging Spectroradiometer (MODIS)-based albedo and snow cover fraction (SCF) separately or jointly into the physically based Common Land Model (CoLM). A direct insertion method (DI) is proposed to assimilate the black-sky and white-sky albedos into the CoLM. The MODIS-based albedo is calculated with the MODIS bidirectional reflectance distribution function (BRDF) model parameters product (MCD43B1) and the solar zenith angle as estimated in the CoLM for each time step. Meanwhile, the MODIS SCF (MOD10A1) is assimilated into the CoLM using the deterministic ensemble Kalman filter (DEnKF) method. A new DEnKF-albedo assimilation scheme for integrating the DI and DEnKF assimilation schemes is proposed. Our assimilation results are validated against in situ snow depth observations from November 2008 to March 2009 at five sites in the Altay region of China. The experimental results show that all three data assimilation schemes can improve snow depth simulations. But overall, the DEnKF-albedo assimilation shows the best analysis performance as it significantly reduces the bias and root-mean-square error (RMSE) during the snow accumulation and ablation periods at all sites except for the Fuyun site. The SCF assimilation via DEnKF produces better results than the albedo assimilation via DI, implying that the albedo assimilation that indirectly updates the snow depth state variable is less efficient than the direct SCF assimilation. For the Fuyun site, the DEnKF-albedo scheme tends to overestimate the snow depth accumulation with the maximum bias and RMSE values because of the large positive innovation (observation minus forecast).
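The DEnKF update used above (Sakov and Oke's deterministic ensemble Kalman filter) applies the full Kalman gain to the ensemble mean but only half the gain to the ensemble anomalies. A minimal scalar sketch follows; the snow-depth ensemble and the linear observation operator mapping depth to snow cover fraction are toy stand-ins for the model's real snow-depletion relationship.

```python
def denkf_update(ensemble, obs, obs_var, h):
    """One DEnKF analysis step for a scalar state: full gain on the
    mean, half gain on the anomalies (which deflates ensemble spread)."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    anoms = [x - mean for x in ensemble]
    hx = [h(x) for x in ensemble]
    hmean = sum(hx) / n
    hanoms = [v - hmean for v in hx]
    pxy = sum(a * b for a, b in zip(anoms, hanoms)) / (n - 1)  # state-obs covariance
    pyy = sum(b * b for b in hanoms) / (n - 1)                 # obs-space variance
    k = pxy / (pyy + obs_var)                                  # Kalman gain
    new_mean = mean + k * (obs - hmean)
    return [new_mean + a - 0.5 * k * ha for a, ha in zip(anoms, hanoms)]

# Prior snow-depth ensemble (m), an observed SCF of 0.9, and an invented
# linear observation operator.
prior = [0.2, 0.4, 0.6]
posterior = denkf_update(prior, obs=0.9, obs_var=0.01, h=lambda d: 2.0 * d)
```

The posterior mean moves toward the observation while the ensemble spread shrinks, mirroring how the SCF observation pulls the snow depth state in the scheme above.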
Indices for estimating fractional snow cover in the western Tibetan Plateau
NASA Astrophysics Data System (ADS)
Shreve, Cheney M.; Okin, Gregory S.; Painter, Thomas H.
Snow cover in the Tibetan Plateau is highly variable in space and time and plays a key role in ecological processes of this cold-desert ecosystem. Resolution of passive microwave data is too low for regional-scale estimates of snow cover on the Tibetan Plateau, requiring an alternate data source. Optically derived snow indices allow for more accurate quantification of snow cover using higher-resolution datasets subject to the constraint of cloud cover. This paper introduces a new optical snow index and assesses four optically derived MODIS snow indices using Landsat-based validation scenes: MODIS Snow-Covered Area and Grain Size (MODSCAG), Relative Multiple Endmember Spectral Mixture Analysis (RMESMA), Relative Spectral Mixture Analysis (RSMA) and the normalized-difference snow index (NDSI). Pearson correlation coefficients were positively correlated with the validation datasets for all four optical snow indices, suggesting each provides a good measure of total snow extent. At the 95% confidence level, linear least-squares regression showed that MODSCAG and RMESMA had accuracy comparable to validation scenes. Fusion of optical snow indices with passive microwave products, which provide snow depth and snow water equivalent, has the potential to contribute to hydrologic and energy-balance modeling in the Tibetan Plateau.
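Of the four indices compared above, the NDSI is the simplest: a band ratio exploiting the fact that snow is bright in the green and dark in the shortwave infrared. A sketch with illustrative reflectance values (the 0.4 threshold is a commonly used MODIS convention, not a result of this paper):

```python
def ndsi(green, swir):
    """Normalized-difference snow index from green and shortwave-infrared
    surface reflectance."""
    return (green - swir) / (green + swir)

# Illustrative reflectances: snow vs. bare ground.
snow = ndsi(0.80, 0.10)   # strongly positive
bare = ndsi(0.25, 0.30)   # near zero or negative
```

Spectral-mixture approaches such as MODSCAG and RMESMA go further by estimating the snow *fraction* within each pixel rather than applying a single threshold.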
NASA Technical Reports Server (NTRS)
Moreno-Madrinan, Max Jacobo; Fischer, Andrew
2012-01-01
Satellite observation of phytoplankton concentration, or chlorophyll-a, is critically integral to monitoring coastal water quality. However, the optical properties of estuarine and coastal waters are highly variable and complex and pose a great challenge for accurate analysis. Constituents such as suspended solids and dissolved organic matter, together with the overlapping and uncorrelated absorptions in the blue region of the spectrum, render the blue-green ratio algorithms for estimating chlorophyll-a inaccurate. Measurement of sun-induced chlorophyll fluorescence, on the other hand, which utilizes the near infrared portion of the electromagnetic spectrum, may provide a better estimate of phytoplankton concentrations. While modelling and laboratory studies have illustrated both the utility and limitations of satellite baseline algorithms based on the sun-induced chlorophyll fluorescence signal, few have examined the empirical validity of these algorithms using a comprehensive long-term in situ data set. In an unprecedented analysis of long-term (2003-2011) in situ monitoring data from Tampa Bay, Florida (USA), we assess the validity of the FLH product from the Moderate Resolution Imaging Spectroradiometer (MODIS) against chlorophyll-a and a suite of water quality parameters taken in a variety of conditions throughout a large optically complex estuarine system. A systematic analysis of sampling sites throughout the bay is undertaken to understand how the relationship between FLH and in situ chlorophyll-a responds to varying conditions within the estuary, including water depth, distance from shore and structures, and eight water quality parameters. Of the 39 stations for which data were derived, 22 showed significant correlations when the FLH product was matched with in situ chlorophyll-a data.
The correlations (r2) for individual stations within Tampa Bay ranged between 0.67 (n=28, p<0.01) and -0.457 (n=12, p=0.016), indicating that for some areas within the Bay, FLH can be a good predictor of chlorophyll-a concentration and hence a useful tool for the analysis of water quality. Overall, the results show a 106% increase in the validity of chlorophyll-a concentration estimates using FLH over the standard blue-green OC3M algorithm. This analysis also illustrates that the correlations between FLH and in situ chlorophyll-a measurements increase with increasing water depth and distance of the monitoring sites from both the shore and structures. However, due to confounding factors related to the complexity of the estuarine system, a linear improvement in the FLH to chlorophyll-a relationship was not clearly noted with increasing depth and distance from shore alone. Turbidity, nutrients (total nitrogen and total phosphorus), biological oxygen demand, salinity and sea surface temperature correlated positively with FLH, while dissolved oxygen and pH showed negative correlations. Principal component analyses are employed to further describe the relationships between the multivariate water quality parameters and the FLH product. The majority of sites with higher and very significant correlations (p<0.01) also showed high correlation values for nutrients, turbidity and biological oxygen demand. These sites were on average in greater than seven meters of water and over five kilometers from shore. A thorough understanding of the relationship between the MODIS FLH product and in situ water quality parameters will enhance our understanding of the accuracy of MODIS's global FLH algorithm and assist in optimizing its calibration for use in monitoring the quality of estuarine and coastal waters worldwide.
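Principal component analysis of the kind mentioned above can be sketched, for just two variables, as power iteration on the sample covariance matrix. The (FLH, chlorophyll-a) pairs below are invented for illustration; the study's analysis covered many more water quality parameters.

```python
import math

def pca_first_component(rows):
    """Unit-length first principal component of a two-column dataset,
    found by power iteration on the sample covariance matrix."""
    n = len(rows)
    means = [sum(r[j] for r in rows) / n for j in range(2)]
    c = [[0.0, 0.0], [0.0, 0.0]]
    for r in rows:
        d = [r[0] - means[0], r[1] - means[1]]
        for i in range(2):
            for j in range(2):
                c[i][j] += d[i] * d[j] / (n - 1)
    v = [1.0, 1.0]
    for _ in range(100):  # repeated multiplication converges to the top eigenvector
        w = [c[0][0] * v[0] + c[0][1] * v[1],
             c[1][0] * v[0] + c[1][1] * v[1]]
        norm = math.sqrt(w[0] ** 2 + w[1] ** 2)
        v = [w[0] / norm, w[1] / norm]
    return v

# Invented pairs lying near a line of slope 2, plus small alternating noise:
data = [(x, 2.0 * x + 0.01 * ((-1) ** x)) for x in range(10)]
pc1 = pca_first_component(data)
```

For data clustered around a slope-2 line the leading component points along (1, 2)/sqrt(5), i.e. the axis that captures most of the joint variance.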
Coded excitation for infrared non-destructive testing of carbon fiber reinforced plastics.
Mulaveesala, Ravibabu; Venkata Ghali, Subbarao
2011-05-01
This paper proposes a Barker coded excitation for defect detection using infrared non-destructive testing. The capability of the proposed excitation scheme is highlighted with a recently introduced correlation-based post-processing approach and compared with the existing phase-based analysis, taking the signal-to-noise ratio into consideration. The applicability of the proposed scheme has been experimentally validated on a carbon fiber reinforced plastic specimen containing flat bottom holes located at different depths.
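What makes Barker codes attractive for correlation-based processing is their autocorrelation: a peak equal to the code length with all sidelobes of magnitude at most 1. This can be checked directly for the 7-bit code (a generic sketch, not the paper's processing chain):

```python
# The 7-bit Barker code.
barker7 = [1, 1, 1, -1, -1, 1, -1]

def cross_correlate(a, b):
    """Full (aperiodic) cross-correlation of two equal-length sequences."""
    n = len(a)
    return [sum(a[i] * b[i - lag] for i in range(n) if 0 <= i - lag < n)
            for lag in range(-(n - 1), n)]

acf = cross_correlate(barker7, barker7)  # sharp peak at zero lag
```

Matched filtering the thermal response against such a code concentrates the excitation energy into one sharp correlation peak, which is what improves the signal-to-noise ratio relative to unmodulated heating.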
NASA Astrophysics Data System (ADS)
Tessonnier, T.; Mairani, A.; Brons, S.; Sala, P.; Cerutti, F.; Ferrari, A.; Haberer, T.; Debus, J.; Parodi, K.
2017-08-01
In the field of particle therapy helium ion beams could offer an alternative for radiotherapy treatments, owing to their interesting physical and biological properties intermediate between protons and carbon ions. We present in this work the comparisons and validations of the Monte Carlo FLUKA code against in-depth dosimetric measurements acquired at the Heidelberg Ion Beam Therapy Center (HIT). Depth dose distributions in water with and without ripple filter, lateral profiles at different depths in water and a spread-out Bragg peak were investigated. After experimentally-driven tuning of the less well-known initial beam characteristics in vacuum (beam lateral size and momentum spread) and simulation parameters (water ionization potential), comparisons of depth dose distributions were performed between simulations and measurements, which showed overall good agreement with range differences below 0.1 mm and dose-weighted average dose-differences below 2.3% throughout the entire energy range. Comparisons of lateral dose profiles showed differences in full-width-half-maximum lower than 0.7 mm. Measurements of the spread-out Bragg peak indicated differences with simulations below 1% in the high dose regions and 3% in all other regions, with a range difference less than 0.5 mm. Despite the promising results, some discrepancies between simulations and measurements were observed, particularly at high energies. These differences were attributed to an underestimation of dose contributions from secondary particles at large angles, as seen in a triple Gaussian parametrization of the lateral profiles along the depth. However, the results allowed us to validate FLUKA simulations against measurements, confirming its suitability for 4He ion beam modeling in preparation for clinical establishment at HIT.
Future activities building on this work will include treatment plan comparisons using validated biological models between proton and helium ions, either within a Monte Carlo treatment planning engine based on the same FLUKA code, or an independent analytical planning system fed with a validated database of inputs calculated with FLUKA.
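The lateral-profile comparison above hinges on extracting the full width at half maximum from sampled dose profiles. A generic sketch (not the authors' code) using linear interpolation between samples, tested here on a Gaussian whose exact FWHM is 2*sqrt(2 ln 2)*sigma, about 2.3548*sigma:

```python
import math

def fwhm(xs, ys):
    """Full width at half maximum of a sampled, single-peaked profile,
    locating the half-height crossings by linear interpolation."""
    half = max(ys) / 2.0
    def crossing(indices):
        for i in indices:
            y0, y1 = ys[i], ys[i + 1]
            if (y0 - half) * (y1 - half) <= 0 and y0 != y1:
                return xs[i] + (half - y0) * (xs[i + 1] - xs[i]) / (y1 - y0)
    left = crossing(range(len(xs) - 1))              # rising edge, scanned from the left
    right = crossing(range(len(xs) - 2, -1, -1))     # falling edge, scanned from the right
    return right - left

# Gaussian test profile with sigma = 1, sampled every 0.1.
sigma = 1.0
xs = [i * 0.1 for i in range(-60, 61)]
ys = [math.exp(-x * x / (2.0 * sigma ** 2)) for x in xs]
width = fwhm(xs, ys)
```

With 0.1 spacing the interpolated width recovers the analytic value to well within the sub-millimetre differences quoted above.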
NASA Astrophysics Data System (ADS)
Hedrick, A.; Marshall, H.-P.; Winstral, A.; Elder, K.; Yueh, S.; Cline, D.
2014-06-01
Repeated Light Detection and Ranging (LiDAR) surveys are quickly becoming the de facto method for measuring spatial variability of montane snowpacks at high resolution. This study examines the potential of a 750 km2 LiDAR-derived dataset of snow depths, collected during the 2007 northern Colorado Cold Lands Processes Experiment (CLPX-2), as a validation source for an operational hydrologic snow model. The SNOw Data Assimilation System (SNODAS) model framework, operated by the US National Weather Service, combines a physically-based energy-and-mass-balance snow model with satellite, airborne and automated ground-based observations to provide daily estimates of snowpack properties at nominally 1 km resolution over the coterminous United States. Independent validation data are scarce due to the assimilating nature of SNODAS, compelling the need for an independent validation dataset with substantial geographic coverage. Within twelve distinctive 500 m × 500 m study areas located throughout the survey swath, ground crews performed approximately 600 manual snow depth measurements during each of the CLPX-2 LiDAR acquisitions. This supplied a dataset for constraining the uncertainty of upscaled LiDAR estimates of snow depth at the 1 km SNODAS resolution, resulting in a root-mean-square difference of 13 cm. Upscaled LiDAR snow depths were then compared to the SNODAS estimates over the entire study area for the dates of the LiDAR flights. The remotely-sensed snow depths provided a more spatially continuous comparison dataset and agreed more closely with the model estimates than did the in situ measurements alone. Finally, the results revealed three distinct areas where the differences between LiDAR observations and SNODAS estimates were most drastic, suggesting natural processes specific to these regions as causal influences on model uncertainty.
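The upscaling-and-comparison step described above (block-averaging fine-resolution lidar depths to the model grid, then computing a root-mean-square difference) can be sketched on a toy grid; the depth values and grid sizes are invented for illustration.

```python
import math

def upscale(fine, block):
    """Block-average a square fine-resolution grid (e.g. lidar snow
    depths) to a coarser grid of block x block aggregates."""
    n = len(fine)
    return [[sum(fine[i][j] for i in range(r, r + block)
                            for j in range(c, c + block)) / block ** 2
             for c in range(0, n, block)]
            for r in range(0, n, block)]

def rmsd(a, b):
    """Root-mean-square difference between two equal-shaped grids."""
    d = [x - y for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    return math.sqrt(sum(v * v for v in d) / len(d))

# Toy 4x4 "lidar" snow-depth grid (m), upscaled 2x2, against an invented
# coarse model field.
lidar = [[1.0, 1.2, 0.8, 0.6],
         [1.1, 1.3, 0.7, 0.5],
         [0.4, 0.6, 1.5, 1.7],
         [0.5, 0.3, 1.6, 1.8]]
model = [[1.10, 0.70],
         [0.50, 1.60]]
error = rmsd(upscale(lidar, 2), model)
```

In the study this same aggregation brought roughly 750 km2 of lidar returns onto the nominal 1 km SNODAS grid before the comparison.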
NASA Astrophysics Data System (ADS)
Hedrick, A.; Marshall, H.-P.; Winstral, A.; Elder, K.; Yueh, S.; Cline, D.
2015-01-01
Repeated light detection and ranging (lidar) surveys are quickly becoming the de facto method for measuring spatial variability of montane snowpacks at high resolution. This study examines the potential of a 750 km2 lidar-derived data set of snow depths, collected during the 2007 northern Colorado Cold Lands Processes Experiment (CLPX-2), as a validation source for an operational hydrologic snow model. The SNOw Data Assimilation System (SNODAS) model framework, operated by the US National Weather Service, combines a physically based energy-and-mass-balance snow model with satellite, airborne and automated ground-based observations to provide daily estimates of snowpack properties at nominally 1 km resolution over the conterminous United States. Independent validation data are scarce due to the assimilating nature of SNODAS, compelling the need for an independent validation data set with substantial geographic coverage. Within 12 distinctive 500 × 500 m study areas located throughout the survey swath, ground crews performed approximately 600 manual snow depth measurements during each of the CLPX-2 lidar acquisitions. This supplied a data set for constraining the uncertainty of upscaled lidar estimates of snow depth at the 1 km SNODAS resolution, resulting in a root-mean-square difference of 13 cm. Upscaled lidar snow depths were then compared to the SNODAS estimates over the entire study area for the dates of the lidar flights. The remotely sensed snow depths provided a more spatially continuous comparison data set and agreed more closely with the model estimates than did the in situ measurements alone. Finally, the results revealed three distinct areas where the differences between lidar observations and SNODAS estimates were most drastic, providing insight into the causal influences of natural processes on model uncertainty.
NASA Astrophysics Data System (ADS)
Zhang, Yaning; Xu, Fei; Li, Bingxi; Kim, Yong-Song; Zhao, Wenke; Xie, Gongnan; Fu, Zhongbin
2018-04-01
This study aims to validate the three-phase heat and mass transfer model developed in the first part (Three phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development). Experimental results from previous studies were used for the validation. The results showed that the correlation coefficients for the simulated and experimental water contents at different soil depths were between 0.83 and 0.92. The correlation coefficients for the simulated and experimental liquid water contents at different soil temperatures were between 0.95 and 0.99. With these high accuracies, the developed model can be used reliably to predict the water contents at different soil depths and temperatures.
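The correlation coefficients quoted above are sample Pearson coefficients between simulated and measured series. A minimal sketch follows; the water-content values are invented for illustration.

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Invented simulated vs. measured volumetric water contents at one depth:
simulated = [0.12, 0.18, 0.25, 0.31, 0.36]
measured = [0.10, 0.20, 0.24, 0.33, 0.35]
r = pearson_r(simulated, measured)
```

Values in the 0.83-0.99 range reported above correspond to simulated series that track the measurements closely but not perfectly.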
Xpo7 is a broad-spectrum exportin and a nuclear import receptor.
Aksu, Metin; Pleiner, Tino; Karaca, Samir; Kappert, Christin; Dehne, Heinz-Jürgen; Seibel, Katharina; Urlaub, Henning; Bohnsack, Markus T; Görlich, Dirk
2018-05-10
Exportins bind cargo molecules in a RanGTP-dependent manner inside nuclei and transport them through nuclear pores to the cytoplasm. CRM1/Xpo1 is the best-characterized exportin because specific inhibitors such as leptomycin B allow straightforward cargo validations in vivo. The analysis of other exportins lagged far behind, foremost because no such inhibitors had been available for them. In this study, we explored the cargo spectrum of exportin 7/Xpo7 in depth and identified not only ∼200 potential export cargoes but also, surprisingly, ∼30 nuclear import substrates. Moreover, we developed anti-Xpo7 nanobodies that acutely block Xpo7 function when transfected into cultured cells. The inhibition is pathway specific, mislocalizes export cargoes of Xpo7 to the nucleus and import substrates to the cytoplasm, and allowed validation of numerous tested cargo candidates. This establishes Xpo7 as a broad-spectrum bidirectional transporter and paves the way for a much deeper analysis of exportin and importin function in the future. © 2018 Aksu et al.
Predicting groundwater redox status on a regional scale using linear discriminant analysis.
Close, M E; Abraham, P; Humphries, B; Lilburne, L; Cuthill, T; Wilson, S
2016-08-01
Reducing conditions are necessary for denitrification, thus the groundwater redox status can be used to identify subsurface zones where potentially significant nitrate reduction can occur. Groundwater chemistry in two contrasting regions of New Zealand was classified with respect to redox status and related to mappable factors, such as geology, topography and soil characteristics, using discriminant analysis. Redox assignment was carried out for water sampled from 568 and 2223 wells in the Waikato and Canterbury regions, respectively. For the Waikato region 64% of wells sampled indicated oxic conditions in the water; 18% indicated reduced conditions and 18% had attributes indicating both reducing and oxic conditions, termed "mixed". In Canterbury 84% of wells indicated oxic conditions; 10% were mixed; and only 5% indicated reduced conditions. The analysis was performed for three well depth ranges: <25 m, 25 to 100 m, and >100 m. For both regions, the percentage of oxidised groundwater decreased with increasing well depth. Linear discriminant analysis was used to develop models to differentiate between the three redox states. Models were derived for each depth and region using 67% of the data, and then subsequently validated on the remaining 33%. The average agreement between predicted and measured redox status was 63% and 70% for the Waikato and Canterbury regions, respectively. The models were incorporated into GIS and the prediction of redox status was extended over the whole region, excluding mountainous land. This knowledge improves spatial prediction of reduced groundwater zones, and therefore, when combined with groundwater flow paths, improves estimates of denitrification. Copyright © 2016 Elsevier B.V. All rights reserved.
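Linear discriminant analysis of the kind used above can be sketched, for the two-class, two-predictor case, as Fisher's linear discriminant (the study distinguished three redox classes using more predictors; the points below are invented, well-separated stand-ins).

```python
def fit_lda_two_class(class0, class1):
    """Fisher's linear discriminant for two classes of 2-D points:
    returns the weight vector and a midpoint decision threshold."""
    def mean(pts):
        return [sum(p[i] for p in pts) / len(pts) for i in range(2)]
    m0, m1 = mean(class0), mean(class1)
    s = [[0.0, 0.0], [0.0, 0.0]]  # pooled within-class scatter matrix
    for pts, m in ((class0, m0), (class1, m1)):
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],   # w = S_w^-1 (m1 - m0)
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    t = sum(wi * (a + b) / 2.0 for wi, a, b in zip(w, m0, m1))
    return w, t

def predict(w, t, p):
    """0 for the first class, 1 for the second."""
    return 1 if w[0] * p[0] + w[1] * p[1] > t else 0

# Invented standardized predictor pairs for two redox classes:
oxic = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
reduced = [(4.0, 4.0), (5.0, 4.0), (4.0, 5.0), (5.0, 5.0)]
w, t = fit_lda_two_class(oxic, reduced)
```

The derive-on-67%, validate-on-33% split in the study then simply applies `predict` to the held-out wells and tallies agreement.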
ERIC Educational Resources Information Center
Woollacott, L. C.
2009-01-01
The CDIO (Conceive-Design-Implement-Operate) syllabus is the most detailed statement on the goals of engineering education currently found in the literature. This paper presents an in-depth validation exercise of the CDIO syllabus using the taxonomy of engineering competencies as a validating instrument. The study explains the attributes that make…
NASA Astrophysics Data System (ADS)
Guarino, Lucia Falsetti
A method for measuring depth of understanding of students in the middle-level science classroom was developed and validated. A common theme in the literature on constructivism in science education is that constructivist pedagogy, as opposed to objectivist pedagogy, results in a greater depth of understanding. Since few instruments measuring this construct exist at the present time, the development of such a tool to measure this construct was a significant contribution to the current body of assessment technologies in science education. The author's Depth of Understanding Assessment (DUA) evolved from a writing measure originally designed as a history assessment. The study involved 230 eighth grade science students studying a chemical change unit. The main research questions were: (1) What is the relationship between the DUA and each of the following independent variables: recall, application, and questioning modalities as measured by the Cognitive Preference Test; deep, surface, achieving, and deep-achieving approaches as measured by the Learning Process Questionnaire; achievement as measured by the Chemical Change Quiz, and teacher perception of student ability to conceptualize science content? (2) Is there a difference in depth of understanding, as measured by the DUA, between students who are taught by objectivist pedagogy and students who are taught by constructivist pedagogy favoring the constructivist group? (3) Is there a gender difference in depth of understanding as measured by the DUA? (4) Do students who are taught by constructivist pedagogy perceive their learning environment as more constructivist than students who are taught by objectivist pedagogy? Six out of nine hypothesis tests supported the validity of the DUA. The results of the qualitative component of this study which consisted of student interviews substantiated the quantitative results by providing additional information and insights. 
There was a significant difference in depth of understanding between the two groups favoring the constructivist group; however, since only two teachers and their students participated in the study, the significance of this result is limited. There was a significant gender difference in depth of understanding favoring females. Students in the constructivist group perceived their learning environment to be more constructivist than students in the objectivist group.
Effect of applied force and blade speed on histopathology of bone during resection by sagittal saw.
James, Thomas P; Chang, Gerard; Micucci, Steven; Sagar, Amrit; Smith, Eric L; Cassidy, Charles
2014-03-01
A sagittal saw is commonly used for resection of bone during joint replacement surgery. During sawing, heat is generated that can lead to an increase in temperature at the resected surface. The aim of this study was to determine the effect of applied thrust force and blade speed on generating heat. The effect of these factors and their interactions on cutting temperature and bone health were investigated with a full factorial Design of Experiments approach for two levels of thrust force, 15 N and 30 N, and for two levels of blade oscillation rate, 12,000 and 18,000 cycles per minute (cpm). In addition, a preliminary study was conducted to eliminate blade wear as a confounding factor. A custom sawing fixture was used to crosscut samples of fresh bovine cortical bone while temperature in the bone was measured by thermocouple (n=40), followed by measurements of the depth of thermal necrosis by histopathological analysis (n=200). An analysis of variance was used to determine the significance of the factor effects on necrotic depth as evidenced by empty lacunae. Both thrust force and blade speed demonstrated a statistically significant effect on the depth of osteonecrosis (p<0.05), while the interaction of thrust force with blade speed was not significant (p=0.22). The minimum necrotic depth observed was 0.50 mm, corresponding to a higher level of force and blade speed (30 N, 18,000 cpm). Under these conditions, a maximum temperature of 93°C was measured at 0.3 mm from the kerf. With a decrease in both thrust force and blade speed (15 N, 12,000 cpm), the temperature in the bone increased to 109°C, corresponding to a nearly 50% increase in depth of the necrotic zone to 0.74 mm. A predictive equation for necrotic depth in terms of thrust force and blade speed was determined through regression analysis and validated by experiment.
The histology results imply that an increase in applied thrust force is more effective in reducing the depth of thermal damage to surrounding bone than an increase in blade speed. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
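A predictive equation of the form d = b0 + b1*F + b2*S can be fit to a balanced 2x2 factorial by ordinary least squares, which reduces to per-factor slopes because the force and speed columns are orthogonal in a balanced design. The 0.74 mm and 0.50 mm corner depths are from the abstract; the two mixed-level corner values below are invented for illustration, and this is not the authors' actual regression.

```python
def fit_factorial(points):
    """Least-squares fit of d = b0 + b1*F + b2*S on a balanced 2x2
    design; valid because the F and S columns are orthogonal."""
    n = len(points)
    mf = sum(f for f, s, d in points) / n
    ms = sum(s for f, s, d in points) / n
    md = sum(d for f, s, d in points) / n
    b1 = (sum((f - mf) * d for f, s, d in points)
          / sum((f - mf) ** 2 for f, s, d in points))   # mm per N
    b2 = (sum((s - ms) * d for f, s, d in points)
          / sum((s - ms) ** 2 for f, s, d in points))   # mm per cpm
    b0 = md - b1 * mf - b2 * ms
    return b0, b1, b2

# (force N, speed cpm, necrotic depth mm); mixed-level corners invented.
runs = [(15, 12000, 0.74), (30, 18000, 0.50),
        (15, 18000, 0.62), (30, 12000, 0.62)]
b0, b1, b2 = fit_factorial(runs)
depth_at_high = b0 + b1 * 30 + b2 * 18000
```

Both fitted slopes come out negative, consistent with the finding that raising either force or speed reduces the necrotic depth.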
Khalil, Georges E.; Calabro, Karen S.; Crook, Brittani; Machado, Tamara C.; Perry, Cheryl L.; Prokhorov, Alexander V.
2018-01-01
INTRODUCTION In the United States, young adults have the highest prevalence of tobacco use. The dissemination of mobile phone text messages is a growing strategy for tobacco risk communication among young adults. However, little has been done concerning the design and validation of such text messages. The Texas Tobacco Center of Regulatory Science (Texas-TCORS) has developed a library of messages based on framing (gain- or loss-framed), depth (simple or complex) and appeal (emotional or rational). This study validated the library based on depth and appeal, identified text messages that may need improvement, and explored new themes. METHODS The library formed the study sample (N=976 messages). The 2015 version of the Linguistic Inquiry and Word Count (LIWC) software was used to code for word count, word length and frequency of emotional and cognitive words. Analyses of variance, logistic regression and scatter plots were conducted for validation. RESULTS In all, 874 messages agreed with the LIWC coding. Among the messages that did not, ten designed to be complex indicated simplicity, while 51 designed to be rational exhibited no cognitive words. New relevant themes were identified, such as health (e.g. ‘diagnosis’, ‘cancer’), death (e.g. ‘dead’, ‘lethal’) and social connotations (e.g. ‘parents’, ‘friends’). CONCLUSIONS Nicotine and tobacco researchers can safely use, for young adults, messages from the Texas-TCORS library to convey information in the intended style. Future work may expand upon the new themes. Findings will be utilized to develop new campaigns, so that risks of nicotine and tobacco products can be widely disseminated. PMID:29888338
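The LIWC-style coding described above amounts to counting, per message, the words that fall in each dictionary category. A minimal sketch follows; the tiny category word lists are invented stand-ins, not the proprietary LIWC 2015 dictionaries.

```python
# Invented stand-in category dictionaries (real LIWC lists are far larger).
EMOTION = {"scary", "love", "worry", "happy"}
COGNITIVE = {"because", "think", "know", "reason"}

def code_message(text):
    """Word count plus per-category hit counts for one text message."""
    words = [w.strip(".,:;!?'\"").lower() for w in text.split()]
    return {
        "word_count": len(words),
        "emotional": sum(w in EMOTION for w in words),
        "cognitive": sum(w in COGNITIVE for w in words),
    }

result = code_message("Think about it: smoking is scary because it kills.")
```

A message "designed to be rational but exhibiting no cognitive words", as flagged in the validation above, would simply score zero in the cognitive category.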
NASA Astrophysics Data System (ADS)
Refaat, T. F.; Singh, U. N.; Petros, M.; Yu, J.; Remus, R.; Ismail, S.
2017-12-01
An airborne Integrated Path Differential Absorption (IPDA) lidar has been developed and validated at NASA Langley Research Center for atmospheric carbon dioxide column measurements. The instrument consists of a tunable, high-energy 2-μm double-pulse laser transmitter and a 0.4 m telescope receiver coupled to an InGaAs pin detection system. The instrument was validated for carbon dioxide (CO2) measurements from ground and airborne platforms, using a movable lidar trailer and the NASA B-200 aircraft. Airborne validation was conducted over the ocean by comparing the IPDA CO2 optical depth measurement to an optical depth model derived from NOAA airborne CO2 air-sampling. Another airborne validation was conducted over land vegetation by comparing the IPDA measurement to a model derived from on-board in situ measurements with an absolute, non-dispersive infrared gas analyzer (LiCor 840A). IPDA range measurements were also compared to rangefinder and Global Positioning System (GPS) records during ground and airborne validation, respectively. Range measurements from the ground indicated a 0.93 m IPDA range measurement uncertainty, which is limited by the transmitted laser pulse and detection system properties. This uncertainty increased to 2.80 and 7.40 m over ocean and land, due to fluctuations in ocean surface and ground elevations, respectively. IPDA CO2 differential optical depth measurements agree with both models. Consistent CO2 optical depth biases were well correlated with the digitizer full-scale input range settings. CO2 optical depth measurements over ocean from 3.1 and 6.1 km altitudes indicated 0.95% and 0.83% uncertainty, respectively, using 10-second (100-shot) averaging. Using the same averaging, 0.40% uncertainty was observed over land, from 3.4 km altitude, due to higher surface reflectivity, which increases the return signal power and enhances the signal-to-noise ratio.
However, less uncertainty is observed at higher altitudes due to reduced signal shot noise, indicating that the detection system's noise-equivalent power dominates the error. These results show that the IPDA technique is well suited for space-based platforms, where the larger integrated column content enhances the measurement sensitivity.
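The column retrieval above rests on the differential absorption optical depth (DAOD) between the on-line and off-line pulses, and the quoted uncertainties shrink with shot averaging. The sketch below shows a simplified form of both relations, ignoring the calibration and background terms a real instrument must handle; all variable names are illustrative.

```python
import math

def daod(p_on, p_off, e_on, e_off):
    # Simplified IPDA relation: differential absorption optical depth from
    # the energy-normalized on-line and off-line return powers.
    return 0.5 * math.log((p_off * e_on) / (p_on * e_off))

def averaged_uncertainty(single_shot_sigma, n_shots):
    # Random shot-to-shot error averages down roughly as 1/sqrt(N),
    # consistent with the 10-second (100-shot) averaging quoted above.
    return single_shot_sigma / math.sqrt(n_shots)
```

For example, equal energy-normalized returns give zero DAOD, while halving the on-line return relative to the off-line return gives a DAOD of 0.5 ln 2.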
Ernstbrunner, L; Werthel, J-D; Hatta, T; Thoreson, A R; Resch, H; An, K-N; Moroder, P
2016-10-01
The bony shoulder stability ratio (BSSR) allows for quantification of the bony stabilisers in vivo. We aimed to biomechanically validate the BSSR, determine whether joint incongruence affects the stability ratio (SR) of a shoulder model, and determine the correct parameter (glenoid concavity versus humeral head radius) for calculation of the BSSR in vivo. Four polyethylene balls (radii: 19.1 mm to 38.1 mm) were used to mould four fitting sockets of four different depths (3.2 mm to 19.1 mm). The SR was measured in congruent and incongruent biomechanical experimental series. The experimental SR of a congruent system was compared with the calculated SR based on the BSSR approach, and differences in SR between congruent and incongruent experimental conditions were quantified. Finally, the experimental SR was compared with the SR calculated from either the socket concavity radius or the plastic ball radius. The experimental SR is comparable with the calculated SR (mean difference 10%, sd 8%; relative values). The incongruence experiments showed almost no differences (2%, sd 2%). The SR calculated on the basis of the socket concavity radius is superior in predicting the experimental SR (mean difference 10%, sd 9%) compared with the SR calculated from the plastic ball radius (mean difference 42%, sd 55%). The present biomechanical investigation confirmed the validity of the BSSR. Incongruence has no significant effect on the SR of a shoulder model. In the event of an incongruent system, calculation of the BSSR on the basis of the glenoid concavity radius is recommended. Cite this article: L. Ernstbrunner, J-D. Werthel, T. Hatta, A. R. Thoreson, H. Resch, K-N. An, P. Moroder. Biomechanical analysis of the effect of congruence, depth and radius on the stability ratio of a simplistic 'ball-and-socket' joint model. Bone Joint Res 2016;5:453-460. DOI: 10.1302/2046-3758.510.BJR-2016-0078.R1. © 2016 Ernstbrunner et al.
Wan, Chonghua; Li, Hezhan; Fan, Xuejin; Yang, Ruixue; Pan, Jiahua; Chen, Wenru; Zhao, Rong
2014-06-04
Quality of life (QOL) for patients with coronary heart disease (CHD) is now a worldwide concern, yet disease-specific instruments are few and none has been developed by the modular approach. This paper aims to develop the CHD scale of the system of Quality of Life Instruments for Chronic Diseases (QLICD-CHD) by the modular approach and to validate it by both classical test theory and Generalizability Theory. The QLICD-CHD was developed through programmed decision procedures with multiple nominal and focus group discussions, in-depth interviews, pre-testing and quantitative statistical procedures. Data were collected from 146 inpatients with CHD, who completed the QOL measure three times before and after treatment. The psychometric properties of the scale were evaluated with respect to validity, reliability and responsiveness, employing correlation analysis, factor analyses, multi-trait scaling analysis, t-tests, and the G studies and D studies of Generalizability Theory. Multi-trait scaling analysis, correlation and factor analyses confirmed good construct validity and criterion-related validity when using the SF-36 as a criterion. The internal consistency α and test-retest reliability coefficients (Pearson r and intra-class correlations, ICC) for the overall instrument and all domains were higher than 0.70 and 0.80, respectively. The overall score and all domains except the social domain showed statistically significant changes after treatment, with moderate effect sizes (standardized response mean, SRM) ranging from 0.32 to 0.67. G-coefficients and indices of dependability (Ф coefficients) further confirmed the reliability of the scale with more exact variance components. The QLICD-CHD has good validity and reliability, moderate responsiveness and several strengths, and can be used as a quality of life instrument for patients with CHD.
However, to obtain better reliability, the number of items in the social domain should be increased, or the quality, rather than the quantity, of the items should be improved.
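Cronbach's alpha, the internal-consistency coefficient reported above (values exceeding 0.70), can be computed directly from item-level scores. A dependency-free Python sketch with made-up data:

```python
def cronbach_alpha(items):
    # items: one inner list of scores per item, aligned across the same
    # respondents. alpha = (k/(k-1)) * (1 - sum(item variances)/total variance)
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Illustrative data: 3 items scored by 4 respondents.
scores = [[1, 2, 3, 4], [2, 2, 3, 5], [1, 3, 3, 4]]
alpha = cronbach_alpha(scores)
```

Perfectly correlated items yield alpha = 1.0; the fabricated scores above, which track each other closely, give a high alpha of about 0.95.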
Changes in dive profiles as an indicator of feeding success in king and Adélie penguins
NASA Astrophysics Data System (ADS)
Bost, C. A.; Handrich, Y.; Butler, P. J.; Fahlman, A.; Halsey, L. G.; Woakes, A. J.; Ropert-Coudert, Y.
2007-02-01
Determining when and how deep avian divers feed remains a challenge despite technical advances. Systems that record oesophageal temperature are able to determine rate of prey ingestion with a high level of accuracy but technical problems still remain to be solved. Here we examine the validity of using changes in depth profiles to infer feeding activity in free-ranging penguins, as more accessible proxies of their feeding success. We used oesophageal temperature loggers with fast temperature sensors, deployed in tandem with time-depth recorders, on king and Adélie penguins. In the king penguin, a high correspondence was found between the number of ingestions recorded per dive and the number of wiggles during the bottom and the ascent part of the dives. In the Adélie penguins, which feed on smaller prey, the number of large temperature drops was linearly related to the number of undulations per dive. The analysis of change in depth profiles from high-resolution time-depth recorders can provide key information to enhance the study of feeding rate and foraging success of these predators. Such potential is especially relevant in the context of using Southern marine top predators to study change in availability of marine resources.
NASA Astrophysics Data System (ADS)
Efimova, Varvara; Hoffmann, Volker; Eckert, Jürgen
2012-10-01
Depth profiling with pulsed glow discharge is a promising technique. The application of pulsed voltage for sputtering reduces the sputtering rate and thermal stress, and thereby improves the analysis of thin layered and thermally fragile samples. However, pulsed glow discharge is not well studied, and this limits its practical use. The current work addresses the questions that usually arise when the pulsed mode is applied: Which duty cycle, frequency and pulse length must be chosen to obtain the optimal sputtering rate and crater shape? Are the well-known sputtering effects of the continuous mode also valid for the pulsed regime? Is there any difference between dc and rf pulsing in terms of sputtering? It is found that the pulse length is a crucial parameter for the crater shape and thermal effects. Sputtering with pulsed dc and rf modes is found to be similar. The observed sputtering effects at various pulsing parameters helped to interpret and optimize the depth resolution of GD-OES depth profiles.
Demonstration of UXO-PenDepth for the Estimation of Projectile Penetration Depth
2010-08-01
Effects (JTCG/ME) in August 2001. The accreditation process included verification and validation (V&V) by a subject matter expert (SME) other than... Within UXO-PenDepth, there are three sets of input parameters that are required: impact conditions (Fig. 1a), penetrator properties, and target... properties. The impact conditions that need to be defined are projectile orientation and impact velocity. The algorithm has been evaluated against
Valenta, Sabine; De Geest, Sabina; Fierz, Katharina; Beckmann, Sonja; Halter, Jörg; Schanz, Urs; Nair, Gayathri; Kirsch, Monika
2017-04-01
To give a first description of the perception of late effects among long-term survivors after allogeneic haematopoietic stem cell transplantation (HSCT) and to validate the German Brief Illness Perception Questionnaire (BIPQ). This is a secondary analysis of data from the cross-sectional, mixed-method PROVIVO study, which included 376 survivors from two Swiss HSCT centres. First, we analysed the sample characteristics and the distribution of each BIPQ item. Secondly, we tested three validity types following the American Educational Research Association (AERA) Standards: content validity indices (CVIs) were assessed based on an expert survey (n = 9), a confirmatory factor analysis (CFA) explored the internal structure, and correlations tested validity in relation to other variables, including data from the Hospital Anxiety and Depression Scale (HADS), the number and burden of late effects, and clinical variables. In total, 319 HSCT recipients returned completed BIPQs. For this sample, the most feared threat to post-transplant life was long-lasting late effects (median = 8/10). The expert survey revealed an overall acceptable CVI (0.82); three items (personal control, treatment control and causal representation) yielded low CVIs (<.78). The CFA confirmed that the BIPQ fits the underlying construct, the Common-Sense Model (CSM) (χ2(df) = 956.321, p = 0.00). The HADS scores correlated strongly with the item emotional representation (r = 0.648; r = 0.656). Given its overall content validity, the German BIPQ is a promising instrument to gain deeper insights into patients' perceptions of HSCT late effects. However, as three items revealed potential problems, improvements and adaptations of the translation are required. Following these revisions, validity evidence should be re-examined through an in-depth patient survey. Copyright © 2017 Elsevier Ltd. All rights reserved.
Liu, Quan; Ma, Li; Fan, Shou-Zen; Abbod, Maysam F; Shieh, Jiann-Shing
2018-01-01
Estimating the depth of anaesthesia (DoA) in operations has always been a challenging issue due to the underlying complexity of the brain mechanisms. Electroencephalogram (EEG) signals are undoubtedly the most widely used signals for measuring DoA. In this paper, a novel EEG-based index is proposed to evaluate DoA for 24 patients receiving general anaesthesia with different levels of unconsciousness. The Sample Entropy (SampEn) algorithm was utilised to acquire the chaotic features of the signals. After calculating the SampEn from the EEG signals, Random Forest was utilised for developing learning regression models with the Bispectral index (BIS) as the target. Correlation coefficient, mean absolute error, and area under the curve (AUC) were used to verify the perioperative performance of the proposed method. Validation comparisons with typical nonstationary signal analysis methods (i.e., recurrence analysis and permutation entropy) and regression methods (i.e., neural network and support vector machine) were conducted. To further verify the accuracy and validity of the proposed methodology, the data were divided into four unconsciousness-level groups on the basis of BIS levels, and analysis of variance (ANOVA) was applied to the corresponding index (i.e., the regression output). Results indicate that the correlation coefficient improved to 0.72 ± 0.09 after filtering and to 0.90 ± 0.05 after regression, from initial values of 0.51 ± 0.17. Similarly, the final mean absolute error declined dramatically to 5.22 ± 2.12. In addition, the ultimate AUC increased to 0.98 ± 0.02, and the ANOVA indicates that each of the four groups of different anaesthetic levels differed significantly from the nearest levels. Furthermore, the Random Forest output was largely linear in relation to BIS, thus giving better DoA prediction accuracy.
In conclusion, the proposed method provides a concrete basis for monitoring patients' anaesthetic level during surgeries.
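Sample Entropy, the feature extractor named above, counts template matches of length m and m+1 (excluding self-matches) and takes the negative log of their ratio. A direct, dependency-free Python version follows; it is O(n²), which is fine for short EEG windows, and in practice the tolerance r is often set to about 0.2 times the signal's standard deviation (here it is simply an absolute default).

```python
import math

def sample_entropy(x, m=2, r=0.2):
    # SampEn(m, r) = -ln(A/B), where B counts pairs of m-length templates
    # within tolerance r (Chebyshev distance) and A counts (m+1)-length pairs.
    def count_matches(mm):
        n = len(x) - mm + 1
        templates = [x[i:i + mm] for i in range(n)]
        c = 0
        for i in range(n):
            for j in range(i + 1, n):  # j > i excludes self-matches
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c
    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

Regular signals (constant or strictly alternating) give small SampEn values, while irregular signals give larger ones, which is why the statistic serves as a complexity feature for EEG.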
Auvinet, E; Multon, F; Manning, V; Meunier, J; Cobb, J P
2017-01-01
Gait asymmetry information is a key point in disease screening and follow-up. The Constant Relative Phase (CRP) has been used to quantify a within-stride asymmetry index, but it requires noise-free, accurate motion capture, which is difficult to obtain in clinical settings. This study explores a new index, the Longitudinal Asymmetry Index (ILong), which is derived from data captured by a low-cost depth camera (Kinect). ILong is based on depth images averaged over several gait cycles, rather than on derived joint positions or angles. This study aims to evaluate (1) the validity of CRP computed with Kinect, (2) the validity and sensitivity of ILong for measuring gait asymmetry based solely on data provided by a depth camera, (3) the clinical applicability of a posteriorly mounted camera system to avoid occlusion caused by standard front-fitted treadmill consoles, and (4) the number of strides needed to reliably calculate ILong. The gait of 15 subjects was recorded concurrently with a marker-based system (MBS) and Kinect, and asymmetry was artificially induced by attaching a 5 cm sole to one foot. CRP computed with Kinect was not reliable. ILong detected this disturbed gait reliably and could be computed from a posteriorly placed Kinect without loss of validity. A minimum of five strides was needed to achieve a correlation coefficient of 0.9 between the standard MBS and the low-cost depth-camera-based ILong. ILong provides a clinically pragmatic method for measuring gait asymmetry, with application to improved patient care through enhanced disease screening, diagnosis and monitoring. Copyright © 2016. Published by Elsevier B.V.
Survey Development to Assess College Students' Perceptions of the Campus Environment.
Sowers, Morgan F; Colby, Sarah; Greene, Geoffrey W; Pickett, Mackenzie; Franzen-Castle, Lisa; Olfert, Melissa D; Shelnutt, Karla; Brown, Onikia; Horacek, Tanya M; Kidd, Tandalayo; Kattelmann, Kendra K; White, Adrienne A; Zhou, Wenjun; Riggsbee, Kristin; Yan, Wangcheng; Byrd-Bredbenner, Carol
2017-11-01
We developed and tested a College Environmental Perceptions Survey (CEPS) to assess college students' perceptions of the healthfulness of their campus. CEPS was developed in 3 stages: questionnaire development, validity testing, and reliability testing. Questionnaire development was based on an extensive literature review and input from an expert panel to establish content validity. Face validity was established with the target population using cognitive interviews with 100 college students. Concurrent-criterion validity was established with in-depth interviews (N = 30) of college students compared to surveys completed by the same 30 students. Surveys completed by college students from 8 universities (N = 1147) were used to test internal structure (factor analysis) and internal consistency (Cronbach's alpha). After development and testing, 15 items remained from the original 48 items. A 5-factor solution emerged: physical activity (4 items, α = .635), water (3 items, α = .773), vending (2 items, α = .680), healthy food (2 items, α = .631), and policy (2 items, α = .573). The mean total score for all universities was 62.71 (±11.16) on a 100-point scale. CEPS appears to be a valid and reliable tool for assessing college students' perceptions of their health-related campus environment.
Artani, Azmina; Bhamani, Shireen Shehzad; Azam, Iqbal; AbdulSultan, Moiz; Khoja, Adeel; Kamal, Ayeesha K
2017-05-05
Contextually relevant stressful life events are integral to the quantification of stress, yet no such measures have been adapted for the Pakistani population. The RLCQ, developed by Richard Rahe, measures an individual's stress by recording the experience of life-changing events. We used qualitative methodology to identify contextually relevant stressors in an open-ended format, conducting serial in-depth interviews until thematic saturation of reported stressful life events was achieved. In the next phase of adaptation, our objective was to scale each item on the questionnaire, weighting each identified event by the severity of stress. This scaling exercise was performed on 200 randomly selected participants residing in four communities of Karachi, namely Kharadar, Dhorajee, Gulshan and Garden. For analysis of the scaled tool, exploratory factor analysis was used to inform its structure. Finally, to complete the adaptation, content and face validity exercises were performed: content validity through subject-expert review, and face validity through translation and back-translation of the adapted RLCQ. This yielded our final adapted tool. Stressful life events emerging from the qualitative phase of the study reflect daily stressors arising from an unstable socio-political environment. Such events included public harassment, robbery/theft, missed life opportunities due to nepotism, extortion and threats, being a victim of state-sponsored brutality, lack of electricity, water, sanitation and fuel, destruction due to natural disasters, and direct or media-based exposure to suicide bombing in the city. Personally or societally relevant stressors included male-child preference, having an unmarried middle-aged daughter, and the lack of empowerment and respect reported by females. The finally adapted RLCQ incorporated "Environmental Stress" as a new category.
The processes of qualitative methodology, in-depth interviews, community-based scaling, and face and content validity exercises yielded an adapted RLCQ that represents contextually relevant life stress for adults residing in urban Pakistan. Clinicaltrials.gov NCT02356263. Registered January 28, 2015 (observational study only).
NASA Astrophysics Data System (ADS)
Nakahara, Hisashi
2015-02-01
For monitoring temporal changes in subsurface structures, I propose to use autocorrelation functions of coda waves from local earthquakes recorded at surface receivers, which probably contain more body waves than surface waves. The use of coda waves requires earthquakes, resulting in decreased time resolution for monitoring; nonetheless, it may be possible to monitor subsurface structures with sufficient time resolution in regions of high seismicity. In studying the 2011 Tohoku-Oki, Japan earthquake (Mw 9.0), for which velocity changes have previously been reported, I try to validate the method. KiK-net stations in northern Honshu are used in this analysis. For each moderate earthquake, normalized autocorrelation functions of surface records are stacked with respect to time windows in the S-wave coda. Aligning the stacked, normalized autocorrelation functions in time, I search for changes in phase arrival times. Phases at lag times of <1 s are studied because the focus is on changes at shallow depths. Temporal variations in the arrival times are measured at the stations using the stretching method. Clear phase delays are found to be associated with the mainshock and to gradually recover with time. The phase delays are 10% on average, with a maximum of about 50% at some stations. A deconvolution analysis using surface and subsurface records at the same stations is conducted for validation. The results show that the phase delays from the deconvolution analysis are slightly smaller than those from the autocorrelation analysis, which implies that the phases on the autocorrelations are caused by larger velocity changes at shallower depths. The autocorrelation analysis seems to have an accuracy of about several percent, which is much coarser than that of methods using earthquake doublets and borehole array data, so this analysis might be applicable only in detecting larger changes.
In spite of these disadvantages, this analysis is still attractive because it can be applied to many records on the surface in regions where no boreholes are available.
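The stretching method referred to above can be sketched compactly: stretch the current waveform's time axis by candidate factors and keep the factor that maximizes correlation with a reference trace. Under common assumptions a homogeneous relative velocity change maps onto the best stretch factor (dv/v = -ε in one frequent sign convention, though conventions vary). A pure-Python sketch on synthetic sinusoids, with linear interpolation for resampling:

```python
import math

def stretch_factor(reference, current, candidates):
    # Grid search over stretch factors; returns the candidate epsilon that
    # maximizes the correlation between the reference trace and the current
    # trace evaluated at stretched times t * (1 + epsilon).
    n = len(reference)
    def corr(a, b):
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db)
    def resample(sig, eps):
        out = []
        for i in range(n):
            t = i * (1 + eps)
            k = int(t)
            if k >= n - 1:
                out.append(sig[-1])  # clamp past the end of the trace
            else:
                frac = t - k
                out.append(sig[k] * (1 - frac) + sig[k + 1] * frac)
        return out
    return max(candidates, key=lambda e: corr(reference, resample(current, e)))

# Synthetic demo: the "current" trace is the reference slowed by 5%.
ref = [math.sin(0.2 * i) for i in range(200)]
cur = [math.sin(0.2 * i / 1.05) for i in range(200)]
best = stretch_factor(ref, cur, [0.0, 0.025, 0.05, 0.075])
```

The grid search correctly recovers the 5% stretch from the synthetic pair; a production implementation would also report the peak correlation as a quality measure.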
Reeves, Todd D; Marbach-Ad, Gili
2016-01-01
Most discipline-based education researchers (DBERs) were formally trained in the methods of scientific disciplines such as biology, chemistry, and physics, rather than social science disciplines such as psychology and education. As a result, DBERs may have never taken specific courses in the social science research methodology--either quantitative or qualitative--on which their scholarship often relies so heavily. One particular aspect of (quantitative) social science research that differs markedly from disciplines such as biology and chemistry is the instrumentation used to quantify phenomena. In response, this Research Methods essay offers a contemporary social science perspective on test validity and the validation process. The instructional piece explores the concepts of test validity, the validation process, validity evidence, and key threats to validity. The essay also includes an in-depth example of a validity argument and validation approach for a test of student argument analysis. In addition to DBERs, this essay should benefit practitioners (e.g., lab directors, faculty members) in the development, evaluation, and/or selection of instruments for their work assessing students or evaluating pedagogical innovations. © 2016 T. D. Reeves and G. Marbach-Ad. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Surface-illuminant ambiguity and color constancy: effects of scene complexity and depth cues.
Kraft, James M; Maloney, Shannon I; Brainard, David H
2002-01-01
Two experiments were conducted to study how scene complexity and cues to depth affect human color constancy. Specifically, two levels of scene complexity were compared. The low-complexity scene contained two walls with the same surface reflectance and a test patch which provided no information about the illuminant. In addition to the surfaces visible in the low-complexity scene, the high-complexity scene contained two rectangular solid objects and 24 paper samples with diverse surface reflectances. Observers viewed illuminated objects in an experimental chamber and adjusted the test patch until it appeared achromatic. Achromatic settings made under two different illuminants were used to compute an index that quantified the degree of constancy. Two experiments were conducted: one in which observers viewed the stimuli directly, and one in which they viewed the scenes through an optical system that reduced cues to depth. In each experiment, constancy was assessed for two conditions. In the valid-cue condition, many cues provided valid information about the illuminant change. In the invalid-cue condition, some image cues provided invalid information. Four broad conclusions are drawn from the data: (a) constancy is generally better in the valid-cue condition than in the invalid-cue condition; (b) for the stimulus configuration used, increasing image complexity has little effect in the valid-cue condition but leads to increased constancy in the invalid-cue condition; (c) for the stimulus configuration used, reducing cues to depth has little effect for either constancy condition; and (d) there is moderate individual variation in the degree of constancy exhibited, particularly in the degree to which the complexity manipulation affects performance.
Validation of MODIS FLH and In Situ Chlorophyll a from Tampa Bay, Florida (USA)
NASA Technical Reports Server (NTRS)
Fischer, Andrew; MorenoMadrinan, Max J.
2012-01-01
Satellite observation of phytoplankton concentration, or chlorophyll-a (chl-a), is critically important to monitoring coastal water quality. However, the optical properties of estuarine and coastal waters are highly variable and complex, and pose a great challenge for accurate analysis. Constituents such as suspended solids and dissolved organic matter, and the overlapping and uncorrelated absorptions in the blue region of the spectrum, render the blue-green ratio algorithms for estimating chl-a inaccurate. Measurement of sun-induced chlorophyll fluorescence, on the other hand, which utilizes the near-infrared portion of the electromagnetic spectrum, may provide a better estimate of phytoplankton concentrations. While modelling and laboratory studies have illustrated both the utility and limitations of satellite algorithms based on the sun-induced chlorophyll fluorescence signal, few have examined the empirical validity of these algorithms or compared their accuracy against blue-green ratio algorithms. In an unprecedented analysis using a long-term (2003-2011) in situ monitoring data set from Tampa Bay, Florida (USA), we assess the validity of the FLH product from the Moderate Resolution Imaging Spectroradiometer (MODIS) against a suite of water quality parameters taken in a variety of conditions throughout this large, optically complex estuarine system. Overall, the results show a 106% increase in the validity of chl-a concentration estimation using FLH over the standard chl-a estimate from the blue-green OC3M algorithm. Additionally, a systematic analysis of sampling sites throughout the bay is undertaken to understand how the FLH product responds to varying conditions in the estuary, and correlations are computed to see how the relationships between satellite FLH and in situ chl-a change with depth, distance from shore and from structures like bridges, nutrient concentrations, and turbidity.
This analysis illustrates that the correlations between FLH and in situ chl-a measurements increase with increasing distance between monitoring sites and structures like bridges and the shore. Probably due to confounding factors, the expected improvement in the FLH-chl-a relationship was not clearly noted with increasing depth and distance from shore alone (not including bridges). Correlations with turbidity and nutrient concentrations are discussed further, and principal component analyses are employed to address the relationships between the multivariate data sets. A thorough understanding of how satellite FLH algorithms relate to in situ water quality parameters will enhance our understanding of how MODIS's global FLH algorithm can be used empirically to monitor coastal waters worldwide.
Free torsional vibrations of tapered cantilever I-beams
NASA Astrophysics Data System (ADS)
Rao, C. Kameswara; Mirza, S.
1988-08-01
Torsional vibration characteristics of linearly tapered cantilever I-beams have been studied by using the Galerkin finite element method. A third degree polynomial is assumed for the angle of twist. The analysis presented is valid for long beams and includes the effect of warping. The individual as well as combined effects of linear tapers in the width of the flanges and the depth of the web on the torsional vibration of cantilever I-beams are investigated. Numerical results generated for various values of taper ratios are presented in graphical form.
Nascimento, Maria Isabel do; Reichenheim, Michael Eduardo; Monteiro, Gina Torres Rego
2011-12-01
The objective of this study was to reassess the dimensional structure of a Brazilian version of the Scale of Satisfaction with Interpersonal Processes of General Medical Care, proposed originally as a one-dimensional instrument. Strict confirmatory factor analysis (CFA) and exploratory factor analysis modeled within a CFA framework (E/CFA) were used to identify the best model. An initial CFA rejected the one-dimensional structure, while an E/CFA suggested a two-dimensional structure. The latter structure was followed by a new CFA, which showed that the model without cross-loading was the most parsimonious, with adequate fit indices (CFI = 0.982 and TLI = 0.988), except for RMSEA (0.062). Although the model achieved convergent validity, discriminant validity was questionable, with the square-root of the mean variance extracted from dimension 1 estimates falling below the respective factor correlation. According to these results, there is not sufficient evidence to recommend the immediate use of the instrument, and further studies are needed for a more in-depth analysis of the postulated structures.
A numerical forecast model for road meteorology
NASA Astrophysics Data System (ADS)
Meng, Chunlei
2017-05-01
A fine-scale numerical model for road surface parameter prediction (BJ-ROME) is developed based on the Common Land Model. The model is validated using in situ observation data measured by ROSA road weather stations from Vaisala, Finland. BJ-ROME not only takes into account road surface factors such as imperviousness, relatively low albedo, high heat capacity, and high heat conductivity, but also considers the influence of urban anthropogenic heat, impervious-surface evaporation, and urban land-use/land-cover changes. The forecast time span and update interval of BJ-ROME in operational use are 24 h and 3 h, respectively. The validation results indicate that BJ-ROME can successfully simulate the diurnal variation of road surface temperature under both clear-sky and rainfall conditions. BJ-ROME can also simulate road water and snow depth well if artificial removal is taken into account. The road surface energy balance on rainy days is quite different from that under clear-sky conditions, and road evaporation cannot be neglected in road surface water cycle research. Sensitivity analysis shows that the solar radiation correction coefficient, asphalt depth, and asphalt heat conductivity are important parameters in road surface temperature simulation. The prediction results could be used as a reference for a maintenance decision support system to mitigate traffic jams and urban waterlogging, especially in large cities.
Igras, Susan; Diakité, Mariam; Lundgren, Rebecka
2017-07-01
In West Africa, social factors influence whether couples with unmet need for family planning act on birth-spacing desires. Tékponon Jikuagou is testing a social network-based intervention to reduce social barriers by diffusing new ideas. Individuals and groups judged socially influential by their communities provide entrée to networks. A participatory social network mapping methodology was designed to identify these diffusion actors. Analysis of monitoring data, in-depth interviews, and evaluation reports assessed the methodology's acceptability to communities and staff and whether it produced valid, reliable data to identify influential individuals and groups who diffuse new ideas through their networks. Results indicated the methodology's acceptability. Communities were actively and equitably engaged. Staff appreciated its ability to yield timely, actionable information. The mapping methodology also provided valid and reliable information by enabling communities to identify highly connected and influential network actors. Consistent with social network theory, this methodology resulted in the selection of informal groups and individuals in both informal and formal positions. In-depth interview data suggest these actors were diffusing new ideas, further confirming their influence/connectivity. The participatory methodology generated insider knowledge of who has social influence, challenging commonly held assumptions. Collecting and displaying information fostered staff and community learning, laying groundwork for social change.
Histological Validity and Clinical Evidence for Use of Fractional Lasers for Acne Scars
Sardana, Kabir; Garg, Vijay K; Arora, Pooja; Khurana, Nita
2012-01-01
Although fractional lasers are widely used for acne scars, very little clinical or histological data are available that are based on objective clinical assessment or on the depth of laser penetration in in vivo facial tissue. Depth of penetration is probably the most important predictor of improvement in acne scars, but the published histological studies show little uniformity in the substrate (tissue) used, the processing, and the stains employed. The variability of laser settings (dose, pulses and density) makes comparison of the studies difficult; it is easier to compare end results, histological depth and clinical outcomes. We analysed all published clinical and histological studies of fractional lasers in acne scars and assessed the data, both clinical and histological, with statistical software to decipher their significance. On statistical analysis, the depth achieved was variable: 679 μm for the 1550-nm lasers versus 895 μm (10,600 nm) and 837 μm (2940 nm). The mean depth of penetration (in μm) per unit energy (in millijoules, mJ) depends on the laser studied, and was statistically found to be 12.9–28.5 for Er:glass, 3–54.38 for Er:YAG and 6.28–53.66 for CO2. The subjective clinical improvement was a modest 46%. The lack of objective evaluation of clinical improvement and of scar-specific assessment, together with the lack of appropriate in vivo studies, makes a case for combining conventional modalities such as subcision, punch excision and needling with fractional lasers to achieve optimal results. PMID:23060702
Olson, Scott A.; Ayotte, Joseph D.
1996-01-01
Total scour at a highway crossing comprises three components: 1) long-term aggradation or degradation; 2) contraction scour (due to reduction in flow area caused by a bridge); and 3) local scour (caused by accelerated flow around piers and abutments). Total scour is the sum of the three components. Equations are available to compute scour depths for contraction and local scour, and a summary of the results follows. Contraction scour for all modelled flows ranged from 6.3 ft to 7.8 ft, and the worst-case contraction scour occurred at the 100-year discharge. Abutment scour ranged from 7.9 ft to 20.3 ft, and the worst-case abutment scour occurred at the 500-year discharge. Scour depths and depths to armoring are summarized on p. 14 in the section titled “Scour Results”. Scour elevations, based on the calculated depths, are presented in tables 1 and 2; a graph of the scour elevations is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. For all scour presented in this report, “the scour depths adopted [by VTAOT] may differ from the equation values based on engineering judgement” (Richardson and others, 1993, p. 21, 27). It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1993, p. 48). Many factors, including historical performance during flood events, the geomorphic assessment, and the results of the hydraulic analyses, must be considered to properly assess the validity of abutment scour results.
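The three-component decomposition above can be expressed directly: total scour is the sum of long-term degradation, contraction scour, and local (abutment/pier) scour, and the worst case is found by comparing the sums across modelled discharges. The component depths below are hypothetical placeholders, not the report's adopted values:

```python
# Total scour as the sum of its three components (hedged illustration; the
# per-discharge depths below are made-up numbers, not the report's values).
def total_scour(degradation, contraction, local):
    """Total scour depth (ft): long-term aggradation/degradation +
    contraction scour + local (pier/abutment) scour."""
    return degradation + contraction + local

# Worst case across modelled flows is the maximum of the per-flow sums.
flows = {
    "100-year": total_scour(0.0, 7.8, 15.6),
    "500-year": total_scour(0.0, 6.3, 20.3),
}
worst_flow = max(flows, key=flows.get)
```

Note that, as in the report, the discharge producing the worst contraction scour need not be the one producing the worst total scour.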
Song, Donald L.; Ivanoff, Michael A.
1996-01-01
Total scour at a highway crossing comprises three components: 1) long-term aggradation or degradation; 2) contraction scour (due to reduction in flow area caused by a bridge); and 3) local scour (caused by accelerated flow around piers and abutments). Total scour is the sum of the three components. Equations are available to compute scour depths for contraction and local scour, and a summary of the results follows. Contraction scour for all modelled flows ranged from 1.9 ft to 4.6 ft, and the worst-case contraction scour occurred at the incipient overtopping discharge. Abutment scour ranged from 4.0 ft to 22.5 ft, and the worst-case abutment scour occurred at the 500-year discharge. Scour depths and depths to armoring are summarized on p. 14 in the section titled “Scour Results”. Scour elevations, based on the calculated depths, are presented in tables 1 and 2; a graph of the scour elevations is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. For all scour presented in this report, “the scour depths adopted [by VTAOT] may differ from the equation values based on engineering judgement” (Richardson and others, 1993, p. 21, 27). It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1993, p. 48). Many factors, including historical performance during flood events, the geomorphic assessment, and the results of the hydraulic analyses, must be considered to properly assess the validity of abutment scour results.
Olson, Scott A.; Song, Donald L.
1996-01-01
Total scour at a highway crossing comprises three components: 1) long-term aggradation or degradation; 2) contraction scour (due to reduction in flow area caused by a bridge); and 3) local scour (caused by accelerated flow around piers and abutments). Total scour is the sum of the three components. Equations are available to compute scour depths for contraction and local scour, and a summary of the results follows. Contraction scour for all modelled flows ranged from 0.6 ft to 1.3 ft, and the worst-case contraction scour occurred at the 500-year discharge. Abutment scour ranged from 6.7 ft to 12.2 ft, and the worst-case abutment scour occurred at the 500-year discharge. Scour depths and depths to armoring are summarized on p. 14 in the section titled “Scour Results”. Scour elevations, based on the calculated depths, are presented in tables 1 and 2; a graph of the scour elevations is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. For all scour presented in this report, “the scour depths adopted [by VTAOT] may differ from the equation values based on engineering judgement” (Richardson and others, 1993, p. 21, 27). It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1993, p. 48). Many factors, including historical performance during flood events, the geomorphic assessment, and the results of the hydraulic analyses, must be considered to properly assess the validity of abutment scour results.
iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization.
Blenkmann, Alejandro O; Phillips, Holly N; Princich, Juan P; Rowe, James B; Bekinschtein, Tristan A; Muravchik, Carlos H; Kochen, Silvia
2017-01-01
The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relation to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time-consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2-3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to obtain anatomical labels of the localized electrodes automatically, without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, and utility across multiple electrode types (surface and depth electrodes) and all brain regions.
iElectrodes: A Comprehensive Open-Source Toolbox for Depth and Subdural Grid Electrode Localization
Blenkmann, Alejandro O.; Phillips, Holly N.; Princich, Juan P.; Rowe, James B.; Bekinschtein, Tristan A.; Muravchik, Carlos H.; Kochen, Silvia
2017-01-01
The localization of intracranial electrodes is a fundamental step in the analysis of invasive electroencephalography (EEG) recordings in research and clinical practice. The conclusions reached from the analysis of these recordings rely on the accuracy of electrode localization in relation to brain anatomy. However, currently available techniques for localizing electrodes from magnetic resonance (MR) and/or computerized tomography (CT) images are time-consuming and/or limited to particular electrode types or shapes. Here we present iElectrodes, an open-source toolbox that provides robust and accurate semi-automatic localization of both subdural grids and depth electrodes. Using pre- and post-implantation images, the method takes 2–3 min to localize the coordinates in each electrode array and automatically number the electrodes. The proposed pre-processing pipeline allows one to work in a normalized space and to obtain anatomical labels of the localized electrodes automatically, without neuroimaging experts. We validated the method with data from 22 patients implanted with a total of 1,242 electrodes. We show that localization distances were within 0.56 mm of those achieved by experienced manual evaluators. iElectrodes provided additional advantages in terms of robustness (even with severe perioperative cerebral distortions), speed (less than half the operator time compared to expert manual localization), simplicity, and utility across multiple electrode types (surface and depth electrodes) and all brain regions. PMID:28303098
Risinger, John I.; Allard, Jay; Chandran, Uma; Day, Roger; Chandramouli, Gadisetti V. R.; Miller, Caela; Zahn, Christopher; Oliver, Julie; Litzi, Tracy; Marcus, Charlotte; Dubil, Elizabeth; Byrd, Kevin; Cassablanca, Yovanni; Becich, Michael; Berchuck, Andrew; Darcy, Kathleen M.; Hamilton, Chad A.; Conrads, Thomas P.; Maxwell, G. Larry
2013-01-01
Endometrial cancer is the most common gynecologic malignancy in the United States but it remains poorly understood at the molecular level. This investigation was conducted to specifically assess whether gene expression changes underlie the clinical and pathologic factors traditionally used for determining treatment regimens in women with stage I endometrial cancer. These include the effect of tumor grade, depth of myometrial invasion and histotype. We utilized oligonucleotide microarrays to assess the transcript expression profile in epithelial glandular cells laser microdissected from 79 endometrioid and 12 serous stage I endometrial cancers with a heterogeneous distribution of grade and depth of myometrial invasion, along with 12 normal post-menopausal endometrial samples. Unsupervised multidimensional scaling analyses revealed that serous and endometrioid stage I cancers have similar transcript expression patterns when compared to normal controls where 900 transcripts were identified to be differentially expressed by at least fourfold (univariate t-test, p < 0.001) between the cancers and normal endometrium. This analysis also identified transcript expression differences between serous and endometrioid cancers and tumor grade, but no apparent differences were identified as a function of depth of myometrial invasion. Four genes were validated by quantitative PCR on an independent set of cancer and normal endometrium samples. These findings indicate that unique gene expression profiles are associated with histologic type and grade, but not myometrial invasion among early stage endometrial cancers. These data provide a comprehensive perspective on the molecular alterations associated with stage I endometrial cancer, particularly those subtypes that have the worst prognosis. PMID:23785665
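A minimal sketch of the fourfold-change screen described above, on synthetic data. The study additionally required a univariate t-test p < 0.001, which is omitted here, and the function name and example values are illustrative, not the authors' pipeline:

```python
from statistics import mean

def fourfold_changed(cancer_expr, normal_expr, fold=4.0):
    """Flag a transcript as differentially expressed when mean intensity
    differs by at least `fold` between groups, in either direction.
    (The study also required a univariate t-test p < 0.001, omitted here.)"""
    m_c, m_n = mean(cancer_expr), mean(normal_expr)
    ratio = m_c / m_n if m_n > 0 else float("inf")
    return ratio >= fold or ratio <= 1.0 / fold

# Synthetic example: transcript A is ~5x up in tumors, transcript B unchanged.
tumor = {"A": [50, 55, 60], "B": [10, 11, 9]}
normal = {"A": [10, 11, 12], "B": [10, 10, 10]}
hits = [t for t in tumor if fourfold_changed(tumor[t], normal[t])]
```

Only transcript A passes the fourfold filter in this toy example; in the study, 900 transcripts passed both the fold-change and p-value criteria.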
Boehmler, Erick M.
1996-01-01
Total scour at a highway crossing comprises three components: 1) long-term streambed degradation; 2) contraction scour (due to accelerated flow caused by a reduction in flow area at a bridge); and 3) local scour (caused by accelerated flow around piers and abutments). Total scour is the sum of the three components. Equations are available to compute depths for contraction and local scour, and a summary of the results of these computations follows. Contraction scour for all modelled flows was 0.1 ft; the worst-case contraction scour occurred at the 100-year and 500-year discharges. Abutment scour ranged from 3.9 ft to 5.2 ft; the worst-case abutment scour also occurred at the 500-year discharge. Additional information on scour depths and depths to armoring is included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1995, p. 47). Many factors, including historical performance during flood events, the geomorphic assessment, scour protection measures, and the results of the hydraulic analyses, must be considered to properly assess the validity of abutment scour results. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein, based on the consideration of additional contributing factors and experienced engineering judgement.
Simulating the Cyclone Induced Turbulent Mixing in the Bay of Bengal using COAWST Model
NASA Astrophysics Data System (ADS)
Prakash, K. R.; Nigam, T.; Pant, V.
2017-12-01
Mixing in the upper oceanic layers (up to a few tens of meters from the surface) is an important process for understanding the evolution of sea surface properties. Enhanced mixing due to strong wind forcing at the surface deepens the mixed layer, which affects the air-sea exchange of heat and momentum fluxes and modulates sea surface temperature (SST). In the present study, we used the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) model to demonstrate and quantify the enhanced cyclone-induced turbulent mixing during a severe cyclonic storm. The COAWST model was configured over the Bay of Bengal (BoB) and used to simulate the atmospheric and oceanic conditions prevailing during tropical cyclone (TC) Phailin, which occurred over the BoB during 10-15 October 2013. The model-simulated cyclone track was validated against the IMD best track, and model SST against daily AVHRR SST data. Validation shows that the simulated track and intensity, SST, and salinity were in good agreement with observations, and that the cyclone-induced cooling of the sea surface was well captured by the model. Model simulations show considerable deepening (by 10-15 m) of the mixed layer and shoaling of the thermocline during TC Phailin. Power spectrum analysis of the zonal and meridional baroclinic current components shows the strongest energy at 14 m depth. Model results were analyzed to investigate the non-uniform energy distribution in the water column from the surface down to the thermocline depth. Rotary spectral analysis highlights the downward direction of turbulent mixing during the TC Phailin period. Model simulations were used to quantify and interpret the near-inertial mixing generated by the cyclone's strong wind stress and the associated near-inertial energy. These near-inertial oscillations are responsible for enhancing mixing through the strong post-monsoon (October-November) stratification in the BoB.
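Rotary spectral analysis of the kind mentioned above separates clockwise from counterclockwise rotating current components by Fourier-transforming the complex velocity series w = u + iv: negative-frequency energy corresponds to clockwise rotation (in the Northern Hemisphere, the signature of downward-propagating near-inertial waves). A minimal pure-Python sketch, not the authors' analysis code:

```python
import cmath
import math

def rotary_spectrum(u, v):
    """Split current-vector variance into counterclockwise (positive
    frequency) and clockwise (negative frequency) rotary components by
    taking the DFT of the complex series w = u + i v. Returns a dict
    mapping signed frequency index -> power."""
    w = [complex(a, b) for a, b in zip(u, v)]
    n = len(w)
    power = {}
    for k in range(n):
        coeff = sum(w[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)) / n
        freq = k if k <= n // 2 else k - n  # map bins to signed frequencies
        power[freq] = abs(coeff) ** 2
    return power

# Purely clockwise-rotating example: (u, v) = (cos wt, -sin wt), so all the
# variance should land at the negative frequency.
n = 32
u = [math.cos(2 * math.pi * 3 * t / n) for t in range(n)]
v = [-math.sin(2 * math.pi * 3 * t / n) for t in range(n)]
p = rotary_spectrum(u, v)
```

For this synthetic clockwise motion, all power appears at frequency -3 and essentially none at +3, which is the asymmetry the rotary analysis exploits to diagnose downward energy propagation.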
Automatic RBG-depth-pressure anthropometric analysis and individualised sleep solution prescription.
Esquirol Caussa, Jordi; Palmero Cantariño, Cristina; Bayo Tallón, Vanessa; Cos Morera, Miquel Àngel; Escalera, Sergio; Sánchez, David; Sánchez Padilla, Maider; Serrano Domínguez, Noelia; Relats Vilageliu, Mireia
2017-08-01
Sleep surfaces must adapt to individual somatotypic features to maintain comfortable, convenient and healthy sleep, preventing diseases and injuries. Individually determining the most adequate rest surface can often be a complex and subjective question. The aim was to design and validate an automatic multimodal somatotype determination model to automatically recommend an individually designed mattress-topper-pillow combination. Design and validation of an automated prescription model for an individualised sleep system were performed through single-image 2D-3D analysis and body pressure distribution, to objectively determine optimal individual sleep surfaces combining five different mattress densities, three different toppers and three cervical pillows. A final study (n = 151) and re-analysis (n = 117) defined and validated the model, showing high correlations between calculated and real data (>85% in height and body circumferences, 89.9% in weight, 80.4% in body mass index and more than 70% in morphotype categorisation). The somatotype determination model can accurately prescribe an individualised sleep solution. This can be useful for healthy people and for health centres that need to adapt sleep surfaces to people with special needs. Next steps will increase the model's accuracy and analyse whether the prescribed individualised sleep solution can improve sleep quantity and quality; additionally, future studies will adapt the model to mattresses with technological improvements and tailor-made production, and will define interfaces for people with special needs.
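One of the anthropometric quantities validated above, body mass index, is simply weight over height squared. A small sketch follows; the WHO cut-offs used here are a standard illustrative stand-in, not the study's own morphotype categories:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    """Standard WHO cut-offs, used here only as an illustrative stand-in
    for the study's own morphotype categorisation."""
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal"
    if value < 30.0:
        return "overweight"
    return "obese"

category = bmi_category(bmi(80.0, 1.75))
```

For an 80 kg, 1.75 m subject the index is about 26.1, falling in the WHO "overweight" band.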
Farmer, Nicholas A.; Karnauskas, Mandy
2013-01-01
There is broad interest in the development of efficient marine protected areas (MPAs) to reduce bycatch and end overfishing of speckled hind (Epinephelus drummondhayi) and warsaw grouper (Hyporthodus nigritus) in the Atlantic Ocean off the southeastern U.S. We assimilated decades of data from many fishery-dependent, fishery-independent, and anecdotal sources to describe the spatial distribution of these data-limited stocks. A spatial classification model was developed to categorize depth grids based on the distribution of speckled hind and warsaw grouper point observations and identified benthic habitats. Logistic regression analysis was used to develop a quantitative model to predict the spatial distribution of speckled hind and warsaw grouper as a function of depth, latitude, and habitat. Models, controlling for sampling-gear effects, were selected based on AIC and 10-fold cross-validation. The best-fitting model for warsaw grouper included latitude and depth to explain 10.8% of the variability in probability of detection, with a false prediction rate of 28–33%. The best-fitting model for speckled hind, per cross-validation, included latitude and depth to explain 36.8% of the variability in probability of detection, with a false prediction rate of 25–27%. The best-fitting speckled hind model, per AIC, also included habitat, but had false prediction rates up to 36%. Speckled hind and warsaw grouper habitats followed a shelf-edge hardbottom ridge from North Carolina to southeast Florida, with speckled hind more common to the north and warsaw grouper more common to the south. The proportion of habitat classifications and model-estimated stock contained within established and proposed MPAs was computed. Existing MPAs covered 10% of probable shelf-edge habitats for speckled hind and warsaw grouper, protecting 3–8% of speckled hind and 8% of warsaw grouper stocks.
Proposed MPAs could add 24% more probable shelf-edge habitat, and protect an additional 14–29% of speckled hind and 20% of warsaw grouper stocks. PMID:24260126
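A hedged sketch of the kind of logistic detection model and AIC comparison described above. The coefficient values and example observations are invented placeholders, not the fitted values from the paper:

```python
import math

def detection_probability(depth_m, latitude, b0=-4.0, b_depth=0.05, b_lat=0.02):
    """Logistic model for probability of detection as a function of depth and
    latitude (coefficients are made-up placeholders, not the paper's fits)."""
    z = b0 + b_depth * depth_m + b_lat * latitude
    return 1.0 / (1.0 + math.exp(-z))

def log_likelihood(probs, outcomes):
    """Bernoulli log-likelihood of observed presence/absence outcomes."""
    return sum(math.log(p if y else 1.0 - p) for p, y in zip(probs, outcomes))

def aic(ll, n_params):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * ll

# Hypothetical (depth, latitude) samples with presence/absence outcomes.
probs = [detection_probability(d, lat) for d, lat in [(60, 30), (90, 32), (40, 28)]]
ll = log_likelihood(probs, [0, 1, 0])
score = aic(ll, n_params=3)  # intercept + depth + latitude
```

Candidate models (e.g. with and without a habitat term) would each get such an AIC score, and the study paired this with 10-fold cross-validation to guard against overfitting.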
43 CFR 3286.1 - Model Unit Agreement.
Code of Federal Regulations, 2011 CFR
2011-10-01
... continue such drilling diligently until the ___ formation has been tested or until at a lesser depth... Operator shall not in any event be required to drill said well to a depth in excess of ___ feet. 11.5 The... assert any legal or constitutional right or defense pertaining to the validity or invalidity of any law...
Basra, M K A; Salek, M S; Fenech, D; Finlay, A Y
2018-01-01
Skin disease can affect the quality of life (QoL) of teenagers in a variety of different ways, some being unique to this age group. To develop and validate a dermatology-specific QoL instrument for adolescents with skin diseases. Qualitative semistructured interviews were conducted with adolescents with skin disease to gain in-depth understanding of how skin diseases affect their QoL. A prototype instrument based on the themes identified from content analysis of interviews was tested in several stages, using classical test theory and item response theory models to develop this new tool and conduct its psychometric evaluation. Thirty-three QoL issues were identified from semistructured interviews with 50 adolescents. A questionnaire based on items derived from content analysis of interviews was subjected to Rasch analysis: factor analysis identified three domains, therefore not supporting the validity of T-QoL as a unidimensional measure. Psychometric evaluation of the final 18-item questionnaire was carried out in a cohort of 203 adolescents. Convergent validity was demonstrated by significant correlation with Skindex-Teen and Dermatology Life Quality Index (DLQI) or Children's DLQI. The T-QoL showed excellent internal consistency reliability: Cronbach's α = 0·89 for total scale score and 0·85, 0·60 and 0·74, respectively, for domains 1, 2 and 3. Test-retest reliability was high in stable volunteers. T-QoL showed sensitivity to change in two subgroups of patients who indicated change in their self-assessed disease severity. Built on rich qualitative data from patients, the T-QoL is a simple and valid tool to quantify the impact of skin disease on adolescents' QoL; it could be used as an outcome measure in both clinical practice and clinical research. © 2017 British Association of Dermatologists.
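Internal consistency above is reported as Cronbach's α, which is computed from the item variances and the variance of the total score. A minimal implementation of the generic formula (not tied to the T-QoL data):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.
    `item_scores` is a list of items, each a list of respondents' scores:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(item_scores)
    n = len(item_scores[0])
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    sum_item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Three perfectly correlated items give the maximum alpha of 1.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

Real scales fall below 1; the T-QoL's reported α of 0.89 for the total score indicates strong but not degenerate item agreement.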
ERIC Educational Resources Information Center
Kennealy, Patrick J.; Hicks, Brian M.; Patrick, Christopher J.
2007-01-01
The validity of the Psychopathy Checklist-Revised (PCL-R) has been examined extensively in men, but its validity for women remains understudied. Specifically, the correlates of the general construct of psychopathy and its components as assessed by PCL-R total, factor, and facet scores have yet to be examined in depth. Based on previous research…
Computational Depth of Anesthesia via Multiple Vital Signs Based on Artificial Neural Networks.
Sadrawi, Muammar; Fan, Shou-Zen; Abbod, Maysam F; Jen, Kuo-Kuang; Shieh, Jiann-Shing
2015-01-01
This study evaluated a depth of anesthesia (DoA) index using artificial neural networks (ANN) as the modeling technique. Data from 63 patients in total were used: 17 for modeling and 46 for testing. Empirical mode decomposition (EMD) is used to separate the electroencephalography (EEG) signal from noise. A sample entropy index is then extracted from each 5-second segment of the filtered EEG signal. It is combined with the mean values of other vital signs, namely electromyography (EMG), heart rate (HR), pulse, systolic blood pressure (SBP), diastolic blood pressure (DBP), and a signal quality index (SQI), to form the input for evaluating the DoA index. The scores of five doctors are averaged to obtain the output index. Mean absolute error (MAE) is used for performance evaluation, and 10-fold cross-validation is performed to generalize the model. The ANN model is compared with the bispectral index (BIS). The results show that the ANN produces a lower MAE than BIS, and also a higher correlation coefficient than BIS on the 46-patient testing data. Sensitivity analysis and cross-validation were also applied, showing that EMG is the most influential input parameter.
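The sample entropy feature extracted from each 5-second EEG segment can be sketched generically as follows. This is a textbook SampEn implementation, not the authors' code, and in practice the tolerance r is usually scaled to the signal's standard deviation (here it is an absolute tolerance for simplicity):

```python
import math

def sample_entropy(signal, m=2, r=0.2):
    """Sample entropy SampEn(m, r) = -ln(A/B), where B counts pairs of
    length-m templates matching within tolerance r (Chebyshev distance)
    and A counts the same for length m+1. Self-matches are excluded."""
    n = len(signal)

    def count_matches(length):
        # Use the same number of templates (n - m) for both lengths so the
        # ratio A/B is well defined.
        templates = [signal[i:i + length] for i in range(n - m)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i],
                                                  templates[j])) <= r:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A constant signal is perfectly regular: every template matches, SampEn = 0.
flat = sample_entropy([1.0] * 20)
```

Lower SampEn means a more regular (less complex) EEG segment, which is the property that makes it a useful anesthesia-depth feature.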
Computational Depth of Anesthesia via Multiple Vital Signs Based on Artificial Neural Networks
Sadrawi, Muammar; Fan, Shou-Zen; Abbod, Maysam F.; Jen, Kuo-Kuang; Shieh, Jiann-Shing
2015-01-01
This study evaluated a depth of anesthesia (DoA) index using artificial neural networks (ANN) as the modeling technique. Data from 63 patients in total were used: 17 for modeling and 46 for testing. Empirical mode decomposition (EMD) is used to separate the electroencephalography (EEG) signal from noise. A sample entropy index is then extracted from each 5-second segment of the filtered EEG signal. It is combined with the mean values of other vital signs, namely electromyography (EMG), heart rate (HR), pulse, systolic blood pressure (SBP), diastolic blood pressure (DBP), and a signal quality index (SQI), to form the input for evaluating the DoA index. The scores of five doctors are averaged to obtain the output index. Mean absolute error (MAE) is used for performance evaluation, and 10-fold cross-validation is performed to generalize the model. The ANN model is compared with the bispectral index (BIS). The results show that the ANN produces a lower MAE than BIS, and also a higher correlation coefficient than BIS on the 46-patient testing data. Sensitivity analysis and cross-validation were also applied, showing that EMG is the most influential input parameter. PMID:26568957
Time multiplexing based extended depth of focus imaging.
Ilovitsh, Asaf; Zalevsky, Zeev
2016-01-01
We propose to utilize the time multiplexing super resolution method to extend the depth of focus of an imaging system. In standard time multiplexing, super resolution is achieved by generating duplications of the optical transfer function in the spectral domain through the use of moving gratings. While this improves the spatial resolution, it does not increase the depth of focus. By changing the grating frequency, and thereby the positions of the duplications, it is possible to obtain an extended depth of focus. The proposed method is presented analytically, demonstrated via numerical simulations and validated by a laboratory experiment.
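The spectral-duplication idea above rests on the Fourier shift property: multiplying a field by a grating term exp(i k0 x) shifts its spectrum by the grating frequency, so changing the grating frequency moves the copy. A small numerical demonstration of just that property (illustrative only, not the paper's simulation):

```python
import cmath
import math

def dft_peak(samples):
    """Index of the strongest DFT bin of a complex sequence."""
    n = len(samples)
    mags = [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]
    return max(range(n), key=mags.__getitem__)

n = 64
# A pure tone whose spectrum peaks at bin 5 ...
signal = [cmath.exp(2j * math.pi * 5 * t / n) for t in range(n)]
# ... multiplied by a "grating" of frequency 10 ...
grating = [cmath.exp(2j * math.pi * 10 * t / n) for t in range(n)]
modulated = [s * g for s, g in zip(signal, grating)]
# ... has its spectral peak shifted to bin 5 + 10 = 15.
```

Choosing the grating frequency therefore controls where the shifted spectral copy lands, which is the degree of freedom the paper exploits for depth-of-focus extension rather than resolution gain.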
NASA Technical Reports Server (NTRS)
Mace, Gerald G.; Benson, Sally; Sonntag, Karen L.; Kato, Seiji; Min, Qilong; Minnis, Patrick; Twohy, Cynthia H.; Poellot, Michael; Dong, Xiquan; Long, Charles;
2006-01-01
It has been hypothesized that continuous ground-based measurements from active and passive remote sensors, combined with regular soundings of the atmospheric thermodynamic structure, can be used to describe the effects of clouds on the clear-sky radiation fluxes. We critically test that hypothesis in this paper and a companion paper (Part II). Using data collected at the Southern Great Plains (SGP) Atmospheric Radiation Measurement (ARM) site sponsored by the U.S. Department of Energy, we explore an analysis methodology that results in the characterization of the physical state of the atmospheric profile at a time resolution of five minutes and a vertical resolution of 90 m. The description includes thermodynamic and water vapor profile information derived by merging radiosonde soundings with ground-based data, and continues through specification of cloud layer occurrence and microphysical and radiative properties derived from retrieval algorithms and parameterizations. The description of the atmospheric physical state includes a calculation of the infrared and clear- and cloudy-sky solar flux profiles. Validation of the methodology is provided by comparing the calculated fluxes with top-of-atmosphere (TOA) and surface flux measurements and by comparing the total column optical depths to independently derived estimates. Over a 1-year period of comparison in overcast uniform skies, we find that the calculations are strongly correlated with measurements, with biases in the flux quantities at the surface and TOA of less than 10% and median fractional errors ranging from 20% down to 2%. In the optical depth comparison for uniform overcast skies during the year 2000, where the optical depth varies over 3 orders of magnitude, we find a mean positive bias of 46%, with a median bias of less than 10% and a 0.89 correlation coefficient.
The slope of the linear regression line for the optical depth comparison is 0.86, with a normal deviation of 20% about this line. In addition to a case study in which we examine the cloud radiative effects at the TOA, surface, and atmosphere produced by a middle-latitude synoptic-scale cyclone, we examine the cloud top pressure and optical depth retrievals of ISCCP and LBTM over a period of 1 year. Using overcast periods from the year 2000, we find that the satellite algorithms tend to bias cloud tops into the middle troposphere and underestimate optical depth in high optical depth events (greater than 100) by as much as a factor of 2.
Single-case research design in pediatric psychology: considerations regarding data analysis.
Cohen, Lindsey L; Feinstein, Amanda; Masuda, Akihiko; Vowles, Kevin E
2014-03-01
Single-case research allows for an examination of behavior and can demonstrate the functional relation between intervention and outcome in pediatric psychology. This review highlights key assumptions, methodological and design considerations, and options for data analysis. Single-case methodology and guidelines are reviewed with an in-depth focus on visual and statistical analyses. Guidelines allow for the careful evaluation of design quality and visual analysis. A number of statistical techniques have been introduced to supplement visual analysis, but to date, there is no consensus on their recommended use in single-case research design. Single-case methodology is invaluable for advancing pediatric psychology science and practice, and guidelines have been introduced to enhance the consistency, validity, and reliability of these studies. Experts generally agree that visual inspection is the optimal method of analysis in single-case design; however, statistical approaches are becoming increasingly evaluated and used to augment data interpretation.
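One example of the statistical techniques proposed to supplement visual analysis is a nonoverlap index such as the percentage of nonoverlapping data (PND): the share of intervention-phase points beyond the most extreme baseline point. The sketch below is offered as one common option, not a recommendation from the review:

```python
def pnd(baseline, intervention, higher_is_better=True):
    """Percentage of nonoverlapping data (PND), a simple statistical
    supplement to visual analysis in single-case designs: the percentage
    of intervention-phase points beyond the most extreme baseline point."""
    if higher_is_better:
        threshold = max(baseline)
        beyond = [x for x in intervention if x > threshold]
    else:
        threshold = min(baseline)
        beyond = [x for x in intervention if x < threshold]
    return 100.0 * len(beyond) / len(intervention)

# Hypothetical data: 4 of 5 intervention points exceed the baseline maximum.
score = pnd([2, 3, 3, 4], [5, 6, 4, 7, 8])
```

As the review notes, such indices augment rather than replace visual inspection, and no single statistic enjoys consensus support.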
Multi-frequency local wavenumber analysis and ply correlation of delamination damage.
Juarez, Peter D; Leckey, Cara A C
2015-09-01
Wavenumber domain analysis through use of scanning laser Doppler vibrometry has been shown to be effective for non-contact inspection of damage in composites. Qualitative and semi-quantitative local wavenumber analysis of realistic delamination damage and quantitative analysis of idealized damage scenarios (Teflon inserts) have been performed previously in the literature. This paper presents a new methodology based on multi-frequency local wavenumber analysis for quantitative assessment of multi-ply delamination damage in carbon fiber reinforced polymer (CFRP) composite specimens. The methodology is presented and applied to a real world damage scenario (impact damage in an aerospace CFRP composite). The methodology yields delamination size and also correlates local wavenumber results from multiple excitation frequencies to theoretical dispersion curves in order to robustly determine the delamination ply depth. Results from the wavenumber based technique are validated against a traditional nondestructive evaluation method. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
King, Sharon V.; Doblas, Ana; Patwary, Nurmohammed; Saavedra, Genaro; Martínez-Corral, Manuel; Preza, Chrysanthe
2014-03-01
Wavefront coding (WFC) techniques are currently used to engineer unique point spread functions (PSFs) that enhance existing microscope modalities or create new ones. Previous work in this field demonstrated that simulated intensity PSFs encoded with a generalized cubic phase mask (GCPM) are invariant to spherical aberration or misfocus, depending on parameter selection. Additional work demonstrated that simulated PSFs encoded with a squared cubic phase mask (SQUBIC) produce a depth-invariant focal spot for application in confocal scanning microscopy. Implementation of PSF engineering theory with a liquid-crystal-on-silicon (LCoS) spatial light modulator (SLM) enables validation of WFC phase mask designs and parameters by manipulating optical wavefront properties with a programmable diffractive element. To validate and investigate parameters of the GCPM and SQUBIC WFC masks, we implemented PSF engineering in an upright microscope modified with a dual camera port and an LCoS SLM. We present measured WFC PSFs and compare them to simulated PSFs through analysis of their effect on the microscope imaging system properties. Experimentally acquired PSFs show the same intensity distribution as simulation for the GCPM phase mask, the SQUBIC mask, and the well-known and characterized cubic phase mask (CPM), first applied to high-NA microscopy by Arnison et al., for extending depth of field. These measurements provide experimental validation of new WFC masks and demonstrate the use of the LCoS SLM as a WFC design tool. Although efficiency improvements are needed, this application of LCoS technology renders the microscope capable of switching among multiple WFC modes.
Choi, Heejin; Wadduwage, Dushan; Matsudaira, Paul T.; So, Peter T.C.
2014-01-01
A depth-resolved hyperspectral imaging spectrometer can provide depth-resolved imaging in both the spatial and the spectral domain. Images acquired through a standard imaging Fourier transform spectrometer do not have depth resolution. By post-processing the spectral cubes (x, y, λ) obtained through a Sagnac interferometer under uniform illumination and structured illumination, spectrally resolved images with depth resolution can be recovered using structured-illumination algorithms such as the HiLo method. The proposed scheme is validated with in vitro specimens including fluorescent solution and fluorescent beads with known spectra. The system is further demonstrated in quantifying spectra from 3D-resolved features in biological specimens. The system has demonstrated a depth resolution of 1.8 μm and a spectral resolution of 7 nm. PMID:25360367
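The HiLo reconstruction referred to above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian filter, the cutoff `sigma`, and the scaling factor `eta` are assumptions. The idea is that depth-discriminated low spatial frequencies are recovered from the contrast between the structured and uniform images, while high spatial frequencies, which are inherently sectioned, come from the uniform image alone.

```python
import numpy as np

def gaussian_lowpass(img, sigma):
    """Low-pass filter via FFT with a Gaussian transfer function."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    h = np.exp(-(fx**2 + fy**2) / (2.0 * sigma**2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))

def hilo(uniform, structured, sigma=0.05, eta=1.0):
    """Minimal HiLo sketch (hypothetical parameters).

    Lo: low frequencies with optical sectioning, taken from the local
        contrast between the structured and uniform images.
    Hi: high frequencies, taken directly from the uniform image.
    """
    diff = np.abs(uniform - structured)              # demodulated contrast
    lo = gaussian_lowpass(diff, sigma)               # sectioned low-pass part
    hi = uniform - gaussian_lowpass(uniform, sigma)  # high-pass part
    return eta * lo + hi
```

In practice `eta` is chosen so that the Lo and Hi bands join seamlessly at the filter cutoff; the published method describes how to set it.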
NASA Astrophysics Data System (ADS)
Boiyo, Richard; Kumar, K. Raghavendra; Zhao, Tianliang
2017-11-01
Over the last two decades, a number of space-borne sensors have been used to retrieve aerosol optical depth (AOD). The reliability of these datasets over East Africa (EA), however, is an important issue in the interpretation of regional aerosol variability. This study provides an intercomparison and validation of AOD retrievals from the MODIS-Terra (DT and DB), MISR and OMI sensors against ground-based measurements from the AERONET over three sites (CRPSM_Malindi, Nairobi, and ICIPE_Mbita) in Kenya, EA during the periods 2008-2013, 2005-2009 and 2006-2015, respectively. The analysis revealed that MISR performed better over the three sites, with about 82.5% of paired AOD data falling within the error envelope (EE). MODIS-DT showed good agreement against AERONET, with 59.05% of paired AOD falling within the sensor EE over terrestrial surfaces with relatively high vegetation cover. The comparison between MODIS-DB and AERONET revealed an overall lower performance, with a lower Gfraction (48.93%) and a lower correlation (r = 0.58), while AOD retrieved from OMI showed the least correspondence with AERONET data, with a Gfraction of 68.89% and the lowest correlation (r = 0.31). The monthly evaluation of AODs retrieved from the sensors against AERONET AOD indicates that MODIS-DT has the best performance over the three sites, with the highest correlation (0.71-0.84), the lowest RMSE, and a spread closer to the AERONET. Regarding seasonal analysis, MISR performed well during most seasons over Nairobi and Mbita, while MODIS-DT performed better than all other sensors during most seasons over Malindi. Furthermore, the best seasonal performance of most sensors relative to AERONET data occurred during June-August (JJA), attributed to the modulation of satellite AOD retrieval algorithms by a precipitation-vegetation factor. The study revealed the strengths and weaknesses of each retrieval algorithm and forms the basis for further research on the validation of satellite-retrieved aerosol products over EA.
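Validation against an expected error (EE) envelope, as described above, is commonly done with the MODIS over-land criterion EE = ±(0.05 + 0.15·AOD_AERONET). Assuming that form (the abstract does not state which envelope was used), the fraction of paired retrievals falling within the envelope can be computed as:

```python
import numpy as np

def fraction_within_ee(aod_sat, aod_aeronet, a=0.05, b=0.15):
    """Fraction of satellite AOD retrievals within the expected error
    envelope EE = +/-(a + b * AOD_AERONET).  The defaults are the usual
    MODIS over-land coefficients (an assumption here, not from the study)."""
    aod_sat = np.asarray(aod_sat, dtype=float)
    aod_ref = np.asarray(aod_aeronet, dtype=float)
    ee = a + b * aod_ref                       # half-width of the envelope
    within = np.abs(aod_sat - aod_ref) <= ee   # True where retrieval is "good"
    return within.mean()
```

The "Gfraction" quoted in the abstract is this kind of good-retrieval percentage, computed per sensor over all collocated pairs.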
Scherrer, Stephen R; Rideout, Brendan P; Giorli, Giacomo; Nosal, Eva-Marie; Weng, Kevin C
2018-01-01
Passive acoustic telemetry using coded transmitter tags and stationary receivers is a popular method for tracking movements of aquatic animals. Understanding the performance of these systems is important in array design and in analysis. Close proximity detection interference (CPDI) is a condition where receivers fail to reliably detect tag transmissions. CPDI generally occurs when the tag and receiver are near one another in acoustically reverberant settings. Here we confirm that transmission multipaths reflected off the environment, arriving at a receiver with sufficient delay relative to the direct signal, cause CPDI. We propose a ray-propagation-based model that estimates the arrival of energy via multipaths to predict CPDI occurrence, and we show how deeper deployments are particularly susceptible. A series of experiments was designed to develop and validate our model. Deep (300 m) and shallow (25 m) ranging experiments were conducted using Vemco V13 acoustic tags and VR2-W receivers. Probabilistic modeling of hourly detections was used to estimate the average distance at which a tag could be detected. A mechanistic model for predicting the arrival time of multipaths was developed using parameters from these experiments to calculate the direct and multipath path lengths. This model was retroactively applied to the previous ranging experiments to validate CPDI observations. Two additional experiments were designed to validate predictions of CPDI with respect to combinations of deployment depth and distance. Playback of recorded tags in a tank environment was used to confirm that multipaths arriving after the receiver's blanking interval cause CPDI effects. Analysis of empirical data estimated that the average maximum detection radius (AMDR), the farthest distance at which 95% of tag transmissions went undetected by receivers, was between 840 and 846 m for the deep ranging experiment across all factor permutations.
From these results, CPDI was estimated within a 276.5 m radius of the receiver. These empirical estimations were consistent with mechanistic model predictions. CPDI affected detection at distances closer than 259-326 m from receivers. AMDR determined from the shallow ranging experiment was between 278 and 290 m with CPDI neither predicted nor observed. Results of validation experiments were consistent with mechanistic model predictions. Finally, we were able to predict detection/nondetection with 95.7% accuracy using the mechanistic model's criterion when simulating transmissions with and without multipaths. Close proximity detection interference results from combinations of depth and distance that produce reflected signals arriving after a receiver's blanking interval has ended. Deployment scenarios resulting in CPDI can be predicted with the proposed mechanistic model. For deeper deployments, sea-surface reflections can produce CPDI conditions, resulting in transmission rejection, regardless of the reflective properties of the seafloor.
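The mechanistic model described above rests on simple image-source geometry: a sea-surface reflection travels a longer path than the direct arrival, and CPDI is predicted when the extra delay exceeds the receiver's blanking interval. A sketch under assumed values (a nominal 1500 m/s sound speed and a hypothetical 260 ms blanking interval; the receiver's specification gives the real value):

```python
import math

def multipath_delay(range_m, tag_depth, rx_depth, c=1500.0):
    """Delay (s) between the direct arrival and the sea-surface reflection,
    using the image-source construction for a flat surface."""
    direct = math.hypot(range_m, tag_depth - rx_depth)
    # Mirror the tag above the surface: the reflected path spans the
    # summed depths of tag and receiver.
    reflected = math.hypot(range_m, tag_depth + rx_depth)
    return (reflected - direct) / c

def cpdi_predicted(range_m, tag_depth, rx_depth, blanking_s=0.260, c=1500.0):
    """CPDI is predicted when the surface multipath arrives after the
    receiver's blanking interval has ended (blanking_s is an assumed value)."""
    return multipath_delay(range_m, tag_depth, rx_depth, c) > blanking_s
```

The deep-deployment susceptibility reported above falls out of the geometry: with tag and receiver both at 300 m, the surface reflection of a nearby tag is hundreds of metres longer than the direct path, while at 25 m depth the two paths are nearly equal.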
Barmou, Maher M; Hussain, Saba F; Abu Hassan, Mohamed I
2018-06-01
The aim of the study was to assess the reliability and validity of cephalometric variables from the MicroScribe-3DXL. Seven cephalometric variables (facial angle, ANB, maxillary depth, U1/FH, FMA, IMPA, FMIA) were measured by a dentist in 60 Malay subjects (30 males and 30 females) with class I occlusion and a balanced face. Two standard images were taken for each subject, one with conventional cephalometric radiography and one with the MicroScribe-3DXL. All the images were traced and analysed. SPSS version 2.0 was used for statistical analysis, with significance set at P < 0.05. The results revealed a statistically significant difference in four measurements (U1/FH, FMA, IMPA, FMIA), with P-values ranging from 0.00 to 0.03. The difference in the measurements was considered clinically acceptable. The overall reliability of the MicroScribe-3DXL was 92.7% and its validity was 91.8%. The MicroScribe-3DXL is reliable and valid for most of the cephalometric variables, with the advantages of saving time and cost. This is a promising device to assist in diverse areas of dental practice and research.
Post-mitigation impact risk analysis for asteroid deflection demonstration missions
NASA Astrophysics Data System (ADS)
Eggl, Siegfried; Hestroffer, Daniel; Thuillot, William; Bancelin, David; Cano, Juan L.; Cichocki, Filippo
2015-08-01
Even though mankind believes it has the capabilities to avert potentially disastrous asteroid impacts, only the realization of mitigation demonstration missions can validate this claim. Such a deflection demonstration attempt has to be cost effective, easy to validate, and safe in the sense that harmless asteroids must not be turned into potentially hazardous objects. Uncertainties in an asteroid's orbital and physical parameters, as well as those additionally introduced during a mitigation attempt, necessitate an in-depth analysis of deflection mission designs in order to dispel planetary safety concerns. We present a post-mitigation impact risk analysis of a list of potential kinetic-impactor-based deflection demonstration missions proposed in the framework of the NEOShield project. Our results confirm that mitigation-induced uncertainties have a significant influence on the deflection outcome and cannot be neglected in post-deflection impact risk studies. We show, furthermore, that deflection missions have to be assessed on an individual basis in order to ensure that asteroids are not inadvertently transported closer to the Earth at a later date. Finally, we present viable targets and mission designs for a kinetic impactor test to be launched between the years 2025 and 2032.
Testing Pearl Model In Three European Sites
NASA Astrophysics Data System (ADS)
Bouraoui, F.; Bidoglio, G.
The Plant Protection Product Directive (91/414/EEC) stresses the need for validated models to calculate predicted environmental concentrations. The use of models has become an unavoidable step before pesticide registration. In this context, the European Commission, and in particular DGVI, set up a FOrum for the Co-ordination of pesticide fate models and their USe (FOCUS). In a complementary effort, DG Research supported the APECOP project, one of whose objectives was the validation and improvement of existing pesticide fate models. The main topic of the research presented here is the validation of the PEARL model for different sites in Europe. The PEARL model, currently used in the Dutch pesticide registration procedure, was validated at three well-instrumented sites: Vredepeel (the Netherlands), Brimstone (UK), and Lanna (Sweden). A step-wise procedure was used for the validation of the PEARL model. First the water transport module was calibrated, and then the solute transport module, using tracer measurements while keeping the water transport parameters unchanged. The Vredepeel site is characterised by a sandy soil. Fourteen months of measurements were used for the calibration. Two pesticides were applied on the site: bentazone and ethoprophos. PEARL predictions were very satisfactory for both soil moisture content and pesticide concentration in the soil profile. The Brimstone site is characterised by a cracking clay soil. The calibration was conducted on a time series of measurements spanning 7 years. The validation consisted of comparing predictions and measurements of soil moisture at different soil depths, and of comparing the predicted and measured concentration of isoproturon in the drainage water. The results, even if in good agreement with the measurements, highlighted the limitations of the model when preferential flow becomes a dominant process.
PEARL did not reproduce the soil moisture profile well during summer months, and also under-predicted the arrival of isoproturon at the drains. The Lanna site is characterised by a structured clay soil. PEARL was successful in predicting soil moisture profiles and the draining water. PEARL performed well in predicting the soil concentration of bentazone at different depths. However, since PEARL does not consider cracks in the soil, it did not predict well the peak concentrations of bentazone in the drainage water. Along with the validation results for the three sites, a sensitivity analysis of the model is presented.
NASA Astrophysics Data System (ADS)
Li, Lang-quan; Huang, Wei; Yan, Li; Li, Shi-bin
2017-10-01
The dual transverse injection system with a front hydrogen porthole and a rear air porthole arranged in tandem is proposed; this is a realistic approach for mixing enhancement and penetration improvement of transverse injection in a scramjet combustor. The influence of this dual transverse injection system on mixing characteristics has been evaluated numerically, based on a grid independency analysis and code validation. The numerical approach employed in the current study has been validated against the available experimental data in the open literature, and the predicted wall static pressure distributions show reasonable agreement with the experimental data for cases with different jet-to-crossflow pressure ratios. The obtained results, predicted by the three-dimensional Reynolds-averaged Navier-Stokes (RANS) equations coupled with the two-equation k-ω shear stress transport (SST) turbulence model, show that the air porthole has a great impact on penetration depth and mixing efficiency, and the effect of the air jet on the flow field varies with the aspect ratio. An air porthole with a larger aspect ratio can increase the fuel penetration depth. However, when the aspect ratio is relatively small, the fuel penetration depth decreases, and is even smaller than that of the single injection system. At the same time, the air porthole brings a highly remarkable improvement in mixing efficiency, especially in the near field: the smaller the aspect ratio of the air porthole, the higher the mixing efficiency in the near field, owing to the larger circulation there. The dual injection system incurs greater stagnation pressure losses than the single injection system.
Stephan, Carl N; Simpson, Ellie K
2008-11-01
With the ever-increasing production of average soft tissue depth studies, data are becoming increasingly complex, less standardized, and more unwieldy. So far, no overarching review has been attempted to determine: the validity of continued data collection; the usefulness of the existing data subcategorizations; or whether a synthesis is possible to produce a manageable soft tissue depth library. While a principal components analysis would provide the best foundation for such an assessment, this type of investigation is not currently possible because of a lack of easily accessible raw data (first, many studies are narrow; second, raw data are infrequently published and/or stored and are not always shared by their authors). This paper provides an alternate means of investigation, using a hierarchical approach to review and compare the effects of single variables on published mean values for adults whilst acknowledging measurement errors and within-group variation. The results revealed: (i) no clear secular trends at frequently investigated landmarks; (ii) wide variation in soft tissue depth measures between different measurement techniques, irrespective of whether living persons or cadavers were considered; (iii) no clear clustering of non-Caucasoid data far from the Caucasoid means; and (iv) minor differences between males and females. Consequently, the data were pooled across studies using weighted means and standard deviations to cancel out random and opposing study-specific errors, and to produce a single soft tissue depth table with increased sample sizes (e.g., 6786 individuals at pogonion).
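Pooling weighted means and standard deviations across studies, as described above, can be sketched with the standard combined-sample formulas; this is an illustration (input format `(n, mean, sd)` per study is assumed), not necessarily the exact weighting the author used:

```python
import math

def pool_studies(studies):
    """Pool per-study (n, mean, sd) tuples into one combined n, mean, sd,
    treating each study as a subsample of one larger sample."""
    n_tot = sum(n for n, _, _ in studies)
    mean = sum(n * m for n, m, _ in studies) / n_tot  # sample-size weighted mean
    # Combined sum of squares = within-study part + between-study part.
    ss = sum((n - 1) * sd**2 + n * (m - mean)**2 for n, m, sd in studies)
    return n_tot, mean, math.sqrt(ss / (n_tot - 1))
```

The weighting ensures that large studies dominate the pooled value, while opposing study-specific errors of comparable size tend to cancel, which is the rationale given in the abstract.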
Depth profile measurement with lenslet images of the plenoptic camera
NASA Astrophysics Data System (ADS)
Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei
2018-03-01
An approach for carrying out depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. To begin with, these images are processed directly with a refocusing technique to obtain the depth map, which does not need to align and decode the plenoptic image. Then, a linear depth calibration is applied based on the optical structure of the plenoptic camera for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map. Unlike the traditional method, our resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.
NASA Astrophysics Data System (ADS)
Haddad, Bouchra; Palacios, David; Pastor, Manuel; Zamorano, José Juan
2016-09-01
Lahars are among the most catastrophic volcanic processes, and the ability to model them is central to mitigating their effects. Several lahars recently generated by the Popocatépetl volcano (Mexico) moved downstream through the Huiloac Gorge towards the village of Santiago Xalitzintla. The most dangerous was the 2001 lahar, in which the destructive power of the debris flow was maintained throughout the extent of the flow. Identifying the zone of hazard can be based either on numerical or empirical models, but a calibration and validation process is required to ensure hazard map quality. The Geoflow-SPH depth integrated numerical model used in this study to reproduce the 2001 lahar was derived from the velocity-pressure version of the Biot-Zienkiewicz model, and was discretized using the smoothed particle hydrodynamics (SPH) method. The results of the calibrated SPH model were validated by comparing the simulated deposit depth with the field depth measured at 16 cross sections distributed strategically along the gorge channel. Moreover, the dependency of the results on topographic mesh resolution, initial lahar mass shape and dimensions is also investigated. The results indicate that to accurately reproduce the 2001 lahar flow dynamics the channel topography needed to be discretized using a mesh having a minimum 5 m resolution, and an initial lahar mass shape that adopted the source area morphology. Field validation of the calibrated model showed that there was a satisfactory relationship between the simulated and field depths, the error being less than 20% for 11 of the 16 cross sections. This study demonstrates that the Geoflow-SPH model was able to accurately reproduce the lahar path and the extent of the flow, but also reproduced other parameters including flow velocity and deposit depth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
New, Joshua Ryan; Levinson, Ronnen; Huang, Yu
The Roof Savings Calculator (RSC) was developed through collaborations among Oak Ridge National Laboratory (ORNL), White Box Technologies, Lawrence Berkeley National Laboratory (LBNL), and the Environmental Protection Agency, in the context of a California Energy Commission Public Interest Energy Research project to make cool-color roofing materials a market reality. The RSC website and a simulation engine validated against demonstration homes were developed to replace the liberal DOE Cool Roof Calculator and the conservative EPA Energy Star Roofing Calculator, which reported different roof savings estimates. A preliminary analysis arrived at a tentative explanation for why RSC results differed from previous LBNL studies and provided guidance for future analysis in the comparison of four simulation programs (doe2attic, DOE-2.1E, EnergyPlus, and MicroPas), including heat exchange between the attic surfaces (principally the roof and ceiling) and the resulting heat flows through the ceiling to the building below. The results were consolidated in an ORNL technical report, ORNL/TM-2013/501. This report is an in-depth inter-comparison of the four programs with detailed measured data from an experimental facility operated by ORNL in South Carolina, in which different segments of the attic had different roof and attic systems.
Zhang, Yixiang; Gao, Peng; Xing, Zhuo; Jin, Shumei; Chen, Zhide; Liu, Lantao; Constantino, Nasie; Wang, Xinwang; Shi, Weibing; Yuan, Joshua S.; Dai, Susie Y.
2013-01-01
High-abundance proteins like ribulose-1,5-bisphosphate carboxylase oxygenase (Rubisco) pose a consistent challenge for whole-proteome characterization using shotgun proteomics. To address this challenge, we developed and evaluated Polyethyleneimine Assisted Rubisco Cleanup (PARC) as a new method combining both abundant protein removal and fractionation. The new approach was applied to a plant-insect interaction study to validate the platform and investigate mechanisms of plant defense against herbivorous insects. Our results indicated that PARC can effectively remove Rubisco, improve protein identification, and discover almost three times more differentially regulated proteins. The significantly enhanced shotgun proteomics performance translated into in-depth proteomic and molecular mechanisms for plant-insect interaction, in which carbon re-distribution was found to play an essential role. Moreover, the transcriptomic validation also confirmed the reliability of the PARC analysis. Finally, functional studies were carried out for two differentially regulated genes revealed by the PARC analysis. Insect resistance was induced by over-expressing either jacalin-like or cupin-like genes in rice. The results further highlight that PARC can serve as an effective strategy for proteomics analysis and gene discovery. PMID:23943779
Target-depth estimation in active sonar: Cramer-Rao bounds for a bilinear sound-speed profile.
Mours, Alexis; Ioana, Cornel; Mars, Jérôme I; Josso, Nicolas F; Doisy, Yves
2016-09-01
This paper develops a localization method to estimate the depth of a target in the context of active sonar at long ranges. The target depth is tactical information for both strategy and classification purposes. The Cramer-Rao lower bounds for the target position, in range and depth, are derived for a bilinear sound-speed profile. The influence of sonar parameters on the standard deviations of the target range and depth is studied. A localization method based on ray back-propagation with a probabilistic approach is then investigated. Monte Carlo simulations applied to a summer Mediterranean sound-speed profile are performed to evaluate the efficiency of the estimator. This method is finally validated on data from an experimental tank.
NASA Astrophysics Data System (ADS)
Houssard, Patrick; Lorrain, Anne; Tremblay-Boyer, Laura; Allain, Valérie; Graham, Brittany S.; Menkes, Christophe E.; Pethybridge, Heidi; Couturier, Lydie I. E.; Point, David; Leroy, Bruno; Receveur, Aurore; Hunt, Brian P. V.; Vourey, Elodie; Bonnet, Sophie; Rodier, Martine; Raimbault, Patrick; Feunteun, Eric; Kuhnert, Petra M.; Munaron, Jean-Marie; Lebreton, Benoit; Otake, Tsuguo; Letourneur, Yves
2017-05-01
Estimates of trophic position are used to validate ecosystem models and understand food web structure. A consumer's trophic position can be estimated from the stable nitrogen isotope values (δ15N) of its tissue, once the baseline isotopic variability has been accounted for. Our study established the first data-driven baseline δ15N isoscape for the Western and Central Pacific Ocean using particulate organic matter. Bulk δ15N analyses were conducted on muscle tissue from 1039 bigeye and yellowfin tuna, together with amino acid compound-specific δ15N analysis (AA-CSIA) on a subset of 21 samples. Both particulate organic matter and tuna bulk δ15N values varied by more than 10‰ across the study area. Fine-scaled trophic position maps were constructed and revealed higher tuna trophic position (by ∼1) in the southern latitudes compared to the equator. AA-CSIA confirmed these spatial patterns for bigeye and, to a lesser extent, yellowfin tuna. Using generalized additive models, spatial variations of tuna trophic positions were mainly related to the depth of the 20°C isotherm, a proxy for thermocline behavior, with higher tuna trophic position estimates at greater thermocline depths. We hypothesized that a deeper thermocline would increase tuna vertical habitat and access to mesopelagic prey of higher trophic position. Archival tagging data further suggested that the vertical habitat of bigeye tuna was deeper in the southern latitudes than at the equator. These results suggest the importance of thermocline depth in influencing tropical tuna diet, which affects their vulnerability to fisheries, and may be altered by climate change.
NASA Astrophysics Data System (ADS)
Richardson, Ryan T.
This study builds upon recent research in the field of fluvial remote sensing by applying techniques for mapping physical attributes of rivers. Depth, velocity, and grain size are primary controls on the types of habitat present in fluvial ecosystems. This thesis focuses on expanding fluvial remote sensing to larger spatial extents and sub-meter resolutions, which will increase our ability to capture the spatial heterogeneity of habitat at a resolution relevant to individual salmonids and an extent relevant to species. This thesis consists of two chapters, one focusing on expanding the spatial extent over which depth can be mapped using Optimal Band Ratio Analysis (OBRA) and the other developing general relations for mapping grain size from three-dimensional topographic point clouds. The two chapters are independent but connected by the overarching goal of providing scientists and managers more useful tools for quantifying the amount and quality of salmonid habitat via remote sensing. The OBRA chapter highlights the true power of remote sensing to map depths from hyperspectral images as a central component of watershed-scale analysis, while also acknowledging the great challenges involved with increasing spatial extent. The grain size mapping chapter establishes the first general relations for mapping grain size from roughness using point clouds. These relations will significantly reduce the time needed in the field by eliminating the need for independent measurements of grain size for calibrating the roughness-grain size relationship, thus making grain size mapping with structure-from-motion (SfM) more cost-effective for river restoration and monitoring. More data from future studies are needed to refine these relations and establish their validity and generality. In conclusion, this study adds to the rapidly growing field of fluvial remote sensing and could facilitate river research and restoration.
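OBRA, as summarized above, searches all band pairs for the log-ratio that best predicts known depths at calibration points. A minimal sketch (ordinary least squares over an exhaustive pair search; the published method includes calibration and atmospheric-correction details omitted here):

```python
import numpy as np

def obra(reflectance, depths):
    """Optimal Band Ratio Analysis sketch.

    For every ordered band pair (i, j), regress known depths against
    X = ln(R_i / R_j) and keep the pair with the highest R^2.
    `reflectance` is (n_samples, n_bands); `depths` is (n_samples,).
    Returns (best_r2, (i, j), (slope, intercept)).
    """
    n_bands = reflectance.shape[1]
    best = (-1.0, None, None)
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue
            x = np.log(reflectance[:, i] / reflectance[:, j])
            slope, intercept = np.polyfit(x, depths, 1)
            pred = slope * x + intercept
            ss_res = np.sum((depths - pred) ** 2)
            ss_tot = np.sum((depths - depths.mean()) ** 2)
            r2 = 1.0 - ss_res / ss_tot
            if r2 > best[0]:
                best = (r2, (i, j), (slope, intercept))
    return best
```

Once the optimal pair and regression coefficients are found, the fitted relation is applied pixel-by-pixel to the hyperspectral image to produce a depth map.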
NASA Astrophysics Data System (ADS)
Yan, Y.; Barth, A.; Beckers, J. M.; Candille, G.; Brankart, J. M.; Brasseur, P.
2015-07-01
Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated into a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to the temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histograms, before the assimilation experiments. An incremental analysis update scheme is applied in order to reduce spurious oscillations due to the model state correction. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with independent/semi-independent observations. For deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations, in order to diagnose the ensemble distribution properties in a deterministic way. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centered random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system. The improvement of the assimilation is demonstrated using these validation metrics. Finally, the deterministic validation and the probabilistic validation are analyzed jointly. The consistency and complementarity between both validations are highlighted.
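The CRPS used for the probabilistic validation can be estimated directly from the ensemble members; a sketch using the standard sample estimator (the study's exact implementation may differ):

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample CRPS for one scalar observation given ensemble members:

        CRPS = E|X - y| - 0.5 * E|X - X'|

    where X, X' are independent draws from the ensemble's empirical
    distribution and y is the observation.  Lower is better; for a
    single-member (deterministic) forecast it reduces to |x - y|."""
    m = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(m - obs))
    term2 = 0.5 * np.mean(np.abs(m[:, None] - m[None, :]))
    return term1 - term2
```

Averaging this score over many observation points, and decomposing it into reliability and resolution parts, gives the probabilistic diagnostics described in the abstract; the RCRV bias/dispersion decomposition operates on the same ensemble statistics.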
32 CFR 701.58 - In-depth analysis of FOIA exemptions.
Code of Federal Regulations, 2013 CFR
2013-07-01
32 CFR, National Defense, Department of the Navy, Documents Affecting the Public, FOIA Exemptions, § 701.58: An in-depth analysis of the FOIA exemptions is addressed in the DOJ's annual publication...
32 CFR 701.58 - In-depth analysis of FOIA exemptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
32 CFR, National Defense, Department of the Navy, Documents Affecting the Public, FOIA Exemptions, § 701.58: An in-depth analysis of the FOIA exemptions is addressed in the DOJ's annual publication...
32 CFR 701.58 - In-depth analysis of FOIA exemptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
32 CFR, National Defense, Department of the Navy, Documents Affecting the Public, FOIA Exemptions, § 701.58: An in-depth analysis of the FOIA exemptions is addressed in the DOJ's annual publication...
CRRES microelectronics package flight data analysis
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.; Brucker, G. J.; Stauffer, C. A.
1993-01-01
A detailed in-depth analysis was performed on the data from some of the CRRES MEP (Microelectronics Package) devices. These space flight measurements covered a period of about fourteen months of mission lifetime. Several types of invalid data were identified and corrections were made. Other problems were noted and adjustments applied, as necessary. Particularly important and surprising were observations of abnormal device behavior in many parts that could neither be explained nor correlated to causative events. Also, contrary to prevailing theory, proton effects appeared to be far more significant and numerous than cosmic ray effects. Another unexpected result was the realization that only nine out of thirty-two p-MOS dosimeters on the MEP indicated valid operation. Comments, conclusions, and recommendations are given.
Modeling of Depth Cue Integration in Manual Control Tasks
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.; Kaiser, Mary K.; Davis, Wendy
2003-01-01
Psychophysical research has demonstrated that human observers utilize a variety of visual cues to form a perception of three-dimensional depth. However, most of these studies have utilized a passive judgement paradigm and failed to consider depth-cue integration as a dynamic and task-specific process. In the current study, we developed and experimentally validated a model of manual control of depth that examines how two potential cues (stereo disparity and relative size) are utilized in both first- and second-order active depth control tasks. We found that stereo disparity plays the dominant role in determining depth position, while relative size dominates perception of depth velocity. Stereo disparity also plays a reduced role when made less salient (i.e., when viewing distance is increased). Manual control models predict that position information is sufficient for first-order control tasks, while velocity information is required to perform a second-order control task. Thus, the rules for depth-cue integration in active control tasks are dependent on both task demands and cue quality.
Sex estimation from measurements of the first rib in a contemporary Polish population.
Kubicka, Anna Maria; Piontek, Janusz
2016-01-01
The aim of this study was to evaluate the accuracy of sex assessment using measurements of the first rib from computed tomography (CT) to develop a discriminant formula. Four discriminant formulae were derived based on CT imaging of the right first rib of 85 female and 91 male Polish patients of known age and sex. In direct discriminant analysis, the first equation consisted of all first rib variables; the second included measurements of the rib body; the third comprised only two measurements of the sternal end of the first rib. The stepwise method selected the four best variables from all measurements. The discriminant function equation was then tested on a cross-validated group consisting of 23 females and 24 males. The direct discriminant analysis showed that sex assessment was possible in 81.5% of cases in the first group and in 91.5% in the cross-validated group when all variables for the first rib were included. The average accuracy for the original group for rib body and sternal end was 80.9 and 67.9%, respectively. The percentages of correctly assigned individuals for the functions based on the rib body and sternal end in the cross-validated group were 76.6 and 85.0%, respectively. Higher average accuracies were obtained for stepwise discriminant analysis: 83.1% for the original group and 91.2% for the cross-validated group. The exterior edge, anterior-posterior of the sternal end, and depth of the arc were the most reliable parameters. Our results suggest that the first rib is dimorphic and that the described method can be used for sex assessment.
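As an aside, a linear discriminant function of the kind derived above can be sketched in a few lines. The coefficients, the sectioning constant, and the zero cut-off below are hypothetical placeholders for illustration only, not the study's fitted values:

```python
def discriminant_score(measurements, coefficients, constant):
    """Linear discriminant score: a weighted sum of first-rib
    measurements plus a sectioning constant."""
    return sum(c * m for c, m in zip(coefficients, measurements)) + constant

# Hypothetical coefficients for three rib variables (all mm): exterior
# edge, anterior-posterior diameter of the sternal end, depth of the arc.
# Illustrative values only, not the study's fitted coefficients.
COEFS = [0.12, 0.30, 0.25]
CONST = -14.0

def classify(measurements):
    """Scores above zero are called male, at or below zero female."""
    return "male" if discriminant_score(measurements, COEFS, CONST) > 0 else "female"

print(classify([70.0, 18.0, 30.0]))  # larger rib
print(classify([45.0, 12.0, 18.0]))  # smaller rib
```

In practice the coefficients come from a discriminant analysis of the training sample, and accuracy is checked on a cross-validated group as the study describes.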
Ruckebusch, C; Vilmin, F; Coste, N; Huvenne, J P
2008-07-01
We evaluate the contribution made by multivariate curve resolution-alternating least squares (MCR-ALS) for resolving gel permeation chromatography-Fourier transform infrared (GPC-FT-IR) data collected on butadiene rubber (BR) and styrene butadiene rubber (SBR) blends, in order to gain in-depth knowledge of the polymers along the molecular weight distribution (MWD). In the BR-SBR case, the individual polymers differ in chemical composition but share almost the same MWD. Principal component analysis (PCA) gives a general overview of the data structure and attests to the feasibility of modeling the blends as a binary system. MCR-ALS is then performed. It resolves the chromatographic coelution and validates the chosen methodology. For SBR-SBR blends, the problem is more challenging since the individual elastomers present the same chemical composition. Rank deficiency is detected from the PCA data structure analysis. MCR-ALS is thus performed on column-wise augmented matrices. It brings very useful insight into the composition of the analyzed blends. In particular, a weak change in the composition of individual SBR in the MWD's lowest mass region is revealed.
NASA Technical Reports Server (NTRS)
Markus, Thorsten; Masson, Robert; Worby, Anthony; Lytle, Victoria; Kurtz, Nathan; Maksym, Ted
2011-01-01
In October 2003 a campaign on board the Australian icebreaker Aurora Australis had the objective to validate standard Aqua Advanced Microwave Scanning Radiometer (AMSR-E) sea-ice products. Additionally, the satellite laser altimeter on the Ice, Cloud and land Elevation Satellite (ICESat) was in operation. To capture the large-scale information on the sea-ice conditions necessary for satellite validation, the measurement strategy was to obtain large-scale sea-ice statistics using extensive sea-ice measurements in a Lagrangian approach. A drifting buoy array, initially spanning 50 km × 100 km, was surveyed during the campaign. In situ measurements consisted of 12 transects, 50-500 m long, with detailed snow and ice measurements as well as random snow depth sampling of floes within the buoy array using helicopters. In order to increase the amount of coincident in situ and satellite data an approach has been developed to extrapolate measurements in time and in space. Assuming no change in snow depth and freeboard occurred during the period of the campaign on the floes surveyed, we use buoy ice-drift information as well as daily estimates of thin-ice fraction and rough-ice vs smooth-ice fractions from AMSR-E and QuikSCAT, respectively, to estimate kilometer-scale snow depth and freeboard for other days. The results show that ICESat freeboard estimates have a mean difference of 1.8 cm when compared with the in situ data and a correlation coefficient of 0.6. Furthermore, incorporating ICESat roughness information into the AMSR-E snow depth algorithm significantly improves snow depth retrievals. Snow depth retrievals using a combination of AMSR-E and ICESat data agree with in situ data with a mean difference of 2.3 cm and a correlation coefficient of 0.84 with a negligible bias.
ERIC Educational Resources Information Center
Nguyen, Thai-Huy; Nguyen, Mike Hoa; Nguyen, Bach Mai Dolly; Gasman, Marybeth; Conrad, Clifton
2018-01-01
This article highlights the capacity of an Asian American, Native American and Pacific Islander Institution (AANAPISI) to serve as an institutional convertor--by addressing challenges commonly associated with marginalized students--for low-income, Asian American and Pacific Islander students entering college. Through an in-depth case study, we…
Givati, Assaf; Hatton, Kieron
2015-04-01
Traditional acupuncturists' quest for external legitimacy in Britain involves the standardization of their knowledge bases through the development of training schools and syllabi, formal educational structures, and, since the 1990s, the teaching of undergraduate courses within (or validated by) Higher Education Institutions (HEIs), a process which entails biomedical alignment of the curriculum. However, as holistic discourses were commonly used as a rhetorical strategy by CAM practitioners to distance themselves from biomedicine and as a source of public appeal, this 'mainstreaming' process evoked practitioners' concerns that their holistic claims are being compromised. An additional challenge is being posed by a group of academics and scientists in Britain who launched an attack on CAM courses taught in HEIs, accusing them of being 'unscientific' and 'non-academic' in nature. This paper explores the negotiation of all these challenges during the formalization of traditional acupuncture education in Britain, with a particular focus on the role of HEIs. The in-depth qualitative investigation draws on several data sets: participant observation in a university validated acupuncture course; in-depth interviews; and documentary analysis. The findings show how, as part of the formalization process, acupuncturists in Britain (re)negotiate their holistic, anti-reductionist discourses and claims in relation to contemporary societal, political and cultural forces. Moreover, the teaching and validation of acupuncture courses by HEIs may contribute to broadening acupuncturists' 'holistic awareness' of societal and cultural influences on individuals' and communities' ill-health. This investigation emphasises the dynamic and context-specific (rather than fixed and essentialized) nature of acupuncture practice and knowledge. Copyright © 2015 Elsevier Ltd. All rights reserved.
Satellite remote sensing of fine particulate air pollutants over Indian mega cities
NASA Astrophysics Data System (ADS)
Sreekanth, V.; Mahesh, B.; Niranjan, K.
2017-11-01
In the backdrop of the need for high spatio-temporal resolution data on PM2.5 mass concentrations for health and epidemiological studies over India, empirical relations between Aerosol Optical Depth (AOD) and PM2.5 mass concentrations are established over five Indian mega cities. These relations are then used to predict surface PM2.5 mass concentrations from high-resolution columnar AOD datasets. The current study utilizes multi-city public domain PM2.5 data (from the US Consulate and Embassy's air monitoring program) and MODIS AOD, spanning almost four years. PM2.5 is found to be positively correlated with AOD. Station-wise linear regression analysis has shown spatially varying regression coefficients. A similar analysis was repeated after eliminating data from the elevated-aerosol-prone seasons, which improved the correlation coefficient. The impact of the day-to-day variability in local meteorological conditions on the AOD-PM2.5 relationship has been explored by performing a multiple regression analysis. A cross-validation approach for the multiple regression analysis, considering three years of data as the training dataset and one year of data as the validation dataset, yielded an R value of ∼0.63. The study concludes by discussing the factors which can improve the relationship.
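The station-wise linear regression step can be illustrated with a minimal ordinary-least-squares sketch. The AOD and PM2.5 values below are invented for illustration, not the study's data:

```python
import math

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    a = sxy / sxx
    return a, my - a * mx

def pearson_r(x, y):
    """Correlation coefficient between two series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = math.sqrt(sum((xi - mx) ** 2 for xi in x)
                    * sum((yi - my) ** 2 for yi in y))
    return num / den

# Invented station data: MODIS AOD (unitless) vs. surface PM2.5 (ug/m3).
aod  = [0.20, 0.35, 0.50, 0.65, 0.80, 0.95]
pm25 = [22.0, 38.0, 51.0, 70.0, 81.0, 98.0]

slope, intercept = linear_fit(aod, pm25)
predicted = [slope * a + intercept for a in aod]
print(round(pearson_r(pm25, predicted), 2))
```

The study extends this to multiple regression with meteorological covariates; the structure of the fit is the same.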
NASA Astrophysics Data System (ADS)
Tesoriero, A. J.; Terziotti, S.
2014-12-01
Nitrate trends in streams often do not match expectations based on recent nitrogen source loadings to the land surface. Groundwater discharge with long travel times has been suggested as the likely cause for these observations. The fate of nitrate in groundwater depends to a large extent on the occurrence of denitrification along flow paths. Because denitrification in groundwater is inhibited when dissolved oxygen (DO) concentrations are high, defining the oxic-suboxic interface has been critical in determining pathways for nitrate transport in groundwater and to streams at the local scale. Predicting redox conditions on a regional scale is complicated by the spatial variability of reaction rates. In this study, logistic regression and boosted classification tree analysis were used to predict the probability of oxic conditions in groundwater in the Chesapeake Bay watershed. The probability of oxic water (DO > 2 mg/L) was predicted by relating DO concentrations in over 3,000 groundwater samples to indicators of residence time and/or electron donor availability. Variables that describe position in the flow system (e.g., depth to the top of the open interval), soil drainage and surficial geology were the most important predictors of oxic water. Logistic regression and boosted classification tree analysis correctly predicted the presence or absence of oxic conditions in over 75% of the samples in both training and validation data sets. Predictions of the percentages of oxic wells in deciles of risk were very accurate (r² > 0.9) in both the training and validation data sets. Depth to the bottom of the oxic layer was predicted and is being used to estimate the effect that groundwater denitrification has on stream nitrate concentrations and the time lag between the application of nitrogen at the land surface and its effect on streams.
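A minimal sketch of how a fitted logistic model of this kind converts predictors into a probability of oxic conditions. The coefficients and predictor set below are assumed placeholders, not the study's fitted model:

```python
import math

def p_oxic(depth_m, well_drained, coefs=(2.0, -0.05, 1.0)):
    """Logistic model for the probability that groundwater is oxic
    (DO > 2 mg/L), given depth to the top of the open interval (m) and
    a well-drained-soil indicator (0/1). The coefficients (intercept,
    depth term, soil term) are illustrative placeholders."""
    b0, b_depth, b_soil = coefs
    z = b0 + b_depth * depth_m + b_soil * well_drained
    return 1.0 / (1.0 + math.exp(-z))

# The modeled probability falls with depth and rises for well-drained soils.
print(round(p_oxic(10, 1), 2))    # shallow, well-drained
print(round(p_oxic(100, 0), 2))   # deep, poorly drained
```

The boosted classification tree half of the analysis would replace the linear predictor z with an ensemble of shallow trees, but the probabilistic output is interpreted the same way.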
Lee, Gloria K L; Chan, Chetwyn C H
2003-01-01
This study aimed at investigating the utilization and applicability of the Dictionary of Occupational Titles (DOT) as a methodology to study the job profile (nature and physical demand) of formwork carpentry in the local situation. Thirty male formwork carpenters were recruited by convenience sampling to participate in a two-hour interview, with reference to the DOT Physical Demand Questionnaire (DOTPDQ) and the WestTool Sort Questionnaire. The information obtained was further consolidated by comparing the interview results with observations at three construction sites and training guidelines from the formwork carpentry training centers. The triangulation of the data formulated a job profile of formwork carpenters. The results from the DOTPDQ revealed that workers' work demands were standing, walking, pushing, pulling, reaching, climbing, balancing, stooping, crouching, lifting, carrying, handling and near acuity. This produced an agreement of 84.6% with the original DOT. A discrepancy was found in the demands of kneeling, fingering, far acuity and depth perception. The discrepancy between the data from the United States and the local data appeared to be minimal. It was thus inferred that the DOT-based job profile was largely valid for describing formwork carpentry in Hong Kong. In-depth analysis should be conducted to further substantiate the validity of utilizing the DOT system for other job types and their physical demands.
Wang, Mei; Avula, Bharathi; Wang, Yan-Hong; Zhao, Jianping; Avonto, Cristina; Parcher, Jon F; Raman, Vijayasankar; Zweigenbaum, Jerry A; Wylie, Philip L; Khan, Ikhlas A
2014-01-01
As part of an ongoing research program on authentication, safety and biological evaluation of phytochemicals and dietary supplements, an in-depth chemical investigation of different types of chamomile was performed. A collection of chamomile samples including authenticated plants, commercial products and essential oils was analysed by GC/MS. Twenty-seven authenticated plant samples representing three types of chamomile, viz. German chamomile, Roman chamomile and Juhua, were analysed. This set of data was employed to construct a sample class prediction (SCP) model based on stepwise reduction of data dimensionality followed by principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA). The model was cross-validated with samples including authenticated plants and commercial products. The model demonstrated 100.0% accuracy for both recognition and prediction abilities. In addition, 35 commercial products and 11 essential oils purported to contain chamomile were subsequently predicted by the validated PLS-DA model. Furthermore, tentative identification of the marker compounds correlated with different types of chamomile was explored. Copyright © 2013 Elsevier Ltd. All rights reserved.
Chen, Hong; Li, Shanshan
2018-01-01
There exists a lack of specific research methods to estimate the relationship between an organization and its employees, which has long challenged research in the field of organizational management. Therefore, this article introduces the concept of psychological distance into organizational behavior research, defining the psychological distance between employees and an organization as the level of perceived correspondence or interaction between subjects and objects. We developed an employee-organization psychological distance (EOPD) scale through both qualitative and quantitative analysis methods. As indicated by the research results based on grounded theory (10 in-depth employee interview records and 277 open-ended questionnaires) and a formal investigation (544 questionnaires), this scale consists of six dimensions across 44 items: experiential distance, behavioral distance, emotional distance, cognitive distance, spatial-temporal distance, and objective social distance. Finally, we determined that the EOPD scale exhibited acceptable reliability and validity using confirmatory factor analysis. This research may establish a foundation for future research on the measurement of psychological relationships between employees and organizations. PMID:29375427
Development of the competency scale for primary care managers in Thailand: Scale development.
Kitreerawutiwong, Keerati; Sriruecha, Chanaphol; Laohasiriwong, Wongsa
2015-12-09
The complexity of the primary care system requires a competent manager to achieve high-quality healthcare. The existing literature in the field yields little evidence of the tools to assess the competency of primary care administrators. This study aimed to develop and examine the psychometric properties of the competency scale for primary care managers in Thailand. The scale was developed using in-depth interviews and focus group discussions among policy makers, managers, practitioners, village health volunteers, and clients. The specific dimensions were extracted from 35 participants. 123 items were generated from the evidence and qualitative data. Content validity was established through the evaluation of seven experts and the original 123 items were reduced to 84 items. The pilot testing was conducted on a simple random sample of 487 primary care managers. Item analysis, reliability testing, and exploratory factor analysis were applied to establish the scale's reliability and construct validity. Exploratory factor analysis identified nine dimensions with 48 items using a five-point Likert scale. Each dimension accounted for greater than 58.61% of the total variance. The scale had strong content validity (Indices = 0.85). Each dimension of Cronbach's alpha ranged from 0.70 to 0.88. Based on these analyses, this instrument demonstrated sound psychometric properties and therefore is considered an effective tool for assessment of the primary care manager competencies. The results can be used to improve competency requirements of primary care managers, with implications for health service management workforce development.
De Silva Weliange, Shreenika H; Fernando, Dulitha; Gunatilake, Jagath
2014-05-03
Environmental characteristics are known to be associated with patterns of physical activity (PA). Although several validated tools exist to measure environmental characteristics, these instruments are not necessarily suitable for application in all settings, especially in a developing country. This study was carried out to develop and validate an instrument named the "Physical And Social Environment Scale--PASES" to assess the physical and social environmental factors associated with PA. This will enable identification of various physical and social environmental factors affecting PA in Sri Lanka, which will help in the development of more tailored intervention strategies for promoting higher PA levels in Sri Lanka. The PASES was developed using a scientific approach of defining the construct, item generation, analysis of the content of items and item reduction. Both qualitative and quantitative methods of key informant interviews, in-depth interviews and rating of the items generated by experts were conducted. A cross-sectional survey among 180 adults was carried out to assess the factor structure through principal component analysis. Another cross-sectional survey among a different group of 180 adults was carried out to assess construct validity through confirmatory factor analysis. Reliability was assessed with test-retest reliability and internal consistency, using Spearman's r and Cronbach's alpha respectively. Thirty-six items were selected after the expert ratings and were developed into interviewer-administered questions. Exploration of the factor structure of the 34 factorable items through principal component analysis with Quartimax rotation extracted 8 factors. The 34-item instrument was assessed for construct validity with confirmatory factor analysis, which confirmed an 8-factor model (χ² = 339.9, GFI = 0.90).
The identified factors were infrastructure for walking, aesthetics and facilities for cycling, vehicular traffic safety, access and connectivity, recreational facilities for PA, safety, social cohesion and social acceptance of PA, with the two non-factorable factors, residential density and land use mix. The PASES also showed good test-retest reliability and a moderate level of internal consistency. The PASES is a valid and reliable tool which could be used to assess the physical and social environment associated with PA in Sri Lanka.
Iverson, Richard M.; Ouyang, Chaojun
2015-01-01
Earth-surface mass flows such as debris flows, rock avalanches, and dam-break floods can grow greatly in size and destructive potential by entraining bed material they encounter. Increasing use of depth-integrated mass- and momentum-conservation equations to model these erosive flows motivates a review of the underlying theory. Our review indicates that many existing models apply depth-integrated conservation principles incorrectly, leading to spurious inferences about the role of mass and momentum exchanges at flow-bed boundaries. Model discrepancies can be rectified by analyzing conservation of mass and momentum in a two-layer system consisting of a moving upper layer and static lower layer. Our analysis shows that erosion or deposition rates at the interface between layers must in general satisfy three jump conditions. These conditions impose constraints on valid erosion formulas, and they help determine the correct forms of depth-integrated conservation equations. Two of the three jump conditions are closely analogous to Rankine-Hugoniot conditions that describe the behavior of shocks in compressible gases, and the third jump condition describes shear traction discontinuities that necessarily exist across eroding boundaries. Grain-fluid mixtures commonly behave as compressible materials as they undergo entrainment, because changes in bulk density occur as the mixtures mobilize and merge with an overriding flow. If no bulk density change occurs, then only the shear-traction jump condition applies. Even for this special case, however, accurate formulation of depth-integrated momentum equations requires a clear distinction between boundary shear tractions that exist in the presence or absence of bed erosion.
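The abstract does not reproduce the three jump conditions; as a schematic illustration only, generic Rankine-Hugoniot-type balances across an interface moving at normal speed $v_b$ take the following form (symbols are generic, not necessarily the authors' notation):

```latex
% Mass balance across the interface, with bulk density \rho and
% interface-normal velocity v on each side:
\rho_1\,(v_1 - v_b) \;=\; \rho_2\,(v_2 - v_b)
% Tangential-momentum balance, with u the tangential velocity and
% \tau the shear traction on each side of the interface:
\tau_1 - \tau_2 \;=\; \rho_1 u_1\,(v_1 - v_b) \;-\; \rho_2 u_2\,(v_2 - v_b)
% If \rho_1 = \rho_2 (no bulk density change), the mass condition gives
% v_1 = v_2, and only the shear-traction jump remains to constrain
% admissible erosion formulas.
```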
A comprehensive assessment of somatic mutation detection in cancer using whole-genome sequencing
Alioto, Tyler S.; Buchhalter, Ivo; Derdak, Sophia; Hutter, Barbara; Eldridge, Matthew D.; Hovig, Eivind; Heisler, Lawrence E.; Beck, Timothy A.; Simpson, Jared T.; Tonon, Laurie; Sertier, Anne-Sophie; Patch, Ann-Marie; Jäger, Natalie; Ginsbach, Philip; Drews, Ruben; Paramasivam, Nagarajan; Kabbe, Rolf; Chotewutmontri, Sasithorn; Diessl, Nicolle; Previti, Christopher; Schmidt, Sabine; Brors, Benedikt; Feuerbach, Lars; Heinold, Michael; Gröbner, Susanne; Korshunov, Andrey; Tarpey, Patrick S.; Butler, Adam P.; Hinton, Jonathan; Jones, David; Menzies, Andrew; Raine, Keiran; Shepherd, Rebecca; Stebbings, Lucy; Teague, Jon W.; Ribeca, Paolo; Giner, Francesc Castro; Beltran, Sergi; Raineri, Emanuele; Dabad, Marc; Heath, Simon C.; Gut, Marta; Denroche, Robert E.; Harding, Nicholas J.; Yamaguchi, Takafumi N.; Fujimoto, Akihiro; Nakagawa, Hidewaki; Quesada, Víctor; Valdés-Mas, Rafael; Nakken, Sigve; Vodák, Daniel; Bower, Lawrence; Lynch, Andrew G.; Anderson, Charlotte L.; Waddell, Nicola; Pearson, John V.; Grimmond, Sean M.; Peto, Myron; Spellman, Paul; He, Minghui; Kandoth, Cyriac; Lee, Semin; Zhang, John; Létourneau, Louis; Ma, Singer; Seth, Sahil; Torrents, David; Xi, Liu; Wheeler, David A.; López-Otín, Carlos; Campo, Elías; Campbell, Peter J.; Boutros, Paul C.; Puente, Xose S.; Gerhard, Daniela S.; Pfister, Stefan M.; McPherson, John D.; Hudson, Thomas J.; Schlesner, Matthias; Lichter, Peter; Eils, Roland; Jones, David T. W.; Gut, Ivo G.
2015-01-01
As whole-genome sequencing for cancer genome analysis becomes a clinical tool, a full understanding of the variables affecting sequencing analysis output is required. Here using tumour-normal sample pairs from two different types of cancer, chronic lymphocytic leukaemia and medulloblastoma, we conduct a benchmarking exercise within the context of the International Cancer Genome Consortium. We compare sequencing methods, analysis pipelines and validation methods. We show that using PCR-free methods and increasing sequencing depth to ∼100 × shows benefits, as long as the tumour:control coverage ratio remains balanced. We observe widely varying mutation call rates and low concordance among analysis pipelines, reflecting the artefact-prone nature of the raw data and lack of standards for dealing with the artefacts. However, we show that, using the benchmark mutation set we have created, many issues are in fact easy to remedy and have an immediate positive impact on mutation detection accuracy. PMID:26647970
Fang, H; Han, M; Li, Q-L; Cao, C Y; Xia, R; Zhang, Z-H
2016-08-01
Scaling and root planing are widely considered as effective methods for treating chronic periodontitis. A meta-analysis published in 2008 showed no statistically significant differences between full-mouth disinfection (FMD) or full-mouth scaling and root planing (FMS) and quadrant scaling and root planing (Q-SRP). The FMD approach only resulted in modest additional improvements in several indices. Whether differences exist between these two approaches requires further validation. Accordingly, a study was conducted to further validate whether FMD with antiseptics or FMS without the use of antiseptics within 24 h provides greater clinical improvement than Q-SRP in patients with chronic periodontitis. Medline (via OVID), EMBASE (via OVID), PubMed and CENTRAL databases were searched up to 27 January 2015. Randomized controlled trials comparing FMD or FMS with Q-SRP after at least 3 mo were included. Meta-analysis was performed to obtain the weighted mean difference (WMD), together with the corresponding 95% confidence intervals. Thirteen articles were included in the meta-analysis. The WMD of probing pocket depth reduction was 0.25 mm (p < 0.05) for FMD vs. Q-SRP in single-rooted teeth with moderate pockets, and clinical attachment level gain in single- and multirooted teeth with moderate pockets was 0.33 mm (p < 0.05) for FMD vs. Q-SRP. Apart from these, no statistically significant differences were found in the other subanalyses of FMD vs. Q-SRP, FMS vs. Q-SRP and FMD vs. FMS. Therefore, the meta-analysis results showed that FMD was better than Q-SRP for achieving probing pocket depth reduction and clinical attachment level gain in moderate pockets. Additionally, regardless of the treatment, no serious complications were observed. FMD, FMS and Q-SRP are all effective for the treatment of adult chronic periodontitis, and they do not lead to any obvious discomfort among patients.
Moreover, FMD had modest additional clinical benefits over Q-SRP, so we prefer to recommend FMD as the first choice for the treatment of adult chronic periodontitis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
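The pooled weighted mean difference reported above can be illustrated with a minimal fixed-effect meta-analysis sketch. The per-study values are made up; the 13 included trials' actual data are not reproduced here:

```python
import math

def fixed_effect_wmd(studies):
    """Fixed-effect pooled weighted mean difference (WMD) with a 95% CI.
    Each study is (mean_difference_mm, standard_error_mm); weights are
    inverse variances."""
    weights = [1.0 / se ** 2 for _, se in studies]
    wmd = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return wmd, (wmd - 1.96 * se_pooled, wmd + 1.96 * se_pooled)

# Made-up probing-pocket-depth reductions (mm), FMD vs. Q-SRP.
studies = [(0.30, 0.10), (0.20, 0.15), (0.25, 0.08)]
wmd, (ci_low, ci_high) = fixed_effect_wmd(studies)
print(round(wmd, 2), round(ci_low, 2), round(ci_high, 2))
```

A confidence interval excluding zero, as in this synthetic example, is what underlies the "p < 0.05" statements in the abstract; real reviews would also test heterogeneity and possibly use a random-effects model.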
NASA Astrophysics Data System (ADS)
Nakahara, H.
2013-12-01
For monitoring temporal changes in subsurface structures, I propose to use auto correlation functions of coda waves from local earthquakes recorded at surface receivers, which probably contain more body waves than surface waves. Because the use of coda waves requires earthquakes, the time resolution for monitoring decreases. But in regions with high seismicity, it may be possible to monitor subsurface structures with sufficient time resolution. Studying the 2011 Tohoku-Oki (Mw 9.0), Japan, earthquake, for which velocity changes have already been reported by previous studies, I try to validate the method. KiK-net stations in northern Honshu are used in the analysis. For each moderate earthquake, normalized auto correlation functions of surface records are stacked with respect to time windows in the S-wave coda. Aligning the stacked normalized auto correlation functions with time, I search for changes in arrival times of phases. Phases at lag times of less than 1 s are studied because changes at shallow depths are the focus. Based on the stretching method, temporal variations in the arrival times are measured at the stations. Clear phase delays are found to be associated with the mainshock and to gradually recover with time. The phase delays are of the order of 10% on average, with a maximum of about 50% at some stations. For validation, deconvolution analyses using surface and subsurface records at the same stations are conducted. The results show that the phase delays from the deconvolution analysis are slightly smaller than those from the auto correlation analysis, which implies that the phases on the auto correlations are caused by larger velocity changes at shallower depths. The auto correlation analysis seems to have an accuracy of a few percent, which is much larger than that of methods using earthquake doublets and borehole array data. So this analysis might be applicable to detecting larger changes.
In spite of these disadvantages, this analysis is still attractive because it can be applied to many records on the surface in regions where no boreholes are available. Acknowledgements: Seismograms recorded by KiK-net managed by National Research Institute for Earth Science and Disaster Prevention (NIED) were used in this study. This study was partially supported by JST J-RAPID program and JSPS KAKENHI Grant Numbers 24540449 and 23540449.
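The stretching method used above to measure arrival-time variations can be sketched as a grid search over stretch factors applied to a reference trace. The traces below are synthetic; real applications interpolate windowed coda of stacked correlation functions:

```python
import math

def stretch(trace, eps):
    """Resample a trace at stretched times t*(1+eps) (linear
    interpolation), mimicking a relative delay of eps in the medium."""
    out = []
    for i in range(len(trace)):
        t = i * (1.0 + eps)
        j = int(t)
        if j + 1 >= len(trace):
            out.append(trace[-1])
        else:
            frac = t - j
            out.append(trace[j] * (1.0 - frac) + trace[j + 1] * frac)
    return out

def correlation(a, b):
    """Pearson correlation between two equal-length traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) ** 0.5
           * sum((y - mb) ** 2 for y in b) ** 0.5)
    return num / den

def best_stretch(reference, current, grid):
    """Grid-search the stretch factor maximizing the correlation between
    the stretched reference trace and the current trace."""
    return max(grid, key=lambda e: correlation(stretch(reference, e), current))

reference = [math.sin(0.2 * i) * math.exp(-0.01 * i) for i in range(200)]
current = stretch(reference, 0.05)           # simulate a 5% phase delay
grid = [k / 1000.0 for k in range(101)]      # 0% to 10% in 0.1% steps
print(best_stretch(reference, current, grid))
```

The recovered stretch factor is interpreted as the relative velocity change; in the study it is tracked through time across the mainshock.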
Clustering and Network Analysis of Reverse Phase Protein Array Data.
Byron, Adam
2017-01-01
Molecular profiling of proteins and phosphoproteins using a reverse phase protein array (RPPA) platform, with a panel of target-specific antibodies, enables the parallel, quantitative proteomic analysis of many biological samples in a microarray format. Hence, RPPA analysis can generate a high volume of multidimensional data that must be effectively interrogated and interpreted. A range of computational techniques for data mining can be applied to detect and explore data structure and to form functional predictions from large datasets. Here, two approaches for the computational analysis of RPPA data are detailed: the identification of similar patterns of protein expression by hierarchical cluster analysis and the modeling of protein interactions and signaling relationships by network analysis. The protocols use freely available, cross-platform software, are easy to implement, and do not require any programming expertise. Serving as data-driven starting points for further in-depth analysis, validation, and biological experimentation, these and related bioinformatic approaches can accelerate the functional interpretation of RPPA data.
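A minimal single-linkage agglomerative clustering sketch in the spirit of the hierarchical cluster analysis described. The protein names and distances are hypothetical, and real RPPA workflows use dedicated packages rather than this naive cubic-time loop:

```python
def single_linkage(labels, dist):
    """Naive single-linkage agglomerative clustering. `dist` maps a
    frozenset of two labels to a distance (e.g. 1 - correlation between
    two proteins' expression profiles across samples). Returns the
    merge history as (members, merge_distance) tuples."""
    clusters = [{lab} for lab in labels]
    merges = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist[frozenset((a, b))]
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merged = clusters[i] | clusters[j]
        merges.append((sorted(merged), round(d, 2)))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return merges

# Hypothetical pairwise distances between four protein/phosphoprotein targets.
d = {frozenset(pair): v for pair, v in [
    (("AKT", "pAKT"), 0.1), (("AKT", "ERK"), 0.7), (("AKT", "pERK"), 0.8),
    (("pAKT", "ERK"), 0.6), (("pAKT", "pERK"), 0.9), (("ERK", "pERK"), 0.2)]}
merges = single_linkage(["AKT", "pAKT", "ERK", "pERK"], d)
print(merges)
```

The merge history is what a dendrogram draws: each kinase pairs with its phospho-form first, and the two branches join last at the largest distance.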
Development and psychometric testing of the active aging scale for Thai adults.
Thanakwang, Kattika; Isaramalai, Sang-Arun; Hatthakit, Urai
2014-01-01
Active aging is central to enhancing the quality of life for older adults, but its conceptualization is not often made explicit for Asian elderly people. Little is known about active aging in older Thai adults, and there has been no development of scales to measure the expression of active aging attributes. The aim of this study was to develop a culturally relevant composite scale of active aging for Thai adults (AAS-Thai) and to evaluate its reliability and validity. Eight steps of scale development were followed: 1) using focus groups and in-depth interviews, 2) gathering input from existing studies, 3) developing preliminary quantitative measures, 4) reviewing for content validity by an expert panel, 5) conducting cognitive interviews, 6) pilot testing, 7) performing a nationwide survey, and 8) testing psychometric properties. In a nationwide survey, 500 subjects were randomly recruited using a stratified sampling technique. Statistical analyses included exploratory factor analysis, item analysis, and measures of internal consistency, concurrent validity, and test-retest reliability. Principal component factor analysis with varimax rotation resulted in a final 36-item scale consisting of seven factors of active aging: 1) being self-reliant, 2) being actively engaged with society, 3) developing spiritual wisdom, 4) building up financial security, 5) maintaining a healthy lifestyle, 6) engaging in active learning, and 7) strengthening family ties to ensure care in later life. These factors explained 69% of the total variance. Cronbach's alpha coefficient for the overall AAS-Thai was 0.95 and varied between 0.81 and 0.91 for the seven subscales. Concurrent validity and test-retest reliability were confirmed. The AAS-Thai demonstrated acceptable overall validity and reliability for measuring the multidimensional attributes of active aging in a Thai context.
This newly developed instrument is ready for use as a screening tool to assess active aging levels among older Thai adults in both community and clinical practice settings.
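The internal-consistency statistic reported above (Cronbach's alpha of 0.95 overall) can be computed directly from an item-score matrix. A minimal sketch with invented toy data, not the AAS-Thai survey responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Toy data: three respondents answering four consistent Likert items
scores = np.array([[4, 4, 5, 4],
                   [2, 2, 3, 2],
                   [5, 4, 5, 5]])
alpha = cronbach_alpha(scores)
```

Because the toy items move together across respondents, alpha comes out high, as expected for an internally consistent scale.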
Fan, Shou-Zen; Abbod, Maysam F.
2018-01-01
Estimating the depth of anaesthesia (DoA) in operations has always been a challenging issue due to the underlying complexity of the brain mechanisms. Electroencephalogram (EEG) signals are undoubtedly the most widely used signals for measuring DoA. In this paper, a novel EEG-based index is proposed to evaluate DoA for 24 patients receiving general anaesthesia with different levels of unconsciousness. The Sample Entropy (SampEn) algorithm was utilised in order to acquire the chaotic features of the signals. After calculating the SampEn from the EEG signals, Random Forest was utilised for developing learning regression models with the Bispectral index (BIS) as the target. Correlation coefficient, mean absolute error, and area under the curve (AUC) were used to verify the perioperative performance of the proposed method. Validation comparisons with typical nonstationary signal analysis methods (i.e., recurrence analysis and permutation entropy) and regression methods (i.e., neural network and support vector machine) were conducted. To further verify the accuracy and validity of the proposed methodology, the data were divided into four unconsciousness-level groups on the basis of BIS levels. Subsequently, analysis of variance (ANOVA) was applied to the corresponding index (i.e., regression output). Results indicate that the correlation coefficient improved to 0.72 ± 0.09 after filtering and to 0.90 ± 0.05 after regression from the initial values of 0.51 ± 0.17. Similarly, the final mean absolute error dramatically declined to 5.22 ± 2.12. In addition, the ultimate AUC increased to 0.98 ± 0.02, and the ANOVA analysis indicates that each of the four groups of different anaesthetic levels demonstrated a significant difference from the nearest levels. Furthermore, the Random Forest output was largely linear in relation to BIS, yielding better DoA prediction accuracy.
In conclusion, the proposed method provides a concrete basis for monitoring patients’ anaesthetic level during surgeries. PMID:29844970
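The SampEn feature extraction described above can be sketched as follows. This is a simplified illustration of the standard algorithm (template length m, tolerance r as a fraction of the signal's standard deviation), not the authors' implementation:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample Entropy: -ln(A/B), where B counts template matches of
    length m and A counts matches of length m+1, within tolerance r
    (Chebyshev distance). Lower values indicate more regular signals."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float('inf')

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 8 * np.pi, 200))  # predictable signal
noisy = rng.normal(size=200)                       # white noise
```

A regular oscillation yields a much lower SampEn than white noise, which is the property exploited when tracking anaesthetic depth from EEG.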
2nd NASA CFD Validation Workshop
NASA Technical Reports Server (NTRS)
1990-01-01
The purpose of the workshop was to review NASA's progress in CFD validation since the first workshop (held at Ames in 1987) and to affirm the future direction of the NASA CFD validation program. The first session consisted of overviews of CFD validation research at each of the three OAET research centers and at Marshall Space Flight Center. The second session consisted of in-depth technical presentations of the best examples of CFD validation work at each center (including Marshall). On the second day the workshop divided into three working groups to discuss CFD validation progress and needs in the subsonic, high-speed, and hypersonic speed ranges. The emphasis of the working groups was on propulsion.
Improved Boundary Layer Depth Retrievals from MPLNET
NASA Technical Reports Server (NTRS)
Lewis, Jasper R.; Welton, Ellsworth J.; Molod, Andrea M.; Joseph, Everette
2013-01-01
Continuous lidar observations of the planetary boundary layer (PBL) depth have been made at the Micropulse Lidar Network (MPLNET) site in Greenbelt, MD since April 2001. However, because of issues with the operational PBL depth algorithm, the data are not reliable for determining seasonal and diurnal trends. Therefore, an improved PBL depth algorithm has been developed which uses a combination of the wavelet technique and image processing. The new algorithm is less susceptible to contamination by clouds and residual layers, and in general produces lower PBL depths. A 2010 comparison shows the operational algorithm overestimates the daily mean PBL depth when compared to the improved algorithm (1.85 and 1.07 km, respectively). The improved MPLNET PBL depths are validated using radiosonde comparisons, which suggest the algorithm performs well in determining the depth of a fully developed PBL. A comparison with the Goddard Earth Observing System-version 5 (GEOS-5) model suggests that the model may underestimate the maximum daytime PBL depth by 410 m during the spring and summer. The best agreement between MPLNET and GEOS-5 occurred during the fall, and they differed most in the winter.
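The wavelet technique referred to above is commonly implemented as a Haar covariance transform, which locates the sharp decrease in backscatter at the PBL top. The following is a minimal sketch on a synthetic profile, not the MPLNET algorithm itself:

```python
import numpy as np

def haar_covariance_transform(profile, z, dilation):
    """Covariance of a backscatter profile with a Haar step function;
    the maximum marks the sharpest decrease, e.g. the PBL top."""
    out = np.full(len(z), np.nan)
    dz = z[1] - z[0]
    half = int(round(dilation / (2 * dz)))
    for i in range(half, len(z) - half):
        below = profile[i - half:i].mean()  # mean backscatter just below z[i]
        above = profile[i:i + half].mean()  # mean backscatter just above z[i]
        out[i] = below - above              # large where backscatter drops
    return out

# Synthetic profile: high aerosol backscatter below a 1.2 km boundary layer
z = np.arange(0.0, 3.0, 0.03)                                # height, km
profile = np.where(z < 1.2, 1.0, 0.2) + 0.02 * np.sin(40 * z)
wct = haar_covariance_transform(profile, z, dilation=0.3)
pbl_depth = z[np.nanargmax(wct)]
```

On this clean synthetic profile the transform peaks at the prescribed 1.2 km inversion; real lidar profiles additionally require the cloud and residual-layer screening the abstract mentions.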
The use of the FACT-H&N (v4) in clinical settings within a developing country: a mixed method study.
Bilal, Sobia; Doss, Jennifer Geraldine; Rogers, Simon N
2014-12-01
In the last decade there has been increasing awareness of the 'quality of life' (QOL) of cancer survivors in developing countries. This study aimed to cross-culturally adapt and validate the FACT-H&N (v4) in the Urdu language for Pakistani head and neck cancer patients. The 'same language adaptation method' was used. Cognitive debriefing through in-depth interviews of 25 patients was done to assess semantic, operational and conceptual equivalence. The validation phase included 50 patients to evaluate the psychometric properties. The translated FACT-H&N was easily comprehended (100%). Cronbach's alpha for the FACT-G subscales ranged from 0.726 to 0.969. The head and neck subscale and the Pakistani questions subscale showed low internal consistency (0.426 and 0.541, respectively). The instrument demonstrated known-group validity in differentiating patients of different clinical stages, treatment status and tumor sites (p < 0.05). Most FACT summary scales correlated strongly with each other (r > 0.75) and showed convergent validity (r > 0.90), with little discriminant validity. Factor analysis revealed six factors explaining 85.1% of the total variance, with a very good Kaiser-Meyer-Olkin measure (>0.8) and a highly significant Bartlett's test of sphericity (p < 0.001). The FACT-H&N, cross-culturally adapted into Urdu, showed adequate reliability and validity for incorporation into Pakistani clinical settings for head and neck cancer patients.
Zhaohua Dai; Carl Trettin; Changsheng Li; Devendra M. Amatya; Ge Sun; Harbin Li
2010-01-01
A physically based distributed hydrological model, MIKE SHE, was used to evaluate the effects of altered temperature and precipitation regimes on the streamflow and water table in a forested watershed on the southeastern Atlantic coastal plain. The model calibration and validation against both streamflow and water table depth showed that the MIKE SHE was applicable for...
Kim, Dongyoung; Yang, Jun-Ho; Choi, Soojin; Yoh, Jack J
2018-01-01
Environments affect mineral surfaces, and surface contamination or alteration can provide potential information for understanding their regional environments. However, when investigating mineral surfaces, mineral and environmental elements appear mixed in the data, which makes it difficult to determine their atomic compositions independently. In this research, we developed four analytical methods to separate mineral and environmental elements into positive and negative spectra based on depth profiling data obtained with laser-induced breakdown spectroscopy (LIBS). The principle of the methods is to use how intensity varies with depth to create a new spectrum. The methods were applied to five mineral samples exposed to four environmental conditions: seawater, crude oil, sulfuric acid, and air as a control. The proposed methods were then validated by applying the resultant spectra to principal component analysis, and the data were classified by the environmental conditions and atomic compositions of the minerals. By applying the methods, the atomic information of both the minerals and the environmental conditions was successfully inferred from the resultant spectra.
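The principal component analysis step used for validation can be sketched with a plain SVD-based PCA. The toy "spectra" below are illustrative arrays, not LIBS data; the two groups differ only in which band carries a marker line:

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via SVD on mean-centred data; returns scores and components."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components]

# Toy "spectra": two exposure groups differing in one emission band
rng = np.random.default_rng(1)
group_a = rng.normal(0.0, 0.1, size=(10, 50)); group_a[:, 10] += 2.0
group_b = rng.normal(0.0, 0.1, size=(10, 50)); group_b[:, 30] += 2.0
scores, components = pca(np.vstack([group_a, group_b]))
```

The first principal component separates the two groups, mirroring how spectra from different environmental exposures cluster apart in PCA score space.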
NASA Astrophysics Data System (ADS)
Gianetta, Ivan; Schwarz, Massimiliano; Glenz, Christian; Lammeranner, Walter
2013-04-01
In recent years the effects of roots on river banks and levees have been the subject of major discussions. The main issue with the presence of woody vegetation on levees is the possibility that roots increase internal erosion processes and that the superimposed load of large trees compromises the integrity of these structures. However, ecologists and landscape managers argue that eliminating the natural vegetation from the riverbanks also means eliminating biotopes, strengthening anthropisation of the landscape, and limiting recreation areas. In the context of the third correction of the Rhone in Switzerland, the discussion on new levee geometries and the implementation of woody vegetation on them led to a detailed analysis of this issue for this specific case. The objective of this study was to describe quantitatively the processes and factors that influence root distribution on levees and to test modeling approaches for the simulation of vertical root distribution with laboratory and field data. An extension of an eco-hydrological analytic model that considers climatic and pedological conditions for the quantification of vertical root distribution was validated with data on willow (Salix purpurea) roots grown under controlled conditions, provided by the University of Vienna (BOKU). Furthermore, root distribution data from four transversal sections of a levee near Visp (canton Wallis, Switzerland) were used to validate the model. The positions of the levee sections were chosen based on the species and dimensions of the woody vegetation. The dominant species present in the sections were birch (Betula pendula) and poplar (Populus nigra). For each section a grid of 50x50 cm was created to count and measure the roots. The results show that the vertical distribution of root density under controlled growing conditions has an exponential form, decreasing with increasing soil depth, and can be well described by the eco-hydrological model.
Conversely, field data on vertical root distribution follow a non-exponential function and cannot be fully described by the model. A compacted layer of stones at about 2 m depth is considered the limiting factor for rooting depth on the analyzed levee. The collected data and the knowledge gained from quantitative analysis represent the starting point for a discussion on new levee geometries and the development of new strategies for the implementation of woody vegetation on levees. A long-term monitoring project for analyzing the effectiveness of new strategies for implementing vegetation on levees is considered an important prospect for future studies on this topic.
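The exponential vertical root distribution found under controlled conditions can be recovered from measurements by a log-linear fit. A minimal sketch with synthetic values; the model form rho(z) = rho0 * exp(-b*z) is assumed here purely for illustration:

```python
import numpy as np

def fit_exponential_root_profile(depth, density):
    """Fit rho(z) = rho0 * exp(-b*z) by linear regression on log density."""
    b, log_rho0 = np.polyfit(depth, np.log(density), 1)
    return np.exp(log_rho0), -b   # (surface density rho0, decay rate b)

# Synthetic root densities decreasing exponentially with depth (m)
z = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.1])
rho = 120.0 * np.exp(-2.5 * z)
rho0, decay = fit_exponential_root_profile(z, rho)
```

On field data such as the Visp levee sections, systematic residuals from this fit (e.g. near a compacted stone layer) are exactly what signals a non-exponential profile.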
Multiple ion beam irradiation for the study of radiation damage in materials
NASA Astrophysics Data System (ADS)
Taller, Stephen; Woodley, David; Getto, Elizabeth; Monterrosa, Anthony M.; Jiao, Zhijie; Toader, Ovidiu; Naab, Fabian; Kubley, Thomas; Dwaraknath, Shyam; Was, Gary S.
2017-12-01
The effects of transmutation produced helium and hydrogen must be included in ion irradiation experiments to emulate the microstructure of reactor irradiated materials. Descriptions of the criteria and systems necessary for multiple ion beam irradiation are presented and validated experimentally. A calculation methodology was developed to quantify the spatial distribution, implantation depth and amount of energy-degraded and implanted light ions when using a thin foil rotating energy degrader during multi-ion beam irradiation. A dual ion implantation using 1.34 MeV Fe+ ions and energy-degraded D+ ions was conducted on single crystal silicon to benchmark the dosimetry used for multi-ion beam irradiations. Secondary Ion Mass Spectroscopy (SIMS) analysis showed good agreement with calculations of the peak implantation depth and the total amount of iron and deuterium implanted. The results establish the capability to quantify the ion fluence from both heavy ion beams and energy-degraded light ion beams for the purpose of using multi-ion beam irradiations to emulate reactor irradiated microstructures.
Kivlan, Benjamin R; Martin, Robroy L
2012-08-01
The purpose of this study was to systematically review the literature for functional performance tests with evidence of reliability and validity that could be used for a young, athletic population with hip dysfunction. A search of the PubMed and SPORTDiscus databases was performed to identify movement, balance, hop/jump, or agility functional performance tests from the current peer-reviewed literature used to assess function of the hip in young, athletic subjects. The single-leg stance, deep squat, single-leg squat, and star excursion balance test (SEBT) demonstrated evidence of validity and normative data for score interpretation. The single-leg stance test and SEBT have evidence of validity with association to hip abductor function. The deep squat test demonstrated evidence as a functional performance test for evaluating femoroacetabular impingement (FAI). Hop/jump tests and agility tests have no reported evidence of reliability or validity in a population of subjects with hip pathology. Use of functional performance tests in the assessment of hip dysfunction has not been well established in the current literature. Diminished squat depth and provocation of pain during the single-leg balance test have been associated with patients diagnosed with FAI and gluteal tendinopathy, respectively. The SEBT and single-leg squat tests provided evidence of convergent validity through an analysis of kinematics and muscle function in normal subjects. Reliability of functional performance tests has not been established in patients with hip dysfunction. Further study is needed to establish reliability and validity of functional performance tests that can be used in a young, athletic population with hip dysfunction. Level of evidence: 2b (systematic review of literature).
A discussion on validity of the diffusion theory by Monte Carlo method
NASA Astrophysics Data System (ADS)
Peng, Dong-qing; Li, Hui; Xie, Shusen
2008-12-01
Diffusion theory is widely used as the basis of experiments and methods for determining the optical properties of biological tissues. A simple analytical solution can be obtained easily from the diffusion equation after a series of approximations, which invites a misinterpretation: that if several semi-infinite bio-tissues have the same effective attenuation coefficient, the distribution of light fluence in the tissues must also be the same. To assess the validity of this assumption, the depth-resolved internal fluence of several semi-infinite biological tissues with the same effective attenuation coefficient was simulated for a wide collimated beam using the Monte Carlo method under different conditions. The influence of tissue refractive index on the distribution of light fluence was also discussed in detail. Our results show that when several bio-tissues with the same effective attenuation coefficient also share the same refractive index, the depth-resolved internal fluence is the same; otherwise, it is not. A change in tissue refractive index affects the depth distribution of light in the tissue. The refractive index is therefore an important optical property of tissue and should be taken into account when using the diffusion approximation.
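The effective attenuation coefficient at the heart of this discussion is mu_eff = sqrt(3*mu_a*(mu_a + mu_s')). The sketch below shows how two tissues with different absorption and reduced scattering coefficients can share the same mu_eff, and hence the same deep-fluence decay exp(-mu_eff*z) under diffusion theory; the coefficient values are illustrative only:

```python
import numpy as np

def mu_eff(mu_a, mu_s_prime):
    """Effective attenuation coefficient of diffusion theory (1/mm)."""
    return np.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

# Tissue 1: low absorption, high reduced scattering (mu_a, mu_s' in 1/mm)
t1 = (0.02, 1.0)
target = mu_eff(*t1)

# Tissue 2: double the absorption; solve 3*mu_a2*(mu_a2 + mu_s2) = mu_eff^2
mu_a2 = 0.04
mu_s2 = target**2 / (3.0 * mu_a2) - mu_a2
t2 = (mu_a2, mu_s2)

# Deep diffuse fluence decays as exp(-mu_eff * z) in both cases, even
# though the near-surface distributions (and Monte Carlo results for
# differing refractive indices) can differ
z = np.linspace(0.0, 10.0, 101)
phi1 = np.exp(-mu_eff(*t1) * z)
phi2 = np.exp(-mu_eff(*t2) * z)
```

This identical asymptotic decay is precisely why the misinterpretation discussed in the abstract arises, and why Monte Carlo simulation is needed to expose differences that diffusion theory hides.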
NASA Astrophysics Data System (ADS)
Lopes, Marta; Murta, Alberto G.; Cabral, Henrique N.
2006-03-01
The existence of two species of the genus Macroramphosus Lacepède 1803 has been discussed based on morphometric characters, diet composition and depth distribution. Another species, the boarfish Capros aper (Linnaeus 1758), caught along the Portuguese coast, shows two different morphotypes, one with smaller eyes and a deeper body than the other, occurring with intermediate forms. In both snipefish and boarfish, no sexual dimorphism was found with respect to shape and length relationships; however, females in both genera were on average larger than males. A multidimensional scaling analysis was performed using Procrustes distances in order to check whether shape geometry was effective in distinguishing the species of snipefish as well as the morphotypes of boarfish. A multivariate discriminant analysis using morphometric characters of snipefish and boarfish was carried out to validate the visual criteria for distinguishing species and morphotypes, respectively. Morphometric characters revealed great discriminatory power in distinguishing morphotypes. Both snipefish and boarfish are very abundant in Portuguese waters, showing two well-defined morphologies and intermediate forms. This study suggests that there may be two different species in each genus and that further studies should investigate whether there is reproductive isolation between the morphotypes of boarfish and validate the species of snipefish.
[A snow depth inversion method for the HJ-1B satellite data].
Dong, Ting-Xu; Jiang, Hong-Bo; Chen, Chao; Qin, Qi-Ming
2011-10-01
The importance of snow is self-evident, and the harms caused by snow have also received increasing attention. At present, retrieval of snow depth has mainly relied on microwave remote sensing data or a small amount of optical remote sensing data, such as meteorological or MODIS data. China's small satellites for environment and disaster monitoring differ considerably from meteorological and MODIS data in both spectral and spatial resolution. In this paper, aimed at the HJ-1B data, snow spectra of different underlying surfaces and depths were surveyed. The correlation between a snow cover index and snow depth was also analyzed to establish a model for snow depth retrieval using HJ-1B data. The validation results showed that the model can meet the requirements of real-time snow depth monitoring under conventional snow conditions.
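An index-based retrieval of the kind described can be sketched as a normalized-difference snow index regressed against measured depth. The band choice and every number below are illustrative assumptions, not the HJ-1B model from the paper:

```python
import numpy as np

def ndsi(green, swir):
    """Normalized-difference snow index from green and SWIR reflectance."""
    return (green - swir) / (green + swir)

# Hypothetical field pairs: snow index vs. measured snow depth (cm)
index = np.array([0.45, 0.55, 0.62, 0.70, 0.78])
depth = np.array([5.0, 9.0, 12.0, 16.0, 20.0])

# Empirical linear retrieval model fitted to the field survey
slope, intercept = np.polyfit(index, depth, 1)
predict = lambda s: slope * s + intercept
```

Once fitted against ground truth, such a model can be applied per pixel for near-real-time depth mapping, which is the workflow the abstract validates for HJ-1B.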
Dingman, R.J.; Angino, E.E.
1969-01-01
Chemical analyses of approximately 1,881 samples of water from selected Kansas brines define the variations of water chemistry with depth and aquifer age. The most concentrated brines are found in the Permian rocks, which occupy the intermediate section of the geologic column of this area. Salinity decreases below the Permian until the Ordovician (Arbuckle) horizon is reached and then increases until the Precambrian basement rocks are reached. Chemically, the petroleum brines studied in this small area fit the generally accepted pattern of an increase in calcium, sodium and chloride content with increasing salinity. They do not fit the often-predicted trend of increases in the calcium to chloride ratio, calcium content and salinity with depth and geologic age. The calcium to chloride ratio tends to be asymptotic to about 0.2 with increasing chloride content. Sulfate tends to decrease with increasing calcium content. Bicarbonate content is relatively constant with depth. If many of the hypotheses concerning the chemistry of petroleum brines are valid, then the brines studied are anomalous. An alternative lies in accepting the thesis that exceptions to these hypotheses are rapidly becoming the rule and that indeed we still do not have a valid and general hypothesis to explain the origin and chemistry of petroleum brines. © 1969.
Yi, Ming; Stephens, Robert M.
2008-01-01
Analysis of microarray and other high throughput data often involves identification of genes consistently up or down-regulated across samples as the first step in extraction of biological meaning. This gene-level paradigm can be limited as a result of valid sample fluctuations and biological complexities. In this report, we describe a novel method, SLEPR, which eliminates this limitation by relying on pathway-level consistencies. Our method first selects the sample-level differentiated genes from each individual sample, capturing genes missed by other analysis methods, ascertains the enrichment levels of associated pathways from each of those lists, and then ranks annotated pathways based on the consistency of enrichment levels of individual samples from both sample classes. As a proof of concept, we have used this method to analyze three public microarray datasets with a direct comparison with the GSEA method, one of the most popular pathway-level analysis methods in the field. We found that our method was able to reproduce the earlier observations with significant improvements in depth of coverage for validated or expected biological themes, but also produced additional insights that make biological sense. This new method extends existing analyses approaches and facilitates integration of different types of HTP data. PMID:18818771
Completing the Feedback Loop: The Impact of Chlorophyll Data Assimilation on the Ocean State
NASA Technical Reports Server (NTRS)
Borovikov, Anna; Keppenne, Christian; Kovach, Robin
2015-01-01
In anticipation of the integration of a full biochemical model into the next generation GMAO coupled system, an intermediate solution has been implemented to estimate the penetration depth (1/Kd_PAR) of ocean radiation based on the chlorophyll concentration. The chlorophyll is modeled as a tracer with sources and sinks coming from the assimilation of MODIS chlorophyll data. Two experiments were conducted with the coupled ocean-atmosphere model. In the first, climatological values of Kd_PAR were used. In the second, retrieved daily chlorophyll concentrations were assimilated and Kd_PAR was derived according to Morel et al. (2007). No other data were assimilated, in order to isolate the effects of the time-evolving chlorophyll field. The daily MODIS Kd_PAR product was used to validate the skill of the penetration depth estimation, and the MERRA-OCEAN re-analysis was used as a benchmark to study the sensitivity of the upper ocean heat content and vertical temperature distribution to the chlorophyll input. In the experiment with daily chlorophyll data assimilation, the penetration depth was estimated more accurately, especially in the tropics. As a result, the temperature bias of the model was reduced. A notably robust albeit small (2-5 percent) improvement was found across the equatorial Pacific Ocean, which is a critical region for seasonal to inter-annual prediction.
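The chlorophyll-to-attenuation step can be sketched as a power-law model of the general form used in such bio-optical parameterizations. The coefficients below are placeholders for illustration, not the published Morel et al. (2007) values:

```python
import numpy as np

def kd_par_from_chl(chl, k_w=0.0166, a=0.07, e=0.67):
    """Attenuation model Kd_PAR = k_w + a * chl**e (1/m).
    k_w is a pure-water term; a and e are placeholder coefficients."""
    return k_w + a * np.power(chl, e)

chl = np.array([0.05, 0.3, 1.0, 5.0])   # chlorophyll, mg m^-3
kd = kd_par_from_chl(chl)
penetration_depth = 1.0 / kd            # e-folding depth of PAR, m
```

The qualitative behaviour is the one that matters for the coupled model: higher chlorophyll increases attenuation, shoaling the penetration depth and trapping solar heating nearer the surface.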
Smart, David R; Van den Broek, Cory; Nishi, Ron; Cooper, P David; Eastman, David
2014-09-01
Tasmania's aquaculture industry produces over 40,000 tonnes of fish annually, valued at over AUD500M. Aquaculture divers perform repetitive, short-duration bounce dives in fish pens to depths up to 21 metres' sea water (msw). Past high levels of decompression illness (DCI) may have resulted from these 'yo-yo' dives. This study aimed to assess working divers, using Doppler ultrasonic bubble detection, to determine if yo-yo diving was a risk factor for DCI, determine dive profiles with acceptable risk and investigate productivity improvement. Field data were collected from working divers during bounce diving at marine farms near Hobart, Australia. Ascent rates were less than 18 m·min⁻¹, with routine safety stops (3 min at 3 msw) during the final ascent. The Kisman-Masurel method was used to grade bubbling post dive as a means of assessing decompression stress. In accordance with Defence Research and Development Canada Toronto practice, dives were rejected as excessive risk if more than 50% of scores were over Grade 2. From 2002 to 2008, Doppler data were collected from 150 bounce-dive series (55 divers, 1,110 bounces). Three series of bounce profiles, characterized by in-water times, were validated: 13-15 msw, 10 bounces inside 75 min; 16-18 msw, six bounces inside 50 min; and 19-21 msw, four bounces inside 35 min. All had median bubble grades of 0. Further evaluation validated two successive series of bounces. Bubble grades were consistent with low-stress dive profiles. Bubble grades did not correlate with the number of bounces, but did correlate with ascent rate and in-water time. These data suggest bounce diving was not a major factor causing DCI in Tasmanian aquaculture divers. Analysis of field data has improved industry productivity by increasing the permissible number of bounces, compared to earlier empirically-derived tables, without compromising safety. 
The recommended Tasmanian Bounce Diving Tables provide guidance for bounce diving to a depth of 21 msw, and two successive bounce dive series in a day's diving.
Shape-from-focus by tensor voting.
Hariharan, R; Rajagopalan, A N
2012-07-01
In this correspondence, we address the task of recovering shape-from-focus (SFF) as a perceptual organization problem in 3-D. Using tensor voting, depth hypotheses from different focus operators are validated based on their likelihood to be part of a coherent 3-D surface, thereby exploiting scene geometry and focus information to generate reliable depth estimates. The proposed method is fast and yields significantly better results compared with existing SFF methods.
NASA Astrophysics Data System (ADS)
Mohrmann, J.; Ghate, V. P.; McCoy, I. L.; Bretherton, C. S.; Wood, R.; Minnis, P.; Palikonda, R.
2017-12-01
The Cloud System Evolution in the Trades (CSET) field campaign took place in July/August 2015 to study the evolution of clouds, precipitation, and aerosols in the stratocumulus-to-cumulus (Sc-Cu) transition region of the northeast Pacific marine boundary layer (MBL). Aircraft observations sampled across a wide range of cloud and aerosol conditions. The sampling strategy, in which MBL airmasses were sampled with the NSF/NCAR Gulfstream-V (HIAPER) and then resampled at their advected location two days later, resulted in a dataset of 14 paired flights suitable for Lagrangian analysis. This analysis shows that the Lagrangian coherence of long-lived species (namely CO and O3) across 48 hours is high, but that of subcloud aerosol, MBL depth, and cloud properties is limited. Geostationary satellite retrievals are compared against aircraft observations; these are combined with reanalysis data and HYSPLIT trajectories to document the Lagrangian evolution of cloud fraction, cloud droplet number concentration, liquid water path, estimated inversion strength (EIS), and MBL depth, which are used to expand upon and validate the aircraft-based analysis. Many of the trajectories sampled by the aircraft show a clear Sc-Cu transition. Although satellite cloud fraction and EIS were found to be strongly spatiotemporally correlated, changes in MBL cloud fraction along trajectories did not correlate with any measure of EIS forcing.
Cloud Optical Depth Measured with Ground-Based, Uncooled Infrared Imagers
NASA Technical Reports Server (NTRS)
Shaw, Joseph A.; Nugent, Paul W.; Pust, Nathan J.; Redman, Brian J.; Piazzolla, Sabino
2012-01-01
Recent advances in uncooled, low-cost, long-wave infrared imagers provide excellent opportunities for remotely deployed ground-based remote sensing systems. However, the use of these imagers in demanding atmospheric sensing applications requires that careful attention be paid to characterizing and calibrating the system. We have developed and are using several versions of the ground-based "Infrared Cloud Imager (ICI)" instrument to measure spatial and temporal statistics of clouds and cloud optical depth or attenuation for both climate research and Earth-space optical communications path characterization. In this paper we summarize the ICI instruments and calibration methodology, then show ICI-derived cloud optical depths that are validated using a dual-polarization cloud lidar system for thin clouds (optical depth of approximately 4 or less).
MCMEG: Simulations of both PDD and TPR for 6 MV LINAC photon beam using different MC codes
NASA Astrophysics Data System (ADS)
Fonseca, T. C. F.; Mendes, B. M.; Lacerda, M. A. S.; Silva, L. A. C.; Paixão, L.; Bastos, F. M.; Ramirez, J. V.; Junior, J. P. R.
2017-11-01
The Monte Carlo Modelling Expert Group (MCMEG) is an expert network specializing in Monte Carlo radiation transport and in modelling and simulation applied to the radiation protection and dosimetry research field. For its first inter-comparison task the group launched an exercise to model and simulate a 6 MV LINAC photon beam using the Monte Carlo codes available within their laboratories and to validate the simulated results by comparing them with experimental measurements carried out at the National Cancer Institute (INCA) in Rio de Janeiro, Brazil. The experimental measurements were performed using an ionization chamber with calibration traceable to a Secondary Standard Dosimetry Laboratory (SSDL). The detector was immersed in a water phantom at different depths and was irradiated with a radiation field size of 10×10 cm2. This exposure setup was used to determine the dosimetric parameters Percentage Depth Dose (PDD) and Tissue Phantom Ratio (TPR). The validation process compares the MC calculated results to the experimentally measured PDD20,10 and TPR20,10. Simulations were performed reproducing the experimental TPR20,10 quality index, which provides a satisfactory description of both the PDD curve and the transverse profiles at the two depths measured. This paper reports in detail the modelling process using the MCNPX, MCNP6, EGSnrc and Penelope Monte Carlo codes, the source and tally descriptions, the validation processes and the results.
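The two beam-quality parameters compared in the validation can be computed directly from a depth-dose curve. The conversion shown is the widely quoted empirical relation TPR20,10 = 1.2661*PDD20,10 - 0.0595; the dose curve itself is a toy exponential, not LINAC data:

```python
import numpy as np

def pdd_ratio(dose_vs_depth, depths):
    """PDD20,10: ratio of doses at 20 cm and 10 cm depth on the PDD curve."""
    d20 = np.interp(20.0, depths, dose_vs_depth)
    d10 = np.interp(10.0, depths, dose_vs_depth)
    return d20 / d10

def tpr_from_pdd(pdd2010):
    """Empirical conversion from PDD20,10 to the TPR20,10 quality index."""
    return 1.2661 * pdd2010 - 0.0595

depths = np.linspace(0.0, 30.0, 301)        # depth in water, cm
dose = 100.0 * np.exp(-0.05 * depths)       # toy PDD curve beyond d_max
p = pdd_ratio(dose, depths)
q = tpr_from_pdd(p)
```

Matching the measured TPR20,10 in this way is what anchors the simulated beam spectrum to the experimental beam before comparing full PDD curves and profiles.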
NASA Astrophysics Data System (ADS)
Bodnar, Victoria; Ganeev, Alexander; Gubal, Anna; Solovyev, Nikolay; Glumov, Oleg; Yakobson, Viktor; Murin, Igor
2018-07-01
A pulsed direct current glow discharge time-of-flight mass spectrometry (GD TOF MS) method is proposed for the quantification of fluorine in insoluble crystal materials, with fluorine-doped potassium titanyl phosphate (KTP) KTiOPO4:KF as an example. The following parameters were optimized: repelling pulse delay, discharge duration, discharge voltage, and pressure in the discharge cell. Effective ionization of fluorine in the space between sampler and skimmer under short repelling pulse delay, related to the high-energy electron impact at the discharge front, has been demonstrated. A combination of instrumental and mathematical correction approaches was used to correct for the interferences of 38Ar2+ and 1H316O+ on 19F+. To maintain surface conductivity in the dielectric KTP crystals and ensure their effective sputtering in the combined hollow cathode cell, a silver suspension applied by the dip-coating method was employed. Fluorine quantification was performed using relative sensitivity factors. Analysis of a reference material and scanning electron microscopy-energy dispersive X-ray spectroscopy were used for validation. The fluorine limit of detection by pulsed direct current GD TOF MS was 0.01 mass%. Real sample analysis showed that fluorine appears to be inhomogeneously distributed in the crystals; therefore, depth profiling of F, K, O, and P was performed to evaluate the crystals' non-stoichiometry. The approaches designed allow fluorine quantification in insoluble dielectric materials with minimal sample preparation and destructivity, as well as depth profiling to assess crystal non-stoichiometry.
Xie, Zhixiao; Liu, Zhongwei; Jones, John W.; Higer, Aaron L.; Telis, Pamela A.
2011-01-01
The hydrologic regime is a critical limiting factor in the delicate ecosystem of the greater Everglades freshwater wetlands in south Florida that has been severely altered by management activities in the past several decades. "Getting the water right" is regarded as the key to successful restoration of this unique wetland ecosystem. An essential component to represent and model its hydrologic regime, specifically water depth, is an accurate ground Digital Elevation Model (DEM). The Everglades Depth Estimation Network (EDEN) supplies important hydrologic data, and its products (including a ground DEM) have been well received by scientists and resource managers involved in Everglades restoration. This study improves the EDEN DEMs of the Loxahatchee National Wildlife Refuge, also known as Water Conservation Area 1 (WCA1), by adopting a landscape unit (LU) based interpolation approach. The study first filtered the input elevation data based on newly available vegetation data, and then created a separate geostatistical model (universal kriging) for each LU. The resultant DEMs have encouraging cross-validation and validation results, especially since the validation is based on an independent elevation dataset (derived by subtracting water depth measurements from EDEN water surface elevations). The DEM product of this study will directly benefit hydrologic and ecological studies as well as restoration efforts. The study will also be valuable for a broad range of wetland studies.
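The landscape-unit-based interpolation idea, fitting a separate spatial model within each vegetation-defined unit, can be sketched as follows. For brevity this uses inverse-distance weighting within each unit rather than the universal kriging of the study; all coordinates and values are invented:

```python
import numpy as np

def idw(points, values, query, power=2.0):
    """Inverse-distance-weighted estimate at one query location."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < 1e-12):                 # query coincides with a sample
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

def interpolate_by_unit(points, values, units, queries, query_units):
    """Interpolate elevations separately within each landscape unit, so
    vegetation-related bias in one unit does not leak into another."""
    out = np.empty(len(queries))
    for j, (q, lu) in enumerate(zip(queries, query_units)):
        sel = units == lu
        out[j] = idw(points[sel], values[sel], q)
    return out

pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.],
                [2., 0.], [3., 0.], [2., 1.], [3., 1.]])
vals = np.array([0.0, 1.0, 1.0, 2.0, 5.0, 5.0, 5.0, 5.0])
units = np.array(['A'] * 4 + ['B'] * 4)
elev = interpolate_by_unit(pts, vals, units,
                           np.array([[0.5, 0.5], [2.5, 0.5]]),
                           np.array(['A', 'B']))
```

Partitioning by unit is the key design choice: each unit gets its own spatial model, matching the study's per-LU universal kriging while keeping the sketch short.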
Royal London space analysis: plaster versus digital model assessment.
Grewal, Balpreet; Lee, Robert T; Zou, Lifong; Johal, Ama
2017-06-01
With the advent of digital study models, being able to evaluate space requirements is valuable to treatment planning and to justifying any required extraction pattern. This study was undertaken to compare the validity and reliability of the Royal London space analysis (RLSA) undertaken on plaster as compared with digital models. A pilot study (n = 5) was undertaken on plaster and digital models to evaluate the feasibility of digital space planning; it also informed the sample size calculation, on the basis of which 30 sets of study models meeting specified inclusion criteria were selected. All five components of the RLSA, namely crowding, depth of occlusal curve, arch expansion/contraction, incisor antero-posterior advancement, and incisor inclination (assessed from the pre-treatment lateral cephalogram), were accounted for in relation to both model types. The plaster models served as the gold standard. Intra-operator measurement error (reliability) was evaluated along with a direct comparison of the measured digital values (validity) with the plaster models. The measurement error, or coefficient of repeatability, was comparable for plaster and digital space analyses and ranged from 0.66 to 0.95 mm. No difference was found between the space analysis performed in either the upper or lower dental arch; hence, the null hypothesis was accepted. The digital model measurements were consistently larger, albeit by a relatively small amount, than the plaster models (0.35 mm upper arch and 0.32 mm lower arch). No difference was detected in the RLSA when performed using either plaster or digital models. Thus, digital space analysis provides a valid and reproducible alternative method in the new era of digital records. © The Author 2016. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com
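A coefficient of repeatability of the kind reported above can be computed from paired repeated measurements. One common Bland-Altman-style definition (assumed here; the paper may use a variant) is 1.96 times the standard deviation of the paired differences. The measurement values below are invented.

```python
import numpy as np

first  = np.array([2.1, 3.4, 1.8, 4.0, 2.9])  # mm, first measurement session
second = np.array([2.3, 3.1, 1.9, 4.2, 2.7])  # mm, repeat measurement session

diff = first - second
# coefficient of repeatability: 1.96 x SD of the paired differences
cr = 1.96 * np.std(diff, ddof=1)
print(round(float(cr), 2))
```

Values in the 0.66-0.95 mm range, as reported, would indicate that a repeat measurement is expected to fall within roughly that distance of the first 95 % of the time.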
NASA Astrophysics Data System (ADS)
Safarpour, S.; Abdullah, K.; Lim, H. S.; Dadras, M.
2017-09-01
Air pollution is a growing problem arising from domestic heating, high density of vehicle traffic, electricity production, and expanding commercial and industrial activities, all increasing in parallel with urban population. Monitoring and forecasting of air quality parameters are important due to their health impact. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light extinction coefficient over a vertical atmospheric column of unit cross section, which represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. Seasonal AOD values at 550 nm, derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard NASA's Terra satellite for the 10-year period 2000-2010, were used to test 7 different spatial interpolation methods in the present study. The accuracy of estimations was assessed through visual analysis as well as independent validation based on basic statistics, such as root mean square error (RMSE) and correlation coefficient. Based on the RMSE and R values of predictions made using measured values from 2000 to 2010, radial basis functions (RBFs) yielded the best results for spring, summer and winter, and ordinary kriging yielded the best results for fall.
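The RBF-versus-validation-statistics workflow can be sketched with SciPy's `Rbf` interpolator standing in for whichever RBF implementation the study used. The station coordinates and the smooth synthetic "AOD" field below are invented; the point is only the fit-then-score pattern (interpolate from a subset, compute RMSE and correlation at held-out points).

```python
import numpy as np
from scipy.interpolate import Rbf

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 40)
y = rng.uniform(0, 10, 40)
aod = 0.2 + 0.02 * x + 0.01 * y  # smooth synthetic "AOD" surface

# fit on 30 "stations", validate on the 10 held out
rbf = Rbf(x[:30], y[:30], aod[:30], function="multiquadric")
pred = rbf(x[30:], y[30:])
rmse = np.sqrt(np.mean((pred - aod[30:]) ** 2))
r = np.corrcoef(pred, aod[30:])[0, 1]
print(round(float(rmse), 4), round(float(r), 3))
```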
Scour around vertical wall abutment in cohesionless sediment bed
NASA Astrophysics Data System (ADS)
Pandey, M.; Sharma, P. K.; Ahmad, Z.
2017-12-01
During floods, bridge failure is a major disaster, and the sub-structures (bridge abutments and piers) are mainly responsible for such failures; it is very risky if these sub-structures are not properly designed and analysed. Scour is a natural phenomenon in rivers and streams caused by the erosive action of flowing water on the bed and banks. River-bed erosion and scour undermine the abutment, which is generally recognized as the main cause of abutment failure. Most previous studies on scour around abutments have concerned the prediction of the maximum scour depth (Lim, 1994; Melville, 1992, 1997; Dey and Barbhuiya, 2005). Dey and Barbhuiya (2005) proposed a relationship, based on laboratory experiments, for computing the maximum scour depth around a vertical wall abutment, but it was confined to their experimental data only. This relationship therefore needs to be verified against other researchers' data to support its reliability and wider applicability. In this study, controlled experiments have been carried out on scour near a vertical wall abutment. The data collected in this study, along with data from previous investigators, have been used to check the validity of the existing maximum-scour-depth equations (Lim, 1994; Melville, 1992, 1997; Dey and Barbhuiya, 2005) for vertical wall abutments. A new relationship is proposed to estimate the maximum scour depth around a vertical wall abutment; it gives better results than all existing relationships.
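Empirical scour-depth relationships of the kind discussed above are typically power laws fitted to laboratory data. As a generic illustration (not the paper's actual equation), the sketch below fits d_s = K * Fr**a to synthetic flume data with `scipy.optimize.curve_fit`; the Froude numbers and scour depths are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def scour_law(fr, K, a):
    # generic power-law form common to empirical scour-depth formulas
    return K * fr ** a

fr = np.array([0.2, 0.3, 0.4, 0.5, 0.6])  # densimetric Froude numbers (synthetic)
ds = 2.0 * fr ** 1.5                      # synthetic noiseless scour depths
(K, a), _ = curve_fit(scour_law, fr, ds, p0=(1.0, 1.0))
print(round(float(K), 2), round(float(a), 2))
```

With noiseless synthetic data the fit recovers the generating parameters; with real flume data the residuals are what motivate proposing a new relationship.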
Singec, Ilyas; Crain, Andrew M; Hou, Junjie; Tobe, Brian T D; Talantova, Maria; Winquist, Alicia A; Doctor, Kutbuddin S; Choy, Jennifer; Huang, Xiayu; La Monaca, Esther; Horn, David M; Wolf, Dieter A; Lipton, Stuart A; Gutierrez, Gustavo J; Brill, Laurence M; Snyder, Evan Y
2016-09-13
Controlled differentiation of human embryonic stem cells (hESCs) can be utilized for precise analysis of cell type identities during early development. We established a highly efficient neural induction strategy and an improved analytical platform, and determined proteomic and phosphoproteomic profiles of hESCs and their specified multipotent neural stem cell derivatives (hNSCs). This quantitative dataset (nearly 13,000 proteins and 60,000 phosphorylation sites) provides unique molecular insights into pluripotency and neural lineage entry. Systems-level comparative analysis of proteins (e.g., transcription factors, epigenetic regulators, kinase families), phosphorylation sites, and numerous biological pathways allowed the identification of distinct signatures in pluripotent and multipotent cells. Furthermore, as predicted by the dataset, we functionally validated an autocrine/paracrine mechanism by demonstrating that the secreted protein midkine is a regulator of neural specification. This resource is freely available to the scientific community, including a searchable website, PluriProt. Published by Elsevier Inc.
Influence of number and depth of magnetic mirror on Alfvénic gap eigenmode
NASA Astrophysics Data System (ADS)
Chang, Lei; Hu, Ning; Yao, Jianyao
2016-10-01
Alfvénic gap eigenmode (AGE) can eject energetic particles from confinement and thereby threaten the success of magnetically controlled fusion. A low-temperature plasma cylinder is a promising candidate for studying this eigenmode, owing to easy diagnostic access and simple geometry; the idea is to arrange a periodic array of magnetic mirrors along the plasma cylinder and introduce a local defect to break the field periodicity. The present work validates this idea by reproducing a clear AGE inside a spectral gap and, more importantly, details the influence of the number and depth (or modulation factor) of the magnetic mirrors on the characteristics of the AGE. Results show that the AGE is suppressed by other modes inside the spectral gap when the number of magnetic mirrors is below a certain value, owing to a weakened Bragg effect. The structure and frequency of the AGE remain unchanged as the number of magnetic mirrors decreases, as long as the number is sufficient for AGE formation. The width of the spectral gap and the decay constant (inverse of decay length) of the AGE are linearly proportional to the depth of the magnetic mirrors, implying easier observation of the AGE with a larger mirror depth. The frequency of the AGE shifts to a lower range as the depth is increased, possibly because the plasma is no longer frozen to the field lines and the small-perturbation analysis becomes invalid. Nevertheless, it is exciting to find that the depth of field modulation can be increased to form an AGE with a very limited number of magnetic mirrors. This is of particular interest for the experimental implementation of AGE on a low-temperature plasma cylinder of limited length. Project supported by the National Natural Science Foundation of China (Grant Nos. 11405271, 11372104, 75121543, 11332013, 11372363, and 11502037).
The research of breaking rock with liquid-solid two-phase jet flow
NASA Astrophysics Data System (ADS)
Cheng, X. Z.; Ren, F. S.; Fang, T. C.
2018-03-01
Particle impact drilling is an efficient way of breaking rock, used mainly in deep and ultra-deep drilling. A differential equation was established based on Hertz contact theory and Newton's second law; through analysis of a particle impacting rock, the penetration depth of particles into the rock was obtained. A mathematical model was also established for the effect of water impact on crack growth. The results show that when the water jet speed exceeds 40 m/s, the rock stability coefficient exceeds 1.0 and rock fracture appears. Experiments on a particle impact drilling facility, in which the cuttings and crack sizes were analyzed by scanning electron microscopy, were consistent with the theoretical calculation, verifying the validity of the model.
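The Hertz-plus-Newton penetration depth mentioned above can be illustrated with the standard elastic-impact energy balance (a textbook sketch, not the paper's exact model): equating the kinetic energy (1/2) m v^2 to the stored contact energy (2/5) k d^(5/2), with Hertz stiffness k = (4/3) E* sqrt(R), gives the maximum indentation. All material values below are rough assumed placeholders.

```python
def hertz_max_penetration(m, v, radius, e_star):
    """Maximum indentation (m) of a rigid sphere impacting an elastic
    half-space: (1/2) m v^2 = (2/5) k d^(5/2), with k = (4/3) E* sqrt(R)."""
    k = (4.0 / 3.0) * e_star * radius ** 0.5
    return (5.0 * m * v ** 2 / (4.0 * k)) ** 0.4

# Assumed values: ~1 mm radius steel particle at 100 m/s hitting rock with
# an effective contact modulus of 30 GPa (placeholders, not from the paper).
depth = hertz_max_penetration(m=3.3e-5, v=100.0, radius=1.0e-3, e_star=30.0e9)
print(round(depth * 1000, 3), "mm")
```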
NASA Astrophysics Data System (ADS)
Maierhofer, Christiane; Röllig, Mathias; Gower, Michael; Lodeiro, Maria; Baker, Graham; Monte, Christian; Adibekyan, Albert; Gutschwager, Berndt; Knazowicka, Lenka; Blahut, Ales
2018-05-01
To assure the safety and reliability of components and constructions in energy applications made of fiber-reinforced polymers (e.g., blades of wind turbines and tidal power plants, engine chassis, flexible oil and gas pipelines), innovative non-destructive testing methods are required. Within the European project VITCEA, complementary methods (shearography, microwave, ultrasonics, and thermography) have been further developed and validated. Together with partners from industry, test specimens containing different artificial and natural defect artefacts have been constructed and selected on-site. As base materials, carbon and glass fibers in different orientations and layerings embedded in different matrix materials (epoxy, polyamide) have been considered. In this contribution, the validation of flash and lock-in thermography for these testing problems is presented. Data analysis is based on thermal contrasts and phase evaluation techniques. Experimental data are compared to analytical and numerical models. Among other things, the influence of two different types of artificial defects (flat bottom holes and delaminations) with varying diameters and depths, and of two different materials (CFRP and GFRP) with unidirectional and quasi-isotropic fiber alignment, is discussed.
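The phase evaluation used in lock-in thermography can be sketched in a few lines: the phase of each pixel's temperature oscillation at the modulation frequency is read off the corresponding DFT bin, and defect-induced phase lags show up as contrast in the resulting phase image. The signal parameters below are synthetic.

```python
import numpy as np

fs, f_mod, n = 50.0, 1.0, 500      # sampling rate (Hz), lock-in frequency (Hz), samples
t = np.arange(n) / fs              # 10 s record = exactly 10 modulation periods
phase_true = 0.6                   # rad, simulated defect-induced phase lag
signal = 2.0 + 0.5 * np.cos(2 * np.pi * f_mod * t - phase_true)

spectrum = np.fft.rfft(signal)
bin_idx = int(round(f_mod * n / fs))  # DFT bin of the modulation frequency
phase = -np.angle(spectrum[bin_idx])  # recovered phase lag
print(round(float(phase), 3))
```

Because the record spans an integer number of modulation periods, the bin is leakage-free and the phase is recovered exactly; in practice, windowing or four-point lock-in correlation handles non-integer records.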
Does pressure matter in creating burns in a porcine model?
Singer, Adam J; Taira, Breena R; Anderson, Ryon; McClain, Steve A; Rosenberg, Lior
2010-01-01
Multiple animal models of burn injury have been reported, but only some of these have been fully validated. One of the most popular approaches is burn infliction by direct contact with the heat source. Previous investigators have reported that the pressure of application of the contact burn infliction device does not affect the depth of injury. We hypothesized that the depth of injury would increase with increasing pressure of application in a porcine burn model. Forty mid-dermal contact burns measuring 25 x 25 mm were created on the back and flanks of an anesthetized domestic pig (50 kg) using a brass bar preheated in 80 degrees C water, applied for 20 or 30 seconds. The bars were applied using a spring-loaded device designed to control the amount of pressure applied to the skin. The pressures applied by the brass bar were gravity (0.2 kg), 2.0, 2.7, 3.8, and 4.5 kg, in replicates of eight. One hour later, 8-mm full-thickness biopsies were obtained for histologic analysis using Elastic Van Gieson staining by a board-certified dermatopathologist masked to burn conditions. The depth of complete and partial collagen injury was measured from the level of the basement membrane using a microscopic micrometer measuring lens. Groups were compared with analysis of variance (ANOVA). The association between depth of injury and pressure was determined with Pearson correlations. The mean (95% confidence interval) depths of complete collagen injury with 30-second exposures were as follows: gravity only, 0.51 (0.39-0.66) mm; 2.0 kg, 0.72 (0.55-0.88) mm; 2.7 kg, 0.68 (0.55-1.00) mm; 3.8 kg, 0.92 (0.80-1.00) mm; and 4.5 kg, 1.65 (1.55-1.75) mm. The differences in depth of injury between the various pressure groups were significant (ANOVA, P < .001).
The mean (95% confidence interval) depths of partial collagen injury were as follows: gravity only, 1.10 (0.92-1.30) mm; 2.0 kg, 1.46 (1.28-1.63) mm; 2.7 kg, 1.51 (1.34-1.64) mm; 3.8 kg, 1.82 (1.71-1.94) mm; and 4.5 kg, 2.50 (2.39-2.62) mm (ANOVA, P = .001). The correlations between pressure of application and depth of complete and partial collagen injury were 0.73 (P < .001) and 0.65 (P < .001), respectively. There is a direct association between the pressure of burn device application and depth of injury. Future studies should standardize and specify the amount of pressure applied using the burn infliction device.
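The Pearson analysis described above can be reproduced in outline. For illustration the sketch uses the reported group means (pressure vs. mean depth of complete collagen injury); note the paper's r = 0.73 was computed on the raw replicates, so the value here differs.

```python
import numpy as np
from scipy.stats import pearsonr

pressure = np.array([0.2, 2.0, 2.7, 3.8, 4.5])       # kg applied
depth    = np.array([0.51, 0.72, 0.68, 0.92, 1.65])  # mm, reported group means

r, p = pearsonr(pressure, depth)  # correlation of depth of injury with pressure
print(round(float(r), 2))
```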
Detection of tunnel excavation using fiber optic reflectometry: experimental validation
NASA Astrophysics Data System (ADS)
Linker, Raphael; Klar, Assaf
2013-06-01
Cross-border smuggling tunnels enable unmonitored movement of people and goods, and pose a severe threat to homeland security. In recent years, we have been working on the development of a system based on fiber-optic Brillouin time domain reflectometry (BOTDR) for detecting tunnel excavation. In two previous SPIE publications we reported the initial development of the system as well as its validation using small-scale experiments. This paper reports, for the first time, results of full-scale experiments and discusses the system performance. The results confirm that distributed measurement of strain profiles in fiber cables buried at shallow depth enables detection of tunnel excavation and, with proper data processing, precise localization of the tunnel as well as reasonable estimation of its depth.
Depth cue reliance in surgeons and medical students.
Shah, J; Buckley, D; Frisby, J; Darzi, A
2003-09-01
Depth perception is reduced in endoscopic surgery, although little is known about the effect this has on surgical performance. To assess the role of depth cues, 45 subjects completed tests of depth cue reliance. Surgical skill was assessed using the Minimally Invasive Surgical Trainer-Virtual Reality, a previously validated laparoscopic simulator. We could demonstrate no difference in reliance on three depth cues, namely stereo, texture, and outline, between surgeons and medical students. When correlated with surgical performance, however, greater reliance on stereo was associated with better performance in medical students and poorer performance in surgeons. We suggest that surgeons learn to adapt to the non-stereo environment of MIS, and this is the first study to show evidence of this phenomenon. The difference in stereo reliance reflects the experience surgeons have with laparoscopy compared with medical students, who have none.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchard, Yann; Royer, Alain; O'Neill, Norman T.
Multiband downwelling thermal measurements of zenith sky radiance, along with cloud boundary heights, were used in a retrieval algorithm to estimate cloud optical depth and effective particle diameter of thin ice clouds in the Canadian High Arctic. Ground-based thermal infrared (IR) radiances for 150 semitransparent ice clouds cases were acquired at the Polar Environment Atmospheric Research Laboratory (PEARL) in Eureka, Nunavut, Canada (80° N, 86° W). We analyzed and quantified the sensitivity of downwelling thermal radiance to several cloud parameters including optical depth, effective particle diameter and shape, water vapor content, cloud geometric thickness and cloud base altitude. A lookup table retrieval method was used to successfully extract, through an optimal estimation method, cloud optical depth up to a maximum value of 2.6 and to separate thin ice clouds into two classes: (1) TIC1 clouds characterized by small crystals (effective particle diameter ≤ 30 µm), and (2) TIC2 clouds characterized by large ice crystals (effective particle diameter > 30 µm). The retrieval technique was validated using data from the Arctic High Spectral Resolution Lidar (AHSRL) and Millimeter Wave Cloud Radar (MMCR). Inversions were performed over three polar winters and results showed a significant correlation (R2 = 0.95) for cloud optical depth retrievals and an overall accuracy of 83 % for the classification of TIC1 and TIC2 clouds. A partial validation relative to an algorithm based on high spectral resolution downwelling IR radiance measurements between 8 and 21 µm was also performed. It confirms the robustness of the optical depth retrieval and the fact that the broadband thermal radiometer retrieval was sensitive to small particle (TIC1) sizes.
NASA Astrophysics Data System (ADS)
Blanchard, Yann; Royer, Alain; O'Neill, Norman T.; Turner, David D.; Eloranta, Edwin W.
2017-06-01
Multiband downwelling thermal measurements of zenith sky radiance, along with cloud boundary heights, were used in a retrieval algorithm to estimate cloud optical depth and effective particle diameter of thin ice clouds in the Canadian High Arctic. Ground-based thermal infrared (IR) radiances for 150 semitransparent ice clouds cases were acquired at the Polar Environment Atmospheric Research Laboratory (PEARL) in Eureka, Nunavut, Canada (80° N, 86° W). We analyzed and quantified the sensitivity of downwelling thermal radiance to several cloud parameters including optical depth, effective particle diameter and shape, water vapor content, cloud geometric thickness and cloud base altitude. A lookup table retrieval method was used to successfully extract, through an optimal estimation method, cloud optical depth up to a maximum value of 2.6 and to separate thin ice clouds into two classes: (1) TIC1 clouds characterized by small crystals (effective particle diameter ≤ 30 µm), and (2) TIC2 clouds characterized by large ice crystals (effective particle diameter > 30 µm). The retrieval technique was validated using data from the Arctic High Spectral Resolution Lidar (AHSRL) and Millimeter Wave Cloud Radar (MMCR). Inversions were performed over three polar winters and results showed a significant correlation (R2 = 0.95) for cloud optical depth retrievals and an overall accuracy of 83 % for the classification of TIC1 and TIC2 clouds. A partial validation relative to an algorithm based on high spectral resolution downwelling IR radiance measurements between 8 and 21 µm was also performed. It confirms the robustness of the optical depth retrieval and the fact that the broadband thermal radiometer retrieval was sensitive to small particle (TIC1) sizes.
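The lookup-table retrieval idea can be sketched with a toy forward model: simulated brightness temperatures are tabulated over candidate optical depths, and the retrieval selects the optical depth whose simulated radiance best matches the observation. This nearest-match search is a simple stand-in for the paper's optimal estimation; the forward model and all numbers below are invented.

```python
import numpy as np

taus = np.linspace(0.1, 2.6, 26)              # candidate cloud optical depths
# fake forward model: brightness temperature (K) saturating with optical depth
lut = 230.0 + 20.0 * (1.0 - np.exp(-taus))

observed_bt = 243.0                           # hypothetical measured BT (K)
retrieved_tau = taus[np.argmin(np.abs(lut - observed_bt))]
print(round(float(retrieved_tau), 2))
```

A real retrieval would weight the misfit by measurement and forward-model error covariances rather than take a plain nearest match.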
Blanchard, Yann; Royer, Alain; O'Neill, Norman T.; ...
2017-06-09
Multiband downwelling thermal measurements of zenith sky radiance, along with cloud boundary heights, were used in a retrieval algorithm to estimate cloud optical depth and effective particle diameter of thin ice clouds in the Canadian High Arctic. Ground-based thermal infrared (IR) radiances for 150 semitransparent ice clouds cases were acquired at the Polar Environment Atmospheric Research Laboratory (PEARL) in Eureka, Nunavut, Canada (80° N, 86° W). We analyzed and quantified the sensitivity of downwelling thermal radiance to several cloud parameters including optical depth, effective particle diameter and shape, water vapor content, cloud geometric thickness and cloud base altitude. A lookup table retrieval method was used to successfully extract, through an optimal estimation method, cloud optical depth up to a maximum value of 2.6 and to separate thin ice clouds into two classes: (1) TIC1 clouds characterized by small crystals (effective particle diameter ≤ 30 µm), and (2) TIC2 clouds characterized by large ice crystals (effective particle diameter > 30 µm). The retrieval technique was validated using data from the Arctic High Spectral Resolution Lidar (AHSRL) and Millimeter Wave Cloud Radar (MMCR). Inversions were performed over three polar winters and results showed a significant correlation (R2 = 0.95) for cloud optical depth retrievals and an overall accuracy of 83 % for the classification of TIC1 and TIC2 clouds. A partial validation relative to an algorithm based on high spectral resolution downwelling IR radiance measurements between 8 and 21 µm was also performed. It confirms the robustness of the optical depth retrieval and the fact that the broadband thermal radiometer retrieval was sensitive to small particle (TIC1) sizes.
Flood loss modelling with FLF-IT: a new flood loss function for Italian residential structures
NASA Astrophysics Data System (ADS)
Hasanzadeh Nafari, Roozbeh; Amadio, Mattia; Ngo, Tuan; Mysiak, Jaroslav
2017-07-01
The damage triggered by different flood events costs the Italian economy millions of euros each year. This cost is likely to increase in the future due to climate variability and economic development. In order to avoid or reduce such significant financial losses, risk management requires tools which can provide a reliable estimate of potential flood impacts across the country. Flood loss functions are an internationally accepted method for estimating physical flood damage in urban areas. In this study, we derived a new flood loss function for Italian residential structures (FLF-IT), on the basis of empirical damage data collected from a recent flood event in the region of Emilia-Romagna. The function was developed based on a new Australian approach (FLFA), which represents the confidence limits that exist around the parameterized functional depth-damage relationship. After model calibration, the performance of the model was validated for the prediction of loss ratios and absolute damage values. It was also contrasted with an uncalibrated relative model that is widely used in Europe. In this regard, a three-fold cross-validation procedure was carried out over the empirical sample to measure the range of uncertainty from the actual damage data. The predictive capability has also been studied for some sub-classes of water depth. The validation procedure shows that the newly derived function performs well (no bias and only 10 % mean absolute error), especially when the water depth is high. Results of these validation tests illustrate the importance of model calibration. The advantages of the FLF-IT model over other Italian models include calibration with empirical data, consideration of the epistemic uncertainty of data, and the ability to change parameters based on building practices across Italy.
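The calibrate-then-cross-validate pattern described above can be sketched with a saturating depth-damage curve of the general shape such functions take (loss ratio rising with water depth toward 1). The functional form, its parameter, and all data below are placeholders, not the calibrated FLF-IT model.

```python
import numpy as np

def loss_ratio(depth_m, a):
    # assumed saturating depth-damage curve (illustrative, not FLF-IT)
    return 1.0 - np.exp(-a * depth_m)

rng = np.random.default_rng(1)
depth = rng.uniform(0.1, 3.0, 30)                      # water depths (m), synthetic
observed = loss_ratio(depth, 0.4) + rng.normal(0.0, 0.02, 30)

folds = np.array_split(rng.permutation(30), 3)         # three-fold split
errors = []
for fold in folds:
    train = np.setdiff1d(np.arange(30), fold)
    # "calibrate" parameter a on the training folds by grid-search least squares
    grid = np.linspace(0.1, 1.0, 91)
    sse = [np.sum((observed[train] - loss_ratio(depth[train], a)) ** 2) for a in grid]
    a_hat = grid[int(np.argmin(sse))]
    # out-of-sample mean absolute error on the held-out fold
    errors.append(float(np.mean(np.abs(observed[fold] - loss_ratio(depth[fold], a_hat)))))
print(round(sum(errors) / 3, 3))
```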
Alaska North Slope Tundra Travel Model and Validation Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harry R. Bader; Jacynthe Guimond
2006-03-01
The Alaska Department of Natural Resources (DNR), Division of Mining, Land, and Water manages cross-country travel, typically associated with hydrocarbon exploration and development, on Alaska's arctic North Slope. This project is intended to provide natural resource managers with objective, quantitative data to assist decision making regarding opening of the tundra to cross-country travel. DNR designed standardized, controlled field trials, with baseline data, to investigate the relationships present between winter exploration vehicle treatments and the independent variables of ground hardness, snow depth, and snow slab thickness, as they relate to the dependent variables of active layer depth, soil moisture, and photosynthetically active radiation (a proxy for plant disturbance). Changes in the dependent variables were used as indicators of tundra disturbance. Two main tundra community types were studied: Coastal Plain (wet graminoid/moist sedge shrub) and Foothills (tussock). DNR constructed four models to address physical soil properties: two models for each main community type, one predicting change in depth of active layer and a second predicting change in soil moisture. DNR also investigated the limited potential management utility in using soil temperature, the amount of photosynthetically active radiation (PAR) absorbed by plants, and changes in microphotography as tools for the identification of disturbance in the field. DNR operated under the assumption that changes in the abiotic factors of active layer depth and soil moisture drive alteration in tundra vegetation structure and composition. Statistically significant differences in depth of active layer, soil moisture at a 15 cm depth, soil temperature at a 15 cm depth, and the absorption of photosynthetically active radiation were found among treatment cells and among treatment types. 
The models were unable to thoroughly investigate the interacting role between snow depth and disturbance due to a lack of variability in snow depth cover throughout the period of field experimentation. The amount of change in disturbance indicators was greater in the tundra communities of the Foothills than in those of the Coastal Plain. However the overall level of change in both community types was less than expected. In Coastal Plain communities, ground hardness and snow slab thickness were found to play an important role in change in active layer depth and soil moisture as a result of treatment. In the Foothills communities, snow cover had the most influence on active layer depth and soil moisture as a result of treatment. Once certain minimum thresholds for ground hardness, snow slab thickness, and snow depth were attained, it appeared that little or no additive effect was realized regarding increased resistance to disturbance in the tundra communities studied. DNR used the results of this modeling project to set a standard for maximum permissible disturbance of cross-country tundra travel, with the threshold set below the widely accepted standard of Low Disturbance levels (as determined by the U.S. Fish and Wildlife Service). DNR followed the modeling project with a validation study, which seemed to support the field trial conclusions and indicated that the standard set for maximum permissible disturbance exhibits a conservative bias in favor of environmental protection. Finally DNR established a quick and efficient tool for visual estimations of disturbance to determine when investment in field measurements is warranted. This Visual Assessment System (VAS) seemed to support the plot disturbance measurements taken during the modeling and validation phases of this project.
Extension of the frequency-domain pFFT method for wave structure interaction in finite depth
NASA Astrophysics Data System (ADS)
Teng, Bin; Song, Zhi-jie
2017-06-01
To analyze wave interaction with a large scale body in the frequency domain, a precorrected Fast Fourier Transform (pFFT) method has been proposed for infinite depth problems with the deep water Green function, as it can form a matrix with Toeplitz and Hankel properties. In this paper, a method is proposed to decompose the finite depth Green function into two terms, which form matrices with Toeplitz and Hankel properties, respectively. A pFFT method for finite depth problems is then developed. Based on the pFFT method, a numerical code, pFFT-HOBEM, is developed with the discretization of high order elements. The model is validated, and examinations of the computing efficiency and memory requirement of the new method have also been carried out. It shows that the new method has the same advantages as that for infinite depth.
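The reason Toeplitz structure matters here is that it enables fast matrix-vector products: a Toeplitz matrix can be embedded in a circulant one and applied in O(N log N) via the FFT, which is the core trick pFFT-type methods exploit. The sketch below demonstrates this with arbitrary matrix entries (nothing to do with the actual Green function).

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply a Toeplitz matrix (given by its first column and first row,
    with first_col[0] == first_row[0]) by x, using circulant embedding + FFT."""
    n = len(x)
    # first column of the 2n-size circulant embedding:
    # [col entries, 0, row entries reversed (excluding the shared corner)]
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

col = np.array([4.0, 1.0, 0.5])   # first column of T
row = np.array([4.0, 2.0, 3.0])   # first row of T
x = np.array([1.0, 2.0, 3.0])
# dense equivalent: T = [[4, 2, 3], [1, 4, 2], [0.5, 1, 4]]
print(toeplitz_matvec(col, row, x))
```

Hankel matrices get the same treatment after reversing the input vector, which is why the Toeplitz-plus-Hankel decomposition of the finite depth Green function is the key enabling step.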
There’s plenty of light at the bottom: statistics of photon penetration depth in random media
Martelli, Fabrizio; Binzoni, Tiziano; Pifferi, Antonio; Spinelli, Lorenzo; Farina, Andrea; Torricelli, Alessandro
2016-01-01
We propose a comprehensive statistical approach describing the penetration depth of light in random media. The presented theory exploits the concept of probability density function f(z|ρ, t) for the maximum depth reached by the photons that are eventually re-emitted from the surface of the medium at distance ρ and time t. Analytical formulas for f, for the mean maximum depth 〈zmax〉 and for the mean average depth reached by the detected photons at the surface of a diffusive slab are derived within the framework of the diffusion approximation to the radiative transfer equation, both in the time domain and the continuous wave domain. Validation of the theory by means of comparisons with Monte Carlo simulations is also presented. The results are of interest for many research fields such as biomedical optics, advanced microscopy and disordered photonics. PMID:27256988
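The quantity the theory above describes, the maximum depth reached by photons that are eventually re-emitted at the surface, can be illustrated with a crude Monte Carlo stand-in: a simple unbiased 1D lattice walk rather than the paper's radiative-transfer model. The walk and its parameters are purely illustrative.

```python
import random

def max_depth_of_reemitted_walk(rng, max_steps=10_000):
    """Random walk in depth z (z = 0 is the surface), launched one step below
    it. Return the maximum depth reached before the walker exits through the
    surface, or None if it has not exited within max_steps."""
    z, zmax = 1, 1
    for _ in range(max_steps):
        z += 1 if rng.random() < 0.5 else -1
        zmax = max(zmax, z)
        if z == 0:          # photon re-emitted at the surface
            return zmax
    return None             # still wandering: excluded, like undetected photons

rng = random.Random(42)
depths = [d for d in (max_depth_of_reemitted_walk(rng) for _ in range(2000)) if d]
mean_zmax = sum(depths) / len(depths)
print(mean_zmax > 1.0)
```

Even this toy model shows the paper's headline point: detected photons typically dive well below the surface before returning, so there is indeed "plenty of light at the bottom".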
Anti-scar Treatment for Deep Partial-thickness Burn Wounds
2016-10-01
[Figure and table residue from the report: composition expressed as percentages and grams (components included benzyl alcohol and a placebo); Figure 1 presented an in vitro release study of PF ointment and a burn-wound pathology score. Burn wounds were validated using H&E, Masson's trichrome and TUNEL staining to assess the depth of damage, with red arrows indicating the burn depth and the boundary between dead (above) and live (below) tissue by TUNEL staining.]
NASA Astrophysics Data System (ADS)
Green, Daniel; Yu, Dapeng; Pattison, Ian
2017-04-01
Surface water flooding occurs when intense precipitation events overwhelm the drainage capacity of an area and excess overland flow is unable to infiltrate into the ground or drain via natural or artificial drainage channels, such as river channels, manholes or SuDS. In the UK, over 3 million properties are at risk from surface water flooding alone, accounting for approximately one third of the UK's flood risk. The risk of surface water flooding is projected to increase due to several factors, including population increases, land-use alterations and future climatic changes in precipitation resulting in an increased magnitude and frequency of intense precipitation events. Numerical inundation modelling is a well-established method of investigating surface water flood risk, allowing the researcher to gain a detailed understanding of the depth, velocity, discharge and extent of actual or hypothetical flood scenarios over a wide range of spatial scales. However, numerical models require calibration of key hydrological and hydraulic parameters (e.g. infiltration, evapotranspiration, drainage rate, roughness) to ensure model outputs adequately represent the flood event being studied. Furthermore, validation data such as crowdsourced images or spatially-referenced flood depth collected during a flood event may provide a useful validation of inundation depth and extent for actual flood events. In this study, a simplified two-dimensional inertial based flood inundation model requiring minimal pre-processing of data (FloodMap-HydroInundation) was used to model a short-duration, intense rainfall event (27.8 mm in 15 minutes) that occurred over the Loughborough University campus on the 28th June 2012. High resolution (1 m horizontal, +/- 15 cm vertical) DEM data, rasterised Ordnance Survey topographic structures data and precipitation data recorded at the University weather station were used to conduct numerical modelling over the small (< 2 km2), contained urban catchment. 
To validate model outputs and allow a reconstruction of spatially referenced flood depth and extent during the flood event, crowdsourced images were obtained from social media (Twitter) and from individuals present during the flood event via the University noticeboards, as well as using dGPS flood depth data collected at one of the worst-affected areas. An investigation into the sensitivity of key model parameters suggests that the numerical model code is highly sensitive to changes within the recommended range of roughness and infiltration values, as well as to changes in DEM and building mesh resolutions, but less sensitive to changes in evapotranspiration and drainage capacity parameters. The study also demonstrates the potential of using crowdsourced images to validate urban surface water flood models and inform parameterisation when calibrating numerical inundation models.
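The one-at-a-time sensitivity screening described above can be sketched in a few lines. The runoff model below is a deliberately crude stand-in (not FloodMap-HydroInundation), and every parameter value is invented for illustration:

```python
# One-at-a-time (OAT) parameter sensitivity sweep, as commonly used when
# calibrating inundation models. The model here is a simple stand-in
# runoff-coefficient calculation, NOT FloodMap-HydroInundation.

def runoff_depth_mm(rain_mm, infiltration_mm_per_hr, duration_hr, roughness):
    """Toy effective-runoff model: rainfall minus infiltration losses,
    crudely attenuated by surface roughness (hypothetical relationship)."""
    effective = max(rain_mm - infiltration_mm_per_hr * duration_hr, 0.0)
    return effective / (1.0 + roughness)

# Baseline loosely echoes the event above: 27.8 mm in 15 minutes.
baseline = dict(rain_mm=27.8, infiltration_mm_per_hr=10.0,
                duration_hr=0.25, roughness=0.03)

# Vary each parameter +/-50% around its baseline and record the output spread.
sensitivity = {}
for name in ("infiltration_mm_per_hr", "roughness"):
    outputs = []
    for factor in (0.5, 1.0, 1.5):
        params = dict(baseline)
        params[name] = baseline[name] * factor
        outputs.append(runoff_depth_mm(**params))
    sensitivity[name] = max(outputs) - min(outputs)

for name, spread in sensitivity.items():
    print(f"{name}: output spread {spread:.2f} mm")
```

A larger spread flags a parameter whose calibration matters more for this event; a real study would sweep the full hydraulic model instead of this toy function.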
A recent deep earthquake doublet in light of long-term evolution of Nazca subduction
NASA Astrophysics Data System (ADS)
Zahradník, J.; Čížková, H.; Bina, C. R.; Sokos, E.; Janský, J.; Tavera, H.; Carvalho, J.
2017-03-01
Earthquake faulting at ~600 km depth remains puzzling. Here we present a new kinematic interpretation of two Mw 7.6 earthquakes of November 24, 2015. In contrast to teleseismic analysis of this doublet, we use regional seismic data providing robust two-point source models, further validated by regional back-projection and rupture-stop analysis. The doublet represents segmented rupture of a ~30-year gap in a narrow, deep fault zone, fully consistent with the stress field derived from neighbouring 1976-2015 earthquakes. Seismic observations are interpreted using a geodynamic model of regional subduction, incorporating realistic rheology and major phase transitions, yielding a model slab that is nearly vertical in the deep-earthquake zone but stagnant below 660 km, consistent with tomographic imaging. Geodynamically modelled stresses match the seismically inferred stress field, where the steeply down-dip orientation of compressive stress axes at ~600 km arises from combined viscous and buoyant forces resisting slab penetration into the lower mantle and deformation associated with slab buckling and stagnation. Observed fault-rupture geometry, demonstrated likelihood of seismic triggering, and high model temperatures in young subducted lithosphere together favour nanometric crystallisation (and associated grain-boundary sliding) attending high-pressure dehydration as a likely seismogenic mechanism, unless a segment of much older lithosphere is present at depth.
A new world survey expression for cosmic ray vertical intensity vs. depth in standard rock
NASA Technical Reports Server (NTRS)
Crouch, M.
1985-01-01
The cosmic ray data on vertical intensity versus depth below 10^5 g/cm^2 are fitted to a five-parameter empirical formula to give an analytical expression for interpretation of muon fluxes in underground measurements. This expression updates earlier published results and complements the more precise curves obtained by numerical integration or Monte Carlo techniques, in which the fit is made to an energy spectrum at the top of the atmosphere. The expression is valid in the transitional region where neutrino-induced muons begin to be important, as well as at great depths where this component becomes dominant.
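A fit of this kind can be reproduced with standard tools. The double-exponential functional form and the synthetic data below are illustrative assumptions, not Crouch's actual five-parameter formula or the world-survey data:

```python
# Sketch of fitting a five-parameter empirical formula to vertical muon
# intensity vs. depth with scipy. The generic double-exponential form and
# synthetic data are assumptions for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def intensity(depth, a1, l1, a2, l2, c):
    """I(X) = a1*exp(-X/l1) + a2*exp(-X/l2) + c."""
    return a1 * np.exp(-depth / l1) + a2 * np.exp(-depth / l2) + c

rng = np.random.default_rng(0)
depth = np.linspace(1.0, 12.0, 40)       # depth, arbitrary rescaled units
true = (75.0, 0.9, 2.0, 3.0, 1e-3)
data = intensity(depth, *true) * rng.normal(1.0, 0.01, depth.size)

# curve_fit does nonlinear least squares from the initial guess p0
popt, pcov = curve_fit(intensity, depth, data, p0=(50, 1, 1, 4, 0))
print("fitted parameters:", np.round(popt, 3))
```

The fitted parameters should recover the generating values to within the noise; real survey data would additionally need per-point uncertainties via the `sigma` argument.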
Control Performance, Aerodynamic Modeling, and Validation of Coupled Simulation Techniques for Guided Projectile Roll Dynamics
Sahu, Jubaraj; Fresconi, Frank; Heavey, Karen R. (Weapons and Materials Research)
2014-11-01
Projectile roll dynamics has been explored in depth in the literature. Of particular interest for this study are investigations into roll control.
Spring 2013 Graduate Engineering Internship Summary
NASA Technical Reports Server (NTRS)
Ehrlich, Joshua
2013-01-01
In the spring of 2013, I participated in the National Aeronautics and Space Administration (NASA) Pathways Intern Employment Program at the Kennedy Space Center (KSC) in Florida. This was my final internship opportunity with NASA, a third consecutive extension from a summer 2012 internship. Since the start of my tenure here at KSC, I have gained an invaluable depth of engineering knowledge and extensive hands-on experience. These opportunities have granted me the ability to enhance my systems engineering approach in the field of payload design and testing as well as develop a strong foundation in the area of composite fabrication and testing for repair design on space vehicle structures. As a systems engineer, I supported the systems engineering and integration team with final acceptance testing of the Vegetable Production System, commonly referred to as Veggie. Verification and validation (V and V) of Veggie was carried out prior to qualification testing of the payload, which incorporated the process of confirming the system's design requirements dependent on one or more validation methods: inspection, analysis, demonstration, and testing.
Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M
2010-01-01
This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance.
PMID:22319323
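The stereo quantization error analysed in this study follows from the standard pinhole disparity relation Z = f*b/d. A minimal sketch with a hypothetical camera rig, not the authors' sensor parameters:

```python
# Depth-quantization error for an ideal rectified stereo pair. With focal
# length f (pixels), baseline b (m) and disparity d (pixels), depth is
# Z = f*b/d, and a disparity error of delta_d pixels maps to a depth error
# of roughly Z^2 * delta_d / (f*b). Rig parameters are illustrative.

def depth_m(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

def depth_error_m(f_px, baseline_m, disparity_px, disparity_err_px=0.5):
    z = depth_m(f_px, baseline_m, disparity_px)
    return z * z * disparity_err_px / (f_px * baseline_m)

f_px, b_m = 800.0, 0.30          # hypothetical focal length and baseline
for d in (60.0, 24.0, 12.0):     # disparities for near-to-far pedestrians
    z = depth_m(f_px, b_m, d)
    print(f"disparity {d:5.1f} px -> depth {z:5.2f} m, "
          f"+/-{depth_error_m(f_px, b_m, d):.2f} m quantization error")
```

The quadratic growth of error with depth is why the choice of baseline and focal length must match the required pedestrian-detection range.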
Faddegon, Bruce A.; Shin, Jungwook; Castenada, Carlos M.; Ramos-Méndez, José; Daftari, Inder K.
2015-01-01
Purpose: To measure depth dose curves for a 67.5 ± 0.1 MeV proton beam for benchmarking and validation of Monte Carlo simulation. Methods: Depth dose curves were measured in 2 beam lines. Protons in the raw beam line traversed a Ta scattering foil, 0.1016 or 0.381 mm thick, a secondary emission monitor comprised of thin Al foils, and a thin Kapton exit window. The beam energy and peak width and the composition and density of material traversed by the beam were known with sufficient accuracy to permit benchmark quality measurements. Diodes for charged particle dosimetry from two different manufacturers were used to scan the depth dose curves with 0.003 mm depth reproducibility in a water tank placed 300 mm from the exit window. Depth in water was determined with an uncertainty of 0.15 mm, including the uncertainty in the water equivalent depth of the sensitive volume of the detector. Parallel-plate chambers were used to verify the accuracy of the shape of the Bragg peak and the peak-to-plateau ratio measured with the diodes. The uncertainty in the measured peak-to-plateau ratio was 4%. Depth dose curves were also measured with a diode for a Bragg curve and treatment beam spread out Bragg peak (SOBP) on the beam line used for eye treatment. The measurements were compared to Monte Carlo simulation done with geant4 using topas. Results: The 80% dose at the distal side of the Bragg peak for the thinner foil was at 37.47 ± 0.11 mm (average of measurement with diodes from two different manufacturers), compared to the simulated value of 37.20 mm. The 80% dose for the thicker foil was at 35.08 ± 0.15 mm, compared to the simulated value of 34.90 mm. The measured peak-to-plateau ratio was within one standard deviation experimental uncertainty of the simulated result for the thinnest foil and two standard deviations for the thickest foil. 
It was necessary to include the collimation in the simulation, which had a more pronounced effect on the peak-to-plateau ratio for the thicker foil. The treatment beam, being unfocussed, had a broader Bragg peak than the raw beam. A 1.3 ± 0.1 MeV FWHM peak width in the energy distribution was used in the simulation to match the Bragg peak width. An additional 1.3–2.24 mm of water in the water column was required over the nominal values to match the measured depth penetration. Conclusions: The proton Bragg curve measured for the 0.1016 mm thick Ta foil provided the most accurate benchmark, having a low contribution of proton scatter from upstream of the water tank. The accuracy was 0.15% in measured beam energy and 0.3% in measured depth penetration at the Bragg peak. The depth of the distal edge of the Bragg peak in the simulation fell short of measurement, suggesting that the mean ionization potential of water is 2–5 eV higher than the 78 eV used in the stopping power calculation for the simulation. The eye treatment beam line depth dose curves provide validation of Monte Carlo simulation of a Bragg curve and SOBP with 4%/2 mm accuracy. PMID:26133619
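Finding the distal 80% dose point quoted above amounts to interpolating on the falling edge of the measured Bragg curve. A sketch on a synthetic curve (the Gaussian-plus-plateau shape is an assumption, not the measured data):

```python
# Locate the distal 80% dose depth of a Bragg curve by linear interpolation
# on the falling edge. Depth-dose samples are synthetic stand-ins.
import numpy as np

def distal_80(depth_mm, dose):
    dose = np.asarray(dose, float) / np.max(dose)   # normalise to peak
    i_pk = int(np.argmax(dose))
    # walk the distal (falling) side until dose drops below 80% of peak
    for i in range(i_pk, len(dose) - 1):
        if dose[i] >= 0.8 > dose[i + 1]:
            frac = (dose[i] - 0.8) / (dose[i] - dose[i + 1])
            return depth_mm[i] + frac * (depth_mm[i + 1] - depth_mm[i])
    raise ValueError("no distal 80% crossing found")

depth = np.arange(0.0, 45.0, 0.5)
# crude synthetic Bragg curve: slowly rising plateau + Gaussian peak at 37 mm
dose = 0.3 + 0.01 * depth + 2.0 * np.exp(-((depth - 37.0) / 1.5) ** 2)
print(f"distal 80% dose depth: {distal_80(depth, dose):.2f} mm")
```

With sub-millimetre scan spacing, the interpolation error of this approach is far below the 0.15 mm depth uncertainty quoted in the study.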
NASA Astrophysics Data System (ADS)
Zawadzka, Olga; Stachlewska, Iwona S.; Markowicz, Krzysztof M.; Nemuc, Anca; Stebel, Kerstin
2018-04-01
During an exceptionally warm September 2016, unique, stable weather conditions over Poland allowed for extensive testing of a new algorithm developed to improve the Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI) aerosol optical depth (AOD) retrieval. The development was conducted in the frame of the ESA-ESRIN SAMIRA project. The new AOD algorithm aims at providing aerosol optical depth maps over the territory of Poland with a high temporal resolution of 15 minutes. It was tested on the data set obtained between 11 and 16 September 2016, during which a day of relatively clean atmospheric background, related to an Arctic air-mass inflow, was surrounded by a few days with substantially increased aerosol load of different origin. On the clean reference day, the AOD forecast available on-line via the Copernicus Atmosphere Monitoring Service (CAMS) was used to estimate surface reflectance. The obtained AOD maps were validated against AODs available within the Poland-AOD and AERONET networks, and against AOD values obtained from the PollyXT-UW lidar of the University of Warsaw (UW).
Site characterization at Groningen gas field area through joint surface-borehole H/V analysis
NASA Astrophysics Data System (ADS)
Spica, Zack J.; Perton, Mathieu; Nakata, Nori; Liu, Xin; Beroza, Gregory C.
2018-01-01
A new interpretation of the horizontal to vertical (H/V) spectral ratio in terms of the Diffuse Field Assumption (DFA) has fuelled a resurgence of interest in that approach. The DFA links H/V measurements to Green's function retrieval through autocorrelation of the ambient seismic field. This naturally allows for estimation of layered velocity structure. In this contribution, we further explore the potential of H/V analysis. Our study is facilitated by a distributed array of surface and co-located borehole stations deployed at multiple depths, and by detailed prior information on velocity structure that is available due to development of the Groningen gas field. We use the vertical distribution of H/V spectra recorded at discrete depths inside boreholes to obtain shear wave velocity models of the shallow subsurface. We combine both joint H/V inversion and borehole interferometry to reduce the non-uniqueness of the problem and to allow faster convergence towards a reliable velocity model. The good agreement between our results and velocity models from an independent study validates the methodology, demonstrates the power of the method, but more importantly provides further constraints on the shallow velocity structure, which is an essential component of integrated hazard assessment in the area.
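As background, a basic H/V spectral ratio can be formed from Welch power spectra of the three components. Conventions differ in detail; this sketch uses synthetic noise and one common definition, not the authors' processing chain:

```python
# Minimal H/V spectral ratio from three-component ambient noise, using
# Welch power spectra. One common convention forms the ratio as
# sqrt((PSD_N + PSD_E) / PSD_Z); treat this as an illustrative sketch.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 100.0                                   # sampling rate, Hz
n, e, z = (rng.standard_normal(60_000) for _ in range(3))
z *= 0.5                                     # damp vertical to mimic a site effect

f, pn = welch(n, fs=fs, nperseg=4096)
_, pe = welch(e, fs=fs, nperseg=4096)
_, pz = welch(z, fs=fs, nperseg=4096)
hv = np.sqrt((pn + pe) / pz)

band = (f > 0.5) & (f < 20.0)
print(f"median H/V in 0.5-20 Hz band: {np.median(hv[band]):.2f}")
```

With the vertical component damped by a factor of two, the expected flat-spectrum ratio is sqrt(8), roughly 2.8; real borehole records would of course show frequency-dependent peaks tied to the layered velocity structure.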
The use of waveform shapes to automatically determine earthquake focal depth
Sipkin, S.A.
2000-01-01
Earthquake focal depth is an important parameter for rapidly determining probable damage caused by a large earthquake. In addition, it is significant both for discriminating between natural events and explosions and for discriminating between tsunamigenic and nontsunamigenic earthquakes. For the purpose of notifying emergency management and disaster relief organizations as well as issuing tsunami warnings, potential time delays in determining source parameters are particularly detrimental. We present a method for determining earthquake focal depth that is well suited for implementation in an automated system that utilizes the wealth of broadband teleseismic data that is now available in real time from the global seismograph networks. This method uses waveform shapes to determine focal depth and is demonstrated to be valid for events with magnitudes as low as approximately 5.5.
Construction Of Critical Thinking Skills Test Instrument Related The Concept On Sound Wave
NASA Astrophysics Data System (ADS)
Mabruroh, F.; Suhandi, A.
2017-02-01
This study aimed to construct a test instrument for the critical thinking skills of high school students related to the concept of sound waves. The research used a mixed-methods approach with a sequential exploratory design, consisting of: 1) a preliminary study; and 2) the design and review of the test instrument. The instrument takes the form of essay questions: 18 questions divided among 5 indicators and 8 sub-indicators of the critical thinking skills described by Ennis, with questions that are qualitative and contextual. The preliminary study comprised: a) policy studies; b) a school survey; and c) literature studies. The design phase consisted of: a) analysis of the depth of the teaching materials; b) selection of indicators and sub-indicators of critical thinking skills; c) analysis of those indicators and sub-indicators; d) their implementation in draft questions; and e) writing descriptions of the test instrument. The review phase consisted of: a) writing the test instrument; b) a validity test by experts; and c) revision of the test instrument based on the validators' feedback.
Zhou, Shuntai; Jones, Corbin; Mieczkowski, Piotr
2015-01-01
ABSTRACT Validating the sampling depth and reducing sequencing errors are critical for studies of viral populations using next-generation sequencing (NGS). We previously described the use of Primer ID to tag each viral RNA template with a block of degenerate nucleotides in the cDNA primer. We now show that low-abundance Primer IDs (offspring Primer IDs) are generated due to PCR/sequencing errors. These artifactual Primer IDs can be removed using a cutoff model for the number of reads required to make a template consensus sequence. We have modeled the fraction of sequences lost due to Primer ID resampling. For a typical sequencing run, less than 10% of the raw reads are lost to offspring Primer ID filtering and resampling. The remaining raw reads are used to correct for PCR resampling and sequencing errors. We also demonstrate that Primer ID reveals bias intrinsic to PCR, especially at low template input or utilization. cDNA synthesis and PCR convert ca. 20% of RNA templates into recoverable sequences, and 30-fold sequence coverage recovers most of these template sequences. We have directly measured the residual error rate to be around 1 in 10,000 nucleotides. We use this error rate and the Poisson distribution to define the cutoff to identify preexisting drug resistance mutations at low abundance in an HIV-infected subject. Collectively, these studies show that >90% of the raw sequence reads can be used to validate template sampling depth and to dramatically reduce the error rate in assessing a genetically diverse viral population using NGS. IMPORTANCE Although next-generation sequencing (NGS) has revolutionized sequencing strategies, it suffers from serious limitations in defining sequence heterogeneity in a genetically diverse population, such as HIV-1 due to PCR resampling and PCR/sequencing errors. The Primer ID approach reveals the true sampling depth and greatly reduces errors. 
Knowing the sampling depth allows the construction of a model of how to maximize the recovery of sequences from input templates and to reduce resampling of the Primer ID so that appropriate multiplexing can be included in the experimental design. With the defined sampling depth and measured error rate, we are able to assign cutoffs for the accurate detection of minority variants in viral populations. This approach allows the power of NGS to be realized without having to guess about sampling depth or to ignore the problem of PCR resampling, while also being able to correct most of the errors in the data set. PMID:26041299
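The Poisson cutoff logic described above can be sketched directly. The error rate matches the quoted ~1 in 10,000 nucleotides; the template counts and the alpha threshold are illustrative assumptions:

```python
# Choosing a detection cutoff for a minority variant from a residual error
# rate (~1 in 10,000 nt) and the Poisson distribution, in the spirit
# described above. Template counts and alpha are illustrative.
from math import exp, factorial

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

def min_count_cutoff(n_templates, err_rate=1e-4, alpha=0.001):
    """Smallest observed count at one position that is unlikely (< alpha)
    to be produced by background error alone."""
    lam = n_templates * err_rate          # expected error count per position
    k = 1
    while poisson_sf(k, lam) >= alpha:
        k += 1
    return k

for n in (1_000, 10_000, 100_000):
    print(f"{n} template consensus sequences -> call variants seen "
          f">= {min_count_cutoff(n)} times")
```

The cutoff grows slowly with sampling depth, which is why knowing the true number of sampled templates (via Primer ID) is what makes low-abundance variant calls defensible.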
Correction techniques for depth errors with stereo three-dimensional graphic displays
NASA Technical Reports Server (NTRS)
Parrish, Russell V.; Holden, Anthony; Williams, Steven P.
1992-01-01
Three-dimensional (3-D), 'real-world' pictorial displays that incorporate 'true' depth cues via stereopsis techniques have proved effective for displaying complex information in a natural way to enhance situational awareness and to improve pilot/vehicle performance. In such displays, the display designer must map the depths in the real world to the depths available with the stereo display system. However, empirical data have shown that the human subject does not perceive the information at exactly the depth at which it is mathematically placed. Head movements can also seriously distort the depth information that is embedded in stereo 3-D displays because the transformations used in mapping the visual scene to the depth-viewing volume (DVV) depend intrinsically on the viewer location. The goal of this research was to provide two correction techniques: the first corrects the original visual-scene-to-DVV mapping based on human perception errors, and the second (based on head-positioning sensor input data) corrects for errors induced by head movements. Empirical data are presented to validate both correction techniques. A combination of the two correction techniques effectively eliminates the distortions of depth information embedded in stereo 3-D displays.
Tapia, Viridiana Juarez; Drizin, Julia Helene; Dalle Ore, Cecilia; Nieto, Marcelo; Romero, Yajahira; Magallon, Sandra; Nayak, Rohith; Sigler, Alicia; Malcarne, Vanessa; Gosman, Amanda
2017-05-01
Craniofacial surgeons treat patients with diverse craniofacial conditions (CFCs). Yet, little is known about the health-related quality of life (HRQoL) impact of diverse CFCs. Currently, there are no suitable instruments that measure the HRQoL of patients with diverse CFCs from the perspective of children and parents. The objective of this study was to develop the items and support the content validity of a comprehensive patient- and parent-reported outcomes measure. An iterative process consisting of a systematic literature review, expert opinion, and in-depth interviews with patients and parents of patients with diverse CFCs was used. The literature review and expert opinion were used to generate in-depth interview questions. We interviewed 127 subjects: 80 parents of patients ages 0 to 18 years or older and 47 patients ages 7 to 18 years or older. English and Spanish speakers were represented in our sample. The majority of subjects originated from the United States and Mexico (83%). Craniofacial conditions included were cleft lip/palate, craniosynostosis, craniofacial microsomia, microtia, and dermatological conditions. Semistructured interviews were conducted until content saturation was achieved. Line-by-line analysis of interview transcripts identified HRQoL themes. Themes were interpreted and organized into larger domains that represent the conceptual framework of CFC-associated HRQoL. Themes were operationalized into items that represent the HRQoL issues of patients for both parent and patient versions. Six final bilingual and bicultural scales based on the domains derived from the literature review, expert opinion, and in-depth interviews were developed: (1) "Social Impact," (2) "Psychological Function," (3) "Physical Function," (4) "Family Impact," (5) "Appearance," and (6) "Finding Meaning."
Some cultural differences were identified: in contrast to children from Mexico and other developing nations, families from the United States did not report public harassment or extremely negative public reactions to patients' CFC. Religion and spirituality were common themes in interviews of Spanish-speaking subjects but less common in interviews of English-speaking subjects. Qualitative methods involving pediatric patients with diverse CFCs and their parents in the item development process support the content validity for this bilingual and bicultural HRQoL instrument. The items developed in this study will now undergo psychometric testing in national multisite studies for validation.
Li, Junhua; Chen, Hao; Savini, Giacomo; Lu, Weicong; Yu, Xinxin; Bao, Fangjun; Wang, Qinmei; Huang, Jinhai
2016-01-01
To evaluate the agreement of ocular measurements obtained with a new optical biometer (AL-Scan) and a previously validated optical biometer (Lenstar). Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China. Prospective observational cross-sectional study. For a comprehensive comparison between the partial coherence interferometry (PCI) device and the optical low-coherence reflectometry (OLCR) device, the axial length (AL), central corneal thickness (CCT), anterior chamber depth (ACD), aqueous depth, mean keratometry (K), astigmatism, white-to-white (WTW) distance, and pupil diameter were measured 3 times per device in eyes with cataract. The devices were used in random order. The mean values were compared, and 95% limits of agreement (LoA) were assessed. Ninety-two eyes of 92 cataract patients were included. Bland-Altman analysis showed excellent agreement between the PCI device and the OLCR device for AL, CCT, ACD, and aqueous depth measurements, with narrow 95% LoA (-0.05 to 0.06 mm, -13.39 to 15.61 μm, -0.11 to 0.10 mm, and -0.12 to 0.10 mm, respectively) and P values greater than 0.05. The mean K, astigmatism, and WTW values provided by the PCI device were in good agreement with those of the OLCR device, although statistically significant differences were detected. A major difference was observed in the pupil diameter measurement, with a 95% LoA of -0.73 to 1.21 mm. The PCI biometer provided ocular measurements similar to those of the OLCR device for most parameters, especially AL, CCT, and ACD. The pupil diameter values obtained with the PCI device were in poor agreement with the OLCR device, and these measurements should be interpreted with necessary adjustment. None of the authors has a proprietary or financial interest in any material or method mentioned. Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
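The Bland-Altman limits of agreement used throughout this comparison are simply the mean paired difference plus or minus 1.96 standard deviations. A sketch on synthetic axial-length pairs, not the study data:

```python
# Bland-Altman 95% limits of agreement (mean difference +/- 1.96 SD), the
# statistic used to compare the two biometers. Paired measurements below
# are synthetic, for illustration only.
import numpy as np

def bland_altman(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)            # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(7)
al_device1 = rng.normal(23.5, 1.0, 92)                  # axial lengths, mm
al_device2 = al_device1 + rng.normal(0.005, 0.03, 92)   # small offset + noise

bias, (lo, hi) = bland_altman(al_device1, al_device2)
print(f"bias {bias:+.3f} mm, 95% LoA [{lo:+.3f}, {hi:+.3f}] mm")
```

Narrow limits such as these would support clinical interchangeability of the two devices for that parameter, which is the logic applied to AL, CCT, and ACD above.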
Knoop, Henning; Gründel, Marianne; Zilliges, Yvonne; Lehmann, Robert; Hoffmann, Sabrina; Lockau, Wolfgang; Steuer, Ralf
2013-01-01
Cyanobacteria are versatile unicellular phototrophic microorganisms that are highly abundant in many environments. Owing to their capability to utilize solar energy and atmospheric carbon dioxide for growth, cyanobacteria are increasingly recognized as a prolific resource for the synthesis of valuable chemicals and various biofuels. To fully harness the metabolic capabilities of cyanobacteria necessitates an in-depth understanding of the metabolic interconversions taking place during phototrophic growth, as provided by genome-scale reconstructions of microbial organisms. Here we present an extended reconstruction and analysis of the metabolic network of the unicellular cyanobacterium Synechocystis sp. PCC 6803. Building upon several recent reconstructions of cyanobacterial metabolism, unclear reaction steps are experimentally validated and the functional consequences of unknown or dissenting pathway topologies are discussed. The updated model integrates novel results with respect to the cyanobacterial TCA cycle, an alleged glyoxylate shunt, and the role of photorespiration in cellular growth. Going beyond conventional flux-balance analysis, we extend the computational analysis to diurnal light/dark cycles of cyanobacterial metabolism. PMID:23843751
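Conventional flux-balance analysis, mentioned above, reduces to a linear program: maximise a target flux subject to steady-state mass balance S v = 0 and flux bounds. A minimal sketch on a three-reaction toy network, not the Synechocystis reconstruction:

```python
# Minimal flux-balance analysis (FBA) via linear programming. The toy
# network is illustrative only: R1 uptake (-> A), R2 conversion (A -> B),
# R3 growth (B ->), with one mass-balance row per internal metabolite.
import numpy as np
from scipy.optimize import linprog

S = np.array([[ 1, -1,  0],    # A: produced by R1, consumed by R2
              [ 0,  1, -1]])   # B: produced by R2, consumed by R3
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 units

# linprog minimises, so maximise the growth flux v3 by minimising -v3
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
print("optimal fluxes:", res.x)
```

In a genome-scale model, S has thousands of columns and the bounds encode measured uptake rates and photon availability, but the optimisation problem is exactly this shape.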
Martin, RobRoy L.
2012-01-01
Purpose/Background: The purpose of this study was to systematically review the literature for functional performance tests with evidence of reliability and validity that could be used for a young, athletic population with hip dysfunction. Methods: A search of the PubMed and SPORTDiscus databases was performed to identify movement, balance, hop/jump, or agility functional performance tests from the current peer-reviewed literature used to assess function of the hip in young, athletic subjects. Results: The single-leg stance, deep squat, single-leg squat, and star excursion balance tests (SEBT) demonstrated evidence of validity and normative data for score interpretation. The single-leg stance test and SEBT have evidence of validity with association to hip abductor function. The deep squat test demonstrated evidence as a functional performance test for evaluating femoroacetabular impingement (FAI). Hop/jump tests and agility tests have no reported evidence of reliability or validity in a population of subjects with hip pathology. Conclusions: Use of functional performance tests in the assessment of hip dysfunction has not been well established in the current literature. Diminished squat depth and provocation of pain during the single-leg balance test have been associated with patients diagnosed with FAI and gluteal tendinopathy, respectively. The SEBT and single-leg squat tests provided evidence of convergent validity through an analysis of kinematics and muscle function in normal subjects. Reliability of functional performance tests has not been established in patients with hip dysfunction. Further study is needed to establish the reliability and validity of functional performance tests that can be used in a young, athletic population with hip dysfunction. Level of Evidence: 2b (Systematic Review of Literature) PMID:22893860
NASA Astrophysics Data System (ADS)
Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre
2015-04-01
Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated into a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to the temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histogram, before the assimilation experiments. An incremental analysis update scheme is applied in order to reduce spurious oscillations due to the model state correction. The results of the assimilation are assessed according to both deterministic and probabilistic metrics, with observations used in the assimilation experiments and independent observations, which goes further than most previous studies and constitutes one of the original points of this paper. Regarding the deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. Regarding the probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system. The improvement of the assimilation is demonstrated using these validation metrics. Finally, the deterministic validation and the probabilistic validation are analysed jointly. The consistency and complementarity between both validations are highlighted. Highly reliable situations, in which the RMS error and the CRPS give the same information, are identified for the first time in this paper.
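The CRPS used here has a standard empirical ensemble estimator: the mean absolute member-observation distance minus half the mean absolute member-member distance. This sketch scores three synthetic 60-member ensembles against one observation; all values are invented:

```python
# Empirical CRPS for an ensemble forecast, using the standard estimator
#   CRPS = mean_i |x_i - y| - 0.5 * mean_{i,j} |x_i - x_j|
# over ensemble members x_i and verifying observation y.
import numpy as np

def crps_ensemble(members, obs):
    members = np.asarray(members, float)
    term1 = np.abs(members - obs).mean()
    term2 = 0.5 * np.abs(members[:, None] - members[None, :]).mean()
    return term1 - term2

rng = np.random.default_rng(3)
obs = 0.0
sharp_reliable = rng.normal(0.0, 0.5, 60)   # 60 members, as in the study
broad = rng.normal(0.0, 2.0, 60)            # reliable but low resolution
biased = rng.normal(1.5, 0.5, 60)           # sharp but biased

for name, ens in [("sharp", sharp_reliable), ("broad", broad), ("biased", biased)]:
    print(f"{name:7s} ensemble CRPS: {crps_ensemble(ens, obs):.3f}")
```

Lower is better: the score rewards ensembles that are both reliable (unbiased, well-dispersed) and sharp, which is exactly the reliability/resolution decomposition discussed above.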
NASA Astrophysics Data System (ADS)
Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio
2015-12-01
This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to compression-force waveforms and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated-compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with similar spring constants) within a thorax and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacement using the laser system and the estimated displacement waveforms were calculated. The estimated displacement error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).
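The calculation described above, estimating a weight factor between the second derivative of the magnetic waveform and the accelerometer signal, then rescaling the magnetic waveform, can be sketched as follows. The signals and coil gain are synthetic and the variable names are ours, not the authors':

```python
# Sketch of the gauge's depth calculation: the coil ("magnetic") waveform is
# proportional to displacement, so its second time-derivative should match
# the accelerometer signal up to a scale factor. A least-squares weight
# factor recovers that scale, and rescaling yields compression depth.
import numpy as np

fs = 1000.0                        # sample rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
true_disp_mm = 25.0 * (1 - np.cos(2 * np.pi * 2.0 * t))   # ~2 Hz compressions

k_coil = 0.04                      # unknown coil gain (magnetic units per mm)
magnetic = k_coil * true_disp_mm                           # coil measurement
accel = np.gradient(np.gradient(true_disp_mm, t), t)       # accelerometer

# least-squares weight factor between d2(magnetic)/dt2 and the acceleration
d2m = np.gradient(np.gradient(magnetic, t), t)
w = np.dot(d2m, accel) / np.dot(d2m, d2m)

est_disp_mm = w * magnetic
err = np.max(np.abs(est_disp_mm - true_disp_mm))
print(f"weight factor {w:.2f} (expect ~{1 / k_coil:.0f}), max error {err:.3g} mm")
```

On noise-free synthetic data the recovery is exact; with real sensor noise the least-squares fit averages over the whole compression sequence, which is what keeps the reported depth error under 2 mm.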
Finite element simulation of crack depth measurements in concrete using diffuse ultrasound
NASA Astrophysics Data System (ADS)
Seher, Matthias; Kim, Jin-Yeon; Jacobs, Laurence J.
2012-05-01
This research simulates the measurements of crack depth in concrete using diffuse ultrasound. The finite element method is employed to simulate the ultrasonic diffusion process around cracks with different geometrical shapes, with the goal of gaining physical insight into the data obtained from experimental measurements. The commercial finite element software Ansys is used to implement the two-dimensional concrete model. The model is validated with an analytical solution and experimental results. It is found from the simulation results that preliminary knowledge of the crack geometry is required to interpret the energy evolution curves from measurements and to correctly determine the crack depth.
Prediction of soil frost penetration depth in northwest of Iran using air freezing indices
NASA Astrophysics Data System (ADS)
Mohammadi, H.; Moghbel, M.; Ranjbar, F.
2016-11-01
Information about soil frost penetration depth can be effective in finding appropriate solutions to reduce damage to agricultural crops, transportation, and building facilities. Among the appropriate methods to obtain this information are statistical and empirical models capable of estimating soil frost penetration depth. Therefore, the main objective of this research is to calculate soil frost penetration depth in the northwest of Iran during the year 2007-2008 and to validate the accuracy of two different models. To do so, the relationship between air temperature and soil temperature at different depths (5, 10, 20, 30, 50, and 100 cm) at three times of the day (3, 9, and 15 GMT) for 14 weather stations over 7 provinces was analyzed using linear regression. Then, two different air freezing indices (AFIs), the Norwegian and Finnish AFIs, were implemented. Finally, the frost penetration depth was calculated by the McKeown method, and the accuracy of the models was determined against the actual soil frost penetration depth. The results demonstrated that there is a significant correlation between air temperature and soil temperature at depth in all studied stations down to 30 cm below the surface. Also, according to the results, the Norwegian index can be effectively used for determination of soil frost penetration depth, with a correlation coefficient between actual and estimated depths of r = 0.92, while the Finnish index overestimates the frost depth in all stations, with a correlation coefficient of r = 0.70.
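An air freezing index of the kind used here is essentially an accumulation of freezing degree-days, and frost depth is then often taken to grow with the square root of that index (a Stefan-type relation). The sketch below illustrates the idea only; the site coefficient is hypothetical and the actual McKeown formulation used in the study differs:

```python
import math

def air_freezing_index(daily_mean_temps_c):
    """Accumulated freezing degree-days (degC*day): sum of sub-zero magnitudes."""
    return sum(-t for t in daily_mean_temps_c if t < 0)

def frost_depth_cm(afi, k=4.0):
    """Stefan-type estimate: frost depth grows with sqrt(AFI).

    k is an illustrative site coefficient (cm per sqrt(degC*day)), NOT a value
    from the study; it would be calibrated against observed frost depths.
    """
    return k * math.sqrt(afi)

temps = [2.0, -1.0, -5.0, -3.0, 0.5, -0.5]   # daily mean temperatures, degC
afi = air_freezing_index(temps)              # 9.5 degC*day
```

Calibrating k per station against measured frost depths is what the validation step of such a model amounts to.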
Deep Water Munitions Detection System
2010-03-01
…water systems to deeper water depths would result in even greater costs. While some of the cost escalation may be unavoidable, it is desirable to…magnetometers, spaced 61 cm apart, on a towed sensor platform. The sensor platform has active control elements that allow its depth to be changed
NASA Technical Reports Server (NTRS)
Jethva, Hiren; Torres, Omar; Bhartia, Pawan K.; Remer, Lorraine; Redemann, Jens; Dunagan, Stephen E.; Livingston, John; Shinozuka, Yohei; Kacenelenbogen, Meloe; Segal-Rosenbeimer, Michal;
2014-01-01
Absorbing aerosols produced from biomass burning and dust outbreaks are often found to overlay lower-level cloud decks and pose a greater potential of exerting a positive radiative effect (warming), whose magnitude directly depends on the aerosol loading above cloud, the optical properties of clouds and aerosols, and the cloud fraction. The recent development of a 'color ratio' (CR) algorithm applied to observations made by Aura/OMI and Aqua/MODIS constitutes a major breakthrough and has provided unprecedented maps of above-cloud aerosol optical depth (ACAOD). The CR technique employs reflectance measurements at TOA in two channels (354 and 388 nm for OMI; 470 and 860 nm for MODIS) to simultaneously retrieve ACAOD in the near-UV and visible regions and the aerosol-corrected cloud optical depth. An inter-satellite comparison of ACAOD retrieved from NASA's A-train sensors reveals a good level of agreement between the passive sensors over homogeneous cloud fields. Direct measurements of ACA, such as those carried out by the NASA Ames Airborne Tracking Sunphotometer (AATS) and the Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR), can be of immense help in validating ACA retrievals. We validate the ACA optical depth retrieved using the CR method applied to the MODIS cloudy-sky reflectance against the airborne AATS and 4STAR measurements. A thorough search of the historic AATS-4STAR database collected during different field campaigns revealed five events in which biomass burning, dust, and wildfire-emitted aerosols were found to overlay lower-level cloud decks, observed during SAFARI-2000, ACE-ASIA 2001, and SEAC4RS-2013, respectively. The co-located satellite-airborne measurements revealed a good agreement (RMSE less than 0.1 for AOD at 500 nm), with most matchups falling within the estimated uncertainties in the MODIS retrievals.
An extensive validation of satellite-based ACA retrievals requires equivalent field measurements particularly over the regions where ACA are often observed from satellites, i.e., south-eastern Atlantic Ocean, tropical Atlantic Ocean, northern Arabian Sea, South-East and North-East Asia.
Fernández-Domínguez, Juan Carlos; Sesé-Abad, Albert; Morales-Asencio, Jose Miguel; Oliva-Pascual-Vaca, Angel; Salinas-Bueno, Iosune; de Pedro-Gómez, Joan Ernest
2014-12-01
Our goal is to compile and analyse the characteristics - especially validity and reliability - of all the existing international tools that have been used to measure evidence-based clinical practice in physiotherapy. A systematic review conducted with data from exclusively quantitative-type studies, synthesized in narrative format. An in-depth search of the literature was conducted in two phases: an initial, structured, electronic search of databases and of journals with summarized evidence, followed by a residual, directed search in the bibliographical references of the main articles found in the primary search. The studies included were assigned to members of the research team, who acted as peer reviewers. Relevant information was extracted from each of the selected articles using a template that included the general characteristics of the instrument as well as an analysis of the quality of the validation processes carried out, following the criteria of Terwee. Twenty-four instruments were found to comply with the review screening criteria; however, all were found to be limited in the 'constructs' they include, and all lack comprehensiveness in the validation of the psychometric tests used. What constitutes a rigorously developed assessment instrument for EBP in physical therapy thus continues to be a challenge. © 2014 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Yang, H. Q.; West, Jeff
2016-01-01
Propellant slosh is a potential source of disturbance critical to the stability of space vehicles. The slosh dynamics are typically represented by a mechanical spring-mass-damper model. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control analysis. A Volume-Of-Fluid (VOF) based Computational Fluid Dynamics (CFD) program developed at MSFC was applied to extract slosh damping in a baffled tank from first principles. First, experimental data obtained with water in a sub-scale smooth-wall tank were used as the baseline validation. It is demonstrated that CFD can indeed accurately predict the low damping values of the smooth wall at different fill levels. The damping due to a ring baffle at different depths from the free surface was then simulated, and fairly good agreement with experimental measurements was observed. A comparison with the empirical correlation of Miles' equation is also made.
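Damping values of the kind extracted here are commonly reduced from a free-decay record via the logarithmic decrement; a generic sketch (not the MSFC VOF code), taking successive peak amplitudes of the decaying free-surface oscillation:

```python
import math

def damping_ratio_from_peaks(peaks):
    """Damping ratio from successive free-decay peak amplitudes.

    delta = ln(A_i / A_{i+1}), averaged over peak pairs; the exact relation
    zeta = delta / sqrt(4*pi^2 + delta^2) reduces to delta/(2*pi) for light damping.
    """
    decs = [math.log(a / b) for a, b in zip(peaks, peaks[1:])]
    delta = sum(decs) / len(decs)
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

# Synthetic decay with zeta = 0.01: each peak is exp(-2*pi*zeta/sqrt(1-zeta^2))
# times the previous one.
zeta = 0.01
r = math.exp(-2 * math.pi * zeta / math.sqrt(1 - zeta ** 2))
peaks = [1.0 * r ** i for i in range(5)]
print(round(damping_ratio_from_peaks(peaks), 4))  # 0.01
```

The same reduction applies whether the decay trace comes from a CFD probe or from a wave-height gauge in a sub-scale tank test.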
Surface roughness formation during shot peen forming
NASA Astrophysics Data System (ADS)
Koltsov, V. P.; Vinh, Le Tri; Starodubtseva, D. A.
2018-03-01
Shot peen forming (SPF) is used for forming panels and skins, and for hardening. As a rule, shot peen forming is performed after milling. The resulting surface roughness is a complex structure: a combination of the original milled microrelief and shot peen forming indentations of different depths distributed chaotically along the surface. Since shot peen forming is a random process, the surface roughness resulting from milling and shot peen forming is random too. During roughness monitoring, it is difficult to determine the basic surface area that would ensure accurate results. It can be assumed that the basic area depends on the random roughness, which is characterized by the degree of shot peen forming coverage. An analysis of the depth and surface distribution of the shot peen forming indentations made it possible to identify the shift of the original profile center plane and to create a mathematical model for the arithmetic mean deviation of the profile. Experimental testing confirmed the model's validity and determined an inversely proportional dependence of the basic area on the degree of coverage.
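For a sampled profile, the arithmetic mean deviation modelled above is the mean absolute deviation of the heights from the mean line; a minimal sketch:

```python
def arithmetic_mean_deviation(profile):
    """Ra: mean absolute deviation of profile heights from the mean line."""
    mean_line = sum(profile) / len(profile)
    return sum(abs(z - mean_line) for z in profile) / len(profile)

print(arithmetic_mean_deviation([1.0, -1.0, 1.0, -1.0]))  # 1.0
```

A shift of the center plane (e.g. adding a constant to every height) leaves Ra unchanged, which is why the indentation-induced shift has to be modelled separately from the roughness itself.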
NASA Technical Reports Server (NTRS)
Ahuja, K. K.; Mendoza, J.
1995-01-01
This report documents the results of an experimental investigation of the response of a cavity to external flowfields. The primary objective of this research was to acquire benchmark data on the effects of cavity length, width, depth, upstream boundary layer, and flow temperature on cavity noise. These data were to be used for validation of computational aeroacoustic (CAA) codes on cavity noise. To achieve this objective, a systematic set of acoustic and flow measurements was made for subsonic turbulent flows approaching a cavity. These measurements were conducted in the research facilities of the Georgia Tech Research Institute. Two cavity models were designed, one for heated flow and another for unheated flow studies. Both models were designed such that the cavity length (L) could easily be varied while holding fixed the depth (D) and width (W) dimensions of the cavity. Depth and width blocks were manufactured so that these dimensions could be varied as well. A wall jet issuing from a rectangular nozzle was used to simulate flows over the cavity.
Jimenez, Connie R; Piersma, Sander; Pham, Thang V
2007-12-01
Proteomics aims to create a link between genomic information, biological function and disease through global studies of protein expression, modification and protein-protein interactions. Recent advances in key proteomics tools, such as mass spectrometry (MS) and (bio)informatics, provide tremendous opportunities for biomarker-related clinical applications. In this review, we focus on two complementary MS-based approaches with high potential for the discovery of biomarker patterns and low-abundant candidate biomarkers in biofluids: high-throughput matrix-assisted laser desorption/ionization time-of-flight mass spectroscopy-based methods for peptidome profiling and label-free liquid chromatography-based methods coupled to MS for in-depth profiling of biofluids with a focus on subproteomes, including the low-molecular-weight proteome, carrier-bound proteome and N-linked glycoproteome. The two approaches differ in their aims, throughput and sensitivity. We discuss recent progress and challenges in the analysis of plasma/serum and proximal fluids using these strategies and highlight the potential of liquid chromatography-MS-based proteomics of cancer cell and tumor secretomes for the discovery of candidate blood-based biomarkers. Strategies for candidate validation are also described.
NASA Astrophysics Data System (ADS)
Zempila, Melina Maria; Fountoulakis, Ilias; Taylor, Michael; Kazadzis, Stelios; Arola, Antti; Koukouli, Maria Elissavet; Bais, Alkiviadis; Meleti, Chariklia; Balis, Dimitrios
2018-06-01
The aim of this study is to validate the Ozone Monitoring Instrument (OMI) erythemal dose rates using ground-based measurements in Thessaloniki, Greece. In the Laboratory of Atmospheric Physics of the Aristotle University of Thessaloniki, a Yankee Environmental System UVB-1 radiometer measures the erythemal dose rates every minute, and a Norsk Institutt for Luftforskning (NILU) multi-filter radiometer provides multi-filter based irradiances that were used to derive erythemal dose rates for the period 2005-2014. Both these datasets were independently validated against collocated UV irradiance spectra from a Brewer MkIII spectrophotometer. Cloud detection was performed based on measurements of the global horizontal radiation from a Kipp & Zonen pyranometer and from NILU measurements in the visible range. The satellite versus ground observation validation was performed taking into account the effect of temporal averaging, limitations related to OMI quality control criteria, cloud conditions, the solar zenith angle and atmospheric aerosol loading. Aerosol optical depth was also retrieved using a collocated CIMEL sunphotometer in order to assess its impact on the comparisons. The effect of total ozone columns satellite versus ground-based differences on the erythemal dose comparisons was also investigated. Since most of the public awareness alerts are based on UV Index (UVI) classifications, an analysis and assessment of OMI capability for retrieving UVIs was also performed. An overestimation of the OMI erythemal product by 3-6% and 4-8% with respect to ground measurements is observed when examining overpass and noontime estimates respectively. The comparisons revealed a relatively small solar zenith angle dependence, with the OMI data showing a slight dependence on aerosol load, especially at high aerosol optical depth values. 
A mean underestimation of 2% in OMI total ozone columns under cloud-free conditions was found to lead to an overestimation in OMI erythemal doses of 1-5%. While OMI overestimated the erythemal dose rates over the range of cloudiness conditions examined, its UVIs were found to be reliable for the purpose of characterizing the ambient UV radiation impact.
Lai, Qiliang; Liu, Yang; Yuan, Jun; Du, Juan; Wang, Liping; Sun, Fengqin; Shao, Zongze
2014-01-01
Thalassospira bacteria are widespread and have been isolated from various marine environments. Less is known about their genetic diversity and biogeography, as well as their role in marine environments, and many of them cannot be discriminated merely using the 16S rRNA gene. To address these issues, in this report, the phylogenetic analysis of 58 strains from seawater and deep-sea sediments was carried out using multilocus sequence analysis (MLSA) based on the acsA, aroE, gyrB, mutL, rpoD and trpB genes, together with DNA-DNA hybridization (DDH) and average nucleotide identity (ANI) based on genome sequences. The MLSA demonstrated that the 58 strains were clearly separated into 15 lineages, corresponding to seven validly described species and eight potential novel species. The DDH and ANI values further confirmed the validity of the MLSA and the eight potential novel species. The MLSA interspecies gap of the genus Thalassospira was determined to be 96.16-97.12% sequence identity on the basis of the combined analyses of the DDH and MLSA, while the ANIm interspecies gap was 95.76-97.20% based on the in silico DDH analysis. Meanwhile, the phylogenetic analyses showed that the Thalassospira bacteria exhibit a distribution pattern to a certain degree according to geographic region. Moreover, they clustered together according to habitat depth. In short, the phylogeny and biogeography of Thalassospira bacteria were systematically investigated for the first time. These results will be helpful for further exploring their ecological role and adaptive evolution in marine environments.
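The identity thresholds quoted above are, at bottom, fractions of matching positions between aligned sequences; a toy sketch of percent identity (real ANI and MLSA pipelines align genome fragments or concatenated loci first, which this skips):

```python
def percent_identity(seq1, seq2):
    """Percent identity of two equal-length aligned sequences (gap columns excluded)."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != '-' and b != '-']
    matches = sum(1 for a, b in pairs if a == b)
    return 100.0 * matches / len(pairs)

print(percent_identity("ACGTACGT", "ACGTTCGT"))  # 87.5
```

A species-delimitation "gap" such as 96.16-97.12% is then just a band of this statistic below which pairs are called different species and above which they are called conspecific.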
Investigation of Motorcycle Steering Torque Components
NASA Astrophysics Data System (ADS)
Cossalter, V.; Lot, R.; Massaro, M.; Peretto, M.
2011-10-01
When driving along a circular path, the rider controls a motorcycle mainly by the steering torque. This work addresses an in-depth analysis of the steady state cornering and in particular the decomposition of the motorcycle steering torque in its main components, such as road-tyre forces, gyroscopic torques, centrifugal and gravity effects. A detailed and experimentally validated multibody model of the motorcycle is used herein to analyze the steering torque components at different speeds and lateral accelerations. First the road tests are compared with the numerical results for three different vehicles and then a numerical investigation is carried out to decompose the steering torque. Finally, the effect of longitudinal acceleration and deceleration on steering torque components is presented.
Validation of MODIS 3 km land aerosol optical depth from NASA's EOS Terra and Aqua missions
NASA Astrophysics Data System (ADS)
Gupta, Pawan; Remer, Lorraine A.; Levy, Robert C.; Mattoo, Shana
2018-05-01
In addition to the standard resolution product (10 km), the MODerate resolution Imaging Spectroradiometer (MODIS) Collection 6 (C006) data release included a higher resolution (3 km). Other than accommodations for the two different resolutions, the 10 and 3 km Dark Target (DT) algorithms are basically the same. In this study, we perform global validation of the higher-resolution aerosol optical depth (AOD) over global land by comparing against AErosol RObotic NETwork (AERONET) measurements. The MODIS-AERONET collocated data sets consist of 161 410 high-confidence AOD pairs from 2000 to 2015 for Terra MODIS and 2003 to 2015 for Aqua MODIS. We find that 62.5% and 68.4% of AODs retrieved from Terra MODIS and Aqua MODIS, respectively, fall within previously published expected error bounds of ±(0.05 + 0.2 × AOD), with a high correlation (R = 0.87). The scatter is not random, but exhibits a mean positive bias of ~0.06 for Terra and ~0.03 for Aqua. These biases for the 3 km product are approximately 0.03 larger than the biases found in similar validations of the 10 km product. The validation results for the 3 km product did not have a relationship to aerosol loading (i.e., true AOD), but did exhibit dependence on quality flags, region, viewing geometry, and aerosol spatial variability. Time series of global MODIS-AERONET differences show that validation is not static, but has changed over the course of both sensors' lifetimes, with Terra MODIS showing more change over time. The likely cause of the change of validation over time is sensor degradation, but changes in the distribution of AERONET stations and differences in the global aerosol system itself could be contributing to the temporal variability of validation.
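The fraction-within-expected-error statistic used above can be reproduced for any set of collocated AOD pairs; a sketch assuming the published Dark Target over-land envelope ±(0.05 + 0.2 × AOD):

```python
def fraction_in_envelope(aeronet_aod, modis_aod):
    """Fraction of satellite retrievals within +/-(0.05 + 0.2*AOD) of AERONET."""
    inside = 0
    for truth, retrieved in zip(aeronet_aod, modis_aod):
        ee = 0.05 + 0.2 * truth        # expected-error half-width at this AOD
        if abs(retrieved - truth) <= ee:
            inside += 1
    return inside / len(aeronet_aod)

truth = [0.10, 0.50, 1.00]
retrieved = [0.12, 0.70, 1.10]         # errors 0.02, 0.20, 0.10
print(fraction_in_envelope(truth, retrieved))
```

Because the envelope widens with AOD, a fixed absolute error that falls outside the bound at low loading can sit comfortably inside it at high loading.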
Zeng, Lili; Wang, Dongxiao; Chen, Ju; Wang, Weiqiang; Chen, Rongyu
2016-04-26
In addition to the oceanographic data available for the South China Sea (SCS) from the World Ocean Database (WOD) and Array for Real-time Geostrophic Oceanography (Argo) floats, a suite of observations has been made by the South China Sea Institute of Oceanology (SCSIO) starting from the 1970s. Here, we assemble a SCS Physical Oceanographic Dataset (SCSPOD14) based on 51,392 validated temperature and salinity profiles collected from these three datasets for the period 1919-2014. A gridded dataset of climatological monthly mean temperature, salinity, and mixed and isothermal layer depth derived from an objective analysis of profiles is also presented. Comparisons with the World Ocean Atlas (WOA) and IFREMER/LOS Mixed Layer Depth Climatology confirm the reliability of the new dataset. This unique dataset offers an invaluable baseline perspective on the thermodynamic processes, spatial and temporal variability of water masses, and basin-scale and mesoscale oceanic structures in the SCS. We anticipate improvements and regular updates to this product as more observations become available from existing and future in situ networks.
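Mixed layer depth of the kind gridded here is commonly diagnosed with a temperature-threshold criterion, e.g. the depth where temperature first drops 0.2 °C below its value at a 10 m reference, as in the IFREMER climatology; a minimal sketch with linear interpolation (a generic criterion, not necessarily the exact SCSPOD14 recipe):

```python
def mixed_layer_depth(depths, temps, ref_depth=10.0, dt=0.2):
    """Depth where temperature first falls dt below its reference-depth value.

    depths must be increasing; returns the deepest sample if no crossing is found.
    """
    # Temperature at the reference depth (first sample at or below ref_depth).
    t_ref = next(t for z, t in zip(depths, temps) if z >= ref_depth)
    target = t_ref - dt
    for i in range(1, len(depths)):
        if temps[i - 1] > target >= temps[i]:
            # Linearly interpolate the crossing depth between bracketing samples.
            frac = (temps[i - 1] - target) / (temps[i - 1] - temps[i])
            return depths[i - 1] + frac * (depths[i] - depths[i - 1])
    return depths[-1]  # profile mixed all the way to the deepest sample

depths = [0, 10, 20, 30, 40, 50]
temps = [25.0, 25.0, 25.0, 24.9, 24.0, 23.0]
print(round(mixed_layer_depth(depths, temps), 2))  # 31.11
```

The reference depth avoids contaminating the estimate with the diurnal skin layer; a density-threshold variant works the same way with salinity folded in.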
Enhanced visualization of the retinal vasculature using depth information in OCT.
de Moura, Joaquim; Novo, Jorge; Charlón, Pablo; Barreira, Noelia; Ortega, Marcos
2017-12-01
Retinal vessel tree extraction is a crucial step for analyzing the microcirculation, a frequently needed process in the study of relevant diseases. To date, this has normally been done by using 2D image capture paradigms, offering a restricted visualization of the real layout of the retinal vasculature. In this work, we propose a new approach that automatically segments and reconstructs the 3D retinal vessel tree by combining near-infrared reflectance retinography information with Optical Coherence Tomography (OCT) sections. Our proposal identifies the vessels, estimates their calibers, and obtains the depth at all the positions of the entire vessel tree, thereby enabling the reconstruction of the 3D layout of the complete arteriovenous tree for subsequent analysis. The method was tested using 991 OCT images combined with their corresponding near-infrared reflectance retinography. The different stages of the methodology were validated using the opinion of an expert as a reference. The tests offered accurate results, showing coherent reconstructions of the 3D vasculature that can be analyzed in the diagnosis of relevant diseases affecting the retinal microcirculation, such as hypertension or diabetes, among others.
Twinn, S
1997-08-01
Although the complexity of undertaking qualitative research with non-English speaking informants has become increasingly recognized, few empirical studies exist which explore the influence of translation on the findings of the study. The aim of this exploratory study was therefore to examine the influence of translation on the reliability and validity of the findings of a qualitative research study. In-depth interviews were undertaken in Cantonese with a convenience sample of six women to explore their perceptions of factors influencing their uptake of Pap smears. Data analysis involved three stages. The first stage involved the translation and transcription of all the interviews into English independently by two translators as well as transcription into Chinese by a third researcher. The second stage involved content analysis of the three data sets to develop categories and themes and the third stage involved a comparison of the categories and themes generated from the Chinese and English data sets. Despite no significant differences in the major categories generated from the Chinese and English data, some minor differences were identified in the themes generated from the data. More significantly the results of the study demonstrated some important issues to consider when using translation in qualitative research, in particular the complexity of managing data when no equivalent word exists in the target language and the influence of the grammatical style on the analysis. In addition the findings raise questions about the significance of the conceptual framework of the research design and sampling to the validity of the study. The importance of using only one translator to maximize the reliability of the study was also demonstrated. In addition the author suggests the findings demonstrate particular problems in using translation in phenomenological research designs.
HZETRN radiation transport validation using balloon-based experimental data
NASA Astrophysics Data System (ADS)
Warner, James E.; Norman, Ryan B.; Blattnig, Steve R.
2018-05-01
The deterministic radiation transport code HZETRN (High charge (Z) and Energy TRaNsport) was developed by NASA to study the effects of cosmic radiation on astronauts and instrumentation shielded by various materials. This work presents an analysis of computed differential flux from HZETRN compared with measurement data from three balloon-based experiments over a range of atmospheric depths, particle types, and energies. Model uncertainties were quantified using an interval-based validation metric that takes into account measurement uncertainty both in the flux and the energy at which it was measured. Average uncertainty metrics were computed for the entire dataset as well as subsets of the measurements (by experiment, particle type, energy, etc.) to reveal any specific trends of systematic over- or under-prediction by HZETRN. The distribution of individual model uncertainties was also investigated to study the range and dispersion of errors beyond just single scalar and interval metrics. The differential fluxes from HZETRN were generally well-correlated with balloon-based measurements; the median relative model difference across the entire dataset was determined to be 30%. The distribution of model uncertainties, however, revealed that the range of errors was relatively broad, with approximately 30% of the uncertainties exceeding ± 40%. The distribution also indicated that HZETRN systematically under-predicts the measurement dataset as a whole, with approximately 80% of the relative uncertainties having negative values. Instances of systematic bias for subsets of the data were also observed, including a significant underestimation of alpha particles and protons for energies below 2.5 GeV/u. Muons were found to be systematically over-predicted at atmospheric depths deeper than 50 g/cm2 but under-predicted for shallower depths. 
Furthermore, a systematic under-prediction of alpha particles and protons was observed below the geomagnetic cutoff, suggesting that improvements to the light ion production cross sections in HZETRN should be investigated.
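An interval-based metric of the kind described, which is zero when the model prediction falls inside the measurement uncertainty interval and otherwise measures the distance to the nearest bound, can be sketched as follows (our simplification; the paper's metric also folds in the uncertainty in the energy at which the flux was measured):

```python
def interval_metric(model, measured, sigma):
    """Signed distance from the model value to [measured - sigma, measured + sigma].

    Zero means the prediction is consistent with the measurement uncertainty;
    the sign distinguishes under- from over-prediction.
    """
    lo, hi = measured - sigma, measured + sigma
    if model < lo:
        return model - lo   # negative: under-prediction beyond the lower bound
    if model > hi:
        return model - hi   # positive: over-prediction beyond the upper bound
    return 0.0

print(interval_metric(8.0, 10.0, 1.0))   # -1.0
print(interval_metric(10.5, 10.0, 1.0))  # 0.0
```

Averaging the signed values over a dataset exposes systematic bias (e.g. the ~80% negative relative uncertainties reported here), while averaging magnitudes gives an overall accuracy figure.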
NASA Astrophysics Data System (ADS)
Kanari, M.; Ketter, T.; Tibor, G.; Schattner, U.
2017-12-01
We aim to characterize the seafloor morphology and its shallow sub-surface structures and deformations in the deep part of the Levant basin (eastern Mediterranean) using recently acquired high-resolution shallow seismic reflection data and multibeam bathymetry, which allow quantitative analysis of morphology and structure. The Levant basin in the eastern Mediterranean is considered a passive continental margin, where most of the recent geological processes have been attributed in the literature to salt tectonics rooted in the Messinian deposits from 6 Ma. We analyzed two sets of recently acquired high-resolution data from multibeam bathymetry and 3.5 kHz Chirp sub-bottom seismic reflection in the deep basin off the continental shelf offshore Israel (water depths up to 2100 m). Semi-automatic mapping of seafloor features and seismic data interpretation resulted in a quantitative morphological analysis of the seafloor and its underlying sediment, with penetration depths up to 60 m. The quantitative analysis and its interpretation are still in progress. Preliminary results reveal distinct morphologies of four major elements: channels, faults, folds and sediment waves, validated by the seismic data. From the spatial distribution and orientation analyses of these phenomena, we identify two primary process types that dominate the formation of the seafloor in the Levant basin: structural and sedimentary. Characterization of the geological and geomorphological processes forming the seafloor helps to better understand the transport mechanisms and the relations between sediment transport and deposition in deep water and in the shallower parts of the shelf and slope.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynoso, F; Cho, S
Purpose: To develop and validate a Monte Carlo (MC) model of a Phillips RT-250 orthovoltage unit to test various beam spectrum modulation strategies for in vitro/in vivo studies. A model of this type would enable the production of unconventional beams from a typical orthovoltage unit for novel therapeutic applications such as gold nanoparticle-aided radiotherapy. Methods: The MCNP5 code system was used to create a MC model of the head of the RT-250 and a 30 × 30 × 30 cm³ water phantom. For the x-ray machine head, the current model includes the vacuum region, beryllium window, collimators, inherent filters and exterior steel housing. For increased computational efficiency, the primary x-ray spectrum from the target was calculated with a well-validated analytical software package. Calculated percentage-depth-dose (PDD) values and photon spectra were validated against experimental data from film and Compton-scatter spectrum measurements. Results: The model was validated for three common settings of the machine, namely 250 kVp (0.25 mm Cu), 125 kVp (2 mm Al), and 75 kVp (2 mm Al). The MC results for the PDD curves were compared with film measurements and showed good agreement for all depths, with a maximum difference of 4% around dmax and under 2.5% for all other depths. The primary photon spectra were also measured and compared with the MC results, showing reasonable agreement between the two and validating the input spectra and the final spectra as predicted by the current MC model. Conclusion: The current MC model accurately predicted the dosimetric and spectral characteristics of each beam from the RT-250 orthovoltage unit, demonstrating its applicability and reliability for beam spectrum modulation tasks. It accomplished this without the need to model the bremsstrahlung x-ray production from the target, while improving computational efficiency by at least two orders of magnitude. Supported by DOD/PCRP grant W81XWH-12-1-0198.
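The percentage depth dose validated above is simply the depth-dose curve normalized to its maximum; a minimal sketch (generic post-processing, not the MCNP5 tally machinery):

```python
def percentage_depth_dose(doses):
    """Normalize a depth-dose curve to 100% at the depth of maximum dose."""
    d_max = max(doses)
    return [100.0 * d / d_max for d in doses]

pdd = percentage_depth_dose([0.5, 1.0, 0.75])
print(pdd)  # [50.0, 100.0, 75.0]
```

Comparing such normalized curves, rather than absolute doses, is what makes film measurements and MC tallies directly comparable despite their different absolute calibrations.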
High-resolution characterization of a hepatocellular carcinoma genome.
Totoki, Yasushi; Tatsuno, Kenji; Yamamoto, Shogo; Arai, Yasuhito; Hosoda, Fumie; Ishikawa, Shumpei; Tsutsumi, Shuichi; Sonoda, Kohtaro; Totsuka, Hirohiko; Shirakihara, Takuya; Sakamoto, Hiromi; Wang, Linghua; Ojima, Hidenori; Shimada, Kazuaki; Kosuge, Tomoo; Okusaka, Takuji; Kato, Kazuto; Kusuda, Jun; Yoshida, Teruhiko; Aburatani, Hiroyuki; Shibata, Tatsuhiro
2011-05-01
Hepatocellular carcinoma, one of the most common virus-associated cancers, is the third most frequent cause of cancer-related death worldwide. By massively parallel sequencing of a primary hepatitis C virus-positive hepatocellular carcinoma (36× coverage) and matched lymphocytes (>28× coverage) from the same individual, we identified more than 11,000 somatic substitutions of the tumor genome that showed predominance of T>C/A>G transition and a decrease of the T>C substitution on the transcribed strand, suggesting preferential DNA repair. Gene annotation enrichment analysis of 63 validated non-synonymous substitutions revealed enrichment of phosphoproteins. We further validated 22 chromosomal rearrangements, generating four fusion transcripts that had altered transcriptional regulation (BCORL1-ELF4) or promoter activity. Whole-exome sequencing at a higher sequence depth (>76× coverage) revealed a TSC1 nonsense substitution in a subpopulation of the tumor cells. This first high-resolution characterization of a virus-associated cancer genome identified previously uncharacterized mutation patterns, intra-chromosomal rearrangements and fusion genes, as well as genetic heterogeneity within the tumor.
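The substitution-pattern analysis (e.g., the reported T>C/A>G predominance) rests on collapsing the twelve possible base changes into six strand-symmetric classes; a minimal sketch with a toy list of calls:

```python
from collections import Counter

# Complement map used to collapse the 12 substitution types into the
# 6 classes conventionally referenced to the pyrimidine base.
COMP = {"A": "T", "C": "G", "G": "C", "T": "A"}

def substitution_class(ref, alt):
    """Return a class such as 'T>C', collapsing strand complements
    (e.g. an 'A>G' call is reported as 'T>C')."""
    if ref in ("A", "G"):                 # purine reference: complement both
        ref, alt = COMP[ref], COMP[alt]
    return f"{ref}>{alt}"

def spectrum(calls):
    """Count substitution classes over (ref, alt) pairs."""
    return Counter(substitution_class(r, a) for r, a in calls)

# Toy somatic calls, not the paper's data
calls = [("A", "G"), ("T", "C"), ("C", "T"), ("G", "A"), ("T", "C")]
spec = spectrum(calls)                    # T>C dominates this toy set
```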
Mesorectal Invasion Depth in Rectal Carcinoma Is Associated With Low Survival.
Lino-Silva, Leonardo S; Loaeza-Belmont, Reynaldo; Gómez Álvarez, Miguel A; Vela-Sarmiento, Itzel; Aguilar-Romero, José M; Domínguez-Rodríguez, Jorge A; Salcedo-Hernández, Rosa A; Ruiz-García, Erika B; Maldonado-Martínez, Héctor A; Herrera-Gómez, Ángel
2017-03-01
Most cases of rectal cancer (RC) in our institution are in pathologic stage T3. They are a heterogeneous group but have been classified in a single-stage category. We performed the present study to validate the prognostic significance of the mesorectal extension depth (MED) in T3 RC measured in millimeters beyond the muscularis propria plane. We performed a retrospective analysis of 104 patients with T3 RC who had undergone curative surgery after a course of preoperative chemoradiotherapy at a tertiary referral cancer hospital. The patients were grouped by MED (T3a, < 1 mm; T3b, 1-5 mm; T3c > 5-10 mm; and T3d > 10 mm). The clinicopathologic data and disease-free survival were analyzed. The 5-year disease-free survival rate according to the T3 subclassification was 87.5% for those with T3a, 57.9% for T3b, 38.7% for T3c, and 40.3% for those with T3d tumors (P = .050). On univariate and multivariate analysis, the prognostic factors affecting survival were overall recurrence (hazard ratio [HR], 3.670; 95% confidence interval [CI], 1.710-7.837; P = .001), histologic grade (HR, 2.204; 95% CI, 1.156-4.199; P = .016), mesorectal invasion depth (HR, 1.885; 95% CI, 1.164-3.052; P = .010), and lymph node metastasis (HR, 1.211; 95% CI, 1.015-1.444; P = .033). MED is a significant prognostic factor in patients with T3 RC who have undergone neoadjuvant chemoradiotherapy, especially when the MED is > 5 mm. The MED could be as important as other clinicopathologic factors in predicting disease-specific survival. Copyright © 2016 Elsevier Inc. All rights reserved.
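Subgroup survival rates of this kind are typically computed with the Kaplan-Meier estimator; a minimal sketch on toy follow-up data (the study's actual survival software and data are not reproduced here):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator. times: years of follow-up;
    events: 1 = recurrence/death observed, 0 = censored.
    Returns (time, survival probability) pairs at each event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    s = 1.0
    curve = []
    for i in order:
        if events[i]:                       # event: step the curve down
            s *= (at_risk - 1) / at_risk
            curve.append((times[i], s))
        at_risk -= 1                        # event or censoring leaves risk set
    return curve

# Toy subgroup: recurrences at 1.0 and 3.0 years, three censored patients
curve = kaplan_meier([1.0, 2.5, 3.0, 5.0, 5.0], [1, 0, 1, 0, 0])
```

The 5-year disease-free survival per T3 subclass quoted above is the value of such a curve at t = 5.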
Validation of YCAR algorithm over East Asia TCCON sites
NASA Astrophysics Data System (ADS)
Kim, W.; Kim, J.; Jung, Y.; Lee, H.; Goo, T. Y.; Cho, C. H.; Lee, S.
2016-12-01
In order to reduce the aerosol-induced retrieval error of the TANSO-FTS column-averaged CO2 concentration (XCO2), we developed the Yonsei university CArbon Retrieval (YCAR) algorithm, which uses aerosol information from the TANSO-Cloud and Aerosol Imager (TANSO-CAI) to provide simultaneous aerosol optical depth properties for the same geometry and optical path as the FTS. We also validated the retrieved results against ground-based TCCON measurements. In particular, this study is the first to utilize measurements at Anmyeondo, the only TCCON site in South Korea, which improves the quality of validation in East Asia. After the post-screening process, the YCAR algorithm has 33-85% higher data availability than the other operational algorithms (NIES, ACOS, UoL). Despite this higher data availability, its regression statistics against TCCON measurements are similar to or better than those of the other algorithms: the regression line is close to the identity line, with an RMSE of 2.05 ppm and a bias of -0.86 ppm. According to the error analysis, the retrieval error of the YCAR algorithm over East Asia is 1.394-1.478 ppm. In addition, a spatio-temporal sampling error of 0.324-0.358 ppm for each single-sounding retrieval was estimated using CarbonTracker-Asia data. These error-analysis results demonstrate the reliability and accuracy of the latest version of the YCAR algorithm. XCO2 values retrieved with the YCAR algorithm from TANSO-FTS and the TCCON measurements both show a consistent increasing trend of about 2.3-2.6 ppm per year. Compared with the growth rate of the global background CO2 measured at Mauna Loa, Hawaii (2 ppm per year), the trend in East Asia is about 30% higher, reflecting the rapid increase of CO2 emissions from this source region.
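The bias and RMSE quoted against TCCON reduce to simple statistics of the retrieval-minus-truth differences; a sketch with fabricated coincident soundings:

```python
import numpy as np

def validate_xco2(retrieved_ppm, tccon_ppm):
    """Bias and RMSE of satellite XCO2 retrievals against coincident
    TCCON ground-truth values (both in ppm)."""
    diff = np.asarray(retrieved_ppm, dtype=float) - np.asarray(tccon_ppm, dtype=float)
    bias = float(np.mean(diff))
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    return bias, rmse

# Fabricated coincident soundings for illustration
retrieved = [398.2, 400.1, 401.5, 399.0]
tccon     = [399.0, 400.5, 402.0, 400.3]
bias, rmse = validate_xco2(retrieved, tccon)   # negative bias: low retrievals
```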
NASA Astrophysics Data System (ADS)
Christian, Paul M.
2002-07-01
This paper presents a demonstrated approach to significantly reduce the cost and schedule of non real-time modeling and simulation, real-time HWIL simulation, and embedded code development. The tool and the methodology presented capitalize on a paradigm that has become a standard operating procedure in the automotive industry. The tool described is known as the Aerospace Toolbox, and it is based on the MathWorks Matlab/Simulink framework, which is a COTS application. Extrapolation of automotive industry data and initial applications in the aerospace industry show that the use of the Aerospace Toolbox can make significant contributions in the quest by NASA and other government agencies to meet aggressive cost reduction goals in development programs. The part I of this paper provided a detailed description of the GUI based Aerospace Toolbox and how it is used in every step of a development program; from quick prototyping of concept developments that leverage built-in point of departure simulations through to detailed design, analysis, and testing. Some of the attributes addressed included its versatility in modeling 3 to 6 degrees of freedom, its library of flight test validated library of models (including physics, environments, hardware, and error sources), and its built-in Monte Carlo capability. Other topics that were covered in part I included flight vehicle models and algorithms, and the covariance analysis package, Navigation System Covariance Analysis Tools (NavSCAT). Part II of this series will cover a more in-depth look at the analysis and simulation capability and provide an update on the toolbox enhancements. It will also address how the Toolbox can be used as a design hub for Internet based collaborative engineering tools such as NASA's Intelligent Synthesis Environment (ISE) and Lockheed Martin's Interactive Missile Design Environment (IMD).
NASA Astrophysics Data System (ADS)
Sun, K.; Chao, X.; Sur, R.; Goldenstein, C. S.; Jeffries, J. B.; Hanson, R. K.
2013-12-01
A novel strategy has been developed for the analysis of wavelength-scanned wavelength modulation spectroscopy (WMS) with tunable diode lasers (TDLs). The method simulates WMS signals to compare with measurements to determine gas properties (e.g., temperature, pressure and concentration of the absorbing species). Injection-current-tuned TDLs have simultaneous wavelength and intensity variation, which severely complicates the Fourier expansion of the simulated WMS signal into harmonics of the modulation frequency (fm). The new method differs from previous WMS analysis strategies in two significant ways: (1) the measured laser intensity is used to simulate the transmitted laser intensity and (2) digital lock-in and low-pass filter software is used to expand both simulated and measured transmitted laser intensities into harmonics of the modulation frequency, WMS-nfm (n = 1, 2, 3,…), avoiding the need for an analytic model of intensity modulation or Fourier expansion of the simulated WMS harmonics. This analysis scheme is valid at any optical depth, modulation index, and at all values of scanned-laser wavelength. The method is demonstrated and validated with WMS of dilute H2O in air (1 atm, 296 K, near 1392 nm). WMS-nfm harmonics for n = 1 to 6 are extracted, and the simulation and measurements are found to be in good agreement for the entire WMS lineshape. The use of 1f-normalization strategies to realize calibration-free wavelength-scanned WMS is also discussed.
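The digital lock-in step that extracts WMS-nfm harmonics can be sketched as follows on a synthetic intensity with known 1f and 2f content; averaging over an integer number of modulation periods stands in for the low-pass filter here, which is an illustrative simplification:

```python
import numpy as np

def lockin_harmonic(signal, t, fm, n):
    """Magnitude of the n-th harmonic of fm via digital lock-in:
    multiply by in-phase/quadrature references and average over an
    integer number of periods (the averaging is the low-pass filter)."""
    x = signal * np.cos(2 * np.pi * n * fm * t)
    y = signal * np.sin(2 * np.pi * n * fm * t)
    return 2.0 * np.hypot(np.mean(x), np.mean(y))

# Synthetic 'transmitted intensity' with known harmonic content
fm = 10e3                                      # modulation frequency, Hz
t = np.arange(0, 100 / fm, 1 / (1000 * fm))    # 100 periods, dense sampling
sig = 1.0 + 0.30 * np.cos(2 * np.pi * fm * t) + 0.05 * np.cos(4 * np.pi * fm * t)

h1 = lockin_harmonic(sig, t, fm, 1)   # recovers the 0.30 amplitude at 1f
h2 = lockin_harmonic(sig, t, fm, 2)   # recovers the 0.05 amplitude at 2f
```

In the actual scheme the same lock-in is applied identically to the simulated and the measured intensity, so no analytic harmonic model is needed.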
Bi, Sheng; Xia, Ming
2015-08-11
To compare the validity and safety of holmium:YAG laser and traditional surgery in partial nephrectomy. A total of 28 patients were divided into two groups (a holmium:YAG laser group without renal artery clamping and a traditional surgery group with renal artery clamping). The intraoperative blood loss, total operative time, renal artery clamping time, postoperative hospital stay, separated renal function, postoperative complications and depth of tissue injury were recorded. These values were 80 ml, 77 min, 0 min, 7.4 days, 35 ml/min, 0, and 0.9 cm, respectively, in the holmium:YAG laser group, and 69 ml, 111 min, 25.5 min, 7.3 days, 34 ml/min, 0, and 2.0 cm, respectively, in the traditional surgery group. The differences in total operative time, renal artery clamping time and depth of tissue injury between the two groups were statistically significant; the other differences were not. Holmium:YAG laser is effective and safe in partial nephrectomy. It can decrease the total operative time, minimize the warm ischemia time and enlarge the extent of surgical excision.
NASA Astrophysics Data System (ADS)
Kusznir, Nick; Alvey, Andy; Roberts, Alan
2017-04-01
The 3D mapping of crustal thickness for continental shelves and oceanic crust, and the determination of ocean-continent transition (OCT) structure and continent-ocean boundary (COB) location, represents a substantial challenge. Geophysical inversion of satellite derived free-air gravity anomaly data incorporating a lithosphere thermal anomaly correction (Chappell & Kusznir, 2008) now provides a useful and reliable methodology for mapping crustal thickness in the marine domain. Using this we have produced the first comprehensive maps of global crustal thickness for oceanic and continental shelf regions. Maps of crustal thickness and continental lithosphere thinning factor from gravity inversion may be used to determine the distribution of oceanic lithosphere, micro-continents and oceanic plateaux including for the inaccessible polar regions (e.g. Arctic Ocean, Alvey et al.,2008). The gravity inversion method provides a prediction of continent-ocean boundary location which is independent of ocean magnetic anomaly and isochron interpretation. Using crustal thickness and continental lithosphere thinning factor maps with superimposed shaded-relief free-air gravity anomaly, we can improve the determination of pre-breakup rifted margin conjugacy and sea-floor spreading trajectory during ocean basin formation. By restoring crustal thickness & continental lithosphere thinning to their initial post-breakup configuration we show the geometry and segmentation of the rifted continental margins at their time of breakup, together with the location of highly-stretched failed breakup basins and rifted micro-continents. For detailed analysis to constrain OCT structure, margin type (i.e. magma poor, "normal" or magma rich) and COB location, a suite of quantitative analytical methods may be used which include: (i) Crustal cross-sections showing Moho depth and crustal basement thickness from gravity inversion. 
(ii) Residual depth anomaly (RDA) analysis which is used to investigate OCT bathymetric anomalies with respect to expected oceanic values. This includes flexural backstripping to produce bathymetry corrected for sediment loading. (iii) Subsidence analysis which is used to determine the distribution of continental lithosphere thinning. (iv) Joint inversion of time-domain deep seismic reflection and gravity anomaly data which is used to determine lateral variations in crustal basement density and velocity across the OCT, and to validate deep seismic reflection interpretations of Moho depth. The combined interpretation of these independent quantitative measurements is used to determine crustal thickness and composition across the ocean-continent-transition. This integrated approach has been validated on the Iberian margin where ODP drilling provides ground-truth of ocean-continent-transition crustal structure, continent-ocean-boundary location and magmatic type.
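As a much-simplified illustration of how a gravity anomaly maps to crustal-thickness relief, the infinite-slab (Bouguer) approximation gives delta_g = 2*pi*G*delta_rho*delta_h; the density contrast below is an assumed value, and the actual method is a spectral gravity inversion with a lithosphere thermal anomaly correction, not this slab formula:

```python
import math

G = 6.674e-11                        # gravitational constant, m^3 kg^-1 s^-2

def moho_relief_from_anomaly(delta_g_mgal, density_contrast=500.0):
    """Infinite-slab approximation relating a gravity anomaly (mGal)
    to a change in Moho depth (m): delta_g = 2*pi*G*drho*dh.
    density_contrast is an assumed crust-mantle contrast in kg/m^3."""
    delta_g = delta_g_mgal * 1e-5    # mGal -> m/s^2
    return delta_g / (2 * math.pi * G * density_contrast)

dh_m = moho_relief_from_anomaly(20.0)   # a 20 mGal anomaly -> roughly 1 km
```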
Eide, Ingvar; Westad, Frank
2018-01-01
A pilot study demonstrating real-time environmental monitoring with automated multivariate analysis of multi-sensor data submitted online has been performed at the cabled LoVe Ocean Observatory located at 258 m depth 20 km off the coast of Lofoten-Vesterålen, Norway. The major purpose was efficient monitoring of many variables simultaneously and early detection of changes and time-trends in the overall response pattern before changes were evident in individual variables. The pilot study was performed with 12 sensors from May 16 to August 31, 2015. The sensors provided data for chlorophyll, turbidity, conductivity, temperature (three sensors), salinity (calculated from temperature and conductivity), biomass at three different depth intervals (5-50, 50-120, 120-250 m), and current speed measured in two directions (east and north) using two sensors covering different depths with overlap. A total of 88 variables were monitored, 78 from the two current speed sensors. The time-resolution varied, thus the data had to be aligned to a common time resolution. After alignment, the data were interpreted using principal component analysis (PCA). Initially, a calibration model was established using data from May 16 to July 31. The data on current speed from two sensors were subject to two separate PCA models and the score vectors from these two models were combined with the other 10 variables in a multi-block PCA model. The observations from August were projected on the calibration model consecutively one at a time and the result was visualized in a score plot. Automated PCA of multi-sensor data submitted online is illustrated with an attached time-lapse video covering the relatively short time period used in the pilot study. Methods for statistical validation, and warning and alarm limits are described. Redundant sensors enable sensor diagnostics and quality assurance. In a future perspective, the concept may be used in integrated environmental monitoring.
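The calibrate-then-project workflow (a PCA model fitted on the May-July window, then August observations projected onto it one at a time) can be sketched with a plain SVD-based PCA; the data here are random stand-ins, and the multi-block combination of current-speed scores is omitted:

```python
import numpy as np

def fit_pca(X_cal, n_components=2):
    """Fit a PCA calibration model (mean-centred, unit-variance scaled)
    on a calibration window of multi-sensor data (rows = time points)."""
    mu = X_cal.mean(axis=0)
    sd = X_cal.std(axis=0)
    Z = (X_cal - mu) / sd
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return mu, sd, Vt[:n_components]          # loadings of the leading PCs

def project(x_new, mu, sd, components):
    """Project one new observation onto the calibration model,
    as done consecutively for the post-calibration observations."""
    return ((x_new - mu) / sd) @ components.T

rng = np.random.default_rng(0)
X_cal = rng.normal(size=(200, 10))            # stand-in for the May-July data
mu, sd, comps = fit_pca(X_cal)
score = project(rng.normal(size=10), mu, sd, comps)   # point in the score plot
```

A new observation drifting outside the cloud of calibration scores is the early-warning signal described above.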
Mentiplay, Benjamin F; Hasanki, Ksaniel; Perraton, Luke G; Pua, Yong-Hao; Charlton, Paula C; Clark, Ross A
2018-03-01
The Microsoft Xbox One Kinect™ (Kinect V2) contains a depth camera that can be used to manually identify anatomical landmark positions in three-dimensions independent of the standard skeletal tracking, and therefore has potential for low-cost, time-efficient three-dimensional movement analysis (3DMA). This study examined inter-session reliability and concurrent validity of the Kinect V2 for the assessment of coronal and sagittal plane kinematics for the trunk, hip and knee during single leg squats (SLS) and drop vertical jumps (DVJ). Thirty young, healthy participants (age = 23 ± 5yrs, male/female = 15/15) performed a SLS and DVJ protocol that was recorded concurrently by the Kinect V2 and 3DMA during two sessions, one week apart. The Kinect V2 demonstrated good to excellent reliability for all SLS and DVJ variables (ICC ≥ 0.73). Concurrent validity ranged from poor to excellent (ICC = 0.02 to 0.98) during the SLS task, although trunk, hip and knee flexion and two-dimensional measures of knee abduction and frontal plane projection angle all demonstrated good to excellent validity (ICC ≥ 0.80). Concurrent validity for the DVJ task was typically worse, with only two variables exceeding ICC = 0.75 (trunk and hip flexion). These findings indicate that the Kinect V2 may have potential for large-scale screening for ACL injury risk, however future prospective research is required.
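Inter-session reliability of this kind is quantified with an intraclass correlation; the sketch below implements one common variant, ICC(3,1) for consistency, which is an assumption since the exact ICC form used is not stated in this abstract:

```python
import numpy as np

def icc3_1(X):
    """ICC(3,1), two-way mixed model, consistency: agreement of
    repeated sessions (columns) across subjects (rows)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    m = X.mean()
    rows = X.mean(axis=1)
    cols = X.mean(axis=0)
    msb = k * np.sum((rows - m) ** 2) / (n - 1)              # between subjects
    sse = np.sum((X - rows[:, None] - cols[None, :] + m) ** 2)
    mse = sse / ((n - 1) * (k - 1))                          # residual
    return (msb - mse) / (msb + (k - 1) * mse)

# Toy data: session 2 = session 1 + fixed offset -> perfect consistency
s1 = np.array([10.0, 12.0, 15.0, 11.0, 18.0])
icc = icc3_1(np.column_stack([s1, s1 + 2.0]))
```

On such perfectly consistent toy data the ICC is 1; the reported thresholds (e.g. ICC ≥ 0.75 as good to excellent) are applied to values computed this way.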
A Citizen Science Campaign to Validate Snow Remote-Sensing Products
NASA Astrophysics Data System (ADS)
Wikstrom Jones, K.; Wolken, G. J.; Arendt, A. A.; Hill, D. F.; Crumley, R. L.; Setiawan, L.; Markle, B.
2017-12-01
The ability to quantify seasonal water retention and storage in mountain snow packs has implications for an array of important topics, including ecosystem function, water resources, hazard mitigation, validation of remote sensing products, climate modeling, and the economy. Runoff simulation models, which typically rely on gridded climate data and snow remote sensing products, would be greatly improved if uncertainties in estimates of snow depth distribution in high-elevation complex terrain could be reduced. This requires an increase in the spatial and temporal coverage of observational snow data in high-elevation data-poor regions. To this end, we launched Community Snow Observations (CSO). Participating citizen scientists use Mountain Hub, a multi-platform mobile and web-based crowdsourcing application that allows users to record, submit, and instantly share geo-located snow depth, snow water equivalence (SWE) measurements, measurement location photos, and snow grain information with project scientists and other citizen scientists. The snow observations are used to validate remote sensing products and modeled snow depth distribution. The project's prototype phase focused on Thompson Pass in south-central Alaska, an important infrastructure corridor that includes avalanche terrain and the Lowe River drainage and is essential to the City of Valdez and the fisheries of Prince William Sound. This year's efforts included website development, expansion of the Mountain Hub tool, and recruitment of citizen scientists through a combination of social media outreach, community presentations, and targeted recruitment of local avalanche professionals. We also conducted two intensive field data collection campaigns that coincided with an aerial photogrammetric survey. With more than 400 snow depth observations, we have generated a new snow remote-sensing product that better matches actual SWE quantities for Thompson Pass. 
In the next phase of the citizen science portion of this project we will focus on expanding our group of participants to a larger geographic area in Alaska, further develop our partnership with Mountain Hub, and build relationships in new communities as we conduct a photogrammetric survey in a different region next year.
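A crowdsourced snow-depth observation is converted to snow water equivalent by scaling depth with the ratio of snow to water density; the bulk density below is an assumed illustrative value, since CSO pairs depth observations with measured or modeled density:

```python
def swe_mm(depth_cm, snow_density_kg_m3=300.0):
    """Snow water equivalent (mm) from snow depth (cm):
    SWE = depth * rho_snow / rho_water. The default bulk density
    is an illustrative assumption, not a CSO-calibrated value."""
    return depth_cm * 10.0 * snow_density_kg_m3 / 1000.0

swe = swe_mm(150.0)   # 150 cm of snow at 300 kg/m^3 -> 450 mm of water
```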
Schoellhamer, D.H.
2002-01-01
Suspended sediment concentration (SSC) data from San Pablo Bay, California, were analyzed to compare the basin-scale effect of dredging and disposal of dredged material (dredging operations) and natural estuarine processes. The analysis used twelve 3-wk to 5-wk periods of mid-depth and near-bottom SSC data collected at Point San Pablo every 15 min from 1993-1998. Point San Pablo is within a tidal excursion of a dredged-material disposal site. The SSC data were compared to dredging volume, Julian day, and hydrodynamic and meteorological variables that could affect SSC. Kendall's τ, Spearman's ρ, and weighted (by the fraction of valid data in each period) Spearman's ρw correlation coefficients of the variables indicated which variables were significantly correlated with SSC. Wind-wave resuspension had the greatest effect on SSC. Median water-surface elevation was the primary factor affecting mid-depth SSC. Greater depths inhibit wind-wave resuspension of bottom sediment and indicate greater influence of less turbid water from down estuary. Seasonal variability in the supply of erodible sediment is the primary factor affecting near-bottom SSC. Natural physical processes in San Pablo Bay are more areally extensive, of equal or longer duration, and as frequent as dredging operations (when occurring), and they affect SSC at the tidal time scale. Natural processes control SSC at Point San Pablo even when dredging operations are occurring.
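The rank-correlation screening can be sketched as follows; the weighted variant below is a generic weighted rank correlation standing in for the period-weighted ρw, and the data are fabricated:

```python
import numpy as np

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) pairs over all pairs."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign((x[i] - x[j]) * (y[i] - y[j]))
    return s / (n * (n - 1) / 2)

def spearman_rho(x, y, w=None):
    """Spearman's rho: Pearson correlation of ranks. Optional weights w
    sketch a weighted rho (e.g. weighting by fraction of valid data)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    w = np.ones_like(rx) if w is None else np.asarray(w, dtype=float)
    w = w / w.sum()
    mx, my = np.sum(w * rx), np.sum(w * ry)
    cov = np.sum(w * (rx - mx) * (ry - my))
    return cov / np.sqrt(np.sum(w * (rx - mx) ** 2) * np.sum(w * (ry - my) ** 2))

# Fabricated, perfectly monotone wind-speed vs SSC example
wind = [2.0, 5.0, 3.0, 8.0, 6.0]
ssc  = [10.0, 22.0, 14.0, 35.0, 30.0]
tau = kendall_tau(wind, ssc)      # 1.0 for a perfectly monotone relation
rho = spearman_rho(wind, ssc)     # likewise 1.0
```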
Near-Surface Effects of Free Atmosphere Stratification in Free Convection
NASA Astrophysics Data System (ADS)
Mellado, Juan Pedro; van Heerwaarden, Chiel C.; Garcia, Jade Rachele
2016-04-01
The effect of a linear stratification in the free atmosphere on near-surface properties in a free convective boundary layer (CBL) is investigated by means of direct numerical simulation. We consider two regimes: a neutral stratification regime, which represents a CBL that grows into a residual layer, and a strong stratification regime, which represents the equilibrium (quasi-steady) entrainment regime. We find that the mean buoyancy varies as z^{-1/3}, in agreement with classical similarity theory. However, the root-mean-square (r.m.s.) of the buoyancy fluctuation and the r.m.s. of the vertical velocity vary as z^{-0.45} and ln z, respectively, both in disagreement with theory. These scaling laws are independent of the stratification regime, but the depth over which they are valid depends on the stratification. In the strong stratification regime, this depth is about 20 to 25 % of the CBL depth instead of the commonly used 10 %, which we only observe under neutral conditions. In both regimes, the near-surface flow structure can be interpreted as a hierarchy of circulations attached to the surface. Based on this structure, we define a new near-surface layer in free convection, the plume-merging layer, that is conceptually different from the constant-flux layer. The varying depth of the plume-merging layer depending on the stratification accounts for the varying depth of validity of the scaling laws. These findings imply that the buoyancy transfer law needed in mixed-layer and single-column models is well described by the classical similarity theory, independent of the stratification in the free atmosphere, even though other near-surface properties, such as the r.m.s. of the buoyancy fluctuation and the r.m.s. of the vertical velocity, are inconsistent with that theory.
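Scaling exponents such as the z^{-1/3} law for the mean buoyancy are typically checked by a least-squares fit in log-log space; a minimal sketch on a synthetic profile that obeys the theoretical law exactly:

```python
import numpy as np

def fit_exponent(z, q):
    """Fit q ~ z^alpha by linear least squares in log-log space,
    as one would test the z^{-1/3} similarity prediction."""
    slope, _ = np.polyfit(np.log(z), np.log(q), 1)
    return slope

z = np.linspace(0.05, 0.25, 50)      # near-surface heights (fraction of CBL depth)
b = 2.0 * z ** (-1.0 / 3.0)          # synthetic mean-buoyancy profile obeying theory
alpha = fit_exponent(z, b)           # recovers the -1/3 exponent
```

The study's finding that the buoyancy-fluctuation r.m.s. scales as z^{-0.45} rather than the theoretical prediction is exactly this kind of fitted-exponent discrepancy.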
Rusterholz, Thomas; Achermann, Peter; Dürr, Roland; Koenig, Thomas; Tarokh, Leila
2017-06-01
Investigating functional connectivity between brain networks has become an area of interest in neuroscience. Several methods for investigating connectivity have recently been developed; however, these techniques need to be applied with care. We demonstrate that global field synchronization (GFS), a global measure of phase alignment in the EEG as a function of frequency, must be applied with signal processing principles in mind in order to yield valid results. Multichannel EEG (27 derivations) was analyzed for GFS based on the complex spectrum derived by the fast Fourier transform (FFT). We examined the effect of window functions on GFS, in particular of non-rectangular windows. Applying a rectangular window when calculating the FFT revealed high GFS values for high frequencies (>15 Hz) that were highly correlated (r=0.9) with spectral power in the lower frequency range (0.75-4.5 Hz) and tracked the depth of sleep. This turned out to be spurious synchronization. With a non-rectangular window (Tukey or Hanning window) this high-frequency synchronization vanished. Both GFS and power density spectra differed significantly between rectangular and non-rectangular windows. Previous papers using GFS typically did not specify the applied window and may have used a rectangular window function. However, the demonstrated impact of the window function raises the question of the validity of some previous findings at higher frequencies. We demonstrated that it is crucial to apply an appropriate window function when determining synchronization measures based on a spectral approach, to avoid spurious synchronization in the beta/gamma range. Copyright © 2017 Elsevier B.V. All rights reserved.
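The window-function effect is ordinary spectral leakage and is easy to reproduce: a slow oscillation whose frequency is not an exact FFT bin leaks power into high-frequency bins under a rectangular window, while a Hanning window suppresses this by orders of magnitude. A sketch with a synthetic EEG-like epoch:

```python
import numpy as np

fs = 128.0                       # sampling rate, Hz
t = np.arange(0, 4.0, 1 / fs)    # 4-s epoch (512 samples)
# Slow-wave-like signal at 2.3 Hz: NOT an exact FFT bin (bin width 0.25 Hz),
# so a rectangular window leaks power into high frequencies.
sig = np.sin(2 * np.pi * 2.3 * t)

freqs = np.fft.rfftfreq(len(t), 1 / fs)
rect = np.abs(np.fft.rfft(sig)) ** 2                       # rectangular window
hann = np.abs(np.fft.rfft(sig * np.hanning(len(t)))) ** 2  # Hanning window

hi = freqs > 15.0                # the 'beta/gamma' range of the abstract
leak_rect = rect[hi].sum() / rect.sum()
leak_hann = hann[hi].sum() / hann.sum()
# leak_hann is far smaller: the rectangular window's slow-wave leakage
# is what masquerades as high-frequency synchronization.
```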
Numerical Uncertainty Quantification for Radiation Analysis Tools
NASA Technical Reports Server (NTRS)
Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha
2007-01-01
Recently a new emphasis has been placed on engineering applications of space radiation analyses, and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing: as the number of rays increases, the associated uncertainty decreases, but the computational expense increases. Thus, a cost-benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is how many thicknesses are needed to obtain an accurate result, so convergence testing is performed to quantify the uncertainty associated with interpolating over different shield thickness spatial grids.
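The shield-thickness convergence test can be sketched as follows, with a simple exponential attenuation curve standing in for the real dose-vs-depth data from the transport codes:

```python
import numpy as np

def dose(depth):
    """Stand-in dose-vs-depth curve (exponential attenuation);
    the attenuation constant is an illustrative assumption."""
    return np.exp(-0.25 * depth)

def interp_error(n_thicknesses, d_eval):
    """Max error of linear interpolation over a grid of n_thicknesses
    shield thicknesses spanning 0-30 (arbitrary areal-density units)."""
    grid = np.linspace(0.0, 30.0, n_thicknesses)
    approx = np.interp(d_eval, grid, dose(grid))
    return float(np.max(np.abs(approx - dose(d_eval))))

d_eval = np.linspace(0.0, 30.0, 2001)
errors = {n: interp_error(n, d_eval) for n in (5, 10, 20, 40, 80)}
# The error shrinks as the thickness grid is refined, quantifying the
# interpolation contribution to the total exposure uncertainty.
```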
Data for Renewable Energy Planning, Policy, and Investment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cox, Sarah L
Reliable, robust, and validated data are critical for informed planning, policy development, and investment in the clean energy sector. The Renewable Energy (RE) Explorer was developed to support data-driven renewable energy analysis that can inform key renewable energy decisions globally. This document presents the types of geospatial and other data at the core of renewable energy analysis and decision making. Individual data sets used to inform decisions vary in relation to spatial and temporal resolution, quality, and overall usefulness. From Data to Decisions, a complementary geospatial data and analysis decision guide, provides an in-depth view of these and other considerations to enable data-driven planning, policymaking, and investment. Data support a wide variety of renewable energy analyses and decisions, including technical and economic potential assessment, renewable energy zone analysis, grid integration, risk and resiliency identification, electrification, and distributed solar photovoltaic potential. This fact sheet provides information on the types of data that are important for renewable energy decision making using the RE Data Explorer or similar types of geospatial analysis tools.
Pixel-based parametric source depth map for Cerenkov luminescence imaging
NASA Astrophysics Data System (ADS)
Altabella, L.; Boschi, F.; Spinelli, A. E.
2016-01-01
Optical tomography represents a challenging problem in optical imaging because of the intrinsically ill-posed inverse problem due to photon diffusion. Cerenkov luminescence tomography (CLT) for optical photons produced in tissues by several radionuclides (i.e.: 32P, 18F, 90Y) has been investigated using both a 3D multispectral approach and multiview methods. Difficulty in the convergence of 3D algorithms can discourage the use of this technique for obtaining source depth and intensity. For these reasons, we developed a faster, corrected 2D approach based on multispectral acquisitions to obtain the source depth and its intensity using a pixel-based fitting of source intensity. Monte Carlo simulations and experimental data were used to develop and validate the method to obtain the parametric map of source depth. With this approach we obtain parametric source depth maps with a precision between 3% and 7% for MC simulations and 5-6% for experimental data. Using this method we are able to obtain reliable information about the source depth of Cerenkov luminescence with a simple and flexible procedure.
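A pixel-based multispectral depth fit can be sketched as a log-linear least-squares problem if the signal in each spectral band is modeled as M = A * s0 * exp(-mu * d); the attenuation coefficients and emission weights below are assumed illustrative values, not the paper's tissue parameters:

```python
import numpy as np

# Assumed effective attenuation coefficients (1/cm) and relative Cerenkov
# emission weights at four acquisition bands (illustrative values only)
mu = np.array([2.0, 1.2, 0.8, 0.5])
s0 = np.array([1.0, 0.8, 0.6, 0.5])

def fit_depth(measured):
    """Per-pixel fit of source depth d and intensity A from multispectral
    signals M = A * s0 * exp(-mu * d), via log-linear least squares:
    ln(M/s0) = ln A - mu * d."""
    y = np.log(measured / s0)
    X = np.column_stack([np.ones_like(mu), -mu])
    (lnA, d), *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.exp(lnA), d

# Synthetic pixel with known depth and intensity
A_true, d_true = 5.0, 0.7                    # intensity (a.u.), depth (cm)
signal = A_true * s0 * np.exp(-mu * d_true)
A_fit, d_fit = fit_depth(signal)             # recovers both parameters
```

Repeating the fit at every pixel yields the parametric source-depth map described above.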
Molecular dysexpression in gastric cancer revealed by integrated analysis of transcriptome data.
Li, Xiaomei; Dong, Weiwei; Qu, Xueling; Zhao, Huixia; Wang, Shuo; Hao, Yixin; Li, Qiuwen; Zhu, Jianhua; Ye, Min; Xiao, Wenhua
2017-05-01
Gastric cancer (GC) is often diagnosed in the advanced stages and is associated with a poor prognosis. Obtaining an in-depth understanding of the molecular mechanisms of GC has lagged behind that of other cancers. This study aimed to identify candidate biomarkers for GC. An integrated analysis of microarray datasets was performed to identify differentially expressed genes (DEGs) between GC and normal tissues. Gene ontology and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses were then performed to identify the functions of the DEGs. Furthermore, a protein-protein interaction (PPI) network of the DEGs was constructed. The expression levels of the DEGs were validated in human GC tissues using reverse transcription-quantitative polymerase chain reaction (RT-qPCR). A set of 689 DEGs were identified in GC tissues, as compared with normal tissues, including 202 upregulated DEGs and 487 downregulated DEGs. The KEGG pathway analysis suggested that various pathways may play important roles in the pathology of GC, including pathways related to protein digestion and absorption, extracellular matrix-receptor interaction, and the metabolism of xenobiotics by cytochrome P450. The PPI network analysis indicated that the significant hub proteins consisted of SPP1, TOP2A and ARPC1B. RT-qPCR validation indicated that the expression levels of the top 10 most significantly dysexpressed genes were consistent with the results of the integrated analysis. The present study yielded a reference list of reliable DEGs, which represents a robust pool of candidates for further evaluation of GC pathogenesis and treatment.
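DEG selection in integrated analyses is typically a threshold on log2 fold change and adjusted p-value; a minimal sketch, where the cut-offs and the gene tuples are illustrative assumptions rather than the study's actual criteria:

```python
def select_degs(genes, lfc_cut=1.0, p_cut=0.05):
    """Split genes into up-/down-regulated DEGs.
    genes: (name, log2 fold change, adjusted p-value) tuples.
    Thresholds are common defaults, assumed here for illustration."""
    up = [g for g, lfc, p in genes if p < p_cut and lfc >= lfc_cut]
    down = [g for g, lfc, p in genes if p < p_cut and lfc <= -lfc_cut]
    return up, down

# Toy results; SPP1 and TOP2A appear as hub genes in the abstract,
# the fold changes and p-values here are fabricated
genes = [("SPP1", 2.3, 0.001), ("TOP2A", 1.8, 0.004),
         ("ATP4A", -3.1, 0.0002), ("GAPDH", 0.1, 0.90)]
up, down = select_degs(genes)
```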
NASA Technical Reports Server (NTRS)
Laymon, Charles A.; Crosson, William L.; Limaye, Ashutosh; Manu, Andrew; Archer, Frank
2005-01-01
We compare soil moisture retrieved with an inverse algorithm with observations of mean moisture in the 0-6 cm soil layer. A significant discrepancy is noted between the retrieved and observed moisture. Using emitting depth functions as weighting functions to convert the observed mean moisture to observed effective moisture removes nearly one-half of the discrepancy noted. This result has important implications in remote sensing validation studies.
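Using an emitting-depth function as a weighting function amounts to replacing the 0-6 cm layer mean with a depth-weighted mean; the sketch below assumes an exponentially decaying weight, which is an illustrative choice rather than the study's actual emitting-depth function:

```python
import numpy as np

def effective_moisture(z_cm, theta, decay_cm=2.0):
    """Convert a measured soil-moisture profile theta(z) into the
    'effective' moisture sensed by the radiometer, using an assumed
    exponentially decaying emitting-depth weighting function."""
    w = np.exp(-z_cm / decay_cm)
    w = w / w.sum()                       # normalize weights to sum to 1
    return float(np.sum(w * theta))

z = np.linspace(0.0, 6.0, 61)             # 0-6 cm layer
theta = 0.10 + 0.02 * z                   # profile that is wetter with depth
theta_eff = effective_moisture(z, theta)
# theta_eff < theta.mean(): emission is dominated by the drier surface,
# which is the discrepancy the weighting removes.
```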
NASA Astrophysics Data System (ADS)
Harbitz, C. B.; Frauenfelder, R.; Kaiser, G.; Glimsdal, S.; Sverdrup-thygeson, K.; Løvholt, F.; Gruenburg, L.; Mc Adoo, B. G.
2015-12-01
The 2011 Tōhoku tsunami caused a high number of fatalities and massive destruction. Data collected after the event allow for retrospective analyses. Since 2009, NGI has developed a generic GIS model for local analyses of tsunami vulnerability and mortality risk. The mortality risk convolves the hazard, exposure, and vulnerability. The hazard is represented by the maximum tsunami flow depth (with a corresponding likelihood), the exposure is described by the population density in time and space, while the vulnerability is expressed by the probability of being killed as a function of flow depth and building class. The analysis is further based on high-resolution DEMs. Normally a certain tsunami scenario with a corresponding return period is applied for vulnerability and mortality risk analysis. Hence, the model was first employed for a tsunami forecast scenario affecting Bridgetown, Barbados, and further developed in a forecast study for the city of Batangas in the Philippines. Subsequently, the model was tested by hindcasting the 2009 South Pacific tsunami in American Samoa. This hindcast was based on post-tsunami information. The GIS model was adapted for optimal use of the available data and successfully estimated the degree of mortality. For further validation and development, the model was recently applied in the RAPSODI project for hindcasting the 2011 Tōhoku tsunami in Sendai and Ishinomaki. With reasonable choices of building vulnerability, the estimated expected number of fatalities agrees well with the reported death toll.
The results of the mortality hindcast for the 2011 Tōhoku tsunami substantiate that the GIS model can help to identify high tsunami mortality risk areas, as well as identify the main risk drivers. The research leading to these results has received funding from CONCERT-Japan Joint Call on Efficient Energy Storage and Distribution/Resilience against Disasters (http://www.concertjapan.eu; project RAPSODI - Risk Assessment and design of Prevention Structures fOr enhanced tsunami DIsaster resilience http://www.ngi.no/en/Project-pages/RAPSODI/), and from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 603839 (Project ASTARTE - Assessment, STrategy And Risk reduction for Tsunamis in Europe http://www.astarte-project.eu/).
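The convolution of hazard, exposure, and vulnerability described above can be sketched as a minimal expected-fatality calculation. The logistic depth-mortality fragility curve, its parameters, and the grid-cell values below are illustrative assumptions, not the RAPSODI model itself:

```python
import math

def fatality_prob(flow_depth_m, steepness=1.5, midpoint_m=2.0):
    """Hypothetical logistic depth-mortality fragility curve for one building class."""
    return 1.0 / (1.0 + math.exp(-steepness * (flow_depth_m - midpoint_m)))

def expected_fatalities(cells):
    """Sum over grid cells of population x P(death | flow depth).

    cells: iterable of (population, maximum tsunami flow depth in m)."""
    return sum(pop * fatality_prob(depth) for pop, depth in cells)

# Three assumed grid cells: (exposed population, max flow depth in m).
cells = [(120, 0.5), (80, 2.0), (40, 4.5)]
print(round(expected_fatalities(cells), 1))
```

In a full GIS analysis the fragility curve would additionally be conditioned on building class, and the population term would vary in time and space, as the abstract notes.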
Modeling the investment casting of a titanium crown.
Atwood, R C; Lee, P D; Curtis, R V; Maijer, D M
2007-01-01
The objective of this study was to apply computational modeling tools to assist in the design of titanium dental castings. The tools developed should incorporate state-of-the-art micromodels to predict the depth to which the mechanical properties of the crown are affected by contamination from the mold. The model should also be validated by comparison of macro- and micro-defects found in a typical investment-cast titanium tooth crown. Crowns were hand-waxed and investment cast in commercial purity grade 1 (CP-1) titanium by a commercial dental laboratory. The castings were analyzed using X-ray microtomography (XMT). Following sectioning, analysis continued with optical and scanning electron microscopy, and microhardness testing. An in-house cellular-automata solidification and finite-difference diffusion program was coupled with a commercial casting program to model the investment casting process. A three-dimensional (3D) digital image generated by X-ray tomography was used to generate an accurate geometric representation of a molar crown casting. Previously reported work was significantly expanded upon by including transport of dissolved oxygen and impurity sources on the arbitrarily shaped surface of the crown, and improved coupling of micro- and macro-scale simulations. Macroscale modeling was found to be sufficient to accurately predict the location of the large internal porosity. These are shrinkage pores located in the thick sections of the cusp. The model was used to determine the influence of sprue design on the size and location of these pores. Combining microscale with macroscale modeling allowed the microstructure and depth of contamination to be predicted qualitatively. This combined model predicted a surprising result: the dissolution of silicon from the mold into the molten titanium is sufficient to depress the freezing point of the liquid metal such that the crown begins to solidify below the surface.
Solidification then progresses inwards and back out to the surface through the silicon-enriched near-surface layer. The microstructure and compositional analysis of the near-surface region are consistent with this prediction. A multiscale model was developed and validated, which can be used to design CP-Ti dental castings to minimize both macro- and micro-defects, including shrinkage porosity, grain size and the extent of surface contamination due to reaction with the mold material. The model predicted the surprising result that the extent of Si contamination from the mold was sufficient to suppress the liquidus temperature to the extent that the surface (to a depth of approximately 100 μm) of the casting solidifies after the bulk. This significantly increases the oxygen pickup, thereby increasing the depth of formation of alpha casing. The trend towards mold materials with reduced Si in order to produce easier-to-finish titanium castings is a correct approach.
Sulcal depth-based cortical shape analysis in normal healthy control and schizophrenia groups
NASA Astrophysics Data System (ADS)
Lyu, Ilwoo; Kang, Hakmook; Woodward, Neil D.; Landman, Bennett A.
2018-03-01
Sulcal depth is an important marker of brain anatomy in neuroscience and neurological function. Previously, sulcal depth has been explored at the region-of-interest (ROI) level to increase statistical sensitivity to group differences. In this paper, we present a fully automated method that enables inferences of ROI properties from a sulcal region-focused perspective, consisting of two main components: 1) sulcal depth computation and 2) sulcal curve-based refined ROIs. In conventional statistical analysis, average sulcal depth measurements are employed in several ROIs of the cortical surface. However, taking the average sulcal depth over the full ROI blurs the sulcal depth measurements, which may reduce sensitivity to sulcal depth changes in neurological and psychiatric disorders. To overcome such a blurring effect, we focus on sulcal fundic regions in each ROI by filtering out gyral regions. Consequently, the proposed method is more sensitive to group differences than a traditional ROI approach. In the experiment, we focused on a cortical morphological analysis of sulcal depth reduction in schizophrenia, with a comparison to a normal healthy control group. We show that the proposed method is more sensitive to abnormalities of sulcal depth in schizophrenia; sulcal depth is significantly smaller in most cortical lobes in schizophrenia compared to healthy controls (p < 0.05).
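The blurring effect of whole-ROI averaging versus a sulcus-focused mean can be illustrated with a toy per-vertex example; all values and the fundus mask below are assumed, not the paper's data:

```python
import numpy as np

# Hypothetical per-vertex data for one cortical ROI.
depth = np.array([1.2, 0.3, 2.8, 3.1, 0.4, 2.5])   # sulcal depth (mm) per vertex
sulcal = np.array([0, 0, 1, 1, 0, 1], dtype=bool)   # vertices near sulcal fundi

roi_mean = depth.mean()             # conventional full-ROI average (blurred by gyri)
fundus_mean = depth[sulcal].mean()  # refined, sulcus-focused average

print(round(roi_mean, 2), round(fundus_mean, 2))
```

The shallow gyral vertices pull the full-ROI mean well below the fundus-restricted mean, which is the blurring the refined-ROI approach avoids.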
Prediction of Cutting Force in Turning Process-an Experimental Approach
NASA Astrophysics Data System (ADS)
Thangarasu, S. K.; Shankar, S.; Thomas, A. Tony; Sridhar, G.
2018-02-01
This paper deals with the prediction of cutting forces in a turning process. The turning process with an advanced cutting tool has several advantages over grinding, such as short cycle time, process flexibility, comparable surface roughness, high material removal rate and fewer environmental problems owing to the absence of cutting fluid. In this work, a full-bridge dynamometer has been used to measure the cutting forces over a mild steel workpiece with a cemented carbide insert tool for different combinations of cutting speed, feed rate and depth of cut. The experiments were planned based on a Taguchi design, and the measured cutting forces were compared with the predicted forces in order to validate the feasibility of the proposed design. The percentage contribution of each process parameter was analyzed using Analysis of Variance (ANOVA). Both the experimental results taken from the lathe tool dynamometer and the designed full-bridge dynamometer were analyzed using the Taguchi design of experiments and Analysis of Variance.
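The percentage contribution reported by ANOVA is each factor's sum of squares expressed as a share of the total. A minimal sketch, using hypothetical sums of squares rather than the study's data:

```python
# Hypothetical sums of squares from a Taguchi-designed cutting-force experiment.
ss = {"cutting speed": 12.4, "feed rate": 58.1, "depth of cut": 25.2, "error": 4.3}
ss_total = sum(ss.values())

# Percentage contribution of each source of variation.
contribution = {factor: 100.0 * value / ss_total for factor, value in ss.items()}
for factor, pct in contribution.items():
    print(f"{factor}: {pct:.1f}%")
```

Here the (assumed) feed rate dominates the cutting-force variation; a full ANOVA would also report degrees of freedom, mean squares and F-ratios for significance testing.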
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahaney, W.C.; Boyer, M.G.
1986-08-01
Microflora (bacteria and fungi) distributions in several paleosols from Mount Kenya, East Africa, provide important information about contamination of buried soil horizons dated by radiocarbon. High counts of bacteria and fungi in buried soils provide evidence for contamination by plant root effects or ground water movement. Profiles with decreasing counts versus depth appear to produce internally consistent and accurate radiocarbon dates. Profiles with disjunct or bimodal distributions of microflora at various depths produce internally inconsistent chronological sequences of radiocarbon-dated buried surfaces. Preliminary results suggest that numbers up to 5 × 10² g⁻¹ for bacteria in buried A horizons do not appear to affect the validity of ¹⁴C dates. Beyond this threshold value, contamination appears to produce younger dates, the difference between true age and ¹⁴C age increasing with the amount of microflora contamination.
Routine Mapping of the Snow Depth Distribution on Sea Ice
NASA Astrophysics Data System (ADS)
Farrell, S. L.; Newman, T.; Richter-Menge, J.; Dattler, M.; Paden, J. D.; Yan, S.; Li, J.; Leuschen, C.
2016-12-01
The annual growth and retreat of the polar sea ice cover is influenced by the seasonal accumulation, redistribution and melt of snow on sea ice. Due to its high albedo and low thermal conductivity, snow is also a controlling parameter in the mass and energy budgets of the polar climate system. Under a changing climate scenario it is critical to obtain reliable and routine measurements of snow depth, across basin scales, and long time periods, so as to understand regional, seasonal and inter-annual variability, and the subsequent impacts on the sea ice cover itself. Moreover the snow depth distribution remains a significant source of uncertainty in the derivation of sea ice thickness from remote sensing measurements, as well as in numerical model predictions of future climate state. Radar altimeter systems flown onboard NASA's Operation IceBridge (OIB) mission now provide annual measurements of snow across both the Arctic and Southern Ocean ice packs. We describe recent advances in the processing techniques used to interpret airborne radar waveforms and produce accurate and robust snow depth results. As a consequence of instrument effects and data quality issues associated with the initial release of the OIB airborne radar data, the entire data set was reprocessed to remove coherent noise and sidelobes in the radar echograms. These reprocessed data were released to the community in early 2016, and are available for improved derivation of snow depth. Here, using the reprocessed data, we present the results of seven years of radar measurements collected over Arctic sea ice at the end of winter, just prior to melt. Our analysis provides the snow depth distribution on both seasonal and multi-year sea ice. We present the inter-annual variability in snow depth for both the Central Arctic and the Beaufort/Chukchi Seas. We validate our results via comparison with temporally and spatially coincident in situ measurements gathered during many of the OIB surveys. 
The results will influence future sensor suite development for sea ice studies, and they provide a new metric for comparison with other sea ice observations. Integrating these novel snow depth observations with modeling studies will help inform model development, and advance our predictive capabilities to help better understand how sea ice is responding to a changing climate.
Cippitelli, Enea; Gasparrini, Samuele; Spinsante, Susanna; Gambi, Ennio
2015-01-01
The Microsoft Kinect sensor has gained attention as a tool for gait analysis for several years. Despite the many advantages the sensor provides, however, the lack of a native capability to extract joints from the side view of a human body still limits the adoption of the device to a number of relevant applications. This paper presents an algorithm to locate and estimate the trajectories of up to six joints extracted from the side depth view of a human body captured by the Kinect device. The algorithm is then applied to extract data that can be exploited to provide an objective score for the “Get Up and Go Test”, which is typically adopted for gait analysis in rehabilitation fields. Starting from the depth-data stream provided by the Microsoft Kinect sensor, the proposed algorithm relies on anthropometric models only, to locate and identify the positions of the joints. Differently from machine learning approaches, this solution avoids complex computations, which usually require significant resources. The reliability of the information about the joint position output by the algorithm is evaluated by comparison to a marker-based system. Tests show that the trajectories extracted by the proposed algorithm adhere to the reference curves better than the ones obtained from the skeleton generated by the native applications provided within the Microsoft Kinect (Microsoft Corporation, Redmond, WA, USA, 2013) and OpenNI (OpenNI organization, Tel Aviv, Israel, 2013) Software Development Kits. PMID:25594588
Soil depth mapping using seismic surface waves: Evaluation on eroded loess covered hillslopes
NASA Astrophysics Data System (ADS)
Bernardie, Severine; Samyn, Kevin; Cerdan, Olivier; Grandjean, Gilles
2010-05-01
The purposes of the multidisciplinary DIGISOIL project are the integration and improvement of in situ and proximal technologies for the assessment of soil properties and soil degradation indicators. Foreseen developments concern sensor technologies, data processing and their integration into applications of (digital) soil mapping (DSM). Among the available techniques, the seismic one is, in this study, tested in particular for characterising soil vulnerability to erosion. The spectral analysis of surface waves (SASW) method is an in situ seismic technique used for the evaluation of stiffnesses (G) and associated depths in layered systems. A profile of Rayleigh wave velocity versus frequency, i.e., the dispersion curve, is calculated from each recorded seismogram before being inverted to obtain the vertical profile of shear wave velocity Vs. The soil stiffness can then easily be calculated from the shear velocity if the material density is estimated, and the soil stiffness as a function of depth can be obtained. This information can be a good indicator for identifying the soil-bedrock limit. SASW measurement adapted to soil characterisation is proposed in the DIGISOIL project, as it produces a 2D map of the soil in an easy and quick way. This system was tested for the digital mapping of the depth of loamy material in a catchment of the European loess belt. The validation of this methodology was performed through several additional acquisitions along the seismic profiles: several boreholes were drilled down to the bedrock, providing the geological features of the soil and the depth of the bedrock; several laboratory measurements of various parameters, such as dry density, solid density and water content, were made on samples taken from the boreholes at various depths; and dynamic penetration tests were also conducted along the seismic profile until the bedrock was reached.
Some empirical correlations between the parameters measured with laboratory tests, the qc obtained from the dynamic penetration tests and the Vs acquired from the SASW measurements make it possible to assess the accuracy of the procedure and to evaluate its limitations. The depth to bedrock determined by this procedure can then be combined with the soil erosion susceptibility to produce a risk map. This methodology will help to target measures within areas that show a reduced soil depth associated with a high soil erosion susceptibility.
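The stiffness step described in the abstract follows the standard small-strain relation G = ρ·Vs². A minimal sketch; the velocity and density values are assumed for illustration, not taken from the study:

```python
def shear_modulus(vs_m_per_s, density_kg_m3):
    """Small-strain shear stiffness G = rho * Vs^2, in Pa."""
    return density_kg_m3 * vs_m_per_s ** 2

# Illustrative values for a loamy soil layer (assumed).
vs = 180.0    # shear wave velocity from the inverted dispersion curve (m/s)
rho = 1600.0  # estimated bulk density (kg/m^3)

g_mpa = shear_modulus(vs, rho) / 1e6  # convert Pa to MPa
print(g_mpa)
```

In practice Vs comes from inverting the Rayleigh-wave dispersion curve, and the density estimate (here assumed) is the main extra input needed to turn the velocity profile into a stiffness profile.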
Broström, Anders; Arestedt, Kristofer Franzén; Nilsen, Per; Strömberg, Anna; Ulander, Martin; Svanborg, Eva
2010-12-01
Continuous positive airway pressure (CPAP) is the treatment of choice for obstructive sleep apnoea syndrome (OSAS), but side-effects are common. No validated self-rating scale measuring side-effects to CPAP treatment exists today. The aim was to develop the side-effects to CPAP treatment inventory (SECI), and investigate the validity and reliability of the instrument among patients with OSAS. SECI was developed on the basis of: (1) in-depth interviews with 23 patients; (2) examination of the scientific literature and (3) consensus agreement of a multi-professional expert panel. This yielded 15 different types of side-effects related to CPAP treatment. Each side-effect has three sub-questions (scales): perceived frequency (a) and magnitude (b) of the side-effect, as well as its perceived impact on CPAP use (c). A cross-sectional descriptive design was used. A total of 329 patients with OSAS with an average use of CPAP treatment for 39 months (2 weeks to 182 months) were recruited. Data were collected with SECI, and obtained from medical records (clinical variables and data related to CPAP treatment). Construct validity was confirmed with factor analysis (principal component analysis with orthogonal rotation). A logical two-factor solution, the device subscale and symptom subscale, emerged across all three scales. The symptom subscale describing physical and psychological side-effects and the device subscale described mask and device-related side-effects. Internal consistency reliability of the three scales was good (Cronbach's α = 0.74-0.86) and acceptable for the subscales (Cronbach's α = 0.62-0.86). The satisfactory measurement properties of this new instrument are promising and indicate that SECI can be used to measure side-effects to CPAP treatment. © 2010 European Sleep Research Society.
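The internal-consistency statistic reported above, Cronbach's α, is computed from the item variances and the variance of the total score: α = k/(k-1) · (1 - Σ var_item / var_total). A small sketch on toy data (not the SECI responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha; items is 2-D: rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy responses (assumed data): 4 respondents x 3 items on a Likert-type scale.
scores = [[3, 4, 3], [2, 2, 3], [4, 5, 5], [1, 2, 1]]
print(round(cronbach_alpha(scores), 2))
```

Values around 0.7 or above, as reported for the SECI scales, are conventionally taken as acceptable internal consistency.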
Validation of A One-Dimensional Snow-Land Surface Model at the Sleepers River Watershed
NASA Astrophysics Data System (ADS)
Sun, Wen-Yih; Chern, Jiun-Dar
A one-dimensional land surface model, based on conservations of heat and water substance inside the soil and snow, is presented. To validate the model, a stand-alone experiment is carried out with five years of meteorological and hydrological observations collected from the NOAA-ARS Cooperative Snow Research Project (1966-1974) at the Sleepers River watershed in Danville, Vermont, U.S.A. The numerical results show that the model is capable of reproducing the observed soil temperature at different depths during the winter as well as a rapid increase of soil temperature after snow melts in the spring. The model also simulates the density, temperature, thickness, and equivalent water depth of snow reasonably well. The numerical results are sensitive to the fresh snow density and the soil properties used in the model, which affect the heat exchange between the snowpack and the soil.
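The heat-conservation component of such a one-dimensional model can be illustrated with a single explicit finite-difference step of vertical heat conduction in the soil column; the diffusivity, grid spacing, and temperature profile below are assumed illustrative values, not the model's actual formulation:

```python
import numpy as np

def step_soil_temperature(T, dt, dz, kappa=5e-7):
    """One explicit finite-difference step of 1-D heat conduction in soil.

    kappa: thermal diffusivity (m^2/s); boundary nodes held fixed for simplicity.
    Stability requires kappa * dt / dz**2 <= 0.5."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + kappa * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return Tn

# Assumed initial temperature profile (deg C) at 0.1 m spacing, surface first.
T = np.array([-5.0, 0.0, 2.0, 4.0, 5.0])
T1 = step_soil_temperature(T, dt=3600.0, dz=0.1)
print(T1)
```

A full land surface model would add water-substance conservation, phase change, and the snowpack layers on top, with surface fluxes replacing the fixed upper boundary used here.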
Koumpouros, Yiannis; Papageorgiou, Effie; Karavasili, Alexandra; Alexopoulou, Despoina
2017-07-01
To examine the Assistive Technology Device Predisposition Assessment scale and provide evidence of validity and reliability of the Greek version. We translated and adapted the original instrument in Greek according to the most well-known guideline recommendations. Field test studies were conducted in a rehabilitation hospital to validate the appropriateness of the final results. Ratings of the different items were statistically analyzed. We recruited 115 subjects who were administered the Form E of the original questionnaire. The experimental analysis conducted revealed a three-subscale structure: (i) Adaptability, (ii) Fit to Use, and (iii) Socializing. According to the results of our study the three subscales measure different constructs. Reliability measures (ICC = 0.981, Pearson's correlation = 0.963, Cronbach's α = 0.701) yielded high values. Test-retest outcome showed great stability. This is the first study, at least to the knowledge of the authors, which focuses solely on measuring the satisfaction of users with the assistive device used, while exploring the Assistive Technology Device Predisposition Assessment - Device Form in such depth. According to the results, it is a stable, valid and reliable instrument applicable to the Greek population. Thus, it can be used to measure the satisfaction of patients with assistive devices. Implications for Rehabilitation The paper explores the cultural adaptability and applicability of the ATD PA - Device Form. The ATD PA - Device Form can be used to assess user satisfaction with the selected assistive device. The ATD PA - Device Form is a valid and reliable instrument for measuring users' satisfaction in the Greek reality.
Fearon, A M; Ganderton, C; Scarvell, J M; Smith, P N; Neeman, T; Nash, C; Cook, J L
2015-12-01
Greater trochanteric pain syndrome (GTPS) is common, resulting in significant pain and disability. There is no condition-specific outcome score to evaluate the degree of severity of disability associated with GTPS in patients with this condition. To develop a reliable and valid outcome measurement capable of evaluating the severity of disability associated with GTPS. A phenomenological framework using in-depth semi-structured interviews of patients and medical experts, and focus groups of physiotherapists, was used in the item generation. Item and format clarification was undertaken via piloting. Multivariate analysis provided the basis for item reduction. The resultant VISA-G was tested for reliability with the intraclass correlation coefficient (ICC), internal consistency (Cronbach's Alpha), and construct validity (correlation coefficient) on 52 naïve participants with GTPS and 31 asymptomatic participants. The resultant outcome measurement tool is consistent in style with existing tendinopathy outcome measurement tools, namely the suite of VISA scores. The VISA-G was found to have a test-retest reliability of ICC2,1 (95% CI) of 0.827 (0.638-0.923). Internal consistency was high with a Cronbach's Alpha of 0.809. Construct validity was demonstrated: the VISA-G measures different constructs than tools previously used in assessing GTPS, the Harris Hip Score and the Oswestry Disability Index (Spearman Rho: 0.020 and 0.0205 respectively). The VISA-G did not demonstrate any floor or ceiling effect in symptomatic participants. The VISA-G is a reliable and valid score for measuring the severity of disability associated with GTPS. Copyright © 2015 Elsevier Ltd. All rights reserved.
Development of a brief instrument to measure smartphone addiction among nursing students.
Cho, Sumi; Lee, Eunjoo
2015-05-01
Interruptions and distractions due to smartphone use in healthcare settings pose potential risks to patient safety. Therefore, it is important to assess smartphone use at work, to encourage nursing students to review their relevant behaviors, and to recognize these potential risks. This study's aim was to develop a scale to measure smartphone addiction and test its validity and reliability. We investigated nursing students' experiences of distractions caused by smartphones in the clinical setting and their opinions about smartphone use policies. Smartphone addiction and the need for a scale to measure it were identified through a literature review and in-depth interviews with nursing students. This scale showed reliability and validity with exploratory and confirmatory factor analysis. In testing the discriminant and convergent validity of the selected (18) items with four factors, the smartphone addiction model explained approximately 91% (goodness-of-fit index = 0.909) of the variance in the data. Pearson correlation coefficients among addiction level, distractions in the clinical setting, and attitude toward policies on smartphone use were calculated. Addiction level and attitude toward policies of smartphone use were negatively correlated. This study suggests that healthcare organizations in Korea should create practical guidelines and policies for the appropriate use of smartphones in clinical practice.
Verification and Validation of the Coastal Modeling System. Report 1: Summary Report
2011-12-01
Information Program (CDIP) Buoy 036 in a water depth of 40 m (relative to Mean Tide Level, MTL) and from the National Data Buoy Center (NDBC) Buoy...August to 14 September 2005, offshore wave data from a CDIP Buoy 098, the ocean surface wind from NDBC Buoy 51001, and water level data from NOAA station...buoy at 26-m depth was maintained by CDIP (Buoy 430), and data are available online at http://cdip.ucsd.edu. The wind measurements are available
Zhao, Hansheng; Sun, Huayu; Li, Lichao; Lou, Yongfeng; Li, Rongsheng; Qi, Lianghua; Gao, Zhimin
2017-01-01
Rattan is an important group of regenerating non-wood climbing palms in tropical forests. The cirrus is an essential climbing organ and provides morphological evidence for evolutionary and taxonomic studies. However, limited data are available on the molecular mechanisms underlying the development of the cirrus. Thus, we performed in-depth transcriptomic sequencing analyses to characterize cirrus development at different developmental stages of Daemonorops jenkinsiana. The results showed that 404,875 transcripts were assembled, from which 61,569 high-quality unigenes were identified, of which approximately 76.16% were annotated and classified by seven authorized databases. Moreover, a comprehensive analysis of the gene expression profiles identified differentially expressed genes (DEGs) concentrated in developmental pathways, cell wall metabolism, and hook formation between the different stages of the cirri. Among them, 37 DEGs were validated by qRT-PCR. Furthermore, 14,693 transcriptome-based microsatellites were identified. Of the 168 designed SSR primer pairs, 153 were validated and 16 pairs were utilized for the polymorphic analysis of 25 rattan accessions. These findings can be used to interpret the molecular mechanisms of cirrus development, and the developed microsatellite markers provide valuable data for assisting rattan taxonomy and expanding the understanding of genomic study in rattan. PMID:28383053
Revision and psychometric testing of the City of Hope Quality of Life-Ostomy Questionnaire.
Grant, Marcia; Ferrell, Betty; Dean, Grace; Uman, Gwen; Chu, David; Krouse, Robert
2004-10-01
Ostomies may be performed for bowel or urinary diversion, and occur in both cancer and non-cancer patients. Impact on physical, psychological, social and spiritual well-being is not unexpected, but has been minimally described in the literature. The City of Hope Quality of Life (COH-QOL)-Ostomy Questionnaire is an adult patient self-report instrument designed to assess quality of life. This report focuses on the revision and psychometric testing of this questionnaire. The revised COH-QOL-Ostomy Questionnaire involved in-depth patient interviews and expert panel review. The format consisted of a 13-item disease and demographic section, a 34-item forced-choice section, and a 41-item linear analogue scaled section. A mailed survey to California members of the United Ostomy Association resulted in a 62% response rate (n = 1513). Factor analysis was conducted to refine the instrument. Construct validity involved testing a number of hypotheses identifying contrasting groups. Factor analysis confirmed the conceptual framework. Reliability of subscales ranged from 0.77 to 0.90. The questionnaire discriminated between subpopulations with specific concerns. Overall, the analyses provide evidence for the validity and reliability of the COH-QOL-Ostomy Questionnaire as a comprehensive, multidimensional self-report questionnaire for measuring quality of life in patients with intestinal ostomies.
NASA Astrophysics Data System (ADS)
Yoon, Mijin; Jee, Myungkook James; Tyson, Tony
2018-01-01
The Deep Lens Survey (DLS), a precursor to the Large Synoptic Survey Telescope (LSST), is a 20 sq. deg survey carried out with NOAO’s Blanco and Mayall telescopes. The strength of the survey lies in its depth reaching down to ~27th mag in BVRz bands. This enables a broad redshift baseline study and allows us to investigate cosmological evolution of the large-scale structure. In this poster, we present the first cosmological analysis from the DLS using galaxy-shear correlations and galaxy clustering signals. Our DLS shear calibration accuracy has been validated through the most recent public weak-lensing data challenge. Photometric redshift systematic errors are tested by performing lens-source flip tests. Instead of real-space correlations, we reconstruct band-limited power spectra for cosmological parameter constraints. Our analysis puts a tight constraint on the matter density and the power spectrum normalization parameters. Our results are highly consistent with our previous cosmic shear analysis and also with the Planck CMB results.
NASA Astrophysics Data System (ADS)
Jethva, H. T.; Torres, O.; Remer, L. A.; Redemann, J.; Dunagan, S. E.; Livingston, J. M.; Shinozuka, Y.; Kacenelenbogen, M. S.; Segal-Rosenhaimer, M.
2014-12-01
Absorbing aerosols produced from biomass burning and dust outbreaks are often found to overlay the lower level cloud decks as evident in the satellite images. In contrast to the cloud-free atmosphere, in which aerosols generally tend to cool the atmosphere, the presence of absorbing aerosols above cloud poses greater potential of exerting positive radiative effects (warming) whose magnitude directly depends on the aerosol loading above cloud, optical properties of clouds and aerosols, and cloud fraction. In recent years, development of algorithms that exploit satellite-based passive measurements of ultraviolet (UV), visible, and polarized light as well as lidar-based active measurements constitute a major breakthrough in the field of remote sensing of aerosols. While the unprecedented quantitative information on aerosol loading above cloud is now available from NASA's A-train sensors, a greater question remains ahead: How to validate the satellite retrievals of above-cloud aerosols (ACA)? Direct measurements of ACA such as carried out by the NASA Ames Airborne Tracking Sunphotometer (AATS) and Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR) can be of immense help in validating ACA retrievals. In this study, we validate the ACA optical depth retrieved using the 'color ratio' (CR) method applied to the MODIS cloudy-sky reflectance by using the airborne AATS and 4STAR measurements. A thorough search of the historic AATS-4STAR database collected during different field campaigns revealed five events where biomass burning, dust, and wildfire-emitted aerosols were found to overlay lower level cloud decks observed during SAFARI-2000, ACE-ASIA 2001, and SEAC4RS-2013, respectively. The co-located satellite-airborne measurements revealed a good agreement (root-mean-square-error<0.1 for Aerosol Optical Depth (AOD) at 500 nm) with most matchups falling within the estimated uncertainties in the MODIS retrievals (-10% to +50%). 
An extensive validation of satellite-based ACA retrievals requires equivalent field measurements particularly over the regions where ACA are often observed from satellites, i.e., south-eastern Atlantic Ocean, tropical Atlantic Ocean, northern Arabian Sea, South-East and North-East Asia.
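The root-mean-square-error statistic used for the matchup comparison above is straightforward to compute. A sketch with hypothetical co-located AOD(500 nm) pairs, not the actual SAFARI-2000/ACE-Asia/SEAC4RS matchups:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two co-located series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(((a - b) ** 2).mean())

# Hypothetical matchups: satellite color-ratio retrievals vs airborne sun photometer.
satellite = [0.42, 0.55, 0.31, 0.78, 0.60]
airborne = [0.40, 0.50, 0.35, 0.70, 0.58]
print(round(rmse(satellite, airborne), 3))
```

An RMSE below 0.1, as reported in the abstract, would indicate the satellite retrievals track the airborne above-cloud AOD to within typical retrieval uncertainty.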
Improved Hydrology over Peatlands in a Global Land Modeling System
NASA Technical Reports Server (NTRS)
Bechtold, M.; Delannoy, G.; Reichle, R.; Koster, R.; Mahanama, S.; Roose, Dirk
2018-01-01
Peatlands of the Northern Hemisphere represent an important carbon pool that mainly accumulated since the last ice age under permanently wet conditions in specific geological and climatic settings. The carbon balance of peatlands is closely coupled to water table dynamics. Consequently, the future carbon balance over peatlands is strongly dependent on how hydrology in peatlands will react to changing boundary conditions, e.g. due to climate change or regional water level drawdown of connected aquifers or streams. Global land surface modeling over organic-rich regions can provide valuable global-scale insights on where and how peatlands are in transition due to changing boundary conditions. However, the current global land surface models are not able to reproduce typical hydrological dynamics in peatlands well. We implemented specific structural and parametric changes to account for key hydrological characteristics of peatlands into NASA's GEOS-5 Catchment Land Surface Model (CLSM, Koster et al. 2000). The main modifications pertain to the modeling of partial inundation, and the definition of peatland-specific runoff and evapotranspiration schemes. We ran a set of simulations on a high performance cluster using different CLSM configurations and validated the results with a newly compiled global in-situ dataset of water table depths in peatlands. The results demonstrate that an update of soil hydraulic properties for peat soils alone does not improve the performance of CLSM over peatlands. However, structural model changes for peatlands are able to improve the skill metrics for water table depth. The validation results for the water table depth indicate a reduction of the bias from 2.5 to 0.2 m, and an improvement of the temporal correlation coefficient from 0.5 to 0.65, and from 0.4 to 0.55 for the anomalies. Our validation data set includes both bogs (rain-fed) and fens (ground and/or surface water influence) and reveals that the metrics improved less for fens. 
In addition, a comparison of evapotranspiration and soil moisture estimates over peatlands will be presented, albeit only with limited ground-based validation data. We will discuss strengths and weaknesses of the new model by focusing on time series of specific validation sites.
NASA Astrophysics Data System (ADS)
Wei, Hui; Gong, Guanghong; Li, Ni
2017-10-01
Computer-generated holography (CGH) is a promising 3D display technology, but it is challenged by a heavy computation load and vast memory requirements. To address these problems, a depth-compensating CGH calculation method based on the symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved LUT method is put forward to compute the distances between object points and hologram pixels in the XY direction. The concept of a depth-compensating factor is defined and used for calculating the holograms of points at different depth positions, instead of layer-based methods. The proposed method is suitable for arbitrary sampling objects, with lower memory usage and higher computational efficiency compared to other CGH methods. The effectiveness of the proposed method is validated by numerical and optical experiments.
Geant4 beam model for boron neutron capture therapy: investigation of neutron dose components.
Moghaddasi, Leyla; Bezak, Eva
2018-03-01
Boron neutron capture therapy (BNCT) is a biochemically targeted type of radiotherapy, selectively delivering a localized dose to tumour cells diffused in normal tissue while minimizing normal tissue toxicity. BNCT is based on thermal neutron capture by stable 10B nuclei, resulting in the emission of short-ranged alpha particles and recoil 7Li nuclei. The purpose of the current work was to develop and validate a Monte Carlo BNCT beam model and to investigate the contribution of individual dose components resulting from neutron interactions. A neutron beam model was developed in Geant4 and validated against published data. The neutron beam spectrum, obtained from the literature for a cyclotron-produced beam, was directed at a water phantom with a boron concentration of 100 μg/g. The calculated percentage depth dose curves (PDDs) in the phantom were compared with published data to validate the beam model in terms of total and boron depth dose deposition. Subsequently, two sensitivity studies were conducted to quantify the impact of (1) the neutron beam spectrum and (2) various boron concentrations on the boron dose component. Good agreement was achieved between the calculated and measured neutron beam PDDs (within 1%). The resulting boron depth dose deposition was also in agreement with measured data. The sensitivity study of several boron concentrations showed that the calculated boron dose gradually converged beyond a 100 μg/g boron concentration. These results suggest that a 100 μg/g tumour boron concentration may be optimal; above this value, only a limited increase in boron dose is expected for a given neutron flux.
Brinkkemper, Tijn; van Norel, Arjanne M; Szadek, Karolina M; Loer, Stephan A; Zuurmond, Wouter W A; Perez, Roberto S G M
2013-01-01
Palliative sedation is the intentional lowering of consciousness of a patient in the last phase of life to relieve suffering from refractory symptoms such as pain, delirium and dyspnoea. In this systematic review, we evaluated the use of monitoring scales to assess the degree of control of refractory symptoms and/or the depth of the sedation. A database search of PubMed and Embase was performed up to January 2010 using the search terms 'palliative sedation' OR 'terminal sedation'. Retrospective and prospective studies, as well as reviews and guidelines containing information about monitoring of palliative sedation and written in English, German or Dutch, were included. The search yielded 264 articles, of which 30 were considered relevant. Most studies focused on monitoring refractory symptoms (pain, fatigue or delirium) or the level of awareness to control the level of sedation. Four prospective and one retrospective study used scales validated in other settings: the Numeric Pain Rating Scale, the Visual Analogue Scale, the Memorial Delirium Assessment Scale, the Communication Capacity Scale and the Agitation Distress Scale. Only the Communication Capacity Scale was partially validated for use in a palliative sedation setting. One guideline described the use of a scale validated in another setting. A minority of studies reported the use of observational scales to monitor the effect of palliative sedation. Future studies should focus on establishing proper instruments, the most adequate frequency and timing of assessment, and interdisciplinary evaluation of sedation depth and symptom control for palliative sedation.
Design of experiments in medical physics: Application to the AAA beam model validation.
Dufreneix, S; Legrand, C; Di Bartolo, C; Bremaud, M; Mesgouez, J; Tiplica, T; Autret, D
2017-09-01
The purpose of this study is to evaluate the usefulness of the design of experiments in the analysis of multiparametric problems related to quality assurance in radiotherapy. The main motivation is to use this statistical method to optimize the quality assurance processes in the validation of beam models. Considering the Varian Eclipse system, eight parameters with several levels were selected: energy, MLC, depth, X, Y1 and Y2 jaw dimensions, wedge and wedge jaw. A Taguchi table was used to define 72 validation tests. Measurements were conducted in water using a CC04 chamber on a TrueBeam STx, a TrueBeam Tx, a Trilogy and a 2300IX accelerator matched by the vendor. Dose was computed using the AAA algorithm. The same raw data were used for all accelerators during the beam modelling. The mean difference between computed and measured doses was 0.1±0.5% for all beams and all accelerators, with a maximum difference of 2.4% (under the 3% tolerance level). For all beams, the measured doses were within 0.6% across accelerators. Energy was found to be an influencing parameter, but the deviations observed were smaller than 1% and not considered clinically significant. Designs of experiments can help define the optimal measurement set to validate a beam model. The proposed method can be used to identify the prognostic factors of dose accuracy. The beam models were validated for the 4 accelerators, which were found dosimetrically equivalent even though the accelerator characteristics differ.
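A designed experiment like the Taguchi table above is typically analyzed by averaging the response at each level of each factor (a main-effects analysis): a factor whose level means spread more than the measurement noise is flagged as influencing. The sketch below is a generic illustration of that step, not the authors' code; the level labels and numbers in the example are invented.

```python
import numpy as np

def main_effects(factor_levels, dose_diffs):
    """Mean dose difference (computed - measured, in %) at each level
    of one design factor. factor_levels: the level label of each run
    for that factor; dose_diffs: the observed difference of each run."""
    levels = np.asarray(factor_levels)
    diffs = np.asarray(dose_diffs, dtype=float)
    return {lv: float(diffs[levels == lv].mean()) for lv in np.unique(levels)}
```

Applied to the energy column of the 72-run table, a spread between level means larger than the 0.5% measurement scatter would mark energy as an influencing parameter, as reported.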
Miller, Kevin C.; Hughes, Lexie E.; Long, Blaine C.; Adams, William M.; Casa, Douglas J.
2017-01-01
Context: No evidence-based recommendation exists regarding how far clinicians should insert a rectal thermistor to obtain the most valid estimate of core temperature. Knowing the validity of temperatures at different rectal depths has implications for exertional heat-stroke (EHS) management. Objective: To determine whether rectal temperature (Trec) taken at 4 cm, 10 cm, or 15 cm from the anal sphincter provides the most valid estimate of core temperature (as determined by esophageal temperature [Teso]) during similar stressors an athlete with EHS may experience. Design: Cross-sectional study. Setting: Laboratory. Patients or Other Participants: Seventeen individuals (14 men, 3 women: age = 23 ± 2 years, mass = 79.7 ± 12.4 kg, height = 177.8 ± 9.8 cm, body fat = 9.4% ± 4.1%, body surface area = 1.97 ± 0.19 m2). Intervention(s): Rectal temperatures taken at 4 cm, 10 cm, and 15 cm from the anal sphincter were compared with Teso during a 10-minute rest period; exercise until the participant's Teso reached 39.5°C; cold-water immersion (∼10°C) until all temperatures were ≤38°C; and a 30-minute postimmersion recovery period. The Teso and Trec were compared every minute during rest and recovery. Because exercise and cooling times varied, we compared temperatures at 10% intervals of total exercise and cooling durations for these periods. Main Outcome Measure(s): The Teso and Trec were used to calculate bias (ie, the difference in temperatures between sites). Results: Rectal depth affected bias (F2,24 = 6.8, P = .008). Bias at 4 cm (0.85°C ± 0.78°C) was higher than at 15 cm (0.65°C ± 0.68°C, P < .05) but not higher than at 10 cm (0.75°C ± 0.76°C, P > .05). Bias varied over time (F2,34 = 79.5, P < .001). Bias during rest (0.42°C ± 0.27°C), exercise (0.23°C ± 0.53°C), and recovery (0.65°C ± 0.35°C) was less than during cooling (1.72°C ± 0.65°C, P < .05). Bias during exercise was less than during postimmersion recovery (0.65°C ± 0.35°C, P < .05). 
Conclusions: When EHS is suspected, clinicians should insert the flexible rectal thermistor to 15 cm (6 in) because it is the most valid depth. The low level of bias during exercise suggests Trec is valid for diagnosing hyperthermia. Rectal temperature is a better indicator of pelvic organ temperature during cold-water immersion than is Teso. PMID:28207294
Miller, Kevin C; Hughes, Lexie E; Long, Blaine C; Adams, William M; Casa, Douglas J
2017-04-01
No evidence-based recommendation exists regarding how far clinicians should insert a rectal thermistor to obtain the most valid estimate of core temperature. Knowing the validity of temperatures at different rectal depths has implications for exertional heat-stroke (EHS) management. To determine whether rectal temperature (Trec) taken at 4 cm, 10 cm, or 15 cm from the anal sphincter provides the most valid estimate of core temperature (as determined by esophageal temperature [Teso]) during similar stressors an athlete with EHS may experience. Cross-sectional study. Laboratory. Seventeen individuals (14 men, 3 women: age = 23 ± 2 years, mass = 79.7 ± 12.4 kg, height = 177.8 ± 9.8 cm, body fat = 9.4% ± 4.1%, body surface area = 1.97 ± 0.19 m2). Rectal temperatures taken at 4 cm, 10 cm, and 15 cm from the anal sphincter were compared with Teso during a 10-minute rest period; exercise until the participant's Teso reached 39.5°C; cold-water immersion (∼10°C) until all temperatures were ≤38°C; and a 30-minute postimmersion recovery period. The Teso and Trec were compared every minute during rest and recovery. Because exercise and cooling times varied, we compared temperatures at 10% intervals of total exercise and cooling durations for these periods. The Teso and Trec were used to calculate bias (ie, the difference in temperatures between sites). Rectal depth affected bias (F2,24 = 6.8, P = .008). Bias at 4 cm (0.85°C ± 0.78°C) was higher than at 15 cm (0.65°C ± 0.68°C, P < .05) but not higher than at 10 cm (0.75°C ± 0.76°C, P > .05). Bias varied over time (F2,34 = 79.5, P < .001). Bias during rest (0.42°C ± 0.27°C), exercise (0.23°C ± 0.53°C), and recovery (0.65°C ± 0.35°C) was less than during cooling (1.72°C ± 0.65°C, P < .05). Bias during exercise was less than during postimmersion recovery (0.65°C ± 0.35°C, P < .05). When EHS is suspected, clinicians should insert the flexible rectal thermistor to 15 cm (6 in) because it is the most valid depth.
The low level of bias during exercise suggests Trec is valid for diagnosing hyperthermia. Rectal temperature is a better indicator of pelvic organ temperature during cold-water immersion than is Teso.
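The bias statistic used here is simply the mean site-to-site temperature difference, and the variable-duration exercise and cooling phases are compared on a normalized 10%-interval time axis. A minimal sketch, assuming bias is defined as Trec minus Teso (the sign convention is not stated explicitly in the abstract):

```python
import numpy as np

def site_bias(t_rec, t_eso):
    """Mean and SD (ddof=1) of the per-sample difference Trec - Teso,
    in degrees C, for one phase of the protocol."""
    diff = np.asarray(t_rec, dtype=float) - np.asarray(t_eso, dtype=float)
    return diff.mean(), diff.std(ddof=1)

def percent_intervals(series, n=10):
    """Resample a phase of variable duration onto n% intervals of its
    total length, as done for the exercise and cooling phases."""
    series = np.asarray(series, dtype=float)
    idx = np.linspace(0, len(series) - 1, n + 1).round().astype(int)
    return series[idx]
```

With these two helpers, the per-phase biases (rest, exercise, cooling, recovery) can be computed on comparable time grids despite differing exercise and cooling durations.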
NASA Astrophysics Data System (ADS)
Chae, B.-G.; Lee, J.-H.; Park, H.-J.; Choi, J.
2015-08-01
Most landslides in Korea are classified as shallow landslides with an average depth of less than 2 m. These shallow landslides are associated with the advance of a wetting front in the unsaturated soil due to rainfall infiltration, which results in an increase in water content and a reduction in the matric suction of the soil. Therefore, this study presents a modified equation for infinite slope stability analysis based on the concept of the saturation depth ratio, to analyze the change in slope stability associated with rainfall on a slope. A rainfall infiltration test in unsaturated soil was performed using a column to develop an understanding of the effect of the saturation depth ratio following rainfall infiltration. The results indicated that the infiltration velocity in the soil layer was faster when the rainfall intensity increased. In addition, the infiltration velocity tends to decrease with increases in the unit weight of the soil. The proposed model was applied to assess its feasibility and to develop a regional landslide susceptibility map using a geographic information system (GIS). For that purpose, spatial databases of input parameters were constructed and landslide locations were obtained. To validate the proposed approach, its results were compared with the landslide inventory using a ROC (receiver operating characteristic) graph. In addition, the results of the proposed approach were compared with those of a previously used steady-state hydrological model. The approach proposed in this study displayed satisfactory performance in classifying landslide susceptibility and performed better than the steady-state approach.
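The idea of a saturation depth ratio slots naturally into the classical infinite-slope factor of safety: as the saturated fraction m of the soil column grows with rainfall infiltration, pore pressure rises and the factor of safety falls. The sketch below uses the textbook form of that equation with m as a parameter; it is an illustration of the concept, not necessarily the paper's modified equation, and the parameter values in the test are invented.

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, m, gamma_w=9.81):
    """Factor of safety of an infinite slope with a saturated fraction m
    (the 'saturation depth ratio', 0..1) of the soil depth z.
    c: effective cohesion (kPa), phi_deg: friction angle (deg),
    gamma: soil unit weight (kN/m^3), z: failure-plane depth (m),
    beta_deg: slope angle (deg), gamma_w: unit weight of water."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    # Shear resistance: cohesion plus effective normal stress times tan(phi)
    resisting = c + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    # Driving shear stress on the failure plane
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving
```

As the wetting front advances (m rising from 0 toward 1), the computed factor of safety drops monotonically, which is the mechanism the column tests and the GIS susceptibility mapping both exploit.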
NASA Astrophysics Data System (ADS)
Chasmer, L.; Flade, L.; Virk, R.; Montgomery, J. S.; Hopkinson, C.; Thompson, D. K.; Petrone, R. M.; Devito, K.
2017-12-01
Landscape changes in the hydrological characteristics of wetlands in some parts of the Boreal region of Canada are occurring as a result of climate-induced feedbacks and anthropogenic disturbance. Wetlands are largely resilient to wildfire, however, natural, climatic and anthropogenic disturbances can change surface water regimes and predispose wetlands to greater depth of peat burn. Over broad areas, peat loss contributes to significant pollution emissions, which can affect community health. In this study, we a) quantify depth of peat burn and relationships to antecedent conditions (species type, topography, surficial geology) within three classified wetlands found in the Boreal Plains ecoregion of western Canada; and b) examine the impacts of wildfire on post-fire ground surface energy balance to determine how peat loss might affect local hydro-climatology and surface water feedbacks. High-resolution optical imagery, pre- and post-burn multi-spectral Light Detection And Ranging (LiDAR), airborne thermal infrared imagery, and field validation data products are integrated to identify multiple complex interactions within the study wetlands. LiDAR-derived depth of peat burn is within 1 cm (average) compared with measured (RMSE = 9 cm over the control surface), demonstrating the utility of LiDAR with high point return density. Depth of burn also correlates strongly with variations in Normalised Burn Ratio (NBR) determined for ground surfaces only. Antecedent conditions including topographic position, soil moisture, soil type and wetland species also have complex interactions with depth of peat loss within wetlands observed in other studies. However, while field measurements are important for validation and understanding eco-hydrological processes, results from remote sensing are spatially continuous. Temporal LiDAR data illustrate the full range of variability in depth of burn and wetland characteristics following fire. 
Finally, measurements of instantaneous surface temperature indicate that burned wetlands are significantly warmer, by up to 10 °C, than non-burned wetlands, altering the locally variable partitioning of sensible vs. latent energy exchange, with implications for further post-fire evaporative losses.
Cloud Optical Depth Retrievals from Solar Background "signal" of Micropulse Lidars
NASA Technical Reports Server (NTRS)
Chiu, J. Christine; Marshak, A.; Wiscombe, W.; Valencia, S.; Welton, E. J.
2007-01-01
Pulsed lidars are commonly used to retrieve vertical distributions of cloud and aerosol layers. It is widely believed that lidar cloud retrievals (other than cloud base altitude) are limited to optically thin clouds. Here we demonstrate that lidars can retrieve optical depths of thick clouds using solar background light as a signal, rather than, as at present, merely noise to be subtracted. Validations against other instruments show that retrieved cloud optical depths agree within 10-15% for overcast stratus and broken clouds. In fact, in broken cloud situations one can retrieve not only the aerosol properties in clear-sky periods using lidar signals, but also the optical depth of thick clouds in cloudy periods using solar background signals. This indicates that, in general, it may be possible to retrieve both aerosol and cloud properties using a single lidar. Thus, lidar observations have great untapped potential for studying interactions between clouds and aerosols.
3D endoscopic imaging using structured illumination technique (Conference Presentation)
NASA Astrophysics Data System (ADS)
Le, Hanh N. D.; Nguyen, Hieu; Wang, Zhaoyang; Kang, Jin U.
2017-02-01
Surgeons have increasingly relied on minimally invasive surgical guidance techniques, not only to reduce surgical trauma but also to achieve accurate and objective surgical risk evaluations. A typical minimally invasive surgical guidance system provides visual assistance with the two-dimensional anatomy and pathology of internal organs within a limited field of view. In this work, we propose and implement a structured illumination endoscope that provides simple, inexpensive 3D endoscopic imaging, producing high-resolution 3D imagery for use in a surgical guidance system. The system is calibrated and validated for quantitative depth measurement on both a calibration target and a human subject. The system exhibits a depth of field of 20 mm, a depth resolution of 0.2 mm and a relative accuracy of 0.1%. The demonstrated setup affirms the feasibility of using the structured illumination endoscope for depth quantization and for assisting medical diagnostic assessments.
Design and Validation of a Breathing Detection System for Scuba Divers.
Altepe, Corentin; Egi, S Murat; Ozyigit, Tamer; Sinoplu, D Ruzgar; Marroni, Alessandro; Pierleoni, Paola
2017-06-09
Drowning is the major cause of death in self-contained underwater breathing apparatus (SCUBA) diving. This study proposes an embedded system with a live, lightweight algorithm that detects the breathing of divers through analysis of the intermediate pressure (IP) signal of the SCUBA regulator. A system composed mainly of two pressure sensors and a low-power microcontroller was designed and programmed to record the pressure sensor signals and provide alarms in the absence of breathing. An algorithm was developed to analyze the signals and identify inhalation events of the diver. A waterproof case was built to accommodate the system and was tested to a depth of 25 m in a pressure chamber. To validate the system in the real environment, a series of dives with two different types of workload, requiring different ranges of breathing frequencies, was planned. Eight professional SCUBA divers volunteered to dive with the system to collect their IP data for the validation trials. The subjects underwent two dives, each of 52 min on average and with a maximum depth of 7 m. The algorithm was optimized for the collected dataset and achieved a sensitivity of inhalation detection of 97.5%, with a total of 275 false positives (FP) over a total recording time of 13.9 h. The detection algorithm has a maximum delay of 5.2 s and requires only 800 bytes of random-access memory (RAM). The results were compared against the analysis of video records of the dives by two blinded observers, confirming a sensitivity of 97.6% on the data set. The design includes a buzzer to provide audible alarms to accompanying dive buddies, triggered in case of degraded health conditions such as near drowning (absence of breathing), hyperventilation (breathing frequency too high) and skip-breathing (breathing frequency too low), as measured by the breathing frequency.
The system also measures the IP at rest before the dive and indicates, with flashing light-emitting diodes and an audible alarm, regulator malfunctions due to high or low IP that may cause fatal accidents during the dive by preventing natural breathing. It is also planned to relay the alarm signal to underwater and surface rescue authorities by means of acoustic communication.
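The validation figures above follow directly from standard detection statistics: sensitivity is the fraction of true inhalations the algorithm caught, and the false-positive burden is best expressed per hour of recording. A minimal sketch (the event counts in the test are illustrative, chosen to reproduce the reported 97.5% sensitivity, not the study's raw counts):

```python
def detection_stats(true_positives, false_negatives, false_positives, hours):
    """Sensitivity = TP / (TP + FN), plus the false-positive rate per
    recorded hour, for a breath-detection algorithm validated against
    blinded video annotation."""
    sensitivity = true_positives / (true_positives + false_negatives)
    fp_per_hour = false_positives / hours
    return sensitivity, fp_per_hour
```

Expressing false positives per hour (here 275 over 13.9 h, about 20/h) makes it easier to judge whether the alarm logic needs additional debouncing before triggering the buddy buzzer.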
NASA Astrophysics Data System (ADS)
Zheng, Guiqiu; He, Lingfeng; Carpenter, David; Sridharan, Kumar
2016-12-01
The microstructural developments in the near-surface regions of AISI 316 stainless steel during exposure to molten Li2BeF4 (FLiBe) salt have been investigated with the goal of using this material for the construction of the fluoride salt-cooled high-temperature reactor (FHR), a leading nuclear reactor concept for the next generation nuclear plants (NGNP). Tests were conducted in molten FLiBe salt (melting point: 459 °C) at 700 °C in graphite crucibles and 316 stainless steel crucibles for exposure durations of up to 3000 h. Corrosion-induced microstructural changes in the near-surface regions of the samples were characterized using scanning electron microscopy (SEM) in conjunction with energy dispersive x-ray spectroscopy (EDS) and electron backscatter diffraction (EBSD), and scanning transmission electron microscopy (STEM) with EDS capabilities. Intergranular corrosion attack in the near-surface regions was observed, with associated Cr depletion along the grain boundaries. High-angle grain boundaries (15-180°) were particularly prone to intergranular attack and Cr depletion. The attack extended to depths of 22 μm after the 3000-h exposure for the samples tested in the graphite crucible, while similar exposure in the 316 stainless steel crucible led to attack depths of only about 11 μm. Testing in graphite crucibles led to the formation of nanometer-scale Mo2C, Cr7C3 and Al4C3 particle phases in the near-surface regions of the material. The copious depletion of Cr in the near-surface regions induced a γ-austenite to α-ferrite (FeNix) phase transformation. Based on the microstructural analysis, a thermal diffusion controlled corrosion model was developed and experimentally validated for predicting long-term corrosion attack depth.
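A diffusion-controlled attack front grows as the square root of time, so depth = k·sqrt(t) is the natural first-order form of such a model. The sketch below is a generic parabolic-law illustration under that assumption, not the authors' validated model; any rate constant derived from the reported 11 and 22 μm depths is only a rough back-of-envelope value.

```python
import math

def parabolic_rate_constant(depth_um, time_h):
    """k (um per sqrt(hour)) such that depth = k * sqrt(time), the form
    expected for a diffusion-controlled corrosion front."""
    return depth_um / math.sqrt(time_h)

def attack_depth(k, time_h):
    """Predicted attack depth (um) after time_h hours under the
    parabolic-law assumption."""
    return k * math.sqrt(time_h)
```

For example, the graphite-crucible data (22 μm at 3000 h) give k of roughly 0.4 μm/√h, which under the parabolic assumption would extrapolate to on the order of a hundred micrometers over a decade of service, illustrating why long-term prediction requires an experimentally validated model.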
NASA Astrophysics Data System (ADS)
Minnis, Patrick; Hong, Gang; Sun-Mack, Szedung; Smith, William L.; Chen, Yan; Miller, Steven D.
2016-05-01
Retrieval of ice cloud properties using IR measurements has a distinct advantage over visible and near-IR techniques by providing consistent monitoring regardless of solar illumination conditions. Historically, the IR bands at 3.7, 6.7, 11.0, and 12.0 µm have been used to infer ice cloud parameters by various methods, but the reliable retrieval of ice cloud optical depth τ is limited to nonopaque cirrus with τ < 8. The Ice Cloud Optical Depth from Infrared using a Neural network (ICODIN) method is developed in this paper by training Moderate Resolution Imaging Spectroradiometer (MODIS) radiances at 3.7, 6.7, 11.0, and 12.0 µm against CloudSat-estimated τ during the nighttime using 2 months of matched global data from 2007. An independent data set comprising observations from the same 2 months of 2008 was used to validate the ICODIN. One 4-channel and three 3-channel versions of the ICODIN were tested. The training and validation results show that IR channels can be used to estimate ice cloud τ up to 150, with correlations above 78% and 69% for all clouds and for only opaque ice clouds, respectively. However, τ for the deepest clouds is still underestimated in many instances. The corresponding RMS differences relative to CloudSat are ~100% and ~72%. If the opaque clouds are properly identified with the IR methods, the RMS differences in the retrieved optical depths are ~62%. The 3.7 µm channel appears to be most sensitive to optical depth changes but is constrained by poor precision at low temperatures. A method for estimating total optical depth is explored for future estimation of cloud water path. Factors affecting the uncertainties and potential improvements are discussed. With improved techniques for discriminating between opaque and semitransparent ice clouds, the method can ultimately improve cloud property monitoring over the entire diurnal cycle.
NASA Astrophysics Data System (ADS)
Martinez, I. A.; Eisenmann, D.
2012-12-01
Ground Penetrating Radar (GPR) has been used for many years for successful subsurface detection of conductive and non-conductive objects in all types of material, including different soils and concrete. Typical defect detection is based on subjective examination of processed scans, using data collection and analysis software to acquire and analyze the data, often requiring developed expertise or an awareness of how GPR works while collecting data. Processing programs, such as GSSI's RADAN analysis software, are then used to validate the collected information. Iowa State University's Center for Nondestructive Evaluation (CNDE) has built a test site, resembling a typical levee used near rivers, which contains known sub-surface targets of varying size, depth, and conductivity. Scientists at CNDE have developed software with enhanced capabilities to decipher a hyperbola's magnitude and amplitude for GPR signal processing. With this enhanced capability, the signal processing and defect detection capabilities of GPR can be greatly improved. This study will examine the effects of test parameters, antenna frequency (400 MHz), data manipulation methods (which include data filters and restricting the range of depth that the chosen antenna's signal can reach), and real-world conditions at this test site (such as varying weather conditions), with the goal of improving GPR test sensitivity for differing soil conditions.
Járvás, Gábor; Varga, Tamás; Szigeti, Márton; Hajba, László; Fürjes, Péter; Rajta, István; Guttman, András
2018-02-01
As a continuation of our previously published work, this paper presents a detailed evaluation of a microfabricated cell capture device utilizing a doubly tilted micropillar array. The device was fabricated using a novel hybrid technology based on the combination of proton beam writing and conventional lithography techniques. Tilted pillars offer unique flow characteristics and support enhanced fluidic interaction for improved immunoaffinity-based cell capture. The performance of the microdevice was evaluated by an in-house-developed single-cell tracking system based on image sequence analysis. Individual cell tracking allowed in-depth analysis of the cell-chip surface interaction mechanism from a hydrodynamic point of view. Simulation results were validated using the hybrid device and the optimized surface functionalization procedure. Finally, the cell capture capability of this new-generation microdevice was demonstrated by efficiently arresting cells from an HT29 cell-line suspension.
Application of an adaptive neuro-fuzzy inference system to ground subsidence hazard mapping
NASA Astrophysics Data System (ADS)
Park, Inhye; Choi, Jaewon; Jin Lee, Moung; Lee, Saro
2012-11-01
We constructed hazard maps of ground subsidence around abandoned underground coal mines (AUCMs) in Samcheok City, Korea, using an adaptive neuro-fuzzy inference system (ANFIS) and a geographical information system (GIS). To evaluate the factors related to ground subsidence, a spatial database was constructed from topographic, geologic, mine tunnel, land use, and ground subsidence maps. An attribute database was also constructed from field investigations and reports on existing ground subsidence areas at the study site. Five major factors causing ground subsidence were extracted: (1) depth of drift; (2) distance from drift; (3) slope gradient; (4) geology; and (5) land use. The adaptive ANFIS model with different types of membership functions (MFs) was then applied for ground subsidence hazard mapping in the study area. Two ground subsidence hazard maps were prepared using the different MFs. Finally, the resulting ground subsidence hazard maps were validated using the ground subsidence test data which were not used for training the ANFIS. The validation results showed 95.12% accuracy using the generalized bell-shaped MF model and 94.94% accuracy using the Sigmoidal2 MF model. These accuracy results show that an ANFIS can be an effective tool in ground subsidence hazard mapping. Analysis of ground subsidence with the ANFIS model suggests that quantitative analysis of ground subsidence near AUCMs is possible.
Passive optical remote sensing of Congo River bathymetry using Landsat
NASA Astrophysics Data System (ADS)
Ache Rocha Lopes, V.; Trigg, M. A.; O'Loughlin, F.; Laraque, A.
2014-12-01
While there have been notable advances in deriving river characteristics such as width using satellite remote sensing datasets, deriving river bathymetry remains a significant challenge. Bathymetry is fundamental to hydrodynamic modelling of river systems, and being able to estimate this parameter remotely would be of great benefit, especially when attempting to model hard-to-access areas where the collection of field data is difficult. One such region is the Congo Basin, where, due to past political instability and the basin's large scale, there are few studies that characterise river bathymetry. In this study we test whether it is possible to use passive optical remote sensing to estimate the depth of the Congo River using Landsat 8 imagery in the region around Malebo Pool, located just upstream of the Kinshasa gauging station. Methods of estimating bathymetry using remotely sensed datasets have been used extensively for coastal regions and have more recently been demonstrated as feasible for optically shallow rivers. Previous river bathymetry studies have focused on shallow rivers and have generally used aerial imagery with a finer spatial resolution than Landsat. While the Congo River has relatively low suspended sediment concentration values, the application of passive bathymetry estimation to a river of this scale has not been attempted before. Three different analysis methods are tested in this study: 1) a single band algorithm; 2) a log ratio method; and 3) a linear transform method. All three methods require depth data for calibration, and in this study area bathymetry measurements are available for three cross-sections, resulting in approximately 300 in-situ measurements of depth, which are used in the calibration and validation.
Considering the scarcity of in-situ bathymetry measurements on the Congo River, even an approximate estimate of depths from these methods will be of considerable value in its hydraulic characterisation.
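Of the three methods, the log ratio approach is perhaps the simplest to sketch: depth is taken as linear in the ratio of the log-transformed reflectances of two bands, with coefficients fitted against in-situ depths. The snippet below follows the common Stumpf-style formulation; the band choice, scaling constant `n`, and function names are illustrative assumptions, not the study's exact implementation.

```python
import numpy as np

def log_ratio_depth(band_blue, band_green, m1, m0, n=1000.0):
    """Stumpf-style log-ratio depth estimate from two reflectance bands:
    z = m1 * ln(n*Rb) / ln(n*Rg) - m0. The scaling n keeps both logs
    positive; m1, m0 come from calibration against in-situ depths."""
    ratio = np.log(n * np.asarray(band_blue, dtype=float)) / \
            np.log(n * np.asarray(band_green, dtype=float))
    return m1 * ratio - m0

def calibrate(band_blue, band_green, depths, n=1000.0):
    """Least-squares fit of (m1, m0) to in-situ depth measurements,
    e.g. the ~300 cross-section soundings near Malebo Pool."""
    ratio = np.log(n * np.asarray(band_blue, dtype=float)) / \
            np.log(n * np.asarray(band_green, dtype=float))
    A = np.column_stack([ratio, -np.ones_like(ratio)])
    m1, m0 = np.linalg.lstsq(A, np.asarray(depths, dtype=float), rcond=None)[0]
    return m1, m0
```

In practice the ~300 soundings would be split into calibration and validation subsets so that the fitted (m1, m0) can be tested on depths not used in the fit.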
Minimalistic models of the vertical distribution of roots under stochastic hydrological forcing
NASA Astrophysics Data System (ADS)
Laio, Francesco
2014-05-01
The assessment of the vertical root profile can be useful for multiple purposes: the partition of water fluxes between evaporation and transpiration, the evaluation of root soil reinforcement for bioengineering applications, the influence of roots on biogeochemical and microbial processes in the soil, etc. In water-controlled ecosystems the shape of the root profile is mainly determined by soil moisture availability at different depths. The long-term soil water balance in the root zone can be assessed by modeling the stochastic incoming and outgoing water fluxes, influenced by the stochastic rainfall pulses and/or by water table fluctuations. An ecohydrological analysis shows that in water-controlled ecosystems the vertical root distribution is a decreasing function of depth, whose parameters depend on pedologic and climatic factors. The model can be extended to suitably account for the influence of water table fluctuations, when the water table is shallow enough to exert an influence on root development, in which case the vertical root distribution tends to assume a non-monotonic form. To evaluate the validity of the ecohydrological estimation of the root profile, we have tested it on a case study in the north of Tuscany (Italy). We have analyzed data from 17 landslide-prone sites: at each of these sites we have assessed the pedologic and climatic descriptors necessary to apply the model, and we have measured the mean rooting depth. The results show quite good agreement between observed and modeled mean root depths. The merit of this minimalistic approach to modeling the vertical root distribution is that it allows a quantitative estimation of the main features of the vertical root distribution without resorting to time- and money-intensive measurement surveys.
Isaac, Marney E; Anglaaere, Luke C N
2013-01-01
Tree root distribution and activity are determinants of belowground competition. However, studying root response to environmental and management conditions remains logistically challenging. Methodologically, nondestructive in situ tree root ecology analysis has lagged. In this study, we tested a nondestructive approach to determine tree coarse root architecture and function of a perennial tree crop, Theobroma cacao L., at two edaphically contrasting sites (sandstone and phyllite–granite derived soils) in Ghana, West Africa. We detected coarse root vertical distribution using ground-penetrating radar and root activity via soil water acquisition using isotopic matching of δ18O plant and soil signatures. Coarse roots were detected to a depth of 50 cm; however, intraspecific coarse root vertical distribution was modified by edaphic conditions. Soil δ18O isotopic signature declined with depth, providing conditions for plant–soil δ18O isotopic matching. This pattern held only under sandstone conditions, where water acquisition zones were identifiably narrow in the 10–20 cm depth but broader under phyllite–granite conditions, presumably due to resource patchiness. Detected coarse root count by depth and measured fine root density were strongly correlated, as were detected coarse root count and identified water acquisition zones, thus validating the root detection capability of ground-penetrating radar, but exclusively on sandstone soils. This approach was able to characterize trends between intraspecific root architecture and edaphic-dependent resource availability, although it was limited by site conditions. This study successfully demonstrates a new approach for in situ root studies that moves beyond invasive point sampling to nondestructive detection of root architecture and function. We discuss the transfer of such an approach to answer root ecology questions in various tree-based landscapes. PMID:23762519
NASA Astrophysics Data System (ADS)
Torresani, Loris; Prosdocimi, Massimo; Masin, Roberta; Penasa, Mauro; Tarolli, Paolo
2017-04-01
Grassland and pasturelands cover a vast portion of the Earth's surface and are vital for biodiversity richness, environmental protection and feed resources for livestock. Overgrazing is considered one of the major causes of soil degradation worldwide, mainly in pasturelands grazed by domestic animals. Therefore, an in-depth investigation to better quantify the effects of overgrazing in terms of soil loss is needed. In this regard, this work aims to estimate the volume of eroded material caused by mismanagement of grazing areas in the whole Autonomous Province of Trento (Northern Italy). To achieve this goal, the first step dealt with the analysis of the entire provincial area by means of freely available aerial images, which allowed the identification and accurate mapping of every eroded area caused by grazing animals. The terrestrial digital photogrammetric technique, namely Structure from Motion (SfM), was then applied to obtain high-resolution Digital Surface Models (DSMs) of two representative eroded areas. By having the pre-event surface conditions, DSMs of difference, namely DoDs, were computed to estimate the erosion volume and the average depth of erosion for both areas. The average depths obtained from the DoDs were compared with and validated against measurements taken in the field. A large number of depth measurements from different sites were then collected to obtain a reference value for the whole province. This value was used as the reference depth for calculating the eroded volume across the whole province. In the final stage, the Connectivity Index (CI) was adopted to analyse the existing connection between the eroded areas and the channel network. This work highlighted that SfM can be a solid, low-cost technique for the fast quantification of soil eroded due to grazing. It can also be used as a strategic instrument for improving the grazing management system at large scales, with the goal of reducing the risk of pastureland degradation.
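The DoD step described above reduces to a per-cell difference of two elevation grids. A minimal sketch on NumPy arrays, assuming a simple positive-difference threshold (the study's actual uncertainty thresholding and cell size are not specified here):

```python
import numpy as np

def dod_erosion_volume(dsm_pre, dsm_post, cell_size, min_depth=0.0):
    """DSM of Difference (DoD): eroded volume (m^3) and mean erosion
    depth (m) from pre- and post-event elevation grids (m) on the same
    cell grid; cells lowered by more than min_depth count as erosion."""
    dod = dsm_pre - dsm_post                     # positive where material was lost
    eroded = dod > min_depth
    volume = dod[eroded].sum() * cell_size ** 2  # depth times cell area
    mean_depth = dod[eroded].mean() if eroded.any() else 0.0
    return volume, mean_depth
```

In the study's final stage, a province-wide reference depth plays the role of `mean_depth`, multiplied by the mapped eroded areas.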
Molnar, S.; Cassidy, J. F.; Castellaro, S.; Cornou, C.; Crow, H.; Hunter, J. A.; Matsushima, S.; Sanchez-Sesma, F. J.; Yong, Alan
2018-01-01
Nakamura (Q Rep Railway Tech Res Inst 30:25–33, 1989) popularized the application of the horizontal-to-vertical spectral ratio (HVSR) analysis of microtremor (seismic noise or ambient vibration) recordings to estimate the predominant frequency and amplification factor of earthquake shaking. During the following quarter century, popularity in the microtremor HVSR (MHVSR) method grew; studies have verified the stability of a site’s MHVSR response over time and validated the MHVSR response with that of earthquake HVSR response. Today, MHVSR analysis is a popular reconnaissance tool used worldwide for seismic microzonation and earthquake site characterization in numerous regions, specifically, in the mapping of site period or fundamental frequency and inverted for shear-wave velocity depth profiles, respectively. However, the ubiquity of MHVSR analysis is predominantly a consequence of its ease in application rather than our full understanding of its theory. We present the state of the art in MHVSR analyses in terms of the development of its theoretical basis, current state of practice, and we comment on its future for applications in earthquake site characterization.
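As a hedged illustration of the core H/V idea, the ratio of horizontal to vertical spectral amplitude of a three-component recording, the following sketch deliberately omits the windowing, smoothing and averaging over many noise windows that practical MHVSR processing requires:

```python
import numpy as np

def hvsr(north, east, vertical, fs, nfft=1024):
    """Illustrative microtremor H/V spectral ratio: geometric mean of the
    two horizontal amplitude spectra divided by the vertical spectrum."""
    def amp(x):
        return np.abs(np.fft.rfft(x, nfft))
    h = np.sqrt(amp(north) * amp(east))   # combined horizontal amplitude
    v = np.maximum(amp(vertical), 1e-12)  # avoid division by zero
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freqs, h / v
```

The frequency at which the ratio peaks is the estimate of the site's predominant frequency.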
NASA Astrophysics Data System (ADS)
Molnar, S.; Cassidy, J. F.; Castellaro, S.; Cornou, C.; Crow, H.; Hunter, J. A.; Matsushima, S.; Sánchez-Sesma, F. J.; Yong, A.
2018-03-01
Nakamura (Q Rep Railway Tech Res Inst 30:25-33, 1989) popularized the application of the horizontal-to-vertical spectral ratio (HVSR) analysis of microtremor (seismic noise or ambient vibration) recordings to estimate the predominant frequency and amplification factor of earthquake shaking. During the following quarter century, popularity in the microtremor HVSR (MHVSR) method grew; studies have verified the stability of a site's MHVSR response over time and validated the MHVSR response with that of earthquake HVSR response. Today, MHVSR analysis is a popular reconnaissance tool used worldwide for seismic microzonation and earthquake site characterization in numerous regions, specifically, in the mapping of site period or fundamental frequency and inverted for shear-wave velocity depth profiles, respectively. However, the ubiquity of MHVSR analysis is predominantly a consequence of its ease in application rather than our full understanding of its theory. We present the state of the art in MHVSR analyses in terms of the development of its theoretical basis, current state of practice, and we comment on its future for applications in earthquake site characterization.
NASA Astrophysics Data System (ADS)
Webster, C.; Bühler, Y.; Schirmer, M.; Stoffel, A.; Giulia, M.; Jonas, T.
2017-12-01
Snow depth distribution in forests exhibits strong spatial heterogeneity compared to adjacent open sites. Measurement of snow depths in forests is currently limited to a) manual point measurements, which are sparse and time-intensive, b) ground-penetrating radar surveys, which have limited spatial coverage, or c) airborne LiDAR acquisitions, which are expensive and may deteriorate in denser forests. We present the application of unmanned aerial vehicles in combination with structure-from-motion (SfM) methods to photogrammetrically map snow depth distribution in forested terrain. Two separate flights were carried out 10 days apart across a heterogeneous forested area of 900 x 500 m. Corresponding snow depth maps were derived using both LiDAR-based and SfM-based DTM data obtained during snow-off conditions. Manual measurements collected following each flight were used to validate the snow depth maps. Snow depths were resolved at 5 cm resolution, and forest snow depth distribution structures such as tree wells and other areas of preferential melt were represented well. Differential snow depth maps showed maximum ablation on the exposed south sides of trees and smaller differences in the centre of gaps and on the north sides of trees. This new application of SfM to map snow depth distribution in forests demonstrates a straightforward method for obtaining information that was previously only available through manual, spatially limited ground-based measurements. These methods could therefore be extended to more frequent observation of snow depths in forests, as well as to estimating snow accumulation and depletion rates.
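The snow depth and differential maps described above are, in essence, grid differences between a snow-on surface model and a snow-off terrain model. A minimal sketch, assuming already co-registered grids (alignment and error handling omitted):

```python
import numpy as np

def snow_depth_map(dsm_snow_on, dtm_snow_off):
    """Per-cell snow depth: snow-on surface minus snow-off terrain;
    small negative differences (noise) are clipped to zero."""
    return np.clip(dsm_snow_on - dtm_snow_off, 0.0, None)

def ablation_map(depth_earlier, depth_later):
    """Differential snow depth between two acquisitions (positive = loss)."""
    return depth_earlier - depth_later
```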
Peterson, S W; Polf, J; Bues, M; Ciangaru, G; Archambault, L; Beddar, S; Smith, A
2009-05-21
The purpose of this study is to validate the accuracy of a Monte Carlo calculation model of a proton magnetic beam scanning delivery nozzle developed using the Geant4 toolkit. The Monte Carlo model was used to produce depth dose and lateral profiles, which were compared to data measured in the clinical scanning treatment nozzle at several energies. Comparisons were also made between measured and simulated off-axis profiles to test the accuracy of the model's magnetic steering. The 80% distal dose fall-off values for the measured and simulated depth dose profiles agreed to within 1 mm for the beam energies evaluated. The full width at half maximum values for the measured and simulated lateral fluence profiles agreed to within 1.3 mm for all energies. The measured and simulated spot positions for the magnetically steered beams agreed to within 0.7 mm of each other. Based on these results, we found that the Geant4 Monte Carlo model of the beam scanning nozzle can accurately predict depth dose profiles, lateral profiles perpendicular to the beam axis and magnetic steering of a proton beam during beam scanning proton therapy.
Prediction of Scour Depth around Bridge Piers using Adaptive Neuro-Fuzzy Inference Systems (ANFIS)
NASA Astrophysics Data System (ADS)
Valyrakis, Manousos; Zhang, Hanqing
2014-05-01
Earth's surface is continuously shaped by the action of geophysical flows. Erosion due to the flow of water has been identified as a key problem for preserving the ecological health of river systems, and also as a threat to the built environment and critical infrastructure worldwide. For example, scour has been estimated to be a major cause of bridge failure. Even though the flow past bridge piers has been investigated both experimentally and numerically, and the mechanisms of scouring are relatively well understood, there is still no tool that offers fast and reliable predictions. Most of the existing formulas for the prediction of bridge pier scour depth are empirical in nature, based on a limited range of data or on piers of specific shape. In this work, the application of a machine learning model that has been successfully employed in water engineering, namely an Adaptive Neuro-Fuzzy Inference System (ANFIS), is proposed to estimate the scour depth around bridge piers. In particular, architectures of various complexity are sequentially built, in order to identify the optimal one for scour depth prediction, using appropriate training and validation subsets obtained from the USGS database (pre-processed to remove incomplete records). The model has five input variables, namely the effective pier width (b), the approach velocity (v), the approach depth (y), the mean grain diameter (D50) and the skew to flow. Simulations are conducted with different data groups (bed material type, pier type and shape) and different numbers of input variables, to produce reduced-complexity and easily interpretable models. Analysis and comparison of the results indicate that the developed ANFIS model has high accuracy and outstanding generalization ability for the prediction of scour parameters. The effective pier width (as opposed to skew to flow) is among the most relevant input parameters for the estimation.
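For contrast with the data-driven ANFIS approach, one widely used empirical formula of the kind the abstract refers to is the CSU/HEC-18 pier scour equation, sketched here with all shape/angle/bed correction factors lumped into a single K taken as 1 (an illustrative simplification, not part of this study):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def csu_pier_scour(b, v, y, k=1.0):
    """CSU/HEC-18 equilibrium local pier scour depth:
    ys = 2.0 * y * K * (b/y)**0.65 * Fr**0.43, with Fr = v / sqrt(g*y).
    b = pier width, v = approach velocity, y = approach depth (SI units);
    K lumps the correction factors (all 1 in this sketch)."""
    fr = v / math.sqrt(G * y)
    return 2.0 * y * k * (b / y) ** 0.65 * fr ** 0.43
```

Note how the exponent on b/y (0.65) makes pier width the dominant geometric control, consistent with the abstract's finding that effective pier width is among the most relevant inputs.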
A Probabilistic Approach to Model Update
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.
2001-01-01
Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with an in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem, and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
Real-time depth measurement for micro-holes drilled by lasers
NASA Astrophysics Data System (ADS)
Lin, Cheng-Hsiang; Powell, Rock A.; Jiang, Lan; Xiao, Hai; Chen, Shean-Jen; Tsai, Hai-Lung
2010-02-01
An optical system based on the confocal principle has been developed for real-time precision measurements of the depth of micro-holes during the laser drilling process. The capability of the measuring system is theoretically predicted by the Gaussian lens formula and experimentally validated to achieve a sensitivity of 0.5 µm. A nanosecond laser system was used to drill holes, and the hole depths were measured by the proposed measuring system and by the cut-and-polish method. The differences between these two measurements are found to be 5.0% for hole depths on the order of tens of microns and 11.2% for hundreds of microns. The discrepancies are caused mainly by the roughness of the bottom surface of the hole and by the existence of debris in the hole. This system can be easily implemented in a laser workstation for the fabrication of 3D microstructures.
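The Gaussian lens formula invoked above can be sketched directly; the focal length and distances below are illustrative numbers, not the parameters of the reported system:

```python
def image_distance(f, d_o):
    """Gaussian (thin-lens) formula 1/f = 1/d_o + 1/d_i, solved for the
    image distance d_i (same length units throughout)."""
    return 1.0 / (1.0 / f - 1.0 / d_o)
```

For example, with f = 50 mm an object at 100 mm images at 100 mm, while a deeper hole bottom at 150 mm images at 75 mm; it is this depth-dependent shift of the image plane that a confocal detector can track as the hole deepens.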
Study of a scanning HIFU therapy protocol, Part II: Experiment and results
NASA Astrophysics Data System (ADS)
Andrew, Marilee A.; Kaczkowski, Peter; Cunitz, Bryan W.; Brayman, Andrew A.; Kargl, Steven G.
2003-04-01
Instrumentation and protocols for creating scanned HIFU lesions in freshly excised bovine liver were developed in order to study the in vitro HIFU dose response and validate models. Computer-control of the HIFU transducer and 3-axis positioning system provided precise spatial placement of the thermal lesions. Scan speeds were selected in the range of 1 to 8 mm/s, and the applied electrical power was varied from 20 to 60 W. These parameters were chosen to hold the thermal dose constant. A total of six valid scans of 15 mm length were created in each sample; a 3.5 MHz single-element, spherically focused transducer was used. Treated samples were frozen, then sliced in 1.27 mm increments. Digital photographs of slices were downloaded to a computer for image processing and analysis. Lesion characteristics, including the depth within the tissue, axial length, and radial width, were computed. Results were compared with those generated from modified KZK and BHTE models, and include a comparison of the statistical variation in the across-scan lesion radial width. [Work supported by USAMRMC.]
NASA Astrophysics Data System (ADS)
Tinker, M. T.; Costa, D. P.; Estes, J. A.; Wieringa, N.
2007-02-01
The existence of individual prey specializations has been reported for an ever-growing number of taxa, and has important ramifications for our understanding of predator-prey dynamics. We use the California sea otter population as a case study to validate the use of archival time-depth data to detect and measure differences in foraging behaviour and diet. We collected observational foraging data from radio-tagged sea otters that had been equipped with Mk9 time depth recorders (TDRs, Wildlife Computers, Redmond, WA). After recapturing the study animals and retrieving the TDRs it was possible to compare the two data types, by matching individual dives from the TDR record with observational data and thus examining behavioural correlates of capture success and prey species. Individuals varied with respect to prey selection, aggregating into one of three distinct dietary specializations. A number of TDR-derived parameters, particularly dive depth and post-dive surface interval, differed predictably between specialist types. A combination of six dive parameters was particularly useful for discriminating between specialist types, and when incorporated into a multivariate cluster analysis, these six parameters resulted in classification of 13 adult female sea otters into three clusters that corresponded almost perfectly to the diet-based classification (1 out of 13 animals was misclassified). Thus based solely on quantifiable traits of time-depth data that have been collected over an appropriate period (in this case 1 year per animal), it was possible to assign female sea otters to diet type with >90% accuracy. TDR data can thus be used as a tool to measure the degree of individual specialization in sea otter populations, a conclusion that will likely apply to other diving marine vertebrates as well. 
Our ultimate goals must be both to understand the causes of individual specialization, and to incorporate such variation into models of population- and community-level food web dynamics.
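The multivariate cluster analysis of the six dive parameters is not specified in detail here. As a stand-in illustration only, a plain k-means over per-animal dive features might look like the following (naive first-k-points initialization; a real analysis would standardize the features and choose k more carefully):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means: assign each row of X to the nearest of k centers,
    then recompute centers as cluster means, repeating for `iters` rounds."""
    centers = X[:k].astype(float).copy()   # naive init: first k points
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # squared distance of every point to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

With well-separated feature clusters (as the abstract reports for the three dietary specializations), such a procedure recovers the groups almost perfectly.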
Tinker, M.T.; Costa, D.P.; Estes, J.A.; Wieringa, N.
2007-01-01
The existence of individual prey specializations has been reported for an ever-growing number of taxa, and has important ramifications for our understanding of predator-prey dynamics. We use the California sea otter population as a case study to validate the use of archival time-depth data to detect and measure differences in foraging behaviour and diet. We collected observational foraging data from radio-tagged sea otters that had been equipped with Mk9 time depth recorders (TDRs, Wildlife Computers, Redmond, WA). After recapturing the study animals and retrieving the TDRs it was possible to compare the two data types, by matching individual dives from the TDR record with observational data and thus examining behavioural correlates of capture success and prey species. Individuals varied with respect to prey selection, aggregating into one of three distinct dietary specializations. A number of TDR-derived parameters, particularly dive depth and post-dive surface interval, differed predictably between specialist types. A combination of six dive parameters was particularly useful for discriminating between specialist types, and when incorporated into a multivariate cluster analysis, these six parameters resulted in classification of 13 adult female sea otters into three clusters that corresponded almost perfectly to the diet-based classification (1 out of 13 animals was misclassified). Thus based solely on quantifiable traits of time-depth data that have been collected over an appropriate period (in this case 1 year per animal), it was possible to assign female sea otters to diet type with >90% accuracy. TDR data can thus be used as a tool to measure the degree of individual specialization in sea otter populations, a conclusion that will likely apply to other diving marine vertebrates as well. 
Our ultimate goals must be both to understand the causes of individual specialization, and to incorporate such variation into models of population- and community-level food web dynamics. © 2007 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Shokralla, Shaddy Samir Zaki
Multi-frequency eddy current measurements are employed in estimating pressure tube (PT) to calandria tube (CT) gap in CANDU fuel channels, a critical inspection activity required to ensure fitness for service of fuel channels. In this thesis, a comprehensive characterization of eddy current gap data is laid out, in order to extract further information on fuel channel condition, and to identify generalized applications for multi-frequency eddy current data. A surface profiling technique, generalizable to multiple probe and conductive material configurations has been developed. This technique has allowed for identification of various pressure tube artefacts, has been independently validated (using ultrasonic measurements), and has been deployed and commissioned at Ontario Power Generation. Dodd and Deeds solutions to the electromagnetic boundary value problem associated with the PT to CT gap probe configuration were experimentally validated for amplitude response to changes in gap. Using the validated Dodd and Deeds solutions, principal components analysis (PCA) has been employed to identify independence and redundancies in multi-frequency eddy current data. This has allowed for an enhanced visualization of factors affecting gap measurement. Results of the PCA of simulation data are consistent with the skin depth equation, and are validated against PCA of physical experiments. Finally, compressed data acquisition has been realized, allowing faster data acquisition for multi-frequency eddy current systems with hardware limitations, and is generalizable to other applications where real time acquisition of large data sets is prohibitive.
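The skin depth equation mentioned above, which governs how deeply each eddy current test frequency penetrates the conductor, can be computed directly (the material values in the example are textbook copper, not the pressure tube alloy):

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space, H/m

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """Standard skin depth delta = 1 / sqrt(pi * f * mu * sigma): the depth
    at which eddy current density falls to 1/e of its surface value.
    sigma = electrical conductivity (S/m), mu_r = relative permeability."""
    return 1.0 / math.sqrt(math.pi * freq_hz * mu_r * MU0 * sigma)
```

Because delta scales as 1/sqrt(f), quadrupling the frequency halves the penetration depth, which is why multi-frequency data carry depth-resolved (and partly redundant) information that PCA can disentangle.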
Design and Evaluation of the Psychometric Properties of a Paternal Adaptation Questionnaire.
Eskandari, Narges; Simbar, Masoumeh; Vadadhir, AbouAli; Baghestani, Ahmad Reza
2016-07-25
The present study aimed to design and evaluate the psychometric properties of the Paternal Adaptation Questionnaire (PAQ). It was a sequential exploratory mixed-methods (qualitative and quantitative) study. In the qualitative phase, a preliminary questionnaire with 210 items emerged from in-depth interviews with 17 fathers and 15 key informants. In the quantitative phase, the psychometric properties of the PAQ were assessed. Considering cutoff points of 1.5 for item impact, 0.49 for the content validity ratio (CVR), and 0.7 for the content validity index (CVI), the items of the questionnaire were reduced from 210 to 132. Assessment of the content validity of the questionnaire demonstrated S-CVR = 0.68 and S-CVI = 0.92. Exploratory factor analysis resulted in the development of a PAQ with 38 items classified under five factors (ability in performing the roles and responsibilities; perceiving the parental development; stabilization in paternal position; spiritual stability and internal satisfaction; and challenges and concerns), which explained 52.19% of the cumulative variance. Measurement of internal consistency yielded a Cronbach's α of .89 for the PAQ (.61-.86 for subscales), and test-retest stability assessment of the PAQ demonstrated Spearman's correlation and intraclass correlation coefficients of .96 (.81-.97 for subscales). The PAQ was found to be a valid and reliable instrument that could be used to assess fatherhood adaptation with the paternal roles and fathers' needs, as well as to design appropriate interventions and to evaluate their effectiveness. © The Author(s) 2016.
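The item-screening statistics used above have simple closed forms. A sketch assuming Lawshe's CVR and a 4-point relevance scale for the item-level CVI (the study's exact rating scheme is an assumption here):

```python
def content_validity_ratio(n_essential, n_experts):
    """Lawshe's CVR = (ne - N/2) / (N/2), where ne experts out of N
    rate the item 'essential'; ranges from -1 to +1."""
    half = n_experts / 2
    return (n_essential - half) / half

def item_cvi(ratings, relevant=(3, 4)):
    """I-CVI: proportion of experts rating the item relevant
    (3 or 4 on a 4-point relevance scale)."""
    return sum(r in relevant for r in ratings) / len(ratings)
```

Items below the reported cutoffs (CVR < 0.49 or CVI < 0.7) would be dropped, which is how the pool shrank from 210 to 132 items.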
Park, B Hyle; Pierce, Mark C; Cense, Barry; de Boer, Johannes F
2004-11-01
We present an analysis for polarization-sensitive optical coherence tomography that facilitates the unrestricted use of fiber and fiber-optic components throughout an interferometer and yields sample birefringence, diattenuation, and relative optic axis orientation. We use a novel Jones matrix approach that compares the polarization states of light reflected from the sample surface with those reflected from within a biological sample for pairs of depth scans. The incident polarization alternated between two states that are perpendicular in a Poincaré sphere representation to ensure proper detection of tissue birefringence regardless of optical fiber contributions. The method was validated by comparing the calculated diattenuation of a polarizing sheet, chicken tendon, and muscle with that obtained by independent measurement. The relative importance of diattenuation versus birefringence to angular displacement of Stokes vectors on a Poincaré sphere was quantified.
Validation of Cloud Properties From Multiple Satellites Using CALIOP Data
NASA Technical Reports Server (NTRS)
Yost, Christopher R.; Minnis, Patrick; Bedka, Kristopher M.; Heck, Patrick W.; Palikonda, Rabindra; Sun-Mack, Sunny; Trepte, Qing
2016-01-01
The NASA Langley Satellite ClOud and Radiative Property retrieval System (SatCORPS) is routinely applied to multispectral imagery from several geostationary and polar-orbiting imagers to retrieve cloud properties for weather and climate applications. Validation of the retrievals with independent datasets is continuously ongoing in order to understand differences caused by calibration, spatial resolution, viewing geometry, and other factors. The CALIOP instrument provides a decade of detailed cloud observations which can be used to evaluate passive imager retrievals of cloud boundaries, thermodynamic phase, cloud optical depth, and water path on a global scale. This paper focuses on comparisons of CALIOP retrievals to retrievals from MODIS, VIIRS, AVHRR, GOES, SEVIRI, and MTSAT. CALIOP is particularly skilled at detecting weakly-scattering cirrus clouds with optical depths less than approx. 0.5. These clouds are often undetected by passive imagers and the effect this has on the property retrievals is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galiova, Michaela; Kaiser, Jozef; Fortes, Francisco J.
2010-05-01
Laser-induced breakdown spectroscopy (LIBS) and laser ablation (LA) inductively coupled plasma (ICP) mass spectrometry (MS) were utilized for microspatial analyses of a prehistoric bear (Ursus arctos) tooth dentine. The distribution of selected trace elements (Sr, Ba, Fe) was measured on a 26 mm × 15 mm large and 3 mm thick transverse cross section of a canine tooth. The Na and Mg content together with the distribution of matrix elements (Ca, P) was also monitored within this area. The depth of the LIBS craters was measured with an optical profilometer. As shown, both LIBS and LA-ICP-MS can be successfully used for the fast, spatially resolved analysis of prehistoric teeth samples. In addition to microchemical analysis, the sample hardness was calculated using LIBS plasma ionic-to-atomic line intensity ratios of Mg (or Ca). To validate the sample hardness calculations, the hardness was also measured with a Vickers microhardness tester.
Serious injury prediction algorithm based on large-scale data and under-triage control.
Nishimoto, Tetsuya; Mukaigawa, Kosuke; Tominaga, Shigeru; Lubbe, Nils; Kiuchi, Toru; Motomura, Tomokazu; Matsumoto, Hisashi
2017-01-01
The present study was undertaken to construct an algorithm for an advanced automatic collision notification system based on national traffic accident data compiled by the Japanese police. While US research into the development of a serious-injury prediction algorithm is based on a logistic regression algorithm using the National Automotive Sampling System/Crashworthiness Data System, the present injury prediction algorithm was based on comprehensive police data covering all accidents that occurred across Japan. The particular focus of this research is to improve the rescue of injured vehicle occupants in traffic accidents, and the present algorithm assumes the use of onboard event data recorder data, from which risk factors such as pseudo delta-V, vehicle impact location, seatbelt wearing or non-wearing, involvement in a single-impact or multiple-impact crash and the occupant's age can be derived. As a result, a simple and handy algorithm suited for onboard vehicle installation was constructed from a sample of half of the available police data. The other half of the police data was applied to validation testing of this new algorithm using receiver operating characteristic analysis. An additional validation was conducted using in-depth investigation of accident injuries in collaboration with prospective host emergency care institutes. The validated algorithm, named the TOYOTA-Nihon University algorithm, proved to be as useful as the US URGENCY and other existing algorithms. Furthermore, an under-triage control analysis found that the present algorithm could achieve an under-triage rate of less than 10% by setting a threshold of 8.3%. Copyright © 2016 Elsevier Ltd. All rights reserved.
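Under-triage control of the kind described, choosing the highest risk threshold that still misses no more than a target fraction of serious cases, can be sketched as follows (the scores and the 10% cap below are illustrative; this is not the TOYOTA-Nihon University algorithm itself):

```python
def pick_threshold(scores_serious, scores_nonserious, max_undertriage=0.10):
    """Return the highest risk threshold whose under-triage rate (fraction
    of serious cases scored below it, i.e. missed) stays within
    max_undertriage, together with the resulting over-triage rate
    (non-serious cases flagged as serious)."""
    best = min(scores_serious)  # missing nobody is always allowed
    for t in sorted(set(scores_serious)):
        missed = sum(s < t for s in scores_serious) / len(scores_serious)
        if missed <= max_undertriage:
            best = t
    overtriage = sum(s >= best for s in scores_nonserious) / len(scores_nonserious)
    return best, overtriage
```

Raising the threshold trades under-triage (missed serious injuries) against over-triage (unnecessary dispatches), which is exactly the trade-off an ROC analysis makes visible.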
Booth, Robert K.; Hotchkiss, Sara C.; Wilcox, Douglas A.
2005-01-01
Summary: 1. Discoloration of polyvinyl chloride (PVC) tape has been used in peatland ecological and hydrological studies as an inexpensive way to monitor changes in water-table depth and reducing conditions. 2. We investigated the relationship between depth of PVC tape discoloration and measured water-table depth at monthly time steps during the growing season within nine kettle peatlands of northern Wisconsin. Our specific objectives were to: (1) determine if PVC discoloration is an accurate method of inferring water-table depth in Sphagnum-dominated kettle peatlands of the region; (2) assess seasonal variability in the accuracy of the method; and (3) determine if systematic differences in accuracy occurred among microhabitats, PVC tape colour and peatlands. 3. Our results indicated that PVC tape discoloration can be used to describe gradients of water-table depth in kettle peatlands. However, accuracy differed among the peatlands studied, and was systematically biased in early spring and late summer/autumn. Regardless of the month when the tape was installed, the highest elevations of PVC tape discoloration showed the strongest correlation with midsummer (around July) water-table depth and average water-table depth during the growing season. 4. The PVC tape discoloration method should be used cautiously when precise estimates of seasonal changes in the water table are needed.
Control of Prosthetic Hands via the Peripheral Nervous System
Ciancio, Anna Lisa; Cordella, Francesca; Barone, Roberto; Romeo, Rocco Antonio; Bellingegni, Alberto Dellacasa; Sacchetti, Rinaldo; Davalli, Angelo; Di Pino, Giovanni; Ranieri, Federico; Di Lazzaro, Vincenzo; Guglielmelli, Eugenio; Zollo, Loredana
2016-01-01
This paper intends to provide a critical review of the literature on the technological issues on control and sensorization of hand prostheses interfacing with the Peripheral Nervous System (i.e., PNS), and their experimental validation on amputees. The study opens with an in-depth analysis of control solutions and sensorization features of research and commercially available prosthetic hands. Pros and cons of adopted technologies, signal processing techniques and motion control solutions are investigated. Special emphasis is then dedicated to the recent studies on the restoration of tactile perception in amputees through neural interfaces. The paper finally proposes a number of suggestions for designing the prosthetic system able to re-establish a bidirectional communication with the PNS and foster the prosthesis natural control. PMID:27092041
NASA Astrophysics Data System (ADS)
Rudrapati, R.; Sahoo, P.; Bandyopadhyay, A.
2016-09-01
The main aim of the present work is to analyse the significance of turning parameters on surface roughness in computer numerically controlled (CNC) turning of aluminium alloy material. Spindle speed, feed rate and depth of cut have been considered as machining parameters. Experimental runs have been conducted as per the Box-Behnken design method. After experimentation, surface roughness is measured using a stylus profilometer. Factor effects have been studied through analysis of variance. Mathematical modelling has been done by response surface methodology to establish relationships between the input parameters and the output response. Finally, process optimization has been performed with the teaching-learning-based optimization (TLBO) algorithm. The predicted turning condition has been validated through a confirmatory experiment.
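The TLBO step named in the abstract can be sketched as follows. This is a minimal illustration on a hypothetical quadratic surrogate for surface roughness (in the study, the objective would be the fitted response-surface model); the population size, iteration count, objective and variable bounds are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def roughness(x):
    # Hypothetical quadratic surrogate for Ra over normalized
    # (speed, feed, depth of cut) in [0, 1]^3; a stand-in for the
    # real response-surface model.
    return 1.0 + np.sum((x - np.array([0.6, 0.3, 0.4])) ** 2)

pop = rng.random((20, 3))                     # 20 candidate parameter sets
for _ in range(100):
    cost = np.apply_along_axis(roughness, 1, pop)
    teacher = pop[np.argmin(cost)]
    # Teacher phase: pull the class toward the current best solution.
    tf = rng.integers(1, 3)                   # teaching factor in {1, 2}
    trial = np.clip(pop + rng.random(pop.shape) * (teacher - tf * pop.mean(axis=0)), 0, 1)
    improved = np.apply_along_axis(roughness, 1, trial) < cost
    pop[improved] = trial[improved]
    # Learner phase: each learner moves relative to a random peer.
    for i in range(len(pop)):
        j = rng.integers(len(pop))
        if i == j:
            continue
        direction = pop[i] - pop[j] if roughness(pop[i]) < roughness(pop[j]) else pop[j] - pop[i]
        cand = np.clip(pop[i] + rng.random(3) * direction, 0, 1)
        if roughness(cand) < roughness(pop[i]):
            pop[i] = cand

best = pop[np.argmin(np.apply_along_axis(roughness, 1, pop))]
```

Both phases accept a move only if it improves the objective, so the best solution can only improve across iterations.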
Solar Spectral Radiative Forcing Due to Dust Aerosol During the Puerto Rico Dust Experiment
NASA Technical Reports Server (NTRS)
Pilewskie, P.; Bergstrom, R.; Rabbette, M.; Livingston, J.; Russell, P.; Gore, Warren J. (Technical Monitor)
2000-01-01
During the Puerto Rico Dust Experiment (PRIDE) upwelling and downwelling solar spectral irradiance was measured on board the SPAWAR Navajo and downwelling solar spectral flux was measured at a surface site using the NASA Ames Solar Spectral Flux Radiometer. These data will be used to determine the net solar radiative forcing of dust aerosol and to quantify the solar spectral radiative energy budget in the presence of elevated aerosol loading. We will assess the variability in spectral irradiance using formal principal component analysis procedures and relate the radiative variability to aerosol microphysical properties. Finally, we will characterize the sea surface reflectance to improve aerosol optical depth retrievals from the AVHRR satellite and to validate SeaWiFS ocean color products.
A compact targeted drug delivery mechanism for a next generation wireless capsule endoscope.
Woods, Stephen P; Constandinou, Timothy G
2016-01-01
This paper reports a novel medication release and delivery mechanism as part of a next generation wireless capsule endoscope (WCE) for targeted drug delivery. This subsystem occupies a volume of only 17.9 mm³ for the purpose of delivering a 1 ml payload to a target site of interest in the small intestinal tract. An in-depth analysis of the method employed to release and deliver the medication is described and a series of experiments is presented which validates the drug delivery system. The results show that a variable pitch conical compression spring manufactured from stainless steel can deliver 0.59 N when it is fully compressed and that this would be sufficient force to deliver the onboard medication.
Experimental Seismic Event-screening Criteria at the Prototype International Data Center
NASA Astrophysics Data System (ADS)
Fisk, M. D.; Jepsen, D.; Murphy, J. R.
Experimental seismic event-screening capabilities are described, based on the difference of body- and surface-wave magnitudes (denoted Ms:mb) and event depth. These capabilities have been implemented and tested at the prototype International Data Center (PIDC), based on recommendations by the IDC Technical Experts on Event Screening in June 1998. Screening scores are presented that indicate numerically the degree to which an event meets, or does not meet, the Ms:mb and depth screening criteria. Seismic events are also categorized as onshore, offshore, or mixed, based on their 90% location error ellipses and an onshore/offshore grid with five-minute resolution, although this analysis is not used at this time to screen out events. Results are presented of applications to almost 42,000 events with mb>=3.5 in the PIDC Standard Event Bulletin (SEB) and to 121 underground nuclear explosions (UNEs) at the U.S. Nevada Test Site (NTS), the Semipalatinsk and Novaya Zemlya test sites in the Former Soviet Union, the Lop Nor test site in China, and the Indian, Pakistan, and French Polynesian test sites. The screening criteria appear to be quite conservative. None of the known UNEs are screened out, while about 41 percent of the presumed earthquakes in the SEB with mb>=3.5 are screened out. UNEs at the Lop Nor, Indian, and Pakistan test sites on 8 June 1996, 11 May 1998, and 28 May 1998, respectively, have among the lowest Ms:mb scores of all events in the SEB. To assess the validity of the depth screening results, comparisons are presented of SEB depth solutions to those in other bulletins that are presumed to be reliable and independent. Using over 1600 events, the comparisons indicate that the SEB depth confidence intervals are consistent with or shallower than over 99.8 percent of the corresponding depth estimates in the other bulletins.
Concluding remarks are provided regarding the performance of the experimental event-screening criteria, and plans for future improvements, based on recent recommendations by the IDC Technical Experts on Event Screening in May 1999.
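The Ms:mb component of such a screening score can be illustrated as below. The slope and intercept of the screening line are placeholders, not the PIDC's calibrated values, and the operational criterion also folds in magnitude uncertainties and the separate depth test.

```python
def msmb_score(ms, mb, slope=1.25, intercept=2.20):
    """Signed distance of an event from an Ms >= slope*mb - intercept
    screening line. Positive scores are earthquake-like (screened out);
    negative scores are explosion-like (retained for further analysis).
    The coefficients here are illustrative placeholders only.
    """
    return ms - (slope * mb - intercept)
```

For example, an event with a large mb but small Ms (the classic explosion signature) falls well below the line and is retained.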
Validating electromagnetic walking stick rail surface crack measuring systems : final report.
DOT National Transportation Integrated Search
2016-06-01
A series of field studies were undertaken to evaluate electromagnetic walking stick systems and their ability to measure the depth of damage from surface breaking cracks. In total, four railroads and four suppliers participated in the project. The...
Hippophae rhamnoides N-glycoproteome analysis: a small step towards sea buckthorn proteome mining.
Sougrakpam, Yaiphabi; Deswal, Renu
2016-10-01
Hippophae rhamnoides is a hardy shrub capable of growing under extreme environmental conditions namely, high salt, drought and cold. Its ability to grow under extreme conditions and its wide application in the pharmaceutical and nutraceutical industry calls for its in-depth analysis. N-glycoproteome mining by con A affinity chromatography from seedlings was attempted. The glycoproteome was resolved on first and second dimension gel electrophoresis. A total of 48 spots were detected and 10 non-redundant proteins were identified by MALDI-TOF/TOF. Arabidopsis thaliana protein disulfide isomerase-like 1-4 (ATPDIL1-4) electron transporter, protein disulphide isomerase, calreticulin 1 (CRT1), glycosyl hydrolase family 38 (GH 38) protein, phantastica, maturase k, Arabidopsis trithorax related protein 6 (ATXR 6), cysteine protease inhibitor were identified out of which ATXR 6, phantastica and putative ATPDIL1-4 electron transporter are novel glycoproteins. Calcium binding protein CRT1 was validated for its calcium binding by 'Stains-all' staining. GO analysis showed involvement of GH 38 and ATXR 6 in glycan and lysine degradation pathways. This is to our knowledge the first report of glycoproteome analysis for any Elaeagnaceae member.
Koga, Shunsaku; Barstow, Thomas J; Okushima, Dai; Rossiter, Harry B; Kondo, Narihiko; Ohmae, Etsuko; Poole, David C
2015-06-01
Near-infrared assessment of skeletal muscle is restricted to superficial tissues due to power limitations of spectroscopic systems. We reasoned that understanding of muscle deoxygenation may be improved by simultaneously interrogating deeper tissues. To achieve this, we modified a high-power (∼8 mW), time-resolved, near-infrared spectroscopy system to increase depth penetration. Precision was first validated using a homogenous optical phantom over a range of inter-optode spacings (OS). Coefficients of variation from 10 measurements were minimal (0.5-1.9%) for absorption (μa), reduced scattering, simulated total hemoglobin, and simulated O2 saturation. Second, a dual-layer phantom was constructed to assess depth sensitivity, and the thickness of the superficial layer was varied. With a superficial layer thickness of 1, 2, 3, and 4 cm (μa = 0.149 cm(-1)), the proportional contribution of the deep layer (μa = 0.250 cm(-1)) to total μa was 80.1, 26.9, 3.7, and 0.0%, respectively (at 6-cm OS), validating penetration to ∼3 cm. Implementation of an additional superficial phantom to simulate adipose tissue further reduced depth sensitivity. Finally, superficial and deep muscle spectroscopy was performed in six participants during heavy-intensity cycle exercise. Compared with the superficial rectus femoris, peak deoxygenation of the deep rectus femoris (including the superficial intermedius in some) was not significantly different (deoxyhemoglobin and deoxymyoglobin concentration: 81.3 ± 20.8 vs. 78.3 ± 13.6 μM, P > 0.05), but deoxygenation kinetics were significantly slower (mean response time: 37 ± 10 vs. 65 ± 9 s, P ≤ 0.05). These data validate a high-power, time-resolved, near-infrared spectroscopy system with large OS for measuring the deoxygenation of deep tissues and reveal temporal and spatial disparities in muscle deoxygenation responses to exercise. Copyright © 2015 the American Physiological Society.
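The reported deep-layer contributions can be reproduced with a simple linear two-layer mixing model, using the phantom absorption coefficients quoted above. This is an approximation; real photon sampling depends on the full diffusion geometry and the inter-optode spacing.

```python
def deep_layer_fraction(mu_total, mu_sup, mu_deep):
    """Fraction of the measured absorption attributable to the deep layer,
    assuming the measured mu_a is a linear mix of the two layer
    coefficients (a simplification of diffuse photon transport).
    """
    return (mu_total - mu_sup) / (mu_deep - mu_sup)
```

With the superficial layer at mu_a = 0.149 cm^-1 and the deep layer at 0.250 cm^-1, a measured total of about 0.230 cm^-1 corresponds to the ~80% deep-layer contribution reported for the 1 cm superficial layer.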
Teran, Anthony; Ghebremedhin, Abiel; Johnson, Matt; Patyal, Baldev
2015-01-01
Radiographic film dosimetry suffers from its energy dependence in proton dosimetry. This study sought to develop a method of measuring proton beams by the film and to evaluate film response to proton beams for the constancy check of depth dose (DD). It also evaluated the film for profile measurements. To achieve this goal, from DDs measured by film and ion chamber (IC), calibration factors (ratios of dose measured by IC to film responses) as a function of depth in a phantom were obtained. These factors imply variable slopes (with proton energy and depth) of linear characteristic curves that relate film response to dose. We derived a calibration method that enables utilization of the factors for acquisition of dose from film density measured at later dates by adapting to a potentially altered processor condition. To test this model, the characteristic curve was obtained by using EDR2 film and in‐phantom film dosimetry in parallel with a 149.65 MeV proton beam, using the method. An additional validation of the model was performed by concurrent film and IC measurement perpendicular to the beam at various depths. Beam profile measurements by the film were also evaluated at the center of beam modulation. In order to interpret and ascertain the film dosimetry, Monte Carlo simulation of the beam was performed, calculating the proton fluence spectrum along depths and off‐axis distances. By multiplying respective stopping powers to the spectrum, doses to film and water were calculated. The ratio of film dose to water dose was evaluated. Results are as follows. The characteristic curve proved the assumed linearity. The measured DD approached that of IC, but near the end of the spread‐out Bragg peak (SOBP), a spurious peak was observed due to the mismatch of distal edge between the calibration and measurement films. The width of SOBP and the proximal edge were both reproducible within a maximum of 5 mm; the distal edge was reproducible within 1 mm.
At 5 cm depth, the dose was reproducible within 10%. These large discrepancies were identified to have been contributed by film processor uncertainty across a layer of film and the misalignment of film edge to the frontal phantom surface. The deviations could drop from 5 to 2 mm in SOBP and from 10% to 4.5% at 5 cm depth in a well‐controlled processor condition (i.e., warm up). In addition to the validation of the calibration method done by the DD measurements, the concurrent film and IC measurement independently validated the model by showing the constancy of depth‐dependent calibration factors. For profile measurement, the film showed good agreement with ion chamber measurement. In agreement with the experimental findings, computationally obtained ratio of film dose to water dose assisted understanding of the trend of the film response by revealing relatively large and small variances of the response for DD and beam profile measurements, respectively. Conclusions are as follows. For proton beams, radiographic film proved to offer accurate beam profile measurements. The adaptive calibration method proposed in this study was validated. Using the method, film dosimetry could offer reasonably accurate DD constancy checks, when provided with a well‐controlled processor condition. Although the processor warming up can promote a uniform processing across a single layer of the film, the processing remains as a challenge. PACS number: 87 PMID:26103499
EOS Aqua AMSR-E Arctic Sea Ice Validation Program: Arctic2003 Aircraft Campaign Flight Report
NASA Technical Reports Server (NTRS)
Cavalieri, D. J.; Markus,T.
2003-01-01
In March 2003 a coordinated Arctic sea ice validation field campaign using the NASA Wallops P-3B aircraft was successfully completed. This campaign was part of the program for validating the Earth Observing System (EOS) Aqua Advanced Microwave Scanning Radiometer (AMSR-E) sea ice products. The AMSR-E, designed and built by the Japanese National Space Development Agency for NASA, was launched May 4, 2002 on the EOS Aqua spacecraft. The AMSR-E sea ice products to be validated include sea ice concentration, sea ice temperature, and snow depth on sea ice. This flight report describes the suite of instruments flown on the P-3, the objectives of each of the seven flights, the Arctic regions overflown, and the coordination among satellite, aircraft, and surface-based measurements. Two of the seven aircraft flights were coordinated with scientists making surface measurements of snow and ice properties including sea ice temperature and snow depth on sea ice at a study area near Barrow, AK and at a Navy ice camp located in the Beaufort Sea. Two additional flights were dedicated to making heat and moisture flux measurements over the St. Lawrence Island polynya to support ongoing air-sea-ice processes studies of Arctic coastal polynyas. The remaining flights covered portions of the Bering Sea ice edge, the Chukchi Sea, and Norton Sound.
Landscape scale estimation of soil carbon stock using 3D modelling.
Veronesi, F; Corstanje, R; Mayr, T
2014-07-15
Soil C is the largest pool of carbon in the terrestrial biosphere, and yet the processes of C accumulation, transformation and loss are poorly accounted for. This, in part, is due to the fact that soil C is not uniformly distributed through the soil depth profile and most current landscape level predictions of C do not adequately account for the vertical distribution of soil C. In this study, we apply a method based on simple soil specific depth functions to map the soil C stock in three-dimensions at landscape scale. We used soil C and bulk density data from the Soil Survey for England and Wales to map an area in the West Midlands region of approximately 13,948 km(2). We applied a method which describes the variation through the soil profile and interpolates this across the landscape using well established soil drivers such as relief, land cover and geology. The results indicate that this mapping method can effectively reproduce the observed variation in the soil profile samples. The mapping results were validated using cross validation and an independent validation. The cross-validation resulted in an R(2) of 36% for soil C and 44% for BULKD. These results are generally in line with previous validated studies. In addition, an independent validation was undertaken, comparing the predictions against the National Soil Inventory (NSI) dataset. The majority of the residuals of this validation are between ± 5% of soil C. This indicates a high level of accuracy in replicating topsoil values. In addition, the results were compared to a previous study estimating the carbon stock of the UK. We discuss the implications of our results within the context of soil C loss factors such as erosion and the impact on regional C process models. Copyright © 2014 Elsevier B.V. All rights reserved.
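A soil-specific depth function of the general kind used here can be sketched as a log-linear fit. The exponential form and the profile values below are illustrative assumptions, not the study's data or its exact functional form.

```python
import numpy as np

# Hypothetical profile observations: horizon midpoint depth (cm) and soil C (%).
depths = np.array([5.0, 15.0, 30.0, 60.0, 100.0])
carbon = np.array([4.8, 3.1, 1.9, 0.9, 0.4])

# Fit log C = log C0 - k*d, i.e. an exponential decline of C with depth.
slope, intercept = np.polyfit(depths, np.log(carbon), 1)
c0, rate = np.exp(intercept), -slope

def soil_c(d):
    """Fitted exponential depth function C(d) = C0 * exp(-k d)."""
    return c0 * np.exp(-rate * d)
```

Fitting such a function per profile and interpolating its parameters across the landscape (using covariates like relief and geology) yields a 3D carbon map.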
Sensor Fusion to Estimate the Depth and Width of the Weld Bead in Real Time in GMAW Processes
Sampaio, Renato Coral; Vargas, José A. R.
2018-01-01
The arc welding process is widely used in industry but its automatic control is limited by the difficulty in measuring the weld bead geometry and closing the control loop on the arc, which has adverse environmental conditions. To address this problem, this work proposes a system to capture the welding variables and send stimuli to the Gas Metal Arc Welding (GMAW) conventional process with a constant voltage power source, which allows weld bead geometry estimation with an open-loop control. Dynamic models of depth and width estimators of the weld bead are implemented based on the fusion of thermographic data, welding current and welding voltage in a multilayer perceptron neural network. The estimators were trained and validated off-line with data from a novel algorithm developed to extract the features of the infrared image, a laser profilometer was implemented to measure the bead dimensions and an image processing algorithm that measures depth by making a longitudinal cut in the weld bead. These estimators are optimized for embedded devices and real-time processing and were implemented on a Field-Programmable Gate Array (FPGA) device. Experiments to collect data, train and validate the estimators are presented and discussed. The results show that the proposed method is useful in industrial and research environments. PMID:29570698
Sensor Fusion to Estimate the Depth and Width of the Weld Bead in Real Time in GMAW Processes.
Bestard, Guillermo Alvarez; Sampaio, Renato Coral; Vargas, José A R; Alfaro, Sadek C Absi
2018-03-23
The arc welding process is widely used in industry but its automatic control is limited by the difficulty in measuring the weld bead geometry and closing the control loop on the arc, which has adverse environmental conditions. To address this problem, this work proposes a system to capture the welding variables and send stimuli to the Gas Metal Arc Welding (GMAW) conventional process with a constant voltage power source, which allows weld bead geometry estimation with an open-loop control. Dynamic models of depth and width estimators of the weld bead are implemented based on the fusion of thermographic data, welding current and welding voltage in a multilayer perceptron neural network. The estimators were trained and validated off-line with data from a novel algorithm developed to extract the features of the infrared image, a laser profilometer was implemented to measure the bead dimensions and an image processing algorithm that measures depth by making a longitudinal cut in the weld bead. These estimators are optimized for embedded devices and real-time processing and were implemented on a Field-Programmable Gate Array (FPGA) device. Experiments to collect data, train and validate the estimators are presented and discussed. The results show that the proposed method is useful in industrial and research environments.
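The estimator's structure, a multilayer perceptron mapping fused sensor inputs to bead geometry, can be sketched as a forward pass. The layer sizes and weights below are random placeholders standing in for the trained, FPGA-deployed values, and inputs would normally be normalized before entering the network.

```python
import numpy as np

rng = np.random.default_rng(42)

# Layer sizes: 3 fused inputs (welding current, arc voltage, a thermographic
# feature) -> 8 hidden units -> 2 outputs (bead depth, bead width).
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

def estimate_bead(x):
    """One forward pass: tanh hidden layer, linear output layer."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

# Hypothetical sample: 180 A, 22.5 V, one extracted infrared-image feature.
geometry = estimate_bead(np.array([180.0, 22.5, 0.37]))
```

A forward pass like this is cheap enough for real-time evaluation on an FPGA once the weights are fixed.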
Miller, M; Hamilton, J; Scupham, R; Matwiejczyk, L; Prichard, I; Farrer, O; Yaxley, A
2018-01-01
Food service staff are integral to delivery of quality food in aged care homes yet measurement of their satisfaction is unable to be performed due to an absence of a valid and reliable questionnaire. The aim of this study was to develop and perform psychometric testing for a new Food Service Satisfaction Questionnaire developed in Australia specifically for use by food service staff working in residential aged care homes (Flinders FSSQFSAC). A mixed methods design utilizing both a qualitative (in-depth interviews, focus groups) and a quantitative approach (cross sectional survey) was used. Content validity was determined from focus groups and interviews with food service staff currently working in aged care homes, related questionnaires from the literature and consultation with an expert panel. The questionnaire was tested for construct validity and internal consistency using data from food service staff currently working in aged care homes that responded to an electronic invitation circulated to Australian aged care homes using a national database of email addresses. Construct validity was tested via principal components analysis and internal consistency through Cronbach's alpha. Temporal stability of the questionnaire was determined from food service staff undertaking the Flinders FSSQFSAC on two occasions, two weeks apart, and analysed using Pearson's correlations. Content validity for the Flinders FSSQFSAC was established from a panel of experts and stakeholders. Principal components analysis revealed food service staff satisfaction was represented by 61 items divided into eight domains: job satisfaction (α=0.832), food quality (α=0.871), staff training (α=0.922), consultation (α=0.840), eating environment (α=0.777), reliability (α=0.695), family expectations (α=0.781) and resident relationships (α=0.429), establishing construct validity in all domains, and internal consistency in all (α>0.5) except for "resident relationships" (α=0.429).
Test-retest reliability coefficients ranged from 0.276 to 0.826 dependent on domain, with test-retest reliability established in seven domains at r>0.4; an exception was "reliability" at r=0.276. The newly developed Flinders FSSQFSAC has acceptable validity and reliability and thereby the potential to measure satisfaction of food service staff working in residential aged care homes, identify areas for strategic change, measure improvements and in turn, improve the satisfaction and quality of life of both food service staff and residents of aged care homes.
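The per-domain internal consistency reported above uses Cronbach's alpha, which can be computed with the standard formula; the item matrix below is hypothetical Likert data for illustration.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

Perfectly consistent items yield alpha = 1; values above roughly 0.7 are conventionally taken as acceptable, which matches the study's threshold-based domain judgments.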
Olson, Scott A.; Weber, Matthew A.
1996-01-01
Scour depths and rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1993). Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. The scour analysis results are presented in tables 1 and 2 and a graph of the scour depths is presented in figure 8.
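The HEC-18 pier-scour computation referenced in these reports follows the CSU equation. The sketch below uses the commonly published form in SI units; the K correction factors default to illustrative values here and should be taken from the circular for any real analysis.

```python
def pier_scour_depth(y1, a, v, g=9.81, k1=1.0, k2=1.0, k3=1.1, k4=1.0):
    """CSU pier-scour equation (HEC-18 form):
    ys = 2.0 * K1*K2*K3*K4 * y1 * (a/y1)^0.65 * Fr^0.43
    y1: approach flow depth (m), a: pier width (m), v: approach velocity (m/s).
    K1..K4 correct for nose shape, attack angle, bed condition and armouring;
    the defaults here are placeholders, not site-specific values.
    """
    fr = v / (g * y1) ** 0.5          # Froude number of the approach flow
    return 2.0 * k1 * k2 * k3 * k4 * y1 * (a / y1) ** 0.65 * fr ** 0.43
```

Note the equation assumes an unlimited depth of erodible, uniform bed material, matching the assumptions stated in the reports.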
Ayotte, Joseph D.
1996-01-01
Scour depths and rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1993). Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. The scour analysis results are presented in tables 1 and 2 and a graph of the scour depths is presented in figure 8.
Boehmler, Erick M.
1996-01-01
Scour depths and rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1993). Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. The scour analysis results are presented in tables 1 and 2 and a graph of the scour depths is presented in figure 8.
Boehmler, Erick M.
1996-01-01
Scour depths and rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1993). Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. The scour analysis results are presented in tables 1 and 2 and a graph of the scour depths is presented in figure 8.
Olson, Scott A.
1996-01-01
Scour depths and rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1993). Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. The scour analysis results are presented in tables 1 and 2 and a graph of the scour depths is presented in figure 8.
Ayotte, Joseph D.
1996-01-01
Scour depths and rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1993). Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. The scour analysis results are presented in tables 1 and 2 and a graph of the scour depths is presented in figure 8.
Ayotte, Joseph D.
1996-01-01
Scour depths and rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1993). Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. The scour analysis results are presented in tables 1 and 2 and a graph of the scour depths is presented in figure 8.
Tritthart, Michael; Welti, Nina; Bondar-Kunze, Elisabeth; Pinay, Gilles; Hein, Thomas; Habersack, Helmut
2011-01-01
The hydrological exchange conditions strongly determine the biogeochemical dynamics in river systems. More specifically, the connectivity of surface waters between main channels and floodplains is directly controlling the delivery of organic matter and nutrients into the floodplains, where biogeochemical processes recycle them with high rates of activity. Hence, an in-depth understanding of the connectivity patterns between main channel and floodplains is important for the modelling of potential gas emissions in floodplain landscapes. A modelling framework that combines steady-state hydrodynamic simulations with long-term discharge hydrographs was developed to calculate water depths as well as statistical probabilities and event durations for every node of a computation mesh being connected to the main river. The modelling framework was applied to two study sites in the floodplains of the Austrian Danube River, East of Vienna. Validation of modelled flood events showed good agreement with gauge readings. Together with measured sediment properties, results of the validated connectivity model were used as basis for a predictive model yielding patterns of potential microbial respiration based on the best fit between characteristics of a number of sampling sites and the corresponding modelled parameters. Hot spots of potential microbial respiration were found in areas of lower connectivity if connected during higher discharges and areas of high water depths. PMID:27667961
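The node-level connectivity statistics described above (probability of connection and durations of connection events) can be derived from a discharge hydrograph and a node-specific connection threshold from the steady-state runs. The series and threshold below are hypothetical.

```python
import numpy as np

def connectivity_stats(discharge, threshold):
    """Return (connection probability, durations of connection events)
    for one mesh node, given a daily discharge series and the discharge
    at which the hydrodynamic model shows the node connected.
    """
    connected = np.asarray(discharge) >= threshold
    prob = connected.mean()
    # Locate rising/falling edges to extract uninterrupted connection events.
    flags = np.concatenate(([0], connected.astype(int), [0]))
    edges = np.diff(flags)
    durations = np.where(edges == -1)[0] - np.where(edges == 1)[0]
    return prob, durations
```

Applied over a long-term hydrograph, this yields the probability and event-duration fields used as predictors for potential microbial respiration.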
Relevance of motion-related assessment metrics in laparoscopic surgery.
Oropesa, Ignacio; Chmarra, Magdalena K; Sánchez-González, Patricia; Lamata, Pablo; Rodrigues, Sharon P; Enciso, Silvia; Sánchez-Margallo, Francisco M; Jansen, Frank-Willem; Dankelman, Jenny; Gómez, Enrique J
2013-06-01
Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation processes of basic psychomotor laparoscopic skills and their correlation with the different abilities sought to measure. A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, 3 novel tasks for surgical assessment were designed. Face and construct validation was performed, with focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents, and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. Time, path length, and depth showed construct validity for all 3 tasks. Motion smoothness and idle time also showed validity for tasks involving bimanual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed a high internal consistency, proving them to be the most task-independent of all the metrics analyzed. Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight on the relevance of the results shown in this study.
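Two of the motion metrics named above, path length and motion smoothness, can be computed from tracked instrument-tip positions. The jerk-based smoothness definition below is one common variant, not necessarily the exact formula used with the TrEndo data.

```python
import numpy as np

def path_length(xyz):
    """Total distance travelled by the tip over an (n, 3) position track."""
    return np.linalg.norm(np.diff(xyz, axis=0), axis=1).sum()

def motion_smoothness(xyz, dt):
    """RMS jerk (third derivative of position): lower values mean
    smoother motion. dt is the sampling interval in seconds.
    """
    jerk = np.diff(xyz, n=3, axis=0) / dt ** 3
    return np.sqrt((jerk ** 2).sum(axis=1).mean())
```

A perfectly straight, constant-velocity movement has zero jerk, so its smoothness score is zero; hesitant, corrective motion raises it.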
Atmospheric Science Data Center
2018-02-19
MISR Level 2 First Look Aerosol Data (MIL2ASAF.002), Stage 2 & 3 Validated. MISR Browse Tool parameters: aerosol optical depth; aerosol compositional model; ancillary ...
Vertical characterization of soil contamination using multi-way modeling--a case study.
Singh, Kunwar P; Malik, Amrita; Basant, Ankita; Ojha, Priyanka
2008-11-01
This study describes the application of a chemometric multi-way modeling approach to analyze a dataset pertaining to soils of an industrial area, with a view to assessing the soil/sub-soil contamination, accumulation pathways and mobility of contaminants in the soil profiles. The three-way (sampling depths, chemical variables, sampling sites) dataset on heavy metals in soil samples collected from three different sites in an industrial area, up to a depth of 60 m each, was analyzed using a three-way Tucker3 model validated for stability and goodness of fit. A two component Tucker3 model, explaining 66.6% of data variance, allowed interpretation of the data information in all the three modes. The interpretation of core elements revealing interactions among the components of different modes (depth, variables, sites) allowed inferring more realistic information about the contamination pattern of soils both along the horizontal and vertical coordinates, contamination pathways, and mobility of contaminants through soil profiles, as compared to the traditional data analysis techniques. The analysis concluded that soils at site-1 and site-2 are relatively more contaminated with heavy metals of both natural and anthropogenic origins, as compared to the soil of site-3. Moreover, the accumulation pathways of metals for upper shallow layers and deeper layers of soils in the area were differentiated. The information generated would be helpful in developing strategies for remediation of the contaminated soils for reducing the subsequent risk of ground-water contamination in the study region.
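A Tucker decomposition of a (depths × variables × sites) array can be obtained via truncated higher-order SVD, one standard way to fit such a model; the study's Tucker3 fit may use a different algorithm (e.g. alternating least squares), and the tensor below is random toy data.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker_hosvd(T, ranks):
    """Truncated HOSVD: per-mode factor matrices plus a core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def explained_variance(T, core, factors):
    """Fraction of data variance captured by the Tucker model."""
    approx = core
    for mode, U in enumerate(factors):
        approx = np.moveaxis(np.tensordot(U, np.moveaxis(approx, mode, 0), axes=1), 0, mode)
    return 1 - np.linalg.norm(T - approx) ** 2 / np.linalg.norm(T) ** 2

rng = np.random.default_rng(1)
T = rng.normal(size=(3, 4, 5))           # depths x variables x sites (toy)
core, factors = tucker_hosvd(T, ranks=(2, 2, 2))
ev = explained_variance(T, core, factors)
```

The core tensor's elements play the role of the "core elements" interpreted in the abstract: each quantifies an interaction among one depth component, one variable component and one site component.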
Validation plays the role of a "bridge" in connecting remote sensing research and applications
NASA Astrophysics Data System (ADS)
Wang, Zhiqiang; Deng, Ying; Fan, Yida
2018-07-01
Remote sensing products contribute to improving earth observations over space and time. Uncertainties exist in products of different levels; thus, validation of these products before and during their applications is critical. This study discusses the meaning of validation in depth and proposes a new definition of reliability for use with such products. In this context, validation should include three aspects: a description of the relevant uncertainties, quantitative measurement results and a qualitative judgment that considers the needs of users. A literature overview is then presented evidencing improvements in the concepts associated with validation. It shows that the root mean squared error (RMSE) is widely used to express accuracy; increasing numbers of remote sensing products have been validated; research institutes contribute most validation efforts; and sufficient validation studies encourage the application of remote sensing products. Validation plays a connecting role in the distribution and application of remote sensing products. Validation connects simple remote sensing subjects with other disciplines, and it connects primary research with practical applications. Based on the above findings, it is suggested that validation efforts that include wider cooperation among research institutes and full consideration of the needs of users should be promoted.
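The RMSE measure noted above as the most widely used accuracy statistic is simply the root of the mean squared difference between retrieved and reference values:

```python
import numpy as np

def rmse(retrieved, reference):
    """Root mean squared error between a remote sensing retrieval
    and co-located reference (e.g. ground-truth) measurements.
    """
    retrieved, reference = np.asarray(retrieved), np.asarray(reference)
    return np.sqrt(np.mean((retrieved - reference) ** 2))
```

As the paper argues, a number like this covers only the quantitative aspect of validation; the uncertainty description and the user-oriented qualitative judgment still need to accompany it.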
Testate amoeba transfer function performance along localised hydrological gradients.
Tsyganov, Andrey N; Mityaeva, Olga A; Mazei, Yuri A; Payne, Richard J
2016-09-01
Testate amoeba transfer functions are widely used for reconstruction of palaeo-hydrological regime in peatlands. However, the limitations of this approach have become apparent with increasing attention to validation and assessing sources of uncertainty. This paper investigates effects of peatland type and sampling depth on the performance of a transfer function using an independent test-set from four Sphagnum-dominated sites in European Russia (Penza Region). We focus on transfer function performance along localised hydrological gradients, which is a useful analogue for predictive ability through time. The performance of the transfer function with the independent test-set was generally weaker than for the leave-one-out or bootstrap cross-validations. However, the transfer function was robust for the reconstruction of relative changes in water-table depth, provided the presence of good modern analogues and overlap in water-table depth ranges. When applied to subsurface samples, the performance of the transfer function was reduced due to selective decomposition, the presence of deep-dwelling taxa or vertical transfer of shells. Our results stress the importance of thorough testing of transfer functions, and highlight the role of taphonomic processes in determining results. Further studies of stratification, taxonomy and taphonomy of testate amoebae will be needed to improve the robustness of transfer function output. Copyright © 2015 Elsevier GmbH. All rights reserved.
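A weighted-averaging transfer function of the general kind used for water-table depth can be sketched as below. Real implementations add deshrinking, tolerance weighting and cross-validated error estimation, all omitted here; the toy training set is hypothetical.

```python
import numpy as np

def wa_optima(counts, wtd):
    """Taxon optima: abundance-weighted mean water-table depth.
    counts: (samples, taxa) abundances; wtd: depth per training sample.
    """
    counts, wtd = np.asarray(counts, float), np.asarray(wtd, float)
    return (counts * wtd[:, None]).sum(axis=0) / counts.sum(axis=0)

def wa_reconstruct(optima, counts):
    """Inferred depth per fossil sample: abundance-weighted mean of optima."""
    counts = np.asarray(counts, float)
    return (counts * optima).sum(axis=1) / counts.sum(axis=1)
```

The paper's caution applies directly: predictions are only trustworthy where the fossil assemblages have good modern analogues and fall within the training set's water-table depth range.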
Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite
NASA Astrophysics Data System (ADS)
Gupta, Anand; Soni, P. K.; Krishna, C. M.
2018-04-01
The machining of Al3030-based composites on Computer Numerical Control (CNC) high-speed milling machines has assumed importance because of their wide application in the aerospace, marine and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR) and tool wear rate (TWR), which usually depend on input process parameters, namely cutting speed, feed in mm/min, depth of cut and step-over ratio. Many researchers have carried out research in this area, but very few have also taken step-over ratio (radial depth of cut) as one of the input variables. In this research work, the machining characteristics of Al3030 are studied on a high-speed CNC milling machine over the speed range of 3000 to 5000 r.p.m. Step-over ratio, depth of cut and feed rate are the other input variables considered. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on a high-speed CNC milling machine using a flat end mill of diameter 10 mm. Flatness, MRR and TWR are taken as output parameters. Flatness has been measured using a portable Coordinate Measuring Machine (CMM). Linear regression models have been developed using Minitab 18 software and the results are validated by conducting an additional set of selected experiments. The selection of input process parameters to obtain the best machining outputs is the key contribution of this research work.
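A Taguchi L9 design, as used above, covers four three-level factors in nine runs. A minimal sketch of the array and a main-effects analysis, with hypothetical MRR responses (the actual measured values are not given in the abstract):

```python
# Standard Taguchi L9 (3^4) orthogonal array: 9 runs, 4 factors at 3 levels (coded 0-2)
L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

def main_effects(array, response):
    """Mean response at each level of each factor (Taguchi main-effects analysis)."""
    nfactors = len(array[0])
    effects = []
    for f in range(nfactors):
        means = []
        for level in range(3):
            vals = [response[run] for run in range(len(array)) if array[run][f] == level]
            means.append(sum(vals) / len(vals))
        effects.append(means)
    return effects

# Hypothetical MRR responses (mm^3/min) for the nine runs -- illustrative only
mrr = [120.0, 150.0, 180.0, 160.0, 200.0, 140.0, 210.0, 130.0, 170.0]
effects = main_effects(L9, mrr)
best_levels = [m.index(max(m)) for m in effects]  # level maximising mean MRR per factor
print(best_levels)
```

Because the array is orthogonal (each level of each factor appears exactly three times, balanced against the other factors), the level means isolate each factor's main effect without a full 3^4 = 81-run factorial.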
MAPU: Max-Planck Unified database of organellar, cellular, tissue and body fluid proteomes
Zhang, Yanling; Zhang, Yong; Adachi, Jun; Olsen, Jesper V.; Shi, Rong; de Souza, Gustavo; Pasini, Erica; Foster, Leonard J.; Macek, Boris; Zougman, Alexandre; Kumar, Chanchal; Wiśniewski, Jacek R.; Jun, Wang; Mann, Matthias
2007-01-01
Mass spectrometry (MS)-based proteomics has become a powerful technology to map the protein composition of organelles, cell types and tissues. In our department, a large-scale effort to map these proteomes is complemented by the Max-Planck Unified (MAPU) proteome database. MAPU contains several body fluid proteomes, including plasma, urine, and cerebrospinal fluid. Cell lines have been mapped to a depth of several thousand proteins and the red blood cell proteome has also been analyzed in depth. The liver proteome is represented with 3200 proteins. By employing high resolution MS and stringent validation criteria, false positive identification rates in MAPU are lower than 1:1000. Thus MAPU datasets can serve as reference proteomes in biomarker discovery. MAPU contains the peptides identifying each protein, measured masses, scores and intensities, and is freely available online via a clickable interface of cell or body parts. Proteome data can be queried across proteomes by protein name, accession number, sequence similarity, peptide sequence and annotation information. More than 4500 mouse and 2500 human proteins have already been identified in at least one proteome. Basic annotation information and links to other public databases are provided in MAPU and we plan to add further analysis tools. PMID:17090601
O'Reilly, Martin; Caulfield, Brian; Ward, Tomas; Johnston, William; Doherty, Cailbhe
2018-05-01
Analysis of lower limb exercises is traditionally completed with four distinct methods: (1) 3D motion capture; (2) depth-camera-based systems; (3) visual analysis by a qualified exercise professional; and (4) self-assessment. Each method is associated with a number of limitations. The aim of this systematic review is to synthesise and evaluate studies which have investigated the capacity of inertial measurement unit (IMU) technologies to assess movement quality in lower limb exercises. A systematic review of studies identified through the databases of PubMed, ScienceDirect and Scopus was conducted. Articles written in English and published in the last 10 years which investigated an IMU system for the analysis of repetition-based targeted lower limb exercises were included. The quality of included studies was measured using an adapted version of the STROBE assessment criteria for cross-sectional studies. The studies were categorised into three groupings: exercise detection, movement classification or measurement validation. Each study was then qualitatively summarised. From the 2452 articles that were identified with the search strategies, 47 papers are included in this review. Twenty-six of the 47 included studies were deemed to be of high quality. The use of wearable inertial sensor systems for analysing lower limb exercises is a rapidly growing field of research. Research over the past 10 years has predominantly focused on validating the measurements that the systems produce and on classifying users' exercise quality. There have been very few user evaluation studies and no clinical trials in this field to date.
Braun, Tobias; Rieckmann, Alina; Grüneberg, Christian; Marks, Detlef; Thiel, Christian
2016-07-01
The hierarchical assessment of balance and mobility (HABAM) is an internationally established clinical mobility test with good psychometric properties; it allows an easy assessment of mobility and the graphical illustration of change over time in geriatric patients. The aims of this study were the translation and cross-cultural adaptation of the English original HABAM into German, as well as a preliminary analysis of the practicability and construct validity of the HABAM. The HABAM was translated into German following international guidelines. A prefinal version was clinically tested by physiotherapists in two geriatric hospitals over a period of 5 weeks. In order to make a final revision of the German HABAM version, structured in-depth feedback was obtained from the seven therapists who had used the HABAM most often. A total of 18 physiotherapists used the HABAM for the initial assessment of 47 elderly inpatients. The translated items and instructions seemed comprehensible, but problems occurred with the administration and documentation of the HABAM. Modifications led to the final German HABAM version, and 85% of the HABAM assessments were performed within ≤ 10 min. There was a correlation of rs = 0.71 with the Tinetti test and of rs = 0.68 with the Barthel index. A German HABAM version is now accessible for use in clinical practice. The results of a preliminary psychometric analysis indicate potentially good practicability and sufficient construct validity. A comprehensive analysis of psychometric properties is pending.
NASA Astrophysics Data System (ADS)
Kern, S.; Khvorostovsky, K.; Skourup, H.; Rinne, E.; Parsakhoo, Z. S.; Djepa, V.; Wadhams, P.; Sandven, S.
2014-03-01
One goal of the European Space Agency Climate Change Initiative sea ice Essential Climate Variable project is to provide a quality controlled 20 year long data set of Arctic Ocean winter-time sea ice thickness distribution. An important step to achieve this goal is to assess the accuracy of sea ice thickness retrieval based on satellite radar altimetry. For this purpose a data base is created comprising sea ice freeboard derived from satellite radar altimetry between 1993 and 2012 and collocated observations of snow and sea ice freeboard from Operation Ice Bridge (OIB) and CryoSat Validation Experiment (CryoVEx) air-borne campaigns, of sea ice draft from moored and submarine Upward Looking Sonar (ULS), and of snow depth from OIB campaigns, Advanced Microwave Scanning Radiometer aboard EOS (AMSR-E) and the Warren Climatology (Warren et al., 1999). An inter-comparison of the snow depth data sets stresses the limited usefulness of Warren climatology snow depth for freeboard-to-thickness conversion under current Arctic Ocean conditions reported in other studies. This is confirmed by a comparison of snow freeboard measured during OIB and CryoVEx and snow freeboard computed from radar altimetry. For first-year ice the agreement between OIB and AMSR-E snow depth within 0.02 m suggests AMSR-E snow depth as an appropriate alternative. Different freeboard-to-thickness and freeboard-to-draft conversion approaches are realized. The mean observed ULS sea ice draft agrees with the mean sea ice draft computed from radar altimetry within the uncertainty bounds of the data sets involved. However, none of the realized approaches is able to reproduce the seasonal cycle in sea ice draft observed by moored ULS satisfactorily. 
A sensitivity analysis of the freeboard-to-thickness conversion suggests: in order to obtain sea ice thickness as accurate as 0.5 m from radar altimetry, besides a freeboard estimate with centimetre accuracy, an ice-type dependent sea ice density is as mandatory as a snow depth with centimetre accuracy.
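The freeboard-to-thickness conversion discussed above is commonly based on hydrostatic equilibrium. A minimal sketch with assumed (not study-specific) densities, illustrating why an ice-type-dependent ice density and an accurate snow depth matter so much:

```python
def freeboard_to_thickness(freeboard_m, snow_depth_m,
                           rho_water=1024.0, rho_ice=916.7, rho_snow=300.0):
    """Hydrostatic-equilibrium conversion of ice freeboard to sea ice thickness.

    From rho_water * draft = rho_ice * T + rho_snow * h_s and draft = T - F:
        T = (rho_water * F + rho_snow * h_s) / (rho_water - rho_ice)
    Densities (kg/m^3) are assumed illustrative values; rho_ice would be
    chosen ice-type dependent (first-year vs. multi-year) in practice.
    """
    return (rho_water * freeboard_m + rho_snow * snow_depth_m) / (rho_water - rho_ice)

# 0.15 m ice freeboard under 0.25 m of snow (hypothetical values)
t = freeboard_to_thickness(0.15, 0.25)
print(round(t, 2))
```

Because the denominator (rho_water - rho_ice) is small, centimetre-level freeboard or snow depth errors are amplified roughly tenfold in thickness, consistent with the sensitivity analysis above.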
NASA Astrophysics Data System (ADS)
Sun, Yang; Liao, Kuo-Chih; Sun, Yinghua; Park, Jesung; Marcu, Laura
2008-02-01
A unique tissue phantom is reported here that mimics the optical and acoustical properties of biological tissue and enables testing and validation of a dual-modality clinical diagnostic system combining time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) and ultrasound backscatter microscopy (UBM). The phantom consisted of contrast agents including silicon dioxide particles with a range of diameters from 0.5 to 10 μm acting as optical and acoustical scatterers, and FITC-conjugated dextran mimicking the endogenous fluorophore in tissue. The agents were encapsulated in a polymer bead attached to the end of an optical fiber with a 200 μm diameter using a UV-induced polymerization technique. A set of beads with fibers were then implanted into a gel-based matrix with controlled patterns, including a design with lateral distribution and a design with successively changing depth. The configuration presented here allowed the validation of the hybrid fluorescence spectroscopic and ultrasonic system by detecting the lateral and depth distribution of the contrast agents, as well as the coregistration of the ultrasonic image with spectroscopic data. In addition, the depth of the beads in the gel matrix was changed to explore the effect of different concentration ratios of the mixture on the fluorescence signal emitted.
Assessment of 10 Year Record of Aerosol Optical Depth from OMI UV Observations
NASA Technical Reports Server (NTRS)
Ahn, Changwoo; Torres, Omar; Jethva, Hiren
2014-01-01
The Ozone Monitoring Instrument (OMI) onboard the EOS-Aura satellite provides information on aerosol optical properties by making use of the large sensitivity to aerosol absorption in the near-ultraviolet (UV) spectral region. Another important advantage of using near-UV observations for aerosol characterization is the low surface albedo of all terrestrial surfaces in this spectral region, which reduces retrieval errors associated with land surface reflectance characterization. In spite of the coarse 13 km × 24 km sensor footprint, the OMI near-UV aerosol algorithm (OMAERUV) retrieves aerosol optical depth (AOD) and single-scattering albedo under cloud-free conditions from radiance measurements at 354 and 388 nanometers. We present validation results of OMI AOD against space- and time-collocated Aerosol Robotic Network AOD values over multiple stations representing major aerosol episodes and regimes. OMAERUV's performance is also evaluated with respect to those of the Aqua-MODIS Deep Blue and Terra-MISR AOD algorithms over arid and semi-arid regions in Northern Africa. The outcome of the evaluation analysis indicates that in spite of the "row anomaly" problem, affecting the sensor since mid-2007, the long-term aerosol record shows remarkable sensor stability.
Correlation induced localization of lattice trapped bosons coupled to a Bose–Einstein condensate
NASA Astrophysics Data System (ADS)
Keiler, Kevin; Krönke, Sven; Schmelcher, Peter
2018-03-01
We investigate the ground state properties of a lattice trapped bosonic system coupled to a Lieb–Liniger type gas. Our main goal is the description and in-depth exploration and analysis of the two-species many-body quantum system including all relevant correlations beyond the standard mean-field approach. To achieve this, we use the multi-layer multi-configuration time-dependent Hartree method for mixtures (ML-MCTDHX). Increasing the lattice depth and the interspecies interaction strength, the wave function undergoes a transition from an uncorrelated to a highly correlated state, which manifests itself in the localization of the lattice atoms in the latter regime. For small interspecies couplings, we identify the process responsible for this cross-over in a single-particle-like picture. Moreover, we give a full characterization of the wave function's structure in both regimes, using Bloch and Wannier states of the lowest band, and we find an order parameter, which can be exploited as a corresponding experimental signature. To deepen the understanding, we use an effective Hamiltonian approach, which introduces an induced interaction and is valid for small interspecies interaction. We finally compare the ansatz of the effective Hamiltonian with the results of the ML-MCTDHX simulations.
Development and Validation of a Gender Ideology Scale for Family Planning Services in Rural China
Yang, Xueyan; Li, Shuzhuo; Feldman, Marcus W.
2013-01-01
The objectives of this study are to develop a scale of gender role ideology appropriate for assessing Quality of Care in family planning services for rural China. Literature review, focus-group discussions and in-depth interviews with service providers and clients from two counties in eastern and western China, as well as experts’ assessments, were used to develop a scale for family planning services. Psychometric methodologies were applied to samples of 601 service clients and 541 service providers from a survey in a district in central China to validate its internal consistency, reliability, and construct validity with realistic and strategic dimensions. This scale is found to be reliable and valid, and has prospects for application both academically and practically in the field. PMID:23573222
Item generation in the development of an inpatient experience questionnaire: a qualitative study
2013-01-01
Background Patient experience is a key feature of quality improvement in modern health-care delivery. Measuring patient experience is one of several tools used to assess and monitor the quality of health services. This study aims to develop a tool for assessing patient experience with inpatient care in public hospitals in Hong Kong. Methods Based on the General Inpatient Questionnaire (GIQ) framework of the Care Quality Commission as a discussion guide, a qualitative study involving focus group discussions and in-depth individual interviews with patients was employed to develop a tool for measuring inpatient experience in Hong Kong. Results All participants agreed that a patient satisfaction survey is an important platform for collecting patients’ views on improving the quality of health-care services. Findings of the focus group discussions and in-depth individual interviews identified nine key themes as important hospital quality indicators: prompt access, information provision, care and involvement in decision making, physical and emotional needs, coordination of care, respect and privacy, environment and facilities, handling of patient feedback, and overall care from health-care professionals and quality of care. Privacy, complaint mechanisms, patient involvement, and information provision were further highlighted as particularly important areas for item revision by the in-depth individual interviews. Thus, the initial version of the Hong Kong Inpatient Experience Questionnaire (HKIEQ), comprising 58 core items under nine themes, was developed. Conclusions A set of dimensions and core items of the HKIEQ was developed and the instrument will undergo validity and reliability tests through a validation survey. A valid and reliable tool is important in accurately assessing patient experience with care delivery in hospitals to improve the quality of health-care services. PMID:23835186
Development of two-photon fluorescence microscopy for quantitative imaging in turbid tissues
NASA Astrophysics Data System (ADS)
Coleno, Mariah Lee
Two-photon laser scanning fluorescence microscopy (TPM) is a high resolution, non-invasive biological imaging technique that can be used to image turbid tissues both in vitro and in vivo at depths of several hundred microns. Although TPM has been widely used to image tissue structures, no one has focused on using TPM to extract quantitative information from turbid tissues at depth. As a result, this thesis addresses the quantitative characterization of two-photon signals in turbid media. Initially, a two-photon microscope system is constructed, and two-photon images that validate system performance are obtained. Then TPM is established as an imaging technique that can be used to validate theoretical observations already listed in the literature. In particular, TPM is found to validate the exponential dependence of the fluorescence intensity decay with depth in turbid tissue model systems. Results from these studies next prompted experimental investigation into whether TPM could be used to determine tissue optical properties. Comparing the exponential dependence of the decay with a Monte Carlo model involving tissue optical properties, TPM is shown to be useful for determining the optical properties (total attenuation coefficient) of thick, turbid tissues on a small spatial scale. Next, a role for TPM for studying and optimizing wound healing is demonstrated. In particular, TPM is used to study the effects of perturbations (growth factors, PDT) on extracellular matrix remodeling in artificially engineered skin tissues. Results from these studies combined with tissue contraction studies are shown to demonstrate ways to modulate tissues to optimize the wound healing immune response and reduce scarring. In the end, TPM is shown to be an extremely important quantitative biological imaging technique that can be used to optimize wound repair.
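The exponential intensity decay with depth described in the thesis abstract above can be inverted for an attenuation coefficient by a log-linear fit. A minimal sketch on synthetic, noise-free data (the actual Monte Carlo-based analysis is far more involved):

```python
import math

def fit_attenuation(depths_um, intensities):
    """Log-linear least-squares fit of I(z) = I0 * exp(-mu * z).

    Returns (I0, mu); mu plays the role of the total attenuation
    coefficient recovered from the two-photon signal decay with depth.
    """
    xs = depths_um
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return math.exp(ybar - slope * xbar), -slope

# Synthetic decay: I0 = 1000, mu = 0.01 per micron, no noise (illustrative)
z = [0, 50, 100, 150, 200]
I = [1000.0 * math.exp(-0.01 * d) for d in z]
I0, mu = fit_attenuation(z, I)
print(round(I0), round(mu, 4))
```

On real two-photon data the fitted slope must be interpreted through a model of both excitation and emission attenuation, which is where the Monte Carlo comparison in the thesis comes in.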
NASA Astrophysics Data System (ADS)
Vasefi, Fartash; MacKinnon, Nicholas B.; Jain, Manu; Cordova, Miguel A.; Kose, Kivanc; Rajadhyaksha, Milind; Halpern, Allan C.; Farkas, Daniel L.
2017-02-01
Motivation and background: Melanoma, the fastest growing cancer worldwide, kills more than one person every hour in the United States. Determining the depth and distribution of dermal melanin and hemoglobin adds physio-morphologic information to the current diagnostic standard, cellular morphology, to further develop noninvasive methods to discriminate between melanoma and benign skin conditions. Purpose: To compare the performance of a multimode dermoscopy system (SkinSpect), designed to quantify and map in vivo melanin and hemoglobin in skin in three dimensions, and to validate it against histopathology and three-dimensional reflectance confocal microscopy (RCM) imaging. Methods: SkinSpect and RCM images of suspect lesions and nearby normal skin were captured sequentially and compared with histopathology reports. RCM imaging allows noninvasive observation of nuclear, cellular and structural detail in 1-5 μm-thin optical sections in skin, and detection of pigmented skin lesions with a sensitivity of 90-95% and a specificity of 70-80%. The multimode imaging dermoscope combines polarization (cross and parallel), autofluorescence and hyperspectral imaging to noninvasively map the distribution of melanin, collagen and hemoglobin oxygenation in pigmented skin lesions. Results: We compared in vivo features of ten melanocytic lesions extracted by SkinSpect and RCM imaging, and correlated them with histopathologic results. We present results of two melanoma cases (in situ and invasive), and compare them with in vivo features from eight benign lesions. Melanin distribution at different depths and hemodynamics, including abnormal vascularity, detected by both SkinSpect and RCM are discussed. Conclusion: Diagnostic features such as dermal melanin and hemoglobin concentration provided by SkinSpect skin analysis for melanoma and normal pigmented lesions can be compared and validated using results from RCM and histopathology.
Snow Depth Mapping at a Basin-Wide Scale in the Western Arctic Using UAS Technology
NASA Astrophysics Data System (ADS)
de Jong, T.; Marsh, P.; Mann, P.; Walker, B.
2015-12-01
Assessing snow depths across the Arctic has proven to be extremely difficult due to the variability of snow depths at scales from metres to hundreds of metres. New Unmanned Aerial Systems (UAS) technology provides the possibility to obtain centimetre-level resolution imagery (~3 cm), and to create Digital Surface Models (DSM) based on the Structure from Motion method. However, there is an ongoing need to quantify the accuracy of this method over different terrain and vegetation types across the Arctic. In this study, we used a small UAS equipped with a high resolution RGB camera to create DSMs over a 1 km2 watershed in the western Canadian Arctic during snow-covered (end of winter) and snow-free periods. To improve the image georeferencing, 15 Ground Control Points were marked across the watershed and incorporated into the DSM processing. The summer DSM was subtracted from the snow-covered DSM to deliver snow depth estimates across the entire watershed, which were validated against over 2000 direct snow depth measurements. This technique has the potential to improve larger scale snow depth mapping across watersheds by providing snow depth estimates at ~3 cm resolution. The ability to map both shallow snow (less than 75 cm) covering much of the basin and snow patches (up to 5 m in depth) that cover less than 10% of the basin, but contain a significant portion of the total basin snowcover, is important both for water resource applications and for testing snow models.
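The DSM-differencing step described above amounts to a per-cell subtraction of the snow-free surface from the snow-covered surface. A minimal sketch with toy elevation grids (not the study's data):

```python
def snow_depth_map(winter_dsm, summer_dsm, min_depth=0.0):
    """Grid-cell-wise difference of snow-on and snow-off surface models.

    Negative differences (noise over snow-free cells) are clipped to min_depth.
    Elevations are in metres; the grids must share extent and resolution,
    which is what the Ground Control Points help guarantee.
    """
    depth = []
    for row_w, row_s in zip(winter_dsm, summer_dsm):
        depth.append([max(w - s, min_depth) for w, s in zip(row_w, row_s)])
    return depth

# Toy 2x3 grids (metres above datum); the second includes a deep drift cell
winter = [[101.2, 101.5, 103.8],
          [101.1, 100.9, 105.6]]
summer = [[100.9, 100.8, 100.9],
          [101.2, 100.7, 100.8]]
depths = snow_depth_map(winter, summer)
print(depths)
```

Validation then reduces to comparing the differenced grid against the in situ probe measurements at their surveyed coordinates.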
A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation.
Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro
2012-06-01
To investigate and validate the clinical feasibility of using half-value layer (HVL) and peak tube potential (kVp) for characterizing a kilovoltage (kV) source spectrum for the purpose of computing kV x-ray dose accrued from imaging procedures. To use this approach to characterize a Varian® On-Board Imager® (OBI) source and perform experimental validation of a novel in-house hybrid dose computation algorithm for kV x-rays. We characterized the spectrum of an imaging kV x-ray source using the HVL and the kVp as the sole beam quality identifiers using third-party freeware Spektr to generate the spectra. We studied the sensitivity of our dose computation algorithm to uncertainties in the beam's HVL and kVp by systematically varying these spectral parameters. To validate our approach experimentally, we characterized the spectrum of a Varian® OBI system by measuring the HVL using a Farmer-type Capintec ion chamber (0.06 cc) in air and compared dose calculations using our computationally validated in-house kV dose calculation code to measured percent depth-dose and transverse dose profiles for 80, 100, and 125 kVp open beams in a homogeneous phantom and a heterogeneous phantom comprising tissue, lung, and bone equivalent materials. The sensitivity analysis of the beam quality parameters (i.e., HVL, kVp, and field size) on dose computation accuracy shows that typical measurement uncertainties in the HVL and kVp (±0.2 mm Al and ±2 kVp, respectively) source characterization parameters lead to dose computation errors of less than 2%. Furthermore, for an open beam with no added filtration, HVL variations affect dose computation accuracy by less than 1% for a 125 kVp beam when field size is varied from 5 × 5 cm(2) to 40 × 40 cm(2). The central axis depth dose calculations and experimental measurements for the 80, 100, and 125 kVp energies agreed within 2% for the homogeneous and heterogeneous block phantoms, and agreement for the transverse dose profiles was within 6%. 
The HVL and kVp are sufficient for characterizing a kV x-ray source spectrum for accurate dose computation. As these parameters can be easily and accurately measured, they provide for a clinically feasible approach to characterizing a kV energy spectrum to be used for patient specific x-ray dose computations. Furthermore, these results provide experimental validation of our novel hybrid dose computation algorithm. © 2012 American Association of Physicists in Medicine.
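The HVL used above to characterize the spectrum relates directly to an effective linear attenuation coefficient. A minimal sketch of that relationship (the HVL value is hypothetical, and the study's actual spectral modelling via Spektr is far richer):

```python
import math

def mu_from_hvl(hvl_mm):
    """Effective linear attenuation coefficient (per mm Al) from a measured HVL."""
    return math.log(2.0) / hvl_mm

def transmission(thickness_mm, hvl_mm):
    """Fraction of the beam transmitted through added Al of a given thickness,
    under a single-exponential (effective-energy) approximation."""
    return math.exp(-mu_from_hvl(hvl_mm) * thickness_mm)

# A hypothetical 100 kVp beam with HVL = 3.0 mm Al
hvl = 3.0
print(round(transmission(hvl, hvl), 3))      # one HVL transmits 50% by definition
print(round(transmission(2 * hvl, hvl), 3))
```

A real polyenergetic beam hardens as it is filtered, so two HVLs transmit slightly more than 25%; matching the measured HVL and kVp to a generated spectrum, as the study does, accounts for this.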
Testing the MODIS Satellite Retrieval of Aerosol Fine-Mode Fraction
NASA Technical Reports Server (NTRS)
Anderson, Theodore L.; Wu, Yonghua; Chu, D. Allen; Schmid, Beat; Redemann, Jens; Dubovik, Oleg
2005-01-01
Satellite retrievals of the fine-mode fraction (FMF) of midvisible aerosol optical depth, tau, are potentially valuable for constraining chemical transport models and for assessing the global distribution of anthropogenic aerosols. Here we compare satellite retrievals of FMF from the Moderate Resolution Imaging Spectroradiometer (MODIS) to suborbital data on the submicrometer fraction (SMF) of tau. SMF is a closely related parameter that is directly measurable by in situ techniques. The primary suborbital method uses in situ profiling of SMF combined with airborne Sun photometry both to validate the in situ estimate of ambient extinction and to take into account the aerosol above the highest flight level. This method is independent of the satellite retrieval and has well-known accuracy but entails considerable logistical and technical difficulties. An alternate method uses Sun photometer measurements near the surface and an empirical relation between SMF and the Angstrom exponent, A, a measure of the wavelength dependence of optical depth or extinction. Eleven primary and fifteen alternate comparisons are examined involving varying mixtures of dust, sea salt, and pollution in the vicinity of Korea and Japan. MODIS ocean retrievals of FMF are shown to be systematically higher than suborbital estimates of SMF by about 0.2. The most significant cause of this discrepancy involves the relationship between A and fine-mode partitioning; in situ measurements indicate a systematically different relationship from what is assumed in the satellite retrievals. Based on these findings, we recommend: (1) satellite programs should concentrate on retrieving and validating tau, since an excellent validation program is in place for doing this, and (2) suborbital measurements should be used to derive relationships between A and fine-mode partitioning to allow interpretation of the satellite data in terms of fine-mode aerosol optical depth.
The UK Earth System Models Marine Biogeochemical Evaluation Toolkit, BGC-val
NASA Astrophysics Data System (ADS)
de Mora, Lee
2017-04-01
The Biogeochemical Validation toolkit, BGC-val, is a model- and grid-independent Python-based marine model evaluation framework that automates much of the validation of the marine component of an Earth System Model. BGC-val was initially developed as a flexible and extensible system to evaluate the spin-up of the marine UK Earth System Model (UKESM). However, its grid independence and flexibility mean that it is straightforward to adapt the BGC-val framework to evaluate other marine models. In addition to the marine component of the UKESM, this toolkit has been adapted to compare multiple models, including models from the CMIP5 and iMarNet inter-comparison projects. The BGC-val toolkit produces multiple levels of analysis, which are presented in a simple-to-use interactive HTML5 document. Level 1 contains time series analyses, showing the development over time of many important biogeochemical and physical ocean metrics, such as global primary production or the Drake Passage current. The second level of BGC-val is an in-depth spatial analysis of a single point in time: a series of point-to-point comparisons of model and data in various regions, such as a comparison of surface nitrate in the model against data from the World Ocean Atlas. The third level analyses are specialised ad hoc packages that go in depth on a specific question, such as the development of oxygen minimum zones in the Equatorial Pacific. In addition to the three levels, the HTML5 document opens with a Level 0 table showing a summary of the status of the model run. The beta version of this toolkit is available via the Plymouth Marine Laboratory GitLab server and uses the BSD 3-clause license.
NASA Astrophysics Data System (ADS)
Mantecón, Tomás.; del Blanco, Carlos Roberto; Jaureguizar, Fernando; García, Narciso
2014-06-01
New forms of natural interaction between human operators and UAVs (Unmanned Aerial Vehicles) are demanded by the military industry to achieve a better balance between UAV control and the burden on the human operator. In this work, a human machine interface (HMI) based on a novel gesture recognition system using depth imagery is proposed for the control of UAVs. Hand gesture recognition based on depth imagery is a promising approach for HMIs because it is more intuitive, natural, and non-intrusive than other alternatives using complex controllers. The proposed system is based on a Support Vector Machine (SVM) classifier that uses spatio-temporal depth descriptors as input features. The designed descriptor is based on a variation of the Local Binary Pattern (LBP) technique adapted to work efficiently with depth video sequences. Another major consideration is the special hand sign language used for UAV control: a tradeoff between the use of natural hand signs and the minimization of inter-sign interference has been established. Promising results have been achieved on a depth-based database of hand gestures especially developed for the validation of the proposed system.
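The depth-based LBP descriptor family mentioned above can be illustrated with a basic single-frame 8-neighbour LBP histogram; the paper's actual descriptor is a spatio-temporal variant, and the SVM classifier would be trained on such histograms. Everything below is an illustrative sketch, not the authors' implementation:

```python
def lbp_code(patch, r, c):
    """8-neighbour local binary pattern code for pixel (r, c) of a depth image:
    one bit per neighbour, set when the neighbour's depth >= the centre's."""
    centre = patch[r][c]
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(neighbours):
        if patch[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(depth):
    """256-bin LBP histogram over interior pixels: a simple frame descriptor.

    A gesture classifier (e.g. an SVM) would be trained on such histograms,
    concatenated over time for spatio-temporal description.
    """
    hist = [0] * 256
    for r in range(1, len(depth) - 1):
        for c in range(1, len(depth[0]) - 1):
            hist[lbp_code(depth, r, c)] += 1
    return hist

# Toy 4x4 depth patch (millimetres from the sensor), a smooth tilted surface
patch = [[800, 805, 810, 815],
         [802, 807, 812, 817],
         [804, 809, 814, 819],
         [806, 811, 816, 821]]
h = lbp_histogram(patch)
print(sum(h))  # number of interior pixels described
```

Because LBP compares each pixel only with its neighbours, the descriptor is largely invariant to the absolute distance of the hand from the sensor, which is one reason the technique transfers well to depth imagery.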
Resolving the depth of fluorescent light by structured illumination and shearing interferometry
NASA Astrophysics Data System (ADS)
Schindler, Johannes; Elmaklizi, Ahmed; Voit, Florian; Hohmann, Ansgar; Schau, Philipp; Brodhag, Nicole; Krauter, Philipp; Frenner, Karsten; Kienle, Alwin; Osten, Wolfgang
2016-03-01
A method for the depth-sensitive detection of fluorescent light is presented. It relies on a structured illumination restricting the excitation volume and on an interferometric detection of the wave front curvature. The illumination with two intersecting beams of a white-light laser separated in a Sagnac interferometer coupled to the microscope provides a coarse confinement in lateral and axial direction. The depth reconstruction is carried out by evaluating shearing interferograms produced with a Michelson interferometer. This setup can also be used with spatially and temporally incoherent light as emitted by fluorophores. A simulation workflow of the method was developed using a combination of a solution of Maxwell's equations with the Monte Carlo method. These simulations showed the principal feasibility of the method. The method is validated by measurements at reference samples with characterized material properties, locations and sizes of fluorescent regions. It is demonstrated that sufficient signal quality can be obtained for materials with scattering properties comparable to dental enamel while maintaining moderate illumination powers in the milliwatt range. The depth reconstruction is demonstrated for a range of distances and penetration depths of several hundred micrometers.
Development and Validation of an Interactive Liner Design and Impedance Modeling Tool
NASA Technical Reports Server (NTRS)
Howerton, Brian M.; Jones, Michael G.; Buckley, James L.
2012-01-01
The Interactive Liner Impedance Analysis and Design (ILIAD) tool is a LabVIEW-based software package used to design the composite surface impedance of a series of small-diameter quarter-wavelength resonators incorporating variable depth and sharp bends. Such structures are useful for packaging broadband acoustic liners into constrained spaces for turbofan engine noise control applications. ILIAD's graphical user interface allows the acoustic channel geometry to be drawn in the liner volume while the surface impedance and absorption coefficient calculations are updated in real time. A one-dimensional transmission line model serves as the basis for the impedance calculation and can be applied to many liner configurations. Experimentally, tonal and broadband acoustic data were acquired in the NASA Langley Normal Incidence Tube over the frequency range of 500 to 3000 Hz at 120 and 140 dB SPL. Normalized impedance spectra were measured using the Two-Microphone Method for the various combinations of channel configurations. Comparisons between the computed and measured impedances show excellent agreement for broadband liners comprised of multiple, variable-depth channels. The software can be used to design arrays of resonators that can be packaged into complex geometries heretofore unsuitable for effective acoustic treatment.
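The one-dimensional transmission line model underlying the impedance calculation can be sketched for the simplest case, a lossless, rigidly terminated straight channel (ILIAD itself handles bends, losses and multi-channel composites; the values below are hypothetical):

```python
import cmath
import math

def channel_impedance(freq_hz, depth_m, c=343.0):
    """Normalized input impedance of a rigidly terminated 1-D channel.

    Lossless transmission-line result: z = -j * cot(k * L), normalized by
    the characteristic impedance rho*c. Real liner models add viscous and
    thermal losses plus a facesheet resistance in series.
    """
    k = 2.0 * math.pi * freq_hz / c
    return -1j / cmath.tan(k * depth_m)

def resonance_freq(depth_m, c=343.0):
    """First quarter-wavelength resonance of a channel of length depth_m."""
    return c / (4.0 * depth_m)

L = 0.043  # ~4.3 cm channel, tuned near 2 kHz (hypothetical design point)
f0 = resonance_freq(L)
print(round(f0))
z = channel_impedance(f0, L)
print(abs(z.imag))  # reactance vanishes at the quarter-wave resonance
```

A composite liner surface impedance follows by combining several such channels of different depths in parallel, weighted by their open-area fractions, which is how variable-depth folded channels yield broadband absorption.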
Earthquake source properties of a shallow induced seismic sequence in SE Brazil
NASA Astrophysics Data System (ADS)
Agurto-Detzel, Hans; Bianchi, Marcelo; Prieto, Germán. A.; Assumpção, Marcelo
2017-04-01
We study source parameters of a cluster of 21 very shallow (<1 km depth), small-magnitude (Mw < 2) earthquakes induced by gravity-driven percolation of water in SE Brazil. Using a multiple empirical Green's functions (meGf) approach, we estimate seismic moments, corner frequencies, and static stress drops of these events by inversion of their spectral ratios. For the studied magnitude range (-0.3 < Mw < 1.9), we found an increase of stress drop with seismic moment. We assess the associated uncertainties by considering different signal time windows and by performing a jackknife resampling of the spectral ratios. We also calculate seismic moments by full waveform inversion to independently validate the moments from spectral analysis. We propose repeated rupture on a fault patch at shallow depth, following continuous inflow of water, as the cause of the observed low absolute stress drop values (<1 MPa) and the earthquake size dependency. To our knowledge, no other study on earthquake source properties of shallow events induced by water injection with no added pressure is available in the literature. Our study suggests that source parameter characterization may provide additional information about seismicity induced by hydraulic stimulation.
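Although the abstract does not give the source model constants used, corner frequencies from spectral ratios are commonly converted to static stress drops with the Brune circular-crack relations. A sketch under that assumption follows; the shear velocity and k-factor are illustrative values, not taken from the paper.

```python
def moment_from_mw(mw):
    """Seismic moment (N*m) from moment magnitude (Hanks & Kanamori)."""
    return 10 ** (1.5 * mw + 9.1)

def brune_stress_drop(m0, fc, beta=3000.0, k=0.372):
    """Static stress drop (Pa) for a circular crack: source radius
    r = k*beta/fc (Brune model), delta_sigma = (7/16)*M0/r**3."""
    r = k * beta / fc
    return (7.0 / 16.0) * m0 / r ** 3

m0 = moment_from_mw(1.0)                 # ~4.0e10 N*m
dsigma = brune_stress_drop(m0, fc=20.0)  # corner frequency in Hz
print(dsigma / 1e6)  # ~0.1 MPa, well below 1 MPa as in the observed cluster
```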
Large-scale human skin lipidomics by quantitative, high-throughput shotgun mass spectrometry.
Sadowski, Tomasz; Klose, Christian; Gerl, Mathias J; Wójcik-Maciejewicz, Anna; Herzog, Ronny; Simons, Kai; Reich, Adam; Surma, Michal A
2017-03-07
The lipid composition of human skin is essential for its function; however, the simultaneous quantification of a wide range of stratum corneum (SC) and sebaceous lipids is not trivial. We developed and validated a quantitative high-throughput shotgun mass spectrometry-based platform for lipid analysis of tape-stripped SC skin samples. It features coverage of 16 lipid classes, total quantification down to the level of individual lipid molecules, high reproducibility, and high-throughput capabilities. With this method we conducted a large lipidomic survey of 268 human SC samples, in which we investigated the relationship between sampling depth and lipid composition, lipidome variability in samples from 14 different sampling sites on the human body and, finally, the impact of age and sex on lipidome variability in 104 healthy subjects. We found sebaceous lipids to constitute an abundant component of the SC lipidome, as they diffuse into the topmost SC layers, forming a gradient. Lipidomic variability with respect to sampling depth, site, and subject is considerable, and mainly attributable to sebaceous lipids, while stratum corneum lipids vary less. This stresses the importance of sampling design and the role of sebaceous lipids in skin studies.
NASA Astrophysics Data System (ADS)
Rinehart, Matthew T.; LaCroix, Jeffrey; Henderson, Marcus; Katz, David; Wax, Adam
2011-03-01
The effectiveness of microbicidal gels, topical products developed to prevent infection by sexually transmitted diseases including HIV/AIDS, is governed by the extent of gel coverage, the pharmacokinetics of active pharmaceutical ingredients (APIs), and the integrity of the vaginal epithelium. While biopsies provide localized information about drug delivery and tissue structure, in vivo measurements are preferable in providing objective data on API and gel coating distribution as well as tissue integrity. We are developing a system combining confocal fluorescence microscopy with optical coherence tomography (OCT) to simultaneously measure local concentrations and diffusion coefficients of APIs during transport from microbicidal gels into tissue, while assessing tissue integrity. The confocal module acquires 2-D images of fluorescent APIs multiple times per second, allowing analysis of lateral diffusion kinetics. The custom Fourier domain OCT module has a maximum A-scan rate of 54 kHz and provides depth-resolved tissue integrity information coregistered with the confocal fluorescence measurements. The combined system is validated by imaging phantoms with a surrogate fluorophore. Time-resolved API concentration measured at fixed depths is analyzed for diffusion kinetics. This multimodal system will eventually be implemented in vivo for objective evaluation of microbicide product performance.
Artificial neural network (ANN)-based prediction of depth filter loading capacity for filter sizing.
Agarwal, Harshit; Rathore, Anurag S; Hadpe, Sandeep Ramesh; Alva, Solomon J
2016-11-01
This article presents an application of artificial neural network (ANN) modelling towards prediction of depth filter loading capacity for clarification of a monoclonal antibody (mAb) product during commercial manufacturing. The effect of operating parameters on filter loading capacity was evaluated based on the analysis of the change in differential pressure (DP) as a function of time. The proposed ANN model uses inlet stream properties (feed turbidity, feed cell count, feed cell viability), flux, and time to predict the corresponding DP. The ANN contained a single hidden layer with ten neurons and employed a sigmoidal activation function. This network was trained with 174 training points, 37 validation points, and 37 test points. Further, a pressure cut-off of 1.1 bar was used for sizing the filter area required under each operating condition. The modelling results showed that there was excellent agreement between the predicted and experimental data, with a regression coefficient (R²) of 0.98. The developed ANN model was used for performing variable depth filter sizing for different clarification lots. Monte Carlo simulation was performed to estimate the cost savings of using different filter areas for different clarification lots rather than the same filter area; a 10% saving in cost of goods was obtained for this operation. © 2016 American Institute of Chemical Engineers. Biotechnol. Prog., 32:1436-1443, 2016.
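A minimal forward pass through the stated architecture (five inputs, ten sigmoidal hidden neurons, one DP output) can be sketched in pure Python. The weights below are random placeholders, not the trained model from the paper.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, b_hidden, w_out, b_out):
    """One forward pass: 5 inputs -> 10 sigmoidal hidden neurons -> 1 output
    (the predicted differential pressure, DP)."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

random.seed(0)
# Illustrative (untrained) weights; the published network was trained on 174 points.
w_hidden = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(10)]
b_hidden = [random.uniform(-1, 1) for _ in range(10)]
w_out = [random.uniform(-1, 1) for _ in range(10)]
b_out = random.uniform(-1, 1)

# Inputs: feed turbidity, cell count, viability, flux, time (scaled to [0, 1]).
x = [0.4, 0.6, 0.9, 0.5, 0.3]
dp = forward(x, w_hidden, b_hidden, w_out, b_out)
print(dp)  # a single scalar DP prediction
```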
Harris, Heather S; Benson, Scott R; James, Michael C; Martin, Kelly J; Stacy, Brian A; Daoust, Pierre-Yves; Rist, Paul M; Work, Thierry M; Balazs, George H; Seminoff, Jeffrey A
2016-03-01
Leatherback turtles (Dermochelys coriacea) undergo substantial cyclical changes in body condition between foraging and nesting. Ultrasonography has been used to measure subcutaneous fat as an indicator of body condition in many species but has not been applied in sea turtles. To validate this technique in leatherback turtles, ultrasound images were obtained from 36 live-captured and dead-stranded immature and adult turtles from foraging and nesting areas in the Pacific and Atlantic oceans. Ultrasound measurements were compared with direct measurements from surgical biopsy or necropsy. Tissue architecture was confirmed histologically in a subset of turtles. The dorsal shoulder region provided the best site for differentiation of tissues. Maximum fat depth values with the front flipper in a neutral (45-90°) position demonstrated good correlation with direct measurements. Ultrasound-derived fat measurements may be used in the future for quantitative assessment of body condition as an index of health in this critically endangered species.
Varni, James W; Curtis, Bradley H; Abetz, Linda N; Lasch, Kathryn E; Piault, Elisabeth C; Zeytoonjian, Andrea A
2013-10-01
The content validity of the 28-item PedsQL™ 3.0 Diabetes Module has not been established in research on pediatric and adult patients with newly diagnosed Type 1 diabetes across a broad age range. This study aimed to document the content validity of three age-specific versions (8-12 years, 13-18 years, and 18-45 years) of the PedsQL™ Diabetes Module in a population of newly diagnosed patients with Type 1 diabetes. The study included in-depth interviews with 31 newly diagnosed patients with Type 1 diabetes between the ages of 8 and 45 years, as well as 14 parents and/or caregivers of child and teenage patients between 8 and 18 years of age; grounded theory data collection and analysis methods; and review by clinical and measurement experts. Following the initial round of interviews, revisions reflecting patient feedback were made to the Child and Teen versions of the Diabetes Module, and an Adult version of the Diabetes Module was drafted. Cognitive interviews of the modified versions of the Diabetes Module were conducted with an additional sample of 11 patients. The results of these interviews support the content validity of the modified 33-item PedsQL™ 3.2 Diabetes Module for pediatric and adult patients, including interpretability, comprehensiveness, and relevance suitable for all patients with Type 1 diabetes. Qualitative methods support the content validity of the modified PedsQL™ 3.2 Diabetes Module in pediatric and adult patients. It is recommended that the PedsQL™ 3.2 Diabetes Module replace version 3.0; it is suitable for measuring patient-reported outcomes in all patients with newly diagnosed, stable, or long-standing diabetes in clinical research and practice.
Brace, Christopher L
2011-07-01
Design and validate an efficient dual-slot coaxial microwave ablation antenna that produces an approximately spherical heating pattern to match the shape of most abdominal and pulmonary tumor targets. A dual-slot antenna geometry was utilized for this study. Permutations of the antenna geometry using proximal and distal slot widths from 1 to 10 mm separated by 1-20 mm were analyzed using finite-element electromagnetic simulations. From this series, the most optimal antenna geometry was selected using a two-term sigmoidal objective function to minimize the antenna reflection coefficient and maximize the diameter-to-length aspect ratio of heat generation. Sensitivities to variations in tissue properties and insertion depth were also evaluated in numerical models. The most optimal dual-slot geometry of the parametric analysis was then fabricated from semirigid coaxial cable. Antenna reflection coefficients at various insertion depths were recorded in ex vivo bovine livers and compared to numerical results. Ablation zones were then created by applying 50 W for 2-10 min in simulations and ex vivo livers. Mean zone diameter, length, aspect ratio, and reflection coefficients before and after heating were then compared to a conventional monopole antenna using ANOVA with post hoc t-tests. Statistical significance was indicated for P < 0.05. Antenna performance was highly sensitive to the dual-slot geometry. The best-performing designs utilized a proximal slot width of 1 mm, a distal slot width of 4 ± 1 mm, and a separation of 8 ± 1 mm. These designs were characterized by an active choking mechanism that focused heating to the distal tip of the antenna. A dual-band resonance was observed in the most optimal design, with a minimum reflection coefficient of -20.9 dB at 2.45 and 1.25 GHz. The total operating bandwidth was greater than 1 GHz, but the desired heating pattern was achieved only near 2.45 GHz. As a result, antenna performance was robust to changes in insertion depth and variations in the relative permittivity of the surrounding tissue medium. In both simulations and ex vivo liver, the dual-slot antenna created ablations greater in diameter than a coaxial monopole (35 ± 2 mm versus 31 ± 2 mm; P < 0.05), while also shorter in length (49 ± 2 mm versus 60 ± 6 mm; P < 0.001) after 10 min. Similar results were obtained after 2 and 5 min as well. Dual-slot antennas can produce more spherical ablation zones while retaining low reflection coefficients. These benefits are obtained without adding to the antenna diameter. Further evaluation for clinical microwave ablation appears warranted.
Perroud, Thomas D; Meagher, Robert J; Kanouff, Michael P; Renzi, Ronald F; Wu, Meiye; Singh, Anup K; Patel, Kamlesh D
2009-02-21
To enable several on-chip cell handling operations in a fused-silica substrate, small shallow micropores are radially embedded in larger, deeper microchannels using an adaptation of single-level isotropic wet etching. By varying the distance between features on the photolithographic mask (mask distance), we can precisely control the overlap between two etch fronts and create a zero-thickness semi-elliptical micropore (e.g. 20 µm wide, 6 µm deep). Geometrical models derived from a hemispherical etch front show that micropore width and depth can be expressed as a function of mask distance and etch depth. These models are experimentally validated at different etch depths (25.03 and 29.78 µm) and for different configurations (point-to-point and point-to-edge). Good reproducibility confirms the validity of this approach to fabricate micropores with a desired size. To illustrate the wide range of cell handling operations enabled by micropores, we present three on-chip functionalities: continuous-flow particle concentration, immobilization of single cells, and picoliter droplet generation. (1) Using pressure differentials, particles are concentrated by removing the carrier fluid successively through a series of 44 shunts terminated by 31 µm wide, 5 µm deep micropores. Theoretical values for the concentration factor determined by a flow circuit model in conjunction with finite volume modeling are experimentally validated. (2) Flowing macrophages are individually trapped in 20 µm wide, 6 µm deep micropores by hydrodynamic confinement. The translocation of transcription factor NF-κB into the nucleus upon lipopolysaccharide stimulation is imaged by fluorescence microscopy. (3) Picoliter-sized droplets are generated at a 20 µm wide, 7 µm deep micropore T-junction in an oil stream for the encapsulation of individual E. coli bacteria cells.
In-Depth Analysis of Citrulline-Specific CD4 T Cells in Rheumatoid Arthritis
2016-01-01
Award number: W81XWH-15-1-0003. Annual report; dates covered: 10 Dec 2014 - 09 Dec 2015. Citrulline-specific CD4 T cells present in rheumatoid arthritis (RA) patients exhibit a distinct cell surface phenotype and transcriptional signature that could be used to ...
NASA Technical Reports Server (NTRS)
Clarke, Antony D.; Porter, John N.
1997-01-01
Our research effort is focused on improving our understanding of the aerosol properties needed for optical models of remote marine regions. This includes in-situ and vertical column optical closure and involves a redundancy of approaches to measure and model optical properties that must be self-consistent. The model is based upon measured in-situ aerosol properties and will be tested and constrained by the vertically measured spectral differential optical depth of the marine boundary layer (MBL). Both measured and modeled column optical properties for the boundary layer, when added to the free-troposphere and stratospheric optical depths, will be used to establish the spectral optical depth over the entire atmospheric column for comparison with, and validation of, satellite-derived radiances (AVHRR).
Neumann, Guenter; Schaadt, Anna-Katharina; Reinhart, Stefan; Kerkhoff, Georg
2016-03-01
Cerebral vision disorders (CVDs) are frequent after brain damage and impair the patient's outcome, yet clinically and psychometrically validated procedures for the anamnesis of CVD are lacking. The aim was to evaluate the clinical validity and psychometric qualities of the Cerebral Vision Screening Questionnaire (CVSQ) for the anamnesis of CVD in individuals poststroke. The patients' subjective visual complaints on the 10-item CVSQ were analyzed in relation to objective visual perimetry and tests of reading, visual scanning, visual acuity, spatial contrast sensitivity, light/dark adaptation, and visual depth judgments. Psychometric analyses of concurrent validity, specificity, sensitivity, positive/negative predictive value, and interrater reliability were also done. Four hundred sixty-one patients with unilateral (39.5% left, 47.5% right) or bilateral stroke (13.0%) were included. Most patients were assessed in the chronic stage, on average 36.7 (range = 1-620) weeks poststroke. The majority of all patients (96.4%) recognized their visual symptoms within 1 week poststroke when asked specifically. Mean concurrent validity of the CVSQ with objective tests was 0.64 (0.54-0.79, P < .05). The mean positive predictive value was 80.1%, mean negative predictive value 82.9%, mean specificity 81.7%, and mean sensitivity 79.8%. The mean interrater reliability was 0.76 for a 1-week interval between both assessments (all P < .05). The CVSQ is suitable for the anamnesis of CVD poststroke because of its brevity (10 minutes), clinical validity, and good psychometric qualities. It thus improves neurovisual diagnosis and guides the clinician in selecting necessary assessments and appropriate neurovisual therapies for the patient. © The Author(s) 2015.
Satalkar, Priya; Elger, Bernice Simone; Shaw, David
2016-04-01
Obtaining valid informed consent (IC) can be challenging in first-in-human (FIH) trials in nanomedicine due to the complex interventions, the hype and hope concerning potential benefits, and fear of harms attributed to 'nano' particles. We describe and analyze the opinions of expert stakeholders involved in translational nanomedicine regarding the explicit use of 'nano' terminology in IC documents. We draw on content analysis of 46 in-depth interviews with European and North American stakeholders. We received a spectrum of responses (reluctance, ambivalence, absolute insistence) on the explicit mention of 'nano' in IC forms, together with the underlying reasons. We conclude that consistent, clear and honest communication regarding the 'nano' dimension of the investigational product is critical in IC forms of FIH trials.
Simplified modelling and analysis of a rotating Euler-Bernoulli beam with a single cracked edge
NASA Astrophysics Data System (ADS)
Yashar, Ahmed; Ferguson, Neil; Ghandchi-Tehrani, Maryam
2018-04-01
The natural frequencies and mode shapes of the flapwise and chordwise vibrations of a rotating cracked Euler-Bernoulli beam are investigated using a simplified method. This approach is based on obtaining the lateral deflection of the cracked rotating beam by subtracting the potential energy of a rotating massless spring, which represents the crack, from the total potential energy of the intact rotating beam. With this new method, it is assumed that the admissible function which satisfies the geometric boundary conditions of an intact beam is valid even in the presence of a crack. Furthermore, the centrifugal stiffness due to rotation is considered as an additional stiffness, which is obtained from the rotational speed and the geometry of the beam. Finally, the Rayleigh-Ritz method is utilised to solve the eigenvalue problem. The validity of the results is confirmed at different rotational speeds, crack depth and location by comparison with solid and beam finite element model simulations. Furthermore, the mode shapes are compared with those obtained from finite element models using a Modal Assurance Criterion (MAC).
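The Rayleigh-Ritz machinery underlying the approach can be illustrated on the simplest related case: an intact, non-rotating cantilever with a single admissible function satisfying the clamped-end geometric boundary conditions. The crack spring and centrifugal stiffening terms of the paper are omitted in this sketch; all parameter values are illustrative.

```python
import math

def rayleigh_omega(E=210e9, I=1e-8, rho=7800.0, A=1e-4, L=1.0, n=10000):
    """Rayleigh-quotient estimate of the first bending frequency of an
    intact Euler-Bernoulli cantilever:
        omega^2 = int(E*I*phi''^2 dx) / int(rho*A*phi^2 dx),
    with admissible function phi = 1 - cos(pi*x/(2L)), which satisfies the
    clamped-end geometric boundary conditions phi(0) = phi'(0) = 0."""
    num = den = 0.0
    dx = L / n
    for i in range(n):  # midpoint-rule integration
        x = (i + 0.5) * dx
        phi = 1.0 - math.cos(math.pi * x / (2 * L))
        phi2 = (math.pi / (2 * L)) ** 2 * math.cos(math.pi * x / (2 * L))
        num += E * I * phi2 ** 2 * dx
        den += rho * A * phi ** 2 * dx
    return math.sqrt(num / den)

# Dimensionless frequency omega * sqrt(rho*A*L^4/(E*I)) ~ 3.66, an upper
# bound on the exact Euler-Bernoulli value 3.516, as Rayleigh's quotient demands.
w = rayleigh_omega()
lam = w * math.sqrt(7800.0 * 1e-4 * 1.0 ** 4 / (210e9 * 1e-8))
print(round(lam, 2))  # 3.66
```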
Monitoring visitor satisfaction: a comparison of comment cards and more in-depth surveys
Alan R. Graefe; James D. Absher; Robert C. Burns
2001-01-01
This paper compares responses to comment cards and more detailed on-site surveys at selected Corps of Engineers lakes. The results shed light on the validity, reliability, and usefulness of these alternative methods of monitoring customer satisfaction.
NASA Astrophysics Data System (ADS)
Rebelo Kornmeier, Joana; Gibmeier, Jens; Hofmann, Michael
2011-06-01
Neutron strain measurements close to a sample surface are challenging: when scanning near the surface, aberration peak shifts arise due to geometrical and divergence effects, and these can be of the same order as the peak shifts related to residual strains. In this study it is demonstrated that, by optimizing the horizontal bending radius of a Si (4 0 0) monochromator, the aberration peak shifts from surface effects can be strongly reduced. A stress-free sample of fine-grained construction steel, S690QL, was used to find the optimal instrumental conditions to minimize aberration peak shifts. The optimized Si (4 0 0) monochromator and instrument settings were then applied to measure the residual stress depth gradient of a shot-peened SAE 4140 steel sample to validate the effectiveness of the approach. The residual stress depth profile is in good agreement with results obtained by X-ray diffraction measurements from an international round robin test (BRITE-EURAM project ENSPED). The results open very promising possibilities to bridge the gap between X-ray diffraction and conventional neutron diffraction for non-destructive residual stress analysis close to surfaces.
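Why aberration peak shifts matter can be quantified with the standard relation between a diffraction peak shift and apparent lattice strain, obtained by differentiating Bragg's law. This is a generic illustration, not the instrument's actual geometry.

```python
import math

def pseudo_strain(two_theta_deg, delta_two_theta_deg):
    """Lattice strain implied by a diffraction peak shift, from
    differentiating Bragg's law (lambda = 2*d*sin(theta)):
        epsilon = delta_d/d = -cot(theta) * delta_theta."""
    theta = math.radians(two_theta_deg / 2.0)
    dtheta = math.radians(delta_two_theta_deg / 2.0)
    return -dtheta / math.tan(theta)

# A mere 0.01 deg aberration shift in 2-theta at a 90 deg scattering angle
# already mimics strain on the order of real residual strains.
eps = pseudo_strain(90.0, 0.01)
print(round(eps * 1e6))  # -87 microstrain (apparent compression)
```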
The Algorithm Theoretical Basis Document for the GLAS Atmospheric Data Products
NASA Technical Reports Server (NTRS)
Palm, Stephen P.; Hart, William D.; Hlavka, Dennis L.; Welton, Ellsworth J.; Spinhirne, James D.
2012-01-01
The purpose of this document is to present a detailed description of the algorithm theoretical basis for each of the GLAS data products. This will be the final version of this document. The algorithms were initially designed and written based on the authors' prior experience with high-altitude lidar data from systems such as the Cloud and Aerosol Lidar System (CALS) and the Cloud Physics Lidar (CPL), both of which fly on the NASA ER-2 high-altitude aircraft. These lidar systems have been employed in many field experiments around the world, and algorithms have been developed to analyze these data for a number of atmospheric parameters. CALS data have been analyzed for cloud top height, thin cloud optical depth, cirrus cloud emittance (Spinhirne and Hart, 1990) and boundary layer depth (Palm and Spinhirne, 1987, 1998). The successor to CALS, the CPL, has also been extensively deployed in field missions since 2000, including the validation of GLAS and CALIPSO. The CALS and early CPL data sets also served as the basis for the construction of simulated GLAS data sets, which were then used to develop and test the GLAS analysis algorithms.
Lee, Kang-Woo; Kim, Sang-Hwan; Gil, Young-Chun; Hu, Kyung-Seok; Kim, Hee-Jin
2017-10-01
Three-dimensional (3D)-scanning-based morphological studies of the face are commonly included in various clinical procedures. This study evaluated the validity and reliability of a 3D scanning system by comparing it with an ultrasound (US) imaging system and with direct measurement of facial skin. The facial skin thickness at 19 landmarks was measured using the three different methods in 10 embalmed adult Korean cadavers. Skin thickness was first measured using the ultrasound device, then 3D scanning of the facial skin surface was performed. The skin on the left half of the face was then gently dissected, deviating slightly right of the midline, to separate it from the subcutaneous layer, and the harvested facial skin's thickness was measured directly using neck calipers. The dissected specimen was then scanned again, and the scanned images of the undissected and dissected faces were superimposed using Morpheus Plastic Solution (version 3.0) software. Finally, the facial skin thickness was calculated from the superimposed images. The ICC value for the correlation between the 3D scanning system and direct measurement showed excellent reliability (0.849, 95% confidence interval = 0.799-0.887). Bland-Altman analysis showed a good level of agreement between the 3D scanning system and direct measurement (bias = 0.49 ± 0.49 mm, mean ± SD). These results demonstrate that the 3D scanning system precisely reflects structural changes before and after skin dissection. Therefore, an in-depth morphological study using this 3D scanning system could provide depth data for the main anatomical structures of the face, thereby providing crucial anatomical knowledge for various clinical applications. Clin. Anat. 30:878-886, 2017. © 2017 Wiley Periodicals, Inc.
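The Bland-Altman agreement statistics reported above follow a standard recipe: the mean paired difference is the bias, and bias ± 1.96 SD gives the limits of agreement. A sketch with hypothetical paired readings (not the study's data):

```python
import statistics

def bland_altman(a, b):
    """Bland-Altman agreement between two measurement methods:
    bias = mean of paired differences; limits of agreement = bias +/- 1.96*SD."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired skin-thickness readings (mm): 3D scan vs direct measurement.
scan   = [1.9, 2.4, 1.6, 2.1, 2.8, 1.7]
direct = [1.5, 2.0, 1.2, 1.6, 2.3, 1.3]
bias, (loa_lo, loa_hi) = bland_altman(scan, direct)
print(round(bias, 2))  # 0.43 mm systematic bias in this toy data
```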
Validation of MODIS aerosol optical depth over the Mediterranean Coast
NASA Astrophysics Data System (ADS)
Díaz-Martínez, J. Vicente; Segura, Sara; Estellés, Víctor; Utrillas, M. Pilar; Martínez-Lozano, J. Antonio
2013-04-01
Atmospheric aerosols, due to their high spatial and temporal variability, are considered one of the largest sources of uncertainty in different processes affecting visibility, air quality, human health, and climate. Among their effects on climate, they play an important role in the energy balance of the Earth: on one hand they have a direct effect by scattering and absorbing solar radiation; on the other, they also have an impact on precipitation, modifying clouds, or affecting air quality. The application of remote sensing techniques to investigate aerosol effects on climate has advanced significantly in recent years. In this work, the products employed have been obtained from the Moderate Resolution Imaging Spectroradiometer (MODIS). MODIS is a sensor located onboard both Earth Observing System (EOS) Terra and Aqua satellites, which provide almost complete global coverage every day. These satellites have been acquiring data since early 2000 (Terra) and mid 2002 (Aqua) and offer different products for land, ocean, and atmosphere. Atmospheric aerosol products are presented as level 2 products with a pixel size of 10 × 10 km² at nadir. MODIS aerosol optical depth (AOD) is retrieved by different algorithms depending on the pixel surface, distinguishing between land and ocean. For its validation, ground-based sunphotometer data from AERONET (Aerosol Robotic Network) have been employed. AERONET is an international operational network of Cimel CE318 sky-sunphotometers that provides the most extensive aerosol database of ground-based measurements globally available. The ground sunphotometric technique is considered the most accurate for the retrieval of radiative properties of aerosols in the atmospheric column. In this study we present a validation of MODIS C051 AOD employing AERONET measurements over different Mediterranean coastal sites, centered over an area of 50 × 50 km² which includes pixels over both land and ocean.
The validation is done by comparing spatial statistics from MODIS with corresponding temporal statistics from AERONET, as proposed by Ichoku et al. (2002). Eight Mediterranean coastal sites (in Spain, France, Italy, Crete, Turkey and Israel) with available AERONET and MODIS data have been used. These stations have been selected following QA criteria (a minimum of 1000 days of level 2.0 data) and a maximum distance of 8 km from the coastline. Results of the validation show analogous behaviour across the sites, with similar accuracy for the algorithms. The greatest differences are found for the AOD obtained over land, especially for drier regions, where the surface tends to be brighter. In general, the MODIS AOD has better agreement with AERONET retrievals for the ocean algorithm than for the land algorithm when validated over coastal sites, and the agreement is within the expected uncertainty estimated for MODIS data. References: C. Ichoku et al., "A spatio-temporal approach for global validation and analysis of MODIS aerosol products", Geophysical Research Letters, 29(12), doi:10.1029/2001GL013206, 2002.
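The Ichoku et al. (2002) collocation compares spatial statistics of the satellite pixels in the validation box with temporal statistics of the ground record around the overpass. A minimal sketch with hypothetical AOD values; the ±30 min window is the commonly used choice and is assumed here, not stated in the abstract.

```python
import statistics

def spatiotemporal_match(modis_pixels, aeronet_series, overpass_t, window_min=30):
    """Ichoku-style collocation sketch: the spatial mean of MODIS AOD pixels
    inside the validation box is compared with the temporal mean of AERONET
    AOD within +/- window_min minutes of the satellite overpass."""
    temporal = [aod for t, aod in aeronet_series
                if abs(t - overpass_t) <= window_min]
    if not modis_pixels or not temporal:
        return None  # no valid collocation
    return statistics.mean(modis_pixels), statistics.mean(temporal)

# Hypothetical AOD values at 550 nm.
modis = [0.21, 0.19, 0.23, 0.20]                           # pixels in the 50x50 km box
aeronet = [(0, 0.18), (15, 0.20), (25, 0.21), (90, 0.35)]  # (minutes, AOD); last one excluded
sat, ground = spatiotemporal_match(modis, aeronet, overpass_t=10)
print(round(sat, 3), round(ground, 3))
```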
The Locus analytical framework for indoor localization and tracking applications
NASA Astrophysics Data System (ADS)
Segou, Olga E.; Thomopoulos, Stelios C. A.
2015-05-01
Obtaining location information can be of paramount importance in the context of pervasive and context-aware computing applications. Many systems have been proposed to date, e.g. GPS, which has been proven to offer satisfying results in outdoor areas. The increased effect of large- and small-scale fading in indoor environments, however, makes localization a challenge. This is reflected in the multitude of different systems that have been proposed in the context of indoor localization (e.g. RADAR, Cricket, etc.). The performance of such systems is often validated on vastly different test beds and conditions, making performance comparisons difficult and often irrelevant. The Locus analytical framework incorporates algorithms from multiple disciplines, such as channel modeling, non-uniform random number generation, computational geometry, localization, tracking, and probabilistic modeling, in order to provide: (a) fast and accurate signal propagation simulation, (b) fast experimentation with localization and tracking algorithms, and (c) an in-depth analysis methodology for estimating the performance limits of any Received Signal Strength localization system. Simulation results for the well-known Fingerprinting and Trilateration algorithms are herein presented and validated with experimental data collected in real conditions using IEEE 802.15.4 ZigBee modules. The analysis shows that the Locus framework accurately predicts the underlying distribution of the localization error and produces further estimates of the system's performance limitations (on a best-case/worst-case scenario basis).
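The Received Signal Strength pipeline such a framework analyzes can be sketched end to end with a log-distance path-loss model and a naive least-squares trilateration. All parameter values and the grid-search solver are illustrative, not taken from the Locus implementation.

```python
import math

def rss_from_distance(d, rss0=-40.0, n=2.7, d0=1.0):
    """Log-distance path-loss model: RSS(d) = RSS0 - 10*n*log10(d/d0).
    Exponent and reference values are typical indoor figures (assumed)."""
    return rss0 - 10.0 * n * math.log10(d / d0)

def distance_from_rss(rss, rss0=-40.0, n=2.7, d0=1.0):
    """Invert the path-loss model to estimate range from a measured RSS."""
    return d0 * 10.0 ** ((rss0 - rss) / (10.0 * n))

def trilaterate(anchors, ranges, step=0.05):
    """Brute-force least-squares position fit over a 10 m x 10 m grid."""
    best, best_err = None, float("inf")
    steps = int(10 / step)
    for i in range(steps + 1):
        for j in range(steps + 1):
            x, y = i * step, j * step
            err = sum((math.hypot(x - ax, y - ay) - r) ** 2
                      for (ax, ay), r in zip(anchors, ranges))
            if err < best_err:
                best, best_err = (x, y), err
    return best

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (4.0, 3.0)
rss = [rss_from_distance(math.hypot(true_pos[0] - ax, true_pos[1] - ay))
       for ax, ay in anchors]
est = trilaterate(anchors, [distance_from_rss(r) for r in rss])
print(est)  # close to (4.0, 3.0) in this noise-free case
```

Adding fading noise to the simulated RSS values and repeating the fit is exactly the kind of experiment such an analytical framework automates, yielding the error distribution rather than a single estimate.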
NASA Astrophysics Data System (ADS)
Oh, Seboong; Achmad Zaky, Fauzi; Mog Park, Young
2016-04-01
The hydraulic behavior of the soil layer is crucial to transient infiltration analysis of natural slopes, in which unsaturated hydraulic conductivity (HC) can be evaluated theoretically from soil water retention curves (SWRC) by Mualem's equation. In nonlinear infiltration analysis, solutions based on some smooth SWRCs do not converge under heavy rainfall conditions, since the gradient of the HC is extremely steep near saturation. The van Genuchten SWRC model has been modified near saturation and, subsequently, an analytical HC function was proposed to improve the van Genuchten-Mualem HC. Using examples of 1-D infiltration analysis with the modified HC model, it is validated that solutions converge for various rainfall conditions while maintaining numerical stability. Stability analysis based on unsaturated effective stress could simulate infinite slope failure with the proposed HC model. The pore water pressure and the degree of saturation increased from the surface to shallow depth (~1 m), and the factor of safety decreased gradually due to infiltration. Acknowledgements: This research is supported by grants from the Korean NRF (2012M3A2A1050974 and 2015R1A2A2A01), which are greatly appreciated.
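For reference, the classical (unmodified) van Genuchten-Mualem closed form that such analyses start from can be sketched as follows; parameter values are illustrative, and the steep drop of K at small suctions shows why convergence near saturation is difficult.

```python
def vg_mualem_k(h, alpha=1.0, n=2.0, ks=1.0):
    """Unsaturated hydraulic conductivity from the classical van Genuchten
    SWRC combined with Mualem's model:
        Se = [1 + (alpha*h)^n]^(-m),  m = 1 - 1/n
        K  = Ks * Se^0.5 * (1 - (1 - Se^(1/m))^m)^2
    h is suction head (>= 0); alpha, n, Ks here are illustrative values."""
    if h <= 0:
        return ks  # saturated
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * h) ** n) ** (-m)
    return ks * se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# The steep gradient near saturation is visible in how quickly K/Ks drops
# for small suctions:
for h in (0.0, 0.01, 0.1, 1.0):
    print(h, vg_mualem_k(h))
```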
NASA Astrophysics Data System (ADS)
Habas, Piotr A.; Kim, Kio; Chandramohan, Dharshan; Rousseau, Francois; Glenn, Orit A.; Studholme, Colin
2009-02-01
Recent advances in MR and image analysis allow for reconstruction of high-resolution 3D images from clinical in utero scans of the human fetal brain. Automated segmentation of tissue types from MR images (MRI) is a key step in the quantitative analysis of brain development. Conventional atlas-based methods for adult brain segmentation are limited in their ability to accurately delineate complex structures of developing tissues from fetal MRI. In this paper, we formulate a novel geometric representation of the fetal brain aimed at capturing the laminar structure of developing anatomy. The proposed model uses a depth-based encoding of tissue occurrence within the fetal brain and provides an additional anatomical constraint in a form of a laminar prior that can be incorporated into conventional atlas-based EM segmentation. Validation experiments are performed using clinical in utero scans of 5 fetal subjects at gestational ages ranging from 20.5 to 22.5 weeks. Experimental results are evaluated against reference manual segmentations and quantified in terms of Dice similarity coefficient (DSC). The study demonstrates that the use of laminar depth-encoded tissue priors improves both the overall accuracy and precision of fetal brain segmentation. Particular refinement is observed in regions of the parietal and occipital lobes where the DSC index is improved from 0.81 to 0.82 for cortical grey matter, from 0.71 to 0.73 for the germinal matrix, and from 0.81 to 0.87 for white matter.
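The Dice similarity coefficient used for evaluation above has a one-line definition; a minimal sketch on flattened binary masks (the toy masks are hypothetical):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary label masks:
    DSC = 2|A intersect B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0  # two empty masks agree fully

# Hypothetical automated vs. manual segmentations, flattened to 1-D.
auto   = [1, 1, 1, 0, 0, 1, 0, 1]
manual = [1, 1, 0, 0, 1, 1, 0, 1]
print(dice(auto, manual))  # -> 0.8
```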
Development of PAOT tool kit for work improvements in clinical nursing.
Jung, Moon-Hee
2014-01-01
The aim of this study was to develop an action checklist for educational training of clinical nurses. The study used qualitative and quantitative methods. Questionnaire items were extracted through in-depth interviews and a questionnaire survey. PASW version 19 and AMOS version 19 were used for data analyses. Reliability and validity were tested with both exploratory and confirmatory factor analysis. The levels of the indicators related to goodness-of-fit were acceptable. Thus, a tool kit for work improvements in clinical nursing was developed. It comprises 5 domains (16 action points): health promotion (5 action points), work management (3 action points), ergonomic work methods (3 action points), managerial policies and mutual support among staff members (3 action points), and welfare in the work area (2 action points).
Retrieval and Validation of Aerosol Optical Depth by using the GF-1 Remote Sensing Data
NASA Astrophysics Data System (ADS)
Zhang, L.; Xu, S.; Wang, L.; Cai, K.; Ge, Q.
2017-05-01
Based on the characteristics of GF-1 remote sensing data, a method and data processing procedure to retrieve the Aerosol Optical Depth (AOD) are developed in this study. The surface contributions over dense vegetation and bright urban target areas are removed using the dark target and deep blue algorithms, respectively. Our method is applied to three seriously polluted regions: Beijing-Tianjin-Hebei (BTH), the Yangtze River Delta (YRD) and the Pearl River Delta (PRD). The retrieved AOD is validated against ground-based AERONET data from the Beijing, Hangzhou and Hong Kong sites. Our results show that: 1) heavy aerosol loadings are usually distributed over cities with high industrial emissions and dense populations, with AOD values near 1; 2) there is good agreement between satellite retrievals and in-situ observations, with correlation coefficients of 0.71 (BTH), 0.55 (YRD) and 0.54 (PRD); 3) the GF-1 retrieval uncertainties mainly arise from cloud contamination, high surface reflectance and the assumed aerosol model.
The OncoSim model: development and use for better decision-making in Canadian cancer control.
Gauvreau, C L; Fitzgerald, N R; Memon, S; Flanagan, W M; Nadeau, C; Asakawa, K; Garner, R; Miller, A B; Evans, W K; Popadiuk, C M; Wolfson, M; Coldman, A J
2017-12-01
The Canadian Partnership Against Cancer was created in 2007 by the federal government to accelerate cancer control across Canada. Its OncoSim microsimulation model platform, which consists of a suite of specific cancer models, was conceived as a tool to augment conventional resources for population-level policy- and decision-making. The Canadian Partnership Against Cancer manages the OncoSim program, with funding from Health Canada and model development by Statistics Canada. Microsimulation modelling allows for the detailed capture of population heterogeneity and health and demographic history over time. Extensive data from multiple Canadian sources were used as inputs or to validate the model. OncoSim has been validated through expert consultation; assessments of face validity, internal validity, and external validity; and model fit against observed data. The platform comprises three in-depth cancer models (lung, colorectal, cervical), with another in-depth model (breast) and a generalized model (25 cancers) being in development. Unique among models of its class, OncoSim is available online for public sector use free of charge. Users can customize input values and output display, and extensive user support is provided. OncoSim has been used to support decision-making at the national and jurisdictional levels. Although simulation studies are generally not included in hierarchies of evidence, they are integral to informing cancer control policy when clinical studies are not feasible. OncoSim can evaluate complex intervention scenarios for multiple cancers. Canadian decision-makers thus have a powerful tool to assess the costs, benefits, cost-effectiveness, and budgetary effects of cancer control interventions when faced with difficult choices for improvements in population health and resource allocation.
Uludag, K; Kohl, M; Steinbrink, J; Obrig, H; Villringer, A
2002-01-01
Using the modified Lambert-Beer law to analyze attenuation changes measured noninvasively during functional activation of the brain might result in an insufficient separation of chromophore changes ("cross talk") due to the wavelength dependence of the partial path length of photons in the activated volume of the head. The partial path length was estimated by performing Monte Carlo simulations on layered head models. When assuming cortical activation (e.g., at a depth of 8-12 mm), we determine negligible cross talk when considering changes in oxygenated and deoxygenated hemoglobin. However, when changes in the redox state of cytochrome-c-oxidase are additionally taken into account, the analysis results in significant artifacts. An analysis developed for changes in mean time of flight--instead of changes in attenuation--reduces the cross talk for the layers of cortical activation. These results were validated for different oxygen saturations, wavelength combinations and scattering coefficients. For the analysis of changes in oxygenated and deoxygenated hemoglobin only, low cross talk was also found when the activated volume was assumed to be a 4-mm-diam sphere.
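For two chromophores measured at two wavelengths, the modified Lambert-Beer inversion discussed above reduces to a 2x2 linear solve. The extinction coefficients and path lengths below are rough illustrative numbers, not the paper's Monte Carlo estimates; cross talk arises precisely when the path lengths assumed in the inversion differ from the true, wavelength-dependent partial path lengths.

```python
# dA(lam) = (eps_HbO(lam)*dHbO + eps_HbR(lam)*dHbR) * L(lam)
eps = {760: (0.15, 0.39), 850: (0.27, 0.18)}   # (eps_HbO, eps_HbR), illustrative
L   = {760: 6.0, 850: 5.5}                     # effective path lengths, cm (assumed)

def invert(dA_760, dA_850):
    """Solve the 2x2 system for (dHbO, dHbR) by Cramer's rule."""
    a11, a12 = eps[760][0] * L[760], eps[760][1] * L[760]
    a21, a22 = eps[850][0] * L[850], eps[850][1] * L[850]
    det = a11 * a22 - a12 * a21
    return ((a22 * dA_760 - a12 * dA_850) / det,
            (a11 * dA_850 - a21 * dA_760) / det)

# Forward-simulate an oxygenation change, then invert it back. With the
# same path lengths used in both directions, recovery is exact; using
# wrong L values in invert() would produce the cross talk described above.
dHbO_true, dHbR_true = 0.010, -0.004
dA760 = (eps[760][0] * dHbO_true + eps[760][1] * dHbR_true) * L[760]
dA850 = (eps[850][0] * dHbO_true + eps[850][1] * dHbR_true) * L[850]
print(invert(dA760, dA850))  # recovers (0.010, -0.004) up to rounding
```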
In-Depth Analysis of Citrulline-Specific CD4 T Cells in Rheumatoid Arthritis
2017-01-01
Award number W81XWH-15-1-0003. The goal of this project is to test the hypothesis that cit-specific CD4 T cells present in rheumatoid arthritis (RA) patients
In-Depth Analysis of Citrulline Specific CD4 T-Cells in Rheumatoid Arthritis
2017-01-01
Award number W81XWH-15-1-0004. The goal of this project is to test the hypothesis that cit-specific CD4 T cells present in rheumatoid arthritis (RA) patients
Clark, Ross A; Pua, Yong-Hao; Oliveira, Cristino C; Bower, Kelly J; Thilarajah, Shamala; McGaw, Rebekah; Hasanki, Ksaniel; Mentiplay, Benjamin F
2015-07-01
The Microsoft Kinect V2 for Windows, also known as the Xbox One Kinect, includes new and potentially far improved depth and image sensors which may increase its accuracy for assessing postural control and balance. The aim of this study was to assess the concurrent validity and reliability of kinematic data recorded using a marker-based three dimensional motion analysis (3DMA) system and the Kinect V2 during a variety of static and dynamic balance assessments. Thirty healthy adults performed two sessions, separated by one week, consisting of static standing balance tests under different visual (eyes open vs. closed) and supportive (single limb vs. double limb) conditions, and dynamic balance tests consisting of forward and lateral reach and an assessment of limits of stability. Marker coordinate and joint angle data were concurrently recorded using the Kinect V2 skeletal tracking algorithm and the 3DMA system. Task-specific outcome measures from each system on Day 1 and 2 were compared. Concurrent validity of trunk angle data during the dynamic tasks and anterior-posterior range and path length in the static balance tasks was excellent (Pearson's r>0.75). In contrast, concurrent validity for medial-lateral range and path length was poor to modest for all trials except single leg eyes closed balance. Within device test-retest reliability was variable; however, the results were generally comparable between devices. In conclusion, the Kinect V2 has the potential to be used as a reliable and valid tool for the assessment of some aspects of balance performance. Copyright © 2015 Elsevier B.V. All rights reserved.
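Concurrent validity above is graded by Pearson's r (excellent when r > 0.75); a self-contained sketch of the computation, with hypothetical trunk-angle ranges standing in for the Kinect V2 and 3DMA recordings:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical per-trial trunk-angle ranges (degrees) from the two systems.
kinect = [12.1, 15.4, 9.8, 20.2, 17.5, 11.0]
mocap  = [12.5, 15.0, 10.1, 19.8, 18.0, 11.4]
r = pearson_r(kinect, mocap)
print(f"r = {r:.3f}  ({'excellent' if r > 0.75 else 'poor to modest'})")
```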
Teleseismic depth estimation of the 2015 Gorkha-Nepal aftershocks
NASA Astrophysics Data System (ADS)
Letort, Jean; Bollinger, Laurent; Lyon-Caen, Helene; Guilhem, Aurélie; Cano, Yoann; Baillard, Christian; Adhikari, Lok Bijaya
2016-12-01
The depth of 61 aftershocks of the 2015 April 25 Gorkha, Nepal earthquake, that occurred within the first 20 d following the main shock, is constrained using time delays between teleseismic P phases and depth phases (pP and sP). The detection and identification of these phases are automatically processed using the cepstral method developed by Letort et al., and are validated with computed radiation patterns from the most probable focal mechanisms. The events are found to be relatively shallow (13.1 ± 3.9 km). Because depth estimations could potentially be biased by the method, velocity model or selected data, we also evaluate the depth resolution of the events from local catalogues by extracting 138 events with assumed well-constrained depth estimations. Comparison between the teleseismic depths and the depths from local and regional catalogues helps decrease epistemic uncertainties, and shows that the seismicity is clustered in a narrow band between 10 and 15 km depth. Given the geometry and depth of the major tectonic structures, most aftershocks are probably located in the immediate vicinity of the Main Himalayan Thrust (MHT) shear zone. The mid-crustal ramp of the flat/ramp MHT system is not resolved indicating that its height is moderate (less than 5-10 km) in the trace of the sections that ruptured on April 25. However, the seismicity depth range widens and deepens through an adjacent section to the east, a region that failed on 2015 May 12 during an Mw 7.3 earthquake. This deeper seismicity could reflect a step-down of the basal detachment of the MHT, a lateral structural variation which probably acted as a barrier to the dynamic rupture propagation.
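The depth-phase geometry behind those picks can be sketched to first order: for a near-vertical teleseismic ray, t_pP - t_P ≈ 2·h·cos(i)/v_p, where h is source depth, i the takeoff angle and v_p the P velocity above the source. The velocity and takeoff angle below are generic crustal assumptions, not the study's velocity model.

```python
import math

def depth_from_pP_delay(delay_s, v_p=6.0, takeoff_deg=25.0):
    """First-order source depth (km) from the pP-P delay time:
    h = v_p * delay / (2 * cos(i)). v_p in km/s; both parameter
    defaults are illustrative assumptions."""
    return v_p * delay_s / (2.0 * math.cos(math.radians(takeoff_deg)))

# A ~4 s delay maps to roughly the 13 km mean depth reported above.
print(f"{depth_from_pP_delay(4.0):.1f} km")
```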
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, X. R.; Poenisch, F.; Lii, M.
2013-04-15
Purpose: To present our method and experience in commissioning dose models in water for spot scanning proton therapy in a commercial treatment planning system (TPS). Methods: The input data required by the TPS included in-air transverse profiles and integral depth doses (IDDs). All input data were obtained from Monte Carlo (MC) simulations that had been validated by measurements. MC-generated IDDs were converted to units of Gy mm²/MU using the measured IDDs at a depth of 2 cm employing the largest commercially available parallel-plate ionization chamber. The sensitive area of the chamber was insufficient to fully encompass the entire lateral dose deposited at depth by a pencil beam (spot). To correct for the detector size, correction factors as a function of proton energy were defined and determined using MC. The fluence of individual spots was initially modeled as a single Gaussian (SG) function and later as a double Gaussian (DG) function. The DG fluence model was introduced to account for the spot fluence due to contributions of large angle scattering from the devices within the scanning nozzle, especially from the spot profile monitor. To validate the DG fluence model, we compared calculations and measurements, including doses at the center of spread out Bragg peaks (SOBPs) as a function of nominal field size, range, and SOBP width, lateral dose profiles, and depth doses for different widths of SOBP. Dose models were validated extensively with patient treatment field-specific measurements. Results: We demonstrated that the DG fluence model is necessary for predicting the field size dependence of dose distributions. With this model, the calculated doses at the center of SOBPs as a function of nominal field size, range, and SOBP width, lateral dose profiles and depth doses for rectangular target volumes agreed well with respective measured values. 
With the DG fluence model for our scanning proton beam line, we successfully treated more than 500 patients from March 2010 through June 2012 with acceptable agreement between TPS calculated and measured dose distributions. However, the current dose model still has limitations in predicting field size dependence of doses at some intermediate depths of proton beams with high energies. Conclusions: We have commissioned a DG fluence model for clinical use. It is demonstrated that the DG fluence model is significantly more accurate than the SG fluence model. However, some deficiencies in modeling the low-dose envelope in the current dose algorithm still exist. Further improvements to the current dose algorithm are needed. The method presented here should be useful for commissioning pencil beam dose algorithms in new versions of TPS in the future.
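The single- versus double-Gaussian spot fluence contrast described above can be illustrated directly: a narrow primary core plus a wide, low-weight halo from in-nozzle scattering. The sigmas and halo weight below are invented for illustration, not the commissioned beam data.

```python
import math

def gauss2d(r, sigma):
    """Radially symmetric, unit-integral 2D Gaussian evaluated at radius r."""
    return math.exp(-r * r / (2 * sigma * sigma)) / (2 * math.pi * sigma * sigma)

def dg_fluence(r, sigma1=5.0, sigma2=20.0, w=0.05):
    """Double-Gaussian spot fluence: (1-w)*core + w*halo. Sigmas in mm;
    all three parameter values are illustrative assumptions."""
    return (1.0 - w) * gauss2d(r, sigma1) + w * gauss2d(r, sigma2)

# The halo is negligible at the spot centre but dominates far off-axis,
# which is why summing many spots (a larger field) changes the central
# dose -- the field-size dependence an SG model cannot reproduce.
for r in (0.0, 10.0, 30.0):
    print(f"r={r:5.1f} mm  SG={gauss2d(r, 5.0):.3e}  DG={dg_fluence(r):.3e}")
```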
Zhu, X. R.; Poenisch, F.; Lii, M.; Sawakuchi, G. O.; Titt, U.; Bues, M.; Song, X.; Zhang, X.; Li, Y.; Ciangaru, G.; Li, H.; Taylor, M. B.; Suzuki, K.; Mohan, R.; Gillin, M. T.; Sahoo, N.
2013-01-01
Purpose: To present our method and experience in commissioning dose models in water for spot scanning proton therapy in a commercial treatment planning system (TPS). Methods: The input data required by the TPS included in-air transverse profiles and integral depth doses (IDDs). All input data were obtained from Monte Carlo (MC) simulations that had been validated by measurements. MC-generated IDDs were converted to units of Gy mm2/MU using the measured IDDs at a depth of 2 cm employing the largest commercially available parallel-plate ionization chamber. The sensitive area of the chamber was insufficient to fully encompass the entire lateral dose deposited at depth by a pencil beam (spot). To correct for the detector size, correction factors as a function of proton energy were defined and determined using MC. The fluence of individual spots was initially modeled as a single Gaussian (SG) function and later as a double Gaussian (DG) function. The DG fluence model was introduced to account for the spot fluence due to contributions of large angle scattering from the devices within the scanning nozzle, especially from the spot profile monitor. To validate the DG fluence model, we compared calculations and measurements, including doses at the center of spread out Bragg peaks (SOBPs) as a function of nominal field size, range, and SOBP width, lateral dose profiles, and depth doses for different widths of SOBP. Dose models were validated extensively with patient treatment field-specific measurements. Results: We demonstrated that the DG fluence model is necessary for predicting the field size dependence of dose distributions. With this model, the calculated doses at the center of SOBPs as a function of nominal field size, range, and SOBP width, lateral dose profiles and depth doses for rectangular target volumes agreed well with respective measured values. 
With the DG fluence model for our scanning proton beam line, we successfully treated more than 500 patients from March 2010 through June 2012 with acceptable agreement between TPS calculated and measured dose distributions. However, the current dose model still has limitations in predicting field size dependence of doses at some intermediate depths of proton beams with high energies. Conclusions: We have commissioned a DG fluence model for clinical use. It is demonstrated that the DG fluence model is significantly more accurate than the SG fluence model. However, some deficiencies in modeling the low-dose envelope in the current dose algorithm still exist. Further improvements to the current dose algorithm are needed. The method presented here should be useful for commissioning pencil beam dose algorithms in new versions of TPS in the future. PMID:23556893
Zhu, X R; Poenisch, F; Lii, M; Sawakuchi, G O; Titt, U; Bues, M; Song, X; Zhang, X; Li, Y; Ciangaru, G; Li, H; Taylor, M B; Suzuki, K; Mohan, R; Gillin, M T; Sahoo, N
2013-04-01
To present our method and experience in commissioning dose models in water for spot scanning proton therapy in a commercial treatment planning system (TPS). The input data required by the TPS included in-air transverse profiles and integral depth doses (IDDs). All input data were obtained from Monte Carlo (MC) simulations that had been validated by measurements. MC-generated IDDs were converted to units of Gy mm(2)/MU using the measured IDDs at a depth of 2 cm employing the largest commercially available parallel-plate ionization chamber. The sensitive area of the chamber was insufficient to fully encompass the entire lateral dose deposited at depth by a pencil beam (spot). To correct for the detector size, correction factors as a function of proton energy were defined and determined using MC. The fluence of individual spots was initially modeled as a single Gaussian (SG) function and later as a double Gaussian (DG) function. The DG fluence model was introduced to account for the spot fluence due to contributions of large angle scattering from the devices within the scanning nozzle, especially from the spot profile monitor. To validate the DG fluence model, we compared calculations and measurements, including doses at the center of spread out Bragg peaks (SOBPs) as a function of nominal field size, range, and SOBP width, lateral dose profiles, and depth doses for different widths of SOBP. Dose models were validated extensively with patient treatment field-specific measurements. We demonstrated that the DG fluence model is necessary for predicting the field size dependence of dose distributions. With this model, the calculated doses at the center of SOBPs as a function of nominal field size, range, and SOBP width, lateral dose profiles and depth doses for rectangular target volumes agreed well with respective measured values. 
With the DG fluence model for our scanning proton beam line, we successfully treated more than 500 patients from March 2010 through June 2012 with acceptable agreement between TPS calculated and measured dose distributions. However, the current dose model still has limitations in predicting field size dependence of doses at some intermediate depths of proton beams with high energies. We have commissioned a DG fluence model for clinical use. It is demonstrated that the DG fluence model is significantly more accurate than the SG fluence model. However, some deficiencies in modeling the low-dose envelope in the current dose algorithm still exist. Further improvements to the current dose algorithm are needed. The method presented here should be useful for commissioning pencil beam dose algorithms in new versions of TPS in the future.
Single particle tracking through highly scattering media with multiplexed two-photon excitation
NASA Astrophysics Data System (ADS)
Perillo, Evan; Liu, Yen-Liang; Liu, Cong; Yeh, Hsin-Chih; Dunn, Andrew K.
2015-03-01
3D single-particle tracking (SPT) has been a pivotal tool to furthering our understanding of dynamic cellular processes in complex biological systems, with a molecular localization accuracy (10-100 nm) often better than the diffraction limit of light. However, current SPT techniques utilize either CCDs or a confocal detection scheme which not only suffer from poor temporal resolution but also limit tracking to a depth less than one scattering mean free path in the sample (typically <15μm). In this report we highlight our novel design for a spatiotemporally multiplexed two-photon microscope which is able to reach sub-diffraction-limit tracking accuracy and sub-millisecond temporal resolution, but with a dramatically extended SPT range of up to 200 μm through dense cell samples. We have validated our microscope by tracking (1) fluorescent nanoparticles in a prescribed motion inside gelatin gel (with 1% intralipid) and (2) labeled single EGFR complexes inside skin cancer spheroids (at least 8 layers of cells thick) for ~10 minutes. Furthermore we discuss future capabilities of our multiplexed two-photon microscope design, specifically to the extension of (1) simultaneous multicolor tracking (i.e. spatiotemporal co-localization analysis) and (2) FRET studies (i.e. lifetime analysis). The high resolution, high depth penetration, and multicolor features of this microscope make it well poised to study a variety of molecular scale dynamics in the cell, especially related to cellular trafficking studies with in vitro tumor models and in vivo.
Nadeau, Christopher P.; Conway, Courtney J.
2015-01-01
Securing water for wetland restoration efforts will be increasingly difficult as human populations demand more water and climate change alters the hydrologic cycle. Minimizing water use at a restoration site could help justify water use to competing users, thereby increasing future water security. Moreover, optimizing water depth for focal species will increase habitat quality and the probability that the restoration is successful. We developed and validated spatial habitat models to optimize water depth within wetland restoration projects along the lower Colorado River intended to benefit California black rails (Laterallus jamaicensis coturniculus). We observed a 358% increase in the number of black rails detected in the year after manipulating water depth to maximize the amount of predicted black rail habitat in two wetlands. The number of black rail detections in our restoration sites was similar to those at our reference site. Implementing the optimal water depth in each wetland decreased water use while simultaneously increasing habitat suitability for the focal species. Our results also provide experimental confirmation of past descriptive accounts of black rail habitat preferences and provide explicit water depth recommendations for future wetland restoration efforts for this species of conservation concern; maintain surface water depths between saturated soil and 100 mm. Efforts to optimize water depth in restored wetlands around the world would likely increase the success of wetland restorations for the focal species while simultaneously minimizing and justifying water use.
SWATS: Diurnal Trends in the Soil Temperature Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, David; Theisen, Adam
During the processing of data for the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility ARMBE2D Value-Added Product (VAP), the developers noticed that the SWATS soil temperatures did not show a decreased temporal variability with increased depth with the new E30+ Extended Facilities (EFs), unlike the older EFs at ARM’s Southern Great Plains (SGP) site. The instrument mentor analyzed the data and reported that all SWATS locations have shown this behavior but that the magnitude of the problem was greatest at EFs E31-E38. The data were analyzed to verify the initial assessments that: 1. 5 cm SWATS data were valid for all EFs, and 15 cm soil temperature measurements were valid at all EFs other than E31-E38; 2. Only nighttime SWATS soil temperature measurements should be used to calculate daily average soil temperatures; 3. Since it seems likely that the soil temperature measurements below 15 cm were affected by solar heating of the enclosure at all but E31-E38, and at all depths below 5 cm at E31-E38, individual measurements of soil temperature at these depths during daylight hours, and daily averages of the same, cannot be trusted on most (particularly sunny) days.
Measurement of stream channel habitat using sonar
Flug, Marshall; Seitz, Heather; Scott, John
1998-01-01
An efficient and low-cost technique using a sonar system was evaluated for describing channel geometry and quantifying inundated area in a large river. The boat-mounted portable sonar equipment was used to record water depths and river width measurements for direct storage on a laptop computer. The field data collected from repeated traverses at a cross-section were evaluated to determine the precision of the system and field technique. Results from validation at two different sites showed average sample standard deviations (S.D.s) of 0.12 m for these complete cross-sections, with coefficients of variation of 10%. Validation using only the mid-channel river cross-section data yields an average sample S.D. of 0.05 m, with a coefficient of variation below 5%, at a stable and gauged river site using only measurements of water depths greater than 0.6 m. Accuracy of the sonar system was evaluated by comparison to traditionally surveyed transect data from a regularly gauged site. We observed an average mean squared deviation of 46.0 cm², considering only that portion of the cross-section inundated by more than 0.6 m of water. Our procedure proved to be a reliable, accurate, safe, quick, and economical method to record river depths, discharges, bed conditions, and substratum composition necessary for stream habitat studies.
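The precision statistics reported above (sample S.D. and coefficient of variation) are straightforward to reproduce; the repeated-pass depths below are hypothetical values chosen only to be on the scale the paper reports.

```python
import statistics

# Hypothetical mid-channel depths (m) from repeated sonar passes.
depths_m = [1.18, 1.25, 1.10, 1.22, 1.15]

mean = statistics.mean(depths_m)
sd = statistics.stdev(depths_m)   # sample standard deviation (n-1 denominator)
cv = 100.0 * sd / mean            # coefficient of variation, %
print(f"mean={mean:.2f} m  S.D.={sd:.3f} m  CV={cv:.1f}%")
```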
Efthimiou, George C; Bartzis, John G; Berbekar, Eva; Hertwig, Denise; Harms, Frank; Leitl, Bernd
2015-06-26
The capability to predict short-term maximum individual exposure is very important for several applications including, for example, deliberate or accidental release of hazardous substances, odour fluctuations or exceedance of material flammability levels. Recently, the authors proposed a simple approach relating maximum individual exposure to parameters such as the fluctuation intensity and the concentration integral time scale. In the first part of this study (Part I), the methodology was validated against field measurements, which are governed by the natural variability of atmospheric boundary conditions. In Part II of this study, an in-depth validation of the approach is performed using reference data recorded under truly stationary and well-documented flow conditions. For this reason, a boundary-layer wind-tunnel experiment was used. The experimental dataset includes 196 time-resolved concentration measurements which capture the dispersion from a continuous point source within an urban model of semi-idealized complexity. The data analysis allowed the improvement of an important model parameter. The model performed very well in predicting the maximum individual exposure, with the fraction of predictions within a factor of two of observations (FAC2) equal to 95%. For large time intervals, an exponential correction term has been introduced into the model based on the experimental observations. The new model is capable of predicting all time intervals, giving an overall FAC2 of 100%.
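The factor-of-two-of-observations metric used above is simply the fraction of prediction/observation ratios that fall between 0.5 and 2; the exposure pairs below are hypothetical.

```python
def fac2(pred, obs):
    """Fraction of (prediction, observation) pairs within a factor of two:
    0.5 <= pred/obs <= 2.0 (a standard dispersion-model acceptance metric)."""
    ok = sum(1 for p, o in zip(pred, obs) if o > 0 and 0.5 <= p / o <= 2.0)
    return ok / len(obs)

# Hypothetical maximum-exposure pairs (arbitrary concentration units).
predicted = [1.2, 0.8, 3.5, 0.9, 2.0]
observed  = [1.0, 1.0, 1.0, 1.0, 1.1]
print(fac2(predicted, observed))  # -> 0.8 (only the 3.5 vs 1.0 pair fails)
```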
High-throughput 3D whole-brain quantitative histopathology in rodents
Vandenberghe, Michel E.; Hérard, Anne-Sophie; Souedet, Nicolas; Sadouni, Elmahdi; Santin, Mathieu D.; Briet, Dominique; Carré, Denis; Schulz, Jocelyne; Hantraye, Philippe; Chabrier, Pierre-Etienne; Rooney, Thomas; Debeir, Thomas; Blanchard, Véronique; Pradier, Laurent; Dhenain, Marc; Delzescaux, Thierry
2016-01-01
Histology is the gold standard to unveil microscopic brain structures and pathological alterations in humans and animal models of disease. However, due to tedious manual interventions, quantification of histopathological markers is classically performed on a few tissue sections, thus restricting measurements to limited portions of the brain. Recently developed 3D microscopic imaging techniques have allowed in-depth study of neuroanatomy. However, quantitative methods are still lacking for whole-brain analysis of cellular and pathological markers. Here, we propose a ready-to-use, automated, and scalable method to thoroughly quantify histopathological markers in 3D in rodent whole brains. It relies on block-face photography, serial histology and 3D-HAPi (Three Dimensional Histology Analysis Pipeline), an open source image analysis software. We illustrate our method in studies involving mouse models of Alzheimer’s disease and show that it can be broadly applied to characterize animal models of brain diseases, to evaluate therapeutic interventions, to anatomically correlate cellular and pathological markers throughout the entire brain and to validate in vivo imaging techniques. PMID:26876372
Reliability and validity: Part II.
Davis, Debora Winders
2004-01-01
Determining measurement reliability and validity involves complex processes. There is usually room for argument about most instruments. It is important that the researcher clearly describes the processes upon which she based the decision to use a particular instrument, and presents the available evidence showing that the instrument is reliable and valid for the current purposes. In some cases, the researcher may need to conduct pilot studies to obtain evidence upon which to decide whether the instrument is valid for a new population or a different setting. In all cases, the researcher must present a clear and complete explanation for the choices she has made regarding reliability and validity. The consumer must then judge the degree to which the researcher has provided adequate and theoretically sound rationale. Although I have tried to touch on most of the important concepts related to measurement reliability and validity, it is beyond the scope of this column to be exhaustive. There are textbooks devoted entirely to specific measurement issues if readers require more in-depth knowledge.
Ayotte, Joseph D.; Hammond, Robert E.
1996-01-01
bridge consisting of one 27-foot clear-span concrete-encased steel beam deck superstructure (Vermont Agency of Transportation, written commun., August 25, 1994). The bridge is supported by vertical, concrete abutments with wingwalls. The channel is skewed approximately 10 degrees to the opening while the opening-skew-to-roadway is 5 degrees. Both abutment footings were reported as exposed and the left abutment was reported to be undermined by 0.5 ft at the time of the Level I assessment. The only scour protection measure at the site was type-1 stone fill (less than 12 inches diameter) along the left abutment, which was reported as failed. Additional details describing conditions at the site are included in the Level II Summary and Appendices D and E. Scour depths and rock rip-rap sizes were computed using the general guidelines described in Hydraulic Engineering Circular 18 (Richardson and others, 1993). Total scour at a highway crossing comprises three components: 1) long-term streambed degradation; 2) contraction scour (due to accelerated flow caused by a reduction in flow area at a bridge); and 3) local scour (caused by accelerated flow around piers and abutments). Total scour is the sum of the three components. Equations are available to compute depths for contraction and local scour, and a summary of the results of these computations follows. Contraction scour for all modelled flows ranged from 0.4 to 5.1 ft, with the worst case occurring at the 500-year discharge. Abutment scour ranged from 9.9 to 20.3 ft. The worst-case abutment scour also occurred at the 500-year discharge. Additional information on scour depths and depths to armoring is included in the section titled “Scour Results”. Scoured-streambed elevations, based on the calculated scour depths, are presented in tables 1 and 2. A cross-section of the scour computed at the bridge is presented in figure 8. 
Scour depths were calculated assuming an infinite depth of erosive material and a homogeneous particle-size distribution. It is generally accepted that the Froehlich equation (abutment scour) gives “excessively conservative estimates of scour depths” (Richardson and others, 1993, p. 48). Many factors, including historical performance during flood events, the geomorphic assessment, scour protection measures, and the results of the hydraulic analyses, must be considered to properly assess the validity of abutment scour results. Therefore, scour depths adopted by VTAOT may differ from the computed values documented herein, based on the consideration of additional contributing factors and experienced engineering judgement.
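The total-scour bookkeeping described above reduces to a simple sum of the three components. A minimal Python sketch (function names and the initial bed elevation are illustrative, not the report's values):

```python
# Sketch of the HEC-18 total-scour accounting described above:
# total scour = long-term degradation + contraction scour + local (abutment) scour.
# The numbers below use the worst-case 500-year-discharge values quoted in the text.

def total_scour(degradation_ft, contraction_ft, local_ft):
    """Sum the three scour components, each in feet."""
    return degradation_ft + contraction_ft + local_ft

def scoured_bed_elevation(streambed_ft, scour_ft):
    """Elevation of the scoured streambed given an initial bed elevation (ft)."""
    return streambed_ft - scour_ft

worst = total_scour(degradation_ft=0.0, contraction_ft=5.1, local_ft=20.3)
print(round(worst, 1))  # 25.4
```

The scoured-streambed elevations tabulated in the report follow the same subtraction from the surveyed bed elevation.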
Blystad, Astrid; Rortveit, Guri; Gjerde, Janne Lillelid; Muleta, Mulu; Moland, Karen Marie
2018-05-01
This formative qualitative follow-up study addresses validity concerns in the Dabat Incontinence and Prolapse (DABINCOP) study, which aimed to determine the prevalence of pelvic floor disorders in north-west Ethiopia. A pilot study using a questionnaire validated by pelvic exam showed severe underreporting of clinically relevant pelvic organ prolapse (POP). The objective of the follow-up study was to explore the reasons behind the underreporting and to gather information to strengthen the sensitivity and local relevance of the questionnaire to be employed in the main study. A qualitative formative study nested within the DABINCOP study was carried out in rural and semiurban communities using an interpretive approach and in-depth qualitative interviews. Women (5) who had not self-reported POP in the pilot but were diagnosed with severe prolapse after pelvic examination, and health-care workers in the research team (7) were interviewed individually within 1 year of the pilot. Systematic text condensation was used in the analysis. The women explained that shame and fear of social exclusion, lack of trust in the study and data collectors, and lack of hope for cure prevented them from disclosing. The health-care workers reported weaknesses in the questionnaire and the research approach. Time pressure and competition among data collectors may have compromised women's motivation to disclose. The study indicates that qualitative research may fruitfully be employed in the formative phase of an epidemiological study on sensitive reproductive health problems to enhance local relevance of the tool and overall validity of the study.
Lucchetti, Giancarlo; Lucchetti, Alessandra Lamas Granero; Vallada, Homero
2013-01-01
Despite numerous spirituality and/or religiosity (S/R) measurement tools for use in research worldwide, there is little information on S/R instruments in the Portuguese language. The aim of the present study was to map out the S/R scales available for research in the Portuguese language. Systematic review of studies found in databases. A systematic review was conducted in three phases. Phases 1 and 2: articles in Portuguese, Spanish and English, published up to November 2011, dealing with the Portuguese translation and/or validation of S/R measurement tools for clinical research, were selected from six databases. Phase 3: the instruments were grouped according to authorship, cross-cultural adaptation, internal consistency, concurrent and discriminative validity and test-retest procedures. Twenty instruments were found. Forty-five percent of these evaluated religiosity, 40% spirituality, 10% religious/spiritual coping and 5% S/R. Among these, 90% had been produced in (n = 3) or translated to (n = 15) Brazilian Portuguese and two (10%) solely to European Portuguese. Nevertheless, the majority of the instruments had not undergone in-depth psychometric analysis. Only 40% of the instruments presented concurrent validity, 45% discriminative validity and 15% a test-retest procedure. The characteristics of each instrument were analyzed separately, yielding advantages, disadvantages and psychometric properties. Currently, 20 instruments for measuring S/R are available in the Portuguese language. Most have been translated (n = 15) or developed (n = 3) in Brazil and present good internal consistency. Nevertheless, few instruments have been assessed regarding all their psychometric qualities.
Integrating Character Education Model With Spiral System In Chemistry Subject
NASA Astrophysics Data System (ADS)
Hartutik; Rusdarti; Sumaryanto; Supartono
2017-04-01
Integrating character education is the responsibility of all subject teachers, including chemistry teachers. In practice, the integration of character education is often only an administrative requirement, so changes in character are not measurable. The research objectives were 1) to describe the actual conditions under which character education is given, 2) to map the integration of character values into the chemistry syllabus with a spiral system, and 3) to produce a syllabus and guide for integrating character education into chemistry lessons with a spiral system. Of the eighteen character values, each is mapped to chemistry concepts in class X and then repeated in classes XI and XII. Spiral-system integration means integrating the character values into the chemistry subject step by step from class X to class XII, repeatedly and at different depth levels. Besides the syllabus, a guide for integrating the characters into learning was also developed. This research was designed as research and development [3] with a scope of 20 chemistry teachers in Semarang. The activities focused on describing current character education practice, mapping the character values in the syllabus, and assessing the guides for integrating character education. The validity of the syllabus and lesson plans was tested by experts in a focus group discussion (FGD). Data were collected with questionnaires and interviews, then processed by descriptive analysis. The results show that 1) under current conditions, teachers generally design a single face-to-face lesson integrating more than four character values, so that behavioural change and depth of character are poorly controlled; 2) the mapping focuses each character value in the syllabus, meaning that one or two basic competences, covered over four or five face-to-face meetings, are sufficient to integrate one character value; in this way, changes in student behaviour are more noticeable. A guide is needed to help teachers integrate character education with the spiral system.
The product syllabus and guidelines were validated by experts: the syllabus averaged 4.37 and the guidebook for integrating character education into chemistry learning averaged 4.36, out of a maximum score of 5, so the products were declared valid. Through focus group discussions, each expert gave input for the improvement of the character education learning modules.
Increased depth-diameter ratios in the Medusae Fossae Formation deposits of Mars
NASA Technical Reports Server (NTRS)
Barlow, N. G.
1993-01-01
Depth-to-diameter ratios for fresh impact craters on Mars are commonly cited as approximately 0.2 for simple craters and 0.1 for complex craters. Recent computation of depth-diameter ratios in the Amazonis-Memnonia region of Mars indicates that craters within the Medusae Fossae Formation deposits found in this region display greater depth-diameter ratios than expected for both simple and complex craters. Photoclinometric and shadow-length techniques have been used to obtain depths of craters within the Amazonis-Memnonia region. Thirty-seven craters in the 2 to 29 km diameter range and displaying fresh impact morphologies were identified in the area of study. This region includes the Amazonian-aged upper and middle members of the Medusae Fossae Formation and Noachian-aged cratered and hilly units. The Medusae Fossae Formation is characterized by extensive, flat to gently undulating deposits of controversial origin. These deposits appear to vary from friable to indurated. Early analysis of crater degradation in the Medusae Fossae region suggested that simple craters excavated to greater depths than expected based on the general depth-diameter relationships derived for Mars. However, too few craters were available in the initial analysis to estimate the actual depth-diameter ratios within this region. Although the analysis is continuing, we are now beginning to see a convergence toward specific values for the depth-diameter ratio depending on geologic unit.
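The shadow-length technique mentioned above recovers rim-to-floor depth by simple trigonometry from the measured shadow and the solar elevation. A minimal sketch with illustrative inputs (the 0.2/0.1 thresholds are the commonly cited Mars values from the abstract, the sample crater is invented):

```python
import math

def depth_from_shadow(shadow_length_km, sun_elevation_deg):
    """Crater depth from a measured shadow length and the solar elevation angle."""
    return shadow_length_km * math.tan(math.radians(sun_elevation_deg))

def depth_diameter_ratio(depth_km, diameter_km):
    """d/D ratio; ~0.2 is typical for fresh simple craters, ~0.1 for complex ones."""
    return depth_km / diameter_km

# Illustrative crater: 1.2 km shadow at 25 degrees solar elevation, 4 km diameter.
d = depth_from_shadow(shadow_length_km=1.2, sun_elevation_deg=25.0)
print(depth_diameter_ratio(d, diameter_km=4.0) > 0.1)  # True
```

Photoclinometry replaces the shadow measurement with slope integration along a brightness profile, but the ratio bookkeeping is the same.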
A Fiber Bragg Grating Sensor for Radial Artery Pulse Waveform Measurement.
Jia, Dagong; Chao, Jing; Li, Shuai; Zhang, Hongxia; Yan, Yingzhan; Liu, Tiegen; Sun, Ye
2018-04-01
In this paper, we report the design and experimental validation of a novel optical sensor for radial artery pulse measurement based on a fiber Bragg grating (FBG) and a lever amplification mechanism. Pulse waveform analysis is a diagnostic tool for clinical examination and disease diagnosis. High-fidelity radial artery pulse waveforms have been investigated in clinical studies for estimating central aortic pressure, which is a proven predictor of cardiovascular disease. As a three-dimensional cylinder, the radial artery needs to be examined from different locations to achieve the optimal pulse waveform for estimation and diagnosis. The proposed optical sensing system features high sensitivity and immunity to electromagnetic interference for multilocation radial artery pulse waveform measurement. The FBG sensor can achieve a sensitivity of 8.236 nm/N, which is comparable to commonly used electrical sensors. This FBG-based system can provide highly accurate measurements, and the key characteristic parameters can then be extracted from the raw signals for clinical applications. The detection performance is validated through experiments guided by physicians. In the experimental validation, we applied this sensor to measure the pulse waveforms at various positions and depths of the radial artery in the wrist according to the diagnostic requirements. The results demonstrate the high feasibility of using optical systems for physiological measurement and of using this FBG sensor for radial artery pulse waveform measurement in clinical applications.
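With the reported sensitivity of 8.236 nm/N, converting a measured Bragg wavelength shift into contact force is a one-line calculation. A hedged sketch (the helper name and the sample shift are illustrative, not taken from the paper):

```python
# Assumed linear FBG response: wavelength shift (nm) = sensitivity (nm/N) * force (N).
# 8.236 nm/N is the sensitivity reported in the abstract; everything else is invented.

SENSITIVITY_NM_PER_N = 8.236

def force_from_shift(delta_lambda_nm, sensitivity=SENSITIVITY_NM_PER_N):
    """Applied force (N) inferred from a Bragg wavelength shift (nm)."""
    return delta_lambda_nm / sensitivity

# A hypothetical 0.412 nm shift corresponds to roughly 0.05 N of contact force.
print(round(force_from_shift(0.412), 3))  # 0.05
```

In practice the raw wavelength signal would also be band-pass filtered before feature extraction, but the force conversion above is the core of the sensitivity claim.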
A sediment resuspension and water quality model of Lake Okeechobee
James, R.T.; Martin, J.; Wool, T.; Wang, P.-F.
1997-01-01
The influence of sediment resuspension on the water quality of shallow lakes is well documented. However, a search of the literature reveals no deterministic mass-balance eutrophication models that explicitly include resuspension. We modified the Lake Okeechobee water quality model - which uses the Water Analysis Simulation Package (WASP) to simulate algal dynamics and phosphorus, nitrogen, and oxygen cycles - to include inorganic suspended solids and algorithms that: (1) define changes in depth with changes in volume; (2) compute sediment resuspension based on bottom shear stress; (3) compute partition coefficients for ammonia and ortho-phosphorus to solids; and (4) relate light attenuation to solids concentrations. The model calibration and validation were successful, with the exception of dissolved inorganic nitrogen species, which did not correspond well to observed data in the validation phase. This could be attributed to an inaccurate formulation of algal nitrogen preference and/or the absence of nitrogen fixation in the model. The model correctly predicted that the lake is light-limited from resuspended solids, and algae are primarily nitrogen limited. The model simulation suggested that biological fluxes greatly exceed external loads of dissolved nutrients, and sediment-water interactions of organic nitrogen and phosphorus far exceed external loads. A sensitivity analysis demonstrated that parameters affecting resuspension, settling, sediment nutrient and solids concentrations, mineralization, algal productivity, and algal stoichiometry are factors requiring further study to improve our understanding of the Lake Okeechobee ecosystem.
NASA Astrophysics Data System (ADS)
Luo, Hong-Wei; Chen, Jie-Jie; Sheng, Guo-Ping; Su, Ji-Hu; Wei, Shi-Qiang; Yu, Han-Qing
2014-11-01
Interactions between metals and activated sludge microorganisms substantially affect the speciation, immobilization, transport, and bioavailability of trace heavy metals in biological wastewater treatment plants. In this study, the interaction of Cu(II), a typical heavy metal, with activated sludge microorganisms was studied in depth using a multi-technique approach. The complexing structure of Cu(II) on the microbial surface was revealed by X-ray absorption fine structure (XAFS) and electron paramagnetic resonance (EPR) analysis. EPR spectra indicated that Cu(II) was held in inner-sphere surface complexes of octahedral coordination with tetragonal distortion of axial elongation. XAFS analysis further suggested that the surface complexation between Cu(II) and microbial cells formed distorted inner-sphere coordinated octahedra containing four short equatorial bonds and two elongated axial bonds. To further validate the results obtained from the XAFS and EPR analysis, density functional theory calculations were carried out to explore the structural geometry of the Cu complexes. These results are useful for better understanding the speciation, immobilization, transport, and bioavailability of metals in biological wastewater treatment plants.
NASA Astrophysics Data System (ADS)
Lei, Chen; Pan, Zhang; Jianxiong, Chen; Tu, Yiliu
2018-04-01
It was experimentally demonstrated that plasma brightness cannot be used as a direct indicator of ablation depth in femtosecond laser processing, which makes depth measurement during machining difficult. Tests of microchannel milling on a silicon wafer were carried out in the micromachining center in order to determine the influence of process parameters on ablation depth. The test results showed that the defocusing distance had no significant impact on ablation depth within the LAV effective range; the reason for this is explained in this paper on the basis of theoretical analysis and simulation. It was then shown that the ablation depth mainly depends on laser fluence, step distance and scanning velocity. Finally, a further study examined the laser parameters related to microchannel ablation depth inside quartz glass, aiming at more efficient and less costly femtosecond laser processing.
Geramizadeh, Maryam; Katoozian, Hamidreza; Amid, Reza; Kadkhodazadeh, Mahdi
2018-04-01
This study aimed to optimize the thread depth and pitch of a recently designed dental implant to provide uniform stress distribution by means of a response surface optimization method available in finite element (FE) software. The sensitivity of simulation to different mechanical parameters was also evaluated. A three-dimensional model of a tapered dental implant with micro-threads in the upper area and V-shaped threads in the rest of the body was modeled and analyzed using finite element analysis (FEA). An axial load of 100 N was applied to the top of the implants. The model was optimized for thread depth and pitch to determine the optimal stress distribution. In this analysis, micro-threads had 0.25 to 0.3 mm depth and 0.27 to 0.33 mm pitch, and V-shaped threads had 0.405 to 0.495 mm depth and 0.66 to 0.8 mm pitch. The optimized depth and pitch were 0.307 and 0.286 mm for micro-threads and 0.405 and 0.808 mm for V-shaped threads, respectively. In this design, the most effective parameters on stress distribution were the depth and pitch of the micro-threads based on sensitivity analysis results. Based on the results of this study, the optimal implant design has micro-threads with 0.307 and 0.286 mm depth and pitch, respectively, in the upper area and V-shaped threads with 0.405 and 0.808 mm depth and pitch in the rest of the body. These results indicate that micro-thread parameters have a greater effect on stress and strain values.
Jensen, E W; Litvan, H; Struys, M; Martinez Vazquez, P
2004-11-01
The objective of this article was to review the present methods used for validating the depth of hypnosis. We introduce three concepts: the real depth of hypnosis (DHreal), the observed depth of hypnosis (DHobs), and the electronic indices of depth of hypnosis (DHel-ind). The DHreal is the real state of hypnosis of the patient at a given moment during general anaesthesia. The DHobs is the subjective assessment of the anaesthesiologist based on clinical signs. The DHel-ind is any estimate of the depth of hypnosis given by an electronic device. Ideally, DHreal, DHobs and DHel-ind would be identical; however, this is rarely the case. The correlation between the DHobs and the DHel-ind can be affected by a number of factors, such as the stimuli used for assessing the level of consciousness or the administration of analgesic agents or neuromuscular blocking agents. Opioids, for example, can block the response to tactile and noxious stimuli, and even the response to verbal command can vanish, making the patient appear to be at a deeper level of hypnosis than the real patient state. The DHel-ind can be disturbed by the presence of facial muscular activity. In conclusion, although several monitors and clinical scoring scales are available to assess the depth of hypnosis during general anaesthesia, care should be taken when interpreting their results.
NASA Astrophysics Data System (ADS)
Rannou, P.; Pommereau, J.-P.; Sarkissian, A.; Foujols, T.
2012-09-01
The optical depth sensor (ODS) is designed to retrieve the optical depth of the dust layer and to characterize high-altitude clouds on Mars. It was initially developed for the Mars 96 mission and was also included in the payload of several other missions. The sensor was finally built and used in a field experiment in Africa in order to validate the concept and test its performance. In this work we present the main principle of the retrieval, the instrumental concept, and the results of the tests performed during the 2004-2005 winter field experiment. The sensor is now included in the DREAM package, which is part of the payload of the EDM on Mars 2016 and is associated with two terrestrial campaigns, one in a tropical environment (Brazil) and one in an arctic environment.
Monteiro-Soares, M; Martins-Mendes, D; Vaz-Carneiro, A; Sampaio, S; Dinis-Ribeiro, M
2014-10-01
We systematically review the available systems used to classify diabetic foot ulcers in order to synthesize their methodological quality issues and their accuracy in predicting lower extremity amputation, as this may represent a critical point in these patients' care. Two investigators searched the EBSCO, ISI, PubMed and SCOPUS databases and independently selected studies published until May 2013 that reported the prognostic accuracy and/or reliability of specific systems for predicting lower extremity amputation in patients with diabetic foot ulcer. We included 25 studies reporting a prevalence of lower extremity amputation between 6% and 78%. Eight different diabetic foot ulcer descriptions and seven prognostic stratification classification systems were addressed, with a variable (1-9) number of factors included, especially peripheral arterial disease (n = 12), infection at the ulcer site (n = 10) or ulcer depth (n = 10). The Meggitt-Wagner, S(AD)SAD and Texas University Classification systems were the most extensively validated, whereas ten classifications were derived or validated only once. Reliability was reported in a single study, and accuracy measures were reported in five studies, with another eight allowing their calculation. Pooled accuracy ranged from 0.65 (for gangrene) to 0.74 (for infection). There are numerous classification systems for diabetic foot ulcer outcome prediction, but only few studies evaluated their reliability or external validity. Studies rarely validated several systems simultaneously and only a few reported accuracy measures. Further studies assessing the reliability and accuracy of the available systems and their composing variables are needed. Copyright © 2014 John Wiley & Sons, Ltd.
Ares I-X Post Flight Ignition Overpressure Review
NASA Technical Reports Server (NTRS)
Alvord, David A.
2010-01-01
Ignition Overpressure (IOP) is an unsteady fluid flow and acoustic phenomenon caused by the rapid expansion of gas from the rocket nozzle within a ducted launching space, resulting in an initially high-amplitude pressure wave. This wave is potentially dangerous to the structural integrity of the vehicle. An in-depth look at the IOP environments resulting from the Ares I-X Solid Rocket Booster configuration showed high correlation between the pre-flight predictions and post-flight analysis results. Correlation between the chamber pressure and IOP transients showed successful acoustic mitigation, containing the strongest IOP waves below the Mobile Launch Pad deck. The flight data allowed subsequent verification and validation of Ares I-X unsteady fluid ducted launcher predictions and computational fluid dynamic models, and showed strong correlation with historical Shuttle data.
Overview of the DAEDALOS project
NASA Astrophysics Data System (ADS)
Bisagni, Chiara
2015-10-01
The "Dynamics in Aircraft Engineering Design and Analysis for Light Optimized Structures" (DAEDALOS) project aimed to develop methods and procedures to determine dynamic loads by considering the effects of dynamic buckling, material damping and mechanical hysteresis during aircraft service. Advanced analysis and design principles were assessed with the aim of partly removing the uncertainty and the conservatism of today's design and certification procedures. To reach these objectives, a DAEDALOS aircraft model representing a mid-size business jet was developed. Analysis and in-depth investigation of the dynamic response were carried out on full finite element models and on hybrid models. Material damping was experimentally evaluated, and different methods for damping evaluation were developed, implemented in finite element codes and experimentally validated. They include a strain energy method, a quasi-linear viscoelastic material model, and a generalized Maxwell viscous material damping model. Panels and shells representative of typical components of the DAEDALOS aircraft model were experimentally tested under static as well as dynamic loads. Composite and metallic components of the aircraft model were investigated to evaluate the benefit in terms of weight saving.
Mao, Zhi-Hua; Yin, Jian-Hua; Zhang, Xue-Xi; Wang, Xiao; Xia, Yang
2016-01-01
Fourier transform infrared spectroscopic imaging (FTIRI) can be used to obtain quantitative information on the content and spatial distribution of the principal components of cartilage when combined with chemometric methods. In this study, FTIRI combined with principal component analysis (PCA) and Fisher's discriminant analysis (FDA) was applied to identify healthy and osteoarthritic (OA) articular cartilage samples. Ten 10-μm thick sections of canine cartilage were imaged at 6.25 μm/pixel in FTIRI. The infrared spectra extracted from the FTIR images were imported into SPSS software for PCA and FDA. Based on the PCA result with 2 principal components, the healthy and OA cartilage samples were effectively discriminated by the FDA, with a high accuracy of 94% for the initial samples (training set) and cross validation, as well as 86.67% for the prediction group. The study showed that cartilage degeneration gradually weakened with increasing depth. FTIRI combined with chemometrics may become an effective method for distinguishing healthy and OA cartilage in the future. PMID:26977354
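The PCA-plus-FDA pipeline described above (dimensionality reduction followed by a two-class Fisher discriminant) can be sketched with numpy alone. The synthetic spectra below stand in for the FTIR data; none of this is the study's code, data, or parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for spectra: 40 "healthy" and 40 "OA" samples, 100 channels,
# with the OA group shifted in mean so the two classes are separable.
healthy = rng.normal(0.0, 1.0, size=(40, 100))
oa = rng.normal(0.0, 1.0, size=(40, 100)) + 1.5
X = np.vstack([healthy, oa])
y = np.array([0] * 40 + [1] * 40)

# PCA via SVD on mean-centered data, keeping 2 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T

# Fisher's discriminant on the PC scores: w = Sw^-1 (m1 - m0),
# classify by projecting onto w and thresholding at the midpoint of the means.
m0, m1 = scores[y == 0].mean(axis=0), scores[y == 1].mean(axis=0)
Sw = np.cov(scores[y == 0].T) + np.cov(scores[y == 1].T)
w = np.linalg.solve(Sw, m1 - m0)
threshold = w @ (m0 + m1) / 2
pred = (scores @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(accuracy > 0.9)  # True
```

The study performed these steps in SPSS with a held-out prediction group; the sketch reports training accuracy only.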
Satellite Based Soil Moisture Product Validation Using NOAA-CREST Ground and L-Band Observations
NASA Astrophysics Data System (ADS)
Norouzi, H.; Campo, C.; Temimi, M.; Lakhankar, T.; Khanbilvardi, R.
2015-12-01
Soil moisture content is among the most important physical parameters in hydrology, climate, and environmental studies. Many microwave-based satellite observations have been utilized to estimate this parameter. The Advanced Microwave Scanning Radiometer 2 (AMSR2) is one of many remote sensors that collect daily information on land surface soil moisture. However, many factors such as ancillary data and vegetation scattering can affect the signal and the estimation. Therefore, this information needs to be validated against "ground-truth" observations. The NOAA Cooperative Remote Sensing Science and Technology (CREST) center at the City University of New York has a site located at Millbrook, NY with several in situ soil moisture probes and an L-band radiometer similar to the Soil Moisture Active Passive (SMAP) one. This site is among the SMAP Cal/Val sites. Soil moisture was measured at seven different locations from 2012 to 2015; Hydra probes are used at six of these locations. This study utilizes the observations from the in situ data and the L-band radiometer close to the ground (at 3 meters height) to validate and compare soil moisture estimates from AMSR2. Analysis of the measurements and AMSR2 indicated a weak correlation with the Hydra probes and a moderate correlation with the Cosmic-ray Soil Moisture Observing System (COSMOS) probes. Several differences, including those between pixel size and point measurements, can cause these discrepancies. Some interpolation techniques are used to expand the point measurements from 6 locations to the AMSR2 footprint. Finally, the effect of penetration depth on the microwave signal and inconsistencies with other ancillary data such as skin temperature are investigated to provide a better understanding of the analysis. The results show that the retrieval algorithm of AMSR2 is appropriate under certain circumstances. This validation algorithm and a similar study will be applied to the SMAP mission.
Keywords: Remote Sensing, Soil Moisture, AMSR2, SMAP, L-Band.
Jaspers, Mariëlle E H; van Haasterecht, Ludo; van Zuijlen, Paul P M; Mokkink, Lidwine B
2018-06-22
Reliable and valid assessment of burn wound depth or healing potential is essential to treatment decision-making, to provide a prognosis, and to compare studies evaluating different treatment modalities. The aim of this review was to critically appraise, compare and summarize the quality of relevant measurement properties of techniques that aim to assess burn wound depth or healing potential. A systematic literature search was performed using PubMed, EMBASE and Cochrane Library. Two reviewers independently evaluated the methodological quality of included articles using an adapted version of the Consensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. A synthesis of evidence was performed to rate the measurement properties for each technique and to draw an overall conclusion on quality of the techniques. Thirty-six articles were included, evaluating various techniques, classified as (1) laser Doppler techniques; (2) thermography or thermal imaging; (3) other measurement techniques. Strong evidence was found for adequate construct validity of laser Doppler imaging (LDI). Moderate evidence was found for adequate construct validity of thermography, videomicroscopy, and spatial frequency domain imaging (SFDI). Only two studies reported on the measurement property reliability. Furthermore, considerable variation was observed among comparator instruments. Considering the evidence available, it appears that LDI is currently the most favorable technique; thereby assessing burn wound healing potential. Additional research is needed into thermography, videomicroscopy, and SFDI to evaluate their full potential. Future studies should focus on reliability and measurement error, and provide a precise description of which construct is aimed to measure. Copyright © 2018 Elsevier Ltd and ISBI. All rights reserved.
Validating a visual version of the metronome response task.
Laflamme, Patrick; Seli, Paul; Smilek, Daniel
2018-02-12
The metronome response task (MRT), a sustained-attention task that requires participants to produce a response in synchrony with an audible metronome, was recently developed to index response variability in the context of studies on mind wandering. In the present studies, we report on the development and validation of a visual version of the MRT (the visual metronome response task; vMRT), which uses the rhythmic presentation of visual, rather than auditory, stimuli. Participants completed the vMRT (Studies 1 and 2) and the original (auditory-based) MRT (Study 2) while also responding to intermittent thought probes asking them to report the depth of their mind wandering. The results showed that (1) individual differences in response variability during the vMRT are highly reliable; (2) prior to thought probes, response variability increases with increasing depth of mind wandering; (3) response variability is highly consistent between the vMRT and the original MRT; and (4) both response variability and depth of mind wandering increase with increasing time on task. Our results indicate that the original MRT findings are consistent across the visual and auditory modalities, and that the response variability measured in both tasks indexes a non-modality-specific tendency toward behavioral variability. The vMRT will be useful in the place of the MRT in experimental contexts in which researchers' designs require a visual-based primary task.
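The dependent measure in both MRT variants is variability in the timing of responses relative to the metronome onsets. A minimal sketch under assumed parameters (the 1300 ms interval and the sample response times are illustrative, not the task's actual settings):

```python
import statistics

def response_variability(onsets_ms, responses_ms):
    """Standard deviation of response-to-onset asynchronies (ms),
    the kind of variability index the MRT/vMRT is built around."""
    asynchronies = [r - o for o, r in zip(onsets_ms, responses_ms)]
    return statistics.stdev(asynchronies)

onsets = [i * 1300 for i in range(1, 6)]                             # assumed metronome
on_task = [o + 10 for o in onsets]                                   # tight synchrony
off_task = [o + d for o, d in zip(onsets, (-80, 120, 5, -60, 150))]  # variable timing
print(response_variability(onsets, on_task) < response_variability(onsets, off_task))  # True
```

Higher variability in the window preceding a thought probe is what the studies relate to reported depth of mind wandering.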
Cecil, Alexander; Ohlsen, Knut; Menzel, Thomas; François, Patrice; Schrenzel, Jacques; Fischer, Adrien; Dörries, Kirsten; Selle, Martina; Lalk, Michael; Hantzschmann, Julia; Dittrich, Marcus; Liang, Chunguang; Bernhardt, Jörg; Ölschläger, Tobias A; Bringmann, Gerhard; Bruhn, Heike; Unger, Matthias; Ponte-Sucre, Alicia; Lehmann, Leane; Dandekar, Thomas
2015-01-01
Isoquinolines (IQs) are natural substances with an antibiotic potential we aim to optimize. Specifically, IQ-238 is a synthetic analog of the novel-type N,C-coupled naphthylisoquinoline (NIQ) alkaloid ancisheynine. Recently, we developed and tested other IQs such as IQ-143. By utilizing genome-wide gene expression data, metabolic network modelling and Voronoi tessellation based data analysis - as well as cytotoxicity measurements, chemical property calculations and principal component analysis of the NIQs - we show that IQ-238 has strong antibiotic potential against staphylococci and low cytotoxicity against murine or human cells. Compared to IQ-143, systemic effects are less pronounced. Most enzyme activity changes due to IQ-238 are located in the carbohydrate metabolism. Validation includes metabolite measurements on biological replicates. IQ-238 delineates key properties and a chemical space for a good therapeutic window. The combination of analysis methods allows suggestions for further lead development and yields an in-depth look at staphylococcal adaptation and network changes after antibiosis. Results are compared to eukaryotic host cells. Copyright © 2014 Elsevier GmbH. All rights reserved.
Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy
NASA Astrophysics Data System (ADS)
Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.
2018-01-01
This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM-format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine, and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank in units of cGy/Monitor Unit, and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to the TPS calculation by gamma analysis using the same criteria. Dose profiles from the IDC calculation in a homogeneous water phantom agree with measurements within 2.3% of the global maximum dose or 1 mm distance-to-agreement for all except the smallest field size. Comparing the film measurement to the calculated dose, 99.9% of all voxels pass gamma analysis; comparing dose calculated by the IDC framework to TPS-calculated dose for the clinical prostate plan shows a 99.0% passing rate. IDC-calculated dose is found to be up to 5.6% lower than dose calculated by the TPS near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
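The 2%/2 mm gamma criterion used above combines a dose-difference tolerance and a distance-to-agreement tolerance into a single pass/fail index per point. A simplified one-dimensional sketch with synthetic profiles (this is an illustration of the criterion, not the framework's implementation):

```python
import numpy as np

def gamma_pass_rate(x_mm, dose_ref, dose_eval, dose_crit=0.02, dist_crit_mm=2.0):
    """Fraction of reference points with gamma <= 1, using global dose
    normalization (dose tolerance as a fraction of the reference maximum)."""
    d_norm = dose_crit * dose_ref.max()
    passed = 0
    for xr, dr in zip(x_mm, dose_ref):
        # gamma at this point: minimum combined dose/distance metric over eval points
        g2 = ((x_mm - xr) / dist_crit_mm) ** 2 + ((dose_eval - dr) / d_norm) ** 2
        if np.sqrt(g2.min()) <= 1.0:
            passed += 1
    return passed / len(dose_ref)

x = np.linspace(-20, 20, 81)   # 0.5 mm grid, synthetic
ref = np.exp(-x**2 / 100.0)    # Gaussian stand-in for a dose profile
eval_ = ref * 1.01             # uniform 1% difference: within 2%/2 mm everywhere
print(gamma_pass_rate(x, ref, eval_))  # 1.0
```

Real 2D/3D gamma analysis interpolates the evaluated distribution and searches a neighborhood around each reference point, but the per-point minimization is the same idea.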
From theory to experimental design-Quantifying a trait-based theory of predator-prey dynamics.
Laubmeier, A N; Wootton, Kate; Banks, J E; Bommarco, Riccardo; Curtsdotter, Alva; Jonsson, Tomas; Roslin, Tomas; Banks, H T
2018-01-01
Successfully applying theoretical models to natural communities and predicting ecosystem behavior under changing conditions is the backbone of predictive ecology. However, the experiments required to test these models are dictated by practical constraints, and models are often opportunistically validated against data for which they were never intended. Alternatively, we can inform and improve experimental design by an in-depth pre-experimental analysis of the model, generating experiments better targeted at testing the validity of a theory. Here, we describe this process for a specific experiment. Starting from food web ecological theory, we formulate a model and design an experiment to optimally test the validity of the theory, supplementing traditional design considerations with model analysis. The experiment itself will be run and described in a separate paper. The theory we test is that trophic population dynamics are dictated by species traits, and we study this in a community of terrestrial arthropods. We depart from the Allometric Trophic Network (ATN) model and hypothesize that including habitat use, in addition to body mass, is necessary to better model trophic interactions. We therefore formulate new terms which account for micro-habitat use as well as intra- and interspecific interference in the ATN model. We design an experiment and an effective sampling regime to test this model and the underlying assumptions about the traits dominating trophic interactions. We arrive at a detailed sampling protocol to maximize information content in the empirical data obtained from the experiment and, relying on theoretical analysis of the proposed model, explore potential shortcomings of our design. 
Consequently, since this is a "pre-experimental" exercise aimed at improving the links between hypothesis formulation, model construction, experimental design and data collection, we hasten to publish our findings before analyzing data from the actual experiment, thus setting the stage for strong inference.
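The core of the ATN family of models is allometric scaling: metabolic and feeding rates scale with body mass as M^-0.25. A minimal two-species (resource/consumer) sketch in Python; all parameter values and the functional-response form here are illustrative assumptions, not the authors' extended model with habitat-use and interference terms:

```python
import numpy as np

def atn_step(B, M, dt=0.01, r=1.0, K=1.0, y=8.0, B0=0.5, e=0.85):
    """One Euler step of a minimal two-species Allometric Trophic Network
    (resource B[0], consumer B[1]); parameter values are illustrative only.
    Mass-specific metabolism x scales allometrically as M**-0.25."""
    x = 0.314 * M[1] ** -0.25                   # consumer metabolic rate
    F = B[0] ** 2 / (B0 ** 2 + B[0] ** 2)       # Type-III functional response
    dB0 = r * B[0] * (1 - B[0] / K) - x * y * F * B[1] / e
    dB1 = -x * B[1] + x * y * F * B[1]
    return np.array([B[0] + dt * dB0, B[1] + dt * dB1])
```

Habitat-use terms of the kind proposed in the paper would enter by modulating the functional response F for species pairs that rarely co-occur in the same micro-habitat.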
NASA Astrophysics Data System (ADS)
Tang, S.; Dong, L.; Lu, P.; Zhou, K.; Wang, F.; Han, S.; Min, M.; Chen, L.; Xu, N.; Chen, J.; Zhao, P.; Li, B.; Wang, Y.
2016-12-01
Due to the lack of observing data matching the satellite pixel size, the inversion accuracy of satellite products over the Tibetan Plateau (TP) is difficult to evaluate. Hence, in situ observations are necessary to support calibration and validation activities. Under the support of the Third Tibetan Plateau Atmospheric Scientific Experiment (TIPEX-III) project, a multi-scale automatic observatory of soil moisture and temperature serving satellite product validation (TIPEX-III-SMTN) was established on the Tibetan Plateau. The observatory consists of two regional-scale networks, the Naqu network and the Geji network. The Naqu network is located in the north of the TP and is characterized by alpine grasslands; the Geji network is located in the west of the TP and is characterized by marshes. The Naqu network includes 33 stations, deployed in a 75 km × 75 km region according to a pre-designed pattern. At each station, soil moisture and temperature are measured by five sensors at five soil depths. One sensor is vertically inserted into the 0-2 cm depth to measure the averaged near-surface soil moisture and temperature; the other four sensors are horizontally inserted at 5, 10, 20, and 30 cm depths, respectively. The data are recorded every 10 minutes. A wireless transmission system transmits the data in real time, and a dual power supply system maintains the continuity of the observations. Construction of the Naqu network was completed in August 2015, and the Geji network will be established before October 2016. Observations acquired from TIPEX-III-SMTN can be used to validate satellite products at different spatial resolutions, and the network also complements existing networks in this area, such as CTP-SMTMN (the multiscale Soil Moisture and Temperature Monitoring Network on the central TP).
Keywords: multi-scale soil moisture, soil temperature, Tibetan Plateau. Acknowledgments: This work was jointly supported by the CMA Special Fund for Scientific Research in the Public Interest (Grant Nos. GYHY201406001, GYHY201206008-01) and the Climate Change Special Fund (QHBH2014).
NASA Astrophysics Data System (ADS)
Olurin, Oluwaseun T.; Ganiyu, Saheed A.; Hammed, Olaide S.; Aluko, Taiwo J.
2016-10-01
This study presents the results of spectral analysis of magnetic data over the Abeokuta area, Southwestern Nigeria, using the fast Fourier transform (FFT) in Microsoft Excel. The study deals with the quantitative interpretation of airborne magnetic data (Sheet No. 260) acquired by the Nigerian Geological Survey Agency in 2009. In order to minimise aliasing error, the aeromagnetic data were gridded at a spacing of 1 km. The spectral analysis technique was used to estimate depths to the magnetic basement, and the interpretation shows that the magnetic sources are mainly distributed at two levels. The shallow sources (minimum depth) range from 0.103 to 0.278 km below ground level and are inferred to be due to intrusions within the region. The deeper sources (maximum depth) range from 2.739 to 3.325 km below ground and are attributed to the underlying basement.
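The spectral depth method fits the decay of the log power spectrum, following the Spector-Grant relation that a source ensemble at mean depth h produces ln P(k) ≈ const − 2hk. A minimal 1D Python sketch of the principle (the study itself performed the FFT in Microsoft Excel; the fitting band and function name are assumptions):

```python
import numpy as np

def basement_depth(profile, dx_km):
    """Estimate mean magnetic source depth (km) from a 1D anomaly profile
    via the Spector-Grant relation ln P(k) ~ const - 2*h*k (k in rad/km)."""
    n = len(profile)
    spec = np.abs(np.fft.rfft(profile - np.mean(profile))) ** 2
    k = 2 * np.pi * np.fft.rfftfreq(n, d=dx_km)   # angular wavenumber, rad/km
    sel = slice(1, n // 4)                        # low-wavenumber band, skip k=0
    slope, _ = np.polyfit(k[sel], np.log(spec[sel]), 1)
    return -slope / 2.0
```

Two source levels, as reported above, show up as two linear segments of different slope in the log-spectrum, fitted separately over low- and high-wavenumber bands.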
Murphy, Kerry; O'Connor, Denise A; Browning, Colette J; French, Simon D; Michie, Susan; Francis, Jill J; Russell, Grant M; Workman, Barbara; Flicker, Leon; Eccles, Martin P; Green, Sally E
2014-03-03
Dementia is a growing problem, causing substantial burden for patients, their families, and society. General practitioners (GPs) play an important role in diagnosing and managing dementia; however, there are gaps between recommended and current practice. The aim of this study was to explore GPs' reported practice in diagnosing and managing dementia and to describe, in theoretical terms, the proposed explanations for practice that was and was not consistent with evidence-based guidelines. Semi-structured interviews were conducted with GPs in Victoria, Australia. The Theoretical Domains Framework (TDF) guided data collection and analysis. Interviews explored the factors hindering and enabling achievement of 13 recommended behaviours. Data were analysed using content and thematic analysis. This paper presents an in-depth description of the factors influencing two behaviours, assessing co-morbid depression using a validated tool, and conducting a formal cognitive assessment using a validated scale. A total of 30 GPs were interviewed. Most GPs reported that they did not assess for co-morbid depression using a validated tool as per recommended guidance. Barriers included the belief that depression can be adequately assessed using general clinical indicators and that validated tools provide little additional information (theoretical domain of 'Beliefs about consequences'); discomfort in using validated tools ('Emotion'), possibly due to limited training and confidence ('Skills'; 'Beliefs about capabilities'); limited awareness of the need for, and forgetting to conduct, a depression assessment ('Knowledge'; 'Memory, attention and decision processes'). Most reported practising in a manner consistent with the recommendation that a formal cognitive assessment using a validated scale be undertaken. 
Key factors enabling this were having an awareness of the need to conduct a cognitive assessment ('Knowledge'); possessing the necessary skills and confidence ('Skills'; 'Beliefs about capabilities'); and having adequate time and resources ('Environmental context and resources'). This is the first study to our knowledge to use a theoretical approach to investigate the barriers and enablers to guideline-recommended diagnosis and management of dementia in general practice. It has identified key factors likely to explain GPs' uptake of the guidelines. The results have informed the design of an intervention aimed at supporting practice change in line with dementia guidelines, which is currently being evaluated in a cluster randomised trial.
NASA Astrophysics Data System (ADS)
Buchard, V.; da Silva, A. M.; Colarco, P. R.; Darmenov, A.; Randles, C. A.; Govindaraju, R.; Torres, O.; Campbell, J.; Spurr, R.
2014-12-01
A radiative transfer interface has been developed to simulate the UV Aerosol Index (AI) from the NASA Goddard Earth Observing System version 5 (GEOS-5) aerosol assimilated fields. The purpose of this work is to use the AI and Aerosol Absorption Optical Depth (AAOD) derived from the Ozone Monitoring Instrument (OMI) measurements as independent validation for the Modern Era Retrospective analysis for Research and Applications Aerosol Reanalysis (MERRAero). MERRAero is based on a version of the GEOS-5 model that is radiatively coupled to the Goddard Chemistry, Aerosol, Radiation, and Transport (GOCART) aerosol module and includes assimilation of Aerosol Optical Depth (AOD) from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. Since AI is dependent on aerosol concentration, optical properties and altitude of the aerosol layer, we make use of complementary observations to fully diagnose the model, including AOD from the Multi-angle Imaging SpectroRadiometer (MISR), aerosol retrievals from the Aerosol Robotic Network (AERONET) and attenuated backscatter coefficients from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) mission to ascertain potential misplacement of plume height by the model. By sampling dust, biomass burning and pollution events in 2007 we have compared model produced AI and AAOD with the corresponding OMI products, identifying regions where the model representation of absorbing aerosols was deficient. As a result of this study over the Saharan dust region, we have obtained a new set of dust aerosol optical properties that retains consistency with the MODIS AOD data that were assimilated, while resulting in better agreement with aerosol absorption measurements from OMI. The analysis conducted over the South African and South American biomass burning regions indicates that revising the spectrally-dependent aerosol absorption properties in the near-UV region improves the modeled-observed AI comparisons. 
Finally, during a period where the Asian region was mainly dominated by anthropogenic aerosols, we have performed a qualitative analysis in which the specification of anthropogenic emissions in GEOS-5 is adjusted to provide insight into discrepancies observed in AI comparisons.
NASA Astrophysics Data System (ADS)
Buchard, V.; da Silva, A. M.; Colarco, P. R.; Darmenov, A.; Randles, C. A.; Govindaraju, R.; Torres, O.; Campbell, J.; Spurr, R.
2015-05-01
A radiative transfer interface has been developed to simulate the UV aerosol index (AI) from the NASA Goddard Earth Observing System version 5 (GEOS-5) aerosol assimilated fields. The purpose of this work is to use the AI and aerosol absorption optical depth (AAOD) derived from the Ozone Monitoring Instrument (OMI) measurements as independent validation for the Modern Era Retrospective analysis for Research and Applications Aerosol Reanalysis (MERRAero). MERRAero is based on a version of the GEOS-5 model that is radiatively coupled to the Goddard Chemistry, Aerosol, Radiation, and Transport (GOCART) aerosol module and includes assimilation of aerosol optical depth (AOD) from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. Since AI is dependent on aerosol concentration, optical properties and altitude of the aerosol layer, we make use of complementary observations to fully diagnose the model, including AOD from the Multi-angle Imaging SpectroRadiometer (MISR), aerosol retrievals from the AErosol RObotic NETwork (AERONET) and attenuated backscatter coefficients from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) mission to ascertain potential misplacement of plume height by the model. By sampling dust, biomass burning and pollution events in 2007 we have compared model-produced AI and AAOD with the corresponding OMI products, identifying regions where the model representation of absorbing aerosols was deficient. As a result of this study over the Saharan dust region, we have obtained a new set of dust aerosol optical properties that retains consistency with the MODIS AOD data that were assimilated, while resulting in better agreement with aerosol absorption measurements from OMI. The analysis conducted over the southern African and South American biomass burning regions indicates that revising the spectrally dependent aerosol absorption properties in the near-UV region improves the modeled-observed AI comparisons. 
Finally, during a period where the Asian region was mainly dominated by anthropogenic aerosols, we have performed a qualitative analysis in which the specification of anthropogenic emissions in GEOS-5 is adjusted to provide insight into discrepancies observed in AI comparisons.
Sensitivity analysis of urban flood flows to hydraulic controls
NASA Astrophysics Data System (ADS)
Chen, Shangzhi; Garambois, Pierre-André; Finaud-Guyot, Pascal; Dellinger, Guilhem; Terfous, Abdelali; Ghenaim, Abdallah
2017-04-01
Flooding represents one of the most significant natural hazards on every continent, particularly in highly populated areas. Improving the accuracy and robustness of prediction systems has become a priority. However, in situ measurements of floods remain difficult, while a better understanding of the spatiotemporal dynamics of flood flows, along with datasets for model validation, appears essential. The present contribution is based on a unique experimental device at 1/200 scale, able to produce urban flooding with flood flows corresponding to frequent to rare return periods. The influence of 1D Saint-Venant and 2D shallow-water model input parameters on simulated flows is assessed using global sensitivity analysis (GSA). The tested parameters are global and local boundary conditions (water heights and discharge) and spatially uniform or distributed friction coefficients and/or porosity, each varied within ranges centered on nominal values calibrated against accurate experimental data and their related uncertainties. For various experimental configurations, a variance decomposition method (ANOVA) is used to calculate spatially distributed Sobol' sensitivity indices (Si). The sensitivity of water depth to input parameters along two main streets of the experimental device is presented here. Results show that the closer to the downstream boundary condition on water height, the higher the Sobol' index, as predicted by hydraulic theory for subcritical flow, while interestingly the sensitivity to friction decreases. The sensitivity indices of all lateral inflows, representing crossroads in 1D, are also quantified, along with their asymptotic trends along the flow distance. The relationship between lateral discharge magnitude and the resulting sensitivity index of water depth is investigated.
Concerning simulations with distributed friction coefficients, crossroad friction is shown to have a much greater influence on the upstream water depth profile than street friction coefficients. This methodology could be applied to any urban flood configuration in order to better understand flow dynamics and distribution, but also to guide model calibration in the light of flow controls.
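First-order Sobol' indices of the kind computed above can be estimated generically by Monte Carlo. A hedged Python sketch of a Saltelli-style estimator, not the study's ANOVA code; `model` and `bounds` are placeholders for a hydraulic model and its parameter ranges:

```python
import numpy as np

def first_order_sobol(model, bounds, n=2 ** 12, seed=0):
    """Monte Carlo estimate of first-order Sobol' indices (Saltelli scheme).
    model: f(X) vectorized over rows of X; bounds: (lo, hi) per input."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo, hi = np.array(bounds).T
    A = rng.uniform(lo, hi, (n, d))
    B = rng.uniform(lo, hi, (n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # resample only input i
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S
```

Each index Si is the fraction of output variance attributable to input i alone; computing it at every grid cell of simulated water depth yields the spatially distributed maps described in the abstract.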
Sensitivity analysis of non-cohesive sediment transport formulae
NASA Astrophysics Data System (ADS)
Pinto, Lígia; Fortunato, André B.; Freire, Paula
2006-10-01
Sand transport models are often based on semi-empirical equilibrium transport formulae that relate sediment fluxes to physical properties such as velocity, depth and characteristic sediment grain sizes. In engineering applications, errors in these physical properties affect the accuracy of the sediment fluxes. The present analysis quantifies error propagation from the input physical properties to the sediment fluxes, determines which ones control the final errors, and provides insight into the relative strengths, weaknesses and limitations of four total load formulae (Ackers and White, Engelund and Hansen, van Rijn, and Karim and Kennedy) and one bed load formulation (van Rijn). The various sources of uncertainty are first investigated individually, in order to pinpoint the key physical properties that control the errors. Since the strong non-linearity of most sand transport formulae precludes analytical approaches, a Monte Carlo method is validated and used in the analysis. Results show that the accuracy in total sediment transport evaluations is mainly determined by errors in the current velocity and in the sediment median grain size. For the bed load transport using the van Rijn formula, errors in the current velocity alone control the final accuracy. In a final set of tests, all physical properties are allowed to vary simultaneously in order to analyze the combined effect of errors. The combined effect of errors in all the physical properties is then compared to an estimate of the errors due to the intrinsic limitations of the formulae. Results show that errors in the physical properties can be dominant for typical uncertainties associated with these properties, particularly for small depths. A comparison between the various formulae reveals that the van Rijn formula is more sensitive to basic physical properties. Hence, it should only be used when physical properties are known with precision.
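The Monte Carlo propagation idea is simple to sketch: sample the input physical properties from their error distributions, push each sample through the transport formula, and read the spread of the predicted flux. The power-law flux below is an illustrative stand-in for the formulae tested, not one of the four themselves, and all parameter values are assumptions:

```python
import numpy as np

def mc_flux_error(u_mean, d50_mean, u_cv=0.05, d_cv=0.1, n=100_000, seed=1):
    """Propagate input uncertainty through a transport formula by Monte Carlo.
    q = a*u**5/d50 is an illustrative stand-in (a = 1); returns the
    coefficient of variation of the predicted sediment flux."""
    rng = np.random.default_rng(seed)
    u = rng.normal(u_mean, u_cv * u_mean, n)       # velocity samples
    d = rng.normal(d50_mean, d_cv * d50_mean, n)   # median grain-size samples
    q = 1.0 * u ** 5 / d
    return q.std() / q.mean()
```

With a fifth-power velocity dependence, even a 5% velocity error dominates a 10% grain-size error, which is consistent with the abstract's finding that current velocity controls the final accuracy.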
Injury patterns of soldiers in the second Lebanon war.
Schwartz, Dagan; Glassberg, Elon; Nadler, Roy; Hirschhorn, Gil; Marom, Ophir Cohen; Aharonson-Daniel, Limor
2014-01-01
In the second Lebanon war in 2006, the Israeli Defense Forces fought against well-prepared and well-equipped paramilitary forces. The conflict took place near the Israeli border and major Israeli medical centers. Good data records were maintained throughout the campaign, allowing accurate analysis of injury characteristics. This study is an in-depth analysis of injury mechanisms, severity, and anatomic locations. Data regarding all injured soldiers were collected from all care points up to the definitive care hospitals and were cross-referenced. In addition, trauma branch physicians and nurses interviewed medical teams to validate data accuracy. Injuries were analyzed using the Injury Severity Score (ISS) when precise anatomic data were available, and multiple-injury pattern scoring for all. A total of 833 soldiers sustained combat-related injury during the study period, including 119 fatalities (14.3%). Although most soldiers (361) sustained injury to only one Abbreviated Injury Scale (AIS) region, the average number of regions per soldier was 2.0 overall: 1.5 for survivors versus 4.2 for fatalities. Current war injury classifications have limitations that hinder valid comparisons between campaigns and settings. In addition, limitations on full autopsy of war fatalities further hinder data use. To partly compensate for those limitations, we looked at the correlation between fatality rates and the number of involved anatomic regions and found it to be strong. We also found high fatality rates in some "combined" injuries, such as injuries to the head and chest (71%) or to the abdomen and an extremity (75%). The use of multi-injury pattern analysis may help explain fatality rates and improve the utility of war injury analysis. Epidemiologic study, level III.
Stereo depth distortions in teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Vonsydow, Marika
1988-01-01
In teleoperation, a typical application of stereo vision is to view a work space located short distances (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors on the order of 2 cm are measured. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing. The results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration which gives high stereo depth resolution while minimizing stereo depth distortion. It is found that, for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolution but cause greater depth distortion. Thus, with larger intercamera distances, operators will make greater depth errors (because of the greater distortions) but will be more certain that they are not errors (because of the higher resolution).
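The baseline/resolution trade-off follows directly from triangulation. A one-line sketch in the parallel-camera approximation (z = f·b/d), which is only indicative here since the paper treats converged cameras; the half-pixel disparity error is an assumed value:

```python
def depth_resolution(z_m, baseline_m, focal_px, disparity_err_px=0.5):
    """Smallest resolvable depth change at range z for a stereo pair.
    From z = f*b/d (parallel cameras): dz = z**2 * dd / (f * b),
    so resolution degrades with range squared and improves with baseline."""
    return z_m ** 2 * disparity_err_px / (focal_px * baseline_m)
```

Doubling the baseline halves the resolvable depth increment, which is the "higher resolution" side of the trade-off; the distortion cost of convergence is what the geometric analysis in the paper quantifies.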
NASA Technical Reports Server (NTRS)
Foore, Larry; Ida, Nathan
2007-01-01
This study introduces the use of a modified Longley-Rice irregular terrain model and digital elevation data representative of an analogue lunar site for the prediction of RF path loss over the lunar surface. The results are validated by theoretical models and past Apollo studies. The model is used to approximate the path loss deviation from theoretical attenuation over a reflecting sphere. Analysis of the simulation results provides statistics on the fade depths for frequencies of interest, and correspondingly a method for determining the maximum range of communications for various coverage confidence intervals. Communication system engineers and mission planners are provided a link margin and path loss policy for communication frequencies of interest.
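The link-margin and maximum-range logic can be illustrated with a free-space baseline. This is a textbook sketch only; the study's point is precisely that a modified Longley-Rice terrain model departs from this idealization, and the fade margin value here is an assumed example:

```python
import math

def max_range_km(tx_dbm, rx_sens_dbm, freq_mhz, fade_margin_db=10.0):
    """Maximum link range under free-space path loss plus a fade margin.
    FSPL(dB) = 32.45 + 20*log10(f_MHz) + 20*log10(d_km)
    (the constant is sometimes quoted as 32.44)."""
    budget_db = tx_dbm - rx_sens_dbm - fade_margin_db
    fspl_const = 32.45 + 20 * math.log10(freq_mhz)
    return 10 ** ((budget_db - fspl_const) / 20)
```

Raising the fade margin to match a chosen coverage confidence interval, as the abstract describes, directly shrinks the maximum range through the same budget equation.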
Evaluation of various modelling approaches in flood routing simulation and flood area mapping
NASA Astrophysics Data System (ADS)
Papaioannou, George; Loukas, Athanasios; Vasiliades, Lampros; Aronica, Giuseppe
2016-04-01
An essential process of flood hazard analysis and mapping is floodplain modelling. The selection of the modelling approach, especially in complex riverine topographies such as urban and suburban areas and in ungauged watersheds, may affect the accuracy of the outcomes in terms of flood depths and flood inundation area. In this study, a sensitivity analysis was implemented using several hydraulic-hydrodynamic modelling approaches (1D, 2D, 1D/2D), and the effect of the modelling approach on flood modelling and flood mapping was investigated. The digital terrain model (DTM) used in this study was generated from Terrestrial Laser Scanning (TLS) point cloud data. The modelling approaches included 1-dimensional hydraulic-hydrodynamic models (1D), 2-dimensional hydraulic-hydrodynamic models (2D) and coupled 1D/2D models. The 1D hydraulic-hydrodynamic models used were HECRAS, MIKE11, LISFLOOD and XPSTORM. The 2D hydraulic-hydrodynamic models used were MIKE21, MIKE21FM, HECRAS (2D), XPSTORM, LISFLOOD and FLO2d. The coupled 1D/2D models employed were HECRAS (1D/2D), MIKE11/MIKE21 (MIKE FLOOD platform), MIKE11/MIKE21 FM (MIKE FLOOD platform) and XPSTORM (1D/2D). The validation of flood extent was achieved with the use of 2x2 contingency tables between simulated and observed flooded areas for an extreme historical flash flood event, using the Critical Success Index skill score. The modelling approaches were also evaluated for simulation time and required computing power. The methodology was implemented in a suburban ungauged watershed of the Xerias river at Volos, Greece. The results of the analysis indicate the necessity of applying sensitivity analysis with different hydraulic-hydrodynamic modelling approaches, especially for areas with complex terrain.
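The Critical Success Index used for flood-extent validation reduces to a 2x2 contingency table over the cells of the simulated and observed inundation maps. A minimal sketch, assuming binary (wet/dry) per-cell maps:

```python
def critical_success_index(sim, obs):
    """Critical Success Index (threat score) from binary flood-extent maps:
    CSI = hits / (hits + misses + false alarms), ignoring correct negatives.
    sim, obs: equal-length iterables of 0/1 cell states."""
    hits = misses = false_alarms = 0
    for s, o in zip(sim, obs):
        if s and o:
            hits += 1                # flooded in both
        elif o and not s:
            misses += 1              # observed flood not simulated
        elif s and not o:
            false_alarms += 1        # simulated flood not observed
    return hits / (hits + misses + false_alarms)
```

Because correct dry cells are excluded, CSI is not inflated by the large unflooded portion of the domain, which makes it a common choice for comparing modelling approaches as done here.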
Predicting active-layer soil thickness using topographic variables at a small watershed scale
Li, Aidi; Tan, Xing; Wu, Wei; Liu, Hongbin; Zhu, Jie
2017-01-01
Knowledge about the spatial distribution of active-layer (AL) soil thickness is indispensable for ecological modeling, precision agriculture, and land resource management. However, it is difficult to obtain details on AL soil thickness using conventional soil survey methods. The objective of this research is to investigate the possibility and accuracy of mapping the spatial distribution of AL soil thickness with a random forest (RF) model using terrain variables at a small watershed scale. A total of 1113 soil samples collected from slope fields were randomly divided into calibration (770 samples) and validation (343 samples) sets. Seven terrain variables, including elevation, aspect, relative slope position, valley depth, flow path length, slope height, and topographic wetness index, were derived from a digital elevation model (30 m). The RF model was compared with multiple linear regression (MLR), geographically weighted regression (GWR) and support vector machine (SVM) approaches based on the validation set. Model performance was evaluated by the precision criteria of mean error (ME), mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2). Comparative results showed that RF outperformed the MLR, GWR and SVM models, giving better values of ME (0.39 cm), MAE (7.09 cm) and RMSE (10.85 cm) and a higher R2 (62%). The sensitivity analysis demonstrated that the DEM had less uncertainty than the AL soil thickness. The outcome of the RF model indicated that elevation, flow path length and valley depth were the most important factors affecting AL soil thickness variability across the watershed. These results demonstrate that the RF model is a promising method for predicting the spatial distribution of AL soil thickness from terrain parameters. PMID:28877196
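The validation criteria quoted above (ME, MAE, RMSE, R2) are straightforward to compute on a held-out set. A small Python sketch of just these metrics (not the study's code; the function name is an assumption):

```python
import numpy as np

def regression_scores(y_obs, y_pred):
    """Validation criteria used above: mean error (ME), mean absolute
    error (MAE), root mean square error (RMSE) and R-squared."""
    y_obs = np.asarray(y_obs, float)
    y_pred = np.asarray(y_pred, float)
    err = y_pred - y_obs
    ss_res = (err ** 2).sum()
    ss_tot = ((y_obs - y_obs.mean()) ** 2).sum()
    return {"ME": err.mean(),
            "MAE": np.abs(err).mean(),
            "RMSE": np.sqrt((err ** 2).mean()),
            "R2": 1 - ss_res / ss_tot}
```

ME exposes systematic bias that MAE and RMSE hide, which is why all four are reported together when comparing RF against MLR, GWR and SVM.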
NASA Astrophysics Data System (ADS)
Amadori, Chiara; Toscani, Giovanni; Ghielmi, Manlio; Maesano, Francesco Emanuele; D'Ambrogi, Chiara; Lombardi, Stefano; Milanesi, Riccardo; Panara, Yuri; Di Giulio, Andrea
2017-04-01
The Pliocene-Pleistocene tectonic and sedimentary evolution of the eastern Po Plain and northern Adriatic Foreland Basin (PPAF, ca. 35,000 km2) was the consequence of severe Northern Apennine compressional activity and climate-driven eustatic changes. Following the 2D seismic interpretation, facies analysis and sequence stratigraphy approach of Ghielmi et al. (2013, and references therein), these tectono-eustatic phases generated six basin-scale unconformities, referred to as Base Pliocene (PL1), Intra-Zanclean (PL2), Intra-Piacenzian (PL3), Gelasian (PL4), Base Calabrian (PS1) and Late Calabrian (PS2). We present a basin-wide detailed 3D model of the PPAF region, derived from the interpretation of these unconformities in a dense network of seismic lines (ca. 6,000 km) correlated with more than 200 well stratigraphies (courtesy of ENI E&P). The initial 3D time model was time-to-depth converted using a 3D velocity model created with Vel-IO 3D, a tool for 3D depth conversion, and was then validated and integrated with depth-domain data from the literature and well logs. The resultant isobath and isopach maps allow the basin's palaeogeographic evolution to be inspected step by step; it occurred through alternating stages of simple and fragmented foredeeps. Changes in basin geometry through time, from the inner sector located in the Emilia-Romagna Apennines to the outermost region (Veneto and the northern Adriatic Sea), were marked by repeated phases of outward migration of two large, deep depocenters located in front of the Emilia arcs to the west and in front of the Ferrara-Romagna thrusts to the east. During the late Pliocene-early Pleistocene, the inner side of the Emilia-Romagna arcs evolved into an elongated deep thrust-top basin due to strong foredeep fragmentation. Moreover, overall tectono-stratigraphic analysis shows a decreasing trend in the tectonic intensity of the Northern Apennines from the Pleistocene to the present.
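The time-to-depth conversion step can be illustrated in 1D: interval velocities turn two-way travel times into depths layer by layer. Vel-IO 3D performs this in 3D with a full velocity model, so this Python sketch shows only the underlying principle, with assumed function and variable names:

```python
import numpy as np

def time_to_depth(twt_s, v_int_ms):
    """Convert two-way travel times (s) at layer bases to depths (m)
    using one interval velocity (m/s) per layer: dz = v * dt / 2
    (the factor 2 accounts for the two-way path)."""
    twt = np.asarray(twt_s, float)
    v = np.asarray(v_int_ms, float)
    dt = np.diff(twt, prepend=0.0)     # two-way time spent in each layer
    return np.cumsum(v * dt / 2.0)
```

Depths obtained this way are then checked against well logs and published depth-domain data, as the validation step in the abstract describes.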
Time-of-flight camera via a single-pixel correlation image sensor
NASA Astrophysics Data System (ADS)
Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua
2018-04-01
A time-of-flight imager based on a single-pixel correlation image sensor is proposed for noise-free depth map acquisition in the presence of ambient light. A digital micro-mirror device and a time-modulated IR laser provide spatial and temporal illumination of the unknown object. Compressed sensing and the ‘four bucket principle’ method are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. A second-order correlation transform is also introduced to reduce the noise from the detector itself and from direct ambient light. Computer simulations are presented to validate the computational models and the improvement in the reconstructions.
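The ‘four bucket principle’ recovers phase, and hence depth, from four correlation samples taken at 0°, 90°, 180° and 270° demodulation offsets. A textbook sketch of the principle named in the abstract (the 20 MHz modulation frequency is an assumed example value, not from the paper):

```python
import math

def four_bucket_depth(c0, c90, c180, c270, f_mod_hz=20e6):
    """Depth from the four correlation samples of a ToF pixel.
    With c(tau) = A*cos(phi - tau) + B sampled at 0/90/180/270 degrees:
    phi = atan2(c90 - c270, c0 - c180), depth = c * phi / (4*pi*f_mod).
    Differencing opposite buckets cancels the ambient offset B."""
    phi = math.atan2(c90 - c270, c0 - c180) % (2 * math.pi)
    return 299_792_458.0 * phi / (4 * math.pi * f_mod_hz)
```

The differencing of opposite buckets is what gives the scheme its baseline ambient-light rejection; the second-order correlation transform in the paper targets the residual noise this leaves behind.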
NASA Technical Reports Server (NTRS)
Russell, P.; Livingston, J.; Schmid, B.; Eilers, J.; Kolyer, R.; Redemann, J.; Ramirez, S.; Yee, J-H.; Swartz, W.; Shetter, R.
2004-01-01
The 14-channel NASA Ames Airborne Tracking Sunphotometer (AATS-14) measured solar-beam transmission on the NASA DC-8 during the Second SAGE III Ozone Loss and Validation Experiment (SOLVE II). This paper presents AATS-14 results for multiwavelength aerosol optical depth (AOD), including its spatial structure and comparisons to results from two satellite sensors and another DC-8 instrument. These are the Stratospheric Aerosol and Gas Experiment III (SAGE III), the Polar Ozone and Aerosol Measurement III (POAM III) and the Direct beam Irradiance Airborne Spectrometer (DIAS).
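Sun-photometer transmission measurements yield optical depth through the Beer-Lambert law. A minimal sketch of the inversion (the Rayleigh and trace-gas subtraction needed to isolate the aerosol component, and AATS-14's calibration specifics, are omitted):

```python
import math

def optical_depth(transmission, airmass):
    """Slant-path Beer-Lambert inversion used in Sun photometry:
    T = exp(-m * tau)  =>  tau = -ln(T) / m  (vertical optical depth,
    m = airmass factor along the line of sight to the Sun)."""
    return -math.log(transmission) / airmass
```

Applying this per channel across the instrument's 14 wavelengths gives the multiwavelength AOD spectra compared here against SAGE III, POAM III and DIAS.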
Adamson, Joy; Gooberman-Hill, Rachael; Woolhead, Gillian; Donovan, Jenny
2004-07-01
Multi-method approaches are increasingly advocated in health services research (HSR). This paper examines the use of standardised self-completion questionnaires and questions during in-depth interviews, a technique termed 'questerviews'. 'Questerview' techniques were used in four empirical studies of health perceptions conducted by the authors. The studies included both standardised self-completion questions or questionnaires and in-depth interviews. Respondents were tape-recorded while they completed the standardised questionnaires and were encouraged to discuss their definitions of terms and responses to items in depth. In all studies, 'questerviews' were fully transcribed and data analysis involved scrutinising transcripts to identify emergent themes. Responses to the standardised items led to rich sources of qualitative data. They proved to be useful triggers as respondents discussed their understanding and definitions of terms, often explaining their responses with stories from their own experiences. The items triggered detailed exploration of the complex factors that comprise health, illness and healthcare seeking, and gave considerable insight into the ways in which people respond to standardised questions. Apparently simple questions and response categories conceal considerable complexity. Inclusion of standardised survey questions in qualitative interviews can provide an easy and fruitful method to explore research issues and provide triggers to difficult or contested topics. Well designed and validated questionnaires produce data of immense value to HSR, and this value could be further enhanced by their use within a qualitative interview. We suggest that the technique of 'questerviews' is a tangible and pragmatic way of doing this.
Tuning, Validation, and Uncertainty Estimates for a Sound Exposure Model
2011-09-01
to swell height. This phenomenon is described in "Observations of Fluctuation of Transmitted Sound in Shallow Water" (Urick 1969). Mean wave... Saunders, P. M., 1981: Practical Conversion of Pressure to Depth. J. Phys. Oceanogr., 11, 573-574. Urick, R.
Long, Kristin A; Pariseau, Emily M; Muriel, Anna C; Chu, Andrea; Kazak, Anne E; Alderfer, Melissa A
2018-04-03
Although many siblings experience distress after a child's cancer diagnosis, their psychosocial functioning is seldom assessed in clinical oncology settings. One barrier to systematic sibling screening is the lack of a validated, sibling-specific screening instrument. Thus, this study developed sibling-specific screening modules in English and Spanish for the Psychosocial Assessment Tool (PAT), a well-validated screener of family psychosocial risk. A purposive sample of English- and Spanish-speaking parents of children with cancer (N = 29) completed cognitive interviews to provide in-depth feedback on the development of the new PAT sibling modules. Interviews were transcribed verbatim, cleaned, and analyzed using applied thematic analysis. Items were updated iteratively according to participants' feedback. Data collection continued until saturation was reached (i.e., all items were clear and valid). Two sibling modules were developed to assess siblings' psychosocial risk at diagnosis (preexisting risk factors) and several months thereafter (reactions to cancer). Most prior PAT items were retained; however, parents recommended changes to improve screening format (separately assessing each sibling within the family and expanding response options to include "sometimes"), developmental sensitivity (developing or revising items for ages 0-2, 3-4, 5-9, and 10+ years), and content (adding items related to sibling-specific social support, global assessments of sibling risk, emotional/behavioral reactions to cancer, and social ecological factors such as family and school). Psychosocial screening requires sibling-specific screening items that correspond to preexisting risk (at diagnosis) and reactions to cancer (several months after diagnosis). Validated, sibling-specific screeners will facilitate identification of siblings with elevated psychosocial risk.
NASA Astrophysics Data System (ADS)
Liu, Y. R.; Li, Y. P.; Huang, G. H.; Zhang, J. L.; Fan, Y. R.
2017-10-01
In this study, a Bayesian-based multilevel factorial analysis (BMFA) method is developed to assess parameter uncertainties and their effects on hydrological model responses. In BMFA, the Differential Evolution Adaptive Metropolis (DREAM) algorithm is employed to approximate the posterior distributions of model parameters with Bayesian inference; a factorial analysis (FA) technique is used to measure the specific variations of hydrological responses in terms of posterior distributions, in order to investigate the individual and interactive effects of parameters on model outputs. BMFA is then applied to a case study of the Jinghe River watershed in the Loess Plateau of China to demonstrate its validity and applicability. The uncertainties of four sensitive parameters, including the soil conservation service runoff curve number for moisture condition II (CN2), soil hydraulic conductivity (SOL_K), plant available water capacity (SOL_AWC), and soil depth (SOL_Z), are investigated. Results reveal that (i) CN2 has a positive effect on peak flow, implying that concentrated rainfall during the rainy season can cause infiltration-excess surface flow, which is a considerable contributor to peak flow in this watershed; (ii) SOL_K has a positive effect on average flow, implying that the widely distributed cambisols can lead to medium percolation capacity; (iii) the interaction between SOL_AWC and SOL_Z has a noticeable effect on peak flow and their effects are dependent upon each other, which discloses that soil depth can significantly influence the processes of plant uptake of soil water in this watershed. Based on the above findings, the significant parameters and the relationships among uncertain parameters can be specified, such that the hydrological model's capability for simulating and predicting water resources of the Jinghe River watershed can be improved.
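The core of the BMFA idea — level-coding posterior draws of each parameter and estimating main and interaction effects on a model output — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the posterior samples are simulated in place of DREAM output, and the `peak_flow` response surface is invented; only the parameter names CN2 and SOL_Z come from the abstract.

```python
import random

random.seed(0)

# Hypothetical stand-in for DREAM posterior samples of two parameters
# (CN2, SOL_Z); a real application would use the sampler's output chains.
samples = [(random.gauss(70, 5), random.gauss(1.0, 0.2)) for _ in range(1000)]

def peak_flow(cn2, sol_z):
    # Toy response surface: peak flow rises with CN2, falls with soil
    # depth, with a small interaction term. Entirely invented.
    return 0.5 * cn2 - 8.0 * sol_z + 0.05 * cn2 * sol_z

# Median split assigns each posterior draw a low (-1) or high (+1) level.
med_cn2 = sorted(s[0] for s in samples)[len(samples) // 2]
med_sz = sorted(s[1] for s in samples)[len(samples) // 2]

def level(x, med):
    return 1 if x >= med else -1

# 2^2 factorial effect estimates: mean response contrast per factor,
# i.e. (mean at high level) - (mean at low level).
n = len(samples)
eff_cn2 = 2 * sum(level(c, med_cn2) * peak_flow(c, z) for c, z in samples) / n
eff_sz = 2 * sum(level(z, med_sz) * peak_flow(c, z) for c, z in samples) / n
eff_int = 2 * sum(level(c, med_cn2) * level(z, med_sz) * peak_flow(c, z)
                  for c, z in samples) / n

print(f"CN2 main effect:   {eff_cn2:+.2f}")
print(f"SOL_Z main effect: {eff_sz:+.2f}")
print(f"Interaction:       {eff_int:+.2f}")
```

With this toy surface, the CN2 effect on peak flow comes out positive and the soil-depth effect negative, mirroring the direction (though not the magnitude) of finding (i) in the abstract.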
Parresol, B. R.; Scott, D. A.; Zarnoch, S. J.; ...
2017-12-15
Spatially explicit mapping of forest productivity is important to assess many forest management alternatives. We assessed the relationship between mapped variables and site index of forests ranging from southern pine plantations to natural hardwoods on a 74,000-ha landscape in South Carolina, USA. Mapped features used in the analysis were soil association, land use condition in 1951, depth to groundwater, slope and aspect. Basal area, species composition, age and height were the tree variables measured. Linear modelling identified that plot basal area, depth to groundwater, soil association, and the interactions between depth to groundwater and forest group and between land use in 1951 and forest group were related to site index (SI) (R² = 0.37), but this model had regression attenuation. We then used structural equation modeling to incorporate error-in-measurement corrections for basal area and groundwater to remove bias in the model. We validated this model using 89 independent observations and found that the 95% confidence intervals for the slope and intercept of an observed vs. predicted site index error-corrected regression included one and zero, respectively, indicating a good fit. With error in measurement incorporated, only basal area, soil association, and the interaction between forest groups and land use were important predictors (R² = 0.57). Thus, we were able to develop an unbiased model of SI that could be applied to create a spatially explicit map based primarily on soils as modified by past (land use and forest type) and recent forest management (basal area).
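The regression attenuation mentioned above — a slope biased toward zero when a predictor such as basal area is measured with error — can be demonstrated with the classical errors-in-variables correction. This sketch is illustrative only (the study used structural equation modeling, not this simple reliability adjustment), and all numbers are invented.

```python
import random

random.seed(1)

# Simulate a true basal-area/site-index relationship, then add
# measurement error to basal area. All parameters are invented.
n = 500
true_ba = [random.gauss(25, 6) for _ in range(n)]            # true basal area
si = [10 + 0.8 * b + random.gauss(0, 2) for b in true_ba]    # site index
meas_ba = [b + random.gauss(0, 4) for b in true_ba]          # measured w/ error

def ols_slope(x, y):
    # Ordinary least-squares slope: cov(x, y) / var(x).
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

naive = ols_slope(meas_ba, si)        # attenuated toward zero
reliability = 6**2 / (6**2 + 4**2)    # var(true) / var(measured) ≈ 0.69
corrected = naive / reliability       # classical attenuation correction

print(f"true slope 0.80, naive {naive:.2f}, corrected {corrected:.2f}")
```

The naive slope lands near 0.8 × 0.69 ≈ 0.55, and dividing by the reliability ratio recovers roughly the true 0.8, which is the bias the SEM-based correction in the abstract is designed to remove.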
Sukhawaha, Supattra; Arunpongpaisal, Suwanna; Hurst, Cameron
2016-09-30
Suicide prevention in adolescents through early detection, using screening tools to identify those at high suicidal risk, is a priority. Our objective was to build a multidimensional scale, the Suicidality of Adolescent Screening Scale (SASS), to identify adolescents at risk of suicide. An initial pool of items was developed using in-depth interviews, focus groups and a literature review. Initially, 77 items were administered to 307 adolescents and analyzed using exploratory Multidimensional Item Response Theory (MIRT) to remove unnecessary items. A subsequent exploratory factor analysis revealed 35 items that loaded onto 4 factors: Stressors, Pessimism, Suicidality and Depression. To confirm this structure, a new sample of 450 adolescents was collected and confirmatory MIRT factor analysis was performed. The resulting scale was shown to be construct valid and able to discriminate well between adolescents who had and had not previously attempted suicide. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
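A standard way to quantify the kind of discrimination the SASS validation describes — how well total scores separate previous attempters from non-attempters — is the ROC AUC, equivalent to the Mann-Whitney U statistic. The sketch below uses simulated scores, not SASS data; group means, spreads and sizes are assumptions for illustration.

```python
import random

random.seed(2)

# Simulated screener totals: attempters assumed to score ~1.5 SD higher.
non_attempters = [random.gauss(40, 10) for _ in range(300)]
attempters = [random.gauss(55, 10) for _ in range(50)]

def roc_auc(neg, pos):
    # Probability that a random positive case outscores a random
    # negative case, counting ties as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = roc_auc(non_attempters, attempters)
print(f"AUC = {auc:.2f}")
```

An AUC of 0.5 means no discrimination and 1.0 perfect separation; for a 1.5-SD group separation as simulated here, the value comes out in the mid-0.8s.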