NASA Astrophysics Data System (ADS)
Jorge, Marco G.; Brennand, Tracy A.
2017-07-01
Relict drumlin and mega-scale glacial lineation (positive relief, longitudinal subglacial bedforms - LSBs) morphometry has been used as a proxy for paleo ice-sheet dynamics. LSB morphometric inventories have relied on manual mapping, which is slow and subjective and thus potentially difficult to reproduce. Automated methods are faster and reproducible, but previous methods for LSB semi-automated mapping have not been highly successful. Here, two new object-based methods for the semi-automated extraction of LSBs (footprints) from digital terrain models are compared in a test area in the Puget Lowland, Washington, USA. As segmentation procedures to create LSB-candidate objects, the normalized closed contour method relies on the contouring of a normalized local relief model addressing LSBs on slopes, and the landform elements mask method relies on the classification of landform elements derived from the digital terrain model. For identifying which LSB-candidate objects correspond to LSBs, both methods use the same LSB operational definition: a ruleset encapsulating expert knowledge, published morphometric data, and the morphometric range of LSBs in the study area. The normalized closed contour method was separately applied to four different local relief models, two computed in moving windows and two hydrology-based. Overall, the normalized closed contour method outperformed the landform elements mask method. The normalized closed contour method applied to a hydrological relief model from a multiple direction flow routing algorithm performed best. For an assessment of its transferability, the normalized closed contour method was evaluated on a second area, the Chautauqua drumlin field, Pennsylvania and New York, USA, where it performed better than in the Puget Lowland. A broad comparison to previous methods suggests that the normalized closed contour method may be the most capable method to date, but more development is required.
Chao, Hsi-Chun; Chen, Guan-Yuan; Hsu, Lih-Ching; Liao, Hsiao-Wei; Yang, Sin-Yu; Wang, San-Yuan; Li, Yu-Liang; Tang, Sung-Chun; Tseng, Yufeng Jane; Kuo, Ching-Hua
2017-06-08
Cellular lipidomic studies have been favored approaches in many biomedical research areas. To provide fair comparisons of the studied cells, it is essential to perform normalization of the determined concentration before lipidomic analysis. This study proposed a cellular lipidomic normalization method by measuring the phosphatidylcholine (PC) and sphingomyelin (SM) contents in cell extracts. To provide efficient analysis of PC and SM in cell extracts, flow injection analysis-electrospray ionization-tandem mass spectrometry (FIA-ESI-MS/MS) with a precursor ion scan (PIS) of m/z 184 was used, and the parameters affecting the performance of the method were optimized. Good linearity was observed between the cell extract dilution factor and the reciprocal of the total ion chromatogram (TIC) area in the PIS of m/z 184 within the dilution range of 1- to 16-fold (R2 = 0.998). The calibration curve could be used for concentration adjustment of a cell extract of unknown concentration. The intraday and intermediate precisions were below 10%. The accuracy ranged from 93.0% to 105.6%. The performance of the new normalization method was evaluated using different numbers of HCT-116 cells. Sphingosine, ceramide (d18:1/18:0), SM (d18:1/18:0) and PC (16:1/18:0) were selected as the representative test lipid species, and the results showed that the peak areas of each lipid species obtained from different cell numbers were within a 20% variation after normalization. Finally, the PIS of m/z 184 normalization method was applied to study ischemia-induced neuron injury using oxygen and glucose deprivation (OGD) on primary cultured neuronal cells. Our results showed that the PIS of m/z 184 normalization method is an efficient and effective approach for concentration normalization in cellular lipidomic studies. Copyright © 2017 Elsevier B.V. All rights reserved.
Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model
NASA Astrophysics Data System (ADS)
Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley
2017-05-01
Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area; it has the greatest uncertainty when the disease is rare or the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced to address this shortcoming of the SMR. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to the data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates than the classical method. The log-normal model can also overcome the SMR problem when there is no observed bladder cancer case in an area.
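As a hedged illustration of the contrast between the two estimators, a minimal sketch with made-up counts (not the authors' WinBUGS model; the shrinkage weight is purely illustrative):

```python
import numpy as np

# Hypothetical observed and expected bladder cancer counts for three areas.
observed = np.array([0, 4, 12])
expected = np.array([2.1, 3.5, 9.8])

# Classical estimate: SMR = O / E. Unstable when E is small, and
# exactly 0 whenever no case is observed.
smr = observed / expected

# Simple log-normal smoothing (a stand-in for the fitted model): work on
# log(SMR) with a continuity correction, shrink toward the global mean,
# and transform back.
log_rr = np.log((observed + 0.5) / (expected + 0.5))
w = 0.7  # hypothetical shrinkage weight
smoothed_rr = np.exp(w * log_rr + (1 - w) * log_rr.mean())

print(smr)          # -> [0., 1.14, 1.22] (rounded)
print(smoothed_rr)  # nonzero even where no case was observed
```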
NASA Astrophysics Data System (ADS)
Ayyoub, Abdellatif; Er-Raki, Salah; Khabba, Saïd; Merlin, Olivier; César Rodriguez, Julio; Ezzahar, Jamal; Bahlaoui, Ahmed; Chehbouni, Abdelghani
2016-04-01
The present work aims to develop a simple approach relating normalized daily sap flow (per unit of leaf area) to daily ET0 (mm/day) calculated by two methods: FAO-Penman-Monteith (FAO-PM) and Hargreaves-Samani (HARG). The data sets used for developing this approach were taken from three experimental sites (olive trees, cv. "Olea europaea L.", olive trees, cv. "Arbequino", and citrus trees, cv. "Clementine Afourar") in the Tensift region around Marrakech, Morocco, and one experimental site (pecan orchard, cv. "Carya illinoinensis, Wangenh. K. Koch") in the Yaqui Valley, northwest Mexico. The results showed that the normalized daily sap flow (volume of transpired water per unit of leaf area) was linearly correlated with ET0 (mm per day) calculated by the FAO-PM method. The coefficient of determination (R2) and the slope of this linear regression varied between 0.71 and 0.97 and between 0.30 and 0.35, respectively, depending on the type of orchard. For the HARG method, the relationship between the two terms is also linear but with less accuracy (R2 = 0.7), as expected due to the underestimation of ET0 by this method. Afterward, the validation of the developed linear relationship was performed over an olive orchard ("Olea europaea L.") where measurements of sap flow were available for another cropping season (2004). The scatter plot between the normalized measured and estimated sap flow based on the FAO-PM method reveals very good agreement (slope = 1, with R2 = 0.83 and RMSE = 0.14 L/m2 leaf area). For the estimation of normalized sap flow based on the HARG method, however, the correlation is more scattered, with some underestimation (5%). A further validation was performed using measurements of evapotranspiration (ET) by an eddy correlation system, and the results showed that the correlation between normalized measured ET and estimated normalized sap flow is better when ET0 is estimated with the FAO-PM method (RMSE = 0.33 L/m2 leaf area) than with the HARG method (RMSE = 0.51 L/m2 leaf area). Finally, the performance of the developed approach was compared to the traditional dual crop coefficient scheme for estimating plant transpiration. Cross-comparison of these two approaches with the measured data gave satisfactory results, with an average RMSE of about 0.37 mm/day for both approaches.
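For reference, a minimal sketch of the HARG side of this relation. The Hargreaves-Samani ET0 formula is standard; the slope below is an illustrative value inside the reported 0.30-0.35 range, not a fitted coefficient:

```python
import math

def et0_hargreaves_samani(t_mean, t_max, t_min, ra_mm_day):
    """Hargreaves-Samani (1985) reference evapotranspiration in mm/day.
    ra_mm_day: extraterrestrial radiation expressed as equivalent
    evaporation (mm/day); temperatures in degrees Celsius."""
    return 0.0023 * (t_mean + 17.8) * math.sqrt(t_max - t_min) * ra_mm_day

def normalized_sap_flow(et0_mm_day, slope=0.32):
    """Reported relation: normalized daily sap flow (L/m2 leaf area per
    day) is roughly linear in ET0; slope 0.32 is illustrative."""
    return slope * et0_mm_day

et0 = et0_hargreaves_samani(t_mean=24.0, t_max=31.0, t_min=17.0, ra_mm_day=11.5)
print(round(et0, 2), round(normalized_sap_flow(et0), 2))  # ~4.14 mm/day
```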
Laser-induced differential normalized fluorescence method for cancer diagnosis
Vo-Dinh, Tuan; Panjehpour, Masoud; Overholt, Bergein F.
1996-01-01
An apparatus and method for cancer diagnosis are disclosed. The diagnostic method includes the steps of irradiating a tissue sample with monochromatic excitation light, producing a laser-induced fluorescence spectrum from emission radiation generated by interaction of the excitation light with the tissue sample, and dividing the intensity at each wavelength of the laser-induced fluorescence spectrum by the integrated area under the laser-induced fluorescence spectrum to produce a normalized spectrum. A mathematical difference between the normalized spectrum and an average value of a reference set of normalized spectra which correspond to normal tissues is calculated, which provides for amplifying small changes in weak signals from malignant tissues for improved analysis. The calculated differential normalized spectrum is correlated to a specific condition of a tissue sample.
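The normalization and differencing steps described above translate directly into a short sketch (function and variable names are ours, not from the patent):

```python
import numpy as np

def differential_normalized_spectrum(intensity, wavelengths, reference_normals):
    """Steps as described: area-normalize a laser-induced fluorescence
    spectrum, then subtract the mean of area-normalized normal-tissue
    reference spectra measured on the same wavelength grid."""
    # Divide the intensity at each wavelength by the integrated area
    # under the spectrum.
    area = np.trapz(intensity, wavelengths)
    normalized = intensity / area
    # Average of a reference set of normalized spectra from normal tissue.
    reference_mean = np.mean(
        [r / np.trapz(r, wavelengths) for r in reference_normals], axis=0
    )
    # The difference amplifies small changes in weak signals from
    # malignant tissue.
    return normalized - reference_mean
```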
Extracting fields snow coverage information with HJ-1A/B satellites data
NASA Astrophysics Data System (ADS)
Dong, Wenquan; Meng, Jihua
2015-10-01
The distribution and change of snow coverage are sensitive indicators of climate change. In the northeast part of China, farmland is still covered with snow in spring. Since sowing can only begin once the snow has melted, monitoring of field snow coverage provides a reference for determining the sowing date. Because of sensor restrictions and application requirements, current research on remote sensing of snow focuses more on mesoscale and large-scale studies than on small-scale ones, and research on the snow-melting period in particular is rarely reported. The HJ-1A/B satellites are part of a small satellite constellation for environment and disaster monitoring and meteorological forecasting. Compared with other data sources, the HJ-1A/B satellites have comparatively high temporal and spatial resolution and are better suited to monitoring variations in melting snow coverage at the small-watershed scale. This paper was based on HJ-1A/1B data, taking Hongxing farm of Bei'an, Heilongjiang Province, China as the study area. We developed methods for extracting snow cover information over farmland in two cases: using HJ-1A/1B CCD together with HJ-1B IRS data, and using HJ-1A/1B CCD data alone. We chose these two cases because the two optical satellites HJ-1A/B together provide whole-territory coverage in the visible spectrum every two days and in the infrared every four days, so sometimes only a CCD image can be obtained; in that case, the normalized snow index method cannot be used to extract snow coverage information. Using HJ-1A/1B CCD with HJ-1B IRS data, and drawing on the theory of snow remote sensing monitoring, this paper analyzed the spectral response characteristics of the HJ-1A/1B satellite data and then applied the widely used Normalized Difference Snow Index (NDSI) and the S3 index to those data. The NDSI uses reflectance values of the Red and SWIR spectral bands of HJ-1B, and the S3 index uses reflectance values of the NIR, Red and SWIR spectral bands. With multi-temporal HJ satellite data, the optimal threshold of the normalized snow index was determined to divide the farmland into snow-covered area, melting-snow area and non-snow area. The results of the two indices are quite similar to each other and of high accuracy, and melting snow coverage can be well extracted by both types of normalized snow index. When only a CCD image is available, we used a supervised classification method to extract melting snow coverage. With this method, the accuracy of field snow coverage extraction is slightly lower than with the normalized snow index methods mentioned above. In mountainous areas, the extracted snow coverage area is also slightly larger than that extracted by the normalized snow index methods: shadows make the snow in the valleys darker, so the supervised classification method assigns it to the non-snow class, while the normalized snow index method largely suppresses the shadow effect. The extraction accuracy in both cases was assessed, and both approaches can meet the needs of practical applications. The HJ-1A/1B satellites are well suited to monitoring variations in melting snow coverage over farmland, and they can provide a reference for determining the sowing date.
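A sketch of the index-based step. The band choice follows the abstract's description for HJ-1B (the classical NDSI uses a green band), and the thresholds are placeholders for the optimal values derived in the study:

```python
import numpy as np

def ndsi(red, swir):
    """Normalized snow index as described for HJ-1B:
    (Red - SWIR) / (Red + SWIR), on float reflectance arrays."""
    red = np.asarray(red, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (red - swir) / (red + swir + 1e-12)

def classify_snow(index, t_snow=0.4, t_melt=0.1):
    """Illustrative thresholds only; the study determines the optimal
    threshold from multi-temporal imagery.
    Codes: 2 = snow-covered, 1 = melting snow, 0 = non-snow."""
    out = np.zeros(index.shape, dtype=np.uint8)
    out[index >= t_melt] = 1
    out[index >= t_snow] = 2
    return out
```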
Boysen, Angela K; Heal, Katherine R; Carlson, Laura T; Ingalls, Anitra E
2018-01-16
The goal of metabolomics is to measure the entire range of small organic molecules in biological samples. In liquid chromatography-mass spectrometry-based metabolomics, formidable analytical challenges remain in removing the nonbiological factors that affect chromatographic peak areas. These factors include sample matrix-induced ion suppression, chromatographic quality, and analytical drift. The combination of these factors is referred to as obscuring variation. Some metabolomics samples can exhibit intense obscuring variation due to matrix-induced ion suppression, rendering large amounts of data unreliable and difficult to interpret. Existing normalization techniques have limited applicability to these sample types. Here we present a data normalization method to minimize the effects of obscuring variation. We normalize peak areas using a batch-specific normalization process, which matches measured metabolites with isotope-labeled internal standards that behave similarly during the analysis. This method, called best-matched internal standard (B-MIS) normalization, can be applied to targeted or untargeted metabolomics data sets and yields relative concentrations. We evaluate and demonstrate the utility of B-MIS normalization using marine environmental samples and laboratory grown cultures of phytoplankton. In untargeted analyses, B-MIS normalization allowed for inclusion of mass features in downstream analyses that would have been considered unreliable without normalization due to obscuring variation. B-MIS normalization for targeted or untargeted metabolomics is freely available at https://github.com/IngallsLabUW/B-MIS-normalization .
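The matching idea can be sketched as follows, assuming (as the abstract suggests) that "behaves similarly" is judged by the stability of the normalized areas across pooled technical-replicate injections; for the published implementation see the linked repository:

```python
import numpy as np

def bmis_normalize(peaks, standards, pooled_idx):
    """Sketch of the B-MIS idea. peaks: (n_samples, n_features) peak
    areas; standards: (n_samples, n_standards) internal-standard areas.
    For each feature, pick the internal standard that minimizes the
    relative standard deviation of the normalized area across pooled
    injections, then report areas rescaled by that standard."""
    normalized = np.empty_like(peaks, dtype=float)
    for j in range(peaks.shape[1]):
        best_rsd, best = np.inf, 0
        for k in range(standards.shape[1]):
            ratio = peaks[:, j] / standards[:, k]
            pooled = ratio[pooled_idx]
            rsd = pooled.std() / pooled.mean()
            if rsd < best_rsd:
                best_rsd, best = rsd, k
        # Rescale by the mean IS area so values stay in peak-area units;
        # the result is a relative, not absolute, concentration.
        normalized[:, j] = peaks[:, j] / standards[:, best] * standards[:, best].mean()
    return normalized
```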
Li, Zhixi; Peck, Kyung K.; Brennan, Nicole P.; Jenabi, Mehrnaz; Hsu, Meier; Zhang, Zhigang; Holodny, Andrei I.; Young, Robert J.
2014-01-01
Purpose: The purpose of this study was to compare the deterministic and probabilistic tracking methods of diffusion tensor white matter fiber tractography in patients with brain tumors. Materials and Methods: We identified 29 patients with left brain tumors <2 cm from the arcuate fasciculus who underwent pre-operative language fMRI and DTI. The arcuate fasciculus was reconstructed using a deterministic Fiber Assignment by Continuous Tracking (FACT) algorithm and a probabilistic method based on an extended Monte Carlo Random Walk algorithm. Tracking was controlled using two ROIs corresponding to Broca's and Wernicke's areas. Tracts in tumor-affected hemispheres were examined for extension between Broca's and Wernicke's areas, anterior-posterior length and volume, and compared with the normal contralateral tracts. Results: Probabilistic tracts displayed more complete anterior extension to Broca's area than did FACT tracts on the tumor-affected and normal sides (p < 0.0001). The median length ratio for tumor:normal sides was greater for probabilistic tracts than FACT tracts (p < 0.0001). The median tract volume ratio for tumor:normal sides was also greater for probabilistic tracts than FACT tracts (p = 0.01). Conclusion: Probabilistic tractography reconstructs the arcuate fasciculus more completely and performs better through areas of tumor and/or edema. The FACT algorithm tends to underestimate the anterior-most fibers of the arcuate fasciculus, which are crossed by primary motor fibers. PMID:25328583
Martens, Jürgen
2005-01-01
The hygienic performance of biowaste composting plants is of high importance for ensuring compost quality. Existing compost quality assurance systems reflect this importance through intensive testing of hygienic parameters. In many countries, compost quality assurance systems are under construction, and it is necessary to check and optimize the methods used to assess the hygienic performance of composting plants. A set of indicator methods to evaluate the hygienic performance of normally operating biowaste composting plants was developed. The indicator methods were developed by investigating temperature measurements from indirect process tests of 23 composting plants belonging to 11 design types of the Hygiene Design Type Testing System of the German Compost Quality Association (BGK e.V.). The presented indicator methods are the grade of hygienization, the basic curve shape, and the hygienic risk area. The temperature courses of single plants are not normally distributed, but cluster analysis grouped them into normally distributed subgroups; this was a precondition for developing the indicator methods. For each plant, the grade of hygienization was calculated through transformation into the standard normal distribution. It gives the percentage of the entire data set that meets the legal temperature requirements. The hygienization grade differs widely within the design types and falls below 50% for about one fourth of the plants. The subgroups are divided visually into basic curve shapes, which stand for different process courses. For each plant, the composition of the entire data set out of the various basic curve shapes can be used as an indicator of the basic process conditions. Some basic curve shapes indicate abnormal process courses, which can be remedied through process optimization. A hygienic risk area concept using the 90% range of variation of the normal temperature courses was introduced. Comparing the design type range of variation with the legal temperature defaults revealed hygienic risk areas over the temperature courses that could be minimized through process optimization. The hygienic risk areas of four design types show suboptimal hygienic performance.
The average receiver operating characteristic curve in multireader multicase imaging studies
Samuelson, F W
2014-01-01
Objective: In multireader, multicase (MRMC) receiver operating characteristic (ROC) studies for evaluating medical imaging systems, the area under the ROC curve (AUC) is often used as a summary metric. Owing to the limitations of AUC, plotting the average ROC curve to accompany the rigorous statistical inference on AUC is recommended. The objective of this article is to investigate methods for generating the average ROC curve from the ROC curves of individual readers. Methods: We present both a non-parametric method and a parametric method for averaging ROC curves that produce a ROC curve whose area is equal to the average AUC of the individual readers (a property we call area preserving). We use hypothetical examples, simulated data and a real-world imaging data set to illustrate these methods and their properties. Results: We show that our proposed methods are area preserving. We also show that the method of averaging the ROC parameters, either the conventional bi-normal parameters (a, b) or the proper bi-normal parameters (c, d_a), is generally not area preserving and may produce a ROC curve that is intuitively not an average of multiple curves. Conclusion: Our proposed methods are useful for making plots of average ROC curves in MRMC studies as a companion to the rigorous statistical inference on the AUC end point. The software implementing these methods is freely available from the authors. Advances in knowledge: Methods for generating the average ROC curve in MRMC ROC studies are formally investigated. The area-preserving criterion we defined is useful for evaluating such methods. PMID:24884728
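One simple average with the area-preserving property is vertical (pointwise) averaging of the readers' empirical curves on a common false-positive-fraction grid: since integration is linear, the AUC of the mean curve equals the mean of the individual AUCs. A sketch, not necessarily the authors' exact estimator:

```python
import numpy as np

def average_roc(fprs, tprs, grid=None):
    """Pointwise (vertical) average of readers' ROC curves.
    fprs/tprs: lists of per-reader arrays; each fpr must be
    non-decreasing (as empirical ROC operating points are).
    Area preserving: trapz(mean of curves) == mean of trapz(curves)."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 201)
    curves = [np.interp(grid, f, t) for f, t in zip(fprs, tprs)]
    mean_tpr = np.mean(curves, axis=0)
    auc = np.trapz(mean_tpr, grid)
    return grid, mean_tpr, auc
```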
NASA Technical Reports Server (NTRS)
Cole, G. L.; Willoh, R. G.
1975-01-01
A linearized mathematical analysis is presented for determining the response of normal shock position and subsonic duct pressures to flow-field perturbations upstream of the normal shock in mixed-compression supersonic inlets. The inlet duct cross-sectional area variation is approximated by constant-area sections; this approximation results in one-dimensional wave equations. A movable normal shock separates the supersonic and subsonic flow regions, and a choked exit is assumed for the inlet exit condition. The analysis leads to a closed-form matrix solution for the shock position and pressure transfer functions. Analytical frequency response results are compared with experimental data and a method of characteristics solution.
Regional Myocardial Blood Flow*
Sullivan, Jay M.; Taylor, Warren J.; Elliott, William C.; Gorlin, Richard
1967-01-01
A method is described which measures the local effectiveness of the myocardial circulation, expressed as a clearance constant. Uniform clearance constants have been demonstrated in the normal canine and human myocardium. A distinct difference in clearance constants has been demonstrated between the normal canine myocardium and areas of naturally occurring disease. Heterogeneous clearance constants have been found in a majority of human subjects with coronary artery disease—the lowest rates being noted in areas of fibrous aneurysm. PMID:6036537
2010-01-01
Background: Cluster analysis, and in particular hierarchical clustering, is widely used to extract information from gene expression data. The aim is to discover new classes, or sub-classes, of either individuals or genes. Performing a cluster analysis commonly involves decisions on how to handle missing values, standardize the data and select genes. In addition, pre-processing, involving various types of filtration and normalization procedures, can have an effect on the ability to discover biologically relevant classes. Here we consider cluster analysis in a broad sense and perform a comprehensive evaluation that covers several aspects of cluster analyses, including normalization. Results: We evaluated 2780 cluster analysis methods on seven publicly available 2-channel microarray data sets with common reference designs. The cluster analysis methods differed in data normalization (5 normalizations were considered), missing value imputation (2), standardization of data (2), gene selection (19) or clustering method (11). The cluster analyses were evaluated using known classes, such as cancer types, and the adjusted Rand index. The performances of the different analyses vary between the data sets and it is difficult to give general recommendations. However, normalization, gene selection and clustering method are all variables that have a significant impact on the performance. In particular, gene selection is important and it is generally necessary to include a relatively large number of genes in order to get good performance. Selecting genes with high standard deviation or using principal component analysis are shown to be the preferred gene selection methods. Hierarchical clustering using Ward's method, k-means clustering and Mclust are the clustering methods considered in this paper that achieve the highest adjusted Rand index. Normalization can have a significant positive impact on the ability to cluster individuals, and there are indications that background correction is preferable, in particular if the gene selection is successful. However, this is an area that needs to be studied further in order to draw any general conclusions. Conclusions: The choice of cluster analysis, and in particular gene selection, has a large impact on the ability to cluster individuals correctly based on expression profiles. Normalization has a positive effect, but the relative performance of different normalizations is an area that needs more research. In summary, although clustering, gene selection and normalization are considered standard methods in bioinformatics, our comprehensive analysis shows that selecting the right methods, and the right combinations of methods, is far from trivial and that much is still unexplored in what is considered to be the most basic analysis of genomic data. PMID:20937082
Armstrong, David S.; Parker, Gene W.; Richards, Todd A.
2003-01-01
Streamflow characteristics and methods for determining streamflow requirements for habitat protection were investigated at 23 active index streamflow-gaging stations in southern New England. Fish communities sampled near index streamflow-gaging stations in Massachusetts have a high percentage of fish that require flowing-water habitats for some or all of their life cycle. The relatively unaltered flow condition at these sites was assumed to be one factor that has contributed to this condition. Monthly flow durations and low-flow statistics were determined for the index streamflow-gaging stations for a 25-year period from 1976 to 2000. Annual hydrographs were prepared for each index station from median streamflows at the 50-percent monthly flow duration, normalized by drainage area. A median monthly flow of 1 ft3/s/mi2 was used to split hydrographs into a high-flow period (November–May) and a low-flow period (June–October). The hydrographs were used to classify index stations into groups with similar median monthly flow durations. Index stations were divided into four regional groups, roughly paralleling the coast, to characterize streamflows for November to May; and into two groups, on the basis of base-flow index and percentage of sand and gravel in the contributing area, for June to October. For the June to October period, for index stations with a high base-flow index and contributing areas greater than 20 percent sand and gravel, median streamflows at the 50-percent monthly flow duration, normalized by drainage area, were 0.57, 0.49, and 0.46 ft3/s/mi2 for July, August, and September, respectively. For index stations with a low base-flow index and contributing areas less than 20 percent sand and gravel, median streamflows at the 50-percent monthly flow duration, normalized by drainage area, were 0.34, 0.28, and 0.27 ft3/s/mi2 for July, August, and September, respectively. Streamflow variability between wet and dry years can be characterized by use of the interquartile range of median streamflows at selected monthly flow durations. For example, the median Q50 discharge for August had an interquartile range of 0.30 to 0.87 ft3/s/mi2 for the high-flow group and 0.16 to 0.47 ft3/s/mi2 for the low-flow group. Streamflow requirements for habitat protection were determined for 23 index stations by use of three methods based on hydrologic records: the Range of Variability Approach, the Tennant method, and the New England Aquatic-Base-Flow method. Normalized flow management targets determined by the Range of Variability Approach for July, August, and September ranged between 0.21 and 0.84 ft3/s/mi2 for the low monthly flow duration group, and 0.37 and 1.27 ft3/s/mi2 for the high monthly flow duration group. Median streamflow requirements for habitat protection during summer for the 23 index streamflow-gaging stations determined by the Tennant method, normalized by drainage area, were 0.81, 0.61, and 0.21 ft3/s/mi2 for the Tennant 40-, 30-, and 10-percent of mean annual flow methods, representing good, fair, and poor stream habitat conditions in summer, according to Tennant. New England Aquatic-Base-Flow streamflow requirements for habitat protection during summer were determined from the median of monthly mean flows for August for index streamflow-gaging stations having drainage areas greater than 50 mi2. For five index streamflow-gaging stations in the low median monthly flow group, the average median monthly mean streamflow for August, normalized by drainage area, was 0.48 ft3/s/mi2.
Streamflow requirements for habitat protection were determined for riffle habitats near 10 index stations by use of two methods based on hydraulic ratings: the Wetted-Perimeter and R2Cross methods. Hydraulic parameters required by these methods were simulated by calibrated HEC-RAS models. Wetted-Perimeter streamflow requirements for habitat protection, normalized by drainage area, ranged between 0.13 and 0.58 ft3/s/mi2 and had a median value of 0.37 ft3/s/mi2. Streamflow requirements determined by the R2Cross 3-of-3 criteria method ranged between 0.39 and 2.1 ft3/s/mi2 and had a median of 0.84 ft3/s/mi2. Streamflow requirements determined by the R2Cross 2-of-3 criteria method, normalized by drainage area, ranged between 0.16 and 0.85 ft3/s/mi2 and had a median of 0.36 ft3/s/mi2. Streamflow requirements determined by the different methods were evaluated by comparison to streamflow statistics from the index streamflow-gaging stations.
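Since every figure in this record is a discharge normalized by drainage area, the underlying arithmetic is simply:

```python
def normalize_by_drainage_area(discharge_cfs, drainage_area_mi2):
    """Discharge in ft3/s divided by drainage area in mi2, giving the
    ft3/s/mi2 values used throughout the report."""
    return discharge_cfs / drainage_area_mi2

# e.g. 8.2 ft3/s from a 24 mi2 basin -> ~0.34 ft3/s/mi2, comparable to
# the August medians reported for low base-flow index stations.
print(round(normalize_by_drainage_area(8.2, 24.0), 2))
```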
NASA Astrophysics Data System (ADS)
El-Khadragy, A. A.; Shazly, T. F.; AlAlfy, I. M.; Ramadan, M.; El-Sawy, M. Z.
2018-06-01
An exploration method has been developed that uses surface and aerial gamma-ray spectral measurements for petroleum prospecting in stratigraphic and structural traps. The Gulf of Suez is an important region for studying hydrocarbon potentiality in Egypt. The thorium normalization technique was applied to the sandstone reservoirs in the region to determine hydrocarbon potentiality zones using the three spectrometric radioactive gamma-ray logs (eU, eTh and K% logs). The method was applied to the recorded gamma-ray spectrometric logs of the Rudeis and Kareem Formations in the Ras Ghara oil field, Gulf of Suez, Egypt. The conventional well logs (gamma-ray, resistivity, neutron, density and sonic logs) were analyzed to determine the net pay zones in the study area. The agreement ratios between the thorium normalization technique and the results of the well log analyses are high, so the thorium normalization technique can be used as a guide to hydrocarbon accumulation in the studied reservoir rocks.
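A heavily hedged sketch of the usual form of the thorium-normalization idea, with thorium treated as the geochemically stable reference; variable names and the background-ratio choice are ours, and the paper's exact workflow may differ:

```python
import numpy as np

def thorium_normalized_anomaly(e_u, e_th, background_u_th_ratio=None):
    """Generic thorium normalization: predict an 'ideal' uranium value
    at each depth from a background U/Th ratio and flag departures of
    measured eU from that prediction. Positive anomalies are commonly
    read as possible hydrocarbon indicators. Sketch only."""
    e_u = np.asarray(e_u, dtype=float)
    e_th = np.asarray(e_th, dtype=float)
    if background_u_th_ratio is None:
        # Assumption: estimate the background ratio from the whole log.
        background_u_th_ratio = e_u.mean() / e_th.mean()
    u_ideal = background_u_th_ratio * e_th
    return e_u - u_ideal
```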
Chung, Kyu Sung; Choi, Choong Hyeok; Bae, Tae Soo; Ha, Jeong Ku; Jun, Dal Jae; Wang, Joon Ho; Kim, Jin Goo
2018-04-01
To compare tibiofemoral contact mechanics after fixation for medial meniscus posterior root radial tears (MMPRTs). Seven fresh knees from mature pigs were used. Each knee was tested under 5 conditions: normal knee, MMPRT, pullout fixation with simple sutures, fixation with modified Mason-Allen sutures, and all-inside fixation using Fastfix 360. The peak contact pressure and contact surface area were evaluated using a capacitive sensor positioned between the meniscus and tibial plateau, under a 1,000-N compression force, at different flexion angles (0°, 30°, 60°, and 90°). The peak contact pressure was significantly higher in MMPRTs than in normal knees (P = .018). Although the peak contact pressure decreased significantly after fixation at all flexion angles (P = .031), it never recovered to the values noted in the normal meniscus. No difference was observed among fixation groups (P = .054). The contact surface area was significantly lower in MMPRTs than in the normal meniscus (P = .018) and increased significantly after fixation at all flexion angles (P = .018) but did not recover to within normal limits. For all flexion angles except 60°, the contact surface area was significantly higher for fixation with Mason-Allen sutures than for fixation with simple sutures or all-inside fixation (P = .027). At 90° of flexion, the contact surface area was significantly better for fixation with simple sutures than for all-inside fixation (P = .031). The peak contact pressure and contact surface area improved significantly after fixation, regardless of the fixation method, but did not recover to the levels noted in the normal meniscus after any type of fixation. Among the fixation methods evaluated in this time 0 study, fixation using modified Mason-Allen sutures provided a superior contact surface area compared with that noted after fixation using simple sutures or all-inside fixation, except at 60° of flexion. However, this study had insufficient power to accurately detect the differences between the outcomes of various fixation methods. Our results in a porcine model suggest that fixation can restore tibiofemoral contact mechanics in MMPRT and that fixation with a locking mechanism leads to superior biomechanical properties. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Dynamics and Control of Flexible Space Vehicles
NASA Technical Reports Server (NTRS)
Likins, P. W.
1970-01-01
The purpose of this report is twofold: (1) to survey the established analytic procedures for the simulation of controlled flexible space vehicles, and (2) to develop in detail methods that employ a combination of discrete and distributed ("modal") coordinates, i.e., the hybrid-coordinate methods. Analytic procedures are described in three categories: (1) discrete-coordinate methods, (2) hybrid-coordinate methods, and (3) vehicle normal-coordinate methods. Each of these approaches is described and analyzed for its advantages and disadvantages, and each is found to have an area of applicability. The hybrid-coordinate method combines the efficiency of the vehicle normal-coordinate method with the versatility of the discrete-coordinate method, and appears to have the widest range of practical application. The results in this report have practical utility in two areas: (1) complex digital computer simulation of flexible space vehicles of arbitrary configuration subject to realistic control laws, and (2) preliminary control system design based on transfer functions for linearized models of dynamics and control laws.
Optical measurement of isolated canine lung filtration coefficients at normal hematocrits.
Klaesner, J W; Pou, N A; Parker, R E; Finney, C; Roselli, R J
1997-12-01
In this study, lung filtration coefficient (Kfc) values were measured in eight isolated canine lung preparations at normal hematocrit values using three methods: gravimetric, blood-corrected gravimetric, and optical. The lungs were kept in zone 3 conditions and subjected to an average venous pressure increase of 10.24 +/- 0.27 (SE) cmH2O. The resulting Kfc (ml . min-1 . cmH2O-1 . 100 g dry lung wt-1) measured with the gravimetric technique was 0.420 +/- 0.017, which was statistically different from the Kfc measured by the blood-corrected gravimetric method (0.273 +/- 0.018) or the product of the reflection coefficient (sigmaf) and Kfc measured optically (0.272 +/- 0.018). The optical method involved the use of a Cellco filter cartridge to separate red blood cells from plasma, which allowed measurement of the concentration of the tracer in plasma at normal hematocrits (34 +/- 1.5). The permeability-surface area product was measured using radioactive multiple indicator-dilution methods before, during, and after venous pressure elevations. Results showed that the surface area of the lung did not change significantly during the measurement of Kfc. These studies suggest that sigmafKfc can be measured optically at normal hematocrits, that this measurement is not influenced by blood volume changes that occur during the measurement, and that the optical sigmafKfc agrees with the Kfc obtained via the blood-corrected gravimetric method.
Determination of rolling resistance coefficient based on normal tyre stiffness
NASA Astrophysics Data System (ADS)
Rykov, S. P.; Tarasuyk, V. N.; Koval, V. S.; Ovchinnikova, N. I.; Fedotov, A. I.; Fedotov, K. V.
2018-03-01
The purpose of the article is to develop an analytical dependence of the wheel rolling resistance coefficient based on a mathematical description of normal tyre stiffness. The article uses methods of non-holonomic mechanics and the plane-section method. The article shows that the abscissa of the center of gravity of the normal tyre stiffness distribution along the length of the contact area equals the shift of the normal road reaction, which can therefore be used for determining the rolling resistance coefficient. When determining the rolling resistance coefficient using ellipse and power function equations, one can reduce the labor costs of testing and increase assessment accuracy.
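The stated relation lends itself to a short sketch, combining the centroid construction above with the classic relation f = e / r between the forward shift of the normal reaction and the rolling radius (variable names are ours, not the authors'):

```python
import numpy as np

def rolling_resistance_coefficient(x, stiffness, rolling_radius):
    """Sketch of the stated relation. x: longitudinal coordinate in the
    contact patch (m), measured from the patch center; stiffness: normal
    stiffness density at each x. The centroid (abscissa of the center of
    gravity) of the stiffness distribution gives the shift e of the
    normal road reaction, and f = e / r."""
    x = np.asarray(x, dtype=float)
    k = np.asarray(stiffness, dtype=float)
    e = np.trapz(x * k, x) / np.trapz(k, x)  # centroid of the distribution
    return e / rolling_radius
```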
Quantitative structure-cytotoxicity relationship of phenylpropanoid amides.
Shimada, Chiyako; Uesawa, Yoshihiro; Ishihara, Mariko; Kagaya, Hajime; Kanamoto, Taisei; Terakubo, Shigemi; Nakashima, Hideki; Takao, Koichi; Saito, Takayuki; Sugita, Yoshiaki; Sakagami, Hiroshi
2014-07-01
A total of 12 phenylpropanoid amides were subjected to quantitative structure-activity relationship (QSAR) analysis based on their cytotoxicity, tumor selectivity and anti-HIV activity, in order to investigate their biological activities. Cytotoxicity against four human oral squamous cell carcinoma (OSCC) cell lines and three types of human oral normal cells was determined by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) method. Tumor selectivity was evaluated by the ratio of the mean CC50 (50% cytotoxic concentration) against normal oral cells to that against OSCC cell lines. Anti-HIV activity was evaluated by the ratio of CC50 to EC50 (50% cytoprotective concentration against HIV infection). Physicochemical, structural, and quantum-chemical parameters were calculated based on the conformations optimized by the LowModeMD method followed by the density functional theory (DFT) method. The 12 phenylpropanoid amides showed moderate cytotoxicity against both normal and OSCC cell lines. N-Caffeoyl derivatives coupled with vanillylamine and tyramine exhibited relatively higher tumor selectivity. Cytotoxicity against normal cells was correlated with descriptors related to electrostatic interaction, such as polar surface area and chemical hardness, whereas cytotoxicity against tumor cells correlated with free energy, surface area and ellipticity. Tumor-selective cytotoxicity correlated with molecular size (surface area) and electrostatic interaction (the maximum electrostatic potential). Molecular size, shape and capacity for electrostatic interaction are useful parameters for estimating the tumor selectivity of phenylpropanoid amides. Copyright© 2014 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Zhiming; de Wulf, Robert R.; van Coillie, Frieke M. B.; Verbeke, Lieven P. C.; de Clercq, Eva M.; Ou, Xiaokun
2011-01-01
Mapping of vegetation using remote sensing in mountainous areas is considerably hampered by topographic effects on the spectral response pattern. A variety of topographic normalization techniques have been proposed to correct these illumination effects due to topography. The purpose of this study was to compare six topographic normalization methods (Cosine correction, Minnaert correction, C-correction, Sun-canopy-sensor correction, two-stage topographic normalization, and the slope matching technique) for their effectiveness in enhancing vegetation classification in mountainous environments. Since most of the vegetation classes in the rugged terrain of the Lancang Watershed (China) did not feature a normal distribution, artificial neural networks (ANNs) were employed as the classifier. Comparing the ANN classifications, none of the topographic correction methods could significantly improve the overall accuracy of the ETM+ image classification. Nevertheless, at the class level, the accuracy of pine forest could be increased by using topographically corrected images. On the contrary, oak forest and mixed forest accuracies were significantly decreased by using corrected images. The results also showed that none of the topographic normalization strategies was able to satisfactorily correct for the topographic effects in severely shadowed areas.
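Three of the compared corrections have standard closed forms; a sketch with the usual formulations (the band-specific constants c and k are obtained by regressing radiance against cos(i), and slope-aware Minnaert variants exist):

```python
import numpy as np

def cos_i(theta_z, slope, sun_az, aspect):
    """Cosine of the local solar incidence angle on a tilted surface.
    theta_z: solar zenith angle; all angles in radians."""
    return (np.cos(theta_z) * np.cos(slope)
            + np.sin(theta_z) * np.sin(slope) * np.cos(sun_az - aspect))

def cosine_correction(radiance, theta_z, ci):
    """Cosine correction: L_h = L_t * cos(theta_z) / cos(i)."""
    return radiance * np.cos(theta_z) / ci

def c_correction(radiance, theta_z, ci, c):
    """C-correction: L_h = L_t * (cos(theta_z) + c) / (cos(i) + c)."""
    return radiance * (np.cos(theta_z) + c) / (ci + c)

def minnaert_correction(radiance, theta_z, ci, k):
    """Simple Minnaert variant: L_h = L_t * (cos(theta_z) / cos(i))**k."""
    return radiance * (np.cos(theta_z) / ci) ** k
```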
Renal Parenchymal Area Growth Curves for Children 0 to 10 Months Old.
Fischer, Katherine; Li, Chunming; Wang, Huixuan; Song, Yihua; Furth, Susan; Tasian, Gregory E
2016-04-01
Low renal parenchymal area, which is the gross area of the kidney in maximal longitudinal length minus the area of the collecting system, has been associated with increased risk of end stage renal disease during childhood in boys with posterior urethral valves. To our knowledge normal values do not exist. We aimed to increase the clinical usefulness of this measure by defining normal renal parenchymal area during infancy. In a cross-sectional study of children with prenatally detected mild unilateral hydronephrosis who were evaluated between 2000 and 2012 we measured the renal parenchymal area of normal kidney(s) opposite the kidney with mild hydronephrosis. Measurement was done with ultrasound from birth to post-gestational age 10 months. We used the LMS method to construct unilateral, bilateral, side and gender stratified normalized centile curves. We determined the z-score and the centile of a total renal parenchymal area of 12.4 cm(2) at post-gestational age 1 to 2 weeks, which has been associated with an increased risk of kidney failure before age 18 years in boys with posterior urethral valves. A total of 975 normal kidneys of children 0 to 10 months old were used to create renal parenchymal area centile curves. At the 97th centile for unilateral and single stratified curves the estimated margin of error was 4.4% to 8.8%. For bilateral and double stratified curves the estimated margin of error at the 97th centile was 6.6% to 13.2%. Total renal parenchymal area less than 12.4 cm(2) at post-gestational age 1 to 2 weeks had a z-score of -1.96 and fell at the 3rd percentile. These normal renal parenchymal area curves may be used to track kidney growth in infants and identify those at risk for chronic kidney disease progression. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
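The LMS machinery used to place an individual measurement on such centile curves is standard (Cole's method). In the sketch below, the L, M, S values are purely illustrative, chosen only so that 12.4 cm2 lands at z = -1.96 as reported; they are not the published parameters:

```python
import math

def lms_z_score(x, L, M, S):
    """Cole's LMS transformation for centile curves:
    z = ((x/M)**L - 1) / (L*S) for L != 0, and ln(x/M)/S when L == 0."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# Hypothetical parameters: with L = 1 this reduces to (x - M) / (M * S),
# a plain z-score with mean M and coefficient of variation S.
print(round(lms_z_score(12.4, L=1.0, M=17.4, S=0.1466), 2))  # ~ -1.96
```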
Kanda, Hiroyuki; Morimoto, Takeshi; Fujikado, Takashi; Tano, Yasuo; Fukuda, Yutaka; Sawai, Hajime
2004-02-01
Assessment of a novel method of retinal stimulation, known as suprachoroidal-transretinal stimulation (STS), which was designed to minimize insult to the retina by implantation of stimulating electrodes for artificial vision. In 17 normal hooded rats and 12 Royal College of Surgeons (RCS) rats, a small area of the retina was focally stimulated with electric currents through an anode placed on the fenestrated sclera and a cathode inserted into the vitreous chamber. Evoked potentials (EPs) in response to STS were recorded from the surface of the superior colliculus (SC) with a silver-ball electrode, and their physiological properties and localization were studied. In both normal and RCS rats, STS elicited triphasic EPs that were vastly diminished by changing polarity of stimulating electrodes and abolished by transecting the optic nerve. The threshold intensity (C) of the EP response to STS was approximately 7.2 +/- 2.8 nC in normal and 12.9 +/- 7.7 nC in RCS rats. The responses to minimal STS were localized in an area on the SC surface measuring 0.12 +/- 0.07 mm(2) in normal rats and 0.24 +/- 0.12 mm(2) in RCS rats. The responsive area corresponded retinotopically to the retinal region immediately beneath the anodic stimulating electrode. STS is less invasive in the retina than stimulation through epiretinal or subretinal implants. STS can generate focal excitation in retinal ganglion cells in normal animals and in those with degenerated photoreceptors, which suggests that this method of retinal stimulation is suitable for artificial vision.
NASA Astrophysics Data System (ADS)
Kinkingnehun, Serge R. J.; du Boisgueheneuc, Foucaud; Golmard, Jean-Louis; Zhang, Sandy X.; Levy, Richard; Dubois, Bruno
2004-04-01
We have developed a new technique to analyze correlations between brain anatomy and its neurological functions. The technique is based on the anatomic MRI of patients with brain lesions who are administered neuropsychological tests. Brain lesions of the MRI scans are first manually segmented. The MRI volumes are then normalized to a reference map, using the segmented area as a mask. After normalization, the brain lesions of the MRI are segmented again in order to redefine the border of the lesions in the context of the normalized brain. Once the MRI is segmented, the patient's score on the neuropsychological test is assigned to each voxel in the lesioned area, while the rest of the voxels of the image are set to 0. Subsequently, the individual patient's MRI images are superimposed, and each voxel is reassigned the average score of the patients who have a lesion at that voxel. A threshold is applied to remove regions having less than three overlaps. This process leads to an anatomo-functional map that links brain areas to functional loss. Other maps can be created to aid in analyzing the functional maps, such as one that indicates the 95% confidence interval of the averaged scores for each area. This anatomo-clinical overlapping map (AnaCOM) method was used to obtain functional maps from patients with lesions in the superior frontal gyrus. By finding particular subregions more responsible for a particular deficit, this method can generate new hypotheses to be tested by conventional group methods.
Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.
Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B
2015-09-01
Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image in different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal CXRs and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, the average Jaccard overlap significantly increased with normalization from 0.72±0.30 and 0.87±0.11 for the two reference methods. The second experiment was aimed at segmentation of the clavicles. The reference methods had average Jaccard indices of 0.57±0.26 and 0.53±0.26; with normalization this significantly increased. The third experiment was detection of tuberculosis-related abnormalities in the lung fields. The average area under the receiver operating curve increased significantly with normalization from 0.72±0.14 and 0.79±0.06 for the reference methods. We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
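A single-pass, global sketch of the band-energy idea (the paper applies it locally within the lung fields and iteratively; the sigmas and reference energies below are placeholders):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def energy_normalize(image, ref_energies, sigmas=(1, 2, 4, 8, 16)):
    """Split the image into difference-of-Gaussian bands, scale each
    band so its energy (std) matches a reference value, reconstruct.
    ref_energies: one target energy per band, e.g. measured on a
    reference image. Global, single-pass illustration only."""
    low = image.astype(float)
    bands = []
    for s in sigmas:
        smooth = gaussian_filter(low, s)
        bands.append(low - smooth)
        low = smooth
    out = low.copy()  # residual low-pass band is kept as-is
    for band, ref in zip(bands, ref_energies):
        energy = band.std() + 1e-12
        out += band * (ref / energy)
    return out
```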
EEG Alpha and Beta Activity in Normal and Deaf Subjects.
ERIC Educational Resources Information Center
Waldron, Manjula; And Others
Electroencephalogram and task performance data were collected from three groups of young adult males: profoundly deaf Ss who signed from an early age, profoundly deaf Ss who only used oral (speech and speedreading) methods of communication, and normal hearing Ss. Alpha and Beta brain wave patterns over the Wernicke's area were compared across…
Normalized Temperature Contrast Processing in Flash Infrared Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2016-01-01
The paper presents a further development of the normalized contrast processing for the flash infrared thermography method given by the author in US 8,577,120 B1. Methods of computing normalized image or pixel intensity contrast and normalized temperature contrast are provided, including converting one from the other. Methods of assessing the emissivity of the object, afterglow heat flux, reflection temperature change and temperature video imaging during flash thermography are also provided. Temperature imaging and normalized temperature contrast imaging provide certain advantages over pixel intensity normalized contrast processing by reducing the effect of reflected energy in images and measurements, providing better quantitative data. The subject matter of this paper mostly comes from US 9,066,028 B1 by the author. Examples of normalized image processing video images and normalized temperature processing video images are provided. Examples of surface temperature video images, surface temperature rise video images and simple contrast video images are also provided. Temperature video imaging in flash infrared thermography allows better comparison with flash thermography simulation using commercial software, which provides temperature video as the output. Temperature imaging also allows easy comparison of the surface temperature change with the camera temperature sensitivity, or noise equivalent temperature difference (NETD), to assess the probability of detection (POD) of anomalies.
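The basic form of normalized contrast divides a pixel's post-flash response by that of a reference region; a hedged sketch only (the cited patents add emissivity, afterglow and reflection-temperature corrections that are omitted here, and the same expression applies to temperature signals in place of intensities):

```python
import numpy as np

def normalized_contrast(pixel_ts, ref_ts, pre_frames=5):
    """Basic normalized contrast: the pixel's post-flash rise divided
    by the rise of a reference (sound-area) region, both measured from
    the pre-flash baseline. pixel_ts/ref_ts: 1-D time series."""
    pixel_ts = np.asarray(pixel_ts, dtype=float)
    ref_ts = np.asarray(ref_ts, dtype=float)
    pixel_rise = pixel_ts - pixel_ts[:pre_frames].mean()
    ref_rise = ref_ts - ref_ts[:pre_frames].mean()
    # Guard against division by a near-zero reference rise.
    return pixel_rise / np.where(np.abs(ref_rise) < 1e-9, np.nan, ref_rise)
```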
Karan, Shivesh Kishore; Samadder, Sukha Ranjan; Maiti, Subodh Kumar
2016-11-01
The objective of the present study is to monitor reclamation activity in mining areas. Monitoring of reclaimed sites in the vicinity of mining areas and on closed Over Burden (OB) dumps is critical for improving the overall environmental condition, especially in developing countries where the areas around the mines are densely populated. The present study evaluated reclamation success in the Block II area of the Jharia coal field, India, using Landsat satellite images for the years 2000 and 2015. Four image processing methods (support vector machine, ratio vegetation index, enhanced vegetation index, and normalized difference vegetation index) were used to quantify the change in vegetation cover between the years 2000 and 2015. The study also evaluated the relationship between vegetation health and moisture content of the study area using remote sensing techniques. Statistical linear regression analysis revealed that the Normalized Difference Vegetation Index (NDVI) coupled with the Normalized Difference Moisture Index (NDMI) is the best method for vegetation monitoring in the study area when compared to the other indices. A strong linear relationship (r(2) > 0.86) was found between NDVI and NDMI. An increase of 21%, from 213.88 ha in 2000 to 258.9 ha in 2015, was observed in the vegetation cover of the reclaimed sites of an open cast mine, indicating satisfactory reclamation activity. NDVI results indicated that vegetation health also improved over the years. Copyright © 2016 Elsevier Ltd. All rights reserved.
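Both indices are standard band ratios; a sketch of their computation and of the kind of linear fit behind the reported r(2) (the random arrays are placeholders, not the study's rasters):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red)/(NIR + Red)."""
    return (nir - red) / (nir + red + 1e-12)

def ndmi(nir, swir):
    """Normalized Difference Moisture Index: (NIR - SWIR)/(NIR + SWIR)."""
    return (nir - swir) / (nir + swir + 1e-12)

# Quantifying an NDVI-NDMI relationship with a linear regression, as in
# the study; placeholder inputs, so r2 here is meaningless.
v = ndvi(np.random.rand(100), np.random.rand(100))
m = ndmi(np.random.rand(100), np.random.rand(100))
slope, intercept = np.polyfit(m, v, 1)
r2 = np.corrcoef(m, v)[0, 1] ** 2
```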
NASA Technical Reports Server (NTRS)
Werner, Charles L.; Wegmueller, Urs; Small, David L.; Rosen, Paul A.
1994-01-01
Terrain slopes, which can be measured with Synthetic Aperture Radar (SAR) interferometry either from a height map or from the interferometric phase gradient, were used to calculate the local incidence angle and the correct pixel area. Both are required for correct thematic interpretation of SAR data. The interferometric correlation depends on the pixel area projected on a plane perpendicular to the look vector and requires correction for slope effects. Methods for normalization of the backscatter and interferometric correlation for ERS-1 SAR are presented.
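A first-order sketch of the slope correction described, simplified to the range-direction geometry (the paper works from the full interferometric height map or phase gradient):

```python
import numpy as np

def local_incidence_angle(look_angle, slope_range):
    """First-order local incidence angle: nominal look angle minus the
    terrain slope toward the radar (radians). A simplification of the
    full vector geometry."""
    return look_angle - slope_range

def normalize_sigma0(sigma0, look_angle, slope_range):
    """Correct backscatter for the slope-induced change in the pixel
    area projected perpendicular to the look vector: flat terrain scales
    with sin(look angle), sloped terrain with sin(local incidence angle).
    A hedged sketch of the kind of normalization described for ERS-1,
    not the authors' exact implementation."""
    theta_loc = local_incidence_angle(look_angle, slope_range)
    return sigma0 * np.sin(theta_loc) / np.sin(look_angle)
```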
Comparison of the calculation QRS angle for bundle branch block detection
NASA Astrophysics Data System (ADS)
Goeirmanto, L.; Mengko, R.; Rajab, T. L.
2016-04-01
The QRS angle represents the condition of blood circulation in the heart. Normally the QRS angle lies between -30 and 90 degrees. Left Axis Deviation (LAD) and Right Axis Deviation (RAD) are abnormal conditions that can indicate bundle branch block. The QRS angle was calculated using the common method used by physicians and compared to mathematical methods using amplitude differences and area differences. We analyzed standard 12-lead electrocardiogram data from the MIT-BIH PhysioBank database. All methods using lead I and lead aVF produce similar QRS angles and the correct QRS axis quadrant. The QRS angle from the mathematical method using area differences is closest to the common method used by physicians. The mathematical method using area differences can be used as a trigger for detecting heart conditions.
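Because lead aVF is (nearly) perpendicular to lead I, the frontal-plane axis follows from the two projections; a sketch covering both the amplitude-difference and area-difference variants (our reading of the methods, not the authors' code):

```python
import math
import numpy as np

def qrs_axis_deg(lead_i, lead_avf, use_area=False):
    """Frontal-plane QRS axis from the QRS samples (numpy arrays) of
    leads I and aVF. The projection on each lead is taken either as the
    net amplitude (positive peak plus negative peak, approximating
    R - S) or as the net signed area of the QRS complex."""
    if use_area:
        x, y = np.trapz(lead_i), np.trapz(lead_avf)  # signed areas
    else:
        x, y = lead_i.max() + lead_i.min(), lead_avf.max() + lead_avf.min()
    return math.degrees(math.atan2(y, x))  # normal axis: -30 to +90 deg
```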
Structure and Function in Patients with Glaucomatous Defects Near Fixation
Shafi, Asifa; Swanson, William H.; Dul, Mitchell W.
2010-01-01
Purpose: To assess relations between perimetric sensitivity and neuroretinal rim area using high-resolution perimetric mapping in patients with glaucomatous defects within 10 degrees of fixation. Methods: One eye was tested in each of 31 patients with open angle glaucoma enrolled in a prospective study of perimetric defects within 10 degrees of fixation. Norms were derived from 110 control subjects free of eye disease, aged 21 to 81. Perimetric sensitivity was measured using the 10-2 test pattern with the SITA Standard algorithm (HFAII-i, Carl Zeiss Meditec), stimulus size III. The area of the temporal neuroretinal rim was measured using the Heidelberg Retinal Tomograph (HRT III). Decibel (dB) values were converted into linear units of contrast sensitivity averaged across locations corresponding to the temporal rim sector. Both measures were expressed as percent of mean normal, and the Bland-Altman method was used to assess agreement. Perimetric locations corresponding to the temporal sector were determined for six different optic nerve maps. Results: Contrast sensitivity was moderately correlated with temporal rim area (r2 > 30%, p < 0.005). For all six optic nerve maps, Bland-Altman analysis found good agreement between perimetric sensitivity and rim area with both measures expressed as fractions of mean normal, and confidence limits for agreement that were consistent with normal between-subject variability in control eyes. Conclusions: Using high-resolution perimetric mapping in patients with scotomas within 10° of fixation, we confirmed findings of linear relations between perimetric sensitivity and area of the temporal neuroretinal rim, and showed that the confidence limits for agreement in patients with glaucoma were consistent with normal between-subject variability. PMID:20935585
Masjedi, Milad; Marquardt, Charles S; Drummond, Isabella M H; Harris, Simon J; Cobb, Justin P
2013-03-01
Cam hips are commonly quantified using the two-dimensional α angle. The accuracy of this measurement may be affected by patient position and the technician's experience. In this paper, we describe a method of measurement that provides a quantitative definition of cam hips based upon three-dimensional computed tomography (CT) images. CT scans of 47 femurs (24 cam, 23 normal) were segmented. A sphere was fitted to the articulating surface of the femoral head, the radius (r) recorded, and the femoral neck axis obtained. The cross-sectional area at four locations spanning the head-neck junction (r/4, r/2, 3r/4 and r), perpendicular to the neck axis, was measured. The ratios (Neck/Head) between the areas at each cut and the surface area at the head centre were calculated and aggregated. Normal and cam hips were significantly different: the sum of the head-neck ratios (HNRs) of the cam hips was always smaller than that of normal hips (p < 0.01). A cut-off point of 2.55 with no overlap was found between the two groups, with HNRs larger than this being cam hips, and smaller being normal ones. Owing to its sensitivity and repeatability, the method could be used to confirm or refute the clinical diagnosis of a cam hip. Furthermore, it can be used as a tool to measure the outcome of cam surgery.
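The aggregation itself is simple; a sketch following the abstract's description (the head-centre area is taken as the fitted sphere's great-circle area, pi*r^2, which the abstract implies but does not state):

```python
import math

def head_neck_ratio_sum(neck_areas, head_radius):
    """Sum of Neck/Head area ratios: cross-sectional areas measured
    perpendicular to the neck axis at r/4, r/2, 3r/4 and r, each
    divided by the head-centre cross-sectional area (assumed here to be
    pi*r**2 for the fitted sphere of radius r)."""
    head_area = math.pi * head_radius ** 2
    return sum(a / head_area for a in neck_areas)

# The study reports a cut-off of 2.55 separating cam from normal hips
# with no overlap between the two groups. Areas in mm2, radius in mm.
print(round(head_neck_ratio_sum([450.0, 520.0, 610.0, 700.0], 17.0), 2))
```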
Sibling experiences after a major childhood burn injury.
Lehna, Carlee
2010-01-01
The purpose of this research project was to understand, primarily from the sibling perspective, the effect of a child's major burn injury on his or her sibling. A mixed-method, qualitative-dominant design was implemented, using the life story method for the qualitative portion. Additionally, the Sibling Relationship Questionnaire-Revised (SRQ-R) was used as a structured interview guide and for calculating scoring data to explore the sibling relationship factors of warmth/closeness, rivalry, conflict, and relative status/power. Participants from 22 family cases (one or multiple family members), 40 individuals in all, were interviewed. To capture the impact on the family over time, interviews began a minimum of two years post-burn. The central thematic pattern for the sibling relationship in families having a child with a major burn injury was that of normalization. Two components of normalization were described: areas of normalization and the process of adjustment. Areas of normalization were found in play and other activities, in school and work, and in family relations with siblings. The process of adjustment was varied and often gradual, involved school and work re-entry, and in some instances seemed to change life perspective. Clinical implications in providing family-centered care can focus on promoting normalization by assessing and supporting siblings, who may only occasionally be seen in the hospital or clinic.
Land cover classification of Landsat 8 satellite data based on Fuzzy Logic approach
NASA Astrophysics Data System (ADS)
Taufik, Afirah; Sakinah Syed Ahmad, Sharifah
2016-06-01
The aim of this paper is to propose a method to classify the land cover of a satellite image based on a fuzzy rule-based system approach. The study uses Landsat 8 bands and derived indices, such as the Normalized Difference Water Index (NDWI), Normalized Difference Built-up Index (NDBI) and Normalized Difference Vegetation Index (NDVI), as input for the fuzzy inference system. The three selected indices represent our three main classes, called water, built-up land, and vegetation. The combination of the original multispectral bands and selected indices provides more information about the image. The parameter selection of the fuzzy memberships is performed using a supervised method known as ANFIS (adaptive neuro-fuzzy inference system) training. The fuzzy system is tested for classification on a land cover image that covers the Klang Valley area. The results showed that the fuzzy system approach is effective and can be explored and implemented for other areas using Landsat data.
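A brief sketch of the three indices computed from Landsat 8 OLI bands, assuming the common formulations (McFeeters' NDWI, Zha's NDBI, standard NDVI); the paper does not spell out its band choices, so the band assignments here are assumptions:

```python
import numpy as np

def normalized_difference(a, b):
    """Generic normalized difference index, (a - b) / (a + b)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return (a - b) / (a + b + 1e-12)  # small epsilon avoids division by zero

# Assumed Landsat 8 OLI bands: 3 = green, 4 = red, 5 = NIR, 6 = SWIR-1.
def ndvi(red, nir):   return normalized_difference(nir, red)
def ndwi(green, nir): return normalized_difference(green, nir)
def ndbi(swir1, nir): return normalized_difference(swir1, nir)
```

The three index images would then feed the fuzzy inference system, with ANFIS fitting the membership function parameters from training samples.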
Drug Use Normalization: A Systematic and Critical Mixed-Methods Review.
Sznitman, Sharon R; Taubman, Danielle S
2016-09-01
Drug use normalization, which is a process whereby drug use becomes less stigmatized and more accepted as normative behavior, provides a conceptual framework for understanding contemporary drug issues and changes in drug use trends. Through a mixed-methods systematic review of the normalization literature, this article seeks to (a) critically examine how the normalization framework has been applied in empirical research and (b) make recommendations for future research in this area. Twenty quantitative, 26 qualitative, and 4 mixed-methods studies were identified through five electronic databases and reference lists of published studies. Studies were assessed for relevance, study characteristics, quality, and aspects of normalization examined. None of the studies applied the most rigorous research design (experiments) or examined all of the originally proposed normalization dimensions. The most commonly assessed dimension of drug use normalization was "experimentation." In addition to the original dimensions, the review identified the following new normalization dimensions in the literature: (a) breakdown of demographic boundaries and other risk factors in relation to drug use; (b) de-normalization; (c) drug use as a means to achieve normal goals; and (d) two broad forms of micro-politics associated with managing the stigma of illicit drug use: assimilative and transformational normalization. Further development in normalization theory and methodology promises to provide researchers with a novel framework for improving our understanding of drug use in contemporary society. Specifically, quasi-experimental designs that are currently being made feasible by swift changes in cannabis policy provide researchers with new and improved opportunities to examine normalization processes.
Shadow Areas Robust Matching Among Image Sequence in Planetary Landing
NASA Astrophysics Data System (ADS)
Ruoyan, Wei; Xiaogang, Ruan; Naigong, Yu; Xiaoqing, Zhu; Jia, Lin
2017-01-01
In this paper, an approach for robustly matching shadow areas in autonomous visual navigation and planetary landing is proposed. The approach begins by detecting shadow areas, which are extracted by Maximally Stable Extremal Regions (MSER). Then, an affine normalization algorithm is applied to normalize the areas. Thirdly, a SIFT-derived descriptor called Multiple Angles-SIFT (MA-SIFT) is proposed, which can extract more features from an area. Finally, to eliminate the influence of outliers, an improved RANSAC method based on the Skinner Operation Condition is proposed to extract inliers. A series of experiments was conducted to test the performance of the proposed approach; the results show that it can maintain matching accuracy at a high level even when the differences among the images are obvious and no attitude measurements are supplied.
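A rough OpenCV sketch of the pipeline's skeleton; plain SIFT and OpenCV's standard RANSAC stand in for the paper's MA-SIFT descriptor and improved RANSAC, which are not standard library components, and the 0.75 ratio-test threshold is an assumption:

```python
import cv2
import numpy as np

def match_shadow_areas(img1, img2):
    """img1, img2: 8-bit grayscale images. Detect MSER (candidate shadow)
    regions, describe keypoints inside them with SIFT, and reject outlier
    matches with RANSAC. Assumes enough matches (>= 4) for a homography."""
    mser = cv2.MSER_create()
    sift = cv2.SIFT_create()

    def region_mask(img):
        # Binary mask covering all MSER regions of the image.
        mask = np.zeros(img.shape, np.uint8)
        regions, _ = mser.detectRegions(img)
        for pts in regions:           # pts is an N x 2 array of (x, y) points
            mask[pts[:, 1], pts[:, 0]] = 255
        return mask

    kp1, des1 = sift.detectAndCompute(img1, region_mask(img1))
    kp2, des2 = sift.detectAndCompute(img2, region_mask(img2))
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, int(inlier_mask.sum())
```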
NASA Astrophysics Data System (ADS)
Lee, Min Jin; Hong, Helen; Shim, Kyu Won; Kim, Yong Oock
2017-03-01
This paper proposes morphological descriptors representing the degree of skull deformity for craniosynostosis in head CT images, and a hierarchical classifier model distinguishing among normal skulls and different types of craniosynostosis. First, to compare the deformity surface model with a mean normal surface model, mean normal surface models are generated for each age range, and the mean normal surface model is deformed to the deformity surface model via multi-level, three-stage registration. Second, four shape features, including local distance and area ratio indices, are extracted for each of the five cranial bones. Finally, a hierarchical SVM classifier is proposed to distinguish between normal and deformed skulls. As a result, the proposed method showed improved classification results compared to the traditional cranial index. Our method can be used for the early diagnosis, surgical planning and postsurgical assessment of craniosynostosis, as well as quantitative analysis of skull deformity.
Gondim Teixeira, Pedro Augusto; Leplat, Christophe; Chen, Bailiang; De Verbizier, Jacques; Beaumont, Marine; Badr, Sammy; Cotten, Anne; Blum, Alain
2017-12-01
To evaluate intra-tumour and striated muscle T1 value heterogeneity and the influence of different methods of T1 estimation on the variability of quantitative perfusion parameters. Eighty-two patients with a histologically confirmed musculoskeletal tumour were prospectively included in this study and, with ethics committee approval, underwent contrast-enhanced MR perfusion and T1 mapping. T1 value variations in viable tumour areas and in normal-appearing striated muscle were assessed. In 20 cases, normal muscle perfusion parameters were calculated using three different methods: signal based, and gadolinium-concentration based with fixed and with variable T1 values. Tumour and normal muscle T1 values were significantly different (p = 0.0008). T1 value heterogeneity was higher in tumours than in normal muscle (variation of 19.8% versus 13%). The T1 estimation method had a considerable influence on the variability of perfusion parameters. Fixed T1 values yielded higher coefficients of variation than variable T1 values (mean 109.6 ± 41.8% and 58.3 ± 14.1%, respectively). The area under the curve was the least variable parameter (36%). T1 values in musculoskeletal tumours are significantly different from and more heterogeneous than those of normal muscle. Patient-specific T1 estimation is needed for direct inter-patient comparison of perfusion parameters. • T1 value variation in musculoskeletal tumours is considerable. • T1 values in muscle and tumours are significantly different. • Patient-specific T1 estimation is needed for inter-patient comparison of perfusion parameters. • Technical variation is higher in permeability parameters than in semiquantitative perfusion parameters.
NASA Astrophysics Data System (ADS)
Yehia, Ali M.; Abd El-Rahman, Mohamed K.
2015-03-01
Normalized spectra have great power in resolving the spectral overlap of the challenging Orphenadrine (ORP) and Paracetamol (PAR) binary mixture. Four smart techniques utilizing normalized spectra were used in this work, namely amplitude modulation (AM), simultaneous area ratio subtraction (SARS), simultaneous derivative spectrophotometry (S1DD) and the ratio H-point standard addition method (RHPSAM). In AM, the peak amplitude at 221.6 nm of the division spectra was measured for the determination of both ORP and PAR. In SARS, the concentration of ORP was determined using the area under the curve from 215 nm to 222 nm of the regenerated ORP zero-order absorption spectra. In S1DD, the concentration of ORP was determined using the peak amplitude at 224 nm of the first-derivative ratio spectra. In these three methods, the PAR concentration was determined directly at 288 nm in the division spectra obtained during the manipulation steps. The last method, RHPSAM, is a dual-wavelength method in which two calibrations were plotted at 216 nm and 226 nm. The RH point is the intersection of the two calibration lines, and ORP and PAR concentrations were determined directly from the coordinates of the RH point. The proposed methods were applied successfully to the determination of ORP and PAR in their dosage form.
NASA Astrophysics Data System (ADS)
Prasetyo, Yudo; Ardi Gunawan, Setyo; Maksum, Zia Ul
2016-11-01
Semarang is the biggest city in central Java, Indonesia, and is currently undergoing rapid and massive infrastructure development. In order to control water resources and flooding, the local government has built east and west flood canals on the Kaligarang and West Semarang Rivers. One of the main problems in Semarang city is the lack of fresh water in the dry season, because groundwater is not recharged well. Groundwater recharge ability depends on the underground water recharge rate and the condition of the catchment area. The objective of the study is to determine the condition and classification of the water catchment area in Semarang city. The catchment area conditions are determined by five parameters: soil type, land use, slope, groundwater potential and rainfall intensity. In this study, we use three methods to solve the problem: segmentation classification to acquire land use classes from high-resolution imagery using a nearest-neighbour algorithm, Interferometric Synthetic Aperture Radar (InSAR) to derive a DTM from SAR imagery, and multi-criteria weighting with spatial analysis using GIS. Three types of images (ALOS PRISM, SPOT-6 and ALOS PALSAR) were used to calculate the water catchment area condition in Semarang city. As a final result, this research divides the water catchment into six classes: good, naturally normal, early critical, slightly critical, critical and very critical. The results show that the water catchment area is in an early critical condition over around 2607.523 ha (33.17%), naturally normal condition over around 1507.674 ha (19.18%), slightly critical condition over around 1452.931 ha (18.48%), good condition over 1157.04 ha (14.72%), critical condition over 1058.639 ha (13.47%) and very critical condition over 75.0387 ha (0.95%). The distribution of water catchment area conditions in the West and East Flood Canals has an irregular pattern: the northern area of the watershed consists of early critical, naturally normal and good conditions, while the southern area consists of slightly critical, critical and very critical conditions.
Automated detection system for pulmonary emphysema on 3D chest CT images
NASA Astrophysics Data System (ADS)
Hara, Takeshi; Yamamoto, Akira; Zhou, Xiangrong; Iwano, Shingo; Itoh, Shigeki; Fujita, Hiroshi; Ishigaki, Takeo
2004-05-01
An automatic extraction of the pulmonary emphysema area on 3-D chest CT images was performed using an adaptive thresholding technique. We proposed a method to estimate the ratio of the emphysema area to the whole lung volume. We employed 32 cases (15 normal and 17 abnormal) which had already been diagnosed by radiologists prior to the study. The ratio in all the normal cases was less than 0.02, and in abnormal cases it ranged from 0.01 to 0.26. The effectiveness of our approach was confirmed through the results of the present study.
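A minimal sketch of the emphysema-ratio idea; the paper's threshold is adaptive, so the fixed -950 HU cut-off below is only an illustrative assumption, as is the input format:

```python
import numpy as np

def emphysema_ratio(lung_hu, threshold=-950):
    """Fraction of segmented lung voxels below a low-attenuation threshold.
    lung_hu: 1-D array of Hounsfield units for voxels inside the lung mask.
    -950 HU is a common fixed choice, used here purely for illustration."""
    lung_hu = np.asarray(lung_hu)
    return float((lung_hu < threshold).sum()) / lung_hu.size

# A ratio above ~0.02 would fall outside the range reported for normal cases.
```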
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janjai, Serm
In order to investigate a potential use of concentrating solar power technologies and select an optimum site for these technologies, it is necessary to obtain information on the geographical distribution of direct normal solar irradiation over an area of interest. In this work, we have developed a method for estimating direct normal irradiation from satellite data for a tropical environment. The method starts with the estimation of global irradiation on a horizontal surface from MTSAT-1R satellite data and other ground-based ancillary data. Then a satellite-based diffuse fraction model was developed and used to estimate the diffuse component of the satellite-derived global irradiation. Based on this estimated global and diffuse irradiation and the solar radiation incident angle, the direct normal irradiation was finally calculated. To evaluate its performance, the method was used to estimate the monthly average hourly direct normal irradiation at seven pyrheliometer stations in Thailand. It was found that values of monthly average hourly direct normal irradiation from the measurements and those estimated from the proposed method are in reasonable agreement, with a root mean square difference of 16% and a mean bias of -1.6%, with respect to mean measured values. After the validation, this method was used to estimate the monthly average hourly direct normal irradiation over Thailand by using MTSAT-1R satellite data for the period from June 2005 to December 2008. Results from the calculation were displayed as hourly and yearly irradiation maps. These maps reveal that the direct normal irradiation in Thailand was strongly affected by the tropical monsoons and local topography of the country.
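A sketch of the final calculation step under the usual geometry, assuming the "solar radiation incident angle" for a horizontal surface is the solar zenith angle; the numbers in the example are hypothetical:

```python
import numpy as np

def direct_normal_irradiation(global_horiz, diffuse_fraction, zenith_deg):
    """Direct normal irradiation from global horizontal irradiation G,
    an estimated diffuse fraction kd, and the solar zenith angle:
    DNI = (G - D) / cos(zenith), with D = kd * G."""
    beam_horiz = global_horiz * (1.0 - diffuse_fraction)
    return beam_horiz / np.cos(np.radians(zenith_deg))

# e.g. G = 650 W/m^2, diffuse fraction 0.45, zenith 35 degrees (hypothetical)
print(direct_normal_irradiation(650.0, 0.45, 35.0))  # ~436 W/m^2
```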
Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin
2015-12-01
Data distributions are usually severely skewed by the presence of hot spots in contaminated sites, which causes difficulties for accurate geostatistical data transformation. Three typical normal distribution transformation methods, termed the normal score, Johnson, and Box-Cox transformations, were applied to benzo(b)fluoranthene data from a large-scale coking plant-contaminated site in north China to compare their effects on spatial interpolation. All three transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene distribution, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging had a minimum root-mean-square error of 1.17 and a mean error of 0.19, and was thus more accurate than the other two models. The areas with fewer sampling points and those with high levels of contamination showed the largest prediction standard errors on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy of delineating remediation boundaries.
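A short illustration of two of the three transformations on synthetic skewed data (the Johnson transformation needs a distribution-family fit and is omitted here); all data below are simulated, not from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.2, size=500)  # skewed, hot-spot-like data

# Box-Cox: power transform with maximum-likelihood lambda (requires x > 0)
x_bc, lam = stats.boxcox(x)

# Normal score: replace each value by the normal quantile of its rank
ranks = stats.rankdata(x)
x_ns = stats.norm.ppf((ranks - 0.5) / len(x))

print(stats.skew(x), stats.skew(x_bc), stats.skew(x_ns))  # skewness shrinks
```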
Color normalization of histology slides using graph regularized sparse NMF
NASA Astrophysics Data System (ADS)
Sha, Lingdao; Schonfeld, Dan; Sethi, Amit
2017-03-01
Computer-based automatic medical image processing and quantification are becoming popular in digital pathology. However, preparation of histology slides can vary widely due to differences in staining equipment, procedures and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised color normalization methods, unsupervised methods have the advantages of time and cost efficiency and universal applicability. Most of the unsupervised color normalization methods for histology are based on stain separation. Based on the fact that stain concentration cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and in particular its sparse version (SNMF), are good candidates for stain separation. However, most of the existing unsupervised color normalization methods, such as PCA, ICA, NMF and SNMF, fail to consider important information about the sparse manifolds that the pixels occupy, which can result in loss of texture information during color normalization. Manifold learning methods like the graph Laplacian have proven to be very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior of stain concentration together with manifold information from the high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in keeping connected texture information. To utilize the texture information, we construct a nearest neighbor graph between pixels within a spatial area of an image based on their distances using a heat kernel in lαβ space. The representation of a pixel in the stain density space is constrained to follow the feature distances of the pixel to the pixels in the neighborhood graph. Utilizing a color matrix transfer method with the stain concentrations found using our GSNMF method, the color normalization performance was also better than that of existing methods.
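As a baseline for comparison only, a plain sparse NMF stain separation in optical-density space can be sketched with scikit-learn (>= 1.0 API); the graph-Laplacian term that makes GSNMF different is not included, and the penalty weights are arbitrary assumptions:

```python
import numpy as np
from sklearn.decomposition import NMF

def stain_separation(rgb, n_stains=2):
    """Baseline sparse NMF stain separation.
    rgb: H x W x 3 uint8 image of a stained slide (hypothetical input).
    Works on optical density via the Beer-Lambert relation OD = -log(I/I0)."""
    od = -np.log((rgb.reshape(-1, 3).astype(float) + 1.0) / 256.0)
    model = NMF(n_components=n_stains, init="nndsvd", max_iter=500,
                l1_ratio=0.5, alpha_W=0.1, alpha_H=0.1)  # sparsity penalties
    concentrations = model.fit_transform(od)  # pixels x stains
    stain_colors = model.components_          # stains x 3, in OD space
    return concentrations, stain_colors
```

Color normalization would then transfer a target image's stain color matrix onto the source concentrations, as the abstract describes.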
Eye Dominance Predicts fMRI Signals in Human Retinotopic Cortex
Mendola, Janine D.; Conner, Ian P.
2009-01-01
There have been many attempts to define eye dominance in normal subjects, but limited consensus exists, and relevant physiological data is scarce. In this study, we consider two different behavioral methods for assignment of eye dominance, and how well they predict fMRI signals evoked by monocular stimulation. Sighting eye dominance was assessed with two standard tests, the Porta Test, and a ‘hole in hand’ variation of the Miles Test. Acuity dominance was tested with a standard eye chart and with a computerized test of grating acuity. We found limited agreement between the sighting and acuity methods for assigning dominance in our individual subjects. We then compared the fMRI response generated by dominant eye stimulation to that generated by non-dominant eye, according to both methods, in 7 normal subjects. The stimulus consisted of a high contrast hemifield stimulus alternating with no stimulus in a blocked paradigm. In separate scans, we used standard techniques to label the borders of visual areas V1, V2, V3, VP, V4, V3A, and MT. These regions of interest (ROIs) were used to analyze each visual area separately. We found that percent change in fMRI BOLD signal was stronger for the dominant eye as defined by the acuity method, and this effect was significant for areas located in the ventral occipital territory (V1v, V2v, VP, V4). In contrast, assigning dominance based on sighting produced no significant interocular BOLD differences. We conclude that interocular BOLD differences in normal subjects exist, and may be predicted by acuity measures. PMID:17194544
Method for welding chromium molybdenum steels
Sikka, Vinod K.
1986-01-01
Chromium-molybdenum steels exhibit a weakening after welding in an area adjacent to the weld. This invention is an improved method for welding to eliminate the weakness by subjecting normalized steel to a partial temper prior to welding and subsequently fully tempering the welded article for optimum strength and ductility.
Tree planting - strip-mined area in Maryland
Fred L. Bagley
1980-01-01
This report is written to elucidate some of the problems encountered in the planting of trees on strip-mined areas in Maryland. When problems are recognized, normally a solution (or at least, an improvement) can be instituted to alleviate the problem. The methods cited herein are those of experienced foresters engaged in strip-mine planting during the past seventeen...
Spatial event cluster detection using an approximate normal distribution.
Torabi, Mahmoud; Rosychuk, Rhonda J
2008-12-12
In geographic surveillance of disease, areas with large numbers of disease cases are to be identified so that investigations of the causes of high disease rates can be pursued. Areas with high rates are called disease clusters, and statistical cluster detection tests are used to identify geographic areas with higher disease rates than expected by chance alone. Typically, cluster detection tests are applied to incident or prevalent cases of disease, but surveillance of disease-related events, where an individual may have multiple events, may also be of interest. Previously, a compound Poisson approach that detects clusters of events by testing individual areas that may be combined with their neighbours has been proposed. However, the relevant probabilities from the compound Poisson distribution are obtained from a recursion relation that can be cumbersome if the numbers of events are large or analyses by strata are performed. We propose a simpler approach that uses an approximate normal distribution. This method is very easy to implement and is applicable to situations where the population sizes are large and the population distribution by important strata may differ by area. We demonstrate the approach on pediatric self-inflicted injury presentations to emergency departments and compare the results for probabilities based on the recursion and the normal approach. We also implement a Monte Carlo simulation to study the performance of the proposed approach. In the self-inflicted injury data example, the normal approach identified thirteen clusters, twelve of which matched the twelve significant clusters detected by the compound Poisson approach. Through simulation studies, the normal approach well approximates the compound Poisson approach for a variety of different population sizes and case and event thresholds. A drawback of the compound Poisson approach is that the relevant probabilities must be determined through a recursion relation, and such calculations can be computationally intensive if the cluster size is relatively large or if analyses are conducted with strata variables. On the other hand, the normal approach is very flexible and easily implemented, and hence more appealing for users. Moreover, the concepts may be more easily conveyed to non-statisticians interested in understanding the methodology associated with cluster detection test results.
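A minimal sketch of the normal-approximation test for one area; the expected count and variance would come from the area's population strata, and the numbers below are hypothetical:

```python
from math import sqrt
from scipy.stats import norm

def cluster_p_value(observed_events, expected_events, var_events):
    """One-sided p-value that an area's event count exceeds expectation,
    using a normal approximation in place of the compound Poisson recursion."""
    z = (observed_events - expected_events) / sqrt(var_events)
    return norm.sf(z)  # upper-tail probability

# Hypothetical area: 58 events observed, 40 expected, variance 45
print(cluster_p_value(58, 40, 45))  # ~0.004, a candidate cluster
```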
Comparison of different methods of inter-eye asymmetry of rim area and disc area analysis
Fansi, A A K; Boisjoly, H; Chagnon, M; Harasymowycz, P J
2011-01-01
Purpose To describe different methods of inter-eye asymmetry of rim area (RA) to disc area (DA) asymmetry ratio (RADAAR) analysis. Methods This was an observational, descriptive, and cross-sectional study. Both the eyes of all participants underwent confocal scanning laser ophthalmoscopy (Heidelberg retina tomograph (HRT 3)), frequency-doubling technology perimetry (FDT), and complete ophthalmological examination. Based on ophthalmological clinical examination and FDT results of the worse eye, subjects were classified as either normal, possible glaucoma, and probable glaucoma or definitive glaucoma. RADAAR values were calculated based on stereometric HRT 3 values using different mathematical formulae. RADAAR-1 was calculated as a relative difference of rim and DAs between the eyes. RADAAR-2 was calculated by subtracting the value of rim to DA ratio of the smaller disc from the value of rim to DA ratio of the larger disc. RADAAR-3 was calculated by dividing the previous two values. Statistical analyses included ANOVA as well as Student t-tests. Results Data of 334 participants were analysed, 78 of which were classified as definitive glaucoma. RADAAR-1 values were significantly different between the four different groups of diagnosis (F=5.82; P<0.001). The 1st and 99th percentile limits of normality for RADAAR-1, RADAAR-2, and RADAAR-3 in normal group were, respectively, −10.64 and 8.4; −0.32 and 0.22; and 0.58 and 1.32. Conclusions RADAAR-1 seems to best distinguish between the diagnostic groups. Knowledge of RADAAR distribution in various diagnostic groups may aid in clinical diagnosis of asymmetric glaucomatous damage. PMID:21921945
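A sketch of the three RADAAR formulae as read from the abstract; the RADAAR-1 line is one plausible reading of "relative difference of rim and disc areas between the eyes" (expressed in percent to match the reported normal limits), so treat it as an assumption rather than the study's exact formula:

```python
def radaar_metrics(rim_small, disc_small, rim_large, disc_large):
    """Inter-eye rim-area/disc-area asymmetry measures.
    Inputs are HRT rim and disc areas for the smaller- and larger-disc eyes."""
    ratio_small = rim_small / disc_small  # rim/disc ratio, smaller-disc eye
    ratio_large = rim_large / disc_large  # rim/disc ratio, larger-disc eye
    radaar1 = 100.0 * (ratio_large - ratio_small) / ((ratio_large + ratio_small) / 2)
    radaar2 = ratio_large - ratio_small   # difference of the two ratios
    radaar3 = ratio_large / ratio_small   # quotient of the two ratios
    return radaar1, radaar2, radaar3

# Hypothetical areas in mm^2: (rim, disc) for each eye
print(radaar_metrics(1.40, 1.80, 1.45, 2.10))
```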
Bone structure studies with holographic interferometric nondestructive testing and x-ray methods
NASA Astrophysics Data System (ADS)
Silvennoinen, Raimo; Nygren, Kaarlo; Rouvinen, Juha; Petrova, Valentina V.
1994-02-01
Changes in the biomechanics and in the molecular texture and structure of isolated radioulnar bones of subadult European moose (Alces alces L.) collected in various environmentally polluted areas of Finland were investigated by means of holographic interferometric non-destructive testing (HNDT), radiological, morphometrical, and x-ray diffraction methods. By means of small caudal-cranial bending forces, the surface movements of the lower end (distal epiphysis) of the radial bone were recorded with the HNDT method. To study bone molecular texture and structure changes under external compressing forces, the samples for x-ray diffraction analysis were taken from the upper end of the ulnar bone (olecranon tip). Results showed that the bones obtained from the Harjavalta area and those of North Karelian moose showing malnutrition and healing femoral fractures produced different HNDT pictures compared with the four normally developed North Karelian moose. In the x-ray diffraction, the Harjavalta samples showed changes in molecular texture and structure compared with the samples from the apparently normal North Karelian animals.
Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas.
Liu, Yanpeng; Li, Yibin; Ma, Xin; Song, Rui
2017-03-29
In the pattern recognition domain, deep architectures are currently widely used and have achieved fine results. However, these deep architectures make particular demands, especially in terms of their requirement for big datasets and GPUs. Aiming to gain better results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces; the proposed algorithm has achieved better results than some deep architectures. For extracting more effective features, this paper first defines the salient areas on the faces. The salient areas at the same location in different faces are normalized to the same size; therefore, more similar features can be extracted from different subjects. LBP and HOG features are extracted from the salient areas, the dimensionality of the fusion features is reduced by Principal Component Analysis (PCA), and several classifiers are applied to classify the six basic expressions at once. This paper proposes a salient area definition method which uses peak expression frames compared with neutral faces. This paper also proposes and applies the idea of normalizing the salient areas to align the specific areas which express the different expressions; as a result, the salient areas found in different subjects are the same size. In addition, a gamma correction method is applied to the LBP features in our algorithm framework, which improves our recognition rates significantly. By applying this algorithm framework, our research has gained state-of-the-art performance on the CK+ and JAFFE databases.
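A compact sketch of the LBP + HOG fusion pipeline using scikit-image and scikit-learn; patch size, LBP/HOG parameters, the PCA dimensionality and the linear SVM are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def fusion_features(patch):
    """LBP histogram + HOG descriptor for one normalized salient-area patch
    (grayscale, already resized to a fixed size, e.g. 64 x 64)."""
    lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec])

def train(patches, labels, n_components=50):
    """patches: salient-area images; labels: one of the six basic expressions."""
    X = np.array([fusion_features(p) for p in patches])
    pca = PCA(n_components=n_components).fit(X)
    clf = SVC(kernel="linear").fit(pca.transform(X), labels)
    return pca, clf
```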
Automated Decision Tree Classification of Corneal Shape
Twa, Michael D.; Parthasarathy, Srinivasan; Roberts, Cynthia; Mahmoud, Ashraf M.; Raasch, Thomas W.; Bullimore, Mark A.
2011-01-01
Purpose The volume and complexity of data produced during videokeratography examinations present a challenge of interpretation. As a consequence, results are often analyzed qualitatively by subjective pattern recognition or reduced to comparisons of summary indices. We describe the application of decision tree induction, an automated machine learning classification method, to discriminate between normal and keratoconic corneal shapes in an objective and quantitative way. We then compared this method with other known classification methods. Methods The corneal surface was modeled with a seventh-order Zernike polynomial for 132 normal eyes of 92 subjects and 112 eyes of 71 subjects diagnosed with keratoconus. A decision tree classifier was induced using the C4.5 algorithm, and its classification performance was compared with the modified Rabinowitz–McDonnell index, Schwiegerling’s Z3 index (Z3), Keratoconus Prediction Index (KPI), KISA%, and Cone Location and Magnitude Index using recommended classification thresholds for each method. We also evaluated the area under the receiver operator characteristic (ROC) curve for each classification method. Results Our decision tree classifier performed equal to or better than the other classifiers tested: accuracy was 92% and the area under the ROC curve was 0.97. Our decision tree classifier reduced the information needed to distinguish between normal and keratoconus eyes using four of 36 Zernike polynomial coefficients. The four surface features selected as classification attributes by the decision tree method were inferior elevation, greater sagittal depth, oblique toricity, and trefoil. Conclusions Automated decision tree classification of corneal shape through Zernike polynomials is an accurate quantitative method of classification that is interpretable and can be generated from any instrument platform capable of raw elevation data output. This method of pattern classification is extendable to other classification problems. PMID:16357645
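A minimal sketch of the classification step; scikit-learn's CART tree stands in for the paper's C4.5 induction, and the depth limit and input layout (36 Zernike coefficients per eye) are assumptions for illustration:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

def train_corneal_classifier(X, y):
    """X: n_eyes x 36 matrix of Zernike coefficients fitted to the corneal
    surface; y: 0 = normal, 1 = keratoconus. Returns the tree and its AUC."""
    tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
    auc = roc_auc_score(y, tree.predict_proba(X)[:, 1])
    return tree, auc  # inspect tree.feature_importances_ for selected terms
```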
Sandhu, Rupninder; Chollet-Hinton, Lynn; Kirk, Erin L; Midkiff, Bentley; Troester, Melissa A
2016-02-01
Complete age-related regression of mammary epithelium, often termed postmenopausal involution, is associated with decreased breast cancer risk. However, most studies have qualitatively assessed involution. We quantitatively analyzed epithelium, stroma, and adipose tissue from histologically normal breast tissue of 454 patients in the Normal Breast Study. High-resolution digital images of normal breast hematoxylin and eosin-stained slides were partitioned into epithelium, adipose tissue, and nonfatty stroma. Percentage area and nuclei per unit area (nuclear density) were calculated for each component. Quantitative data were evaluated in association with age using linear regression and cubic spline models. Stromal area decreased (P = 0.0002), and adipose tissue area increased (P < 0.0001), with an approximate 0.7% change in area for each component, until age 55 years when these area measures reached a steady state. Although epithelial area did not show linear changes with age, epithelial nuclear density decreased linearly beginning in the third decade of life. No significant age-related trends were observed for stromal or adipose nuclear density. Digital image analysis offers a high-throughput method for quantitatively measuring tissue morphometry and for objectively assessing age-related changes in adipose tissue, stroma, and epithelium. Epithelial nuclear density is a quantitative measure of age-related breast involution that begins to decline in the early premenopausal period. Copyright © 2015 Elsevier Inc. All rights reserved.
Trace gas emissions to the atmosphere by biomass burning in the west African savannas
NASA Technical Reports Server (NTRS)
Frouin, Robert J.; Iacobellis, Samuel F.; Razafimpanilo, Herisoa; Somerville, Richard C. J.
1994-01-01
This two-part research project investigates savanna fires and atmospheric carbon dioxide (CO2), and the estimation of burned area using Advanced Very High Resolution Radiometer (AVHRR) reflectance data. The first part involves carbon dioxide flux estimates and a three-dimensional transport model to quantify the effect of north African savanna fires on atmospheric CO2 concentration, including CO2 spatial and temporal variability patterns and their significance to global emissions. The second part describes two methods used to determine burned area from AVHRR data: one based on the relationship between the percentage of burned area and AVHRR channel 2 reflectance (the linear method), and one based on the Normalized Difference Vegetation Index (NDVI) (the nonlinear method). A comparative performance analysis of each method is described.
Wang, Lizhi; Gao, Xuedong; Wei, Ying; Liu, Kaerdun; Huang, Jianbin; Wang, Jide; Yan, Yun
2018-05-30
Specific imaging of cancer cells has been well accepted in cancer diagnosis, although it cannot precisely mark the boundary between normal and cancerous cells or report their mutual influence. We report a nanorod fluorescent probe of copper perylenetetracarbonate (PTC-Cu) that can specifically light up normal cells. In combination with cancer cell imaging, cocultured normal and cancer cells can be lit up with different colors, offering a clear contrast between the normal and cancer cells when they coexist. Because cancerous cells make up only 20-30% of a cancerous area, this provides a possibility to visibly detect the mutual influence between cancer and normal cells during therapy. We expect this method to be beneficial to better cancer diagnosis and therapy.
NASA Astrophysics Data System (ADS)
Roychowdhury, K.
2016-06-01
Land cover is the most easily detectable indicator of human interventions on land. Urban and peri-urban areas present a complex combination of land cover, which makes classification challenging. This paper assesses different methods of classifying land cover using dual-polarimetric Sentinel-1 data collected during the monsoon (July) and winter (December) months of 2015. Four broad land cover classes of Kolkata and its surrounding regions were identified: built-up areas, water bodies and wetlands, vegetation, and open spaces. Polarimetric analyses were conducted on Single Look Complex (SLC) data of the region, while ground range detected (GRD) data were used for spectral and spatial classification. Unsupervised classification by means of K-means clustering used backscatter values and was able to identify homogeneous land covers over the study area. The results produced an overall accuracy of less than 50% for both seasons. Higher classification accuracy (around 70%) was achieved by adding texture variables as inputs along with the backscatter values. However, the accuracy of classification increased significantly with polarimetric analyses: the overall accuracy was around 80% for Wishart H-A-Alpha unsupervised classification. The method was useful in identifying urban areas, due to their double-bounce scattering, and vegetated areas, which have more random scattering. The Normalized Difference Built-up Index (NDBI) and Normalized Difference Vegetation Index (NDVI) obtained from Landsat 8 data over the study area were used to verify the vegetation and urban classes. The study compares the accuracies of different methods of classifying land cover using medium-resolution SAR data in a complex urban area, and suggests that polarimetric analyses produce the most accurate results for urban and suburban areas.
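A rough sketch of the backscatter-plus-texture K-means step; the layer names, the number of clusters and the use of dB-scaled inputs are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

def unsupervised_landcover(vv_db, vh_db, texture=None, k=4):
    """Cluster dual-pol backscatter (and optional texture layers, e.g. GLCM
    measures) into k landcover clusters. Clusters are unlabeled; an analyst
    must map them to classes (built-up, water/wetland, vegetation, open)."""
    layers = [vv_db.ravel(), vh_db.ravel()]
    if texture is not None:
        layers += [t.ravel() for t in texture]
    X = np.column_stack(layers)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    return labels.reshape(vv_db.shape)
```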
2016-01-01
Microarray gene expression data sets are jointly analyzed to increase statistical power. They could either be merged together or analyzed by meta-analysis. For a given ensemble of data sets, it cannot be foreseen which of these paradigms, merging or meta-analysis, works better. In this article, three joint analysis methods, Z-score normalization, ComBat and the inverse normal method (meta-analysis), were selected for survival prognosis and risk assessment of breast cancer patients. The methods were applied to eight microarray gene expression data sets, totaling 1324 patients with two clinical endpoints, overall survival and relapse-free survival. The performance derived from the joint analysis methods was evaluated using Cox regression for survival analysis and independent validation used as bias estimation. Overall, Z-score normalization had a better performance than ComBat and meta-analysis. A higher area under the receiver operating characteristic curve and hazard ratio were also obtained when independent validation was used as bias estimation. With a lower time and memory complexity, Z-score normalization is a simple method for joint analysis of microarray gene expression data sets. The derived findings suggest further assessment of this method in future survival prediction and cancer classification applications. PMID:26504096
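A minimal sketch of per-study Z-score normalization before merging, assuming a genes x samples matrix per study; gene-wise standardization is the usual reading of the method:

```python
import numpy as np

def z_score_normalize(expression):
    """Gene-wise Z-scores within one study: (x - mean) / std per gene,
    so merged studies share a common scale."""
    mu = expression.mean(axis=1, keepdims=True)
    sd = expression.std(axis=1, ddof=1, keepdims=True)
    return (expression - mu) / sd

# Merge two studies (same gene order) after normalizing each separately:
# merged = np.hstack([z_score_normalize(study1), z_score_normalize(study2)])
```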
Measuring lip force by oral screens. Part 1: Importance of screen size and individual variability.
Wertsén, Madeleine; Stenberg, Manne
2017-06-01
To reduce drooling and facilitate food transport in the rehabilitation of patients with oral motor dysfunction, lip force can be trained using an oral screen. Longitudinal studies evaluating the effect of training require objective methods. The aim of this study was to evaluate a method for measuring lip strength; to investigate normal values and the fluctuation of lip force in healthy adults on one occasion and over time; to study how the size of the screen affects the force; to evaluate the most appropriate measure of reliability; and to identify the force performed in relation to gender. Three different sizes of oral screens were used to measure the lip force of 24 healthy adults on 3 different occasions over a period of 6 months, using an apparatus based on a strain gauge. The maximum lip force as evaluated with this method depends on the area of the screen. By calculating the projected area of the screen, the lip force could be normalized to an oral screen pressure expressed in kPa, which can be used for comparing measurements from screens of different sizes. Both the mean value and the standard deviation were shown to vary between individuals. The study showed no differences regarding gender and only small variation with age. Normal variation over time (months) may be up to 3 times greater than the standard error of measurement on a single occasion. The lip force increases in relation to the projected area of the screen. No general standard deviation can be assigned to the method, and all measurements should be analyzed individually based on oral screen pressure to compensate for different screen sizes.
The Influence of Backrest Inclination on Buttock Pressure
Park, Un Jin
2011-01-01
Objective To assess the effects of backrest inclination of a wheelchair on buttock pressures in spinal cord injured (SCI) patients and normal subjects. Method The participants were 22 healthy subjects and 22 SCI patients. Buttock pressures of the participants were measured by a Tekscan® pressure sensing mat and software while they were sitting in a reclining wheelchair. Buttock pressures were recorded for 90°, 100°, 110°, 120° and 130° seat-to-back angles at the ischial tuberosity (IT) and sacrococcygeal (SC) areas. Recordings were made at each angle over four seconds at a sampling rate of 10 Hz. Results The side-to-side buttock pressure differences in the IT area for the SCI patients was significantly greater than for the normal subjects. There was no significant difference between the SCI patients and the normal subjects in the buttock pressure change pattern of the IT area. Significant increases in pressure on the SC area were found as backrest inclination angle was changed to 90°, 100° and 110° in the normal subjects, but no significant differences were found in the SCI patients. Conclusion Most of the SCI patients have freeform posture in wheelchairs, and this leads to an uneven distribution of buttock pressure. In the SCI patients, the peak pressure in the IT area reduced as the backrest angle was increased, but peak pressure at the SC area remained relatively unchanged. To reduce buttock pressure and prevent pressure ulcers and enhance ulcer healing, it can be helpful for tetraplegic patients, to have wheelchair seat-to-back angles above 120°. PMID:22506220
Monte Carlo modeling of the scatter radiation doses in IR
NASA Astrophysics Data System (ADS)
Mah, Eugene; He, Wenjun; Huda, Walter; Yao, Hai; Selby, Bayne
2011-03-01
Purpose: To use Monte Carlo techniques to compute the scatter radiation dose distribution patterns around patients undergoing Interventional Radiological (IR) examinations. Method: MCNP was used to model the scatter radiation air kerma (AK) per unit kerma area product (KAP) distribution around a 24 cm diameter water cylinder irradiated with monoenergetic x-rays. Normalized scatter fractions (SF) were generated, defined as the air kerma at a point of interest normalized by the kerma area product incident on the phantom (i.e., AK/KAP). Three regions surrounding the water cylinder were investigated, consisting of the area below the water cylinder (i.e., backscatter), above the water cylinder (i.e., forward scatter) and to the sides of the water cylinder (i.e., side scatter). Results: Immediately above and below the water cylinder and in the side scatter region, values of normalized SF decreased with the inverse square of the distance. For z-planes further away, the decrease was exponential. Values of normalized SF around the phantom were generally less than 10⁻⁴. Changes in normalized SF with x-ray energy were less than 20% and generally decreased with increasing x-ray energy. At a given distance from the region where the x-ray beam enters the phantom, the normalized SF was higher in the backscatter regions and smaller in the forward scatter regions. The ratio of forward to back scatter normalized SF was lowest at 60 keV and highest at 120 keV. Conclusion: Computed SF values quantify the normalized fractional radiation intensities at the operator location relative to the radiation intensities incident on the patient, where the normalization refers to the beam area that is incident on the patient. SF values can be used to estimate the radiation dose received by personnel within the procedure room, which depends on the imaging geometry, patient size and location within the room. Monte Carlo techniques have the potential for simulating normalized SF values for any arrangement of imaging geometry, patient size and personnel location, and are therefore an important tool for minimizing operator doses in IR.
The straight truth: measuring observer attention to the crooked nose.
Godoy, Andres; Ishii, Masaru; Byrne, Patrick J; Boahene, Kofi D O; Encarnacion, Carlos O; Ishii, Lisa E
2011-05-01
Quantify attentional distraction to crooked noses pre- and postoperatively as compared with normal noses by using an established metric of attention in a pilot study. Prospective, randomized, controlled experiment with crossover. An eye-tracker system was used to record the eye-movement patterns, called scanpaths, of 40 naive observers gazing at pictures of faces with crooked noses preoperatively or postoperatively and pictures of faces without a crooked nose included as "normals." The fixation durations within the nasal area for each group of faces presented were compared. A mixed-design univariate analysis of variance was performed to test the hypothesis that mean fixation times in the nasal region varied by face group. The results were highly statistically significant, F(2,116) = 20.28, P = .000, η² = 0.029. Marginal means were calculated for each nasal area of interest group with confidence intervals (normal, 2.32 [2.26-2.38]; preoperative, 2.66 [2.58-2.75]; postoperative, 2.43 [2.35-2.51]). Post hoc testing with Bonferroni correction for three comparisons showed differences between the normal and preoperative groups (χ² = 41.38, P = .000) and between the preoperative and postoperative groups (χ² = 14.41, P = .000) but not between the normal and postoperative groups (χ² = 4.19, P = .12). There were highly statistically significant differences in attention paid to the nasal area of crooked noses preoperatively and postoperatively, and there were no differences in attention to the nasal area between the postoperative noses and the normal noses. This represents a novel method for objectively evaluating attention and success of surgical procedures to minimize the appearance of deformities. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.
Carvalho, Alysson Roncally S; Jandre, Frederico C; Pino, Alexandre V; Bozza, Fernando A; Salluh, Jorge; Rodrigues, Rosana; Ascoli, Fabio O; Giannella-Neto, Antonio
2007-01-01
Protective ventilatory strategies have been applied to prevent ventilator-induced lung injury in patients with acute lung injury (ALI). However, adjustment of positive end-expiratory pressure (PEEP) to avoid alveolar de-recruitment and hyperinflation remains difficult. An alternative is to set the PEEP based on minimizing respiratory system elastance (Ers) by titrating PEEP. In the present study we evaluate the distribution of lung aeration (assessed using computed tomography scanning) and the behaviour of Ers in a porcine model of ALI, during a descending PEEP titration manoeuvre with a protective low tidal volume. PEEP titration (from 26 to 0 cmH2O, with a tidal volume of 6 to 7 ml/kg) was performed, following a recruitment manoeuvre. At each PEEP, helical computed tomography scans of juxta-diaphragmatic parts of the lower lobes were obtained during end-expiratory and end-inspiratory pauses in six piglets with ALI induced by oleic acid. The distribution of the lung compartments (hyperinflated, normally aerated, poorly aerated and non-aerated areas) was determined and the Ers was estimated on a breath-by-breath basis from the equation of motion of the respiratory system using the least-squares method. Progressive reduction in PEEP from 26 cmH2O to the PEEP at which the minimum Ers was observed improved poorly aerated areas, with a proportional reduction in hyperinflated areas. Also, the distribution of normally aerated areas remained steady over this interval, with no changes in non-aerated areas. The PEEP at which minimal Ers occurred corresponded to the greatest amount of normally aerated areas, with lesser hyperinflated, and poorly and non-aerated areas. Levels of PEEP below that at which minimal Ers was observed increased poorly and non-aerated areas, with concomitant reductions in normally inflated and hyperinflated areas. The PEEP at which minimal Ers occurred, obtained by descending PEEP titration with a protective low tidal volume, corresponded to the greatest amount of normally aerated areas, with lesser collapsed and hyperinflated areas. The institution of high levels of PEEP reduced poorly aerated areas but enlarged hyperinflated ones. Reduction in PEEP consistently enhanced poorly or non-aerated areas as well as tidal re-aeration. Hence, monitoring respiratory mechanics during a PEEP titration procedure may be a useful adjunct to optimize lung aeration.
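A minimal sketch of the breath-by-breath Ers estimate from the single-compartment equation of motion, Paw = Ers·V + Rrs·V̇ + P0, fitted by least squares; the input arrays are assumed to be synchronized samples over one breath:

```python
import numpy as np

def fit_single_compartment(paw, flow, volume):
    """Least-squares fit of Paw = Ers*V + Rrs*V' + P0 over one breath.
    paw: airway pressure samples; flow: V' samples; volume: V samples.
    Returns (Ers, Rrs, P0)."""
    A = np.column_stack([volume, flow, np.ones_like(volume)])
    (ers, rrs, p0), *_ = np.linalg.lstsq(A, paw, rcond=None)
    return ers, rrs, p0

# During a descending PEEP titration, Ers is fitted breath by breath and the
# PEEP step yielding the minimum Ers is selected.
```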
Development of a novel image-based program to teach narrow-band imaging.
Dumas, Cedric; Fielding, David; Coles, Timothy; Good, Norm
2016-08-01
Narrow-band imaging (NBI) is a widely available endoscopic imaging technology; however, uptake of the technique could be improved. Teaching new imaging techniques and assessing trainees' performance can be a challenging exercise during a 1-day workshop. To support NBI training, we developed an online training tool (Medimq) to help experts train novices in NBI bronchoscopy that could assess trainees' performance and provide feedback before the close of the 1-day course. The present study determines whether trainees' capacity to identify relevant pathology increases with the proposed interactive testing method. Two groups of 20 and 18 bronchoscopists attended an NBI course, where they completed a pre-test and a post-test before and after the main lecture, and a follow-up test 4 weeks later to measure retention of knowledge. We measured their ability to mark normal and abnormal 'biopsy size' areas on bronchoscopic NBI images for biopsy. These markings were compared with areas marked by experts on the same images. The first group's results were used to pilot the test. After modifications, the results of the improved test for group 2 showed that trainees improved by 32% (total class average normalized gain) in detecting normal or abnormal areas. On follow-up testing, group 2 improved by 23%. The overall class average normalized gain of 32% shows our test can be used to improve trainees' competency in analyzing NBI images. The testing method (and tool) can also be used to measure retention at follow-up 4 weeks later. Better follow-up test results would be expected with more frequent practice by trainees after the course. © The Author(s), 2016.
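Assuming the study's "class average normalized gain" follows Hake's standard definition, the metric can be sketched in a few lines; the example scores are hypothetical:

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain: the fraction of the possible improvement
    actually achieved, g = (post - pre) / (100 - pre), scores in percent."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# e.g. a class averaging 55% before and 69.4% after scores g = 0.32 (32%)
print(round(normalized_gain(55.0, 69.4), 2))
```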
Automated extraction of pleural effusion in three-dimensional thoracic CT images
NASA Astrophysics Data System (ADS)
Kido, Shoji; Tsunomori, Akinori
2009-02-01
It is important for the diagnosis of pulmonary diseases to quantitatively measure the volume of accumulating pleural effusion in three-dimensional thoracic CT images. However, correct automated extraction of pleural effusion is difficult. A conventional extraction algorithm using a gray-level based threshold cannot separate pleural effusion from the thoracic wall or mediastinum correctly, because the density of pleural effusion in CT images is similar to that of the thoracic wall and mediastinum. We have therefore developed an automated extraction method for pleural effusion based on extracting the lung area together with the pleural effusion. Our method used a template of the lung, obtained from a normal lung, for segmentation of lungs with pleural effusions. The registration process consisted of two steps. The first step was a global matching, between the normal and abnormal lungs, of organs such as bronchi, bones (ribs, sternum and vertebrae) and the upper surface of the liver, which were extracted using a region-growing algorithm. The second step was a local matching between the normal and abnormal lungs, which were deformed by the parameters obtained from the global matching. Finally, we segmented a lung with pleural effusion using the template deformed by the two sets of parameters obtained from the global and local matching. We compared our method with a conventional extraction method using a gray-level based threshold and with two published methods. The extraction rates of pleural effusion obtained with our method were much higher than those obtained with the other methods. This automated extraction method, based on extracting the lung area together with the pleural effusion, is promising for the diagnosis of pulmonary diseases by providing a quantitative volume of accumulating pleural effusion.
Meyer, Swanhild U.; Kaiser, Sebastian; Wagner, Carola; Thirion, Christian; Pfaffl, Michael W.
2012-01-01
Background Adequate normalization minimizes the effects of systematic technical variations and is a prerequisite for getting meaningful biological changes. However, there is inconsistency about miRNA normalization performances and recommendations. Thus, we investigated the impact of seven different normalization methods (reference gene index, global geometric mean, quantile, invariant selection, loess, loessM, and generalized procrustes analysis) on intra- and inter-platform performance of two distinct and commonly used miRNA profiling platforms. Methodology/Principal Findings We included data from miRNA profiling analyses derived from a hybridization-based platform (Agilent Technologies) and an RT-qPCR platform (Applied Biosystems). Furthermore, we validated a subset of miRNAs by individual RT-qPCR assays. Our analyses incorporated data from the effect of differentiation and tumor necrosis factor alpha treatment on primary human skeletal muscle cells and a murine skeletal muscle cell line. Distinct normalization methods differed in their impact on (i) standard deviations, (ii) the area under the receiver operating characteristic (ROC) curve, (iii) the similarity of differential expression. Loess, loessM, and quantile analysis were most effective in minimizing standard deviations on the Agilent and TLDA platform. Moreover, loess, loessM, invariant selection and generalized procrustes analysis increased the area under the ROC curve, a measure for the statistical performance of a test. The Jaccard index revealed that inter-platform concordance of differential expression tended to be increased by loess, loessM, quantile, and GPA normalization of AGL and TLDA data as well as RGI normalization of TLDA data. Conclusions/Significance We recommend the application of loess, or loessM, and GPA normalization for miRNA Agilent arrays and qPCR cards as these normalization approaches showed to (i) effectively reduce standard deviations, (ii) increase sensitivity and accuracy of differential miRNA expression detection as well as (iii) increase inter-platform concordance. Results showed the successful adoption of loessM and generalized procrustes analysis to one-color miRNA profiling experiments. PMID:22723911
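Of the normalization methods compared above, quantile normalization is the simplest to show concretely; a minimal sketch for a features x samples matrix (ties handled crudely by rank order) follows:

```python
import numpy as np

def quantile_normalize(X):
    """Force every column (sample) of a features-x-samples matrix to share
    the same distribution: each value is replaced by the mean of the values
    holding the same rank across samples."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)  # rank of each entry per column
    mean_dist = np.sort(X, axis=0).mean(axis=1)        # reference distribution
    return mean_dist[ranks]
```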
Relating normalization to neuronal populations across cortical areas.
Ruff, Douglas A; Alberts, Joshua J; Cohen, Marlene R
2016-09-01
Normalization, which divisively scales neuronal responses to multiple stimuli, is thought to underlie many sensory, motor, and cognitive processes. In every study where it has been investigated, neurons measured in the same brain area under identical conditions exhibit a range of normalization, ranging from suppression by nonpreferred stimuli (strong normalization) to additive responses to combinations of stimuli (no normalization). Normalization has been hypothesized to arise from interactions between neuronal populations, either in the same or different brain areas, but current models of normalization are not mechanistic and focus on trial-averaged responses. To gain insight into the mechanisms underlying normalization, we examined interactions between neurons that exhibit different degrees of normalization. We recorded from multiple neurons in three cortical areas while rhesus monkeys viewed superimposed drifting gratings. We found that neurons showing strong normalization shared less trial-to-trial variability with other neurons in the same cortical area and more variability with neurons in other cortical areas than did units with weak normalization. Furthermore, the cortical organization of normalization was not random: neurons recorded on nearby electrodes tended to exhibit similar amounts of normalization. Together, our results suggest that normalization reflects a neuron's role in its local network and that modulatory factors like normalization share the topographic organization typical of sensory tuning properties. Copyright © 2016 the American Physiological Society.
Tanaka, Rie; Sanada, Shigeru; Okazaki, Nobuo; Kobayashi, Takeshi; Fujimura, Masaki; Yasui, Masahide; Matsui, Takeshi; Nakayama, Kazuya; Nanbu, Yuko; Matsui, Osamu
2006-10-01
Dynamic flat panel detectors (FPD) permit acquisition of distortion-free radiographs with a large field of view and high image quality. The present study was performed to evaluate pulmonary function using breathing chest radiography with a dynamic FPD. We report primary results of a clinical study and computer algorithm for quantifying and visualizing relative local pulmonary airflow. Dynamic chest radiographs of 18 subjects (1 emphysema, 2 asthma, 4 interstitial pneumonia, 1 pulmonary nodule, and 10 normal controls) were obtained during respiration using an FPD system. We measured respiratory changes in distance from the lung apex to the diaphragm (DLD) and pixel values in each lung area. Subsequently, the interframe differences (D-frame) and difference values between maximum inspiratory and expiratory phases (D-max) were calculated. D-max in each lung represents relative vital capacity (VC) and regional D-frames represent pulmonary airflow in each local area. D-frames were superimposed on dynamic chest radiographs in the form of color display (fusion images). The results obtained using our methods were compared with findings on computed tomography (CT) images and pulmonary functional test (PFT), which were examined before inclusion in the study. In normal subjects, the D-frames were distributed symmetrically in both lungs throughout all respiratory phases. However, subjects with pulmonary diseases showed D-frame distribution patterns that differed from the normal pattern. In subjects with air trapping, there were some areas with D-frames near zero indicated as colorless areas on fusion images. These areas also corresponded to the areas showing air trapping on computed tomography images. In asthma, obstructive abnormality was indicated by areas continuously showing D-frame near zero in the upper lung. Patients with interstitial pneumonia commonly showed fusion images with an uneven color distribution accompanied by increased D-frames in the area identified as normal on computed tomography images. Furthermore, measurement of DLD was very effective for evaluating diaphragmatic kinetics. This is a rapid and simple method for evaluation of respiratory kinetics for pulmonary diseases, which can reveal abnormalities in diaphragmatic kinetics and regional lung ventilation. Furthermore, quantification and visualization of respiratory kinetics is useful as an aid in interpreting dynamic chest radiographs.
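A rough sketch of the D-frame and D-max computations on a frame stack; using the mean image density as a proxy for maximum inspiratory/expiratory phases is an assumption made here for brevity (the study itself tracked diaphragm position, among other measures):

```python
import numpy as np

def dframe_dmax(frames):
    """frames: T x H x W stack of dynamic chest radiographs over one
    respiratory cycle. Returns interframe differences (D-frame) and the
    maximum-inspiration minus maximum-expiration difference (D-max)."""
    frames = np.asarray(frames, dtype=float)
    d_frame = np.diff(frames, axis=0)        # frame-to-frame pixel change
    mean_density = frames.mean(axis=(1, 2))  # crude respiratory-phase proxy
    d_max = frames[mean_density.argmax()] - frames[mean_density.argmin()]
    return d_frame, d_max
```

The D-frame stack could then be color-mapped and superimposed on the radiographs to produce fusion images, with near-zero regions flagging possible air trapping.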
Electromagnetic exploration of the oceanic mantle
UTADA, Hisashi
2015-01-01
Electromagnetic exploration is a geophysical method for examining the Earth’s interior through observations of natural or artificial electromagnetic field fluctuations. The method has been in practice for more than 70 years, and 40 years ago it was first applied to ocean areas. During the past few decades, there has been noticeable progress in the methods of instrumentation, data acquisition (observation), data processing and inversion. Due to this progress, applications of this method to oceanic regions have revealed electrical features of the oceanic upper mantle down to depths of several hundred kilometers for different geologic and tectonic environments such as areas around mid-oceanic ridges, areas around hot-spot volcanoes, subduction zones, and normal ocean areas between mid-oceanic ridges and subduction zones. All these results estimate the distribution of the electrical conductivity in the oceanic mantle, which is key for understanding the dynamics and evolution of the Earth together with different physical properties obtained through other geophysical methods such as seismological techniques. PMID:26062736
Usefulness of Epicardial Area in the Short Axis to Identify Elevated Left Ventricular Mass in Men.
Fitzpatrick, Jesse K; Cohen, Beth E; Rosenblatt, Andrew; Shaw, Richard E; Schiller, Nelson B
2018-06-15
Left ventricular (LV) hypertrophy is strongly associated with increased cardiovascular morbidity and mortality. The 2-dimensional LV mass algorithms suffer from measurement variability that can lead to misclassification of patients with LV hypertrophy as normal, or vice versa. Among the 4 echocardiographic measurements required by the 2-dimensional LV mass algorithms, epicardial and endocardial area have the lowest interobserver variation and could be used to corroborate LV mass calculations. We sought cut-off values that are able to discriminate between elevated and normal LV mass based on endocardial or epicardial area alone. Using data from 664 men enrolled in the Mind Your Heart Study, we calculated the correlation of LV mass index with epicardial area and endocardial area. We then used receiver operating characteristic curves to identify epicardial and endocardial area cut-points that could discriminate subjects with normal LV mass and LV hypertrophy. LV mass index was more strongly correlated with epicardial area compared with endocardial area, r = 0.70 versus r = 0.27, respectively. Epicardial area had a significantly higher area under the receiver operating characteristic curve (p <0.001) compared with endocardial area, 0.90 (95% confidence interval 0.86 to 0.93) versus 0.63 (95% confidence interval 0.57 to 0.71). An epicardial area cut-point of ≥38.0 cm² corresponded to a sensitivity of 95.0% and specificity of 54.4% for detecting LV hypertrophy. In conclusion, epicardial area showed promise as a method of rapid screening for LV hypertrophy and could be used to validate formal LV mass calculations. Copyright © 2018 Elsevier Inc. All rights reserved.
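The cut-point analysis described above can be illustrated with a short sketch using scikit-learn's ROC utilities on synthetic data; note that the sketch picks the Youden-optimal threshold, whereas the study fixed a high-sensitivity (95%) operating point, and all values below are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data standing in for the study measurements:
# `epi_area` is epicardial area (cm^2), `lvh` is 1 for LV hypertrophy.
rng = np.random.default_rng(1)
epi_area = np.concatenate([rng.normal(35, 4, 400), rng.normal(42, 4, 264)])
lvh = np.concatenate([np.zeros(400, int), np.ones(264, int)])

auc = roc_auc_score(lvh, epi_area)
fpr, tpr, thr = roc_curve(lvh, epi_area)
best = np.argmax(tpr - fpr)            # Youden's J statistic
print(f"AUC = {auc:.2f}")
print(f"cut-point >= {thr[best]:.1f} cm^2: "
      f"sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%}")
```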
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement in both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average standard error (error bar), coefficient of determination (R²), root-mean-square error of prediction (RMSEP), and average maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
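A minimal sketch of the idea, not the authors' exact model: a PLS regression predicts each pulse's analyte-line intensity from the full (multi-line) spectrum, and the prediction is divided out to suppress pulse-to-pulse fluctuation. All data and the component count are synthetic assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic pulse-resolved spectra sharing a common plasma fluctuation.
rng = np.random.default_rng(2)
n_pulses, n_channels = 200, 50
fluct = rng.normal(1.0, 0.05, n_pulses)          # shared plasma fluctuation
spectra = np.outer(fluct, rng.uniform(50, 500, n_channels))
spectra += rng.normal(0, 2, spectra.shape)
cu_line = 300 * fluct + rng.normal(0, 2, n_pulses)  # analyte line intensity

# Predict each pulse's line intensity from the multi-line spectral
# information, then divide it out to reduce pulse-to-pulse variation.
pls = PLSRegression(n_components=3).fit(spectra, cu_line)
predicted = pls.predict(spectra).ravel()
normalized = cu_line / predicted * cu_line.mean()

rsd = lambda x: x.std() / x.mean() * 100
print(f"RSD raw {rsd(cu_line):.1f}% -> normalized {rsd(normalized):.1f}%")
```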
Coward, Trevor J; Watson, Roger M; Richards, Robin; Scott, Brendan J J
2012-01-01
Patients with hemifacial microsomia may have a missing ear on the deficient side of the face. The fabrication of an ear for such individuals usually has been accomplished by directly measuring the ear on the normal side to construct a prosthesis based on these dimensions, and the positioning has been, to a large extent, primarily operator-dependent. The aim of the present study was to compare three methods, developed from the identification of landmarks plotted on three-dimensional surface scans, to evaluate the position of an artificial ear on the deficient side of the face compared with the position of the natural ear on the normally developed side. Laser scans were undertaken of the faces of 14 subjects with hemifacial microsomia. Landmarks on the ear and face on the normal side were identified. Three methods of mirroring the normal ear on the deficient side of the face were investigated, which used either facial landmarks from the orbital area or a zero reference point generated from the intersection of three orthogonal planes on a frame of reference. To assess the methods, landmarks were identified on the ear situated on the normal side as well as on the face. These landmarks yielded paired dimensional measurements that could be compared between the normal and deficient sides. Mean differences and 95% confidence intervals were calculated. It was possible to mirror the normal ear image on to the deficient side of the face using all three methods. Generally only small differences between the dimensional measurements on the normal and deficient sides were observed. However, two-way analysis of variance revealed statistically significant differences between the three methods (P = .005). The method of mirroring using the outer canthi was found to result in the smallest dimensional differences between the anthropometric points on the ear and face between the normally developed and deficient sides. However, the effects of the deformity can result in limitations in relation to achieving a precise alignment of the ear to the facial tissues. This requires further study.
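The mirroring step common to the three methods can be sketched as a reflection of landmark coordinates across a facial plane; here the plane is built from the two outer canthi (the method that performed best above), and all coordinates are hypothetical.

```python
import numpy as np

# Reflect 3D landmark points across a plane defined by a point and normal.
def mirror(points, plane_point, plane_normal):
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n          # signed distance to the plane
    return points - 2.0 * d[:, None] * n    # reflection across the plane

# Hypothetical canthi coordinates (mm); the mirror plane bisects the
# inter-canthal axis and is perpendicular to it.
canthus_r = np.array([40.0, 95.0, 60.0])
canthus_l = np.array([-42.0, 94.0, 61.0])
midpoint = (canthus_r + canthus_l) / 2
normal = canthus_r - canthus_l

ear_landmarks = np.array([[75.0, 60.0, 55.0], [78.0, 45.0, 50.0]])
print(mirror(ear_landmarks, midpoint, normal))  # mirrored to the deficient side
```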
Sharen, Gao-Wa; Zhang, Jun; Qin, Chuan; Lv, Qing
2017-02-01
The dynamic characteristics of the area of the atrial septal defect (ASD) were evaluated using real-time three-dimensional echocardiography (RT 3DE), the potential factors responsible for these dynamic characteristics were examined, and the overall and local volumes and functions of patients with ASD were measured. RT 3DE was performed on 27 normal controls and 28 patients with ASD. Using three-dimensional data workstations, the area of ASD was measured at the P-wave peak, the R-wave peak, the T-wave onset, and the T-wave end, and in the T-P segment. The right atrial volume in the same time phase of the cardiac cycle and the motion displacement distance of the tricuspid annulus in the corresponding period were measured. The measured values of the area of ASD were analyzed. The changes in the right atrial volume and the motion displacement distance of the tricuspid annulus in the normal control group and the ASD group were compared. The right ventricular ejection fractions in the normal control group and the ASD group were compared using the RT 3DE long-axis eight-plane (LA 8-plane) method. Real-time three-dimensional volume imaging was performed in the normal control group and ASD group (n=30). The local end-diastolic and end-systolic volumes of the right ventricular inflow tract, outflow tract, and apical trabecular region, the overall end-diastolic and end-systolic volumes, and the corresponding local and overall ejection fractions in the two groups were measured with the four-dimensional right ventricular quantitative analysis method (4D RVQ) and compared. The overall right ventricular volume and the ejection fraction measured by the LA 8-plane method and 4D RVQ were subjected to correlation analysis. Dynamic changes occurred in the area of ASD over the cardiac cycle, and they were consistent with the changes in the right atrial volume over the cardiac cycle. The maximum change in the right atrial volume occurred in the end-systolic period, at the peak of the volume curve, and the minimum occurred in the end-diastolic period, at the lowest point of the volume variation curve. The area variation curve for ASD and the motion variation curve for the tricuspid annulus over the cardiac cycle followed the same course. The displacement of the tricuspid annulus exhibited directionality. The measured values of the area of ASD at the P-wave peak, the R-wave peak, the T-wave onset, and the T-wave end and in the T-P segment were positively correlated with the right atrial volume (P<0.001). The area of ASD and the motion displacement distance of the tricuspid annulus were negatively correlated (P<0.05). The right atrial volumes in the ASD group in the various time phases of the cardiac cycle increased significantly as compared with those in the normal control group (P=0.0001). The motion displacement distance of the tricuspid annulus decreased significantly in the ASD group as compared with that in the normal control group (P=0.043). The right ventricular ejection fraction in the ASD group was lower than that in the normal control group (P=0.032). The ejection fraction of the apical trabecular region in the ASD patients was significantly lower than the ejection fractions of the right ventricular outflow tract and inflow tract and the overall ejection fraction; the difference was statistically significant (P=0.005).
The right ventricular local and overall end-diastolic and end-systolic volumes in the ASD group increased significantly as compared with those in the normal control group (P=0.031). The aRVEF and the overall ejection fraction decreased in the ASD group as compared with those in the normal control group (P=0.0005). The dynamic changes in the area of ASD and the motion curves for the right atrial volume and tricuspid annulus have the same dynamic characteristics. RT 3DE can be used to accurately evaluate the local and overall volume and function of the right ventricle. The local and overall volume loads of the right ventricle in ASD patients increase significantly as compared with those of normal subjects. Apical and overall right ventricular systolic function decrease.
ERIC Educational Resources Information Center
Verderber, Nadine L.
1992-01-01
Presents the use of spreadsheets as an alternative method for precalculus students to solve maximum or minimum problems involving surface area and volume. Concludes that students with less technical backgrounds can solve problems normally requiring calculus and suggests sources for additional problems. (MDH)
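A sketch of the spreadsheet-style tabulation the article describes, transcribed into Python: a classic open-top box problem is solved by stepping through side lengths rather than by differentiation. The problem instance and step size are illustrative.

```python
# Minimize the surface area of an open-top square-based box of fixed
# volume by tabulating candidate side lengths, spreadsheet-style.
VOLUME = 1000.0  # cm^3

rows = []
x = 1.0
while x <= 20.0:
    h = VOLUME / x**2                # height forced by the fixed volume
    area = x**2 + 4 * x * h          # base plus four sides
    rows.append((x, h, area))
    x += 0.5

best = min(rows, key=lambda r: r[2])
print(f"side ~ {best[0]:.1f} cm, height ~ {best[1]:.2f} cm, "
      f"area ~ {best[2]:.1f} cm^2")   # calculus answer: x = 2h ~ 12.6 cm
```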
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frouin, R.J.; Iacobellis, S.F.; Razafimpanilo, H.
1994-08-01
Savanna fires and atmospheric carbon dioxide (CO2) detection and estimating burned area using Advanced Very High Resolution Radiometer (AVHRR) reflectance data are investigated in this two-part research project. The first part involves carbon dioxide flux estimates and a three-dimensional transport model to quantify the effect of North African savanna fires on atmospheric CO2 concentration, including CO2 spatial and temporal variability patterns and their significance to global emissions. The second article describes two methods used to determine burned area from AVHRR data. The article discusses the relationship between the percentage of burned area and AVHRR channel 2 reflectance (the linear method) and the Normalized Difference Vegetation Index (NDVI) (the nonlinear method). A comparative performance analysis of each method is described.
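A heavily simplified sketch of the linear method on synthetic data: burned fraction is regressed on channel-2 reflectance, and the NDVI helper shows the index used by the nonlinear method. The reflectance model and coefficients are placeholders, not values from the study.

```python
import numpy as np

def ndvi(ch1, ch2):
    # Normalized Difference Vegetation Index from AVHRR channels 1 and 2
    return (ch2 - ch1) / (ch2 + ch1)

# Synthetic calibration pairs: near-infrared reflectance drops as the
# burned fraction of a pixel rises (an assumed, illustrative relation).
rng = np.random.default_rng(3)
burned_frac = rng.uniform(0, 1, 200)
ch2 = 0.30 - 0.18 * burned_frac + rng.normal(0, 0.01, 200)

# "Linear method": regress burned fraction on channel-2 reflectance.
slope, intercept = np.polyfit(ch2, burned_frac, 1)
estimate = np.clip(intercept + slope * ch2, 0, 1)
print(f"linear-method RMSE: {np.sqrt(np.mean((estimate - burned_frac)**2)):.3f}")
```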
Li, Xiaohong; Brock, Guy N; Rouchka, Eric C; Cooper, Nigel G F; Wu, Dongfeng; O'Toole, Timothy E; Gill, Ryan S; Eteleeb, Abdallah M; O'Brien, Liz; Rai, Shesh N
2017-01-01
Normalization is an essential step with considerable impact on high-throughput RNA sequencing (RNA-seq) data analysis. Although there are numerous methods for read count normalization, it remains a challenge to choose an optimal method due to multiple factors contributing to read count variability that affects the overall sensitivity and specificity. In order to properly determine the most appropriate normalization methods, it is critical to compare the performance and shortcomings of a representative set of normalization routines based on different dataset characteristics. Therefore, we set out to evaluate the performance of the commonly used methods (DESeq, TMM-edgeR, FPKM-CuffDiff, TC, Med UQ and FQ) and two new methods we propose: Med-pgQ2 and UQ-pgQ2 (per-gene normalization after per-sample median or upper-quartile global scaling). Our per-gene normalization approach allows for comparisons between conditions based on similar count levels. Using the benchmark Microarray Quality Control Project (MAQC) and simulated datasets, we performed differential gene expression analysis to evaluate these methods. When evaluating MAQC2 with two replicates, we observed that Med-pgQ2 and UQ-pgQ2 achieved a slightly higher area under the Receiver Operating Characteristic Curve (AUC), a specificity rate > 85%, the detection power > 92% and an actual false discovery rate (FDR) under 0.06 given the nominal FDR (≤0.05). Although the top commonly used methods (DESeq and TMM-edgeR) yield a higher power (>93%) for MAQC2 data, they trade off with a reduced specificity (<70%) and a slightly higher actual FDR than our proposed methods. In addition, the results from an analysis based on the qualitative characteristics of sample distribution for MAQC2 and human breast cancer datasets show that only our gene-wise normalization methods corrected data skewed towards lower read counts. However, when we evaluated MAQC3 with less variation in five replicates, all methods performed similarly. Thus, our proposed Med-pgQ2 and UQ-pgQ2 methods perform slightly better for differential gene analysis of RNA-seq data skewed towards lowly expressed read counts with high variation by improving specificity while maintaining a good detection power with a control of the nominal FDR level.
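A minimal sketch of UQ-pgQ2 as we read the abstract: per-sample upper-quartile (UQ) global scaling followed by per-gene median (Q2) scaling; the scaling constant and zero handling are assumptions.

```python
import numpy as np

def uq_pgq2(counts):
    counts = counts.astype(float)            # genes x samples
    # 1) per-sample upper-quartile scaling, computed on expressed genes
    uq = np.array([np.percentile(c[c > 0], 75) for c in counts.T])
    scaled = counts / uq * uq.mean()
    # 2) per-gene median (Q2) scaling, so conditions are compared at
    #    similar count levels
    gene_med = np.median(scaled, axis=1, keepdims=True)
    gene_med[gene_med == 0] = 1.0            # guard against all-zero genes
    return scaled / gene_med

rng = np.random.default_rng(4)
counts = rng.negative_binomial(5, 0.3, size=(500, 6))
print(uq_pgq2(counts)[:3])
```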
Bogaarts, J G; Hilkman, D M W; Gommer, E D; van Kranen-Mastenbroek, V H J M; Reulen, J P H
2016-12-01
Continuous electroencephalographic monitoring of critically ill patients is an established procedure in intensive care units. Seizure detection algorithms, such as support vector machines (SVM), play a prominent role in this procedure. To correct for inter-human differences in EEG characteristics, as well as for intra-human EEG variability over time, dynamic EEG feature normalization is essential. Recently, the median decaying memory (MDM) approach was determined to be the best method of normalization. MDM uses a sliding baseline buffer of EEG epochs to calculate feature normalization constants. However, while this method does include non-seizure EEG epochs, it also includes EEG activity that can have a detrimental effect on the normalization and subsequent seizure detection performance. In this study, EEG data that is to be incorporated into the baseline buffer are automatically selected based on a novelty detection algorithm (Novelty-MDM). Performance of an SVM-based seizure detection framework is evaluated in 17 long-term ICU registrations using the area under the sensitivity-specificity ROC curve. This evaluation compares three different EEG normalization methods, namely a fixed baseline buffer (FB), the median decaying memory (MDM) approach, and our novelty median decaying memory (Novelty-MDM) method. It is demonstrated that MDM did not improve overall performance compared to FB (p < 0.27), partly because seizure like episodes were included in the baseline. More importantly, Novelty-MDM significantly outperforms both FB (p = 0.015) and MDM (p = 0.0065).
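A sketch of MDM-style feature normalization, assuming a simple sliding baseline buffer standing in for the decaying memory and median/IQR standardization; the Novelty-MDM variant would additionally gate the buffer update with a novelty test, which is omitted here.

```python
from collections import deque
import numpy as np

class MDMNormalizer:
    """Sliding-baseline feature normalization in the spirit of MDM."""

    def __init__(self, buffer_len=120):
        self.buffer = deque(maxlen=buffer_len)   # baseline EEG-epoch features

    def normalize(self, features):
        if len(self.buffer) < 10:                # warm-up: no scaling yet
            self.buffer.append(features)
            return features
        base = np.array(self.buffer)
        med = np.median(base, axis=0)
        iqr = np.subtract(*np.percentile(base, [75, 25], axis=0)) + 1e-9
        self.buffer.append(features)             # plain MDM: always ingest
        return (features - med) / iqr

norm = MDMNormalizer()
rng = np.random.default_rng(5)
for _ in range(200):
    z = norm.normalize(rng.normal(10, 2, size=8))  # 8 features per epoch
print(np.round(z, 2))
```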
NASA Astrophysics Data System (ADS)
Van doninck, Jasper; Tuomisto, Hanna
2017-06-01
Biodiversity mapping in extensive tropical forest areas poses a major challenge for the interpretation of Landsat images, because floristically clearly distinct forest types may show little difference in reflectance. In such cases, the effects of the bidirectional reflection distribution function (BRDF) can be sufficiently strong to cause erroneous image interpretation and classification. Since the opening of the Landsat archive in 2008, several BRDF normalization methods for Landsat have been developed. The simplest of these consist of an empirical view angle normalization, whereas more complex approaches apply the semi-empirical Ross-Li BRDF model and the MODIS MCD43-series of products to normalize directional Landsat reflectance to standard view and solar angles. Here we quantify the effect of surface anisotropy on Landsat TM/ETM+ images over old-growth Amazonian forests, and evaluate five angular normalization approaches. Even for the narrow swath of the Landsat sensors, we observed directional effects in all spectral bands. Those normalization methods that are based on removing the surface reflectance gradient as observed in each image were adequate to normalize TM/ETM+ imagery to nadir viewing, but were less suitable for multitemporal analysis when the solar vector varied strongly among images. Approaches based on the MODIS BRDF model parameters successfully reduced directional effects in the visible bands, but removed only half of the systematic errors in the infrared bands. The best results were obtained when the semi-empirical BRDF model was calibrated using pairs of Landsat observations. This method produces a single set of BRDF parameters, which can then be used to operationally normalize Landsat TM/ETM+ imagery over Amazonian forests to nadir viewing and a standard solar configuration.
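The simplest approach mentioned, empirical removal of the per-image view-angle gradient, can be sketched as a linear fit of reflectance against view angle that is then re-referenced to nadir; the array names, forest mask, and synthetic gradient are assumptions.

```python
import numpy as np

def normalize_to_nadir(reflectance, view_angle, mask):
    # Fit rho = a + b * theta over masked (forest) pixels only, then
    # remove the fitted gradient so every pixel is referenced to nadir.
    b, a = np.polyfit(view_angle[mask], reflectance[mask], 1)
    return reflectance - b * view_angle      # value predicted at theta = 0

rng = np.random.default_rng(6)
theta = np.linspace(-7.5, 7.5, 200).reshape(1, -1).repeat(100, axis=0)
rho = 0.30 + 0.002 * theta + rng.normal(0, 0.005, theta.shape)
flat = normalize_to_nadir(rho, theta, mask=np.ones_like(rho, bool))
print(np.polyfit(theta.ravel(), flat.ravel(), 1)[0])  # residual slope ~ 0
```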
2013-01-01
Background Aluminum is used in a wide range of applications and is a potential environmental hazard. The known genotoxic effects of aluminum might play a role in the development of breast cancer. However, the data currently available on the subject are not sufficient to establish a causal relationship between aluminum exposure and the augmented risk of developing breast cancer. To achieve maximum sensitivity and specificity in the determination of aluminum levels, we have developed a detection protocol using graphite furnace atomic absorption spectrometry (GFAAS). The objective of the present study was to compare the aluminum levels in the central and peripheral areas of breast carcinomas with those in the adjacent normal breast tissues, and to identify patient and/or tumor characteristics associated with these aluminum levels. Methods A total of 176 patients with breast cancer were included in the study. Samples from the central and peripheral areas of their tumors were obtained, as well as from the surrounding normal breast tissue. Aluminum quantification was performed using GFAAS. Results The average (mean ± SD) aluminum concentrations were as follows: central area, 1.88 ± 3.60 mg/kg; peripheral area, 2.10 ± 5.67 mg/kg; and normal area, 1.68 ± 11.1 mg/kg. Overall and two-by-two comparisons of the aluminum concentrations in these areas indicated no significant differences. We detected a positive relationship between aluminum levels in the peripheral areas of the tumors, age and menopausal status of the patients (P = .02). Conclusions Using a sensitive quantification technique we detected similar aluminum concentrations in the central and peripheral regions of breast tumors, and in normal tissues. In addition, we did not detect significant differences in aluminum concentrations as related to the location of the breast tumor within the breast, or to other relevant tumor features such as stage, size and steroid receptor status. The next logical step is the assessment of whether the aluminum concentration is related to the key genomic abnormalities associated with breast carcinogenesis. PMID:23496847
New approach to estimating variability in visual field data using an image processing technique.
Crabb, D P; Edgar, D F; Fitzke, F W; McNaught, A I; Wynn, H P
1995-01-01
AIMS--A new framework for evaluating pointwise sensitivity variation in computerised visual field data is demonstrated. METHODS--A measure of local spatial variability (LSV) is generated using an image processing technique. Fifty-five eyes from a sample of normal and glaucomatous subjects, examined on the Humphrey field analyser (HFA), were used to illustrate the method. RESULTS--Significant correlations between LSV and conventional estimates--namely, HFA pattern standard deviation and short-term fluctuation--were found. CONCLUSION--LSV is not dependent on normals' reference data or repeated threshold determinations, thus potentially reducing test time. Also, the illustrated pointwise maps of LSV could provide a method for identifying areas of fluctuation commonly found in early glaucomatous field loss. PMID:7703196
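The paper's exact LSV definition is not reproduced here, but a neighbourhood standard-deviation filter conveys the flavour of such an image-processing measure on a synthetic sensitivity grid:

```python
import numpy as np
from scipy.ndimage import generic_filter

# Local spatial variability as the standard deviation of each test
# point's 3x3 neighbourhood on the visual-field grid (an illustrative
# stand-in for the paper's LSV; the grid values are synthetic, in dB).
field = np.random.default_rng(7).normal(30, 2, size=(8, 8))
lsv_map = generic_filter(field, np.std, size=3, mode="nearest")
print(f"global LSV index: {lsv_map.mean():.2f} dB")
```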
Kotecha, P.V.; Patel, S.V.; Bhalani, K.D.; Shah, D.; Shah, V.S.; Mehta, K.G.
2012-01-01
Background & objectives: Endemic fluorosis resulting from high fluoride concentration in groundwater is a major public health problem in India. This study was carried out to measure and compare the prevalence of dental fluorosis and dental caries in populations residing in areas with high and normal levels of fluoride in drinking water in Vadodara district, Gujarat, India. Methods: A cross-sectional study was conducted in Vadodara district; six of the 261 villages with high fluoride levels and five of the 1490 with normal fluoride levels in drinking water were selected. The data collection was made by house-to-house visits twice during the study period. Results: The prevalence of dental fluorosis in the high fluoride area was 59.31 per cent, while in the normal fluoride area it was 39.21 per cent. The prevalence of dental caries in the high fluoride area was 39.53 per cent and in the normal fluoride area 48.21 per cent, with a CI for the difference of 6.16 to 11.18. Dental fluorosis prevalence was higher among males as compared to females. The highest prevalence of dental fluorosis was seen in the 12-24 yr age group. Interpretation & conclusions: The risk of dental fluorosis was higher in areas with higher fluoride content in drinking water, while the risk of dental caries was somewhat lower in the same areas. High fluoride content is a risk factor for dental fluorosis, and the problem of dental fluorosis increased with the passage of time, suggesting that the fluoride content in the water has perhaps increased over time. Longitudinal studies should be conducted to confirm the findings. PMID:22825606
Watanabe, Shota; Sakaguchi, Kenta; Hosono, Makoto; Ishii, Kazunari; Murakami, Takamichi; Ichikawa, Katsuhiro
The purpose of this study was to evaluate the effect of a hybrid-type iterative reconstruction method on Z-score mapping of hyperacute stroke in unenhanced computed tomography (CT) images. We used a hybrid-type iterative reconstruction method [adaptive statistical iterative reconstruction (ASiR)] implemented in a CT system (Optima CT660 Pro advance, GE Healthcare). For 15 normal brain cases, we reconstructed CT images with filtered back projection (FBP) and with ASiR at a blending factor of 100% (ASiR100%). Two standardized normal-brain databases were created from the FBP images (FBP-NDB) and the ASiR100% images (ASiR-NDB), and standard deviation (SD) values in the basal ganglia were measured. Z-score mapping was performed for 12 hyperacute stroke cases using FBP-NDB and ASiR-NDB, and the Z-score values in the hyperacute stroke area and the normal area were compared between FBP-NDB and ASiR-NDB. With ASiR-NDB, the SD value of the standardized brain decreased by 16%. The Z-score value of ASiR-NDB in the hyperacute stroke area was significantly higher than that of FBP-NDB (p<0.05). Therefore, the use of images reconstructed with ASiR100% for Z-score mapping has the potential to improve the accuracy of Z-score mapping.
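Z-score mapping against a standardized normal database can be sketched voxelwise as follows; shapes, values, and the sign convention are assumptions for illustration.

```python
import numpy as np

# Compare a patient scan with the voxelwise mean and SD of spatially
# normalized normal brains (the "normal database", NDB). Synthetic data.
rng = np.random.default_rng(8)
ndb = rng.normal(35.0, 2.0, size=(15, 32, 32, 32))   # 15 normal cases (HU)
patient = rng.normal(35.0, 2.0, size=(32, 32, 32))
patient[10:14, 10:14, 10:14] -= 6.0                  # early ischemic hypodensity

mean, sd = ndb.mean(axis=0), ndb.std(axis=0) + 1e-9
zmap = (patient - mean) / sd                         # voxelwise Z-score map
print("voxels with z < -3:", int((zmap < -3).sum()))
```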
A normalization strategy for comparing tag count data
2012-01-01
Background High-throughput sequencing, such as ribonucleic acid sequencing (RNA-seq) and chromatin immunoprecipitation sequencing (ChIP-seq) analyses, enables various features of organisms to be compared through tag counts. Recent studies have demonstrated that the normalization step for RNA-seq data is critical for a more accurate subsequent analysis of differential gene expression. Development of a more robust normalization method is desirable for identifying the true difference in tag count data. Results We describe a strategy for normalizing tag count data, focusing on RNA-seq. The key concept is to remove data assigned as potential differentially expressed genes (DEGs) before calculating the normalization factor. Several R packages for identifying DEGs are currently available, and each package uses its own normalization method and gene ranking algorithm. We compared a total of eight package combinations: four R packages (edgeR, DESeq, baySeq, and NBPSeq) with their default normalization settings and with our normalization strategy. Many synthetic datasets under various scenarios were evaluated on the basis of the area under the curve (AUC) as a measure for both sensitivity and specificity. We found that packages using our strategy in the data normalization step overall performed well. This result was also observed for a real experimental dataset. Conclusion Our results showed that the elimination of potential DEGs is essential for more accurate normalization of RNA-seq data. The concept of this normalization strategy can widely be applied to other types of tag count data and to microarray data. PMID:22475125
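The key step, eliminating potential DEGs before computing the normalization factor, can be sketched as follows for a two-sample comparison; the 30% trim and the median-of-ratios factor are placeholders for the packages' own choices.

```python
import numpy as np

def deg_trimmed_factor(a, b, trim=0.30):
    # Drop the genes whose log-ratios are most extreme (potential DEGs),
    # then compute a library-scaling factor from the remainder.
    keep = (a > 0) & (b > 0)
    ratio = np.log2(b[keep] / a[keep])
    order = np.argsort(np.abs(ratio - np.median(ratio)))
    kept = order[: int(len(order) * (1 - trim))]
    return 2 ** np.median(ratio[kept])

rng = np.random.default_rng(9)
a = rng.negative_binomial(5, 0.3, 2000).astype(float)
b = a * 1.5 + rng.normal(0, 1, 2000).clip(0)   # library b is 1.5x deeper
print(f"estimated factor: {deg_trimmed_factor(a, b):.2f}")  # ~1.5
```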
Normal Magnetization at ca. 1.5 Ma at Three Sites in Yukon Territory, Canada: The Gilsa Sub-chron?
NASA Astrophysics Data System (ADS)
Froese, D. G.; Westgate, J. A.; Barendregt, R. W.; Villeneuve, M.; Jackson, L. E.; Baker, J.; Enkin, R.; Irving, E.; Hart, C.; Preece, S. J.; Sandhu, A.
2001-12-01
Normally magnetized sediment is associated with Fort Selkirk and Paradise Hill tephras in central Yukon Territory. The Fort Selkirk tephra, sourced in the Wrangell Mountains of southeastern Alaska, is dated by the isothermal plateau fission track method (ITPFT) at 1.48 +/- 0.11 Ma. Fort Selkirk tephra is reversely magnetized; however, a normal polarity interval occurs between the tephra and an overlying reversely magnetized basalt flow, dated by the Ar-Ar method at 1.37 +/- 0.03 Ma. This succession is found at two sites in the Fort Selkirk area. At Paradise Hill, 120 km to the northwest, the Paradise Hill tephra, also sourced in the Wrangell Mountains of southeastern Alaska, has an ITPFT age of 1.54 +/- 0.13 Ma and is normally magnetized. The Paradise Hill and Fort Selkirk normal polarity intervals are certainly older than the Jaramillo (1.07-0.99 Ma) and Cobb Mountain (1.24-1.21 Ma) sub-chrons, and younger than the Olduvai (1.95-1.77 Ma). Our age estimates for this normal interval indicate a more plausible correlation to the Gilsa sub-chron, estimated at 1.55 Ma from marine records and 1.6 Ma from Icelandic lavas.
A primitive study on unsupervised anomaly detection with an autoencoder in emergency head CT volumes
NASA Astrophysics Data System (ADS)
Sato, Daisuke; Hanaoka, Shouhei; Nomura, Yukihiro; Takenaga, Tomomi; Miki, Soichiro; Yoshikawa, Takeharu; Hayashi, Naoto; Abe, Osamu
2018-02-01
Purpose: The target disorders of emergency head CT are wide-ranging. Therefore, people working in an emergency department desire a computer-aided detection system for general disorders. In this study, we proposed an unsupervised anomaly detection method for emergency head CT using an autoencoder and evaluated its anomaly detection performance. Methods: We used a 3D convolutional autoencoder (3D-CAE), which contains 11 layers in the convolution block and 6 layers in the deconvolution block. In the training phase, we trained the 3D-CAE using 10,000 3D patches extracted from 50 normal cases. In the test phase, we calculated an abnormality score for each voxel in 38 emergency head CT volumes (22 abnormal cases and 16 normal cases) and evaluated the likelihood of lesion existence. Results: Our method achieved a sensitivity of 68% and a specificity of 88%, with an area under the receiver operating characteristic curve of 0.87. This shows that the method has moderate accuracy in distinguishing abnormal CT cases from normal ones. Conclusion: Our method shows potential for anomaly detection in emergency head CT.
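A deliberately small stand-in for the 3D-CAE (the paper's network has 11 convolution and 6 deconvolution layers) shows the train-on-normals, score-by-reconstruction-error pattern; patch size and channel widths are assumptions. Sketch in PyTorch:

```python
import torch
import torch.nn as nn

class Tiny3DCAE(nn.Module):
    """Toy 3D convolutional autoencoder, far smaller than the paper's."""

    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = Tiny3DCAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.randn(4, 1, 32, 32, 32)        # normal-case training patches
recon = model(patches)
loss = nn.functional.mse_loss(recon, patches)  # learn to reconstruct normals
loss.backward()
opt.step()
# At test time, per-voxel (x - reconstruction)^2 serves as the abnormality score.
print(recon.shape)
```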
Berman, N E; Grant, S
1992-07-01
The callosal connections between visual cortical areas 17 and 18 in adult normally pigmented and "Boston" Siamese cats were studied using degeneration methods, and by transport of WGA-HRP combined with electrophysiological mapping. In normal cats, over 90% of callosal neurons were located in the supragranular layers. The supragranular callosal cell zone spanned the area 17/18 border and extended, on average, some 2-3 mm into both areas to occupy a territory which was roughly co-extensive with the distribution of callosal terminations in these areas. The region of the visual field adjoining the vertical meridian that was represented by neurons in the supragranular callosal cell zone was shown to increase systematically with decreasing visual elevation. Thus, close to the area centralis, receptive-field centers recorded from within this zone extended only up to 5 deg into the contralateral hemifield, but at elevations of -10 deg and -40 deg they extended as far as 8 deg and 14 deg, respectively, into this hemifield. This suggests an element of visual non-correspondence in the callosal pathway between these cortical areas, which may be an essential substrate for "coarse" stereopsis at the visual midline. In the Siamese cats, the callosal cell and termination zones in areas 17 and 18 were expanded in width compared to the normal animals, but the major components were less robust. The area 17/18 border was often devoid of callosal axons and, in particular, the number of supragranular layer neurons participating in the pathway was drastically reduced, to only about 25% of those found in the normally pigmented adults. The callosal zones contained representations of the contralateral and ipsilateral hemifields that were roughly mirror-symmetric about the vertical meridian, and both hemifield representations increased with decreasing visual elevation. The extent and severity of the anomalies observed were similar across individual cats, regardless of whether a strabismus was also present. The callosal pathway between these visual cortical areas in the Siamese cat has been considered "silent," since nearly all neurons within its territory are activated only by the contralateral eye. The paucity of supragranular pyramidal neurons involved in the pathway may explain this silence.
NASA Technical Reports Server (NTRS)
Roth, Don J.; Kautz, Harold E.; Abel, Phillip B.; Whalen, Mike F.; Hendricks, J. Lynne; Bodis, James R.
2000-01-01
Surface topography, which significantly affects the performance of many industrial components, is normally measured with diamond-tip profilometry over small areas or with optical scattering methods over larger areas. To develop air-coupled surface profilometry, the NASA Glenn Research Center at Lewis Field initiated a Space Act Agreement with Sonix, Inc., through two Glenn programs, the Advanced High Temperature Engine Materials Program (HITEMP) and COMMTECH. The work resulted in quantitative surface topography profiles obtained using only high-frequency, focused ultrasonic pulses in air. The method is nondestructive, noninvasive, and noncontact, and it does not require light-reflective surfaces. Air surface profiling may be desirable when diamond-tip or laser-based methods are impractical, such as over large areas, when a significant depth range is required, or for curved surfaces. When the configuration is optimized, the method is reasonably rapid, and all the quantitative analysis facilities are online, including two- and three-dimensional visualization, extreme value filtering (for faulty data), and leveling.
Spatiotemporal analysis of Quaternary normal faults in the Northern Rocky Mountains, USA
NASA Astrophysics Data System (ADS)
Davarpanah, A.; Babaie, H. A.; Reed, P.
2010-12-01
The mid-Tertiary Basin-and-Range extensional tectonic event developed most of the normal faults that bound the ranges in the northern Rocky Mountains within Montana, Wyoming, and Idaho. The interaction of the thermally induced stress field of the Yellowstone hot spot with the existing Basin-and-Range fault blocks, during the last 15 m.y., has produced a new, spatially and temporally variable system of normal faults in these areas. The orientation and spatial distribution of the traces of these hot-spot-induced normal faults, relative to earlier Basin-and-Range faults, have significant implications for the effect of the temporally varying and spatially propagating thermal dome on the growth of new hot-spot-related normal faults and the reactivation of existing Basin-and-Range faults. Digitally enhanced Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and Landsat 4 and 5 Thematic Mapper (TM) bands, with a spatial resolution of 30 m, combined with analytical GIS and geological techniques, helped in determining and analyzing the lineaments and traces of the Quaternary, thermally induced normal faults in the study area. Applying the color composite (CC) image enhancement technique, the combination of bands 3, 2 and 1 of the ETM+ and TM images was chosen as the best statistical choice to create a color composite for lineament identification. The spatiotemporal analysis of the Quaternary normal faults produces significant information on the structural style, timing, spatial variation, spatial density, and frequency of the faults. The seismically active Quaternary normal faults in the study area are divided by age into four sets, which from oldest to youngest are: Quaternary (>1.6 Ma), middle and late Quaternary (>750 ka), latest Quaternary (>15 ka), and the last 150 years. A density map for the Quaternary faults reveals that the most active faults are near the current Yellowstone National Park area (YNP), where most seismically active faults of the past 1.6 m.y. are located. The GIS-based autocorrelation method, applied to the trace orientation, length, frequency, and spatial distribution of each age-defined fault set, revealed spatial homogeneity for each set. The results of Moran's I and Geary's C show no spatial autocorrelation between the trend of the fault traces and their location. Our results suggest that while lineaments of similar age define a clustered pattern in each domain, the overall distribution pattern of lineaments of different ages seems to be non-uniform (random). The directional distribution analysis reveals a distinct range of variation for fault traces of different ages (i.e., some displaying elliptical behavior). Among the Quaternary normal fault sets, the youngest lineament set (i.e., the last 150 years) defines the greatest ellipticity (eccentricity) and the least variation in lineament distribution. The frequency rose diagram for the entire set of Quaternary normal faults shows four major modes (around 360°, 330°, 300°, and 270°) and two minor modes (around 235° and 205°).
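For intuition, Moran's I for a fault-trace attribute can be computed as below with inverse-distance weights; coordinates and azimuths are synthetic, and azimuth is treated as a linear variable for simplicity (a circular-statistics treatment would be more faithful).

```python
import numpy as np

def morans_i(x, coords):
    # Moran's I with inverse-distance spatial weights; values near 0
    # indicate no spatial autocorrelation of the attribute x.
    n = len(x)
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
    w = np.where(d > 0, 1.0 / d, 0.0)
    z = x - x.mean()
    num = n * (w * np.outer(z, z)).sum()
    den = w.sum() * (z ** 2).sum()
    return num / den

rng = np.random.default_rng(10)
coords = rng.uniform(0, 100, size=(80, 2))   # fault-trace centroids (km)
azimuth = rng.uniform(200, 360, size=80)     # trace trends (degrees)
print(f"Moran's I: {morans_i(azimuth, coords):.3f}")
```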
A Noninvasive Test for MicroRNA Expression in Oral Squamous Cell Carcinoma.
Gissi, Davide B; Morandi, Luca; Gabusi, Andrea; Tarsitano, Achille; Marchetti, Claudio; Cura, Francesca; Palmieri, Annalisa; Montebugnoli, Lucio; Asioli, Sofia; Foschini, Maria P; Scapoli, Luca
2018-06-16
MicroRNAs have recently been proposed as non-invasive biomarkers in Oral Squamous Cell Carcinoma (OSCC). The aim of this study was to analyze the expression of a panel of miRNAs in epithelial cells collected by oral brushing from OSCCs, from regenerative areas after OSCC surgical resection, and from their respective normal distant mucosa. Oral brushing specimens were collected from 24 healthy donors, from 14 OSCC patients with specimens from the tumour and normal distant mucosa, and from 13 patients who had undergone OSCC resection, with samples from regenerative areas after OSCC resection and normal distant mucosa. Expression levels of eight targets (miR-21, miR-375, miR-345, miR-181b, miR-146a, miR-649, miR-518b, and miR-191) were evaluated by real-time Polymerase Chain Reaction (PCR). A highly significant between-group difference was found for miR-21 (F = 6.58, p < 0.001), miR-146a (F = 6.974, p < 0.001), and miR-191 (F = 17.07, p < 0.001). The major difference was observed between samples from healthy donors and from OSCC brushing, whereas no significant differences were observed between areas infiltrated by OSCC and their respective normal distant mucosa. Furthermore, altered expression of miR-146a and miR-191 was also observed in regenerative areas after OSCC resection. Oral brushing could be proposed as a noninvasive method to study microRNA expression in the oral mucosa of OSCC patients.
[Retrieval of red tide distributions from MODIS data based on the characteristics of the water spectrum].
Qiu, Zhong-Feng; Cui, Ting-Wei; He, Yi-Jun
2011-08-01
After comparing the spectral differences between red tide water and normal water, we developed a method to retrieve red tide distributions from MODIS data based on the characteristics of the red tide water spectrum. The authors used 119 series of in situ observations to validate the method and found that only one observation was not detected correctly. The authors then applied this method to MODIS data from April 4, 2005. In the research area, three locations of red tide water were clearly detected, with a total area of about 2000 km². The retrieved red tide distributions are in good agreement with the distributions of high chlorophyll a concentrations. The research suggests that the method is capable of eliminating the influence of suspended sediments and can be used to retrieve the locations and areas of red tide water.
Signature extension: An approach to operational multispectral surveys
NASA Technical Reports Server (NTRS)
Nalepka, R. F.; Morgenstern, J. P.
1973-01-01
Two data processing techniques were suggested as applicable to the large area survey problem. One approach was to use unsupervised classification (clustering) techniques. Investigation showed that, because clustering does nothing to reduce signal variability, this approach would be very time consuming and possibly inaccurate as well. The conclusion is that unsupervised classification techniques are not, by themselves, a solution to the large area survey problem. The other approach investigated was the use of signature extension techniques. Such techniques function by normalizing the data to some reference condition, so that signatures from an isolated area can be used to process large quantities of data. In this manner, ground information requirements and computer training are minimized. Several signature extension techniques were tested. The best of these allowed signatures to be extended between data sets collected four days and 80 miles apart with an average accuracy of better than 90%.
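One simple form of signature extension, normalizing a new scene to a reference condition with a per-band linear gain and offset estimated from a common target, can be sketched as follows; this is only one of several techniques of the kind the study compares, and all data are synthetic.

```python
import numpy as np

# Pixels of a common target (e.g., an invariant field) seen in both the
# reference scene and the new scene, four bands each; synthetic data.
rng = np.random.default_rng(11)
ref_target = rng.normal([30, 50, 80, 120], 3, size=(500, 4))
new_target = ref_target * 1.2 + 10 + rng.normal(0, 3, (500, 4))

# Per-band linear transform that maps reference radiometry to the new scene.
gain = new_target.std(axis=0) / ref_target.std(axis=0)
offset = new_target.mean(axis=0) - gain * ref_target.mean(axis=0)

# A training signature from the reference scene, extended to the new scene.
wheat_signature_mean = np.array([35.0, 55.0, 90.0, 130.0])
extended = gain * wheat_signature_mean + offset
print(np.round(extended, 1))
```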
Cai, L Y; Wang, T; Lin, D S; Lu, D
2017-04-20
Objective: To investigate the effects and related mechanism of bivalirudin on the survival of random skin flaps on the backs of rats. Methods: Thirty SD rats were divided into a bivalirudin group and a normal saline group according to the random number table, with 15 rats in each group. A random flap model with a size of 9 cm×3 cm was established on the back of the rats in the two groups. Immediately post injury, rats in the bivalirudin group were intraperitoneally injected with 5 mg/mL bivalirudin (0.8 mL/kg), while rats in the normal saline group were intraperitoneally injected with normal saline (0.8 mL/kg) once a day. The continuous injection lasted for 7 days. The flap was divided evenly into distal, middle, and proximal areas based on the flap blood supply. On post injury day (PID) 1, 3, and 7, the overall survival of each area of the flap was observed with the naked eye. On PID 7, the survival rate of the flap was calculated, the morphology of skin tissue at the center of the three areas of the flap was observed by HE staining, the microvessel density (MVD) of the middle area of the flap was calculated, and the expression of vascular endothelial growth factor (VEGF) in the middle area of the flap was detected with immunohistochemical staining. Data were processed with the t test. Results: (1) On PID 1, flaps of rats in the two groups had different degrees of swelling, mainly concentrated in the distal area, but there was no obvious necrosis. The middle and proximal areas of the flaps in the two groups survived. On PID 3, the necrosis of flaps of rats in the two groups was concentrated in the middle area, while the proximal area of the flap was still in a survival state, and most of the distal area of the flap was necrotic with a little scab. On PID 7, the necrosis of the middle area of the flaps of rats in the two groups had gradually fused, and the survival area of the flap of rats in the bivalirudin group was larger than that in the normal saline group. The distal area of the flap was almost completely necrotic, and the proximal area of the flap almost completely survived. (2) On PID 7, the survival rate of the flap of rats in the bivalirudin group was (64±4)%, significantly higher than that in the normal saline group [(45±3)%, t = 13.49, P < 0.01]. (3) On PID 7, the histological morphology of the distal area of the flap of rats in the two groups was similar, with abundant infiltration of inflammatory cells and obvious tissue edema. A large number of new blood vessels appeared in the middle area of the flap of rats in the bivalirudin group, with the formation of collateral vessels and basic dilation of new blood vessels. There were fewer new blood vessels in the middle area of the flap of rats in the normal saline group, and dilation of new blood vessels was not obvious. There was little inflammatory cell infiltration in the proximal area of the flap of rats in the two groups. Compared with that in the normal saline group, tissue edema in the proximal area of the flap of rats in the bivalirudin group was milder, and dilation was observed in more blood vessels. (4) The MVD of the middle area of the flap of rats in the bivalirudin group was (26±5)/mm², significantly higher than that in the normal saline group [(18±3)/mm², t = 5.43, P < 0.05]. (5) The expression of VEGF in the middle area of the flap of rats in the bivalirudin group was 6534±384, significantly higher than that in the normal saline group (4659±448, t = 12.31, P < 0.05).
Conclusions: Bivalirudin can promote the survival of random skin flaps in rats. The mechanisms may include reducing thrombosis, improving the blood supply of the flap, increasing the expression of VEGF, and promoting the formation of new blood vessels.
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
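For intuition only, the classical normal-theory correction is easy to state (the paper's contribution is precisely that it avoids this normality assumption): with equal-variance binormal markers and classical measurement error of reliability rho, the observed and true AUCs are related on the probit scale.

```python
import numpy as np
from scipy.stats import norm

def corrected_auc(auc_observed, reliability):
    # Under the equal-variance binormal model with classical error,
    # AUC_obs = Phi(delta / sqrt(2 sigma^2 + 2 sigma_e^2)), so undoing
    # the attenuation divides the probit of the AUC by sqrt(reliability).
    return norm.cdf(norm.ppf(auc_observed) / np.sqrt(reliability))

# An observed AUC of 0.70 with reliability 0.60 corrects to ~0.75.
print(f"{corrected_auc(0.70, 0.60):.3f}")
```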
2013-01-01
Background Cardiovascular magnetic resonance (CMR) steady state free precession (SSFP) cine sequences with high temporal resolution and improved post-processing can accurately measure RA dimensions. We used this technique to define ranges for normal RA volumes and dimensions normalized, when necessary, to the influence of gender, body surface area (BSA) and age, and also to define the best 2D image-derived predictors of RA enlargement. Methods For the definition of normal ranges of RA volume we studied 120 healthy subjects (60 men, 60 women; 20 subjects per age decile from 20 to 80 years), after careful exclusion of cardiovascular abnormality. We also studied 120 patients (60 men, 60 women; age range 20 to 80 years) with a clinical indication for CMR in order to define the best 1D and 2D predictors of RA enlargement. Data were generated from SSFP cine CMR, with 3-dimensional modeling, including tracking of the atrioventricular ring motion and time-volume curve analysis. Results In the group of healthy individuals, age influenced RA 2-chamber area and transverse diameter. Gender influenced most absolute RA dimensions and volume. Interestingly, right atrial volumes did not change with age and gender when indexed to body surface area. New CMR normal ranges for RA dimensions were modeled and are displayed for clinical use, normalized for BSA and gender, with parameter variation shown across age. Finally, the best 2D image-derived independent predictors of RA enlargement were indexed area and indexed longitudinal diameter in the 2-chamber view. Conclusion Reference RA dimensions and predictors of RA enlargement are provided using state-of-the-art CMR techniques. PMID:23566426
NASA Astrophysics Data System (ADS)
Moschandreas, D. J.; Kim, Y.; Karuchit, S.; Ari, H.; Lebowitz, M. D.; O'Rourke, M. K.; Gordon, S.; Robertson, G.
One of the objectives of the National Human Exposure Assessment Survey (NHEXAS) is to estimate exposures to several pollutants in multiple media and determine their distributions for the population of Arizona. This paper presents modeling methods used to estimate exposure distributions of chlorpyrifos and diazinon in the residential microenvironment using the database generated in Arizona (NHEXAS-AZ). A four-stage probability sampling design was used for sample selection. Exposures to pesticides were estimated using the indirect method of exposure calculation by combining measured concentrations of the two pesticides in multiple media with questionnaire information such as time subjects spent indoors, dietary and non-dietary items they consumed, and areas they touched. Most distributions of in-residence exposure to chlorpyrifos and diazinon were log-normal or nearly log-normal. Exposures to chlorpyrifos and diazinon vary by pesticide and route as well as by various demographic characteristics of the subjects. Comparisons of exposure to pesticides were investigated among subgroups of demographic categories, including gender, age, minority status, education, family income, household dwelling type, year the dwelling was built, pesticide use, and carpeted areas within dwellings. Residents with large carpeted areas within their dwellings have higher exposures to both pesticides for all routes than those in less carpet-covered areas. Depending on the route, several other determinants of exposure to pesticides were identified, but a clear pattern could not be established regarding the exposure differences between several subpopulation groups.
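A toy sketch of the indirect method: concentrations measured in each medium are combined with questionnaire-derived contact rates to yield route-specific exposures. The media, the breathing rate, and all values are illustrative assumptions.

```python
# Indirect exposure calculation: concentration x contact rate per route.
# Every number below is a hypothetical placeholder, including the assumed
# ~0.625 m^3/h resting breathing rate.
conc = {"air_ug_m3": 0.012, "food_ug_g": 0.004, "surface_ug_cm2": 0.0008}
contact = {"hours_indoors": 20, "food_g": 1500, "area_touched_cm2": 300}

inhalation = conc["air_ug_m3"] * 0.625 * contact["hours_indoors"]
dietary = conc["food_ug_g"] * contact["food_g"]
dermal = conc["surface_ug_cm2"] * contact["area_touched_cm2"]
print(f"daily exposure (ug): inhalation {inhalation:.2f}, "
      f"dietary {dietary:.2f}, dermal {dermal:.2f}")
```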
NASA Astrophysics Data System (ADS)
Hara, Takeshi; Matoba, Naoto; Zhou, Xiangrong; Yokoi, Shinya; Aizawa, Hiroaki; Fujita, Hiroshi; Sakashita, Keiji; Matsuoka, Tetsuya
2007-03-01
We have been developing a CAD scheme for head and abdominal injuries for emergency medical care. In this work, we developed an automated method to detect typical head injuries such as rupture or stroke of the brain. Extradural and subdural hematoma regions were detected by a comparison technique after the brain areas were registered using warping. We employed 5 normal and 15 stroke cases to estimate the performance after creating the brain model with 50 normal cases. Some of the hematoma regions were detected correctly in all of the stroke cases, with no false positive findings in normal cases.
A comparison of change detection methods using multispectral scanner data
Seevers, Paul M.; Jones, Brenda K.; Qiu, Zhicheng; Liu, Yutong
1994-01-01
Change detection methods were investigated as a cooperative activity between the U.S. Geological Survey and the National Bureau of Surveying and Mapping, People's Republic of China. Subtraction of band 2, band 3, normalized difference vegetation index, and tasseled cap bands 1 and 2 data from two multispectral scanner images was tested using two sites in the United States and one in the People's Republic of China. A new statistical method also was tested. Band 2 subtraction gives the best results for detecting change from vegetative cover to urban development. The statistical method identifies areas that have changed and uses a fast classification algorithm to classify the original data of the changed areas by the land cover type present for each image date.
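Band subtraction with a simple statistical threshold can be sketched as follows; the two-standard-deviation rule and the synthetic images are assumptions, not the study's parameters.

```python
import numpy as np

# Two dates of band-2 data; a block of pixels "urbanizes" between dates.
rng = np.random.default_rng(12)
band2_t1 = rng.normal(40, 5, size=(200, 200))
band2_t2 = band2_t1 + rng.normal(0, 1, (200, 200))
band2_t2[50:70, 50:70] += 15           # vegetative cover -> urban development

# Difference image, thresholded at k standard deviations from its mean.
diff = band2_t2 - band2_t1
k = 2.0
changed = np.abs(diff - diff.mean()) > k * diff.std()
print(f"changed pixels: {changed.sum()}")
```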
Identifying city PV roof resource based on Gabor filter
NASA Astrophysics Data System (ADS)
Ruhang, Xu; Zhilin, Liu; Yong, Huang; Xiaoyu, Zhang
2017-06-01
To identify a city’s PV roof resources, the area and ownership distribution of residential buildings in an urban district should be assessed. For this assessment, analysis of remote sensing data is a promising approach. Urban building roof area estimation is a major topic in remote sensing image information extraction. There are normally three ways to approach this problem. The first is pixel-based analysis, based on mathematical morphology or statistical methods; the second is object-based analysis, which can combine semantic information and expert knowledge; the third treats the task from a signal-processing viewpoint. This paper presents a Gabor-filter-based method. The results show that the method is fast and reasonably accurate.
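A sketch of roof-candidate enhancement with a small Gabor filter bank, using scikit-image's gabor filter; the frequency and orientations are assumptions, and a real pipeline would add thresholding and post-processing.

```python
import numpy as np
from skimage.filters import gabor

# Toy image with a bright rectangular "roof" on a noisy background.
rng = np.random.default_rng(13)
image = rng.uniform(0, 1, size=(128, 128))
image[40:80, 30:90] = 0.9

# Apply Gabor filters at several orientations; regularly oriented roof
# edges respond strongly at the matching orientation.
responses = []
for theta in np.deg2rad([0, 45, 90, 135]):
    real, _imag = gabor(image, frequency=0.15, theta=theta)
    responses.append(np.abs(real))
roof_energy = np.max(responses, axis=0)  # orientation-wise maximum response
print(roof_energy.shape, float(roof_energy.max()))
```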
Gel stretch method: a new method to measure constitutive properties of cardiac muscle cells
NASA Technical Reports Server (NTRS)
Zile, M. R.; Cowles, M. K.; Buckley, J. M.; Richardson, K.; Cowles, B. A.; Baicu, C. F.; Cooper G, I. V.; Gharpuray, V.
1998-01-01
Diastolic dysfunction is an important cause of congestive heart failure; however, the basic mechanisms causing diastolic congestive heart failure are not fully understood, especially the role of the cardiac muscle cell, or cardiocyte, in this process. Before the role of the cardiocyte in this pathophysiology can be defined, methods for measuring cardiocyte constitutive properties must be developed and validated. Thus this study was designed to evaluate a new method to characterize cardiocyte constitutive properties, the gel stretch method. Cardiocytes were isolated enzymatically from normal feline hearts and embedded in a 2% agarose gel containing HEPES-Krebs buffer and laminin. This gel was cast in a shape that allowed it to be placed in a stretching device. The ends of the gel were held between a movable roller and fixed plates that acted as mandibles. Distance between the right and left mandibles was increased using a stepper motor system. The force applied to the gel was measured by a force transducer. The resultant cardiocyte strain was determined by imaging the cells with a microscope, capturing the images with a CCD camera, and measuring cardiocyte and sarcomere length changes. Cardiocyte stress was characterized with a finite-element method. These measurements of cardiocyte stress and strain were used to determine cardiocyte stiffness. Two variables affecting cardiocyte stiffness were measured, the passive elastic spring and viscous damping. The passive spring was assessed by increasing the force on the gel at 1 g/min, modeling the resultant stress vs. strain relationship as an exponential [σ = (A/k)(e^(kε) − 1)]. In normal cardiocytes, A = 23.0 kN/m² and k = 16. Viscous damping was assessed by examining the loop area between the stress vs. strain relationship during 1 g/min increases and decreases in force. Normal cardiocytes had a finite loop area = 1.39 kN/m², indicating the presence of viscous damping. Thus the gel stretch method provided accurate measurements of cardiocyte constitutive properties. These measurements have allowed the first quantitative assessment of passive elastic spring properties and viscous damping in normal mammalian cardiocytes.
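The passive-spring model is easy to fit numerically; the sketch below recovers the reported normal-cardiocyte values (A = 23.0 kN/m², k = 16) from synthetic noisy stress-strain points using least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def passive_spring(eps, A, k):
    # sigma = (A/k)(e^{k eps} - 1), the exponential passive-spring model
    return A / k * (np.exp(k * eps) - 1.0)

# Synthetic stress-strain data generated with the reported parameters.
eps = np.linspace(0, 0.12, 25)
rng = np.random.default_rng(14)
sigma = passive_spring(eps, 23.0, 16.0) + rng.normal(0, 0.1, eps.size)

(A_hat, k_hat), _cov = curve_fit(passive_spring, eps, sigma, p0=(10.0, 10.0))
print(f"A = {A_hat:.1f} kN/m^2, k = {k_hat:.1f}")
```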
Music score watermarking by clef modifications
NASA Astrophysics Data System (ADS)
Schmucker, Martin; Yan, Hongning
2003-06-01
In this paper we present a new method for hiding data in music scores. In contrast to previously published algorithms, we investigate the possibility of embedding information in clefs. Using the clef as the information carrier has two advantages: first, a clef is present in each staff line, which guarantees a fixed capacity; second, the clef defines the reference system for musical symbols, so music-containing symbols, e.g. the notes and the rests, are not degraded by manipulations. Music scores must be robust against greyscale-to-binary conversion. As a consequence, the information is embedded by modifying the distribution of black and white pixels in certain areas. We evaluate simple image processing mechanisms based on erosion and dilation for embedding the information. For retrieving the watermark, the b/w distribution is extracted from the given clef. To solve the synchronization problem, the watermarked clef is normalized in a pre-processing step; the normalization is based on moments. The areas used for watermarking are calculated by image segmentation techniques that consider the features of a clef. We analyze the capacity and robustness of the proposed method using different parameters. The method can be combined with other music score watermarking methods to increase the capacity of existing watermarking techniques.
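The embedding primitive can be sketched with standard morphological operators: dilation or erosion nudges the black/white pixel ratio of a clef sub-area, and the detector compares the ratio with a threshold. Region selection, moment-based normalization, and robustness testing are omitted; the patch is a toy stand-in.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def embed_bit(region, bit):
    # Dilation raises the fraction of black pixels (bit 1),
    # erosion lowers it (bit 0).
    return binary_dilation(region) if bit else binary_erosion(region)

def read_bit(region, threshold):
    return int(region.mean() > threshold)   # fraction of black pixels

rng = np.random.default_rng(15)
clef_area = rng.uniform(0, 1, size=(20, 20)) > 0.5   # toy binary clef patch
base_ratio = clef_area.mean()

marked = embed_bit(clef_area, bit=1)
print(read_bit(marked, threshold=base_ratio))        # -> 1
```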
[Kriging analysis of vegetation index depression in peak cluster karst area].
Yang, Qi-Yong; Jiang, Zhong-Cheng; Ma, Zu-Lu; Cao, Jian-Hua; Luo, Wei-Qun; Li, Wen-Jun; Duan, Xiao-Fang
2012-04-01
In order to characterize the spatial variability of the normalized difference vegetation index (NDVI) of a peak-cluster karst area, and taking into account the information "missing" from mountain shadows in remote sensing images of karst terrain, the NDVI of the non-shaded areas of the Guohua Ecological Experimental Area, Pingguo County, Guangxi, was extracted using the image processing software ENVI. The spatial variability of NDVI was analyzed using geostatistical methods, and the NDVI of the mountain shadow areas was predicted and validated. The results indicated that the NDVI of the study area showed strong spatial variability and spatial autocorrelation resulting from the impact of intrinsic factors, with a range of 300 m. The spatial distribution maps of NDVI produced by Kriging interpolation showed a mean NDVI of 0.196, distributed in apparent strips and blocks. The higher NDVI values were distributed where the slope of the peak-cluster area was greater than 25 degrees, while the lower values were distributed in areas such as the foot of the peak cluster and the depressions, where the slope was less than 25 degrees. Validation of the Kriging method showed that the interpolation had a very high prediction accuracy and could predict the NDVI of the shadow areas, which provides a new idea and method for the monitoring and evaluation of karst rocky desertification.
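For readers who want to reproduce this kind of shadow-area prediction, a minimal ordinary-kriging sketch follows. It assumes the third-party pykrige package; the coordinates, NDVI values, and variogram settings are placeholders, not the paper's data.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Placeholder samples standing in for NDVI extracted from non-shaded pixels
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 3000, 200), rng.uniform(0, 3000, 200)
ndvi = rng.uniform(0.05, 0.45, 200)

# Spherical variogram; the paper reports a range of about 300 m
ok = OrdinaryKriging(x, y, ndvi, variogram_model="spherical")

# Predict NDVI on a 30 m grid covering the shadow areas
gridx = np.arange(0, 3000, 30.0)
gridy = np.arange(0, 3000, 30.0)
ndvi_pred, kriging_variance = ok.execute("grid", gridx, gridy)
```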
Exploring resting-state EEG complexity before migraine attacks.
Cao, Zehong; Lai, Kuan-Lin; Lin, Chin-Teng; Chuang, Chun-Hsiang; Chou, Chien-Chen; Wang, Shuu-Jiun
2018-06-01
Objective: Entropy-based approaches to understanding the temporal dynamics of complexity have revealed novel insights into various brain activities. Herein, electroencephalogram complexity before migraine attacks was examined using an inherent fuzzy entropy approach, allowing the development of an electroencephalogram-based classification model to recognize the difference between interictal and preictal phases. Methods: Forty patients with migraine without aura and 40 age-matched normal control subjects were recruited, and the resting-state electroencephalogram signals of their prefrontal and occipital areas were prospectively collected. The migraine phases were defined based on the headache diary, and the preictal phase was defined as within 72 hours before a migraine attack. Results: The electroencephalogram complexity of patients in the preictal phase, which resembled that of normal control subjects, was significantly higher than that of patients in the interictal phase in the prefrontal area (FDR-adjusted p < 0.05) but not in the occipital area. The test-retest reliability (n = 8) measured using the intra-class correlation coefficient was good, with r1 = 0.73 (p = 0.01). Furthermore, the classification model, a support vector machine, showed the highest accuracy (76 ± 4%) for classifying interictal and preictal phases using the prefrontal electroencephalogram complexity. Conclusion: Entropy-based analytical methods identified enhancement or "normalization" of frontal electroencephalogram complexity during the preictal phase compared with the interictal phase. This classification model, using this complexity feature, may have the potential to provide a preictal alert to patients with migraine without aura.
Kieper, Douglas Arthur [Seattle, WA; Majewski, Stanislaw [Morgantown, WV; Welch, Benjamin L [Hampton, VA
2012-07-03
An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization enhanced two dimensional contrast map. This enhanced (increased contrast) contrast map enhances the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially of small lesions.
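A minimal sketch of the normalization steps enumerated above is given below, assuming NumPy arrays for the two detector images and a boolean breast mask. The function name, the flip used for co-registration, and the final combination are our illustrative reading of the patent text, not its certified implementation.

```python
import numpy as np

def contrast_map(img_a, img_b, breast_mask):
    # Step 2: mirror the opposing detector's image so the two views co-register
    # (a left-right flip is an assumption; the actual registration may differ)
    img_b_reg = np.fliplr(img_b)
    # Step 3: divide each image by its average count per pixel in the breast area
    norm_a = img_a / img_a[breast_mask].mean()
    norm_b = img_b_reg / img_b_reg[breast_mask].mean()
    # Step 4: rescale the counts by the normalized values to form the
    # normalization-enhanced two-dimensional contrast map
    return norm_a * norm_b
```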
Kieper, Douglas Arthur [Newport News, VA; Majewski, Stanislaw [Yorktown, VA; Welch, Benjamin L [Hampton, VA
2008-10-28
An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization enhanced two dimensional contrast map. This enhanced (increased contrast) contrast map enhances the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially of small lesions.
20 CFR 655.4 - Definitions of terms used in this subpart.
Code of Federal Regulations, 2012 CFR
2012-04-01
.... worker recruitment report. Area of Intended Employment means the geographic area within normal commuting... certification is sought. There is no rigid measure of distance which constitutes a normal commuting distance or normal commuting area, because there may be widely varying factual circumstances among different areas (e...
LORETA imaging of P300 in schizophrenia with individual MRI and 128-channel EEG.
Pae, Ji Soo; Kwon, Jun Soo; Youn, Tak; Park, Hae-Jeong; Kim, Myung Sun; Lee, Boreom; Park, Kwang Suk
2003-11-01
We investigated the characteristics of P300 generators in schizophrenics by using voxel-based statistical parametric mapping of current density images. P300 generators, produced by a rare target tone of 1500 Hz (15%) under a frequent nontarget tone of 1000 Hz (85%), were measured in 20 right-handed schizophrenics and 21 controls. Low-resolution electromagnetic tomography (LORETA), using a realistic head model of the boundary element method based on individual MRI, was applied to the 128-channel EEG. Three-dimensional current density images were reconstructed from the LORETA intensity maps that covered the whole cortical gray matter. Spatial normalization and intensity normalization of the smoothed current density images were used to reduce anatomical variance and subject-specific global activity, and statistical parametric mapping (SPM) was applied for the statistical analysis. We found that the sources of P300 were consistently localized at the left superior parietal area in normal subjects, while those of schizophrenics were diversely distributed. Upon statistical comparison, schizophrenics, with globally reduced current densities, showed a significant P300 current density reduction in the left medial temporal area and in the left inferior parietal area, while both left prefrontal and right orbitofrontal areas were relatively activated. The left parietotemporal area was found to correlate negatively with Positive and Negative Syndrome Scale total scores of schizophrenic patients. In conclusion, the areas of reduced and increased current density in schizophrenic patients suggest that the medial temporal and frontal areas contribute to the pathophysiology of schizophrenia, consistent with a frontotemporal circuitry abnormality.
NASA Astrophysics Data System (ADS)
Chen, X.; Vierling, L. A.; Deering, D. W.
2004-12-01
Satellite data offer unique perspectives for monitoring and quantifying land cover change; however, radiometric consistency among co-located multi-temporal images is difficult to maintain due to variations in sensors and atmosphere. To detect accurate landscape change using multi-temporal images, we developed a new relative radiometric normalization scheme: the temporally invariant cluster (TIC) method. Image data were acquired on 9 June 1990 (Landsat 4), 20 June 2000, and 26 August 2001 (Landsat 7) for analyses over boreal forests near the Siberian city of Krasnoyarsk. The Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Reduced Simple Ratio (RSR) were investigated in the normalization study. Temporally invariant cluster (TIC) centers were identified through a point density map of the base image and the target image, and a normalization regression line was created through all TIC centers. The target image digital data were then converted using the regression function so that the two images could be compared using the resulting common radiometric scale. We found that EVI was very sensitive to vegetation structure and could thus be used to separate conifer forests from deciduous forests and grass/crop lands. NDVI was very effective at reducing the influence of shadow, while EVI was very sensitive to shadowing. After normalization, correlations of NDVI and EVI with field-collected total Leaf Area Index (LAI) data in 2000 and 2001 were significantly improved; the r-square values in these regressions increased from 0.49 to 0.69 and from 0.46 to 0.61, respectively. An EVI "cancellation effect", where EVI was positively related to understory greenness but negatively related to forest canopy coverage, was evident across a post-fire chronosequence. These findings indicate that the TIC method provides a simple, effective and repeatable method to create radiometrically comparable data sets for remote detection of landscape change. Compared with some previous relative normalization methods, this new method avoids subjective selection of a normalization regression line. It does not require high-level programming and statistical analyses, yet remains sensitive to landscape changes occurring over seasonal and inter-annual time scales. In addition, the TIC method maintains sensitivity to subtle changes in vegetation phenology and enables normalization even when invariant features are rare.
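The core of such a relative normalization is a linear regression mapping the target image onto the base image's radiometric scale. The sketch below assumes the invariant pixels (in the published method, the TIC centers) have already been identified; the function and mask names are ours.

```python
import numpy as np

def relative_normalization(base, target, invariant_mask):
    # regress base-image values on target-image values at invariant pixels
    gain, offset = np.polyfit(target[invariant_mask].ravel(),
                              base[invariant_mask].ravel(), 1)
    # rescale the whole target image onto the base image's radiometric scale
    return gain * target + offset
```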
[Microcytomorphometric video-image detection of nuclear chromatin in ovarian cancer].
Grzonka, Dariusz; Kamiński, Kazimierz; Kaźmierczak, Wojciech
2003-09-01
Video-image detection technology for tissue preparations precisely evaluates nuclear chromatin content, the size and shape of the cell nucleus, mitotic indicators, DNA index, ploidy, S-phase fraction and other parameters. Image detection methods include microcytomorphometry video-image (MCMM-VI), flow, double flow, and fluorescence-activated approaches. Diagnostic methods for malignant neoplasms of the ovary are still nonspecific and imprecise, which is one reason for the unsatisfactory results of treatment. The aim was to evaluate microcytomorphometric measurements of nuclear chromatin in histopathologic tissue preparations (HP) of ovarian cancer and to compare them with normal ovarian tissue. We assessed 10 paraffin-embedded tissue preparations of serous ovarian cancer, 4 preparations of mucinous cancer and 2 cases of Krukenberg tumor from patients operated on in the Clinic of Perinatology and Gynaecology, Silesian Medical Academy in Zabrze, in the period 2001-2002. MCMM-VI estimation was based on a computer-aided analysis system: an Axioskop 20 microscope, a JVC TK-C1380 TV camera, and Carl Zeiss KS Vision 400 rel. 3.0 software. The following MCMM-VI parameters were assessed: count of pathologic nuclei, nuclear diameter, area, min/max diameter ratio, equivalent circle diameter (Dcircle), mean brightness (mean D), integrated optical density (IOD = area x mean D), DNA index and 2.5c exceeding rate percentage (2.5c ER%). MCMM-VI was performed on 160 areas of 16 cancer preparations and 100 areas of normal ovarian tissue. Statistical analysis was performed using Student's t test. We obtained statistically significantly higher values of nuclear chromatin parameters, DI and 2.5c ER in mucinous cancer and Krukenberg tumor in comparison to serous cancer. The MCMM-VI chromatin parameters of malignant ovarian neoplasms were statistically significantly higher than those of normal ovarian tissue. Cytometric and karyometric parameters of nuclear chromatin estimated by MCMM-VI are useful in the diagnostics and prognosis of ovarian cancer.
NASA Astrophysics Data System (ADS)
Russel, Fhillipo; Damayanti, Astrid; Pin, Tjiong Giok
2018-02-01
This research concerns the geothermal potential of Mount Karang, Banten Province, based on the characteristics of the region. The research method used geochemical sampling of hot springs integrated with GIS methods for the spatial analysis of geothermal potential. Based on geothermal potential, Mount Karang is divided into three regions: high potential, normal potential, and low potential. The high geothermal potential region covers an area of 24.16 km² and contains the Cisolong and Banjar 2 hot springs. The normal potential region covers the Kawah hot spring. The fault index of the Mount Karang region is one of the significant physical characteristics for determining geothermal potential.
Structure and function in patients with glaucomatous defects near fixation.
Shafi, Asifa; Swanson, William H; Dul, Mitchell W
2011-01-01
To assess relations between perimetric sensitivity and neuroretinal rim area using high-resolution perimetric mapping in patients with glaucomatous defects within 10° of fixation. One eye was tested in each of 31 patients with open-angle glaucoma enrolled in a prospective study of perimetric defects within 10° of fixation. Norms were derived from 110 control subjects free of eye disease, aged 21 to 81 years. Perimetric sensitivity was measured using the 10-2 test pattern with the Swedish Interactive Threshold Algorithm (SITA) standard algorithm on the Humphrey Field Analyzer (HFA II-i; Carl Zeiss Meditec), stimulus size III. Area of the temporal neuroretinal rim was measured using the Heidelberg Retina Tomograph 3. Decibel values were converted into linear units of contrast sensitivity averaged across locations corresponding to the temporal rim sector. Both measures were expressed as percent of mean normal, and the Bland-Altman method was used to assess agreement. Perimetric locations corresponding to the temporal sector were determined for six different optic nerve maps. Contrast sensitivity was moderately correlated with temporal rim area (r² > 30%, p < 0.005). For all six optic nerve maps, Bland-Altman analysis found good agreement between perimetric sensitivity and rim area, with both measures expressed as fraction of mean normal and confidence limits for agreement that were consistent with normal between-subject variability in control eyes. By using high-resolution perimetric mapping in patients with scotomas within 10° of fixation, we confirmed findings of linear relations between perimetric sensitivity and area of temporal neuroretinal rim and showed that the confidence limits for agreement in patients with glaucoma were consistent with normal between-subject variability.
Cideciyan, Artur V.; Swider, Malgorzata; Jacobson, Samuel G.
2015-01-01
Purpose. We previously developed reduced-illuminance autofluorescence imaging (RAFI) methods involving near-infrared (NIR) excitation to image melanin-based fluorophores and short-wavelength (SW) excitation to image lipofuscin-based fluorophores. Here, we propose to normalize NIR-RAFI in order to increase the relative contribution of retinal pigment epithelium (RPE) fluorophores. Methods. Retinal imaging was performed with a standard protocol holding system parameters invariant in healthy subjects and in patients. Normalized NIR-RAFI was derived by dividing the NIR-RAFI signal by NIR reflectance point-by-point after image registration. Results. Regions of RPE atrophy in Stargardt disease, AMD, retinitis pigmentosa, choroideremia, and Leber congenital amaurosis, as defined by low signal on SW-RAFI, could correspond to a wide range of signal on NIR-RAFI depending on the contribution from the choroidal component. Retinal pigment epithelium atrophy tended to always correspond to high signal on NIR reflectance. Normalizing NIR-RAFI reduced the choroidal component of the signal in regions of atrophy. Quantitative evaluation of RPE atrophy area showed no significant differences between SW-RAFI and normalized NIR-RAFI. Conclusions. Imaging of RPE atrophy using lipofuscin-based AF imaging has become the gold standard. However, this technique involves bright SW lights that are uncomfortable and may accelerate the rate of disease progression in vulnerable retinas. The NIR-RAFI method developed here is a melanin-based alternative that is not absorbed by opsins and bisretinoid moieties, and is comfortable to view. Further development of this method may result in a nonmydriatic and comfortable imaging method to quantify RPE atrophy extent and its expansion rate. PMID:26024124
NASA Technical Reports Server (NTRS)
Nelson, Robert L.; Welsh, Clement J.
1960-01-01
The experimental wave drags of bodies and wing-body combinations over a wide range of Mach numbers are compared with the computed drags utilizing a 24-term Fourier series application of the supersonic area rule and with the results of equivalent-body tests. The results indicate that the equivalent-body technique provides a good method for predicting the wave drag of certain wing-body combinations at and below a Mach number of 1. At Mach numbers greater than 1, the equivalent-body wave drags can be misleading. The wave drags computed using the supersonic area rule are shown to be in best agreement with the experimental results for configurations employing the thinnest wings. The wave drags for the bodies of revolution presented in this report are predicted to a greater degree of accuracy by using the frontal projections of oblique areas than by using normal areas. A rapid method of computing wing area distributions and area-distribution slopes is given in an appendix.
NASA Astrophysics Data System (ADS)
Theodorakou, Chrysoula; Farquharson, Michael J.
2009-08-01
The motivation behind this study is to assess whether angular dispersive x-ray diffraction (ADXRD) data, processed using multivariate analysis techniques, can be used for classifying secondary colorectal liver cancer tissue and normal surrounding liver tissue in human liver biopsy samples. The ADXRD profiles from a total of 60 samples of normal liver tissue and colorectal liver metastases were measured using a synchrotron radiation source. The data were analysed for 56 samples using nonlinear peak-fitting software. Four peaks were fitted to all of the ADXRD profiles, and the amplitude, the area, and the amplitude and area ratios for three of the four peaks were calculated and used for the statistical and multivariate analysis. The statistical analysis showed that there are significant differences in all the peak-fitting parameters and ratios between the normal and the diseased tissue groups. The technique of soft independent modelling of class analogy (SIMCA) was used to classify normal liver tissue and colorectal liver metastases, resulting in 67% of the normal tissue samples and 60% of the secondary colorectal liver tissue samples being classified correctly. This study has shown that the ADXRD data of normal and secondary colorectal liver cancer are statistically different, and x-ray diffraction data analysed using multivariate analysis have the potential to be used as a method of tissue classification.
Electrochemical lesions in the rat liver support its potential for treatment of liver tumors.
Wemyss-Holden, S A; Robertson, G S; Dennison, A R; de la M Hall, P; Fothergill, J C; Jones, B; Maddern, G J
2000-09-01
An effective therapy is needed for patients with surgically unresectable liver tumors who have very limited life expectancy. One possible treatment is electrochemical tumor necrosis. This study investigated the natural history of electrochemical lesions in the normal rat liver. A direct current generator, connected to platinum electrodes, was used to create controlled areas of liver necrosis. Animals were sacrificed 2 days, 2 weeks, 2 months, and 6 months after treatment and the macroscopic and histological appearance of the necrotic lesions was followed. No animal died as a result of electrolysis; postoperatively, all gained weight normally. Liver enzymes were significantly (P < 0.001) elevated after treatment, but returned to normal after a week. Two days after electrolysis, histology confirmed an ellipsoidal area of coagulative necrosis at the site of the electrode tip and commonly a segment of peripheral necrosis. After 2 weeks there was histological evidence of healing. By 6 months, very little necrotic tissue remained within a small fibrous scar. Electrolysis is a safe method for creating defined areas of liver necrosis that heal well with no associated mortality. This study supports the potential of electrolysis for treating patients with unresectable liver tumors. Copyright 2000 Academic Press.
Geostatistical interpolation of available copper in orchard soil as influenced by planting duration.
Fu, Chuancheng; Zhang, Haibo; Tu, Chen; Li, Lianzhen; Luo, Yongming
2018-01-01
Mapping the spatial distribution of available copper (A-Cu) in orchard soils is important in agriculture and environmental management. However, data on the distribution of A-Cu in orchard soils are usually highly variable and severely skewed due to the continuous input of fungicides. In this study, ordinary kriging combined with planting duration (OK_PD) is proposed as a method for improving the interpolation of soil A-Cu. Four normal distribution transformation methods, namely, the Box-Cox, Johnson, rank order, and normal score methods, were utilized prior to interpolation. A total of 317 soil samples were collected in the orchards of the Northeast Jiaodong Peninsula. Moreover, 1472 orchards were investigated to obtain a map of planting duration using Voronoi tessellations. The soil A-Cu content ranged from 0.09 to 106.05 mg kg⁻¹ with a mean of 18.10 mg kg⁻¹, reflecting the high availability of Cu in the soils. Soil A-Cu concentrations exhibited a moderate spatial dependency and increased significantly with increasing planting duration. All the normal transformation methods successfully decreased the skewness and kurtosis of the soil A-Cu and the associated residuals, and also computed more robust variograms. OK_PD could generate better spatial prediction accuracy than ordinary kriging (OK) for all transformation methods tested, and it also provided a more detailed map of soil A-Cu. Normal score transformation produced satisfactory accuracy and showed an advantage in ameliorating the smoothing effect derived from the interpolation methods. Thus, normal score transformation prior to kriging combined with planting duration (NSOK_PD) is recommended for the interpolation of soil A-Cu in this area.
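As an illustration of the recommended pre-kriging transform, the sketch below implements a generic normal score transform in Python; the plotting-position formula is a common choice and not necessarily the exact one used in the paper.

```python
import numpy as np
from scipy import stats

def normal_score_transform(values):
    """Map skewed data to standard normal scores by rank, a common
    pre-kriging transform (sketch, not the paper's exact procedure)."""
    n = len(values)
    ranks = stats.rankdata(values)      # ranks 1..n, ties averaged
    p = (ranks - 0.5) / n               # plotting positions in (0, 1)
    return stats.norm.ppf(p)            # corresponding Gaussian quantiles
```

After kriging the transformed values, predictions are mapped back through the inverse of the empirical transform, which preserves the original skewed distribution while keeping the variogram fitting well behaved.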
Todor, Nicolae; Todor, Irina; Săplăcan, Gavril
2014-01-01
The linear combination of variables is an attractive method in many medical analyses targeting a score to classify patients. In the case of ROC curves, the most popular problem is to identify the linear combination which maximizes the area under the curve (AUC). This problem is completely solved when normality assumptions are met. With no assumption of normality, search algorithms are avoided because it is accepted that the AUC must be evaluated n^d times, where n is the number of distinct observations and d is the number of variables. For d = 2, using particularities of the AUC formula, we described an algorithm which lowered the number of evaluations of the AUC from n² to n(n-1) + 1. For d > 2, our proposed solution is an approximate method that considers equidistant points on the unit sphere in R^d at which the AUC is evaluated. The algorithms were applied to data from our lab to predict response to treatment from a set of molecular markers in cervical cancer patients. In order to evaluate the strength of our algorithms, a simulation was added. When normality cannot be assumed, the presented algorithms are feasible. For many variables the computation time increases, but remains acceptable.
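To make the search concrete, the sketch below estimates the empirical AUC of a linear score via the Mann-Whitney statistic and scans directions on the unit circle for d = 2. This brute-force scan mirrors the spirit of the equidistant-points idea rather than the paper's reduced-evaluation algorithm.

```python
import numpy as np

def empirical_auc(pos, neg):
    # Mann-Whitney estimate: P(score_pos > score_neg) + 0.5 * P(tie)
    diff = pos[:, None] - neg[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

def best_linear_auc_2d(X_pos, X_neg, n_angles=360):
    # scan equidistant directions on the unit circle (an approximation,
    # not the paper's n(n-1)+1 exact algorithm)
    best_auc, best_theta = 0.0, 0.0
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        w = np.array([np.cos(theta), np.sin(theta)])
        auc = empirical_auc(X_pos @ w, X_neg @ w)
        auc = max(auc, 1.0 - auc)   # reversing the direction flips the AUC
        if auc > best_auc:
            best_auc, best_theta = auc, theta
    return best_auc, best_theta
```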
NASA Astrophysics Data System (ADS)
Yi, Faliu; Moon, Inkyu; Lee, Yeon H.
2015-01-01
Counting morphologically normal cells in human red blood cells (RBCs) is extremely beneficial in the health care field. We propose a three-dimensional (3-D) classification method of automatically determining the morphologically normal RBCs in the phase image of multiple human RBCs that are obtained by off-axis digital holographic microscopy (DHM). The RBC holograms are first recorded by DHM, and then the phase images of multiple RBCs are reconstructed by a computational numerical algorithm. To design the classifier, the three typical RBC shapes, which are stomatocyte, discocyte, and echinocyte, are used for training and testing. Abnormal RBC shapes different from the three normal shapes are defined as the fourth category. Ten features, including projected surface area, average phase value, mean corpuscular hemoglobin, perimeter, mean corpuscular hemoglobin surface density, circularity, mean phase of center part, sphericity coefficient, elongation, and pallor, are extracted from each RBC after segmenting the reconstructed phase images by using a watershed transform algorithm. Moreover, four additional properties, such as projected surface area, perimeter, average phase value, and elongation, are measured from the inner part of each cell, which can give significant information beyond the previous 10 features for the separation of the RBC groups; these are verified in the experiment by the statistical method of Hotelling's T-square test. We also apply the principal component analysis algorithm to reduce the number of variables and establish the Gaussian mixture densities using the projected data with the first eight principal components. Consequently, the Gaussian mixtures are used to design the discriminant functions based on Bayesian decision theory. To improve the performance of the Bayes classifier and the accuracy of the estimate of its error rate, the leave-one-out technique is applied. Experimental results show that the proposed method can yield good results for calculating the percentage of each typical normal RBC shape in a reconstructed phase image of multiple RBCs, which will be favorable for the analysis of RBC-related diseases. In addition, we show that the discrimination performance for the counting of normal shapes of RBCs can be improved by using 3-D features of an RBC.
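The classifier pipeline described above (PCA projection, per-class Gaussian densities, Bayes' rule) can be sketched with scikit-learn as follows; the class priors, the number of principal components, and the single-component mixtures are our assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def train_bayes_gmm(X, y, n_components=8):
    """Project features onto the first principal components, then fit one
    Gaussian density per RBC class (a sketch of the described pipeline)."""
    pca = PCA(n_components=n_components).fit(X)
    Z = pca.transform(X)
    models, priors = {}, {}
    for c in np.unique(y):
        models[c] = GaussianMixture(n_components=1).fit(Z[y == c])
        priors[c] = np.mean(y == c)          # empirical class prior
    return pca, models, priors

def predict(pca, models, priors, X):
    # Bayes' rule: pick the class maximizing log p(x|c) + log p(c)
    Z = pca.transform(X)
    classes = sorted(models)
    scores = np.column_stack([models[c].score_samples(Z) + np.log(priors[c])
                              for c in classes])
    return np.array(classes)[np.argmax(scores, axis=1)]
```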
Lack of harmonization in sweat testing for cystic fibrosis - a national survey.
Christiansen, Anne Lindegaard; Nybo, Mads
2014-11-01
Sweat testing is used in the diagnosis of cystic fibrosis. Interpretation of the sweat test depends, however, on the method performed, since conductivity, osmolality and chloride concentration can all be measured as part of a sweat test. The aim of this study was to investigate how performance of the test is organized in Denmark. Departments conducting the sweat test were contacted and interviewed following a premade questionnaire. They were asked about the methods performed, the applied NPU (Nomenclature for Properties and Units) code, the reference interval, the recommended interpretation and the referred literature. Fourteen departments performed the sweat test. One department measured chloride and sodium concentration, while 13 departments measured conductivity. One department used a non-existing NPU code, two departments applied NPU codes inconsistent with the method performed, four departments applied no NPU code and seven applied a correct NPU code. Ten of the departments measuring conductivity applied reference intervals. Nine departments measuring conductivity had recommendations of a normal area, a grey zone and a pathological value, while four departments applied only a normal and grey zone or a pathological value. Cut-off values for the normal, grey and pathological areas were, like the reference intervals, inconsistent. There is inconsistent use of NPU codes, reference intervals and interpretation of sweat conductivity in the process of diagnosing cystic fibrosis. Because diagnosing cystic fibrosis is a combined effort between local pediatric departments, biochemical and genetic departments and cystic fibrosis centers, a national harmonization is necessary to assure correct clinical use.
Outlier Detection in Urban Air Quality Sensor Networks.
van Zoest, V M; Stein, A; Hoek, G
2018-01-01
Low-cost urban air quality sensor networks are increasingly used to study the spatio-temporal variability in air pollutant concentrations. Recently installed low-cost urban sensors, however, are more prone to producing erroneous data than conventional monitors, e.g., leading to outliers. Commonly applied outlier detection methods are unsuitable for air pollutant measurements that have large spatial and temporal variations as occur in urban areas. We present a novel outlier detection method based upon a spatio-temporal classification, focusing on hourly NO₂ concentrations. We divide a full year's observations into 16 spatio-temporal classes, reflecting urban background vs. urban traffic stations, weekdays vs. weekends, and four periods per day. For each spatio-temporal class, we detect outliers using the mean and standard deviation of the normal distribution underlying the truncated normal distribution of the NO₂ observations. Applying this method to a low-cost air quality sensor network in the city of Eindhoven, the Netherlands, we found 0.1-0.5% outliers. Outliers could reflect measurement errors or unusually high air pollution events. Additional evaluation using expert knowledge is needed to decide on the treatment of the identified outliers. We conclude that our method is able to detect outliers while maintaining the spatio-temporal variability of air pollutant concentrations in urban areas.
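A simplified sketch of the per-class detection step follows: the parent normal behind left-truncated observations is estimated by maximum likelihood, and values far out in that distribution are flagged. The truncation point and z-threshold are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def flag_outliers(x, trunc_lo=0.0, z=3.0):
    """Estimate the normal distribution underlying NO2 data truncated below
    at trunc_lo, then flag observations beyond z standard deviations."""
    x = np.asarray(x, dtype=float)

    def nll(params):
        mu, sd = params
        if sd <= 0:
            return np.inf
        # log-density of a normal left-truncated at trunc_lo
        logpdf = stats.norm.logpdf(x, mu, sd)
        log_tail = stats.norm.logsf(trunc_lo, mu, sd)
        return -(logpdf - log_tail).sum()

    res = minimize(nll, x0=[x.mean(), x.std()], method="Nelder-Mead")
    mu, sd = res.x
    # flag values implausibly far out in the estimated parent distribution
    return np.abs(x - mu) > z * sd
```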
Campos, Roberto E; Santos Filho, Paulo César F; de O Júnior, Osmir Batista; Ambrosano, Gláucia M B; Pereira, Cristina Alves
2018-01-01
Bond strength (BS) values from in vitro studies are useful when dentists are selecting an adhesive system, but there is no ideal measuring method. The purpose of this in vitro study was to investigate the influence of the evaluation method on the BS between dentin and composite resin. Molars with exposed superficial dentin (N=240) were divided into 3 groups according to the test: microtensile (μTBS), microshear (μSBS), and micropush-out (μPBS). Each one was subdivided into 4 groups according to the adhesive system (total etch, 3- and 2-step; and self-etch, 2- and 1-step). For the μPBS test, a conical cavity was prepared and restored with composite resin. An occlusal slice (1.5 mm in thickness) was obtained from each tooth. For the μSBS test, a composite resin cylinder (1 mm in diameter) was built on the dentin surface of each tooth. For the μTBS test, a 2-increment composite resin cylinder was built on the dentin surface, and beams with a sectional area of 0.5 mm² were obtained. Each subgroup was divided into 2 groups (n=10), with specimens tested after 7 days or after 1 year of water storage. The specimens were loaded to failure, and the failure stress recorded in megapascals. Original BS values from the μTBS and μSBS tests were normalized for the area of the μPBS specimens. Original and normalized results were submitted to a 3-way ANOVA (α=.05). The correlation among mechanical results, stress distribution, and failure pattern was investigated. Significant differences (P<.05) were found among the adhesive systems and methods within both the original and normalized data, but not between the storage times (P>.05). Within the 7 days of storage, the original BS values from μTBS were significantly higher (P<.001) than those from μPBS and μSBS. After 1 year, μSBS presented significantly lower results (P<.001). However, after the normalization for area, the BS values of the μTBS and μPBS tests were similar, and both were higher (P<.001) than that of μSBS at both storage times. In the μSBS and μTBS specimens, cohesive and adhesive failures were observed, whereas μPBS presented 100% adhesive failures. The failure modes were compatible with the stress distribution. The storage time did not affect the results, but differences were found among the adhesives and methods. For comparisons of bond strength from tests with different bonding areas, normalization for area seems essential. The microshear bond test should not be used for bond strength evaluation, and the microtensile test needs improvement to enable reliable results regarding stress concentration and failure mode. The micropush-out test may be considered more reliable than the microtensile test in bond strength investigation, as demonstrated by the uniform stress concentration and adhesive failure pattern. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Statistical Considerations of Data Processing in Giovanni Online Tool
NASA Technical Reports Server (NTRS)
Suhung, Shen; Leptoukh, G.; Acker, J.; Berrick, S.
2005-01-01
The GES DISC Interactive Online Visualization and Analysis Infrastructure (Giovanni) is a web-based interface for the rapid visualization and analysis of gridded data from a number of remote sensing instruments. The GES DISC currently employs several Giovanni instances to analyze various products, such as Ocean-Giovanni for ocean products from SeaWiFS and MODIS-Aqua; TOMS & OMI Giovanni for atmospheric chemical trace gases from TOMS and OMI; and MOVAS for aerosols from MODIS (http://giovanni.gsfc.nasa.gov). Foremost among the Giovanni statistical functions is data averaging. Two aspects of this function are addressed here. The first deals with the accuracy of averaging gridded mapped products vs. averaging from the ungridded Level 2 data. Some mapped products contain mean values only; others contain additional statistics, such as the number of pixels (NP) for each grid cell, standard deviation, etc. Since NP varies spatially and temporally, averaging with or without weighting by NP will give different results. In this paper, we address the differences among various weighting algorithms for some datasets utilized in Giovanni. The second aspect is related to how different averaging methods affect data quality and interpretation for data with non-normal distributions. The present study demonstrates results of different spatial averaging methods using gridded SeaWiFS Level 3 mapped monthly chlorophyll a data. Spatial averages were calculated using three different methods: arithmetic mean (AVG), geometric mean (GEO), and maximum likelihood estimator (MLE). Biogeochemical data, such as chlorophyll a, are usually considered to have a log-normal distribution. The study determined that differences between methods tend to increase with increasing size of a selected coastal area, with no significant differences in most open oceans. The GEO method consistently produces values lower than AVG and MLE. The AVG method produces values larger than MLE in some cases, but smaller in other cases. Further studies indicated that significant differences between the AVG and MLE methods occurred in coastal areas where data have large spatial variations and a log-bimodal distribution instead of a log-normal distribution.
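The three spatial averages compared in the study can be illustrated in a few lines; the synthetic chlorophyll data and log-normal parameters below are assumptions for demonstration only.

```python
import numpy as np

def spatial_averages(x):
    avg = x.mean()                                   # arithmetic mean (AVG)
    geo = np.exp(np.log(x).mean())                   # geometric mean (GEO)
    mu, s2 = np.log(x).mean(), np.log(x).var()       # ML estimates on log scale
    mle = np.exp(mu + s2 / 2.0)                      # log-normal MLE of the mean
    return avg, geo, mle

rng = np.random.default_rng(0)
chl = rng.lognormal(mean=-1.0, sigma=1.2, size=500)  # synthetic chlorophyll a
print(spatial_averages(chl))  # GEO is the lowest; AVG and MLE are close here
```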
Gordeev, S A; Voronin, S G
2015-01-01
The proprioceptive sensitivity of healthy volunteers and convalescents after acute cerebrovascular episodes was studied by a new neurophysiological method for the registration of kinesthetic evoked potentials emerging in response to passive 50° bending of the hand at the wrist joint with an angular acceleration of 350 rad/s². Kinesthetic evoked potentials were recorded above the somatosensory cortex projection areas in the hemispheres contra- and ipsilateral to the stimulated limb. The patients exhibited significantly longer latencies and smaller amplitudes of the early components of the response in the involved hemisphere in comparison with normal subjects. The method for registration of kinesthetic evoked potentials allows a more detailed study of the mechanisms of kinesthetic sensitivity in health and in organic involvement of the brain.
Control Systems with Normalized and Covariance Adaptation by Optimal Control Modification
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T. (Inventor); Burken, John J. (Inventor); Hanson, Curtis E. (Inventor)
2016-01-01
Disclosed is a novel adaptive control method and system called optimal control modification with normalization and covariance adjustment. The invention specifically addresses current challenges of adaptive control in these areas: 1) persistent excitation, 2) complex nonlinear input-output mapping, 3) large inputs and persistent learning, and 4) the lack of stability analysis tools for certification. The invention has been subjected to many simulations and flight tests. The results substantiate the effectiveness of the invention and demonstrate its technical feasibility for use in modern aircraft flight control systems.
NASA Astrophysics Data System (ADS)
Hadi Sutrisno, Himawan; Kiswanto, Gandjar; Istiyanto, Jos
2017-06-01
Rough machining is aimed at shaping a workpiece towards its final form. This process takes up a large proportion of the machining time because of the bulk material removal involved, which may affect the total machining time. For certain models, rough machining has limitations, especially on surfaces such as turbine blades and impellers. CBV evaluation is one of the concepts used to detect areas admissible in the machining process. While previous research detected the CBV area using a pair of normal vectors, in this research we simplified the process by detecting the CBV area with a slicing line for each point cloud formed. The simulation resulted in three steps for this method: 1. triangulation from the CAD design model, 2. development of CC points from the point cloud, 3. the slicing-line method used to evaluate each point-cloud position (under CBV and outside CBV). The result of this evaluation method can be used as a tool for orientation set-up of each CC point position in feasible areas for rough machining.
Minato, N; Itoh, T
1992-12-01
Applying the technology of direct imaging by fiberoptic cardioscopy, the physiologic and pathophysiologic motions of the tricuspid valve anulus were studied in 10 anesthetized normal dogs (control group) and in 9 dogs that had chronic tricuspid regurgitation (TR group). The heart was perfused with transparent modified Tyrode's solution by the working heart method, and the anuli, outlined by sutured beads, were observed and recorded on a high-speed video system in real time. Tricuspid valve annular area was calculated at 14 points during the cardiac cycle. The control group was studied in the normal condition, and the tricuspid regurgitation group was studied during four interventions: a nontricuspid annuloplasty group and three tricuspid annuloplasty groups in which the tricuspid valve annular area was reduced to 80%, 65%, and 50% of that of the non-tricuspid annuloplasty group by De Vega's procedure. Tricuspid valve annular area in the control group increased by 7% during atrial systole and was reduced by 34% mainly during ventricular systole, in which the free wall annular area and the septal annular area narrowed by an equal 34%. Chronic tricuspid regurgitation lessened tricuspid valve annular area narrowing to 20% in percent reduction (p < 0.01). In the TR group the decrease in tricuspid valve annular area narrowing was attributed mainly to lessened narrowing of the free wall anulus (percent reduction of tricuspid valve annular area, 19%; p < 0.01). The amplitudes of tricuspid valve annular area narrowing were unchanged in the tricuspid annuloplasty groups even when the tricuspid valve annular area was reduced to 50% by De Vega's tricuspid annuloplasty (percent reduction of tricuspid valve annular area, 16%; not significant). These findings suggest that De Vega's tricuspid annuloplasty is a reasonable method that preserves the physiologic annular motions in the opening and closing mechanism of the tricuspid valve.
Balthazar, Marcio L.F.; Yasuda, Clarissa L.; Lopes, Tátila M.; Pereira, Fabrício R.S.; Damasceno, Benito Pereira; Cendes, Fernando
2011-01-01
Neuroanatomical correlations of naming and lexical-semantic memory are not yet fully understood. The most influential approaches share the view that semantic representations reflect the manner in which information has been acquired through perception and action, and that each brain area processes different modalities of semantic representations. Despite these anatomical differences in semantic processing, generalization across different features that have similar semantic significance is one of the main characteristics of human cognition. Methods: We evaluated the brain regions related to naming, and to semantic generalization, of visually presented drawings of objects from the Boston Naming Test (BNT), which comprises different categories, such as animals, vegetables, tools, food, and furniture. In order to create a model of the lesion method, a sample of 48 subjects presenting a continuous decline both in cognitive functions, including naming skills, and in grey matter density (GMD), comprising normal aging, amnestic mild cognitive impairment (aMCI) and mild Alzheimer's disease (AD), was compared with normal young adults. Semantic errors on the BNT, as well as naming performance, were correlated with whole-brain GMD as measured by voxel-based morphometry (VBM). Results: The areas most strongly related to naming and to semantic errors were the medial temporal structures, thalami, superior and inferior temporal gyri, especially their anterior parts, as well as prefrontal cortices (inferior and superior frontal gyri). Conclusion: The possible role of each of these areas in the lexical-semantic networks is discussed, along with their contribution to models of semantic memory organization. PMID:29213726
Izumori, Ayumi; Horii, Rie; Akiyama, Futoshi; Iwase, Takuji
2013-01-01
With the recent advances in modalities for diagnostic imaging of the breast, it is now essential to detect isoechoic masses and small nonmass lesions, to which little attention has so far been paid in breast ultrasound (US). This becomes possible with an observation method grounded in an understanding of normal breast structural images and anatomy. We elucidated the detailed histological architecture of the normal breast, information indispensable for diagnostic US of the breast. Verification of the above hypotheses was carried out using the breasts of 11 patients who underwent total mastectomy at our clinic. The structures isoechoic with fat are the lobules, all ducts, and the surrounding stroma that supports the ducts; the intervening hyperechoic areas are edematous stroma and fat-containing stroma that support the breast. By taking an isoechoic structure that reflects the course of the ducts as the basic structure for observation, the boundary between the lobes can be inferred. Detection of deviations from the normal structure using this method for interpreting three-dimensional ultrasound images of mammary lobes is a radically new approach for diagnosing breast cancer. This technique is very simple and amenable to standardization once one understands the underlying theory. Furthermore, it is useful as a screening method as well as for easy detection of faint minute lesions that can only be detected by magnetic resonance imaging or second-look targeted US.
Chen, Xuexia; Vogelmann, James E.; Chander, Gyanesh; Ji, Lei; Tolk, Brian; Huang, Chengquan; Rollins, Matthew
2013-01-01
Routine acquisition of Landsat 5 Thematic Mapper (TM) data was discontinued recently, and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) has an ongoing problem with the scan line corrector (SLC), creating spatial gaps in the images it acquires. Since temporal and spatial discontinuities of Landsat data are now imminent, it is important to investigate other potential satellite data that can be used to replace Landsat data. We thus cross-compared two near-simultaneous images obtained from Landsat 5 TM and the Indian Remote Sensing (IRS)-P6 Advanced Wide Field Sensor (AWiFS), both captured on 29 May 2007 over Los Angeles, CA. TM and AWiFS reflectances were compared for the green, red, near-infrared (NIR), and shortwave infrared (SWIR) bands, as well as the normalized difference vegetation index (NDVI), based on manually selected polygons in homogeneous areas. All R² values of the linear regressions were higher than 0.99. The temporally invariant cluster (TIC) method was used to calculate the NDVI correlation between the TM and AWiFS images. The NDVI regression line derived from the selected polygons passed through several invariant cluster centres of the TIC density maps, demonstrating that both the scene-dependent polygon regression method and the TIC method can generate accurate radiometric normalization. A scene-independent normalization method was also used to normalize the AWiFS data. Image agreement assessment demonstrated that the scene-dependent normalization using homogeneous polygons provided slightly higher accuracy values than those obtained by the scene-independent method. Finally, the non-normalized and relatively normalized 'Landsat-like' AWiFS 2007 images were integrated into 1984 to 2010 Landsat time-series stacks (LTSS) for disturbance detection using the Vegetation Change Tracker (VCT) model. Both the scene-dependent and scene-independent normalized AWiFS data sets could generate disturbance maps similar to those generated using the LTSS data set, and their kappa coefficients were higher than 0.97. These results indicate that AWiFS can be used instead of Landsat data to detect multitemporal disturbance in the event of Landsat data discontinuity.
Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.
Feng, Bing; Zeng, Gengsheng L
2014-04-10
A challenge for the pixelated detector is that the detector response to a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy of the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting a point-source to the centers of the sub-pixels inside each crystal pitch area. For each line, ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by the NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO₂) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need to calculate the ideal flood for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement) and the ideal flood image for the point-source at the pinhole focal point, together with the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced the reconstruction artifacts.
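The stated assumption that only crystal attenuation contributes to the ideal flood reduces, per ray, to a Beer-Lambert detection probability. The sketch below illustrates this step only; the attenuation coefficient and the array-based interface are ours, not the paper's.

```python
import numpy as np

def ideal_flood_weights(entry_pts, exit_pts, mu=2.6):
    """Beer-Lambert detection probability per ray through the crystal.
    entry_pts/exit_pts: (N, 3) arrays from a ray-tracer (assumed inputs);
    mu (cm^-1) is an illustrative attenuation coefficient, not the paper's."""
    path_cm = np.linalg.norm(exit_pts - entry_pts, axis=1)
    # probability that the photon interacts somewhere along the crystal path
    return 1.0 - np.exp(-mu * path_cm)
```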
20 CFR 655.103 - Overview of this subpart and definition of terms.
Code of Federal Regulations, 2014 CFR
2014-04-01
... effect wage rate (AEWR). The annual weighted average hourly wage for field and livestock workers.... 1188. Area of intended employment. The geographic area within normal commuting distance of the place of... constitutes a normal commuting distance or normal commuting area, because there may be widely varying factual...
20 CFR 655.103 - Overview of this subpart and definition of terms.
Code of Federal Regulations, 2013 CFR
2013-04-01
... effect wage rate (AEWR). The annual weighted average hourly wage for field and livestock workers.... 1188. Area of intended employment. The geographic area within normal commuting distance of the place of... constitutes a normal commuting distance or normal commuting area, because there may be widely varying factual...
20 CFR 655.103 - Overview of this subpart and definition of terms.
Code of Federal Regulations, 2012 CFR
2012-04-01
... effect wage rate (AEWR). The annual weighted average hourly wage for field and livestock workers.... 1188. Area of intended employment. The geographic area within normal commuting distance of the place of... constitutes a normal commuting distance or normal commuting area, because there may be widely varying factual...
Code of Federal Regulations, 2013 CFR
2013-07-01
... pursuant to 5 U.S.C. 3105. Adverse effect wage rate (AEWR). The annual weighted average hourly wage for... worker subject to 8 U.S.C. 1188. Area of intended employment. The geographic area within normal commuting... measure of distance that constitutes a normal commuting distance or normal commuting area, because there...
Code of Federal Regulations, 2011 CFR
2011-07-01
... pursuant to 5 U.S.C. 3105. Adverse effect wage rate (AEWR). The annual weighted average hourly wage for... worker subject to 8 U.S.C. 1188. Area of intended employment. The geographic area within normal commuting... measure of distance that constitutes a normal commuting distance or normal commuting area, because there...
Code of Federal Regulations, 2014 CFR
2014-07-01
... pursuant to 5 U.S.C. 3105. Adverse effect wage rate (AEWR). The annual weighted average hourly wage for... worker subject to 8 U.S.C. 1188. Area of intended employment. The geographic area within normal commuting... measure of distance that constitutes a normal commuting distance or normal commuting area, because there...
Code of Federal Regulations, 2012 CFR
2012-07-01
... pursuant to 5 U.S.C. 3105. Adverse effect wage rate (AEWR). The annual weighted average hourly wage for... worker subject to 8 U.S.C. 1188. Area of intended employment. The geographic area within normal commuting... measure of distance that constitutes a normal commuting distance or normal commuting area, because there...
NASA Astrophysics Data System (ADS)
Heid, T.; Kääb, A.
2011-12-01
Automatic matching of images from two different times is a method that is often used to derive glacier surface velocity. Nearly global repeat coverage of the Earth's surface by optical satellite sensors now opens the possibility for global-scale mapping and monitoring of glacier flow, with a number of applications in, for example, glacier physics, glacier-related climate change and impact assessment, and glacier hazard management. The purpose of this study is to compare and evaluate different existing image matching methods for glacier flow determination over large scales. The study compares six different matching methods: normalized cross-correlation (NCC), the phase correlation algorithm used in the COSI-Corr software, and four other Fourier methods with different normalizations. We compare the methods over five regions of the world with different representative glacier characteristics: Karakoram, the European Alps, Alaska, Pine Island (Antarctica) and southwest Greenland. Landsat images are chosen for matching because they extend back to 1972, they cover large areas, and at the same time their spatial resolution is as good as 15 m for images after 1999 (ETM+ pan). Cross-correlation on orientation images (CCF-O) outperforms the three similar Fourier methods, both in areas with high and low visual contrast. NCC experiences problems in areas with low visual contrast, areas with thin clouds or changing snow conditions between the images. CCF-O has problems on narrow outlet glaciers where small window sizes (about 16 pixels by 16 pixels or smaller) are needed, and it also obtains fewer correct matches than COSI-Corr in areas with low visual contrast. COSI-Corr has problems on narrow outlet glaciers, and it obtains fewer correct matches compared to CCF-O when thin clouds cover the surface or if one of the images contains snow dunes. In total, we consider CCF-O and COSI-Corr to be the two most robust matching methods for global-scale mapping and monitoring of glacier velocities. By combining CCF-O with locally adaptive template sizes and by filtering the matching results automatically through comparison of the displacement matrix to its low-pass filtered version, the matching process can be automated to a large degree. This allows the derivation of glacier velocities with minimal (but not without!) user interaction and hence also opens up the possibility of global-scale mapping and monitoring of glacier flow.
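A bare-bones sketch of cross-correlation on orientation images follows: gradients are turned into unit-magnitude complex orientations and correlated via the FFT. This is our simplified reading of CCF-O; the published method's details (windowing, subpixel peak fitting) are omitted.

```python
import numpy as np

def orientation_image(img):
    """Unit-magnitude complex gradient orientation (our reading of CCF-O's
    orientation images; details differ from the published method)."""
    gy, gx = np.gradient(img.astype(float))
    g = gx + 1j * gy
    mag = np.abs(g)
    return np.where(mag > 0, g / np.maximum(mag, 1e-12), 0)

def match_displacement(ref, tgt):
    """Cross-correlate two same-sized orientation patches via FFT and
    return the integer displacement of the correlation peak."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(tgt))
    cc = np.abs(np.fft.ifft2(F))
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    # wrap circular-shift indices to signed offsets
    dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
    dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
    return dy, dx
```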
Hou, Fang; Huang, Chang-Bing; Lesmes, Luis; Feng, Li-Xia; Tao, Liming; Zhou, Yi-Feng; Lu, Zhong-Lin
2010-01-01
Purpose. The qCSF method is a novel procedure for rapid measurement of spatial contrast sensitivity functions (CSFs). It combines Bayesian adaptive inference with a trial-to-trial information gain strategy, to directly estimate four parameters defining the observer's CSF. In the present study, the suitability of the qCSF method for clinical application was examined. Methods. The qCSF method was applied to rapidly assess spatial CSFs in 10 normal and 8 amblyopic participants. The qCSF was evaluated for accuracy, precision, test–retest reliability, suitability of CSF model assumptions, and accuracy of amblyopia screening. Results. qCSF estimates obtained with as few as 50 trials matched those obtained with 300 Ψ trials. The precision of qCSF estimates obtained with 120 and 130 trials, in normal subjects and amblyopes, matched the precision of 300 Ψ trials. For both groups and both methods, test–retest sensitivity estimates were well matched (all R > 0.94). The qCSF model assumptions were valid for 8 of 10 normal participants and all amblyopic participants. Measures of the area under log CSF (AULCSF) and the cutoff spatial frequency (cutSF) were lower in the amblyopia group; these differences were captured within 50 qCSF trials. Amblyopia was detected at an approximately 80% correct rate in 50 trials, when a logistic regression model was used with AULCSF and cutSF as predictors. Conclusions. The qCSF method is sufficiently rapid, accurate, and precise in measuring CSFs in normal and amblyopic persons. It has great potential for clinical practice. PMID:20484592
A method for fast automated microscope image stitching.
Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong
2013-05-01
Image stitching is an important technology for producing a panorama or larger image by combining several images with overlapping areas. In many biomedical researches, image stitching is highly desirable for acquiring a panoramic image representing large areas of certain structures or whole sections, while retaining microscopic resolution. In this study, we develop a fast normal-light microscope image stitching algorithm based on feature extraction. First, an algorithm of scale-space reconstruction of speeded-up robust features (SURF) was proposed to extract features from the images to be stitched in a short time and with higher repeatability. Second, the histogram equalization (HE) method was employed to preprocess the images to enhance their contrast for extracting more features. Third, the rough overlapping zones of the preprocessed images were calculated by phase correlation, and the improved SURF was used to extract the image features in the rough overlapping areas. Fourth, the features were matched and the transformation parameters estimated, and then the images were blended seamlessly. Finally, this procedure was applied to stitch normal-light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation and zoom of the images and that our method is able to stitch microscope images automatically with high precision and high speed. Also, the method proposed in this paper is applicable to the registration and stitching of common images, as well as to stitching microscope images in the field of virtual microscopy for the purposes of observing, exchanging, and saving images and establishing a database of microscope images. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Anderson, D.; Andrais, B.; Mirzayans, R.; Siegbahn, E. A.; Fallone, B. G.; Warkentin, B.
2013-06-01
Microbeam radiation therapy (MRT) delivers single fractions of very high doses of synchrotron x-rays using arrays of microbeams. In animal experiments, MRT has achieved higher tumour control and less normal tissue toxicity compared to single-fraction broad-beam irradiations of much lower dose. The mechanism behind the normal tissue sparing of MRT has yet to be fully explained. An accurate method for evaluating DNA damage, such as the γ-H2AX immunofluorescence assay, will be important for understanding the role of cellular communication in the radiobiological response of normal and cancerous cell types to MRT. We compare two methods of quantifying γ-H2AX nuclear fluorescence for uniformly irradiated cell cultures: manual counting of γ-H2AX foci by eye, and an automated, MATLAB-based fluorescence intensity measurement. We also demonstrate the automated analysis of cell cultures irradiated with an array of microbeams. In addition to offering a relatively high dynamic range of γ-H2AX signal versus irradiation dose (>10 Gy), our automated method provides speed, robustness, and objectivity when examining a series of images. Our in-house analysis facilitates the automated extraction of the spatial distribution of the γ-H2AX intensity with respect to the microbeam array, for example the intensities in the peak (high-dose area) and valley (area between two microbeams) regions. The automated analysis is particularly beneficial when processing a large number of samples, as is needed to systematically study the relationship between the numerous dosimetric and geometric parameters involved with MRT (e.g., microbeam width, microbeam spacing, microbeam array dimensions, peak dose, valley dose, and geometric arrangement of multiple arrays) and the resulting DNA damage.
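A minimal sketch of the kind of automated per-nucleus fluorescence-intensity measurement described (the authors' tool is MATLAB-based; this Python analogue assumes a DAPI channel for nuclear segmentation and a user-supplied threshold):

```python
import numpy as np
from scipy import ndimage

def nuclear_gamma_h2ax_intensity(dapi, gh2ax, nucleus_thresh):
    """Mean gamma-H2AX fluorescence per nucleus (illustrative sketch).

    Segment nuclei on the DAPI channel, then integrate gamma-H2AX
    signal inside each nucleus. `nucleus_thresh` is an assumed,
    user-supplied segmentation threshold.
    """
    mask = dapi > nucleus_thresh
    labels, n = ndimage.label(mask)            # one label per nucleus
    # Background-subtract using the median signal outside nuclei.
    background = np.median(gh2ax[~mask])
    signal = np.clip(gh2ax.astype(float) - background, 0, None)
    # Mean background-corrected intensity per labelled nucleus.
    return ndimage.mean(signal, labels, index=np.arange(1, n + 1))
```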
Method for machining holes in composite materials
NASA Technical Reports Server (NTRS)
Daniels, Julia G. (Inventor); Ledbetter, Frank E., III (Inventor); Clemons, Johnny M. (Inventor); Penn, Benjamin G. (Inventor); White, William T. (Inventor)
1987-01-01
A method for boring well defined holes in a composite material such as graphite/epoxy is discussed. A slurry of silicon carbide powder and water is projected onto a work area of the composite material in which a hole is to be bored with a conventional drill bit. The silicon carbide powder and water slurry allow the drill bit, while experiencing only normal wear, to bore smooth, cylindrical holes in the composite material.
Arai, Toshio; Akao, Nobuaki; Seki, Takenori; Kumagai, Takashi; Ishikawa, Hirofumi; Ohta, Nobuo; Hirata, Nobuto; Nakaji, So; Yamauchi, Kenji; Hirai, Mitsuru; Shiratori, Toshiyasu; Kobayashi, Masayoshi; Fujii, Hiroyuki; Ishii, Eiji; Naito, Mikio; Saitoh, Shin-ichi; Yamaguchi, Toshikazu; Shibata, Nobumitsu; Shimo, Masamune; Tokiwa, Toshihiro
2014-01-01
Background Anisakiasis is a parasitic disease caused primarily by Anisakis spp. larvae in Asia and in Western countries. The aim of this study was to investigate the genotype of Anisakis larvae endoscopically removed from Middle Eastern Japanese patients and to determine whether mucosal atrophy affects the risk of penetration in gastric anisakiasis. Methods In this study, 57 larvae collected from 44 patients with anisakiasis (42 gastric and 2 colonic anisakiasis) were analyzed retrospectively. Genotyping was confirmed by restriction fragment length polymorphism (RFLP) analysis of ITS regions and by sequencing the mitochondrial small subunit (SSU) region. In the cases of gastric anisakiasis, correlation analyses were conducted between the frequency of larval penetration in normal/atrophic areas and the manifestation of clinical symptoms. Results Nearly all larvae were A. simplex sensu stricto (s.s.) (99%), and one larva displayed a hybrid genotype. The A. simplex larvae penetrated normal mucosa more frequently than atrophic areas (p = 0.005). Finally, patients with normal mucosa infection were more likely to exhibit clinical symptoms than those with atrophic mucosa infection (odds ratio, 6.96; 95% confidence interval, 1.52–31.8). Conclusions In Japan, A. simplex s.s. is the main etiological agent of human anisakiasis and tends to penetrate normal gastric mucosa. Careful endoscopic examination of normal gastric mucosa, particularly in the greater curvature of the stomach, will improve the detection of Anisakis larvae. PMID:24586583
NASA Astrophysics Data System (ADS)
Arinilhaq; Widita, R.
2016-03-01
Diagnosis of macular degeneration using a Stratus OCT with the fast macular thickness map (FMTM) method produces six B-scan images of the macula from different angles. The images are converted into a retinal thickness chart and evaluated against normal-distribution percentiles so that the macula can be classified as having normal thickness or as exhibiting an abnormality (e.g., thickening or thinning). Unfortunately, the diagnostic images represent the retinal thickness in only a few areas of the macular region. Thus, this study aims to obtain the entire retinal thickness in the macular area from the Stratus OCT's output images. Basically, the volumetric image is obtained by combining the six images. Reconstruction consists of a series of processes such as pre-processing, segmentation, and interpolation. Linear interpolation is used to fill the empty pixels in the reconstruction matrix. Based on the results, this method is able to provide retinal thickness maps of the macular surface and a 3D image of the macula. The retinal thickness map can display the macular areas that exhibit abnormalities, and the 3D image can show the abnormal tissue layers in the macula. The system built cannot replace an ophthalmologist in diagnostic decision making.
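The interpolation step can be illustrated with a short sketch, assuming the reconstruction matrix marks empty pixels as NaN and the six segmented B-scan traces supply the known values:

```python
import numpy as np
from scipy.interpolate import griddata

def fill_thickness_map(matrix):
    """Fill empty pixels of a sparse retinal-thickness matrix.

    Minimal sketch of the linear-interpolation step: known pixels come
    from the six segmented B-scan traces; NaNs mark empty pixels.
    """
    rows, cols = np.indices(matrix.shape)
    known = ~np.isnan(matrix)
    return griddata(
        points=np.column_stack([rows[known], cols[known]]),
        values=matrix[known],
        xi=(rows, cols),
        method="linear",   # linear interpolation, as in the paper
    )
```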
Remote Sensing Monitoring of Changes in Soil Salinity: A Case Study in Inner Mongolia, China.
Wu, Jingwei; Vincent, Bernard; Yang, Jinzhong; Bouarfa, Sami; Vidal, Alain
2008-11-07
This study used archived remote sensing images to depict the history of changes in soil salinity in the Hetao Irrigation District in Inner Mongolia, China, with the purpose of linking these changes with land and water management practices and drawing lessons for salinity control. Most data came from LANDSAT satellite images taken in 1973, 1977, 1988, 1991, 1996, 2001, and 2006. In these years, salt-affected areas were detected using a normal supervised classification method. Corresponding cropped areas were detected from NDVI (Normalized Difference Vegetation Index) values using an unsupervised method. Field samples and agricultural statistics were used to estimate the accuracy of the classification. Historical data concerning irrigation/drainage and the groundwater table were used to analyze the relation between changes in soil salinity and land and water management practices. Results showed that: (1) the overall accuracy of remote sensing in detecting soil salinity was 90.2%, and in detecting cropped area, 98%; (2) the installation/innovation of the drainage system did help to control salinity; and (3) a low ratio of cropped land helped control salinity in the Hetao Irrigation District. These findings suggest that remote sensing is a useful tool for detecting soil salinity and has potential in evaluating and improving land and water management practices.
Chen, X.; Vierling, Lee; Deering, D.
2005-01-01
Satellite data offer unrivaled utility in monitoring and quantifying large scale land cover change over time. Radiometric consistency among collocated multi-temporal imagery is difficult to maintain, however, due to variations in sensor characteristics, atmospheric conditions, solar angle, and sensor view angle that can obscure surface change detection. To detect accurate landscape change using multi-temporal images, we developed a variation of the pseudoinvariant feature (PIF) normalization scheme: the temporally invariant cluster (TIC) method. Image data were acquired on June 9, 1990 (Landsat 4), June 20, 2000 (Landsat 7), and August 26, 2001 (Landsat 7) to analyze boreal forests near the Siberian city of Krasnoyarsk using the normalized difference vegetation index (NDVI), enhanced vegetation index (EVI), and reduced simple ratio (RSR). The temporally invariant cluster (TIC) centers were identified via a point density map of collocated pixel VIs from the base image and the target image, and a normalization regression line was created to intersect all TIC centers. Target image VI values were then recalculated using the regression function so that these two images could be compared using the resulting common radiometric scale. We found that EVI was very indicative of vegetation structure because of its sensitivity to shadowing effects and could thus be used to separate conifer forests from deciduous forests and grass/crop lands. Conversely, because NDVI reduced the radiometric influence of shadow, it did not allow for distinctions among these vegetation types. After normalization, correlations of NDVI and EVI with forest leaf area index (LAI) field measurements combined for 2000 and 2001 were significantly improved; the r² values in these regressions rose from 0.49 to 0.69 and from 0.46 to 0.61, respectively. An EVI "cancellation effect" where EVI was positively related to understory greenness but negatively related to forest canopy coverage was evident across a post fire chronosequence with normalized data. These findings indicate that the TIC method provides a simple, effective and repeatable method to create radiometrically comparable data sets for remote detection of landscape change. Compared to some previous relative radiometric normalization methods, this new method does not require high level programming and statistical skills, yet remains sensitive to landscape changes occurring over seasonal and inter-annual time scales. In addition, the TIC method maintains sensitivity to subtle changes in vegetation phenology and enables normalization even when invariant features are rare. While this normalization method allowed detection of a range of land use, land cover, and phenological/biophysical changes in the Siberian boreal forest region studied here, it is necessary to further examine images representing a wide variety of ecoregions to thoroughly evaluate the TIC method against other normalization schemes. © 2005 Elsevier Inc. All rights reserved.
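Once the TIC centers have been located on the point-density map, the normalization itself reduces to a regression applied to the target image; a minimal sketch under that assumption:

```python
import numpy as np

def tic_normalize(target_vi, tic_centers):
    """Radiometrically normalize a target-image VI band to a base image.

    `tic_centers` is an (n, 2) array of temporally invariant cluster
    centers, (target VI, base VI) pairs found on the point-density map;
    locating those centers is assumed to have been done beforehand.
    """
    t, b = tic_centers[:, 0], tic_centers[:, 1]
    slope, intercept = np.polyfit(t, b, deg=1)   # line through the centers
    return slope * target_vi + intercept         # common radiometric scale
```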
Beattie, Louise; Espie, Colin A; Kyle, Simon D; Biello, Stephany M
2015-06-01
There appears to be some inconsistency in how normal sleepers (controls) are selected and screened for participation in research studies for comparison with insomnia patients. The purpose of the current study is to assess and compare methods of identifying normal sleepers in insomnia studies, with reference to published standards. We systematically reviewed the literature on insomnia patients, which included control subjects. The resulting 37 articles were systematically reviewed with reference to the five criteria for normal sleep specified by Edinger et al. In summary, these criteria are as follows: evidence of sleep disruption, sleep scheduling, general health, substance/medication use, and other sleep disorders. We found sleep diaries, polysomnography (PSG), and clinical screening examinations to be widely used with both control subjects and insomnia participants. However, there are differences between research groups in the precise definitions applied to the components of normal sleep. We found that none of the reviewed studies applied all of the Edinger et al. criteria, and 16% met four criteria. In general, screening is applied most rigorously at the level of a clinical disorder, whether physical, psychiatric, or sleep. While the Edinger et al. criteria seem to be applied in some form by most researchers, there is scope to improve standards and definitions in this area. Ideally, different methods such as sleep diaries and questionnaires would be used concurrently with objective measures to ensure normal sleepers are identified, and descriptive information for control subjects would be reported. Here, we have devised working criteria and methods to be used for the assessment of normal sleepers. This would help clarify the nature of the control group, in contrast to insomnia subjects and other patient groups. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Weighted optimization of irradiance for photodynamic therapy of port wine stains
NASA Astrophysics Data System (ADS)
He, Linhuan; Zhou, Ya; Hu, Xiaoming
2016-10-01
Planning of the irradiance distribution (PID) is one of the foremost factors for on-demand treatment of port wine stains (PWS) with photodynamic therapy (PDT). A weighted optimization method for PID was proposed according to the grading of PWS with a three-dimensional digital illumination instrument. First, the point clouds of the lesions were filtered to remove erroneous or redundant points, triangulation was carried out, and the lesion was divided into small triangular patches. Second, the parameters of each triangular patch needed for optimization, such as area, normal vector, and orthocenter, were calculated, and the weighted coefficients were determined from the erythema indexes and areas of the patches. Third, the initial optimization point was calculated based on the normal vectors and orthocenters to optimize the light direction. Finally, the irradiation was optimized according to the cosines of the irradiance angles and the weighted coefficients. Comparing the irradiance distribution before and after optimization, the proposed weighted optimization method makes the irradiance distribution match the characteristics of the lesions better, and it has the potential to improve therapeutic efficacy.
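A sketch of the per-patch geometry underlying the weighting (all inputs are hypothetical names; patch centroids stand in for the orthocenters the authors use):

```python
import numpy as np

def patch_weights_and_cosines(verts, tris, erythema, light_pos):
    """Per-patch quantities for a weighted irradiance optimization (sketch).

    Assumed inputs: `verts` (n, 3) vertices, `tris` (m, 3) vertex indices,
    `erythema` (m,) erythema index per patch, and a candidate light
    position. Weights follow the paper's idea of combining erythema
    index and patch area.
    """
    a, b, c = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    cross = np.cross(b - a, c - a)
    area = 0.5 * np.linalg.norm(cross, axis=1)
    normal = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    center = (a + b + c) / 3.0                  # centroid, not orthocenter
    to_light = light_pos - center
    to_light /= np.linalg.norm(to_light, axis=1, keepdims=True)
    cos_angle = np.abs(np.sum(normal * to_light, axis=1))  # irradiance angle
    weight = erythema * area                    # weighted coefficients
    return weight, cos_angle
```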
A Method to Overcome Space Charge at Injection
NASA Astrophysics Data System (ADS)
Derbenev, Ya.
2005-06-01
The transverse space charge forces in a high current, low energy beam can be reduced by means of a large increase of the beam's transverse sizes while maintaining the beam area in the 4D phase space. This can be achieved by transforming the beam area in phase space of each of the two normal 2D transverse (either plane or circular) modes from a spot shape into a narrow ring of large amplitude, but homogeneous in phase. Such a transformation results from the beam evolution in the island of a dipole resonance when the amplitude width of the island shrinks adiabatically. After stacking (by using stripping foils or cooling) the beam in such a state and accelerating it to energies sufficiently high that the space charge becomes insignificant, the beam can then be returned to a normal spot shape by applying the reverse transformation. An arrangement that can provide such beam gymnastics along a transport line after a linac and before a booster and/or in a ring with circulating beam will be described and numerical estimates will be presented. Other potential applications of the method will be briefly discussed.
Almubrad, Turki; Paladini, Iacopo; Mencucci, Rita
2013-01-01
Purpose To investigate the effects of collagen cross-linking on the ultrastructural organization of the corneal stroma in the human keratoconus (KC) cornea. Methods Three normal, three keratoconus (KC1, KC2, KC3), and three cross-linked keratoconus (CXL1, CXL2, CXL3) corneas were analyzed. The KC corneas were treated with the riboflavin-ultraviolet A (UVA) cross-linking (CXL) method described by Wollensak et al. Penetrating keratoplasty (PKP) was performed 6 months after treatment. All samples were processed for electron microscopy. Results The riboflavin-UVA-treated CXL corneal stroma showed interlacing lamellae in the anterior stroma followed by well-organized, parallel-running lamellae. The lamellae contained uniformly distributed collagen fibrils (CFs) decorated with normal proteoglycans (PGs). The CF diameter and interfibrillar spacing in the CXL cornea were significantly increased compared to those in the KC cornea. The PG area in the CXL corneas was significantly smaller than that in the KC cornea. The epithelium and Bowman’s layer were also normal. On rare occasions, a thick basement membrane and collagenous pannus were also observed. Conclusions Corneal cross-linking leads to modifications of the corneal stroma. The KC corneal structure showed modifications in CF diameter, interfibrillar spacing, and PG area. This resulted in a more uniform distribution of collagen fibrils, a key feature for corneal transparency. PMID:23878503
Method to fabricate a tilted logpile photonic crystal
Williams, John D.; Sweatt, William C.
2010-10-26
A method to fabricate a tilted logpile photonic crystal requires only two lithographic exposures and does not require mask repositioning between exposures. The mask and photoresist-coated substrate are spaced a fixed and constant distance apart using a spacer, and the stack is clamped together. The stack is then tilted at a crystallographic symmetry angle (e.g., 45°) relative to the X-ray beam and rotated about the surface normal until the mask is aligned with the X-ray beam. The stack is then rotated in plane by a small stitching angle and exposed to the X-ray beam to pattern the first half of the structure. The stack is then rotated by 180° about the normal and a second exposure patterns the remaining half of the structure. The method can use commercially available DXRL scanner technology and LIGA processes to fabricate large-area, high-quality tilted logpile photonic crystals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shimosegawa, E.; Miura, S.; Murakami, M.
1994-05-01
On the basis of a previous validation of the kinetic two-compartment model and the determination of normal values of three parameters (k1: influx rate constant; k2: outflux rate constant; Vd: distribution volume), PET measurements of in vivo amino acid transport from blood to brain using L-(2-18F)-fluorophenylalanine (18F-Phe) were performed in patients with cerebral infarction. The purposes of this study were to evaluate the alteration of amino acid transport in the subacute and chronic stages of cerebral infarction and to compare it with cerebral blood flow (CBF) and oxygen metabolism. Dynamic 18F-Phe PET studies of 50 minutes were performed in 7 patients with cerebral infarction. The input function was obtained from 27 points of arterial sampling. In all patients, measurements of CBF, cerebral blood volume (CBV), cerebral metabolic rate of oxygen (CMRO2), and oxygen extraction fraction (OEF) were made on the same day as the 18F-Phe PET measurement. Each patient was studied twice, within 2 weeks of onset and 3 months later. A weighted integration technique with a table look-up method was applied for the reconstruction of parametric images of 18F-Phe and ROI analysis of k1, k2, and Vd. In the subacute stage, a significant reduction of the k2 value in the infarct area was observed compared to that in the peri-infarct area (p<0.05) and in normal cortices (p<0.001). The k1 value in this stage was only slightly decreased in the infarct area; therefore, the Vd value in the infarct area increased significantly compared to normal cortices (p<0.001). In the chronic stage, both k1 and k2 values in the infarct area were significantly lower than those in normal cortices (p<0.001), and the corresponding Vd value returned to the normal level. No correlation between the kinetic parameters of 18F-Phe and CBF or oxygen metabolism was observed in either the subacute or chronic stage of infarction.
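The named parameters fit the standard one-tissue (two-compartment) kinetic model; on that hedged reading, with C_p the arterial plasma input and C_t the tissue concentration:

```latex
\frac{dC_t(t)}{dt} = k_1\, C_p(t) - k_2\, C_t(t), \qquad V_d = \frac{k_1}{k_2}
```

On this reading, a reduced k2 with near-normal k1 necessarily raises Vd = k1/k2, which is consistent with the reported subacute-stage findings.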
Broadband waveform inversion of moderate earthquakes in the Messina Straits, southern Italy
NASA Astrophysics Data System (ADS)
D'Amico, Sebastiano; Orecchio, Barbara; Presti, Debora; Zhu, Lupei; Herrmann, Robert B.; Neri, Giancarlo
2010-04-01
We report the first application of the Cut and Paste (CAP) method to compute earthquake focal mechanisms in the Messina Straits area by waveform inversion of Pnl and surface wave segments. This application of CAP has furnished new knowledge about low-magnitude earthquake mechanics that will be useful for improved understanding of the local geodynamics. This is possible because the CAP inversion technique can be applied to small earthquakes, for which traditional moment tensor inversion methods are not appropriate and P-onset focal mechanisms in the study area fail because of a lack of sufficient observations. We estimate the focal mechanisms of 23 earthquakes with local magnitudes in the range of 3-4 occurring in the 2004-2008 time period, and recorded by the broadband stations of the Italian National Seismic Network and the Mediterranean Very Broadband Seismographic Network (MedNet) run by the Istituto Nazionale di Geofisica e Vulcanologia (INGV). The solutions show that normal faulting is the prevailing style of seismic deformation in the northern part of the study area while co-existence of normal faulting and strike-slip has been detected in the southern part. In the whole area of investigation the T-axes of focal mechanisms display a preferential northwest-southeast direction of extension. Combined with the findings of previous investigations, this improved database of focal mechanisms allows us to better detail the transitional area between the extensional domain related to subduction trench retreat (southern Calabria) and the compressional one associated with continental collision (western-central Sicily). The observed spatial change of seismic deformation regime offers new data to current seismotectonic and seismic hazard investigations in the area of Messina Straits where a magnitude 7.2 earthquake caused more than 60,000 casualties on 28 December 1908.
NASA Technical Reports Server (NTRS)
Niederhaus, Charles E.; Miller, Fletcher J.
2008-01-01
The missions envisioned under the Vision for Space Exploration will require development of new methods to handle crew medical care. Medications and intravenous (IV) fluids have been identified as one area needing development. Storing certain medications and solutions as powders or concentrates can both increase the shelf life and reduce the overall mass and volume of medical supplies. The powders or concentrates would then be mixed in an IV bag with Sterile Water for Injection produced in situ from the potable water supply. Fluid handling in microgravity is different than terrestrial settings, and requires special consideration in the design of equipment. This document describes the analyses and down-select activities used to identify the IV mixing method to be developed that is suitable for ISS and exploration missions. The chosen method is compatible with both normal gravity and microgravity, maintains sterility of the solution, and has low mass and power requirements. The method will undergo further development, including reduced gravity aircraft experiments and computations, in order to fully develop the mixing method and associated operational parameters.
Interior near-field acoustical holography in flight.
Williams, E G; Houston, B H; Herdic, P C; Raveendra, S T; Gardner, B
2000-10-01
In this paper boundary element methods (BEM) are mated with near-field acoustical holography (NAH) in order to determine the normal velocity over a large area of a fuselage of a turboprop airplane from a measurement of the pressure (hologram) on a concentric surface in the interior of the aircraft. This work represents the first time NAH has been applied in situ, in-flight. The normal fuselage velocity was successfully reconstructed at the blade passage frequency (BPF) of the propeller and its first two harmonics. This reconstructed velocity reveals structure-borne and airborne sound-transmission paths from the engine to the interior space.
20 CFR 656.3 - Definitions, for purposes of this part, of terms used in this part.
Code of Federal Regulations, 2014 CFR
2014-04-01
... normal commuting distance of the place (address) of intended employment. There is no rigid measure of distance which constitutes a normal commuting distance or normal commuting area, because there may be widely varying factual circumstances among different areas (e.g., normal commuting distances might be 20...
20 CFR 656.3 - Definitions, for purposes of this part, of terms used in this part.
Code of Federal Regulations, 2012 CFR
2012-04-01
... normal commuting distance of the place (address) of intended employment. There is no rigid measure of distance which constitutes a normal commuting distance or normal commuting area, because there may be widely varying factual circumstances among different areas (e.g., normal commuting distances might be 20...
20 CFR 656.3 - Definitions, for purposes of this part, of terms used in this part.
Code of Federal Regulations, 2013 CFR
2013-04-01
... normal commuting distance of the place (address) of intended employment. There is no rigid measure of distance which constitutes a normal commuting distance or normal commuting area, because there may be widely varying factual circumstances among different areas (e.g., normal commuting distances might be 20...
Maskless micro/nanofabrication on GaAs surface by friction-induced selective etching
2014-01-01
In the present study, a friction-induced selective etching method was developed to produce nanostructures on GaAs surfaces. Without any resist mask, nanofabrication can be achieved by scratching and post-etching in sulfuric acid solution. The effects of the applied normal load and etching period on the formation of the nanostructures were studied. Results showed that the height of the nanostructures increased with the normal load and the etching period. XPS and Raman measurements demonstrated that residual compressive stress and lattice densification were probably the main reasons for the selective etching, which eventually led to nanostructures protruding from the scratched area on the GaAs surface. Using a homemade multi-probe instrument, the capability of this fabrication method was demonstrated by producing various nanostructures on the GaAs surface, such as linear arrays, intersecting parallels, surface mesas, and letters. In summary, the proposed approach provides a straightforward and highly maneuverable route to micro/nanofabrication on GaAs surfaces. PMID:24495647
Harada, Ichiro; Kim, Sung-Gon; Cho, Chong Su; Kurosawa, Hisashi; Akaike, Toshihiro
2007-01-01
In this study, a simple combined method consisting of floating and anchored collagen gels in a ligament or tendon equivalent culture system was used to produce oriented fibrils in fibroblast-populated collagen matrices (FPCMs) during the remodeling and contraction of the collagen gel. Orientation of the collagen fibrils along a single axis occurred over the whole area of the floating section, and most of the fibroblasts were elongated and aligned along the oriented collagen fibrils, whereas no significant orientation of fibrils was observed in normally contracted FPCMs prepared by the floating method. Higher elasticity and enhanced mechanical strength were obtained using our simple method compared with normally contracted floating FPCMs. The Young's modulus and the breaking point of the FPCMs were dependent on the initial cell densities. This simple method can be applied as a convenient bioreactor to study cellular processes of fibroblasts in tissues with highly oriented fibrils such as ligaments or tendons. © 2006 Wiley Periodicals, Inc.
Kyoung Jae Kim; Lucarevic, Jennifer; Bennett, Christopher; Gaunaurd, Ignacio; Gailey, Robert; Agrawal, Vibhor
2016-08-01
The quantification of postural sway during the unipedal stance test is one of the essentials of posturography. A shift of the center of pressure (CoP) is an indirect measure of postural sway and also a measure of a person's ability to maintain balance. A widely used method in laboratory settings to calculate the sway of the body center of mass (CoM) is through an ellipse that encloses 95% of the CoP trajectory. The 95% ellipse can be computed under the assumption that the spatial distribution of the CoP points recorded from force platforms is normal. However, to date, this assumption of normality has not been demonstrated for sway measurements recorded from a sacral inertial measurement unit (IMU). This work provides evidence for the non-normality of sway trajectories calculated from a sacral IMU in injured as well as healthy subjects.
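The 95% ellipse referred to here follows, under the bivariate-normal assumption being tested, from the covariance of the CoP points and a chi-square quantile; a standard computation:

```python
import numpy as np
from scipy.stats import chi2

def sway_ellipse_area(cop_xy, coverage=0.95):
    """Area of the ellipse enclosing `coverage` of CoP points.

    Assumes the (n, 2) array `cop_xy` is bivariate normal, which is
    exactly the assumption the paper shows can fail for sacral-IMU data.
    """
    cov = np.cov(cop_xy, rowvar=False)      # 2x2 covariance matrix
    eigvals = np.linalg.eigvalsh(cov)       # principal-axis variances
    k = chi2.ppf(coverage, df=2)            # about 5.991 for 95%
    return np.pi * k * np.sqrt(eigvals[0] * eigvals[1])
```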
Chinese life cycle impact assessment factors.
Yang, J X; Nielsen, P H
2001-04-01
The methodological basis and procedures for the determination of Chinese normalization references and weighting factors according to the EDIP method are described. According to Chinese industrial development intensity and population density, China was divided into three regions, and the normalization references for each region were calculated on the basis of an inventory of all of the region's environmental emissions in 1990. The normalization reference was determined as the total environmental impact potential for the area in question in 1990 (EP(j)90) divided by the population. The weighting factor was determined as the normalization reference (ER(j)90) divided by society's target contribution in the year 2000 based on Chinese political reduction plans, ER(j)T2000. This paper presents and discusses results obtained for eight environmental impact categories relevant for China: global warming, stratospheric ozone depletion, acidification, nutrient enrichment, photochemical ozone formation, and generation of bulk waste, hazardous waste, and slag and ashes.
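A compact restatement of the two definitions in the abstract's own notation (with P the regional population; this is a hedged reading, since the abstract switches between the EP and ER symbols):

```latex
ER_j^{90} = \frac{EP_j^{90}}{P}, \qquad
WF_j = \frac{ER_j^{90}}{ER_j^{T2000}}
```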
Comparison of RNFL thickness and RPE-normalized RNFL attenuation coefficient for glaucoma diagnosis
NASA Astrophysics Data System (ADS)
Vermeer, K. A.; van der Schoot, J.; Lemij, H. G.; de Boer, J. F.
2013-03-01
Recently, a method to determine the retinal nerve fiber layer (RNFL) attenuation coefficient, based on normalization on the retinal pigment epithelium, was introduced. In contrast to conventional RNFL thickness measures, this novel measure represents a scattering property of the RNFL tissue. In this paper, we compare the RNFL thickness and the RNFL attenuation coefficient in 10 normal and 8 glaucomatous eyes by analyzing the correlation coefficient and the receiver operating characteristic (ROC) curves. The thickness and attenuation coefficient showed moderate correlation (r=0.82). Smaller correlation coefficients were found within normal (r=0.55) and glaucomatous (r=0.48) eyes. The full separation between normal and glaucomatous eyes based on the RNFL attenuation coefficient yielded an area under the ROC (AROC) of 1.0. The AROC for the RNFL thickness was 0.9875. No statistically significant difference between the two measures was found by comparing the AROCs. RNFL attenuation coefficients may thus replace current RNFL thickness measurements or be combined with them to improve glaucoma diagnosis.
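The diagnostic comparison reduces to an AROC computation per measure; a minimal sketch (note that both RNFL thickness and the attenuation coefficient decrease in glaucoma, hence the sign flip):

```python
from sklearn.metrics import roc_auc_score

def aroc(values, is_glaucoma, lower_is_diseased=True):
    """Area under the ROC curve for a per-eye measurement.

    `values`: RNFL thickness or attenuation coefficient per eye;
    `is_glaucoma`: 1 for glaucomatous, 0 for normal eyes. Lower values
    indicate disease for both measures, so scores are negated to get
    the convention "higher score = more diseased".
    """
    scores = [-v for v in values] if lower_is_diseased else values
    return roc_auc_score(is_glaucoma, scores)
```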
Keratinocytes in culture accumulate phagocytosed melanosomes in the perinuclear area.
Ando, Hideya; Niki, Yoko; Yoshida, Masaki; Ito, Masaaki; Akiyama, Kaoru; Kim, Jin-Hwa; Yoon, Tae-Jin; Lee, Jeung-Hoon; Matsui, Mary S; Ichihashi, Masamitsu
2010-02-01
There are many techniques for evaluating melanosome transfer to keratinocytes but the spectrophotometric quantification of melanosomes incorporated by keratinocyte phagocytosis has not been previously reported. Here we describe a new method that allows the spectrophotometric visualization of melanosome uptake by normal human keratinocytes in culture. Fontana-Masson staining of keratinocytes incubated with isolated melanosomes showed the accumulation of incorporated melanosomes in the perinuclear areas of keratinocytes within 48 h. Electron microscopic observations of melanosomes ingested by keratinocytes revealed that many phagosomes containing clusters of melanosomes or their fragments were localized in the perinuclear area. A known inhibitor of keratinocyte phagocytosis which inhibits protease-activated receptor-2, i.e., soybean trypsin inhibitor, decreased melanosome uptake by keratinocytes in a dose-dependent manner. These data suggest that our method is a useful model to quantitate keratinocyte phagocytosis of melanosomes visually in vitro.
Pagaiya, Nonglak; Kongkam, Lalitaya; Sriratana, Sanya
2015-03-01
In Thailand, the inequitable distribution of doctors between rural and urban areas has a major impact on access to care for those living in rural communities. The rural medical education programme 'Collaborative Project to Increase Rural Doctors (CPIRD)' was implemented in 1994 with the aim of attracting and retaining rural doctors. This study examined the impact of CPIRD in relation to doctor retention in rural areas and public health service. Baseline data consisting of age, sex and date of entry to the Ministry of Health (MoH) service was collected from 7,157 doctors graduating between 2000 and 2007. There were 1,093 graduates from the CPIRD track and 6,064 that graduated through normal channels. Follow-up data, consisting of workplace, number of years spent in rural districts and years within the MoH service, were retrieved from June 2000 to July 2011. The Kaplan-Meier method of survival analysis and Cox proportional hazards ratios were used to interpret the data. Female subjects slightly outnumbered their male counterparts. Almost half of the normal track (48%) and 33% of the CPIRD doctors eventually left the MoH. The retention rate at rural hospitals was 29% for the CPIRD doctors compared to 18% for those from the normal track. Survival curves indicated a dramatic drop rate after 3 years in service for both groups, but normal track individuals decreased at a faster rate. Multivariate Cox proportional hazards modelling revealed that the normal track doctors had a significantly higher risk of leaving rural areas at about 1.3 times the CPIRD doctors. The predicted median survival time in rural hospitals was 4.2 years for the CPIRD group and 3.4 years for the normal track. The normal track doctors had a significantly higher risk of leaving public service at about 1.5 times the CPIRD doctors. The project evaluation results showed a positive impact in that CPIRD doctors were more likely to stay longer in rural areas and in public service than their counterparts. However, turnover has been increasing in recent years for both groups. There is a need for the MoH to review and improve upon the project implementation.
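A sketch of the survival analysis described, using the lifelines library and hypothetical column names (`years_rural` for follow-up time, `left` for the leaving event, `cpird` for the track indicator):

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

def retention_analysis(df: pd.DataFrame):
    """Kaplan-Meier retention curves and a Cox model for rural service.

    Assumed columns (hypothetical names): `years_rural` (follow-up time
    in years), `left` (1 = left rural service, 0 = censored), and
    `cpird` (1 = CPIRD track, 0 = normal track).
    """
    medians = {}
    for track, grp in df.groupby("cpird"):
        kmf = KaplanMeierFitter()
        kmf.fit(grp["years_rural"], grp["left"], label=f"cpird={track}")
        medians[track] = kmf.median_survival_time_  # cf. 4.2 vs 3.4 years
    cox = CoxPHFitter()
    cox.fit(df[["years_rural", "left", "cpird"]],
            duration_col="years_rural", event_col="left")
    return medians, cox.hazard_ratios_  # cf. the roughly 1.3-1.5 ratios reported
```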
Heavy-metal contamination on training ranges at the Grafenwoehr Training Area, Germany
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zellmer, S.D.; Schneider, J.F.
1993-05-01
Large quantities of lead and other heavy metals are deposited in the environment of weapons ranges during training exercises. This study was conducted to determine the type, degree, and extent of heavy-metal contamination on selected handgun, rifle, and hand-grenade ranges at Grafenwoehr Training Area, Germany. Soil, vegetation, and surface-water samples were collected and analyzed using inductively coupled plasma atomic emission spectroscopy (ICP-AES) and the toxicity characteristic leaching procedure (TCLP). The ICP-AES results show that above-normal levels of lead and copper are present in the surface soil at the handgun range, high concentrations of lead and copper are present in the berm and soil surface at the rifle range, and elevated levels of cadmium and above-normal concentrations of arsenic, copper, and zinc are present in the surface soil at the hand-grenade range. The TCLP results show that surface soils can be considered hazardous waste because of the lead content at the rifle range and the cadmium concentration at the hand-grenade range. Vegetation at the handgun and rifle ranges has above-normal concentrations of lead. At the hand-grenade range, both vegetation and surface water have high levels of cadmium. A hand-held X-ray fluorescence (XRF) spectrum analyzer was used to measure lead concentrations in soils in a field test of the method. Comparison of XRF readings with ICP-AES results for lead indicates that the accuracy and precision of the hand-held XRF unit must improve before it can be used as more than a screening tool. Results of this study show that heavy-metal contamination at all three ranges is limited to the surface soil; heavy metals are not being leached into the soil profile or transported into adjacent areas.
Mayama, Chihiro; Tsutsumi, Tae; Saito, Hitomi; Asaoka, Ryo; Tomidokoro, Atsuo; Iwase, Aiko; Otani, Shinichiro; Miyata, Kazunori; Araie, Makoto
2014-01-01
This study was performed first to investigate the morphological differences in the optic nerve head between highly myopic non-glaucomatous controls and highly myopic glaucomatous eyes, in comparison with the differences between emmetropic non-glaucomatous controls and emmetropic glaucomatous eyes, using confocal scanning laser ophthalmoscopy. Further, the ability of the apparatus to diagnose glaucoma in highly myopic eyes was compared with that in emmetropic eyes. Healthy subjects and age-matched patients with early-stage open-angle glaucoma were divided into two groups: emmetropic eyes (-1.0 to +1.0 diopters) and highly myopic eyes (-12.0 to -5.0 diopters). The participants comprised 65 emmetropic normal eyes, 59 emmetropic glaucomatous eyes, 62 highly myopic normal eyes, and 68 highly myopic glaucomatous eyes; eyes with pathologic myopia were carefully excluded. Confocal scanning laser tomographic parameters were compared among all subjects after adjustment for age and disc area. The ROC curves and the sensitivity and specificity for glaucoma detection using several clinical methods were then compared between the emmetropic and highly myopic eyes. Rim area, cup/disc area ratio, mean cup depth, and cup shape measure of glaucomatous eyes were significantly different from those of normal eyes in both the highly myopic and emmetropic groups. Methodological overestimation of the retinal nerve fiber layer cross-sectional area due to optic disc tilting was suggested in the highly myopic eyes. The diagnostic performance for glaucoma using several discriminant methods significantly deteriorated in the highly myopic eyes. In the highly myopic glaucomatous eyes, confocal scanning laser tomographic parameters were significantly different from those of non-glaucomatous highly myopic eyes, but the diagnostic performance for glaucoma was worse than that in emmetropic eyes. These findings demonstrate the utility and limitations of the apparatus in diagnosing glaucoma in highly myopic patients.
NASA Astrophysics Data System (ADS)
Gaudiosi, Germana; Nappi, Rosa; Alessio, Giuliana; Cella, Federico; Fedi, Maurizio; Florio, Giovanni
2014-05-01
The Southern Apennines is one of the Italian most active areas from a geodynamic point of view since it is characterized by occurrence of intense and widely spread seismic activity. Most seismicity of the area is concentrated along the chain, affecting mainly the Irpinia and Sannio-Matese areas. The seismogenetic sources responsible for the destructive events of 1456, 1688, 1694, 1702, 1732, 1805, 1930, 1962 and 1980 (Io = X-XI MCS) occurred mostly on NW-SE faults, and the relative hypocenters are concentrated within the upper 20 km of the crust. Structural observations on the Pleistocene faults suggest normal to sinistral movements for the NW-SE trending faults and normal to dextral for the NE-SW trending structures. The available focal mechanisms of the largest events show normal solutions consistent with NE-SW extension of the chain. After the 1980 Irpinia large earthquake, the release of seismic energy in the Southern Apennines has been characterized by occurrence of moderate energy sequences of main shock-aftershocks type and swarm-type activity with low magnitude sequences. Low-magnitude (Md<5) historical and recent earthquakes, generally clustered in swarms, have commonly occurred along the NE-SW faults. This paper deals with integrated analysis of geological and geophysical data in GIS environment to identify surface, buried and hidden active faults and to characterize their geometry. In particular we have analyzed structural data, earthquake space distribution and gravimetric data. The main results of the combined analysis indicate good correlation between seismicity and Multiscale Derivative Analysis (MDA) lineaments from gravity data. Furthermore 2D seismic hypocentral locations together with high-resolution analysis of gravity anomalies have been correlated to estimate the fault systems parameters (strike, dip direction and dip angle) through the application of the DEXP method (Depth from Extreme Points).
Zhang, Z B; Xue, Z X; Wu, X J; Wang, T M; Li, Y H; Song, X L; Chao, X F; Wang, G; Nazibam, Nurmamat; Ayxamgul, Bawudun; Gulbahar, Elyas; Zhou, Z Y; Sun, B S; Wang, Y Z; Wang, M
2017-06-10
Objective: To understand the prevalence of dyslipidemia and normal blood lipid levels in Uygur diabetes patients in Kashgar prefecture in the southern area of Xinjiang. Methods: A total of 5 078 local residents aged ≥18 years (42.56% were men) selected through cluster random sampling in Kashgar were surveyed by means of questionnaire, physical examination and laboratory tests, and 521 diabetes patients were screened. Results: The overall prevalence of dyslipidemia in diabetes patients was 59.50% (310/521), with an adjusted rate of 49.39%. Age ≥65 years, overweight, obesity and abdominal obesity increased the risk for dyslipidemia by 0.771 times (95% CI: 1.015-3.088), 1.132 times (95% CI: 1.290-3.523), 1.688 times (95% CI: 1.573-4.592) and 0.801 times (95% CI: 1.028-3.155), respectively. Compared with males, female sex was a protective factor for dyslipidemia (OR=0.507, 95% CI: 0.334-0.769). The overall normal rate of blood lipid levels, including total cholesterol (TC), triglycerides (TG), high-density lipoprotein cholesterol (HDL-C) and low-density lipoprotein cholesterol (LDL-C), for type 2 diabetes patients was 11.13%. Female sex, higher BMI and abdominal obesity were the factors influencing the overall normal blood lipid level. The normal rate of LDL-C level decreased with increases in age, BMI and waist circumference (trend test χ²=18.049, P<0.001; trend test χ²=10.582, P=0.001; χ²=19.081, P<0.001), but increased with educational level (trend test χ²=9.764, P=0.002). Conclusion: The prevalence of dyslipidemia in Uygur diabetes patients in Kashgar was high; however, the overall normal rate of blood lipid levels was relatively low. Obesity was the most important risk factor for dyslipidemia in this area. More attention should be paid to dyslipidemia prevention in women.
Schwarz, Stefan T; Xing, Yue; Tomar, Pragya; Bajaj, Nin; Auer, Dorothee P
2017-06-01
Purpose To investigate the pattern of neuromelanin signal intensity loss within the substantia nigra pars compacta (SNpc), locus coeruleus, and ventral tegmental area in Parkinson disease (PD); the specific aims were (a) to study regional magnetic resonance (MR) quantifiable depigmentation in association with PD severity and (b) to investigate whether imaging- and platform-dependent signal intensity variations can be normalized. Materials and Methods This prospective case-control study was approved by the local ethics committee and the research department of Nottingham University Hospitals. Written informed consent was obtained from all participants before enrollment in the study. Sixty-nine participants (39 patients with PD and 30 control subjects) were investigated with neuromelanin-sensitive MR imaging by using two different 3-T platforms and three differing protocols. Neuromelanin-related volumes of the anterior and posterior SNpc, locus coeruleus, and ventral tegmental area were determined, and normalized neuromelanin volumes were assessed for protocol-dependent effects. Diagnostic test performance of normalized neuromelanin volume was investigated by using receiver operating characteristic analyses, and correlations with the Unified Parkinson's Disease Rating Scale scores were tested. Results Reduction of normalized neuromelanin volume in PD was most pronounced in the posterior SNpc (median, -83%; P < .001), followed by the anterior SNpc (-49%; P < .001) and the locus coeruleus (-37%; P < .05). Normalized neuromelanin volume loss of the posterior and whole SNpc allowed the best differentiation of patients with PD and control subjects (area under the receiver operating characteristic curve, 0.92 and 0.88, respectively). Normalized neuromelanin volume of the anterior, posterior, and whole SNpc correlated with Unified Parkinson's Disease Rating Scale scores (r² = 0.25, 0.22, and 0.28, respectively; all P < .05). Conclusion PD-induced neuromelanin loss can be quantified across imaging protocols and platforms by using appropriate adjustment. Depigmentation in PD follows a distinct spatial pattern, affords high diagnostic accuracy, and is associated with disease severity. © RSNA, 2016. Online supplemental material is available for this article.
Computer-aided diagnosis of splenic enlargement using wave pattern of spleen in abdominal CT images
NASA Astrophysics Data System (ADS)
Seong, Won; Cho, June-Sik; Noh, Seung-Moo; Park, Jong Won
2006-03-01
It is known that the spleen accompanying liver cirrhosis is hypertrophied or enlarged. We examined the wave pattern at the left boundary of the spleen on abdominal CT images of patients with liver cirrhosis and found that it differs from that in images of patients with a normal liver. We noticed that abdominal CT images of patients with liver cirrhosis show strong bending in the wave pattern. In the case of a normal liver, the images may also show a wave pattern, but its bends are not strong. Therefore, the total waving area of the spleen with liver cirrhosis is found to be greater than that of the spleen with a normal liver. Moreover, we found that the waves of the spleen in images with liver cirrhosis have a higher degree of circularity compared to the normal liver case. Based on these two observations, we propose an automatic method to diagnose splenic enlargement by using the wave pattern of the spleen in abdominal CT images. The proposed automatic method improves diagnostic performance compared with the conventional process based on the size of the spleen.
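The abstract does not spell out its circularity measure; a plausible choice is the isoperimetric quotient:

```python
import numpy as np

def circularity(area, perimeter):
    """Isoperimetric circularity, 4*pi*A / P**2.

    Equals 1 for a perfect circle and decreases for elongated shapes; a
    plausible stand-in for the "degree of circularity" of the splenic
    wave regions (the paper's exact definition is an assumption here).
    """
    return 4.0 * np.pi * area / perimeter ** 2
```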
Super-delta: a new differential gene expression analysis procedure with robust data normalization.
Liu, Yuhang; Zhang, Jinfeng; Qiu, Xing
2017-12-21
Normalization is an important data preparation step in gene expression analyses, designed to remove various systematic noise. Sample variance is greatly reduced after normalization, hence the power of subsequent statistical analyses is likely to increase. On the other hand, variance reduction is made possible by borrowing information across all genes, including differentially expressed genes (DEGs) and outliers, which will inevitably introduce some bias. This bias typically inflates type I error and can reduce statistical power in certain situations. In this study we propose a new differential expression analysis pipeline, dubbed super-delta, that consists of a multivariate extension of global normalization and a modified t-test. A robust procedure is designed to minimize the bias introduced by DEGs in the normalization step. The modified t-test is derived based on asymptotic theory for hypothesis testing that suitably pairs with the proposed robust normalization. We first compared super-delta with four commonly used normalization methods: global, median-IQR, quantile, and cyclic loess normalization in simulation studies. Super-delta was shown to have better statistical power with tighter control of the type I error rate than its competitors. In many cases, the performance of super-delta is close to that of an oracle test in which datasets without technical noise were used. We then applied all methods to a collection of gene expression datasets on breast cancer patients who received neoadjuvant chemotherapy. While there is a substantial overlap of the DEGs identified by all of them, super-delta was able to identify comparatively more DEGs than its competitors. Downstream gene set enrichment analysis confirmed that all these methods selected largely consistent pathways. Detailed investigation of the relatively small differences showed that pathways identified by super-delta have better connections to breast cancer than the other methods. As a new pipeline, super-delta provides new insights into the area of differential gene expression analysis. A solid theoretical foundation supports its asymptotic unbiasedness and technical noise-free properties. Implementation on real and simulated datasets demonstrates its decent performance compared with state-of-the-art procedures. It also has the potential to be extended to other data types and/or more general between-group comparison problems.
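A schematic sketch of the overall idea, robust global normalization followed by gene-wise t-tests; this is not the authors' exact algorithm, whose normalization is multivariate and whose t-test is modified:

```python
import numpy as np
from scipy import stats

def robust_global_normalize(expr, trim=0.2):
    """Shift each sample (column) so its trimmed mean expression matches.

    The trimmed mean limits the influence of DEGs and outliers on the
    per-sample offsets, in the spirit of super-delta's bias-minimizing
    step (the published procedure differs in detail).
    """
    offsets = stats.trim_mean(expr, trim, axis=0)   # one offset per sample
    return expr - offsets + offsets.mean()

def gene_t_tests(expr, group):
    """Ordinary gene-wise two-sample t-tests after normalization;
    super-delta itself uses a modified t-statistic."""
    g0, g1 = expr[:, group == 0], expr[:, group == 1]
    return stats.ttest_ind(g0, g1, axis=1)
```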
Deport, Coralie; Ratel, Jérémy; Berdagué, Jean-Louis; Engel, Erwan
2006-05-26
The current work describes a new method, the comprehensive combinatory standard correction (CCSC), for the correction of instrumental signal drifts in GC-MS systems. The method consists of analyzing, together with the products of interest, a mixture of n selected internal standards, normalizing the peak area of each analyte by the sum of standard areas, and then selecting, among the 2^n - 1 possible sums (all non-empty subsets of the n standards), the sum that enables the best product discrimination. The CCSC method was compared with classical data pre-processing techniques such as internal normalization (IN) and single standard correction (SSC) on their ability to correct raw data from the main drifts occurring in a dynamic headspace-gas chromatography-mass spectrometry system. Three edible oils with closely similar volatile compound compositions were analysed using a device whose performance was modulated by using new or used dynamic headspace traps and GC columns, and by modifying the tuning of the mass spectrometer. According to one-way ANOVA, the CCSC method increased the number of analytes discriminating the products (31 after CCSC versus 25 with raw data or after IN, and 26 after SSC). Moreover, CCSC enabled a satisfactory discrimination of the products irrespective of the drifts. In a factorial discriminant analysis, 100% of the samples (n = 121) were well-classified after CCSC, versus 45% for raw data and 90% and 93%, respectively, after IN and SSC.
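Since the number of non-empty subsets of n standards is 2^n - 1, the search simply enumerates them all; a sketch with an assumed scoring criterion (summed one-way-ANOVA p-values; the paper's exact criterion may differ):

```python
import numpy as np
from itertools import combinations
from scipy.stats import f_oneway

def ccsc_normalize(analyte_areas, standard_areas, product_labels):
    """Comprehensive combinatory standard correction (illustrative sketch).

    `analyte_areas`: (samples, analytes); `standard_areas`: (samples, n);
    `product_labels`: (samples,). For each non-empty subset of standards,
    divide every analyte peak area by the summed areas of that subset,
    then keep the subset giving the best product discrimination.
    """
    n = standard_areas.shape[1]
    best_score, best_subset = np.inf, None
    for p in range(1, n + 1):
        for subset in combinations(range(n), p):
            denom = standard_areas[:, subset].sum(axis=1, keepdims=True)
            corrected = analyte_areas / denom
            groups = [corrected[product_labels == g]
                      for g in np.unique(product_labels)]
            pvals = f_oneway(*groups).pvalue   # one p-value per analyte
            score = np.nansum(pvals)           # assumed scoring criterion
            if score < best_score:
                best_score, best_subset = score, subset
    denom = standard_areas[:, best_subset].sum(axis=1, keepdims=True)
    return analyte_areas / denom, best_subset
```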
NASA Technical Reports Server (NTRS)
Titterington, W. A.
1973-01-01
The solid polymer electrolyte (SPE) water electrolysis technology is presented as a potential energy conversion method for wind-driven generator systems. Electrolysis life and performance data are presented from laboratory-sized single cells (7.2 sq in active area) with a high cell current density (1000 ASF) selected for normal operation.
Recent advances in quantitative neuroproteomics.
Craft, George E; Chen, Anshu; Nairn, Angus C
2014-01-01
The field of proteomics is undergoing rapid development in a number of different areas including improvements in mass spectrometric platforms, peptide identification algorithms and bioinformatics. In particular, new and/or improved approaches have established robust methods that not only allow for in-depth and accurate peptide and protein identification and modification, but also allow for sensitive measurement of relative or absolute quantitation. These methods are beginning to be applied to the area of neuroproteomics, but the central nervous system poses many specific challenges in terms of quantitative proteomics, given the large number of different neuronal cell types that are intermixed and that exhibit distinct patterns of gene and protein expression. This review highlights the recent advances that have been made in quantitative neuroproteomics, with a focus on work published over the last five years that applies emerging methods to normal brain function as well as to various neuropsychiatric disorders, including schizophrenia and drug addiction, and neurodegenerative diseases, including Parkinson’s disease and Alzheimer’s disease. While older methods such as two-dimensional polyacrylamide electrophoresis continue to be used, a variety of more in-depth MS-based approaches, including label-based (ICAT, iTRAQ, TMT, SILAC, SILAM), label-free (label-free, MRM, SWATH) and absolute quantification methods, are rapidly being applied to neurobiological investigations of normal and diseased brain tissue as well as of cerebrospinal fluid (CSF). The biological implications of many of these studies remain to be clearly established, there is a clear need for standardization of experimental design and data analysis, and the analysis of protein changes in specific neuronal cell types in the central nervous system remains a serious challenge. Nevertheless, the quality and depth of the more recent quantitative proteomics studies are beginning to shed light on a number of aspects of neuroscience relating to normal brain function, as well as on the changes in protein expression and regulation that occur in neuropsychiatric and neurodegenerative disorders. PMID:23623823
Kotini, A; Anninos, P; Anastasiadis, A N; Tamiolakis, D
2005-09-07
The aim of this study was to compare a theoretical neural net model with MEG data from epileptic patients and normal individuals. Our experimental study population included 10 epilepsy sufferers and 10 healthy subjects. The recordings were obtained with a one-channel SQUID biomagnetometer in a magnetically shielded room. Using the method of χ²-fitting, it was found that the MEG amplitudes in epileptic patients and normal subjects had Poisson and Gauss distributions, respectively. The Poisson connectivity derived from the theoretical neural model represents the state of epilepsy, whereas the Gauss connectivity represents normal behavior. The MEG data obtained from epileptic areas had higher amplitudes than the MEG from normal regions and were comparable with the theoretical magnetic fields from Poisson and Gauss distributions. Furthermore, the magnetic field derived from the theoretical model had amplitudes of the same order as the recorded MEG from the 20 participants. The approximation of the theoretical neural net model with real MEG data provides information about the structure of brain function in epileptic and normal states, encouraging further studies to be conducted.
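A schematic analogue of the χ²-fitting step (binning, parameterization, and the rescaling of amplitudes to non-negative counts for the Poisson fit are all assumptions here):

```python
import numpy as np
from scipy import stats

def best_fit_distribution(amplitudes, bins=20):
    """Chi-square goodness of fit against Gaussian and Poisson shapes.

    Schematic only: assumes `amplitudes` have been rescaled so that a
    Poisson pmf over rounded bin centers is meaningful.
    """
    observed, edges = np.histogram(amplitudes, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    n = observed.sum()

    mu, sigma = amplitudes.mean(), amplitudes.std()
    p_gauss = stats.norm.pdf(centers, mu, sigma)
    k = np.clip(np.round(centers), 0, None).astype(int)
    p_pois = stats.poisson.pmf(k, mu)

    results = {}
    for name, p in [("gauss", p_gauss), ("poisson", p_pois)]:
        expected = n * p / p.sum()   # renormalize densities to counts
        chi2_stat = ((observed - expected) ** 2
                     / np.maximum(expected, 1e-9)).sum()
        results[name] = chi2_stat
    return min(results, key=results.get), results
```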
Xian, George Z.; Homer, Collin G.
2009-01-01
The U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001 is widely used as a baseline for national land cover and impervious conditions. To ensure timely and relevant data, it is important to update this base to a more recent time period. A prototype method was developed to update the land cover and impervious surface by individual Landsat path and row. This method updates NLCD 2001 to a nominal date of 2006 by using both Landsat imagery and data from NLCD 2001 as the baseline. Pairs of Landsat scenes in the same season from both 2001 and 2006 were acquired according to satellite paths and rows and normalized to allow calculation of change vectors between the two dates. Conservative thresholds based on Anderson Level I land cover classes were used to segregate the change vectors and determine areas of change and no-change. Once change areas had been identified, impervious surface was estimated for areas of change by sampling from NLCD 2001 in unchanged areas. Methods were developed and tested across five Landsat path/row study sites that contain a variety of metropolitan areas. Results from the five study areas show that the vast majority of impervious surface changes associated with urban developments were accurately captured and updated. The approach optimizes mapping efficiency and can provide users a flexible method to generate updated impervious surface at national and regional scales.
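A hedged sketch of the change-vector step described in this abstract: given two radiometrically normalized Landsat band stacks, compute the per-pixel spectral change magnitude and flag change with conservative class-specific thresholds. The band arrays, the Anderson Level I class raster, and the threshold values are illustrative placeholders, not NLCD's calibrated parameters:

```python
import numpy as np

def change_vector_magnitude(t1, t2):
    """Euclidean length of the spectral difference vector per pixel."""
    return np.sqrt(((t2.astype(float) - t1.astype(float)) ** 2).sum(axis=0))

def flag_change(magnitude, anderson_level1, thresholds):
    """Apply a conservative, class-specific threshold (Anderson Level I)."""
    change = np.zeros_like(magnitude, dtype=bool)
    for cls, thr in thresholds.items():
        change |= (anderson_level1 == cls) & (magnitude > thr)
    return change

# toy example: 6-band scenes, two classes (1=urban, 2=forest)
rng = np.random.default_rng(1)
scene01 = rng.integers(0, 255, size=(6, 100, 100))
scene06 = scene01 + rng.integers(-10, 60, size=(6, 100, 100))
classes = rng.integers(1, 3, size=(100, 100))
mag = change_vector_magnitude(scene01, scene06)
changed = flag_change(mag, classes, thresholds={1: 80.0, 2: 120.0})
```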
NASA Astrophysics Data System (ADS)
Mori, Kensaku; Suenaga, Yasuhito; Toriwaki, Jun-ichiro
2003-05-01
This paper describes a software-based fast volume rendering (VolR) method on a PC platform using multimedia instructions, such as SIMD instructions, which are currently available in PC CPUs. This method achieves fast rendering through highly optimized software rather than an improved rendering algorithm. In volume rendering using a ray casting method, the system requires fast execution of the following processes: (a) interpolation of voxel or color values at sample points, (b) computation of normal vectors (gray-level gradient vectors), (c) calculation of shaded values obtained by dot products of normal vectors and light source direction vectors, (d) memory access to a huge area, and (e) efficient ray skipping at translucent regions. The proposed software implements these fundamental processes in volume rendering by using special instruction sets for multimedia processing. The proposed software can generate virtual endoscopic images of a 3-D volume of 512x512x489 voxels by volume rendering with perspective projection, specular reflection, and on-the-fly normal vector computation on a conventional PC without any special hardware, at thirteen frames per second. Semi-translucent display is also possible.
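A numpy sketch of steps (b) and (c) above: normal vectors from gray-level gradients (central differences) and diffuse shading from a dot product with the light direction. The original gains its speed from SIMD multimedia instructions applied per ray sample; whole-array numpy merely stands in for that here:

```python
import numpy as np

def gradient_normals(volume):
    gz, gy, gx = np.gradient(volume.astype(float))   # gray-level gradient field
    g = np.stack([gx, gy, gz], axis=-1)
    norm = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.maximum(norm, 1e-9)                # unit normals per voxel

def shade(volume, light_dir):
    n = gradient_normals(volume)
    l = np.asarray(light_dir, float)
    l /= np.linalg.norm(l)
    return np.clip(n @ l, 0.0, 1.0)                  # Lambertian (diffuse) term

vol = np.random.default_rng(2).random((64, 64, 64))
shaded = shade(vol, light_dir=(0.3, 0.5, 1.0))
```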
Xian, George; Homer, Collin G.
2010-01-01
A prototype method was developed to update the U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001 to a nominal date of 2006. NLCD 2001 is widely used as a baseline for national land cover and impervious cover conditions. To enable the updating of this database in an optimal manner, methods are designed to be accomplished by individual Landsat scene. Using conservative change thresholds based on land cover classes, areas of change and no-change were segregated from change vectors calculated from normalized Landsat scenes from 2001 and 2006. By sampling from NLCD 2001 impervious surface in unchanged areas, impervious surface predictions were estimated for changed areas within an urban extent defined by a companion land cover classification. Methods were developed and tested for national application across six study sites containing a variety of urban impervious surface. Results show the vast majority of impervious surface change associated with urban development was captured, with overall RMSE from 6.86 to 13.12% for these areas. Changes of urban development density were also evaluated by characterizing the categories of change by percentile for impervious surface. This prototype method provides a relatively low cost, flexible approach to generate updated impervious surface using NLCD 2001 as the baseline.
Ecological Health and Water Quality Assessments in Big Creek Lake, AL
NASA Astrophysics Data System (ADS)
Childs, L. M.; Frey, J. W.; Jones, J. B.; Maki, A. E.; Brozen, M. W.; Malik, S.; Allain, M.; Mitchell, B.; Batina, M.; Brooks, A. O.
2008-12-01
Big Creek Lake (aka J.B. Converse Reservoir) serves as the water supply for the majority of residents in Mobile County, Alabama. The area surrounding the reservoir serves as a gopher tortoise mitigation bank and is protected from further development; however, previous disasters and construction have greatly impacted the Big Creek Lake area. The Escatawpa Watershed drains into the lake, and of the seven drainage streams, three have received a 303(d) (impaired water bodies) designation in the past. In the adjacent ecosystem, the forest is experiencing major stress from drought and pine bark beetle infestations. Various agencies are using control methods such as pesticide treatment to eradicate the beetles, and there are many concerns about these control methods and their run-off into the ecosystem. In addition to pesticide control methods, the Highway 98 construction projects cross the north area of the lake, and the community has expressed concern about both direct and indirect impacts of these projects on the lake. This project addresses concerns about water quality, increasing drought in the Southeastern U.S., forest health as it relates to vegetation stress, and state and federal needs for improved assessment methods, supported by remotely sensed data, to determine coastal forest susceptibility to pine bark beetles. Landsat TM, ASTER, MODIS, and EO-1/ALI imagery was employed to compute the Normalized Difference Vegetation Index (NDVI) and Normalized Difference Moisture Index (NDMI), as well as to detect concentrations of suspended solids, chlorophyll, and water turbidity. This study utilizes NASA Earth Observation Systems to determine how environmental conditions and human activity relate to pine tree stress and the onset of pine beetle invasion, and relates current water quality data to community concerns to gain a better understanding of human impacts upon water resources.
Omura, Yoshiaki; Lu, Dominic; Jones, Marilyn K; Nihrane, Abdallah; Duvvi, Harsha; Yapor, Dario; Shimotsuura, Yasuhiro; Ohki, Motomu
2015-01-01
Lyme disease was found in a majority of the people we tested. Once the Borrelia burgdorferi (B.B.) spirochete enters the human body, it not only causes pain by infecting joints, but also often enters the brain and the heart. Infection of the brain can be quickly detected from the pupil, and infection of the heart from ECGs, non-invasively. By evaluating recorded ECGs of atrial fibrillation (AF), using a U.S.-patented, non-invasive, highly sensitive electromagnetic field (EMF) resonance phenomenon between 2 identical molecules or between a molecule and its antibody, we examined the ECGs of 25 different AF patients and found that the majority of them suffer from various degrees of B.B. spirochete infection in the SA node area, in the right and left atria, and in the pulmonary vein near and around its junction with the left atrium, with lesser degrees of infection at the AV node and His bundle. When B.B. infection reaches 224-600 ng or higher at these areas, AF often appears, in the majority of the AF cases analyzed. For AF to develop, 4 abnormal factors must be present simultaneously: 1) B.B. infection must increase to 224-600 ng or higher; 2) Atrial Natriuretic Peptide (ANP) must be markedly reduced from the normal value of less than 4 ng to over 100-400 ng; 3) Cardiac Troponin I must increase significantly from the normal value of less than 3 ng to over 12 ng; and 4) Taurine must be markedly reduced from the normal value of 4-6 ng to 0.25 ng. These 4 changes were mainly found only at infected sites of the SA node area, both atria, and between the end of the T wave and the beginning of the SA node area, which corresponds to the U wave of the recorded ECG. The origin of the U wave is mainly the abnormal electrical potential of the pulmonary vein at the left atrium. If all 4 factors do not occur at the infection site, no AF will develop. Using this method, invisible B.B. infection can be detected at an early stage in seemingly normal ECGs. Long before AF appears, it can be prevented by improved treatment with Amoxicillin 500 mg 3 times/day plus Taurine 175 mg 3 times/day, with or without EPA 180 mg and DHA 120 mg, to avoid serious current limitations in the use of Doxycycline 100 mg 2 times/day, for 4 weeks.
Huang, C.-C.; Lee, Y.-H.; Liu, Huaibao P.; Keefer, D.K.; Jibson, R.W.
2001-01-01
The 1999 Chi-Chi, Taiwan, earthquake triggered numerous landslides throughout a large area in the Central Range, to the east, southeast, and south of the fault rupture. Among them are two large rock avalanches, at Tsaoling and at Jih-Feng-Erh-Shan. At Jih-Feng-Erh-Shan, the entire thickness (30-50 m) of the Miocene Changhukeng Shale over an area of 1 km² slid down its bedding plane for a distance of about 1 km. Initial movement of the landslide was nearly purely translational. We investigate the effect of surface-normal acceleration on the initiation of the Jih-Feng-Erh-Shan landslide using a block slide model. We show that this acceleration, currently not considered by dynamic slope-stability analysis methods, significantly influences the initiation of the landslide.
Photovoltaic healing of non-uniformities in semiconductor devices
Karpov, Victor G.; Roussillon, Yann; Shvydka, Diana; Compaan, Alvin D.; Giolando, Dean M.
2006-08-29
A method of making a photovoltaic device using light energy and a solution to normalize electric potential variations in the device. A semiconductor layer having nonuniformities comprising areas of aberrant electric potential deviating from the electric potential of the top surface of the semiconductor is deposited onto a substrate layer. A solution containing an electrolyte, at least one bonding material, and positive and negative ions is applied over the top surface of the semiconductor. Light energy is applied to generate photovoltage in the semiconductor, causing a redistribution of the ions and the bonding material to the areas of aberrant electric potential. The bonding material selectively bonds to the nonuniformities in a manner such that the electric potential of the nonuniformities is normalized relative to the electric potential of the top surface of the semiconductor layer. A conductive electrode layer is then deposited over the top surface of the semiconductor layer.
Ohmichi, Takuma; Kondo, Masaki; Itsukage, Masahiro; Koizumi, Hidetaka; Matsushima, Shigenori; Kuriyama, Nagato; Ishii, Kazunari; Mori, Etsuro; Yamada, Kei; Mizuno, Toshiki; Tokuda, Takahiko
2018-03-16
OBJECTIVE The gold standard for the diagnosis of idiopathic normal pressure hydrocephalus (iNPH) is the CSF removal test. For elderly patients, however, a less invasive diagnostic method is required. On MRI, high-convexity tightness was reported to be an important finding for the diagnosis of iNPH. On SPECT, patients with iNPH often show hyperperfusion of the high-convexity area. The authors tested 2 hypotheses regarding the SPECT finding: 1) it is relative hyperperfusion reflecting the increased gray matter density of the convexity, and 2) it is useful for the diagnosis of iNPH. The authors termed the SPECT finding the convexity apparent hyperperfusion (CAPPAH) sign. METHODS Two clinical studies were conducted. In study 1, SPECT was performed for 20 patients suspected of having iNPH, and regional cerebral blood flow (rCBF) of the high-convexity area was examined using quantitative analysis. Clinical differences between patients with the CAPPAH sign (CAP) and those without it (NCAP) were also compared. In study 2, the CAPPAH sign was retrospectively assessed in 30 patients with iNPH and 19 healthy controls using SPECT images and 3D stereotactic surface projection. RESULTS In study 1, rCBF of the high-convexity area of the CAP group was calculated as 35.2-43.7 ml/min/100 g, which is not higher than normal values of rCBF determined by SPECT. The NCAP group showed lower cognitive function and weaker responses to the removal of CSF than the CAP group. In study 2, the CAPPAH sign was positive only in patients with iNPH (24/30) and not in controls (sensitivity 80%, specificity 100%). The coincidence rate between tight high convexity on MRI and the CAPPAH sign was very high (28/30). CONCLUSIONS Patients with iNPH showed hyperperfusion of the high-convexity area on SPECT; however, the presence of the CAPPAH sign did not indicate real hyperperfusion of rCBF in the high-convexity area. The authors speculated that patients with iNPH without the CAPPAH sign, despite showing tight high convexity on MRI, might have comorbidities such as Alzheimer's disease.
Multi-disciplinary optimization of aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1991-01-01
New methods were developed for efficient aeroservoelastic analysis and optimization. The main target was to develop a method for investigating large structural variations using a single set of modal coordinates. This task was accomplished by basing the structural modal coordinates on normal modes calculated with a set of fictitious masses loading the locations of anticipated structural changes. The following subject areas are covered: (1) modal coordinates for aeroelastic analysis with large local structural variations; and (2) time simulation of flutter with large stiffness changes.
A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data
Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence
2013-01-01
Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
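A brief sketch of this setting on synthetic data: counts are simulated from a Poisson log-normal hierarchy with a banded precision matrix, and a regularized Gaussian graphical model is then fit to log-transformed counts. This reproduces the comparison baseline described above, not the authors' hierarchical Lasso-penalized estimator:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(3)
p, n = 20, 100
precision = np.eye(p) + 0.3 * np.diag(np.ones(p - 1), 1) + 0.3 * np.diag(np.ones(p - 1), -1)
cov = np.linalg.inv(precision)
latent = rng.multivariate_normal(np.zeros(p), cov, size=n)  # log-scale expression
counts = rng.poisson(np.exp(latent + 2.0))                  # overdispersed RNA-seq-like counts

# baseline: regularized Gaussian graphical model on log-transformed counts
model = GraphicalLassoCV().fit(np.log1p(counts))
edges = np.abs(model.precision_) > 1e-3                     # inferred network skeleton
print(edges.sum())
```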
Ping, Bonnie Tay Yen; Aziz, Haliza Abdul; Idris, Zainab
2018-01-01
High-Performance Liquid Chromatography (HPLC) methods with evaporative light scattering (ELS) and refractive index (RI) detectors are used by the local palm oil industry to monitor the TAG profiles of palm oil and its fractions. The quantitation method is based on area normalization of the TAG components, expressed as percentage area. Although not frequently used, peak-area ratios based on TAG profiles are a possible qualitative method for characterizing the TAG of palm oil and its fractions. This paper aims to compare these two detectors in terms of peak-area ratios, percentage peak-area composition, and TAG elution profiles. The triacylglycerol (TAG) compositions of palm oil and its fractions were analysed under similar HPLC conditions, i.e., the same mobile phase and column. However, different sample concentrations were used for the two detectors, while remaining within their linearity limits; these concentrations also gave a good baseline-resolved separation of all TAG components. The percentage area compositions for the TAGs of palm oil and its fractions obtained with the ELSD method differed from those obtained with the RID, indicating an unequal response of the TAGs to the ELSD, which also affects the peak-area ratios; these were not equivalent to those obtained using HPLC-RID. The ELSD method showed a better baseline separation of the TAG components, with a more stable baseline, than the corresponding HPLC-RID. In conclusion, the percentage area compositions and peak-area ratios for palm oil and its fractions derived from HPLC-ELSD and RID were not equivalent, due to the different responses of TAG components to the ELSD detector. HPLC-RID has better accuracy for percentage area composition and peak-area ratio because the TAG components respond equally to the detector.
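The two quantities compared in this paper reduce to a few lines of arithmetic; the TAG names and peak areas below are hypothetical:

```python
# Area normalization (percentage area) and peak-area ratios from a dict of
# TAG peak areas, as compared between the ELS and RI detectors above.
def percent_area(areas):
    total = sum(areas.values())
    return {tag: 100.0 * a / total for tag, a in areas.items()}

def peak_area_ratio(areas, tag_a, tag_b):
    return areas[tag_a] / areas[tag_b]

peaks = {"OOO": 125_400.0, "POP": 310_200.0, "POO": 287_900.0}  # hypothetical areas
print(percent_area(peaks))
print(peak_area_ratio(peaks, "POP", "OOO"))
```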
Automatic Extraction of Urban Built-Up Area Based on Object-Oriented Method and Remote Sensing Data
NASA Astrophysics Data System (ADS)
Li, L.; Zhou, H.; Wen, Q.; Chen, T.; Guan, F.; Ren, B.; Yu, H.; Wang, Z.
2018-04-01
Built-up area marks the use of urban construction land across different periods of development, and its accurate extraction is key to studies of urban expansion. This paper studies the automatic extraction of urban built-up areas based on an object-oriented method and remote sensing data, and realizes automatic extraction of a city's main built-up area, which greatly reduces manual effort. First, construction land is extracted with an object-oriented method; the main technical steps are: (1) multi-resolution segmentation; (2) feature construction and selection; and (3) rule-set-based extraction of construction land. The characteristic parameters used in the rule set include the mean of the red band (Mean R), the Normalized Difference Vegetation Index (NDVI), the ratio of residential index (RRI), and the mean of the blue band (Mean B); by combining these parameters, construction land can be extracted. Then, based on the adaptability, distance, and area of the object domain, the urban built-up area can be quickly and accurately delineated from the construction land information, without depending on other data or expert knowledge, achieving automatic extraction of the urban built-up area. Beijing was used as the experimental area for these methods. The results show that the built-up area was extracted automatically with a boundary accuracy of 2359.65 m, meeting the requirements. The automatic extraction of urban built-up areas is highly practical and can be applied to monitoring changes in a city's main built-up area.
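A hedged sketch of the rule-set step, operating on per-object band means from the segmentation stage. The thresholds are illustrative placeholders (the paper's calibrated values are not given here), and the form of the RRI is an assumption of this sketch:

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / np.maximum(nir + red, 1e-9)

def is_construction_land(obj):
    """obj: dict of per-object band means produced by multi-resolution segmentation."""
    v = ndvi(obj["mean_nir"], obj["mean_red"])
    rri = obj["mean_red"] / np.maximum(obj["mean_blue"], 1e-9)  # assumed RRI form
    # illustrative rule set combining NDVI, Mean R, Mean B, and RRI
    return (v < 0.2) and (obj["mean_red"] > 60) and (obj["mean_blue"] > 50) and (rri < 1.4)

obj = {"mean_red": 95.0, "mean_blue": 88.0, "mean_nir": 101.0}
print(is_construction_land(obj))
```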
NASA Astrophysics Data System (ADS)
Yang, Y.; Zeng, Z.; Shuang, X.; Li, X.
2017-12-01
On 17 October 2016, an Ms 6.3 earthquake occurred in Zaduo County, Qinghai Province (32.9°N, 95.0°E), 159 km away from the epicenter of the 2011 Yushu Ms 7.3 earthquake. The earthquake is located in the eastern Tibet Plateau, in the region north of the Eastern Himalayan Syntaxis. Using broadband seismic waveform data from regional networks, we determined the focal mechanism solutions (FMSs) of 83 earthquakes (M>3.5) that occurred in Zaduo and its adjacent areas from 2009 to 2017. We also collected another 63 published FMSs and then inverted for the current tectonic stress field in the study region using the damped linear inversion method. The results show that the Zaduo earthquake was an oblique normal-faulting event. The FMSs in our study region are mainly strike-slip and normal fault patterns. The strike-slip earthquakes are mainly distributed in the Yushu-Ganzi, Zaduo, and Yanshiping fault zones, whereas the normal faulting events occurred in the Nu Jiang fault zone and in Nierong County and its vicinity, in the south and southwest of the study area. The tectonic stress field results indicate that the stress distribution in the north and east of the study region changes homogeneously and slowly. From west to east, σ1 gradually rotates from NNE to NE, and σ3 varies from NWW to NW. Both the maximum (σ1) and minimum (σ3) principal stress axes in the study area are nearly horizontal, except in the Nu Jiang fault zone and its vicinity in the south of the study area, which is in a normal faulting stress regime (σ1 vertical, σ3 horizontal). The localized normal faulting stress field in the south, which is almost limited to a semicircle, indicates that a high-pressure, low-viscosity body with low S-wave velocity and high conductivity might exist beneath the anomalous area, and there may be another semicircular anomalous area beyond the south of the study region. Waveform data for this study are provided by the Data Management Centre of China National Seismic Network at the Institute of Geophysics (SEISDMC, doi:10.11998/SeisDmc/SN), China Earthquake Networks Center and the GS, QH, SC, XZ Seismic Networks, China Earthquake Administration. This work was supported by the National Nature Science Foundation of China under Grant No. 41230206.
Transperineal ultrasound in the assessment of haemorrhoids and haemorrhoidectomy: a pilot study.
Zbar, A P; Murison, R
2010-06-01
The purpose of the study was to measure the anal cushion area using static transperineal ultrasound in a group of patients with symptomatic grade III and IV haemorrhoids about to undergo haemorrhoidectomy, and to compare it with that of a group of age-matched normal controls and with the area measured following haemorrhoidectomy. Transperineal sonography was performed using a linear transducer, measuring the anal cushion area by subtracting the measured luminal diameter of the undisturbed anal canal from the inner border of the internal anal sphincter. Measurements were made 6 weeks following haemorrhoidectomy. Comparisons were made between 22 normal controls and 36 patients with haemorrhoids (31 evaluable post-operatively). The median area was 0.78 cm² in normal controls, 2.25 cm² in pre-operative patients, and 1.20 cm² in post-operative cases. There was a significant difference between pre- and post-operative cases, with the cushion areas of normal subjects being significantly lower than those of post-operative cases. Variance of measurement in all 3 groups was negligible. Static transperineal sonography measuring the anal cushion area is reproducible and shows marked differences between normal subjects and patients with symptomatic haemorrhoids. There is a marked effect on measured area resulting from haemorrhoidectomy.
Xian, George; Homer, Collin G.; Fry, Joyce
2009-01-01
The recent release of the U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001, which represents the nation's land cover status based on a nominal date of 2001, is widely used as a baseline for national land cover conditions. To enable the updating of this land cover information in a consistent and continuous manner, a prototype method was developed to update land cover by an individual Landsat path and row. This method updates NLCD 2001 to a nominal date of 2006 by using both Landsat imagery and data from NLCD 2001 as the baseline. Pairs of Landsat scenes in the same season in 2001 and 2006 were acquired according to satellite paths and rows and normalized to allow calculation of change vectors between the two dates. Conservative thresholds based on Anderson Level I land cover classes were used to segregate the change vectors and determine areas of change and no-change. Once change areas had been identified, land cover classifications at the full NLCD resolution for 2006 areas of change were completed by sampling from NLCD 2001 in unchanged areas. Methods were developed and tested across five Landsat path/row study sites that contain several metropolitan areas including Seattle, Washington; San Diego, California; Sioux Falls, South Dakota; Jackson, Mississippi; and Manchester, New Hampshire. Results from the five study areas show that the vast majority of land cover change was captured and updated with overall land cover classification accuracies of 78.32%, 87.5%, 88.57%, 78.36%, and 83.33% for these areas. The method optimizes mapping efficiency and has the potential to provide users a flexible method to generate updated land cover at national and regional scales by using NLCD 2001 as the baseline.
Autofluorescence imaging to optimize 5-ALA-induced fluorescence endoscopy of bladder carcinoma.
Frimberger, D; Zaak, D; Stepp, H; Knüchel, R; Baumgartner, R; Schneede, P; Schmeller, N; Hofstetter, A
2001-09-01
To design an optical system for detecting autofluorescence (AF) of bladder tumors and to determine the success of reducing the false-positive rate of 5-aminolevulinic acid-induced fluorescence endoscopy (AFE). AFE provides significantly higher sensitivity in detecting and localizing bladder carcinoma compared with white light endoscopy. The specificity of AFE is equivalent to white light endoscopy, mostly because of the false-positive fluorescence of chronic cystitis lesions. Laser-induced spectral autofluorescence detection is also an efficient method in the diagnosis of bladder carcinoma. Bladder tissue was excited to AF using the D-Light (375 to 440 nm) after regular AFE with detection of fluorescence-positive areas. The optical image was produced using a special RGB camera. Biopsies were taken from AFE-positive areas, the peritumoral edges, and normal bladder mucosa. The AF images of the suspicious areas were compared with the AFE images and the histologic results. A total of 43 biopsies were histologically examined (24 benign and 19 neoplastic). AF imaging showed contrast differences between papillary tumors, flat lesions, and normal mucosa. The combination of AFE with AF raised the specificity of AFE alone from 67% to 88%. AF imaging is possible. The value of the method in reducing the false-positive rate of the highly sensitive AFE needs to be validated with higher numbers. The combination of AF with AFE had a 20% higher specificity than AFE alone in our study.
Ex vivo method to visualize and quantify vascular networks in native and tissue engineered skin.
Egaña, José Tomás; Condurache, Alexandru; Lohmeyer, Jörn Andreas; Kremer, Mathias; Stöckelhuber, Beate M; Lavandero, Sergio; Machens, Hans-Günther
2009-03-01
Neovascularization plays a pivotal role in tissue engineering and tissue regeneration. However, reliable technologies to visualize and quantify blood vessel networks in target tissue areas are still lacking. In this work, we introduce a new method that allows comparison of vascularization levels in normal and tissue-engineered skin. Normal skin was isolated, and vascular dermal regeneration was analyzed based on tissue transillumination and computerized digital segmentation. For tissue-engineered skin, a bilateral full skin defect was created in a nude mouse model and then covered with a commercially available scaffold for dermal regeneration. After 3 weeks, the whole skin (including the scaffold for dermal regeneration) was harvested, and vascularization levels were analyzed. The blood vessel network in the skin was better visualized by transillumination than by radio-angiographic studies, the gold standard for angiography. After visualization, the whole vascular network was digitally segmented, showing excellent overlap with the original pictures. Quantification was performed on the digitally segmented picture, and indices of vascularization area (VAI) and vessel network length (VLI) were obtained in target tissues. The VAI/VLI ratio was calculated to obtain a vessel size index. We present a new technique with several advantages over others: animals do not require intravascular perfusion, whole areas of interest can be quantitatively analyzed at once, and the same target tissue can be processed for further experimental analysis.
Evaluation of normalization methods in mammalian microRNA-Seq data
Garmire, Lana Xia; Subramaniam, Shankar
2012-01-01
Simple total tag count normalization is inadequate for microRNA sequencing data generated by next-generation sequencing technology. However, a systematic evaluation of normalization methods for microRNA sequencing data has so far been lacking. We comprehensively evaluate seven commonly used normalization methods, including global normalization, Lowess normalization, the Trimmed Mean Method (TMM), quantile normalization, scaling normalization, variance stabilization, and the invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and the Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods against results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform best, whereas TMM, a method developed for RNA-sequencing normalization, performs worst. The poor performance of TMM normalization is further evidenced by abnormal results in tests of differential expression (DE) on microRNA-Seq data. Compared with the choice of model used for DE, the choice of normalization method is the primary factor affecting the DE results. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution. PMID:22532701
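Of the seven methods evaluated, quantile normalization is compact enough to sketch in full; this is the standard rank-based procedure, applied here to a toy count matrix:

```python
import numpy as np

def quantile_normalize(counts):
    """counts: (n_mirnas, n_samples) array; returns a normalized copy in which
    every sample shares the same distribution (the mean sorted profile)."""
    order = np.argsort(counts, axis=0)
    ranks = np.argsort(order, axis=0)                   # rank of each entry per sample
    mean_sorted = np.sort(counts, axis=0).mean(axis=1)  # reference distribution
    return mean_sorted[ranks]

x = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.0, 6.0],
              [4.0, 2.0, 8.0]])
print(quantile_normalize(x))
```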
An analytical method for predicting postwildfire peak discharges
Moody, John A.
2012-01-01
An analytical method that predicts postwildfire peak discharge is presented; it was developed from analysis of paired rainfall and runoff measurements collected from selected burned basins. Data were collected from 19 mountainous basins burned by eight wildfires in different hydroclimatic regimes in the western United States (California, Colorado, Nevada, New Mexico, and South Dakota). Most of the data were collected for the year of the wildfire and for 3 to 4 years after the wildfire. These data provide some estimate of the changes with time of postwildfire peak discharges, which are known to be transient but have received little documentation. The only required inputs for the analytical method are the burned area and a quantitative measure of soil burn severity (change in the normalized burn ratio), which is derived from Landsat reflectance data and is available from either the U.S. Department of Agriculture Forest Service or the U.S. Geological Survey. The method predicts the postwildfire peak discharge per unit burned area for the year of a wildfire, the first year after a wildfire, and the second year after a wildfire. It can be used at three levels of information depending on the data available to the user; each subsequent level requires either more data or more processing of the data. Level 1 requires only the burned area. Level 2 requires the burned area and the basin average value of the change in the normalized burn ratio. Level 3 requires the burned area and the calculation of the hydraulic functional connectivity, which is a variable that incorporates the sequence of soil burn severity along hillslope flow paths within the burned basin. Measurements indicate that the unit peak discharge response increases abruptly when the 30-minute maximum rainfall intensity is greater than about 5 millimeters per hour (0.2 inches per hour). This threshold may relate to a change in runoff generation from saturated-excess to infiltration-excess overland flow. The threshold value was about 7.6 millimeters per hour for the year of the wildfire and the first year after the wildfire, and it was about 11.1 millimeters per hour for the second year after the wildfire.
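A structural sketch of the three-level scheme, dispatching on whichever inputs the user has; every coefficient below is a hypothetical placeholder, since the fitted relations live in the report itself:

```python
def predict_unit_peak_discharge(burned_area_km2, dnbr_mean=None,
                                hydraulic_connectivity=None, year=0):
    """Peak discharge per unit burned area, scaled to the basin.
    All numeric coefficients are illustrative, not the report's fitted values."""
    if hydraulic_connectivity is not None:        # Level 3: most data required
        q_unit = 0.9 * hydraulic_connectivity     # placeholder relation
    elif dnbr_mean is not None:                   # Level 2: area + basin-mean dNBR
        q_unit = 0.004 * dnbr_mean                # placeholder relation
    else:                                         # Level 1: burned area only
        q_unit = 1.2                              # placeholder constant
    decay = {0: 1.0, 1: 0.7, 2: 0.4}.get(year, 0.4)  # assumed transient decline
    return q_unit * decay * burned_area_km2

print(predict_unit_peak_discharge(12.0, dnbr_mean=350.0, year=1))
```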
NASA Astrophysics Data System (ADS)
Gantumur, Byambakhuu; Wu, Falin; Zhao, Yan; Vandansambuu, Battsengel; Dalaibaatar, Enkhjargal; Itiritiphan, Fareda; Shaimurat, Dauryenbyek
2017-10-01
Urban growth can profoundly alter urban landscape structure, ecosystem processes, and local climates. Timely and accurate information on the status and trends of urban ecosystems is critical to developing strategies for sustainable development and to improving the urban residential environment and living quality. Ulaanbaatar has urbanized very rapidly; herders and farmers, many of them migrating from rural areas, have played a large role in this urban expansion (sprawl). Today, 1.3 million residents, about 40% of the total population, live in the Ulaanbaatar region, and these human activities have strongly affected green environments. The aim of this study is therefore to detect changes in land use/land cover (LULC) and to estimate their areas and future trends using remote sensing and statistical methods. The analysis combines change detection methods for LULC with remote sensing spectral indices, including the normalized difference vegetation index (NDVI), normalized difference water index (NDWI), and normalized difference built-up index (NDBI). In addition, land surface temperature (LST) can relate the urban heat island (UHI) to local climate issues. Statistical methods for image processing are used to define relations between the spectral indices and the change detection images, and regression analysis is used for future time-series trends. The remote sensing data are Landsat (TM/ETM+/OLI) satellite images over the period 1990 to 2016 at 5-year intervals. These remote sensing approaches, combined with statistical analysis, are useful for detecting changes in LULC. The experimental results show present and projected LULC changes and the relations between those changes and environmental conditions.
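The three spectral indices used in this analysis, written out for Landsat 8 OLI band numbering (an assumption of this sketch: green = B3, red = B4, NIR = B5, SWIR1 = B6), with reflectance arrays as input:

```python
import numpy as np

def _nd(a, b):
    """Generic normalized difference, guarded against division by zero."""
    return (a - b) / np.maximum(a + b, 1e-9)

def ndvi(b5_nir, b4_red):   return _nd(b5_nir, b4_red)    # vegetation
def ndwi(b3_green, b5_nir): return _nd(b3_green, b5_nir)  # water
def ndbi(b6_swir1, b5_nir): return _nd(b6_swir1, b5_nir)  # built-up
```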
NASA Astrophysics Data System (ADS)
Gómez-Alba, Sebastián; Fajardo-Zarate, Carlos Eduardo; Vargas, Carlos Alberto
2016-11-01
At least 156 earthquakes (Mw 2.8-4.4) were detected in Puerto Gaitán, Colombia (Eastern Llanos Basin) between April 2013 and December 2014. Out of context, this figure is not surprising. However, since its inception in 1993, the Colombian National Seismological Network (CNSN) had found no evidence of significant seismic events in this region. In this study, we used CNSN data to model the rupture front and orientation of the highest-energy events. For these earthquakes, we relied on a joint inversion method to estimate focal mechanisms and, in turn, determine the area's fault trends and stress tensor. While the stress tensor defines maximum stress with a normal-faulting tendency, the focal mechanisms generally represent normal faults with NW orientation, which lines up with the rupture tracking achieved via Back Projection Imaging for the study area. It should be borne in mind that this anomalous earthquake activity has taken place within oil fields. In short, the present paper argues that, based on the spatiotemporal distribution of seismic events, hydrocarbon operations may be inducing the study area's seismicity.
Multifocal ERG findings in carriers of X-linked retinoschisis
Kim, Linda S.; Seiple, William; Szlyk, Janet P.
2006-01-01
Purpose To determine whether retinal dysfunction in obligate carriers of X-linked retinoschisis (XLRS) could be observed in local electroretinographic responses obtained with the multifocal electroretinogram (mfERG). Methods Nine obligate carriers of XLRS (mean age, 46.2 years) were examined for the study. Examination of each carrier included an ocular examination and mfERG testing. For the mfERG, we used a 103-scaled hexagonal stimulus array that subtended a retinal area of approximately 40° in diameter. The amplitudes and implicit times in each location for the mfERG were compared with the corresponding values determined for a group of 34 normally-sighted, age-similar control subjects. Results Mapping of 103 local electroretinographic response amplitudes and implicit times within a central 40° area with the mfERG showed regions of reduced mfERG amplitudes and delayed implicit times in two of nine carriers. Conclusions The mfERG demonstrated areas of retinal dysfunction in two carriers of XLRS. When present, retinal dysfunction was evident in the presence of a normal-appearing fundus. Multifocal ERG testing can be useful for identifying some carriers of XLRS. PMID:17180613
Shi, Jingsheng; Chen, Jie; Wu, Jianguo; Chen, Feiyan; Huang, Gangyong; Wang, Zhan; Zhao, Guanglei; Wei, Yibing; Wang, Siqun
2014-01-01
Background The aim of this study was to compare the collapse values of the postoperative weight-bearing areas of different tantalum rod implant positions, fibula implantation, and a core decompression model, and to investigate the advantages and disadvantages of tantalum rod implantation over other methods across different ranges of osteonecrosis. Material/Methods The 3D finite element method was used to establish a 3D finite element model of the normal upper femur, 3D finite element models after tantalum rod implantation into different positions of the upper femur over different osteonecrosis ranges, and further 3D finite element models simulating fibula implantation and core decompression. Results The collapse values in the weight-bearing area of the femoral head for the tantalum rod implant model inside the osteonecrosis area, the implant model in the middle of the osteonecrosis area, the fibula implant model, and the shortening implant model exhibited no statistically significant differences (p>0.05) when the osteonecrosis range was small (60°). The stress values on the artificial bone surface for the tantalum rod implant model inside the osteonecrosis area and the shortening implant model differed significantly (p<0.01). Conclusions Tantalum rod implantation into the osteonecrosis area can reduce the collapse values in the weight-bearing area when osteonecrosis of the femoral head (ONFH) is within a certain range, thereby obtaining better clinical effects. When ONFH covers a large range (120°), tantalum rod implantation inside the osteonecrosis area, a shortening implant, or a fibula implant can reduce the collapse values of the femoral head. PMID:25479830
Mapping tree density in forests of the southwestern USA using Landsat 8 data
Humagain, Kamal; Portillo-Quintero, Carlos; Cox, Robert D.; Cain, James W.
2017-01-01
The increase of tree density in forests of the American Southwest promotes extreme fire events, understory biodiversity losses, and degraded habitat conditions for many wildlife species. To ameliorate these changes, managers and scientists have begun planning treatments aimed at reducing fuels and increasing understory biodiversity. However, spatial variability in tree density across the landscape is not well-characterized, and if better known, could greatly influence planning efforts. We used reflectance values from individual Landsat 8 bands (bands 2, 3, 4, 5, 6, and 7) and calculated vegetation indices (difference vegetation index, simple ratios, and normalized vegetation indices) to estimate tree density in an area planned for treatment in the Jemez Mountains, New Mexico, characterized by multiple vegetation types and a complex topography. Because different vegetation types have different spectral signatures, we derived models with multiple predictor variables for each vegetation type, rather than using a single model for the entire project area, and compared the model-derived values to values collected from on-the-ground transects. Among conifer-dominated areas (73% of the project area), the best models (as determined by the corrected Akaike Information Criterion (AICc)) included Landsat bands 2, 3, 4, and 7 along with simple ratios, normalized vegetation indices, and the difference vegetation index (R2 values for ponderosa: 0.47, piñon-juniper: 0.52, and spruce-fir: 0.66). On the other hand, in aspen-dominated areas (9% of the project area), the best model included individual bands 4 and 2, simple ratio, and normalized vegetation index (R2 value: 0.97). Most areas dominated by ponderosa, piñon-juniper, or spruce-fir had more than 100 trees per hectare. About 54% of the study area has medium to high density of trees (100–1000 trees/hectare), and a small fraction (4.5%) of the area has very high density (>1000 trees/hectare). Our results provide a better understanding of tree density for identifying areas in need of treatment and planning for more effective treatment. Our analysis also provides an integrated method of estimating tree density across complex landscapes that could be useful for further restoration planning.
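A sketch of the model-ranking step described above: ordinary least squares per candidate predictor set, scored by corrected AIC. The predictor matrices and the response are synthetic placeholders for the band/index stacks and transect densities:

```python
import numpy as np

def aicc(y, y_hat, k):
    """Corrected AIC for a Gaussian OLS model with k estimated parameters."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    aic = n * np.log(rss / n) + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)

def fit_ols(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])      # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return X1 @ beta, X1.shape[1]

rng = np.random.default_rng(4)
y = rng.random(60)                                  # field-measured tree density
candidates = {"b4+b2": rng.random((60, 2)),         # hypothetical predictor sets
              "b4+b2+SR+NDVI": rng.random((60, 4))}
scores = {name: aicc(y, *fit_ols(X, y)) for name, X in candidates.items()}
print(min(scores, key=scores.get))                  # lowest AICc wins
```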
Drug scheduling of cancer chemotherapy based on natural actor-critic approach.
Ahn, Inkyung; Park, Jooyoung
2011-11-01
Recently, reinforcement learning methods have drawn significant interest in the area of artificial intelligence and have been successfully applied to various decision-making problems. In this paper, we study the applicability of the NAC (natural actor-critic) approach, a state-of-the-art reinforcement learning method, to the drug scheduling of cancer chemotherapy for an ODE (ordinary differential equation)-based tumor growth model. ODE-based cancer dynamics modeling is an active research area, and many different mathematical models have been proposed. Among these, we use the model proposed by de Pillis and Radunskaya (2003), which considers the growth of tumor cells and their interaction with normal cells and immune cells. The NAC approach is applied to this ODE model with the goal of minimizing the tumor cell population and the drug amount while maintaining adequate population levels of normal cells and immune cells. In the framework of the NAC approach, the drug dose is regarded as the control input, and the reward signal is defined as a function of the control input and the cell populations of tumor cells, normal cells, and immune cells. According to the control policy found by the NAC approach, effective drug scheduling in cancer chemotherapy for the considered scenarios turns out to be close to a strategy of continuous drug injection from the beginning until an appropriate time. Also, simulation results showed that the NAC approach can yield better performance than conventional pulsed chemotherapy. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
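A hedged sketch of the simulation setting, assuming a de Pillis-Radunskaya-style ODE system (tumor, normal, and immune populations with a saturating drug kill term); the coefficients and the piecewise-constant dose policy below are illustrative, not the paper's calibrated model or learned NAC policy:

```python
import numpy as np
from scipy.integrate import solve_ivp

def dynamics(t, y, dose):
    T, N, I, u = y                       # tumor, normal, immune cells, drug level
    kill = 1.0 - np.exp(-u)              # saturating drug kill term
    dT = 0.5 * T * (1 - T) - 1.0 * I * T - 0.3 * kill * T
    dN = 1.0 * N * (1 - N) - 0.1 * kill * N
    dI = 0.33 + 0.01 * I * T / (0.3 + T) - 0.2 * I - 0.2 * kill * I
    du = dose(t) - 1.0 * u               # infusion minus first-order decay
    return [dT, dN, dI, du]

dose = lambda t: 1.0 if t < 10.0 else 0.0   # "inject from the start" style policy
sol = solve_ivp(dynamics, (0.0, 30.0), [0.8, 0.9, 0.15, 0.0], args=(dose,))
print(sol.y[0, -1])                          # final tumor burden under this schedule
```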
Comparison of human radiation exchange models in outdoor areas
NASA Astrophysics Data System (ADS)
Park, Sookuk; Tuller, Stanton E.
2011-10-01
Results from the radiation components of seven different human thermal exchange models/methods are compared. These include the Burt, COMFA, MENEX, OUT_SET* and RayMan models, the six-directional method, and the new Park and Tuller model employing projected area factors (fp) and effective radiation area factors (feff) determined from a sample of normal- and over-weight Canadian Caucasian adults. Input data include solar and longwave radiation measured during a clear summer day in southern Ontario. Variations between models came from differences in fp and feff and from different estimates of longwave radiation from the open sky. The ranges between models for absorbed solar, net longwave, and net all-wave radiation were 164, 31 and 187 W m-2, respectively. These differentials between models can be significant in total human thermal exchange. Therefore, proper fp and feff values should be used to make accurate estimates of radiation on the human body surface.
On the effects of subsurface parameters on evaporite dissolution (Switzerland)
NASA Astrophysics Data System (ADS)
Zidane, Ali; Zechner, Eric; Huggenberger, Peter; Younes, Anis
2014-05-01
Uncontrolled subsurface evaporite dissolution could lead to hazards such as land subsidence. Observed subsidences in a study area of Northwestern Switzerland were mainly due to subsurface dissolution (subrosion) of evaporites such as halite and gypsum. A set of 2D density driven flow simulations were evaluated along 1000 m long and 150 m deep 2D cross sections within the study area that is characterized by tectonic horst and graben structures. The simulations were conducted to study the effect of the different subsurface parameters that could affect the dissolution process. The heterogeneity of normal faults and its impact on the dissolution of evaporites is studied by considering several permeable faults that include non-permeable areas. The mixed finite element method (MFE) is used to solve the flow equation, coupled with the multipoint flux approximation (MPFA) and the discontinuous Galerkin method (DG) to solve the diffusion and the advection parts of the transport equation.
Distribution of green open space in Malang City based on multispectral data
NASA Astrophysics Data System (ADS)
Hasyim, A. W.; Hernawan, F. P.
2017-06-01
Green open space is a land use whose existence is quite important in urban areas, where the minimum area is set at 30% of the total area of the city. Malang, which has an area of 110.6 square kilometers, is one of the major cities in East Java Province and is prone to land conversion due to development needs. In support of the green space program, precise calculation of green space is needed, so remote sensing, which offers high accuracy, is now used to measure green space. This study aims to analyze the area of green open space in Malang using a Landsat 8 image from 2015. The method used was a vegetation index, the Normalized Difference Vegetation Index (NDVI). The study found that calculating green open space with the vegetation index method is preferable because it avoids misclassification of other land use types. The results show that the green open space in Malang City in 2015 reached 39% of the total city area.
Zangwill, Linda M.; Chan, Kwokleung; Bowd, Christopher; Hao, Jicuang; Lee, Te-Won; Weinreb, Robert N.; Sejnowski, Terrence J.; Goldbaum, Michael H.
2010-01-01
Purpose To determine whether topographical measurements of the parapapillary region analyzed by machine learning classifiers can detect early to moderate glaucoma better than similarly processed measurements obtained within the disc margin and to improve methods for optimization of machine learning classifier feature selection. Methods One eye of each of 95 patients with early to moderate glaucomatous visual field damage and of each of 135 normal subjects older than 40 years participating in the longitudinal Diagnostic Innovations in Glaucoma Study (DIGS) were included. Heidelberg Retina Tomograph (HRT; Heidelberg Engineering, Dossenheim, Germany) mean height contour was measured in 36 equal sectors, both along the disc margin and in the parapapillary region (at a mean contour line radius of 1.7 mm). Each sector was evaluated individually and in combination with other sectors. Gaussian support vector machine (SVM) learning classifiers were used to interpret HRT sector measurements along the disc margin and in the parapapillary region, to differentiate between eyes with normal and glaucomatous visual fields and to compare the results with global and regional HRT parameter measurements. The area under the receiver operating characteristic (ROC) curve was used to measure diagnostic performance of the HRT parameters and to evaluate the cross-validation strategies and forward selection and backward elimination optimization techniques that were used to generate the reduced feature sets. Results The area under the ROC curve for mean height contour of the 36 sectors along the disc margin was larger than that for the mean height contour in the parapapillary region (0.97 and 0.85, respectively). Of the 36 individual sectors along the disc margin, those in the inferior region between 240° and 300°, had the largest area under the ROC curve (0.85–0.91). With SVM Gaussian techniques, the regional parameters showed the best ability to discriminate between normal eyes and eyes with glaucomatous visual field damage, followed by the global parameters, mean height contour measures along the disc margin, and mean height contour measures in the parapapillary region. The area under the ROC curve was 0.98, 0.94, 0.93, and 0.85, respectively. Cross-validation and optimization techniques demonstrated that good discrimination (99% of peak area under the ROC curve) can be obtained with a reduced number of HRT parameters. Conclusions Mean height contour measurements along the disc margin discriminated between normal and glaucomatous eyes better than measurements obtained in the parapapillary region. PMID:15326133
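A sketch of the evaluation pipeline described here, using scikit-learn: a Gaussian-kernel SVM on sector-style features, scored by area under the ROC curve from cross-validated decision values. The feature matrix and labels are synthetic stand-ins for the HRT sector measurements:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 1.0, (135, 36)),   # 135 normal eyes, 36 sectors
               rng.normal(0.6, 1.0, (95, 36))])   # 95 glaucomatous eyes
y = np.r_[np.zeros(135), np.ones(95)]

svm = SVC(kernel="rbf", gamma="scale")            # Gaussian-kernel SVM
scores = cross_val_predict(svm, X, y, cv=10, method="decision_function")
print(f"AUC = {roc_auc_score(y, scores):.2f}")
```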
Gooseff, Michael N.; McKnight, Diane M.; Lyons, W. Berry; Blum, Alex E.
2002-01-01
In the McMurdo Dry Valleys, Antarctica, dilute glacial meltwater flows down well-established streambeds to closed basin lakes during the austral summer. During the 6-12 week flow season, a hyporheic zone develops in the saturated sediment adjacent to the streams. Longer Dry Valley streams have higher concentrations of major ions than shorter streams. The longitudinal increases in Si and K suggest that primary weathering contributes to the downstream solute increase. The hypothesis that weathering reactions in the hyporheic zone control stream chemistry was tested by modeling the downstream increase in solute concentration in von Guerard Stream in Taylor Valley. The average rates of solute supplied from these sources over the 5.2 km length of the stream were 6.1 × 10⁻⁹ mol Si L⁻¹ m⁻¹ and 3.7 × 10⁻⁹ mol K L⁻¹ m⁻¹, yielding annual dissolved Si loads of 0.02–1.30 mol Si m⁻² of watershed land surface. Silicate minerals in streambed sediment were analyzed to determine the representative surface area of minerals in the hyporheic zone subject to primary weathering. Two strategies were evaluated to compute sediment surface area normalized weathering rates. The first applies a best linear fit to synoptic data in order to calculate a constant downstream solute concentration gradient, dC/dx (constant weathering rate contribution, CRC method); the second uses a transient storage model to simulate dC/dx, representing both hyporheic exchange and chemical weathering (hydrologic exchange, HE method). Geometric surface area normalized dissolution rates of the silicate minerals in the stream ranged from 0.6 × 10⁻¹² to 4.5 × 10⁻¹² mol Si m⁻² s⁻¹ and from 0.4 × 10⁻¹² to 1.9 × 10⁻¹² mol K m⁻² s⁻¹. These values are an order of magnitude lower than geometric surface area normalized weathering rates determined in laboratory studies, and an order of magnitude greater than those determined in a warmer, wetter setting in temperate basins, despite the cold temperatures, lack of precipitation, and lack of organic material. These results suggest that the continuous saturation and rapid flushing of the sediment due to hyporheic exchange facilitate weathering in Dry Valley streams.
Lee, Sang-Yoon; Lee, Eun Kyoung; Park, Ki Ho; Kim, Dong Myung
2016-01-01
Purpose To report an asymmetry analysis of macular inner retinal layers using swept-source optical coherence tomography (OCT) and to evaluate the utility for glaucoma diagnosis. Design Observational, cross-sectional study. Participants Seventy normal healthy subjects and 62 glaucoma patients. Methods Three-dimensional scans were acquired from 70 normal subjects and 62 open angle glaucoma patients by swept-source OCT. The thickness of the retinal nerve fiber layer, ganglion cell-inner plexiform layer (GCIPL), ganglion cell complex, and total retina were calculated within a 6.2×6.2 mm macular area divided into a 31×31 grid of 200×200 μm superpixels. For each of the corresponding superpixels, the thickness differences between the subject eyes and contra-lateral eyes and between the upper and lower macula halves of the subject eyes were determined. The negative differences were displayed on a gray-scale asymmetry map. Black superpixels were defined as thickness decreases over the cut-off values. Results The negative inter-ocular and inter-hemisphere differences in GCIPL thickness (mean ± standard deviation) were -2.78 ± 0.97 μm and -3.43 ± 0.71 μm in the normal group and -4.26 ± 2.23 μm and -4.88 ± 1.46 μm in the glaucoma group. The overall extent of the four layers’ thickness decrease was larger in the glaucoma group than in the normal group (all Ps<0.05). The numbers of black superpixels on all of the asymmetry maps were larger in the glaucoma group than in the normal group (all Ps<0.05). The area under receiver operating characteristic curves of average negative thickness differences in macular inner layers for glaucoma diagnosis ranged from 0.748 to 0.894. Conclusions The asymmetry analysis of macular inner retinal layers showed significant differences between the normal and glaucoma groups. The diagnostic performance of the asymmetry analysis was comparable to that of previous methods. These findings suggest that the asymmetry analysis can be a potential ancillary diagnostic tool. PMID:27764166
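A sketch of the asymmetry-map construction on a 31×31 superpixel grid: negative inter-hemisphere thickness differences, with "black" superpixels flagged where the decrease exceeds a cut-off. The cut-off value and the row orientation (rows running superior to inferior) are assumptions of this sketch, not the study's derived values:

```python
import numpy as np

def hemisphere_asymmetry(thick, cutoff=-6.0):
    """thick: (31, 31) grid of GCIPL thickness (um), rows superior -> inferior."""
    upper = thick[:15, :]
    lower = thick[16:, :][::-1, :]                  # mirror about the midline row
    diff = -np.abs(upper - lower)                   # negative differences only
    black = diff < cutoff                           # flagged "black" superpixels
    return diff, black

grid = np.random.default_rng(6).normal(80.0, 4.0, (31, 31))
diff, black = hemisphere_asymmetry(grid)
print(black.sum(), "black superpixels")
```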
ERIC Educational Resources Information Center
Tsutakawa, Robert K.
This paper presents a method for estimating certain characteristics of test items which are designed to measure ability, or knowledge, in a particular area. Under the assumption that ability parameters are sampled from a normal distribution, the EM algorithm is used to derive maximum likelihood estimates of item parameters of the two-parameter…
Personal cooling in hot workings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuck, M.A.
1999-07-01
The number of mines experiencing climatic difficulties worldwide is increasing. In a large number of cases these climatic difficulties are confined to working areas only, or to specific locations within working areas. Thus the problem in these mines can be described as highly localized, due in large part not to high rock temperatures but to machine heat loads and low airflow rates. Under such situations conventional means of controlling the climate can be inapplicable and/or uneconomic. One possible means of achieving the required level of climatic control, to ensure worker health and safety whilst achieving economic gains, is to adopt a system of active man cooling. This is the reverse of normal control techniques, where the cooling power of the ventilating air is enhanced in some way. Current methods of active man cooling include ice jackets and various umbilical-cord-type systems. These have numerous drawbacks, such as limited useful exposure times and limitations on worker mobility. The paper suggests an alternative to currently available methods of active man cooling and reviews the design criteria for such a garment. The range of application of such a garment is discussed, under both normal and emergency situations.
A Method to Overcome Space Charge at Injection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derbenev, Ya.
2005-06-08
The transverse space charge forces in a high current, low energy beam can be reduced by means of a large increase of the beam's transverse sizes while maintaining the beam area in the 4D phase space. This can be achieved by transforming the beam area in phase space of each of two normal 2D transverse (either plane or circular) modes from a spot shape into a narrow ring of large amplitude, but homogeneous in phase. Such a transformation results from the beam evolution in the island of a dipole resonance when the amplitude width of the island shrinks adiabatically. After stacking (by using stripping foils or cooling) the beam in such a state and accelerating to energies sufficiently high that the space charge becomes insignificant, the beam can then be returned to a normal spot shape by applying the reverse transformation. An arrangement that can provide such beam gymnastics along a transport line after a linac and before a booster and/or in a ring with circulating beam is described, and numerical estimates are presented. Other potential applications of the method are briefly discussed.
Temporal radiographic texture analysis in the detection of periprosthetic osteolysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilkie, Joel R.; Giger, Maryellen L.; Chinander, Michael R.
2008-01-15
Periprosthetic osteolysis is one of the most serious long-term problems in total hip arthroplasty. It has been primarily attributed to the body's inflammatory response to submicron polyethylene particles worn from the hip implant, and it leads to bone loss and structural deterioration in the surrounding bone. It was previously demonstrated that radiographic texture analysis (RTA) has the ability to distinguish between osteolysis and normal cases at the time of clinical detection of the disease; however, that analysis did not take into account changes in texture over time. The goal of this preliminary analysis is to assess the ability of temporal radiographic texture analysis (tRTA) to distinguish between patients who develop osteolysis and normal cases. Two tRTA methods were used in the study: the RTA feature change from baseline at various follow-up intervals and the slope of the best-fit line to the RTA data series. These tRTA methods included Fourier-based and fractal-based features calculated from digitized images of 202 total hip replacement cases, including 70 that developed osteolysis. Results show that separation between the osteolysis and normal groups increased over time for the feature difference method, as the disease progressed, with area under the curve (AUC) values from receiver operating characteristic analysis of 0.65 to 0.72 at 15 years postsurgery. Separation for the slope method was also evident, with AUC values ranging from 0.65 to 0.76 for the task of distinguishing between osteolysis and normal cases. The results suggest that tRTA methods have the ability to measure changes in trabecular structure, and may be useful in the early detection of periprosthetic osteolysis.
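As a rough illustration of the two tRTA summary measures described above (feature change from baseline, and slope of a best-fit line through the feature series), the following Python sketch computes both from synthetic data and scores them with ROC analysis; this is not the authors' code, and all numbers are invented.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def feature_change(values, baseline_idx=0, followup_idx=-1):
    """tRTA measure (1): RTA feature change relative to the baseline exam."""
    return values[followup_idx] - values[baseline_idx]

def feature_slope(times_years, values):
    """tRTA measure (2): slope of the least-squares line through the series."""
    slope, _intercept = np.polyfit(times_years, values, deg=1)
    return slope

# Synthetic example: 10 osteolysis and 10 normal cases, 5 exams each.
rng = np.random.default_rng(0)
times = np.arange(5, dtype=float)  # years post-surgery
osteo = [1.0 + 0.15 * times + rng.normal(0, 0.1, 5) for _ in range(10)]
normal = [1.0 + rng.normal(0, 0.1, 5) for _ in range(10)]

labels = [1] * 10 + [0] * 10
slopes = [feature_slope(times, v) for v in osteo + normal]
changes = [feature_change(v) for v in osteo + normal]
print("AUC (slope): ", roc_auc_score(labels, slopes))
print("AUC (change):", roc_auc_score(labels, changes))
```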
Agger, Sean A.; Marney, Luke C.; Hoofnagle, Andrew N.
2011-01-01
BACKGROUND If liquid-chromatography–multiple-reaction–monitoring mass spectrometry (LC-MRM/MS) could be used in the large-scale preclinical verification of putative biomarkers, it would obviate the need for the development of expensive immunoassays. In addition, the translation of novel biomarkers to clinical use would be accelerated if the assays used in preclinical studies were the same as those used in the clinical laboratory. To validate this approach, we developed a multiplexed assay for the quantification of 2 clinically well-known biomarkers in human plasma, apolipoprotein A-I and apolipoprotein B (apoA-I and apoB). METHODS We used PeptideAtlas to identify candidate peptides. Human samples were denatured with urea or trifluoroethanol, reduced and alkylated, and digested with trypsin. We compared reversed-phase chromatographic separation of peptides with normal flow and microflow, and we normalized endogenous peptide peak areas to internal standard peptides. We evaluated different methods of calibration and compared the final method with a nephelometric immunoassay. RESULTS We developed a final method using trifluoroethanol denaturation, 21-h digestion, normal flow chromatography-electrospray ionization, and calibration with a single normal human plasma sample. For samples injected in duplicate, the method had intraassay CVs <6% and interassay CVs <12% for both proteins, and compared well with immunoassay (n = 47; Deming regression, LC-MRM/MS = 1.17 × immunoassay – 36.6; Sx|y = 10.3 for apoA-I and LC-MRM/MS = 1.21 × immunoassay + 7.0; Sx|y = 7.9 for apoB). CONCLUSIONS Multiplexed quantification of proteins in human plasma/serum by LC-MRM/MS is possible and compares well with clinically useful immunoassays. The potential application of single-point calibration to large clinical studies could simplify efforts to reduce day-to-day digestion variability. PMID:20923952
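The internal-standard normalization and single-point calibration described in this abstract reduce to simple ratio arithmetic. The sketch below illustrates the calculation with hypothetical peak areas and an assumed calibrator value; it is not the authors' pipeline.

```python
def peak_area_ratio(endogenous_area, internal_standard_area):
    """Normalize an endogenous peptide peak area to its IS peptide."""
    return endogenous_area / internal_standard_area

def single_point_quantify(sample_ratio, calibrator_ratio, calibrator_conc):
    """Concentration from a single normal-plasma calibrator of known value."""
    return sample_ratio / calibrator_ratio * calibrator_conc

# Hypothetical numbers: an apoA-I calibrator assigned 140 mg/dL.
cal_ratio = peak_area_ratio(2.0e6, 1.0e6)
unk_ratio = peak_area_ratio(2.6e6, 1.1e6)
print(f"apoA-I ~ {single_point_quantify(unk_ratio, cal_ratio, 140.0):.1f} mg/dL")
```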
Comparison and evaluation of fusion methods used for GF-2 satellite image in coastal mangrove area
NASA Astrophysics Data System (ADS)
Ling, Chengxing; Ju, Hongbo; Liu, Hua; Zhang, Huaiqing; Sun, Hua
2018-04-01
GF-2 is the remote sensing satellite with the highest spatial resolution in the development history of China's satellite program. In this study, three traditional fusion methods, Brovey, Gram-Schmidt and Color Normalized (CN), were compared with a newer fusion method, NNDiffuse, using qualitative assessment and quantitative fusion quality indices, including information entropy, variance, mean gradient, deviation index and spectral correlation coefficient. The analysis results show that the NNDiffuse method performed best in both the qualitative and quantitative analyses. It is more effective for follow-up remote sensing information extraction and for forest and wetland resource monitoring applications.
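Several of the quantitative fusion-quality indices listed above have short NumPy implementations. The sketch below shows one common definition of information entropy, mean gradient, and the spectral correlation coefficient; exact formulations vary between papers, so treat these as illustrative rather than the study's definitions.

```python
import numpy as np

def information_entropy(img, bins=256):
    """Shannon entropy of the image histogram (bits)."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def mean_gradient(img):
    """Average local gradient magnitude; one common definition."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def spectral_correlation(fused_band, original_band):
    """Correlation between a fused band and the original multispectral band."""
    return np.corrcoef(fused_band.ravel(), original_band.ravel())[0, 1]

img = np.random.rand(64, 64)  # placeholder for a fused band
print(information_entropy(img), img.var(), mean_gradient(img))
```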
The uncertainty of nitrous oxide emissions from grazed grasslands: A New Zealand case study
NASA Astrophysics Data System (ADS)
Kelliher, Francis M.; Henderson, Harold V.; Cox, Neil R.
2017-01-01
Agricultural soils emit nitrous oxide (N2O), a greenhouse gas and the primary source of nitrogen oxides which deplete stratospheric ozone. Agriculture has been estimated to be the largest anthropogenic N2O source. In New Zealand (NZ), pastoral agriculture uses half the land area. To estimate the annual N2O emissions from NZ's agricultural soils, the nitrogen (N) inputs have been determined and multiplied by an emission factor (EF), the mass fraction of N inputs emitted as N2O-N. To estimate the associated uncertainty, we developed an analytical method. For comparison, another estimate was determined by Monte Carlo numerical simulation. For both methods, expert judgement was used to estimate the N input uncertainty. The EF uncertainty was estimated by meta-analysis of the results from 185 NZ field trials. For the analytical method, assuming a normal distribution and independence of the terms used to calculate the emissions (correlation = 0), the estimated 95% confidence limit was ±57%. When there was a normal distribution and an estimated correlation of 0.4 between N input and EF, the latter inferred from experimental data involving six NZ soils, the analytical method estimated a 95% confidence limit of ±61%. The EF data from 185 NZ field trials had a logarithmic normal distribution. For the Monte Carlo method, assuming a logarithmic normal distribution for EF, a normal distribution for the other terms and independence of all terms, the estimated 95% confidence limits were -32% and +88% or ±60% on average. When there were the same distribution assumptions and a correlation of 0.4 between N input and EF, the Monte Carlo method estimated 95% confidence limits were -34% and +94% or ±64% on average. For the analytical and Monte Carlo methods, EF uncertainty accounted for 95% and 83% of the emissions uncertainty when the correlation between N input and EF was 0 and 0.4, respectively. As the first uncertainty analysis of an agricultural soils N2O emissions inventory using "country-specific" field trials to estimate EF uncertainty, this can be a potentially informative case study for the international scientific community.
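A minimal Monte Carlo version of the uncorrelated case of this calculation (emissions = N input x EF, normal N input, log-normal EF) can be sketched as follows. All distribution parameters here are invented, not the paper's, and the correlated (0.4) case would additionally require jointly sampled inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Relative N input: normal with an assumed 10% standard deviation.
n_input = rng.normal(loc=1.0, scale=0.10, size=n)
# Relative EF: log-normal, median 1.0, assumed ~30% geometric spread.
ef = rng.lognormal(mean=0.0, sigma=0.3, size=n)

emissions = n_input * ef
lo, hi = np.percentile(emissions, [2.5, 97.5])
mean = emissions.mean()
print(f"95% limits: {100 * (lo / mean - 1):+.0f}% / {100 * (hi / mean - 1):+.0f}%")
```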
NASA Astrophysics Data System (ADS)
Rahimi, D.; Movahedi, S.
2009-04-01
In recent decades, the water crisis has been one of the most critical phenomena in environmental planning and the management of human societies, affecting development at international, national and regional levels. In this research, drought is considered the main parameter in water scarcity. Drought can be assessed by different methods, such as statistical models and meteorological and hydrological methods. Here, the Normal Precipitation index was used for the meteorological analysis of drought severity in Sistan and Baluchistan province, which has experienced high drought severity in recent years. According to the results obtained, the annual precipitation of the study area was between 36 and 52 percent more than the mean precipitation of the province. 10 to 23 percent of precipitation amounts fell at the drought threshold, 3 to 13 percent corresponded to weak drought, 6.7 to 23 percent to moderate drought, 6 to 20 percent to severe drought and, ultimately, 6.7 to 23 percent of precipitation amounts were considered very severe drought. Keywords: Drought, Normal index, precipitation, Sistan and Baluchistan
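The percent-of-normal precipitation index underlying this kind of analysis is simple to compute. The sketch below applies illustrative severity cutoffs; the paper's exact class limits are not given here, so these thresholds are assumptions.

```python
def percent_of_normal(precip_mm, long_term_mean_mm):
    """Precipitation as a percentage of the long-term mean."""
    return 100.0 * precip_mm / long_term_mean_mm

def drought_class(pn):
    """Illustrative class limits only; the paper's cutoffs are not reproduced."""
    if pn >= 90:
        return "normal or wet"
    if pn >= 75:
        return "weak drought"
    if pn >= 55:
        return "moderate drought"
    if pn >= 40:
        return "severe drought"
    return "very severe drought"

print(drought_class(percent_of_normal(120.0, 210.0)))  # -> moderate drought
```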
Characterizing Normal Groundwater Chemistry in Hawaii
NASA Astrophysics Data System (ADS)
Tachera, D.; Lautze, N. C.; Thomas, D. M.; Whittier, R. B.; Frazer, L. N.
2017-12-01
Hawaii is dependent on groundwater resources, yet how water moves through the subsurface is not well understood in many locations across the state. As marine air moves across the islands, water evaporates from the ocean, along with trace amounts of sea-salt ions, and interacts with anthropogenic and volcanic aerosols (e.g. sulfuric acid, ammonium sulfate, HCl), creating a slightly more acidic rain. When this rain falls, it has a chemical signature distinctive of past processes. As this precipitation infiltrates through soil it may pick up another distinctive chemical signature associated with land use and degree of soil development, and as it flows through the underlying geology, its chemistry is influenced by the host rock. We are currently conducting an investigation of groundwater chemistry in selected aquifer areas of Hawaii, having diverse land use, land cover, and soil development conditions, in an effort to investigate and document what may be considered a "normal" water chemistry for an area. Through this effort, we believe we can better assess anomalies due to contamination events, hydrothermal alteration, and other processes, and we can use this information to better understand groundwater flow direction. The project has compiled a large amount of precipitation, soil, and groundwater chemistry data in the three focus areas distributed across the State of Hawaii. Statistical analyses of these data sets will be performed in an effort to determine what is "normal" and what is anomalous chemistry for a given area. Where possible, results will be used to trace groundwater flow paths. Methods and preliminary results will be presented.
A new approach for computing a flood vulnerability index using cluster analysis
NASA Astrophysics Data System (ADS)
Fernandez, Paulo; Mourato, Sandra; Moreira, Madalena; Pereira, Luísa
2016-08-01
A Flood Vulnerability Index (FloodVI) was developed using Principal Component Analysis (PCA) and a new aggregation method based on Cluster Analysis (CA). PCA simplifies a large number of variables into a few uncorrelated factors representing the social, economic, physical and environmental dimensions of vulnerability. CA groups areas that have the same characteristics in terms of vulnerability into vulnerability classes. The grouping of the areas determines their classification, contrary to other aggregation methods in which the areas' classification determines their grouping. While other aggregation methods distribute the areas into classes in an artificial manner, imposing a certain probability for an area to belong to a certain class under the assumption that the aggregation measure used is normally distributed, CA does not constrain the distribution of the areas across the classes. FloodVI was designed at the neighbourhood level and was applied to the Portuguese municipality of Vila Nova de Gaia, where several flood events have taken place in the recent past. The FloodVI sensitivity was assessed using three different aggregation methods: the sum of component scores, the first component score and the weighted sum of component scores. The results highlight the sensitivity of the FloodVI to different aggregation methods. Both the sum of component scores and the weighted sum of component scores showed similar results. The first component score aggregation method classifies almost all areas as having medium vulnerability, whereas the results obtained using CA show a distinct differentiation of vulnerability in which hot spots can be clearly identified. The information provided by records of previous flood events corroborates the results obtained with CA, because the inundated areas with greater damages are those identified as high and very high vulnerability areas by CA. This supports the fact that CA provides a reliable FloodVI.
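A minimal sketch of this construction (PCA to obtain uncorrelated factors, then cluster analysis to form vulnerability classes) might look like the following in Python with scikit-learn; the toy data, the number of components, and the choice of five classes are all assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X = np.random.rand(200, 12)             # 200 areas x 12 indicators (toy data)
Z = StandardScaler().fit_transform(X)   # put indicators on a common scale

scores = PCA(n_components=4).fit_transform(Z)  # a few uncorrelated factors
classes = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(classes))             # number of areas per vulnerability class
```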
Laser targeted photo-occlusion of rat choroidal neovascularization without collateral damage.
Nishiwaki, Hirokazu; Zeimer, Ran; Goldberg, Morton F; D'Anna, Salvatore A; Vinores, Stanley A; Grebe, Rhonda
2002-02-01
Laser targeted photo-occlusion (LTO) is a novel method being developed to treat choroidal neovascular membranes (CNV) in age-related and other macular degenerations. A photosensitive agent, encapsulated in heat-sensitive liposomes, is administered intravenously. A low power laser warms the targeted tissue and releases a bolus of photosensitizer. The photosensitizer is activated after it clears from the normal choriocapillaris but not from the CNV. Forty-five experimental CNV were induced in seven rats. Five weeks after LTO, complete occlusion was observed by laser targeted angiography (LTA) in 76% of treated CNV, and partial occlusion was found in the remaining 24%. The tissues outside the CNV but within the area treated by LTO showed no flow alteration and no dye leakage. All untreated CNV were patent on LTA at 5 weeks. Light microscopy and electron microscopy confirmed the results in treated and control lesions. Moreover, treated areas next to lesions showed normal photoreceptors, retinal pigment epithelium (RPE), Bruch's membrane and choriocapillaris. These results indicate that LTO may improve current photodynamic therapy by alleviating the need for repeated treatments and by avoiding the long-term risks associated with damage to the RPE and occlusion of normal choriocapillaries.
NASA Astrophysics Data System (ADS)
Maeda, Yoshitaka; Urata, Shinya; Nakai, Hideo; Takeuchi, Yuuya; Yun, Kyyoul; Yanase, Shunji; Okazaki, Yasuo
2017-05-01
In designing motors, one must grasp the magnetic properties of electrical steel sheets under the actual conditions found in motors. Especially important is grasping the stress dependence of magnetic power loss. This paper describes a newly developed apparatus to measure the two-dimensional (2-D) magnetic properties (properties under arbitrary alternating and rotating flux conditions) of electrical steel sheets under compressive stress normal to the sheet surface. The apparatus has a 2-D magnetic excitation circuit to generate magnetic fields in arbitrary directions in the evaluation area. It also has a pressing unit to apply compressive stress normal to the sheet surface. During measurement, it is important to apply uniform stress throughout the evaluation area. Therefore, we have developed a new flux density sensor using the needle probe method. It is composed of thin copper foils sputtered onto the electrical steel sheet. With this sensor, the stress can be applied to the surface of the specimen without being influenced by the sensor. This paper describes the details of the newly developed apparatus with this sensor, and measurement results of iron loss obtained using it are presented.
Visibility graph analysis of heart rate time series and bio-marker of congestive heart failure
NASA Astrophysics Data System (ADS)
Bhaduri, Anirban; Bhaduri, Susmita; Ghosh, Dipak
2017-09-01
The study of RR interval time series in Congestive Heart Failure has been an area of research employing different methods, including non-linear methods. In this article the cardiac dynamics of the heart beat are explored in the light of complex network analysis, viz. the visibility graph method. Heart beat (RR interval) time series data taken from the Physionet database [46, 47] belonging to two groups of subjects, diseased (congestive heart failure) (29 in number) and normal (54 in number), are analyzed with the technique. The overall results show that a quantitative parameter can significantly differentiate between the diseased and normal subjects as well as between different stages of the disease. Further, when the data are split into periods of around 1 hour each and analyzed separately, the same consistent differences appear. This quantitative parameter obtained using visibility graph analysis can thereby be used as a potential bio-marker, as well as a subsequent alarm-generation mechanism, for predicting the onset of Congestive Heart Failure.
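For orientation, the natural visibility graph (Lacasa et al.) maps a time series to a network by connecting every pair of points that can "see" each other over the intervening samples. A brute-force sketch on toy RR intervals, with a simple degree statistic standing in for the paper's quantitative parameter:

```python
import numpy as np

def visibility_graph(y):
    """Edge set of the natural visibility graph of series y (brute force)."""
    n = len(y)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = True
            for c in range(a + 1, b):
                # c blocks visibility if it reaches the line joining a and b
                if y[c] >= y[b] + (y[a] - y[b]) * (b - c) / (b - a):
                    visible = False
                    break
            if visible:
                edges.add((a, b))
    return edges

rr = np.random.normal(0.8, 0.05, 200)   # toy RR intervals (seconds)
edges = visibility_graph(rr)
degree = np.zeros(len(rr), dtype=int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
print("mean node degree:", degree.mean())
```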
A Method for Determining Cloud-Droplet Impingement on Swept Wings
NASA Technical Reports Server (NTRS)
Dorsch, Robert G.; Brun, Rinaldo J.
1953-01-01
The general effect of wing sweep on cloud-droplet trajectories about swept wings of high aspect ratio moving at subsonic speeds is discussed. A method of computing droplet trajectories about yawed cylinders and swept wings is presented, and illustrative droplet trajectories are computed. A method of extending two-dimensional calculations of droplet impingement on nonswept wings to swept wings is presented. It is shown that the extent of impingement of cloud droplets on an airfoil surface, the total rate of collection of water, and the local rate of impingement per unit area of airfoil surface can be found for a swept wing from two-dimensional data for a nonswept wing. The impingement on a swept wing is obtained from impingement data for a nonswept airfoil section which is the same as the section in the normal plane of the swept wing by calculating all dimensionless parameters with respect to flow conditions in the normal plane of the swept wing.
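A hedged numerical sketch of the normal-plane idea: the dimensionless droplet inertia parameter (in the Langmuir-Blodgett form K = rho_w d^2 V / (18 mu c)) is evaluated with the velocity component normal to the leading edge, after which two-dimensional impingement data for the equivalent nonswept section can be applied. The parameter form and all numbers are illustrative, not taken from the report.

```python
import math

def inertia_parameter(rho_w, d, v, mu_air, chord):
    """Droplet inertia parameter K = rho_w d^2 V / (18 mu c), SI units."""
    return rho_w * d ** 2 * v / (18.0 * mu_air * chord)

v_freestream = 100.0                        # m/s (assumed)
sweep = math.radians(35.0)                  # sweep angle (assumed)
v_normal = v_freestream * math.cos(sweep)   # component normal to leading edge

k_normal = inertia_parameter(rho_w=1000.0, d=20e-6, v=v_normal,
                             mu_air=1.8e-5, chord=2.0)
print(f"K evaluated in the normal plane: {k_normal:.3f}")
```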
Lee, Sang Hun; Yoo, Myung Hoon; Park, Jun Woo; Kang, Byung Chul; Yang, Chan Joo; Kang, Woo Suk; Ahn, Joong Ho; Chung, Jong Woo; Park, Hong Ju
2018-06-01
To evaluate whether video head impulse test (vHIT) gains are dependent on the measuring device and method of analysis. Prospective study. vHIT was performed in 25 healthy subjects using two devices simultaneously. vHIT gains were compared between these instruments and using five different methods of comparing position and velocity gains during head movement intervals. The two devices produced different vHIT gain results with the same method of analysis. There were also significant differences in the vHIT gains measured using different analytical methods. The gain analytic method that compares the areas under the velocity curve (AUC) of the head and eye movements during head movements showed lower vHIT gains than a method that compared the peak velocities of the head and eye movements. The former method produced the vHIT gain with the smallest standard deviation among the five procedures tested in this study. vHIT gains differ in normal subjects depending on the device and method of analysis used, suggesting that it is advisable for each device to have its own normal values. Gain calculations that compare the AUC of the head and eye movements during the head movements show the smallest variance.
Li, Xiongwei; Wang, Zhe; Fu, Yangting; Li, Zheng; Liu, Jianmin; Ni, Weidou
2014-01-01
Measurement of coal carbon content using laser-induced breakdown spectroscopy (LIBS) is limited by its low precision and accuracy. A modified spectrum standardization method was proposed to achieve both reproducible and accurate results for the quantitative analysis of carbon content in coal using LIBS. The proposed method used the molecular emissions of diatomic carbon (C2) and cyanide (CN) to compensate for the diminution of atomic carbon emissions in high volatile content coal samples caused by matrix effect. The compensated carbon line intensities were further converted into an assumed standard state with standard plasma temperature, electron number density, and total number density of carbon, under which the carbon line intensity is proportional to its concentration in the coal samples. To obtain better compensation for fluctuations of total carbon number density, the segmental spectral area was used and an iterative algorithm was applied that is different from our previous spectrum standardization calculations. The modified spectrum standardization model was applied to the measurement of carbon content in 24 bituminous coal samples. The results demonstrate that the proposed method has superior performance over the generally applied normalization methods. The average relative standard deviation was 3.21%, the coefficient of determination was 0.90, the root mean square error of prediction was 2.24%, and the average maximum relative error for the modified model was 12.18%, showing an overall improvement over the corresponding values for the normalization with segmental spectrum area, 6.00%, 0.75, 3.77%, and 15.40%, respectively.
Background concentrations of metals in soils from selected regions in the State of Washington
Ames, K.C.; Prych, E.A.
1995-01-01
Soil samples from 60 sites in the State of Washington were collected and analyzed to determine the magnitude and variability of background concentrations of metals in soils of the State. Samples were collected in areas that were relatively undisturbed by human activity from the most predominant soils in 12 different regions that are representative of large areas of Washington State. Concentrations of metals were determined by five different laboratory methods. Concentrations of mercury and nickel determined by both the total and total-recoverable methods displayed the greatest variability, followed by chromium and copper determined by the total-recoverable method. Concentrations of other metals, such as aluminum and barium determined by the total method, varied less. Most metals concentrations were found to be more nearly log-normally than normally distributed. Total metals concentrations were not significantly different among the different regions. However, total-recoverable metals concentrations were not as similar among different regions. Cluster analysis revealed that sampling sites in three regions encompassing the Puget Sound could be regrouped to form two new regions and sites in three regions in south-central and southeastern Washington State could also be regrouped into two new regions. Concentrations for 7 of 11 total-recoverable metals correlated with total metals concentrations. Concentrations of six total metals also correlated positively with organic carbon. Total-recoverable metals concentrations did not correlate with either organic carbon or particle size. Concentrations of metals determined by the leaching methods did not correlate with total or total-recoverable metals concentrations, nor did they correlate with organic carbon or particle size.
Quantitative analyses of variability in normal vaginal shape and dimension on MR images
Luo, Jiajia; Betschart, Cornelia; Ashton-Miller, James A.; DeLancey, John O. L.
2016-01-01
Introduction and hypothesis We present a technique for quantifying inter-individual variability in normal vaginal shape, axis, and dimension, and report findings in healthy women. Methods Eighty women (age: 28~70 years) with normal pelvic organ support underwent supine, multi-planar proton-density MRI. Vaginal width was assessed at five evenly-spaced locations, and vaginal axis, length, and surface area were quantified via ImageJ and MATLAB. Results The mid-sagittal plane angles, relative to the horizontal, of three vaginal axes were 90± 11, 72± 21, and 41± 22° (caudal to cranial, p < 0.001). The mean (± SD) vaginal widths were 17± 5, 24± 4, 30± 7, 41± 9, and 45± 12 mm at the five locations (caudal to cranial, p < 0.001). Mid-sagittal lengths for anterior and posterior vaginal walls were 63± 9 and 98 ± 18 mm respectively. The vaginal surface area was 72 ± 21 cm2 (range: 34 ~ 164 cm2). The coefficient of determination between any demographic variable and any vaginal dimension did not exceed 0.16. Conclusions Large variations in normal vaginal shape, axis, and dimensions were not explained by body size or other demographic variables. This variation has implications for reconstructive surgery, intravaginal and surgical product design, and vaginal drug delivery. PMID:26811115
Dynamic analysis of elastic rubber-tired car wheel braking under variable normal load
NASA Astrophysics Data System (ADS)
Fedotov, A. I.; Zedgenizov, V. G.; Ovchinnikova, N. I.
2017-10-01
The purpose of the paper is to analyze the dynamics of wheel braking under normal load variations. The paper uses a mathematical simulation method in which the calculation model of the object as a mechanical system is associated with a dynamically equivalent schematic structure of automatic control. Transfer function tools were used to analyze the structural and technical characteristics of the object as well as force disturbances. It was shown that the analysis of the dynamic characteristics of a wheel subjected to external force disturbances has to take into account amplitude and phase-frequency characteristics. Normal load variations affect car wheel braking subjected to disturbances: the closer the slip is to the critical point, the greater the impact. In the super-critical area, load variations cause rapid wheel locking.
Sattarzadeh, Roya; Tavoosi, Anahita; Saadat, Mohammad; Derakhshan, Leila; Khosravi, Bakhtyar; Geraiely, Babak
2017-11-01
Accurate measurement of Mitral Valve Area (MVA) is essential to determining Mitral Stenosis (MS) severity and to achieving the best management strategies for this disease. The goal of the present study was to compare MVA measurement by the Continuity Equation (CE) and Pressure Half-Time (PHT) methods with that of 2D-Planimetry (PL) in patients with moderate to severe mitral stenosis (MS). This comparison was also performed in subgroups of patients with significant Aortic Insufficiency (AI), Mitral Regurgitation (MR) and Atrial Fibrillation (AF). We studied 70 patients with moderate to severe MS who were referred to an echocardiography clinic. MVA was determined by the PL, CE and PHT methods. The agreement and correlations between MVAs obtained from the various methods were determined by the kappa index, Bland-Altman analysis, and linear regression analysis. The mean MVA calculated by CE was 0.81 cm² (±0.27) and showed good correlation with that calculated by PL (0.95 cm², ±0.26) in the whole population (r=0.771, P<0.001), in the MR subgroup (r=0.763, P<0.001), and in the normal sinus rhythm and normal valve subgroups (r=0.858, P<0.001 and r=0.867, P<0.001, respectively). However, the CE method did not show any correlation in the AF and AI subgroups. MVA measured by PHT had a good correlation with that measured by PL in the whole population (r=0.770, P<0.001) and also in the NSR (r=0.814, P<0.001) and normal valve (r=0.781, P<0.001) subgroups. The subgroups with significant AI and significant MR showed moderate correlation (r=0.625, P=0.017 and r=0.595, P=0.041, respectively). Bland-Altman analysis showed that CE estimates MVA smaller than PL in the whole population and all subgroups, and that PHT estimates MVA larger than PL in the whole population and all subgroups. The mean biases for CE and PHT were 0.14 cm² and -0.06 cm², respectively. In patients with moderate to severe mitral stenosis, in the absence of concomitant AF, AI or MR, the accuracy of the CE and PHT methods in measuring MVA is nearly equal. However, in the presence of significant AI or MR, the PHT method is clearly superior to CE, and in the presence of AF neither has sufficient accuracy.
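The two echocardiographic methods compared above rest on standard formulas: the continuity equation (MVA = LVOT cross-sectional area x VTI_LVOT / VTI_MV) and the empirical pressure half-time relation (MVA = 220 / PHT). A small sketch with hypothetical measurements:

```python
import math

def mva_continuity(lvot_diameter_cm, vti_lvot_cm, vti_mv_cm):
    """MVA (cm^2) = LVOT cross-sectional area x VTI_LVOT / VTI_MV."""
    csa_lvot = math.pi * (lvot_diameter_cm / 2.0) ** 2
    return csa_lvot * vti_lvot_cm / vti_mv_cm

def mva_pht(pht_ms):
    """MVA (cm^2) = 220 / pressure half-time (ms)."""
    return 220.0 / pht_ms

# Hypothetical measurements for a moderate-to-severe MS patient.
print(f"CE:  {mva_continuity(2.0, 18.0, 60.0):.2f} cm^2")
print(f"PHT: {mva_pht(230.0):.2f} cm^2")
```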
New method for analysis of facial growth in a pediatric reconstructed mandible.
Kau, Chung How; Kamel, Sherif Galal; Wilson, Jim; Wong, Mark E
2011-04-01
The aim of this article was to present a new method of analysis for the assessment of facial growth and morphology after surgical resection of the mandible in a growing patient. This was a 2-year longitudinal study of facial growth in a child who had undergone segmental resection of the mandible with immediate reconstruction as a treatment for juvenile aggressive fibromatosis. Three-dimensional digital stereo-photogrammetric cameras were used for image acquisition at several follow-up intervals: immediate, 6 months, and 2 years postresection. After processing and superimposition, shell-to-shell deviation maps were used for the analysis of the facial growth pattern and its deviation from normal growth. The changes were seen as mean surface changes and color maps. An average constructed female face from a previous study was used as a reference for a normal growth pattern. The patient showed significant growth during this period. Positive changes took place around the nose, lateral brow area, and lower lip and chin, whereas negative changes were evident at the lower lips and cheeks area. An increase in the vertical dimension of the face at the chin region was also seen prominently. Three-dimensional digital stereo-photogrammetry can be used as an objective, noninvasive method for quantifying and monitoring facial growth and its abnormalities. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
A Multi-Index Integrated Change detection method for updating the National Land Cover Database
Jin, Suming; Yang, Limin; Xian, George Z.; Danielson, Patrick; Homer, Collin G.
2010-01-01
Land cover change is typically captured by comparing two or more dates of imagery and associating spectral change with true thematic change. A new change detection method, Multi-Index Integrated Change (MIIC), has been developed to capture a full range of land cover disturbance patterns for updating the National Land Cover Database (NLCD). Specific indices typically specialize in identifying only certain types of disturbances; for example, the Normalized Burn Ratio (NBR) has been widely used for monitoring fire disturbance. Recognizing the potential complementary nature of multiple indices, we integrated four indices into one model to more accurately detect true change between two NLCD time periods. The four indices are NBR, Normalized Difference Vegetation Index (NDVI), Change Vector (CV), and a newly developed index called the Relative Change Vector (RCV). The model is designed to provide both change location and change direction (e.g. biomass increase or biomass decrease). The integrated change model has been tested on five image pairs from different regions exhibiting a variety of disturbance types. Compared with a simple change vector method, MIIC can better capture the desired change without introducing additional commission errors. The model is particularly accurate at detecting forest disturbances, such as forest harvest, forest fire, and forest regeneration. Agreement between the initial change map areas derived from MIIC and the retained final land cover type change areas will be showcased for the pilot test sites.
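For orientation, the per-pixel index layer feeding such a model can be sketched as below (NBR, NDVI, and change-vector magnitude on toy band stacks). The RCV index and MIIC's actual decision rules are not reproduced, and the band order and thresholds are assumptions.

```python
import numpy as np

def nbr(nir, swir):
    return (nir - swir) / (nir + swir + 1e-9)

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def change_vector_magnitude(bands_t1, bands_t2):
    """Euclidean spectral distance per pixel between the two dates."""
    return np.sqrt(((bands_t2 - bands_t1) ** 2).sum(axis=0))

# Toy stacks, shape (bands, rows, cols); band order assumed to be
# [blue, green, red, nir, swir].
t1 = np.random.rand(5, 100, 100)
t2 = np.random.rand(5, 100, 100)

d_nbr = nbr(t1[3], t1[4]) - nbr(t2[3], t2[4])
d_ndvi = ndvi(t1[3], t1[2]) - ndvi(t2[3], t2[2])
cv = change_vector_magnitude(t1, t2)
change_mask = (np.abs(d_nbr) > 0.2) | (np.abs(d_ndvi) > 0.2) | (cv > 1.0)
print("changed fraction:", change_mask.mean())   # thresholds illustrative
```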
Codriansky, Andres; Hong, Jiaxu; Xu, Jianjian; Deng, Sophie X.
2016-01-01
Purpose To report the presence of normal limbal epithelium detected by in vivo confocal laser scanning microscopy (IVCM) in three cases of clinically diagnosed total limbal stem cell deficiency (LSCD). Methods This retrospective case report consists of three patients who were diagnosed with total LSCD based on clinical examination and/or impression cytology. Clinical data including ocular history, presentation, slit-lamp examination, IVCM and impression cytology were reviewed. Results The etiology was chemical burn in all three cases. One patient had two failed penetrating keratoplasties. Another had an allogeneic keratolimbal transplantation, but the graft failed one year after surgery. The third patient had a failed amniotic membrane transplantation. These three patients presented with signs of total LSCD including the absence of normal Vogt palisades, complete superficial vascularization of the peripheral cornea, non-healing epithelial defects, and corneal scarring. Impression cytology was performed in two cases and confirmed the presence of goblet cells. Each patient, however, still had distinct areas of corneal and/or limbal epithelial cells detected by IVCM. Conclusions Residual normal limbal epithelial cells can be present in eyes with clinical features of total LSCD. IVCM appears to be a more accurate method to evaluate the degree of LSCD. PMID:27362882
Changes in Cerebral Cortex of Children Treated for Medulloblastoma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Arthur K.; Marcus, Karen J.; Department of Radiation Oncology, Dana Farber Cancer Institute, Harvard Medical School, Boston, MA
2007-07-15
Purpose: Children with medulloblastoma undergo surgery, radiotherapy, and chemotherapy. After treatment, these children have numerous structural abnormalities. Using high-resolution magnetic resonance imaging, we measured the thickness of the cerebral cortex in a group of medulloblastoma patients and a group of normally developing children. Methods and Materials: We obtained magnetic resonance imaging scans and measured the cortical thickness in 9 children after treatment of medulloblastoma. The measurements from these children were compared with the measurements from age- and gender-matched normally developing children previously scanned. For additional comparison, the pattern of thickness change was compared with the cortical thickness maps from a larger group of 65 normally developing children. Results: In the left hemisphere, relatively thinner cortex was found in the perirolandic region and the parieto-occipital lobe. In the right hemisphere, relatively thinner cortex was found in the parietal lobe, posterior superior temporal gyrus, and lateral temporal lobe. These regions of cortical thinning overlapped with the regions of cortex that undergo normal age-related thinning. Conclusion: The spatial distribution of cortical thinning suggested that the areas of cortex that are undergoing development are more sensitive to the effects of treatment of medulloblastoma. Such quantitative methods may improve our understanding of the biologic effects that treatment has on the cerebral development and their neuropsychological implications.
Fang, Yu-Hua Dean; Chiu, Shao-Chieh; Lu, Chin-Song; Weng, Yi-Hsin
2015-01-01
Purpose. We aimed at improving the existing methods for the fully automatic quantification of striatal uptake of [99mTc]-TRODAT with SPECT imaging. Procedures. A normal [99mTc]-TRODAT template was first formed based on 28 healthy controls. Images from PD patients (n = 365) and nPD subjects (28 healthy controls and 33 essential tremor patients) were spatially normalized to the normal template. We performed an inverse transform on the predefined striatal and reference volumes of interest (VOIs) and applied the transformed VOIs to the original image data to calculate the striatal-to-reference ratio (SRR). The diagnostic performance of the SRR was determined through receiver operating characteristic (ROC) analysis. Results. The SRR measured with our new and automatic method demonstrated excellent diagnostic performance with 92% sensitivity, 90% specificity, 92% accuracy, and an area under the curve (AUC) of 0.94. For the evaluation of the mean SRR and the clinical duration, a quadratic function fit the data with R² = 0.84. Conclusions. We developed and validated a fully automatic method for the quantification of the SRR in a large study sample. This method has an excellent diagnostic performance and exhibits a strong correlation between the mean SRR and the clinical duration in PD patients. PMID:26366413
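One common definition of a specific-uptake ratio such as the SRR is (mean striatal counts - mean reference counts) / mean reference counts; the paper's exact definition may differ, so the following sketch with placeholder VOI masks is illustrative only.

```python
import numpy as np

def srr(image, striatal_mask, reference_mask):
    """(mean striatal - mean reference) / mean reference counts."""
    s = image[striatal_mask].mean()
    r = image[reference_mask].mean()
    return (s - r) / r

img = np.random.rand(64, 64, 64) + 1.0          # placeholder SPECT volume
striatum = np.zeros(img.shape, dtype=bool)
striatum[28:36, 20:30, 30:40] = True            # placeholder striatal VOI
occipital = np.zeros(img.shape, dtype=bool)
occipital[28:36, 50:60, 30:40] = True           # placeholder reference VOI

img[striatum] += 1.5                            # simulate specific binding
print(f"SRR = {srr(img, striatum, occipital):.2f}")
```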
Sakamoto, Rumi; Kakinuma, Eisuke; Masuda, Kentaro; Takeuchi, Yuko; Ito, Kosaku; Iketaki, Kentaro; Matsuzaki, Takahisa; Nakabayashi, Seiichiro; Yoshikawa, Hiroshi Y; Yamamoto, Hideaki; Sato, Yuko; Tanii, Takashi
2016-09-01
The main constituent of green tea, (-)-Epigallocatechin-3-O-gallate (EGCG), is known to have cancer-specific chemopreventive effects. In the present work, we investigated how EGCG suppresses cell adhesion by comparing the adhesion of human pancreatic cancer cells (AsPC-1 and BxPC-3) and their counterpart, normal human embryonic pancreas-derived cells (1C3D3), in catechin-containing media using organosilane monolayer templates (OMTs). The purpose of this work is (1) to evaluate the quantitativeness of the measurement of cell adhesion with the OMT and (2) to show how green-tea catechins suppress cell adhesion in a cancer-specific manner. For the first purpose, the adhesion of cancer and normal cells was compared using the OMT. Cell adhesion in different types of catechins, such as EGCG, (-)-Epicatechin-3-O-gallate (ECG) and (-)-Epicatechin (EC), was also evaluated. The measurements revealed that the anti-adhesion effect of green-tea catechins is cancer-specific, and the order is EGCG≫ECG>EC. The results agree well with the data reported to date, showing the quantitativeness of the new method. For the second purpose, the contact area of cells on the OMT was measured by reflection interference contrast microscopy. The cell-OMT contact area of cancer cells decreases with increasing EGCG concentration, whereas that of normal cells remains constant. The results reveal a twofold action of EGCG on cancer cell adhesion (suppressing cell attachment to a candidate adhesion site and decreasing the contact area of the cells) and validate the use of the OMT as a tool for screening cancer cell adhesion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Q; Snyder, K; Liu, C
Purpose: To develop an optimization algorithm to reduce normal brain dose by optimizing couch and collimator angles for single-isocenter multiple-target treatment of stereotactic radiosurgery. Methods: Three metastatic brain lesions were retrospectively planned using single-isocenter volumetric modulated arc therapy (VMAT). Three matrices were developed to calculate the projection of each lesion on the Beam's Eye View (BEV) by rotating the couch, collimator and gantry respectively. The island blocking problem was addressed by computing the total area of open space between any two lesions with shared MLC leaf pairs. The couch and collimator angles resulting in the smallest open areas were the optimized angles for each treatment arc. Two treatment plans, with and without couch and collimator angle optimization, were developed using the same objective functions and to achieve 99% of each target volume receiving the full prescription dose of 18 Gy. Plan quality was evaluated by calculating each target's Conformity Index (CI), Gradient Index (GI), and Homogeneity Index (HI), and the absolute volume of normal brain V8Gy, V10Gy, V12Gy, and V14Gy. Results: Using the new couch/collimator optimization strategy, dose to normal brain tissue was reduced substantially. V8, V10, V12, and V14 decreased by 2.3%, 3.6%, 3.5%, and 6%, respectively. There were no significant differences in the conformity index, gradient index, and homogeneity index between the two treatment plans with and without the new optimization algorithm. Conclusion: We have developed a solution to the island blocking problem in delivering radiation to multiple brain metastases with a shared isocenter. Significant reduction in dose to normal brain was achieved by using optimal couch and collimator angles that minimize the total area of open space between any two lesions with shared MLC leaf pairs. This technique has been integrated into the Eclipse treatment planning system using the scripting API.
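A heavily simplified, conceptual sketch of this kind of angle search: project lesion centers into a beam's-eye-view frame for each candidate couch/collimator pair, score the open area between lesions whose leaf-travel bands overlap, and keep the lowest-scoring pair. The geometry, lesion coordinates, and scoring function below are all assumptions for illustration, not the authors' matrices.

```python
import numpy as np
from itertools import combinations

def bev_coords(points, couch_deg, coll_deg):
    """Rotate isocentric lesion coordinates into a simplified BEV frame."""
    c, k = np.radians(couch_deg), np.radians(coll_deg)
    rz = np.array([[np.cos(c), -np.sin(c), 0.0],
                   [np.sin(c),  np.cos(c), 0.0],
                   [0.0, 0.0, 1.0]])
    xy = (rz @ points.T).T[:, :2]               # project along the beam axis
    rk = np.array([[np.cos(k), -np.sin(k)],
                   [np.sin(k),  np.cos(k)]])
    return (rk @ xy.T).T                        # apply collimator rotation

def open_area_score(bev_pts, radius=1.0):
    """Total gap between lesion pairs whose MLC leaf-travel bands overlap."""
    score = 0.0
    for p, q in combinations(bev_pts, 2):
        overlap = 2 * radius - abs(p[1] - q[1])  # shared leaf pairs (y axis)
        if overlap > 0:
            gap = max(abs(p[0] - q[0]) - 2 * radius, 0.0)
            score += overlap * gap
    return score

lesions = np.array([[3.0, 1.0, 0.5], [-2.0, -1.5, 1.0], [0.5, 2.5, -2.0]])
best = min(((c, k) for c in range(0, 91, 5) for k in range(0, 91, 5)),
           key=lambda a: open_area_score(bev_coords(lesions, *a)))
print("lowest-score couch/collimator pair:", best)
```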
Optical Coherence Tomography Angiography of Optic Disc Perfusion in Glaucoma
Jia, Yali; Wei, Eric; Wang, Xiaogang; Zhang, Xinbo; Morrison, John C.; Parikh, Mansi; Lombardi, Lori H.; Gattey, Devin M.; Armour, Rebecca L.; Edmunds, Beth; Kraus, Martin F.; Fujimoto, James G.; Huang, David
2014-01-01
Purpose To compare optic disc perfusion between normal and glaucoma subjects using optical coherence tomography (OCT) angiography and detect optic disc perfusion changes in glaucoma. Design Observational, cross-sectional study. Participants Twenty-four normal subjects and 11 glaucoma patients were included. Methods One eye of each subject was scanned by a high-speed 1050 nm wavelength swept-source OCT instrument. The split-spectrum amplitude-decorrelation angiography algorithm (SSADA) was used to compute three-dimensional optic disc angiography. A disc flow index was computed from four registered scans. Confocal scanning laser ophthalmoscopy (cSLO) was used to measure disc rim area, and stereo photography was used to evaluate cup/disc ratios. Wide field OCT scans over the discs were used to measure retinal nerve fiber layer (NFL) thickness. Main Outcome Measurements Variability was assessed by coefficient of variation (CV). Diagnostic accuracy was assessed by sensitivity and specificity. Comparisons between glaucoma and normal groups were analyzed by Wilcoxon rank-sum test. Correlations between disc flow index, structural assessments, and visual field (VF) parameters were assessed by linear regression. Results In normal discs, a dense microvascular network was visible on OCT angiography. This network was visibly attenuated in glaucoma subjects. The intra-visit repeatability, inter-visit reproducibility, and normal population variability of the optic disc flow index were 1.2%, 4.2%, and 5.0% CV respectively. The disc flow index was reduced by 25% in the glaucoma group (p = 0.003). Sensitivity and specificity were both 100% using an optimized cutoff. The flow index was highly correlated with VF pattern standard deviation (R2 = 0.752, p = 0.001). These correlations were significant even after accounting for age, cup/disc area ratio, NFL, and rim area. Conclusions OCT angiography, generated by the new SSADA algorithm, repeatably measures optic disc perfusion. OCT angiography could be useful in the evaluation of glaucoma and glaucoma progression. PMID:24629312
Kiapour, Ata M.; Fleming, Braden C.; Murray, Martha M.
2017-01-01
Background: Abnormal joint motion has been linked to joint arthrosis after anterior cruciate ligament (ACL) reconstruction. However, the relationships between the graft properties (ie, structural and anatomic) and extent of posttraumatic osteoarthritis are not well defined. Hypotheses: (1) The structural (tensile) and anatomic (area and alignment) properties of the reconstructed graft or repaired ACL correlate with the total cartilage lesion area 1 year after ACL surgery, and (2) side-to-side differences in anterior-posterior (AP) knee laxity correlate with the total cartilage lesion area 1 year postoperatively. Study Design: Controlled laboratory study. Methods: Sixteen minipigs underwent unilateral ACL transection and were randomly treated with ACL reconstruction or bridge-enhanced ACL repair. The tensile properties, cross-sectional area, and multiplanar alignment of the healing ACL or graft, AP knee laxity, and cartilage lesion areas were assessed 1 year after surgery. Results: In the reconstructed group, the normalized graft yield and maximum failure loads, cross-sectional area, sagittal and coronal elevation angles, and side-to-side differences in AP knee laxity at 60° of flexion were associated with the total cartilage lesion area 1 year after surgery (R² > 0.5, P < .04). In the repaired group, normalized ACL yield load, linear stiffness, cross-sectional area, and the sagittal and coronal elevation angles were associated with the total cartilage lesion area (R² > 0.5, P < .05). Smaller cartilage lesion areas were observed in the surgically treated knees when the structural and anatomic properties of the ligament or graft and AP laxity values were closer to those of the contralateral ACL-intact knee. Reconstructed grafts had a significantly larger normalized cross-sectional area and sagittal elevation angle (more vertical) when compared with repaired ACLs (P < .02). Conclusion: The tensile properties, cross-sectional area, and multiplanar alignment of the healing ACLs or grafts and AP knee laxity in reconstructed knees were associated with the extent of tibiofemoral cartilage damage after ACL surgery. Clinical Relevance: These data highlight the need for novel ACL injury treatments that can restore the structural and anatomic properties of the torn ACL to those of the native ACL in an effort to minimize the risk of early-onset posttraumatic osteoarthritis. PMID:28875154
Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data
Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin
2014-01-01
Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burned area and measure vegetation recovery. Traditional burn scar extraction methodologies perform poorly on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). This method utilizes the advantages of the different features in remote sensing images, and also considers the practical need to extract burn scars rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Chan-Vese (C-V) level set model with a new initial curve derived from a binary image obtained by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm, and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of a fire burn scar effectively and exactly. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. PMID:24503563
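A rough stand-in for the segmentation step, using the morphological Chan-Vese implementation available in scikit-image, initialized from a K-means binarization (here of a random placeholder image rather than the paper's CVA/NDVI/NBR difference image):

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.segmentation import morphological_chan_vese

diff = np.random.rand(128, 128)   # placeholder for the difference image

# K-means (k=2) on pixel values gives a binary initial level set.
labels = (KMeans(n_clusters=2, n_init=10, random_state=0)
          .fit_predict(diff.reshape(-1, 1))
          .reshape(diff.shape))
bright_cluster = labels[np.unravel_index(diff.argmax(), diff.shape)]
init = (labels == bright_cluster).astype(np.int8)

burn_mask = morphological_chan_vese(diff, 50, init_level_set=init)
print("segmented fraction:", burn_mask.mean())
```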
Code of Federal Regulations, 2012 CFR
2012-04-01
... of this part. Area of intended employment means the geographic area within normal commuting distance... Metropolitan Statistical Area (MSA), any place within the MSA is deemed to be within normal commuting distance... a course of study at an established institution of learning or other recognized place of study in...
Code of Federal Regulations, 2013 CFR
2013-04-01
... of this part. Area of intended employment means the geographic area within normal commuting distance... Metropolitan Statistical Area (MSA), any place within the MSA is deemed to be within normal commuting distance... a course of study at an established institution of learning or other recognized place of study in...
Hemispheric dominance during the mental rotation task in patients with schizophrenia.
Chen, Jiu; Yang, Laiqi; Zhao, Jin; Li, Lanlan; Liu, Guangxiong; Ma, Wentao; Zhang, Yan; Wu, Xingqu; Deng, Zihe; Tuo, Ran
2012-04-01
Mental rotation is the capability to transform spatial representations by rotating either an imagined object or one's own viewpoint. This capability is impaired in schizophrenia. To provide a more detailed assessment of impaired cognitive functioning in schizophrenia, we compared the electrophysiological profiles of patients with schizophrenia and controls while they completed a mental rotation task using both normally-oriented images and mirror images. This electroencephalographic study compared error rates, reaction times and the topographic map of event-related potentials in 32 participants with schizophrenia and 29 healthy controls during mental rotation tasks involving both normal images and mirror images. Among controls, the mean error rate and the mean reaction time for normal images and mirror images were not significantly different, but in the patient group the mean (sd) error rate was higher for mirror images than for normal images (42% [6%] vs. 32% [9%], t=2.64, p=0.031) and the mean reaction time was longer for mirror images than for normal images (587 [11] ms vs. 571 [18] ms, t=2.83, p=0.028). The amplitude of the P500 component at Pz (parietal area), Cz (central area), P3 (left parietal area) and P4 (right parietal area) was significantly lower in the patient group than in the control group for both normal images and mirror images. In both groups the P500 for both normal and mirror images was significantly higher in the right parietal area (P4) than in the left parietal area (P3). The mental rotation abilities of patients with schizophrenia for both normally-oriented images and mirror images are impaired. Patients with schizophrenia show a diminished left cerebral contribution to the mental rotation task, a more rapid response time, and a differential response to normal versus mirror images not seen in healthy controls. Specific topographic characteristics of the EEG during mental rotation tasks are potential biomarkers for schizophrenia.
Evaluation of cerebral ischemia using near-infrared spectroscopy with oxygen inhalation
NASA Astrophysics Data System (ADS)
Ebihara, Akira; Tanaka, Yuichi; Konno, Takehiko; Kawasaki, Shingo; Fujiwara, Michiyuki; Watanabe, Eiju
2012-09-01
Conventional methods presently used to evaluate cerebral hemodynamics are invasive, require physical restraint, and employ equipment that is not easily transportable. Therefore, it is difficult to take repeated measurements at the patient's bedside. An alternative method to evaluate cerebral hemodynamics was developed using near-infrared spectroscopy (NIRS) with oxygen inhalation. The bilateral fronto-temporal areas of 30 normal volunteers and 33 patients with cerebral ischemia were evaluated with the NIRS system. The subjects inhaled oxygen through a mask for 2 min at a flow rate of 8 L/min. Principal component analysis (PCA) was applied to the data, and a topogram was drawn using the calculated weights. NIRS findings were compared with those of single-photon-emission computed tomography (SPECT). In normal volunteers, no laterality of the PCA weights was observed in 25 of 30 cases (83%). In patients with cerebral ischemia, PCA weights in ischemic regions were lower than in normal regions. In 28 of 33 patients (85%) with cerebral ischemia, NIRS findings agreed with those of SPECT. The results suggest that transmission of the changes in systemic SpO2 were attenuated in ischemic regions. The method discussed here should be clinically useful because it can be used to measure cerebral ischemia easily, repeatedly, and noninvasively.
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.
Lin, Johnny; Bentler, Peter M
2012-01-01
Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.
Design of neural networks for classification of remotely sensed imagery
NASA Technical Reports Server (NTRS)
Chettri, Samir R.; Cromp, Robert F.; Birmingham, Mark
1992-01-01
Classification accuracies of a backpropagation neural network are discussed and compared with a maximum likelihood classifier (MLC) with multivariate normal class models. We have found that, because of its nonparametric nature, the neural network outperforms the MLC in this area. In addition, we discuss techniques for constructing optimal neural nets on parallel hardware like the MasPar MP-1 currently at GSFC. Other important discussions are centered around training and classification times of the two methods, and sensitivity to the training data. Finally, we discuss future work in the area of classification and neural nets.
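For reference, the maximum likelihood classifier with multivariate normal class models used as the baseline in this comparison can be written compactly. The sketch below trains and applies it to toy two-class data and is illustrative only, not the study's implementation.

```python
import numpy as np

def fit_mlc(X, y):
    """Per-class mean vector and covariance from training pixels."""
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c].T))
            for c in np.unique(y)}

def classify_mlc(X, params):
    """Assign each pixel to the class with highest Gaussian log-likelihood."""
    keys = sorted(params)
    scores = []
    for c in keys:
        mu, cov = params[c]
        d = X - mu
        inv = np.linalg.inv(cov)
        ll = -0.5 * (np.log(np.linalg.det(cov))
                     + np.einsum('ij,jk,ik->i', d, inv, d))
        scores.append(ll)
    return np.asarray(keys)[np.argmax(scores, axis=0)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 4)), rng.normal(2.0, 1.0, (100, 4))])
y = np.repeat([0, 1], 100)
print("training accuracy:", (classify_mlc(X, fit_mlc(X, y)) == y).mean())
```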
NASA Astrophysics Data System (ADS)
Baloloy, A. B.; Blanco, A. C.; Gana, B. S.; Sta. Ana, R. C.; Olalia, L. C.
2016-09-01
The Philippines has a booming sugarcane industry contributing about PHP 70 billion annually to the local economy through raw sugar, molasses and bioethanol production (SRA, 2012). Sugarcane planters adopt different farm practices in cultivating sugarcane, one of which is cane burning to eliminate unwanted plant material and facilitate easier harvest. Information on burned sugarcane extent is significant for yield estimation models that calculate total sugar lost during harvest. Pre-harvest burning can lessen sucrose by 2.7% - 5% of the potential yield (Gomez et al., 2006; Hiranyavasit, 2016). This study employs a method for detecting burned sugarcane areas and determining burn severity through the Differenced Normalized Burn Ratio (dNBR) using Landsat 8 images acquired during the late milling season in Tarlac, Philippines. Total burned area was computed per burn severity class based on pre-fire and post-fire images. Results show that 75.38% of the total sugarcane fields in Tarlac were burned with post-fire regrowth; 16.61% were recently burned; and only 8.01% were unburned. The monthly dNBR for February to March generated the largest area with low severity burn (1,436 ha) and high severity burn (31.14 ha) due to pre-harvest burning. Post-fire regrowth is highest in April to May, when previously burned areas were already replanted with sugarcane. The maximum dNBR of the entire late milling season (February to May) recorded a larger extent of areas with high and low post-fire regrowth compared to areas with low, moderate and high burn severity. The Normalized Difference Vegetation Index (NDVI) was used to analyse vegetation dynamics between the burn severity classes. A significant positive correlation, rho = 0.99, was observed between dNBR and dNDVI at the 5% level (p = 0.004). An accuracy of 89.03% was calculated for the Landsat-derived NBR validated using actual mill data for crop year 2015-2016.
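The dNBR computation itself is straightforward. The sketch below pairs it with commonly cited FIREMON-style severity breakpoints (simplified here to six classes), which may differ from thresholds tuned for sugarcane in this study.

```python
import numpy as np

def nbr(nir, swir2):
    return (nir - swir2) / (nir + swir2 + 1e-9)

def dnbr(pre_nir, pre_swir2, post_nir, post_swir2):
    return nbr(pre_nir, pre_swir2) - nbr(post_nir, post_swir2)

def severity(d):
    """Simplified FIREMON-style breakpoints; thresholds are illustrative."""
    bins = [-0.25, -0.10, 0.10, 0.27, 0.66]
    names = ["high post-fire regrowth", "low post-fire regrowth", "unburned",
             "low severity", "moderate severity", "high severity"]
    return names[int(np.digitize(d, bins))]

print(severity(0.35))   # -> moderate severity
```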
A new evaluation of heat distribution on facial skin surface by infrared thermography
Brioschi, Marcos L; Baladi, Marina G; Arita, Emiko S
2016-01-01
Objective: The aim of this study was to identify the facial areas defined by thermal gradient, in individuals compatible with the pattern of normality, and to quantify and describe them anatomically. Methods: The sample consisted of 161 volunteers, of both genders, aged between 26 and 84 years (63 ± 15 years). Results: The results demonstrated that the thermal gradient areas suggested for the study were present in at least 95% of the thermograms evaluated and that there is significant difference in temperature between the genders, racial group and variables “odontalgia”, “dental prosthesis” and “history of migraine” (p < 0.05). Moreover, there was no statistically significant difference in the absolute temperatures between ages, and right and left sides of the face, in individuals compatible with the pattern of normality (ΔT = 0.11°C). Conclusions: The authors concluded that according to the suggested areas of thermal gradients, these were present in at least 95% of all the thermograms evaluated, and the areas of high intensity found in the face were medial palpebral commissure, labial commissure, temporal, supratrochlear and external acoustic meatus, whereas the points of low intensity were inferior labial, lateral palpebral commissure and nasolabial. PMID:26891669
Classical and neural methods of image sequence interpolation
NASA Astrophysics Data System (ADS)
Skoneczny, Slawomir; Szostakowski, Jaroslaw
2001-08-01
An image interpolation problem is often encountered in many areas. Some examples are interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in normal TV or HDTV, or reconstruction of missing frames in old, damaged cinematic sequences. In this paper an overview of interframe interpolation methods is presented. Both direct and motion-compensated interpolation techniques are illustrated with examples. The methodology applied can be either classical or based on neural networks, depending on the demands of the specific interpolation problem.
NASA Astrophysics Data System (ADS)
Ajadi, O. A.; Meyer, F. J.
2014-12-01
Automatic oil spill detection and tracking from Synthetic Aperture Radar (SAR) images is a difficult task, due in large part to the inhomogeneous properties of the sea surface, the high level of speckle inherent in SAR data, the complexity and highly non-Gaussian nature of amplitude information, and the low temporal sampling that is often achieved with SAR systems. This research presents a promising new oil spill detection and tracking method that is based on time series of SAR images. Through the combination of a number of advanced image processing techniques, the developed approach is able to mitigate some of these previously mentioned limitations of SAR-based oil-spill detection and enables fully automatic spill detection and tracking across a wide range of spatial scales. The method combines an initial automatic texture analysis with a consecutive change detection approach based on multi-scale image decomposition. The first step of the approach, a texture transformation of the original SAR images, is performed in order to normalize the ocean background and enhance the contrast between oil-covered and oil-free ocean surfaces. The Lipschitz regularity (LR), a local texture parameter, is used here due to its proven ability to normalize the reflectivity properties of ocean water and maximize the visibility of oil in water. To calculate LR, the images are decomposed using a two-dimensional continuous wavelet transform (2D-CWT) and transformed into Hölder space to measure LR. After texture transformation, the now normalized images are inserted into our multi-temporal change detection algorithm. The multi-temporal change detection approach is a two-step procedure including (1) data enhancement and filtering and (2) multi-scale automatic change detection. The performance of the developed approach is demonstrated by an application to oil spill areas in the Gulf of Mexico. In this example, areas affected by oil spills were identified from a series of ALOS PALSAR images acquired in 2010. The comparison showed exceptional performance of our method. This method can be applied to emergency management and decision support systems with a need for real-time data, and it shows great potential for rapid data analysis in other areas, including volcano detection, flood boundaries, forest health, and wildfires.
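A heavily simplified stand-in for the two-step pipeline described above: the Lipschitz-regularity texture step via 2D-CWT is beyond a short sketch, so a log-ratio of smoothed amplitudes substitutes for it here; the window size and threshold are illustrative assumptions.

```python
# Simplified change-detection stand-in (NOT the paper's LR/2D-CWT pipeline).
import numpy as np
from scipy.ndimage import uniform_filter

def change_map(img_t0, img_t1, win=5, k=2.0):
    # Moving-average smoothing as crude speckle suppression.
    a = uniform_filter(img_t0.astype(float), win)
    b = uniform_filter(img_t1.astype(float), win)
    logratio = np.log(b + 1e-6) - np.log(a + 1e-6)
    # Flag pixels deviating more than k standard deviations from the mean.
    z = (logratio - logratio.mean()) / logratio.std()
    return np.abs(z) > k

rng = np.random.default_rng(1)
t0 = rng.gamma(4.0, 25.0, (200, 200))          # speckled "ocean" scene
t1 = t0.copy(); t1[80:120, 60:140] *= 0.3      # dark slick-like patch appears
print("changed pixels:", change_map(t0, t1).sum())
```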
Isidro, Raymond A; Cruz, Myrella L; Isidro, Angel A; Baez, Axel; Arroyo, Axel; González-Marqués, William A; González-Keelan, Carmen; Torres, Esther A; Appleyard, Caroline B
2015-01-01
AIM: To determine the expression of neurokinin-1 receptor (NK-1R), phosphorylated epidermal growth factor receptor (pEGFR), cyclooxygenase-2 (Cox-2), and vitamin D receptor (VDR) in normal, inflammatory bowel disease (IBD), and colorectal neoplasia tissues from Puerto Ricans. METHODS: Tissues from patients with IBD, colitis-associated colorectal cancer (CAC), sporadic dysplasia, and sporadic colorectal cancer (CRC), as well as normal controls, were identified at several centers in Puerto Rico. Archival formalin-fixed, paraffin-embedded tissues were de-identified and processed by immunohistochemistry for NK-1R, pEGFR, Cox-2, and VDR. Pictures of representative areas of each tissue diagnosis were taken and scored by three observers using a 4-point scale that assessed intensity of staining. Tissues with CAC were further analyzed by photographing representative areas of IBD and the different grades of dysplasia, in addition to the areas of cancer, within each tissue. Differences in the average age between the five patient groups were assessed with one-way analysis of variance and the Tukey-Kramer multiple comparisons test. The mean scores for normal tissues and tissues with IBD, dysplasia, CRC, and CAC were calculated and statistically compared using one-way analysis of variance and Dunnett’s multiple comparisons test. Correlations between protein expression patterns were analyzed with Pearson’s product-moment correlation coefficient. Data are presented as mean ± SE. RESULTS: On average, patients with IBD were younger (34.60 ± 5.81) than normal (63.20 ± 6.13, P < 0.01), sporadic dysplasia (68.80 ± 4.42, P < 0.01), sporadic cancer (74.80 ± 4.91, P < 0.001), and CAC (57.50 ± 5.11, P < 0.05) patients. NK-1R levels in cancer tissue (sporadic CRC, 1.73 ± 0.34; CAC, 1.57 ± 0.53) and sporadic dysplasia (2.00 ± 0.45) were higher than in normal tissues (0.73 ± 0.19). pEGFR was significantly increased in sporadic CRC (1.53 ± 0.43) and CAC (2.25 ± 0.47) when compared to normal tissue (0.07 ± 0.25, P < 0.05, P < 0.001, respectively). Cox-2 was significantly increased in sporadic colorectal cancer (2.20 ± 0.23 vs 0.80 ± 0.37 for normal tissues, P < 0.05). In comparison to normal (2.80 ± 0.13) and CAC (2.50 ± 0.33) tissues, VDR was significantly decreased in sporadic dysplasia (0.00 ± 0.00, P < 0.001 vs normal, P < 0.001 vs CAC) and sporadic CRC (0.47 ± 0.23, P < 0.001 vs normal, P < 0.001 vs CAC). VDR levels negatively correlated with NK-1R (r = -0.48) and pEGFR (r = -0.56) in normal, IBD, sporadic dysplasia and sporadic CRC tissue, but not in CAC. CONCLUSION: Immunohistochemical NK-1R and pEGFR positivity with VDR negativity can be used to identify areas of sporadic colorectal neoplasia. VDR immunoreactivity can distinguish CAC from sporadic cancer. PMID:25684939
Yao, Yuan; Ding, Jian-Li; Zhang, Fang; Wang, Gang; Jiang, Hong-Nan
2013-11-01
Soil salinization is one of the most important eco-environmental problems in arid areas: it can induce land degradation, inhibit vegetation growth, and impede regional agricultural production. Accurately and quickly obtaining information on regional saline soils from remote sensing data is critical for monitoring soil salinization and preventing its further development. Taking the Weigan-Kuqa River Delta Oasis in the northern Tarim River Basin of Xinjiang as the test area, and based on remote sensing data from Landsat TM images of April 15, 2011 and September 22, 2011, in combination with measured data from a field survey, this paper extracted three characteristic variables: the modified normalized difference water index (MNDWI), the normalized difference vegetation index (NDVI), and the third principal component from the K-L transformation (K-L-3). The decision tree method was adopted to establish extraction models of soil salinization in the two key seasons (dry and wet) of the study area, and classification maps of soil salinization in the two seasons were drawn. The results showed that the decision tree method had a high discrimination precision, 87.2% in the dry season and 85.3% in the wet season, and could be used to effectively monitor the dynamics of soil salinization and its spatial distribution, providing a scientific basis for the comprehensive management of saline soils in arid areas and the rational utilization of oasis land resources.
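A hedged sketch of the index computation and decision-tree step: MNDWI and NDVI follow their standard definitions, while the K-L-3 column and the class labels below are mock placeholders rather than the paper's data.

```python
# Index computation plus decision-tree classification (mock data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-10)

def mndwi(green, swir):
    return (green - swir) / (green + swir + 1e-10)

rng = np.random.default_rng(0)
n = 500
green, red, nir, swir = (rng.uniform(0.02, 0.5, n) for _ in range(4))
X = np.column_stack([mndwi(green, swir), ndvi(nir, red),
                     rng.normal(size=n)])   # third column stands in for K-L-3
y = rng.integers(0, 3, n)                   # mock labels: 0 non-saline, 1 slight, 2 severe

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```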
Imaging Subsurface Structure of Tehran/Iran region using Ambient Seismic Noise Tomography
NASA Astrophysics Data System (ADS)
Shirzad, Taghi; Shomali, Z. Hossein
2013-04-01
Tehran, the capital of Iran, is surrounded by many active faults (including the Mosha, North Tehran, and North and/or South Rey faults); however, our knowledge about the 3D velocity structure of the study area is limited. Recent developments in seismology have shown that the cross-correlation of long-duration ambient seismic noise recorded by a pair of stations contains information about the Green's function between the stations. Ambient seismic noise thus carries valuable information about the propagation path which can be extracted. We obtained a 2D model of shear wave velocity (Vs) for the Tehran/Iran area using the ambient noise tomography (ANT) method. In this study, we use continuous vertical-component data recorded by the TDMMO (Tehran Disaster Mitigation and Management Organization) and IRSC (Iranian Seismological Center) networks in the Tehran/Iran area. The TDMMO and IRSC networks are equipped with CMG-5TD Guralp sensors and SS-1 Kinemetrics sensors, respectively. We use data from 25 stations for 12 months, from October 2009 to October 2010. Data processing is similar to that explained in detail by Bensen et al. (2007) and was performed on a daily basis. The mean, trend, and instrument response were removed and the data were decimated to 10 sps. One-bit time-domain normalization was then applied to suppress the influence of instrument irregularities and earthquake signals, followed by spectral normalization between 0.1-1.0 Hz (period 1-10 s). After cross-correlation processing, we implemented a new stacking method that stacks cross-correlation functions based on the highest energy in the time interval in which we expect to receive the Rayleigh wave fundamental mode. We then obtained Rayleigh wave group velocities by using phase-matched filtering and frequency-time analysis techniques. Finally, we applied an iterative inversion method to extract the Vs model of the shallow structure in the Tehran/Iran area.
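The two normalization steps named above (one-bit time-domain normalization and spectral whitening in the 0.1-1.0 Hz band) can be sketched directly; the 10 sps trace below is synthetic and the whitening implementation is one common variant, not necessarily the authors' exact procedure.

```python
# One-bit normalization and spectral whitening for ambient-noise preprocessing.
import numpy as np

def one_bit(trace):
    """Keep only the sign of the trace to suppress earthquakes/irregularities."""
    return np.sign(trace)

def spectral_whiten(trace, fs=10.0, fmin=0.1, fmax=1.0):
    """Flatten the amplitude spectrum inside [fmin, fmax], zero it outside."""
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    out = np.zeros_like(spec)
    out[band] = spec[band] / (np.abs(spec[band]) + 1e-12)  # unit amplitude, keep phase
    return np.fft.irfft(out, n=trace.size)

rng = np.random.default_rng(0)
day = rng.normal(size=10 * 86400)            # one day of 10 sps noise
processed = spectral_whiten(one_bit(day))
# Cross-correlating such traces from two stations approximates the Green's function.
```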
NASA Astrophysics Data System (ADS)
Zhuo, Yan-Qun; Ma, Jin; Guo, Yan-Shuang; Ji, Yun-Tao
In stick-slip experiments modeling the occurrence of earthquakes, the meta-instability stage (MIS) is the process that occurs between the peak differential stress and the onset of the sudden stress drop. The MIS is the final stage before a fault becomes unstable; thus, identification of the MIS can help to assess the proximity of the fault to the earthquake critical time. A series of stick-slip experiments on a simulated strike-slip fault were conducted using a biaxial servo-controlled press. Digital images of the sample surface were obtained via a high-speed camera and processed using a digital image correlation method for analysis of the fault displacement field. Two parameters, A and S, are defined based on fault displacement. A, the normalized length of local pre-slip areas identified by the strike-slip component of fault displacement, is the ratio of the total length of the local pre-slip areas to the length of the fault within the observed areas and quantifies the growth of local unstable areas along the fault. S, the normalized entropy of fault displacement directions, is derived from Shannon entropy and quantifies the disorder of fault displacement directions along the fault. Based on the fault displacement fields of three stick-slip events under different loading rates, the experimental results show the following: (1) Both A and S can be expressed as power functions of the normalized time during the non-linearity stage and the MIS. The peak curvatures of A and S represent the onsets of the distinct increase of A and the distinct reduction of S, respectively. (2) During each stick-slip event, the fault evolves into the MIS soon after the curvatures of both A and S reach their peak values, which indicates that the MIS is a synergetic process from independent to cooperative behavior among various parts of a fault and can be approximately identified via the peak curvatures of A and S. A possible application of these experimental results to field conditions is provided. However, further validation is required via additional experiments and exercises.
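The parameter S lends itself to a compact sketch: the Shannon entropy of the histogram of displacement directions, normalized by its maximum so that S = 1 for fully disordered directions. The bin count below is an assumption; the abstract does not specify one.

```python
# Normalized entropy of displacement directions (assumed 36 angular bins).
import numpy as np

def normalized_direction_entropy(dx, dy, nbins=36):
    angles = np.arctan2(dy, dx)                          # displacement directions
    counts, _ = np.histogram(angles, bins=nbins, range=(-np.pi, np.pi))
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum() / np.log(nbins)        # Shannon entropy / max

rng = np.random.default_rng(0)
# Disordered field (stick phase) vs. aligned field (cooperative pre-slip).
print(normalized_direction_entropy(rng.normal(size=1000), rng.normal(size=1000)))
print(normalized_direction_entropy(np.ones(1000), rng.normal(0, 0.05, 1000)))
```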
Assessment of Pansharpening Methods Applied to WorldView-2 Imagery Fusion.
Li, Hui; Jing, Linhai; Tang, Yunwei
2017-01-05
Since WorldView-2 (WV-2) images are widely used in various fields, there is a high demand for high-quality pansharpened WV-2 images for different application purposes. Given the novelty of the WV-2 multispectral (MS) and panchromatic (PAN) bands, the performances of eight state-of-the-art pan-sharpening methods were assessed in this study on six datasets from three WV-2 scenes, using both quality indices and information indices along with visual inspection. The normalized difference vegetation index, the normalized difference water index, and the morphological building index, which are widely used in applications related to land cover classification and the extraction of vegetation areas, buildings, and water bodies, were employed in this work to evaluate the performance of the different pansharpening methods in terms of information presentation ability. The experimental results show that the Haze- and Ratio-based method, the adaptive Gram-Schmidt method, and the Generalized Laplacian pyramid (GLP) methods using the enhanced spectral distortion minimal model and the enhanced context-based decision model are good choices for producing fused WV-2 images for image interpretation and the extraction of urban buildings. The two GLP-based methods are better choices than the other methods if the fused images will be used for applications related to vegetation and water bodies.
Zykin, P A
2005-01-01
Comparative data on the structural-metabolic organization of field 4 of the cat brain under normal conditions and after unilateral enucleation of the eye are presented. Cytochrome oxidase was detected histochemically. Data were processed by a computerized method using an original video capture system. The data demonstrate an uneven distribution of the enzyme along sublayer IIIb of field 4 in animals with unilateral enucleation. A hypothesis based on published data is suggested whereby the alternation of high- and low-reactivity areas is evidence for the ordering of the retinal representations of the right and left eyes in the sensorimotor cortex.
NASA Astrophysics Data System (ADS)
Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.
1988-10-01
A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.
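A sketch of the Complex Mode Indicator Function on a synthetic multi-reference FRF matrix: at each spectral line the CMIF values are the singular values of the FRF matrix, and peaks of the leading singular value indicate modes. The two-mode system below is illustrative, not the paper's data.

```python
# CMIF from a multi-reference FRF matrix via per-frequency SVD.
import numpy as np
from scipy.signal import find_peaks

def cmif(frf):
    """frf: complex array (n_freqs, n_outputs, n_refs) -> singular values per line."""
    return np.array([np.linalg.svd(h, compute_uv=False) for h in frf])

# Synthetic two-mode system, 4 outputs x 2 references (illustrative values).
freqs = np.linspace(1.0, 100.0, 500)
w = 2 * np.pi * freqs
rng = np.random.default_rng(0)
frf = np.zeros((freqs.size, 4, 2), complex)
for fn, zeta in [(20.0, 0.02), (60.0, 0.03)]:
    wn = 2 * np.pi * fn
    # Rank-1 modal residue, scaled by wn^2 so the two peaks are comparable.
    residue = wn**2 * np.outer(rng.normal(size=4), rng.normal(size=2))
    frf += residue / (wn**2 - w[:, None, None]**2 + 2j * zeta * wn * w[:, None, None])

sv1 = cmif(frf)[:, 0]                          # leading singular value curve
peaks, _ = find_peaks(sv1, prominence=0.02 * sv1.max())
print("CMIF peaks near (Hz):", freqs[peaks])   # expect ~20 and ~60
```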
Björkström, S; Goldie, I F
1982-06-01
The hardness of bone is its property of withstanding the impact of a penetrating agent. It has been found that articular degenerative changes in, for example, the tibia (knee) are associated with a decrease in the hardness of the subchondral bone. In this investigation the hardness of subchondral bone in chondromalacia and osteoarthrosis of the patella was analysed and compared with normal subchondral bone. Using an indentation method originally described by Brinell, the hardness of the subchondral bone was evaluated in 7 normal patellae, in 20 with chondromalacia and in 33 with osteoarthrosis. A microscopic and microradiographic study of the subchondral bone was carried out simultaneously. Hardness was lowest in the normal material. The mean hardness value beneath the degenerated cartilage differed only slightly from that of the normal material, but the variation of values was increased. The hardness of bone in the chondromalacia area was lower than that of bone covered by surrounding normal cartilage. The mean hardness value in bone beneath normal parts of cartilage in specimens with chondromalacia was higher than the mean hardness value of the normal material. The microscopic and microradiographic examination made evident a relationship between trabecular structure and subchondral bone hardness: high values corresponded to a coarse and solid structure, low values to a slender and less regular structure.
NASA Astrophysics Data System (ADS)
Shaikh, Rubina; Dora, Tapas Kumar; Chopra, Supriya; Maheshwari, Amita; Deodhar, Kedar K.; Rekhi, Bharat; Krishna, C. Murali
2014-08-01
In vivo Raman spectroscopy is being projected as a new, noninvasive method for cervical cancer diagnosis. In most of the reported studies, normal areas in the cancerous cervix were used as control. However, in the Indian subcontinent, the majority of cervical cancers are detected at advanced stages, leaving no normal sites for acquiring control spectra. Moreover, vagina and ectocervix are reported to have similar biochemical composition. Thus, in the present study, we have evaluated the feasibility of classifying normal and cancerous conditions in the Indian population and we have also explored the utility of the vagina as an internal control. A total of 228 normal and 181 tumor in vivo Raman spectra were acquired from 93 subjects under clinical supervision. The spectral features in normal conditions suggest the presence of collagen, while DNA and noncollagenous proteins were abundant in tumors. Principal-component linear discriminant analysis (PC-LDA) yielded 97% classification efficiency between normal and tumor groups. An analysis of a normal cervix and vaginal controls of cancerous and noncancerous subjects suggests similar spectral features between these groups. PC-LDA of tumor, normal cervix, and vaginal controls further support the utility of the vagina as an internal control. Overall, findings of the study corroborate with earlier studies and facilitate objective, noninvasive, and rapid Raman spectroscopic-based screening/diagnosis of cervical cancers.
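A minimal PC-LDA sketch in the spirit of the analysis above: PCA compresses the spectra and LDA separates the classes. The synthetic "spectra" below use Gaussian bands at roughly collagen-like and protein/DNA-like Raman shifts as stand-ins for the in vivo data; the band positions, widths, and noise level are assumptions.

```python
# PC-LDA classification of mock "Raman spectra" (all band parameters assumed).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
wavenumbers = np.linspace(800, 1800, 400)
def band(center, width):                       # Gaussian "Raman band"
    return np.exp(-0.5 * ((wavenumbers - center) / width) ** 2)

normal = band(856, 15) + band(936, 12)         # collagen-like features
tumor = band(1004, 8) + band(1340, 20)         # protein/DNA-like features
X = np.vstack([normal + 0.15 * rng.normal(size=(228, 400)),
               tumor + 0.15 * rng.normal(size=(181, 400))])
y = np.array([0] * 228 + [1] * 181)

pc_lda = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print("CV accuracy:", cross_val_score(pc_lda, X, y, cv=5).mean())
```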
A transition-based joint model for disease named entity recognition and normalization.
Lou, Yinxia; Zhang, Yue; Qian, Tao; Li, Fei; Xiong, Shufeng; Ji, Donghong
2017-08-01
Disease named entities play a central role in many areas of biomedical research, and the automatic recognition and normalization of such entities have received increasing attention in biomedical research communities. Existing methods typically use pipeline models with two independent phases: (i) a disease named entity recognition (DER) system is used to find the boundaries of mentions in text, and (ii) a disease named entity normalization (DEN) system is used to connect the recognized mentions to concepts in a controlled vocabulary. The main problems of such models are: (i) errors propagate from DER to DEN, and (ii) DEN is useful for DER, but pipeline models cannot utilize this. We propose a transition-based model to jointly perform disease named entity recognition and normalization, casting the output construction process as an incremental state transition process and learning sequences of transition actions globally, which correspond to joint structural outputs. Beam search and online structured learning are used, with learning designed to guide search. Compared with the only existing method for joint DEN and DER, our method allows non-local features to be used, which significantly improves accuracy. We evaluate our model on two corpora: the BioCreative V Chemical Disease Relation (CDR) corpus and the NCBI disease corpus. Experiments show that our joint framework achieves significantly higher performance than competitive pipeline baselines. Our method also compares favourably to other state-of-the-art approaches. Data and code are available at https://github.com/louyinxia/jointRN. dhji@whu.edu.cn. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Optical biopsy of head and neck cancer using hyperspectral imaging and convolutional neural networks
NASA Astrophysics Data System (ADS)
Halicek, Martin; Little, James V.; Wang, Xu; Patel, Mihir; Griffith, Christopher C.; El-Deiry, Mark W.; Chen, Amy Y.; Fei, Baowei
2018-02-01
Successful outcomes of surgical cancer resection necessitate negative, cancer-free surgical margins. Currently, tissue samples are sent to pathology for diagnostic confirmation. Hyperspectral imaging (HSI) is an emerging, non-contact optical imaging technique. A reliable optical method could serve to diagnose and biopsy specimens in real-time. Using convolutional neural networks (CNNs) as a tissue classifier, we developed a method to use HSI to perform an optical biopsy of ex-vivo surgical specimens, collected from 21 patients undergoing surgical cancer resection. Training and testing on samples from different patients, the CNN can distinguish squamous cell carcinoma (SCCa) from normal aerodigestive tract tissues with an area under the curve (AUC) of 0.82, 81% accuracy, 81% sensitivity, and 80% specificity. Additionally, normal oral tissues can be sub-classified into epithelium, muscle, and glandular mucosa using a decision tree method, with an average AUC of 0.94, 90% accuracy, 93% sensitivity, and 89% specificity. After separately training on thyroid tissue, the CNN differentiates between thyroid carcinoma and normal thyroid with an AUC of 0.95, 92% accuracy, 92% sensitivity, and 92% specificity. Moreover, the CNN can discriminate medullary thyroid carcinoma from benign multi-nodular goiter (MNG) with an AUC of 0.93, 87% accuracy, 88% sensitivity, and 85% specificity. Classical-type papillary thyroid carcinoma is differentiated from benign MNG with an AUC of 0.91, 86% accuracy, 86% sensitivity, and 86% specificity. Our preliminary results demonstrate that an HSI-based optical biopsy method using CNNs can provide multi-category diagnostic information for normal head-and-neck tissue, SCCa, and thyroid carcinomas. More patient data are needed in order to fully investigate the proposed technique to establish reliability and generalizability of the work.
Oláh, Viktor; Hepp, Anna; Mészáros, Ilona
2016-05-01
In this study, germination of Spirodela polyrhiza (L.) Schleiden (giant duckweed) turions was assessed under cadmium exposure to test the applicability of a novel turion-based ecotoxicology method. The floating success of germinating turions and the protrusion of the first and subsequent fronds were investigated as test endpoints and compared to the results of standard duckweed growth inhibition tests with fronds of the same species. Our results indicate that turions can be used to characterize the effects of toxic substances. The initial phase of turion germination (floating up and appearance of the first frond) was less sensitive to Cd treatments than the subsequent frond production. The calculated effective concentrations for growth rates in turion and normal frond tests were similar. The single frond area produced by germinating turions proved to be the most sensitive test endpoint. Single frond area and colony disintegration, as additionally measured parameters in normal frond cultures, also changed due to Cd treatments, but the sensitivity of these parameters was lower than that of the growth rates.
Effect of elongation in divertor tokamaks
NASA Astrophysics Data System (ADS)
Jones, Morgin; Ali, Halima; Punjabi, Alkesh
2008-04-01
The method of maps developed by Punjabi and Boozer [A. Punjabi, A. Verma, and A. Boozer, Phys. Rev. Lett. 69, 3322 (1992)] is used to calculate the effects of elongation on the stochastic layer and magnetic footprint in divertor tokamaks. The parameters in the map are chosen such that the poloidal magnetic flux χ_SEP inside the ideal separatrix, the amplitude δ of the magnetic perturbation, and the height H of the ideal separatrix surface are held fixed. The safety factor q for the flux surfaces that are nonchaotic, as a function of the normalized distance d from the O-point to the X-point, is also held approximately constant. Under these conditions, the width W of the ideal separatrix surface in the midplane through the O-point is varied. The relative width w of the stochastic layer near the X-point and the area A of the magnetic footprint are then calculated. We find that the normalized width w of the stochastic layer scales as W^-7, and the area A of the magnetic footprint on the collector plate scales as W^-10.
Ability of Cirrus™ HD-OCT Optic Nerve Head Parameters to Discriminate Normal from Glaucomatous Eyes
Mwanza, Jean-Claude; Oakley, Jonathan D; Budenz, Donald L; Anderson, Douglas R
2010-01-01
Purpose To determine the ability of optic nerve head (ONH) parameters measured with spectral domain Cirrus™ HD-OCT to discriminate between normal and glaucomatous eyes and to compare them to the discriminating ability of peripapillary retinal nerve fiber layer (RNFL) thickness measurements performed with Cirrus™ HD-OCT. Design Evaluation of diagnostic test or technology. Participants Seventy-three subjects with glaucoma and one hundred and forty-six age-matched normal subjects. Methods Peripapillary ONH parameters and RNFL thickness were measured in one randomly selected eye of each participant within a 200×200 A-scan cube acquired with Cirrus™ HD-OCT centered on the ONH. Main Outcome Measures ONH topographic parameters, peripapillary RNFL thickness, and the area under the receiver operating characteristic curve (AUC). Results For distinguishing normal from glaucomatous eyes, regardless of disease stage, the six best parameters (expressed as AUC) were vertical rim thickness (VRT, 0.963), rim area (RA, 0.962), RNFL thickness at clock-hour 7 (0.957), RNFL thickness of the inferior quadrant (0.953), vertical cup-to-disc ratio (VCDR, 0.951) and average RNFL thickness (0.950). The AUC for distinguishing between normal eyes and eyes with mild glaucoma was greatest for RNFL thickness at clock-hour 7 (0.918), VRT (0.914), RA (0.912), RNFL thickness of the inferior quadrant (0.895), average RNFL thickness (0.893) and VCDR (0.890). There were no statistically significant differences between the AUCs for the best ONH parameters and RNFL thickness measurements (p > 0.05). Conclusions Cirrus™ HD-OCT ONH parameters are able to discriminate normal eyes from those with glaucoma, even mild glaucoma. There is no difference in the ability of ONH parameters and RNFL thickness measurements, as measured with Cirrus™ OCT, to distinguish between normal and glaucomatous eyes. PMID:20920824
A systematic evaluation of normalization methods in quantitative label-free proteomics.
Välikangas, Tommi; Suomi, Tomi; Elo, Laura L
2018-01-01
To date, mass spectrometry (MS) data remain inherently biased for reasons ranging from sample handling to differences caused by the instrumentation. Normalization is the process that aims to account for this bias and make samples more comparable. The selection of a proper normalization method is a pivotal task for the reliability of the downstream analysis and results. Many normalization methods commonly used in proteomics have been adapted from DNA microarray techniques. Previous studies comparing normalization methods in proteomics have focused mainly on intragroup variation. In this study, several popular and widely used normalization methods representing different normalization strategies are evaluated using three spike-in and one experimental mouse label-free proteomic data sets. The normalization methods are evaluated in terms of their ability to reduce variation between technical replicates, their effect on differential expression analysis, and their effect on the estimation of logarithmic fold changes. Additionally, we examined whether normalizing the whole data set globally or in segments for the differential expression analysis affects the performance of the normalization methods. We found that variance stabilization normalization (Vsn) reduced variation between technical replicates the most in all examined data sets. Vsn also performed consistently well in the differential expression analysis. Linear regression normalization and local regression normalization also performed systematically well. Finally, we discuss the choice of a normalization method and some qualities of a suitable normalization method in the light of the results of our evaluation. © The Author 2016. Published by Oxford University Press.
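For orientation, the sketch below implements one of the simplest members of the family evaluated above, global median normalization on log-transformed intensities; Vsn itself is a more involved model-based transform and is not reproduced here.

```python
# Global median normalization (illustrative; NOT Vsn).
import numpy as np
import pandas as pd

def median_normalize(intensities: pd.DataFrame) -> pd.DataFrame:
    """Log-transform, then shift each sample (column) to a common median."""
    log = np.log2(intensities.replace(0, np.nan))
    shifted = log - log.median(axis=0)           # remove per-sample bias
    return shifted + log.median(axis=0).mean()   # restore a common scale

rng = np.random.default_rng(0)
# 300 peptides x 6 technical replicates with multiplicative sample bias.
true = rng.lognormal(10, 1, (300, 1)) * np.ones((1, 6))
biased = true * rng.uniform(0.5, 2.0, (1, 6)) * rng.lognormal(0, 0.05, (300, 6))
df = pd.DataFrame(biased)
print("median CV before:", (df.std(axis=1) / df.mean(axis=1)).median())
norm = 2 ** median_normalize(df)
print("median CV after: ", (norm.std(axis=1) / norm.mean(axis=1)).median())
```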
In vivo NMR imaging of sodium-23 in the human head.
Hilal, S K; Maudsley, A A; Ra, J B; Simon, H E; Roschmann, P; Wittekoek, S; Cho, Z H; Mun, S K
1985-01-01
We report the first clinical nuclear magnetic resonance (NMR) images of cerebral sodium distribution in normal volunteers and in patients with a variety of pathological lesions. We have used a 1.5 T NMR magnet system. When compared with proton distribution, sodium shows a greater variation in its concentration from tissue to tissue and from normal to pathological conditions. Image contrast calculated on the basis of sodium concentration is 7 to 18 times greater than that of proton spin density. Normal images emphasize the extracellular compartments. In the clinical studies, areas of recent or old cerebral infarction and tumors show a pronounced increase of sodium content (300-400%). Actual measurements of image density values indicate that there is probably a further accentuation of the contrast by the increased "NMR visibility" of sodium in infarcted tissue. Sodium imaging may prove to be a more sensitive means for early detection of some brain disorders than other imaging methods.
Regional cerebral blood flow and anxiety: a correlation study in neurologically normal patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, G.; Cogorno, P.; Gris, A.
1989-06-01
Regional CBF (rCBF) was evaluated by the 133Xe inhalation method in 60 neurologically normal patients (30 men and 30 women), and hemispheric and regional values were correlated with anxiety measurements collected by a self-rating questionnaire before and after the examination. Statistically significant negative correlations between rCBF and anxiety measures were found. The rCBF reduction at high anxiety levels is in line with results previously reported by others and could be related to the lower performance levels for moderately high anxiety scores reported in the present population. This could perhaps be explained by a rearrangement of flow from cortical zones to deeper areas of the brain, classically known to be implicated in the control of emotions. However, these results should be interpreted cautiously, since they were obtained in patients and not in normal subjects.
Khosroshahi, Mohamad E; Rahmani, Mahya
2012-01-01
The aim of this research is to study the normalized fluorescence spectra (intensity variations and area under the fluorescence signal), relative quantum yield, extinction coefficient and intracellular properties of normal and malignant human bone cells. Using laser-induced fluorescence spectroscopy (LIFS) with excitation at 405 nm, the comparison of emission spectra of bone cells revealed that the fluorescence intensity and the area under the spectra of malignant bone cells were less than those of normal cells. In addition, the area ratio and shape factor were changed. We obtained two emission bands in the spectra of normal cells, centered at about 486 and 575 nm, and for malignant cells at about 482 and 586 nm, respectively, which are most likely attributable to NADH and riboflavins. Using the fluorescein sodium emission spectrum, the relative quantum yield of the bone cells was numerically determined.
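The relative (comparative) quantum-yield calculation implied above can be written in one line: the sample yield is scaled from a reference fluorophore by the ratio of integrated emission areas and absorbances. The fluorescein reference yield of ~0.95 is the standard literature value; all sample numbers below are mock.

```python
# Comparative quantum-yield calculation (mock values).
import numpy as np

def relative_quantum_yield(area_sample, abs_sample,
                           area_ref, abs_ref, phi_ref=0.95,
                           n_sample=1.33, n_ref=1.33):
    """Phi = Phi_ref * (A_s/A_r) * (Abs_r/Abs_s) * (n_s/n_r)^2."""
    return phi_ref * (area_sample / area_ref) * (abs_ref / abs_sample) \
                   * (n_sample / n_ref) ** 2

# Integrated areas under normalized emission spectra (arbitrary units).
print(relative_quantum_yield(area_sample=1.2e4, abs_sample=0.08,
                             area_ref=4.0e4, abs_ref=0.05))
```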
NASA Astrophysics Data System (ADS)
Wang, Xiao; Burghardt, Dirk
2018-05-01
This paper presents a new strategy for the generalization of discrete area features using a stroke grouping method and polarization transportation selection. The strokes are constructed from a refined proximity graph of the area features, and the refinement is controlled by four constraints to meet different grouping requirements. Area features which belong to the same stroke are assigned to the same group. The stroke-based strategy decomposes the generalization process into two sub-processes by judging whether or not the area features are related to strokes. Area features which belong to the same stroke normally present a linear-like pattern, and in order to preserve this kind of pattern, typification is chosen as the operator for the generalization work. The remaining area features, which are not related by strokes, are still distributed randomly and discretely, and selection is chosen for the generalization operation. For the purpose of retaining their original distribution characteristics, a Polarization Transportation (PT) method is introduced to implement the selection operation. Buildings and lakes are selected as representatives of artificial and natural area features, respectively, for the experiments. The generalized results indicate that by adopting the proposed strategy, the original distribution characteristics of the building and lake data can be preserved, and the visual perception is preserved as before.
Adaptation of rat soleus muscles to 4 wk of intermittent strain
NASA Technical Reports Server (NTRS)
Stauber, W. T.; Miller, G. R.; Grimmett, J. G.; Knack, K. K.
1994-01-01
The effect of repeated strains on rat soleus muscles was investigated by stretching active muscles 3 times/wk for 4 wk with two different methods of stretching. The adaptation of myofibers and noncontractile tissue was followed by histochemical techniques and computer-assisted image analysis. Muscle hypertrophy was seen in the slow-stretched muscles, which increased in mass by 13% and increased in myofiber cross-sectional area by 30%. In the fast-stretched muscle, mass increased by 10% but myofiber cross-sectional area actually decreased. This decrease in mean fiber area was the result of a population of very small fibers (population A) that coexisted with slightly smaller normal-sized fibers (population B). Fibers in population A did not have the distribution expected from atrophy compared with atrophic fibers from unloaded muscles; they were much smaller. In addition, there was a 44% increase in noncontractile tissue in the fast-stretched muscles. Thus, soleus muscles subjected to repeated strains respond differently to slow and fast stretching. Slow stretching results in typical muscle hypertrophy, whereas fast stretching produces somewhat larger muscles but with a mixture of small and normal-sized myofibers accompanied by a marked proliferation of noncontractile tissue.
Higashide, Tomomi; Ohkubo, Shinji; Hangai, Masanori; Ito, Yasuki; Shimada, Noriaki; Ohno-Matsui, Kyoko; Terasaki, Hiroko; Sugiyama, Kazuhisa; Chew, Paul; Li, Kenneth K. W.; Yoshimura, Nagahisa
2016-01-01
Purpose To identify the factors which significantly contribute to the thickness variabilities in macular retinal layers measured by optical coherence tomography with or without magnification correction of analytical areas in normal subjects. Methods The thickness of retinal layers {retinal nerve fiber layer (RNFL), ganglion cell layer plus inner plexiform layer (GCLIPL), RNFL plus GCLIPL (ganglion cell complex, GCC), total retina, total retina minus GCC (outer retina)} were measured by macular scans (RS-3000, NIDEK) in 202 eyes of 202 normal Asian subjects aged 20 to 60 years. The analytical areas were defined by three concentric circles (1-, 3- and 6-mm nominal diameters) with or without magnification correction. For each layer thickness, a semipartial correlation (sr) was calculated for explanatory variables including age, gender, axial length, corneal curvature, and signal strength index. Results Outer retinal thickness was significantly thinner in females than in males (sr2, 0.07 to 0.13) regardless of analytical areas or magnification correction. Without magnification correction, axial length had a significant positive sr with RNFL (sr2, 0.12 to 0.33) and a negative sr with GCLIPL (sr2, 0.22 to 0.31), GCC (sr2, 0.03 to 0.17), total retina (sr2, 0.07 to 0.17) and outer retina (sr2, 0.16 to 0.29) in multiple analytical areas. The significant sr in RNFL, GCLIPL and GCC became mostly insignificant following magnification correction. Conclusions The strong correlation between the thickness of inner retinal layers and axial length appeared to result from magnification effects. Outer retinal thickness may differ by gender and axial length independently of magnification correction. PMID:26814541
NASA Technical Reports Server (NTRS)
Siewert, R. D.
1972-01-01
Evacuation areas for accidental spills of toxic propellants along rail and highway shipping routes are defined to help local authorities reduce risks to people from excessive vapor concentrations. These criteria along with other emergency information are shown in propellant spill cards. The evacuation areas are based on current best estimates of propellant evaporation rates from various areas of spill puddles. These rates are used together with a continuous point-source, bi-normal model of plume dispersion. The rate at which the toxic plume disperses is based on a neutral atmospheric condition. This condition, which results in slow plume dispersion, represents the widest range of weather parameters which could occur during the day and nighttime periods. Evacuation areas are defined by the ground level boundaries of the plume within which the concentrations exceed the toxic Threshold Limit Value (TLV) or in some cases the Emergency Exposure Limit (EEL).
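A hedged sketch of a continuous point-source Gaussian ("bi-normal") plume model of the kind described above. The dispersion coefficients are generic Briggs-style power-law fits for neutral stability, and the source strength and limit value are illustrative, not the report's figures.

```python
# Gaussian plume with ground reflection; coefficients and limits are assumed.
import numpy as np

def plume_concentration(q, u, x, y, z=0.0, h=0.0):
    """Concentration (g/m^3) from a source of strength q (g/s) in wind u (m/s);
    x downwind, y crosswind, h release height."""
    sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)   # neutral-class fit (assumed)
    sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
    return (q / (2 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * (np.exp(-(z - h)**2 / (2 * sigma_z**2))
               + np.exp(-(z + h)**2 / (2 * sigma_z**2))))  # ground reflection

# Downwind distance where the centerline concentration falls below a limit.
x = np.linspace(50, 5000, 1000)
c = plume_concentration(q=100.0, u=3.0, x=x, y=0.0)
tlv = 5e-4                                          # illustrative limit, g/m^3
print("evacuate to ~%.0f m downwind" % x[c > tlv].max())
```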
Salt, A N; DeMott, J
1992-01-01
A physiologic technique was developed to measure endolymphatic cross-sectional area in vivo using tetramethylammonium (TMA) as a volume marker. The technique was evaluated in guinea pigs as an animal model. In the method, the cochlea was exposed surgically and TMA was injected into endolymph of the second turn at a constant rate by iontophoresis. The concentration of TMA was monitored during and after the injection using ion-selective electrodes. Cross-section estimates derived from the TMA concentration measurements were compared in normal animals and animals in which endolymphatic hydrops had been induced by ablation of the endolymphatic duct and sac 8 weeks earlier. The method demonstrated a mean increase in cross-sectional area of 258% in the hydropic group. Individually measured area values were compared with action potential threshold shifts and the magnitude of the endocochlear potential (EP). Hydropic animals typically showed an increase in threshold to 2 kHz stimuli and a decrease in EP. However, the degree of threshold shift or EP decrease did not correlate well with the degree of hydrops present.
NASA Astrophysics Data System (ADS)
Najafi, Ali; Karimpour, Mohammad Hassan; Ghaderi, Majid
2014-12-01
Using the fuzzy analytical hierarchy process (AHP) technique, we propose a method for mineral prospectivity mapping (MPM), which is commonly used for the exploration of mineral deposits. The fuzzy AHP is a popular technique which has been applied to multi-criteria decision-making (MCDM) problems. In this paper we used the fuzzy AHP and a geospatial information system (GIS) to generate a prospectivity model for Iron Oxide Copper-Gold (IOCG) mineralization on the basis of its conceptual model and geo-evidence layers derived from geological, geochemical, and geophysical data in the Taherabad area, eastern Iran. The fuzzy AHP was used to determine the weights belonging to each criterion. The knowledge of three geoscientists on the exploration of IOCG-type mineralization was applied to assign weights to the evidence layers in the fuzzy AHP MPM approach. After assigning normalized weights to all evidential layers, a fuzzy operator was applied to integrate the weighted evidence layers. Finally, to evaluate the ability of the applied approach to delineate reliable target areas, the locations of known mineral deposits in the study area were used. The results demonstrate acceptable outcomes for IOCG exploration.
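An illustrative fuzzy-overlay step of the kind used in such MPM workflows: AHP-derived weights scale the fuzzified evidence layers, which are then combined with the fuzzy gamma operator. The weights, gamma value, and layers below are mock values, not those of the Taherabad model.

```python
# Fuzzy gamma overlay of weighted evidence layers (mock weights and layers).
import numpy as np

def fuzzy_gamma(layers, gamma=0.9):
    """Combine fuzzy membership layers: (algebraic sum)^gamma * (algebraic
    product)^(1-gamma), the standard fuzzy gamma operator in MPM."""
    layers = np.clip(np.asarray(layers), 1e-9, 1 - 1e-9)
    alg_sum = 1 - np.prod(1 - layers, axis=0)      # fuzzy algebraic sum
    alg_prod = np.prod(layers, axis=0)             # fuzzy algebraic product
    return alg_sum**gamma * alg_prod**(1 - gamma)

rng = np.random.default_rng(0)
geology, geochem, geophys = rng.uniform(size=(3, 100, 100))
weights = np.array([0.5, 0.3, 0.2])                # e.g., from AHP pairwise matrices
favorability = fuzzy_gamma([w * l for w, l in zip(weights, (geology, geochem, geophys))])
print("top-5% prospective cells:", (favorability > np.quantile(favorability, 0.95)).sum())
```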
Hernández-Morera, Pablo; Castaño-González, Irene; Travieso-González, Carlos M.; Mompeó-Corredera, Blanca; Ortega-Santana, Francisco
2016-01-01
Purpose To develop a digital image processing method to quantify structural components (smooth muscle fibers and extracellular matrix) in the vessel wall stained with Masson’s trichrome, and a statistical method suitable for small sample sizes to analyze the results previously obtained. Methods The quantification method comprises two stages. The pre-processing stage improves tissue image appearance and the vessel wall area is delimited. In the feature extraction stage, the vessel wall components are segmented by grouping pixels with a similar color. The area of each component is calculated by normalizing the number of pixels of each group by the vessel wall area. Statistical analyses are implemented by permutation tests, based on resampling without replacement from the set of the observed data to obtain a sampling distribution of an estimator. The implementation can be parallelized on a multicore machine to reduce execution time. Results The methods have been tested on 48 vessel wall samples of the internal saphenous vein stained with Masson’s trichrome. The results show that the segmented areas are consistent with the perception of a team of doctors and demonstrate good correlation between the expert judgments and the measured parameters for evaluating vessel wall changes. Conclusion The proposed methodology offers a powerful tool to quantify some components of the vessel wall. It is more objective, sensitive and accurate than the biochemical and qualitative methods traditionally used. The permutation tests are suitable statistical techniques to analyze the numerical measurements obtained when the underlying assumptions of the other statistical techniques are not met. PMID:26761643
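A minimal permutation test of the kind described above: group labels are reshuffled (resampling without replacement) to build the null distribution of the mean difference. The area-fraction values are mock data.

```python
# Two-sided permutation test on the difference of group means (mock data).
import numpy as np

def permutation_test(a, b, n_perm=10000, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                     # relabel without replacement
        diff = pooled[:a.size].mean() - pooled[a.size:].mean()
        hits += abs(diff) >= abs(observed)
    return observed, hits / n_perm              # two-sided p-value

rng = np.random.default_rng(1)
muscle_a = rng.normal(0.55, 0.08, 24)           # mock smooth-muscle area fraction
muscle_b = rng.normal(0.48, 0.08, 24)
diff, p = permutation_test(muscle_a, muscle_b, rng=rng)
print(f"mean difference = {diff:.3f}, p = {p:.4f}")
```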
NASA Astrophysics Data System (ADS)
Ye, L.; Xu, X.; Luan, D.; Jiang, W.; Kang, Z.
2017-07-01
Crater-detection approaches can be divided into four categories: manual recognition, shape-profile fitting algorithms, machine-learning methods and geological information-based analysis using terrain and spectral data. The mainstream approach is shape-profile fitting. Many scholars use illumination gradient information to fit standard circles by the least-squares method. Although this approach has achieved good results, it is difficult to identify craters with poor "visibility" or complex structure and composition, and the accuracy of recognition is difficult to improve due to multiple solutions and noise interference. To address this problem, we propose a method for the automatic extraction of impact craters based on the spectral characteristics of moon rocks and minerals: 1) Under sunlight conditions, impact craters are extracted from MI by condition matching, and the positions and diameters of the craters are obtained. 2) Regolith is ejected when the lunar surface is impacted, and one of the elements of lunar regolith is iron; therefore, incorrectly extracted impact craters can be removed by judging whether a crater contains "non-iron" material. 3) Correctly extracted craters are divided into two types, simple and complex, according to their diameters. 4) The titanium information is obtained, the titanium distribution of each complex crater is matched against a normal distribution curve, the goodness of fit is calculated, and a threshold is set. The complex craters are thus divided into two types: those with a normal titanium distribution curve and those without. We validated the proposed method with MI acquired by SELENE. Experimental results demonstrate that the proposed method performs well in the test area.
The Morphology of Smoke Inhalation Injury in Sheep
1991-01-01
[Only fragments of this abstract are recoverable.] The specimens were dried by the critical point method. Figure captions: segments of intact epithelium (E) adjacent to necrotic areas (N), original magnification ×325; essentially normal lung (N), original magnification ×125. The extent and severity of the injury ... obstruction by desquamated necrotic endobronchial tissue and resultant hypoxia.
Defining the Dormant Tumor Microenvironment for Breast Cancer Prevention and Treatment
2011-09-01
Each group will include 12 rats. b) Breed rats in the parous group. After parturition, normalize the number of pups to eight. Ten days after ... solution ultrasonication assisted tryptic digestion (UATD) method. Parity arm completed. b) Analyzed the digested mammary ECM samples from ...
Olson, Charles W; Wagner, Galen S; Terkelsen, Christian Juhl; Stickney, Ronald; Lim, Tobin; Pahlm, Olle; Estes, E Harvey
2014-01-01
The purpose of this study is to present a new and improved method for translating the electrocardiographic changes of acute myocardial ischemia into a display which reflects the location and extent of the ischemic area and the associated culprit coronary artery. This method could be automated to present a graphic image of the ischemic area in a manner understandable by all levels of caregivers, from emergency transport personnel to the consulting cardiologist. Current methods for the ECG diagnosis of ST-elevation myocardial infarction (STEMI) are criteria-driven, complex, and beyond the interpretive capability of many caregivers. New methods are needed to accurately diagnose the presence of acute transmural myocardial ischemia in order to shorten a patient's clinical "door to balloon" time. The proposed new method could potentially provide the information needed to accomplish this objective. The new method improves the precision of diagnosis and quantification of ischemia by normalizing the ST-segment inputs from the standard 12-lead ECG and transforming them into a three-dimensional vector representation of the ischemia at the electrical center of the heart. The myocardial areas likely to be involved in this ischemia are separately analyzed to assess the probability that they contributed to the event. The source of the ischemia is revealed as a specific region of the heart, together with the likely location of the associated culprit coronary artery. Seventy 12-lead ECGs from subjects with known single-artery occlusion in one of the three main coronary arteries were selected to test this new method. Graphic plots of the distribution of ischemia as indicated by the method are consistent with the known occlusions. The analysis of the distribution of ischemic areas in the myocardium reveals that the relationships between leads with either ST elevation or ST depression provide critical information improving on current methods. Copyright © 2014 Elsevier Inc. All rights reserved.
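One ingredient of such a method can be sketched with textbook facts alone: recovering a frontal-plane ST-deviation vector from the limb leads by least squares over the standard hexaxial lead directions. The full method also uses the precordial leads and maps the 3D vector onto myocardial regions, which this fragment does not attempt; the example ST values are mock.

```python
# Frontal-plane ST vector from limb-lead ST deviations (illustrative fragment).
import numpy as np

# Standard frontal-plane lead axes (degrees): I, II, III, aVR, aVL, aVF.
axes = np.deg2rad([0, 60, 120, -150, -30, 90])
L = np.column_stack([np.cos(axes), np.sin(axes)])   # each lead projects the vector

def frontal_st_vector(st_mm):
    """st_mm: ST deviations (mm) in leads I, II, III, aVR, aVL, aVF."""
    v, *_ = np.linalg.lstsq(L, np.asarray(st_mm, float), rcond=None)
    magnitude = np.hypot(*v)
    angle = np.degrees(np.arctan2(v[1], v[0]))
    return magnitude, angle

# Inferior-lead ST elevation (II, III, aVF) points the vector toward ~+90 deg,
# consistent with an inferior ischemic region.
print(frontal_st_vector([0.0, 2.0, 2.5, -2.0, -1.0, 2.5]))
```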
Turkbey, Evrim B.; Jain, Aditya; Johnson, Craig; Redheuil, Alban; Arai, Andrew E.; Gomes, Antoinette S.; Carr, James; Hundley, W. Gregory; Teixido-Tura, Gisela; Eng, John; Lima, Joao A.C.; Bluemke, David A.
2013-01-01
PURPOSE To determine the normal size and wall thickness of the ascending thoracic aorta (AA) and their relationship with cardiovascular risk factors in a large population-based study. MATERIALS AND METHODS The mean AA luminal diameter was measured in 3573 Multi-Ethnic Study of Atherosclerosis (MESA) participants (age: 45-84 years) using gradient echo phase contrast cine MRI. Multiple linear regression models were used to evaluate the associations between risk factors and AA diameter. The median and upper normal limit (95th percentile) were defined in a "healthy" subgroup, as was AA wall thickness. RESULTS The upper limits of body-surface-area-indexed AA luminal diameter for the age categories 45-54, 55-64, 65-74, and 75-84 years are 21, 22, 22, and 28 mm/m2 in women and 20, 21, 22, and 23 mm/m2 in men, respectively. The mean AA wall thickness was 2.8 mm. Age, gender and body surface area were major determinants of AA luminal diameter (~+1 mm/10 years; ~+1.9 mm larger in men than in women; ~+1 mm/0.23 m2; p<0.001). The AA diameter in hypertensive subjects was 0.9 mm larger than in normotensive subjects (p<0.001). CONCLUSION AA diameter increases gradually with aging in both genders and among all race/ethnicities. Normal values of AA diameter are provided. PMID:23681649
Sredar, Nripun; Ivers, Kevin M.; Queener, Hope M.; Zouridakis, George; Porter, Jason
2013-01-01
En face adaptive optics scanning laser ophthalmoscope (AOSLO) images of the anterior lamina cribrosa surface (ALCS) represent a 2D projected view of a 3D laminar surface. Using spectral domain optical coherence tomography images acquired in living monkey eyes, a thin plate spline was used to model the ALCS in 3D. The 2D AOSLO images were registered and projected onto the 3D surface that was then tessellated into a triangular mesh to characterize differences in pore geometry between 2D and 3D images. Following 3D transformation of the anterior laminar surface in 11 normal eyes, mean pore area increased by 5.1 ± 2.0% with a minimal change in pore elongation (mean change = 0.0 ± 0.2%). These small changes were due to the relatively flat laminar surfaces inherent in normal eyes (mean radius of curvature = 3.0 ± 0.5 mm). The mean increase in pore area was larger following 3D transformation in 4 glaucomatous eyes (16.2 ± 6.0%) due to their more steeply curved laminar surfaces (mean radius of curvature = 1.3 ± 0.1 mm), while the change in pore elongation was comparable to that in normal eyes (−0.2 ± 2.0%). This 3D transformation and tessellation method can be used to better characterize and track 3D changes in laminar pore and surface geometries in glaucoma. PMID:23847739
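A sketch of the 3D transformation described above, under stated assumptions: a thin-plate spline (scipy's Rbf) fits the laminar depth from sparse surface samples, the 2D pore vertices are lifted onto the fitted surface, and triangulated mesh areas are compared before and after. All coordinates and curvatures are mock values.

```python
# Thin-plate spline lift of a 2D pore onto a curved surface (mock geometry).
import numpy as np
from scipy.interpolate import Rbf
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
# Sparse "OCT" samples of a curved surface z(x, y), radius of curvature ~1.3 mm.
xs, ys = rng.uniform(-0.6, 0.6, (2, 200))
zs = (xs**2 + ys**2) / (2 * 1.3) + 0.005 * rng.normal(size=200)
tps = Rbf(xs, ys, zs, function='thin_plate', smooth=1e-4)

# A "pore" traced in the 2D en face image: circle of radius 0.05 mm plus center.
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
px = np.append(0.4 + 0.05 * np.cos(t), 0.4)
py = np.append(0.05 * np.sin(t), 0.0)

def mesh_area(x, y, z):
    pts = np.column_stack([x, y, z])
    tri = Delaunay(np.column_stack([x, y]))
    a, b, c = (pts[tri.simplices[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

flat = mesh_area(px, py, np.zeros_like(px))
lifted = mesh_area(px, py, tps(px, py))
print(f"pore area increase after 3D lift: {100 * (lifted / flat - 1):.1f}%")
```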
Constitutive properties of adult mammalian cardiac muscle cells
NASA Technical Reports Server (NTRS)
Zile, M. R.; Richardson, K.; Cowles, M. K.; Buckley, J. M.; Koide, M.; Cowles, B. A.; Gharpuray, V.; Cooper, G. 4th
1998-01-01
BACKGROUND: The purpose of this study was to determine whether changes in the constitutive properties of the cardiac muscle cell play a causative role in the development of diastolic dysfunction. METHODS AND RESULTS: Cardiocytes from normal and pressure-hypertrophied cats were embedded in an agarose gel, placed on a stretching device, and subjected to a change in stress (sigma), and resultant changes in cell strain (epsilon) were measured. These measurements were used to examine the passive elastic spring, viscous damping, and myofilament activation. The passive elastic spring was assessed in protocol A by increasing the sigma on the agarose gel at a constant rate to define the cardiocyte sigma-versus-epsilon relationship. Viscous damping was assessed in protocol B from the loop area between the cardiocyte sigma-versus-epsilon relationship during an increase and then a decrease in sigma. In both protocols, myofilament activation was minimized by a reduction in [Ca2+]i. Myofilament activation effects were assessed in protocol C by defining cardiocyte sigma versus epsilon during an increase in sigma with physiological [Ca2+]i. In protocol A, the cardiocyte sigma-versus-epsilon relationship was similar in normal and hypertrophied cells. In protocol B, the loop area was greater in hypertrophied than normal cardiocytes. In protocol C, the sigma-versus-epsilon relation in hypertrophied cardiocytes was shifted to the left compared with normal cells. CONCLUSIONS: Changes in viscous damping and myofilament activation in combination may cause pressure-hypertrophied cardiocytes to resist changes in shape during diastole and contribute to diastolic dysfunction.
NASA Astrophysics Data System (ADS)
Jevtić, Dubravka R.; Avramov Ivić, Milka L.; Reljin, Irini S.; Reljin, Branimir D.; Plavec, Goran I.; Petrović, Slobodan D.; Mijin, Dušan Ž.
2014-06-01
An automated, computer-aided method for the differentiation and classification of malignant (M) and benign (B) cases by analyzing the UV/VIS spectra of pleural effusions is described. It was shown that with two independent objective features, the maximum of the Katz fractal dimension (KFDmax) and the area under the normalized UV/VIS absorbance curve (Area), highly reliable M-B classification is possible. In the Area-KFDmax space, M and B samples are linearly separable, thus permitting the use of a linear support vector machine as a classification tool. By analyzing 104 samples of UV/VIS spectra of pleural effusions (88 M and 16 B) collected from patients at the Clinic for Lung Diseases and Tuberculosis, Military Medical Academy in Belgrade, accuracies of 95.45% for M cases and 100% for B cases were obtained using the proposed method. It was shown that by applying some modifications, which are suggested in the paper, an accuracy of 100% for M cases can be reached.
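The two features are easy to state concretely. The sketch below implements the Katz fractal dimension, takes its maximum over sliding windows of the normalized absorbance curve together with the area under the curve, and feeds both to a linear SVM; the synthetic spectra, window length, and class shapes are assumptions, not the clinical data.

```python
# KFDmax and Area features with a linear SVM (synthetic spectra).
import numpy as np
from sklearn.svm import LinearSVC

def katz_fd(y):
    """Katz fractal dimension of a 1D curve."""
    pts = np.column_stack([np.linspace(0, 1, y.size), y])
    steps = np.diff(pts, axis=0)
    L = np.hypot(steps[:, 0], steps[:, 1]).sum()        # total curve length
    d = np.hypot(*(pts - pts[0]).T).max()               # max distance from start
    n = y.size - 1
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

def features(spectrum):
    s = spectrum / spectrum.max()                        # normalized absorbance
    windows = np.lib.stride_tricks.sliding_window_view(s, 50)
    kfd_max = max(katz_fd(w) for w in windows[::10])     # KFDmax over the curve
    return kfd_max, np.trapz(s)                          # (KFDmax, Area)

rng = np.random.default_rng(0)
wl = np.linspace(0, 1, 300)
spectra = [np.exp(-((wl - 0.3) / 0.1) ** 2) + 0.05 * rng.normal(size=300)
           for _ in range(20)]                           # mock "benign" class
spectra += [np.exp(-((wl - 0.3) / 0.25) ** 2) + 0.15 * rng.normal(size=300)
            for _ in range(20)]                          # mock "malignant" class
X = np.array([features(np.abs(s)) for s in spectra])
y = np.array([0] * 20 + [1] * 20)
print("linear SVM accuracy:", LinearSVC(dual=False).fit(X, y).score(X, y))
```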
Rudolph, G; Bechmann, M; Berninger, T; Kutschbach, E; Held, U; Tornow, R P; Kalpadakis, P; Zol'nikova, I V; Shamshinova, A M
2001-01-01
A new method of multifocal electroretinography making use of a scanning laser ophthalmoscope with a wavelength of 630 nm (SLO-m-ERG), evoking short spatial visual stimuli on the retina, is proposed. The algorithm for presenting the visual stimuli and analyzing the distribution of local electroretinograms over the surface of the retina is based on short m-sequences. Cross-correlation analysis yields a three-dimensional distribution of the bioelectrical activity of the retina in the central visual field. In normal subjects the cone bioelectrical activity is maximal in the macular area (corresponding to the density of cone distribution) and absent in the blind spot. The method detects the slightest pathological changes in the retina under control of the site of stimulation and the ophthalmoscopic picture of the fundus oculi. The site of the pathological process correlates with the topography of changes in the bioelectrical activity of the examined retinal area in diseases of the macular area and in retinitis pigmentosa detectable by ophthalmoscopy.
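The m-sequence machinery underlying this kind of multifocal stimulation can be sketched with a linear feedback shift register: a primitive polynomial gives a ±1 sequence of length 2^m − 1 whose two-valued autocorrelation is what makes decoding by cross-correlation possible. The register length and taps below are illustrative.

```python
# m-sequence from a Fibonacci LFSR (taps for x^4 + x^3 + 1, a primitive polynomial).
import numpy as np

def m_sequence(taps, m):
    """Binary m-sequence of length 2**m - 1, mapped to {+1, -1}."""
    state = np.ones(m, dtype=int)
    out = np.empty(2**m - 1, dtype=int)
    for i in range(out.size):
        out[i] = state[-1]
        fb = np.bitwise_xor.reduce(state[list(taps)])   # feedback bit
        state = np.roll(state, 1)
        state[0] = fb
    return 1 - 2 * out

seq = m_sequence(taps=[3, 2], m=4)
# Two-valued autocorrelation: 15 at zero lag, -1 at every other lag.
auto = [np.dot(seq, np.roll(seq, k)) for k in range(seq.size)]
print(seq, auto)
```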
NASA Astrophysics Data System (ADS)
Soupios, P. M.; Loupasakis, C.; Vallianatos, F.
2008-06-01
Nowadays, geophysical prospecting is used to resolve a diversity of geological, hydrogeological, environmental and geotechnical problems. Although many applications and much research have been reported for the countryside, only a few cases in the literature concern urban areas, mainly because of the high noise levels that degrade most geophysical methods and the spatial limitations that hinder normal method implementation. Among geophysical methods, electrical resistivity tomography has proven to be rapid and the most robust with regard to urban noise. This work presents a case study in the urban area of Chania (Crete Island, Greece), where electrical resistivity tomography (ERT) was applied for the detection and identification of possible buried ancient ruins or other man-made structures prior to the construction of a building. The results of the detailed geophysical survey indicated eight areas of interest exhibiting resistivity anomalies. Those anomalies were analysed and interpreted by combining the resistivity readings with the geotechnical borehole data and historical bibliographic reports referring to the 1940s (Xalkiadakis 1997 Industrial Archaeology in Chania Territory pp 51-62). The collected ERT data were processed with advanced algorithms to obtain a 3D model of the study area that depicts the subsurface structures of interest more clearly and accurately.
Registration algorithm of point clouds based on multiscale normal features
NASA Astrophysics Data System (ADS)
Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua
2015-01-01
The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed method comprises three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on the changes in magnitude of multiscale curvatures obtained by principal components analysis. A feature descriptor is then computed for each key point, consisting of 21 elements based on multiscale normal vectors and curvatures. Correspondences between a pair of point clouds are determined according to the similarity of key-point descriptors in the source and target point clouds, and are optimized using a random sample consensus (RANSAC) algorithm and clustering. Finally, singular value decomposition is applied to the optimized correspondences to obtain the rigid transformation matrix between the two point clouds. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better antinoise performance.
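The final SVD step is the standard Kabsch solution for the least-squares rigid transform; a self-contained sketch on synthetic correspondences:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding points. This is the standard
    SVD (Kabsch) solution used as the final step of registration.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical correspondences: rotate/translate a random cloud, then recover
rng = np.random.default_rng(1)
src = rng.standard_normal((100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true, atol=1e-8), t)
```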
NASA Astrophysics Data System (ADS)
Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai
2008-04-01
Surface reconstruction is an important task in the fields of 3D GIS, computer-aided design and computer graphics (CAD & CG), virtual simulation, and so on. Building on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. First, features are extracted from the point cloud using curvature extremes and a minimum spanning tree. By projecting local sample points onto fitted tangent planes and using the extracted features to guide and constrain the local triangulation and surface propagation, the topological relationships among sample points are established. For the constructed models, a process named consistent normal adjustment and regularization is applied to adjust the normal of each face so that a correct surface model is achieved. Experiments show that the presented approach inherits the straightforward implementation and high efficiency of traditional incremental surface reconstruction while avoiding improper propagation of normals across sharp edges, greatly improving the applicability of incremental surface reconstruction. Moreover, an appropriate k-neighborhood helps to recognize insufficiently sampled areas and boundary parts, so the presented approach can be used to reconstruct both open and closed surfaces without additional intervention.
Park, Sung Min; Lee, Jin Hong; Choi, Seong Wook
2014-12-01
The ventricular electrocardiogram (v-ECG) was developed for long-term monitoring of heartbeats in patients with a left ventricular assist device (LVAD) and does not normally have the functionality necessary to detect additional heart irregularities that can progress to critical arrhythmias. Although the v-ECG has the benefits of physiological optimization and counterpulsation control, when abnormal heartbeats occur, the v-ECG does not show the distinct abnormal waveform that enables easy detection of an abnormal heartbeat among normal heartbeats on the conventional ECG. In this study, the v-ECGs of normal and abnormal heartbeats are compared with each other with respect to peak-to-peak voltage, area, and maximal slopes, and a new method to detect abnormal heartbeats is suggested. In a series of animal experiments with three porcine models (Yorkshire pigs weighing 30-40 kg), a v-ECG and conventional ECG were taken simultaneously during LVAD perfusion. Clinical experts found 104 abnormal heartbeats from the saved conventional ECG data and confirmed that the other 3159 heartbeats were normal. Almost all of the abnormal heartbeats were premature ventricular contractions (PVCs), and there was short-term tachycardia for 3 s. A personal computer was used to automatically detect abnormal heartbeats with the v-ECG according to the new method, and its results were compared with the clinicians' results. The new method found abnormal heartbeats with 90% accuracy, and less than 15% of the total PVCs were missed. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
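A sketch of the suggested per-beat comparison, assuming beats are already segmented. The features follow the abstract (peak-to-peak voltage, area, maximal slopes), while the deviation threshold and waveforms are illustrative placeholders, not the study's criteria:

```python
import numpy as np

def beat_features(beat, fs=500.0):
    """Per-beat features used to separate abnormal from normal v-ECG beats:
    peak-to-peak voltage, area under the rectified waveform, maximal slopes."""
    beat = np.asarray(beat, dtype=float)
    p2p = beat.max() - beat.min()
    area = np.trapz(np.abs(beat), dx=1.0 / fs)
    slope = np.diff(beat) * fs
    return p2p, area, slope.max(), slope.min()

def is_abnormal(feats, ref, tol=0.25):
    """Flag a beat whose features deviate more than `tol` (fractional)
    from a normal-beat reference; the threshold is illustrative."""
    return any(abs(f - r) > tol * abs(r) for f, r in zip(feats, ref))

# Hypothetical use: reference taken as the median of known-normal beats
fs = 500.0
t = np.arange(0, 0.6, 1 / fs)
normal_beats = [np.sin(2 * np.pi * 3 * t) * np.exp(-5 * t) for _ in range(10)]
ref = np.median([beat_features(b, fs) for b in normal_beats], axis=0)
pvc = 1.6 * np.sin(2 * np.pi * 2 * t) * np.exp(-3 * t)  # wider, larger beat
print(is_abnormal(beat_features(pvc, fs), ref))
```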
Kwon, Junki; Choi, Jaewan; Shin, Joong Won; Lee, Jiyun; Kook, Michael S
2017-12-01
To assess the diagnostic ability of foveal avascular zone (FAZ) parameters to discriminate glaucomatous eyes with visual field defects (VFDs) in different locations (central vs. peripheral) from normal eyes. In total, 125 participants were separated into three groups: normal (n=45), glaucoma with peripheral VFD (PVFD, n=45), and glaucoma with central VFD (CVFD, n=35). The FAZ area, perimeter, and circularity, and parafoveal vessel density were calculated from optical coherence tomography angiography images. The diagnostic ability of the FAZ parameters and other structural parameters was determined according to glaucomatous VFD location. Associations between the FAZ parameters and central visual function were evaluated. A larger FAZ area and longer FAZ perimeter were observed in the CVFD group than in the PVFD and normal groups. The FAZ area, perimeter, and circularity were better in differentiating glaucomatous eyes with CVFDs from normal eyes [areas under the receiver operating characteristic curves (AUC), 0.78 to 0.88] than in differentiating PVFDs from normal eyes (AUC, 0.51 to 0.64). The FAZ perimeter had a similar AUC value to the circumpapillary retinal nerve fiber layer and macular ganglion cell-inner plexiform layer thickness for differentiating eyes with CVFDs from normal eyes (all P>0.05, DeLong test). The FAZ area was significantly correlated with central visual function (β=-112.7, P=0.035, multivariate linear regression). The FAZ perimeter had good diagnostic capability in differentiating glaucomatous eyes with CVFDs from normal eyes, and may be a potential diagnostic biomarker for detecting glaucomatous patients with CVFDs.
NASA Astrophysics Data System (ADS)
Liu, Jing; Skidmore, Andrew K.; Heurich, Marco; Wang, Tiejun
2017-10-01
As an important metric for describing vertical forest structure, the plant area index (PAI) profile is used for many applications including biomass estimation and wildlife habitat assessment. PAI profiles can be estimated with the vertically resolved gap fraction from airborne LiDAR data. Most research utilizes a height normalization algorithm to retrieve local or relative height by assuming the terrain to be flat. However, for many forests this assumption is not valid. In this research, the effect of topographic normalization of airborne LiDAR data on the retrieval of PAI profile was studied in a mountainous forest area in Germany. Results show that, although individual tree height may be retained after topographic normalization, the spatial arrangement of trees is changed. Specifically, topographic normalization vertically condenses and distorts the PAI profile, which consequently alters the distribution pattern of plant area density in space. This effect becomes more evident as the slope increases. Furthermore, topographic normalization may also undermine the complexity (i.e., canopy layer number and entropy) of the PAI profile. The decrease in PAI profile complexity is not solely determined by local topography, but is determined by the interaction between local topography and the spatial distribution of each tree. This research demonstrates that when calculating the PAI profile from airborne LiDAR data, local topography needs to be taken into account. We therefore suggest that for ecological applications, such as vertical forest structure analysis and modeling of biodiversity, topographic normalization should not be applied in non-flat areas when using LiDAR data.
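For readers unfamiliar with the height normalization step under discussion, here is a minimal sketch of the conventional procedure (nearest-cell DTM lookup; the rasters are synthetic). The paper's point is that this operation, while preserving individual tree height, distorts the vertical arrangement of returns on slopes:

```python
import numpy as np

def normalize_heights(points, dtm, origin, cell):
    """Convert LiDAR point elevations to height above ground.

    points : (N, 3) array of x, y, z in map coordinates
    dtm    : 2-D array of ground elevations
    origin : (x0, y0) of the DTM's lower-left corner
    cell   : DTM cell size
    """
    cols = ((points[:, 0] - origin[0]) / cell).astype(int)
    rows = ((points[:, 1] - origin[1]) / cell).astype(int)
    ground = dtm[rows, cols]             # nearest-cell ground elevation
    out = points.copy()
    out[:, 2] = points[:, 2] - ground    # relative (above-ground) height
    return out

# Hypothetical tilted terrain: a 30 m canopy return on a 20% slope
dtm = np.tile(np.arange(100) * 0.2, (100, 1))        # ground rises eastward
pts = np.array([[50.0, 50.0, dtm[50, 50] + 30.0]])   # one canopy return
print(normalize_heights(pts, dtm, (0.0, 0.0), 1.0))  # z becomes 30.0
```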
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahoo, Satiprasad; Dhar, Anirban, E-mail: anirban.dhar@gmail.com; Kar, Amlanjyoti
Environmental management of an area describes a policy for its systematic and sustainable environmental protection. In the present study, regional environmental vulnerability assessment in the Hirakud command area of Odisha, India is envisaged based on the Grey Analytic Hierarchy Process method (Grey-AHP) using integrated remote sensing (RS) and geographic information system (GIS) techniques. Grey-AHP combines the advantages of the classical analytic hierarchy process (AHP) and the grey clustering method for accurate estimation of weight coefficients. It is a new method for environmental vulnerability assessment. The environmental vulnerability index (EVI) uses natural, environmental and human impact related factors, e.g., soil, geology, elevation, slope, rainfall, temperature, wind speed, normalized difference vegetation index, drainage density, crop intensity, agricultural DRASTIC value, population density and road density. The EVI map has been classified into four environmental vulnerability zones (EVZs), namely 'low', 'moderate', 'high', and 'extreme', encompassing 17.87%, 44.44%, 27.81% and 9.88% of the study area, respectively. The EVI map indicates that the northern part of the study area is more vulnerable from an environmental point of view. The EVI map shows a close correlation with elevation. Effectiveness of the zone classification is evaluated by using the grey clustering method; general effectiveness lies between the 'better' and 'common' classes. This analysis demonstrates the potential applicability of the methodology. - Highlights: • Environmental vulnerability zone identification based on Grey Analytic Hierarchy Process (AHP) • The effectiveness evaluation by means of a grey clustering method with support from AHP • Use of grey approach eliminates the excessive dependency on the experience of experts.
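The overlay step behind the EVI map is a weighted linear combination of normalized factor rasters. A schematic sketch, with placeholder weights and class breakpoints rather than the study's Grey-AHP coefficients:

```python
import numpy as np

# Factor rasters normalized to [0, 1]; weights are placeholders, not the
# study's actual Grey-AHP coefficients.
factors = {
    "slope": np.random.rand(50, 50),
    "rainfall": np.random.rand(50, 50),
    "ndvi": np.random.rand(50, 50),
    "population_density": np.random.rand(50, 50),
}
weights = {"slope": 0.3, "rainfall": 0.3, "ndvi": 0.2, "population_density": 0.2}

evi = sum(weights[k] * factors[k] for k in factors)  # weighted linear overlay

# Classify into the four vulnerability zones with illustrative breakpoints
zones = np.digitize(evi, bins=[0.25, 0.5, 0.75])     # 0 = low ... 3 = extreme
for i, name in enumerate(["low", "moderate", "high", "extreme"]):
    print(name, f"{(zones == i).mean():.1%}")
```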
A normalization model suggests that attention changes the weighting of inputs between visual areas.
Ruff, Douglas A; Cohen, Marlene R
2017-05-16
Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1-MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations.
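The divisive operation at the heart of the model is worth making explicit. A schematic form of the normalization equation, with notation chosen here rather than taken from the paper (E_i, excitatory drive from V1 unit i; w_i, the V1-to-MT connection weight that attention is proposed to modulate; sigma, a semi-saturation constant):

\[
R_{\mathrm{MT}} \;=\; \frac{\displaystyle\sum_i w_i\, E_i}{\displaystyle \sigma + \sum_j E_j}
\]

Under this form, attention acting on the weights w_i changes how V1 activity propagates to MT without requiring a gain change within either area, which is the distinction the recording and microstimulation experiments are designed to test.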
Goedert, James J.; Gong, Yangming; Hua, Xing; Zhong, Huanzi; He, Yimin; Peng, Peng; Yu, Guoqin; Wang, Wenjing; Ravel, Jacques; Shi, Jianxin; Zheng, Ying
2015-01-01
Background Screening for colorectal cancer (CRC) and precancerous colorectal adenoma (CRA) can detect curable disease. However, participation in colonoscopy and sensitivity of fecal heme for CRA are low. Methods Microbiota metrics were determined by Illumina sequencing of 16S rRNA genes amplified from DNA extracted from feces self-collected in RNAlater. Among fecal immunochemical test-positive (FIT+) participants, colonoscopically-defined normal versus CRA patients were compared by regression, permutation, and random forest plus leave-one-out methods. Findings Of 95 FIT+ participants, 61 had successful fecal microbiota profiling and colonoscopy, identifying 24 completely normal patients, 20 CRA patients, 2 CRC patients, and 15 with other conditions. Phylum-level fecal community composition differed significantly between CRA and normal patients (permutation P = 0.02). Rank phylum-level abundance distinguished CRA from normal patients (area under the curve = 0.767, permutation P = 0.006). CRA prevalence was 59% in phylum-level cluster B versus 20% in cluster A (exact P = 0.01). Most of the difference reflected 3-fold higher median relative abundance of Proteobacteria taxa (Wilcoxon signed-rank P = 0.03, positive predictive value = 67%). Antibiotic exposure and other potential confounders did not affect the associations. Interpretation If confirmed in larger, more diverse populations, fecal microbiota analysis might be employed to improve screening for CRA and ultimately to reduce mortality from CRC. PMID:26288821
Measurement of aspheric mirror by nanoprofiler using normal vector tracing
NASA Astrophysics Data System (ADS)
Kitayama, Takao; Shiraji, Hiroki; Yamamura, Kazuya; Endo, Katsuyoshi
2016-09-01
Aspheric or free-form optics with high accuracy are necessary in many fields, such as third-generation synchrotron radiation and extreme-ultraviolet lithography. Therefore, the demand for methods that measure aspheric or free-form surfaces with nanometer accuracy is increasing. The purpose of our study is to develop a non-contact technology that measures aspheric or free-form surfaces directly with high repeatability. To this end we have developed a three-dimensional nanoprofiler that detects the normal vectors of a sample surface. The measurement principle is based on the straightness of laser light and the accurate motion of rotational goniometers. The machine consists of four rotational stages, one translational stage, and an optical head containing a quadrant photodiode (QPD) and a laser source. In this method, the five stages are controlled so that the reflected beam retraces the incident beam, and the normal vectors and surface coordinates are determined from the signals of the goniometers, the translational stage, and the QPD. A three-dimensional figure is obtained from the normal vectors and their coordinates by a surface-reconstruction algorithm. To evaluate the performance of this machine, we measured a concave aspheric mirror 150 mm in diameter and succeeded in measuring the full 150-mm-diameter area. We also observed the influence of the machine's systematic errors, simulated this influence, and subtracted it from the measurement result.
Nitrate in the Mississippi River and its tributaries, 1980 to 2008: Are we making progress?
Sprague, Lori A.; Hirsch, Robert M.; Aulenbach, Brent T.
2011-01-01
Changes in nitrate concentration and flux between 1980 and 2008 at eight sites in the Mississippi River basin were determined using a new statistical method that accommodates evolving nitrate behavior over time and produces flow-normalized estimates of nitrate concentration and flux that are independent of random variations in streamflow. The results show that little consistent progress has been made in reducing riverine nitrate since 1980, and that flow-normalized concentration and flux are increasing in some areas. Flow-normalized nitrate concentration and flux increased between 9 and 76% at four sites on the Mississippi River and a tributary site on the Missouri River, but changed very little at tributary sites on the Ohio, Iowa, and Illinois Rivers. Increases in flow-normalized concentration and flux at the Mississippi River at Clinton and Missouri River at Hermann were more than three times larger than at any other site. The increases at these two sites contributed much of the 9% increase in flow-normalized nitrate flux leaving the Mississippi River basin. At most sites, concentrations increased more at low and moderate streamflows than at high streamflows, suggesting that increasing groundwater concentrations are having an effect on river concentrations.
[Objective measurement of normal nasality in the Saxony dialect].
Müller, R; Beleites, T; Hloucal, U; Kühn, M
2000-12-01
In the United States of America, the nasometer was developed by Fletcher as an objective method for measuring nasality. There are no accepted normal values for comparable test materials in the German language. The aim of this study was to examine the auditively normal nasality of Saxon-speaking people with the nasometer. The nasalance of 51 healthy Saxon-speaking subjects with auditively normal nasality was measured with a model 6200 nasometer (Kay-Elemetrics, U.S.A.). The text materials used were the vowels "a", "e", "i", "o", and "u", the sentences "Die Schokolade ist sehr lecker" ("The chocolate is very tasty") and "Nenne meine Mama Mimi" ("Name my mama Mimi"), and the texts of "North wind and sun", "A children's birthday", and an arbitrary selection from Strittmatter. The mean nasalance was 17.7% for the vowels, 13.0% for the sentence containing no nasal sounds, and 67.2% for the sentence containing many nasal sounds. The mean value for the texts was 33-41%. The results for the texts agreed well with those of Reuter (1997), who examined people from the state of Brandenburg. A range from 20% to 55% is suggested as the normal value for nasalance in the German-speaking area.
Evidence for insulin resistance in nonobese patients with polycystic ovarian disease.
Jialal, I; Naiker, P; Reddi, K; Moodley, J; Joubert, S M
1987-05-01
In this study seven normal weight Indian patients with polycystic ovarian disease (PCOD) with no evidence of acanthosis nigricans and 7 age- and weight-matched normal Indian women were studied to determine whether PCOD patients were insulin-resistant. While all 14 women had normal glucose tolerance, the PCOD women had significantly higher mean plasma glucose levels at 30 and 60 min and higher mean incremental glucose areas [incremental areas: PCOD, 9.0 +/- 2.2 (+/- SEM); normal women, 4.0 +/- 0.8 mmol/L; P less than 0.05]. Insulin responses were significantly higher in the PCOD compared to normal women (incremental areas: PCOD, 623.8 +/- 78.3; normal women, 226.2 +/- 30.3 microU/mL; P less than 0.001). Both serum testosterone and androstenedione levels correlated with the insulin areas (r = 0.82; P less than 0.001 and r = 0.86; P less than 0.001, respectively). [125I] Insulin binding to erythrocytes revealed decreased maximum specific binding in the PCOD women (6.9 +/- 0.6%) compared to that in normal women (9.2 +/- 0.7%; P less than 0.02). While Scatchard analysis revealed similar receptor numbers, ID50 values demonstrated decreased receptor affinity in the women with PCOD. In conclusion, in the absence of acanthosis nigricans, nonobese patients with PCOD are insulin resistant, and this insulin resistance correlates with the hyperandrogenism.
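The "incremental area" summaries are trapezoidal areas above the fasting baseline. A minimal sketch (synthetic OGTT values; whether negative increments are clipped varies between studies and is not handled here):

```python
import numpy as np

def incremental_area(t, conc):
    """Incremental area under a glucose/insulin curve: trapezoidal area
    above the fasting (t=0) baseline, the usual OGTT summary here."""
    t = np.asarray(t, dtype=float)
    c = np.asarray(conc, dtype=float) - conc[0]   # subtract baseline
    return np.trapz(c, t)

# Hypothetical OGTT samples (mmol/L at 0, 30, 60, 90, 120 min)
t = [0, 30, 60, 90, 120]
glucose = [4.8, 8.1, 7.4, 6.0, 5.1]
print(f"incremental glucose area: {incremental_area(t, glucose):.1f} mmol/L*min")
```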
Ghezzi, Michele; Tenero, Laura; Piazza, Michele; Bodini, Alessandro; Piacentini, Giorgio
2017-01-01
Structured Light Plethysmography (SLP) is a non-invasive method to study chest and abdominal movement during breathing and can identify abnormal contributions from different regions of the chest. A patient (M.D.) hospitalized for pneumonia underwent SLP and spirometry at admission (T0), after 48 hours (T1), and after one month (T2). SLP parameters showed expiratory flow limitation, consistent with the spirometric parameters collected, and reduced motion in the area affected by pneumonia, with improvement and normalization at T1 and T2. This method gave useful information about the contribution of the lung area affected by pneumonia to the respiratory movement, so a possible use can be envisaged in the follow-up of children affected by pneumonia or other respiratory diseases who are unable to perform a spirometric test.
Developing the Cleanliness Requirements for an Organic-detection Instrument MOMA-MS
NASA Technical Reports Server (NTRS)
Perry, Radford; Canham, John; Lalime, Erin
2015-01-01
The cleanliness requirements for an organic-detection instrument, like the Mars Organic Molecule Analyzer Mass Spectrometer (MOMA-MS), on a Planetary Protection Class IVb mission can be extremely stringent. These include surface molecular and particulate cleanliness, outgassing, and bioburden. The prime contractor for the European Space Agency's ExoMars 2018 project, Thales Alenia Space Italy, provided requirements based on a standard, conservative approach of defining limits, which yielded levels that are unverifiable by standard cleanliness verification methods. Additionally, the conservative method for determining contamination surface area relies on underestimation, while the conservative bioburden surface area relies on overestimation, resulting in inconsistencies in the normalized reporting. This presentation will provide a survey of the challenge of defining requirements that can be reasonably verified and still remain appropriate to the core science of the ExoMars mission.
Thermal inertia mapping of below ground objects and voids
NASA Astrophysics Data System (ADS)
Del Grande, Nancy K.; Ascough, Brian M.; Rumpf, Richard L.
2013-05-01
Thermal inertia (effusivity) contrast marks the borders of naturally heated below-ground object and void sites. The Dual Infrared Effusivity Computed Tomography (DIRECT) method, patent pending, detects and locates enhanced heat flows from below-ground object and void sites in a given area. DIRECT maps show contrasting surface temperature differences between sites with normal soil and sites with soil disturbed by subsurface hollow or semi-empty object voids (or air gaps) at varying depths. DIRECT utilizes an empirical database created to optimize the scheduling of daily airborne thermal surveys to view and characterize unseen object and void types, depths, and volumes in "blind" areas.
Owsley, Cynthia; McGwin, Gerald; Elgin, Jennifer; Wood, Joanne M.
2014-01-01
Purpose. To compare self-assessed driving habits and skills of licensed drivers with central visual loss who use bioptic telescopes to those of age-matched normally sighted drivers, and to examine the association between bioptic drivers' impressions of the quality of their driving and ratings by a “backseat” evaluator. Methods. Participants were licensed bioptic drivers (n = 23) and age-matched normally sighted drivers (n = 23). A questionnaire was administered addressing driving difficulty, space, quality, exposure, and, for bioptic drivers, whether the telescope was helpful in on-road situations. Visual acuity and contrast sensitivity were assessed. Information on ocular diagnosis, telescope characteristics, and bioptic driving experience was collected from the medical record or in interview. On-road driving performance in regular traffic conditions was rated independently by two evaluators. Results. Like normally sighted drivers, bioptic drivers reported no or little difficulty in many driving situations (e.g., left turns, rush hour), but reported more difficulty under poor visibility conditions and in unfamiliar areas (P < 0.05). Driving exposure was reduced in bioptic drivers (driving 250 miles per week on average vs. 410 miles per week for normally sighted drivers, P = 0.02), but driving space was similar to that of normally sighted drivers (P = 0.29). All but one bioptic driver used the telescope in at least one driving task, and 56% used the telescope in three or more tasks. Bioptic drivers' judgments about the quality of their driving were very similar to backseat evaluators' ratings. Conclusions. Bioptic drivers show insight into the overall quality of their driving and areas in which they experience driving difficulty. They report using the bioptic telescope while driving, contrary to previous claims that it is primarily used to pass the vision screening test at licensure. PMID:24370830
Kehl, Sven; Eckert, Sven; Berlit, Sebastian; Tuschy, Benjamin; Sütterlin, Marc; Siemer, Jörn
2013-11-01
The purpose of this study was to develop new formulas for the expected fetal lung area-to-head circumference ratio in normal singleton pregnancies between 20 and 40 weeks' gestation. The lung-to-head ratio and complete fetal biometric parameters of 126 fetuses between 20 and 40 weeks' gestation were prospectively measured. The lung-to-head ratio was measured by 3 different methods (longest diameter, anteroposterior diameter, and tracing). Formulas for predicting right and left lung-to-head ratios with regard to gestational age and biometric parameters were derived by stepwise regression analysis. New formulas for calculating right and left lung-to-head ratios by each measurement method were derived. The formulas included gestational age only and no biometric parameters. The new formulas for estimating the expected lung-to-head ratio by the 3 different methods in normal singleton pregnancies up to 40 weeks' gestation may help improve the prognostic power of observed-to-expected lung-to-head ratio assessment in fetuses with congenital diaphragmatic hernias.
Bhat, Sujatha; Kartha, Vasudevan Bhaskaran; Rai, Lavanya; Chidangil, Santhosh
2015-01-01
Cervical cancer, the second most common cancer in women, progresses silently over long periods before producing any clinical manifestation. Research on early detection of this condition using proteomic techniques is of very recent origin. We used high-performance liquid chromatography combined with a laser-induced fluorescence method in our lab to record the protein profiles of tissue homogenate, cell lysate, and serum samples of normal and different stages of malignant conditions of the cervix. Information on protein markers in the protein profiles was derived using various data processing methods, including curve resolution. The variations in relative intensities of different peaks, with respect to peak height, width, and area under the curve, were compared across sample types to obtain information on the concentrations of the various proteins and their significance in the induction and metastasis of cervical cancer. The method can be used for diagnosis and for follow-up of progression, remission, and therapeutic response in cervical malignancy. © The Author [2014]. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Lai, Wenqing; Wang, Yuandong; Li, Wenpeng; Sun, Guang; Qu, Guomin; Cui, Shigang; Li, Mengke; Wang, Yongqiang
2017-10-01
Based on long-term vibration monitoring of the No. 2 oil-immersed flat wave reactor in the ±500 kV converter station in East Mongolia, vibration signals in the normal state and in the core loose fault state were saved. Through time-frequency analysis of the signals, the vibration characteristics of the core loose fault were obtained, and a fault diagnosis method based on the dual-tree complex wavelet transform (DT-CWT) and support vector machine (SVM) is proposed. The vibration signals were analyzed by DT-CWT, and the energy entropy of the vibration signals was taken as the feature vector; the support vector machine was used to train and test the feature vector, realizing accurate identification of the core loose fault of the flat wave reactor. Across many groups of normal and core loose fault state vibration signals, the diagnostic accuracy reached 97.36%. The effectiveness and accuracy of the method for fault diagnosis of the flat wave reactor core are thus verified.
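A sketch of the energy-entropy feature extraction and SVM classification, using PyWavelets' real discrete wavelet transform as a stand-in for the dual-tree complex wavelet transform (the signals, wavelet choice, and fault signature are synthetic):

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def energy_entropy_features(signal, wavelet="db4", level=5):
    """Wavelet energy-entropy feature vector: relative band energies plus
    the Shannon entropy over those energies."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()                 # relative band energies
    entropy = -np.sum(p * np.log(p + 1e-12))      # energy entropy
    return np.append(p, entropy)

# Hypothetical vibration records: 100 Hz tone = "normal", added 300 Hz = "fault"
rng = np.random.default_rng(2)
t = np.arange(0, 1, 1 / 2000)
def record(fault):
    x = np.sin(2 * np.pi * 100 * t) + 0.1 * rng.standard_normal(t.size)
    return x + (0.8 * np.sin(2 * np.pi * 300 * t) if fault else 0.0)

X = np.array([energy_entropy_features(record(i % 2 == 1)) for i in range(40)])
y = np.array([i % 2 for i in range(40)])
clf = SVC(kernel="rbf").fit(X[:30], y[:30])
print("test accuracy:", (clf.predict(X[30:]) == y[30:]).mean())
```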
NASA Astrophysics Data System (ADS)
Feng, Lili; Jia, Zhiqing; Li, Qingxue
2016-12-01
Aeolian desertification is poorly understood despite its importance as an indicator of environmental change. Here we exploit Gaofen-1 (GF-1) and Moderate Resolution Imaging Spectroradiometer (MODIS) data to develop a quick and efficient method for large-scale aeolian desertification dynamic monitoring in northern China. The method is based on a Normalized Difference Desertification Index (NDDI) calculated from bands 1 and 2 of MODIS reflectance data (MOD09A1). We then analyze the spatio-temporal change of the aeolian desertification area and detect its possible influencing factors, such as precipitation, temperature, wind speed, and population, with a Convergent Cross Mapping (CCM) model. The analysis indicates feedback (bi-directional causality) between the aeolian desertification area and population (P < 0.05), although the forcing of desertification by population is weak. Meanwhile, we find that the aeolian desertification area is significantly affected by temperature, as expected, whereas there is no obvious forcing between the aeolian desertification area and precipitation. The desertification area and wind speed likewise show feedback (bi-directional causality) with a significant signal (P < 0.01). We infer that aeolian desertification is affected more by natural factors than by anthropogenic factors. For desertification in China, we are convinced that prevention is better than control.
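The NDDI step is a standard normalized-difference band ratio. A minimal sketch; which MOD09A1 band enters first (and hence the sign convention) is an assumption here, so treat the sign as illustrative:

```python
import numpy as np

def normalized_difference(b_a, b_b):
    """Generic normalized-difference index: (a - b) / (a + b).

    For MOD09A1, band 1 is red and band 2 is near-infrared; the band order
    used in the paper's NDDI is an assumption in this sketch.
    """
    b_a, b_b = np.asarray(b_a, float), np.asarray(b_b, float)
    return (b_a - b_b) / (b_a + b_b + 1e-12)

# Hypothetical 8-day composite reflectances for one pixel
red, nir = 0.32, 0.38
print(f"NDDI-style index: {normalized_difference(red, nir):+.3f}")
```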
A Normalized Sunspot-Area Series Starting in 1832: An Update
NASA Astrophysics Data System (ADS)
Carrasco, V. M. S.; Vaquero, J. M.; Gallego, M. C.; Sánchez-Bajo, F.
2016-11-01
A new normalized sunspot-area series has been reconstructed from the series obtained by the Royal Greenwich Observatory and other contemporary institutions for the period 1874 - 2008 and the area series compiled by De la Rue, Stewart, and Loewy from 1832 to 1868. Since the two sets of series do not overlap in time, we used the new version of sunspot index number (Version 2) published by Sunspot Index and Long-term Solar Observations (SILSO) as a link between them. We also present a spectral analysis of the normalized-area series in search of periodicities beyond the well-known solar cycle of 11 years and a study of the Waldmeier effect in the new version of sunspot number and the sunspot-area series presented in this study. We conclude that while this effect is significant in the new series of sunspot number, it has a weak relationship with the sunspot-area series.
Automated retinal vessel type classification in color fundus images
NASA Astrophysics Data System (ADS)
Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.
2013-02-01
Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alternations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted on each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method in a previously unseen test data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of AVR measurement and 91.5% of AUC in the ROI of tortuosity measurement. The proposed AV classification method has the potential to assist automatic cardiovascular disease early detection and risk analysis.
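A sketch of the PLS classification stage, assuming per-segment feature vectors (color, color variation, morphology) are already extracted; the feature count, data, and 0.5 decision threshold are illustrative:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical training set: rows are vessel segments, 1 = artery, 0 = vein
rng = np.random.default_rng(3)
X_train = rng.standard_normal((51, 12))        # 12 hypothetical features
y_train = rng.integers(0, 2, size=51)

pls = PLSRegression(n_components=4).fit(X_train, y_train.astype(float))

X_test = rng.standard_normal((10, 12))
scores = pls.predict(X_test).ravel()           # continuous PLS response
labels = (scores > 0.5).astype(int)            # threshold into artery/vein
print(labels)
```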
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis
Lin, Johnny; Bentler, Peter M.
2012-01-01
Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and the Satorra-Bentler mean-scaled statistic were developed under the presumption of non-normality in the factors and errors. This paper applies them to the new case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent, and Bibby's study of students tested for their ability in five content areas (either open or closed book) were used to illustrate the real-world performance of this statistic. PMID:23144511
Detection of Tree Crowns Based on Reclassification Using Aerial Images and LIDAR Data
NASA Astrophysics Data System (ADS)
Talebi, S.; Zarea, A.; Sadeghian, S.; Arefi, H.
2013-09-01
Tree detection using aerial sensors has been a focus of many researchers in recent decades, in fields including Remote Sensing and Photogrammetry. This paper aims to detect trees in complex city areas using aerial imagery and laser scanning data. Our methodology is a hierarchical unsupervised method consisting of several primitive operations, divided into three sections: the first uses aerial imagery, and the second and third use laser scanner data. In the first section, a vegetation cover mask is created for both sunny and shadowed areas. In the second section, the Rate of Slope Change (RSC) is used to eliminate grasses. In the third section, a Digital Terrain Model (DTM) is obtained from the LiDAR data; using the DTM and the Digital Surface Model (DSM), the normalized Digital Surface Model (nDSM) is derived, and objects lower than a specific height are eliminated. The three resulting layers are then multiplied to obtain the final result layer, which is smoothed by morphological operations. The result layer was submitted to ISPRS WG III/4 for evaluation. The evaluation shows that our method ranks well against the other participants' methods in ISPRS WG III/4 in terms of five indices: area-based completeness, area-based correctness, object-based completeness, object-based correctness, and boundary RMS. Being unsupervised and automatic, the method can be further improved and integrated with other methods to obtain the best results.
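A minimal sketch of the nDSM computation and the final mask multiplication on toy rasters (the 2 m height threshold is illustrative, not the paper's value):

```python
import numpy as np

dsm = np.array([[12.0, 30.0], [11.5, 28.0]])   # surface elevations
dtm = np.array([[11.8, 12.0], [11.4, 12.1]])   # ground elevations
ndsm = dsm - dtm                                # height above ground

veg_mask = np.array([[1, 1], [0, 1]])           # from imagery (incl. shadow)
rsc_mask = np.array([[1, 1], [1, 1]])           # grass removed via RSC
height_mask = (ndsm > 2.0).astype(int)          # drop objects below 2 m

trees = veg_mask * rsc_mask * height_mask       # final multiplication step
print(trees)                                     # [[0 1] [0 1]]
```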
Lessing, P.; Messina, C.P.; Fonner, R.F.
1983-01-01
Landslide risk can be assessed by evaluating geological conditions associated with past events. A sample of 2,416 slides from urban areas in West Virginia, each with 12 associated geological factors, has been analyzed using SAS computer methods. In addition, selected data have been normalized to account for the areal distribution of rock formations, soil series, and slope percents. Final calculations yield landslide risk assessments, with scores of 1.50 or greater denoting high risk. The simplicity of the method provides for a rapid, initial assessment prior to financial investment. However, it does not replace on-site investigations, nor excuse poor construction. © 1983 Springer-Verlag New York Inc.
1994-08-01
[List-of-tables residue from the appendix: annual precipitation, 30-year normals (1951-1980); mean monthly and annual temperature, 30-year normals (1951-1980); average ...] ... Environmental Quality (DEQ). CLIMATE: The climate of the area is humid subtropical. Annual average temperature in the project area is 68°F, with monthly normal temperatures varying from 82°F in July to 53°F in January. Average annual precipitation over the area is 63 inches, varying from a monthly...
Initial assessment of facial nerve paralysis based on motion analysis using an optical flow method.
Samsudin, Wan Syahirah W; Sundaraj, Kenneth; Ahmad, Amirozi; Salleh, Hasriah
2016-01-01
An initial assessment method is proposed that can classify paralysis and categorize its severity into one of six levels according to the House-Brackmann (HB) system, based on facial landmark motion measured with an Optical Flow (OF) algorithm. The desired landmarks were obtained from video recordings of 5 normal and 3 Bell's Palsy subjects and tracked using the Kanade-Lucas-Tomasi (KLT) method. A new scoring system based on motion analysis using area measurement is proposed. This scoring system uses the individual scores from the facial exercises and grades the paralysis based on the HB system. The proposed method has obtained promising results and may play a pivotal role in improved rehabilitation programs for patients.
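A sketch of the KLT tracking step with OpenCV's pyramidal Lucas-Kanade implementation; the video file name is hypothetical, and detected corner points stand in for clinically chosen facial landmarks:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("facial_exercise.mp4")    # hypothetical recording
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Detect trackable corner points (stand-ins for clinical facial landmarks)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01,
                              minDistance=10)

ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Pyramidal Lucas-Kanade optical flow (the KLT step)
new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)

# Motion magnitude per landmark; displacement areas feed the scoring system
good_new = new_pts[status.ravel() == 1]
good_old = pts[status.ravel() == 1]
motion = np.linalg.norm(good_new - good_old, axis=-1)
print("mean landmark displacement (px):", float(motion.mean()))
cap.release()
```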
Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya
2013-01-01
Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data are model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data-analysis originating from multiple experimental runs. In the second part, we apply cyclic-Loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods, if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607
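For concreteness, a compact sketch of cyclic loess normalization on log-intensities (synthetic data with one systematically shifted run; the loess span and iteration count are illustrative):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def cyclic_loess(X, frac=0.5, n_iter=2):
    """Cyclic loess normalization of log-intensities.

    X: (features, samples) array of log-transformed LC-MS intensities.
    For every sample pair, a loess curve is fit to M (difference) versus
    A (average) and half of the fitted trend is removed from each sample.
    """
    X = X.copy()
    n = X.shape[1]
    for _ in range(n_iter):
        for i in range(n):
            for j in range(i + 1, n):
                A = 0.5 * (X[:, i] + X[:, j])
                M = X[:, i] - X[:, j]
                fit = lowess(M, A, frac=frac, return_sorted=False)
                X[:, i] -= fit / 2.0
                X[:, j] += fit / 2.0
    return X

# Hypothetical block effect: third sample measured with a systematic offset
rng = np.random.default_rng(4)
base = rng.normal(10, 1, size=(200, 3))
base[:, 2] += 0.8                  # systematic shift in one run
norm = cyclic_loess(base)
print(norm.mean(axis=0))           # column means pulled together
```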
Statistical validation of normal tissue complication probability models.
Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis
2012-09-01
To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
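A sketch of the validation machinery on synthetic data: an L1-penalized (LASSO-style) logistic NTCP model, cross-validated AUC, and a permutation test. The full double (nested) cross-validation loop of the paper is omitted for brevity:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, permutation_test_score

# Hypothetical data: binary complication outcome, dosimetric/clinical predictors
rng = np.random.default_rng(5)
X = rng.standard_normal((120, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(120) > 0).astype(int)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

# Cross-validated AUC: the model-assessment metric used in the paper
auc = cross_val_score(lasso, X, y, cv=5, scoring="roc_auc")
print("CV AUC: %.3f +/- %.3f" % (auc.mean(), auc.std()))

# Permutation test: is the observed performance better than chance?
score, perm_scores, pvalue = permutation_test_score(
    lasso, X, y, cv=5, scoring="roc_auc", n_permutations=200, random_state=0)
print("permutation p-value: %.3f" % pvalue)
```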
Venkatesiah, Sowmya S; Kale, Alka D; Hallikeremath, Seema R; Kotrashetti, Vijayalakshmi S
2013-01-01
Lichen planus is a chronic inflammatory mucocutaneous disease that clinically and histologically resembles lichenoid lesions, although the latter have a different etiology. Though criteria have been suggested for differentiating oral lichen planus from lichenoid lesions, confusion still prevails. The aim was to study the cellular and nuclear volumetric features in the epithelium of normal mucosa, lichen planus, and lichenoid lesions to determine variations, if any. A retrospective study was done on 25 histologically diagnosed cases each of oral lichen planus, oral lichenoid lesions, and normal oral mucosa. Cellular and nuclear morphometric measurements were assessed on hematoxylin and eosin sections using image analysis software. Statistical analysis used the analysis of variance (ANOVA) test and Tukey's post-hoc test. The basal cells of oral lichen planus showed a significant increase in the mean nuclear and cellular areas, and in nuclear volume; there was a significant decrease in the nuclear-cytoplasmic ratio as compared to normal mucosa. The suprabasal cells showed a significant increase in nuclear and cellular areas, nuclear diameter, and nuclear and cellular volumes as compared to normal mucosa. The basal cells of oral lichenoid lesions showed significant differences in the mean cellular area and the mean nuclear-cytoplasmic ratio as compared to normal mucosa, whereas the suprabasal cells differed significantly from normal mucosa in the mean nuclear area and the nuclear and cellular volumes. Morphometry can differentiate lesions of oral lichen planus and oral lichenoid lesions from normal oral mucosa. Thus, morphometry may serve to discriminate between normal mucosa and premalignant lichen planus and lichenoid lesions, which might have a high risk for malignant transformation and may behave in a similar manner in that respect.
Rapid Pupil-Based Assessment of Glaucomatous Damage
Chen, Yanjun; Wyatt, Harry J.; Swanson, William H.; Dul, Mitchell W.
2010-01-01
Purpose To investigate the ability of a technique employing pupillometry and functionally-shaped stimuli to assess loss of visual function due to glaucomatous optic neuropathy. Methods Pairs of large stimuli, mirror images about the horizontal meridian, were displayed alternately in the upper and lower visual field. Pupil diameter was recorded and analyzed in terms of the “contrast balance” (relative sensitivity to the upper and lower stimuli), and the pupil constriction amplitude to upper and lower stimuli separately. A group of 40 patients with glaucoma was tested twice in a first session, and twice more in a second session, 1 to 3 weeks later. A group of 40 normal subjects was tested with the same protocol. Results Results for the normal subjects indicated functional symmetry in upper/lower retina, on average. Contrast balance results for the patients with glaucoma differed from normal: half the normal subjects had contrast balance within 0.06 log unit of equality and 80% had contrast balance within 0.1 log unit. Half the patients had contrast balances more than 0.1 log unit from equality. Patient contrast balances were moderately correlated with predictions from perimetric data (r = 0.37, p < 0.00001). Contrast balances correctly classified visual field damage in 28 patients (70%), and response amplitudes correctly classified 24 patients (60%). When contrast balance and response amplitude were combined, receiver operating characteristic area for discriminating glaucoma from normal was 0.83. Conclusions Pupillary evaluation of retinal asymmetry provides a rapid method for detecting and classifying visual field defects. In this patient population, classification agreed with perimetry in 70% of eyes. PMID:18521026
Normal Perceptual Sensitivity Arising From Weakly Reflective Cone Photoreceptors
Bruce, Kady S.; Harmening, Wolf M.; Langston, Bradley R.; Tuten, William S.; Roorda, Austin; Sincich, Lawrence C.
2015-01-01
Purpose To determine the light sensitivity of poorly reflective cones observed in retinas of normal subjects, and to establish a relationship between cone reflectivity and perceptual threshold. Methods Five subjects (four male, one female) with normal vision were imaged longitudinally (7–26 imaging sessions, representing 82–896 days) using adaptive optics scanning laser ophthalmoscopy (AOSLO) to monitor cone reflectance. Ten cones with unusually low reflectivity, as well as 10 normally reflective cones serving as controls, were targeted for perceptual testing. Cone-sized stimuli were delivered to the targeted cones and luminance increment thresholds were quantified. Thresholds were measured three to five times per session for each cone in the 10 pairs, all located 2.2 to 3.3° from the center of gaze. Results Compared with other cones in the same retinal area, three of 10 monitored dark cones were persistently poorly reflective, while seven occasionally manifested normal reflectance. Tested psychophysically, all 10 dark cones had thresholds comparable with those from normally reflecting cones measured concurrently (P = 0.49). The variation observed in dark cone thresholds also matched the wide variation seen in a large population (n = 56 cone pairs, six subjects) of normal cones; in the latter, no correlation was found between cone reflectivity and threshold (P = 0.0502). Conclusions Low cone reflectance cannot be used as a reliable indicator of cone sensitivity to light in normal retinas. To improve assessment of early retinal pathology, other diagnostic criteria should be employed along with imaging and cone-based microperimetry. PMID:26193919
A margin model to account for respiration-induced tumour motion and its variability
NASA Astrophysics Data System (ADS)
Coolens, Catherine; Webb, Steve; Shirato, H.; Nishioka, K.; Evans, Phil M.
2008-08-01
In order to reduce the sensitivity of radiotherapy treatments to organ motion, compensation methods are being investigated such as gating of treatment delivery, tracking of tumour position, 4D scanning and planning of the treatment, etc. An outstanding problem that would occur with all these methods is the assumption that breathing motion is reproducible throughout the planning and delivery process of treatment. This is obviously not a realistic assumption and is one that will introduce errors. A dynamic internal margin model (DIM) is presented that is designed to follow the tumour trajectory and account for the variability in respiratory motion. The model statistically describes the variation of the breathing cycle over time, i.e. the uncertainty in motion amplitude and phase reproducibility, in a polar coordinate system from which margins can be derived. This allows accounting for an additional gating window parameter for gated treatment delivery as well as minimizing the area of normal tissue irradiated. The model was illustrated with abdominal motion for a patient with liver cancer and tested with internal 3D lung tumour trajectories. The results confirm that the respiratory phases around exhale are most reproducible and have the smallest variation in motion amplitude and phase (approximately 2 mm). More importantly, the margin area covering normal tissue is significantly reduced by using trajectory-specific margins (as opposed to conventional margins) as the angular component is by far the largest contributor to the margin area. The statistical approach to margin calculation, in addition, offers the possibility for advanced online verification and updating of breathing variation as more data become available.
Laskowska-Macios, Karolina; Zapasnik, Monika; Hu, Tjing-Tjing; Kossut, Malgorzata; Arckens, Lutgarde; Burnat, Kalina
2015-10-01
Pattern vision deprivation (BD) can induce permanent deficits in global motion perception. The impact of timing and duration of BD on the maturation of the central and peripheral visual field representations in cat primary visual areas 17 and 18 remains unknown. We compared early BD, from eye opening for 2, 4, or 6 months, with late onset BD, after 2 months of normal vision, using the expression pattern of the visually driven activity reporter gene zif268 as readout. Decreasing zif268 mRNA levels between months 2 and 4 characterized the normal maturation of the (supra)granular layers of the central and peripheral visual field representations in areas 17 and 18. In general, all BD conditions had higher than normal zif268 levels. In area 17, early BD induced a delayed decrease, beginning later in peripheral than in central area 17. In contrast, the decrease occurred between months 2 and 4 throughout area 18. Lack of pattern vision stimulation during the first 4 months of life therefore has a different impact on the development of areas 17 and 18. A high zif268 expression level at a time when normal vision is restored seems to predict the capacity of a visual area to compensate for BD. © The Author 2014. Published by Oxford University Press.
Akdenur, B; Okkesum, S; Kara, S; Günes, S
2009-11-01
In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance for the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of the anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. Two-fold cross-validation was used to train and test the ANN. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z-score, decimal scaling, and line-base normalization. To demonstrate the performance of the proposed method, standard performance measures were examined: the mean square error and mean absolute error as mathematical measures, the statistical relation factor R2, and the average deviation. The results show that the CCSNM was the best of the normalization methods for estimating the effect of the trainer.
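Three of the four baseline methods CCSNM was compared against are standard; minimal sketches follow (the CCSNM weighting itself is the paper's contribution and is not reproduced here):

```python
import numpy as np

def min_max(x):
    """Minimum-maximum normalization to [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):
    """Z-score normalization: zero mean, unit variance."""
    return (x - x.mean()) / x.std()

def decimal_scaling(x):
    """Decimal scaling: divide by the smallest power of 10 bounding |x|."""
    j = int(np.ceil(np.log10(np.abs(x).max())))
    return x / (10.0 ** j)

emg = np.array([42.0, 55.0, 61.0, 48.0, 70.0])   # hypothetical EMG features
for f in (min_max, z_score, decimal_scaling):
    print(f.__name__, np.round(f(emg), 3))
```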
A cascade method for TFT-LCD defect detection
NASA Astrophysics Data System (ADS)
Yi, Songsong; Wu, Xiaojun; Yu, Zhiyang; Mo, Zhuoya
2017-07-01
In this paper, we propose a novel cascade detection algorithm that focuses on point and line defects on TFT-LCD panels. In the first step of the algorithm, we use the gray-level difference of the sub-image to segment the abnormal area. The second step is based on the phase-only transform (POT), i.e., the Discrete Fourier Transform (DFT) normalized by its magnitude, which can remove regularities such as texture and noise. After that, we improve the setting of regions of interest (ROI) using edge segmentation and a polar transformation. The algorithm has outstanding performance in both computation speed and accuracy, and handles most defect detection cases, including dark points, light points, dark lines, etc.
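The POT step is compact enough to state exactly: whiten the DFT by its magnitude and invert. A sketch on a synthetic periodic texture with one point defect:

```python
import numpy as np

def phase_only_transform(img):
    """Phase-only transform: keep the DFT phase, discard magnitude.

    Regular texture concentrates in the magnitude spectrum, so inverting
    the phase-only spectrum suppresses background regularities and
    highlights irregularities (point and line defects).
    """
    F = np.fft.fft2(img)
    pot = F / (np.abs(F) + 1e-12)    # unit magnitude, phase kept
    return np.real(np.fft.ifft2(pot))

# Synthetic panel image: exactly periodic texture plus one point defect
x = np.sin(2 * np.pi * 10 * np.arange(128) / 128)
img = np.outer(x, x)                 # regular texture
img[64, 64] += 2.0                   # point defect
saliency = phase_only_transform(img)
# The saliency peak is expected at the defect pixel (64, 64)
print("defect location:", np.unravel_index(np.abs(saliency).argmax(), img.shape))
```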
Fabrication method for cores of structural sandwich materials including star shaped core cells
Christensen, Richard M.
1997-01-01
A method for fabricating structural sandwich materials having a core pattern which utilizes star and non-star shaped cells. Sheets of material are bonded together, or a single folded sheet is bonded or welded at specific locations, in a flat configuration; the sheets are then mechanically pulled or expanded normal to their plane so that they expand to form the cells. This method can also be used to fabricate geometric cell arrangements other than the star/non-star shaped cells. Four sheets of material (either a pair of bonded sheets or a single folded sheet) are bonded so as to define an area therebetween, which forms the star shaped cell when expanded.
NASA Astrophysics Data System (ADS)
Godah, Walyeldeen; Szelachowska, Małgorzata; Krynski, Jan
2017-12-01
The dedicated gravity satellite missions, in particular the GRACE (Gravity Recovery and Climate Experiment) mission launched in 2002, provide unique data for studying temporal variations of mass distribution in the Earth's system, and thereby, the geometry and the gravity field changes of the Earth. The main objective of this contribution is to estimate physical height (e.g. the orthometric/normal height) changes over Central Europe using GRACE satellite mission data, as well as to analyse and model them over the selected study area. Physical height changes were estimated from temporal variations of height anomalies and vertical displacements of the Earth surface determined over the investigated area. The release 5 (RL05) GRACE-based global geopotential models as well as load Love numbers from the Preliminary Reference Earth Model (PREM) were used as input data. Analysis of the estimated physical height changes and their modelling were performed using two methods: the seasonal decomposition method and the PCA/EOF (Principal Component Analysis/Empirical Orthogonal Function) method, and the differences obtained were discussed. The main findings reveal that physical height changes over the selected study area reach up to 22.8 mm. The obtained physical height changes can be modelled with an accuracy of 1.4 mm using the seasonal decomposition method.
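As a sketch of the first modelling approach, a classical seasonal decomposition of a monthly height-change series can be done with statsmodels; the synthetic series below (annual cycle plus drift and noise) is an assumption standing in for the GRACE-derived values:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical monthly physical height changes (mm) over 8 years.
t = pd.date_range("2003-01", periods=96, freq="MS")
h = 10 * np.sin(2 * np.pi * t.month / 12) \
    + 0.02 * np.arange(96) \
    + np.random.normal(0, 1.5, 96)

# Additive decomposition into trend, annual seasonal signal, and residual.
result = seasonal_decompose(pd.Series(h, index=t), model="additive", period=12)
print(result.seasonal.head(12))   # the recurring annual pattern
```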
NASA Astrophysics Data System (ADS)
Wang, Qiongjie; Yan, Li
2016-06-01
With the rapid development of sensor networks and earth observation technology, a large quantity of high-resolution remote sensing data is available. However, the influence of shadow has become increasingly significant because higher resolution reveals more complex and detailed land cover, especially under shadow. Shadow areas usually have lower intensity and fuzzy boundaries, which make the images hard to interpret automatically. In this paper, a simple and effective shadow (including soft shadow) detection and compensation method is proposed based on normal data, a Digital Elevation Model (DEM) and the sun position. First, we use a high-accuracy DEM and the sun position to rebuild the geometric relationship between the surface and the sun at the time the image was shot, and obtain the hard shadow boundary and the sky view factor (SVF) of each pixel. An anisotropic scattering assumption is adopted to determine the soft shadow factor, which is mainly affected by diffuse radiation. Finally, a simple radiation transmission model is used to compensate the shadow area. Compared with spectral detection methods, our detection method has a strict theoretical basis, gives reliable compensation results, and is less affected by image quality. The compensation strategy can effectively improve the radiation intensity of shadow areas, reduce the information loss brought by shadow and improve the robustness and efficiency of classification algorithms.
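A minimal sketch of the geometric core of the detection step, assuming a north-up DEM grid with rows increasing northward and solar azimuth measured clockwise from north; it flags only self-shadowed pixels, whereas the paper's hard shadow boundary additionally requires tracing rays toward the sun across the terrain:

```python
import numpy as np

def self_shadow_mask(dem, cellsize, sun_azimuth_deg, sun_elevation_deg):
    az, el = np.radians(sun_azimuth_deg), np.radians(sun_elevation_deg)
    # Unit sun vector: x east, y north, z up.
    sun = np.array([np.sin(az) * np.cos(el),
                    np.cos(az) * np.cos(el),
                    np.sin(el)])
    dzdy, dzdx = np.gradient(dem, cellsize)
    # Unnormalized surface normals (-dz/dx, -dz/dy, 1).
    nx, ny, nz = -dzdx, -dzdy, np.ones_like(dem)
    cos_i = (nx * sun[0] + ny * sun[1] + nz * sun[2]) \
            / np.sqrt(nx**2 + ny**2 + nz**2)
    return cos_i <= 0.0   # True where the surface faces away from the sun
```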
Uranium Pyrophoricity Phenomena and Prediction (FAI/00-39)
DOE Office of Scientific and Technical Information (OSTI.GOV)
PLYS, M.G.
2000-10-10
The purpose of this report is to provide a topical reference on the phenomena and prediction of uranium pyrophoricity for the Hanford Spent Nuclear Fuel (SNF) Project with specific applications to SNF Project processes and situations. Spent metallic uranium nuclear fuel is currently stored underwater at the K basins in the Hanford 100 area, and planned processing steps include: (1) At the basins, cleaning and placing fuel elements and scrap into stainless steel multi-canister overpacks (MCOs) holding about 6 MT of fuel apiece; (2) At nearby cold vacuum drying (CVD) stations, draining, vacuum drying, and mechanically sealing the MCOs; (3) Shipping the MCOs to the Canister Storage Building (CSB) on the 200 Area plateau; and (4) Welding shut and placing the MCOs for interim (40 year) dry storage in closed CSB storage tubes cooled by natural air circulation through the surrounding vault. Damaged fuel elements have exposed and corroded fuel surfaces, which can exothermically react with water vapor and oxygen during normal process steps and in off-normal situations. A key process safety concern is the rate of reaction of damaged fuel and the potential for self-sustaining or runaway reactions, also known as uranium fires or fuel ignition. Uranium metal and one of its corrosion products, uranium hydride, are potentially pyrophoric materials. Dangers of pyrophoricity of uranium and its hydride have long been known in the U.S. Department of Energy (Atomic Energy Commission/DOE) complex and will be discussed more below; it is sufficient here to note that there are numerous documented instances of uranium fires during normal operations. The motivation for this work is to place the safety of the present process in proper perspective given past operational experience. Steps in the development of such a perspective are: (1) Description of underlying physical causes for runaway reactions, (2) Modeling physical processes to explain runaway reactions, (3) Validation of the method against experimental data, (4) Application of the method to plausibly explain operational experience, and (5) Application of the method to present process steps to demonstrate process safety and margin. Essentially, the logic above is used to demonstrate that runaway reactions cannot occur during normal SNF Project process steps, and to illustrate the depth of the technical basis for such a conclusion. Some off-normal conditions are identified here that could potentially lead to runaway reactions. However, this document is not intended to provide an exhaustive analysis of such cases. In summary, this report provides a ''toolkit'' of models and approaches for analysis of pyrophoricity safety issues at Hanford, and the technical basis for the recommended approaches. A summary of recommended methods appears in Section 9.0.
Estimating the Effect of Gypsy Moth Defoliation Using MODIS
NASA Technical Reports Server (NTRS)
deBeurs, K. M.; Townsend, P. A.
2008-01-01
The area of North American forests affected by gypsy moth defoliation continues to expand despite efforts to slow the spread. With the increased area of infestation, ecological, environmental and economic concerns about gypsy moth disturbance remain significant, necessitating coordinated, repeatable and comprehensive monitoring of the areas affected. In this study, our primary objective was to estimate the magnitude of defoliation using Moderate Resolution Imaging Spectroradiometer (MODIS) imagery for a gypsy moth outbreak that occurred in the US central Appalachian Mountains in 2000 and 2001. We focused on determining the appropriate spectral MODIS indices and temporal compositing method to best monitor the effects of gypsy moth defoliation. We tested the MODIS-based Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Normalized Difference Water Index (NDWI), and two versions of the Normalized Difference Infrared Index (NDIIb6 and NDIIb7, using the channels centered on 1640 nm and 2130 nm, respectively) for their capacity to map defoliation as estimated by ground observations. In addition, we evaluated three temporal resolutions: daily, 8-day and 16-day data. We validated the results through quantitative comparison to Landsat-based defoliation estimates and traditional sketch maps. Our MODIS-based defoliation estimates derived from NDIIb6 and NDIIb7 closely matched Landsat defoliation estimates derived from field data as well as sketch maps. We conclude that daily MODIS data can be used with confidence to monitor insect defoliation on an annual time scale, at least for larger patches (greater than 0.63 km2). Eight-day and 16-day MODIS composites may be of lesser use due to the ephemeral character of disturbance by the gypsy moth.
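All of the indices tested share the same normalized-difference form; only the band pairing changes. A minimal sketch, with the reflectance arrays as hypothetical stand-ins for the MODIS bands:

```python
import numpy as np

def normalized_difference(b1, b2):
    # Generic normalized difference index in [-1, 1].
    return (b1 - b2) / (b1 + b2 + 1e-12)

# Hypothetical reflectance grids for NIR, red, and the two SWIR channels.
nir, red, swir_1640, swir_2130 = np.random.rand(4, 100, 100) * 0.6
ndvi  = normalized_difference(nir, red)
ndii6 = normalized_difference(nir, swir_1640)   # channel near 1640 nm
ndii7 = normalized_difference(nir, swir_2130)   # channel near 2130 nm
# Defoliation can then be scored as the drop in NDII against a pre-outbreak year.
```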
Peripheral Prism Glasses: Effects of Dominance, Suppression and Background
Ross, Nicole C.; Bowers, Alex R.; Optom, M.C.; Peli, Eli
2012-01-01
Purpose Unilateral peripheral prisms for homonymous hemianopia (HH) place different images on corresponding peripheral retinal points, a rivalrous situation in which local suppression of the prism image could occur and thus limit device functionality. Detection with peripheral prisms has primarily been evaluated using conventional perimetry where binocular rivalry is unlikely to occur. We quantified detection over more visually complex backgrounds and examined the effects of ocular dominance. Methods Detection rates of 8 participants with HH or quadranopia and normal binocularity wearing unilateral peripheral prism glasses were determined for static perimetry targets briefly presented in the prism expansion area (in the blind hemifield) and the seeing hemifield, under monocular and binocular viewing, over uniform gray and more complex patterned backgrounds. Results Participants with normal binocularity had mixed sensory ocular dominance, demonstrated no difference in detection rates when prisms were fitted on the side of the HH or the opposite side (p>0.2), and had detection rates in the expansion area that were not different for monocular and binocular viewing over both backgrounds (p>0.4). However, two participants with abnormal binocularity and strong ocular dominance demonstrated reduced detection in the expansion area when prisms were fitted in front of the non-dominant eye. Conclusions We found little evidence of local suppression of the peripheral prism image for HH patients with normal binocularity. However, in cases of strong ocular dominance, consideration should be given to fitting prisms before the dominant eye. Although these results are promising, further testing in more realistic conditions including image motion is needed. PMID:22885783
Expression of cyclooxygenase-2 in the endometrium of gilts with different stages of endometritis.
Roongsitthichai, Atthaporn; Srisuwatanasagul, Sayamon; Koonjaenak, Seri; Tummaruk, Padet
2011-11-01
The present study determined the association among the expression of COX-2, stages of endometritis, and the types and number of local immune cells infiltrating the gilts' endometrium. Uterine tissues from 24 Landrace x Yorkshire gilts identified as having acute endometritis (n = 7), chronic endometritis (n = 7), or normal endometrium (n = 10) were included. The tissues were prepared for both histological and immunohistochemical investigations. The immunoexpression of COX-2 in every layer of the gilts' endometria was appraised by the avidin-biotin-peroxidase complex method via image analysis, and was reported as percentage of positive area and staining index. The results revealed that the immunoexpression of COX-2 was found only in the surface epithelial layer. The gilts with acute endometritis possessed both a higher percentage of positive area (68.99% versus 4.50% and 3.43%, P < 0.001) and a higher staining index (1.13 versus 0.05 and 0.04, P < 0.001) than those with chronic endometritis and normal endometrium, respectively. Positive correlations between the number of surface epithelial neutrophils and the percentage of COX-2 positive area (r = 0.47, P = 0.022), as well as the mean staining index (r = 0.44, P = 0.032), were observed. In conclusion, the immunoexpression of COX-2 was strongest in the gilts with acute endometritis, whereas it did not differ between those with chronic endometritis and normal endometrium. This suggests that the expression of COX-2 might depend not only on the infiltration of local immune cells in the endometrium, but also on the duration of exposure to inflammatory agents.
Song, Youngkeun; Njoroge, John B; Morimoto, Yukihiro
2013-05-01
Drought-induced anomalies in vegetation condition over wide areas can be observed by using time-series satellite remote sensing data. Previous methods to assess the anomalies may have limitations in considering (1) the seasonality of each vegetation-cover type, (2) cumulative damage during the drought event, and (3) the application to various types of land cover. This study proposed an improved methodology to assess drought impact from the annual vegetation responses, and discussed the result in terms of the diverse landscape mosaics in the Mt. Kenya region (0.4° N 35.8° E ~ 1.6° S 38.4° E). From the 30-year annual rainfall records at the six meteorological stations in the study area, we identified 2000 as the drought year and 2001, 2004, and 2007 as the normal precipitation years. The time-series profiles of vegetation condition in the drought and normal precipitation years were obtained from the values of the Enhanced Vegetation Index (EVI; Huete et al. 2002), which were acquired from the Terra MODIS remote sensing dataset (MOD13Q1) taken every 16 days at 250-m spatial resolution. The drought impact was determined by integrating the annual differences in EVI profiles between drought and normal conditions per pixel, based on nearly the same day of year. As a result, we successfully described the distribution of landscape vulnerability to drought, considering the seasonality of each vegetation-cover type at every MODIS pixel. This result will contribute to the large-scale landscape management of the Mt. Kenya region. Future studies should improve this method by considering land-use changes that occurred during the long-term monitoring period.
NASA Astrophysics Data System (ADS)
Siswanto, Agus; Gunadin, Indar Chaerah; Said, Sri Mawar; Suyuti, Ansar
2018-03-01
The purpose of this research is to improve the stability of the interconnected South Sulawesi system affected by the penetration of new wind turbines in the Sidrap area on bus 2 and in the Jeniponto area on bus 34. The analysis was performed using the Power System Analysis Toolbox (PSAT) under MATLAB. Two problems are evaluated: the stability of the South Sulawesi system before and after the penetration of the wind turbines. The simulation results show that penetration of the wind turbines at bus 2 (Sidrap) and bus 37 (Jeniponto) introduces oscillations into the system. The oscillations were damped by installing a Power System Stabilizer (PSS) on bus 29 in the Sungguminasa area, so that the South Sulawesi system remains stable under normal conditions.
Metal intercalation-induced selective adatom mass transport on graphene
Liu, Xiaojie; Wang, Cai-Zhuang; Hupalo, Myron; ...
2016-03-29
Recent experiments indicate that metal intercalation is a very effective method to manipulate the graphene-adatom interaction and control metal nanostructure formation on graphene. A key question is mass transport, i.e., how atoms deposited uniformly on graphene populate different areas depending on the local intercalation. Using first-principles calculations, we show that partially intercalated graphene, with a mixture of intercalated and pristine areas, can induce an alternating electric field because of the spatial variations in electron doping, and thus, an oscillatory electrostatic potential. As a result, this alternating field can change normal stochastic adatom diffusion to biased diffusion, leading to selective mass transport and consequent nucleation, on either the intercalated or pristine areas, depending on the charge state of the adatoms.
Shahriyari, Leili
2017-11-03
One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on analyzing FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using z-score and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on the normalized data set. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of normalization methods, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of SVM depended on the normalization methods, and it reached its minimum fitting time when files were normalized to the unit length. Furthermore, among all 12 learning algorithms and 6 different normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing the accuracy as well as minimizing the fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to the discovery of 7SK RNA gene expression as a predictor of survival in patients with colon adenocarcinoma with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
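A minimal sketch of the comparison framework with scikit-learn, on random stand-in data: MinMaxScaler and StandardScaler act per feature (gene), while Normalizer rescales each sample (file) to unit length, matching the file-normalization strategy that minimized the SVM fitting time:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, Normalizer, StandardScaler
from sklearn.svm import SVC

X = np.random.rand(200, 50)        # hypothetical expression matrix (files x genes)
y = np.random.randint(0, 2, 200)   # hypothetical vital-status labels

for scaler in (MinMaxScaler(), StandardScaler(), Normalizer(norm="l2")):
    model = make_pipeline(scaler, SVC(kernel="rbf"))
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(type(scaler).__name__, round(acc, 3))
```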
van Niekerk, Cornelis G; van der Laak, Jeroen A W M; Börger, M Elisa; Huisman, Henk-Jan; Witjes, J Alfred; Barentsz, Jelle O; Hulsbergen-van de Kaa, Christina A
2009-01-01
Contrast-enhanced imaging enables powerful, non-invasive diagnostics, important for detection and staging of early prostate cancer. The uptake of contrast agent is increased in prostate cancer as compared to normal prostate tissue. To reveal the underlying physiological mechanisms, quantification of tissue components in pathology specimens may yield important information. The aim of this study was to investigate whether microvascularity is increased in prostate-confined cancer (pT2). Radical prostatectomy specimens of 26 patients were selected for organ-confined peripheral zone tumors which were restricted to one side of the prostate. Microvessels were visualized by immunohistochemistry against CD31. Specimens were scanned using a computer-controlled microscope and scanning stage, and vessels were recognized automatically. Pseudocolor mappings were produced showing the number of vascular profiles (MVD), vascular area (MVA) and perimeter (MVP) in an overview of the entire prostate transection. MVD is a common measure for vascularity, whereas MVA represents the 3D vascular volume and MVP the perfused surface area. The mean, coefficient of variation and 75th percentile of these parameters were calculated automatically in manually indicated areas, consisting of the entire tumor area and the corresponding normal area in the contralateral side of the prostate. The mappings clearly indicate areas of increased vascularity in prostate transections. In tumor tissue an increase was found compared to normal tissue of 81%, 49%, and 62% for mean MVD, mean MVA and mean MVP, respectively (P < 0.001 for all comparisons). In contrast, the heterogeneity in tumor vasculature was significantly decreased as compared to normal prostate (P < 0.001). Characteristics of microvasculature deviated significantly in pT2 prostate tumor as compared to normal tissue. Copyright 2008 Wiley-Liss, Inc.
Characteristics of bowl-shaped coils for transcranial magnetic stimulation
NASA Astrophysics Data System (ADS)
Yamamoto, Keita; Suyama, Momoko; Takiyama, Yoshihiro; Kim, Dongmin; Saitoh, Youichi; Sekino, Masaki
2015-05-01
Transcranial magnetic stimulation (TMS) has recently been used as a method for the treatment of neurological and psychiatric diseases. Daily TMS sessions can provide continuous therapeutic effectiveness, and the installation of TMS systems at patients' homes has been proposed. A figure-eight coil, which is normally used for TMS therapy, induces a highly localized electric field; however, it is challenging to achieve accurate coil positioning above the targeted brain area using this coil. In this paper, a bowl-shaped coil for stimulating a localized but wider area of the brain is proposed. The coil's electromagnetic characteristics were analyzed using finite element methods, and the analysis showed that the bowl-shaped coil induced electric fields in a wider area of the brain model than a figure-eight coil. The expanded distribution of the electric field led to greater robustness of the coil to the coil-positioning error. To improve the efficiency of the coil, the relationship between individual coil design parameters and the resulting coil characteristics was numerically analyzed. It was concluded that lengthening the outer spherical radius and narrowing the width of the coil were effective methods for obtaining a more effective and more uniform distribution of the electric field.
NASA Astrophysics Data System (ADS)
Lingren, Joe; Vanstone, Leon; Hashemi, Kelley; Gogineni, Sivaram; Donbar, Jeffrey; Akella, Maruthi; Clemens, Noel
2016-11-01
This study develops an analytical model for predicting the leading shock of a shock-train in the constant area isolator section in a Mach 2.2 direct-connect scramjet simulation tunnel. The effective geometry of the isolator is assumed to be a weakly converging duct owing to boundary-layer growth. For some given pressure rise across the isolator, quasi-1D equations relating to isentropic or normal shock flows can be used to predict the normal shock location in the isolator. The surface pressure distribution through the isolator was measured during experiments and both the actual and predicted locations can be calculated. Three methods of finding the shock-train location are examined, one based on the measured pressure rise, one using a non-physics-based control model, and one using the physics-based analytical model. It is shown that the analytical model performs better than the non-physics-based model in all cases. The analytic model is less accurate than the pressure threshold method but requires significantly less information to compute. In contrast to other methods for predicting shock-train location, this method is relatively accurate and requires as little as a single pressure measurement. This makes this method potentially useful for unstart control applications.
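For context, the quasi-1D relation underlying the model ties the static pressure rise across a normal shock to the upstream Mach number (standard gas dynamics, shown here for air with gamma = 1.4); inverting it recovers the Mach number, and hence the shock location, implied by a single wall-pressure measurement:

```python
def shock_pressure_ratio(M1, gamma=1.4):
    # Static pressure ratio p2/p1 across a normal shock at upstream Mach M1.
    return 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)

def mach_from_pressure_ratio(p_ratio, gamma=1.4):
    # Upstream Mach number implied by a measured static pressure rise.
    return ((p_ratio - 1.0) * (gamma + 1.0) / (2.0 * gamma) + 1.0) ** 0.5

print(shock_pressure_ratio(2.2))       # ~5.48 for a Mach 2.2 stream
print(mach_from_pressure_ratio(5.48))  # recovers ~2.2
```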
Kang, Sinkyu; Hong, Suk Young
2016-01-01
A minimum composite method was applied to produce a 15-day interval normalized difference vegetation index (NDVI) dataset from Moderate Resolution Imaging Spectroradiometer (MODIS) daily 250 m reflectance in the red and near-infrared bands. This dataset was applied to determine lake surface areas in Mongolia. A total of 73 lakes greater than 6.25 km2 in area were selected, and 28 of these lakes were used to evaluate detection errors. The minimum composite NDVI showed a better detection performance on lake water pixels than did the official MODIS 16-day 250 m NDVI based on a maximum composite method. The overall lake area detection performance based on the 15-day minimum composite NDVI showed -2.5% error relative to the Landsat-derived lake area for the 28 evaluated lakes. The errors increased with increases in the perimeter-to-area ratio but decreased with lake size over 10 km2. The lake area decreased by -9.3% at an annual rate of -53.7 km2 yr-1 during 2000 to 2011 for the 73 lakes. However, considerable spatial variations, such as slight-to-moderate lake area reductions in semi-arid regions and rapid lake area reductions in arid regions, were also detected. This study demonstrated the applicability of MODIS 250 m reflectance data for biweekly monitoring of lake area change and diagnosed considerable lake area reduction and its spatial variability in arid and semi-arid regions of Mongolia. Future studies are required to explain the reasons for lake area changes and their spatial variability. PMID:27007233
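The compositing step is straightforward to express. A minimal sketch, assuming a stack of daily NDVI grids with invalid pixels as NaN; water stays strongly negative in NDVI, so a minimum composite retains lake pixels that a maximum composite would overwrite with vegetation signal:

```python
import numpy as np

def minimum_composite(daily_ndvi, period=15):
    # daily_ndvi: array of shape (days, rows, cols); NaN marks invalid pixels.
    n = (daily_ndvi.shape[0] // period) * period
    windows = daily_ndvi[:n].reshape(-1, period, *daily_ndvi.shape[1:])
    return np.nanmin(windows, axis=1)   # one composite per 15-day window

composites = minimum_composite(np.random.rand(90, 64, 64) * 2 - 1)
lake_mask = composites < 0.0   # hypothetical water threshold
```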
Todorovic, Vera; Sokic-Milutinovic, Aleksandra; Drndarevic, Neda; Micev, Marjan; Mitrovic, Olivera; Nikolic, Ivan; Wex, Thomas; Milosavljevic, Tomica; Malfertheiner, Peter
2006-01-01
AIM: To investigate the expression of different cytokeratins (CKs) in the gastric epithelium of adult patients with chronic gastritis infected with Helicobacter pylori (H pylori) cagA+ strains. METHODS: The expression of CK 7, 8, 18, 19 and 20 was studied immunohistochemically in antral gastric biopsies of 84 patients. All the CKs were immunostained in cagA+ H pylori gastritis (57 cases), non-H pylori gastritis (17 cases) and normal gastric mucosa (10 cases). RESULTS: In cagA+ H pylori gastritis, CK8 was expressed comparably to the normal antral mucosa from the surface epithelium to the deep glands. Distribution of CK18 and CK19 was unchanged, i.e. transmucosal, but the intensity of expression differed in the foveolar region in comparison to normal gastric mucosa. Cytokeratin 18 immunoreactivity was significantly higher in the foveolar epithelium of H pylori-positive gastritis compared to both H pylori-negative gastritis and controls. On the contrary, a decrease in CK19 immunoreactivity occurred in the foveolar epithelium of H pylori-positive gastritis. In both normal and inflamed antral mucosa without H pylori infection, CK20 was expressed strongly/moderately and homogeneously in the surface epithelium and upper foveolar region, but in H pylori-induced gastritis a significant decrease of expression in the foveolar region was noted. Generally, in both normal antral mucosa and H pylori-negative gastritis, expression of CK7 was not observed, while in about half of the cagA+ H pylori-infected patients, moderate focal CK7 immunoreactivity of the neck and coiled gland areas was registered, especially in areas with more severe inflammatory infiltrate. CONCLUSION: Alterations in the expression of CK 7, 18, 19 and 20, together with normal expression of CK8, occur in the antral mucosa of H pylori-associated chronic gastritis in adult patients infected with cagA+ strains. Alterations in the expression of different cytokeratins might contribute to the weakening of epithelial tight junctions observed in H pylori-infected gastric mucosa. PMID:16609992
Abdul-Ghani, Rashad; Mahdy, Mohammed A K; Saif-Ali, Reyadh; Alkubati, Sameer A; Alqubaty, Abdulhabib R; Al-Mikhlafy, Abdullah A; Al-Eryani, Samira M; Al-Mekhlafi, Abdusalam M; Alhaj, Ali
2016-06-21
Glucose-6-phosphate dehydrogenase (G6PD) deficiency, the most common genetic enzymopathy worldwide, is associated with an acute haemolytic anaemia in individuals exposed to primaquine. The present study aimed to determine G6PD deficiency among Yemeni children in malaria-endemic areas as well as to assess the performance of the CareStart™ G6PD rapid diagnostic test (RDT) for its detection. A cross-sectional study recruiting 400 children from two rural districts in Hodeidah governorate was conducted. Socio-demographic data and blood samples were collected and G6PD deficiency was qualitatively detected in fresh blood in the field using the CareStart™ G6PD RDT, while the enzymatic assay was used to quantitatively measure enzyme activity. Performance of the CareStart™ G6PD RDT was assessed by calculating its sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV) against the reference enzymatic assay. The ranges of enzyme activity were 0.14-18.45 and 0.21-15.94 units/g haemoglobin (U/gHb) for males and females, respectively. However, adjusted male median G6PD activity was 5.0 U/gHb. Considering the adjusted male median as representing 100 % normal enzyme activity, the prevalence rates of G6PD deficiency were 12.0 and 2.3 % at the cut-off activities of ≤60 and ≤10 %, respectively. Multivariable analysis showed that gender, district of residence and consanguinity between parents were independent risk factors for G6PD deficiency at the cut-off activity of ≤30 % of normal. The CareStart™ G6PD RDT showed 100 % sensitivity and NPV for detecting G6PD deficiency at the cut-off activities of ≤10 and ≤20 % of normal activity compared to the reference enzymatic method. However, it showed specificity levels of 90.0 and 95.4 % as well as positive/deficient predictive values (PPVs) of 18.0 and 66.0 % at the cut-off activities of ≤10 and ≤20 %, respectively, compared to the reference method. G6PD deficiency with enzyme activity of ≤60 % of normal is prevalent among 12.0 % of children residing in malaria-endemic areas of Hodeidah governorate, with 2.3 % having severe G6PD deficiency. Gender, district of residence and consanguinity between parents are significant independent predictors of G6PD deficiency at the cut-off activity of ≤30 % of normal among children in malaria-endemic areas of Hodeidah. The CareStart™ G6PD RDT proved reliable as a point-of-care test to screen for severely G6PD-deficient patients, with 100 % sensitivity and NPV, and it can be used for making clinical decisions prior to the administration of primaquine in malaria elimination strategies.
Nagata, Kohei; Kilgore, Brian D.; Beeler, Nicholas M.; Nakatani, Masao
2014-01-01
During localized slip of a laboratory fault we simultaneously measure the contact area and the dynamic fault normal elastic stiffness. One objective is to determine conditions where stiffness may be used to infer changes in area of contact during sliding on nontransparent fault surfaces. Slip speeds between 0.01 and 10 µm/s and normal stresses between 1 and 2.5 MPa were imposed during velocity step, normal stress step, and slide-hold-slide tests. Stiffness and contact area have a linear interdependence during rate stepping tests and during the hold portion of slide-hold-slide tests. So long as linearity holds, measured fault stiffness can be used on nontransparent materials to infer changes in contact area. However, there are conditions where relations between contact area and stiffness are nonlinear and nonunique. A second objective is to make comparisons between the laboratory- and field-measured changes in fault properties. Time-dependent changes in fault zone normal stiffness made in stress relaxation tests imply postseismic wave speed changes on the order of 0.3% to 0.8% per year in the two or more years following an earthquake; these are smaller than postseismic increases seen within natural damage zones. Based on scaling of the experimental observations, natural postseismic fault normal contraction could be accommodated within a few decimeter wide fault core. Changes in the stiffness of laboratory shear zones exceed 10% per decade and might be detectable in the field postseismically.
NASA Technical Reports Server (NTRS)
McFerrin, Michael; Snell, Edward; Curreri, Peter A. (Technical Monitor)
2002-01-01
An X-ray based method for determining cryoprotectant concentrations necessary to protect solutions from crystalline ice formation was developed. X-ray images from a CCD area detector were integrated as powder patterns and quantified by determining the standard deviation of the slope of the normalized intensity curve in the resolution range where ice rings are known to occur. The method was tested by determining the concentrations of glycerol, PEG400, ethylene glycol and 1,2-propanediol necessary to form an amorphous glass at 100 K with each of the 98 crystallization solutions of Crystal Screens I and II (Hampton Research, Laguna Hills, California, USA). For conditions that required glycerol concentrations of 35% or above, cryoprotectant conditions using 2,3-butanediol were determined. The method proved to be remarkably accurate. The results build on the work of [Garman and Mitchell] and extend the number of suitable starting conditions to alternative cryoprotectants. In particular, 1,2-propanediol has emerged as a particularly good additive for glass formation upon flash cooling.
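The quantification step reduces to a single statistic per pattern. A minimal sketch, where the resolution band limits (roughly where the strongest hexagonal ice rings fall, near 3.4-3.9 Å) are assumptions for illustration:

```python
import numpy as np

def ice_ring_score(resolution, intensity, lo=3.44, hi=3.89):
    # Standard deviation of the slope of the normalized 1D powder pattern
    # within the ice-ring resolution band: a smooth (glassy) pattern scores
    # low, sharp crystalline-ice rings score high.
    band = (resolution >= lo) & (resolution <= hi)
    curve = intensity[band] / intensity[band].max()
    slope = np.gradient(curve, resolution[band])
    return slope.std()
```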
The impact of signal normalization on seizure detection using line length features.
Logesparan, Lojini; Rodriguez-Villegas, Esther; Casson, Alexander J
2015-10-01
Accurate automated seizure detection remains a desirable but elusive target for many neural monitoring systems. While much attention has been given to the different feature extractions that can be used to highlight seizure activity in the EEG, very little formal attention has been given to the normalization that these features are routinely paired with. This normalization is essential in patient-independent algorithms to correct for broad-level differences in the EEG amplitude between people, and in patient-dependent algorithms to correct for amplitude variations over time. It is crucial, however, that the normalization used does not have a detrimental effect on the seizure detection process. This paper presents the first formal investigation into the impact of signal normalization techniques on seizure discrimination performance when using the line length feature to emphasize seizure activity. Comparing five normalization methods, based upon the mean, median, standard deviation, signal peak and signal range, we demonstrate differences in seizure detection accuracy (assessed as the area under a sensitivity-specificity ROC curve) of up to 52 %. This is despite the same analysis feature being used in all cases. Further, changes in performance of up to 22 % are present depending on whether the normalization is applied to the raw EEG itself or directly to the line length feature. Our results highlight the median decaying memory as the best current approach for providing normalization when using line length features, and they quantify the under-appreciated challenge of providing signal normalization that does not impair seizure detection algorithm performance.
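The line length feature itself is simple to state: the sum of absolute sample-to-sample differences in a sliding window. A minimal sketch, in which the decaying-memory median is approximated by a sign-based running tracker; this estimator is an assumption, as the paper's exact formulation is not reproduced here:

```python
import numpy as np

def line_length(eeg, win=256):
    # Sum of absolute first differences over a sliding window.
    d = np.abs(np.diff(eeg))
    return np.convolve(d, np.ones(win), mode="valid")

def median_decaying_memory(feature, eta=0.01):
    # Rough running-median tracker: step the estimate toward each new sample
    # by a fixed fraction, so the influence of old samples decays.
    est = float(feature[0]) or 1.0
    out = np.empty(len(feature))
    for i, f in enumerate(feature):
        est += eta * est * np.sign(f - est)
        out[i] = f / est
    return out

normalized = median_decaying_memory(line_length(np.random.randn(10_000)))
```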
Yoneyama, Takeshi; Watanabe, Tetsuyo; Kagawa, Hiroyuki; Hayashi, Yutaka; Nakada, Mitsutoshi
2017-03-01
In photodynamic diagnosis using 5-aminolevulinic acid (5-ALA), discrimination between the tumor and normal tissue is very important for a precise resection. However, it is difficult to distinguish between infiltrating tumor and normal regions in the boundary area. In this study, fluorescent intensity and bright spot analyses using a confocal microscope are proposed for the precise discrimination between infiltrating tumor and normal regions. From the 5-ALA-resected brain tumor tissue, the red fluorescent and marginal regions were sliced for observation under a confocal microscope. Hematoxylin and eosin (H&E) staining was performed on serial slices of the same tissue. According to the pathological inspection of the H&E slides, the tumor, infiltrating and normal regions on confocal microscopy images were investigated. From the fluorescent intensity of the image pixels, a histogram of the number of pixels with the same fluorescent intensity was obtained. The fluorescent bright spot sizes and total number were compared between the marginal and normal regions. The fluorescence intensity distribution and average intensity in the tumor were different from those in the normal region. The probability of a difference from the dark enhanced the difference between the tumor and the normal region. The bright spot size and number in the infiltrating tumor were different from those in the normal region. Fluorescence intensity analysis is useful to distinguish a tumor region, and bright spot analysis is useful to distinguish between infiltrating tumor and normal regions. These methods will be important for the precise resection or photodynamic therapy of brain tumors. Copyright © 2016 Elsevier B.V. All rights reserved.
Yang, X; Ding, H; Lu, J
2016-01-15
To investigate the feedback effect from area 7 to areas 17 and 18, intrinsic signal optical imaging combined with pharmacological and morphological methods and functional magnetic resonance imaging (fMRI) was employed. A spatial frequency-dependent decrease in the response amplitude of orientation maps was observed in areas 17 and 18 when area 7 was inactivated by a local injection of GABA, or by a lesion induced by liquid nitrogen freezing. The pattern of orientation maps of areas 17 and 18 after the inactivation of area 7, if they were not totally blurred, paralleled the normal one. In the morphological experiments, after a point in the shallow layers within the center of the cat's orientation column of area 17 was injected electrophoretically with HRP (horseradish peroxidase), three sequential patches in layers 1, 2 and 3 of area 7 were observed. Employing fMRI, it was found that area 7 feeds back mainly to areas 17 and 18 on the ipsilateral hemisphere. Therefore, our conclusions are: (1) feedback from area 7 to areas 17 and 18 is spatial frequency modulated; (2) feedback from area 7 to areas 17 and 18 occurs mainly ipsilaterally; (3) the histological feedback pattern from area 7 to area 17 is web-like. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Functional optical zone of the cornea.
Tabernero, Juan; Klyce, Stephen D; Sarver, Edwin J; Artal, Pablo
2007-03-01
When keratorefractive surgery is used to treat a central corneal diameter smaller than the resting pupil, visual symptoms of polyopia, ghosting, blur, haloes, and glare can be experienced. Progress has been made to enlarge the area of surgical treatment to extend beyond the photopic pupil; however, geometric limitations can pose restrictions to extending the treatment beyond the mesopic pupil diameter and can lead to impediments in night vision. The size of the treated area that has achieved good optical performance has been defined as the functional optical zone (FOZ). In this study the authors developed three objective methods to measure the FOZ. Corneal topography examination results from one eye per subject (34 unoperated normal eyes and 32 myopic eyes corrected by laser in situ keratomileusis, LASIK) were evaluated in three ways. First, a uniform axial power method (FOZ(A)) assessed the area of the postoperative cornea that was within a +/-0.5-D window centered on the mathematical mode. Second, FOZ was determined based on the corneal wavefront true RMS error as a function of the simulated pupil size (FOZ(R)). Third, FOZ was determined from the radial MTF, established at the retinal plane as a function of pupil size (FOZ(M)). Means for each of the FOZ methods (FOZ(A), FOZ(R), and FOZ(M)) were 7.6, 9.1, and 7.7 mm, respectively, for normal eyes. For LASIK-corrected eyes, these means were 6.0, 6.9, and 6.0 mm. Overall, an average decrease of 1.8 mm in the functional optical zone was found after the LASIK procedure. Correlations between the FOZ methods after LASIK showed acceptable and statistically significant values (R = 0.71, 0.70, and 0.61; P < 0.01). These methods will be useful to more fully characterize corneal treatment profiles after keratorefractive surgery. Because of its ease of implementation, direct spatial correspondence to corneal topography, and good correlation to the other more computationally intensive methods, the semiempiric uniform axial power method (FOZ(A)) appears to be most practical in use. The ability to measure the size of the FOZ should permit further evolution of keratorefractive surgical lasers and their algorithms to reduce the night vision impediments that can arise from functional optical zones that do not encompass the entire mesopic pupil.
Evaluation of CT-based SUV normalization
NASA Astrophysics Data System (ADS)
Devriese, Joke; Beels, Laurence; Maes, Alex; Van de Wiele, Christophe; Pottel, Hans
2016-09-01
The purpose of this study was to determine patients’ lean body mass (LBM) and lean tissue (LT) mass using a computed tomography (CT)-based method, and to compare standardized uptake value (SUV) normalized by these parameters to conventionally normalized SUVs. Head-to-toe positron emission tomography (PET)/CT examinations were retrospectively retrieved and semi-automatically segmented into tissue types based on thresholding of CT Hounsfield units (HU). The following HU ranges were used for determination of CT-estimated LBM and LT (LBMCT and LTCT): -180 to -7 for adipose tissue (AT), -6 to 142 for LT, and 143 to 3010 for bone tissue (BT). Formula-estimated LBMs were calculated using formulas of James (1976 Research on Obesity: a Report of the DHSS/MRC Group (London: HMSO)) and Janmahasatian et al (2005 Clin. Pharmacokinet. 44 1051-65), and body surface area (BSA) was calculated using the DuBois formula (Dubois and Dubois 1989 Nutrition 5 303-11). The CT segmentation method was validated by comparing total patient body weight (BW) to CT-estimated BW (BWCT). LBMCT was compared to formula-based estimates (LBMJames and LBMJanma). SUVs in two healthy reference tissues, liver and mediastinum, were normalized for the aforementioned parameters and compared to each other in terms of variability and dependence on normalization factors and BW. Comparison of actual BW to BWCT shows a non-significant difference of 0.8 kg. LBMJames estimates are significantly higher than LBMJanma with differences of 4.7 kg for female and 1.0 kg for male patients. Formula-based LBM estimates do not significantly differ from LBMCT, neither for men nor for women. The coefficient of variation (CV) of SUV normalized for LBMJames (SUVLBM-James) (12.3%) was significantly reduced in liver compared to SUVBW (15.4%). All SUV variances in mediastinum were significantly reduced (CVs were 11.1-12.2%) compared to SUVBW (15.5%), except SUVBSA (15.2%). Only SUVBW and SUVLBM-James show independence from normalization factors. LBMJames seems to be the only advantageous SUV normalization. No advantage of other SUV normalizations over BW could be demonstrated.
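The segmentation itself is plain thresholding on Hounsfield units, and any of the normalization factors then enters the SUV the same way. A minimal sketch; converting LT volume to a mass would additionally need a tissue density assumption not spelled out here:

```python
import numpy as np

# HU ranges used in the study.
RANGES = {"adipose": (-180, -7), "lean": (-6, 142), "bone": (143, 3010)}

def tissue_volumes_ml(ct_hu, voxel_volume_ml):
    # Voxel-count volume per tissue class from a CT volume in HU.
    return {name: np.count_nonzero((ct_hu >= lo) & (ct_hu <= hi)) * voxel_volume_ml
            for name, (lo, hi) in RANGES.items()}

def suv(activity_bq_per_ml, injected_dose_bq, normalization_g):
    # Generic SUV: activity concentration scaled by dose per normalization
    # mass (body weight, LBM, or LT, in grams).
    return activity_bq_per_ml * normalization_g / injected_dose_bq
```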
Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo
2017-05-16
Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics have made the subsequent data analysis workflow more lucid. However, the normalization process still perplexes researchers, and its effects are often ignored. In order to reveal the influences of the normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared in three real data sets of different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were utilized to evaluate the unwanted variations and the efficiency of the normalization process. Furthermore, the potential biomarkers screened out by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and the feature selection algorithm Boruta in the differently normalized data sets were compared. The results indicated that the choice of normalization method is difficult because the commonly accepted rules are easy to fulfill, yet different normalization methods have unforeseen influences on both the kind and number of potential biomarkers. Lastly, an integrated strategy for normalization method selection was recommended.
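Of the five methods, probabilistic quotient normalization (PQN) is the most algorithmic and worth sketching: each sample is divided by the median of its feature-wise quotients against a reference spectrum. A minimal sketch, assuming the reference is the median spectrum across samples (QC samples are often used instead):

```python
import numpy as np

def pqn(X, reference=None):
    # X: samples x features intensity matrix.
    X = np.asarray(X, dtype=float)
    if reference is None:
        reference = np.median(X, axis=0)
    reference = np.where(reference == 0, 1e-12, reference)  # avoid div by zero
    quotients = X / reference
    dilution = np.median(quotients, axis=1, keepdims=True)  # per-sample factor
    return X / dilution
```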
Resolving the fault systems with the magnetotelluric method in the western Ilan plain of NE Taiwan
NASA Astrophysics Data System (ADS)
Chang, P. Y.; Chen, C. S.
2017-12-01
In this study we attempt to use magnetotelluric (MT) surveys to delineate the basement topography of the western part of the Ilan plain. The triangular plain is located on the extension of the Okinawa Trough, and is thought to be a subsiding basin bounded by the Hsueshan Range in the north and the Central Range in the south. The basement of the basin is composed of Tertiary metamorphic rocks such as argillites and slates. The recent extension of the Okinawa Trough started at approximately 0.1 Ma and involved ENE-WSW-trending normal faults that may have extended into the Ilan plain area. However, high sedimentation rates as well as frequent human activities have resulted in unconsolidated sediments with a thickness of over 100 meters, causing difficulties in observing the surface traces of the active faults in the area. Hence we deployed about 70 MT stations across the southwestern tip of the triangular plain. We also tried to resolve the subsurface faults and the relief variations of the basement with the inverted resistivity images, since the saturated sediments are relatively conductive and the consolidated rocks are resistive. With the inverted MT images, we found that there is a series of N-S trending horsts and grabens in addition to the ENE-WSW normal fault systems. The ENE-WSW trending faults dip mainly toward the north in our study area in the western tip of the Ilan plain. The preliminary results suggest that a younger N-S trending normal fault system may have modified the relief of the basement in the recent stage, after the activation of the ENE-WSW normal faults. The findings of the MT resistivity images provide new information for further review of the tectonic interpretations of the region in the future.
Anorexia Nervosa: Analysis of Trabecular Texture with CT.
Tabari, Azadeh; Torriani, Martin; Miller, Karen K; Klibanski, Anne; Kalra, Mannudeep K; Bredella, Miriam A
2017-04-01
Purpose To determine indexes of skeletal integrity by using computed tomographic (CT) trabecular texture analysis of the lumbar spine in patients with anorexia nervosa and normal-weight control subjects and to determine body composition predictors of trabecular texture. Materials and Methods This cross-sectional study was approved by the institutional review board and compliant with HIPAA. Written informed consent was obtained. The study included 30 women with anorexia nervosa (mean age ± standard deviation, 26 years ± 6) and 30 normal-weight age-matched women (control group). All participants underwent low-dose single-section quantitative CT of the L4 vertebral body with use of a calibration phantom. Trabecular texture analysis was performed by using software. Skewness (asymmetry of gray-level pixel distribution), kurtosis (pointiness of pixel distribution), entropy (inhomogeneity of pixel distribution), and mean value of positive pixels (MPP) were assessed. Bone mineral density and abdominal fat and paraspinal muscle areas were quantified with quantitative CT. Women with anorexia nervosa and normal-weight control subjects were compared by using the Student t test. Linear regression analyses were performed to determine associations between trabecular texture and body composition. Results Women with anorexia nervosa had higher skewness and kurtosis, lower MPP (P < .001), and a trend toward lower entropy (P = .07) compared with control subjects. Bone mineral density, abdominal fat area, and paraspinal muscle area were inversely associated with skewness and kurtosis and positively associated with MPP and entropy. Texture parameters, but not bone mineral density, were associated with lowest lifetime weight and duration of amenorrhea in anorexia nervosa. Conclusion Patients with anorexia nervosa had increased skewness and kurtosis and decreased entropy and MPP compared with normal-weight control subjects. These parameters were associated with lowest lifetime weight and duration of amenorrhea, but there were no such associations with bone mineral density. These findings suggest that trabecular texture analysis might contribute information about bone health in anorexia nervosa that is independent of that provided with bone mineral density. © RSNA, 2016.
Morphologic dating of fault scarps using airborne laser swath mapping (ALSM) data
Hilley, G.E.; Delong, S.; Prentice, C.; Blisniuk, K.; Arrowsmith, J.R.
2010-01-01
Models of fault scarp morphology have been previously used to infer the relative age of different fault scarps in a fault zone using labor-intensive ground surveying. We present a method for automatically extracting scarp morphologic ages within high-resolution digital topography. Scarp degradation is modeled as a diffusive mass transport process in the across-scarp direction. The second derivative of the modeled degraded fault scarp was normalized to yield the best-fitting (in a least-squared sense) scarp height at each point, and the signal-to-noise ratio identified those areas containing scarp-like topography. We applied this method to three areas along the San Andreas Fault and found correspondence between the mapped geometry of the fault and that extracted by our analysis. This suggests that the spatial distribution of scarp ages may be revealed by such an analysis, allowing the recent temporal development of a fault zone to be imaged along its length.
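The degradation model is the 1D diffusion equation applied to the across-scarp profile. A minimal forward-model sketch with an explicit finite-difference scheme; the step geometry and the morphologic age value are illustrative assumptions:

```python
import numpy as np

def degrade_scarp(profile, kappa_t, dx=1.0, dt=0.1):
    # Diffusive scarp degradation, dz/dt = kappa * d2z/dx2, integrated with
    # explicit FTCS steps until the morphologic age kappa*t is reached.
    z = profile.astype(float).copy()
    r = dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit"
    for _ in range(int(kappa_t / dt)):
        z[1:-1] += r * (z[2:] - 2 * z[1:-1] + z[:-2])
    return z

x = np.linspace(-50, 50, 201)
initial = np.where(x < 0, 0.0, 2.0)              # hypothetical fresh 2 m scarp
aged = degrade_scarp(initial, kappa_t=10.0, dx=0.5)
```

Fitting reverses this: find the kappa*t whose modeled profile best matches the observed one in a least-squares sense.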
Structural changes of the corpus callosum in tinnitus
Diesch, Eugen; Schummer, Verena; Kramer, Martin; Rupp, Andre
2012-01-01
Objectives: In tinnitus, several brain regions seem to be structurally altered, including the medial partition of Heschl's gyrus (mHG), the site of the primary auditory cortex. The mHG is smaller in tinnitus patients than in healthy controls. The corpus callosum (CC) is the main interhemispheric commissure of the brain, connecting the auditory areas of the left and the right hemisphere. Here, we investigate whether tinnitus status is associated with CC volume. Methods: The midsagittal cross-sectional area of the CC was examined in tinnitus patients and healthy controls in which an examination of the mHG had been carried out earlier. The CC was extracted and segmented into subregions which were defined according to the most common CC morphometry schemes, introduced by Witelson (1989) and Hofer and Frahm (2006). Results: For both CC segmentation schemes, the CC posterior midbody was smaller in male patients than in male healthy controls, and the isthmus, the anterior midbody, and the genu were larger in female patients than in female controls. With CC size normalized relative to mHG volume, the normalized CC splenium was larger in male patients than male controls, and the normalized CC splenium, the isthmus and the genu were larger in female patients than female controls. Normalized CC segment size expresses callosal interconnectivity relative to auditory cortex volume. Conclusion: It may be argued that the predominant function of the CC is excitatory. The stronger callosal interconnectivity in tinnitus patients, compared to healthy controls, may facilitate the emergence and maintenance of a positive feedback loop between tinnitus generators located in the two hemispheres. PMID:22470322
Nucleolar Organizer Regions of Oral Epithelial Cells in Crack Cocaine Users
Carvalho de M. Thiele, Magna; Carlos Bohn, Joslei; Lima Chaiben, Cassiano; Trindade Grégio, Ana Maria; Ângela Naval Machado, Maria; Adilson Soares de Lima, Antonio
2013-01-01
Background: The health risks of crack cocaine smoking on the oral mucosa have not been widely researched and documented. Objective: The purpose of this study was to analyze the proliferative activity of oral epithelial cells exposed to crack cocaine smoke using silver nucleolar organizer region (AgNOR) staining. Methods: Oral smears were collected from clinically normal-appearing buccal mucosa by liquid-based exfoliative cytology of 60 individuals (30 crack cocaine users and 30 healthy controls matched for age and gender) and analyzed by cytomorphologic and cytomorphometric techniques. Results: Crack cocaine users consumed about 13.3 heat-stable rocks per day, and the mean duration of drug use was 5.2 (± 3.3) years. Mean values of AgNOR counts for the case and control groups were 5.18 ± 1.83 and 3.38 ± 1.02 (P<0.05), respectively. The AgNOR area and the percentage of AgNOR-occupied nuclear area were increased in comparison with the control (P<0.05). There was no statistically significant difference in the mean values of the nuclear area between the groups (P>0.05). Conclusion: This study revealed that crack cocaine smoke increases the rate of cellular proliferation in cells of normal buccal mucosa. PMID:23567853
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damilakis, J; Stratakis, J; Solomou, G
Purpose: It is well known that pacemaker implantation is sometimes needed in pregnant patients with symptomatic bradycardia. To our knowledge, there is no reported experience regarding radiation doses to the unborn child resulting from fluoroscopy during pacemaker implantation. The purpose of the current study was to develop a method for estimating embryo/fetus dose from fluoroscopically guided pacemaker implantation procedures performed on pregnant patients during all trimesters of gestation. Methods: The Monte Carlo N-Particle (MCNP) radiation transport code was employed in this study. Three mathematical anthropomorphic phantoms representing the average pregnant patient at the first, second and third trimesters of gestation were generated using Bodybuilder software (White Rock Science, White Rock, NM). The normalized embryo/fetus doses from the posteroanterior (PA), the 30° left-anterior oblique (LAO) and the 30° right-anterior oblique (RAO) projections were calculated for a wide range of kVp (50-120 kVp) and total filtration values (2.5-9.0 mm Al). Results: The results consist of radiation doses normalized to (a) entrance skin dose (ESD) and (b) dose area product (DAP), so that the dose to the unborn child from any fluoroscopic technique and x-ray device used can be calculated. ESD-normalized doses ranged from 0.008 (PA, first trimester) to 2.519 μGy/mGy (RAO, third trimester). DAP-normalized doses ranged from 0.051 (PA, first trimester) to 12.852 μGy/Gycm2 (RAO, third trimester). Conclusion: Embryo/fetus doses from fluoroscopically guided pacemaker implantation procedures performed on pregnant patients during all stages of gestation can be estimated using the method developed in this study. This study was supported by the Greek Ministry of Education and Religious Affairs, General Secretariat for Research and Technology, Operational Program ‘Education and Lifelong Learning’, ARISTIA (Research project: CONCERT)
Parenreng, Jumadi Mabe; Kitagawa, Akio
2018-05-17
Wireless Sensor Networks (WSNs) with limited battery, central processing units (CPUs), and memory resources are a widely implemented technology for early warning detection systems. The main advantage of WSNs is their ability to be deployed in areas that are difficult for humans to access. In such areas, regular maintenance may be impossible; therefore, WSN devices must utilize their limited resources to operate for as long as possible, but longer operation requires maintenance. One method of maintenance is to apply a resource adaptation policy when a system reaches a critical threshold. This study discusses the application of a security level adaptation model, such as the ARSy Framework, for using resources more efficiently. A single node comprising a Raspberry Pi 3 Model B and a DS18B20 temperature sensor was tested in a laboratory under normal and stressful conditions. The results show that under normal conditions the system operates approximately three times longer than under stressful conditions. Maintaining the stability of the resources also enables the security level of a network's data output to stay at a high or medium level. PMID:29772773
Easy-interactive and quick psoriasis lesion segmentation
NASA Astrophysics Data System (ADS)
Ma, Guoli; He, Bei; Yang, Wenming; Shu, Chang
2013-12-01
This paper proposes an interactive psoriasis lesion segmentation algorithm based on the Gaussian Mixture Model (GMM). Psoriasis is an incurable skin disease that affects a large population worldwide. PASI (Psoriasis Area and Severity Index) is the gold standard used by dermatologists to monitor the severity of psoriasis. Computer-aided methods of calculating PASI are more objective and accurate than human visual assessment, and psoriasis lesion segmentation is the basis of the whole calculation. This segmentation differs from common foreground/background segmentation problems. Our algorithm is inspired by GrabCut and consists of three main stages. First, the skin area is extracted from the background scene by transforming the RGB values into the YCbCr color space. Second, a rough segmentation of normal skin and psoriasis lesion is produced; this initial segmentation is given by thresholding a single Gaussian model, and the thresholds are adjustable, which enables user interaction. Third, two GMMs, one for the initial normal skin and one for the psoriasis lesion, are built to refine the segmentation. Experimental results demonstrate the effectiveness of the proposed algorithm.
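A minimal sketch of the three-stage pipeline is given below, assuming scikit-image and scikit-learn are available; the Cb/Cr skin ranges and the rough-lesion rule are illustrative stand-ins for the paper's adjustable thresholds, not the published values.

```python
# Sketch of: (1) YCbCr skin extraction, (2) rough lesion split,
# (3) two-GMM refinement. Thresholds are illustrative assumptions.
import numpy as np
from skimage.color import rgb2ycbcr
from sklearn.mixture import GaussianMixture

def segment_psoriasis(rgb_image, cb_range=(77, 127), cr_range=(133, 173)):
    # Stage 1: skin extraction by thresholding Cb/Cr (generic skin ranges).
    ycbcr = rgb2ycbcr(rgb_image)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    skin = (cb >= cb_range[0]) & (cb <= cb_range[1]) & \
           (cr >= cr_range[0]) & (cr <= cr_range[1])

    # Stage 2: rough lesion/normal split; here a crude stand-in that
    # thresholds redness (Cr) at its median over skin pixels.
    rough_lesion = skin & (cr > np.median(cr[skin]))

    # Stage 3: fit one GMM to initial normal skin and one to the initial
    # lesion, then reassign each skin pixel to the likelier class.
    pixels = rgb_image[skin].astype(float)
    labels = rough_lesion[skin]
    gmm_skin = GaussianMixture(n_components=3).fit(pixels[~labels])
    gmm_lesion = GaussianMixture(n_components=3).fit(pixels[labels])
    refined = gmm_lesion.score_samples(pixels) > gmm_skin.score_samples(pixels)

    out = np.zeros(rgb_image.shape[:2], dtype=bool)
    out[skin] = refined
    return out
```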
Coban, Huseyin Oguz; Koc, Ayhan; Eker, Mehmet
2010-01-01
Previous studies have successfully detected changes in gently sloping forested areas with low-diversity, homogeneous vegetation cover using medium-resolution satellite data such as Landsat. The aim of the present study is to examine the capacity of multi-temporal Landsat data to identify changes in forested areas with mixed vegetation that are generally located on steep slopes or non-uniform topography. Landsat Thematic Mapper (TM) and Landsat Enhanced Thematic Mapper Plus (ETM+) data for the years 1987-2000 were used to detect changes within a 19,500 ha forested area in the Western Black Sea region of Turkey. The data comply with the forest cover type maps previously created for forest management plans of the research area. The methods used to detect changes were post-classification comparison, image differencing, image ratioing, and NDVI (Normalized Difference Vegetation Index) differencing. Following the supervised classification process, error matrices were used to evaluate the accuracy of the classified images obtained. The overall accuracy was calculated as 87.59% for the 1987 image and 91.81% for the 2000 image. Overall kappa statistics were calculated as 0.8543 and 0.9038 for 1987 and 2000, respectively. The changes identified via the post-classification comparison method were compared with the other change detection methods. Maximum agreement was found to be 74.95% for the 4/3 band ratio. The NDVI difference and 3rd band difference methods achieved the same agreement with slight variations. The results suggest that Landsat satellite data accurately convey the temporal changes which occur on steeply sloping forested areas with a mixed structure, providing a limited amount of detail but with a high level of accuracy. Moreover, the results indicate that the post-classification comparison method can meet the needs of forestry activities better than the other methods, as it provides information about the direction of these changes.
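For the NDVI-differencing step, a minimal numpy sketch follows, assuming co-registered red and near-infrared bands from the two dates; the change threshold is an illustrative assumption, not the study's value.

```python
# NDVI differencing between two dates; pixels whose NDVI changed by more
# than `threshold` are flagged as change candidates.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

def ndvi_change(nir_1987, red_1987, nir_2000, red_2000, threshold=0.2):
    diff = ndvi(nir_2000, red_2000) - ndvi(nir_1987, red_1987)
    return diff, np.abs(diff) > threshold  # (difference image, change mask)
```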
Eremenko, A I; Mogil'naia, G M; Giunter, V E; Sakhnov, S N; Stebliuk, A N
2006-01-01
Experimental (23 rabbits) and clinical (42 patients with operated glaucoma) studies were conducted to evaluate the effect of a titanium nickelide implant on ocular tissue in modified deep sclerectomy in order to normalize intraocular pressure. Scleral morphological studies in the area of implant placement revealed the formation of a capsule with "fissures" and vessels. Clinically, there was intraocular pressure compensation in 92.8% of the patients operated on.
Hemorrhage detection in MRI brain images using images features
NASA Astrophysics Data System (ADS)
Moraru, Luminita; Moldovanu, Simona; Bibicu, Dorin; Stratulat (Visan), Mirela
2013-11-01
Abnormalities appear frequently on magnetic resonance images (MRI) of the brain in elderly patients presenting either stroke or cognitive impairment. Detection of brain hemorrhage lesions in MRI is an important but very time-consuming task. This research aims to develop a method to extract brain tissue features from T2-weighted MR images of the brain using a selection of the most valuable texture features in order to discriminate between normal and affected areas of the brain. Because of the textural similarity between normal and affected areas in brain MR images, these operations are very challenging. A trauma may cause microstructural changes that are not necessarily perceptible by visual inspection but can be detected by texture analysis. The proposed analysis is developed in five steps: i) in the pre-processing step, de-noising is performed using Daubechies wavelets; ii) the original images are transformed into image features using first-order descriptors; iii) regions of interest (ROIs) are cropped from the image features following the axial symmetry properties with respect to the mid-sagittal plane; iv) the variation in the feature measurements is quantified using two descriptors of the co-occurrence matrix, namely energy and homogeneity; v) finally, the significance of the image features is analyzed using the t-test, with P-values applied to each pair of features in order to measure their efficacy.
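A minimal sketch of step (iv) is shown below, assuming a recent scikit-image (graycomatrix/graycoprops API) and an 8-bit ROI; the offsets and the demo ROI are illustrative.

```python
# Energy and homogeneity from the grey-level co-occurrence matrix (GLCM)
# of a region of interest, averaged over two pixel offsets.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_uint8):
    glcm = graycomatrix(roi_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("energy", "homogeneity")}

roi = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
print(glcm_features(roi))  # {'energy': ..., 'homogeneity': ...}
```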
Wang, Guangye; Huang, Wenjun; Song, Qi; Liang, Jinfeng
2017-11-01
This study aims to analyze the contact areas and pressure distributions between the femoral head and the acetabulum during normal walking using a three-dimensional finite element model (3D-FEM). Computed tomography (CT) scanning technology and a computer image processing system were used to establish the 3D-FEM. The acetabular model was used to simulate the pressures during 32 consecutive normal walking phases, and the contact areas at the different phases were calculated. The distribution of the pressure peak values during the 32 consecutive normal walking phases was bimodal, reaching its peak (4.2 MPa) at the initial phase, where the contact area was significantly higher than that at the stepping phase. The sites that remained in contact throughout were concentrated on the acetabular top and leaned inwards, while the anterior and posterior acetabular horns showed no pressure concentration. The pressure distributions of the acetabular cartilage at different phases were significantly different: the zone of increased pressure at the support phase was distributed over the acetabular top area, while that at the stepping phase was distributed on the inside of the acetabular cartilage. The zones of increased contact pressure and the distributions of the acetabular contact areas have important implications for clinical research and could indicate inductive factors of acetabular osteoarthritis. Copyright © 2016. Published by Elsevier Taiwan.
Application of Poisson random effect models for highway network screening.
Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer
2014-02-01
In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data have become popular in traffic safety research. This study employs random effect Poisson log-normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. The Potential for Safety Improvement (PSI) was adopted as a measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayes (EB) method and the conventional Bayesian Poisson log-normal model. A series of method examination tests was conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only temporal random effects, and both are superior to the conventional Poisson log-normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved.
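For readers unfamiliar with the EB baseline and the PSI measure the study compares against, a minimal sketch follows; the formulas are one common parameterization of the standard EB weighting, not code from the paper, and the overdispersion value is illustrative.

```python
# EB-adjusted expected crash frequency and the PSI hotspot measure.
import numpy as np

def empirical_bayes(observed, predicted, overdispersion):
    """EB estimate per site: weight w = 1 / (1 + predicted / phi), with phi
    the overdispersion parameter of the negative binomial safety
    performance function (one common parameterization)."""
    w = 1.0 / (1.0 + predicted / overdispersion)
    return w * predicted + (1.0 - w) * observed

def psi(observed, predicted, overdispersion):
    """Potential for Safety Improvement: EB estimate minus SPF prediction."""
    return empirical_bayes(observed, predicted, overdispersion) - predicted

obs = np.array([12.0, 3.0, 7.0])
pred = np.array([5.0, 4.0, 6.0])
print(psi(obs, pred, overdispersion=2.0))  # positive values flag hotspots
```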
Improving the performance of univariate control charts for abnormal detection and classification
NASA Astrophysics Data System (ADS)
Yiakopoulos, Christos; Koutsoudaki, Maria; Gryllias, Konstantinos; Antoniadis, Ioannis
2017-03-01
Bearing failures in rotating machinery can cause machine breakdown and economic loss if no effective actions are taken in time. Therefore, it is of prime importance to detect accurately the presence of faults, especially at their early stage, to prevent subsequent damage and reduce costly downtime. Machinery fault diagnosis follows a roadmap of data acquisition, feature extraction and diagnostic decision making, in which mechanical vibration fault feature extraction is the foundation and the key to obtaining an accurate diagnostic result. A challenge in this area is the selection of the most sensitive features for various types of fault, especially when the characteristics of failures are difficult to extract. Thus, a plethora of complex data-driven fault diagnosis methods are fed by prominent features, which are extracted and reduced through traditional or modern algorithms. Since most of the available datasets are captured during normal operating conditions, over the last decade a number of novelty detection methods, able to work when only normal data are available, have been developed. In this study, a hybrid method combining univariate control charts and a feature extraction scheme is introduced, focusing on abnormal change detection and classification under the assumption that measurements under normal operating conditions of the machinery are available. The feature extraction method integrates morphological operators and Morlet wavelets. The effectiveness of the proposed methodology is validated on two different experimental cases with bearing faults, demonstrating that the proposed approach can improve the fault detection and classification performance of conventional control charts.
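A minimal sketch of the univariate (Shewhart-style) chart assumed here: control limits are learned from healthy-condition features only, and new samples outside the limits are flagged. The feature values below are simulated placeholders, not the paper's morphological/wavelet features.

```python
# Control limits from normal-operation data; out-of-limit samples flagged.
import numpy as np

def control_limits(normal_features, k=3.0):
    """Center line and k-sigma limits from healthy-condition data."""
    mu, sigma = normal_features.mean(), normal_features.std(ddof=1)
    return mu - k * sigma, mu, mu + k * sigma

def flag_abnormal(new_features, lcl, ucl):
    return (new_features < lcl) | (new_features > ucl)

rng = np.random.default_rng(0)
healthy = rng.normal(1.0, 0.1, 500)          # stand-in vibration feature
lcl, center, ucl = control_limits(healthy)
test = rng.normal(1.5, 0.1, 10)              # shifted, fault-like values
print(flag_abnormal(test, lcl, ucl).mean())  # fraction of samples flagged
```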
Percent area coverage through image analysis
NASA Astrophysics Data System (ADS)
Wong, Chung M.; Hong, Sung M.; Liu, De-Ling
2016-09-01
The notion of percent area coverage (PAC) has been used to characterize surface cleanliness levels in the spacecraft contamination control community. Due to the lack of detailed particle data, PAC has conventionally been calculated by multiplying the particle surface density in predetermined particle size bins by a set of coefficients per MIL-STD-1246C. In deriving the set of coefficients, the surface particle size distribution is assumed to follow a log-normal relation between particle density and particle size, while the cross-sectional area function is given as a combination of regular geometric shapes. For particles with irregular shapes, the cross-sectional area function cannot describe the true particle area and, therefore, may introduce error into the PAC calculation. Other errors may also be introduced by using the log-normal surface particle size distribution function, which depends strongly on the environmental cleanliness and cleaning process. In this paper, we present PAC measurements from silicon witness wafers that collected fallout from a fabric material after vibration testing. PAC calculations were performed through analysis of microscope images and compared to values derived through the MIL-STD-1246C method. Our results showed that the MIL-STD-1246C method does provide a reasonable upper bound to the PAC values determined through image analysis, in particular for PAC values below 0.1.
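The image-analysis PAC reduces to summed particle area over inspected area. A minimal sketch, assuming a calibrated binary particle mask has been produced upstream (thresholding and calibration are not shown):

```python
# PAC from a binary particle mask: the per-pixel area cancels, so PAC is
# simply the fraction of mask pixels that are True, expressed in percent.
import numpy as np

def percent_area_coverage(particle_mask):
    """particle_mask: boolean image, True where a particle is detected."""
    return 100.0 * particle_mask.sum() / particle_mask.size

mask = np.zeros((1000, 1000), dtype=bool)
mask[10:20, 10:30] = True           # one 200-pixel "particle"
print(percent_area_coverage(mask))  # 0.02 (percent of area covered)
```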
Benson, L.V.; Paillet, Frederick L.
1989-01-01
Variation in the size of lakes in the Lahontan basin is topographically constrained. River diversion also has played a major role in regulating lake size in Lahontan subbasins. The proper gage of lake response to change in the hydrologic balance is neither lake depth (level) nor lake volume but instead lake-surface area. Normalization of surface area is necessary when comparing surface areas of lakes in basins having different topographies. To a first approximation, normalization can be accomplished by dividing the paleosurface area of a lake by its mean-historical, reconstructed surface area. © 1989.
Wang, Xinmei; Cui, Dongmei; Zheng, Ling; Yang, Xiao; Yang, Hui
2012-01-01
Purpose To elucidate the different neural mechanisms of subjects with strabismic and anisometropic amblyopia compared with normal vision subjects using blood oxygen level–dependent functional magnetic resonance imaging (BOLD-fMRI) and pattern-reversal visual evoked potential (PR-VEP). Methods Fifty-three subjects, age range seven to 12 years, diagnosed with strabismic amblyopia (17 cases), anisometropic amblyopia (20 cases), and normal vision (16 cases), were examined using BOLD-fMRI and the PR-VEP of UTAS-E3000 techniques. Cortical activation by binocular viewing of reversal checkerboard patterns was examined in terms of calcarine region of interest (ROI)-based and spatial frequency–dependent analysis. The correlation of cortical activation in fMRI and the P100 amplitude in VEP were analyzed using the SPSS 12.0 software package. Results In the BOLD-fMRI procedure, reduced areas and decreased activation levels were found in Brodmann area (BA) 17 and other extrastriate areas in subjects with amblyopia compared with the normal vision group. In general, the reduced areas mainly resided in the striate visual cortex in subjects with anisometropic amblyopia. In subjects with strabismic amblyopia, a more significant cortical impairment was found in bilateral BA 18 and BA 19 than in subjects with anisometropic amblyopia. The activation by high-spatial-frequency stimuli was reduced in bilateral BA 18 and BA 19 as well as BA 17 in subjects with anisometropic amblyopia, whereas the activation was mainly reduced in BA 18 and BA 19 in subjects with strabismic amblyopia. These findings were further confirmed by the ROI-based analysis of BA 17. During spatial frequency–dependent VEP detection, subjects with anisometropic amblyopia had reduced sensitivity to high spatial frequency compared to subjects with strabismic amblyopia. The cortical activation in fMRI with the calcarine ROI-based analysis of BA 17 was significantly correlated with the P100 amplitude in VEP recording. Conclusions This study suggested that different types of amblyopia have different cortical responses, and that combining spatial frequency–dependent BOLD-fMRI with PR-VEP can differentiate among various kinds of amblyopia according to these cortical responses. This study may supply new methods for studying the neural basis of amblyopia. PMID:22539870
A histopathological study of bulbar conjunctival flaps occurring in 2 contact lens wearers.
Markoulli, Maria; Francis, Ian C; Yong, Jim; Jalbert, Isabelle; Carnt, Nicole; Cole, Nerida; Papas, Eric
2011-09-01
To study the histopathology of paralimbal bulbar conjunctival flaps occurring secondary to soft contact lens wear. Slit-lamp biomicroscopy using sodium fluorescein, cobalt blue light, and a Wratten filter was used to observe the presence, location, and dimensions of bulbar conjunctival flaps presenting in a cohort of contact lens wearers. Two subjects who exhibited such flaps agreed to undergo conjunctival biopsy. Tissue samples, obtained from the region of the flap, and an adjacent unaffected area were processed by standard histopathological methods. In the first subject, analysis of the flap tissue showed even collagen distribution and overall normal histology. The flap of the second subject displayed a mild focal increase in collagen and mild degeneration of collagen, but no increase in elastic tissue. Conjunctival epithelium was normal in both cases. In these 2 subjects, conjunctival flap tissue either was normal or showed only minimal abnormality. There is insufficient evidence for significant pathological change on the time scale of this study.
Yu, Bin; Yang, Mei; Shi, Lei; Yao, Yandan; Jiang, Qinqin; Li, Xuefei; Tang, Lei-Han; Zheng, Bo-Jian; Yuen, Kwok-Yung; Smith, David K.; Song, Erwei; Huang, Jian-Dong
2012-01-01
Using bacteria as therapeutic agents against solid tumors is emerging as an area of great potential in the treatment of cancer. Obligate and facultative anaerobic bacteria have been shown to infiltrate the hypoxic regions of solid tumors, thereby reducing their growth rate or causing regression. However, a major challenge for bacterial therapy of cancer with facultative anaerobes is avoiding damage to normal tissues. Consequently the virulence of bacteria must be adequately attenuated for therapeutic use. By placing an essential gene under a hypoxia conditioned promoter, Salmonella Typhimurium strain SL7207 was engineered to survive only in anaerobic conditions (strain YB1) without otherwise affecting its functions. In breast tumor bearing nude mice, YB1 grew within the tumor, retarding its growth, while being rapidly eliminated from normal tissues. YB1 provides a safe bacterial vector for anti-tumor therapies without compromising the other functions or tumor fitness of the bacterium as attenuation methods normally do. PMID:22666539
Tani, Kazuki; Mio, Motohira; Toyofuku, Tatsuo; Kato, Shinichi; Masumoto, Tomoya; Ijichi, Tetsuya; Matsushima, Masatoshi; Morimoto, Shoichi; Hirata, Takumi
2017-01-01
Spatial normalization is a significant image pre-processing operation in statistical parametric mapping (SPM) analysis. The purpose of this study was to clarify the optimal method of spatial normalization for improving diagnostic accuracy in SPM analysis of arterial spin-labeling (ASL) perfusion images. We evaluated the SPM results of five spatial normalization methods obtained by comparing patients with Alzheimer's disease or normal pressure hydrocephalus complicated with dementia against cognitively healthy subjects. We used the following methods: 3DT1-conventional, based on spatial normalization using anatomical images; 3DT1-DARTEL, based on spatial normalization with DARTEL using anatomical images; 3DT1-conventional template and 3DT1-DARTEL template, created by averaging cognitively healthy subjects spatially normalized using the above methods; and ASL-DARTEL template, created by averaging cognitively healthy subjects spatially normalized with DARTEL using ASL images only. Our results showed that the ASL-DARTEL template was small compared with the other two templates, and the SPM results obtained with the ASL-DARTEL template method were inaccurate. There were no significant differences between the 3DT1-conventional and 3DT1-DARTEL template methods. In contrast, the 3DT1-DARTEL method showed higher detection sensitivity and more precise anatomical localization. Our SPM results suggest that spatial normalization should be performed with DARTEL using anatomical images.
On-chip self-assembly of cell embedded microstructures to vascular-like microtubes.
Yue, Tao; Nakajima, Masahiro; Takeuchi, Masaru; Hu, Chengzhi; Huang, Qiang; Fukuda, Toshio
2014-03-21
Currently, research on the construction of vascular-like tubular structures is a hot area of tissue engineering, since it has potential applications in the building of artificial blood vessels. In this paper, we report a fluidic self-assembly method using cell embedded microstructures to construct vascular-like microtubes. A novel 4-layer microfluidic device was fabricated using polydimethylsiloxane (PDMS), which contains fabrication, self-assembly and extraction areas inside one channel. Cell embedded microstructures were directly fabricated using poly(ethylene glycol) diacrylate (PEGDA) in the fabrication area, namely on-chip fabrication. Self-assembly of the fabricated microstructures was performed in the assembly area which has a micro well. Assembled tubular structures (microtubes) were extracted outside the channel into culture dishes using a normally closed (NC) micro valve in the extraction area. The self-assembly mechanism was experimentally demonstrated. The performance of the NC micro valve and embedded cell concentration were both evaluated. Fibroblast (NIH/3T3) embedded vascular-like microtubes were constructed inside this reusable microfluidic device.
Ultrasound of Inherited vs. Acquired Demyelinating Polyneuropathies
Zaidman, Craig M.; Harms, Matthew B.; Pestronk, Alan
2013-01-01
Introduction We compared features of nerve enlargement in inherited and acquired demyelinating neuropathies using ultrasound. Methods We measured median and ulnar nerve cross-sectional areas in proximal and distal regions in 128 children and adults with inherited (Charcot-Marie-Tooth type 1 (CMT-1), n=35) and acquired (chronic inflammatory demyelinating polyneuropathy (CIDP), n=55; Guillain-Barré syndrome (GBS), n=21; and multifocal motor neuropathy (MMN), n=17) demyelinating neuropathies. We classified nerve enlargement by degree and number of regions affected. We defined patterns of nerve enlargement as: none, no enlargement; mild, nerves enlarged but never more than twice normal; regional, nerves normal at at least one region and enlarged to more than twice normal at at least one region; diffuse, nerves enlarged at all four regions, with at least one region more than twice normal size. Results Nerve enlargement was commonly diffuse (89%) and generally more than twice normal size in CMT-1, but not (p<0.001) in the acquired disorders, which mostly had either no, mild, or regional nerve enlargement (CIDP, 64%; GBS, 95%; MMN, 100%). In CIDP, subjects treated within three months of disease onset had less nerve enlargement than those treated later. Discussion Ultrasound-identified patterns of diffuse nerve enlargement can be used to screen patients suspected of having CMT-1. Normal, mildly, or regionally enlarged nerves in demyelinating polyneuropathy suggest an acquired etiology. Early treatment in CIDP may impede nerve enlargement. PMID:24101129
Corticocortical feedback increases the spatial extent of normalization.
Nassi, Jonathan J; Gómez-Laberge, Camille; Kreiman, Gabriel; Born, Richard T
2014-01-01
Normalization has been proposed as a canonical computation operating across different brain regions, sensory modalities, and species. It provides a good phenomenological description of non-linear response properties in primary visual cortex (V1), including the contrast response function and surround suppression. Despite its widespread application throughout the visual system, the underlying neural mechanisms remain largely unknown. We recently observed that corticocortical feedback contributes to surround suppression in V1, raising the possibility that feedback acts through normalization. To test this idea, we characterized area summation and contrast response properties in V1 with and without feedback from V2 and V3 in alert macaques and applied a standard normalization model to the data. Area summation properties were well explained by a form of divisive normalization, which computes the ratio between a neuron's driving input and the spatially integrated activity of a "normalization pool." Feedback inactivation reduced surround suppression by shrinking the spatial extent of the normalization pool. This effect was independent of the gain modulation thought to mediate the influence of contrast on area summation, which remained intact during feedback inactivation. Contrast sensitivity within the receptive field center was also unaffected by feedback inactivation, providing further evidence that feedback participates in normalization independent of the circuit mechanisms involved in modulating contrast gain and saturation. These results suggest that corticocortical feedback contributes to surround suppression by increasing the visuotopic extent of normalization and, via this mechanism, feedback can play a critical role in contextual information processing. PMID:24910596
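A minimal numpy sketch of this divisive-normalization form follows, with the pool's spatial radius as the quantity the feedback manipulation is argued to control; all parameter values and the 1-D geometry are illustrative simplifications.

```python
# Divisive normalization: response = drive^n / (sigma^n + pooled activity),
# where the pool integrates drive over a spatial neighborhood.
import numpy as np

def divisive_normalization(drive, pool_radius=3, sigma=0.1, n=2.0):
    """drive: 1-D array of driving inputs across a visuotopic axis."""
    response = np.empty_like(drive)
    for i in range(drive.size):
        lo, hi = max(0, i - pool_radius), min(drive.size, i + pool_radius + 1)
        pool = np.sum(drive[lo:hi] ** n)          # normalization pool
        response[i] = drive[i] ** n / (sigma ** n + pool)
    return response

stim = np.exp(-np.linspace(-3, 3, 31) ** 2)       # localized stimulus drive
print(divisive_normalization(stim, pool_radius=1).max())   # smaller pool
print(divisive_normalization(stim, pool_radius=10).max())  # more suppression
```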
Dynamic drought risk assessment using crop model and remote sensing techniques
NASA Astrophysics Data System (ADS)
Sun, H.; Su, Z.; Lv, J.; Li, L.; Wang, Y.
2017-02-01
Drought risk assessment is of great significance for reducing agricultural drought losses and ensuring food security. The usual drought risk assessment method evaluates a region's exposure to the hazard and its vulnerability to extended periods of water shortage, which is a static evaluation. Dynamic Drought Risk Assessment (DDRA) estimates drought risk according to crop growth and water stress conditions in real time. In this study, a DDRA method using a crop model and remote sensing techniques was proposed. The crop model employed is the DeNitrification and DeComposition (DNDC) model. Drought risk was quantified by the yield losses predicted by the crop model in a scenario-based method. The crop model was re-calibrated to improve its performance using the Leaf Area Index (LAI) retrieved from MODerate Resolution Imaging Spectroradiometer (MODIS) data, and the in-situ, station-based crop model was extended to assess regional drought risk by integrating crop planting maps. The crop planted area was extracted from MODIS data with the extended CPPI method. This study was implemented and validated on the maize crop in Liaoning province, China.
NASA Astrophysics Data System (ADS)
Wei, S.; Fang, H.
2016-12-01
The clumping index (CI) describes the spatial distribution pattern of foliage and is a critical parameter used to characterize the terrestrial ecosystem and model land-surface processes. Global and regional scale CI maps have been generated from POLDER, MODIS, and MISR sensors in previous studies, based on an empirical relationship with the normalized difference between hotspot and darkspot (NDHD) index. However, the hotspot and darkspot values and the resulting CI values can differ considerably between bidirectional reflectance distribution function (BRDF) models and solar zenith angles (SZA). In this study, we evaluated the effects of different configurations of BRDF models and SZA values on CI estimation using the NDHD method. CI maps estimated from MISR and MODIS were compared with reference data at the VALERI sites. Results show that for moderately to least clumped vegetation (CI > 0.5), CIs retrieved with the observational SZA agree well with field values, while SZA = 0° results in underestimates and SZA = 60° results in overestimates. For highly clumped (CI < 0.5) and sparsely vegetated areas (FCOVER < 25%), the Ross-Li model with 60° SZA is recommended for CI estimation. The suitable NDHD configuration was further used to estimate a 15-year time series of CI from MODIS BRDF data. The time series CI shows a reasonable seasonal trajectory and varies consistently with the MODIS leaf area index (LAI). This study enables better usage of the NDHD method for CI estimation and can be a useful reference for research on CI validation.
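A minimal sketch of the NDHD-to-CI step follows; the linear coefficients are placeholders, since published coefficients depend on the BRDF model and SZA configuration, which is precisely the sensitivity examined above.

```python
# NDHD from BRDF-modeled hotspot/darkspot reflectances, then an empirical
# linear mapping to CI. Coefficients a and b are illustrative placeholders.
def ndhd(rho_hotspot, rho_darkspot):
    return (rho_hotspot - rho_darkspot) / (rho_hotspot + rho_darkspot)

def clumping_index(rho_hotspot, rho_darkspot, a=-2.0, b=1.0):
    """CI = a * NDHD + b (empirical; a, b are not published values)."""
    return a * ndhd(rho_hotspot, rho_darkspot) + b

print(clumping_index(0.30, 0.15))  # ~0.33 with the placeholder coefficients
```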
Corneal endothelial cell density and morphology in normal Iranian eyes
Hashemian, Mohammad Nasser; Moghimi, Sasan; Fard, Masood Aghsaie; Fallah, Mohammad Reza; Mansouri, Mohammad Reza
2006-01-01
Background We describe corneal endothelial cell density and morphology in normal Iranian eyes and compare endothelial cell characteristics in the Iranian population with data available in the literature for American and Indian populations. Methods Specular microscopy was performed in 525 eyes of normal Iranian people aged 20 to 85 years. The studied parameters, including mean endothelial cell density (MCD), mean cell area (MCA) and coefficient of variation (CV) in cell area, were analyzed in all 525 eyes. Results MCD was 1961 ± 457 cells/mm² and MCA was 537.0 ± 137.4 μm². There was no statistically significant difference in MCD, MCA or CV between genders (Student t-test, P = 0.85, P = 0.97 and P = 0.15, respectively). There was a statistically significant decrease in MCD with age (P < 0.001, r = -0.64); the rate of cell loss was 0.6% per year. There was also a statistically significant increase in MCA (P < 0.001, r = 0.56) and CV (P < 0.001, r = 0.30) from 20 to 85 years of age. Conclusion The first normative data for the endothelium of Iranian eyes seem to confirm that there are no differences in MCD, MCA or CV between genders. Nevertheless, the values obtained in Iranian eyes appear to differ from those reported in the literature for Indian and American populations. PMID:16519812
Zierler, R E; Phillips, D J; Beach, K W; Primozich, J F; Strandness, D E
1987-08-01
The combination of a B-mode imaging system and a single range-gate pulsed Doppler flow velocity detector (duplex scanner) has become the standard noninvasive method for assessing the extracranial carotid artery. However, a significant limitation of this approach is the small area of vessel lumen that can be evaluated at any one time. This report describes a new duplex instrument that displays blood flow as colors superimposed on a real-time B-mode image. Returning echoes from a linear array of transducers are continuously processed for amplitude and phase. Changes in phase are produced by tissue motion and are used to calculate Doppler shift frequency. This results in a color assignment: red and blue indicate direction of flow with respect to the ultrasound beam, and lighter shades represent higher velocities. The carotid bifurcations of 10 normal subjects were studied. Changes in flow velocities across the arterial lumen were clearly visualized as varying shades of red or blue during the cardiac cycle. A region of flow separation was observed in all proximal internal carotids as a blue area located along the outer wall of the bulb. Thus, it is possible to detect the localized flow patterns that characterize normal carotid arteries. Other advantages of color-flow imaging include the ability to rapidly identify the carotid bifurcation branches and any associated anatomic variations.
Effects of education on aging-related cortical thinning among cognitively normal individuals.
Kim, Jun Pyo; Seo, Sang Won; Shin, Hee Young; Ye, Byoung Seok; Yang, Jin-Ju; Kim, Changsoo; Kang, Mira; Jeon, Seun; Kim, Hee Jin; Cho, Hanna; Kim, Jung-Hyun; Lee, Jong-Min; Kim, Sung Tae; Na, Duk L; Guallar, Eliseo
2015-09-01
We aimed to investigate the relationship between education and cortical thickness in cognitively normal individuals to determine whether education attenuates the association between advanced aging and cortical thinning. A total of 1,959 participants for whom education levels were available were included in the final analysis. Cortical thickness was measured on high-resolution MRIs using a surface-based method. Multiple linear regression analysis was performed for education level and cortical thickness, after controlling for possible confounders. High levels of education were correlated with increased mean cortical thickness throughout the entire cortex (p = 0.003). This association persisted after controlling for vascular risk factors. Statistical maps of cortical thickness showed that high levels of education were correlated with increased cortical thickness in the bilateral premotor areas, anterior cingulate cortices, perisylvian areas, right superior parietal lobule, left lingual gyrus, and occipital pole. There were also interactive effects of age and education on mean cortical thickness (p = 0.019). Our findings suggest a protective effect of education on cortical thinning in cognitively normal older individuals, regardless of vascular risk factors. This effect was found only in the older participants, suggesting that the protective effects of education on cortical thickness might be achieved through increased resistance to structural loss from aging rather than by simply providing a fixed advantage in the brain. © 2015 American Academy of Neurology.
Ito, Y; Hasegawa, S; Yamaguchi, H; Yoshioka, J; Uehara, T; Nishimura, T
2000-01-01
Clinical studies have shown discrepancies in the distribution of thallium-201 (Tl-201) and iodine-123-beta-methyl-iodophenylpentadecanoic acid (BMIPP) in patients with hypertrophic cardiomyopathy (HCM). Myocardial uptake of fluorine-18 deoxyglucose (FDG) is increased in the hypertrophic area in HCM. We examined whether the distribution of a Tl-201/BMIPP subtraction polar map correlates with that of an FDG polar map. We normalized each Tl-201 and BMIPP bull's-eye polar map from 6 volunteers to its maximum count and obtained a standard Tl-201/BMIPP subtraction polar map by subtracting the normalized BMIPP bull's-eye polar map from the normalized Tl-201 bull's-eye polar map. The Tl-201/BMIPP subtraction polar map was then applied to 8 patients with HCM (mean age 65 ± 12 years) to evaluate the discrepancy between Tl-201 and BMIPP distribution, and was compared with an FDG polar map. In patients with HCM, the Tl-201/BMIPP subtraction polar map showed a focal uptake pattern in the hypertrophic area similar to that of the FDG polar map. By quantitative analysis, the severity score of the Tl-201/BMIPP subtraction polar map was significantly correlated with the percent dose uptake of the FDG polar map. These results suggest that this new quantitative method may be an alternative to FDG positron emission tomography for the routine evaluation of HCM.
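The subtraction-map construction described above is a two-line computation; a minimal numpy sketch:

```python
# Normalize each polar map to its own maximum count, then subtract.
import numpy as np

def subtraction_polar_map(tl201_map, bmipp_map):
    """Both inputs: 2-D bull's-eye polar maps of counts."""
    tl_norm = tl201_map / tl201_map.max()
    bmipp_norm = bmipp_map / bmipp_map.max()
    return tl_norm - bmipp_norm  # positive where BMIPP uptake lags Tl-201
```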
Zink, Jean-Vincent; Souteyrand, Philippe; Guis, Sandrine; Chagnaud, Christophe; Fur, Yann Le; Militianu, Daniela; Mattei, Jean-Pierre; Rozenbaum, Michael; Rosner, Itzhak; Guye, Maxime; Bernard, Monique; Bendahan, David
2015-01-01
AIM: To quantify the wrist cartilage cross-sectional area in humans from a 3D magnetic resonance imaging (MRI) dataset and to assess the corresponding reproducibility. METHODS: The study was conducted in 14 healthy volunteers (6 females and 8 males) between 30 and 58 years old and devoid of articular pain. Subjects were asked to lie down in the supine position with the right hand positioned above the pelvic region on top of a home-built rigid platform attached to the scanner bed. The wrist was wrapped with a flexible surface coil. MRI investigations were performed at 3T (Verio-Siemens) using volume interpolated breath hold examination (VIBE) and dual echo steady state (DESS) MRI sequences. Cartilage cross sectional area (CSA) was measured on a slice of interest selected from a 3D dataset of the entire carpus and metacarpal-phalangeal areas on the basis of anatomical criteria using conventional image processing radiology software. Cartilage cross-sectional areas between opposite bones in the carpal region were manually selected and quantified using a thresholding method. RESULTS: Cartilage CSA measurements performed on a selected predefined slice were 292.4 ± 39 mm2 using the VIBE sequence and slightly lower, 270.4 ± 50.6 mm2, with the DESS sequence. The inter (14.1%) and intra (2.4%) subject variability was similar for both MRI methods. The coefficients of variation computed for the repeated measurements were also comparable for the VIBE (2.4%) and the DESS (4.8%) sequences. The carpus length averaged over the group was 37.5 ± 2.8 mm with a 7.45% between-subjects coefficient of variation. Of note, wrist cartilage CSA measured with either the VIBE or the DESS sequences was linearly related to the carpal bone length. The variability between subjects was significantly reduced to 8.4% when the CSA was normalized with respect to the carpal bone length. CONCLUSION: The ratio between wrist cartilage CSA and carpal bone length is a highly reproducible standardized measurement which normalizes the natural diversity between individuals. PMID:26396941
NASA Astrophysics Data System (ADS)
Niazi, M. Khalid Khan; Beamer, Gillian; Gurcan, Metin N.
2017-03-01
Accurate detection and quantification of normal lung tissue in the context of Mycobacterium tuberculosis infection is of interest from a biological perspective. Automatic detection and quantification of normal lung allows biologists to focus more intensely on regions of interest within normal and infected tissues. We present a computational framework to extract individual tissue sections from whole slide images containing multiple tissue sections. It automatically detects the background, red blood cells and handwritten digits, bringing efficiency as well as accuracy to the quantification of tissue sections. For efficiency, we model our framework with logical and morphological operations, as they can be performed in linear time. We further divide these individual tissue sections into normal and infected areas using a deep neural network. The computational framework was trained on 60 whole slide images. It achieved an overall accuracy of 99.2% when extracting individual tissue sections from the 120 whole slide images in the test dataset, and a higher accuracy (99.7%) when classifying individual lung sections into normal and infected areas. Our preliminary findings suggest that the proposed framework agrees well with biologists on how to define normal and infected lung areas.
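A minimal scikit-image sketch of the section-extraction stage (global threshold plus morphological cleanup and connected-component labeling) follows; the published framework additionally removes red blood cells and handwritten digits, which this sketch omits, and the minimum-size parameter is an assumption.

```python
# Extract tissue sections from a grayscale slide: tissue is darker than the
# glass background, so Otsu thresholding plus morphology isolates sections.
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects, binary_closing, disk
from skimage.measure import label, regionprops

def extract_tissue_sections(gray_slide, min_section_px=5000):
    tissue = gray_slide < threshold_otsu(gray_slide)   # foreground mask
    tissue = binary_closing(tissue, disk(5))           # fill small gaps
    tissue = remove_small_objects(tissue, min_size=min_section_px)
    labeled = label(tissue)                            # one label per section
    return [r.bbox for r in regionprops(labeled)]      # bounding boxes
```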
Chen, Hsin-Yi; Huang, Mei-Ling; Huang, Wei-Cheng
2010-01-01
Purpose To study the capability of scanning laser polarimetry with variable corneal compensation (GDx VCC) to detect differences in retinal nerve fiber layer thickness between normal and glaucomatous eyes in a Taiwan Chinese population. Methods This study included 44 normal eyes and 107 glaucomatous eyes. The glaucomatous eyes were divided into three subgroups on the basis of their visual field defects (early, moderate, severe). Each subject underwent a GDx VCC exam and visual field testing. The area under the receiver-operating characteristic curve (AROC) of each relevant parameter was used to differentiate normal eyes from each glaucoma subgroup. The correlation between the visual field index and each parameter was evaluated for the eyes in the glaucoma group. Results For normal vs. early glaucoma, the parameter with the best AROC was the nerve fiber indicator (NFI) (0.942). For normal vs. moderate glaucoma, the parameter with the best AROC was the NFI (0.985). For normal vs. severe glaucoma, the parameter with the best AROC was the NFI (1.000). For early vs. moderate glaucoma, the parameter with the best AROC was the NFI (0.732). For moderate vs. severe glaucoma, the parameter with the best AROC was the temporal-superior-nasal-inferior-temporal average (0.652). For early vs. severe glaucoma, the parameter with the best AROC was the NFI (0.852). Conclusions GDx VCC-measured parameters may serve as a useful tool to distinguish normal from glaucomatous eyes; in particular, the NFI proved to be the best discriminating parameter.
NASA Astrophysics Data System (ADS)
Alexander, Troy A.; Pellegrino, Paul M.; Gillespie, James B.
2003-08-01
A novel methodology has been developed for the investigation of bacterial spores. Specifically, this method has been used to probe the spore coat composition of two different Bacillus stearothermophilus variants. This technique may be useful in many applications; most notably, development of novel detection schemes toward potentially harmful bacteria. This method would also be useful as an ancillary environmental monitoring system where sterility is of importance (i.e., food preparation areas as well as invasive and minimally invasive medical applications). This unique detection scheme is based on the near-infrared (NIR) Surface-Enhanced-Raman-Scattering (SERS) from single, optically trapped, bacterial spores. The SERS spectra of bacterial spores in aqueous media have been measured using SERS substrates based on ~60-nm diameter gold colloids bound to 3-Aminopropyltriethoxysilane derivatized glass. The light from a 787-nm laser diode was used to trap/manipulate as well as simultaneously excite the SERS of an individual bacterial spore. The collected SERS spectra were examined for uniqueness and the applicability of this technique for the strain discrimination of Bacillus stearothermophilus spores. Comparison of normal Raman and SERS spectra reveal not only an enhancement of the normal Raman spectral features but also the appearance of spectral features absent in the normal Raman spectrum.
Study on the abnormal data rejection and normal condition evaluation applied in wind turbine farm
NASA Astrophysics Data System (ADS)
Zhang, Ying; Qian, Zheng; Tian, Shuangshu
2016-01-01
Condition monitoring of wind turbines is an important issue that has attracted increasing attention with the rapid development of wind farms, and on-line data analysis is difficult because large volumes of measured data are collected. In this paper, abnormal data rejection and normal condition evaluation of wind turbines are addressed. First, since large amounts of abnormal data occur during the normal operation of a wind turbine, probably caused by faults, maintenance downtime, power-limited operation and wind speed sensor failure, a novel method is proposed to reject abnormal data in order to enable more accurate analysis of the wind turbine condition. The core principle of this method is to fit the wind power curve using the scatter diagram; data outside the area covered by the wind power curve are treated as abnormal. The calculation shows that the abnormal data are rejected effectively. After the rejection, the vibration signals of the wind turbine bearing, which is a critical component, are analyzed, and the relationship between the vibration characteristic value and the operating condition of the wind turbine is discussed. This will provide powerful support for accurate fault analysis of wind turbines.
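A minimal sketch of the scatter-diagram rejection idea: fit a reference power curve by binning wind speed and taking the per-bin median power, then reject samples outside a tolerance band around the curve. Bin width and tolerance are illustrative assumptions, not the paper's settings.

```python
# Reject wind-power samples that fall far from a binned reference curve.
import numpy as np

def reject_abnormal(wind_speed, power, bin_width=0.5, tol=0.25):
    bins = np.floor(wind_speed / bin_width).astype(int)
    keep = np.zeros(power.size, dtype=bool)
    for b in np.unique(bins):
        idx = bins == b
        ref = np.median(power[idx])          # reference curve in this bin
        band = tol * max(abs(ref), 1.0)      # relative tolerance band
        keep[idx] = np.abs(power[idx] - ref) <= band
    return keep                              # True = retained (normal) data
```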
On the classification of normally distributed neurons: an application to human dentate nucleus.
Ristanović, Dušan; Milošević, Nebojša T; Marić, Dušica L
2011-03-01
One of the major goals in cellular neurobiology is meaningful cell classification. However, cell classification involves many unresolved issues. Neuronal classification usually starts by grouping cells into classes according to their main morphological features. If one tries to test such a qualitative classification quantitatively, a considerable overlap between cell types often appears, and there is little published information on how to handle it. To address this shortcoming, we undertook the present study with the aim of offering a novel method for solving the class-overlap problem. To illustrate our method, we analyzed a sample of 124 neurons from the adult human dentate nucleus. Among them we qualitatively selected 55 neurons with small dendritic fields (the small neurons) and 69 asymmetrical neurons with large dendritic fields (the large neurons). We showed that these two samples are normally and independently distributed. By measuring the neuronal soma areas of both samples, we observed that the corresponding normal curves intersect. We proved that the abscissa of the point of intersection of the curves could represent the boundary between the two adjacent overlapping neuronal classes, since the error introduced by such a division is minimal. A statistical evaluation of the division was also performed.
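The proposed boundary is the abscissa where the two fitted normal densities intersect, obtained by equating the two Gaussian pdfs and solving the resulting quadratic. A minimal sketch with illustrative parameters (not the paper's measurements):

```python
# Intersection of two Gaussian densities: equate the log-pdfs of
# N(mu1, s1^2) and N(mu2, s2^2) and solve the quadratic a*x^2 + b*x + c = 0.
import numpy as np

def gaussian_intersection(mu1, s1, mu2, s2):
    a = 1 / (2 * s2**2) - 1 / (2 * s1**2)
    b = mu1 / s1**2 - mu2 / s2**2
    c = mu2**2 / (2 * s2**2) - mu1**2 / (2 * s1**2) + np.log(s2 / s1)
    return np.roots([a, b, c])

# Illustrative soma-area parameters (um^2):
roots = gaussian_intersection(mu1=200.0, s1=40.0, mu2=350.0, s2=70.0)
boundary = [r for r in roots if 200 < r < 350][0]
print(boundary)  # ~265: the class boundary between small and large neurons
```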
Nejand, Bahram Abdollahi; Gharibzadeh, Saba; Ahmadi, Vahid; Shahverdi, H. Reza
2016-01-01
We introduce a new approach for depositing a perovskite layer with no need to dissolve perovskite precursors. Deposition of a solution-free perovskite (SFP) layer is a key method for depositing perovskite on hole or electron transport layers that are strongly sensitive to perovskite precursors. Using SFP deposition in perovskite solar cells would extend the possibility of using many electron and hole transport materials in both normal and inverted architectures. In the present work, we synthesized crystalline perovskite powder and successfully deposited it on TiO2 and cuprous iodide as charge transport layers that are, respectively, non-sensitive and sensitive to PbI2 and CH3NH3I solution in DMF. A post-compression step enhanced the efficiency of the devices by increasing the interface area between the perovskite and the charge transport layers. Cell efficiencies of 9.07% and 7.71% were achieved for devices prepared with the SFP layer in the normal structure (using TiO2 as the deposition substrate) and the inverted structure (using CuI as the deposition substrate), respectively. This method can enable large-scale, low-cost fabrication of new-generation perovskite solar cells. PMID:27640991
Angioni, Stefano; Sanna, Stefania; Magnini, Roberta; Melis, Gian Benedetto; Fulghesu, Anna Maria
2011-07-01
To verify whether QUICKY is a suitable method for the identification of metabolic deterioration in normal-weight patients affected by polycystic ovarian syndrome (PCOS). Prospective clinical study. Seventy-nine normal-weight adolescent PCOS subjects and 50 eumenorrheic, normal-weight, non-hirsute controls matched for age and BMI. The quantitative insulin sensitivity check index (QUICKY) and the integrated secretory area under the curve of insulin values (I-AUC) during an oral glucose tolerance test were calculated. Normal insulin sensitivity was defined by the 95th percentile of controls: QUICKY values < 0.31 and I-AUC at 180 min < 16,645. When applying the calculated I-AUC cut-off, 41 PCOS subjects were classified as normoinsulinemic and 38 as hyperinsulinemic, whereas using the calculated QUICKY cut-off, only 19 PCOS subjects could be classified as insulin resistant (IR). Fifteen of the 60 non-IR PCOS subjects presented hyperinsulinemia; fasting glucose and insulin levels and QUICKY were not sufficient to identify these subjects. Thus, QUICKY displayed low sensitivity (44%) and specificity (91%) in the diagnosis of the metabolic disorder disclosed by I-AUC. Conclusions: In young normal-weight patients with PCOS, early alterations of insulin metabolism are not detectable by QUICKY.
Carvalho, Luis Felipe C. S.; Nogueira, Marcelo Saito; Neto, Lázaro P. M.; Bhattacharjee, Tanmoy T.; Martin, Airton A.
2017-01-01
Most oral injuries are diagnosed by histopathological analysis of a biopsy, which is an invasive procedure and does not give immediate results. On the other hand, Raman spectroscopy is a real-time and minimally invasive analytical tool with potential for the diagnosis of diseases. The potential for diagnostics can be improved by data post-processing. Hence, this study aims to evaluate the performance of preprocessing steps and multivariate analysis methods for the classification of normal tissue and pathological oral lesion spectra. A total of 80 spectra acquired from normal and abnormal tissues using optical fiber Raman-based spectroscopy (OFRS) were subjected to PCA preprocessing in the z-scored data set, and to the KNN (K-nearest neighbors), J48 (unpruned C4.5 decision tree), RBF (radial basis function), RF (random forest), and MLP (multilayer perceptron) classifiers in the WEKA software (Waikato Environment for Knowledge Analysis), after area normalization or maximum intensity normalization. Our results suggest the best classification was achieved by using maximum intensity normalization followed by MLP. Based on these results, software for automated analysis can be generated and validated using larger data sets. This would aid quick comprehension of spectroscopic data and easy diagnosis by medical practitioners in clinical settings. PMID:29188115
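A minimal sketch of the winning combination (maximum-intensity normalization followed by a multilayer perceptron) is shown below, using scikit-learn's MLPClassifier as a stand-in for WEKA's MLP and randomly generated spectra of the same sample count as the study; all shapes and hyperparameters are illustrative.

```python
# Max-intensity normalization per spectrum, then MLP classification.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def max_intensity_normalize(spectra):
    """Divide each spectrum by its own maximum intensity."""
    return spectra / spectra.max(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.random((80, 600))          # 80 spectra x 600 wavenumber bins (fake)
y = np.repeat([0, 1], 40)          # 0 = normal, 1 = lesion
Xn = max_intensity_normalize(X)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
print(cross_val_score(clf, Xn, y, cv=5).mean())  # chance-level on fake data
```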
Automatic building extraction from LiDAR data fusion of point and grid-based features
NASA Astrophysics Data System (ADS)
Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang
2017-08-01
This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the used features and account for neighborhood context. As grid feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retained DSM interpolation method is also proposed. The proposed method is validated on the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method obtains promising results at both the area level and the object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter. The proposed method has good potential for large LiDAR datasets.
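A minimal sketch of the normal-vector variance feature follows: estimate each point's surface normal from the PCA (via SVD) of its k nearest neighbors, then score the local variance of normal directions, which tends to be low on planar roofs and high in vegetation. The neighborhood size k is an assumed parameter, not the paper's value.

```python
# Per-point variance of locally estimated surface normals.
import numpy as np
from scipy.spatial import cKDTree

def normal_variance_feature(points, k=10):
    """points: (N, 3) array of LiDAR coordinates."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)              # k nearest neighbors
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbh = points[nbrs] - points[nbrs].mean(axis=0)
        # Normal = direction of smallest variance of the neighborhood.
        _, _, vt = np.linalg.svd(nbh, full_matrices=False)
        n = vt[-1]
        normals[i] = n if n[2] >= 0 else -n       # consistent orientation
    # Total variance of neighboring normals around each point.
    return np.array([normals[nbrs].var(axis=0).sum() for nbrs in idx])
```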
Experimental studies of braking of an elastic tired wheel under variable normal load
NASA Astrophysics Data System (ADS)
Fedotov, A. I.; Zedgenizov, V. G.; Ovchinnikova, N. I.
2017-10-01
Stabilization of vehicle wheel braking subjected to disturbances of normal load variations is a topical issue. The paper analyzes the braking of a vehicle wheel subjected to such disturbances. Test modes were developed as sinusoidal force disturbances of the normal wheel load, and measuring methods for digital and analogue signals were used. The paper suggests a method for analyzing wheel braking processes under disturbances of normal load variations, and a method to control wheel braking processes subjected to these disturbances was developed.
The impact on midlevel vision of statistically optimal divisive normalization in V1.
Coen-Cagli, Ruben; Schwartz, Odelia
2013-07-15
The first two areas of the primate visual cortex (V1, V2) provide a paradigmatic example of hierarchical computation in the brain. However, neither the functional properties of V2 nor the interactions between the two areas are well understood. One key aspect is that the statistics of the inputs received by V2 depend on the nonlinear response properties of V1. Here, we focused on divisive normalization, a canonical nonlinear computation that is observed in many neural areas and modalities. We simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities. The statistics of the V1 population responses differed markedly across models. We then addressed how V2 receptive fields pool the responses of V1 model units with different tuning. We assumed this is achieved by learning without supervision a linear representation that removes correlations, which could be accomplished with principal component analysis. This approach revealed V2-like feature selectivity when we used the optimal normalization and, to a lesser extent, the canonical one but not in the absence of both. We compared the resulting two-stage models on two perceptual tasks; while models encompassing V1 surround normalization performed better at object recognition, only statistically optimal normalization provided systematic advantages in a task more closely matched to midlevel vision, namely figure/ground judgment. Our results suggest that experiments probing midlevel areas might benefit from using stimuli designed to engage the computations that characterize V1 optimality.
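For orientation, the canonical (non-optimal) form of divisive normalization referred to above can be written in a few lines; the uniform pool weights and the semi-saturation constant sigma are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def divisive_normalization(drive, weights, sigma=0.1):
    """Canonical divisive normalization: each unit's squared filter drive is
    divided by sigma^2 plus a weighted sum of the pool's squared drives."""
    pool = weights @ (drive ** 2)
    return drive ** 2 / (sigma ** 2 + pool)

drive = np.abs(np.random.default_rng(1).normal(size=8))  # linear filter outputs
weights = np.full((8, 8), 1.0 / 8)                       # uniform pool (assumed)
print(divisive_normalization(drive, weights))
```

The statistically optimal variant studied in the paper additionally gates the surround pool by inferred image homogeneity, which this sketch omits.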
Dalal, Ankur; Moss, Randy H.; Stanley, R. Joe; Stoecker, William V.; Gupta, Kapil; Calcara, David A.; Xu, Jin; Shrestha, Bijaya; Drugge, Rhett; Malters, Joseph M.; Perry, Lindall A.
2011-01-01
Dermoscopy, also known as dermatoscopy or epiluminescence microscopy (ELM), permits visualization of features of pigmented melanocytic neoplasms that are not discernable by examination with the naked eye. White areas, prominent in early malignant melanoma and melanoma in situ, contribute to early detection of these lesions. An adaptive detection method has been investigated to identify white and hypopigmented areas based on lesion histogram statistics. Using the Euclidean distance transform, the lesion is segmented in concentric deciles. Overlays of the white areas on the lesion deciles are determined. Calculated features of automatically detected white areas include lesion decile ratios, normalized number of white areas, absolute and relative size of largest white area, relative size of all white areas, and white area eccentricity, dispersion, and irregularity. Using a back-propagation neural network, the white area statistics yield over 95% diagnostic accuracy of melanomas from benign nevi. White and hypopigmented areas in melanomas tend to be central or paracentral. The four most powerful features on multivariate analysis are lesion decile ratios. Automatic detection of white and hypopigmented areas in melanoma can be accomplished using lesion statistics. A neural network can achieve good discrimination of melanomas from benign nevi using these areas. Lesion decile ratios are useful white area features. PMID:21074971
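A small sketch of the decile construction via the Euclidean distance transform, assuming a hypothetical binary lesion mask (the band numbering convention is an assumption):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def lesion_deciles(mask):
    """Split a binary lesion mask into 10 concentric bands by normalized
    Euclidean distance from the lesion border inward."""
    inside = mask.astype(bool)
    dist = distance_transform_edt(inside)
    deciles = np.zeros(mask.shape, dtype=int)
    if dist.max() > 0:
        deciles[inside] = np.minimum(
            (dist[inside] / dist.max() * 10).astype(int) + 1, 10)
    return deciles  # 0 outside lesion, 1 = outermost band, 10 = innermost
```

White-area overlays can then be tallied per band to form the decile-ratio features.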
Relocating alcohol advertising research: examining socially mediated relationships with alcohol.
Cherrington, Jane; Chamberlain, Kerry; Grixti, Joe
2006-03-01
This article reviews, critiques and politicises the positivist approaches that presently dominate alcohol advertising health research, and considers the benefits of a culturalist alternative. Positivist research in this area is identified as: (1) atheoretical and methods-driven; (2) restricted in focus, leaving critical issues unconsidered; and (3) inappropriately conceptualizing the 'normal' drinking person as rational and safe. The culturalist alternative proposed is argued to present a more adequate framework, which can include and address problematic issues that are presently excluded, including: the pleasures associated with alcohol use, the involvements of 'normal' people in problem drinking, the inadequacy of present risk categories and the complexities of wider mediatory processes about alcohol in society. We argue for the adoption of more informed, culturalist approaches to alcohol advertising research.
Quaternion normalization in spacecraft attitude determination
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Bar-Itzhack, Itzhack; Galal, Ken
1992-01-01
Methods are presented to normalize the attitude quaternion in two extended Kalman filters (EKF), namely, the multiplicative EKF (MEKF) and the additive EKF (AEKF). It is concluded that all the normalization methods work well and yield comparable results. In the AEKF, normalization is not essential, since the data chosen for the test do not have a rapidly varying attitude. In the MEKF, normalization is necessary to avoid divergence of the attitude estimate. All of the methods behave similarly when the spacecraft experiences low angular rates.
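Brute-force normalization, the simplest of the schemes compared, is a one-liner; the example quaternion is hypothetical:

```python
import numpy as np

def normalize_quaternion(q, eps=1e-12):
    """Rescale the estimated attitude quaternion to unit norm after an
    EKF update (brute-force normalization)."""
    n = np.linalg.norm(q)
    return q / n if n > eps else np.array([0.0, 0.0, 0.0, 1.0])

q_est = np.array([0.1, -0.2, 0.3, 0.9])   # hypothetical filter estimate
print(normalize_quaternion(q_est), np.linalg.norm(normalize_quaternion(q_est)))
```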
Mao, Yingming; Sang, Shuxun; Liu, Shiqi; Jia, Jinlong
2014-05-01
The spatial variation of soil pH and soil organic matter (SOM) in the urban area of Xuzhou, China, was investigated in this study. Conventional statistics, geostatistics, and a geographical information system (GIS) were used to produce spatial distribution maps and to provide information about land use types. A total of 172 soil samples were collected based on a grid method in the study area. Soil pH ranged from 6.47 to 8.48, with an average of 7.62. SOM content was very variable, ranging from 3.51 g/kg to 17.12 g/kg, with an average of 8.26 g/kg. Soil pH followed a normal distribution, while SOM followed a log-normal distribution. The results of semi-variograms indicated that soil pH and SOM had strong (21%) and moderate (44%) spatial dependence, respectively. The variogram model was spherical for soil pH and exponential for SOM. The spatial distribution maps were achieved using kriging interpolation. High pH and high SOM tended to occur in the mixed forest land cover areas such as those in the southwestern part of the urban area, while low values were found in the eastern and northern parts, probably due to the effect of industrial and human activities. In the central urban area, the soil pH was low, but the SOM content was high, which is mainly attributed to the disturbance of regional resident activities and urban transportation. Furthermore, anthropogenic organic particles are possible sources of organic matter after entering the soil ecosystem in urban areas. These maps provide useful information for urban planning and environmental management. Copyright © 2014 Académie des sciences. Published by Elsevier SAS. All rights reserved.
Fractal-Based Image Analysis In Radiological Applications
NASA Astrophysics Data System (ADS)
Dellepiane, S.; Serpico, S. B.; Vernazza, G.; Viviani, R.
1987-10-01
We present some preliminary results of a study aimed to assess the actual effectiveness of fractal theory and to define its limitations in the area of medical image analysis for texture description, in particular, in radiological applications. A general analysis to select appropriate parameters (mask size, tolerance on fractal dimension estimation, etc.) has been performed on synthetically generated images of known fractal dimensions. Moreover, we analyzed some radiological images of human organs in which pathological areas can be observed. Input images were subdivided into blocks of 6x6 pixels; then, for each block, the fractal dimension was computed in order to create fractal images whose intensity was related to the D value, i.e., texture behaviour. Results revealed that the fractal images could point out the differences between normal and pathological tissues. By applying histogram-splitting segmentation to the fractal images, pathological areas were isolated. Two different techniques (i.e., the method developed by Pentland and the "blanket" method) were employed to obtain fractal dimension values, and the results were compared; in both cases, the appropriateness of the fractal description of the original images was verified.
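The paper used Pentland's method and the blanket method; as a related, standard illustration, a box-counting estimate of fractal dimension for a binary image looks like this (the box sizes are assumptions):

```python
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16)):
    """Estimate fractal dimension of a binary image as the slope of
    log N(s) versus log(1/s), where N(s) counts occupied s-by-s boxes."""
    counts = []
    for s in sizes:
        h, w = binary.shape
        trimmed = binary[:h - h % s, :w - w % s]     # crop to a multiple of s
        boxes = trimmed.reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())  # occupied boxes at scale s
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

Applied blockwise, such estimates yield a "fractal image" whose intensity encodes local texture, as in the study.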
Rockfall hazard and risk assessments along roads at a regional scale: example in Swiss Alps
NASA Astrophysics Data System (ADS)
Michoud, C.; Derron, M.-H.; Horton, P.; Jaboyedoff, M.; Baillifard, F.-J.; Loye, A.; Nicolet, P.; Pedrazzini, A.; Queyrel, A.
2012-03-01
Unlike fragmental rockfall runout assessments, there are only a few robust methods to quantify rock-mass-failure susceptibilities at regional scale. A detailed slope angle analysis of recent Digital Elevation Models (DEM) can be used to detect potential rockfall source areas, thanks to the Slope Angle Distribution procedure. However, this method does not provide any information on block-release frequencies inside the identified areas. The present paper adds to the Slope Angle Distribution of the cliffs unit its normalized cumulative distribution function. This improvement amounts to a quantitative weighting of slope angles, introducing rock-mass-failure susceptibilities inside the rockfall source areas previously detected. Rockfall runout assessment is then performed using the GIS- and process-based software Flow-R, providing relative frequencies for runout. Thus, taking both susceptibility results into consideration, this approach can be used to establish, after calibration, hazard and risk maps at regional scale. As an example, a risk analysis of vehicle traffic exposed to rockfalls is performed along the main roads of the Swiss alpine valley of Bagnes.
A risk evaluation model and its application in online retailing trustfulness
NASA Astrophysics Data System (ADS)
Ye, Ruyi; Xu, Yingcheng
2017-08-01
Building a general model for risk evaluation in advance can improve the convenience, normality and comparability of the results of repeated risk evaluations when those evaluations concern the same area and a similar purpose. One of the most convenient and common forms of risk evaluation model is an index system consisting of several indices, their weights and a crediting method. This article proposes a method for building a risk evaluation index system that guarantees a proportional relationship between the resulting credit and the expected risk loss, and provides an application example in online retailing.
Tissue Viscoelasticity Imaging Using Vibration and Ultrasound Coupler Gel
NASA Astrophysics Data System (ADS)
Yamakawa, Makoto; Shiina, Tsuyoshi
2012-07-01
In tissue diagnosis, both elasticity and viscosity are important indexes. Therefore, we propose a method for evaluating tissue viscoelasticity by applying vibration that is usually performed in elastography and using an ultrasound coupler gel with known viscoelasticity. In this method, we use three viscoelasticity parameters based on the coupler strain and tissue strain: the strain ratio as an elasticity parameter, and the phase difference and the normalized hysteresis loop area as viscosity parameters. In the agar phantom experiment, using these viscoelasticity parameters, we were able to estimate the viscoelasticity distribution of the phantom. In particular, the strain ratio and the phase difference were robust to strain estimation error.
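A sketch of the three viscoelasticity parameters from synthetic strain signals; the phase is read off the dominant Fourier bin and the loop area uses the shoelace formula (both estimator choices are assumptions, and the signals are synthetic):

```python
import numpy as np

t = np.linspace(0, 1, 500, endpoint=False)
coupler = np.sin(2 * np.pi * 2 * t)              # reference coupler strain, 2 Hz
tissue = 0.8 * np.sin(2 * np.pi * 2 * t - 0.3)   # lags if the tissue is viscous

strain_ratio = np.ptp(tissue) / np.ptp(coupler)  # elasticity parameter

# Phase difference at the driving frequency (dominant rfft bin).
X, Y = np.fft.rfft(coupler), np.fft.rfft(tissue)
k = np.argmax(np.abs(X[1:])) + 1
phase_diff = np.angle(Y[k]) - np.angle(X[k])

# Hysteresis loop area (shoelace formula); two full cycles in the window.
loop = 0.5 * abs(np.dot(coupler, np.roll(tissue, -1))
                 - np.dot(tissue, np.roll(coupler, -1))) / 2
print(strain_ratio, phase_diff, loop)
```

Normalizing the loop area (for example, by the product of the strain amplitudes) would yield the dimensionless viscosity parameter the abstract mentions.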
A Robust Registration Algorithm for Point Clouds from UAV Images for Change Detection
NASA Astrophysics Data System (ADS)
Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A.
2016-06-01
Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images in two epochs over the period of one year (i.e., May 2014 and May 2015). Note that due to the coarse accuracy of the on-board GPS receiver (e.g., +/- 5-10 m) the geo-tagged positions of the images were only used as initial values in the bundle block adjustment. Normal distances, signifying detected changes, varying from 20 cm to 4 m were identified between the two epochs. The accuracy of the co-registered surfaces was estimated by comparing non-active patches within the monitored area of interest. Since these non-active sub-areas are stationary, the computed normal distances should theoretically be close to zero. The quality control of the registration results showed that the average normal distance was approximately 4 cm, which is within the noise level of the reconstructed surfaces.
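Once the epochs are co-registered, the normal-distance computation itself is compact; this sketch assumes unit surface normals are already estimated for epoch A and approximates correspondence by the nearest neighbour:

```python
import numpy as np
from scipy.spatial import cKDTree

def normal_distances(cloud_a, normals_a, cloud_b):
    """Per-point change between two co-registered epochs: displacement to the
    nearest epoch-B point projected onto epoch-A's local unit normal."""
    _, idx = cKDTree(cloud_b).query(cloud_a)
    delta = cloud_b[idx] - cloud_a
    return np.abs(np.einsum('ij,ij->i', delta, normals_a))
```

Stationary patches should return values near the surface noise level (about 4 cm in this study), which is exactly the quality check the authors applied.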
NASA Astrophysics Data System (ADS)
Agoes Nugroho, Indra; Kurniawahidayati, Beta; Syahputra Mulyana, Reza; Saepuloh, Asep
2017-12-01
Remote sensing is one of the methods for geothermal exploration. It can be used to map geological structures and manifestations and to predict geothermal potential areas, and its results serve as guidance for the next exploration step. Remote sensing analysis is an efficient way to delineate geothermal surface manifestations without direct contact with the object. The study took place in Merangin District, Jambi Province, Indonesia. The area was selected because of the Merangin volcanic complex, composed of Mounts Sumbing and Hulunilo, with surface geothermal manifestations in the form of hot springs and hot pools. The locations of these surface manifestations could be related to local and regional structures of the Great Sumatra Fault. The methods used in this study included identification of volcanic products, lineament extraction, and lineament density quantification. The objective of this study is to delineate potential zones for siting the geothermal working area based on Thermal Infrared and Synthetic Aperture Radar (SAR) sensors. Lineaments related to geological structures, targeted through high lineament density, were extracted from ALOS-PALSAR (Advanced Land Observing Satellite - Phased Array type L-band Synthetic Aperture Radar) level 1.1 data. Normalized Difference Vegetation Index (NDVI) analysis was used to assess vegetation condition using Landsat 8 OLI-TIRS (Operational Land Imager - Thermal Infrared Sensor), and brightness temperature was extracted from the TIR band to estimate surface temperature. The geothermal working area, identified with an index overlay of the parameters extracted from the remote sensing data, was located in the western part of the study area (Graho Nyabu area). This location was identified because of its high surface temperature (about 30°C), high lineament density (about 4-4.5 km/km2), and low NDVI values (less than 0.3).
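The NDVI step is standard and easy to reproduce; for Landsat 8 OLI, band 5 is NIR and band 4 is red (the epsilon guard is an implementation assumption):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from Landsat 8 OLI bands
    (band 5 = NIR, band 4 = red)."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

# In this study, pixels with NDVI < 0.3 marked sparse vegetation, one of the
# three criteria combined in the index overlay.
```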
Classification of optical coherence tomography images for diagnosing different ocular diseases
NASA Astrophysics Data System (ADS)
Gholami, Peyman; Sheikh Hassani, Mohsen; Kuppuswamy Parthasarathy, Mohana; Zelek, John S.; Lakshminarayanan, Vasudevan
2018-03-01
Optical Coherence tomography (OCT) images provide several indicators, e.g., the shape and the thickness of different retinal layers, which can be used for various clinical and non-clinical purposes. We propose an automated classification method to identify different ocular diseases, based on the local binary pattern features. The database consists of normal and diseased human eye SD-OCT images. We use a multiphase approach for building our classifier, including preprocessing, Meta learning, and active learning. Pre-processing is applied to the data to handle missing features from images and replace them with the mean or median of the corresponding feature. All the features are run through a Correlation-based Feature Subset Selection algorithm to detect the most informative features and omit the less informative ones. A Meta learning approach is applied to the data, in which a SVM and random forest are combined to obtain a more robust classifier. Active learning is also applied to strengthen our classifier around the decision boundary. The primary experimental results indicate that our method is able to differentiate between the normal and non-normal retina with an area under the ROC curve (AUC) of 98.6% and also to diagnose the three common retina-related diseases, i.e., Age-related Macular Degeneration, Diabetic Retinopathy, and Macular Hole, with an AUC of 100%, 95% and 83.8% respectively. These results indicate a better performance of the proposed method compared to most of the previous works in the literature.
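A sketch of the local binary pattern feature extraction step, using scikit-image; the uniform-LBP variant and the (P, R) choice are assumptions, since the abstract does not state them:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(bscan, P=8, R=1):
    """Uniform LBP histogram as a texture descriptor for a grayscale OCT
    B-scan; uniform codes take values 0..P+1, hence P+2 bins."""
    codes = local_binary_pattern(bscan, P, R, method='uniform')
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist
```

These histograms would then pass through feature selection and the SVM/random-forest meta learner.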
Miller, Donald L.; Kwon, Deukwoo; Bonavia, Grant H.
2009-01-01
Purpose: To propose initial values for patient reference levels for fluoroscopically guided procedures in the United States. Materials and Methods: This secondary analysis of data from the Radiation Doses in Interventional Radiology Procedures (RAD-IR) study was conducted under a protocol approved by the institutional review board and was HIPAA compliant. Dose distributions (percentiles) were calculated for each type of procedure in the RAD-IR study where there were data from at least 30 cases. Confidence intervals for the dose distributions were determined by using bootstrap resampling. Weight banding and size correction methods for normalizing dose to patient body habitus were tested. Results: The different methods for normalizing patient radiation dose according to patient weight gave results that were not significantly different (P > .05). The 75th percentile patient radiation doses normalized with weight banding were not significantly different from those that were uncorrected for body habitus. Proposed initial reference levels for various interventional procedures are provided for reference air kerma, kerma-area product, fluoroscopy time, and number of images. Conclusion: Sufficient data exist to permit an initial proposal of values for reference levels for interventional radiologic procedures in the United States. For ease of use, reference levels without correction for body habitus are recommended. A national registry of radiation-dose data for interventional radiologic procedures is a necessary next step to refine these reference levels. © RSNA, 2009 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.2533090354/-/DC1 PMID:19789226
Determining density of maize canopy. 3: Temporal considerations
NASA Technical Reports Server (NTRS)
Stoner, E. R.; Baumgardner, M. F.; Anuta, P. E.; Cipra, J. E.
1972-01-01
Multispectral scanner data were collected in two flights over ground cover plots at an altitude of 305 m. Eight ground reflectance panels in close proximity to the ground cover plots were used to normalize the scanner data obtained on different dates. Separate prediction equations were obtained for both flight dates for all eleven reflective wavelength bands of the multispectral scanner. Ratios of normalized scanner data were related to leaf area index over time. Normalized scanner data were used to plot relative reflectance versus wavelength for the ground cover plots. Spectral response curves were similar to those for bare soil and green vegetation as determined by laboratory measurements. The spectral response curves from the normalized scanner data indicated that reflectance in the 0.72 to 1.3 micron wavelength range increased as leaf area index increased. A decrease in reflectance was observed in the 0.65 micron chlorophyll absorption band as leaf area index increased.
Effect of Relaxin Expressing Adenovirus on Scar Remodeling: A Preliminary Study
Jung, Bok Ki; Lee, Won Jai; Kang, Eunhye; Ahn, Hyo Min; Kim, Yong Oock; Rah, Dong Kyun; Yun, Chae-Ok
2017-01-01
Background: Relaxin is a transforming growth factor β1 antagonist. To determine the effects of relaxin on scar reduction, we investigated the scar remodeling process by injecting relaxin-expressing adenoviruses using a pig scar model. Methods: Scars with full thickness were generated on the backs of Yorkshire pigs. Scars were divided into two groups (relaxin [RLX] and Control). Adenoviruses were injected into the RLX (expressing relaxin) and Control (not expressing relaxin) groups. Changes in the surface areas, color index and pliability of scars were compared. Results: Fifty days after treatment, the surface areas of scars decreased, the color of scars was normalized, and the pliability of scars increased in the RLX group. Conclusion: Relaxin-expressing adenoviruses improved the surface area, color, and pliability of scars. The mechanism of therapeutic effects on scar formation should be further investigated. PMID:28913296
NASA Technical Reports Server (NTRS)
Neigh, Christopher S. R.; Bolton, Douglas K.; Williams, Jennifer J.; Diabate, Mouhamad
2014-01-01
Forests are the largest aboveground sink for atmospheric carbon (C), and understanding how they change through time is critical to reduce our C-cycle uncertainties. We investigated a strong decline in Normalized Difference Vegetation Index (NDVI) from 1982 to 1991 in Pacific Northwest forests, observed with the National Ocean and Atmospheric Administration's (NOAA) series of Advanced Very High Resolution Radiometers (AVHRRs). To understand the causal factors of this decline, we evaluated an automated classification method developed for Landsat time series stacks (LTSS) to map forest change. This method included: (1) multiple disturbance index thresholds; and (2) a spectral trajectory-based image analysis with multiple confidence thresholds. We produced 48 maps and verified their accuracy with air photos, monitoring trends in burn severity data and insect aerial detection survey data. Area-based accuracy estimates for change in forest cover resulted in producer's and user's accuracies of 0.21 +/- 0.06 to 0.38 +/- 0.05 for insect disturbance, 0.23 +/- 0.07 to 1 +/- 0 for burned area and 0.74 +/- 0.03 to 0.76 +/- 0.03 for logging. We believe that accuracy was low for insect disturbance because air photo reference data were temporally sparse, hence missing some outbreaks, and the annual anniversary time step is not dense enough to track defoliation and progressive stand mortality. Producer's and user's accuracy for burned area was low due to the temporally abrupt nature of fire and harvest with a similar response of spectral indices between the disturbance index and normalized burn ratio. We conclude that the spectral trajectory approach also captures multi-year stress that could be caused by climate, acid deposition, pathogens, partial harvest, thinning, etc. Our study focused on understanding the transferability of previously successful methods to new ecosystems and found that this automated method does not perform with the same accuracy in Pacific Northwest forests. Using a robust accuracy assessment, we demonstrate the difficulty of transferring change attribution methods to other ecosystems, which has implications for the development of automated detection/attribution approaches. Widespread disturbance was found within AVHRR-negative anomalies, but identifying causal factors in LTSS with adequate mapping accuracy for fire and insects proved to be elusive. Our results provide a background framework for future studies to improve methods for the accuracy assessment of automated LTSS classifications.
Wolever, Thomas M S
2004-02-01
To evaluate the suitability for glycaemic index (GI) calculations of using blood sampling schedules and methods of calculating area under the curve (AUC) different from those recommended, the GI values of five foods were determined by recommended methods (capillary blood glucose measured seven times over 2.0 h) in forty-seven normal subjects and different calculations performed on the same data set. The AUC was calculated in four ways: incremental AUC (iAUC; recommended method), iAUC above the minimum blood glucose value (AUCmin), net AUC (netAUC) and iAUC including area only before the glycaemic response curve cuts the baseline (AUCcut). In addition, iAUC was calculated using four different sets of less than seven blood samples. GI values were derived using each AUC calculation. The mean GI values of the foods varied significantly according to the method of calculating GI. The standard deviation of GI values calculated using iAUC (20.4) was lower than six of the seven other methods, and significantly less (P<0.05) than that using netAUC (24.0). To be a valid index of food glycaemic response independent of subject characteristics, GI values in subjects should not be related to their AUC after oral glucose. However, calculating GI using AUCmin or less than seven blood samples resulted in significant (P<0.05) relationships between GI and mean AUC. It is concluded that, in subjects without diabetes, the recommended blood sampling schedule and method of AUC calculation yields more valid and/or more precise GI values than the seven other methods tested here. The only method whose results agreed reasonably well with the recommended method (i.e., within +/- 5%) was AUCcut.
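A simplified version of the recommended iAUC calculation (trapezoidal area above the fasting baseline, ignoring area below it); Wolever's exact geometric treatment of segments that cross the baseline is more detailed, so this is an approximation, and the data are hypothetical:

```python
import numpy as np

def incremental_auc(t_min, glucose):
    """Simplified incremental AUC: trapezoidal area of the glucose curve
    above the fasting (t = 0) value, clipping excursions below baseline."""
    above = np.clip(np.asarray(glucose, float) - glucose[0], 0.0, None)
    return np.trapz(above, t_min)

t = [0, 15, 30, 45, 60, 90, 120]                # 7 samples over 2.0 h (minutes)
glucose = [4.5, 6.0, 7.2, 6.8, 6.0, 5.0, 4.3]   # mmol/L, hypothetical subject
print(incremental_auc(t, glucose))
```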
Li, Bo; Tang, Jing; Yang, Qingxia; Cui, Xuejiao; Li, Shuang; Chen, Sijie; Cao, Quanxing; Xue, Weiwei; Chen, Na; Zhu, Feng
2016-12-13
In untargeted metabolomics analysis, several factors (e.g., unwanted experimental & biological variations and technical errors) may hamper the identification of differential metabolic features, which requires the data-driven normalization approaches before feature selection. So far, ≥16 normalization methods have been widely applied for processing the LC/MS based metabolomics data. However, the performance and the sample size dependence of those methods have not yet been exhaustively compared and no online tool for comparatively and comprehensively evaluating the performance of all 16 normalization methods has been provided. In this study, a comprehensive comparison on these methods was conducted. As a result, 16 methods were categorized into three groups based on their normalization performances across various sample sizes. The VSN, the Log Transformation and the PQN were identified as methods of the best normalization performance, while the Contrast consistently underperformed across all sub-datasets of different benchmark data. Moreover, an interactive web tool comprehensively evaluating the performance of 16 methods specifically for normalizing LC/MS based metabolomics data was constructed and hosted at http://server.idrb.cqu.edu.cn/MetaPre/. In summary, this study could serve as a useful guidance to the selection of suitable normalization methods in analyzing the LC/MS based metabolomics data. PMID:27958387
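Of the top performers named above, PQN is the simplest to sketch; this implementation assumes strictly positive intensities and uses the median spectrum as reference:

```python
import numpy as np

def pqn(X):
    """Probabilistic Quotient Normalization: divide each sample (row) by the
    median ratio of its features to a reference (median) spectrum."""
    X = np.asarray(X, float)
    reference = np.median(X, axis=0)    # assumes positive intensities
    factors = np.median(X / reference, axis=1, keepdims=True)
    return X / factors
```

VSN and the log transformation, the other two top methods, require variance-stabilizing fits that are better taken from a dedicated package.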
Fire frequency, area burned, and severity: A quantitative approach to defining a normal fire year
Lutz, J.A.; Key, C.H.; Kolden, C.A.; Kane, J.T.; van Wagtendonk, J.W.
2011-01-01
Fire frequency, area burned, and fire severity are important attributes of a fire regime, but few studies have quantified the interrelationships among them in evaluating a fire year. Although area burned is often used to summarize a fire season, burned area may not be well correlated with either the number or ecological effect of fires. Using the Landsat data archive, we examined all 148 wildland fires (prescribed fires and wildfires) >40 ha from 1984 through 2009 for the portion of the Sierra Nevada centered on Yosemite National Park, California, USA. We calculated mean fire frequency and mean annual area burned from a combination of field- and satellite-derived data. We used the continuous probability distribution of the differenced Normalized Burn Ratio (dNBR) values to describe fire severity. For fires >40 ha, fire frequency, annual area burned, and cumulative severity were consistent in only 13 of 26 years (50%), but all pair-wise comparisons among these fire regime attributes were significant. Borrowing from long-established practice in climate science, we defined "fire normals" to be the 26 year means of fire frequency, annual area burned, and the area under the cumulative probability distribution of dNBR. Fire severity normals were significantly lower when they were aggregated by year compared to aggregation by area. Cumulative severity distributions for each year were best modeled with Weibull functions (all 26 years, r2 ≥ 0.99; P < 0.001). Explicit modeling of the cumulative severity distributions may allow more comprehensive modeling of climate-severity and area-severity relationships. Together, the three metrics of number of fires, size of fires, and severity of fires provide land managers with a more comprehensive summary of a given fire year than any single metric.
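Fitting a two-parameter Weibull to a year's dNBR severity values, as done for each of the 26 years, is direct with SciPy; the data below are synthetic stand-ins:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
dnbr = rng.weibull(1.5, size=5000) * 300.0   # synthetic dNBR-like severity values

# Two-parameter Weibull fit (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(dnbr, floc=0)
print(shape, scale)
```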
New Mathematical Model for the Surface Area of the Left Ventricle by the Truncated Prolate Spheroid
Vale, Marcos de Paula; Martinez, Carlos Barreira
2017-01-01
The main aim of this study was to apply the formula for the surface area of a truncated prolate spheroid (TPS) in Cartesian coordinates to obtain a cardiac parameter that is little discussed in the literature: the left ventricle (LV) surface area of the human heart, by age and sex. First, we obtain a formula for the area of a TPS. Then a simple mathematical model associating the axis measures of a TPS with the axes of the LV is built. Finally, real values of the average dimensions of the human LV are used to compute approximate surface areas of this heart chamber. As a result, the average surface area of the LV for normal patients is obtained, and it is observed that the percentage differences in areas between men and women and between consecutive age groups are constant. A strong linear correlation between the obtained areas and the ventricular volumes normalized by body area was observed. The results indicate that the surface area of the LV, besides enabling greater knowledge of the geometrical characteristics of the human LV, may be used as one of the cardiac normality verification criteria and be useful for medical and biological applications. PMID:28547001
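The paper derives a closed-form TPS area; as a hedged numerical check, the same quantity follows from surface-of-revolution quadrature (the semi-axes and truncation plane below are hypothetical LV-like values, not the paper's):

```python
import numpy as np
from scipy.integrate import quad

def tps_area(a, c, z0):
    """Lateral area of a prolate spheroid (semi-axes a < c, long axis z)
    truncated at z = z0, via surface-of-revolution quadrature:
    S = integral of 2*pi*r*sqrt(1 + r'^2) dz with r(z) = a*sqrt(1 - z^2/c^2)."""
    f = lambda z: 2 * np.pi * a * np.sqrt(1 - z**2 / c**2 + (a * z / c**2) ** 2)
    area, _ = quad(f, z0, c)
    return area

# Hypothetical dimensions in cm: short semi-axis 2.5, long semi-axis 4.5,
# truncated 1.5 cm below the equator to mimic the basal plane.
print(tps_area(2.5, 4.5, -1.5))
```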
Fabrication method for cores of structural sandwich materials including star shaped core cells
Christensen, R.M.
1997-07-15
A method for fabricating structural sandwich materials having a core pattern which utilizes star and non-star shaped cells is disclosed. The sheets of material are bonded together or a single folded sheet is used, and bonded or welded at specific locations, into a flat configuration, and are then mechanically pulled or expanded normal to the plane of the sheets which expand to form the cells. This method can be utilized to fabricate other geometric cell arrangements than the star/non-star shaped cells. Four sheets of material (either a pair of bonded sheets or a single folded sheet) are bonded so as to define an area therebetween, which forms the star shaped cell when expanded. 3 figs.
A robust two-way semi-linear model for normalization of cDNA microarray data
Wang, Deli; Huang, Jian; Xie, Hehuang; Manzella, Liliana; Soares, Marcelo Bento
2005-01-01
Background: Normalization is a basic step in microarray data analysis. A proper normalization procedure ensures that the intensity ratios provide meaningful measures of relative expression values. Methods: We propose a robust semiparametric method in a two-way semi-linear model (TW-SLM) for normalization of cDNA microarray data. This method does not make the usual assumptions underlying some of the existing methods. For example, it does not assume that: (i) the percentage of differentially expressed genes is small; or (ii) the numbers of up- and down-regulated genes are about the same, as required in the LOWESS normalization method. We conduct simulation studies to evaluate the proposed method and use a real data set from a specially designed microarray experiment to compare the performance of the proposed method with that of the LOWESS normalization approach. Results: The simulation results show that the proposed method performs better than the LOWESS normalization method in terms of mean square errors for estimated gene effects. The results of analysis of the real data set also show that the proposed method yields more consistent results between the direct and the indirect comparisons and also can detect more differentially expressed genes than the LOWESS method. Conclusions: Our simulation studies and the real data example indicate that the proposed robust TW-SLM method works at least as well as the LOWESS method and works better when the underlying assumptions for the LOWESS method are not satisfied. Therefore, it is a powerful alternative to the existing normalization methods. PMID:15663789
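For contrast with the proposed TW-SLM, the LOWESS baseline it is compared against fits the log-ratio M as a smooth function of mean log-intensity A and subtracts the fit; this sketch uses statsmodels, and the smoothing fraction is an assumption:

```python
import numpy as np
import statsmodels.api as sm

def lowess_normalize(red, green, frac=0.3):
    """Intensity-dependent (LOWESS) normalization of two-channel arrays:
    fit M = log2(R/G) against A = mean log intensity, return residual M."""
    M = np.log2(red) - np.log2(green)    # assumes positive intensities
    A = 0.5 * (np.log2(red) + np.log2(green))
    fit = sm.nonparametric.lowess(M, A, frac=frac, return_sorted=False)
    return M - fit
```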
NASA Astrophysics Data System (ADS)
Akdemir, Bayram; Güneş, Salih; Yosunkaya, Şebnem
Sleep disorders are a very common but often unrecognized illness among the public. Obstructive Sleep Apnea Syndrome (OSAS) is characterized by a decreased oxygen saturation level and repetitive upper respiratory tract obstruction episodes during full night sleep. In the present study, we propose a novel data normalization method called the Line Based Normalization Method (LBNM) to evaluate OSAS, using a real data set obtained from a polysomnography device as a diagnostic tool in patients clinically suspected of suffering from OSAS. Here, we combine LBNM with classification methods comprising the C4.5 decision tree classifier and an Artificial Neural Network (ANN) to diagnose OSAS. Firstly, each clinical feature in the OSAS dataset is scaled by the LBNM into the range [0,1]. Secondly, the normalized OSAS dataset is classified using different classifier algorithms, including the C4.5 decision tree classifier and ANN, respectively. The proposed normalization method was compared with the min-max normalization, z-score normalization, and decimal scaling methods existing in the literature on the diagnosis of OSAS. LBNM produced very promising results in the assessment of OSAS. This method could also be applied to other biomedical datasets.
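The paper's own LBNM mapping is not reproduced here; for reference, the two standard comparators it was benchmarked against look like this:

```python
import numpy as np

def min_max(x):
    """Min-max normalization into [0, 1], the same target range LBNM uses."""
    x = np.asarray(x, float)
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):
    """Z-score normalization: zero mean, unit standard deviation."""
    x = np.asarray(x, float)
    return (x - x.mean()) / x.std()
```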
40 CFR 152.6 - Substances excluded from regulation by FIFRA.
Code of Federal Regulations, 2011 CFR
2011-07-01
... introduced directly into the human body, either into or in contact with the bloodstream or normally sterile areas of the body. A semi-critical device is any device which contacts intact mucous membranes but which does not ordinarily penetrate the blood barrier or otherwise enter normally sterile areas of the body...
40 CFR 152.6 - Substances excluded from regulation by FIFRA.
Code of Federal Regulations, 2010 CFR
2010-07-01
... introduced directly into the human body, either into or in contact with the bloodstream or normally sterile areas of the body. A semi-critical device is any device which contacts intact mucous membranes but which does not ordinarily penetrate the blood barrier or otherwise enter normally sterile areas of the body...
NASA Astrophysics Data System (ADS)
Zhu, Keyong; Huang, Yong; Pruvost, Jeremy; Legrand, Jack; Pilon, Laurent
2017-06-01
This study aims to quantify systematically the effect of non-absorbing cap-shaped droplets condensed on the backside of transparent windows on their directional-hemispherical transmittance and reflectance. Condensed water droplets have been blamed for reducing light transfer through windows in greenhouses, solar desalination plants, and photobioreactors. Here, the directional-hemispherical transmittance was predicted by the Monte Carlo ray-tracing method. For the first time, both monodisperse and polydisperse droplets were considered, with contact angle between 0 and 180°, arranged either in an ordered hexagonal pattern or randomly distributed on the window backside with projected surface area coverage between 0 and 90%. The directional-hemispherical transmittance was found to be independent of the size and spatial distributions of the droplets. Instead, it depended on (i) the incident angle, (ii) the optical properties of the window and droplets, (iii) the droplet contact angle and (iv) the projected surface area coverage. In fact, the directional-hemispherical transmittance decreased with increasing incident angle. Four optical regimes were identified in the normal-hemispherical transmittance. It was nearly constant for droplet contact angles either smaller than the critical angle θcr (predicted by Snell's law) for total internal reflection at the droplet/air interface or larger than 180°-θcr. However, between these critical contact angles, the normal-hemispherical transmittance decreased rapidly to reach a minimum at 90° and increased rapidly with increasing contact angles up to 180°-θcr. This was attributed to total internal reflection at the droplet/air interface, which led to increasing reflectance. In addition, the normal-hemispherical transmittance increased slightly with increasing projected surface area coverage when the contact angle was smaller than θcr, but it decreased monotonically with increasing droplet projected surface area coverage for contact angles larger than θcr. These results can be used to select the material or surface coating with advantageous surface properties for applications where dropwise condensation may otherwise have a negative effect on light transmittance.
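The two critical contact angles bracketing the low-transmittance regime follow directly from Snell's law at the droplet/air interface; assuming water droplets with refractive index 1.33:

```python
import math

n_droplet = 1.33   # water, assumed

# Critical angle for total internal reflection at the droplet/air interface.
theta_cr = math.degrees(math.asin(1.0 / n_droplet))
print(theta_cr, 180.0 - theta_cr)   # about 48.8 and 131.2 degrees
```

Contact angles between these two values are the ones for which the normal-hemispherical transmittance drops toward its minimum at 90°.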
Missing value imputation strategies for metabolomics data.
Armitage, Emily Grace; Godzien, Joanna; Alonso-Herranz, Vanesa; López-Gonzálvez, Ángeles; Barbas, Coral
2015-12-01
The origin of missing values can be caused by different reasons and depending on these origins missing values should be considered differently and dealt with in different ways. In this research, four methods of imputation have been compared with respect to revealing their effects on the normality and variance of data, on statistical significance and on the approximation of a suitable threshold to accept missing data as truly missing. Additionally, the effects of different strategies for controlling familywise error rate or false discovery and how they work with the different strategies for missing value imputation have been evaluated. Missing values were found to affect the normality and variance of data, and k-means nearest neighbour imputation was the best method tested for restoring this. Bonferroni correction was the best method for maximizing true positives and minimizing false positives, and it was observed that as low as 40% missing data could be truly missing. The range between 40 and 70% missing values was defined as a "gray area", and a strategy has therefore been proposed that balances the optimal imputation strategy (k-means nearest neighbour) against the best approximation for positioning real zeros. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
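The neighbour-based imputation the study favoured is available off the shelf; scikit-learn's KNNImputer is a close analogue (the neighbour count below is an assumption):

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0, np.nan],
              [1.1, np.nan, 3.0],
              [0.9, 2.1, 2.9],
              [1.2, 1.9, 3.1]])   # hypothetical metabolite table with gaps

imputed = KNNImputer(n_neighbors=2).fit_transform(X)
print(imputed)
```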
The relationship between consanguineous marriage and death in fetus and infants
Mohammadi, Majid Mehr; Hooman, Heidar Ali; Afrooz, Gholam Ali; Daramadi, Parviz Sharifi
2012-01-01
Background: Given the high prevalence of consanguineous marriages in rural and urban areas of Iran, the aim of this study was to identify its role in increasing fetal and infant deaths. Materials and Methods: This was a cross-sectional study in which 494 mothers with more than one exceptional child (mentally retarded and physically-dynamically disabled) or with normal children were selected based on multi-stage random sampling method. Data was gathered using the features of parents with more than one exceptional child questionnaire. The validity and reliability of this questionnaire was acceptable. Hierarchical log-linear method was used for statistical analysis. Results: Consanguineous marriage significantly increased the number of births of exceptional children. Moreover, there was a significant relation between the history of fetal/infant death and belonging to the group. There was a significant relation between consanguineous marriage and the history of fetal/infant death, which means consanguineous marriage increased the prevalence of fetal/infant death in parents with exceptional children rather than in parents with normal children. Conclusions: The rate of fetal/infant death in exceptional births of consanguineous marriages was higher than that of non-consanguineous marriages. PMID:23626609
Ronald, John A.; Chen, John W.; Chen, Yuanxin; Hamilton, Amanda M.; Rodriguez, Elisenda; Reynolds, Fred; Hegele, Robert A.; Rogers, Kem A.; Querol, Manel; Bogdanov, Alexei; Weissleder, Ralph; Rutt, Brian K.
2009-01-01
Background: Inflammation undermines the stability of atherosclerotic plaques, rendering them susceptible to acute rupture, the cataclysmic event that underlies clinical expression of this disease. Myeloperoxidase (MPO) is a central inflammatory enzyme secreted by activated macrophages, and is involved in multiple stages of plaque destabilization and patient outcome. We report here that a unique functional in vivo magnetic resonance (MR) agent can visualize MPO activity in atherosclerotic plaques in a rabbit model. Methods and Results: We performed MR imaging of the thoracic aorta of New Zealand white (NZW) rabbits fed a cholesterol (n=11) or normal (n=4) diet up to 2 hours after injection of the MPO sensor bis-5HT-DTPA(Gd) (MPO(Gd)), the conventional agent, DTPA(Gd), or an MPO(Gd) analog, bis-tyr-DTPA(Gd), as controls. Delayed MPO(Gd) images (2 hours post injection) showed focal areas of increased contrast (>2-fold) in diseased wall, but not in normal wall (p=0.84), compared to both DTPA(Gd) (n=11; p<0.001) and bis-tyr-DTPA(Gd) (n=3; p<0.05). Biochemical assays confirmed that diseased wall possessed three-fold elevated MPO activity compared to normal wall (p<0.01). Areas detected by MPO(Gd) imaging co-localized and correlated with MPO-rich areas infiltrated by macrophages on histopathological evaluations (r=0.91, p<0.0001). While macrophages were the main source of MPO, not all macrophages secreted MPO, suggesting that distinct subpopulations contribute differently to atherogenesis and supporting our functional approach. Conclusions: Our study represents a unique approach to the detection of inflammation in atherosclerotic plaques by examining macrophage function and the activity of an effector enzyme, to noninvasively provide both anatomic and functional information in vivo. PMID:19652086
Iseda, T; Nishio, T; Kawaguchi, S; Yamanoto, M; Kawasaki, T; Wakisaka, S
2004-01-01
We demonstrated the occurrence of marked regeneration of the corticospinal tract (CST) after a single transection and failure of regeneration after a repeated transection in young rats. To provide convincing evidence for the complete transection and regeneration we used retrograde neuronal double labeling. Double-labeled neurons that took up the first tracer from the transection site and the second tracer from the injection site caudal to the transection site were observed in the sensorimotor cortex. The anterograde tracing method revealed various patterns of regeneration. In the most successful cases the vast majority of regenerated fibers descended in the normal tract and terminated normally whereas a trace amount of fibers coursed aberrantly. In the less successful cases fibers descended partly normally and partly aberrantly or totally aberrantly. To clarify the role of astrocytes in determining the success or failure of regeneration we compared expression of glial fibrillary acidic protein (GFAP), vimentin and neurofilament (NF) immunoreactivity (IR) in the lesion between single and repeated transections. In either transection, astrocytes disappeared from the CST near the lesion site as early as 3 h after lesioning. However, by 24 h after a single transection, immature astrocytes coexpressing GFAP- and vimentin-IR appeared in the former astrocyte-free area and NF-positive axons crossed the lesion. By contrast, after a repeated transection the astrocyte-free area spread and NF-positive axons never crossed the lesion. It appears likely that the major sign, and possibly cause of failure of regeneration is the prolonged disappearance of astrocytes in the lesioned tract area. Copyright 2004 IBRO
Quantification of left ventricular myocardial mass in humans by nuclear magnetic resonance imaging.
Ostrzega, E; Maddahi, J; Honma, H; Crues, J V; Resser, K J; Charuzi, Y; Berman, D S
1989-02-01
The ability of NMRI to assess LV mass was studied in 20 normal males. By means of a 1.5 Tesla GE superconducting magnet and a standard spin-echo pulse sequence, multiple gated short-axis and axial slices of the entire left ventricle were obtained. LV mass was determined by Simpson's rule with the use of a previous experimentally validated method. The weight of the LV apex (subject to partial volume effect in the short-axis images) was derived from axial slices and that of the remaining left ventricle from short-axis slices. The weight of each slice was calculated by multiplying the planimetered surface area of the LV myocardium by slice thickness and by myocardial specific gravity (1.05). Mean +/- standard deviation of LV mass and LV mass index were 146 +/- 23.1 gm (range 92.3 to 190.4 gm) and 78.4 +/- 7.8 gm/m2 (range 57.7 to 89.4 gm/m2), respectively. Interobserver agreement as assessed by ICC was high for determining 161 individual slice masses (ICC = 0.99) and for total LV mass (ICC = 0.97). Intraobserver agreement for total LV mass was also high (ICC = 0.96). NMRI-determined LV mass correlated with body surface area: LV mass = 55 + 108 body surface area, r = 0.83; with body weight: LV mass = 26 + 0.77 body weight, r = 0.82; and with body height: LV mass = 262 +/- 5.9 body height, r = 0.75. Normal limits were developed for these relationships. NMRI-determined LV mass as related to body weight was in agreement with normal limits derived from autopsy literature data.(ABSTRACT TRUNCATED AT 250 WORDS)
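The summation-of-slices ('Simpson's rule') mass computation reduces to a few lines; the slice areas below are hypothetical and a uniform slice thickness is assumed:

```python
import numpy as np

def lv_mass(slice_areas_cm2, thickness_cm, density=1.05):
    """LV mass as planimetered myocardial area per slice x slice thickness
    x myocardial specific gravity (1.05 g/cm^3), summed over slices."""
    return density * thickness_cm * np.sum(slice_areas_cm2)

# Hypothetical myocardial areas (cm^2) for 10 short-axis slices of 1 cm.
print(lv_mass([8, 12, 15, 17, 18, 17, 15, 12, 9, 5], thickness_cm=1.0))
```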
Secondary trauma from occlusion: three-dimensional analysis using the finite element method.
Geramy, Allahyar; Faghihi, Sharieh
2004-01-01
Clinical effects of forces applied by dental occlusion on the periodontium have been evaluated for decades. Historically, trauma from occlusion has been considered a major etiologic factor of inflammatory periodontal diseases, while some researchers have interpreted it to be of less importance or without any detectable importance in periodontics. In this study, five three-dimensional models of a maxillary central incisor were created using ANSYS 5.40. The only difference between the models was the height of the alveolar bone, which ranged from normal height (13 mm of alveolar bone height) to 8 mm of alveolar bone loss (5 mm of alveolar bone height). Five point forces of 0.3 N each, summing to 1.5 N, were applied along a line parallel to and 1 mm apical to the incisal edge on the palatal side, in a palatolabial direction. The maximum (S1) and minimum (S3) principal stresses in the nodes of the labial side of the periodontal ligament (apical to the alveolar crest) were assessed. Analysis was done using the finite element method. An increase of S1 (up to 16 times in the cervical and 11.25 times in the apical area) and S3 (up to 17.13 times in the cervical and 9.9 times in the apical area) in comparison to the normal model was shown. The highest stress levels were traced in the subcervical area, except for the last model (8 mm of alveolar bone loss). According to the results of this study, 2.5 mm of alveolar bone loss can be considered a limit beyond which stress alterations are accelerated. Based on the FEM analysis, alveolar bone loss increases the stress (S1 and S3) produced in the PDL, in spite of applying the same force vector.
SU-G-JeP2-07: Fusion Optimization of Multi-Contrast MRI Scans for MR-Based Treatment Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, L; Yin, F; Liang, X
Purpose: To develop an image fusion method using multiple contrast MRI scans for MR-based treatment planning. Methods: T1 weighted (T1-w), T2 weighted (T2-w) and diffusion weighted images (DWI) were acquired from a liver cancer patient with breath-holding. Image fade correction and deformable image registration were performed using VelocityAI (Varian Medical Systems, CA). Registered images were normalized to mean voxel intensity for each image dataset. Contrast to noise ratio (CNR) between tumor and liver was quantified. Tumor area was defined as the GTV contoured by physicians. Normal liver area with equivalent dimension was used as background. Noise was defined by the standard deviation of voxel intensities in the same liver area. Linear weightings were applied to T1-w, T2-w and DWI images to generate composite images, and CNR was calculated for each composite image. Optimization processes were performed to achieve different clinical goals. Results: With a goal of maximizing tumor contrast, the composite image achieved a 7–12 fold increase in tumor CNR (142.8 vs. −2.3, 11.4 and 20.6 for T1-w, T2-w and DWI only, respectively), while anatomical details were largely invisible. With a weighting combination of 100%, −10% and −10%, respectively, tumor contrast was enhanced from −2.3 to −5.4, while the anatomical details were clear. With a weighting combination of 25%, 20% and 55%, balanced tumor contrast and anatomy were achieved. Conclusion: We have investigated the feasibility of performing image fusion optimization on multiple contrast MRI images. This mechanism could help utilize multiple contrast MRI scans to potentially facilitate future MR-based treatment planning.
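A sketch of the CNR definition and the weighted composite described above; the masks and weights are placeholders, and the mean-intensity normalization mirrors the abstract's description:

```python
import numpy as np

def cnr(image, tumor_mask, liver_mask):
    """Tumor-liver contrast-to-noise ratio, with noise taken as the standard
    deviation of voxel intensities in the normal liver region."""
    noise = image[liver_mask].std()
    return (image[tumor_mask].mean() - image[liver_mask].mean()) / noise

def composite(t1, t2, dwi, w=(0.25, 0.20, 0.55)):
    """Linearly weighted composite of mean-normalized MR contrasts."""
    norm = lambda im: im / im.mean()
    return w[0] * norm(t1) + w[1] * norm(t2) + w[2] * norm(dwi)
```

Optimizing w against the CNR (or a CNR/anatomy trade-off) reproduces the search the authors describe.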
New spatial upscaling methods for multi-point measurements: From normal to p-normal
NASA Astrophysics Data System (ADS)
Liu, Feng; Li, Xin
2017-12-01
Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., arithmetic average, least square estimation, block kriging, and three p-normal-based methods, i.e., LPE, geostatistics LPE and inverse distance weighted LPE are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce real-world upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and parameter p.
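The least power estimation underlying the p-normal methods is a one-dimensional minimization; for p = 2 it recovers the arithmetic mean and for p = 1 the median (the observations below are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def least_power_estimate(x, p):
    """LPE: the location m minimizing sum |x_i - m|^p (p-normal MLE)."""
    x = np.asarray(x, float)
    res = minimize_scalar(lambda m: np.sum(np.abs(x - m) ** p),
                          bounds=(x.min(), x.max()), method='bounded')
    return res.x

obs = [0.18, 0.22, 0.20, 0.35, 0.19]   # hypothetical multi-point soil moisture
print(least_power_estimate(obs, p=1.5))
```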
TOPEX/El Nino Watch - Warm Water Pool is Thinning, Feb, 5, 1998
NASA Technical Reports Server (NTRS)
1998-01-01
This image of the Pacific Ocean was produced using sea surface height measurements taken by the U.S.-French TOPEX/Poseidon satellite. The image shows sea surface height relative to normal ocean conditions on Feb. 5, 1998 and sea surface height is an indicator of the heat content of the ocean. The area and volume of the El Nino warm water pool that is affecting global weather patterns remains extremely large, but the pool has thinned along the equator and near the coast of South America. This 'thinning' means that the warm water is not as deep as it was a few months ago. Oceanographers indicate this is a classic pattern, typical of a mature El Nino condition that they would expect to see during the ocean's gradual transition back to normal sea level. In this image, the white and red areas indicate unusual patterns of heat storage; in the white areas, the sea surface is between 14 and 32 centimeters (6 to 13 inches) above normal; in the red areas, it's about 10 centimeters (4 inches) above normal. The green areas indicate normal conditions, while purple (the western Pacific) means at least 18 centimeters (7 inches) below normal sea level. The El Nino phenomenon is thought to be triggered when the steady westward blowing trade winds weaken and even reverse direction. This change in the winds allows a large mass of warm water (the red and white area) that is normally located near Australia to move eastward along the equator until it reaches the coast of South America. The displacement of so much warm water affects evaporation, where rain clouds form and, consequently, alters the typical atmospheric jet stream patterns around the world. Using satellite imagery, buoy and ship data, and a forecasting model of the ocean-atmosphere system, the National Oceanic and Atmospheric Administration, (NOAA), has continued to issue an advisory indicating the so-called El Nino weather conditions that have impacted much of the United States and the world are expected to remain through the spring.
For more information, please visit the TOPEX/Poseidon project web page at http://topex-www.jpl.nasa.gov
Avionics Corrosion Control Study
1974-01-01
organic materials that produce corrosive vapors include: Adhesives - urea formaldehyde, phenol formaldehyde; Gaskets - neoprene/asbestos, resin/cork ... the low-oxygen area is anodic to the high-oxygen area ... [when] a metal contains more [oxygen at] one point than at another. Intergranular ... better done by gluing than by normal mechanical methods involving screwing or bolting. The glue to be used is a two-pot epoxy resin which is very tough
Innovative Methods to Acquire and Adapt Soldier Skills (INMASS) in the Operational Environment
2009-08-01
handing out soccer balls to kids in the street one minute, then fighting an insurgent stronghold the next, and then quickly reacting to an improvised...send their kids to school. Soldiers also must use situational cues to assess whether there is danger in the immediate area, such as from an emplaced...cues is normal or not. For example, if kids should be getting out of school right now and there are no kids in sight – then it is likely that a
Wang, Anru; Yang, Fangling; Yu, Baosheng; Shan, Ye; Gao, Lanying; Zhang, Xiaoxiao; Peng, Ya
2013-07-01
To investigate the maturation of individual bones of the hand and wrist in children with central precocious puberty (CPP) and idiopathic short stature (ISS). Hand and wrist films of 25 children with CPP, 29 children with ISS and 21 normal controls were evaluated by the conventional Greulich-Pyle (GP) atlas method and by an individual bone assessment method, in which all twenty bones of the hand and wrist were evaluated based on the GP atlas, including the radius and ulna (2 bones), 7 carpal bones, and 11 metacarpal and phalangeal bones, and the average bone age (BA) was calculated. Differences between groups were analyzed by independent-samples t test, differences between the two methods by paired-sample t test, and differences between BA and chronological age (CA) by ROC analysis with SPSS 17.0. Compared with the normal control group, the advance of BA in the CPP group was 0.70-2.26 y (1.48 ± 0.78) by the GP atlas method and 0.28-2.00 y (1.14 ± 0.86) by the individual bone evaluation method. Of all twenty bones, the advance of metacarpal and phalangeal BA was the greatest [0.34-2.06 y (1.2 ± 0.86)]. In the ISS group, the delay of BA was 0.47-2.91 y (-1.69 ± 1.22) by the GP atlas method and 0.48-2.50 y (-1.49 ± 1.01) by the individual bone evaluation method. The delay of carpal BA was the greatest [0.59-2.73 y (-1.66 ± 1.07)] of all twenty bones. In the ISS group and the normal control group, there were no statistically significant differences between the two methods; in the CPP group, a statistically significant difference was found between the two methods. There were no statistically significant differences in the areas under the ROC curves between the two methods. The advance of metacarpal and phalangeal BA is the greatest in the CPP group and the delay of carpal BA is the greatest in the ISS group. Both methods provide diagnostic information on bone age in CPP and ISS children.
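As a brief illustration of the statistics used here, this sketch runs a paired-sample t test between two bone age readings obtained on the same children; all values are hypothetical.

```python
# Paired comparison of bone age estimates from two methods applied to the
# same subjects (GP atlas vs. individual bone assessment); data are made up.
import numpy as np
from scipy import stats

ba_gp = np.array([9.8, 10.5, 11.2, 9.1, 12.0])          # GP atlas bone ages (y)
ba_individual = np.array([9.5, 10.1, 10.8, 9.0, 11.6])  # individual-bone ages (y)

t, p = stats.ttest_rel(ba_gp, ba_individual)  # paired-sample t test
print(f"paired t = {t:.2f}, p = {p:.3f}")
```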
Wan, Jun-Hui; Tian, Pei-Ling; Luo, Wei-Hao; Wu, Bing-Yi; Xiong, Fu; Zhou, Wan-Jun; Wei, Xiang-Cai; Xu, Xiang-Min
2012-07-15
Reversed-phase high-performance liquid chromatography (RP-HPLC) of human globin chains is an important tool for detecting thalassemias and hemoglobin variants. The challenges of this method that limit its clinical application are a long analytical time and complex sample preparation. The aim of this study was to establish a simple, rapid and high-resolution RP-HPLC method for the separation of globin chains in human blood. Red blood cells from newborns and adults were diluted in deionized water and injected directly onto a micro-Jupiter C18 reversed-phase column (250 mm × 4.6 mm) with UV detection at 280 nm. Under the conditions of varying pH or the HPLC gradient, the globin chains (pre-β, β, δ, α, (G)γ and (A)γ) were denatured and separated from the heme groups in 12 min, with a retention time coefficient of variation (CV) ranging from 0.11 to 1.29% and a peak area CV between 0.32% and 4.86%. Significant differences (P<0.05) among three groups (normal, Hb H and β thalassemia) were found in the area ratio of α/pre-β+β when applying the rapid elution procedure, while P≥0.05 was obtained between the normal and α thalassemia silent/trait groups. Based on the ANOVA results, receiver operating characteristic (ROC) curve analysis of the δ/β and α/pre-β+β area ratios showed a sensitivity of 100.0% and a specificity of 100.0% for indicating β thalassemia carriers, and a sensitivity of 96.6% and a specificity of 89.6% for the prediction of hemoglobin H (Hb H) disease. The proposed cut-off was 0.026 of δ/β for β thalassemia carriers and 0.626 of α/pre-β+β for Hb H disease. In addition, the abnormal hemoglobins Hb E and Hb Westmead (Hb WS) were successfully identified using this RP-HPLC method. Our experience in developing this RP-HPLC method for the rapid separation of human globin chains could be of use for similar work. Copyright © 2012 Elsevier B.V. All rights reserved.
Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie
2016-01-01
The rapidly increasing biomedical literature calls for an automatic approach to the recognition and normalization of disease mentions, in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among the proposed methods, conditional random fields (CRFs) and dictionary lookup are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques in improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and to compare with other existing dictionary lookup based normalization methods. The best configuration achieved an F-measure of 0.77 for disease normalization, which outperformed the best dictionary lookup based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract PMID:27504009
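A minimal sketch of the dictionary lookup step described above, with a hypothetical mention-to-MeSH-identifier dictionary; production systems layer abbreviation expansion, punctuation stripping and fuzzy matching on top of this.

```python
# Dictionary-lookup disease normalization: map a recognized mention to a
# concept identifier by exact (case-folded) lookup. Dictionary is illustrative.
from typing import Optional

mesh_dictionary = {
    "non-small cell lung cancer": "D002289",
    "nsclc": "D002289",
    "parkinson disease": "D010300",
}

def normalize(mention: str) -> Optional[str]:
    """Return the concept identifier for a mention, or None if unknown."""
    key = mention.lower().strip()
    return mesh_dictionary.get(key)

print(normalize("NSCLC"))  # -> D002289
```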
Warm Ocean Temperatures Blanket the Far-Western Pacific
NASA Technical Reports Server (NTRS)
2001-01-01
These data, taken during a 10-day collection cycle ending March 9, 2001, show that above-normal sea-surface heights and warmer ocean temperatures (indicated by the red and white areas) still blanket the far-western tropical Pacific and much of the north (and south) mid-Pacific. Red areas are about 10 centimeters (4 inches) above normal; white areas show the sea-surface height is between 14 and 32 centimeters (6 to 13 inches) above normal.
This build-up of heat dominating the Western Pacific was first noted by TOPEX/Poseidon oceanographers more than two years ago and has outlasted the El Nino and La Nina events of the past few years. See: http://www.jpl.nasa.gov/elnino/990127.html . This warmth contrasts with the Bering Sea, Gulf of Alaska and tropical Pacific where lower-than-normal sea levels and cool ocean temperatures continue (indicated by blue areas). The blue areas are between 5 and 13 centimeters (2 and 5 inches) below normal, whereas the purple areas range from 14 to 18 centimeters (6 to 7 inches) below normal. Actually, the near-equatorial ocean cooled through the fall of 2000 and into mid-winter and continues almost La Nina-like. Looking at the entire Pacific basin, the Pacific Decadal Oscillation's warm horseshoe and cool wedge pattern still dominates this sea-level height image. Most recent National Oceanic and Atmospheric Administration (NOAA) sea-surface temperature data also clearly illustrate the persistence of this basin-wide pattern. They are available at http://psbsgi1.nesdis.noaa.gov:8080/PSB/EPS/SST/climo.html The U.S.-French TOPEX/Poseidon mission is managed by JPL for NASA's Earth Science Enterprise, Washington, D.C. JPL is a division of the California Institute of Technology in Pasadena. For more information on the TOPEX/Poseidon project, see: http://topex-www.jpl.nasa.gov
TOPEX/El Nino Watch - Satellite Shows Pacific Running Hot and Cold, September 12, 1998
NASA Technical Reports Server (NTRS)
1998-01-01
This image of the Pacific Ocean was produced using sea-surface height measurements taken by the U.S.-French TOPEX/Poseidon satellite. The image shows sea surface height relative to normal ocean conditions on September 12, 1998; these sea surface heights are an indicator of the changing amount of heat stored in the ocean. The tropical Pacific Ocean continues to exhibit the complicated characteristics of both a lingering El Nino and a possibly waning La Nina situation. This image shows that the rapid cooling of the central tropical Pacific has slowed and this area of low sea level (shown in purple) has decreased slightly since last month. It is still uncertain, scientists say, that this cold pool will evolve into a long-lasting La Nina situation. Remnants of the El Nino warm water pool, shown here in red and white, are still lingering to the north and south of the equator. The coexistence of these two contrasting conditions indicates that the ocean and the climate system remain in transition. These strong patterns have remained in the climate system for many months and will continue to influence weather conditions around the world in the coming fall and winter. The satellite's sea-surface height measurements have provided scientists with a detailed view of the 1997-98 El Nino because the TOPEX/Poseidon satellite measures the changing sea-surface height with unprecedented precision. The purple areas are 14 to 18 centimeters (6 to 7 inches) below normal, creating a deficit in the heat supply to the surface waters, and the blue areas are 5 to 13 centimeters (2 to 5 inches) below normal. The white areas show the sea surface is between 14 and 32 centimeters (6 to 13 inches) above normal; in the red areas, it's about 10 centimeters (4 inches) above normal. The green areas indicate normal conditions. The El Nino phenomenon is thought to be triggered when the steady westward blowing trade winds weaken and even reverse direction. This change in the winds allows a large mass of warm water (the red and white area) that is normally located near Australia to move eastward along the equator until it reaches the coast of South America. The displacement of so much warm water affects evaporation, where rain clouds form and, consequently, alters the typical atmospheric jet stream patterns around the world. A La Nina situation is essentially the opposite of an El Nino condition: during La Nina the trade winds are stronger than normal and the cold water that normally exists along the coast of South America extends to the central equatorial Pacific. A La Nina situation also changes global weather patterns, and is associated with less moisture in the air, resulting in less rain along the west coasts of North and South America.
For more information, please visit the TOPEX/Poseidon project web page at http://topex-www.jpl.nasa.gov
A field evaluation of a SO2 passive sampler in tropical industrial and urban air
NASA Astrophysics Data System (ADS)
Cruz, Lícia P. S.; Campos, Vânia P.; Silva, Adriana M. C.; Tavares, Tania M.
Passive samplers have been widely used for over 30 years in the measurement of personal exposure to vapours and gases in the workplace. These samplers have only recently been applied in the monitoring of ambient air, which presents concentrations that are normally much smaller than those found in occupational environments. The locally constructed passive sampler was based on gas molecular diffusion through a static air layer. The design used minimizes particle interference and turbulent diffusion. After exposure, the SO2 trapped in filters impregnated with Na2CO3 was extracted by means of an ultrasonic bath, for 15 min, using 1.0×10^-2 mol L^-1 H2O2, and was determined as SO4^2- by ion chromatography. The performance of the passive sampler was evaluated at different exposure periods, and the sampler was applied in industrial and urban areas. Method precision, as the relative standard deviation for three simultaneously applied passive samplers, was within 10%. Passive sampling, when compared to active monitoring methods under real conditions in urban and industrial areas, showed an overall accuracy of 15%. A statistical comparison with an active method was performed to demonstrate the validity of the passive method. Sampler capacity varied between 98 and 421 μg SO2 m^-3 for exposure periods of one month and one week, respectively, which allows its use in highly polluted areas.
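For readers unfamiliar with diffusive samplers, the sketch below shows the standard Fick's-first-law calculation that converts collected mass into a time-averaged concentration; the geometry and the SO2 diffusion coefficient are illustrative assumptions, not the paper's values.

```python
# Diffusive (passive) sampler calculation from Fick's first law:
# C = m * L / (D * A * t), with m the collected mass, L the diffusion path
# length, D the diffusion coefficient, A the cross-section and t the time.
def passive_concentration(mass_ug, L_cm, D_cm2_s, A_cm2, t_s):
    """Time-averaged concentration (ug/cm^3) from collected mass."""
    return mass_ug * L_cm / (D_cm2_s * A_cm2 * t_s)

m = 2.5            # ug SO2 collected on the Na2CO3-impregnated filter
L = 1.0            # static air-layer (diffusion path) length, cm
D = 0.13           # SO2 diffusion coefficient in air, cm^2/s (approximate)
A = 3.0            # sampler cross-section, cm^2
t = 7 * 24 * 3600  # one-week exposure, s

c_ug_m3 = passive_concentration(m, L, D, A, t) * 1e6  # cm^-3 -> m^-3
print(f"{c_ug_m3:.1f} ug SO2 per m^3")
```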
Analysis of using the tongue deviation angle as a warning sign of a stroke
2012-01-01
Background: The symptom of tongue deviation is observed in a stroke or transient ischemic attack. Nevertheless, there is much room for interpretation in the tongue deviation test. The crucial factor is the lack of an effective quantification method of tongue deviation. If we can quantify the features of the tongue deviation and scientifically verify the relationship between the deviation angle and a stroke, the information provided by the tongue will be helpful in recognizing a warning of a stroke. Methods: In this study, a quantification method of the tongue deviation angle was proposed for the first time to characterize stroke patients. We captured the tongue images of stroke patients (15 males and 10 females, ranging between 55 and 82 years of age); transient ischemic attack (TIA) patients (16 males and 9 females, ranging between 53 and 79 years of age); and normal subjects (14 males and 11 females, ranging between 52 and 80 years of age) to analyze whether the method is effective. In addition, we used the receiver operating characteristic (ROC) curve for the sensitivity analysis, and determined the threshold value of the tongue deviation angle for the warning sign of a stroke. Results: The means and standard deviations of the tongue deviation angles of the stroke, TIA, and normal groups were: 6.9 ± 3.1, 4.9 ± 2.1 and 1.4 ± 0.8 degrees, respectively. Analyzed by the unpaired Student's t-test, the p-value between the stroke group and the TIA group was 0.015 (>0.01), indicating no significant difference in the tongue deviation angle. The p-values between the stroke group and the normal group, as well as between the TIA group and the normal group, were both less than 0.01. These results show significant differences in the tongue deviation angle between the patient groups (stroke and TIA patients) and the normal group, and imply that the tongue deviation angle can effectively distinguish the patient groups from the normal group. With respect to visual examination, 40% and 32% of stroke patients, 24% and 16% of TIA patients, and 4% and 0% of normal subjects were found to have tongue deviations when physicians "A" and "B", respectively, examined them. This variation shows how essential a quantification method is in the clinical setting. In the ROC analysis, the area under the curve (AUC = 0.96) indicates good discrimination. A tongue deviation angle greater than the optimum threshold value (3.2°) predicts a risk of stroke. Conclusions: In summary, we developed an effective quantification method to characterize the tongue deviation angle, and we confirmed the feasibility of recognizing the tongue deviation angle as an early warning sign of an impending stroke. PMID:22908956
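The optimum threshold reported above can be obtained from the ROC curve by maximizing Youden's J statistic (sensitivity + specificity - 1); a minimal sketch with hypothetical angle data follows.

```python
# Choosing the optimal ROC threshold via Youden's J; data are illustrative,
# not the study's measurements (which found an optimum near 3.2 degrees).
import numpy as np
from sklearn.metrics import roc_curve

angles = np.array([1.2, 0.8, 2.5, 6.1, 7.4, 4.8, 1.9, 5.5])  # deviation, deg
is_patient = np.array([0, 0, 0, 1, 1, 1, 0, 1])              # stroke/TIA = 1

fpr, tpr, thresholds = roc_curve(is_patient, angles)
best = thresholds[np.argmax(tpr - fpr)]  # maximizes Youden's J = tpr - fpr
print(f"optimal threshold ~ {best:.1f} degrees")
```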
Cholewicki, Jacek; van Dieën, Jaap; Lee, Angela S.; Reeves, N. Peter
2011-01-01
The problem with normalizing EMG data from patients with painful symptoms (e.g. low back pain) is that such patients may be unwilling or unable to perform maximum exertions. Furthermore, the normalization to a reference signal, obtained from a maximal or sub-maximal task, tends to mask differences that might exist as a result of pathology. Therefore, we presented a novel method (GAIN method) for normalizing trunk EMG data that overcomes both problems. The GAIN method does not require maximal exertions (MVC) and tends to preserve distinct features in the muscle recruitment patterns for various tasks. Ten healthy subjects performed various isometric trunk exertions, while EMG data from 10 muscles were recorded and later normalized using the GAIN and MVC methods. The MVC method resulted in smaller variation between subjects when tasks were executed at the three relative force levels (10%, 20%, and 30% MVC), while the GAIN method resulted in smaller variation between subjects when the tasks were executed at the three absolute force levels (50 N, 100 N, and 145 N). This outcome implies that the MVC method provides a relative measure of muscle effort, while the GAIN-normalized EMG data gives an estimate of the absolute muscle force. Therefore, the GAIN-normalized EMG data tends to preserve the EMG differences between subjects in the way they recruit their muscles to execute various tasks, while the MVC-normalized data will tend to suppress such differences. The appropriate choice of the EMG normalization method will depend on the specific question that an experimenter is attempting to answer. PMID:21665489
Elgin, Ufuk; Cankaya, Bülent; Simsek, Tulay; Batman, Aygen
2010-01-01
To compare the optic disc topography parameters of children with juvenile diabetes mellitus and normal children using the Heidelberg Retinal Tomograph (HRT III) (Heidelberg Engineering, Heidelberg, Germany). The topographic optic disc parameters (cup volume, cup area, rim volume, rim area, disc area, mean cup-to-disc ratio, and mean cup depth) of 28 non-glaucomatous eyes of 28 children with type 1 diabetes mellitus and 28 eyes of 28 age-matched healthy children were compared using the nonparametric Mann-Whitney U test. No statistically significant differences were found between cup volume (P = .782), cup area (P = .878), rim volume (P = .853), disc area (P = .452), mean cup-to-disc ratio (P = .852), and mean cup depth (P = .711) of eyes of cases with diabetes mellitus and normal subjects. This result suggests that non-glaucomatous eyes of children with type 1 diabetes mellitus and healthy subjects have similar topographic optic disc characteristics. Copyright 2010, SLACK Incorporated.
NASA Astrophysics Data System (ADS)
Wu, Chunhung; Huang, Jyuntai
2017-04-01
Most landslides in Taiwan are triggered by rainfall or earthquake events, and the heavy rainfall of the typhoon season, from June to October, makes the landslide hazard more serious. Renai Township had among the most large landslide cases in Taiwan after 2009 Typhoon Morakot (from Aug. 5 to Aug. 10, 2009). Around 2,744 landslide cases with a total landslide area of 21.5 km2 (landslide ratio = 1.8%), including 26 large landslide cases, were induced by 2009 Typhoon Morakot in Renai Township. The area of each large landslide case is more than 0.1 km2, and the area of the largest case is around 0.96 km2. 58% of the large landslide cases are located in areas with metamorphosed sandstone. The mean slope of the 26 large landslide cases ranges from 15 to 56 degrees, and the accumulated rainfall during 2009 Typhoon Morakot ranges from 530 mm to 937 mm. Three methods, the frequency ratio method (FR), the weights of evidence method (WOE), and the logistic regression method (LR), are used in this study to establish the landslide susceptibility in Renai Township, Nantou County, Taiwan. Eight landslide-related factors, including elevation, slope, aspect, geology, land use, distance to drainage, distance to fault, and accumulated rainfall during 2009 Typhoon Morakot, are used to establish the landslide susceptibility models, and the landslide inventory after 2009 Typhoon Morakot is used to test model performance. The mean accumulated rainfall in Renai Township during 2009 Typhoon Morakot was around 735 mm, with maximum 1-hr, 3-hr, and 6-hr rainfall intensities of 44 mm, 106 mm and 204 mm, respectively. The ranges of the original susceptibility values established by the three methods are 4.0 to 20.9 for FR, -33.8 to -16.1 for WOE, and -41.7 to 5.7 for LR, and the mean landslide susceptibility values are 8.0, -24.6 and 0.38, respectively. The AUC values are 0.815 for FR, 0.816 for WOE, and 0.823 for LR. To compare model performance more deeply, the study normalized the susceptibility value ranges of the three landslide susceptibility models to 0 to 1; normalized landslide susceptibility values > 0.5 and ≤ 0.5 are regarded as predicted-landslide and predicted-not-landslide areas, respectively. The ratio of the predicted-landslide area to the total area is 3.0% for FR, 71.4% for WOE, and 26.5% for LR, and the correct ratio is 65.5% for FR, 61.9% for WOE, and 74.5% for LR. The study adopted 14 rainfall stations with more than 20 years of daily rainfall data in Renai Township to estimate the 24-hr accumulated rainfall for different return periods (RPYs). Landslide susceptibility maps under the 24-hr accumulated rainfall distribution for different return periods are used to estimate landslide disaster location and scale. The landslide risk in Renai Township is calculated as 2.62 billion for a 5-year return period, 3.06 billion for 10 years, 4.69 billion for 25 years, 5.97 billion for 50 years, 6.98 billion for 100 years, and 8.23 billion for 200 years.
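A minimal sketch of the 0-1 min-max rescaling used to put the three susceptibility models on a common scale, with the 0.5 cut applied afterwards; the input values are illustrative examples within the LR range quoted above.

```python
# Min-max normalization of model susceptibility values to [0, 1], followed
# by the 0.5 threshold the study uses to flag predicted-landslide areas.
import numpy as np

def minmax(values):
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

lr_susceptibility = np.array([-41.7, -20.0, 0.38, 5.7])  # example LR values
norm = minmax(lr_susceptibility)
predicted_landslide = norm > 0.5
print(norm.round(2), predicted_landslide)
```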
Normalization matters: tracking the best strategy for sperm miRNA quantification.
Corral-Vazquez, Celia; Blanco, Joan; Salas-Huetos, Albert; Vidal, Francesca; Anton, Ester
2017-01-01
What is the most reliable normalization strategy for sperm microRNA (miRNA) quantitative Reverse Transcription Polymerase Chain Reactions (qRT-PCR) using singleplex assays? The use of the average expression of hsa-miR-100-5p and hsa-miR-30a-5p as sperm miRNA qRT-PCR data normalizer is suggested as an optimal strategy. Mean-centering methods are the most reliable normalization strategies for miRNA high-throughput expression analyses. Nevertheless, specific trustworthy reference controls must be established in singleplex sperm miRNA qRT-PCRs. Cycle threshold (Ct) values from previously published sperm miRNA expression profiles were normalized using four approaches: (i) Mean-Centering Restricted (MCR) method (taken as the reference strategy); (ii) expression of the small nuclear RNA RNU6B; (iii) expression of four miRNAs selected by the Concordance Correlation Restricted (CCR) algorithm: hsa-miR-100-5p, hsa-miR-146b-5p, hsa-miR-92a-3p and hsa-miR-30a-5p; (iv) the combination of two of these miRNAs that achieved the highest proximity to MCR. Expression profile data from 736 sperm miRNAs were taken from previously published studies performed in fertile donors (n = 10) and infertile patients (n = 38). For each tested normalizer molecule, expression ubiquity and uniformity across the different samples and populations were assessed as indispensable requirements for being considered as valid candidates. The reliability of the different normalizing strategies was compared to MCR based on the set of differentially expressed miRNAs (DE-miRNAs) detected between populations, the corresponding predicted targets and the associated enriched biological processes. All tested normalizers were found to be ubiquitous and non-differentially expressed between populations. RNU6B was the least uniformly expressed candidate across samples. Data normalization through RNU6B led to dramatically misguided results when compared to MCR outputs, with a null prediction of target genes and enriched biological processes. Hsa-miR-146b-5p and hsa-miR-92a-3p were more uniformly expressed than RNU6B, but their results still showed scant proximity to the reference method. The highest resemblance to MCR was achieved by hsa-miR-100-5p and hsa-miR-30a-5p. Normalization against the combination of both miRNAs reached the best proximity rank regarding the detected DE-miRNAs (Area Under the Curve = 0.8). This combination also exhibited the best performance in terms of the target genes predicted (72.3% of True Positives) and their corresponding enriched biological processes (70.4% of True Positives). This study is focused on sperm miRNA qRT-PCR analysis. The use of the selected normalizers in other cell types or tissues would still require confirmation. The search for new fertility biomarkers based on sperm miRNA expression using high-throughput assays is one of the upcoming challenges in the field of reproductive genetics. In this context, validation of the results using singleplex assays would be mandatory. The normalizer strategy suggested in this study would provide a universal option in this area, allowing for normalization of the validated data without causing meaningful variations of the results. Instead, qRT-PCR data normalization by RNU6B should be discarded in sperm-miRNA expression studies. This work was supported by the 2014/SGR00524 project (Agència de Gestió d'Ajuts Universitaris i de Recerca, Generalitat de Catalunya, Spain) and UAB CF-180034 grant (Universitat Autònoma de Barcelona).
Celia Corral-Vazquez is a recipient of a Personal Investigador en Formació grant UAB/PIF2015 (Universitat Autònoma de Barcelona). The authors report no conflict of interest. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
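One common way to apply the recommended strategy in singleplex qRT-PCR is a delta-Ct normalization against the mean Ct of the two reference miRNAs; a minimal sketch with hypothetical Ct values follows.

```python
# Delta-Ct normalization against the average of the two reference miRNAs
# suggested in the study (hsa-miR-100-5p and hsa-miR-30a-5p). Ct values
# here are made up for illustration.
import numpy as np

ct_target = 28.4                   # Ct of the miRNA of interest
ct_mir100, ct_mir30a = 24.1, 25.3  # Ct of the two reference miRNAs

delta_ct = ct_target - np.mean([ct_mir100, ct_mir30a])
relative_expression = 2.0 ** (-delta_ct)  # standard 2^-dCt convention
print(f"dCt = {delta_ct:.2f}, relative expression = {relative_expression:.3f}")
```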
NASA Astrophysics Data System (ADS)
Jayasekare, Ajith S.; Wickramasuriya, Rohan; Namazi-Rad, Mohammad-Reza; Perez, Pascal; Singh, Gaurav
2017-07-01
A continuous update of building information is necessary in today's urban planning. Digital images acquired by remote sensing platforms at appropriate spatial and temporal resolutions provide an excellent data source to achieve this. In particular, high-resolution satellite images are often used to retrieve objects such as rooftops using feature extraction. However, high-resolution images acquired over built-up areas are affected by noise such as shadows, which reduces the accuracy of feature extraction. Feature extraction relies heavily on the reflectance purity of objects, which is difficult to achieve in complex urban landscapes. An attempt was made to increase the reflectance purity of building rooftops affected by shadows. In addition to the multispectral (MS) image, derivatives thereof, namely normalized difference vegetation index and principal component (PC) images, were incorporated in generating the probability image. This hybrid probability image generation ensured that the effect of shadows on rooftop extraction, particularly on light-colored roofs, is largely eliminated. The PC image was also used for image segmentation, which further increased the accuracy compared to segmentation performed on an MS image. Results show that the presented method can achieve higher rooftop extraction accuracy (70.4%) in vegetation-rich urban areas compared to traditional methods.
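One of the image derivatives mentioned above, the normalized difference vegetation index, is straightforward to compute from the red and near-infrared bands; a minimal sketch with toy arrays follows.

```python
# NDVI = (NIR - red) / (NIR + red), in [-1, 1]; band values are illustrative.
import numpy as np

def ndvi(nir, red):
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids division by zero

nir_band = np.array([[0.45, 0.50], [0.30, 0.60]])
red_band = np.array([[0.10, 0.12], [0.25, 0.08]])
print(ndvi(nir_band, red_band))
```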
Dissolution process analysis using model-free Noyes-Whitney integral equation.
Hattori, Yusuke; Haruna, Yoshimasa; Otsuka, Makoto
2013-02-01
The drug dissolution process of solid dosage forms is theoretically described by the Noyes-Whitney-Nernst equation; however, analysis of the process is usually demonstrated by assuming particular models, and such model-dependent methods are idealized and subject to limitations. In this study, a Noyes-Whitney integral equation was proposed and applied to represent the drug dissolution profiles of a solid formulation via the non-linear least squares (NLLS) method. The integral equation is a model-free formula involving the dissolution rate constant as a parameter. In the present study, several solid formulations were prepared by changing the blending time of magnesium stearate (MgSt) with theophylline monohydrate, α-lactose monohydrate, and crystalline cellulose. The formula represented the dissolution profiles excellently, and thereby the rate constant and specific surface area could be obtained by the NLLS method. Because prolonged blending coated the particle surfaces with MgSt, water permeation was found to be hindered by this layer as it dissociated into disintegrant particles. In the end, the solid formulations did not disintegrate; nevertheless, the specific surface area gradually increased during dissolution. X-ray CT observation supported this result, demonstrating that surface roughening dominated over dissolution, and thus the specific surface area of the solid formulation gradually increased. Copyright © 2012 Elsevier B.V. All rights reserved.
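For orientation, the sketch below fits a dissolution profile by non-linear least squares using the classical closed-form (first-order) solution associated with the Noyes-Whitney equation; this is a simplified stand-in, not the paper's model-free integral formulation, and the data are hypothetical.

```python
# NLLS fit of a dissolution profile with the first-order Noyes-Whitney
# solution C(t) = Cs * (1 - exp(-k t)); t in minutes, C in mg/L (made up).
import numpy as np
from scipy.optimize import curve_fit

def noyes_whitney(t, k, c_s):
    """First-order approach to the saturation concentration Cs."""
    return c_s * (1.0 - np.exp(-k * t))

t = np.array([0, 5, 10, 20, 40, 60], dtype=float)
c = np.array([0.0, 2.1, 3.6, 5.5, 7.0, 7.5])

(k, c_s), _ = curve_fit(noyes_whitney, t, c, p0=(0.05, 8.0))
print(f"rate constant k = {k:.3f} 1/min, saturation Cs = {c_s:.2f} mg/L")
```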
Estimation of Subpixel Snow-Covered Area by Nonparametric Regression Splines
NASA Astrophysics Data System (ADS)
Kuter, S.; Akyürek, Z.; Weber, G.-W.
2016-10-01
Measurement of the areal extent of snow cover with high accuracy plays an important role in hydrological and climate modeling. Remotely-sensed data acquired by earth-observing satellites offer great advantages for timely monitoring of snow cover. However, the main obstacle is the tradeoff between the temporal and spatial resolution of satellite imagery. Soft or subpixel classification of low or moderate resolution satellite images is a preferred technique to overcome this problem. The most frequently employed snow cover fraction methods applied to Moderate Resolution Imaging Spectroradiometer (MODIS) data have evolved from spectral unmixing and empirical Normalized Difference Snow Index (NDSI) methods to the latest machine learning-based artificial neural networks (ANNs). This study demonstrates the implementation of subpixel snow-covered area estimation based on a state-of-the-art nonparametric spline regression method, namely Multivariate Adaptive Regression Splines (MARS). MARS models were trained by using MODIS top-of-atmosphere reflectance values of bands 1-7 as predictor variables. Reference percentage snow cover maps were generated from higher spatial resolution Landsat ETM+ binary snow cover maps. A multilayer feed-forward ANN with one hidden layer trained with backpropagation was also employed to estimate the percentage snow-covered area on the same data set. The results indicated that the developed MARS model performed better than the ANN.
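A minimal sketch of fitting a MARS model to band reflectances, assuming the third-party py-earth package; the synthetic data merely stand in for MODIS bands 1-7 predictors and Landsat-derived snow fractions.

```python
# MARS regression for fractional snow cover; requires the py-earth package
# (sklearn-contrib-py-earth). All data below are synthetic placeholders.
import numpy as np
from pyearth import Earth

rng = np.random.default_rng(0)
X = rng.random((200, 7))  # stand-in for MODIS bands 1-7 reflectances
y = np.clip(X[:, 3] - 0.5 * X[:, 0] + 0.1 * rng.standard_normal(200), 0, 1)

mars = Earth(max_degree=2)  # allow pairwise hinge interactions
mars.fit(X, y)
print(mars.score(X, y))     # R^2 on the training sample
```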
NASA Astrophysics Data System (ADS)
Ramos, A. M.; Lorenzo, M. N.; Gimeno, L.; Nieto, R.; Añel, J. A.
2009-09-01
Several methods have been developed to rank meteorological events in terms of severity, social impact or economic impact. These classifications are not always objective, since they depend on several factors; for instance, the observation network is biased towards densely populated urban areas and against rural or oceanic areas. It is also very important to note that not all rare synoptic-scale meteorological events attract significant media attention. In this work we use a comprehensive method of classifying synoptic-scale events, adapted from Hart and Grumm (2001), for the European region (30N-60N, 30W-15E). The main motivation behind this method is that the more unusual the event (a cold outbreak, a heat wave, or a flood) for a given region, the higher it must be ranked. To do so, we use four basic meteorological variables (height, temperature, wind and specific humidity) from the NCEP reanalysis dataset over the range 1000 hPa to 200 hPa on a daily basis from 1948 to 2004. The climatology used embraces the 1961-1990 period. For each variable, the ranking of climatological anomalies was computed from the daily normalized departure from climatology at different levels. For each day (from 1948 to 2004) we therefore have four anomaly measures, one per variable, plus a combined measure in which the total anomaly is the average of the anomalies of the four variables. Results will be analyzed on a monthly, seasonal and annual basis; seasonal trends and variability will also be shown. In addition, given the extent of the database, the expected return periods associated with the anomalies are derived. Moreover, we also use an automated version of the Lamb weather type (WT) classification scheme (Jones et al., 1993), adapted for the Galicia area (northwestern corner of the Iberian Peninsula) by Lorenzo et al. (2008), to compute the daily local circulation regimes in this area. By combining the corresponding daily WT with the five anomaly measures we can evaluate whether any WT is preferentially responsible for high or low anomaly values. Hart, R.E. and R.H. Grumm (2001) Using normalized climatological anomalies to rank synoptic-scale events objectively. Monthly Weather Review, 129, 2426-2442. Jones, P.D., M. Hulme and K.R. Briffa (1993) A comparison of Lamb circulation types with an objective classification scheme. International Journal of Climatology, 13, 655-663. Lorenzo, M.N., J.J. Taboada and L. Gimeno (2008) Links between circulation weather types and teleconnection patterns and their influence on precipitation patterns in Galicia (NW Spain). International Journal of Climatology, 28(11), 1493-1505. DOI: 10.1002/joc.1646.
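The core quantity of the ranking method is the normalized departure from climatology; a minimal sketch with hypothetical values follows.

```python
# Standardized (normalized) anomaly: the departure of a daily value from
# the climatological mean, in units of climatological standard deviations.
def normalized_anomaly(x, clim_mean, clim_std):
    return (x - clim_mean) / clim_std

# Hypothetical daily 500-hPa temperature vs. its 1961-1990 climatology:
print(normalized_anomaly(x=-18.0, clim_mean=-22.5, clim_std=2.0))  # +2.25 sigma
```

The total anomaly reported for each day would then be the average of such measures over the four variables (and, in the study, over the vertical levels considered).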
2014-01-01
Background: We tested the feasibility of a simple method for assessment of prostate cancer (PCa) aggressiveness using diffusion-weighted magnetic resonance imaging (MRI) to calculate apparent diffusion coefficient (ADC) ratios between prostate cancer and healthy prostatic tissue. Methods: The requirement for institutional review board approval was waived. A set of 20 standardized core transperineal saturation biopsy specimens served as the reference standard for placement of regions of interest on ADC maps in tumorous and normal prostatic tissue of 22 men with PCa (median Gleason score: 7; range, 6–9). A total of 128 positive sectors were included for evaluation. Two diagnostic ratios were computed between tumor ADCs and normal sector ADCs: the ADC peripheral ratio (the ratio between tumor ADC and normal peripheral zone tissue, ADC-PR), and the ADC central ratio (the ratio between tumor ADC and normal central zone tissue, ADC-CR). The performance of the two ratios in detecting high-risk tumor foci (Gleason 8 and 9) was assessed using the area under the receiver operating characteristic curve (AUC). Results: Both ADC ratios presented significantly lower values in high-risk tumors (0.48 ± 0.13 for ADC-CR and 0.40 ± 0.09 for ADC-PR) compared with low-risk tumors (0.66 ± 0.17 for ADC-CR and 0.54 ± 0.09 for ADC-PR) (p < 0.001) and had better diagnostic performance (ADC-CR AUC = 0.77, sensitivity = 82.2%, specificity = 66.7% and ADC-PR AUC = 0.90, sensitivity = 93.7%, specificity = 80%) than stand-alone tumor ADCs (AUC of 0.75, sensitivity = 72.7%, specificity = 70.6%) for identifying high-risk lesions. Conclusions: The ADC ratio as an intrapatient-normalized diagnostic tool may be better in detecting high-grade lesions compared with analysis based on tumor ADCs alone, and may reduce the rate of biopsies. PMID:24885552
The impact on midlevel vision of statistically optimal divisive normalization in V1
Coen-Cagli, Ruben; Schwartz, Odelia
2013-01-01
The first two areas of the primate visual cortex (V1, V2) provide a paradigmatic example of hierarchical computation in the brain. However, neither the functional properties of V2 nor the interactions between the two areas are well understood. One key aspect is that the statistics of the inputs received by V2 depend on the nonlinear response properties of V1. Here, we focused on divisive normalization, a canonical nonlinear computation that is observed in many neural areas and modalities. We simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities. The statistics of the V1 population responses differed markedly across models. We then addressed how V2 receptive fields pool the responses of V1 model units with different tuning. We assumed this is achieved by learning without supervision a linear representation that removes correlations, which could be accomplished with principal component analysis. This approach revealed V2-like feature selectivity when we used the optimal normalization and, to a lesser extent, the canonical one but not in the absence of both. We compared the resulting two-stage models on two perceptual tasks; while models encompassing V1 surround normalization performed better at object recognition, only statistically optimal normalization provided systematic advantages in a task more closely matched to midlevel vision, namely figure/ground judgment. Our results suggest that experiments probing midlevel areas might benefit from using stimuli designed to engage the computations that characterize V1 optimality. PMID:23857950
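For reference, a minimal sketch of the canonical divisive normalization computation discussed above; the pool weights, exponent and semi-saturation constant are hypothetical choices.

```python
# Canonical divisive normalization (Carandini-Heeger form): each unit's
# driven response is divided by a weighted sum of the population activity
# plus a stabilizing constant: R_i = d_i^n / (sigma^n + sum_j w_ij * d_j^n).
import numpy as np

def divisive_normalization(drive, weights, sigma=1.0, n=2.0):
    d = drive ** n
    return d / (sigma ** n + weights @ d)

drive = np.array([4.0, 2.0, 1.0, 3.0])  # linear filter outputs of 4 units
w = np.full((4, 4), 0.25)               # uniform normalization pool
print(divisive_normalization(drive, w))
```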
Pacific Dictates Droughts and Drenchings
2004-01-30
The latest remote sensing data from NASA's Jason satellite show that the equatorial Pacific sea surface levels are higher, indicating warmer sea surface temperatures in the central and west Pacific Ocean. This pattern has the appearance of La Niña rather than El Niño. This contrasts with the Bering Sea, Gulf of Alaska and U.S. West Coast where lower-than-normal sea surface levels and cool ocean temperatures continue (indicated by blue and purple areas). The image above is a global map of sea surface height, accurate to within 30 millimeters. The image represents data collected and composited over a 10-day period, ending on Jan 23, 2004. The height of the water relates to the temperature of the water. As the ocean warms, its level rises; and as it cools, its level falls. Yellow and red areas indicate where the waters are relatively warmer and have expanded above sea level, green indicates near normal sea level, and blue and purple areas show where the waters are relatively colder and the surface is lower than sea level. The blue areas are between 5 and 13 centimeters (2 and 5 inches) below normal, whereas the purple areas range from 14 to 18 centimeters (6 to 7 inches) below normal. http://photojournal.jpl.nasa.gov/catalog/PIA05071
Evolution of the Contact Area with Normal Load for Rough Surfaces: from Atomic to Macroscopic Scales
NASA Astrophysics Data System (ADS)
Huang, Shiping
2017-11-01
The evolution of the contact area with normal load for rough surfaces has great fundamental and practical importance, ranging from earthquake dynamics to machine wear. This work bridges the gap between the atomic scale and the macroscopic scale for normal contact behavior. The real contact area, which is formed by a large ensemble of discrete contacts (clusters), is shown to be much smaller than the apparent surface area. The distribution of the discrete contact clusters and the interaction between them are key to revealing the mechanism of the contacting solids. To this end, Green's function molecular dynamics (GFMD) is used to study both how the contact clusters evolve from the atomic scale to the macroscopic scale and how the clusters interact. It is found that the interaction between clusters has a strong effect on their formation. The formation and distribution of the contact clusters are far more complicated than predicted by the asperity model, and ignoring the interaction between clusters leads to overestimation of the contact force. In real contact, contact clusters are smaller and more discrete owing to the interaction between the asperities. Understanding the exact dependence of the contact area on the normal load is essential to subsequent research on friction.
Tabard-Fougère, Anne; Rose-Dulcina, Kevin; Pittet, Vincent; Dayer, Romain; Vuillerme, Nicolas; Armand, Stéphane
2018-02-01
Electromyography (EMG) is an important parameter in Clinical Gait Analysis (CGA) and is generally interpreted with the timing of activation. EMG amplitude comparisons between individuals, muscles or days need normalization, and there is no consensus on existing methods. The gold standard, maximum voluntary isometric contraction (MVIC), is not adapted to pathological populations because patients are often unable to perform an MVIC. A normalization method inspired by isometric grade 3 of manual muscle testing (isoMMT3), which is the ability of a muscle to maintain a position against gravity, could be an interesting alternative. The aim of this study was to evaluate the within- and between-day reliability of the isoMMT3 EMG normalization method during gait compared with the conventional MVIC method. Lower-limb muscle EMG (gluteus medius, rectus femoris, tibialis anterior, semitendinosus) was recorded bilaterally in nine healthy participants (five males, aged 29.7 ± 6.2 years, BMI 22.7 ± 3.3 kg m^-2), giving a total of 18 independent legs. Three repeated measurements of the isoMMT3 and MVIC exercises were performed with EMG recording. EMG amplitude of the muscles during gait was normalized by these two methods. This protocol was repeated one week later. Within- and between-day reliability of the normalization tasks was similar for the isoMMT3 and MVIC methods. Within- and between-day reliability of gait EMG normalized by isoMMT3 was higher than with MVIC normalization. These results indicate that EMG normalization using isoMMT3 is a reliable method that needs no special equipment and will support CGA interpretation. The next step will be to evaluate this method in pathological populations. Copyright © 2017 Elsevier B.V. All rights reserved.
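Whichever reference task is chosen, the amplitude normalization itself is a simple ratio; the sketch below uses hypothetical envelope values and an isoMMT3 reference amplitude.

```python
# Amplitude normalization of a gait EMG envelope against a reference task:
# normalized EMG (%) = 100 * gait envelope / reference amplitude.
import numpy as np

emg_gait = np.array([0.05, 0.12, 0.30, 0.22, 0.08])  # mV, processed envelope
ref_isommt3 = 0.40                                    # mV, mean isoMMT3 amplitude

emg_normalized = 100.0 * emg_gait / ref_isommt3       # % of reference task
print(emg_normalized)
```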
Microscopic fluorescence spectral analysis of basal cell carcinomas
NASA Astrophysics Data System (ADS)
He, Qingli; Lui, Harvey; Zloty, David; Cowan, Bryce; Warshawski, Larry; McLean, David I.; Zeng, Haishan
2007-05-01
Background and Objectives. Laser-induced autofluorescence (LIAF) is a promising tool for cancer diagnosis. This method is based on the differences in autofluorescence spectra between normal and cancerous tissues, but the underlying mechanisms are not well understood. The objective of this research is to study the microscopic origins and intrinsic fluorescence properties of basal cell carcinoma (BCC) for a better understanding of the mechanism of in vivo fluorescence detection and margin delineation of BCCs in patients. A home-made micro-spectrophotometer (MSP) system was used to image the fluorophore distribution and to measure the fluorescence spectra of various microscopic structures and regions on frozen tissue sections. Materials and Methods. BCC tissue samples were obtained from 14 patients undergoing surgical resection. After surgical removal, each tissue sample was immediately embedded in OCT medium and snap-frozen in liquid nitrogen. The frozen tissue block was then cut into 16-μm-thick sections using a cryostat microtome and placed on microscope glass slides. The sections for the fluorescence study were kept unstained and unfixed, and then analyzed by the MSP system. Adjacent tissue sections were H&E stained for histopathological examination and also served to help identify microstructures on the adjacent unstained sections. The MSP system has all the functions of a conventional microscope, plus the ability to perform spectral analysis on selected micro-areas of a microscopic sample. For tissue fluorescence analysis, 442-nm He-Cd laser light is used to illuminate and excite the unstained tissue sections. A 473-nm long-pass filter was inserted behind the microscope objective to block the transmitted laser light while passing the longer-wavelength fluorescence signal. The fluorescence image of the sample can be viewed through the eyepieces and recorded by a CCD camera. An optical fiber is mounted onto the image plane of the photograph port of the microscope to collect light from a specific micro-area of the sample. The collected light is transmitted via the fiber to a dispersive-type CCD spectrometer for spectral analysis. Results. The measurements showed significant spectral differences between normal and cancerous tissues. For normal tissue regions, the spectral results agreed with our previous findings on the autofluorescence of normal skin sections. For the cancerous regions, the epidermis showed a very weak fluorescence signal, while the stratum corneum exhibited fluorescence emission peaking at about 510 nm. In the dermis, the basal cell island and a band of surrounding areas showed a very weak fluorescence signal, while the distal dermis above and below the basal cell island showed a greater fluorescence signal but with different spectral shapes. The very weak autofluorescence from the basal cell island and its surrounding area may be attributed to their degenerative properties, which limit the production of collagen. Conclusions. The microscopic results explain well the in vivo fluorescence properties of BCC lesions, namely their decreased fluorescence intensity compared with the surrounding normal skin. The intrinsic spectra of various microstructures and the microscopic fluorescence images (corresponding fluorophore distribution in tissue) obtained in this study will be used for further theoretical modeling of in vivo fluorescence spectroscopy and imaging of skin cancers.
Wright, M J; Bishop, D T; Jackson, R C; Abernethy, B
2011-08-18
Badminton players of varying skill levels viewed normal and point-light video clips of opponents striking the shuttle towards the viewer; their task was to predict in which quadrant of the court the shuttle would land. In a whole-brain fMRI analysis we identified bilateral cortical networks sensitive to the anticipation task relative to control stimuli. This network is more extensive and localised than previously reported. Voxel clusters responding more strongly in experts than novices were associated with all task-sensitive areas, whereas voxels responding more strongly in novices were found outside these areas. Task-sensitive areas for normal and point-light video were very similar, whereas early visual areas responded differentially, indicating the primacy of kinematic information for sport-related anticipation. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Liang, Winnie S.; Dunckley, Travis; Beach, Thomas G.; Grover, Andrew; Mastroeni, Diego; Walker, Douglas G.; Caselli, Richard J.; Kukull, Walter A.; McKeel, Daniel; Morris, John C.; Hulette, Christine; Schmechel, Donald; Alexander, Gene E.; Reiman, Eric M.; Rogers, Joseph; Stephan, Dietrich A.
2008-01-01
In this article, we have characterized and compared gene expression profiles from laser capture microdissected neurons in six functionally and anatomically distinct regions from clinically and histopathologically normal aged human brains. These regions, which are also known to be differentially vulnerable to the histopathological and metabolic features of Alzheimer’s disease (AD), include the entorhinal cortex and hippocampus (limbic and paralimbic areas vulnerable to early neurofibrillary tangle pathology in AD), posterior cingulate cortex (a paralimbic area vulnerable to early metabolic abnormalities in AD), temporal and prefrontal cortex (unimodal and heteromodal sensory association areas vulnerable to early neuritic plaque pathology in AD), and primary visual cortex (a primary sensory area relatively spared in early AD). These neuronal profiles will provide valuable reference information for future studies of the brain, in normal aging, AD and other neurological and psychiatric disorders. PMID:17077275
Analysis and diagnosis of basal cell carcinoma (BCC) via infrared imaging
NASA Astrophysics Data System (ADS)
Flores-Sahagun, J. H.; Vargas, J. V. C.; Mulinari-Brenner, F. A.
2011-09-01
In this work, a structured methodology is proposed and tested through infrared imaging temperature measurements of a healthy control group, to establish expected normality ranges, and of basal cell carcinoma patients (a type of skin cancer) previously diagnosed through biopsies of the affected regions. A conjugated gradients method is proposed to compare measured dimensionless temperature difference values (Δθ) between two symmetric regions of the patient's body; the method takes into account the skin, the surrounding ambient and the individual core temperatures, so that the interpretation of results for different individuals becomes simple and nonsubjective. The range of normal temperatures in different regions of the body for seven healthy individuals was determined and, admitting that human skin exhibits a unimodal normal distribution, the normal range for each region was considered to be the mean dimensionless temperature difference plus or minus twice the standard deviation of the measurements (Δθ ± 2σ), in order to represent 95% of the population. Eleven patients with basal cell carcinoma previously diagnosed through biopsies were examined with the method, which was capable of detecting skin abnormalities in all cases. The conjugated gradients method was therefore considered effective in the identification of basal cell carcinoma through infrared imaging, even with the use of a low-optical-resolution camera (160 × 120 pixels) with a thermal resolution of 0.1 °C. The method could also be used to scan a larger area around the lesion in order to detect the presence of other lesions not yet perceptible in the clinical exam. However, it is necessary that a mesh-like mapping of temperature differences of healthy human body skin be produced, so that the patient's Δθ can be compared with the exact region of such a mapping, possibly enabling a more effective diagnosis. Finally, the infrared image analyzed through the conjugated gradients method could be useful in defining a better safety margin in surgery for the removal of the lesion, both minimizing aesthetic damage to the patient and possibly avoiding basal cell carcinoma recurrence.
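A minimal sketch of the Δθ ± 2σ normality band described above, with hypothetical dimensionless temperature differences for a single body region.

```python
# Flag a patient's dimensionless temperature difference as abnormal when it
# falls outside the healthy-group mean +/- 2 standard deviations (~95% of a
# normally distributed population). All values are illustrative.
import numpy as np

healthy_dtheta = np.array([0.010, 0.014, 0.008, 0.012, 0.011, 0.009, 0.013])
mean, sd = healthy_dtheta.mean(), healthy_dtheta.std(ddof=1)
low, high = mean - 2 * sd, mean + 2 * sd

patient_dtheta = 0.031
print("abnormal" if not (low <= patient_dtheta <= high) else "normal")
```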
[Primary culture of human normal epithelial cells].
Tang, Yu; Xu, Wenji; Guo, Wanbei; Xie, Ming; Fang, Huilong; Chen, Chen; Zhou, Jun
2017-11-28
Traditional methods for the primary culture of normal human epithelial cells suffer from low activity of the cultured cells, low culture success rates and complicated procedures. To solve these problems, researchers have studied the culture process of normal human primary epithelial cells extensively. In this paper, we mainly introduce methods used in the separation and purification of normal human epithelial cells, such as tissue separation, enzyme digestion, mechanical brushing, red blood cell lysis, and Percoll density gradient separation. We also review methods used in culture and subculture, including serum-free medium combined with low-mass-fraction serum culture, mouse tail collagen coating, and glass culture bottles combined with plastic culture dishes. The biological characteristics of normal human epithelial cells and the methods of immunocytochemical staining and trypan blue exclusion are described. Moreover, the factors affecting aseptic operation, the conditions of the extracellular environment during culture, the number of differential adhesion steps, and the selection and dosage of additives are summarized.
Bengtsson, Henrik; Hössjer, Ola
2006-03-01
Low-level processing and normalization of microarray data are among the most important steps in microarray analysis, and they have a profound impact on downstream analysis. Multiple methods have been suggested to date, but it is not clear which is the best. It is therefore important to study the different normalization methods in detail, and the nature of microarray data in general. A methodological study of affine models for gene expression data is carried out. The focus is on two-channel comparative studies, but the findings generalize to single- and multi-channel data as well. The discussion applies to spotted as well as in-situ synthesized microarray data. Existing normalization methods such as curve-fit ("lowess") normalization, parallel and perpendicular translation normalization, and quantile normalization, but also dye-swap normalization, are revisited in the light of the affine model, and their strengths and weaknesses are investigated in this context. As a direct result of this study, we propose a robust non-parametric multi-dimensional affine normalization method, which can be applied to any number of microarrays with any number of channels, either individually or all at once. A high-quality cDNA microarray data set with spike-in controls is used to demonstrate the power of the affine model and the proposed normalization method. We find that an affine model can explain non-linear intensity-dependent systematic effects in observed log-ratios. Affine normalization removes such artifacts for non-differentially expressed genes and assures that symmetry between negative and positive log-ratios is obtained, which is fundamental when identifying differentially expressed genes. In addition, affine normalization makes the empirical distributions in different channels more equal, which is the purpose of quantile normalization, and may also explain why dye-swap normalization works or fails. All methods are made available in the aroma package, a platform-independent package for R.
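As a point of comparison for the affine approach, the sketch below implements plain quantile normalization, one of the methods revisited in the study; it ignores ties and is intended only as an illustration.

```python
# Quantile normalization: replace each array's sorted intensities with the
# mean sorted profile, so all empirical distributions become identical.
import numpy as np

def quantile_normalize(X):
    """X: genes x arrays matrix of intensities (ties not handled)."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)  # per-column ranks
    mean_sorted = np.sort(X, axis=0).mean(axis=1)      # reference distribution
    return mean_sorted[ranks]

X = np.array([[5.0, 4.0], [2.0, 1.0], [3.0, 4.5]])
print(quantile_normalize(X))
```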
Saura, Daniel; Dulgheru, Raluca; Caballero, Luis; Bernard, Anne; Kou, Seisyou; Gonjilashvili, Natalia; Athanassopoulos, George D; Barone, Daniele; Baroni, Monica; Cardim, Nuno; Hagendorff, Andreas; Hristova, Krasimira; Lopez, Teresa; de la Morena, Gonzalo; Popescu, Bogdan A; Penicka, Martin; Ozyigit, Tolga; Rodrigo Carbonero, Jose David; Van De Veire, Nico; Von Bardeleben, Ralph Stephan; Vinereanu, Dragos; Zamorano, Jose Luis; Gori, Ann-Stephan; Cosyns, Bernard; Donal, Erwan; Habib, Gilbert; Addetia, Karima; Lang, Roberto M; Badano, Luigi P; Lancellotti, Patrizio
2017-02-01
To report normal reference ranges for echocardiographic dimensions of the proximal aorta obtained in a large group of healthy volunteers recruited using state-of-the-art cardiac ultrasound equipment, considering different measurement conventions, and taking into account gender, age, and body size of individuals. A total of 704 (mean age: 46.0 ± 13.5 years) healthy volunteers (310 men and 394 women) were prospectively recruited from the collaborating institutions of the Normal Reference Ranges for Echocardiography (NORRE) study. A comprehensive echocardiographic examination was obtained in all subjects following pre-defined protocols. Aortic dimensions were obtained in systole and diastole, following both the leading-edge to leading-edge and the inner-edge to inner-edge conventions. Diameters were measured at four levels: ventricular-arterial junction, sinuses of Valsalva, sino-tubular junction, and proximal tubular ascending aorta. Measures of aortic root in the short-axis view following the orientation of each of the three sinuses were also performed. Men had significantly larger body sizes when compared with women, and showed larger aortic dimensions independently of the measurement method used. Dimensions indexed by height and body surface area are provided, and stratification by age ranges is also displayed. In multivariable analysis, the independent predictors of aortic dimensions were age, gender, and height or body surface area. The NORRE study provides normal values of proximal aorta dimensions as assessed by echocardiography. Reference ranges for different anatomical levels using different (i) measurement conventions and (ii) at different times of the cardiac cycle (i.e. mid-systole and end-diastole) are provided. Age, gender, and body size were significant determinants of aortic dimensions. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.
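Indexing by body size, as reported above, is a simple division; the sketch below assumes the Du Bois formula for body surface area, which may differ from the formula used in the NORRE study, and the measurements are illustrative.

```python
# Index an aortic diameter by body surface area (BSA); BSA here uses the
# Du Bois & Du Bois (1916) formula: 0.007184 * H(cm)^0.725 * W(kg)^0.425.
def bsa_du_bois(height_cm, weight_kg):
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

diameter_mm = 34.0              # e.g. sinuses of Valsalva diameter (made up)
bsa = bsa_du_bois(175.0, 70.0)  # ~1.85 m^2
print(f"indexed diameter = {diameter_mm / bsa:.1f} mm/m^2")
```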
Trache, Tudor; Stöbe, Stephan; Tarr, Adrienn; Pfeiffer, Dietrich; Hagendorff, Andreas
2014-12-01
Comparison of 3D and 2D speckle tracking performed on standard 2D and triplane 2D datasets of normal and pathological left ventricular (LV) wall-motion patterns, with a focus on the effect that 3D volume rate (3DVR), image quality and tracking artifacts have on the agreement between 2D and 3D speckle tracking. 37 patients with normal LV function and 18 patients with ischaemic wall-motion abnormalities underwent 2D and 3D echocardiography, followed by offline speckle tracking measurements. The values of 3D global, regional and segmental strain were compared with the standard 2D and triplane 2D strain values. Correlation analysis with the LV ejection fraction (LVEF) was also performed. The 3D and 2D global strain values correlated well in both normally and abnormally contracting hearts, though systematic differences between the two methods were observed. Of the 3D strain parameters, the area strain showed the best correlation with the LVEF. The numerical agreement of 3D and 2D analyses varied significantly with the volume rate and image quality of the 3D datasets. The highest correlation between 2D and 3D peak systolic strain values was found between 3D area strain and standard 2D longitudinal strain. Regional wall-motion abnormalities were similarly detected by 2D and 3D speckle tracking. 2D speckle tracking of triplane datasets showed similar results to that of conventional 2D datasets. 2D and 3D speckle tracking similarly detect normal and pathological wall-motion patterns. Limited image quality has a significant impact on the agreement between 3D and 2D numerical strain values.
Method for materials deposition by ablation transfer processing
Weiner, Kurt H.
1996-01-01
A method in which a thin layer of semiconducting, insulating, or metallic material is transferred by ablation from a source substrate, coated uniformly with a thin layer of said material, to a target substrate, where said material is desired, with a pulsed, high-intensity, patternable beam of energy. The use of a patternable beam allows area-selective ablation from the source substrate, resulting in additive deposition of the material onto the target substrate, which may require a very low percentage of the area to be covered. Since material is placed only where it is required, material waste can be minimized by reusing the source substrate for depositions on multiple target substrates. Because a pulsed, high-intensity energy source is used, the target substrate remains at low temperature during the process, and thus low-temperature, low-cost transparent glass or plastic can be used as the target substrate. The method can be carried out at atmospheric pressure and at room temperature, thus eliminating the vacuum systems normally required in materials deposition processes. This invention has particular application in the flat panel display industry, and also minimizes materials waste and the associated costs.
Stereology techniques in radiation biology
NASA Technical Reports Server (NTRS)
Kubinova, Lucie; Mao, XiaoWen; Janacek, Jiri; Archambeau, John O.; Nelson, G. A. (Principal Investigator)
2003-01-01
Clinicians involved in conventional radiation therapy are very concerned about the dose-response relationships of normal tissues. Before proceeding to new clinical protocols, radiation biologists involved with conformal proton therapy believe it is necessary to quantify the dose response and tolerance of the organs and tissues that will be irradiated. An important focus is on the vasculature. This presentation reviews the methodology and format of using confocal microscopy and stereological methods to quantify tissue parameters, cell number, tissue volume and surface area, and vessel length using the microvasculature as a model tissue. Stereological methods and their concepts are illustrated using an ongoing study of the dose response of the microvessels in proton-irradiated hemibrain. Methods for estimating the volume of the brain and the brain cortex, the total number of endothelial cells in cortical microvessels, the length of cortical microvessels, and the total surface area of cortical microvessel walls are presented step by step in a way understandable for readers with little mathematical background. It is shown that stereological techniques, based on a sound theoretical basis, are powerful and reliable and have been used successfully.
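The stereological estimators the presentation walks through can be stated compactly. These are the standard textbook forms (not reproduced from the article; symbols schematic): the Cavalieri volume estimate and the length density estimate used for microvessels.

```latex
% Cavalieri volume estimate from systematic sections a distance T apart,
% counting grid points with area a/p associated with each point:
\hat{V} = T \cdot \frac{a}{p} \cdot \sum_{i=1}^{n} P_i
% Length density from the number Q_A of vessel transects per unit
% section area, and total length within a reference volume V_ref:
\hat{L}_V = 2\,Q_A, \qquad \hat{L} = 2\,Q_A \cdot V_{\mathrm{ref}}
```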
Proximal and distal esophageal sensitivity is decreased in patients with Barrett’s esophagus
Krarup, Anne L; Olesen, Søren S; Funch-Jensen, Peter; Gregersen, Hans; Drewes, Asbjørn M
2011-01-01
AIM: To investigate sensations to multimodal pain stimulation in the metaplastic and normal parts of the esophagus in patients with Barrett’s esophagus (BE). METHODS: Fifteen patients with BE and 15 age-matched healthy volunteers were subjected to mechanical, thermal and electrical pain stimuli of the esophagus. Both the metaplastic part and the normal part (4 and 14 cm, respectively, above the esophago-gastric junction) were examined. At sensory thresholds the stimulation intensity, referred pain areas, and evoked brain potentials were recorded. RESULTS: Patients were hyposensitive to heat stimulation both in the metaplastic part [median stimulation time to reach the pain detection threshold: 15 (12-34) s vs 14 (6-23) s in controls; F = 4.5, P = 0.04] and the normal part of the esophagus [median 17 (6-32) s vs 13 (8-20) s in controls; F = 6.2, P = 0.02]. Furthermore, patients were hyposensitive in the metaplastic part of the esophagus to mechanical distension [median volume at moderate pain: 50 (20-50) mL vs 33 (13-50) mL in controls; F = 5.7, P = 0.02]. No indication of central nervous system abnormalities was present, as responses were comparable between groups to electrical pain stimuli in the metaplastic part [median current evoking moderate pain: 13 (6-26) mA vs 12 (9-24) mA in controls; F = 0.1, P = 0.7], and in the normal part of the esophagus [median current evoking moderate pain: 9 (6-16) mA, vs 11 (5-11) mA in controls; F = 3.4, P = 0.07]. Furthermore, no differences were seen for the referred pain areas (P-values all > 0.3) or latencies and amplitudes for the evoked brain potentials (P-values all > 0.1). CONCLUSION: Patients with BE are hyposensitive both in the metaplastic and normal part of esophagus likely as a result of abnormalities affecting peripheral nerve pathways. PMID:21274382
MODIS Snowcover in North America: A Comparison of Winter 2013/14 and 2014/15 to Median Condition
NASA Astrophysics Data System (ADS)
Trubilowicz, J. W.; Floyd, B. C.; D'Amore, D. V.; Bidlack, A.
2015-12-01
The winters from 2013 to 2015 had exceptionally low snow-packs in much of western North America. In particular, the winter of 2014/2015 had the lowest peak snow-water-equivalent (SWE) depths ever recorded in many areas of the Pacific Northwest. These low snow-packs have contributed to drought conditions from British Columbia to California. Along with the low SWE values, the snow covered area (SCA) of the previous two winters has been a significant departure from normal conditions. SCA is related to SWE, rain-on-snow events and the seasonal water supply, provides insulation for plant root systems from late season frost, and is an important factor in forest fire hazard, delaying the start of soil and fuel drying. Remote sensing can be a useful tool to monitor SCA over large regions, with the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments providing a temporal resolution (twice daily) and spatial resolution (500 m) suitable for creating detailed maps, even with high frequencies of cloud-covered days. While comparison of SWE at snow monitoring sites to historical values is a standard analysis, doing the same for SCA has been difficult due to the technical and logistical problems of processing the large amounts of spatial data required to determine a 'normal' annual SCA cycle. Through the use of new cloud-based computation methods from Google Earth Engine, we have calculated the monthly median (from 2002-2015) MODIS SCA, at a 500 m resolution, for all of the major Pacific-draining watersheds of North America. Determining the 'normal' SCA cycle of the past 13 years allowed us to compare the past two winters to the median SCA levels, showing which basins have seen the most significant departures from normal SCA levels. Results indicate more significant departures from normal in basins with significant maritime-influenced snow-packs.
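A minimal Earth Engine Python sketch of this kind of computation follows; the basin geometry and the snow threshold are placeholder assumptions, and the MOD10A1 collection ID reflects standard catalog naming rather than the authors' exact configuration.

```python
import ee
ee.Initialize()

# Hypothetical basin geometry; MOD10A1 is the standard MODIS daily snow product.
basin = ee.Geometry.Rectangle([-125.0, 45.0, -120.0, 50.0])

snow = (ee.ImageCollection('MODIS/006/MOD10A1')
        .select('NDSI_Snow_Cover')
        .filter(ee.Filter.calendarRange(1, 1, 'month'))   # e.g. January only
        .filterDate('2002-01-01', '2015-12-31'))

# Median January snow-cover image over 2002-2015, binarized to SCA
median_sca = snow.median().gte(10)   # NDSI_Snow_Cover >= 10 treated as snow

area = median_sca.multiply(ee.Image.pixelArea()).reduceRegion(
    reducer=ee.Reducer.sum(), geometry=basin, scale=500, maxPixels=1e13)
print(area.getInfo())                # snow-covered area in m^2
```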
Subcutaneous blood flow in psoriasis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klemp, P.
1985-03-01
The simultaneously recorded disappearance rates of 133Xe from subcutaneous adipose tissue in the crus were studied in 10 patients with psoriasis vulgaris, using atraumatic labeling of the tissue in lesional skin (LS) areas and symmetrical, nonlesional skin (NLS) areas. Control experiments were performed bilaterally in 10 younger, healthy subjects. The subcutaneous washout rate constant was significantly higher in LS, (0.79 ± 0.05) × 10⁻² min⁻¹, compared to the washout rate constant of NLS, (0.56 ± 0.07) × 10⁻² min⁻¹, or the washout rate constant in the normal subjects, (0.46 ± 0.17) × 10⁻² min⁻¹. The mean washout rate constant in NLS was 25% higher than the mean washout rate constant in the normal subjects; the difference was, however, not statistically significant. Differences in the washout rate constants might be due to an abnormal subcutaneous tissue-to-blood partition coefficient (lambda) in the LS, and therefore might not reflect real differences in the subcutaneous blood flow (SBF). The lambda for 133Xe was therefore measured, using a double isotope washout method (133Xe and [131I]antipyrine), in symmetrical sites of the lateral crus in LS and NLS of 10 patients with psoriasis vulgaris and in 10 legs of normal subjects. In LS the lambda was 4.52 ± 1.67 ml/g, which was not statistically different from that of NLS, 5.25 ± 2.19 ml/g, nor from that of normal subcutaneous tissue, 4.98 ± 1.04 ml/g. Calculations of the SBF using the obtained lambda values gave a significantly higher SBF in LS, 3.57 ± 0.23 ml/100 g/min, compared to SBF in the NLS, 2.94 ± 0.37 ml/100 g/min. There was no statistically significant difference between SBF in NLS and SBF in the normal subjects. The increased SBF in LS of psoriatics might be a phenomenon secondary to an increased heat loss in the lesional skin.
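The flow values follow from the standard inert-gas washout relation (a textbook formula, not quoted from the article), which ties the numbers above together:

```latex
% Inert-gas clearance: blood flow per 100 g tissue from the washout
% rate constant k and the tissue-to-blood partition coefficient \lambda
f = 100 \cdot \lambda \cdot k
% Check against the lesional-skin values above:
f = 100 \times 4.52~\mathrm{ml/g} \times 0.0079~\mathrm{min^{-1}}
  \approx 3.57~\mathrm{ml/100\,g/min}
```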
Regeneration of defective epithelial basement membrane and restoration of corneal transparency
Marino, Gustavo K.; Santhiago, Marcony R.; Santhanam, Abirami; Torricelli, Andre A. M.; Wilson, Steven E.
2018-01-01
PURPOSE To study regeneration of the normal ultrastructure of the epithelial basement membrane (EBM) in rabbit corneas that had -9D photorefractive keratectomy (PRK) and developed late haze (fibrosis) with restoration of transparency over one to four months after surgery, and in corneas that had incisional wounds. METHODS Twenty-four rabbits had one of their eyes included in one of the two procedure groups (-9D PRK or nearly full-thickness incisional wounds), with the opposite eye serving as an unwounded control. All corneas were evaluated with slit lamp photos, transmission electron microscopy and immunohistochemistry for the myofibroblast marker alpha-smooth muscle actin and collagen type III. RESULTS In the '-9D PRK' group, corneas at one month after surgery had dense corneal haze and no evidence of regenerated EBM ultrastructure. By two months after surgery, however, small areas of stromal clearing (lacunae) began to appear within the confluent opacity, and these corresponded to small islands of normally regenerated EBM detected within the larger excimer laser-ablated zone that otherwise showed no evidence of normal EBM. By four months after surgery, the EBM was fully regenerated and corneal transparency was completely restored in the ablated zone. In the 'incisional wound' group, two dense, linear corneal opacities were observed at one month after surgery and progressively faded by two and three months after surgery. The EBM ultrastructure was fully regenerated at the site of the incisions, including around epithelial plugs that extended into the stroma, by one month after surgery in all eyes. CONCLUSIONS In the rabbit model, spontaneous resolution of corneal fibrosis (haze) after high-correction PRK is triggered by regeneration of EBM with normal ultrastructure in the excimer laser-ablated zone. Conversely, incisional wounds heal in rabbit corneas without the development of myofibroblasts because the EBM regenerates normally by one month after surgery. PMID:28486725
NASA Astrophysics Data System (ADS)
Alshehhi, Rasha; Marpu, Prashanth Reddy
2017-04-01
Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate road from its background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: 1. Extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels, 2. Graph-based segmentation consisting of (i) Constructing a graph representation of the image based on initial segmentation and (ii) Hierarchical merging and splitting of image segments based on color and shape features, and 3. Post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare with other similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.
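A compact sketch of this enhance-then-segment pipeline, written with scikit-image and with every parameter value an illustrative assumption rather than the authors' setting, might look as follows:

```python
import numpy as np
from skimage import io, color, filters, segmentation, measure
from skimage.future import graph  # moved to skimage.graph in newer releases

img = io.imread('scene.png')      # hypothetical input tile
gray = color.rgb2gray(img)

# 1. Feature extraction: Gabor responses over orientations highlight elongated roads
responses = [filters.gabor(gray, frequency=0.15, theta=t)[0]
             for t in np.linspace(0, np.pi, 4, endpoint=False)]
road_feature = np.max(np.abs(responses), axis=0)

# 2. Graph-based segmentation: initial over-segmentation, then merging by mean color
labels = segmentation.felzenszwalb(img, scale=50, sigma=0.8, min_size=30)
rag = graph.rag_mean_color(img, labels)
merged = graph.cut_threshold(labels, rag, 30)

# 3. Post-processing: keep elongated segments as road candidates
candidates = [r.label for r in measure.regionprops(merged + 1)
              if r.eccentricity > 0.95]
```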
Wess, G; Mäurer, J; Simak, J; Hartmann, K
2010-01-01
M-mode is the echocardiographic gold standard to diagnose dilated cardiomyopathy (DCM) in dogs, whereas Simpson's method of discs (SMOD) is the preferred method to detect echocardiographic evidence of disease in humans. To establish reference values for SMOD and to compare them with M-mode measurements. Nine hundred and sixty-nine examinations of 471 Doberman Pinschers, using a prospective longitudinal study design. Reference values for SMOD were established using 75 healthy Doberman Pinschers >8 years old with <50 ventricular premature contractions (VPCs) in 24 hours. The ability of the new SMOD cut-off values, normalized to body surface area (BSA), for left ventricular end-diastolic volume (LVEDV/BSA >95 mL/m²) and end-systolic volume (LVESV/BSA >55 mL/m²) to detect echocardiographic changes in Doberman Pinschers with DCM was compared with currently recommended M-mode values. Dogs with elevated SMOD values but normal M-mode measurements were followed up using a prospective longitudinal study design. At the final examination 175 dogs were diagnosed with DCM according to both methods (M-mode and SMOD). At previous examinations, M-mode values were abnormal in only 142 examinations, whereas SMOD had already detected changes in all 175. Additionally, 19 of 154 dogs with >100 VPCs/24 hours and normal M-mode values had abnormal SMOD measurements. Six dogs with increased SMOD measurements remained healthy at several follow-up examinations (classified as false positive); in 24 dogs with increased SMOD measurements, no follow-up examinations were available (classified as unclear). SMOD measurements are superior to M-mode for detecting early echocardiographic changes in Dobermans with occult DCM. Copyright © 2010 by the American College of Veterinary Internal Medicine.
Automated Cell Detection and Morphometry on Growth Plate Images of Mouse Bone
Ascenzi, Maria-Grazia; Du, Xia; Harding, James I; Beylerian, Emily N; de Silva, Brian M; Gross, Ben J; Kastein, Hannah K; Wang, Weiguang; Lyons, Karen M; Schaeffer, Hayden
2014-01-01
Microscopy imaging of mouse growth plates is extensively used in biology to understand the effect of specific molecules on various stages of normal bone development and on bone disease. Until now, such image analysis has been conducted by manual detection. In fact, when existing automated detection techniques were applied, morphological variations across the growth plate, heterogeneity of image background color (including the faint presence of cells, chondrocytes, located deeper in tissue away from the image's plane of focus), and the lack of cell-specific features interfered with cell identification. We propose the first method of automated detection and morphometry applicable to images of cells in the growth plate of long bone. Through ad hoc sequential application of the Retinex method, anisotropic diffusion and thresholding, our new cell detection algorithm (CDA) addresses these challenges on bright-field microscopy images of mouse growth plates. Five parameters, chosen by the user according to image characteristics, regulate our CDA. Our results demonstrate the effectiveness of the proposed numerical method relative to manual methods. Our CDA confirms previously established results regarding the number, area, orientation, height and shape of chondrocytes in normal growth plates. Our CDA also confirms differences previously found between the genetically mutated mouse Smad1/5CKO and its control mouse on fluorescence images. The CDA aims to aid biomedical research by increasing the efficiency and consistency of data collection regarding the arrangement and characteristics of chondrocytes. Our results suggest that automated extraction of data from microscopy imaging of growth plates can assist in unlocking information on normal and pathological development, key to the underlying biological mechanisms of bone growth. PMID:25525552
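The Retinex, anisotropic diffusion, and thresholding sequence the authors describe can be sketched in a few lines. The version below is a generic illustration (exponential-conductance Perona-Malik diffusion, Gaussian-surround Retinex, Otsu threshold) with made-up parameter values; it is not the published CDA.

```python
import numpy as np
from skimage import filters

def perona_malik(u, n_iter=20, kappa=0.1, gamma=0.2):
    """Simple Perona-Malik anisotropic diffusion (exponential conductance)."""
    u = u.astype(float)
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u = u + gamma * sum(d * np.exp(-(d / kappa) ** 2)
                            for d in (dn, ds, de, dw))
    return u

def detect_cells(img, sigma=15):
    """Retinex-style illumination removal (log-domain high-pass),
    then diffusion, then Otsu thresholding; dark cells as foreground."""
    log_img = np.log1p(img.astype(float))
    reflectance = log_img - filters.gaussian(log_img, sigma=sigma)
    smooth = perona_malik(reflectance)
    return smooth < filters.threshold_otsu(smooth)
```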
Liu, Yu-Ying; Chen, Mei; Wollstein, Gadi; Duker, Jay S.; Fujimoto, James G.; Schuman, Joel S.; Rehg, James M.
2011-01-01
Purpose. To develop an automated method to identify the normal macula and three macular pathologies (macular hole [MH], macular edema [ME], and age-related macular degeneration [AMD]) from the fovea-centered cross sections in three-dimensional (3D) spectral-domain optical coherence tomography (SD-OCT) images. Methods. A sample of SD-OCT macular scans (macular cube 200 × 200 or 512 × 128 scan protocol; Cirrus HD-OCT; Carl Zeiss Meditec, Inc., Dublin, CA) was obtained from healthy subjects and subjects with MH, ME, and/or AMD (dataset for development: 326 scans from 136 subjects [193 eyes], and dataset for testing: 131 scans from 37 subjects [58 eyes]). A fovea-centered cross-sectional slice for each of the SD-OCT images was encoded using spatially distributed multiscale texture and shape features. Three ophthalmologists labeled each fovea-centered slice independently, and the majority opinion for each pathology was used as the ground truth. Machine learning algorithms were used to identify the discriminative features automatically. Two-class support vector machine classifiers were trained to identify the presence of normal macula and each of the three pathologies separately. The area under the receiver operating characteristic curve (AUC) was calculated to assess the performance. Results. The cross-validation AUC result on the development dataset was 0.976, 0.931, 0.939, and 0.938, and the AUC result on the holdout testing set was 0.978, 0.969, 0.941, and 0.975, for identifying normal macula, MH, ME, and AMD, respectively. Conclusions. The proposed automated data-driven method successfully identified various macular pathologies (all AUC > 0.94). This method may effectively identify the discriminative features without relying on a potentially error-prone segmentation module. PMID:21911579
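The per-pathology classification step reduces to training a two-class SVM on slice features and scoring it by AUC. A generic scikit-learn sketch follows (random placeholder data; the real inputs are the multiscale texture and shape descriptors described above):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix and labels (e.g., ME present / absent by majority vote)
rng = np.random.default_rng(0)
X = rng.normal(size=(326, 60))
y = rng.integers(0, 2, 326)

clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
print(cross_val_score(clf, X, y, cv=10, scoring='roc_auc').mean())
```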
Rapin, Nicolas; Bagger, Frederik Otzen; Jendholm, Johan; Mora-Jensen, Helena; Krogh, Anders; Kohlmann, Alexander; Thiede, Christian; Borregaard, Niels; Bullinger, Lars; Winther, Ole; Theilgaard-Mönch, Kim; Porse, Bo T
2014-02-06
Gene expression profiling has been used extensively to characterize cancer, identify novel subtypes, and improve patient stratification. However, it has largely failed to identify transcriptional programs that differ between cancer and corresponding normal cells and has not been efficient in identifying expression changes fundamental to disease etiology. Here we present a method that facilitates the comparison of any cancer sample to its nearest normal cellular counterpart, using acute myeloid leukemia (AML) as a model. We first generated a gene expression-based landscape of the normal hematopoietic hierarchy, using expression profiles from normal stem/progenitor cells, and next mapped the AML patient samples to this landscape. This allowed us to identify the closest normal counterpart of individual AML samples and determine gene expression changes between cancer and normal. We find the cancer vs normal method (CvN method) to be superior to conventional methods in stratifying AML patients with aberrant karyotype and in identifying common aberrant transcriptional programs with potential importance for AML etiology. Moreover, the CvN method uncovered a novel poor-outcome subtype of normal-karyotype AML, which allowed for the generation of a highly prognostic survival signature. Collectively, our CvN method holds great potential as a tool for the analysis of gene expression profiles of cancer patients.
2014-01-01
Background Previously, we evaluated a minimally invasive epidermal lipid sampling method called skin scrub, which achieved reproducible and comparable results to skin scraping. The present study aimed at investigating regional variations in canine epidermal lipid composition using the skin scrub technique and its suitability for collecting skin lipids in dogs suffering from certain skin diseases. Eight different body sites (5 highly and 3 lowly predisposed for atopic lesions) were sampled by skin scrub in 8 control dogs with normal skin. Additionally, lesional and non-lesional skin was sampled from 12 atopic dogs and 4 dogs with other skin diseases by skin scrub. Lipid fractions were separated by high performance thin layer chromatography and analysed densitometrically. Results No significant differences in total lipid content were found among the body sites tested in the control dogs. However, the pinna, lip and caudal back contained significantly lower concentrations of ceramides, whereas the palmar metacarpus and the axillary region contained significantly higher amounts of ceramides and cholesterol than most other body sites. The amount of total lipids and ceramides including all ceramide classes were significantly lower in both lesional and non-lesional skin of atopic dogs compared to normal skin, with the reduction being more pronounced in lesional skin. The sampling by skin scrub was relatively painless and caused only slight erythema at the sampled areas but no oedema. Histological examinations of skin biopsies at 2 skin scrubbed areas revealed a potential lipid extraction from the transition zone between stratum corneum and granulosum. Conclusions The present study revealed regional variations in the epidermal lipid and ceramide composition in dogs without skin abnormalities but no connection between lipid composition and predilection sites for canine atopic dermatitis lesions. The skin scrub technique proved to be a practicable sampling method for canine epidermal lipids, revealed satisfying results regarding alterations of skin lipid composition in canine atopic dermatitis and might be suitable for epidermal lipid investigations of further canine skin diseases. Although the ceramide composition should be unaffected by the deeper lipid sampling of skin scrub compared to other sampling methods, further studies are required to determine methodological differences. PMID:25012966
Characteristics of the uridine uptake system in normal and polyoma transformed hamster embryo cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lemkin, J.A.
1973-01-01
The lability of the uridine uptake system in the normal and polyoma transformed hamster embryo fibroblast was studied. The major areas investigated were: the kinetic parameters of uridine transport, a comparison of changes in cellular ATP content caused by factors which modulate uridine uptake, and a comparison of the qualitative and quantitative effects of the same modulating agent on uridine transport, cell growth, and cellular ATP content. Uridine uptake into cells in vitro was examined using tritiated uridine as a tracer to measure the amount of uridine incorporated into the acid-soluble and acid-insoluble fractions of the cells studied. The ATP content of the cells was determined by the firefly bioluminescence method. It was found that the K_t for uridine uptake into the normal hamster embryo cell and two polyoma transformed hamster embryo cell lines was identical. However, the V_max for uridine transport was higher in both polyoma transformed cell lines. Furthermore, the K_t in both the normal and transformed cell cultured in serum-less or serum-containing media was identical, although the V_max was higher in the serum-stimulated cell in both the normal and transformed cell. Stimulation of the normal cell with adenosine produced a different K_t for uridine transport. Preliminary investigations have demonstrated that treatment of the polyoma transformed cell with adenosine also induces a different K_t (not shown). The K_i for phloretin inhibition in serum-less and serum-stimulated normal and polyoma transformed cells was found to be identical in each case.
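The kinetic reasoning rests on the standard carrier-mediated transport equations (textbook forms, not quoted from the thesis): an identical K_t with elevated V_max points to a larger number of carriers rather than an altered carrier, and an unchanged competitive K_i supports the same interpretation.

```latex
% Carrier-mediated uptake (Michaelis-Menten form):
v = \frac{V_{\max}\,[S]}{K_t + [S]}
% Competitive inhibition by phloretin at concentration [I]:
v = \frac{V_{\max}\,[S]}{K_t\left(1 + [I]/K_i\right) + [S]}
```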
Jonnagaddala, Jitendra; Jue, Toni Rose; Chang, Nai-Wen; Dai, Hong-Jie
2016-01-01
The rapidly increasing biomedical literature calls for an automatic approach to the recognition and normalization of disease mentions, in order to increase the precision and effectiveness of disease-based information retrieval. A variety of methods have been proposed to deal with the problem of disease named entity recognition and normalization. Among all the proposed methods, conditional random fields (CRFs) and the dictionary lookup method are widely used for named entity recognition and normalization, respectively. We herein developed a CRF-based model to allow automated recognition of disease mentions, and studied the effect of various techniques in improving the normalization results based on the dictionary lookup approach. The dataset from the BioCreative V CDR track was used to report the performance of the developed normalization methods and compare them with other existing dictionary lookup based normalization methods. The best configuration achieved an F-measure of 0.77 for disease normalization, which outperformed the best dictionary lookup based baseline method studied in this work by an F-measure of 0.13. Database URL: https://github.com/TCRNBioinformatics/DiseaseExtract. © The Author(s) 2016. Published by Oxford University Press.
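As an illustration of the recognition-then-normalization split, here is a toy sketch using the third-party sklearn-crfsuite package for the CRF tagger and a plain dictionary for normalization; the features, hyperparameters, and the one-entry dictionary are all placeholders, not the system described above.

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

def token_features(sent, i):
    w = sent[i]
    return {'lower': w.lower(), 'is_title': w.istitle(), 'suffix3': w[-3:],
            'prev': sent[i - 1].lower() if i > 0 else '<s>'}

# Toy training data in BIO format (real systems train on corpora like BC5CDR)
sents = [['Parkinson', 'disease', 'is', 'progressive']]
tags = [['B-Disease', 'I-Disease', 'O', 'O']]
X = [[token_features(s, i) for i in range(len(s))] for s in sents]

crf = sklearn_crfsuite.CRF(algorithm='lbfgs', c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, tags)

# Normalization by dictionary lookup on the lower-cased mention (toy entry)
mesh_lookup = {'parkinson disease': 'MESH:D010300'}
print(crf.predict(X), mesh_lookup.get('Parkinson disease'.lower()))
```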
Volz-Köster, S; Volz, J; Kiefer, A; Biesalski, H K
2000-01-01
The appearance of the cervical mucosa is regulated by different factors including retinoic acid. Hormone-dependent alteration of the cervix uteri mucosa is accompanied by a decrease or increase of cytoplasmic retinoic-acid-binding protein (CRABP). To elucidate whether this hormone-dependent alteration of CRABP is preserved in the case of neoplasms of the cervix uteri, we measured the levels of total and apo-CRABP in normal and neoplastically transformed cervical cells. In a prospective pilot study, standardised biopsies of normal epithelium and cervical intra-epithelial neoplasia grade 3 (CIN III) were taken from 24 patients. A newly developed method was used to determine the intra-epithelial levels of apo- and total CRABP. The concentration of total CRABP was very significantly lower in the CIN III areas than in normal squamous epithelium (normal: 3.66 ± 1.46 pmol/mg wet weight ± SD; CIN III: 1.43 ± 0.59 pmol/mg; P < 0.01). In addition, apo-CRABP was lower in neoplastic than in normal epithelium (Wilcoxon test for paired non-parametric values: P < 0.05; mean for all patients: normal: 1.65 ± 0.82 pmol/mg; CIN III: 1.14 ± 0.23 pmol/mg). From our results we conclude that, in neoplastically transformed cells, the hormone-dependent CRABP cycle is interrupted. Whether this has consequences for the further development of the neoplastic cells remains to be elucidated.
Automated quantification of myocardial perfusion SPECT using simplified normal limits.
Slomka, Piotr J; Nishina, Hidetaka; Berman, Daniel S; Akincioglu, Cigdem; Abidov, Aiden; Friedman, John D; Hayes, Sean W; Germano, Guido
2005-01-01
To simplify development of normal limits for myocardial perfusion SPECT (MPS), we implemented a quantification scheme in which normal limits are derived without visual scoring of abnormal scans or optimization of regional thresholds. Normal limits were derived from same-day Tl-201 rest/Tc-99m-sestamibi stress scans of male (n = 40) and female (n = 40) low-likelihood patients. Defect extent, total perfusion deficit (TPD), and regional perfusion extents were derived by comparison to normal limits in polar-map coordinates. MPS scans from 256 consecutive patients without known coronary artery disease, who underwent coronary angiography, were analyzed. The new method of quantification (TPD) was compared with our previously developed quantification system and with visual scoring. The receiver operating characteristic area under the curve for detection of 50% or greater stenoses by TPD (0.88 ± 0.02) was higher than by visual scoring (0.83 ± 0.03) (P = .039) or standard quantification (0.82 ± 0.03) (P = .004). For detection of 70% or greater stenoses, it was higher for TPD (0.89 ± 0.02) than for standard quantification (0.85 ± 0.02) (P = .014). Sensitivity and specificity were 93% and 79%, respectively, for TPD; 81% and 85%, respectively, for visual scoring; and 80% and 73%, respectively, for standard quantification. The use of stress mode-specific normal limits did not improve performance. Simplified quantification achieves performance better than or equivalent to visual scoring or quantification based on per-segment visual optimization of abnormality thresholds.
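Conceptually, a TPD-style score combines defect extent and severity relative to the normal-limit polar map in one number. The sketch below is one plausible formulation in NumPy; the thresholds and scaling are invented for illustration, and the published quantification differs in detail.

```python
import numpy as np

def tpd_score(polar, normal_mean, normal_sd, z_thresh=3.0):
    """Toy total-perfusion-deficit style index on a sampled polar map."""
    dev = (normal_mean - polar) / normal_sd      # deviations below normal mean
    abnormal = dev > z_thresh                    # hypoperfused samples
    extent = 100.0 * abnormal.mean()             # percent of map abnormal
    if not abnormal.any():
        return extent, 0.0
    severity = np.clip(dev[abnormal] - z_thresh, 0.0, 10.0).mean()
    return extent, extent * severity / 10.0      # extent- and severity-weighted
```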
Distribution of endogenous albumin in the glomerular wall of proteinuric patients.
Russo, P. A.; Bendayan, M.
1990-01-01
Glomerular proteinuria seems to be related, in part, to loss or impairment of the normal barrier function of the glomerular capillary wall. To investigate the functional properties of this barrier, endogenous albumin was revealed in the glomerular wall of proteinuric patients and compared with a nonproteinuric control by immunoelectron microscopy using the protein A-gold method. In the control biopsy, peaks of albumin accumulation were noted in the subendothelial area and in the inner portion of the lamina densa, with gradual tapering of the distribution toward the epithelial side of the basement membrane. The urinary space and epithelial cells were weakly labeled. In tissues from proteinuric patients, albumin was distributed throughout the entire width of the glomerular basement membrane, although the pattern of accumulation varied between patients. The urinary space showed significant labeling associated with some flocculent material. Mesangial areas were heavily labeled in tissues from both control and proteinuric patients. In the latter, lysosomes in glomerular and tubular epithelial cells also accumulated albumin, which is evidence of reabsorption. These results reveal the existence, in normal conditions, of a barrier located in the subendothelial area of the glomerular basement membrane, the loss of which, as in the idiopathic nephrotic syndrome, leads to diffuse distribution of albumin in the glomerular capillary wall. PMID:2260634
Modelling of PM10 concentration for industrialized area in Malaysia: A case study in Shah Alam
NASA Astrophysics Data System (ADS)
N, Norazian Mohamed; Abdullah, M. M. A.; Tan, Cheng-yau; Ramli, N. A.; Yahaya, A. S.; Fitri, N. F. M. Y.
In Malaysia, the predominant air pollutants are suspended particulate matter (SPM) and nitrogen dioxide (NO2). This research focuses on PM10, which may harm human health as well as the environment. Six distributions, namely Weibull, log-normal, gamma, Rayleigh, Gumbel and Frechet, were chosen to model the PM10 observations at the chosen industrial area, i.e. Shah Alam. One-year periods of hourly average data for 2006 and 2007 were used for this research. For parameter estimation, the method of maximum likelihood estimation (MLE) was selected. Four performance indicators, namely mean absolute error (MAE), root mean squared error (RMSE), coefficient of determination (R2) and prediction accuracy (PA), were applied to determine the goodness-of-fit of the distributions. The distribution that best fits the PM10 observations in Shah Alam was found to be the log-normal distribution. The probabilities of exceedance concentrations were calculated, and the return period for the coming year was predicted from the cumulative density function (cdf) of the best-fit distribution. For the 2006 data, Shah Alam was predicted to exceed 150 μg/m3 for 5.9 days in 2007, with a return period of one occurrence per 62 days. For 2007, the studied area does not exceed the MAAQG of 150 μg/m3.
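The fit-then-extrapolate step translates directly into SciPy. In this sketch the data file is hypothetical and the location parameter is fixed at zero, which may or may not match the authors' parameterization.

```python
import numpy as np
from scipy import stats

pm10 = np.loadtxt('pm10_hourly_2006.txt')   # hypothetical hourly data file

# MLE fit of a log-normal distribution (location fixed at zero)
shape, loc, scale = stats.lognorm.fit(pm10, floc=0)

# Probability of an hourly value exceeding the 150 ug/m3 guideline
p_exceed = stats.lognorm.sf(150, shape, loc=loc, scale=scale)

days_per_year = 365 * p_exceed               # expected exceedance-day equivalents
return_period_days = 1.0 / p_exceed / 24     # days between exceedance hours
print(p_exceed, days_per_year, return_period_days)
```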
Schmidt, Carl R; Shires, Peter; Mootoo, Mary
2012-02-01
Irreversible electroporation (IRE) is a largely non-thermal method for the ablation of solid tumours. The ability of ultrasound (US) to measure the size of the IRE ablation zone was studied in a porcine liver model. Three normal pig livers were treated in vivo with a total of 22 ablations using IRE. Ultrasound was used within minutes after ablation and just prior to liver harvest at either 6 h or 24 h after the procedure. The area of cellular necrosis was measured after staining with nitroblue tetrazolium and the percentage of cell death determined by histomorphometry. Visible changes in the hepatic parenchyma were apparent by US after all 22 ablations using IRE. The mean maximum diameter of the ablation zone measured by US during the procedure was 20.1 ± 2.7 mm. This compared with a mean cellular necrosis zone maximum diameter of 20.3 ± 2.9 mm as measured histologically. The mean percentage of dead cells within the ablation zone was 77% at 6 h and 98% at 24 h after ablation. Ultrasound is a useful modality for measuring the ablation zone within minutes of applying IRE to normal liver tissue. The area of parenchymal change measured by US correlates with the area of cellular necrosis. © 2011 International Hepato-Pancreato-Biliary Association.
Type II thyroplasty changes cortical activation in patients with spasmodic dysphonia.
Tateya, Ichiro; Omori, Koichi; Kojima, Hisayoshi; Naito, Yasushi; Hirano, Shigeru; Yamashita, Masaru; Ito, Juichi
2015-04-01
Spasmodic dysphonia (SD) is a complex neurological communication disorder characterized by a choked, strain-strangled vocal quality with voice stoppages in phonation. Its symptoms are exacerbated by situations where communication failures are anticipated, and reduced when talking with animals or small children. Symptoms are also reduced following selected forms of treatment. It is reasonable to assume that a surgical alteration reducing symptoms would also alter brain activity, though such a phenomenon has not been documented. The objective of this study is to reveal the brain activity of SD patients before and after surgical treatment. We performed lateralization thyroplasties on three adductor SD patients and compared pre- and post-operative positron emission tomography (PET) recordings made during vocalization. Pre-operatively, the caudal supplementary motor area (SMA), bilateral auditory association areas, and thalamus were activated while reading aloud. Such activity was not observed in normal subjects. Type II thyroplasty was performed according to Isshiki's method, and the strained voice was significantly reduced or eliminated in all three patients. Post-operative PET showed a normal brain activation pattern, with a significant decrease in caudal SMA, bilateral auditory association areas and thalamus, and a significant increase in rostral SMA compared with pre-operative recordings. This is the first report showing that treatment of a peripheral organ, which reverses voice symptoms, also reverses dysfunctional patterns of the central nervous system in patients with SD. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
TOPEX/El Nino Watch - Warm Water Pool is Increasing, Nov. 10, 1997
NASA Technical Reports Server (NTRS)
1997-01-01
This image of the Pacific Ocean was produced using sea surface height measurements taken by the U.S./French TOPEX/Poseidon satellite. The image shows sea surface height relative to normal ocean conditions on Nov. 10, 1997. The volume of extra warm surface water (shown in white) in the core of the El Nino continues to increase, especially in the area between 15 degrees south latitude and 15 degrees north latitude in the eastern Pacific Ocean. The area of low sea level (shown in purple) has decreased somewhat from late October. The white and red areas indicate unusual patterns of heat storage; in the white areas, the sea surface is between 14 centimeters and 32 cm (6 inches to 13 inches) above normal; in the red areas, it is about 10 centimeters (4 inches) above normal. The surface area covered by the warm water mass is about one-and-one-half times the size of the continental United States. The added amount of oceanic warm water near the Americas, with a temperature between 21 to 30 degrees Celsius (70 to 85 degrees Fahrenheit), is about 30 times the volume of water in all the U.S. Great Lakes combined. The green areas indicate normal conditions, while purple (the western Pacific) means at least 18 centimeters (7 inches) below normal sea level.
The El Nino phenomenon is thought to be triggered when the steady westward blowing trade winds weaken and even reverse direction. This change in the winds allows a large mass of warm water (the red and white areas) that is normally located near Australia to move eastward along the equator until it reaches the coast of South America. The displacement of so much warm water affects evaporation, where rain clouds form and, consequently, alters the typical atmospheric jet stream patterns around the world. Using these global data, limited regional measurements from buoys and ships, and a forecasting model of the ocean-atmospheric system, the National Centers for Environmental Prediction (NCEP) of the National Oceanic and Atmospheric Administration (NOAA) has issued an advisory indicating the presence of a strong El Nino condition throughout the winter. For more information, please visit the TOPEX/Poseidon project web page at http://topex-www.jpl.nasa.gov/
Responsible and controlled use: Older cannabis users and harm reduction.
Lau, Nicholas; Sales, Paloma; Averill, Sheigla; Murphy, Fiona; Sato, Sye-Ok; Murphy, Sheigla
2015-08-01
Cannabis use is becoming more accepted in mainstream society. In this paper, we use Zinberg's classic theoretical framework of drug, set, and setting to elucidate how older adult cannabis users managed health, social and legal risks in a context of normalized cannabis use. We present selected findings from our qualitative study of Baby Boomer (born 1946-1964) cannabis users in the San Francisco Bay Area. Data collection consisted of a recorded, in-depth life history interview followed by a questionnaire and health survey. Qualitative interviews were analyzed to discover the factors of cannabis harm reduction from the users' perspectives. Interviewees made harm reduction choices based on preferred cannabis derivatives and routes of administration, as well as why, when, where, and with whom to use. Most interviewees minimized cannabis-related harms so they could maintain social functioning in their everyday lives. Responsible and controlled use was described as moderation of quantity and frequency of cannabis used, using in appropriate settings, and respect for non-users. Users contributed to the normalization of cannabis use through normification. Participants followed rituals or cultural practices, characterized by sanctions that helped define "normal" or "acceptable" cannabis use. Users contributed to cannabis normalization through their harm reduction methods. These cultural practices may prove to be more effective than formal legal prohibitions in reducing cannabis-related harms. Findings also suggest that users with access to a regulated market (medical cannabis dispensaries) were better equipped to practice harm reduction. More research is needed on both cannabis culture and alternative routes of administration as harm reduction methods. Copyright © 2015 Elsevier B.V. All rights reserved.
New methodology for adjusting rotating shadowband irradiometer measurements
NASA Astrophysics Data System (ADS)
Vignola, Frank; Peterson, Josh; Wilbert, Stefan; Blanc, Philippe; Geuder, Norbert; Kern, Chris
2017-06-01
A new method is developed for correcting systematic errors found in rotating shadowband irradiometer (RSI) measurements. Since the responsivity of the photodiode-based pyranometers typically utilized for RSI sensors depends on the wavelength of the incident radiation, and the spectral distribution of the incident radiation differs between the Direct Normal Irradiance and the Diffuse Horizontal Irradiance, spectral effects have to be considered. These cause the most problematic errors when applying currently available correction functions to RSI measurements. Hence, direct normal and diffuse contributions are analyzed and modeled separately. An additional advantage of this methodology is that it provides a prescription for how to adapt the adjustment algorithms to locations with atmospheric characteristics different from those of the location where the calibration and adjustment algorithms were developed. A summary of results and areas for future effort are then discussed.
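The split-component idea can be sketched with the standard closure relation GHI = DNI·cos(SZA) + DHI. In the toy function below the two correction factors are hypothetical placeholders; in the actual methodology they would depend on air mass and site atmospheric conditions.

```python
import numpy as np

def correct_rsi(ghi_raw, dhi_raw, sza_deg, f_dni=1.03, f_dhi=0.94):
    """Sketch: apply separate spectral correction factors to the direct
    and diffuse parts of an RSI measurement, then recombine.
    f_dni and f_dhi are invented placeholder factors."""
    mu = np.cos(np.radians(sza_deg))
    dni_raw = (ghi_raw - dhi_raw) / mu    # direct component from closure
    dni = f_dni * dni_raw
    dhi = f_dhi * dhi_raw
    return dni, dhi, dni * mu + dhi       # corrected DNI, DHI, GHI
```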
Zangwill, Linda M; Chan, Kwokleung; Bowd, Christopher; Hao, Jicuang; Lee, Te-Won; Weinreb, Robert N; Sejnowski, Terrence J; Goldbaum, Michael H
2004-09-01
To determine whether topographical measurements of the parapapillary region analyzed by machine learning classifiers can detect early to moderate glaucoma better than similarly processed measurements obtained within the disc margin and to improve methods for optimization of machine learning classifier feature selection. One eye of each of 95 patients with early to moderate glaucomatous visual field damage and of each of 135 normal subjects older than 40 years participating in the longitudinal Diagnostic Innovations in Glaucoma Study (DIGS) were included. Heidelberg Retina Tomograph (HRT; Heidelberg Engineering, Dossenheim, Germany) mean height contour was measured in 36 equal sectors, both along the disc margin and in the parapapillary region (at a mean contour line radius of 1.7 mm). Each sector was evaluated individually and in combination with other sectors. Gaussian support vector machine (SVM) learning classifiers were used to interpret HRT sector measurements along the disc margin and in the parapapillary region, to differentiate between eyes with normal and glaucomatous visual fields and to compare the results with global and regional HRT parameter measurements. The area under the receiver operating characteristic (ROC) curve was used to measure diagnostic performance of the HRT parameters and to evaluate the cross-validation strategies and forward selection and backward elimination optimization techniques that were used to generate the reduced feature sets. The area under the ROC curve for mean height contour of the 36 sectors along the disc margin was larger than that for the mean height contour in the parapapillary region (0.97 and 0.85, respectively). Of the 36 individual sectors along the disc margin, those in the inferior region between 240 degrees and 300 degrees, had the largest area under the ROC curve (0.85-0.91). With SVM Gaussian techniques, the regional parameters showed the best ability to discriminate between normal eyes and eyes with glaucomatous visual field damage, followed by the global parameters, mean height contour measures along the disc margin, and mean height contour measures in the parapapillary region. The area under the ROC curve was 0.98, 0.94, 0.93, and 0.85, respectively. Cross-validation and optimization techniques demonstrated that good discrimination (99% of peak area under the ROC curve) can be obtained with a reduced number of HRT parameters. Mean height contour measurements along the disc margin discriminated between normal and glaucomatous eyes better than measurements obtained in the parapapillary region. Copyright Association for Research in Vision and Ophthalmology
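The forward-selection loop over HRT sectors with a cross-validated AUC criterion can be sketched with scikit-learn. The data here are random placeholders (the real inputs are the 36 sector measurements), and the generic selector stands in for the forward selection and backward elimination strategies the authors compare.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SequentialFeatureSelector

# Placeholder data: 36 mean-height-contour sectors per eye, glaucoma vs normal
rng = np.random.default_rng(1)
X = rng.normal(size=(230, 36))
y = rng.integers(0, 2, 230)

svm = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
selector = SequentialFeatureSelector(svm, n_features_to_select=8,
                                     direction='forward',
                                     scoring='roc_auc', cv=10)
selector.fit(X, y)
print(np.flatnonzero(selector.get_support()))   # indices of retained sectors
```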
Improving diagnostic recognition of primary hyperparathyroidism with machine learning.
Somnay, Yash R; Craven, Mark; McCoy, Kelly L; Carty, Sally E; Wang, Tracy S; Greenberg, Caprice C; Schneider, David F
2017-04-01
Parathyroidectomy offers the only cure for primary hyperparathyroidism, but today only 50% of primary hyperparathyroidism patients are referred for operation, in large part because the condition is widely under-recognized. The diagnosis of primary hyperparathyroidism can be especially challenging with mild biochemical indices. Machine learning is a collection of methods in which computers build predictive algorithms based on labeled examples. With the aim of facilitating diagnosis, we tested the ability of machine learning to distinguish primary hyperparathyroidism from normal physiology using clinical and laboratory data. This retrospective cohort study used a labeled training set and 10-fold cross-validation to evaluate the accuracy of the algorithm. Measures of accuracy included area under the receiver operating characteristic curve, precision (sensitivity), and positive and negative predictive value. Several different algorithms and ensembles of algorithms were tested using the Weka platform. Among 11,830 patients managed operatively at 3 high-volume endocrine surgery programs from March 2001 to August 2013, 6,777 underwent parathyroidectomy for confirmed primary hyperparathyroidism, and 5,053 control patients without primary hyperparathyroidism underwent thyroidectomy. Test-set accuracies for machine learning models were determined using 10-fold cross-validation. Age, sex, and serum levels of preoperative calcium, phosphate, parathyroid hormone, vitamin D, and creatinine were defined as potential predictors of primary hyperparathyroidism. Mild primary hyperparathyroidism was defined as primary hyperparathyroidism with normal preoperative calcium or parathyroid hormone levels. After testing a variety of machine learning algorithms, Bayesian network models proved most accurate, correctly classifying 95.2% of all primary hyperparathyroidism patients (area under receiver operating characteristic = 0.989). Omitting parathyroid hormone from the model did not decrease the accuracy significantly (area under receiver operating characteristic = 0.985). In mild disease cases, however, the Bayesian network model correctly classified 71.1% of patients with normal calcium and 92.1% with normal parathyroid hormone levels preoperatively. Combining Bayesian networking with AdaBoost improved the accuracy to 97.2% of all primary hyperparathyroidism patients (area under receiver operating characteristic = 0.994) and 91.9% of primary hyperparathyroidism patients with mild disease. This was significantly improved relative to Bayesian networking alone (P < .0001). Machine learning can accurately diagnose primary hyperparathyroidism without human input, even in mild disease. Incorporation of this tool into electronic medical record systems may aid in recognition of this under-diagnosed disorder. Copyright © 2016 Elsevier Inc. All rights reserved.
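A rough scikit-learn analogue of the boosted Bayesian classifier follows; GaussianNB is only a stand-in for Weka's Bayesian-network learner, and the data are random placeholders shaped like the seven predictors named above.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Placeholder features: age, sex, calcium, phosphate, PTH, vitamin D, creatinine
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 7))
y = rng.integers(0, 2, 2000)   # 1 = primary hyperparathyroidism (placeholder)

# Boosting a simple Bayesian base learner (sklearn >= 1.2 uses `estimator=`)
model = AdaBoostClassifier(estimator=GaussianNB(), n_estimators=50)
print(cross_val_score(model, X, y, cv=10, scoring='roc_auc').mean())
```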
Calculation of Cardiac Kinetic Energy Index from PET images.
Sims, John; Oliveira, Marco Antônio; Meneghetti, José Claudio; Gutierrez, Marco Antônio
2015-01-01
Cardiac function can be assessed from displacement measurements in nuclear medicine imaging modalities. Using positron emission tomography (PET) image sequences with Rubidium-82, we propose and estimate the total Kinetic Energy Index (KEf), obtained from the velocity field that was calculated using 3D optical flow (OF) methods applied over the temporal image sequence. However, it was found that the brightness of the image varied unexpectedly between frames, violating the constant-brightness assumption of the OF method and causing large errors in the estimated velocity field. Therefore, total brightness was equalized across image frames, and the adjusted configuration was tested with rest perfusion images acquired from individuals with normal (n=30) and low (n=33) cardiac function. For these images KEf was calculated as 0.5731±0.0899 and 0.3812±0.1146 for individuals with normal and low cardiac function, respectively. The ability of KEf to properly classify patients into the two groups was tested with a ROC analysis, with the area under the curve estimated as 0.906. To our knowledge this is the first time that KEf has been applied to PET images.
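The pipeline (brightness equalization, then flow, then kinetic energy) can be illustrated with a crude gradient-based normal-flow approximation; real 3D OF methods are considerably more elaborate, and the scaling here is schematic rather than the authors' KEf definition.

```python
import numpy as np

def kinetic_energy_index(frames):
    """Sketch: equalize total brightness across frames (restoring the
    constant-brightness assumption of optical flow), estimate a crude
    gradient-based flow, and accumulate kinetic energy 0.5*|v|^2."""
    frames = [f * frames[0].sum() / f.sum() for f in frames]  # equalization
    ke = 0.0
    for f0, f1 in zip(frames[:-1], frames[1:]):
        gy, gx = np.gradient(f0)
        dt = f1 - f0
        denom = gx**2 + gy**2 + 1e-6
        vx, vy = -dt * gx / denom, -dt * gy / denom   # normal-flow approximation
        ke += 0.5 * np.sum(vx**2 + vy**2)
    return ke
```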
The relationship between consanguineous marriage and death in fetus and infants.
Mohammadi, Majid Mehr; Hooman, Heidar Ali; Afrooz, Gholam Ali; Daramadi, Parviz Sharifi
2012-05-01
Given the high prevalence of consanguineous marriages in rural and urban areas of Iran, the aim of this study was to identify its role in increasing fetal and infant deaths. This was a cross-sectional study in which 494 mothers with more than one exceptional child (mentally retarded and physically-dynamically disabled) or with normal children were selected based on multi-stage random sampling method. Data was gathered using the features of parents with more than one exceptional child questionnaire. The validity and reliability of this questionnaire was acceptable. Hierarchical log-linear method was used for statistical analysis. Consanguineous marriage significantly increased the number of births of exceptional children. Moreover, there was a significant relation between the history of fetal/infant death and belonging to the group. There was a significant relation between consanguineous marriage and the history of fetal/infant death which means consanguineous marriage increased the prevalence of fetal/infant death in parents with exceptional children rather than in parents with normal children. The rate of fetal/infant death in exceptional births of consanguineous marriages was higher than that of non-consanguineous marriages.
Spatially tuned normalization explains attention modulation variance within neurons.
Ni, Amy M; Maunsell, John H R
2017-09-01
Spatial attention improves perception of attended parts of a scene, a behavioral enhancement accompanied by modulations of neuronal firing rates. These modulations vary in size across neurons in the same brain area. Models of normalization explain much of this variance in attention modulation with differences in tuned normalization across neurons (Lee J, Maunsell JHR. PLoS One 4: e4651, 2009; Ni AM, Ray S, Maunsell JHR. Neuron 73: 803-813, 2012). However, recent studies suggest that normalization tuning varies with spatial location both across and within neurons (Ruff DA, Alberts JJ, Cohen MR. J Neurophysiol 116: 1375-1386, 2016; Verhoef BE, Maunsell JHR. eLife 5: e17256, 2016). Here we show directly that attention modulation and normalization tuning do in fact covary within individual neurons, in addition to across neurons as previously demonstrated. We recorded the activity of isolated neurons in the middle temporal area of two rhesus monkeys as they performed a change-detection task that controlled the focus of spatial attention. Using the same two drifting Gabor stimuli and the same two receptive field locations for each neuron, we found that switching which stimulus was presented at which location affected both attention modulation and normalization in a correlated way within neurons. We present an equal-maximum-suppression spatially tuned normalization model that explains this covariance both across and within neurons: each stimulus generates equally strong suppression of its own excitatory drive, but its suppression of distant stimuli is typically less. This new model specifies how the tuned normalization associated with each stimulus location varies across space both within and across neurons, changing our understanding of the normalization mechanism and how attention modulations depend on this mechanism. NEW & NOTEWORTHY Tuned normalization studies have demonstrated that the variance in attention modulation size seen across neurons from the same cortical area can be largely explained by between-neuron differences in normalization strength. Here we demonstrate that attention modulation size varies within neurons as well and that this variance is largely explained by within-neuron differences in normalization strength. We provide a new spatially tuned normalization model that explains this broad range of observed normalization and attention effects. Copyright © 2017 the American Physiological Society.
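In generic form (schematic symbols, not the paper's exact equations), a spatially tuned normalization model of this kind, for two stimuli with contrasts c1, c2 and excitatory drives L1, L2 and attention directed to stimulus 1, can be written as:

```latex
% \beta: attentional gain; \alpha: common maximum (self-)suppression;
% w(d) \le 1: suppression weight that decays with the distance d between
% stimulus 2 and the measured receptive-field location; \sigma: semisaturation.
R = \frac{\beta\,c_1 L_1 + c_2 L_2}
         {\beta\,c_1 \alpha + c_2 \alpha\,w(d) + \sigma}
```

The "equal-maximum-suppression" constraint is captured by the single alpha: each stimulus suppresses its own excitatory drive with the same maximum strength, while its suppression of distant stimuli is scaled down by w(d).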
NASA Astrophysics Data System (ADS)
Wulandari, Asri; Asti Anggari, Ega; Dwiasih, Novi; Suyanto, Imam
2018-03-01
Very Low Frequency (VLF) measurements were made at Pagerkandang Volcano, Dieng Volcanic Complex (DVC), to examine the possible existence of conductive zones related to geothermal manifestations. The VLF-EM survey used tilt mode with a T-VLF BRGM Iris Instrument operated at two frequencies, 22200 Hz from Japan (JJI) and 19800 Hz from Australia (NWC). There are five lines, with a distance of 50 m between lines and 20 m between measurement points. The parameters measured by the VLF method are tilt angle (%) and ellipticity (%). The tilt-angle data were processed with the Fraser and Karous-Hjelt filters using the WinVLF program. The Karous-Hjelt filter produced current-density contours used to estimate the lateral locations of the conductive and resistive zones. The conductive zone is interpreted as the area with high current-density values; it is located to the east and west of Pagerkandang Volcano. The conductive zone is related to geothermal manifestations such as fumaroles that appear because of the presence of a normal fault. The resistive zone is interpreted as the area with low current-density values; it spreads almost across the middle of Pagerkandang Volcano. The resistive zone was caused by extensive weathering of the claystone.
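Both filters are short linear filters on the tilt-angle profile. The sketch below uses the commonly cited coefficient sets and one common sign convention; they are worth double-checking against the original Fraser (1969) and Karous and Hjelt (1983) papers before use.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def fraser(tilt):
    """Fraser filter: turns tilt-angle crossovers into peaks.
    F_i = (x_{i+2} + x_{i+3}) - (x_i + x_{i+1}), plotted between stations."""
    x = np.asarray(tilt, float)
    return (x[2:-1] + x[3:]) - (x[:-3] + x[1:-2])

def karous_hjelt(tilt):
    """Karous-Hjelt filter: equivalent current density at the shallowest
    depth, using the widely quoted antisymmetric coefficient set
    (the central station carries zero weight)."""
    coeffs = np.array([-0.205, 0.323, -1.446, 0.0, 1.446, -0.323, 0.205])
    x = np.asarray(tilt, float)
    return sliding_window_view(x, 7) @ coeffs
```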
Analysis of Cervical Supernatant Samples Luminescence Using 355 nm Laser
NASA Astrophysics Data System (ADS)
Vaitkuviene, A.; Gegzna, V.; Kurtinaitiene, R.; Stanikunas, R.; Rimiene, J.; Vaitkus, J.
2010-05-01
The discovery of biomarkers for accurate detection and diagnosis of cervical carcinoma and its malignant precursors represents one of the current challenges in clinical medicine. Laser-induced autofluorescence spectra of cervical smear content were fitted to predict the cervical epithelium diagnosis as a laboratory 'optical biopsy' method. Spectroscopy of liquid PAP supernatant sediment dried on a quartz plate was performed with a 355 nm Nd:YAG microlaser STA-1 (Standa, Ltd). For comparison, liquid supernatant spectroscopy was performed with a laboratory Perkin Elmer LS 50B spectrometer at 290, 300 and 310 nm excitations. Analysis of the spectra was performed by approximation using a multi-peak program, with Lorentz functions for the liquid samples and Gaussian functions for the dry samples. The ratio of each spectral component's area to the area under the whole experimental curve (SPP) was calculated. The spectral components were compared across histology groups by average SPP using the Mann-Whitney U-test. Results: differentiation of Normal and HSIL/CIN2+ cases in whole supernatant could be performed by stationary laboratory lamp spectroscopy at 290 nm excitation and emission >379 nm with accuracy AUC 0.69, Sens 0.72, Spec 0.65. Differentiation of Normal versus HSIL/CIN2+ groups in dried enriched supernatant could be performed with 355 nm microlaser excitation at emission 405-424 nm with accuracy AUC 0.96, Sens 0.91, Spec 1.00. A diagnostic algorithm could be created for differentiation of all histology groups under 355 nm excitation. Microlaser-induced 'optical biopsy' looks to be a promising method for cervical screening at the point of care.
Health Risk Assessment of Inhalable Particulate Matter in Beijing Based on the Thermal Environment
Xu, Lin-Yu; Yin, Hao; Xie, Xiao-Dong
2014-01-01
Inhalable particulate matter (PM10) is a primary air pollutant closely related to public health, and an especially serious problem in urban areas. The urban heat island (UHI) effect has made urban PM10 pollution more complex and severe. In this study, we established a health risk assessment system based on an epidemiological method that takes thermal environment effects into consideration. We used remote sensing to retrieve the PM10 concentration, UHI intensity, Normalized Difference Vegetation Index (NDVI), and Normalized Difference Water Index (NDWI). Using the correlation between the difference vegetation index (DVI) and PM10 concentration, we applied the established model relating PM10 to thermal environmental indicators to evaluate PM10 health risks on the basis of the epidemiological study. Additionally, through regulation of UHI, NDVI and NDWI, we aimed to regulate PM10 health risks and the thermal environment simultaneously; that is, this study attempted to accomplish thermal environment regulation and reduction of PM10 health risks concurrently through control of UHI intensity. The results indicate that urban Beijing has a higher PM10 health risk than rural areas; the PM10 health risk based on the thermal environment is 1.145, similar to the health risk (1.144) calculated from the PM10 concentration inversion; and, according to the regulation results, regulation of UHI and NDVI is effective and helpful for mitigating PM10 health risk in functional zones. PMID:25464132
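A minimal sketch of the epidemiological risk step: PM10 health-impact studies commonly use a log-linear exposure-response function, RR = exp(beta * (C - C0)). The coefficient, reference concentration and retrieved concentrations below are assumed for illustration and are not the paper's fitted values.

```python
import math

beta = 0.0008            # per ug/m3, assumed exposure-response coefficient
c0 = 40.0                # ug/m3, assumed reference concentration
c_urban, c_rural = 210.0, 170.0  # ug/m3, hypothetical retrieved PM10 levels

rr_urban = math.exp(beta * (c_urban - c0))  # relative risk in the urban zone
rr_rural = math.exp(beta * (c_rural - c0))  # relative risk in the rural zone
print(f"urban RR = {rr_urban:.3f}, rural RR = {rr_rural:.3f}")
```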
Di Donato, Guido; Laufer-Amorim, Renée; Palmieri, Chiara
2017-10-01
Ten normal prostates, 22 cases of benign prostatic hyperplasia (BPH) and 29 prostate cancers (PC) were morphometrically analyzed with regard to mean nuclear area (MNA), mean nuclear perimeter (MNP), mean nuclear diameter (MND), coefficient of variation of the nuclear area (NACV), mean maximum nuclear diameter (MDx), mean minimum nuclear diameter (MDm), mean nuclear form ellipse (MNFe) and form factor (FF). The relationships between nuclear morphometric parameters and histological type, Gleason score, method of sample collection, presence of metastases and survival time in canine PC were also investigated. Overall, nuclei from neoplastic cells were larger, with greater variation in nuclear size and shape, than those of normal and hyperplastic cells. Significant differences were found between more differentiated (small acinar/ductal) and less differentiated (cribriform, solid) PCs with regard to FF (p<0.05). MNA, MNP, MND, MDx and MDm were significantly correlated with the Gleason score of PC (p<0.05). MNA, MNP, MDx and MNFe may also have important prognostic implications in canine prostatic cancer, since they were negatively correlated with survival time. Biopsy specimens contained nuclei that were smaller and more irregular than those in prostatectomy and necropsy specimens; therefore, factors associated with tissue sampling and processing may influence the overall morphometric evaluation. The results indicate that nuclear morphometric analysis in combination with the Gleason score can help in canine prostate cancer grading, thus contributing to a more precise prognosis and patient management. Copyright © 2017 Elsevier Ltd. All rights reserved.
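For reference, the shape parameters named above follow standard morphometric formulas, e.g. form factor FF = 4*pi*A / P^2 (equal to 1.0 for a perfect circle) and NACV as the coefficient of variation of nuclear area. A minimal sketch with made-up measurements:

```python
import numpy as np

# Per-nucleus measurements (made up): areas in um^2, perimeters in um.
areas = np.array([42.0, 55.3, 61.8, 48.9])
perims = np.array([24.1, 28.0, 30.2, 26.5])

mna = areas.mean()                           # mean nuclear area (MNA)
mnp = perims.mean()                          # mean nuclear perimeter (MNP)
nacv = areas.std(ddof=1) / mna               # coefficient of variation (NACV)
ff = (4 * np.pi * areas / perims**2).mean()  # form factor, 1.0 = perfect circle

print(f"MNA={mna:.1f} um^2  MNP={mnp:.1f} um  NACV={nacv:.3f}  FF={ff:.3f}")
```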
Childhood leukaemia in areas with different radon levels: a spatial and temporal analysis using GIS.
Kohli, S; Noorlind Brage, H; Löfman, O
2000-11-01
To evaluate the relation between exposure to ground radon levels and leukaemia among children using existing population and disease registers. Ecological correlation study. The county of Östergötland in Sweden. Every child born in the county between 1979 and 1992 was mapped to the property centroid coordinates by linking addresses in the population and property registers. Population maps were overlaid with radon maps, and exposure at birth and in each subsequent year was classified as high, normal, low or unknown. These data were analysed together with data from the tumour registry. Standardised mortality ratios (SMRs) were calculated using the age- and sex-specific rates for Sweden for the year 1995. 90 malignancies occurred among the 53 146 children (498 887 person years) who formed the study population. SMRs for acute lymphatic leukaemia (ALL) among children born in high, normal and low risk areas were 1.43, 1.17 and 0.25, respectively. The relative risks for the normal risk group and the high risk group compared with the low risk group were 4.64 (95% CI 1.29, 28.26) and 5.67 (95% CI 1.06, 42.27). The association between ALL and continued residence in normal or high risk areas showed a similar trend. No association between radon risk levels and any other malignancy was seen. Children born in and remaining in areas where the risk from ground radon has been classified as low are less likely to develop ALL than those born in areas classified as normal or high risk.
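The SMR computation described above reduces to observed cases divided by the cases expected if the study population experienced the reference age- and sex-specific rates. A minimal sketch with illustrative strata and counts (not the study's data):

```python
# Each stratum: (person-years at risk, reference rate per 100 000 person-years).
strata = [(120_000, 4.1), (150_000, 3.2), (110_000, 2.5)]

observed = 14  # cases observed in the study population (illustrative)
expected = sum(py * rate / 100_000 for py, rate in strata)
smr = observed / expected
print(f"expected = {expected:.2f} cases, SMR = {smr:.2f}")
```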
2003-11-18
Some climate forecast models indicate an above-average chance of a weak to borderline El Niño by the end of November 2003. However, the trade winds, blowing from east to west across the equatorial Pacific Ocean, remain strong, so some uncertainty remains among climate scientists as to whether the warm temperature anomaly will form again this year. The latest remote sensing data from NASA's Jason satellite show near-normal conditions across the equatorial Pacific: there are currently no visible signs in sea surface height of an impending El Niño. This equatorial quiet contrasts with the Bering Sea, Gulf of Alaska and U.S. West Coast, where lower-than-normal sea surface levels and cool ocean temperatures continue (indicated by blue and purple areas). The image above is a global map of sea surface height, accurate to within 30 millimeters, representing data collected and composited over a 10-day period ending on Nov. 3, 2003. The height of the water relates to its temperature: as the ocean warms, its level rises; as it cools, its level falls. Yellow and red areas indicate where the waters are relatively warmer and have expanded above sea level, green indicates near-normal sea level, and blue and purple areas show where the waters are relatively colder and the surface is lower than sea level. The blue areas are between 5 and 13 centimeters (2 and 5 inches) below normal, whereas the purple areas range from 14 to 18 centimeters (6 to 7 inches) below normal. http://photojournal.jpl.nasa.gov/catalog/PIA04878
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr
Previous studies have proposed several methods for integrating characterized environmental impacts into a single index in life cycle assessment. Each of them, however, may lead to different results. This study presents internal and external normalization methods, weighting factors proposed by panel methods, and a monetary valuation based on an endpoint life cycle impact assessment method as the integration methods. Furthermore, this study investigates the differences among the integration methods and identifies the causes of the differences through a case study of five elementary school buildings. When internal normalization was used with weighting factors, the weighting factors had a significant influence on the total environmental impacts, whereas the normalization had little influence. When external normalization was used with weighting factors, the normalization had a more significant influence on the total environmental impacts than the weighting factors. Because of these differences, the ranking of the five buildings varied depending on the integration method. The ranking calculated by the monetary valuation method was significantly different from that calculated by the normalization and weighting process. The results help decision makers understand the differences among these integration methods and, ultimately, select the method most appropriate for the goal at hand.
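A minimal sketch of the two normalization routes being contrasted, for a hypothetical matrix of five alternatives by three impact categories; all values, weights and external references are assumptions for illustration. It reproduces the qualitative point that the two routes can rank the same alternatives differently:

```python
import numpy as np

# Characterized impacts: 5 buildings (rows) x 3 categories (columns), assumed.
impacts = np.array([
    [3.2e4, 1.1e2, 4.5],
    [2.9e4, 1.4e2, 3.9],
    [3.6e4, 0.9e2, 5.1],
    [3.1e4, 1.2e2, 4.2],
    [2.7e4, 1.3e2, 4.8],
])

weights = np.array([0.5, 0.3, 0.2])             # panel-style weights (assumed)
external_ref = np.array([1.0e7, 5.0e4, 2.0e3])  # reference-system totals (assumed)

internal = impacts / impacts.max(axis=0)  # internal: scaled within the case set
external = impacts / external_ref         # external: scaled by a reference system

# Rankings (lowest weighted impact first) can differ between the two routes.
print("internal ranking:", np.argsort(internal @ weights))
print("external ranking:", np.argsort(external @ weights))
```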
Feinson, Lawrence S.; Gibs, Jacob; Imbrigiotta, Thomas E.; Garrett, Jessica D.
2016-01-01
The U.S. Geological Survey's New Jersey and Iowa Water Science Centers deployed ultraviolet-visible spectrophotometric sensors at water-quality monitoring sites on the Passaic and Pompton Rivers at Two Bridges, New Jersey, on the Toms River at Toms River, New Jersey, and on the North Raccoon River near Jefferson, Iowa, to continuously measure in-stream nitrate plus nitrite as nitrogen (NO3 + NO2) concentrations in conjunction with continuous streamflow measurements. Statistical analysis of NO3 + NO2 versus stream discharge during storm events found statistically significant relations of land use type and sampling site with the normalized area and rotational direction of NO3 + NO2-stream discharge (N-Q) hysteresis patterns. Statistically significant relations were also found between the normalized area of a hysteresis pattern and several flow parameters, as well as between the normalized area adjusted for rotational direction and minimum NO3 + NO2 concentrations. The mean normalized hysteresis area for forested land use was smaller than that for urban and agricultural land uses. The hysteresis rotational direction for agricultural land use was opposite that for urban and undeveloped land uses. An r2 of 0.81 for the relation between the minimum normalized NO3 + NO2 concentration during a storm and the normalized NO3 + NO2 concentration at peak flow suggested that dilution was the dominant process controlling NO3 + NO2 concentrations over the course of most storm events.
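One common way to quantify such N-Q hysteresis, which may or may not match the authors' exact procedure, is to min-max normalize both axes and take the signed polygon area of the loop (shoelace formula), with the sign giving the rotational direction. A sketch with made-up storm data:

```python
import numpy as np

# Storm-event samples in time order (made up): discharge and concentration.
q = np.array([1.0, 2.5, 4.0, 3.2, 2.0, 1.2])        # discharge, m3/s
c = np.array([0.80, 0.60, 0.50, 0.70, 0.90, 0.85])  # NO3 + NO2, mg/L as N

qn = (q - q.min()) / (q.max() - q.min())  # min-max normalize both axes
cn = (c - c.min()) / (c.max() - c.min())

# Signed area of the closed loop traced in time (shoelace formula);
# np.roll closes the polygon back to the first sample.
area = 0.5 * np.sum(qn * np.roll(cn, -1) - np.roll(qn, -1) * cn)
direction = "counter-clockwise" if area > 0 else "clockwise"
print(f"normalized hysteresis area = {abs(area):.3f} ({direction})")
```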