Pintus, Elia; Sorbolini, Silvia; Albera, Andrea; Gaspa, Giustino; Dimauro, Corrado; Steri, Roberto; Marras, Gabriele; Macciotta, Nicolò P P
2014-02-01
Selection is the major force affecting local levels of genetic variation in species. The availability of dense marker maps offers new opportunities for a detailed understanding of genetic diversity distribution across the animal genome. Over the last 50 years, cattle breeds have been subjected to intense artificial selection. Consequently, regions controlling traits of economic importance are expected to exhibit selection signatures. The fixation index (Fst) is an estimate of population differentiation, based on genetic polymorphism data, and it is calculated using the relationship between inbreeding and heterozygosity. In the present study, locally weighted scatterplot smoothing (LOWESS) regression and a control chart approach were used to investigate selection signatures in two cattle breeds with different production aptitudes (dairy and beef). Fst was calculated for 42 514 SNP marker loci distributed across the genome in 749 Italian Brown and 364 Piedmontese bulls. The statistical significance of Fst values was assessed using a control chart. The LOWESS technique was efficient in removing noise from the raw data and was able to highlight selection signatures in chromosomes known to harbour genes affecting dairy and beef traits. Examples include the peaks detected for BTA2 in the region where the myostatin gene is located and for BTA6 in the region harbouring the ABCG2 locus. Moreover, several loci not previously reported in cattle studies were detected. © 2013 The Authors, Animal Genetics © 2013 Stichting International Foundation for Animal Genetics.
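The Fst-plus-LOWESS pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's analysis: the allele frequencies, marker positions, window fraction, and control-chart limit below are all simulated or assumed.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)

# Hypothetical per-SNP allele frequencies for two breeds (simulated, not real data)
p_dairy = rng.uniform(0.05, 0.95, 1000)
p_beef = np.clip(p_dairy + rng.normal(0, 0.08, 1000), 0.01, 0.99)

# Wright's Fst per locus from expected heterozygosities: Fst = (Ht - Hs) / Ht
p_bar = (p_dairy + p_beef) / 2
ht = 2 * p_bar * (1 - p_bar)                                   # pooled heterozygosity
hs = (2 * p_dairy * (1 - p_dairy) + 2 * p_beef * (1 - p_beef)) / 2  # mean within-breed
fst = (ht - hs) / ht

# LOWESS smooth of Fst along (hypothetical) map positions to remove noise
pos = np.sort(rng.uniform(0, 1e6, 1000))
smoothed = lowess(fst, pos, frac=0.05, return_sorted=False)

# Control-chart style signal: flag smoothed Fst above mean + 3 SD
ucl = smoothed.mean() + 3 * smoothed.std()
signals = pos[smoothed > ucl]
```

Positions exceeding the upper control limit play the role of candidate selection signatures; the real study derives its limits from the breed comparison rather than this simple mean + 3 SD rule.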
Fisher, Aaron; Anderson, G Brooke; Peng, Roger; Leek, Jeff
2014-01-01
Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%-49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%-76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/.
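The survey's basic unit — a scatterplot whose underlying relationship may or may not be significant at P < 0.05 — is easy to recreate. The sketch below is an assumption-laden stand-in for the study design (sample size, effect size, and number of panels are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_panel(n=100, rho=0.25):
    """Draw a bivariate-normal sample with true correlation rho and return
    the sample correlation and p-value a viewer would be asked to judge."""
    cov = [[1.0, rho], [rho, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    r, p = stats.pearsonr(x, y)
    return r, p

# Small true effects are often significant at n = 100 yet visually subtle,
# which is the intuition gap the study documents
results = [simulate_panel() for _ in range(200)]
share_significant = np.mean([p < 0.05 for _, p in results])
```

Plotting a few of these panels side by side with their p-values hidden reproduces the classification task given to the MOOC students.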
Local regression type methods applied to the study of geophysics and high frequency financial data
NASA Astrophysics Data System (ADS)
Mariani, M. C.; Basu, K.
2014-09-01
In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high-frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis showed that the estimate of earthquake magnitude at a fixed location is accurate to within a relative error of 0.01%. We also applied the same method to a high-frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is preferable to the Loess method. Whereas previous works focused on time series analysis, the local regression models in this paper perform a spatial analysis of the geophysics data, providing different information. For the high-frequency data, our models estimate the curve of best fit where the data depend on time.
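The core operation — recovering a smooth spatial field from noisy point measurements with Lowess — can be sketched as follows. The field, noise level, and smoothing fraction are invented for illustration and are not the earthquake data used in the paper:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(2)

# Noisy samples of a smooth 1-D field (a synthetic stand-in for magnitude
# estimated along a spatial transect)
x = np.linspace(0, 10, 500)
true = np.sin(x) + 0.5 * x
y = true + rng.normal(0, 0.1, x.size)

# Robust Lowess fit: frac controls the local window, it the robustifying passes
fit = lowess(y, x, frac=0.1, it=3, return_sorted=False)

# Relative error of the reconstruction against the known field
rel_err = np.abs(fit - true) / np.abs(true).max()
```

The same call, with `frac` tuned to the sampling density, applies unchanged to a time-indexed high-frequency series.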
Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing
NASA Astrophysics Data System (ADS)
Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa
2017-02-01
Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture—for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments—as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using STL, a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS). The data series—daily Poaceae pollen concentrations over the period 2006-2014—was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. Correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason, this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
Adjustment of pesticide concentrations for temporal changes in analytical recovery, 1992–2010
Martin, Jeffrey D.; Eberle, Michael
2011-01-01
Recovery is the proportion of a target analyte that is quantified by an analytical method and is a primary indicator of the analytical bias of a measurement. Recovery is measured by analysis of quality-control (QC) water samples that have known amounts of target analytes added ("spiked" QC samples). For pesticides, recovery is the measured amount of pesticide in the spiked QC sample expressed as a percentage of the amount spiked, ideally 100 percent. Temporal changes in recovery have the potential to adversely affect time-trend analysis of pesticide concentrations by introducing trends in apparent environmental concentrations that are caused by trends in performance of the analytical method rather than by trends in pesticide use or other environmental conditions. This report presents data and models related to the recovery of 44 pesticides and 8 pesticide degradates (hereafter referred to as "pesticides") that were selected for a national analysis of time trends in pesticide concentrations in streams. Water samples were analyzed for these pesticides from 1992 through 2010 by gas chromatography/mass spectrometry. Recovery was measured by analysis of pesticide-spiked QC water samples. Models of recovery, based on robust, locally weighted scatterplot smooths (lowess smooths) of matrix spikes, were developed separately for groundwater and stream-water samples. The models of recovery can be used to adjust concentrations of pesticides measured in groundwater or stream-water samples to 100 percent recovery to compensate for temporal changes in the performance (bias) of the analytical method.
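The adjustment described here — model recovery over time with a robust lowess smooth, then divide measured concentrations by the modeled recovery fraction — can be sketched with invented numbers (the spike dates, recovery drift, and sample values below are hypothetical, not USGS data):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(4)

# Hypothetical matrix-spike recoveries (percent) drifting downward over time
t = np.sort(rng.uniform(1992, 2010, 300))       # decimal years of spike analyses
recovery = 90 - 0.8 * (t - 1992) + rng.normal(0, 5, t.size)

# Robust lowess model of recovery vs. time (it=3 down-weights outlying spikes)
model = lowess(recovery, t, frac=0.3, it=3)     # sorted (time, recovery) pairs

# Adjust an environmental concentration to 100 percent recovery by dividing
# by the modeled recovery fraction at the sample date
sample_time, measured = 2005.5, 1.2             # hypothetical sample
modeled_recovery = np.interp(sample_time, model[:, 0], model[:, 1])
adjusted = measured / (modeled_recovery / 100.0)
```

Because modeled recovery is below 100 percent, the adjusted concentration is higher than the measured one, compensating for the method's low bias at that date.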
Airborne Remote Sensing of Trafficability in the Coastal Zone
2009-01-01
validation instruments: Analytical Spectral Devices (ASD) full-range spectrometer; light weight deflectometer (LWD), which measures dynamic deflection... liquid water absorption features. The corresponding bearing strength measured by the LWD was high at the shoreline site and low at the backdune site... [Figure 7: Correlation of in situ grain size, moisture, and bearing strength measurements; scatterplot of percent moisture vs. LWD.]
StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.
Li, Chenhui; Baciu, George; Han, Yu
2018-03-01
Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
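The first two ideas — estimate a density field per frame, then interpolate between frames — can be approximated very simply. This is a naive sketch using plain Gaussian KDE and linear blending, not the paper's adaptive super kernel density estimation or its morphing algorithm:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)

# Two synthetic point "frames" from a stream whose densities we want to blend
frame_a = rng.normal([0, 0], 0.5, size=(500, 2))
frame_b = rng.normal([2, 2], 0.5, size=(500, 2))

# Evaluate each frame's kernel density estimate on a common grid
grid = np.mgrid[-2:4:50j, -2:4:50j].reshape(2, -1)
dens_a = gaussian_kde(frame_a.T)(grid)
dens_b = gaussian_kde(frame_b.T)(grid)

def morph(alpha):
    """Linear blend of the two density fields: a crude stand-in for the
    paper's density morphing, which generates smoother intermediates."""
    return (1 - alpha) * dens_a + alpha * dens_b

mid_frame = morph(0.5)
```

Rendering `morph(alpha)` for alpha sweeping 0 to 1 gives the flavor of the intermediate frames, though a linear blend cross-fades densities rather than transporting them as a true morph would.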
Chen, L; Liu, J; Xu, T; Long, X; Lin, J
2010-07-01
The study aims were to investigate the correlation between vertebral shape and hand-wrist maturation and to select characteristic parameters of C2-C5 (the second to fifth cervical vertebrae) for cervical vertebral maturation determination by mixed longitudinal data. 87 adolescents (32 males, 55 females) aged 8-18 years with normal occlusion were studied. Sequential lateral cephalograms and hand-wrist radiographs were taken annually for 6 consecutive years. Lateral cephalograms were divided into 11 maturation groups according to Fishman Skeletal Maturity Indicators (SMI). 62 morphological measurements of C2-C5 at 11 different developmental stages (SMI1-11) were measured and analysed. Locally weighted scatterplot smoothing, correlation coefficient analysis and variable cluster analysis were used for statistical analysis. Of the 62 cervical vertebral parameters, 44 were positively correlated with SMI, 6 were negatively correlated and 12 were not correlated. The correlation coefficients between cervical vertebral parameters and SMI were relatively high. Characteristic parameters for quantitative analysis of cervical vertebral maturation were selected. In summary, cervical vertebral maturation could be used reliably to evaluate the skeletal stage instead of the hand-wrist radiographic method. Selected characteristic parameters offered a simple and objective reference for the assessment of skeletal maturity and timing of orthognathic surgery. Copyright 2010 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Kuo, Terry B J; Yang, Cheryl C H
2004-06-15
To explore interactions between cerebral cortical and autonomic functions in different sleep-wake states. Active waking (AW), quiet sleep (QS), and paradoxical sleep (PS) of adult male Wistar-Kyoto rats (WKY) on their daytime sleep were compared. Ten WKY. All rats had electrodes implanted for polygraphic recordings. One week later, a 6-hour daytime sleep-wakefulness recording session was performed. A scatterplot analysis of electroencephalogram (EEG) slow-wave magnitude (0.5-4 Hz) and heart rate variability (HRV) was applied in each rat. The EEG slow-wave-RR interval scatterplot from all of the recordings revealed a propeller-like pattern. If the scatterplot was divided into AW, PS, and QS according to the corresponding EEG mean power frequency and nuchal electromyogram, the EEG slow wave-RR interval relationship became nil, negative, and positive for AW, PS, and QS, respectively. A significant negative relationship was found for EEG slow-wave and high-frequency power of HRV (HF) coupling during PS and for EEG slow wave and low-frequency power of HRV to HF ratio (LF/HF) coupling during QS. The optimal time lags for the slow wave-LF/HF relationship were different between PS and QS. Bradycardia noted in QS and PS was related to sympathetic suppression and vagal excitation, respectively. The EEG slow wave-HRV scatterplot may provide unique insights into studies of sleep, and such a relationship may delineate the sleep-state-dependent fluctuations in autonomic nervous system activity.
Partitioning degrees of freedom in hierarchical and other richly-parameterized models.
Cui, Yue; Hodges, James S; Kong, Xiaoxiao; Carlin, Bradley P
2010-02-01
Hodges & Sargent (2001) developed a measure of a hierarchical model's complexity, degrees of freedom (DF), that is consistent with definitions for scatterplot smoothers, interpretable in terms of simple models, and that enables control of a fit's complexity by means of a prior distribution on complexity. DF describes complexity of the whole fitted model but in general it is unclear how to allocate DF to individual effects. We give a new definition of DF for arbitrary normal-error linear hierarchical models, consistent with Hodges & Sargent's, that naturally partitions the n observations into DF for individual effects and for error. The new conception of an effect's DF is the ratio of the effect's modeled variance matrix to the total variance matrix. This gives a way to describe the sizes of different parts of a model (e.g., spatial clustering vs. heterogeneity), to place DF-based priors on smoothing parameters, and to describe how a smoothed effect competes with other effects. It also avoids difficulties with the most common definition of DF for residuals. We conclude by comparing DF to the effective number of parameters p(D) of Spiegelhalter et al (2002). Technical appendices and a dataset are available online as supplemental materials.
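The ratio-of-variance-matrices idea can be made concrete with a toy one-way random-effects model. The structure below (5 groups of 10 observations, variance components fixed by hand) is illustrative only; it computes an effect's DF as the trace of its modeled variance matrix times the inverse total variance matrix, and assigns the remainder to error:

```python
import numpy as np

# Toy model: y = Z u + e with u ~ N(0, s_u2 I) and e ~ N(0, s_e2 I)
n_groups, per_group = 5, 10
Z = np.kron(np.eye(n_groups), np.ones((per_group, 1)))   # 50 x 5 design

s_u2, s_e2 = 4.0, 1.0
V_effect = s_u2 * Z @ Z.T                     # modeled variance of the effect
V_total = V_effect + s_e2 * np.eye(Z.shape[0])

# DF for the effect: trace of V_effect V_total^{-1}; error takes the rest,
# so the n = 50 observations partition between effect and error
df_effect = np.trace(V_effect @ np.linalg.inv(V_total))
df_error = Z.shape[0] - df_effect
```

Here each group block of `V_effect` has one eigenvalue of 40, so the effect's DF is 5 × 40/41 ≈ 4.88: close to, but smoothed below, the 5 DF a fixed-effects fit would spend.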
The Effect of Patient and Surgical Characteristics on Renal Function After Partial Nephrectomy.
Winer, Andrew G; Zabor, Emily C; Vacchio, Michael J; Hakimi, A Ari; Russo, Paul; Coleman, Jonathan A; Jaimes, Edgar A
2018-06-01
The purpose of the study was to identify patient and disease characteristics that have an adverse effect on renal function after partial nephrectomy. We conducted a retrospective review of 387 patients who underwent partial nephrectomy for renal tumors between 2006 and 2014. A line plot with a locally weighted scatterplot smoothing was generated to visually assess renal function over time. Univariable and multivariable longitudinal regression analyses incorporated a random intercept and slope to evaluate the association between patient and disease characteristics with renal function after surgery. Median age was 60 years and most patients were male (255 patients [65.9%]) and white (343 patients [88.6%]). In univariable analysis, advanced age at surgery, larger tumor size, male sex, longer ischemia time, history of smoking, and hypertension were significantly associated with lower preoperative estimated glomerular filtration rate (eGFR). In multivariable analysis, independent predictors of reduced renal function after surgery included advanced age, lower preoperative eGFR, and longer ischemia time. Length of time from surgery was strongly associated with improvement in renal function among all patients. Independent predictors of postoperative decline in renal function include advanced age, lower preoperative eGFR, and longer ischemia time. A substantial number of subjects had recovery in renal function over time after surgery, which continued past the 12-month mark. These findings suggest that patients who undergo partial nephrectomy can experience long-term improvement in renal function. This improvement is most pronounced among younger patients with higher preoperative eGFR. Copyright © 2017 Elsevier Inc. All rights reserved.
Time-Dependent Computed Tomographic Perfusion Thresholds for Patients With Acute Ischemic Stroke.
d'Esterre, Christopher D; Boesen, Mari E; Ahn, Seong Hwan; Pordeli, Pooneh; Najm, Mohamed; Minhas, Priyanka; Davari, Paniz; Fainardi, Enrico; Rubiera, Marta; Khaw, Alexander V; Zini, Andrea; Frayne, Richard; Hill, Michael D; Demchuk, Andrew M; Sajobi, Tolulope T; Forkert, Nils D; Goyal, Mayank; Lee, Ting Y; Menon, Bijoy K
2015-12-01
Among patients with acute ischemic stroke, we determine computed tomographic perfusion (CTP) thresholds associated with follow-up infarction at different stroke onset-to-CTP and CTP-to-reperfusion times. Acute ischemic stroke patients with occlusion on computed tomographic angiography were acutely imaged with CTP. Noncontrast computed tomography and magnetic resonance diffusion-weighted imaging between 24 and 48 hours were used to delineate follow-up infarction. Reperfusion was assessed on conventional angiogram or 4-hour repeat computed tomographic angiography. Tmax, cerebral blood flow, and cerebral blood volume derived from delay-insensitive CTP postprocessing were analyzed using receiver-operator characteristic curves to derive optimal thresholds for combined patient data (pooled analysis) and individual patients (patient-level analysis) based on time from stroke onset-to-CTP and CTP-to-reperfusion. One-way ANOVA and locally weighted scatterplot smoothing regression were used to test whether the derived optimal CTP thresholds differed by time. One hundred and thirty-two patients were included. Tmax thresholds of >16.2 and >15.8 s and absolute cerebral blood flow thresholds of <8.9 and <7.4 mL·min(-1)·100 g(-1) were associated with infarct if reperfused <90 min from CTP with onset <180 min. The discriminative ability of cerebral blood volume was modest. No statistically significant relationship was noted between stroke onset-to-CTP time and the optimal CTP thresholds for all parameters based on discrete or continuous time analysis (P>0.05). A statistically significant relationship existed between CTP-to-reperfusion time and the optimal thresholds for cerebral blood flow (P<0.001; r=0.59 and 0.77 for gray and white matter, respectively) and Tmax (P<0.001; r=-0.68 and -0.60 for gray and white matter, respectively) parameters. Optimal CTP thresholds associated with follow-up infarction depend on time from imaging to reperfusion.
© 2015 American Heart Association, Inc.
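Deriving an "optimal" perfusion threshold from a receiver-operator analysis reduces to maximizing sensitivity + specificity − 1 (Youden's J) over candidate cutoffs. The sketch below uses invented voxel distributions, not the study's CBF values:

```python
import numpy as np

rng = np.random.default_rng(10)

# Hypothetical perfusion values: infarcted tissue tends to have lower values
infarct = rng.normal(7, 2, 1000)      # e.g. CBF-like units
salvaged = rng.normal(15, 3, 1000)

# Sweep candidate thresholds; classify "infarct" when value < threshold
thresholds = np.linspace(0, 25, 251)
sens = np.array([(infarct < t).mean() for t in thresholds])
spec = np.array([(salvaged >= t).mean() for t in thresholds])

# Youden's J picks the threshold maximizing sens + spec - 1
best = thresholds[np.argmax(sens + spec - 1)]
```

Repeating this derivation within strata of CTP-to-reperfusion time is how a time dependence of the optimal threshold, as reported above, would surface.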
Adjustment of Pesticide Concentrations for Temporal Changes in Analytical Recovery, 1992-2006
Martin, Jeffrey D.; Stone, Wesley W.; Wydoski, Duane S.; Sandstrom, Mark W.
2009-01-01
Recovery is the proportion of a target analyte that is quantified by an analytical method and is a primary indicator of the analytical bias of a measurement. Recovery is measured by analysis of quality-control (QC) water samples that have known amounts of target analytes added ('spiked' QC samples). For pesticides, recovery is the measured amount of pesticide in the spiked QC sample expressed as percentage of the amount spiked, ideally 100 percent. Temporal changes in recovery have the potential to adversely affect time-trend analysis of pesticide concentrations by introducing trends in environmental concentrations that are caused by trends in performance of the analytical method rather than by trends in pesticide use or other environmental conditions. This report examines temporal changes in the recovery of 44 pesticides and 8 pesticide degradates (hereafter referred to as 'pesticides') that were selected for a national analysis of time trends in pesticide concentrations in streams. Water samples were analyzed for these pesticides from 1992 to 2006 by gas chromatography/mass spectrometry. Recovery was measured by analysis of pesticide-spiked QC water samples. Temporal changes in pesticide recovery were investigated by calculating robust, locally weighted scatterplot smooths (lowess smooths) for the time series of pesticide recoveries in 5,132 laboratory reagent spikes; 1,234 stream-water matrix spikes; and 863 groundwater matrix spikes. A 10-percent smoothing window was selected to show broad, 6- to 12-month time scale changes in recovery for most of the 52 pesticides. Temporal patterns in recovery were similar (in phase) for laboratory reagent spikes and for matrix spikes for most pesticides. In-phase temporal changes among spike types support the hypothesis that temporal change in method performance is the primary cause of temporal change in recovery. 
Although temporal patterns of recovery were in phase for most pesticides, recovery in matrix spikes was greater than recovery in reagent spikes for nearly every pesticide. Models of recovery based on matrix spikes are deemed more appropriate for adjusting concentrations of pesticides measured in groundwater and stream-water samples than models based on laboratory reagent spikes because (1) matrix spikes are expected to more closely match the matrix of environmental water samples than are reagent spikes and (2) method performance is often matrix dependent, as was shown by higher recovery in matrix spikes for most of the pesticides. Models of recovery, based on lowess smooths of matrix spikes, were developed separately for groundwater and stream-water samples. The models of recovery can be used to adjust concentrations of pesticides measured in groundwater or stream-water samples to 100 percent recovery to compensate for temporal changes in the performance (bias) of the analytical method.
Alternative Smoothing and Scaling Strategies for Weighted Composite Scores
ERIC Educational Resources Information Center
Moses, Tim
2014-01-01
In this study, smoothing and scaling approaches are compared for estimating subscore-to-composite scaling results involving composites computed as rounded and weighted combinations of subscores. The considered smoothing and scaling approaches included those based on raw data, on smoothing the bivariate distribution of the subscores, on smoothing…
Taube, Nadine; He, Jianxun; Ryan, M Cathryn; Valeo, Caterina
2016-08-01
The role of nutrient loading on biomass growth in wastewater-impacted rivers is important in order to effectively optimize wastewater treatment to avoid excessive biomass growth in the receiving water body. This paper directly relates wastewater treatment plant (WWTP) effluent nutrients (including ammonia (NH3-N), nitrate (NO3-N) and total phosphorus (TP)) to the temporal and spatial distribution of epilithic algae and macrophyte biomass in an oligotrophic river. Annual macrophyte biomass, epilithic algae data and WWTP effluent nutrient data from 1980 to 2012 were statistically analysed. Because discharge can affect aquatic biomass growth, locally weighted scatterplot smoothing (LOWESS) was used to remove the influence of river discharge from the aquatic biomass (macrophytes and algae) data before further analysis was conducted. The results from LOWESS indicated that aquatic biomass did not increase beyond site-specific threshold discharge values in the river. The LOWESS-estimated biomass residuals showed a variable response to different nutrients. Macrophyte biomass residuals showed a decreasing trend concurrent with enhanced nutrient removal at the WWTP and decreased effluent P loading, whereas epilithic algae biomass residuals showed greater response to enhanced N removal. Correlation analysis between effluent nutrient concentrations and the biomass residuals (both epilithic algae and macrophytes) suggested that aquatic biomass is nitrogen limited, especially by NH3-N, at most sampling sites. The response of aquatic biomass residuals to effluent nutrient concentrations did not change with increasing distance to the WWTP but was different for P and N, allowing for additional conclusions about nutrient limitation in specific river reaches. The data further showed that the mixing process between the effluent and the river has an influence on the spatial distribution of biomass growth.
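Removing the discharge signal before relating biomass to nutrients amounts to fitting a LOWESS curve of biomass against discharge and working with the residuals. The saturating biomass-discharge relationship below is invented for illustration:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(7)

# Hypothetical biomass vs. river discharge: growth rises, then plateaus
# beyond a threshold discharge (as the LOWESS results in the paper suggest)
discharge = np.sort(rng.uniform(1, 100, 200))
biomass = 30 * (1 - np.exp(-discharge / 20)) + rng.normal(0, 2, 200)

# LOWESS absorbs the discharge influence; the residuals are what remains
# to be correlated with effluent nutrient concentrations
fit = lowess(biomass, discharge, frac=0.2, return_sorted=False)
residuals = biomass - fit
```

Correlating `residuals` with NH3-N, NO3-N, or TP series (not simulated here) is then free of the confounding discharge effect.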
Sorbolini, Silvia; Marras, Gabriele; Gaspa, Giustino; Dimauro, Corrado; Cellesi, Massimo; Valentini, Alessio; Macciotta, Nicolò Pp
2015-06-23
Domestication and selection are processes that alter the pattern of within- and between-population genetic variability. They can be investigated at the genomic level by tracing the so-called selection signatures. Recently, sequence polymorphisms at the genome-wide level have been investigated in a wide range of animals. A common approach to detect selection signatures is to compare breeds that have been selected for different breeding goals (i.e. dairy and beef cattle). However, genetic variations in different breeds with similar production aptitudes and similar phenotypes can be related to differences in their selection history. In this study, we investigated selection signatures between two Italian beef cattle breeds, Piemontese and Marchigiana, using genotyping data that was obtained with the Illumina BovineSNP50 BeadChip. The comparison was based on the fixation index (Fst), combined with a locally weighted scatterplot smoothing (LOWESS) regression and a control chart approach. In addition, analyses of Fst were carried out to confirm candidate genes. In particular, data were processed using the varLD method, which compares the regional variation of linkage disequilibrium between populations. Genome scans confirmed the presence of selective sweeps in the genomic regions that harbour candidate genes that are known to affect productive traits in cattle such as DGAT1, ABCG2, CAPN3, MSTN and FTO. In addition, several new putative candidate genes (for example ALAS1, ABCB8, ACADS and SOD1) were detected. This study provided evidence on the different selection histories of two cattle breeds and the usefulness of genomic scans to detect selective sweeps even in cattle breeds that are bred for similar production aptitudes.
Luykx, Jurjen J.; Bakker, Steven C.; Lentjes, Eef; Boks, Marco P. M.; van Geloven, Nan; Eijkemans, Marinus J. C.; Janson, Esther; Strengman, Eric; de Lepper, Anne M.; Westenberg, Herman; Klopper, Kai E.; Hoorn, Hendrik J.; Gelissen, Harry P. M. M.; Jordan, Julian; Tolenaar, Noortje M.; van Dongen, Eric P. A.; Michel, Bregt; Abramovic, Lucija; Horvath, Steve; Kappen, Teus; Bruins, Peter; Keijzers, Peter; Borgdorff, Paul; Ophoff, Roel A.; Kahn, René S.
2012-01-01
Background Animal studies have revealed seasonal patterns in cerebrospinal fluid (CSF) monoamine (MA) turnover. In humans, no study had systematically assessed seasonal patterns in CSF MA turnover in a large set of healthy adults. Methodology/Principal Findings Standardized amounts of CSF were prospectively collected from 223 healthy individuals undergoing spinal anesthesia for minor surgical procedures. The metabolites of serotonin (5-hydroxyindoleacetic acid, 5-HIAA), dopamine (homovanillic acid, HVA) and norepinephrine (3-methoxy-4-hydroxyphenylglycol, MPHG) were measured using high performance liquid chromatography (HPLC). Concentration measurements by sampling and birth dates were modeled using a non-linear quantile cosine function and locally weighted scatterplot smoothing (LOESS, span = 0.75). The cosine model showed a unimodal season of sampling 5-HIAA zenith in April and a nadir in October (p-value of the amplitude of the cosine = 0.00050), with predicted maximum (PCmax) and minimum (PCmin) concentrations of 173 and 108 nmol/L, respectively, implying a 60% increase from trough to peak. Season of birth showed a unimodal 5-HIAA zenith in May and a nadir in November (p = 0.00339; PCmax = 172 and PCmin = 126). The non-parametric LOESS showed a similar pattern to the cosine in both season of sampling and season of birth models, validating the cosine model. A final model including both sampling and birth months demonstrated that both sampling and birth seasons were independent predictors of 5-HIAA concentrations. Conclusion In subjects without mental illness, 5-HT turnover shows circannual variation by season of sampling as well as season of birth, with peaks in spring and troughs in fall. PMID:22312427
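A single-harmonic cosine (cosinor) model of seasonality, of the kind validated here against LOESS, can be fit by nonlinear least squares. The sketch below uses simulated concentrations with a built-in April peak; it is a simplified stand-in for the paper's quantile cosine model:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)

# Simulated metabolite concentrations by sampling month, peaking in April
month = rng.integers(1, 13, 400).astype(float)
true = 140 + 33 * np.cos(2 * np.pi * (month - 4) / 12)
conc = true + rng.normal(0, 15, month.size)

def cosinor(m, mesor, amplitude, acrophase):
    """Cosine seasonal model: mean level (mesor), swing (amplitude),
    and month of the peak (acrophase)."""
    return mesor + amplitude * np.cos(2 * np.pi * (m - acrophase) / 12)

params, _ = curve_fit(cosinor, month, conc, p0=[140.0, 30.0, 4.0])
mesor, amplitude, acrophase = params
```

Predicted maximum and minimum concentrations follow as `mesor + |amplitude|` and `mesor - |amplitude|`, the analogues of the PCmax and PCmin figures quoted above. A LOESS fit of `conc` on `month` would serve as the nonparametric cross-check used in the study.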
Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleijnen, J.P.C.; Helton, J.C.
1999-04-01
The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
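The first three detection steps above (linear, monotonic, and central-tendency trends) can be sketched with SciPy. The quantile-based binning for the Kruskal-Wallis step and the test data are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def scatterplot_tests(x, y, bins=5):
    """Battery of increasingly general pattern tests for one scatterplot:
    linear (Pearson r), monotonic (Spearman rank correlation), and trend in
    central tendency (Kruskal-Wallis across quantile bins of x)."""
    x = np.asarray(x)
    y = np.asarray(y)
    r, p_lin = stats.pearsonr(x, y)
    rho, p_mono = stats.spearmanr(x, y)
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    groups = [y[(x >= lo) & (x <= hi)] for lo, hi in zip(edges[:-1], edges[1:])]
    h, p_ct = stats.kruskal(*groups)
    return {"pearson": (r, p_lin), "spearman": (rho, p_mono), "kruskal": (h, p_ct)}

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = x + 0.1 * rng.normal(size=200)  # strong linear signal plus noise
res = scatterplot_tests(x, y)
```

In a sensitivity analysis this battery would be run once per input variable, with the multiple-testing caveat the paper raises about unavoidable Type I errors.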
Pointwise convergence of derivatives of Lagrange interpolation polynomials for exponential weights
NASA Astrophysics Data System (ADS)
Damelin, S. B.; Jung, H. S.
2005-01-01
For a general class of exponential weights on the line and on (-1,1), we study pointwise convergence of the derivatives of Lagrange interpolation. Our weights include even weights of smooth polynomial decay near ±∞ (Freud weights), even weights of faster than smooth polynomial decay near ±∞ (Erdos weights) and even weights which vanish strongly near ±1, for example Pollaczek-type weights.
Approximating scatterplots of large datasets using distribution splats
NASA Astrophysics Data System (ADS)
Camuto, Matthew; Crawfis, Roger; Becker, Barry G.
2000-02-01
Many situations exist where the plotting of large data sets with categorical attributes is desired in a 3D coordinate system. For example, a marketing company may conduct a survey involving one million subjects and then plot people's favorite car type against their weight, height and annual income. Scatter point plotting, in which each point is individually plotted at its corresponding Cartesian location using a defined primitive, is usually used to render a plot of this type. If the dependent variable is continuous, we can discretize the 3D space into bins or voxels and retain the average value of all records falling within each voxel. Previous work employed volume rendering techniques, in particular splatting, to represent this aggregated data, by mapping each average value to a representative color.
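The voxel aggregation step described above (retain the mean dependent value per voxel) can be sketched with NumPy histograms; the bin count and synthetic data are illustrative:

```python
import numpy as np

def voxel_averages(points, values, bins=8):
    """Aggregate a large 3-D point set into voxels, keeping the mean of the
    dependent variable in each voxel -- the aggregation step behind
    splat-based scatterplot approximation."""
    counts, edges = np.histogramdd(points, bins=bins)
    sums, _ = np.histogramdd(points, bins=edges, weights=values)
    means = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    return means, counts, edges

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(10000, 3))
vals = pts[:, 0]  # dependent variable rises along the first axis
means, counts, _ = voxel_averages(pts, vals, bins=4)
```

A renderer would then draw one splat per non-empty voxel, colored by its mean value and sized or weighted by its count.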
Heavy metal concentrations in commercial deep-sea fish from the Rockall Trough
NASA Astrophysics Data System (ADS)
Mormede, S.; Davies, I. M.
2001-05-01
Samples of monkfish (Lophius piscatorius), black scabbard (Aphanopus carbo), blue ling (Molva dypterygia), blue whiting (Micromesistius poutassou) and hake (Merluccius merluccius) were obtained from 400 to 1150 m depth on the continental slope of Rockall Trough west of Scotland. Muscle, liver, gill and gonad tissue were analysed for arsenic, cadmium, copper, lead, mercury and zinc by various atomic absorption techniques. Median concentrations of arsenic in muscle tissue ranged from 1.25 to 8.63 mg/kg wet weight and in liver tissue from 3.04 to 5.72 mg/kg wet weight; cadmium in muscle tissue from <0.002 to 0.034 mg/kg wet weight and in liver tissue from 0.11 to 6.98 mg/kg wet weight; copper in muscle from 0.12 to 0.29 mg/kg wet weight and in liver from 3.47 to 11.87 mg/kg wet weight; lead in muscle from <0.002 to 0.009 mg/kg wet weight, and in liver tissue <0.05 mg/kg wet weight for all species. In general, the concentrations are similar to those previously published for deep-sea fish, and similar to or higher than those published for their shallow-water counterparts. All metal levels in black scabbard livers are much higher than in the other fish, and between 2 and 30 times higher than the limits of the European Dietary Standards and Guidelines. Differences in accumulation patterns between species and elements, as well as between organs, are described using univariate and multivariate statistics (scatterplots, discriminant analysis, triangular plots).
Qualitative human body composition analysis assessed with bioelectrical impedance.
Talluri, T
1998-12-01
Body composition analysis generally aims at quantitative estimates of fat mass, which are inadequate to assess nutritional states; those states are, on the other hand, well defined by the intra-/extracellular mass proportion (ECM/BCM). Direct measurements performed with phase-sensitive bioelectrical impedance analyzers can be used to define the current distribution in normal and abnormal populations. The phase angle and reactance nomogram directly reflects the ECM/BCM proportions, and body impedance analysis (BIA) is also validated to estimate the individual content of body cell mass (BCM). A new body cell mass index (BCMI), obtained by dividing the weight of BCM in kilograms by the body surface in square meters, is compared with the scatterplot distribution of phase angle and reactance values obtained from controls and patients, and proposed as a qualitative approach to identify abnormal ECM/BCM ratios and nutritional states.
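The proposed index is a simple ratio and can be sketched directly. The abstract does not specify which body-surface-area estimate was used; the Du Bois formula below is an assumption, and the input values are illustrative:

```python
def body_cell_mass_index(bcm_kg, height_cm, weight_kg):
    """BCMI = body cell mass (kg) / body surface area (m^2).

    BSA is estimated here with the Du Bois formula -- an assumption, since
    the abstract does not name the surface-area estimate it used.
    """
    bsa = 0.007184 * (height_cm ** 0.725) * (weight_kg ** 0.425)
    return bcm_kg / bsa

# Illustrative subject: 30 kg of body cell mass, 175 cm, 70 kg
bcmi = body_cell_mass_index(bcm_kg=30.0, height_cm=175.0, weight_kg=70.0)
```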
New approaches for calculating Moran's index of spatial autocorrelation.
Chen, Yanguang
2013-01-01
Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran's index is complicated, and several basic problems remain to be solved. Therefore, I will reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran's index. Moran's scatterplot will be ameliorated, and new test methods will be proposed. The relationship between the global Moran's index and Geary's coefficient will be discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran's index and Geary's coefficient will be clarified and defined. One of the theoretical findings is that Moran's index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 will be employed to validate the innovatory models and methods. This work is a methodological study, which will simplify the process of autocorrelation analysis. The results of this study will lay the foundation for the scaling analysis of spatial autocorrelation.
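The global Moran's index discussed above can be computed directly from its standard definition; the toy chain adjacency below is illustrative, not the paper's case-study data:

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I for values x under spatial weight matrix W:
    I = (n / S0) * (z' W z) / (z' z), where z are deviations from the mean
    and S0 is the sum of all weights."""
    x = np.asarray(x, dtype=float)
    W = np.asarray(W, dtype=float)
    n = x.size
    z = x - x.mean()
    s0 = W.sum()
    return (n / s0) * (z @ W @ z) / (z @ z)

# Adjacency weights on a 1-D chain of 5 locations
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0

I_clustered = morans_i([1, 2, 3, 4, 5], W)    # smooth gradient -> positive I
I_alternating = morans_i([1, 5, 1, 5, 1], W)  # checkerboard -> negative I
```

The smooth gradient gives I = 0.5 and the alternating pattern gives I = -1, matching the usual reading of Moran's I as positive for clustered and negative for dispersed values; the paper's point that I is a characteristic parameter of W can be seen by swapping in different weight matrices here.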
Spatial study of mortality in motorcycle accidents in the State of Pernambuco, Northeastern Brazil.
Silva, Paul Hindenburg Nobre de Vasconcelos; Lima, Maria Luiza Carvalho de; Moreira, Rafael da Silveira; Souza, Wayner Vieira de; Cabral, Amanda Priscila de Santana
2011-04-01
To analyze the spatial distribution of mortality due to motorcycle accidents in the state of Pernambuco, Northeastern Brazil. A population-based ecological study using data on mortality in motorcycle accidents from 01/01/2000 to 31/12/2005. The analysis units were the municipalities. For the spatial distribution analysis, an average mortality rate was calculated, using deaths from motorcycle accidents recorded in the Mortality Information System as the numerator and the mid-period population as the denominator. Spatial analysis techniques were used: smoothing of mortality rates by the local empirical Bayes method and the Moran scatterplot, applied to the digital cartographic base of Pernambuco. The average mortality rate for motorcycle accidents in Pernambuco was 3.47 per 100 thousand inhabitants. Of the 185 municipalities, 16 were part of five clusters identified with average mortality rates ranging from 5.66 to 11.66 per 100 thousand inhabitants, and were considered critical areas. Three clusters are located in the area known as sertão and two in the agreste of the state. The risk of dying from a motorcycle accident is greater in cluster areas outside the metropolitan axis, and intervention measures should consider the economic, social and cultural contexts.
A Simulation To Model Exponential Growth.
ERIC Educational Resources Information Center
Appelbaum, Elizabeth Berman
2000-01-01
Describes a simulation using dice-tossing students in a population cluster to model the growth of cancer cells. This growth is recorded in a scatterplot and compared to an exponential function graph. (KHR)
The Danger of Dichotomizing Continuous Variables: A Visualization
ERIC Educational Resources Information Center
Kuss, Oliver
2013-01-01
Four rather different scatterplots of two variables X and Y are given, which, after dichotomizing X and Y, result in identical fourfold-tables misleadingly showing no association. (Contains 1 table and 1 figure.)
Uda, Satoshi; Matsui, Mie; Tanaka, Chiaki; Uematsu, Akiko; Miura, Kayoko; Kawana, Izumi; Noguchi, Kyo
2015-01-01
Diffusion tensor imaging (DTI), which measures the magnitude of anisotropy of water diffusion in white matter, has recently been used to visualize and quantify parameters of neural tracts connecting brain regions. In order to investigate the developmental changes and sex and hemispheric differences of neural fibers in normal white matter, we used DTI to examine 52 healthy humans ranging in age from 2 months to 25 years. We extracted the following tracts of interest (TOIs) using the region of interest method: the corpus callosum (CC), cingulum hippocampus (CGH), inferior longitudinal fasciculus (ILF), and superior longitudinal fasciculus (SLF). We measured fractional anisotropy (FA), apparent diffusion coefficient (ADC), axial diffusivity (AD), and radial diffusivity (RD). Approximate values and changes in growth rates of all DTI parameters at each age were calculated and analyzed using LOESS (locally weighted scatterplot smoothing). We found that for all TOIs, FA increased with age, whereas ADC, AD and RD values decreased with age. The turning point of growth rates was at approximately 6 years. FA in the CC was greater than that in the SLF, ILF and CGH. Moreover, FA, ADC and AD of the splenium of the CC (sCC) were greater than in the genu of the CC (gCC), whereas the RD of the sCC was lower than the RD of the gCC. The FA of right-hemisphere TOIs was significantly greater than that of left-hemisphere TOIs. In infants, growth rates of both FA and RD were larger than those of AD. Our data show that developmental patterns differ by TOIs and myelination along with the development of white matter, which can be mainly expressed as an increase in FA together with a decrease in RD. These findings clarify the long-term normal developmental characteristics of white matter microstructure from infancy to early adulthood. © 2015 S. Karger AG, Basel.
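The LOESS fitting used above can be sketched with a minimal local-linear tricube smoother (no robustness iterations, unlike full LOWESS). The growth-curve data below are synthetic, shaped only loosely like the FA-versus-age pattern the study reports:

```python
import numpy as np

def lowess_fit(x, y, frac=0.2):
    """Minimal LOWESS: for each point, fit a weighted straight line over the
    nearest frac*n neighbours using tricube distance weights."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = x.size
    k = max(2, int(np.ceil(frac * n)))
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]
        h = d[idx].max() or 1.0           # bandwidth = farthest neighbour
        w = (1 - (d[idx] / h) ** 3) ** 3  # tricube kernel
        A = np.column_stack([np.ones(k), x[idx]])
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)
        fitted[i] = beta[0] + beta[1] * x[i]
    return fitted

# Synthetic FA-like growth curve: fast rise in childhood, then a plateau
rng = np.random.default_rng(2)
age = np.sort(rng.uniform(0.2, 25.0, 120))
true_fa = 0.5 * (1 - np.exp(-age / 4))
observed = true_fa + 0.03 * rng.normal(size=120)
smoothed = lowess_fit(age, observed, frac=0.2)
```

Growth rates like those analysed in the study would then be obtained by differencing the smoothed curve with respect to age.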
NASA Astrophysics Data System (ADS)
Huang, Chengcheng; Zheng, Xiaogu; Tait, Andrew; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Li, Tao; Wang, Zhonglei
2014-01-01
A partial thin-plate smoothing spline model is used to construct the trend surface. Correction of the spline-estimated trend surface is often necessary in practice. The Cressman weight is modified and applied in residual correction. The modified Cressman weight performs better than the original Cressman weight. A method for estimating the error covariance matrix of the gridded field is provided.
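The residual-correction idea above can be sketched with the classic Cressman weight; the paper's modified weight is not reproduced here, and the distances, residuals, and influence radius below are illustrative:

```python
import numpy as np

def cressman_weights(dists, radius):
    """Classic Cressman weight: w = (R^2 - d^2) / (R^2 + d^2) inside the
    influence radius R, zero outside."""
    d2 = np.asarray(dists, dtype=float) ** 2
    r2 = float(radius) ** 2
    return np.where(d2 < r2, (r2 - d2) / (r2 + d2), 0.0)

def correct_trend_surface(grid_value, stn_dists, stn_residuals, radius):
    """Adjust a spline-estimated trend value at one grid point with a
    Cressman-weighted average of nearby station residuals (observed minus
    trend value at the station)."""
    w = cressman_weights(stn_dists, radius)
    if w.sum() == 0:
        return grid_value  # no station inside the influence radius
    return grid_value + np.dot(w, np.asarray(stn_residuals, float)) / w.sum()

# A grid point whose spline trend underestimates nearby observations by ~1
v = correct_trend_surface(10.0, [5.0, 20.0, 80.0], [1.0, 1.2, -3.0], radius=50.0)
```

The station at distance 80 falls outside the radius and is ignored, so the correction is driven by the two nearby residuals.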
Kojima, Motonaga; Obuchi, Shuichi; Mizuno, Kousuke; Henmi, Osamu; Ikeda, Noriaki
2008-06-01
We propose a novel indicator for smoothness of movement, i.e., the power spectrum entropy of the acceleration time-series, and compare it with conventional indices of smoothness. For this purpose, nineteen healthy adults (21.3+/-2.5 years old) performed the task of raising and lowering a beaker between the level of the umbilicus and eye level under the two following conditions: one with the beaker containing water and the other with the beaker containing a weight of the same mass as the water. Moving the beaker up and down when it contained water required extra control to prevent the water from being spilled. This means that movement was not as smooth as when the beaker contained a weight. Under these two conditions, entropy was measured along with a traditional indicator of smoothness of movement, the jerk index. The entropy could distinguish just as well as the jerk index (p<0.01) between when water was used and when the weight was used. The entropy correlated highly with the jerk index, with Spearman's rho at 0.88 (p<0.01). These results showed that the entropy derived from the spectrum of the acceleration time-series during movement is useful as an indicator of the smoothness of that movement.
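The proposed indicator, the entropy of the acceleration power spectrum, can be sketched in a few lines: smooth movement concentrates power in few frequencies (low entropy), while jerky movement spreads it out (high entropy). The signals below are synthetic stand-ins for acceleration recordings:

```python
import numpy as np

def power_spectrum_entropy(signal, eps=1e-12):
    """Shannon entropy (bits) of the normalized power spectrum of a
    time series."""
    power = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    p = power / (power.sum() + eps)
    return -np.sum(p * np.log2(p + eps))

t = np.linspace(0, 2, 400, endpoint=False)
smooth = np.sin(2 * np.pi * 3 * t)              # single-frequency, smooth
rng = np.random.default_rng(3)
jerky = smooth + 0.8 * rng.normal(size=t.size)  # broadband, jerky

e_smooth = power_spectrum_entropy(smooth)
e_jerky = power_spectrum_entropy(jerky)
```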
Effects of trimming weight-for-height data on growth-chart percentiles
Flegal, Katherine M; Carroll, Margaret D; Ogden, Cynthia L
2016-01-01
Background Before estimating smoothed percentiles of weight-for-height and BMI-for-age to construct the WHO growth charts, WHO excluded observations that were considered to represent unhealthy weights for height. Objective The objective was to estimate the effects of similar data trimming on empirical percentiles from the CDC growth-chart data set relative to the smoothed WHO percentiles for ages 24–59 mo. Design We used the nationally representative US weight and height data from 1971 to 1994, which was the source data for the 2000 CDC growth charts. Trimming cutoffs were calculated on the basis of weight-for-height for 9722 children aged 24–71 mo. Empirical percentiles for 7315 children aged 24–59 mo were compared with the corresponding smoothed WHO percentiles. Results Before trimming, the mean empirical percentiles for weight-for-height in the CDC data set were higher than the corresponding smoothed WHO percentiles. After trimming, the mean empirical 95th and 97th percentiles of weight-for-height were lower than the WHO percentiles, and the proportion of children in the CDC data set above the WHO 95th percentile decreased from 7% to 5%. The findings were similar for BMI-for-age. However, for weight-for-age, which had not been trimmed by the WHO, the empirical percentiles before trimming agreed closely with the upper percentiles from the WHO charts. Conclusion WHO data-trimming procedures may account for some of the differences between the WHO growth charts and the 2000 CDC growth charts. PMID:22990032
On splice site prediction using weight array models: a comparison of smoothing techniques
NASA Astrophysics Data System (ADS)
Taher, Leila; Meinicke, Peter; Morgenstern, Burkhard
2007-11-01
In most eukaryotic genes, protein-coding exons are separated by non-coding introns which are removed from the primary transcript by a process called "splicing". The positions where introns are cut and exons are spliced together are called "splice sites". Thus, computational prediction of splice sites is crucial for gene finding in eukaryotes. Weight array models are a powerful probabilistic approach to splice site detection. Parameters for these models are usually derived from m-tuple frequencies in trusted training data and subsequently smoothed to avoid zero probabilities. In this study we compare three different ways of parameter estimation for m-tuple frequencies, namely (a) non-smoothed probability estimation, (b) standard pseudo counts and (c) a Gaussian smoothing procedure that we recently developed.
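The pseudocount smoothing compared above, option (b), can be sketched for a single position of a position-specific model. Full weight array models condition each position on the preceding m-tuple; the zeroth-order version below shows only the smoothing idea, and the toy training set is illustrative:

```python
def wam_probs(sequences, pos, alphabet="ACGT", pseudo=1.0):
    """Position-specific nucleotide probabilities estimated from aligned
    training sequences with additive (pseudocount) smoothing, so that no
    probability is ever zero."""
    counts = {a: pseudo for a in alphabet}
    for seq in sequences:
        counts[seq[pos]] += 1
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

# Toy donor-splice-site-like training set; every sequence starts with G
train = ["GTAAGT", "GTGAGT", "GTAAGA", "GTAAGG"]
p0 = wam_probs(train, 0)
```

Without the pseudocounts, A, C, and T would get probability zero at position 0 and any candidate site starting with them would score minus infinity under a log-likelihood model; with pseudo=1 they keep a small positive probability.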
Lemke, Arne-Jörn; Brinkmann, Martin Julius; Schott, Thomas; Niehues, Stefan Markus; Settmacher, Utz; Neuhaus, Peter; Felix, Roland
2006-09-01
To prospectively develop equations for the calculation of expected intraoperative weight and volume of a living donor's right liver lobe by using preoperative computed tomography (CT) for volumetric measurement. After medical ethics committee and state medical board approval, informed consent was obtained from eight female and eight male living donors (age range, 18-63 years) for participation in preoperative CT volumetric measurement of the right liver lobes by using the summation-of-area method. Intraoperatively, the graft was weighed, and the volume of the graft was determined by means of water displacement. Distributions of pre- and intraoperative data were depicted as Tukey box-and-whisker diagrams. Then, linear regressions were calculated, and the results were depicted as scatterplots. On the basis of intraoperative data, physical density of the parenchyma was calculated by dividing weight by volume of the graft. Preoperative measurement of grafts resulted in a mean volume of 929 mL +/- 176 (standard deviation); intraoperative mean weight and volume of the grafts were 774 g +/- 138 and 697 mL +/- 139, respectively. All corresponding pre- and intraoperative data correlated significantly (P < .001) with each other. The expected intraoperative volume (V_intraop, in milliliters) and weight (W_intraop, in grams) can be calculated with the equations V_intraop = 0.656 · V_preop + 87.629 mL and W_intraop = 0.678 g/mL · V_preop + 143.704 g, respectively, where V_preop is the preoperative volume in milliliters. Physical density of transplanted liver lobes was 1.1172 g/mL +/- 0.1015. By using two equations developed from the data obtained in this study, expected intraoperative weight and volume can properly be determined from CT volumetric measurements. (c) RSNA, 2006.
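The two regression equations can be written directly in code; as a consistency check, plugging in the study's mean preoperative volume (929 mL) reproduces approximately the reported mean intraoperative volume (697 mL) and weight (774 g):

```python
def expected_graft(v_preop_ml):
    """Intraoperative volume (mL) and weight (g) of a right-lobe graft,
    predicted from preoperative CT volumetry with the study's equations."""
    v_intraop = 0.656 * v_preop_ml + 87.629
    w_intraop = 0.678 * v_preop_ml + 143.704
    return v_intraop, w_intraop

v, w = expected_graft(929.0)  # the study's mean preoperative volume
```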
Pintucci, Giuseppe; Yu, Pey-Jen; Saponara, Fiorella; Kadian-Dodov, Daniella L; Galloway, Aubrey C; Mignatti, Paolo
2005-08-15
Basic fibroblast growth factor (FGF-2) and platelet-derived growth factor (PDGF) are implicated in vascular remodeling secondary to injury. Both growth factors control vascular endothelial and smooth muscle cell proliferation, migration, and survival through overlapping intracellular signaling pathways. In vascular smooth muscle cells PDGF-BB induces FGF-2 expression. However, the effect of PDGF on the different forms of FGF-2 has not been elucidated. Here, we report that treatment of vascular aortic smooth muscle cells with PDGF-BB rapidly induces expression of 20.5 and 21 kDa, high molecular weight (HMW) FGF-2 that accumulates in the nucleus and nucleolus. Conversely, PDGF treatment has little or no effect on 18 kDa, low-molecular weight FGF-2 expression. PDGF-BB-induced upregulation of HMW FGF-2 expression is controlled by sustained activation of extracellular signal-regulated kinase (ERK)-1/2 and is abolished by actinomycin D. These data describe a novel interaction between PDGF-BB and FGF-2, and indicate that the nuclear forms of FGF-2 may mediate the effect of PDGF activity on vascular smooth muscle cells.
Using Maslow's Hierarchy of Needs to Identify Indicators of Potential Mass Migration Events
2016-03-23
Shafer, Steven L; Lemmer, Bjoern; Boselli, Emmanuel; Boiste, Fabienne; Bouvet, Lionel; Allaouchiche, Bernard; Chassard, Dominique
2010-10-01
The duration of analgesia from epidural administration of local anesthetics to parturients has been shown to follow a rhythmic pattern according to the time of drug administration. We studied whether there was a similar pattern after intrathecal administration of bupivacaine in parturients. In the course of the analysis, we came to believe that some data points coincident with provider shift changes were influenced by nonbiological, health care system factors, thus incorrectly suggesting a periodic signal in duration of labor analgesia. We developed graphical and analytical tools to help assess the influence of individual points on the chronobiological analysis. Women with singleton term pregnancies in vertex presentation, cervical dilation 3 to 5 cm, pain score >50 mm (of 100 mm), and requesting labor analgesia were enrolled in this study. Patients received 2.5 mg of intrathecal bupivacaine in 2 mL using a combined spinal-epidural technique. Analgesia duration was the time from intrathecal injection until the first request for additional analgesia. The duration of analgesia was analyzed by visual inspection of the data, application of smoothing functions (Supersmoother; LOWESS and LOESS [locally weighted scatterplot smoothing functions]), analysis of variance, Cosinor (Chronos-Fit), Excel, and NONMEM (nonlinear mixed effect modeling). Confidence intervals (CIs) were determined by bootstrap analysis (1000 replications with replacement) using PLT Tools. Eighty-two women were included in the study. Examination of the raw data using 3 smoothing functions revealed a bimodal pattern, with a peak at approximately 0630 and a subsequent peak in the afternoon or evening, depending on the smoother. Analysis of variance did not identify any statistically significant difference between the duration of analgesia when intrathecal injection was given from midnight to 0600 compared with the duration of analgesia after intrathecal injection at other times. 
Chronos-Fit, Excel, and NONMEM produced identical results, with a mean duration of analgesia of 38.4 minutes (95% CI: 35.4-41.6 minutes), an 8-hour periodic waveform with an amplitude of 5.8 minutes (95% CI: 2.1-10.7 minutes), and a phase offset of 6.5 hours (95% CI: 5.4-8.0 hours) relative to midnight. The 8-hour periodic model did not reach statistical significance in 40% of bootstrap analyses, implying that statistical significance of the 8-hour periodic model was dependent on a subset of the data. Two data points before the change of shift at 0700 contributed most strongly to the statistical significance of the periodic waveform. Without these data points, there was no evidence of an 8-hour periodic waveform for intrathecal bupivacaine analgesia. Chronobiology includes the influence of external daily rhythms in the environment (e.g., nursing shifts) as well as human biological rhythms. We were able to distinguish the influence of an external rhythm by combining several novel analyses: (1) graphical presentation superimposing the raw data, external rhythms (e.g., nursing and anesthesia provider shifts), and smoothing functions; (2) graphical display of the contribution of each data point to the statistical significance; and (3) bootstrap analysis to identify whether the statistical significance was highly dependent on a data subset. These approaches suggested that 2 data points were likely artifacts of the change in nursing and anesthesia shifts. When these points were removed, there was no suggestion of biological rhythm in the duration of intrathecal bupivacaine analgesia.
CIACCIO, EDWARD J.; BIVIANO, ANGELO B.; GAMBHIR, ALOK; EINSTEIN, ANDREW J.; GARAN, HASAN
2014-01-01
Background When atrial fibrillation (AF) is incessant, imaging during a prolonged ventricular RR interval may improve image quality. It was hypothesized that long RR intervals could be predicted from preceding RR values. Methods From the PhysioNet database, electrocardiogram RR intervals were obtained from 74 persistent AF patients. An RR interval lengthened by at least 250 ms beyond the immediately preceding RR interval (termed T0 and T1, respectively) was considered prolonged. A two-parameter scatterplot was used to predict the occurrence of a prolonged interval T0. The scatterplot parameters were: (1) RR variability (RRv) estimated as the average second derivative from 10 previous pairs of RR differences, T13–T2, and (2) Tm–T1, the difference between Tm, the mean from T13 to T2, and T1. For each patient, scatterplots were constructed using preliminary data from the first hour. The ranges of parameters 1 and 2 were adjusted to maximize the proportion of prolonged RR intervals within range. These constraints were used for prediction of prolonged RR in test data collected during the second hour. Results The mean prolonged event was 1.0 seconds in duration. Actual prolonged events were identified with a mean positive predictive value (PPV) of 80% in the test set. PPV was >80% in 36 of 74 patients. An average of 10.8 prolonged RR intervals per 60 minutes was correctly identified. Conclusions A method was developed to predict prolonged RR intervals using two parameters and prior statistical sampling for each patient. This or similar methodology may help improve cardiac imaging in many longstanding persistent AF patients. PMID:23998759
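The two scatterplot parameters described above can be sketched as follows. Interpreting "average second derivative" as the mean absolute second difference of the 12 preceding intervals T13..T2 is an assumption, and the RR history below is illustrative:

```python
import numpy as np

def rr_features(rr_history):
    """The two predictor parameters from 13 preceding RR intervals
    (T13..T1, most recent last):
      - RRv: mean absolute second difference over T13..T2 (10 values) --
        an assumed reading of 'average second derivative';
      - Tm - T1: deviation of the latest interval from the recent mean."""
    h = np.asarray(rr_history, dtype=float)
    assert h.size == 13, "expects exactly 13 preceding RR intervals"
    rrv = np.mean(np.abs(np.diff(h[:-1], n=2)))  # over T13..T2
    tm = h[:-1].mean()                           # mean of T13..T2
    return rrv, tm - h[-1]                       # (RRv, Tm - T1)

# Steady rhythm around 800 ms whose latest beat came early (short T1):
rrv, dev = rr_features([800] * 12 + [600])
```

A prolonged next interval (T0) would then be predicted when (RRv, Tm - T1) falls inside the per-patient ranges tuned on the first hour of data.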
Visualizing Qualitative Information
ERIC Educational Resources Information Center
Slone, Debra J.
2009-01-01
The abundance of qualitative data in today's society and the need to easily scrutinize, digest, and share this information calls for effective visualization and analysis tools. Yet, no existing qualitative tools have the analytic power, visual effectiveness, and universality of familiar quantitative instruments like bar charts, scatter-plots, and…
Exponential smoothing weighted correlations
NASA Astrophysics Data System (ADS)
Pozzi, F.; Di Matteo, T.; Aste, T.
2012-06-01
In many practical applications, correlation matrices might be affected by the "curse of dimensionality" and by an excessive sensitiveness to outliers and remote observations. These shortcomings can cause problems of statistical robustness especially accentuated when a system of dynamic correlations over a running window is concerned. These drawbacks can be partially mitigated by assigning a structure of weights to observational events. In this paper, we discuss Pearson's ρ and Kendall's τ correlation matrices, weighted with an exponential smoothing, computed on moving windows using a data-set of daily returns for 300 NYSE highly capitalized companies in the period between 2001 and 2003. Criteria for jointly determining optimal weights together with the optimal length of the running window are proposed. We find that the exponential smoothing can provide more robust and reliable dynamic measures and we discuss that a careful choice of the parameters can reduce the autocorrelation of dynamic correlations whilst keeping significance and robustness of the measure. Weighted correlations are found to be smoother and recovering faster from market turbulence than their unweighted counterparts, helping also to discriminate more effectively genuine from spurious correlations.
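The exponentially smoothed Pearson correlation discussed above can be sketched directly: the most recent observation in the window gets weight 1, the previous one α, then α², and so on. The decay factor and synthetic return series below are illustrative:

```python
import numpy as np

def exp_weighted_corr(x, y, alpha=0.94):
    """Pearson correlation with exponential-smoothing weights, so that old
    observations in the window fade smoothly rather than dropping out."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = x.size
    w = alpha ** np.arange(n - 1, -1, -1)  # newest observation weighted 1
    w = w / w.sum()
    mx, my = np.dot(w, x), np.dot(w, y)
    cov = np.dot(w, (x - mx) * (y - my))
    sx = np.sqrt(np.dot(w, (x - mx) ** 2))
    sy = np.sqrt(np.dot(w, (y - my) ** 2))
    return cov / (sx * sy)

rng = np.random.default_rng(4)
a = rng.normal(size=500)                # one synthetic daily-return series
b = a + 0.2 * rng.normal(size=500)      # a strongly correlated series
ewc_ab = exp_weighted_corr(a, b)
```

A dynamic correlation matrix would apply this over a moving window for every pair of assets; the paper's contribution is choosing α and the window length jointly.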
Hardware Implementation of a Bilateral Subtraction Filter
NASA Technical Reports Server (NTRS)
Huertas, Andres; Watson, Robert; Villalpando, Carlos; Goldberg, Steven
2009-01-01
A bilateral subtraction filter has been implemented as a hardware module in the form of a field-programmable gate array (FPGA). In general, a bilateral subtraction filter is a key subsystem of a high-quality stereoscopic machine vision system that utilizes images that are large and/or dense. Bilateral subtraction filters have been implemented in software on general-purpose computers, but the processing speeds attainable in this way even on computers containing the fastest processors are insufficient for real-time applications. The present FPGA bilateral subtraction filter is intended to accelerate processing to real-time speed and to be a prototype of a link in a stereoscopic-machine-vision processing chain, now under development, that would process large and/or dense images in real time and would be implemented in an FPGA. In terms that are necessarily oversimplified for the sake of brevity, a bilateral subtraction filter is a smoothing, edge-preserving filter for suppressing low-frequency noise. The filter operation amounts to replacing the value for each pixel with a weighted average of the values of that pixel and the neighboring pixels in a predefined neighborhood or window (e.g., a 9×9 window). The filter weights depend partly on pixel values and partly on the window size. The present FPGA implementation of a bilateral subtraction filter utilizes a 9×9 window. This implementation was designed to take advantage of the ability to do many of the component computations in parallel pipelines to enable processing of image data at the rate at which they are generated. The filter can be considered to be divided into the following parts (see figure): a) An image pixel pipeline with a 9×9-pixel window generator; b) An array of processing elements; c) An adder tree; d) A smoothing-and-delaying unit; and e) A subtraction unit. After each 9×9 window is created, the affected pixel data are fed to the processing elements.
Each processing element is fed the pixel value for its position in the window as well as the pixel value for the central pixel of the window. The absolute difference between these two pixel values is calculated and used as an address in a lookup table. Each processing element has a lookup table, unique for its position in the window, containing the weight coefficients for the Gaussian function for that position. The pixel value is multiplied by the weight, and the outputs of the processing element are the weight and the pixel-value weight product. The products and weights are fed to the adder tree. The sum of the products and the sum of the weights are fed to the divider, which computes the sum of the products divided by the sum of the weights. The output of the divider is denoted the bilateral smoothed image. The smoothing function is a simple weighted average computed over a 3×3 subwindow centered in the 9×9 window. After smoothing, the image is delayed by an additional amount of time needed to match the processing time for computing the bilateral smoothed image. The bilateral smoothed image is then subtracted from the 3×3 smoothed image to produce the final output. The prototype filter as implemented in a commercially available FPGA processes one pixel per clock cycle. Operation at a clock speed of 66 MHz has been demonstrated, and results of a static timing analysis have been interpreted as suggesting that the clock speed could be increased to as much as 100 MHz.
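The FPGA pipeline described above can be sketched in software. The per-position spatial Gaussian stored in the lookup tables is simplified here to a pure range (intensity-difference) Gaussian, and the window size and sigma are illustrative:

```python
import numpy as np

def bilateral_subtraction(img, window=9, sigma=25.0):
    """Software sketch of the hardware filter: a range-weighted average over
    a window gives the bilateral-smoothed image; a plain 3x3 mean gives the
    smoothed image; the output is their difference (mean3 - bilateral)."""
    img = img.astype(float)
    h, w = img.shape
    r = window // 2
    pad = np.pad(img, r, mode="edge")
    bilateral = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + window, j:j + window]
            # Weight each neighbour by a Gaussian in |neighbour - centre|,
            # the role played by the per-element lookup tables above.
            wts = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma ** 2))
            bilateral[i, j] = (wts * patch).sum() / wts.sum()
    pad3 = np.pad(img, 1, mode="edge")
    mean3 = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            mean3[i, j] = pad3[i:i + 3, j:j + 3].mean()
    return mean3 - bilateral

# A constant image should pass through with zero output
flat = np.full((16, 16), 100.0)
out_flat = bilateral_subtraction(flat)
```

On a flat image both the bilateral and the 3×3 mean reproduce the input, so their difference is zero everywhere; the FPGA version computes the same per-pixel arithmetic but with all window positions evaluated in parallel, one pixel per clock.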
ERIC Educational Resources Information Center
Felsager, Bjorn
2001-01-01
Describes a mathematics and science project designed to help students gain some familiarity with constellations and trigonometry by using the TI-83 calculator as a tool. Specific constellations such as the Big Dipper (Plough) and other sets of stars are located using stereographic projection and graphed using scatterplots. (MM)
Visualizing Concordance of Sets
2006-01-01
Federico, Alejandro; Kaufmann, Guillermo H
2005-05-10
We evaluate the use of smoothing splines with a weighted roughness measure for local denoising of the correlation fringes produced in digital speckle pattern interferometry. In particular, we also evaluate the performance of the multiplicative correlation operation between two speckle patterns that is proposed as an alternative procedure to generate the correlation fringes. It is shown that the application of a normalization algorithm to the smoothed correlation fringes reduces the excessive bias generated in the previous filtering stage. The evaluation is carried out by use of computer-simulated fringes that are generated for different average speckle sizes and intensities of the reference beam, including decorrelation effects. A comparison with filtering methods based on the continuous wavelet transform is also presented. Finally, the performance of the smoothing method in processing experimental data is illustrated.
Applying Descriptive Statistics to Teaching the Regional Classification of Climate.
ERIC Educational Resources Information Center
Lindquist, Peter S.; Hammel, Daniel J.
1998-01-01
Describes an exercise for college and high school students that relates descriptive statistics to the regional climatic classification. The exercise introduces students to simple calculations of central tendency and dispersion, the construction and interpretation of scatterplots, and the definition of climatic regions. Forces students to engage…
Chou, I.-Ming; Lee, R.D.
1983-01-01
Solubilities of halite in the ternary system NaCl-CsCl-H2O have been determined by the visual polythermal method at 1 atm from 20 to 100°C along five constant CsCl/(CsCl + H2O) weight ratio lines. These five constant weight ratios are 0.1, 0.2, 0.3, 0.4, and 0.5. The maximum uncertainties in these measurements are ±0.02 wt % NaCl and ±0.15°C. The data along each constant CsCl/(CsCl + H2O) weight ratio line were regressed to a smooth curve. The maximum deviation of the measured solubilities from the smooth curves is 0.06 wt % NaCl. Isothermal solubilities of halite were calculated from smoothed curves at 25, 50, and 75°C.
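The regress-then-interpolate procedure can be illustrated with a simple quadratic fit along one constant-ratio line; the solubility numbers below are hypothetical stand-ins for the paper's measurements.

```python
import numpy as np

# Hypothetical readings (wt% NaCl vs. temperature) along one constant
# CsCl/(CsCl + H2O) weight-ratio line; the actual measurements are in
# the paper and are not reproduced here.
temp_c = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
wt_nacl = np.array([22.1, 22.9, 23.8, 24.9, 26.2])

# Regress the line to a smooth curve (here a quadratic)...
smooth = np.poly1d(np.polyfit(temp_c, wt_nacl, deg=2))

# ...check how far the measured points sit from the smoothed curve
# (cf. the 0.06 wt% maximum deviation reported in the study)...
max_dev = float(np.max(np.abs(wt_nacl - smooth(temp_c))))

# ...and read isothermal solubilities off the curve.
isothermal = {t: float(smooth(t)) for t in (25.0, 50.0, 75.0)}
```

The degree of the fitted polynomial is an assumption; any smooth regression curve serves the same role of suppressing measurement scatter before isothermal values are read off.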
Mussin, Nadiar; Sumo, Marco; Lee, Kwang-Woong; Choi, YoungRok; Choi, Jin Yong; Ahn, Sung-Woo; Yoon, Kyung Chul; Kim, Hyo-Sin; Hong, Suk Kyun; Yi, Nam-Joon; Suh, Kyung-Suk
2017-04-01
Liver volumetry is a vital component in living donor liver transplantation to determine an adequate graft volume that meets the metabolic demands of the recipient and at the same time ensures donor safety. Most institutions use preoperative contrast-enhanced CT image-based software programs to estimate graft volume. The objective of this study was to evaluate the accuracy of 2 liver volumetry programs (Rapidia vs. Dr. Liver) in preoperative right liver graft estimation compared with real graft weight. Data from 215 consecutive right lobe living donors between October 2013 and August 2015 were retrospectively reviewed. One hundred seven patients were enrolled in the Rapidia group and 108 patients were included in the Dr. Liver group. Estimated graft volumes generated by both software programs were compared with real graft weight measured during surgery, and further classified into minimal difference (≤15%) and big difference (>15%). Correlation coefficients and degree of difference were determined. Linear regressions were calculated and results depicted as scatterplots. Minimal difference was observed in 69.4% of cases from the Dr. Liver group and big difference was seen in 44.9% of cases from the Rapidia group (P = 0.035). Linear regression analysis showed positive correlation in both groups (P < 0.01). However, the correlation coefficient was better for the Dr. Liver group (R2 = 0.719) than for the Rapidia group (R2 = 0.688). Dr. Liver can predict right liver graft size more accurately and faster than Rapidia, and can facilitate preoperative planning in living donor liver transplantation.
Lawrence, K E; Forsyth, S F; Vaatstra, B L; McFadden, Amj; Pulford, D J; Govindaraju, K; Pomroy, W E
2018-01-01
To present the haematology and biochemistry profiles for cattle in New Zealand naturally infected with Theileria orientalis Ikeda type and investigate if the results differed between adult dairy cattle and calves aged <6 months. Haematology and biochemistry results were obtained from blood samples from cattle which tested positive for T. orientalis Ikeda type by PCR, that were submitted to veterinary laboratories in New Zealand between October 2012 and November 2014. Data sets for haematology and biochemistry results were prepared for adult dairy cattle (n=62 and 28, respectively) and calves aged <6 months (n=62 and 28, respectively), which were matched on the basis of individual haematocrit (HCT). Results were compared between age groups when categorised by HCT. Selected variables were plotted against individual HCT, and locally weighted scatterplot smoothing (Loess) curves were fitted to the data for adult dairy cattle and calves <6 months old. When categorised by HCT, the proportion of samples with HCT <0.15 L/L (severe anaemia) was greater for adult dairy cattle than for beef or dairy calves, for both haematology (p<0.002) and biochemistry (p<0.001) submissions. There were differences (p<0.05) between adult dairy cattle and calves aged <6 months in the relationships between HCT and red blood cell counts, mean corpuscular volume, mean corpuscular haemoglobin, mean corpuscular haemoglobin concentrations, lymphocyte and eosinophil counts, and activities of glutamate dehydrogenase and aspartate aminotransferase. In both age groups anisocytosis was frequently recorded. The proportion of blood smears showing mild and moderate macrocytosis was greater in adults than calves (p=0.01), and mild and moderate poikilocytosis was greater in calves than adults (p=0.005). The haematology and biochemistry changes observed in cattle infected with T. orientalis Ikeda type were consistent with extravascular haemolytic anaemia. 
Adult dairy cattle were more likely to be severely anaemic than calves. There were differences in haematology and biochemistry profiles between adult dairy cattle and calves, but most of these differences likely had a physiological rather than pathological basis. Overall, the haematological changes in calves aged <6 months appeared less severe than in adult dairy cattle.
Kawano, Takahisa; Nishiyama, Kei; Morita, Hiroshi; Yamamura, Osamu; Hiraide, Atsuchi; Hasegawa, Kohei
2016-01-13
We determined whether crowding at emergency shelters is associated with a higher incidence of sleep disturbance among disaster evacuees and identified the minimum required personal space at shelters. Retrospective review of medical charts. 30 shelter-based medical clinics in Ishinomaki, Japan, during the 46 days following the Great Eastern Japan Earthquake and Tsunami in 2011. Shelter residents who visited eligible clinics. Based on the result of a locally weighted scatter-plot smoothing technique assessing the relationship between the mean space per evacuee and cumulative incidence of sleep disturbance at the shelter, eligible shelters were classified into crowded and non-crowded shelters. The cumulative incidence per 1000 evacuees was compared between groups, using a Mann-Whitney U test. To assess the association between shelter crowding and the daily incidence of sleep disturbance per 1000 evacuees, quasi-least squares method adjusting for potential confounders was used. The 30 shelters were categorised as crowded (mean space per evacuee <5.0 m(2), 9 shelters) or non-crowded (≥ 5.0 m(2), 21 shelters). The study included 9031 patients. Among the eligible patients, 1079 patients (11.9%) were diagnosed with sleep disturbance. Mean space per evacuee during the study period was 3.3 m(2) (SD, 0.8 m(2)) at crowded shelters and 8.6 m(2) (SD, 4.3 m(2)) at non-crowded shelters. The median cumulative incidence of sleep disturbance did not differ between the crowded shelters (2.3/1000 person-days (IQR, 1.6-5.4)) and non-crowded shelters (1.9/1000 person-days (IQR, 1.0-2.8); p=0.20). In contrast, after adjusting for potential confounders, crowded shelters had an increased daily incidence of sleep disturbance (2.6 per 1000 person-days; 95% CI 0.2 to 5.0/1000 person-days, p=0.03) compared to that at non-crowded shelters. Crowding at shelters may exacerbate sleep disruptions in disaster evacuees; therefore, appropriate evacuation space requirements should be considered. 
Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
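The locally weighted scatterplot smoothing step that motivated the 5.0 m² cut-point can be sketched with a minimal tricube/local-linear smoother. This is a generic LOWESS sketch, not the study's implementation, and the shelter data below are synthetic stand-ins.

```python
import numpy as np

def lowess_smooth(x, y, frac=0.5):
    """Minimal locally weighted scatterplot smoother: tricube kernel,
    local linear fit at each observed x."""
    order = np.argsort(x)
    xs, ys = np.asarray(x)[order], np.asarray(y)[order]
    n = len(xs)
    k = max(2, int(np.ceil(frac * n)))
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(xs - xs[i])
        h = max(np.sort(d)[k - 1], 1e-12)            # k-th nearest distance
        w = np.clip(1 - (d / h) ** 3, 0, None) ** 3  # tricube weights
        sw = np.sqrt(w)
        A = np.column_stack([np.ones(n), xs - xs[i]])
        beta, *_ = np.linalg.lstsq(A * sw[:, None], ys * sw, rcond=None)
        fitted[i] = beta[0]                          # local fit value at xs[i]
    return xs, fitted

# Synthetic shelter data: mean space per evacuee (m^2) against cumulative
# incidence of sleep disturbance. The smoothed curve is the kind of plot
# from which a crowding cut-point such as 5.0 m^2 could be read.
rng = np.random.default_rng(0)
space = rng.uniform(1.0, 12.0, 40)
incidence = 6.0 / (1.0 + space) + rng.normal(0.0, 0.3, 40)
xs, curve = lowess_smooth(space, incidence, frac=0.6)
```

With a monotonically decreasing underlying relationship, the smoothed curve falls from the crowded (small-space) end to the spacious end, and the analyst picks the knee of that curve as the cut-point.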
Marquart, Katharina; Prokopchuk, Olga; Worek, Franz; Thiermann, Horst; Martignoni, Marc E; Wille, Timo
2018-09-01
Isolated organs have proved to be a robust tool to study the effects of (potential) therapeutics in organophosphate poisoning. Small bowel samples have been successfully used to reveal smooth muscle relaxing effects. In the present study, the effects of obidoxime, TMB-4, HI-6 and MB 327 were investigated on human small bowel tissue and compared with rat data. Hereby, the substances were tested in at least seven different concentrations in the jejunum or ileum, both pre-contracted with carbamoylcholine. Additionally, the cholinesterase activity of native tissue was determined. Human small intestine specimens showed classical dose-response curves, similar to rat tissue, with MB 327 exerting the most potent smooth muscle relaxant effect in both species (human EC50 = 0.7×10^-5 M and rat EC50 = 0.7×10^-5 M). The AChE activity of human and rat samples did not differ significantly (rat jejunum = 1351±166 mU/mg wet weight; rat ileum = 1078±123 mU/mg wet weight; human jejunum = 1030±258 mU/mg wet weight; human ileum = 1293±243 mU/mg wet weight). Summarizing, our isolated small bowel setup seems to be a solid tool to investigate the effects of (potential) therapeutics on pre-contracted smooth muscle, with data being transferable between rats and humans. Copyright © 2017 Elsevier B.V. All rights reserved.
Using Scatterplots to Teach the Critical Power Concept
ERIC Educational Resources Information Center
Pettitt, Robert W.
2012-01-01
The critical power (CP) concept has received renewed attention and excitement in the academic community. The CP concept was originally conceived as a model derived from a series of exhaustive, constant-load, exercise bouts. All-out exercise testing has made quantification of the parameters for the two-component model easier to arrive at, which may…
Arc_Mat: a Matlab-based spatial data analysis toolbox
NASA Astrophysics Data System (ADS)
Liu, Xingjian; Lesage, James
2010-03-01
This article presents an overview of Arc_Mat, a Matlab-based spatial data analysis software package whose source code has been placed in the public domain. An earlier version of the Arc_Mat toolbox was developed to extract map polygon and database information from ESRI shapefiles and provide high quality mapping in the Matlab software environment. We discuss revisions to the toolbox that: utilize enhanced computing and graphing capabilities of more recent versions of Matlab, restructure the toolbox with object-oriented programming features, and provide more comprehensive functions for spatial data analysis. The Arc_Mat toolbox functionality includes basic choropleth mapping; exploratory spatial data analysis that provides exploratory views of spatial data through various graphs, for example, histogram, Moran scatterplot, three-dimensional scatterplot, density distribution plot, and parallel coordinate plots; and more formal spatial data modeling that draws on the extensive Spatial Econometrics Toolbox functions. A brief review of the design aspects of the revised Arc_Mat is described, and we provide some illustrative examples that highlight representative uses of the toolbox. Finally, we discuss programming with and customizing the Arc_Mat toolbox functionalities.
Modernization of dump truck onboard system
NASA Astrophysics Data System (ADS)
Semenov, M. A.; Bolshunova, O. M.; Korzhev, A. A.; Kamyshyan, A. M.
2017-10-01
A review was made of the only automated dispatch system for open-pit (quarry) dump trucks available on the domestic market. A method for upgrading the load-control system and the technological weighing process of the quarry dump truck is proposed. The cargo weight during loading is determined from the gas pressure in the suspension cylinders at the moment the oscillations end and the vibration-smoothing process begins; a correction for the smoothing speed is applied. The error of cargo weighing during loading is 2.5-3%, and of the technological weighing process during driving, 1%, which corresponds to the error level of steady-state weighing means.
A simulation study of hardwood rootstock populations in young loblolly pine plantations
David R. Weise; Glenn R. Glover
1988-01-01
A computer program to simulate spatial distribution of hardwood rootstock populations is presented. Nineteen 3- to 6-year-old loblolly pine (Pinus taeda L.) plantations in Alabama and Georgia were measured to provide information for the simulator. Spatial pattern, expressed as Pielou's nonrandomness index (PNI), ranged from 0.47 to 2.45. Scatterplots illustrated no...
A New Way to Teach (or Compute) Pearson's "r" without Reliance on Cross-Products
ERIC Educational Resources Information Center
Huck, Schuyler W.; Ren, Bixiang; Yang, Hongwei
2007-01-01
Many students have difficulty seeing the conceptual link between bivariate data displayed in a scatterplot and the statistical summary of the relationship, "r." This article shows how to teach (and compute) "r" such that each datum's direct and indirect influences are made apparent and used in a new formula for calculating Pearson's "r."
Presentation of growth velocities of rural Haitian children using smoothing spline techniques.
Waternaux, C; Hebert, J R; Dawson, R; Berggren, G G
1987-01-01
The examination of monthly (or quarterly) increments in weight or length is important for assessing the nutritional and health status of children. Growth velocities are widely thought to be more important than actual weight or length measurements per se. However, there are no standards by which clinicians, researchers, or parents can gauge a child's growth. This paper describes a method for computing growth velocities (monthly increments) for physical growth measurements with substantial measurement error and irregular spacing over time. These features are characteristic of data collected in the field where conditions are less than ideal. The technique of smoothing by splines provides a powerful tool to deal with the variability and irregularity of the measurements. The technique consists of approximating the observed data by a smooth curve as a clinician might have drawn on the child's growth chart. Spline functions are particularly appropriate to describe bio-physical processes such as growth, for which no model can be postulated a priori. This paper describes how the technique was used for the analysis of a large data base collected on pre-school aged children in rural Haiti. The sex-specific length and weight velocities derived from the spline-smoothed data are presented as reference data for researchers and others interested in longitudinal growth of children in the Third World.
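The spline-velocity idea can be sketched with SciPy's smoothing spline: fit a smooth curve to noisy, irregularly spaced measurements, then read velocities off its derivative. The ages, weights, and smoothing factor below are illustrative assumptions, not the Haitian data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical field data: one child's weight (kg) at irregularly spaced
# ages (months) with measurement error -- the situation the paper
# describes. The growth curve and noise level are assumed for illustration.
rng = np.random.default_rng(1)
age = np.sort(rng.uniform(0.0, 36.0, 25))
weight = 3.3 + 0.45 * age - 0.004 * age ** 2 + rng.normal(0.0, 0.25, 25)

# Approximate the noisy, irregular measurements by a smooth cubic spline,
# as a clinician might draw a curve on the growth chart; s controls how
# closely the spline follows the data.
spline = UnivariateSpline(age, weight, k=3, s=25 * 0.25 ** 2)

# Read monthly growth velocity (kg/month) off the spline's derivative,
# giving increments even where no measurement fell in a given month.
months = np.arange(1.0, 36.0)
velocity = spline.derivative()(months)
```

Because the derivative is taken from the fitted curve rather than from raw differences, the velocities are far less sensitive to single noisy measurements and to irregular visit spacing.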
A novel weight determination method for time series data aggregation
NASA Astrophysics Data System (ADS)
Xu, Paiheng; Zhang, Rong; Deng, Yong
2017-09-01
Aggregation in time series is of great importance in time series smoothing, prediction and other time series analysis processes, which makes it crucial to address the weights in time series correctly and reasonably. In this paper, a novel method to obtain the weights in time series is proposed, in which we adopt the induced ordered weighted aggregation (IOWA) operator and the visibility graph averaging (VGA) operator and linearly combine the weights separately generated by the two operators. The IOWA operator is introduced to the weight determination of time series, through which the time decay factor is taken into consideration. The VGA operator is able to generate weights with respect to the degree distribution in the visibility graph constructed from the corresponding time series, which reflects the relative importance of vertices in the time series. The proposed method is applied to two practical datasets to illustrate its merits. The aggregation of the Construction Cost Index (CCI) demonstrates the ability of the proposed method to smooth time series, while the aggregation of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) illustrates how the proposed method maintains the variation tendency of the original data.
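A rough sketch of the proposed combination follows, assuming a simple geometric decay for the IOWA-style time weights and degree-proportional weights from the natural visibility graph for the VGA part. The decay factor, the mixing coefficient alpha, and this exact linear combination are assumptions for illustration, not the authors' calibrated choices.

```python
import numpy as np

def visibility_degrees(y):
    """Node degrees of the natural visibility graph of a time series."""
    n = len(y)
    deg = np.zeros(n)
    for a in range(n):
        for b in range(a + 1, n):
            # a sees b iff every intermediate point lies strictly below
            # the straight line joining (a, y[a]) and (b, y[b]).
            visible = all(
                y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                deg[a] += 1
                deg[b] += 1
    return deg

def aggregate(y, decay=0.9, alpha=0.5):
    """Linearly combine time-decay (IOWA-style) weights with
    visibility-graph-degree (VGA-style) weights, then aggregate."""
    n = len(y)
    w_time = decay ** np.arange(n - 1, -1, -1.0)  # newer points weigh more
    w_time /= w_time.sum()
    deg = visibility_degrees(y)
    w_graph = deg / deg.sum()                     # well-connected points weigh more
    w = alpha * w_time + (1 - alpha) * w_graph    # convex combination, sums to 1
    return float(np.dot(w, y))
```

Since both weight vectors are normalized, the aggregate of a constant series returns that constant, and the aggregate of any series stays within its range.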
ERIC Educational Resources Information Center
Olsen, Robert J.
2008-01-01
I describe how data pooling and data visualization can be employed in the first-semester general chemistry laboratory to introduce core statistical concepts such as central tendency and dispersion of a data set. The pooled data are plotted as a 1-D scatterplot, a purpose-designed number line through which statistical features of the data are…
Examining Student Conceptions of Covariation: A Focus on the Line of Best Fit
ERIC Educational Resources Information Center
Casey, Stephanie A.
2015-01-01
The purpose of this research study was to learn about students' conceptions concerning the line of best fit just prior to their introduction to the topic. Task-based interviews were conducted with thirty-three students, focused on five tasks that asked them to place the line of best fit on a scatterplot and explain their reasoning throughout the…
Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K
2015-05-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
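The weighted eigenfunction expansion can be sketched on a graph Laplacian standing in for the Laplace-Beltrami operator of a surface mesh: expand the data in the Laplacian eigenbasis and multiply each coefficient by the heat-kernel weight e^(-λt). The path graph and diffusion time below are toy assumptions.

```python
import numpy as np

def heat_kernel_smooth(f, L, t):
    """Heat-kernel smoothing as a weighted eigenfunction expansion: the
    heat-kernel weights e^{-lambda t} multiply the expansion
    coefficients of f in the Laplacian eigenbasis."""
    lam, psi = np.linalg.eigh(L)                # eigenpairs of the Laplacian
    coeffs = psi.T @ f                          # coefficients <f, psi_i>
    return psi @ (np.exp(-lam * t) * coeffs)    # weighted expansion

# Path-graph Laplacian as a toy stand-in for the Laplace-Beltrami
# operator of a surface mesh.
n = 50
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0                       # Neumann-style ends

rng = np.random.default_rng(2)
signal = np.sin(np.linspace(0.0, np.pi, n))     # smooth "surface data"
noisy = signal + rng.normal(0.0, 0.3, n)
smoothed = heat_kernel_smooth(noisy, L, t=2.0)
```

Because the zero-eigenvalue (constant) mode has weight e^0 = 1, the mean of the data is preserved exactly while high-frequency modes are damped, which is the sense in which the expansion is equivalent to isotropic heat diffusion.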
Liang, Zhengyang; Zheng, Yuanyuan; Wang, Jing; Zhang, Quanbin; Ren, Shuang; Liu, Tiantian; Wang, Zhiqiang; Luo, Dali
2016-09-15
Low molecular weight fucoidan (LMWF) was prepared from Laminaria japonica Areschoug, a popular seafood and medicinal plant consumed in Asia. Chinese have long been using it as a traditional medicine for curing hypertension and edema. This study was intended to investigate the possible beneficial effect of LMWF on hyper-responsiveness of aortic smooth muscles in streptozotocin (STZ)-induced type 1 diabetic rats. Sprague-Dawley rats were made diabetic by injection of STZ, followed by the administration of LMWF (50 or 100 mg/kg/day) or probucol (100 mg/kg/day) for 12 weeks. Body weight, blood glucose level, basal blood pressure, serum lipid profiles, oxidative stress, prostanoid production, and the vasoconstriction response of endothelium-denuded aorta rings to phenylephrine were measured by real-time PCR, Western blots, ELISA assay, and force myograph, respectively. The LMWF (100 mg/kg/day)-treated group showed robust improvements in STZ-induced body weight loss, hypertension, and hyperlipidaemia, as indicated by decreased serum levels of total cholesterol, triglyceride, and low density lipoprotein cholesterol; probucol, a lipid-modifying drug with antioxidant properties, displayed mild effects. In addition, LMWF appreciably ameliorated STZ-elicited hyper-responsiveness and oxidative stress in aortic smooth muscles, as indicated by a decreased superoxide level, increased glutathione content and higher superoxide dismutase activity. Furthermore, administration of LMWF dramatically prevented cyclooxygenase-2 stimulation and restored the up-regulation of thromboxane synthase and down-regulation of 6-keto-PGF1α (a stable metabolic product of prostaglandin I2) in the STZ-administered rats. This study demonstrates for the first time that LMWF can protect against hyperlipidaemia, hypertension, and hyper-responsiveness of aortic smooth muscles in type 1 diabetic rats via, at least in part, amelioration of oxidative stress and restoration of prostanoid levels in aortic smooth muscles.
Therefore, LMWF can be a potential adjuvant treatment against cardiovascular complications in type 1 diabetes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Huang, Rui; Jin, Chunhua; Mei, Ming; Yin, Jingxue
2018-01-01
This paper deals with the existence and stability of traveling wave solutions for a degenerate reaction-diffusion equation with time delay. The degeneracy of the spatial diffusion, together with the effect of the time delay, causes essential difficulty in establishing the existence of the traveling waves and their stability. In order to treat this case, we first show the existence of smooth- and sharp-type traveling wave solutions in the case c ≥ c* for the degenerate reaction-diffusion equation without delay, where c* > 0 is the critical wave speed of smooth traveling waves. Then, as a small perturbation, we obtain the existence of the smooth non-critical traveling waves for the degenerate diffusion equation with small time delay τ > 0. Furthermore, we prove the global existence and uniqueness of the C^{α,β}-solution to the time-delayed degenerate reaction-diffusion equation via compactness analysis. Finally, by the weighted energy method, we prove that the smooth non-critical traveling wave is globally stable in the weighted L^1-space. The exponential convergence rate is also derived.
Li, Biao; Zhao, Hong; Rybak, Paulina; Dobrucki, Jurek W; Darzynkiewicz, Zbigniew; Kimmel, Marek
2014-09-01
Mathematical modeling allows relating molecular events to single-cell characteristics assessed by multiparameter cytometry. In the present study we labeled newly synthesized DNA in A549 human lung carcinoma cells with 15-120 min pulses of EdU. All DNA was stained with DAPI and cellular fluorescence was measured by laser scanning cytometry. The frequency of cells in the ascending (left) side of the "horseshoe"-shaped EdU/DAPI bivariate distributions reports the rate of DNA replication at the time of entrance to S phase, while their frequency in the descending (right) side is a marker of DNA replication rate at the time of transition from S to G2 phase. To understand the connection between molecular-scale events and scatterplot asymmetry, we developed a multiscale stochastic model, which simulates DNA replication and cell cycle progression of individual cells and produces in silico EdU/DAPI scatterplots. For each S-phase cell the time points at which replication origins are fired are modeled by a non-homogeneous Poisson process (NHPP). Shifted gamma distributions are assumed for the durations of the cell cycle phases (G1, S and G2/M). Depending on the rate of DNA synthesis being an increasing or decreasing function, simulated EdU/DAPI bivariate graphs show predominance of cells in the left (early-S) or right (late-S) side of the horseshoe distribution. Assuming the NHPP rate estimated from independent experiments, simulated EdU/DAPI graphs are nearly indistinguishable from those experimentally observed. This finding proves consistency between the S-phase DNA-replication rate based on molecular-scale analyses and cell population kinetics ascertained from EdU/DAPI scatterplots, and demonstrates that the DNA replication rate at entrance to S is relatively slow compared with its rather abrupt termination during the S to G2 transition.
Our approach opens a possibility of similar modeling to study the effect of anticancer drugs on DNA replication/cell cycle progression and also to quantify other kinetic events that can be measured during S-phase. © 2014 International Society for Advancement of Cytometry.
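The NHPP ingredient of such a model can be sketched with the standard thinning algorithm: sample candidate times from a homogeneous Poisson process at the peak intensity and keep each with probability rate(t)/rate_max. The firing-rate curve and S-phase length below are hypothetical, not the authors' estimated rate.

```python
import numpy as np

rng = np.random.default_rng(6)

def nhpp_times(rate, rate_max, t_end):
    """Sample event times of a non-homogeneous Poisson process on
    [0, t_end] by thinning; rate_max must bound rate(t) from above."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)    # homogeneous candidate gap
        if t > t_end:
            return np.array(times)
        if rng.uniform() < rate(t) / rate_max:  # accept with prob rate/rate_max
            times.append(t)

# Hypothetical origin-firing intensity over an 8-hour S phase: rising
# then falling. The shape of this curve is what controls which side of
# the horseshoe-shaped EdU/DAPI scatterplot fills up.
s_len = 8.0
rate = lambda t: 10.0 * np.sin(np.pi * t / s_len)
firings = nhpp_times(rate, rate_max=10.0, t_end=s_len)
```

The expected number of firings is the integral of the rate over [0, t_end] (about 51 here), so the simulated counts can be checked directly against the intended intensity.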
Weighted Bergman Kernels and Quantization
NASA Astrophysics Data System (ADS)
Engliš, Miroslav
Let Ω be a bounded pseudoconvex domain in C^N, φ, ψ two positive functions on Ω such that -log ψ, -log φ are plurisubharmonic, and z ∈ Ω a point at which -log φ is smooth and strictly plurisubharmonic. We show that as k → ∞, the Bergman kernels with respect to the weights φ^k ψ have an asymptotic expansion
Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.
2014-01-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
A Pragmatic Smoothing Method for Improving the Quality of the Results in Atomic Spectroscopy
NASA Astrophysics Data System (ADS)
Bennun, Leonardo
2017-07-01
A new smoothing method is presented for improving the identification and quantification of spectral functions, based on prior knowledge of the signals that are expected to be quantified. These signals are used as weighting coefficients in the smoothing algorithm. This smoothing method was conceived to be applied in atomic and nuclear spectroscopies, preferably in techniques where net counts are proportional to acquisition time, such as particle-induced X-ray emission (PIXE) and other X-ray fluorescence spectroscopic methods. This algorithm, when properly applied, distorts neither the form nor the intensity of the signal, so it is well suited for all kinds of spectroscopic techniques. The method is extremely effective at reducing high-frequency noise in the signal, much more so than a single rectangular smooth of the same width. Like all smoothing techniques, the proposed method improves the precision of the results, but in this case we also found a systematic improvement in the accuracy of the results. We still have to evaluate the improvement in the quality of the results when this method is applied to real experimental data. We expect better characterization of the net-area quantification of the peaks, and smaller detection and quantification limits. We have applied this method to signals that obey Poisson statistics, but with the same ideas and criteria it could be applied to time series. In the general case, when this algorithm is applied to experimental results, the sought characteristic functions required for this weighted smoothing method should be obtained from a system with strong stability; if the sought signals are not perfectly clean, the method should be applied with care.
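The idea of using the expected signal as the weighting coefficients can be sketched as convolution with a normalized peak template, compared against a rectangular smooth of the same width. The Gaussian peak shape, widths, and Poisson spectrum below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Expected peak shape (here a Gaussian detector response) used as the
# weighting coefficients of the smooth.
width = 9
offsets = np.arange(width) - width // 2
template = np.exp(-offsets ** 2 / (2 * 1.5 ** 2))
template /= template.sum()                       # normalized weights

# Synthetic Poisson-statistics spectrum: one peak on a flat background.
channels = np.arange(200)
expected = 20.0 + 300.0 * np.exp(-(channels - 100) ** 2 / (2 * 3.0 ** 2))
spectrum = rng.poisson(expected).astype(float)

# Signal-weighted smooth versus a rectangular smooth of the same width.
weighted = np.convolve(spectrum, template, mode="same")
rect = np.convolve(spectrum, np.full(width, 1.0 / width), mode="same")
```

The signal-shaped kernel suppresses high-frequency noise while distorting the peak far less than the rectangular smooth, so the peak amplitude (and hence the net area) is better preserved.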
NASA Astrophysics Data System (ADS)
Liu, Liangyun; Zhang, Bing; Xu, Genxing; Zheng, Lanfen; Tong, Qingxi
2002-03-01
In this paper, the temperature-emissivity separation (TES) method and the normalized difference vegetation index (NDVI) are introduced, and hyperspectral image data are analyzed using land surface temperature (LST) and NDVI channels acquired by the Operational Modular Imaging Spectrometer (OMIS) over the Beijing Precision Agriculture Demonstration Base in Xiaotangshan town, Beijing, on 26 April 2001. First, six kinds of ground targets (winter wheat at the booting and jointing stages, bare soil, water in ponds, sullage in dry ponds, and aquatic grass) are well classified using the LST and NDVI channels. Second, a triangle-like scatterplot is built and analyzed using the LST and NDVI channels, which is convenient for extracting information on vegetation growth and soil moisture. Compared with the scatterplot built from the red and near-infrared bands, the spectral distances between different classes are larger, and the samples in the same class are more tightly clustered. Finally, we design a logarithmic VIT model to extract the surface soil water content (SWC) using the LST and NDVI channels, which works well; the coefficient of determination, R2, between the measured and estimated surface SWC is 0.634. The surface SWC map over the wheat area is calculated and illustrated, which is important for scientific irrigation and precision agriculture.
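The NDVI channel itself is a one-line computation from the red and near-infrared bands. The sketch below builds synthetic bands and the (NDVI, LST) pixel pairs that form the triangle-like scatterplot; all band values and the LST-NDVI relation are toy assumptions, not the OMIS imagery.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy red and near-infrared reflectance bands (nir >= red, so
# vegetated-style pixels dominate).
red = rng.uniform(0.05, 0.30, (64, 64))
nir = red + rng.uniform(0.0, 0.40, (64, 64))

# Normalized difference vegetation index.
ndvi = (nir - red) / (nir + red)

# Toy land-surface-temperature band: in the triangle-like LST/NDVI
# scatterplot, densely vegetated (high-NDVI) pixels tend to be cooler.
# The -15 K per unit NDVI slope and the noise level are assumptions.
lst = 310.0 - 15.0 * ndvi + rng.normal(0.0, 1.0, (64, 64))

# The (ndvi, lst) pixel pairs are the coordinates of the scatterplot
# analyzed in the paper; a moisture-style index can be read from each
# pixel's position between assumed dry and wet edges.
```

The negative NDVI-LST coupling is what gives the scatterplot its characteristic triangular shape, with the dry edge (warm, sparse vegetation) and wet edge (cool, dense vegetation) bounding the cloud of pixels.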
Local indicators of geocoding accuracy (LIGA): theory and application
Jacquez, Geoffrey M; Rommel, Robert
2009-01-01
Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. 
Herein lies a paradox for spatial analysis: for a given level of positional error, increasing sample density to more accurately follow the underlying population distribution increases perturbability and introduces error into the spatial weights matrix. In some studies positional error may not impact the statistical results, while in others it might invalidate them. We must therefore understand the relationships between positional accuracy and the perturbability of the spatial weights in order to have confidence in a study's results. PMID:19863795
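A simple way to see the perturbability idea is to jitter geocoded points by a positional-error distribution and count how many entries of a spatial weights matrix change. The k-nearest-neighbour weights and the Monte Carlo scheme below are an illustrative stand-in, not the paper's LIGA formulation.

```python
import numpy as np

def knn_weights(pts, k=4):
    """Binary k-nearest-neighbour spatial weights matrix."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a point is not its own neighbour
    w = np.zeros_like(d, dtype=bool)
    idx = np.argsort(d, axis=1)[:, :k]
    rows = np.repeat(np.arange(len(pts)), k)
    w[rows, idx.ravel()] = True
    return w

def perturbability(pts, sigma, k=4, trials=50, seed=0):
    """Fraction of k-NN adjacency entries that change when every point is
    jittered by isotropic Gaussian positional error with sd sigma."""
    rng = np.random.default_rng(seed)
    w0 = knn_weights(pts, k)
    changed = []
    for _ in range(trials):
        w1 = knn_weights(pts + rng.normal(0.0, sigma, pts.shape), k)
        changed.append(np.mean(w0 != w1))
    return float(np.mean(changed))

rng_pts = np.random.default_rng(2)
pts = rng_pts.random((30, 2))          # 30 hypothetical geocoded locations
p_small = perturbability(pts, sigma=1e-9)
p_large = perturbability(pts, sigma=0.2)
```

Sweeping sigma over a range traces out the perturbability-versus-positional-error curves described in the abstract.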
On approximation and energy estimates for delta 6-convex functions.
Saleem, Muhammad Shoaib; Pečarić, Josip; Rehman, Nasir; Khan, Muhammad Wahab; Zahoor, Muhammad Sajid
2018-01-01
The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted [Formula: see text]-norm.
Craig, Benjamin M; Hartman, John D; Owens, Michelle A; Brown, Derek S
2016-04-01
To estimate the prevalence and losses in quality-adjusted life years (QALYs) associated with 20 child health conditions. Using data from the 2009-2010 National Survey of Children with Special Health Care Needs, preference weights were applied to 14 functional difficulties to summarize the quality of life burden of 20 health conditions. Among the 14 functional difficulties, "a little trouble with breathing" had the highest prevalence (37.1 %), but amounted to a loss of just 0.16 QALYs from the perspective of US adults. Though less prevalent, "a lot of behavioral problems" and "chronic pain" were associated with the greatest losses (1.86 and 3.43 QALYs). Among the 20 conditions, allergies and asthma were the most prevalent but were associated with the least burden. Muscular dystrophy and cerebral palsy were among the least prevalent and most burdensome. Furthermore, a scatterplot shows the association between condition prevalence and burden. In child health, condition prevalence is negatively associated with quality of life burden from the perspective of US adults. Both should be considered carefully when evaluating the appropriate role for public health prevention and interventions.
A new adaptively central-upwind sixth-order WENO scheme
NASA Astrophysics Data System (ADS)
Huang, Cong; Chen, Li Li
2018-03-01
In this paper, we propose a new sixth-order WENO scheme for solving one-dimensional hyperbolic conservation laws. The new WENO reconstruction has three properties: (1) it is central in smooth regions for low dissipation, and upwind near discontinuities for numerical stability; (2) it is a convex combination of four linear reconstructions, of which one is sixth order and the others are third order; (3) its linear weights can be any positive numbers, with the requirement that their sum equal one. Furthermore, we propose a simple smoothness indicator for the sixth-order linear reconstruction; this smoothness indicator not only distinguishes smooth regions from discontinuities exactly, but also reduces the computational cost, making it more efficient than the classical one.
Evaluating and Improving the SAMA (Segmentation Analysis and Market Assessment) Recruiting Model
2015-06-01
...the relationship between the calculated SAMA potential and the actual 2014 performance. The scatterplot in Figure 8 shows a strong linear relationship between the SAMA calculated potential and the contracting achievement for 2014, with an R-squared value of 0.871. Simple Linear Regression of...
Superquantile Regression: Theory, Algorithms, and Applications
2014-12-01
Example C: Stack loss data scatterplot matrix. [Fragment of regression coefficient tables comparing least-squares, quantile, and superquantile fits.]
Effect of thumb anaesthesia on weight perception, muscle activity and the stretch reflex in man.
Marsden, C D; Rothwell, J C; Traub, M M
1979-01-01
1. We have confirmed the results of Gandevia & McCloskey (1977) on the effect of thumb anaesthesia on perception of weights lifted by the thumb: weights lifted by flexion feel heavier and weights lifted by extension feel lighter. 2. The change in size of the long-latency stretch reflex in flexor pollicis longus or extensor pollicis longus after thumb anaesthesia cannot explain the effect on weight perception through removal or augmentation of the background servo assistance to muscular contraction. 3. During smooth thumb flexion, thumb anaesthesia increases e.m.g. activity in flexor pollicis longus and extensor pollicis longus for any given opposing torque. 4. During smooth thumb extension the opposite occurs: e.m.g. activity in both extensor and flexor pollicis longus decreases. 5. Clamping the thumb at the proximal phalanx to limit movement solely to the interphalangeal joint reduces or abolishes the effect of anaesthesia on both weight perception and e.m.g. activity during both flexion and extension tasks. 6. Gandevia & McCloskey's findings on the distorting effects of thumb anaesthesia on weight perception cannot be used to support the hypothesis of an efferent monitoring system of the sense of effort. Our results emphasize the close functional relationship between cutaneous and joint afferent information and motor control. PMID:512948
Smoothing of climate time series revisited
NASA Astrophysics Data System (ADS)
Mann, Michael E.
2008-08-01
We present an easily implemented method for smoothing climate time series, generalizing an approach previously described by Mann (2004). The method adaptively weights the three lowest-order time series boundary constraints to optimize the fit with the raw time series. We apply the method to the instrumental global mean temperature series from 1850 to 2007 and to various surrogate global mean temperature series from 1850 to 2100 derived from the CMIP3 multimodel intercomparison project. These applications demonstrate that the adaptive method systematically outperforms certain widely used default smoothing methods and is more likely to yield accurate assessments of long-term warming trends.
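The three lowest-order boundary constraints have simple padding interpretations (minimum norm, minimum slope, minimum roughness). The sketch below is an assumed reading of that scheme: it uses a moving average in place of the paper's lowpass filter, and it picks the single best constraint by mean squared error rather than adaptively weighting all three.

```python
import numpy as np

def smooth_with_boundary(y, width, mode):
    """Lowpass a series after padding with one boundary constraint:
    'norm' pads with the long-term mean, 'slope' reflects the series,
    'roughness' reflects and flips it about the end value."""
    y = np.asarray(y, dtype=float)
    p = width
    if mode == "norm":
        left, right = np.full(p, y.mean()), np.full(p, y.mean())
    elif mode == "slope":
        left, right = y[p:0:-1], y[-2:-p - 2:-1]
    else:  # "roughness"
        left = 2 * y[0] - y[p:0:-1]
        right = 2 * y[-1] - y[-2:-p - 2:-1]
    padded = np.concatenate([left, y, right])
    kernel = np.ones(2 * p + 1) / (2 * p + 1)  # stand-in for a Butterworth filter
    return np.convolve(padded, kernel, mode="same")[p:-p]

def adaptive_smooth(y, width):
    """Choose the boundary constraint minimizing MSE against the raw series
    (the paper adaptively weights the three; picking the best is simpler)."""
    fits = [smooth_with_boundary(y, width, m) for m in ("norm", "slope", "roughness")]
    return min(fits, key=lambda s: np.mean((s - np.asarray(y, float)) ** 2))

trend = np.arange(50.0)               # a pure linear warming-like trend
smoothed_trend = adaptive_smooth(trend, 5)
```

For a pure linear trend, the minimum-roughness padding extends the series exactly, so the adaptive choice reproduces the trend without the end-point flattening a fixed default constraint would cause.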
Nam, Julia EunJu; Mueller, Klaus
2013-02-01
Gaining a true appreciation of high-dimensional space remains difficult since all of the existing high-dimensional space exploration techniques serialize the space travel in some way. This is not so foreign to us since we, when traveling, also experience the world in a serial fashion. But we typically have access to a map to help with positioning, orientation, navigation, and trip planning. Here, we propose a multivariate data exploration tool that compares high-dimensional space navigation with a sightseeing trip. It decomposes this activity into five major tasks: 1) Identify the sights: use a map to identify the sights of interest and their location; 2) Plan the trip: connect the sights of interest along a specifiable path; 3) Go on the trip: travel along the route; 4) Hop off the bus: experience the location, look around, zoom into detail; and 5) Orient and localize: regain bearings in the map. We describe intuitive and interactive tools for all of these tasks, both global navigation within the map and local exploration of the data distributions. For the latter, we describe a polygonal touchpad interface which enables users to smoothly tilt the projection plane in high-dimensional space to produce multivariate scatterplots that best convey the data relationships under investigation. Motion parallax and illustrative motion trails aid in the perception of these transient patterns. We describe the use of our system within two applications: 1) the exploratory discovery of data configurations that best fit a personal preference in the presence of tradeoffs and 2) interactive cluster analysis via cluster sculpting in N-D.
Zong, Xinnan; Li, Hui; Zhang, Yaqin; Wu, Huahong
2017-05-01
It is important to update weight-for-length/height growth curves in China and re-examine their performance in screening malnutrition. To develop weight-for-length/height growth curves for Chinese children and adolescents, a total of 94 302 children aged 0-19 years with complete sex, age, weight and length/height data were obtained from two large-scale national cross-sectional surveys in China. Weight-for-length/height growth curves were constructed using the LMS method before and after average spermarcheal/menarcheal ages, respectively. Screening performance in prevalence estimates of wasting, overweight and obesity was compared between weight-for-height and body mass index (BMI) criteria in a test population of 21 416 children aged 3-18. Smoothed weight-for-length percentile and Z-score growth curves for length 46-110 cm in both sexes, and weight-for-height curves for height 70-180 cm in boys and 70-170 cm in girls, were established. Weight-for-height and BMI-for-age were strongly correlated in screening wasting, overweight and obesity in each age-sex group. There was no striking difference in prevalence estimates of wasting, overweight and obesity between the two indicators except for obesity prevalence at ages 6-11. This set of smoothed weight-for-length/height growth curves may be useful in assessing nutritional status from infancy to post-pubertal adolescence.
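Curves built with the LMS method score an individual measurement through Cole's L (skewness), M (median), and S (coefficient-of-variation) parameters. A minimal sketch of the z-score formula; the L, M, S values used below are hypothetical, not taken from the Chinese reference curves.

```python
import math

def lms_zscore(x, L, M, S):
    """Cole's LMS z-score: z = ((x/M)**L - 1) / (L*S) for L != 0,
    and z = ln(x/M) / S in the limit L -> 0."""
    if abs(L) < 1e-12:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# Hypothetical L, M, S for one length bin: weight 12.0 kg vs median 10.5 kg
z = lms_zscore(12.0, L=-0.5, M=10.5, S=0.08)
```

A measurement equal to the median M always maps to z = 0, and z-scores beyond fixed cutoffs (for example, below -2) flag wasting in screening applications.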
Accuracy of the weighted essentially non-oscillatory conservative finite difference schemes
NASA Astrophysics Data System (ADS)
Don, Wai-Sun; Borges, Rafael
2013-10-01
In the reconstruction step of (2r-1) order weighted essentially non-oscillatory conservative finite difference schemes (WENO) for solving hyperbolic conservation laws, nonlinear weights αk and ωk, such as the WENO-JS weights by Jiang et al. and the WENO-Z weights by Borges et al., are designed to recover the formal (2r-1) order (optimal order) of the upwinded central finite difference scheme when the solution is sufficiently smooth. The smoothness of the solution is determined by the lower order local smoothness indicators βk in each substencil. These nonlinear weight formulations share two important free parameters: the power p, which controls the amount of numerical dissipation, and the sensitivity ε, which is added to βk to avoid division by zero in the denominator of αk. However, ε also affects the order of accuracy of WENO schemes, especially in the presence of critical points. It was recently shown that, for any design order (2r-1), ε should be of Ω(Δx^2) (Ω(Δx^m) means that ε ⩾ CΔx^m for some C independent of Δx, as Δx → 0) for the WENO-JS scheme to achieve the optimal order, regardless of critical points. In this paper, we derive an alternative proof of the sufficient condition using special properties of βk. Moreover, it was unknown whether the WENO-Z scheme should obey the same condition on ε. Here, using the same special properties of βk, we prove that the optimal order of the WENO-Z scheme can in fact be guaranteed with a much weaker condition ε = Ω(Δx^m), where m(r,p) ⩾ 2 is the optimal sensitivity order, regardless of critical points. Both theoretical results are confirmed numerically on smooth functions with arbitrary order of critical points. This is a highly desirable feature, as illustrated with the Lax problem and the Mach 3 shock-density wave interaction of the one-dimensional Euler equations, for a smaller ε allows better essentially non-oscillatory shock capturing, as it does not over-dominate the size of βk.
We also show that numerical oscillations can be further attenuated by increasing the power parameter 2 ⩽ p ⩽ r-1, at the cost of increased numerical dissipation. Compact formulas of βk for the WENO schemes are also presented.
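For the fifth-order case (r = 3), the nonlinear weights discussed above can be computed directly from the classical Jiang-Shu smoothness indicators. The sketch below evaluates both the WENO-JS and WENO-Z weightings on a single 5-point stencil (the reconstruction step itself is omitted):

```python
import numpy as np

def weno5_weights(f, eps=1e-6, p=2, z=False):
    """Nonlinear weights of fifth-order WENO (r = 3) for one stencil
    f = (f[i-2], ..., f[i+2]); z=True uses the WENO-Z weights with
    tau5 = |beta0 - beta2|."""
    b0 = 13/12*(f[0]-2*f[1]+f[2])**2 + 1/4*(f[0]-4*f[1]+3*f[2])**2
    b1 = 13/12*(f[1]-2*f[2]+f[3])**2 + 1/4*(f[1]-f[3])**2
    b2 = 13/12*(f[2]-2*f[3]+f[4])**2 + 1/4*(3*f[2]-4*f[3]+f[4])**2
    d = np.array([0.1, 0.6, 0.3])       # optimal (linear) weights
    b = np.array([b0, b1, b2])
    if z:
        tau5 = abs(b0 - b2)
        alpha = d * (1.0 + (tau5 / (b + eps)) ** p)
    else:
        alpha = d / (b + eps) ** p      # classical WENO-JS
    return alpha / alpha.sum()

w_smooth = weno5_weights(np.ones(5))
w_smooth_z = weno5_weights(np.ones(5), z=True)
w_jump = weno5_weights(np.array([0.0, 0.0, 0.0, 1.0, 1.0]))
```

On smooth data the weights recover the optimal linear weights (0.1, 0.6, 0.3); across a jump, the substencils containing the discontinuity are switched off. The roles of ε and p match the abstract: ε guards the division, p controls dissipation.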
2007-08-01
Characterization (OHM 1998). From the plot, it is clear that the HEU dominates DU in the overall isotopic characteristic. Among the three uranium ... isotopes, 234U comprised about 90% of the total activity, including naturally occurring background sources. However, in comparison to the WGP, uranium ... listed for a few sampling locations that had isotopic plutonium analysis of wipe samples. Figure A-19 contains a scatterplot of the paired Table 4-13
Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert
2017-01-27
A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation, with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well as the actual level set for mesh smoothing. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Lastly, dynamic cases with moving interfaces show that the new method is capable of maintaining a desired resolution near the interface with an acceptable number of relaxation iterations per time step, which demonstrates the method's potential to be used as a mesh relaxer for arbitrary Lagrangian-Eulerian (ALE) methods.
Prospects for low wing-loading STOL transports with ride smoothing.
NASA Technical Reports Server (NTRS)
Holloway, R. B.; Thompson, G. O.; Rohling, W. J.
1972-01-01
Airplanes with low wing-loadings provide STOL capability without reliance on auxiliary propulsion or augmented lift, but require a ride-smoothing control system to provide acceptable passenger comfort. A parametric study produced a configuration having a 0.35 thrust-to-weight ratio and a 50 psf wing loading, which satisfied specified mission requirements and airworthiness standards. A ride-smoothing control system (RCS) synthesis was then performed, consisting of ride quality criteria definition, RCS concept trades, and analysis of RCS performance benefits at significant flight conditions. Within the limitations of the study it is concluded that this is a viable approach to STOL airplane design.
Considering body mass differences, who are the world's strongest women?
Vanderburgh, P M; Dooman, C
2000-01-01
Allometric modeling (AM) has been used to determine the world's strongest body mass-adjusted man. Recently, however, AM was shown to demonstrate body mass bias in elite Olympic weightlifting performance; a second-order polynomial (2OP) provided a better fit than AM, with no body mass bias, for men and women. The purpose of this study was to apply both AM and 2OP models to women's world powerlifting records (more a function of pure strength, and less of power, than Olympic lifts) to determine the optimal modeling approach as well as the strongest body mass-adjusted woman in each event. Subjects were the 36 (9 per event) current women's world record holders (as of November 1997) for bench press (BP), deadlift (DL), squat (SQ), and total (TOT) lift (BP + DL + SQ) according to the International Powerlifting Federation (IPF). The 2OP model demonstrated the superior fit and no body mass bias, as indicated by the coefficient of variation and inspection of the residuals scatterplot, respectively, for DL, SQ, and TOT. The AM for these three lifts, however, showed favorable bias toward the middle weight classes. The 2OP and AM yielded an essentially identical fit for BP. Although body mass-adjusted world records were dependent on the model used, Carrie Boudreau (U.S., 56-kg weight class), who received top scores in TOT and DL with both models, is arguably the world's strongest woman overall. Furthermore, although the 2OP model provides a better fit than AM for this elite population, a case can still be made for AM use, particularly in light of its theoretical superiority.
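Both adjustment models compared in the study are easy to fit with ordinary least squares. The (body mass, lift) pairs below are made-up illustrative numbers, not the actual IPF records.

```python
import numpy as np

# Hypothetical (body mass kg, total lift kg) pairs -- illustrative only
mass = np.array([44.0, 48, 52, 56, 60, 67.5, 75, 82.5, 90])
lift = np.array([310.0, 350, 390, 420, 450, 480, 505, 520, 530])

# Allometric model: lift = a * mass**b, fitted as a line in log-log space
b, log_a = np.polyfit(np.log(mass), np.log(lift), 1)
am_pred = np.exp(log_a) * mass ** b

# Second-order polynomial (2OP) model: lift = c0 + c1*m + c2*m**2
c2, c1, c0 = np.polyfit(mass, lift, 2)
op_pred = c2 * mass ** 2 + c1 * mass + c0

# Body-mass-adjusted score: ratio of observed lift to model prediction
am_score = lift / am_pred
op_score = lift / op_pred
```

The body-mass-adjusted score is the ratio of the observed lift to the model's prediction; a flat trend of this score against body mass (and a structureless residuals scatterplot) indicates the absence of body mass bias.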
Schröder, Annette; Kirwan, Tyler P; Jiang, Jia-Xin; Aitken, Karen J; Bägli, Darius J
2013-06-01
Previous molecular studies showed that the mTOR inhibitor rapamycin prevents bladder smooth muscle hypertrophy in vitro. We investigated the effect of rapamycin treatment in vivo on bladder smooth muscle hypertrophy in a rat model of partial bladder outlet obstruction. A total of 48 female Sprague-Dawley® rats underwent partial bladder outlet obstruction and received daily subcutaneous injections of rapamycin (1 mg/kg) or vehicle commencing 2 weeks postoperatively. A total of 36 rats underwent sham surgery and received rapamycin or vehicle. Rats were sacrificed 3, 6 and 12 weeks after surgery. Before sacrifice, voiding was observed in a metabolic cage for 24 hours. The bladder-to-body weight ratio (g bladder weight per kg body weight) and post-void residual urine were assessed. We evaluated Col1a1, Col3a1, Eln and Mmp7 mRNA expression and histology. Two-factor ANOVA and the post hoc t test were applied. Bladder outlet obstruction caused a significant increase in bladder weight in all obstructed groups. Three weeks postoperatively (1 week of treatment) there was no difference in the bladder-to-body weight ratio in the obstructed group. However, at 6 and 12 weeks (4 and 10 weeks of treatment, respectively) the bladder-to-body weight ratio of rats with obstruction plus rapamycin was significantly lower than that of rats with obstruction plus vehicle. Post-void residual urine volume after 6 and 12 weeks of obstruction was lower in obstructed rats with rapamycin compared to that in obstructed rats with vehicle. Rapamycin decreased the obstruction induced expression of Col1a1, Col3a1, Eln and Mmp7. Rapamycin prevents mechanically induced hypertrophy in cardiovascular smooth muscle. In vivo mTOR inhibition may attenuate obstruction induced detrusor hypertrophy and help preserve bladder function. Copyright © 2013 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Preprocessing of SAR interferometric data using anisotropic diffusion filter
NASA Astrophysics Data System (ADS)
Sartor, Kenneth; Allen, Josef De Vaughn; Ganthier, Emile; Tenali, Gnana Bhaskar
2007-04-01
The most commonly used smoothing algorithms for complex data processing are blurring functions (e.g., Hanning, Taylor weighting, Gaussian). Unfortunately, filters so designed blur the edges in a Synthetic Aperture Radar (SAR) scene, reduce the accuracy of features, and blur the fringe lines in an interferogram. For Digital Surface Map (DSM) extraction, the blurring of these fringe lines causes inaccuracies in the height of the unwrapped terrain surface. Our goal here is to perform spatially non-uniform smoothing to overcome the above-mentioned disadvantages. This is achieved by using a Complex Anisotropic Non-Linear Diffuser (CANDI) filter that is spatially varying. In particular, an appropriate choice of the convection function in the CANDI filter accomplishes the non-uniform smoothing. This boundary-sharpening, intra-region smoothing filter acts on noisy interferometric SAR (IFSAR) data to produce an interferogram with significantly reduced noise content and the desired local smoothing. Results of CANDI filtering are discussed and compared with those obtained by using standard filters on simulated data.
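The underlying principle, diffuse strongly inside homogeneous regions and weakly across edges, is that of classic anisotropic diffusion. The sketch below is the real-valued Perona-Malik scheme as a stand-in for CANDI, which additionally handles complex IFSAR data and a convection term; it is not the paper's filter.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Real-valued Perona-Malik diffusion: smooth within regions while
    preserving edges, via a gradient-dependent conduction coefficient."""
    u = np.asarray(img, dtype=float).copy()

    def g(d):  # conduction: near zero across strong gradients (edges)
        return np.exp(-(d / kappa) ** 2)

    for _ in range(n_iter):
        p = np.pad(u, 1, mode="edge")        # zero-flux boundaries
        dn = p[:-2, 1:-1] - u                # north neighbour difference
        ds = p[2:, 1:-1] - u                 # south
        de = p[1:-1, 2:] - u                 # east
        dw = p[1:-1, :-2] - u                # west
        u += dt * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)
    return u

# Noisy step "fringe edge": left half 0, right half 1, plus Gaussian noise
img = np.zeros((20, 20)); img[:, 10:] = 1.0
rng = np.random.default_rng(1)
noisy_img = img + 0.05 * rng.standard_normal((20, 20))
den = perona_malik(noisy_img)
```

Noise variance in the flat regions drops while the step between the two halves survives, which is exactly the behavior a fringe-preserving interferogram filter needs.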
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Debojyoti; Baeder, James D.
2014-01-21
A new class of compact-reconstruction weighted essentially non-oscillatory (CRWENO) schemes was introduced (Ghosh and Baeder in SIAM J Sci Comput 34(3): A1678-A1706, 2012) with high spectral resolution and essentially non-oscillatory behavior across discontinuities. The CRWENO schemes use solution-dependent weights to combine lower-order compact interpolation schemes, yielding a high-order compact scheme for smooth solutions and a non-oscillatory compact scheme near discontinuities. The new schemes result in lower absolute errors and improved resolution of discontinuities and smaller length scales, compared to the weighted essentially non-oscillatory (WENO) scheme of the same order of convergence. Several improvements to the smoothness-dependent weights, proposed in the literature in the context of the WENO schemes, address the drawbacks of the original formulation. This paper explores these improvements in the context of the CRWENO schemes and compares the different formulations of the non-linear weights for flow problems with small length scales as well as discontinuities. Simplified one- and two-dimensional inviscid flow problems are solved to demonstrate the numerical properties of the CRWENO schemes and their different formulations. Canonical turbulent flow problems (the decay of isotropic turbulence and the shock-turbulence interaction) are solved to assess the performance of the schemes for the direct numerical simulation of compressible, turbulent flows.
[General growth patterns and simple mathematic models of height and weight of Chinese children].
Zong, Xin-nan; Li, Hui
2009-05-01
To explore the growth patterns and derive simple mathematical models of height and weight of Chinese children, original data were obtained from two nationally representative cross-sectional surveys: the 2005 National Survey of Physical Development of Children (under 7 years of age) and the 2005 Chinese National Survey on Students Constitution and Health (6-18 years). Reference curves of height and weight of children under 7 years of age were constructed by the LMS method, and data for children 6 to 18 years of age were smoothed by a cubic spline function and transformed by a modified LMS procedure. Growth velocity was calculated from the smoothed values of height and weight, and simple linear models were fitted for children 1 to 10 years of age using the smoothed values. (1) Birth length of Chinese children was about 50 cm, with average length 61 cm, 67 cm, 76 cm and 88 cm at the 3rd, 6th, 12th and 24th months. Height gain was stable from 2 to 10 years of age, averaging 6-7 cm each year. Birth length doubles by 3.5 years and triples by 12 years. The formula estimating average height of normal children aged 2-10 years was: height (cm) = age (yr) x 6.5 + 76. (2) Birth weight was about 3.3 kg. Growth velocity peaked at about 1.0-1.1 kg/month in the first 3 months, decreased by half to about 0.5-0.6 kg/month in the second 3 months, and was reduced to about a quarter, 0.25-0.30 kg/month, in the last 6 months of the first year. Body weight reaches double, triple and quadruple the birth weight at about the 3rd, 12th and 24th months. Average annual gain was about 2 kg from 1-6 years and 3 kg from 7-10 years. The estimating formula for children 1 to 6 years of age was weight (kg) = age (yr) x 2 + 8, and for those 7-10 years old, weight (kg) = age (yr) x 3 + 2.
Growth patterns of height and weight at different age stages were summarized for Chinese children, and simple reference data for height and weight velocity from 0 to 18 years, together with approximate estimation formulas for ages 1-10, are presented for clinical practice.
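The two estimation formulas quoted above translate directly into code (age ranges enforced as stated in the abstract):

```python
def estimated_height_cm(age_years):
    """Average height of normal children aged 2-10 (from the abstract):
    height (cm) = age (yr) * 6.5 + 76."""
    assert 2 <= age_years <= 10, "formula valid for ages 2-10 only"
    return age_years * 6.5 + 76

def estimated_weight_kg(age_years):
    """Average weight (from the abstract): age*2 + 8 for ages 1-6,
    and age*3 + 2 for ages 7-10."""
    assert 1 <= age_years <= 10, "formula valid for ages 1-10 only"
    return age_years * 2 + 8 if age_years <= 6 else age_years * 3 + 2
```

For example, a 4-year-old is estimated at 102 cm and 16 kg; an 8-year-old at 128 cm and 26 kg.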
NASA Astrophysics Data System (ADS)
Zeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em
2017-12-01
Starting with the asymptotic expansion of the error equation of the shifted Grünwald-Letnikov formula, we derive a new modified weighted shifted Grünwald-Letnikov (WSGL) formula by introducing appropriate correction terms. We then apply one special case of the modified WSGL formula to solve multi-term fractional ordinary and partial differential equations, and we prove linear stability and second-order convergence for both smooth and non-smooth solutions. We show theoretically and numerically that numerical solutions of a given accuracy can be obtained with only a few correction terms. Moreover, the correction terms can be tuned according to the fractional derivative orders without explicitly knowing the analytical solutions. Numerical simulations verify the theoretical results and demonstrate that the new formula leads to better performance compared to other known numerical approximations of similar resolution.
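The building blocks of any (shifted) Grünwald-Letnikov formula are the coefficients g_k = (-1)^k C(alpha, k), which obey a simple recurrence; the weighted and corrected variants in the paper combine sums built from these. A sketch of the coefficient recurrence only (the WSGL weighting and correction terms are not reproduced here):

```python
def gl_coefficients(alpha, n):
    """Grunwald-Letnikov coefficients g_k = (-1)**k * C(alpha, k), computed
    with the recurrence g_k = g_{k-1} * (k - 1 - alpha) / k, g_0 = 1."""
    g = [1.0]
    for k in range(1, n + 1):
        g.append(g[-1] * (k - 1 - alpha) / k)
    return g

# Coefficients for a fractional order alpha = 0.5
g = gl_coefficients(0.5, 4)
```

The sum h**(-alpha) * sum_k g_k * f(x - k*h) then approximates the order-alpha fractional derivative to first order; the shifted and weighted combinations raise this to second order.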
Exploring High-D Spaces with Multiform Matrices and Small Multiples
MacEachren, Alan; Dai, Xiping; Hardisty, Frank; Guo, Diansheng; Lengerich, Gene
2011-01-01
We introduce an approach to visual analysis of multivariate data that integrates several methods from information visualization, exploratory data analysis (EDA), and geovisualization. The approach leverages the component-based architecture implemented in GeoVISTA Studio to construct a flexible, multiview, tightly (but generically) coordinated, EDA toolkit. This toolkit builds upon traditional ideas behind both small multiples and scatterplot matrices in three fundamental ways. First, we develop a general, MultiForm, Bivariate Matrix and a complementary MultiForm, Bivariate Small Multiple plot in which different bivariate representation forms can be used in combination. We demonstrate the flexibility of this approach with matrices and small multiples that depict multivariate data through combinations of: scatterplots, bivariate maps, and space-filling displays. Second, we apply a measure of conditional entropy to (a) identify variables from a high-dimensional data set that are likely to display interesting relationships and (b) generate a default order of these variables in the matrix or small multiple display. Third, we add conditioning, a kind of dynamic query/filtering in which supplementary (undisplayed) variables are used to constrain the view onto variables that are displayed. Conditioning allows the effects of one or more well understood variables to be removed from the analysis, making relationships among remaining variables easier to explore. We illustrate the individual and combined functionality enabled by this approach through application to analysis of cancer diagnosis and mortality data and their associated covariates and risk factors. PMID:21947129
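The conditional-entropy ordering in the second step can be approximated from a binned joint distribution. The histogram-based estimator below is a generic sketch; the bin count and the estimator itself are assumptions, not the GeoVISTA Studio implementation.

```python
import numpy as np

def conditional_entropy(x, y, bins=8):
    """Plug-in estimate of H(Y|X) in bits from a 2-D histogram; low values
    flag variable pairs with strong (potentially interesting) dependence."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()                  # joint probabilities
    p_x = p_xy.sum(axis=1, keepdims=True)       # marginal of the row variable
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -np.nansum(p_xy * np.log2(p_xy / p_x))  # empty cells contribute 0
    return h

x = np.arange(100.0)
h_det = conditional_entropy(x, x)               # deterministic: near 0 bits

rng = np.random.default_rng(0)
u, v = rng.random(2000), rng.random(2000)
h_ind = conditional_entropy(u, v)               # independent: near log2(bins)
```

Sorting variable pairs by this value ascending puts the strongest dependencies first, which is the spirit of the default matrix ordering described above.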
Vegetation Phenology Metrics Derived from Temporally Smoothed and Gap-filled MODIS Data
NASA Technical Reports Server (NTRS)
Tan, Bin; Morisette, Jeff; Wolfe, Robert; Esaias, Wayne; Gao, Feng; Ederer, Greg; Nightingale, Joanne; Nickeson, Jamie E.; Ma, Pete; Pedely, Jeff
2012-01-01
Smoothed and gap-filled vegetation index (VI) data provide a good basis for estimating vegetation phenology metrics. The TIMESAT software was improved by incorporating ancillary information from MODIS products. A simple assessment of the association between retrieved green-up dates and ground observations indicates satisfactory results from the improved TIMESAT software. One application example shows that mapping nectar flow phenology is tractable on a continental scale using hive weight and satellite vegetation data. The phenology data product is supporting further research in ecology and climate change.
Schwartz, Joseph E; Burg, Matthew M; Shimbo, Daichi; Broderick, Joan E; Stone, Arthur A; Ishikawa, Joji; Sloan, Richard; Yurgel, Tyla; Grossman, Steven; Pickering, Thomas G
2016-12-06
Ambulatory blood pressure (ABP) is consistently superior to clinic blood pressure (CBP) as a predictor of cardiovascular morbidity and mortality risk. A common perception is that ABP is usually lower than CBP. The relationship of the CBP minus ABP difference to age has not been examined in the United States. Between 2005 and 2012, 888 healthy, employed, middle-aged (mean±SD age, 45±10.4 years) individuals (59% female, 7.4% black, 12% Hispanic) with screening BP <160/105 mm Hg and not taking antihypertensive medication completed 3 separate clinic BP assessments and a 24-hour ABP recording for the Masked Hypertension Study. The distributions of CBP, mean awake ABP (aABP), and the CBP-aABP difference in the full sample and by demographic characteristics were compared. Locally weighted scatterplot smoothing was used to model the relationship of the BP measures to age and body mass index. The prevalence of discrepancies between ABP- and CBP-defined hypertension status (white-coat hypertension and masked hypertension) was also examined. Average systolic/diastolic aABP (123.0/77.4±10.3/7.4 mm Hg) was significantly higher than the average of 9 CBP readings over 3 visits (116.0/75.4±11.6/7.7 mm Hg). aABP exceeded CBP by >10 mm Hg much more frequently than CBP exceeded aABP. The difference (aABP>CBP) was most pronounced in young adults and those with normal body mass index. The systolic difference progressively diminished, but did not disappear, at older ages and higher body mass indexes. The diastolic difference vanished around age 65 and reversed (CBP>aABP) for body mass index >32.5 kg/m². Whereas 5.3% of participants were hypertensive by CBP, 19.2% were hypertensive by aABP; 15.7% of those with nonelevated CBP had masked hypertension. Contrary to a widely held belief, based primarily on cohort studies of patients with elevated CBP, ABP is not usually lower than CBP, at least not among healthy, employed individuals.
Furthermore, a substantial proportion of otherwise healthy individuals with nonelevated CBP have masked hypertension. Demonstrated CBP-aABP gradients, if confirmed in representative samples (eg, NHANES [National Health and Nutrition Examination Survey]), could provide guidance for primary care physicians as to when, for a given CBP, 24-hour ABP would be useful to identify or rule out masked hypertension. © 2016 American Heart Association, Inc.
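Locally weighted scatterplot smoothing, used above to model blood pressure against age and body mass index, fits a weighted linear regression around each point with tricube weights over the nearest fraction of the data. A minimal, non-robust sketch (production implementations add robustness iterations and interpolation):

```python
import numpy as np

def lowess(x, y, frac=0.5):
    """Minimal LOWESS: for each x0, fit a tricube-weighted local linear
    regression on the nearest frac*n points and evaluate it at x0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(2, int(frac * n))               # local neighbourhood size
    out = np.empty(n)
    for i, x0 in enumerate(x):
        d = np.abs(x - x0)
        idx = np.argsort(d)[:k]             # k nearest points
        h = d[idx].max()
        if h == 0.0:
            h = 1.0
        w = (1 - (d[idx] / h) ** 3) ** 3    # tricube kernel weights
        W = np.diag(w)
        A = np.stack([np.ones(k), x[idx]], axis=1)
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y[idx])
        out[i] = beta[0] + beta[1] * x0
    return out

xs_demo = np.linspace(0.0, 1.0, 20)
fit = lowess(xs_demo, 2 * xs_demo + 1)      # exactly linear input
```

Because each fit is local, the smoother can capture the age-dependent narrowing of the aABP-CBP gradient without assuming any global functional form.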
NASA Astrophysics Data System (ADS)
Murru, M.; Falcone, G.; Taroni, M.; Console, R.
2017-12-01
In 2015 the Italian Department of Civil Protection started a project to upgrade the official Italian seismic hazard map (MPS04), inviting the Italian scientific community to participate in a joint effort for its realization. We participated by providing spatially variable, time-independent (Poisson) long-term annual occurrence rates of seismic events over the entire Italian territory, on cells of 0.1°×0.1°, from M4.5 up to M8.1 in magnitude bins of 0.1 units. Our final model was an ensemble of two models merged with equal weight: the first was built with a smoothed-seismicity approach, the second using seismogenic faults. The spatially smoothed seismicity was obtained with the smoothing method introduced by Frankel (1995), applied to the historical and instrumental seismicity. In this approach we adopted a tapered Gutenberg-Richter relation with a b-value fixed at 1 and a corner magnitude estimated from the largest events in the catalogs. For each seismogenic fault provided by the Database of Individual Seismogenic Sources (DISS), we computed the annual rate (for each 0.1°×0.1° cell) in magnitude bins of 0.1 units, assuming that the seismic moments of the earthquakes generated by each fault are distributed according to the same tapered Gutenberg-Richter relation as in the smoothed-seismicity model. The annual rate of the final model was determined as follows: if a cell falls within one of the seismic sources, we merge, with equal weight, the rate determined from the seismic moments of the earthquakes generated by the fault and the rate from the smoothed-seismicity model; if instead the cell falls outside any seismic source, we take the rate from the spatially smoothed seismicity. Here we present the final results of our study, to be used for the new Italian seismic hazard map.
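Frankel-style smoothing replaces each cell's event count with a Gaussian-weighted average over all cells, n~_i = Σ_j n_j exp(-d_ij²/c²) / Σ_j exp(-d_ij²/c²). A schematic version (the grid spacing and correlation distance below are illustrative values, not those of this study):

```python
import numpy as np

def frankel_smooth(counts, cell_km=10.0, c_km=30.0):
    """Gaussian smoothing of gridded earthquake counts (Frankel-1995 style):
    each cell becomes a distance-weighted average of all cells."""
    counts = np.asarray(counts, float)
    ny, nx = counts.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    fy = yy.ravel() * cell_km        # cell-centre coordinates in km
    fx = xx.ravel() * cell_km
    n = counts.ravel()
    out = np.empty(n.size)
    for i in range(n.size):
        d2 = (fx - fx[i]) ** 2 + (fy - fy[i]) ** 2
        w = np.exp(-d2 / c_km ** 2)  # Gaussian weight with correlation distance c
        out[i] = np.sum(w * n) / np.sum(w)
    return out.reshape(counts.shape)
```

A single concentrated cluster of events is spread into a smooth rate bump centred on the source cell.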
A robust method of thin plate spline and its application to DEM construction
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Li, Yanyan
2012-11-01
In order to avoid the ill-conditioning problem of thin plate spline (TPS) interpolation, the orthogonal least squares (OLS) method was introduced, and a modified OLS (MOLS) was developed. The MOLS version of TPS (TPS-M) can not only select significant points, termed knots, from large and dense sampling data sets, but also easily compute the weights of the knots by back-substitution. For interpolating large numbers of sampling points, we developed a local TPS-M, in which some neighbouring sampling points around the point being estimated are selected for computation. Numerical tests indicate that, irrespective of sampling noise level, the average performance of TPS-M compares favourably with that of smoothing TPS. For the same simulation accuracy, the computational time of TPS-M decreases as the number of sampling points increases. The smooth fitting results on lidar-derived noisy data indicate that TPS-M has an obvious smoothing effect, on par with smoothing TPS. The example of constructing a series of large-scale DEMs, located in Shandong province, China, was employed to comparatively analyze the estimation accuracies of the two versions of TPS and the classical interpolation methods, including inverse distance weighting (IDW), ordinary kriging (OK), and universal kriging with a second-order drift function (UK). Results show that, regardless of sampling interval and spatial resolution, TPS-M is more accurate than the classical interpolation methods, except for smoothing TPS at the finest sampling interval of 20 m and the two versions of kriging at the spatial resolution of 15 m. In conclusion, TPS-M, which avoids the ill-conditioning problem, is considered a robust method for DEM construction.
Evaluation of performance of light-weight profilometers
DOT National Transportation Integrated Search
2003-10-01
Several lightweight, non-contact profilometers (LWP) are now available to measure profiles of newly constructed Portland Cement Concrete Pavement (PCCP). As-constructed smoothness measurements by four LWPs and the California-type profilograph were c...
Upper arm circumference development in Chinese children and adolescents: a pooled analysis.
Tong, Fang; Fu, Tong
2015-05-30
Upper arm development in children differs among ethnic groups. There have been few reports on upper arm circumference (UAC) at different stages of development in children and adolescents in China. The purpose of this study was to provide a growth reference, with a weighted assessment of the overall level of development. Using a pooled analysis, an authoritative journal database search, and reports of UAC, we created a new database of developmental measures in children. In conducting a weighted analysis, we compared reference values for 0-60 months of development against the World Health Organization (WHO) statistics, considering gender and nationality, and used Z values as interval values for the second sampling to obtain an exponentially smoothed curve for analyzing the mean, standard deviation, and sites of attachment. Ten articles were included in the pooled analysis, covering participants from different areas of China. The point of intersection with the WHO curve was 3.5 years, with higher values at earlier ages and lower values at older ages. Boys' curves were steeper after puberty. The curves from the individual studies merged into a compatible line. The Z values from exponential smoothing showed that the curves were similar to those for body weight and approximately normally distributed. The integrated index of UAC in Chinese children and adolescents indicated slight variation across regions. Exponential curve smoothing was suitable for assessment at different developmental stages.
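Exponential curve smoothing of the kind applied to the Z-value series can be illustrated with simple exponential smoothing; the exact smoothing variant used in the study is not specified, so this is only a generic sketch:

```python
def exp_smooth(values, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}.
    Smaller alpha gives a smoother (slower-reacting) curve."""
    s = [float(values[0])]
    for x in values[1:]:
        s.append(alpha * x + (1.0 - alpha) * s[-1])
    return s
```

A constant series is left unchanged, and each smoothed point is a geometric-decay average of all earlier observations.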
Blow, Nikolaus; Biswas, Pradipta
2017-01-01
As computers become more and more essential for everyday life, people who cannot use them are missing out on an important tool. The predominant method of interaction with a screen is a mouse, and difficulty in using a mouse can be a huge obstacle for people who would otherwise gain great value from using a computer. If mouse pointing were to be made easier, then a large number of users may be able to begin using a computer efficiently where they may previously have been unable to. The present article aimed to improve pointing speeds for people with arm or hand impairments. The authors investigated different smoothing and prediction models on a stored data set involving 25 people, and the best of these algorithms were chosen. A web-based prototype was developed combining a polynomial smoothing algorithm with a time-weighted gradient target prediction model. The adapted interface gave an average improvement of 13.5% in target selection times in a 10-person study of representative users of the system. A demonstration video of the system is available at https://youtu.be/sAzbrKHivEY.
Barbour, P S; Stone, M H; Fisher, J
2000-01-01
This study validates a hip joint simulator configuration against other machines and clinical wear rates using smooth metal and ceramic femoral heads and ultra-high molecular weight polyethylene (UHMWPE) acetabular cups. Secondly, the wear rate of UHMWPE cups is measured in the simulator with deliberately scratched cobalt-chrome heads, to represent the type of mild and severe scratch damage found on retrieved heads. Finally, the scratching processes are described and the resulting scratches compared with those found on retrieved cobalt-chrome heads. For smooth cobalt-chrome and zirconia heads the wear rates were found to be statistically similar to those of other simulator machines and within the normal range found in clinical studies. An increased wear rate was found with cobalt-chrome heads scratched using either a diamond stylus or a cobalt-chrome bead, but the greatest increase was with the diamond-scratched heads, which generated scratches of similar dimensions to those on retrieved heads. A greater than twofold increase in wear rate is reported for these heads when compared with smooth heads. This increased wear rate is, however, still within the limits of data from clinical wear studies.
Testing local anisotropy using the method of smoothed residuals I — methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appleby, Stephen; Shafieloo, Arman, E-mail: stephen.appleby@apctp.org, E-mail: arman@apctp.org
2014-03-01
We discuss some details regarding the method of smoothed residuals, which has recently been used to search for anisotropic signals in low-redshift distance measurements (supernovae). In this short note we focus on the implementation of the method, particularly the issue of effectively detecting signals in data that are inhomogeneously distributed on the sky. Using simulated data, we argue that the original method proposed in Colin et al. [1] will not detect spurious signals due to incomplete sky coverage, and that introducing additional Gaussian weighting to the statistic as in [2] can hinder its ability to detect a signal. Issues related to the width of the Gaussian smoothing are also discussed.
What evidence implicates airway smooth muscle in the cause of BHR?
Dulin, Nickolai O; Fernandes, Darren J; Dowell, Maria; Bellam, Shashi; McConville, John; Lakser, Oren; Mitchell, Richard; Camoretti-Mercado, Blanca; Kogut, Paul; Solway, Julian
2003-02-01
Bronchial hyperresponsiveness (BHR), the occurrence of excessive bronchoconstriction in response to relatively small constrictor stimuli, is a cardinal feature of asthma. Here, we consider the role that airway smooth muscle might play in the generation of BHR. The weight of evidence suggests that smooth muscle isolated from asthmatic tissues exhibits normal sensitivity to constrictor agonists when studied during isometric contraction, but the increased muscle mass within asthmatic airways might generate more total force than the lesser amount of muscle found in normal bronchi. Another salient difference between asthmatic and normal individuals lies in the effect of deep inhalation (DI) on bronchoconstriction. DI often substantially reverses induced bronchoconstriction in normal subjects, while it often has much less effect on spontaneous or induced bronchoconstriction in asthmatics. It has been proposed that abnormal dynamic aspects of airway smooth muscle contraction (velocity of contraction, or the plasticity-elasticity balance) might underlie the abnormal DI response in asthma. We suggest a speculative model in which abnormally long actin filaments might account for the abnormally increased elasticity of contracted airway smooth muscle.
Zhan, Tingting; Chevoneva, Inna; Iglewicz, Boris
2010-01-01
The family of weighted likelihood estimators largely overlaps with minimum divergence estimators. They are more robust to data contamination than the MLE. We define the class of generalized weighted likelihood estimators (GWLE), provide its influence function, and discuss the efficiency requirements. We introduce a new truncated cubic-inverse weight, which is both first- and second-order efficient and more robust than previously reported weights. We also discuss new ways of selecting the smoothing bandwidth and weighted starting values for the iterative algorithm. The advantage of the truncated cubic-inverse weight is illustrated in a simulation study of three-component normal mixture models with large overlap and heavy contamination. A real data example is also provided. PMID:20835375
Indirect estimation of signal-dependent noise with nonadaptive heterogeneous samples.
Azzari, Lucio; Foi, Alessandro
2014-08-01
We consider the estimation of signal-dependent noise from a single image. Unlike conventional algorithms that build a scatterplot of local mean-variance pairs from either small or adaptively selected homogeneous data samples, our proposed approach relies on arbitrarily large patches of heterogeneous data extracted at random from the image. We demonstrate the feasibility of our approach through an extensive theoretical analysis based on mixture of Gaussian distributions. A prototype algorithm is also developed in order to validate the approach on simulated data as well as on real camera raw images.
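The conventional baseline that this abstract contrasts with its heterogeneous-patch method builds a scatterplot of local (mean, variance) pairs and fits a signal-dependent noise curve, e.g. var = a*mean + b for a Poisson-Gaussian model. A sketch of that baseline (the patch size and linear noise model are illustrative assumptions, not the paper's proposed algorithm):

```python
import numpy as np

def fit_noise_curve(img, patch=8):
    """Baseline signal-dependent noise estimation: collect (mean, variance)
    pairs from small patches, then fit var = a*mean + b by least squares.
    Works only if most patches are roughly homogeneous."""
    h, w = img.shape
    means, varis = [], []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = img[i:i + patch, j:j + patch]
            means.append(p.mean())
            varis.append(p.var(ddof=1))
    a, b = np.polyfit(means, varis, 1)   # slope a, intercept b
    return a, b
```

On synthetic piecewise-constant data with noise variance 0.5*signal + 1, the fitted slope recovers the signal-dependent component.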
An algorithm for surface smoothing with rational splines
NASA Technical Reports Server (NTRS)
Schiess, James R.
1987-01-01
Discussed is an algorithm for smoothing surfaces with spline functions containing tension parameters. The bivariate spline functions used are tensor products of univariate rational-spline functions. A distinct tension parameter corresponds to each rectangular strip defined by a pair of consecutive spline knots along either axis. Equations are derived for writing the bivariate rational spline in terms of functions and derivatives at the knots. Estimates of these values are obtained via weighted least squares subject to continuity constraints at the knots. The algorithm is illustrated on a set of terrain elevation data.
Esophageal smooth muscle hypertrophy causing regurgitation in a rabbit
PARKINSON, Lily; KUZMA, Carrie; WUENSCHMANN, Arno; MANS, Christoph
2017-01-01
A five-year-old rabbit was evaluated for a 7 to 8 month history of regurgitation, weight loss, and hyporexia. Previously performed whole-body radiographs, plasma biochemistry, and a complete blood count revealed no significant abnormalities. A computed tomography (CT) scan revealed a circumferential caudal esophageal thickening. The animal received supportive care until euthanasia was performed 6 weeks later. Caudal esophageal smooth muscle hypertrophy was diagnosed on necropsy. This case indicates that regurgitation can occur in rabbits and that advanced imaging can help identify the underlying cause. PMID:28966232
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C; Adcock, A; Azevedo, S
2010-12-28
Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and de Hoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
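The core of GCV-based smoothing-level selection can be sketched with a discrete second-difference penalty standing in for the full Hutchinson-de Hoog spline machinery; the penalty form and the grid of λ values below are illustrative assumptions, not the NIF code:

```python
import numpy as np

def gcv_smooth(y, lams=None):
    """Penalized smoother fit = (I + lam*D'D)^(-1) y, D = second-difference
    operator, with the smoothing level chosen by minimising the GCV score
    GCV(lam) = n*RSS / (n - tr(H))^2."""
    y = np.asarray(y, float)
    n = y.size
    if lams is None:
        lams = np.logspace(-2, 4, 25)
    D = np.diff(np.eye(n), n=2, axis=0)              # (n-2) x n
    best_gcv, best_lam, best_fit = np.inf, None, None
    for lam in lams:
        H = np.linalg.inv(np.eye(n) + lam * (D.T @ D))   # hat matrix
        fit = H @ y
        rss = float(np.sum((y - fit) ** 2))
        gcv = n * rss / (n - np.trace(H)) ** 2       # tr(H) = effective dof
        if gcv < best_gcv:
            best_gcv, best_lam, best_fit = gcv, lam, fit
    return best_lam, best_fit
```

Minimising GCV trades goodness of fit (RSS) against model complexity (the trace of the hat matrix), which is what makes the smoothing level selection automatic.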
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1997-01-01
In these lecture notes we describe the construction, analysis, and application of ENO (Essentially Non-Oscillatory) and WENO (Weighted Essentially Non-Oscillatory) schemes for hyperbolic conservation laws and related Hamilton-Jacobi equations. ENO and WENO schemes are high order accurate finite difference schemes designed for problems with piecewise smooth solutions containing discontinuities. The key idea lies at the approximation level, where a nonlinear adaptive procedure is used to automatically choose the locally smoothest stencil, hence avoiding crossing discontinuities in the interpolation procedure as much as possible. ENO and WENO schemes have been quite successful in applications, especially for problems containing both shocks and complicated smooth solution structures, such as compressible turbulence simulations and aeroacoustics. These lecture notes are basically self-contained. It is our hope that with these notes and with the help of the quoted references, the reader can understand the algorithms and code them up for applications.
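The WENO idea, blending candidate stencils with weights biased toward the smoothest one, can be made concrete with the classic fifth-order reconstruction at a cell interface (Jiang-Shu weights; a sketch, not code from the lecture notes themselves):

```python
def weno5(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    """Fifth-order WENO reconstruction of v at the i+1/2 interface from
    cell averages v_{i-2}..v_{i+2}."""
    # three third-order candidate reconstructions
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # smoothness indicators: large where the stencil crosses a discontinuity
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 1/4*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 1/4*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 1/4*(3*v0 - 4*vp1 + vp2)**2
    # nonlinear weights: ideal weights (0.1, 0.6, 0.3) damped by roughness
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)
```

On smooth data all three candidates agree to high order and the combination is fifth-order accurate; near a discontinuity the rough stencils get near-zero weight, avoiding oscillations.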
Smoothed Particle Inference Analysis of SNR RCW 103
NASA Astrophysics Data System (ADS)
Frank, Kari A.; Burrows, David N.; Dwarkadas, Vikram
2016-04-01
We present preliminary results of applying a novel analysis method, Smoothed Particle Inference (SPI), to an XMM-Newton observation of SNR RCW 103. SPI is a Bayesian modeling process that fits a population of gas blobs ("smoothed particles") such that their superposed emission reproduces the observed spatial and spectral distribution of photons. Emission-weighted distributions of plasma properties, such as abundances and temperatures, are then extracted from the properties of the individual blobs. This technique has important advantages over analysis techniques which implicitly assume that remnants are two-dimensional objects in which each line of sight encompasses a single plasma. By contrast, SPI allows superposition of as many blobs of plasma as are needed to match the spectrum observed in each direction, without the need to bin the data spatially. This RCW 103 analysis is part of a pilot study for the larger SPIES (Smoothed Particle Inference Exploration of SNRs) project, in which SPI will be applied to a sample of 12 bright SNRs.
Virlet, Nicolas; Lebourgeois, Valentine; Martinez, Sébastien; Costes, Evelyne; Labbé, Sylvain; Regnard, Jean-Luc
2014-01-01
As field phenotyping of plant response to water constraints constitutes a bottleneck for breeding programmes, airborne thermal imagery can contribute to assessing the water status of a wide range of individuals simultaneously. However, the presence of mixed soil-plant pixels in heterogeneous plant cover complicates the interpretation of canopy temperature. Moran's Water Deficit Index (WDI = 1 − ETact/ETmax), which was designed to overcome this difficulty, was compared with surface minus air temperature (Ts − Ta) as a water stress indicator. As parameterization of the theoretical equations for WDI computation is difficult, particularly when applied to genotypes with large architectural variability, a simplified procedure based on quantile regression was proposed to delineate the Vegetation Index-Temperature (VIT) scatterplot. The sensitivity of WDI to variations in wet and dry references was assessed by applying more or less stringent quantile levels. The different stress indicators tested on a series of airborne multispectral images (RGB, near-infrared, and thermal infrared) of a population of 122 apple hybrids, under two irrigation regimes, significantly discriminated the tree water statuses. For each acquisition date, the statistical method efficiently delineated the VIT scatterplot, while the limits obtained using the theoretical approach overlapped it, leading to inconsistent WDI values. Once water constraint was established, the different stress indicators were linearly correlated to the stem water potential among a tree subset. Ts − Ta showed a strong sensitivity to evaporative demand, which limited its relevancy for temporal comparisons. Finally, the statistical approach to WDI appeared the most suitable for high-throughput phenotyping. PMID:25080086
Replication of Low Density Electroformed Normal Incidence Optics
NASA Technical Reports Server (NTRS)
Ritter, Joseph M.
2000-01-01
Replicated electroformed light-weight nickel alloy mirrors can have high strength, low areal density (<3 kg/m²), smooth finish, and controllable alloy composition. Progress at NASA MSFC SOMTC in developing normal incidence replicated Nickel mirrors will be reported.
Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease
Jie, Biao; Liu, Mingxia; Liu, Jun
2016-01-01
Sparse learning has been widely investigated for analysis of brain images to assist the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. Actually, multiple time-points of data are often available in brain imaging applications, which can be used in some longitudinal analysis methods to better uncover the disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method aiming for longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model by using the imaging data from multiple time-points, where a group regularization term is first employed to group the weights for the same brain region across different time-points together. Furthermore, to reflect the smooth changes between data derived from adjacent time-points, we incorporate two smoothness regularization terms into the objective function, i.e., one fused smoothness term which requires that the differences between two successive weight vectors from adjacent time-points should be small, and another output smoothness term which requires the differences between outputs of two successive models from adjacent time-points should also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method can achieve improved regression performance and also help in discovering disease-related biomarkers. PMID:27093313
Smooth Muscle-Mediated Connective Tissue Remodeling in Pulmonary Hypertension
NASA Astrophysics Data System (ADS)
Mecham, Robert P.; Whitehouse, Loren A.; Wrenn, David S.; Parks, William C.; Griffin, Gail L.; Senior, Robert M.; Crouch, Edmond C.; Stenmark, Kurt R.; Voelkel, Norbert F.
1987-07-01
Abnormal accumulation of connective tissue in blood vessels contributes to alterations in vascular physiology associated with disease states such as hypertension and atherosclerosis. Elastin synthesis was studied in blood vessels from newborn calves with severe pulmonary hypertension induced by alveolar hypoxia in order to investigate the cellular stimuli that elicit changes in pulmonary arterial connective tissue production. A two- to fourfold increase in elastin production was observed in pulmonary artery tissue and medial smooth muscle cells from hypertensive calves. This stimulation of elastin production was accompanied by a corresponding increase in elastin messenger RNA consistent with regulation at the transcriptional level. Conditioned serum harvested from cultures of pulmonary artery smooth muscle cells isolated from hypertensive animals contained one or more low molecular weight elastogenic factors that stimulated the production of elastin in both fibroblasts and smooth muscle cells and altered the chemotactic responsiveness of fibroblasts to elastin peptides. These results suggest that connective tissue changes in the pulmonary vasculature in response to pulmonary hypertension are orchestrated by the medial smooth muscle cell through the generation of specific differentiation factors that alter both the secretory phenotype and responsive properties of surrounding cells.
Asthma Is More Severe in Older Adults
Dweik, Raed A.; Comhair, Suzy A.; Bleecker, Eugene R.; Moore, Wendy C.; Peters, Stephen P.; Busse, William W.; Jarjour, Nizar N.; Calhoun, William J.; Castro, Mario; Chung, K. Fan; Fitzpatrick, Anne; Israel, Elliot; Teague, W. Gerald; Wenzel, Sally E.; Love, Thomas E.; Gaston, Benjamin M.
2015-01-01
Background: Severe asthma occurs more often in older adult patients. We hypothesized that the greater risk for severe asthma in older individuals is due to aging itself, and is independent of asthma duration. Methods: This is a cross-sectional study of prospectively collected data from adult participants (N=1130; 454 with severe asthma) enrolled from 2002-2011 in the Severe Asthma Research Program. Results: The association between age and the probability of severe asthma, assessed by applying a locally weighted scatterplot smoother, revealed an inflection point at age 45 for risk of severe asthma. The probability of severe asthma increased with each year of life until 45 years and thereafter increased at a much slower rate. Asthma duration also increased the probability of severe asthma but had less effect than aging. After adjustment for most comorbidities of aging and for asthma duration using logistic regression, asthmatics older than 45 maintained a greater probability of severe asthma [OR: 2.73 (95% CI: 1.96-3.81)]. After 45, the age-related risk of severe asthma continued to increase in men, but not in women. Conclusions: Overall, the impact of age and asthma duration on risk of severe asthma in men and women is greatest between 18 and 45 years of age; age has a greater effect than asthma duration on the risk of severe asthma. PMID:26200463
Implementation and control of a 3 degree-of-freedom, force-reflecting manual controller
NASA Astrophysics Data System (ADS)
Kim, Whee-Kuk; Bevill, Pat; Tesar, Delbert
1991-02-01
Most available manual controllers used in bilateral or force-reflecting teleoperator systems can be characterized by their bulky size, heavy weight, high cost, low magnitude of reflecting force, lack of smoothness, insufficient transparency, and simplified architectures. A compact, smooth, lightweight, portable, universal manual controller could provide a markedly improved level of transparency and be able to drive a broad spectrum of slave manipulators. This implies that a single stand-off position could be used for a diverse population of remote systems and that a standard environment for training of operators would result in reduced costs and higher reliability. In the implementation presented in this paper, a parallel 3 degree-of-freedom (DOF) spherical structure (for compactness and reduced weight) is combined with high gear-ratio reducers using a force control algorithm to produce a "power steering" effect for enhanced smoothness and transparency. The force control algorithm has the further benefit of minimizing the effects of system friction and nonlinear inertia forces. The fundamental analytical descriptions of the spherical force-reflecting manual controller, such as the forward position analysis, the reflecting-force transformation, and the applied force control algorithm, are presented. A brief description of the system integration, its actual implementation, and preliminary test results is also given.
A Gaussian random field model for similarity-based smoothing in Bayesian disease mapping.
Baptista, Helena; Mendes, Jorge M; MacNab, Ying C; Xavier, Miguel; Caldas-de-Almeida, José
2016-08-01
Conditionally specified Gaussian Markov random field (GMRF) models with an adjacency-based neighbourhood weight matrix, commonly known as neighbourhood-based GMRF models, have been the mainstream approach to spatial smoothing in Bayesian disease mapping. In the present paper, we propose a conditionally specified Gaussian random field (GRF) model with a similarity-based non-spatial weight matrix to facilitate non-spatial smoothing in Bayesian disease mapping. The model, named similarity-based GRF, is motivated by disease mapping data in situations where the underlying small-area relative risks and the associated determinant factors do not vary systematically in space, and similarity is defined with respect to the associated disease determinant factors. The neighbourhood-based GMRF and the similarity-based GRF are compared and assessed via a simulation study and two case studies, using new data on alcohol abuse in Portugal collected by the World Mental Health Survey Initiative and the well-known lip cancer data in Scotland. In the presence of disease data with no evidence of positive spatial correlation, the simulation study showed a consistent gain in efficiency from the similarity-based GRF, compared with the adjacency-based GMRF with the determinant risk factors as covariates. This new approach broadens the scope of the existing conditional autocorrelation models. © The Author(s) 2016.
Resin char oxidation retardant for composites
NASA Technical Reports Server (NTRS)
Bowles, K. J.; Gluyas, R. E.
1981-01-01
Boron powder stabilizes char, so burned substances are shiny, smooth, and free of loose graphite fibers. Resin weight loss of laminates during burning in air is identical for the first three minutes for unfilled and boron-filled samples, then boron samples stabilize.
Replication of Low Density Electroformed Normal Incidence Optics
NASA Technical Reports Server (NTRS)
Ritter, Joseph M.; Burdine, Robert (Technical Monitor)
2001-01-01
Replicated electroformed light-weight nickel alloy mirrors can have high strength, low areal density (less than 3 kg/m²), smooth finish, and controllable alloy composition. Progress at NASA MSFC SOMTC in developing normal incidence replicated Nickel mirrors will be reported.
Michielsen, Koen; Nuyts, Johan; Cockmartin, Lesley; Marshall, Nicholas; Bosmans, Hilde
2016-12-01
In this work, the authors design and validate a model observer that can detect groups of microcalcifications in a four-alternative forced choice experiment and use it to optimize a smoothing prior for detectability of microcalcifications. A channelized Hotelling observer (CHO) with eight Laguerre-Gauss channels was designed to detect groups of five microcalcifications in a background of acrylic spheres by adding the CHO log-likelihood ratios calculated at the expected locations of the five calcifications. This model observer is then applied to optimize the detectability of the microcalcifications as a function of the smoothing prior. The authors examine the quadratic and total variation (TV) priors, and a combination of both. A selection of these reconstructions was then evaluated by human observers to validate the correct working of the model observer. The authors found a clear maximum for the detectability of microcalcifications when using the total variation prior with weight βTV = 35. Detectability varied only over a small range for the quadratic and combined quadratic-TV priors when the weight βQ of the quadratic prior was changed by two orders of magnitude. Spearman correlation with human observers was good except for the highest value of β for the quadratic and TV priors. Excluding those, the authors found ρ = 0.93 when comparing detection fractions, and ρ = 0.86 for the fitted detection threshold diameter. The authors successfully designed a model observer that was able to predict human performance over a large range of settings of the smoothing prior, except for the highest values of β, which were outside the useful range for good image quality. Since detectability depends only weakly on the strength of the combined prior, it is not possible to pick an optimal smoothness based on this criterion alone. On the other hand, such a choice can now be made based on other criteria without worrying about calcification detectability.
Weighted spline based integration for reconstruction of freeform wavefront.
Pant, Kamal K; Burada, Dali R; Bichra, Mohamed; Ghosh, Amitava; Khan, Gufran S; Sinzinger, Stefan; Shakher, Chandra
2018-02-10
In the present work, a spline-based integration technique for the reconstruction of a freeform wavefront from slope data has been implemented. The slope data of a freeform surface contain noise from the machining process, which introduces reconstruction error. We have proposed a weighted cubic-spline-based least-squares integration method (WCSLI) for the faithful reconstruction of a wavefront from noisy slope data. In the proposed method, the measured slope data are fitted with a piecewise polynomial, whose coefficients are determined by a smoothing cubic spline fitting method. The smoothing parameter locally assigns relative weight to the fitted slope data. The fitted slope data are then integrated using the standard least-squares technique to reconstruct the freeform wavefront. Simulation studies show improved results for the proposed technique compared with the existing cubic-spline-based integration (CSLI) and Southwell methods. The proposed reconstruction method has been experimentally applied to a subaperture-stitching-based measurement of a freeform wavefront using a scanning Shack-Hartmann sensor. The boundary artifacts are minimal in WCSLI, which improves the subaperture stitching accuracy and demonstrates an improved Shack-Hartmann sensor for freeform metrology applications.
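The smooth-then-integrate idea behind such methods can be illustrated in one dimension; here a simple moving average stands in for the smoothing cubic spline, and the trapezoid rule for the least-squares integration step (these substitutions are assumptions for illustration, not the paper's WCSLI algorithm):

```python
import numpy as np

def integrate_slopes(x, dz, smooth_win=5):
    """1-D sketch of slope-to-wavefront reconstruction: smooth the measured
    slopes, then integrate with the trapezoid rule (z fixed to 0 at x[0]).
    smooth_win must be odd."""
    dz = np.asarray(dz, float)
    pad = smooth_win // 2
    kernel = np.ones(smooth_win) / smooth_win
    padded = np.pad(dz, pad, mode='edge')            # edge-replicate the slopes
    sm = np.convolve(padded, kernel, mode='valid')   # moving-average smoothing
    steps = 0.5 * (sm[1:] + sm[:-1]) * np.diff(x)    # trapezoid increments
    return np.concatenate([[0.0], np.cumsum(steps)])
```

With noisy slope data, smoothing before integration keeps the noise from accumulating as a random walk in the reconstructed profile.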
Nonequilibrium flows with smooth particle applied mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kum, Oyeon
1995-07-01
Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expressions for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities, which require second derivatives in space when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel linking them to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for three weighting functions: the B-spline, Lucy, and cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Benard problems, all in the laminar regime, to corresponding highly accurate grid-based numerical solutions of the continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number, where grid-based methods fail.
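Lucy's weighting function, one of the kernels compared above, can be illustrated with a minimal 1-D smooth-particle density estimate; the 1-D normalization and the test configuration below are illustrative, not taken from the thesis.

```python
import numpy as np

def lucy_kernel(r, h):
    """Lucy weight w(r) with support |r| < h, normalized to integrate to 1 in 1-D."""
    q = np.abs(r) / h
    w = (5.0 / (4.0 * h)) * (1.0 + 3.0 * q) * (1.0 - q) ** 3
    return np.where(q < 1.0, w, 0.0)

def sph_density(x_eval, particles, mass, h):
    """Smooth-particle density estimate rho(x) = sum_j m_j w(x - x_j)."""
    return np.array([np.sum(mass * lucy_kernel(x - particles, h)) for x in x_eval])

# Uniformly spaced particles of equal mass -> nearly uniform density
particles = np.linspace(0.0, 10.0, 101)   # spacing 0.1
mass = 0.1                                # expected density ~ 1.0 away from the edges
rho = sph_density(np.array([5.0]), particles, mass, h=0.5)
```

The smooth kernel is what lets field gradients be evaluated by differentiating the weight function rather than the (noisy) particle data directly.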
Equation for the Nakanishi Weight Function Using the Inverse Stieltjes Transform
NASA Astrophysics Data System (ADS)
Karmanov, V. A.; Carbonell, J.; Frederico, T.
2018-05-01
The bound-state Bethe-Salpeter amplitude was expressed by Nakanishi in terms of a smooth weight function g. By using the generalized Stieltjes transform, we derive an integral equation for the Nakanishi function g for the bound-state case. It has the standard form g = \hat{V} g, where \hat{V} is a two-dimensional integral operator. The prescription for obtaining the kernel \hat{V} starting from the kernel K of the Bethe-Salpeter equation is given.
NASA Astrophysics Data System (ADS)
Tao, Feifei; Mba, Ogan; Liu, Li; Ngadi, Michael
2017-04-01
Polyunsaturated fatty acids (PUFAs) are important nutrients present in salmon. However, current methods for quantifying fatty acid (FA) contents in foods are generally based on the gas chromatography (GC) technique, which is time-consuming, laborious and destructive to the tested samples. Therefore, the capability of near-infrared (NIR) hyperspectral imaging to predict the PUFA contents of C20:2 n-6, C20:3 n-6, C20:5 n-3, C22:5 n-3 and C22:6 n-3 in salmon fillets in a rapid and non-destructive way was investigated in this work. Mean reflectance spectra were first extracted from the regions of interest (ROIs), and then the spectral pre-processing methods of 2nd derivative and Savitzky-Golay (SG) smoothing were applied to the original spectra. Based on the original and the pre-processed spectra, the PLSR technique was employed to develop quantitative models for predicting each PUFA content in salmon fillets. The results showed that for all the studied PUFAs, the quantitative models developed using the reflectance spectra pre-processed by "2nd derivative + SG smoothing" gave improved modeling results. Good prediction results were achieved, with RP and RMSEP of 0.91 and 0.75 mg/g dry weight, 0.86 and 1.44 mg/g dry weight, and 0.82 and 3.01 mg/g dry weight for C20:3 n-6, C22:5 n-3 and C20:5 n-3, respectively, after pre-processing by "2nd derivative + SG smoothing". The work demonstrated that NIR hyperspectral imaging could be a useful tool for rapid and non-destructive determination of PUFA contents in fish fillets.
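The "2nd derivative + SG smoothing" pre-processing can be sketched with a windowed polynomial fit, which is what a Savitzky-Golay derivative filter computes; the window length, synthetic spectrum, and noise level below are illustrative, not those used in the paper.

```python
import numpy as np

def savgol_second_derivative(y, window, delta):
    """Sliding cubic fit; return the fitted 2nd derivative at each window
    center (edge samples, where the window does not fit, are left at zero)."""
    half = window // 2
    t = np.arange(-half, half + 1) * delta
    d2 = np.zeros_like(y)
    for i in range(half, len(y) - half):
        c3, c2, c1, c0 = np.polyfit(t, y[i - half:i + half + 1], 3)
        d2[i] = 2.0 * c2    # y(t) = c3 t^3 + c2 t^2 + c1 t + c0  ->  y''(0) = 2 c2
    return d2

wavelengths = np.linspace(900.0, 1700.0, 401)          # nm, hypothetical NIR axis
band = np.exp(-((wavelengths - 1200.0) / 80.0) ** 2)   # synthetic absorption band
baseline = 0.2 + 1e-4 * wavelengths                    # sloped baseline offset
rng = np.random.default_rng(1)
noisy = band + baseline + rng.normal(0.0, 0.01, band.size)

delta = wavelengths[1] - wavelengths[0]
d2 = savgol_second_derivative(noisy, window=21, delta=delta)
```

Differentiating removes the linear baseline exactly, and the polynomial fit over the window suppresses the noise amplification that differentiation would otherwise cause; the band center appears as the most negative point of the second derivative.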
A new axial smoothing method based on elastic mapping
NASA Astrophysics Data System (ADS)
Yang, J.; Huang, S. C.; Lin, K. P.; Czernin, J.; Wolfenden, P.; Dahlbom, M.; Hoh, C. K.; Phelps, M. E.
1996-12-01
New positron emission tomography (PET) scanners have higher axial and in-plane spatial resolutions, but at the expense of reduced per-plane sensitivity, which prevents the higher resolution from being fully realized. Normally, Gaussian-weighted interplane axial smoothing is used to reduce noise. In this study, the authors developed a new algorithm that first elastically maps adjacent planes and then smooths the mapped images axially to reduce the image noise level. Compared to those obtained by the conventional axial-direction smoothing method, the images produced by the new method have improved signal-to-noise ratio. To quantify the signal-to-noise improvement, both simulated and real cardiac PET images were studied. Various Hanning reconstruction filters with cutoff frequency = 0.5, 0.7, 1.0 × Nyquist frequency, as well as a ramp filter, were tested on simulated images. Effective in-plane resolution was measured by the effective global Gaussian resolution (EGGR), and noise reduction was evaluated by the cross-correlation coefficient. Results showed that the new method was robust to various noise levels and yielded larger noise reduction or better image feature preservation (i.e., smaller EGGR) than the conventional method.
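The conventional baseline that the elastic-mapping method is compared against can be sketched as a normalized weighted average along the plane axis; the 3-tap weights and volume size below are illustrative.

```python
import numpy as np

def axial_gaussian_smooth(volume, weights=(0.25, 0.5, 0.25)):
    """Smooth along axis 0 (plane index) with a normalized 3-tap kernel,
    replicating the edge planes at the boundaries."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    out = np.empty(volume.shape, dtype=float)
    n = volume.shape[0]
    for z in range(n):
        zm, zp = max(z - 1, 0), min(z + 1, n - 1)
        out[z] = w[0] * volume[zm] + w[1] * volume[z] + w[2] * volume[zp]
    return out

rng = np.random.default_rng(2)
vol = rng.normal(0.0, 1.0, size=(8, 16, 16))   # synthetic noisy plane stack
sm = axial_gaussian_smooth(vol)
```

Averaging uncorrelated noise across planes reduces its variance, which is the noise/resolution trade the elastic mapping tries to improve on by aligning structures before averaging.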
A Bayesian inversion for slip distribution of 1 Apr 2007 Mw8.1 Solomon Islands Earthquake
NASA Astrophysics Data System (ADS)
Chen, T.; Luo, H.
2013-12-01
On 1 Apr 2007, the megathrust Mw 8.1 Solomon Islands earthquake occurred in the southwest Pacific along the New Britain subduction zone. 102 vertical displacement measurements over the southeastern end of the rupture zone, from two field surveys after this event, provide a unique constraint for slip distribution inversion. In conventional inversion methods (such as bounded-variable least squares), the smoothing parameter that determines the relative weight placed on fitting the data versus smoothing the slip distribution is often subjectively selected at the bend of the trade-off curve. Here a fully probabilistic inversion method [Fukuda, 2008] is applied to estimate the distributed slip and the smoothing parameter objectively. The joint posterior probability density function of the distributed slip and the smoothing parameter is formulated under a Bayesian framework and sampled with a Markov chain Monte Carlo method. We estimate the spatial distribution of dip slip associated with the 1 Apr 2007 Solomon Islands earthquake with this method. Early results show a shallower dip angle than a previous study and highly variable dip slip both along-strike and down-dip.
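The sampling machinery can be illustrated with a toy Metropolis chain on a 1-D Gaussian posterior. This is only a stand-in for the cited approach: the real method samples the full slip vector and the smoothing hyper-parameter jointly, whereas the toy below samples a single mean parameter with a flat prior.

```python
import numpy as np

# Toy Metropolis-Hastings sampler; posterior, step size, and chain length
# are illustrative, not the slip-inversion likelihood.
rng = np.random.default_rng(3)
data = rng.normal(2.0, 0.5, size=50)   # synthetic "observations"

def log_post(mu):
    # Gaussian likelihood with known sigma = 0.5 and a flat prior on mu
    return -0.5 * np.sum((data - mu) ** 2) / 0.5**2

samples = []
mu = 0.0                               # deliberately poor starting point
for _ in range(5000):
    prop = mu + rng.normal(0.0, 0.2)   # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                      # accept; otherwise keep current state
    samples.append(mu)

posterior_mean = np.mean(samples[1000:])   # discard burn-in
```

Because the hyper-parameter is sampled rather than fixed, the real method marginalizes over smoothing instead of picking one value off a trade-off curve.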
Norman, Matthew R.
2014-11-24
New Hermite Weighted Essentially Non-Oscillatory (HWENO) interpolants are developed and investigated within the Multi-Moment Finite-Volume (MMFV) formulation using the ADER-DT time discretization. Whereas traditional WENO methods interpolate pointwise, function-based WENO methods explicitly form a non-oscillatory, high-order polynomial over the cell in question. This study chooses a function-based approach and details how fast convergence to optimal weights for smooth flow is ensured. Methods of sixth-, eighth-, and tenth-order accuracy are developed and compared against traditional single-moment WENO methods of fifth-, seventh-, ninth-, and eleventh-order accuracy, which are more familiar from the literature. The new HWENO methods improve upon existing HWENO methods (1) by giving a better resolution of unreinforced contact discontinuities and (2) by only needing a single HWENO polynomial to update both the cell mean value and cell mean derivative. Test cases to validate and assess these methods include 1-D linear transport, the 1-D inviscid Burgers' equation, and the 1-D inviscid Euler equations. Smooth and non-smooth flows are used for evaluation. These HWENO methods performed better than comparable literature-standard WENO methods for all regimes of discontinuity and smoothness in all tests herein. They exhibit improved optimal accuracy due to the use of derivatives, and they collapse to solutions similar to typical WENO methods when limiting is required. The study concludes that the new HWENO methods are robust and effective when used in the ADER-DT MMFV framework. Finally, these results are intended to demonstrate capability rather than exhaust all possible implementations.
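The literature-standard comparison point, classic fifth-order WENO, shows the weighting mechanism the HWENO variants build on: smoothness indicators drive nonlinear weights toward the optimal linear weights wherever the data are smooth. The sketch below is the Jiang-Shu WENO5 reconstruction, not the paper's HWENO scheme.

```python
import numpy as np

def weno5_reconstruct(f, eps=1e-6):
    """Classic Jiang-Shu WENO5 reconstruction of the interface value
    f_{i+1/2} from five cell averages f = (f_{i-2}, ..., f_{i+2})."""
    f0, f1, f2, f3, f4 = f
    # Smoothness indicators of the three candidate stencils
    b0 = 13/12*(f0 - 2*f1 + f2)**2 + 1/4*(f0 - 4*f1 + 3*f2)**2
    b1 = 13/12*(f1 - 2*f2 + f3)**2 + 1/4*(f1 - f3)**2
    b2 = 13/12*(f2 - 2*f3 + f4)**2 + 1/4*(3*f2 - 4*f3 + f4)**2
    # Nonlinear weights recover the optimal (0.1, 0.6, 0.3) for smooth data
    d = np.array([0.1, 0.6, 0.3])
    alpha = d / (eps + np.array([b0, b1, b2]))**2
    w = alpha / alpha.sum()
    # Third-order candidate reconstructions at the right interface
    q = np.array([(2*f0 - 7*f1 + 11*f2) / 6,
                  (-f1 + 5*f2 + 2*f3) / 6,
                  (2*f2 + 5*f3 - f4) / 6])
    return w @ q, w

# Linear data: all stencils agree, weights equal the optimal weights,
# and the interface value at x = 2.5 is reproduced exactly.
val, w = weno5_reconstruct(np.array([0.0, 1.0, 2.0, 3.0, 4.0]))
```

Near a discontinuity the indicator of the offending stencil blows up, its weight collapses, and the reconstruction falls back to the smooth sub-stencils, which is the limiting behavior the abstract refers to.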
Novel calcium infusion regimen after parathyroidectomy for renal hyperparathyroidism
Tan, Jih Huei; Tan, Henry Chor Lip; Arulanantham, Sarojah A/P
2017-01-01
Abstract Aim Calcium infusion is used after parathyroid surgery for renal hyperparathyroidism to treat postoperative hypocalcaemia. We compared a new infusion regimen to one commonly used in Malaysia based on the 2003 K/DOQI guidelines. Methods Retrospective data on serum calcium and infusion rates were collected from 2011–2015. The relationship between peak calcium efflux rate (PER) and time was determined using a scatterplot and linear regression. A comparison between regimens was made based on treatment efficacy (hypocalcaemia duration, total infusion amount and time) and calcium excursions (outside target range, peak and trough calcium) using bar charts and an unpaired t‐test. Results Fifty‐one and 34 patients on the original and new regimens, respectively, were included. Mean PER was lower (2.16 vs 2.56 mmol/h; P = 0.03) and occurred earlier (17.6 vs 23.2 h; P = 0.13) with the new regimen. Both the scatterplot and the regression showed a large correlation between PER and time (R‐square 0.64, SE 1.53, P < 0.001). The new regimen had a shorter period of hypocalcaemia (28.9 vs 66.4 h, P = 0.04) and required less calcium infusion (67.7 vs 127.2 mmol, P = 0.02) over a shorter duration (57.3 vs 102.9 h, P = 0.001). Calcium excursions and peak and trough calcium were not significantly different between regimens. Early postoperative high excursions occurred when the infusion was started in spite of elevated peri‐operative calcium levels. Conclusion The new infusion regimen was superior to the original in that it required a shorter treatment period and resulted in less hypocalcaemia. We found that early aggressive calcium replacement is unnecessary and raises the risk of rebound hypercalcaemia. PMID:26952689
Assessing the role of pavement macrotexture in preventing crashes on highways.
Pulugurtha, Srinivas S; Kusam, Prasanna R; Patel, Kuvleshay J
2010-02-01
The objective of this article is to assess the role of pavement macrotexture in preventing crashes on highways in the State of North Carolina. Laser profilometer data obtained from the North Carolina Department of Transportation (NCDOT) for highways comprising four corridors were processed to calculate pavement macrotexture for 100-m (approximately 330-ft) sections according to American Society for Testing and Materials (ASTM) standards. Crash data collected over the same lengths of the corridors were integrated with the calculated pavement macrotexture for each section. Scatterplots were generated to assess the effect of pavement macrotexture on crashes and on the logarithm of crashes. Regression analyses were conducted with predictor variables such as million vehicle-miles of travel (a function of traffic volume and length), the number of interchanges, the number of at-grade intersections, the number of grade-separated interchanges, and the number of bridges, culverts, and overhead signs, along with pavement macrotexture, to study the statistical significance of the relationship between pavement macrotexture and crashes (both linear and log-linear) relative to the other predictor variables. The scatterplots and regression analyses indicate a more statistically significant relationship between pavement macrotexture and the logarithm of crashes than between pavement macrotexture and crashes. The coefficient for pavement macrotexture is, in general, negative, indicating that the number of crashes (or its logarithm) decreases as macrotexture increases. The relation between pavement macrotexture and the logarithm of crashes is generally stronger than between most other predictor variables and crashes or the logarithm of crashes. Based on the results obtained, it can be concluded that maintaining pavement macrotexture at or above a threshold of 1.524 mm (0.06 in.) would likely reduce crashes and provide safe transportation to road users on highways.
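A log-linear fit of the kind described can be sketched with ordinary least squares on synthetic data; the variable names, coefficients, and the sign of the macrotexture effect below are illustrative assumptions, not the study's estimates.

```python
import numpy as np

# Synthetic log-linear crash model: log(crashes) ~ exposure + macrotexture.
rng = np.random.default_rng(4)
n = 200
macrotexture = rng.uniform(0.5, 2.0, n)    # mm, hypothetical section values
mvmt = rng.uniform(1.0, 10.0, n)           # million vehicle-miles of travel
# Generate log-crashes with a negative macrotexture effect (assumed -0.8)
log_crashes = 0.5 + 0.3 * mvmt - 0.8 * macrotexture + rng.normal(0.0, 0.2, n)

# OLS via the normal equations (design matrix: intercept, exposure, texture)
X = np.column_stack([np.ones(n), mvmt, macrotexture])
beta, *_ = np.linalg.lstsq(X, log_crashes, rcond=None)
```

A negative fitted coefficient on macrotexture is the pattern the article reports: sections with coarser texture tend to log fewer crashes once exposure is controlled for.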
Koopman, Timco; Buikema, Henk J; Hollema, Harry; de Bock, Geertruida H; van der Vegt, Bert
2018-05-01
The Ki67 proliferation index is a prognostic and predictive marker in breast cancer. Manual scoring is prone to inter- and intra-observer variability. The aims of this study were to clinically validate digital image analysis (DIA) of Ki67 using virtual dual staining (VDS) on whole tissue sections and to assess inter-platform agreement between two independent DIA platforms. Serial whole tissue sections of 154 consecutive invasive breast carcinomas were stained for Ki67 and cytokeratin 8/18 with immunohistochemistry in a clinical setting. Ki67 proliferation index was determined using two independent DIA platforms, implementing VDS to identify tumor tissue. Manual Ki67 score was determined using a standardized manual counting protocol. Inter-observer agreement between manual and DIA scores and inter-platform agreement between both DIA platforms were determined and calculated using Spearman's correlation coefficients. Correlations and agreement were assessed with scatterplots and Bland-Altman plots. Spearman's correlation coefficients were 0.94 (p < 0.001) for inter-observer agreement between manual counting and platform A, 0.93 (p < 0.001) between manual counting and platform B, and 0.96 (p < 0.001) for inter-platform agreement. Scatterplots and Bland-Altman plots revealed no skewness within specific data ranges. In the few cases with ≥ 10% difference between manual counting and DIA, results by both platforms were similar. DIA using VDS is an accurate method to determine the Ki67 proliferation index in breast cancer, as an alternative to manual scoring of whole sections in clinical practice. Inter-platform agreement between two different DIA platforms was excellent, suggesting vendor-independent clinical implementability.
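The agreement statistics used above can be sketched directly: Spearman's rho as the Pearson correlation of ranks, plus the Bland-Altman bias and limits of agreement. The scores below are synthetic stand-ins (no tie handling), not the study's data.

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman correlation as Pearson correlation of ranks (assumes no ties)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(5)
manual = rng.uniform(0.0, 60.0, 154)        # hypothetical manual Ki67 scores (%)
dia = manual + rng.normal(0.0, 3.0, 154)    # platform score = manual + noise
rho = spearman_rho(manual, dia)

# Bland-Altman: mean difference (bias) and 95% limits of agreement
diff = dia - manual
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
```

A high rho with a near-zero bias and narrow limits is the pattern that supports swapping one scoring platform for another in clinical use.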
Towards practical control design using neural computation
NASA Technical Reports Server (NTRS)
Troudet, Terry; Garg, Sanjay; Mattern, Duane; Merrill, Walter
1991-01-01
The objective is to develop neural-network-based control design techniques which address the issue of performance/control-effort tradeoff. Additionally, the control design needs to address the important issue of achieving adequate performance in the presence of actuator nonlinearities such as position and rate limits. These issues are discussed using the example of aircraft flight control. Given a set of pilot input commands, a feedforward net is trained to control the vehicle within the constraints imposed by the actuators. This is achieved by minimizing an objective function which is the sum of the tracking errors, control input rates, and control input deflections. A tradeoff between tracking performance and control smoothness is obtained by adaptively varying the weights of the objective function. The neurocontroller performance is evaluated in the presence of actuator dynamics using a simulation of the vehicle. Appropriate selection of the different weights in the objective function resulted in good tracking of the pilot commands and smooth neurocontrol. An extension of the neurocontroller design approach is proposed to enhance its practicality.
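The weighted objective described above can be written out directly; the weights and signals below are illustrative, not those of the flight-control study.

```python
import numpy as np

def control_cost(y, y_cmd, u, dt, w_track=1.0, w_rate=0.1, w_defl=0.01):
    """Weighted sum of tracking errors, control input rates, and deflections."""
    track = np.sum((y - y_cmd) ** 2)           # tracking error term
    rate = np.sum((np.diff(u) / dt) ** 2)      # control rate (smoothness) term
    defl = np.sum(u ** 2)                      # control deflection term
    return w_track * track + w_rate * rate + w_defl * defl

t = np.linspace(0.0, 1.0, 11)
y_cmd = np.ones_like(t)                        # step command
y = 1.0 - np.exp(-5.0 * t)                     # a plausible tracking response
dt = t[1] - t[0]
u_smooth = np.full_like(t, 0.5)                # constant control input
u_jerky = 0.5 + 0.2 * (-1.0) ** np.arange(t.size)  # chattering control input
```

With the same tracking response, the chattering input is penalized through the rate term, which is exactly the mechanism that trades smoothness against performance when the weights are varied.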
NASA Astrophysics Data System (ADS)
Yan, Ping; Kalscheuer, Thomas; Hedin, Peter; Garcia Juanatey, Maria A.
2017-04-01
We present a novel 2-D magnetotelluric (MT) inversion scheme, in which the local weights of the regularizing smoothness constraints are based on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. Successful application of the inversion to MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using the envelope attribute of the COSC reflection seismic profile helped to reduce the uncertainty of the interpretation of the main décollement by demonstrating that the associated alum shales may be much thinner than suggested by a previous inversion model. Thus, the new model supports the proposed location of a future borehole COSC-2 which is hoped to penetrate the main décollement and the underlying Precambrian basement.
NASA Astrophysics Data System (ADS)
Cantor-Rivera, Diego; Goubran, Maged; Kraguljac, Alan; Bartha, Robert; Peters, Terry
2010-03-01
The main objective of this study was to assess the effect of smoothing filter selection on Voxel-Based Morphometry studies of structural T1-weighted magnetic resonance images. Gaussian filters of 4 mm, 8 mm or 10 mm Full Width at Half Maximum (FWHM) are commonly used, based on the assumption that the filter size should be at least twice the voxel size to obtain robust statistical results. The hypothesis of the presented work was that the selection of the smoothing filter influences the detectability of small lesions in the brain. Mesial temporal sclerosis associated with epilepsy was used as the case to demonstrate this effect. Twenty T1-weighted MRIs from the BrainWeb database were selected. A small phantom lesion was placed in the amygdala, hippocampus, or parahippocampal gyrus of ten of the images. Subsequently the images were registered to the ICBM/MNI space. After grey matter segmentation, a T-test was carried out to compare each image containing a phantom lesion with the rest of the images in the set. For each lesion the T-test was repeated with different Gaussian filter sizes. Voxel-Based Morphometry detected some of the phantom lesions. Of the three parameters considered (location, size, and intensity), location was shown to be the dominant factor in the detection of the lesions.
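For reference, FWHM-specified Gaussian kernels relate to the Gaussian standard deviation by FWHM = 2·sqrt(2·ln 2)·sigma, which is how an "8 mm filter" is translated into the sigma actually used for smoothing:

```python
import numpy as np

def fwhm_to_sigma(fwhm_mm):
    """Convert a Gaussian kernel's full width at half maximum to its sigma."""
    return fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

sigma4 = fwhm_to_sigma(4.0)    # 4 mm FWHM kernel
sigma8 = fwhm_to_sigma(8.0)    # 8 mm FWHM kernel -> sigma ~ 3.4 mm
```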
Kowler, Eileen; Aitkin, Cordelia D; Ross, Nicholas M; Santos, Elio M; Zhao, Min
2014-05-16
The ability of smooth pursuit eye movements to anticipate the future motion of targets has been known since the pioneering work of Dodge, Travis, and Fox (1930) and Westheimer (1954). This article reviews aspects of anticipatory smooth eye movements, focusing on the roles of the different internal or external cues that initiate anticipatory pursuit. We present new results showing that the anticipatory smooth eye movements evoked by different cues differ substantially, even when the cues are equivalent in the information conveyed about the direction of future target motion. Cues that convey an easily interpretable visualization of the motion path produce faster anticipatory smooth eye movements than the other cues tested, including symbols associated arbitrarily with the path, and the same target motion tested repeatedly over a block of trials. The differences among the cues may be understood within a common predictive framework in which the cues differ in the level of subjective certainty they provide about the future path. Pursuit may be driven by a combined signal in which immediate sensory motion, and the predictions about future motion generated by sets of cues, are weighted according to their respective levels of certainty. Anticipatory smooth eye movements, an overt indicator of expectations and predictions, may not be operating in isolation, but may be part of a global process in which the brain analyzes available cues, formulates predictions, and uses them to control perceptual, motor, and cognitive processes. © 2014 ARVO.
Rahaman, Sayed Modinur; Dey, Kuntal; Das, Partha; Roy, Soumitra; Chakraborti, Tapati; Chakraborti, Sajal
2014-08-01
We have identified a novel endogenous low-molecular-weight (15.6 kDa) protein inhibitor of Na(+)/K(+)-ATPase in the cytosolic fraction of bovine pulmonary artery smooth muscle cells. The inhibitor showed different affinities toward the α₂β₁ and α₁β₁ isozymes of Na(+)/K(+)-ATPase, with α₂ more sensitive than α₁. The inhibitor interacted reversibly with the E1 site of the enzyme and blocked formation of the phosphorylated intermediate. A circular dichroism study suggests that the inhibitor causes an alteration in the conformation of the enzyme.
NASA Astrophysics Data System (ADS)
Islamiyati, A.; Fatmawati; Chamidah, N.
2018-03-01
In longitudinal data with two responses, correlation arises both between repeated measurements on the same subject and between the two responses. This induces auto-correlation in the errors, which can be accounted for with a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. Penalized splines involve knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with the covariance matrix gives a smaller error value than the model without the covariance matrix.
An introduction to real-time graphical techniques for analyzing multivariate data
NASA Astrophysics Data System (ADS)
Friedman, Jerome H.; McDonald, John Alan; Stuetzle, Werner
1987-08-01
Orion I is a graphics system used to study applications of computer graphics - especially interactive motion graphics - in statistics. Orion I is the newest of a family of "Prim" systems, whose most striking common feature is the use of real-time motion graphics to display three dimensional scatterplots. Orion I differs from earlier Prim systems through the use of modern and relatively inexpensive raster graphics and microprocessor technology. It also delivers more computing power to its user; Orion I can perform more sophisticated real-time computations than were possible on previous such systems. We demonstrate some of Orion I's capabilities in our film: "Exploring data with Orion I".
Conen, D; Wietlisbach, V; Bovet, P; Shamlaye, C; Riesen, W; Paccaud, F; Burnier, M
2004-01-01
Background The prevalence of hyperuricemia has rarely been investigated in developing countries. The purpose of the present study was to investigate the prevalence of hyperuricemia and the association between uric acid levels and the various cardiovascular risk factors in a developing country with high average blood pressures (the Seychelles, Indian Ocean, population mainly of African origin). Methods This cross-sectional health examination survey was based on a population random sample from the Seychelles. It included 1011 subjects aged 25 to 64 years. Blood pressure (BP), body mass index (BMI), waist circumference, waist-to-hip ratio, total and HDL cholesterol, serum triglycerides and serum uric acid were measured. Data were analyzed using scatterplot smoothing techniques and gender-specific linear regression models. Results The prevalence of a serum uric acid level >420 μmol/L in men was 35.2% and the prevalence of a serum uric acid level >360 μmol/L was 8.7% in women. Serum uric acid was strongly related to serum triglycerides in men as well as in women (r = 0.73 in men and r = 0.59 in women, p < 0.001). Uric acid levels were also significantly associated but to a lesser degree with age, BMI, blood pressure, alcohol and the use of antihypertensive therapy. In a regression model, triglycerides, age, BMI, antihypertensive therapy and alcohol consumption accounted for about 50% (R2) of the serum uric acid variations in men as well as in women. Conclusions This study shows that the prevalence of hyperuricemia can be high in a developing country such as the Seychelles. Besides alcohol consumption and the use of antihypertensive therapy, mainly diuretics, serum uric acid is markedly associated with parameters of the metabolic syndrome, in particular serum triglycerides. 
Considering the growing incidence of obesity and metabolic syndrome worldwide and the potential link between hyperuricemia and cardiovascular complications, more emphasis should be put on the evolving prevalence of hyperuricemia in developing countries. PMID:15043756
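The scatterplot-smoothing techniques mentioned above can be illustrated with a tiny locally weighted (tricube) linear smoother evaluated at one point; the bandwidth and synthetic data are illustrative, and production analyses would use a full LOWESS implementation.

```python
import numpy as np

def lowess_point(x0, x, y, bandwidth):
    """Locally weighted linear fit at x0 with tricube weights."""
    d = np.abs(x - x0) / bandwidth
    w = np.where(d < 1.0, (1.0 - d**3) ** 3, 0.0)      # tricube kernel
    X = np.column_stack([np.ones_like(x), x - x0])      # local linear design
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)    # weighted least squares
    return beta[0]                                      # fitted value at x0

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0.0, 10.0, 300))
y = 0.5 * x + rng.normal(0.0, 0.5, x.size)              # linear trend + noise
smoothed_mid = lowess_point(5.0, x, y, bandwidth=2.0)   # true value is 2.5
```

Sweeping x0 over a grid traces the smooth curve that such studies overlay on a scatterplot to reveal the trend without assuming a global functional form.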
Geospatial Association between Low Birth Weight and Arsenic in Groundwater in New Hampshire, USA
Shi, Xun; Ayotte, Joseph D.; Onda, Akikazu; Miller, Stephanie; Rees, Judy; Gilbert-Diamond, Diane; Onega, Tracy; Gui, Jiang; Karagas, Margaret; Moeschler, John
2015-01-01
Background There is increasing evidence of the role of arsenic in the etiology of adverse human reproductive outcomes. Since drinking water can be a major source of arsenic to pregnant women, the effect of arsenic exposure through drinking water on human birth may be revealed by a geospatial association between arsenic concentration in groundwater and birth problems, particularly in a region where private wells substantially account for water supply, like New Hampshire, US. Methods We calculated town-level rates of preterm birth and term low birth weight (term LBW) for New Hampshire, using data for 1997-2009 and stratified by maternal age. We smoothed the rates using a locally-weighted averaging method to increase the statistical stability. The town-level groundwater arsenic values are from three GIS data layers generated by the US Geological Survey: probability of local groundwater arsenic concentration > 1 μg/L, probability > 5 μg/L, and probability > 10 μg/L. We calculated Pearson's correlation coefficients (r) between the reproductive outcomes (preterm birth and term LBW) and the arsenic values, at both state and county levels. Results For preterm birth, younger mothers (maternal age < 20) have a statewide r = 0.70 between the rates smoothed with a threshold = 2,000 births and the town mean arsenic level based on the data of probability > 10 μg/L; For older mothers, r = 0.19 when the smoothing threshold = 3,500; A majority of county level r values are positive based on the arsenic data of probability > 10 μg/L. For term LBW, younger mothers (maternal age < 25) have a statewide r = 0.44 between the rates smoothed with a threshold = 3,500 and town minimum arsenic level based on the data of probability > 1 μg/L; For older mothers, r = 0.14 when the rates are smoothed with a threshold = 1,000 births and also adjusted by town median household income in 1999, and the arsenic values are the town minimum based on probability > 10 μg/L. 
At the county level, for younger mothers positive r values prevail, but for older mothers it is a mix. For both birth problems, the several most populous counties - with 60-80% of the state's population and clustering at the southwest corner of the state – are largely consistent in having a positive r across different smoothing thresholds. Conclusion We found evident spatial associations between the two adverse human reproductive outcomes and groundwater arsenic in New Hampshire, US. However, the degree of associations and their sensitivity to different representations of arsenic level are variable. Generally, preterm birth has a stronger spatial association with groundwater arsenic than term LBW, suggesting an inconsistency in the impact of arsenic on the two reproductive outcomes. For both outcomes, younger maternal age has stronger spatial associations with groundwater arsenic. PMID:25326895
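The threshold-based rate smoothing used above can be sketched as pooling each area with its nearest neighbors until the pooled births reach the threshold; the pooling rule, geometry, and rates below are illustrative assumptions, not the exact locally-weighted-averaging algorithm of the study.

```python
import numpy as np

def smooth_rates(coords, events, births, threshold):
    """Re-estimate each area's rate from the nearest areas pooled until
    the pooled births reach the support threshold."""
    rates = np.empty(len(coords))
    for i, c in enumerate(coords):
        order = np.argsort(np.linalg.norm(coords - c, axis=1))  # nearest first
        e = b = 0.0
        for j in order:
            e += events[j]
            b += births[j]
            if b >= threshold:
                break
        rates[i] = e / b
    return rates

rng = np.random.default_rng(8)
coords = rng.uniform(0.0, 1.0, size=(50, 2))                 # synthetic town centers
births = rng.integers(50, 500, size=50).astype(float)
events = rng.binomial(births.astype(int), 0.08).astype(float)  # true rate 8%
raw = events / births
sm = smooth_rates(coords, events, births, threshold=2000.0)
```

Pooling to a fixed birth count stabilizes small-area rates at the cost of spatial resolution, which is why the study reports correlations across several smoothing thresholds.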
Fassinou Hotegni, V Nicodème; Lommen, Willemien J M; Agbossou, Euloge K; Struik, Paul C
2014-01-01
Cultural practices can affect the quality of pineapple fruits and its variation. The objectives of this study were to investigate (a) effects of weight class and type of planting material on fruit quality, heterogeneity in quality and proportion and yield of fruits meeting European export standards, and (b) the improvement in quality, proportion and yield of fruits meeting export standards when flowering was induced at optimum time. Experiments were conducted in Benin with cvs Sugarloaf (a Perola type) and Smooth Cayenne. In cv. Sugarloaf, experimental factors were weight class of planting material (light, mixed, heavy) and time of flowering induction (farmers', optimum) (Experiment 1). In cv. Smooth Cayenne an additional experimental factor was the type of planting material (hapas, ground suckers, a mixture of the two) (Experiment 2). Fruits from heavy planting material had higher infructescence and fruit weights, longer infructescences, shorter crowns, and smaller crown: infructescence length than fruits from light planting material. The type of planting material in Experiment 2 did not significantly affect fruit quality except crown length: fruits from hapas had shorter crowns than those from ground suckers. Crops from heavy planting material had a higher proportion and yield of fruits meeting export standards than those from other weight classes in Experiment 1 only; also the type of planting material in Experiment 2 did not affect these variates. Heterogeneity in fruit quality was usually not reduced by selecting only light or heavy planting material instead of mixing weights; incidentally the coefficient of variation was significantly reduced in fruits from heavy slips only. Heterogeneity was also not reduced by not mixing hapas and ground suckers. Flowering induction at optimum time increased the proportion and yield of fruits meeting export standards in fruits from light and mixed slip weights and in those from the mixture of heavy hapas plus ground suckers.
Fassinou Hotegni, V. Nicodème; Lommen, Willemien J. M.; Agbossou, Euloge K.; Struik, Paul C.
2015-01-01
Cultural practices can affect the quality of pineapple fruits and its variation. The objectives of this study were to investigate (a) effects of weight class and type of planting material on fruit quality, heterogeneity in quality and proportion and yield of fruits meeting European export standards, and (b) the improvement in quality, proportion and yield of fruits meeting export standards when flowering was induced at optimum time. Experiments were conducted in Benin with cvs Sugarloaf (a Perola type) and Smooth Cayenne. In cv. Sugarloaf, experimental factors were weight class of planting material (light, mixed, heavy) and time of flowering induction (farmers', optimum) (Experiment 1). In cv. Smooth Cayenne an additional experimental factor was the type of planting material (hapas, ground suckers, a mixture of the two) (Experiment 2). Fruits from heavy planting material had higher infructescence and fruit weights, longer infructescences, shorter crowns, and smaller crown: infructescence length than fruits from light planting material. The type of planting material in Experiment 2 did not significantly affect fruit quality except crown length: fruits from hapas had shorter crowns than those from ground suckers. Crops from heavy planting material had a higher proportion and yield of fruits meeting export standards than those from other weight classes in Experiment 1 only; also the type of planting material in Experiment 2 did not affect these variates. Heterogeneity in fruit quality was usually not reduced by selecting only light or heavy planting material instead of mixing weights; incidentally the coefficient of variation was significantly reduced in fruits from heavy slips only. Heterogeneity was also not reduced by not mixing hapas and ground suckers. 
Flowering induction at optimum time increased the proportion and yield of fruits meeting export standards in fruits from light and mixed slip weights and in those from the mixture of heavy hapas plus ground suckers. PMID:25653659
Taylor, David L.; Kutil, Nicholas J.; Malek, Anna J.; Collie, Jeremy S.
2014-01-01
This study examined total mercury (Hg) concentrations in cartilaginous fishes from Southern New England coastal waters, including smooth dogfish (Mustelus canis), spiny dogfish (Squalus acanthias), little skate (Leucoraja erinacea), and winter skate (L. ocellata). Total Hg concentrations in dogfish and skates were positively related to their respective body size and age, indicating Hg bioaccumulation in muscle tissue. There were also significant inter-species differences in Hg levels (mean ± 1 SD, mg Hg/kg dry weight, ppm): smooth dogfish (3.3 ± 2.1 ppm; n = 54) > spiny dogfish (1.1 ± 0.7 ppm; n = 124) > little skate (0.4 ± 0.3 ppm; n = 173) ~ winter skate (0.3 ± 0.2 ppm; n = 148). The increased Hg content of smooth dogfish was attributed to its upper trophic level status, determined by stable nitrogen (δ15N) isotope analysis (mean δ15N = 13.2 ± 0.7‰), and the consumption of high Hg prey, most notably cancer crabs (0.10 ppm). Spiny dogfish had depleted δ15N signatures (11.6 ± 0.8‰), yet demonstrated a moderate level of contamination by foraging on pelagic prey with a range of Hg concentrations, e.g., in order of dietary importance, butterfish (Hg = 0.06 ppm), longfin squid (0.17 ppm), and scup (0.11 ppm). Skates were low trophic level consumers (δ15N = 11.9-12.0‰) and fed mainly on amphipods, small decapods, and polychaetes with low Hg concentrations (0.05-0.09 ppm). Intra-specific Hg concentrations were directly related to δ15N and carbon (δ13C) isotope signatures, suggesting that Hg biomagnifies across successive trophic levels and that foraging in the benthic trophic pathway increases Hg exposure. From a human health perspective, 87% of smooth dogfish, 32% of spiny dogfish, and < 2% of skates had Hg concentrations exceeding the US Environmental Protection Agency threshold level (0.3 ppm wet weight). These results indicate that frequent consumption of smooth dogfish and spiny dogfish may adversely affect human health, whereas skates present minimal risk. PMID:25081850
Li, Hui
2009-03-01
To construct standardized growth data and curves for weight, length/height and head circumference for Chinese children under 7 years of age. Random cluster sampling was used. The fourth national growth survey of children under 7 years in the nine cities (Beijing, Harbin, Xi'an, Shanghai, Nanjing, Wuhan, Fuzhou, Guangzhou and Kunming) of China was performed in 2005, and from this survey data of 69 760 urban healthy boys and girls were used to set up the database for weight-for-age, height-for-age (length was measured for children under 3 years) and head circumference-for-age. Anthropometric data were collected using rigorous methods and standardized procedures across study sites. The LMS method, based on the Box-Cox normal transformation and a cubic splines smoothing technique, was chosen for fitting the raw data according to the study design and data features, and standardized values of any percentile or standard deviation were obtained from the fitted L, M and S parameters. Length-for-age and height-for-age standards were constructed by fitting the same model, but the final curves reflected the 0.7 cm average difference between these two measurements. A set of systematic diagnostic tools was used to detect possible biases in estimated percentile or standard deviation curves, including the chi2 test, which was used to evaluate the goodness of fit. The 3rd, 10th, 25th, 50th, 75th, 90th and 97th smoothed percentiles and -3, -2, -1, 0, +1, +2, +3 SD values and curves of weight-for-age, length/height-for-age and head circumference-for-age for boys and girls aged 0-7 years were generated. The Chinese child growth charts were slightly higher than the WHO child growth standards. The newly established growth charts represented the growth level of healthy and well-nourished Chinese children. The sample size was very large and national, the data were of high quality and the smoothing method is internationally accepted. 
The new Chinese growth charts are recommended as the child growth standards for use in China in the 21st century.
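The LMS machinery underlying these charts is compact enough to sketch. Below is a minimal Python illustration of converting a measurement to a z-score and back; the L, M and S values are hypothetical placeholders for a single age bin, not parameters from the Chinese survey.

```python
import math

def lms_zscore(x, L, M, S):
    """z-score of measurement x given Box-Cox power L, median M,
    and coefficient of variation S (the LMS method)."""
    if abs(L) < 1e-12:  # L = 0 is the log-normal limit
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

def lms_centile_value(z, L, M, S):
    """Inverse transform: the measurement lying on a given z-score
    curve (e.g. z = +2 for the +2 SD curve)."""
    if abs(L) < 1e-12:
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

# Hypothetical L, M, S for one age bin (illustrative only):
L_, M_, S_ = 0.3, 10.2, 0.11
z = lms_zscore(11.5, L_, M_, S_)        # measurement above the median
x_back = lms_centile_value(z, L_, M_, S_)
```

In practice L, M and S are themselves smooth functions of age, so a full chart evaluates these formulas with age-interpolated parameters.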
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinmann, A; Stafford, R; Yung, J
Purpose: MRI guided radiotherapy (MRIgRT) is an emerging technology which will eventually require a proficient quality auditing system. Because MR and CT acquire images according to different principles, there is a need for a multi-imaging-modality, end-to-end QA phantom for MRIgRT. The purpose of this study is to identify lung, soft tissue, and tumor equivalent substitutes that share similar human-like CT and MR properties (i.e. Hounsfield units and relaxation times). Methods: Materials of interest, such as common CT QA phantom materials and other proprietary gels/silicones from Polytek, SmoothOn, and CompositeOne, were first scanned on a GE 1.5T Signa HDxT MR. Materials that could be seen on both T1-weighted and T2-weighted images were then scanned on a GE Lightspeed RT16 CT simulator and a GE Discovery 750HD CT scanner, and their HU values were measured. The materials with HU values matching lung (−500 to −700 HU), muscle (+40 HU) and soft tissue (+100 to +300 HU) were further scanned on a GE 1.5T Signa HDx to measure their T1 and T2 relaxation times from varying TI and TE parameters. Results: Materials that could be visualized on T1-weighted and T2-weighted images from a 1.5T MR unit and had appropriate average CT numbers (−650, −685, 46, 169, and 168 HU) were: compressed cork saturated with water, Polytek Platsil™ Gel-00 combined with mini styrofoam balls, radiotherapy bolus material, SmoothOn Dragon-Skin™ and SmoothOn Ecoflex™, respectively. Conclusion: Post-processing analysis is currently being performed to accurately map T1 and T2 values for each material tested. From previous MR visualization and CT examinations it is expected that Dragon-Skin™, Ecoflex™ and bolus will have values consistent with tissue and tumor substitutes. We also expect compressed cork saturated with water, and the Polytek™-styrofoam combination, to have approximate T1 and T2 values suitable for lung-equivalent materials.
Achouri, Neila; Smichi, Nabil; Gargouri, Youssef; Miled, Nabil; Fendri, Ahmed
2017-09-01
In order to identify fish enzymes displaying novel biochemical properties, we chose the common smooth-hound (Mustelus mustelus) as the starting biological material from which to characterize the digestive lipid-hydrolyzing enzyme. A smooth-hound digestive lipase (SmDL) was purified from a delipidated pancreatic powder. The SmDL molecular weight was around 50 kDa. Specific activities of 2200 and 500 U/mg were measured at pH 9 and 40°C using tributyrin and olive oil emulsion as substrates, respectively. Unlike known mammalian pancreatic lipases, SmDL was stable at 50°C and retained 90% of its initial activity after 15 min of incubation at 60°C. Interestingly, bile salts act as an activator of SmDL. Notably, SmDL was also salt-tolerant, remaining active in the presence of high salt concentrations reaching 0.8 M. Fatty acid (FA) analysis of oil from the smooth-hound viscera showed a dominance of unsaturated fatty acids (UFAs). Interestingly, the major n-3 fatty acids were DHA and EPA, with contents of 18.07% and 6.14%, respectively. An in vitro digestibility model showed that the smooth-hound oil was efficiently hydrolyzed by pancreatic lipases, suggesting efficient assimilation of fish oils by consumers. Copyright © 2017 Elsevier B.V. All rights reserved.
Noise reduction for low-dose helical CT by 3D penalized weighted least-squares sinogram smoothing
NASA Astrophysics Data System (ADS)
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-03-01
Helical computed tomography (HCT) has several advantages over conventional step-and-shoot CT for imaging a relatively large object, especially for dynamic studies. However, HCT may increase X-ray exposure to the patient significantly. This work aims to reduce the radiation by lowering the X-ray tube current (mA) and filtering the noise in the low-mA (or dose) sinogram. Based on the noise properties of the HCT sinogram, a three-dimensional (3D) penalized weighted least-squares (PWLS) objective function was constructed and an optimal sinogram was estimated by minimizing the objective function. To account for the differences in signal correlation along the different directions of the HCT sinogram, an anisotropic Markov random field (MRF) Gibbs function was designed as the penalty. The minimization of the objective function was performed by an iterative Gauss-Seidel updating strategy. The effectiveness of the 3D-PWLS sinogram smoothing for low-dose HCT was demonstrated by a 3D Shepp-Logan head phantom study. Comparison studies with our previously developed KL-domain PWLS sinogram smoothing algorithm indicate that the KL+2D-PWLS algorithm shows better performance on the in-plane noise-resolution trade-off, while the 3D-PWLS algorithm shows better performance on the z-axis noise-resolution trade-off. Receiver operating characteristic (ROC) studies using a channelized Hotelling observer (CHO) show that the 3D-PWLS and KL+2D-PWLS algorithms have similar detectability performance in a low-contrast environment.
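The core PWLS idea can be illustrated with a deliberately simplified sketch: an isotropic quadratic penalty on a small 2D grid, minimized by Gauss-Seidel sweeps. The paper's actual penalty is an anisotropic 3D MRF on the helical sinogram, so this is a stand-in for the structure of the update, not the published algorithm.

```python
import numpy as np

def pwls_gauss_seidel(y, w, beta=1.0, n_iter=50):
    """Minimize sum_i w_i (x_i - y_i)^2 + beta * sum_<i,j> (x_i - x_j)^2
    over a 2D grid by Gauss-Seidel sweeps. y is the noisy data, w the
    statistical weights (e.g. inverse variances), beta the penalty weight.
    Uses an isotropic 4-neighbour quadratic penalty for simplicity."""
    x = y.astype(float).copy()
    rows, cols = x.shape
    for _ in range(n_iter):
        for i in range(rows):
            for j in range(cols):
                nbr_sum, n_nbr = 0.0, 0
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        nbr_sum += x[ii, jj]
                        n_nbr += 1
                # Closed-form minimizer of the quadratic in x[i, j]:
                x[i, j] = (w[i, j] * y[i, j] + beta * nbr_sum) / (w[i, j] + beta * n_nbr)
    return x
```

With beta = 0 the data are returned unchanged; larger beta pulls each sample toward its neighbours, which is the noise-resolution trade-off the abstract compares across the 2D and 3D variants.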
The Effect of an Isogrid on Cryogenic Propellant Behavior and Thermal Stratification
NASA Technical Reports Server (NTRS)
Oliveira, Justin; Kirk, Daniel R.; Chintalapati, Sunil; Schallhorn, Paul A.; Piquero, Jorge L.; Campbell, Mike; Chase, Sukhdeep
2007-01-01
All models for thermal stratification available in the presentation are derived using smooth, flat plate laminar and turbulent boundary layer models. This study examines the effect of isogrid (roughness elements) on the surface of internal tank walls to mimic the effects of weight-saving isogrid, which is located on the inside of many rocket propellant tanks. Computational Fluid Dynamics (CFD) is used to study the momentum and thermal boundary layer thickness for free convection flows over a wall with generic roughness elements. This presentation makes no mention of actual isogrid sizes or of any specific tank geometry. The magnitude of thermal stratification is compared for smooth and isogrid-lined walls.
Global Earthquake Activity Rate models based on version 2 of the Global Strain Rate Map
NASA Astrophysics Data System (ADS)
Bird, P.; Kreemer, C.; Kagan, Y. Y.; Jackson, D. D.
2013-12-01
Global Earthquake Activity Rate (GEAR) models have usually been based on either relative tectonic motion (fault slip rates and/or distributed strain rates), or on smoothing of seismic catalogs. However, a hybrid approach appears to perform better than either parent, at least in some retrospective tests. First, we construct a Tectonic ('T') forecast of shallow (≤ 70 km) seismicity based on global plate-boundary strain rates from version 2 of the Global Strain Rate Map. Our approach is the SHIFT (Seismic Hazard Inferred From Tectonics) method described by Bird et al. [2010, SRL], in which the character of the strain rate tensor (thrusting and/or strike-slip and/or normal) is used to select the most comparable type of plate boundary for calibration of the coupled seismogenic lithosphere thickness and corner magnitude. One difference is that activity of offshore plate boundaries is spatially smoothed using empirical half-widths [Bird & Kagan, 2004, BSSA] before conversion to seismicity. Another is that the velocity-dependence of coupling in subduction and continental-convergent boundaries [Bird et al., 2009, BSSA] is incorporated. Another forecast component is the smoothed-seismicity ('S') forecast model of [Kagan & Jackson, 1994, JGR; Kagan & Jackson, 2010, GJI], which was based on optimized smoothing of the shallow part of the GCMT catalog, years 1977-2004. Both forecasts were prepared for threshold magnitude 5.767. Then, we create hybrid forecasts by one of 3 methods: (a) taking the greater of S or T; (b) simple weighted-average of S and T; or (c) log of the forecast rate is a weighted average of the logs of S and T. In methods (b) and (c) there is one free parameter, which is the fractional contribution from S. All hybrid forecasts are normalized to the same global rate. 
Pseudo-prospective tests for 2005-2012 (using versions of S and T calibrated on years 1977-2004) show that many hybrid models outperform both parents (S and T), and that the optimal weight on S is in the neighborhood of 5/8. This is true whether forecast performance is scored by Kagan's [2009, GJI] I1 information score, or by the S-test of Zechar & Jordan [2010, BSSA]. These hybrids also score well (0.97) in the ASS-test of Zechar & Jordan [2008, GJI] with respect to prior relative intensity.
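The three hybridization rules can be sketched directly. The renormalization to a common global rate and the default weight of 5/8 follow the description above; the arrays in the usage test are illustrative, not actual forecast grids.

```python
import numpy as np

def hybrid_forecast(S, T, method="log", w=5/8):
    """Combine a smoothed-seismicity rate map S and a tectonic rate map T:
    (a) 'max'  - cellwise maximum of S and T,
    (b) 'avg'  - weighted average w*S + (1-w)*T,
    (c) 'log'  - weighted average of log rates, exponentiated.
    The hybrid is renormalized so its global (summed) rate matches S."""
    S = np.asarray(S, dtype=float)
    T = np.asarray(T, dtype=float)
    if method == "max":
        H = np.maximum(S, T)
    elif method == "avg":
        H = w * S + (1.0 - w) * T
    elif method == "log":
        H = np.exp(w * np.log(S) + (1.0 - w) * np.log(T))
    else:
        raise ValueError(f"unknown method: {method}")
    return H * (S.sum() / H.sum())  # normalize to the same global rate
```

Note that the 'log' rule requires strictly positive rates in every cell, which is why tectonic forecasts typically include a small background rate everywhere.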
Gu, Zi; Rolfe, Barbara E; Thomas, Anita C; Campbell, Julie H; Lu, G Q Max; Xu, Zhi P
2011-10-01
This paper reports a clear elucidation of the pathway for the cellular delivery of layered double hydroxide (LDH) nanoparticles intercalated with anti-restenotic low molecular weight heparin (LMWH). Cellular uptake of LMWH-LDH conjugates into cultured rat vascular smooth muscle cells (SMCs) measured via flow cytometry was more than ten times greater than that of LMWH alone. Confocal and transmission electron microscopy showed LMWH-LDH conjugates taken up by endosomes, then released into the cytoplasm. We propose that LMWH-LDH is taken up via a unique 'modified endocytic' pathway, whereby the conjugate is internalized by SMCs in early endosomes, sorted in late endosomes, and quickly released from late endosomes/lysosomes, avoiding degradation. Treatment of cells with LMWH-LDH conjugates suppressed the activation of ERK1/2 in response to foetal calf serum (FCS) for up to 24h, unlike unconjugated LMWH which had no significant effect at 24h. Improved understanding of the intracellular pathway of LMWH-LDH nanohybrids in SMC will allow for refinement of design for LDH nanomedicine applications. Copyright © 2011 Elsevier Ltd. All rights reserved.
Darrouzet-Nardi, Amelia F; Masters, William A
2017-01-01
A large literature links early-life environmental shocks to later outcomes. This paper uses seasonal variation across the Democratic Republic of the Congo to test for nutrition smoothing, defined here as attaining similar height, weight and mortality outcomes despite different agroclimatic conditions at birth. We find that gaps between siblings and neighbors born at different times of year are larger in more remote rural areas, farther from the equator where there are greater seasonal differences in rainfall and temperature. For those born at adverse times in places with pronounced seasonality, the gains associated with above-median proximity to nearby towns are similar to rising one quintile in the national distribution of household wealth for mortality, and two quintiles for attained height. Smoothing of outcomes could involve a variety of mechanisms to be addressed in future work, including access to food markets, health services, public assistance and temporary migration to achieve more uniform dietary intake, or less exposure and improved recovery from seasonal diseases.
Weighted integration of short-term memory and sensory signals in the oculomotor system.
Deravet, Nicolas; Blohm, Gunnar; de Xivry, Jean-Jacques Orban; Lefèvre, Philippe
2018-05-01
Oculomotor behaviors integrate sensory and prior information to overcome sensory-motor delays and noise. After much debate about this process, reliability-based integration has recently been proposed, and several models of smooth pursuit now include recurrent Bayesian integration or Kalman filtering. However, there is a lack of behavioral evidence in humans supporting these theoretical predictions. Here, we independently manipulated the reliability of visual and prior information in a smooth pursuit task. Our results show that both smooth pursuit eye velocity and catch-up saccade amplitude were modulated by the reliability of visual and prior information. We interpret these findings as the continuous reliability-based integration of a short-term memory of target motion with visual information, which supports modeling work. Furthermore, we suggest that the saccadic and pursuit systems share this short-term memory. We propose that this short-term memory of target motion is quickly built and continuously updated, and constitutes a general building block present in all sensorimotor systems.
A New Approach to X-ray Analysis of SNRs
NASA Astrophysics Data System (ADS)
Frank, Kari A.; Burrows, David; Dwarkadas, Vikram
2016-06-01
We present preliminary results of applying a novel analysis method, Smoothed Particle Inference (SPI), to XMM-Newton observations of the SNRs RCW 103 and Tycho. SPI is a Bayesian modeling process that fits a population of gas blobs ("smoothed particles") such that their superposed emission reproduces the observed spatial and spectral distribution of photons. Emission-weighted distributions of plasma properties, such as abundances and temperatures, are then extracted from the properties of the individual blobs. This technique has important advantages over analysis techniques which implicitly assume that remnants are two-dimensional objects in which each line of sight encompasses a single plasma. By contrast, SPI allows superposition of as many blobs of plasma as are needed to match the spectrum observed in each direction, without the need to bin the data spatially. The analyses of RCW 103 and Tycho are part of a pilot study for the larger SPIES (Smoothed Particle Inference Exploration of SNRs) project, in which SPI will be applied to a sample of 12 bright SNRs.
Skelemins: cytoskeletal proteins located at the periphery of M-discs in mammalian striated muscle
1987-01-01
The cytoskeletons of mammalian striated and smooth muscles contain a pair of high molecular weight (HMW) polypeptides of 220,000 and 200,000 mol wt, each with isoelectric points of about 5 (Price, M. G., 1984, Am. J. Physiol., 246:H566-572), in a molar ratio of 1:1:20 with desmin. The HMW polypeptides of mammalian muscle have been named "skelemins," because they are in the insoluble cytoskeletons of striated muscle and are at the M-discs. I have used two-dimensional peptide mapping to show that the two skelemin polypeptides are closely related to each other. Polyclonal antibodies directed against skelemins were used to demonstrate that they are immunologically distinct from talin, fodrin, myosin heavy chain, synemin, microtubule-associated proteins, and numerous other proteins of similar molecular weight, and are not oligomers of other muscle proteins. Skelemins appear not to be proteolytic products of larger proteins, as shown by immunoautoradiography on 3% polyacrylamide gels. Skelemins are predominantly cytoskeletal, with little extractable from myofibrils by various salt solutions. Human, bovine, and rat cardiac, skeletal, and smooth muscles, but not chicken muscles, contain proteins cross-reacting with anti-skelemin antibodies. Skelemins are localized by immunofluorescence at the M-lines of cardiac and skeletal muscle, in 0.4-micron-wide smooth striations. Cross sections reveal that skelemins are located at the periphery of the M-discs. Skelemins are seen in threads linking isolated myofibrils at the M-discs. There is sufficient skelemin in striated muscle to wrap around the M-disc about three times, if the skelemin molecules are laid end to end, assuming a length-to-weight ratio similar to M-line protein and other elongated proteins. The results indicate that skelemins form linked rings around the periphery of the myofibrillar M-discs. 
These cytoskeletal rings may play a role in the maintenance of the structural integrity of striated muscle throughout cycles of contraction and relaxation. PMID:3553209
Zong, Xin-Nan; Li, Hui
2013-01-01
Introduction Growth references for Chinese children should be updated due to the positive secular growth trends and progress in smoothing techniques. Human growth differs among ethnic groups, so comparison of the China references with the WHO standards helps to understand such differences. Methods The China references, including weight, length/height, head circumference, weight-for-length/height and body mass index (BMI) for ages 0–18 years, were constructed based on 69,760 urban infants and preschool children under 7 years and 24,542 urban school children aged 6–20 years derived from two cross-sectional national surveys. Cole's LMS method was employed for smoothing the growth curves. Results The merged data sets resulted in a smooth transition at age 6–7 years and continuity of curves from 0 to 18 years. Varying differences were found in the empirical standard deviation (SD) curves for each indicator at nearly all ages between China and WHO. The most noticeable differences occurred between genders, in final height, and in the boundary centile curves. Chinese boys are strikingly heavier than the WHO reference at ages 6–10 years. Height exceeds the WHO reference for boys below 15 years and girls below 13 years, but is significantly lower for boys over 15 and girls over 13. BMI is generally higher than that of the WHO for boys at ages 6–16 years but appreciably lower for girls at 3–18 years. Conclusions The differences between China and WHO are mainly caused by the different ethnic backgrounds of the reference populations. For practitioners, the choice of standards/references depends on the population to be assessed and the purpose of the study. The new China references could be applied to facilitate the standardized assessment of growth and nutrition for Chinese children and adolescents in clinical pediatrics and public health. PMID:23527219
SU-FF-T-668: A Simple Algorithm for Range Modulation Wheel Design in Proton Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nie, X; Nazaryan, Vahagn; Gueye, Paul
2009-06-01
Purpose: To develop a simple algorithm for designing the range modulation wheel needed to generate a very smooth spread-out Bragg peak (SOBP) for proton therapy. Method and Materials: A simple algorithm was developed to generate the weight factors of the pristine Bragg peaks that compose a smooth SOBP in proton therapy. We used a modified analytical Bragg peak function, based on Monte Carlo simulations with the Geant4 toolkit, as the pristine Bragg peak input to our algorithm. A MATLAB quadratic programming routine was used to optimize the cost function in our algorithm. Results: We found that the existing analytical Bragg peak function could not be used directly as the pristine Bragg peak depth-dose input in the weight-factor optimization, because the model did not account for the scattering introduced by the range shifts used to modify the proton beam energies. We performed Geant4 simulations for a proton energy of 63.4 MeV with a 1.08 cm SOBP for the set of pristine Bragg peaks composing this SOBP, and modified the existing analytical Bragg peak functions for their peak heights, ranges R0, and Gaussian energy spreads sigma_E. We found that 19 pristine Bragg peaks are enough to achieve an SOBP flatness of 1.5%, the best flatness reported in the literature. Conclusion: This work develops a simple algorithm to generate the weight factors used to design a range modulation wheel that produces a smooth SOBP in proton radiation therapy. We found that a moderate number of pristine Bragg peaks is enough to generate an SOBP with flatness less than 2%. This approach can potentially generate a database, stored with the treatment plan, to produce a clinically acceptable SOBP using our simple algorithm.
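The weight-factor optimization can be illustrated with a toy sketch. Here the pristine peaks are idealized Gaussians and the non-negative least-squares fit uses multiplicative updates rather than the MATLAB quadratic programming routine used in the study; both the peak model and the solver are stand-ins for illustration only.

```python
import numpy as np

def pristine_peak(depth, r0, sigma=0.3):
    """Toy pristine Bragg peak: a Gaussian centered at range r0.
    (Illustrative; the study uses Geant4-informed analytical peak shapes.)"""
    return np.exp(-0.5 * ((depth - r0) / sigma) ** 2)

def sobp_weights(depths, ranges, target, n_iter=2000):
    """Find non-negative weights w so that sum_k w_k * BP_k(depth) ~ target.
    Solved with multiplicative updates, which keep w >= 0 and monotonically
    reduce the squared residual for non-negative A and target."""
    A = np.stack([pristine_peak(depths, r) for r in ranges], axis=1)
    w = np.ones(len(ranges))
    for _ in range(n_iter):
        w *= (A.T @ target) / np.maximum(A.T @ (A @ w), 1e-12)
    return w
```

The flatness of the resulting plateau is then governed by how many pristine peaks span the modulation width, which is the quantity the abstract optimizes.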
Functional data analysis of sleeping energy expenditure.
Lee, Jong Soo; Zakeri, Issa F; Butte, Nancy F
2017-01-01
Adequate sleep is crucial during childhood for metabolic health, and physical and cognitive development. Inadequate sleep can disrupt metabolic homeostasis and alter sleeping energy expenditure (SEE). Functional data analysis methods were applied to SEE data to elucidate the population structure of SEE and to discriminate SEE between obese and non-obese children. Minute-by-minute SEE in 109 children, ages 5-18, was measured in room respiration calorimeters. A smoothing spline method was applied to the calorimetric data to extract the underlying smooth function for each subject. Functional principal component analysis was used to capture the important modes of variation of the functional data and to identify differences in SEE patterns. Combinations of functional principal component analysis and classifier algorithms were used to classify SEE. Smoothing effectively removed instrumentation noise inherent in the room calorimeter data, providing more accurate data for analysis of the dynamics of SEE. SEE exhibited declining but subtly undulating patterns throughout the night. Mean SEE was markedly higher in obese than non-obese children, as expected due to their greater body mass. SEE was higher among the obese than non-obese children (p<0.01); however, the weight-adjusted mean SEE was not statistically different (p>0.1, after post hoc testing). Functional principal component scores for the first two components explained 77.8% of the variance in SEE and also differed between groups (p = 0.037). Logistic regression, support vector machine and random forest classification methods were able to distinguish weight-adjusted SEE between obese and non-obese participants with good classification rates (62-64%). Our results implicate other factors, yet to be uncovered, that affect the weight-adjusted SEE of obese and non-obese children. 
Functional data analysis revealed differences in the structure of SEE between obese and non-obese children that may contribute to disruption of metabolic homeostasis.
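The smoothing-plus-FPCA pipeline can be approximated with ordinary PCA applied to curves sampled on a common time grid. This is a discretized stand-in for the basis-expansion FPCA used in the study, with synthetic curves in place of calorimeter data.

```python
import numpy as np

def functional_pca(curves, n_components=2):
    """Discretized functional PCA: treat each (pre-smoothed) curve as a
    vector on a common grid, center, and take the SVD. Returns the mean
    curve, the principal modes of variation, per-subject scores, and the
    fraction of variance explained by each retained component."""
    X = np.asarray(curves, dtype=float)   # shape (n_subjects, n_timepoints)
    mean = X.mean(axis=0)
    Xc = X - mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]        # modes of variation (rows)
    scores = Xc @ components.T            # FPC scores per subject
    var_explained = (s ** 2 / (s ** 2).sum())[:n_components]
    return mean, components, scores, var_explained
```

The FPC scores are what the abstract feeds to the classifiers (logistic regression, SVM, random forest), so each subject's overnight trajectory is reduced to a handful of numbers.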
A Principled Way of Assessing Visualization Literacy.
Boy, Jeremy; Rensink, Ronald A; Bertini, Enrico; Fekete, Jean-Daniel
2014-12-01
We describe a method for assessing the visualization literacy (VL) of a user. Assessing how well people understand visualizations has great value for research (e.g., to avoid confounds), for design (e.g., to best determine the capabilities of an audience), for teaching (e.g., to assess the level of new students), and for recruiting (e.g., to assess the level of interviewees). This paper proposes a method for assessing VL based on Item Response Theory. It describes the design and evaluation of two VL tests for line graphs, and presents the extension of the method to bar charts and scatterplots. Finally, it discusses the reimplementation of these tests for fast, effective, and scalable web-based use.
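Item Response Theory models of the kind such a test builds on can be sketched with the standard two-parameter logistic (2PL) item characteristic curve; the parameter values in the example are illustrative, not fitted values from the paper.

```python
import math

def irt_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability that a respondent
    with ability theta answers correctly an item with difficulty b and
    discrimination a. At theta == b the probability is exactly 0.5."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative item: moderate discrimination, slightly hard item.
p = irt_2pl(theta=0.2, a=1.5, b=0.5)
```

Fitting a and b per item across many respondents is what lets the test rank items by difficulty and score a new user's VL from a short item set.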
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jing; Guan, Huaiqun; Solberg, Timothy
2011-07-15
Purpose: A statistical projection restoration algorithm based on the penalized weighted least-squares (PWLS) criterion can substantially improve the image quality of low-dose CBCT images. The performance of PWLS is largely dependent on the choice of the penalty parameter. Previously, the penalty parameter was chosen empirically by trial and error. In this work, the authors developed an inverse technique to calculate the penalty parameter in PWLS for noise suppression of low-dose CBCT in image guided radiotherapy (IGRT). Methods: In IGRT, a daily CBCT is acquired for the same patient during a treatment course. In this work, the authors acquired the CBCT with a high-mAs protocol for the first session and then a lower-mAs protocol for the subsequent sessions. The high-mAs projections served as the goal (ideal) toward which the low-mAs projections were to be smoothed by minimizing the PWLS objective function. The penalty parameter was determined through an inverse calculation of the derivative of the objective function incorporating both the high- and low-mAs projections. The parameter obtained can then be used in PWLS to smooth the noise in low-dose projections. CBCT projections for a CatPhan 600 and an anthropomorphic head phantom, as well as for a brain patient, were used to evaluate the performance of the proposed technique. Results: The penalty parameter in PWLS was obtained for each CBCT projection using the proposed strategy. The noise in the low-dose CBCT images reconstructed from the smoothed projections was greatly suppressed. Image quality in PWLS-processed low-dose CBCT was comparable to its corresponding high-dose CBCT. Conclusions: A technique was proposed to estimate the penalty parameter for the PWLS algorithm. It provides an objective and efficient way to obtain the penalty parameter for image restoration algorithms that require predefined smoothing parameters.
West Antarctic Balance Fluxes: Impact of Smoothing, Algorithm and Topography.
NASA Astrophysics Data System (ADS)
Le Brocq, A.; Payne, A. J.; Siegert, M. J.; Bamber, J. L.
2004-12-01
Grid-based calculations of balance flux and velocity have been widely used to understand the large-scale dynamics of ice masses and as indicators of their state of balance. This research investigates a number of issues relating to their calculation for the West Antarctic Ice Sheet (see below for further details): 1) different topography smoothing techniques; 2) different grid based flow-apportioning algorithms; 3) the source of the flow direction, whether from smoothed topography, or smoothed gravitational driving stress; 4) different flux routing techniques and 5) the impact of different topographic datasets. The different algorithms described below lead to significant differences in both ice stream margins and values of fluxes within them. This encourages caution in the use of grid-based balance flux/velocity distributions and values, especially when considering the state of balance of individual ice streams. 1) Most previous calculations have used the same numerical scheme (Budd and Warner, 1996) applied to a smoothed topography in order to incorporate the longitudinal stresses that smooth ice flow. There are two options to consider when smoothing the topography, the size of the averaging filter and the shape of the averaging function. However, this is not a physically-based approach to incorporating smoothed ice flow and also introduces significant flow artefacts when using a variable weighting function. 2) Different algorithms to apportion flow are investigated; using 4 or 8 neighbours, and apportioning flow to all down-slope cells or only 2 (based on derived flow direction). 3) A theoretically more acceptable approach of incorporating smoothed ice flow is to use the smoothed gravitational driving stress in x and y components to derive a flow direction. The flux can then be apportioned using the flow direction approach used above. 
4) The original scheme (Budd and Warner, 1996) uses an elevation sort technique to calculate the balance flux contribution from all cells to each individual cell. However, elevation sort is only successful when ice cannot flow uphill. Other possible techniques include using a recursive call for each neighbour or using a sparse matrix solution. 5) Two digital elevation models are used as input data, which have significant differences in coastal and mountainous areas and therefore lead to different calculations. Of particular interest is the difference in the Rutford Ice Stream/Carlson Inlet and Kamb Ice Stream (Ice Stream C) fluxes.
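A minimal version of the elevation-sort balance-flux scheme (point 4 above) might look like the following. The 4-neighbour, slope-proportional apportioning is one of the variants discussed, the grid is a toy example, and uphill flow is assumed impossible, which is exactly the condition under which elevation sorting works.

```python
import numpy as np

def balance_flux(elev, accum):
    """Grid balance flux by elevation sort: visit cells from highest to
    lowest, and pass each cell's flux to its lower 4-neighbours in
    proportion to the elevation drop (simplified Budd & Warner scheme)."""
    flux = accum.astype(float).copy()
    rows, cols = elev.shape
    order = np.argsort(elev, axis=None)[::-1]  # highest cell first
    for idx in order:
        i, j = divmod(idx, cols)
        drops = []
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < rows and 0 <= jj < cols and elev[ii, jj] < elev[i, j]:
                drops.append((ii, jj, elev[i, j] - elev[ii, jj]))
        total = sum(d for _, _, d in drops)
        for ii, jj, d in drops:
            flux[ii, jj] += flux[i, j] * d / total
    return flux
```

Because each cell is finalized before any lower cell is visited, a single pass suffices; recursive or sparse-matrix solutions are needed only when the no-uphill-flow assumption breaks down, as the text notes.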
Objective analysis of pseudostress over the Indian Ocean using a direct-minimization approach
NASA Technical Reports Server (NTRS)
Legler, David M.; Navon, I. M.; O'Brien, James J.
1989-01-01
A technique not previously used in objective analysis of meteorological data is used here to produce monthly average surface pseudostress data over the Indian Ocean. An initial guess field is derived and a cost functional is constructed with five terms: approximation to the initial guess, approximation to climatology, a smoothness parameter, and two kinematic terms. The functional is minimized using a conjugate-gradient technique, and the weight for the climatology term controls the overall balance of influence between the climatology and the initial guess. Results from various weight combinations are presented for January and July 1984. Quantitative and qualitative comparisons to the subjective analysis are made to find which weight combination provides the best results. The weight on the approximation to climatology is found to balance the influence of the original field and climatology.
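A stripped-down version of this direct-minimization idea can be sketched with just three of the five terms (fit to the initial guess, fit to climatology, and smoothness) and plain gradient descent in place of conjugate gradients; the weights, grid, and smoothness operator are illustrative choices, not those of the study.

```python
import numpy as np

def laplacian(u):
    """Discrete 2D Laplacian with replicated edges (smoothness operator)."""
    up = np.pad(u, 1, mode="edge")
    return up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4 * u

def analyze(first_guess, climatology, w_clim=0.5, w_smooth=0.1,
            lr=0.05, n_iter=500):
    """Minimize a simplified three-term functional
        J(u) = ||u - g||^2 + w_clim ||u - c||^2 + w_smooth ||Lap(u)||^2
    by gradient descent (the study uses conjugate gradients and adds two
    kinematic terms). w_clim plays the role of the climatology weight
    that balances the initial guess against climatology."""
    g = np.asarray(first_guess, dtype=float)
    c = np.asarray(climatology, dtype=float)
    u = g.copy()
    for _ in range(n_iter):
        grad = (2.0 * (u - g) + 2.0 * w_clim * (u - c)
                + 2.0 * w_smooth * laplacian(laplacian(u)))
        u = u - lr * grad
    return u
```

With the smoothness term switched off and w_clim = 1, the analysis reduces to the midpoint of the initial guess and climatology, which makes the balancing role of the climatology weight explicit.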
Giorio, Chiara; Moyroud, Edwige; Glover, Beverley J; Skelton, Paul C; Kalberer, Markus
2015-10-06
Plant cuticle, which is the outermost layer covering the aerial parts of all plants including petals and leaves, can present a wide range of patterns that, combined with cell shape, can generate unique physical, mechanical, or optical properties. For example, arrays of regularly spaced nanoridges have been found on the dark (anthocyanin-rich) portion at the base of the petals of Hibiscus trionum. Those ridges act as a diffraction grating, producing an iridescent effect. Because the surface of the distal white region of the petals is smooth and noniridescent, a selective chemical characterization of the surface of the petals on different portions (i.e., ridged vs smooth) is needed to understand whether distinct cuticular patterns correlate with distinct chemical compositions of the cuticle. In the present study, a rapid screening method has been developed for the direct surface analysis of Hibiscus trionum petals using liquid extraction surface analysis (LESA) coupled with high-resolution mass spectrometry. The optimized method was used to characterize a wide range of plant metabolites and cuticle monomers on the upper (adaxial) surface of the petals on both the white/smooth and anthocyanic/ridged regions, and on the lower (abaxial) surface, which is entirely smooth. The main components detected on the surface of the petals are low-molecular-weight organic acids, sugars, and flavonoids. The ridged portion on the upper surface of the petal is enriched in long-chain fatty acids, which are constituents of the wax fraction of the cuticle. These compounds were not detected on the white/smooth region of the upper petal surface or on the smooth lower surface.
Some ultrastructural characteristics of the renal artery and abdominal aorta in the rat.
Osborne-Pellegrin, M J
1978-01-01
The rat renal artery and abdominal aorta have been studied by light and electron microscopy. In rats of 200 g body weight, the extracellular space in the aortic media ranges from 50% to 60%, and that of the distal renal artery from 15% to 25%. The surface-to-volume ratio of aortic smooth muscle cells is 2.7 μm²/μm³, compared with 1.6 μm²/μm³ in the distal renal artery. Dense bodies are rare in aortic smooth muscle cells but are abundant in those of the distal renal artery. Other ultrastructural details of the smooth muscle cells are similar in the two types of artery. Cell-to-cell contacts consist of simple apposition of plasma membranes, and their number is proportional to the total length of the cell-membrane profile. Mitochondria represent 7-8% of the cell volume in both arteries. The proximal renal artery shows structural characteristics intermediate between those of the aorta and the distal renal artery. In all renal arteries examined, bands of longitudinal smooth muscle are present in the adventitia, principally at branch points. In older rats, regions of discontinuity of the internal elastic lamina have been observed. PMID:640965
Fast global image smoothing based on weighted least squares.
Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N
2014-12-01
This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables a linear-time tridiagonal matrix algorithm to be applied iteratively to d three-point Laplacian matrices. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of local filtering approaches. Our method also achieves results of quality comparable to the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. In addition, given the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ-norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
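The 1D subsystem at the core of this kind of separable scheme can be sketched as follows (an illustrative re-implementation of the idea, not the authors' code; the guidance-weight form and parameter values are assumptions):

```python
import numpy as np

def wls_smooth_1d(g, lam=5.0, sigma=0.1):
    """Edge-preserving 1D smoothing by weighted least squares:
    minimise sum_i (u_i - g_i)^2 + lam * sum_i w_i (u_{i+1} - u_i)^2,
    with guidance weights w_i = exp(-|g_{i+1} - g_i| / sigma) that
    weaken the smoothness prior across strong edges. The normal
    equations are tridiagonal, so a single O(n) Thomas-algorithm
    sweep solves them exactly."""
    n = len(g)
    lw = lam * np.exp(-np.abs(np.diff(g)) / sigma)   # n-1 edge weights
    a = np.concatenate(([0.0], -lw))                 # sub-diagonal
    cc = np.concatenate((-lw, [0.0]))                # super-diagonal
    bdiag = np.ones(n)
    bdiag[:-1] += lw
    bdiag[1:] += lw
    d = np.asarray(g, dtype=float).copy()
    for i in range(1, n):                            # forward elimination
        mlt = a[i] / bdiag[i - 1]
        bdiag[i] -= mlt * cc[i - 1]
        d[i] -= mlt * d[i - 1]
    u = np.empty(n)                                  # back substitution
    u[-1] = d[-1] / bdiag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - cc[i] * u[i + 1]) / bdiag[i]
    return u

g_const = np.full(16, 3.0)
u_const = wls_smooth_1d(g_const)          # a constant signal is unchanged

rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0.0, 3.0, 200))
noisy = clean + rng.normal(0.0, 0.2, 200)
u = wls_smooth_1d(noisy, lam=5.0, sigma=1.0)   # near-uniform weights here
```

The full fast global smoother alternates such 1D sweeps along each of the d spatial dimensions; this fragment only shows one pass.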
Cid, Jaime A; von Davier, Alina A
2015-05-01
Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
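For illustration, a minimal sketch of Epanechnikov-kernel continuization of a discrete score distribution (the score scale and probabilities below are invented, and the mean/variance-preserving rescaling used in operational kernel equating is omitted) might look like:

```python
import numpy as np

def epanechnikov_continuize(scores, probs, h, grid):
    """Continuize a discrete score distribution with the Epanechnikov
    kernel K(u) = 0.75 (1 - u^2) on |u| <= 1. A sketch of the idea
    only: operational kernel equating also rescales the kernel so the
    continuized distribution keeps the discrete mean and variance."""
    u = (grid[:, None] - scores[None, :]) / h
    K = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    return (K * probs[None, :]).sum(axis=1) / h

scores = np.arange(0, 11, dtype=float)   # an 11-point score scale (toy)
probs = np.full(11, 1.0 / 11)            # toy uniform score probabilities
grid = np.linspace(-2.0, 12.0, 1401)
f = epanechnikov_continuize(scores, probs, h=1.0, grid=grid)
```

Because the Epanechnikov kernel has compact support, it places no density far beyond the observed score range, which is one motivation for considering it against the Gaussian kernel near the boundaries.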
Hutcheon, Jennifer A; Platt, Robert W; Abrams, Barbara; Himes, Katherine P; Simhan, Hyagriv N; Bodnar, Lisa M
2013-05-01
To establish the unbiased relation between maternal weight gain in pregnancy and perinatal health, a classification for maternal weight gain is needed that is uncorrelated with gestational age. The goal of this study was to create a weight-gain-for-gestational-age percentile and z score chart to describe the mean, SD, and selected percentiles of maternal weight gain throughout pregnancy in a contemporary cohort of US women. The study population was drawn from normal-weight women with uncomplicated, singleton pregnancies who delivered at the Magee-Womens Hospital in Pittsburgh, PA, 1998-2008. Analyses were based on a randomly selected subset of 648 women for whom serial prenatal weight measurements were available through medical chart record abstraction (6727 weight measurements). The pattern of maternal weight gain throughout gestation was estimated by using a random-effects regression model. The estimates were used to create a chart with the smoothed means, percentiles, and SDs of gestational weight gain for each week of pregnancy. This chart allows researchers to express total weight gain as an age-standardized z score, which can be used in epidemiologic analyses to study the association between pregnancy weight gain and adverse or physiologic pregnancy outcomes independent of gestational age.
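The z-score step can be illustrated with hypothetical chart values (the numbers below are invented for illustration and are not the study's estimates):

```python
# Hypothetical chart values, invented for illustration only (NOT the
# study's estimates): smoothed mean and SD of cumulative gestational
# weight gain (kg) at 30 weeks for normal-weight women.
mean_30wk, sd_30wk = 9.5, 2.8
observed_gain = 12.3          # a woman's cumulative gain at 30 weeks, kg

# Gestational-age-standardized z score: comparable across women who
# deliver at different gestational ages, because each gain is
# referenced to the chart at the same week.
z = (observed_gain - mean_30wk) / sd_30wk
```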
Automated haematology analysis to diagnose malaria
2010-01-01
For more than a decade, flow cytometry-based automated haematology analysers have been studied for malaria diagnosis. Although current haematology analysers are not specifically designed to detect malaria-related abnormalities, most studies have found sensitivities that comply with WHO malaria-diagnostic guidelines, i.e. ≥ 95% in samples with > 100 parasites/μl. Establishing a correct and early malaria diagnosis is a prerequisite for adequate treatment and for minimizing adverse outcomes. Expert light microscopy remains the 'gold standard' for malaria diagnosis in most clinical settings. However, it requires an explicit request from clinicians and has variable accuracy. Malaria diagnosis with flow cytometry-based haematology analysers could become an important adjuvant diagnostic tool in the routine laboratory work-up of febrile patients in or returning from malaria-endemic regions. Haematology analysers so far studied for malaria diagnosis are the Cell-Dyn®, Coulter® GEN·S and LH 750, and the Sysmex XE-2100® analysers. For Cell-Dyn analysers, abnormal depolarization events mainly in the lobularity/granularity and other scatter-plots, and various reticulocyte abnormalities, have shown overall sensitivities and specificities of 49% to 97% and 61% to 100%, respectively. For the Coulter analysers, a 'malaria factor' using the monocyte and lymphocyte size standard deviations obtained by impedance detection has shown overall sensitivities and specificities of 82% to 98% and 72% to 94%, respectively. For the XE-2100, abnormal patterns in the DIFF, WBC/BASO, and RET-EXT scatter-plots, and pseudoeosinophilia and other abnormal haematological variables, have been described, and multivariate diagnostic models have been designed with overall sensitivities and specificities of 86% to 97% and 81% to 98%, respectively. The accuracy for malaria diagnosis may vary according to species, parasite load, immunity and the clinical context where the method is applied.
Future developments in new haematology analysers such as considerably simplified, robust and inexpensive devices for malaria detection fitted with an automatically generated alert could improve the detection capacity of these instruments and potentially expand their clinical utility in malaria diagnosis. PMID:21118557
Analysis of polarization radar returns from ice clouds
NASA Astrophysics Data System (ADS)
Battaglia, A.; Sturniolo, O.; Prodi, F.
Using a modified T-matrix code, polarimetric single-scattering radar parameters (Zh,v, LDRh,v, ρhv, ZDR and δhv) at 94 GHz have been computed for populations of ice crystals modeled as axisymmetric prolate and oblate spheroids, for a Γ-size distribution with different α parameters (α = 0, 1, 2) and characteristic dimension Lm varying from 0.1 to 1.8 mm. Some of the results for different radar elevation angles and different orientation distributions at fixed water content are shown. A deeper analysis has been carried out for the pure extensive radar polarimetric variables; all of them depend strongly on particle shape (characterised by the aspect ratio), the canting angle and the radar elevation angle. Quantities like ZDR or δhv at side incidence, or LDRh and ρhv at vertical incidence, can be used to investigate the preferred orientation of the particles and, in some cases, their habits. We analyze scatterplots of pairs of pure extensive variables. The scatterplots with the most evident clustering by habit are those in the (ZDR [χ=0°], δhv [χ=0°]), (ZDR [χ=0°], LDRh [χ=90°]) and (ZDR [χ=0°], ρhv [χ=90°]) planes. Among these, the most appealing is the one involving the ZDR and ρhv variables. To avoid the need for simultaneous measurements with a side-looking and a vertically pointing radar, we believe that measuring these two extensive variables with a radar at an elevation angle of around 45° can be an effective way to identify different habits. In particular, this general idea can be useful for future space-borne polarimetric radars involved in studies of high ice clouds. These results may also be useful in the coming challenge of developing probabilistic and expert methods for identifying hydrometeor types with W-band radars.
Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O
2018-01-01
Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and, hence, plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from an MGLMM provide a good approximation and visual representation of these latent association analyses, using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent the computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by an MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases.
Thus if computable, scatterplots of the conditionally independent empirical Bayes predictors from a MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
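The shrinkage-and-attenuation behaviour at stake can be illustrated with a deliberately simplified Gaussian random-intercept simulation (not an MGLMM fit; a closed-form shrinkage factor is assumed, and all parameter values are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simplified stand-in for the MGLMM setting: two Gaussian outcomes
# with correlated random intercepts, so the empirical Bayes
# predictors have the closed shrinkage form tau^2 / (tau^2 + sigma^2/m).
n_subj, n_rep = 200, 5
tau, sigma, rho = 1.0, 1.0, 0.8          # RE sd, residual sd, latent corr
cov = np.array([[tau ** 2, rho * tau ** 2],
                [rho * tau ** 2, tau ** 2]])
b = rng.multivariate_normal([0.0, 0.0], cov, size=n_subj)   # latent REs
y1 = b[:, [0]] + rng.normal(0.0, sigma, (n_subj, n_rep))
y2 = b[:, [1]] + rng.normal(0.0, sigma, (n_subj, n_rep))

# Empirical Bayes (shrinkage) predictors of the random intercepts.
shrink = tau ** 2 / (tau ** 2 + sigma ** 2 / n_rep)
eb1 = shrink * (y1.mean(axis=1) - y1.mean())
eb2 = shrink * (y2.mean(axis=1) - y2.mean())

# Second-stage association analysis: correlate the EB predictors.
# Shrinkage rescales both predictors equally, so the correlation is
# driven by residual noise in the subject means and sits below the
# latent rho (near rho * tau^2 / (tau^2 + sigma^2/n_rep) here).
r = np.corrcoef(eb1, eb2)[0, 1]
```

In this toy version the second-stage correlation systematically underestimates the latent correlation, a simple illustration of why second-stage analyses of observable predictors only approximate the latent association.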
Preterm Versus Term Children: Analysis of Sedation/Anesthesia Adverse Events and Longitudinal Risk.
Havidich, Jeana E; Beach, Michael; Dierdorf, Stephen F; Onega, Tracy; Suresh, Gautham; Cravero, Joseph P
2016-03-01
Preterm and former preterm children frequently require sedation/anesthesia for diagnostic and therapeutic procedures. Our objective was to determine the age at which children who are born <37 weeks gestational age are no longer at increased risk for sedation/anesthesia adverse events. Our secondary objective was to describe the nature and incidence of adverse events. This is a prospective observational study of children receiving sedation/anesthesia for diagnostic and/or therapeutic procedures outside of the operating room by the Pediatric Sedation Research Consortium. A total of 57,227 patients 0 to 22 years of age were eligible for this study. All adverse events and descriptive terms were predefined. Logistic regression and locally weighted scatterplot regression were used for analysis. Preterm and former preterm children had higher adverse event rates (14.7% vs 8.5%) compared with children born at term. Our analysis revealed a biphasic pattern for the development of adverse sedation/anesthesia events. Airway and respiratory adverse events were most commonly reported. MRI scans were the most commonly performed procedures in both categories of patients. Patients born preterm are nearly twice as likely to develop sedation/anesthesia adverse events, and this risk continues up to 23 years of age. We recommend obtaining birth history during the formulation of an anesthetic/sedation plan, with heightened awareness that preterm and former preterm children may be at increased risk. Further prospective studies focusing on the etiology and prevention of adverse events in former preterm patients are warranted. Copyright © 2016 by the American Academy of Pediatrics.
Accounting for the relationship between per diem cost and LOS when estimating hospitalization costs.
Ishak, K Jack; Stolar, Marilyn; Hu, Ming-yi; Alvarez, Piedad; Wang, Yamei; Getsios, Denis; Williams, Gregory C
2012-12-01
Hospitalization costs in clinical trials are typically derived by multiplying the length of stay (LOS) by an average per-diem (PD) cost from external sources. This assumes that PD costs are independent of LOS. Resource utilization in early days of the stay is usually more intense, however, and thus, the PD cost for a short hospitalization may be higher than for longer stays. The shape of this relationship is unlikely to be linear, as PD costs would be expected to gradually plateau. This paper describes how to model the relationship between PD cost and LOS using flexible statistical modelling techniques. An example based on a clinical study of clevidipine for the treatment of peri-operative hypertension during hospitalizations for cardiac surgery is used to illustrate how inferences about cost-savings associated with good blood pressure (BP) control during the stay can be affected by the approach used to derive hospitalization costs. Data on the cost and LOS of hospitalizations for coronary artery bypass grafting (CABG) from the Massachusetts Acute Hospital Case Mix Database (the MA Case Mix Database) were analyzed to link LOS to PD cost, factoring in complications that may have occurred during the hospitalization or post-discharge. The shape of the relationship between LOS and PD costs in the MA Case Mix was explored graphically in a regression framework. A series of statistical models were considered, from those based on a simple logarithmic transformation of LOS to more flexible models using LOcally wEighted Scatterplot Smoothing (LOESS) techniques. A final model was selected, using simplicity and parsimony as guiding principles in addition to traditional fit statistics (like Akaike's Information Criterion, or AIC). This mapping was applied in ECLIPSE to predict an LOS-specific PD cost, and then a total cost of hospitalization. These were then compared for patients who had good vs. poor peri-operative blood-pressure control.
The MA Case Mix dataset included data from over 10,000 patients. Visual inspection of PD vs. LOS revealed a non-linear relationship. A logarithmic model and a series of LOESS and piecewise-linear models with varying connection points were tested. The logarithmic model was ultimately favoured for its fit and simplicity. Using this mapping in the ECLIPSE trials, we found that good peri-operative BP control was associated with a cost savings of $5,366 when costs were derived using the mapping, compared with savings of $7,666 obtained using the traditional approach of calculating the cost. PD costs vary systematically with LOS, with short stays being associated with high PD costs that drop gradually and level off. The shape of the relationship may differ in other settings. It is important to assess this and model the observed pattern, as this may have an impact on conclusions based on derived hospitalization costs.
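The logarithmic mapping can be sketched on simulated data (the cost figures and curve below are invented for illustration, not MA Case Mix values, and the LOESS comparison step is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data (invented, NOT MA Case Mix values): per-diem cost is
# high for short stays and levels off with log(LOS).
los = rng.integers(1, 31, size=500).astype(float)    # length of stay, days
pd_cost = 9000.0 - 2500.0 * np.log(los) + rng.normal(0.0, 400.0, 500)

# Fit the logarithmic model PD = a + b * log(LOS) by least squares.
X = np.column_stack([np.ones(500), np.log(los)])
(a, b), *_ = np.linalg.lstsq(X, pd_cost, rcond=None)

def hospitalization_cost(days):
    """LOS-specific total cost, (a + b log LOS) * LOS, instead of a
    flat average per-diem multiplied by LOS."""
    return (a + b * np.log(days)) * days
```

Deriving total cost through the fitted mapping, rather than a single average per-diem, is what shifts the estimated savings in the comparison described above.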
A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising
NASA Astrophysics Data System (ADS)
Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua
2018-04-01
In order to remove noise from three-dimensional scattered point clouds and smooth the data without damaging sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. Firstly, large-scale outliers are removed using statistics of the neighbouring points within radius r. Then, the algorithm estimates the curvature of the point cloud data using a conicoid parabolic-fitting method and calculates a curvature feature value. Finally, the proposed clustering algorithm is applied to calculate the weighted cluster centers, which are taken as the new points. The experimental results show that this approach handles noise of different scales and intensities in point clouds efficiently and with high precision while preserving sharp features, and that it is robust to different noise models.
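A generic weighted fuzzy c-means step can be sketched as follows (an illustrative stand-in, not the authors' code: in the paper the per-point weights would come from the curvature feature values, whereas here they are arbitrary non-negative inputs):

```python
import numpy as np

def weighted_fcm(pts, weights, n_clusters=2, m=2.0, n_iter=60, seed=0):
    """Fuzzy c-means with per-point weights: cluster centers are the
    weight-scaled fuzzy means, and memberships follow the standard
    FCM update u_ij proportional to d_ij^(-2/(m-1))."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(pts), n_clusters))
    u /= u.sum(axis=1, keepdims=True)        # random fuzzy memberships
    for _ in range(n_iter):
        um = (u ** m) * weights[:, None]     # weighted fuzzified memberships
        centers = um.T @ pts / um.sum(axis=0)[:, None]
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
                 rng.normal(5.0, 0.3, (100, 2))])
centers, u = weighted_fcm(pts, np.ones(len(pts)))
centers = centers[np.argsort(centers[:, 0])]   # sort for reproducible order
```

With curvature-derived weights, high-curvature (feature) points pull the cluster centers more strongly, which is the mechanism behind the feature-preserving behaviour described above.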
Metabolic activation of sodium nitroprusside to nitric oxide in vascular smooth muscle.
Kowaluk, E A; Seth, P; Fung, H L
1992-09-01
Sodium nitroprusside (SNP) is thought to exert its vasodilating activity, at least in part, by vascular activation to nitric oxide (NO), but the activation mechanism has not been delineated. This study has examined the potential for vascular metabolism of SNP to NO in bovine coronary arterial smooth muscle subcellular fractions using a sensitive and specific redox-chemiluminescence assay for NO. SNP was readily metabolized to NO in subcellular fractions, and the dominant site of metabolism appeared to be located in the membrane fractions. NO-generating activity was significantly enhanced by, but did not absolutely require, the addition of a NADPH-regenerating system, NADPH per se, NADH or cysteine. A correlation analysis of NO-generating activity (in the presence of a NADPH-regenerating system) with marker enzyme activities indicated that the SNP-directed NO-generating activity was primarily membrane-associated. Radiation inactivation target-size analysis revealed that the microsomal SNP-directed NO-generating activity was relatively insensitive to inactivation by radiation exposure, suggesting that the functioning catalytic unit might be quite small. A molecular weight of 5 to 11 kDa was estimated. NO-generating activity could be solubilized from the crude microsomes with 3-[(3-cholamidopropyl)- dimethylammonio]-1-propane sulfonate, and the solubilized extract was subjected to gel filtration chromatography. NO-generating activity was eluted in two peaks: one peak corresponding to an approximate molecular weight of 4 kDa, thus confirming the existence of a small molecular weight NO-generating activity, and a second activity peak corresponding to a molecular weight of 112 to 169 kDa, the functional significance of which is unclear at present.(ABSTRACT TRUNCATED AT 250 WORDS)
Pool boiling of water-Al2O3 and water-Cu nanofluids on horizontal smooth tubes
2011-01-01
Experimental investigation of heat transfer during pool boiling of two nanofluids, i.e., water-Al2O3 and water-Cu, has been carried out. Nanoparticles were tested at concentrations of 0.01%, 0.1%, and 1% by weight. Horizontal smooth copper and stainless steel tubes of 10 mm OD and 0.6 mm wall thickness formed the test heater. The experiments were performed to establish the influence of nanofluid concentration as well as tube surface material on heat transfer characteristics at atmospheric pressure. The results indicate that, independent of concentration, the nanoparticle material (Al2O3 or Cu) has almost no influence on the heat transfer coefficient during boiling of water-Al2O3 or water-Cu nanofluids on a smooth copper tube. It seems that the heater material did not affect boiling heat transfer in the 0.1 wt.% water-Cu nanofluid; nevertheless, independent of concentration, a distinctly higher heat transfer coefficient was recorded for the stainless steel tube than for the copper tube at the same heat flux density. PMID:21711741
Smoothing of cost function leads to faster convergence of neural network learning
NASA Astrophysics Data System (ADS)
Xu, Li-Qun; Hall, Trevor J.
1994-03-01
One of the major problems in supervised learning of neural networks is the presence of local minima inherent in the cost function f(W, D). This often renders powerless the classic gradient-descent-based learning algorithms that compute the weight update for each iteration as ΔW(t) = −η∇_W f(W, D). In this paper we describe a new strategy to solve this problem, which adaptively changes the learning rate and manipulates the gradient estimator simultaneously. The idea is to implicitly convert the local-minima-laden cost function f(·) into a sequence of its smoothed versions {f_{β_t}}, t = 1, …, T, which, controlled by the parameter β_t, carries less detail at t = 1 and gradually more later on; the learning is actually performed on this sequence of functionals. The corresponding smoothed global minima obtained in this way, {W_t}, t = 1, …, T, thus progressively approximate W*, the desired global minimum. Experimental results on a nonconvex function-minimization problem and a typical neural-network learning task are given, and analyses and discussions of some important issues are provided.
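The strategy can be illustrated on a toy nonconvex function whose Gaussian-smoothed versions are available in closed form (a sketch of graduated smoothing only, not the paper's learning algorithm; the function, schedule, and learning rate are invented):

```python
import numpy as np

# Toy nonconvex cost: f(w) = w^2 + sin(5 w). Convolving with a
# Gaussian of width beta smooths it in closed form,
#   f_beta(w) = w^2 + beta^2 + exp(-12.5 beta^2) sin(5 w),
# because E[(w + beta e)^2] = w^2 + beta^2 and
# E[sin(5 (w + beta e))] = exp(-25 beta^2 / 2) sin(5 w) for e ~ N(0, 1).
f = lambda w: w ** 2 + np.sin(5.0 * w)

def grad_smoothed(w, beta):
    """Gradient of the smoothed cost f_beta at w."""
    return 2.0 * w + 5.0 * np.exp(-12.5 * beta ** 2) * np.cos(5.0 * w)

def descend(w, beta, lr=0.02, steps=300):
    for _ in range(steps):
        w -= lr * grad_smoothed(w, beta)
    return w

w_plain = descend(1.5, beta=0.0)      # plain GD: trapped in a local minimum
w = 1.5
for beta in (1.0, 0.8, 0.6, 0.4, 0.2, 0.0):
    w = descend(w, beta)              # track the minimum as detail returns
```

Tracking the minimizer while β shrinks keeps the iterate in the basin of the global minimum, whereas plain gradient descent from the same start settles in a nearby local minimum.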
NASA Technical Reports Server (NTRS)
Lang, H. R.; Conel, J. E.; Paylor, E. D.
1984-01-01
A LIDQA evaluation for geologic applications of a LANDSAT TM scene covering the Wind River/Bighorn Basin area, Wyoming, is examined. This involves a quantitative assessment of data quality, including spatial and spectral characteristics. Analysis is concentrated on the six visible, near-infrared, and short-wavelength infrared bands. Preliminary analysis demonstrates that principal-component images derived from the correlation matrix provide the most useful geologic information. To extract surface spectral reflectance, the TM radiance data must be calibrated. Scatterplots demonstrate that TM data can be calibrated and that sensor response is essentially linear. Low instrumental offset and gain settings result in spectral data that do not utilize the full dynamic range of the TM system.
Moaveni, Daria K; Lynch, Erin M; Luke, Cathy; Sood, Vikram; Upchurch, Gilbert R; Wakefield, Thomas W; Henke, Peter K
2008-03-01
Vein wall endothelial turnover after stasis deep vein thrombosis (DVT) has not been well characterized. The purpose of this study was to quantify re-endothelialization after DVT and determine whether low-molecular-weight heparin (LMWH) therapy affects this process. Stasis DVT was generated in the rat by inferior vena cava ligation, with harvest at 1, 4, and 14 days. Vascular smooth muscle cells and luminal endothelialization were quantified immunohistologically by positive staining for alpha-smooth muscle actin and von Willebrand factor, respectively. In separate experiments, rats were treated either before or after DVT with subcutaneous LMWH (3 mg/kg daily) until harvesting at 4 and 14 days. The inferior vena cava was processed either for histologic analysis or for organ culture after the thrombus was gently removed. The vein wall was stimulated in vitro with interleukin-1beta (1 ng/mL), and the supernatant was processed at 48 hours for nitric oxide. Cells were processed by real-time polymerase chain reaction for endothelial nitric oxide synthase, inducible nitric oxide synthase, cyclooxygenase-1 and -2, and thrombomodulin at 4 and 14 days, and for collagen I and III at 14 days. Comparisons were made with analysis of variance or t test; P < .05 was considered significant. Thrombus size peaked at 4 days, whereas luminal re-endothelialization increased over time (1 day, 11% +/- 2%; 4 days, 23% +/- 4%; 14 days, 64% +/- 7% (+) von Willebrand factor staining; P < .01, n = 3 to 4, compared with non-DVT control). Similarly, vascular smooth muscle cell staining was lowest at day 1 and gradually returned to baseline by 14 days. Both before and after DVT, LMWH significantly increased luminal re-endothelialization, without a difference in thrombus size at 4 days, but no significant difference was noted at 14 days despite smaller thrombi with LMWH treatment.
Pretreatment with LMWH was associated with increased vascular smooth muscle cell area and recovery of certain inducible endothelial specific genes. No significant difference in nitric oxide levels in the supernatant was found at 4 days. At 14 days, type III collagen was significantly elevated with LMWH treatment. Venous re-endothelialization occurs progressively as the DVT resolves and can be accelerated with LMWH treatment, although this effect appears limited to the early time frame. These findings may have clinical relevance for LMWH timing and treatment compared with mechanical forms of therapy. How the vein wall endothelium responds after deep vein thrombosis (DVT) has not been well documented owing to limited human specimens. This report shows that low-molecular-weight heparin accelerates or protects the endothelium and preserves medial smooth muscle cell integrity after DVT, but that this effect is limited to a relatively early time period. Although most DVT prophylaxis is pharmacologic (a heparin agent), use of nonpharmacologic measures is also common. The use of heparin prophylaxis, compared with after DVT treatment, and the acceleration of post-DVT re-endothelialization require clinical correlation.
New quantitative method for evaluation of motor functions applicable to spinal muscular atrophy.
Matsumaru, Naoki; Hattori, Ryo; Ichinomiya, Takashi; Tsukamoto, Katsura; Kato, Zenichiro
2018-03-01
The aim of this study was to develop and introduce a new method to quantify motor functions of the upper extremity. The movement was recorded using a three-dimensional motion capture system, and the movement trajectory was analyzed using two newly developed indices, which measure repeatability precision and directional smoothness. Our target task was shoulder flexion repeated ten times. We applied our method to a healthy adult without and with a weight, simulating muscle impairment. We also applied our method to assess the efficacy of a drug therapy for amelioration of motor functions in a non-ambulatory patient with spinal muscular atrophy. Movement trajectories before and after thyrotropin-releasing hormone therapy were analyzed. In the healthy adult, the values of both indices increased significantly when holding a weight, so the weight-induced deterioration in motor function was successfully detected. In the efficacy assessment of drug therapy in the patient, the directional smoothness index successfully detected improvements in motor function, which were also clinically observed by the patient's doctors. We have developed a new quantitative evaluation method for motor functions of the upper extremity. The clinical usability of this method is also greatly enhanced by reducing the required number of body-attached markers to only one. This simple but universal approach to quantifying motor functions will provide additional insights into the clinical phenotypes of various neuromuscular diseases and developmental disorders. Copyright © 2017 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
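A directional-smoothness index of the kind described above can be sketched numerically. The formulation below is an illustrative assumption, not the authors' exact definition: the index is the mean turning angle between successive displacement vectors of the single marker's trajectory, so a perfectly straight reach scores zero and jerky movement scores higher.

```python
import numpy as np

def directional_smoothness(traj):
    """Mean turning angle (radians) between successive displacement
    vectors of a marker trajectory (rows = samples, columns = x, y, z).

    Assumed illustrative formulation: 0 for a perfectly straight path,
    larger for direction-changing movement. Consecutive samples are
    assumed distinct (nonzero displacements).
    """
    v = np.diff(np.asarray(traj, dtype=float), axis=0)   # displacement vectors
    v /= np.linalg.norm(v, axis=1, keepdims=True)        # unit directions
    cos = np.clip(np.einsum('ij,ij->i', v[:-1], v[1:]), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))
```

A repeatability index could analogously compare the ten repetitions, for example via the pointwise variance of time-normalized trajectories.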
Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong
2012-01-01
Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, a piecewise-smooth X-ray computed tomography (CT) can be reconstructed from sparse-view projection data without introducing noticeable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously-reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several noticeable gains, in terms of noise-resolution tradeoff plots and full width at half maximum values, as compared to the corresponding conventional TV-POCS algorithm. PMID:23154621
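The adaptive weighting idea can be sketched in a few lines. The abstract states that the weights are an exponential function of the local intensity gradient; the specific form w = exp(-(Δv/δ)²), the scale δ, and the 2-D restriction below are illustrative assumptions:

```python
import numpy as np

def awtv(img, delta=0.005):
    """Adaptive-weighted TV of a 2-D image (sketch).

    Each neighbor difference is weighted by an exponential of the local
    intensity difference, so strong edges (large differences) receive
    small weights and are penalized less than smooth regions.
    """
    dx = np.diff(img, axis=1)           # horizontal differences, (H, W-1)
    dy = np.diff(img, axis=0)           # vertical differences,   (H-1, W)
    wx = np.exp(-(dx / delta) ** 2)     # edge-preserving weights
    wy = np.exp(-(dy / delta) ** 2)
    # anisotropic weighted TV over the (H-1, W-1) overlap region
    return np.sum(np.sqrt(wx[:-1, :] * dx[:-1, :] ** 2 +
                          wy[:, :-1] * dy[:, :-1] ** 2))
```

A sharp step edge thus contributes almost nothing to the penalty, whereas conventional TV would penalize it fully; this is what mitigates the over-smoothing on edges.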
Cimetidine (Tagamet) is a reproductive toxicant in male rats affecting peritubular cells.
França, L R; Leal, M C; Sasso-Cerri, E; Vasconcelos, A; Debeljuk, L; Russell, L D
2000-11-01
Cimetidine (Tagamet) is a potent histaminic H2-receptor antagonist, extensively prescribed for ulcers and now available without prescription. Cimetidine is a known testicular toxicant, but its mechanism of action remains uncertain. Rats were treated i.p. with cimetidine at either 50 mg/kg or 250 mg/kg body weight for 59 days. Accessory sex organ weights, but not testis weight, were significantly reduced in the high-dose treated groups. FSH levels were significantly elevated in both treated groups, but testosterone levels were unchanged. A high degree of variability characterized testis histology, with most tubules appearing normal and some tubules (15-17%) partially lacking or devoid of germ cells. Morphometry showed that although seminiferous tubule volume was not significantly changed, the volume of peritubular tissue was reduced in the high-dose group. There was extensive duplication of the basal lamina (lamina densa) in both apparently normal spermatogenic tubules and severely damaged tubules. Apoptotic peritubular myoid cells were also found. TUNEL labeling confirmed extensive apoptotic cell death in peritubular cells, but also revealed apoptosis of vascular smooth muscle. Given that 1) peritubular myoid cell apoptosis occurs in apparently normal tubules, 2) basal lamina disorders are found, and 3) peritubular cells are lost from the testis, it is suggested that the primary event in cimetidine-related damage targets testicular smooth muscle cells. This is the first in vivo-administered toxicant described that targets myoid cells, resulting in abnormal spermatogenesis.
NASA Astrophysics Data System (ADS)
Zhang, Cheng; Wenbo, Mei; Huiqian, Du; Zexian, Wang
2018-04-01
A new algorithm for medical image fusion is proposed in this paper, combining a gradient minimization smoothing filter (GMSF) with a non-subsampled directional filter bank (NSDFB). In order to preserve more detail information, a multi-scale edge-preserving decomposition framework (MEDF) was used to decompose an image into a base image and a series of detail images. For the fusion of base images, a local Gaussian membership function was applied to construct the fusion weighting factor. For the fusion of detail images, the NSDFB was applied to decompose each detail image into multiple directional sub-images, which were then fused by a pulse-coupled neural network (PCNN). The experimental results demonstrate that the proposed algorithm is superior to the compared algorithms in both visual effect and objective assessment.
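A rough illustration of fusing two base images with Gaussian-membership weights follows. The abstract does not specify the membership construction, so the per-pixel membership (a Gaussian of each pixel's distance to its image's mean intensity) and the normalization are assumptions for illustration only:

```python
import numpy as np

def fuse_base(a, b, sigma=0.2):
    """Weighted fusion of two base images with values in [0, 1].

    Hypothetical membership choice: a pixel's membership is a Gaussian of
    its distance to its own image's mean intensity; the two memberships
    are normalized into a per-pixel convex-combination weight.
    """
    mu_a = np.exp(-(a - a.mean()) ** 2 / (2 * sigma ** 2))
    mu_b = np.exp(-(b - b.mean()) ** 2 / (2 * sigma ** 2))
    w = mu_a / (mu_a + mu_b + 1e-12)     # normalized fusion weight
    return w * a + (1 - w) * b           # per-pixel convex combination
```

Because the result is a pointwise convex combination, each fused pixel stays between the two input pixel values.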
Smooth function approximation using neural networks.
Ferrari, Silvia; Stengel, Robert F
2005-01-01
An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated to the network adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
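The key observation, that the network output is linear in the output weights once the input-side weights are fixed, can be sketched for a one-dimensional single-hidden-layer tanh network. The fixed weight and bias values below are arbitrary illustrative choices, not the algorithm from the paper:

```python
import numpy as np

def algebraic_train(x, y):
    """Exact input-output matching for a 1-D single-hidden-layer tanh net.

    With input weights W and biases b held fixed (one hidden node per
    training sample), the output y = S @ v is linear in the output
    weights v, so exact matching reduces to a single linear solve.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    W = np.arange(1.0, n + 1.0)          # fixed, distinct input weights (illustrative)
    b = 0.3 * np.arange(n)               # fixed input biases (illustrative)
    S = np.tanh(np.outer(x, W) + b)      # hidden-node output matrix, (n, n)
    v = np.linalg.solve(S, y)            # output weights from the linear system

    def predict(xq):
        xq = np.atleast_1d(np.asarray(xq, float))
        return np.tanh(np.outer(xq, W) + b) @ v

    return predict
```

Gradient information could be matched the same way, by appending rows for the derivative equations, which are also linear in v.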
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert
2016-06-21
A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness to arbitrary Lagrangian Eulerian (ALE) methods.
Dawson, Colin; Gerken, Louann
2011-09-01
While many constraints on learning must be relatively experience-independent, past experience provides a rich source of guidance for subsequent learning. Discovering structure in some domain can inform a learner's future hypotheses about that domain. If a general property accounts for particular sub-patterns, a rational learner should not stipulate separate explanations for each detail without additional evidence, as the general structure has "explained away" the original evidence. In a grammar-learning experiment using tone sequences, manipulating learners' prior exposure to a tone environment affected their sensitivity to the grammar-defining feature, in this case consecutive repeated tones. Grammar-learning performance was worse if context melodies were "smooth" (small intervals occurring more often than large ones), as Smoothness is a general property accounting for a high rate of repetition. We present an idealized Bayesian model as a "best case" benchmark for learning repetition grammars. When context melodies are Smooth, the model places greater weight on the small-interval constraint and does not learn the repetition rule as well as when context melodies are not Smooth, paralleling the human learners. These findings support an account of abstract grammar induction in which learners rationally assess the statistical evidence for underlying structure based on a generative model of the environment. Copyright © 2010 Elsevier B.V. All rights reserved.
Surface modification of an Mg-1Ca alloy to slow down its biocorrosion by chitosan.
Gu, X N; Zheng, Y F; Lan, Q X; Cheng, Y; Zhang, Z X; Xi, T F; Zhang, D Y
2009-08-01
The surface morphologies of various chitosan-coated Mg-1Ca alloy samples before and after immersion corrosion testing were studied to investigate the effect of chitosan dip coating on slowing biocorrosion. The results showed that the corrosion resistance of the Mg-1Ca alloy increased after coating with chitosan and depended on both the chitosan molecular weight and the number of coating layers. The Mg-1Ca alloy coated with six layers of chitosan with a molecular weight of 2.7 x 10(5) had a smooth, intact surface morphology and exhibited the highest corrosion resistance in a simulated body fluid.
NASA Astrophysics Data System (ADS)
Lenoir, Guillaume; Crucifix, Michel
2018-03-01
Geophysical time series are sometimes sampled irregularly along the time axis. The situation is particularly frequent in palaeoclimatology. Yet, there is so far no general framework for handling the continuous wavelet transform when the time sampling is irregular. Here we provide such a framework. To this end, we define the scalogram as the continuous-wavelet-transform equivalent of the extended Lomb-Scargle periodogram defined in Part 1 of this study (Lenoir and Crucifix, 2018). The signal being analysed is modelled as the sum of a locally periodic component in the time-frequency plane, a polynomial trend, and a background noise. The mother wavelet adopted here is the Morlet wavelet classically used in geophysical applications. The background noise model is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, which is more general than the traditional Gaussian white and red noise processes. The scalogram is smoothed by averaging over neighbouring times in order to reduce its variance. The Shannon-Nyquist exclusion zone is however defined as the area corrupted by local aliasing issues. The local amplitude in the time-frequency plane is then estimated with least-squares methods. We also derive an approximate formula linking the squared amplitude and the scalogram. Based on this property, we define a new analysis tool: the weighted smoothed scalogram, which we recommend for most analyses. The estimated signal amplitude also gives access to band and ridge filtering. Finally, we design a test of significance for the weighted smoothed scalogram against the stationary Gaussian CARMA background noise, and provide algorithms for computing confidence levels, either analytically or with Monte Carlo Markov chain methods. All the analysis tools presented in this article are available to the reader in the Python package WAVEPAL.
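A direct-summation Morlet scalogram illustrates why irregular sampling is tractable for the continuous wavelet transform: the wavelet is evaluated at the actual sample times, so no interpolation onto a regular grid is needed. This sketch uses the classical Morlet parameter ω₀ = 6 and omits the CARMA noise model, the time smoothing, and the significance testing described above; the unweighted sum is only a rough approximation of the underlying integral:

```python
import numpy as np

def morlet_scalogram(t, x, freqs, omega0=6.0):
    """|CWT|^2 of an irregularly sampled signal with a Morlet wavelet.

    Evaluated by direct summation at the actual sample times t, so no
    resampling onto a regular grid is required.
    """
    t, x = np.asarray(t, float), np.asarray(x, float)
    S = np.empty((len(freqs), len(t)))
    for i, f in enumerate(freqs):
        s = omega0 / (2 * np.pi * f)                    # scale matching frequency f
        for j, tj in enumerate(t):
            u = (t - tj) / s
            psi = np.exp(1j * omega0 * u - u ** 2 / 2)  # Morlet wavelet at (tj, s)
            S[i, j] = np.abs(np.sum(x * np.conj(psi))) ** 2 / s
    return S
```

For a pure sinusoid sampled at random times, the scalogram peaks at the matching frequency row, away from the boundary-affected edges of the time axis.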
Roles of Polyuria and Hyperglycemia on Bladder Dysfunction in Diabetes
Xiao, Nan; Wang, Zhiping; Huang, Yexiang; Daneshgari, Firouz; Liu, Guiming
2014-01-01
Purpose: Diabetes mellitus (DM) causes diabetic bladder dysfunction (DBD). We aimed to identify the pathogenic roles of polyuria and hyperglycemia in DBD in rats. Materials and Methods: Seventy-two female Sprague-Dawley rats were divided into six groups: age-matched controls (control), sham urinary diversion (sham), urinary diversion (UD), streptozotocin-induced diabetes after sham UD (DM), streptozotocin-induced diabetes after UD (UD+DM), and 5% sucrose-induced diuresis after sham UD (DIU). UD was performed by ureterovaginostomy 10 days before DM induction. Animals were evaluated 20 weeks after DM or diuresis induction. We measured 24-hr drinking and voiding volumes and performed cystometry (CMG). Bladders were harvested for quantification of smooth muscle, urothelium, and collagen. We measured nitrotyrosine and manganese superoxide dismutase (MnSOD) in the bladder. Results: Diabetes and diuresis caused increases in drinking volume, voiding volume, and bladder weight. Bladder weights decreased in the UD and UD+DM groups. Intercontractile intervals, voided volume, and compliance increased in the DIU and DM groups, decreased in the UD group, and further decreased in the UD+DM group. The total cross-sectional tissue, smooth muscle, and urothelium areas increased in the DIU and DM groups and decreased in the UD and UD+DM groups. As percentages of total tissue area, collagen decreased in the DIU and DM groups and increased in the UD and UD+DM groups, while smooth muscle and urothelium decreased in the UD and UD+DM groups. Nitrotyrosine and MnSOD increased in DM and UD+DM rats. Conclusions: Polyuria induced bladder hypertrophy, while hyperglycemia induced substantial oxidative stress in the bladder, which may play a pathogenic role in late-stage DBD. PMID:22999997
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guard-Petter, J.; Parker, C.T.; Asokan, K.
1999-05-01
Twelve human and chicken isolates of Salmonella enterica serovar Enteritidis belonging to phage types 4, 8, 13a, and 23 were characterized for variability in lipopolysaccharide (LPS) composition. Isolates were differentiated into two groups, i.e., those that lacked immunoreactive O-chain, termed rough isolates, and those that had immunoreactive O-chain, termed smooth isolates. Isolates within these groups could be further differentiated by LPS compositional differences as detected by gel electrophoresis and gas liquid chromatography of samples extracted with water, which yielded significantly more LPS in comparison to phenol-chloroform extraction. The rough isolates were of two types, the O-antigen synthesis mutants and the O-antigen polymerization (wzy) mutants. Smooth isolates were also of two types, one producing low-molecular-weight (LMW) LPS and the other producing high-molecular-weight (HMW) LPS. To determine the genetic basis for the O-chain variability of the smooth isolates, the authors analyzed the effects of a null mutation in the O-chain length determinant gene, wzz (cld), of serovar Typhimurium. This mutation results in a loss of HMW LPS; however, the LMW LPS of this mutant was longer and more glucosylated than that from clinical isolates of serovar Enteritidis. Cluster analysis of these data and of those from two previously characterized isogenic strains of serovar Enteritidis that had different virulence attributes indicated that glucosylation of HMW LPS (via oafR function) is variable and results in two types of HMW structures, one that is highly glucosylated and one that is minimally glucosylated. These results strongly indicate that naturally occurring variability in wzy, wzz, and oafR function can be used to subtype isolates of serovar Enteritidis during epidemiological investigations.
ICTNET at Microblog Track TREC 2012
2012-11-01
Weight(T) = … In external expansion, we use Google … based on language model, we choose stupid backoff as the smoothing technique and "queue" as the history retention technique [7]. In the filter based on … is used in ICTWDSERUN2. In both ICTWDSERUN3 and ICTWDSERUN4, we use Google search results as query expansion. The RankSVM method is used in both.
Cavernous neurotomy in the rat is associated with the onset of an overt condition of hypogonadism.
Vignozzi, Linda; Filippi, Sandra; Morelli, Annamaria; Marini, Mirca; Chavalmane, Aravinda; Fibbi, Benedetta; Silvestrini, Enrico; Mancina, Rosa; Carini, Marco; Vannelli, G Barbara; Forti, Gianni; Maggi, Mario
2009-05-01
Most men following radical retropubic prostatectomy (RRP) are afflicted by erectile dysfunction (ED). RRP-related ED occurs as a result of surgically elicited neuropraxia, leading to histological changes in the penis, including collagenization of smooth muscle and endothelial damage. The aim was to verify whether hypogonadism could contribute to the pathogenesis of RRP-ED by examining the effects of testosterone (T), alone or in association with long-term tadalafil (Tad) treatment, in a rat model of bilateral cavernous neurotomy (BCN). Penile tissues from rats were harvested for vasoreactivity studies 3 months post-BCN. Penile oxygenation was evaluated by hypoxyprobe immunostaining. Phosphodiesterase type 5 (PDE5), endothelial nitric oxide synthase (eNOS), and neuronal nitric oxide synthase (nNOS) mRNA expression were quantified by real-time quantitative reverse transcription polymerase chain reaction (qRT-PCR). In BCN rats, we observed the onset of an overt condition of hypogonadism, characterized by reduced T plasma level, reduced ventral prostate weight, and reduced testis function (including testis weight and number of Leydig cells), with an inadequate compensatory increase of luteinizing hormone. BCN induced massive penile hypoxia; decreased the muscle/fiber ratio and nNOS, eNOS, and PDE5 expression; increased sensitivity to the nitric oxide donor sodium nitroprusside (SNP); and reduced the relaxant response to acetylcholine (Ach), as well as causing unresponsiveness to acute Tad dosing. In BCN rats, chronic Tad administration normalizes penile oxygenation, smooth muscle loss, PDE5 expression, SNP sensitivity, and the responsiveness to acute Tad administration. Chronic Tad treatment was ineffective in counteracting the reduction of nNOS and eNOS expression, along with Ach responsiveness. 
T supplementation, in combination with Tad, reverted some of the aforementioned alterations, restoring smooth muscle content, eNOS expression, as well as the relaxant response of penile strips to Ach, but not nNOS expression. BCN was associated with hypogonadism, probably of central origin. T supplementation in hypogonadal BCN rats ameliorates some aspects of BCN-induced ED, including collagenization of penile smooth muscle and endothelial dysfunction, except surgically induced altered nNOS expression.
EphA2 Expression Regulates Inflammation and Fibroproliferative Remodeling in Atherosclerosis.
Finney, Alexandra C; Funk, Steven D; Green, Jonette M; Yurdagul, Arif; Rana, Mohammad Atif; Pistorius, Rebecca; Henry, Miriam; Yurochko, Andrew; Pattillo, Christopher B; Traylor, James G; Chen, Jin; Woolard, Matthew D; Kevil, Christopher G; Orr, A Wayne
2017-08-08
Atherosclerotic plaque formation results from chronic inflammation and fibroproliferative remodeling in the vascular wall. We previously demonstrated that both human and mouse atherosclerotic plaques show elevated expression of EphA2, a guidance molecule involved in cell-cell interactions and tumorigenesis. Here, we assessed the role of EphA2 in atherosclerosis by deleting EphA2 in a mouse model of atherosclerosis (Apoe-/-) and by assessing EphA2 function in multiple vascular cell culture models. After 8 to 16 weeks on a Western diet, male and female mice were assessed for atherosclerotic burden in the large vessels, and plasma lipid levels were analyzed. Despite enhanced weight gain and plasma lipid levels compared with Apoe-/- controls, EphA2-/-Apoe-/- knockout mice show diminished atherosclerotic plaque formation, characterized by reduced proinflammatory gene expression and plaque macrophage content. Although plaque macrophages express EphA2, EphA2 deletion does not affect macrophage phenotype, inflammatory responses, and lipid uptake, and bone marrow chimeras suggest that hematopoietic EphA2 deletion does not affect plaque formation. In contrast, endothelial EphA2 knockdown significantly reduces monocyte firm adhesion under flow. In addition, EphA2-/-Apoe-/- mice show reduced progression to advanced atherosclerotic plaques with diminished smooth muscle and collagen content. Consistent with this phenotype, EphA2 shows enhanced expression after smooth muscle transition to a synthetic phenotype, and EphA2 depletion reduces smooth muscle proliferation, mitogenic signaling, and extracellular matrix deposition both in atherosclerotic plaques and in vascular smooth muscle cells in culture. Together, these data identify a novel role for EphA2 in atherosclerosis, regulating both plaque inflammation and progression to advanced atherosclerotic lesions. 
Cell culture studies suggest that endothelial EphA2 contributes to atherosclerotic inflammation by promoting monocyte firm adhesion, whereas smooth muscle EphA2 expression may regulate the progression to advanced atherosclerosis by regulating smooth muscle proliferation and extracellular matrix deposition. © 2017 American Heart Association, Inc.
Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.
Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin
2017-02-01
Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors (a Laplacian prior for DCT coefficients, a sparsity prior, and a graph-signal smoothness prior for image patches) to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (the desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
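The random walk graph Laplacian at the heart of the LERaG prior is simple to form; a minimal sketch (the path-graph example in the test is illustrative):

```python
import numpy as np

def random_walk_laplacian(A):
    """Random walk graph Laplacian L = I - D^{-1} A of an adjacency matrix A,
    where D is the diagonal degree matrix."""
    d = A.sum(axis=1)                        # node degrees
    return np.eye(len(A)) - A / d[:, None]   # row-normalize A, subtract from I
```

A constant signal lies in the nullspace (L @ 1 = 0), which is why low graph frequencies correspond to smooth patches; LERaG additionally works with the left eigenvectors of this non-symmetric L, as described above.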
The convergence analysis of SpikeProp algorithm with smoothing L1∕2 regularization.
Zhao, Junhong; Zurada, Jacek M; Yang, Jie; Wu, Wei
2018-07-01
Unlike first- and second-generation artificial neural networks, spiking neural networks (SNNs) model the human brain by incorporating not only synaptic state but also a temporal component into their operating model. However, their intrinsic properties require expensive computation during training. This paper presents a novel SpikeProp algorithm for SNNs that introduces a smoothing L1/2 regularization term into the error function. This algorithm makes the network structure sparse, with some smaller weights that can eventually be removed. Meanwhile, the convergence of this algorithm is proved under some reasonable conditions. The proposed algorithms have been tested for convergence speed, convergence rate, and generalization on the classical XOR problem, the Iris problem, and Wisconsin Breast Cancer classification. Copyright © 2018 Elsevier Ltd. All rights reserved.
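One way to smooth the L1/2 penalty so that its gradient exists at zero is the surrogate (w² + ε²)^(1/4); this is a generic smoothing for illustration, not necessarily the exact function used by the authors:

```python
import numpy as np

def smoothed_l_half(w, eps=1e-3):
    """Smooth surrogate for the L1/2 penalty sum(|w_i|^(1/2)).

    (w^2 + eps^2)^(1/4) tends to |w|^(1/2) as eps -> 0 but stays
    differentiable at w = 0, so gradient-based SpikeProp training
    remains well defined while small weights are pushed toward zero.
    """
    return float(np.sum((w ** 2 + eps ** 2) ** 0.25))

def smoothed_l_half_grad(w, eps=1e-3):
    """Gradient of the surrogate: finite everywhere, exactly zero at w = 0
    (the raw |w|^(1/2) derivative blows up at the origin)."""
    return 0.5 * w * (w ** 2 + eps ** 2) ** -0.75
```

During training, this penalty (scaled by a regularization coefficient) is added to the SpikeProp error function, and its gradient is added to the weight updates, driving redundant synapses toward zero for pruning.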
Effect of the edaphic factors and metal content in soil on the diversity of Trichoderma spp.
Racić, Gordana; Körmöczi, Péter; Kredics, László; Raičević, Vera; Mutavdžić, Beba; Vrvić, Miroslav M; Panković, Dejana
2017-02-01
The influence of edaphic factors and metal content on the diversity of Trichoderma species was examined at 14 soil sampling locations at two depths. Forty-one Trichoderma isolates from the 14 sampling sites were assigned to nine species based on their internal transcribed spacer (ITS) sequences. Our results indicate that weakly alkaline soils are rich sources of Trichoderma strains, and that higher contents of available K and P are associated with higher Trichoderma diversity. Increased metal content in soil was not an inhibiting factor for the occurrence of Trichoderma species. The relationships between these factors were confirmed by nonparametric locally weighted (LOESS) smoothing analysis. The Trichoderma strain (Szeged Microbiology Collection (SZMC) 22669) from soil with concentrations of Cr and Ni above remediation values should be tested for its potential to bioremediate these metals in polluted soils.
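A LOESS fit of the kind used above can be sketched as tricube-weighted local linear regression (neighbourhood fraction and bandwidth handling below are the usual textbook choices):

```python
import numpy as np

def loess(x, y, x0, frac=0.5):
    """LOESS: tricube-weighted local linear regression at query points x0."""
    x, y, x0 = (np.asarray(a, float) for a in (x, y, x0))
    k = max(2, int(frac * len(x)))           # local neighbourhood size
    out = np.empty(len(x0))
    for j, xq in enumerate(x0):
        d = np.abs(x - xq)
        idx = np.argsort(d)[:k]              # k nearest neighbours
        h = d[idx].max() or 1.0              # local bandwidth (guard h = 0)
        sw = np.sqrt((1 - (d[idx] / h) ** 3) ** 3)   # sqrt of tricube weights
        X = np.column_stack([np.ones(k), x[idx]])
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y[idx] * sw, rcond=None)
        out[j] = beta[0] + beta[1] * xq      # local line evaluated at xq
    return out
```

Each smoothed value is the prediction of a straight line fitted only to nearby samples, with weights that fade to zero at the edge of the neighbourhood; exactly linear data are therefore reproduced exactly.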
An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution
NASA Astrophysics Data System (ADS)
Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan
2013-04-01
The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies, including smoothed-seismicity approaches. Smoothed seismicity represents an alternative concept for expressing the degree of spatial stationarity of seismicity and provides results that are more objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of being generally based on earthquake catalogs alone, i.e. the wealth of knowledge from geology is completely ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subduction zones. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities: the first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density); the second is obtained by smoothing fault moment-rate contributions. The fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-value of a truncated Gutenberg-Richter magnitude distribution based on a maximum-likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude, assuming that (1) the occurrence of past seismicity is a good proxy for the occurrence of future seismicity and (2) future large-magnitude events occur more likely in the vicinity of known faults. 
Consequently, the underlying location density of our model depends on the magnitude. We scale the density with the estimated a-value in order to construct a forecast that specifies the earthquake rate in each longitude-latitude-magnitude bin. The model is intended to be one branch of SHARE's logic tree of rupture forecasts and provides rates of events in the magnitude range of 5 <= m <= 8.5 for the entire region of interest and is suitable for comparison with other long-term models in the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP).
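Maximum-likelihood estimation of Gutenberg-Richter a- and b-values, as used above, is commonly done with Aki's estimator plus Utsu's binning correction. The sketch below simplifies completeness handling to a single cutoff magnitude and ignores the upper magnitude truncation:

```python
import numpy as np

def gutenberg_richter_mle(mags, m_c, dm=0.1):
    """Maximum-likelihood a- and b-value of a Gutenberg-Richter law.

    Aki's estimator with Utsu's correction for magnitudes binned to width
    dm; only events at or above the completeness magnitude m_c are used,
    and the a-value is anchored by the count: log10 N(>= m_c) = a - b*m_c.
    """
    m = np.asarray(mags, float)
    m = m[m >= m_c]                                       # completeness cut
    b = np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))    # Aki/Utsu estimator
    a = np.log10(len(m)) + b * m_c                        # anchor a to the count
    return a, b
```

Scaling the location probability density by the a-value, as described in the abstract, then yields the forecast rate in each longitude-latitude-magnitude bin.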
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad Allen
EDENx is a multivariate data visualization tool that allows interactive user driven analysis of large-scale data sets with high dimensionality. EDENx builds on our earlier system, called EDEN, to enable analysis of more dimensions and larger scale data sets. EDENx provides an initial overview of summary statistics for each variable in the data set under investigation. EDENx allows the user to interact with graphical summary plots of the data to investigate subsets and their statistical associations. These plots include histograms, binned scatterplots, binned parallel coordinate plots, timeline plots, and graphical correlation indicators. From the EDENx interface, a user can select a subsample of interest and launch a more detailed data visualization via the EDEN system. EDENx is best suited for high-level, aggregate analysis tasks while EDEN is more appropriate for detail data investigations.
NASA Astrophysics Data System (ADS)
Fathrio, Ibnu; Manda, Atsuyoshi; Iizuka, Satoshi; Kodama, Yasu-Masa; Ishida, Sachinobu
2018-05-01
This study presents an ocean heat budget analysis of sea surface temperature (SST) anomalies during strong and weak Asian summer (southwest) monsoons. As discussed in previous studies, there is a close relationship between variations of the Asian summer monsoon and SST anomalies in the western Indian Ocean. In this study we used ocean heat budget analysis to elucidate the dominant mechanism responsible for generating SST anomalies during weak and strong boreal summer monsoons. Our results showed that ocean advection plays a more important role in initiating SST anomalies than the atmospheric process (surface heat flux). Scatterplot analysis showed that vertical advection initiated SST anomalies in the western Arabian Sea and southwestern Indian Ocean, while zonal advection initiated SST anomalies in the western equatorial Indian Ocean.
WENO schemes on arbitrary mixed-element unstructured meshes in three space dimensions
NASA Astrophysics Data System (ADS)
Tsoutsanis, P.; Titarev, V. A.; Drikakis, D.
2011-02-01
The paper extends weighted essentially non-oscillatory (WENO) methods to three dimensional mixed-element unstructured meshes, comprising tetrahedral, hexahedral, prismatic and pyramidal elements. Numerical results illustrate the convergence rates and non-oscillatory properties of the schemes for various smooth and discontinuous solutions test cases and the compressible Euler equations on various types of grids. Schemes of up to fifth order of spatial accuracy are considered.
Monte Carlo calculation of dynamical properties of the two-dimensional Hubbard model
NASA Technical Reports Server (NTRS)
White, S. R.; Scalapino, D. J.; Sugar, R. L.; Bickers, N. E.
1989-01-01
A new method is introduced for analytically continuing imaginary-time data from quantum Monte Carlo calculations to the real-frequency axis. The method is based on a least-squares-fitting procedure with constraints of positivity and smoothness on the real-frequency quantities. Results are shown for the single-particle spectral-weight function and density of states for the half-filled, two-dimensional Hubbard model.
Garg, Hari G; Mrabat, Hicham; Yu, Lunyin; Hales, Charles A; Li, Boyangzi; Moore, Casey N; Zhang, Fuming; Linhardt, Robert J
2011-08-01
Heparin (HP) inhibits the growth of several cell types in vitro, including bovine pulmonary artery (BPA) smooth muscle cells (SMCs). In initial studies we discovered that an O-hexanoylated low-molecular-weight (LMW) HP derivative having acyl groups with a 6-carbon chain length was a more potent inhibitor of BPA-SMCs than the starting HP. We prepared several O-acylated LMWHP derivatives having 4-, 6-, 8-, 10-, 12-, and 18-carbon acyl chain lengths to determine the optimal acyl chain length for maximum anti-proliferative activity against BPA-SMCs. The starting LMWHP was prepared from unfractionated HP by sodium periodate treatment followed by sodium borohydride reduction. The tri-n-butylammonium salt of this LMWHP was O-acylated with butanoic, hexanoic, octanoic, decanoic, dodecanoic, and stearyl anhydrides separately to give the respective O-acylated LMWHP derivatives. Gradient polyacrylamide gel electrophoresis (PAGE) was used to examine the average molecular weights of these O-acylated LMWHP derivatives. NMR analysis indicated the presence of one O-acyl group per disaccharide residue. Measurement of the inhibition of BPA-SMCs as a function of O-acyl chain length shows two optima, at a carbon chain length of 6 (O-hexanoylated LMWHP) and at a carbon chain length of 12-18 (O-dodecanoyl and O-stearyl LMWHPs). A solution competition SPR study was performed to test the ability of different O-acylated LMWHP derivatives to inhibit fibroblast growth factor (FGF) 1 and FGF2 binding to surface-immobilized heparin. All the LMWHP derivatives bound to FGF1 and FGF2, but each exhibited slightly different binding affinity.
NASA Astrophysics Data System (ADS)
Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.
2017-04-01
We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.
Locomotion and attachment of leaf beetle larvae Gastrophysa viridula (Coleoptera, Chrysomelidae).
Zurek, Daniel B; Gorb, Stanislav N; Voigt, Dagmar
2015-02-06
While adult green dock leaf beetles Gastrophysa viridula use tarsal adhesive setae to attach to and walk on smooth vertical surfaces and ceilings, larvae apply different devices for similar purposes: pretarsal adhesive pads on thoracic legs and a retractable pygopod at the 10th abdominal segment. Both are soft smooth structures and capable of wet adhesion. We studied attachment ability of different larval instars, considering the relationship between body weight and real contact area between attachment devices and the substrate. Larval gait patterns were analysed using high-speed video recordings. Instead of the tripod gait of adults, larvae walked by swinging contralateral legs simultaneously while adhering by the pygopod. Attachment ability of larval instars was measured by centrifugation on a spinning drum, revealing that attachment force decreases relative to weight. Contributions of different attachment devices to total attachment ability were investigated by selective disabling of organs by covering them with melted wax. Despite their smaller overall contact area, tarsal pads contributed to a larger extent to total attachment ability, probably because of their distributed spacing. Furthermore, we observed different behaviour in adults and larvae when centrifuged: while adults gradually slipped outward on the centrifuge drum surface, larvae stayed at the initial position until sudden detachment.
StarSmasher: Smoothed Particle Hydrodynamics code for smashing stars and planets
NASA Astrophysics Data System (ADS)
Gaburov, Evghenii; Lombardi, James C., Jr.; Portegies Zwart, Simon; Rasio, F. A.
2018-05-01
Smoothed Particle Hydrodynamics (SPH) is a Lagrangian particle method that approximates a continuous fluid as discrete nodes, each carrying various parameters such as mass, position, velocity, pressure, and temperature. In an SPH simulation the resolution scales with the particle density; StarSmasher is able to handle both equal-mass and equal-number-density particle models. StarSmasher solves for hydro forces by calculating the pressure for each particle as a function of the particle's properties: density, internal energy, and internal properties (e.g. temperature and mean molecular weight). The code implements variational equations of motion and libraries to calculate the gravitational forces between particles using direct summation on NVIDIA graphics cards. Using direct summation instead of a tree-based algorithm for gravity increases the accuracy of the gravity calculations at the cost of speed. The code uses a cubic spline for the smoothing kernel and an artificial viscosity prescription coupled with a Balsara switch to prevent unphysical interparticle penetration. The code also adds an artificial relaxation force to the equations of motion, contributing a drag term to the calculated accelerations during relaxation integrations. StarSmasher, initially called StarCrash, was originally developed by Rasio.
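The cubic spline smoothing kernel mentioned above has a standard closed form. A minimal sketch in Python follows, using the generic textbook normalization for 3D with support radius 2h; StarSmasher's internal constants and units are not taken from this abstract:

```python
import math

def cubic_spline_kernel(r, h):
    """Standard 3D cubic spline SPH smoothing kernel W(r, h).

    Normalized so the kernel integrates to 1 over all space; the
    support radius is 2h. Generic textbook form, not necessarily
    the exact normalization used inside StarSmasher.
    """
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)  # 3D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0
```

The piecewise polynomial is continuous with continuous first derivative at q = 1 and vanishes smoothly at q = 2, which is why this kernel is a common default in SPH codes.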
Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P
2015-03-01
Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.
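The leveling step can be illustrated with a simplified robust trend fit. The sketch below fits a plane (rather than the paper's smooth 2-d local-regression trend) by iteratively reweighted least squares with the Tukey biweight, so that feature and outlier pixels are automatically downweighted; the function name and parameter values are illustrative assumptions:

```python
import numpy as np

def robust_plane_level(z, n_iter=10, c=4.685):
    """Remove a planar trend from image z via iteratively reweighted
    least squares with the Tukey biweight, so outlier pixels (features)
    barely influence the fitted trend. A simplified stand-in for the
    paper's robust local-regression leveling."""
    ny, nx = z.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([np.ones(z.size), xx.ravel(), yy.ravel()])
    b = z.ravel()
    w = np.ones(z.size)
    coef = np.zeros(3)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        # Weighted least squares for plane coefficients [offset, x-slope, y-slope]
        coef, *_ = np.linalg.lstsq(A * sw[:, None], sw * b, rcond=None)
        r = b - A @ coef
        # Robust scale from the median absolute deviation of residuals
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        u = np.clip(np.abs(r) / (c * s), 0.0, 1.0)
        w = (1.0 - u ** 2) ** 2  # Tukey biweight: zero weight beyond c*s
    return (b - A @ coef).reshape(z.shape), coef
```

After a few iterations, spike-like features receive zero weight and the fitted trend tracks only the background tilt.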
Standing-up exerciser based on functional electrical stimulation and body weight relief.
Ferrarin, M; Pavan, E E; Spadone, R; Cardini, R; Frigo, C
2002-05-01
The goal of the present work was to develop and test an innovative system for training paraplegic patients to stand up. The system consisted of a computer-controlled stimulator, surface electrodes for quadriceps muscle stimulation, two knee angle sensors, a digital proportional-integral-derivative (PID) controller and a mechanical device to partially support the body weight (weight reliever, WR). A biomechanical model of the combined WR and patient was developed to find an optimum reference trajectory for the PID controller. The system was tested on three paraplegic patients and was shown to be reliable and safe. One patient completed a 30-session training period. Initially, he was able to stand up only with 62% body weight relief, whereas, after the training period, he performed a series of 30 standing-up/sitting-down cycles with 45% body weight relief. The closed-loop controller was able to keep the patient standing upright with minimum stimulation current, to compensate automatically for muscle fatigue and to smooth the sitting-down movement. The limitations of the controller in connection with a highly non-linear system are considered.
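The discrete PID control law at the heart of such a controller can be sketched in a few lines. The gains, time step, and toy first-order plant below are illustrative placeholders, not values from the paper's knee-angle controller:

```python
def make_pid(kp, ki, kd, dt):
    """Discrete PID controller (positional form). Returns a step
    function mapping (setpoint, measurement) -> control output.
    Gains are illustrative, not taken from the paper."""
    state = {"integral": 0.0, "prev_err": None}

    def step(setpoint, measured):
        err = setpoint - measured
        state["integral"] += err * dt
        deriv = 0.0 if state["prev_err"] is None else (err - state["prev_err"]) / dt
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv

    return step
```

The integral term is what lets such a controller hold a posture with minimum steady-state error and compensate for slow disturbances such as muscle fatigue.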
Joint groupwise registration and ADC estimation in the liver using a B-value weighted metric.
Sanz-Estébanez, Santiago; Rabanillo-Viloria, Iñaki; Royuela-Del-Val, Javier; Aja-Fernández, Santiago; Alberola-López, Carlos
2018-02-01
The purpose of this work is to develop a groupwise elastic multimodal registration algorithm for robust ADC estimation in the liver on multiple breath-hold diffusion-weighted images. We introduce a joint formulation to simultaneously solve both the registration and the estimation problems. In order to avoid unreliable transformations and undesirable noise amplification, we have included appropriate smoothness constraints for both problems. Our metric incorporates the ADC estimation residuals, which are inversely weighted according to the signal content in each diffusion-weighted image. Results show that the joint formulation provides a statistically significant improvement in the accuracy of the ADC estimates. Reproducibility has also been measured on real data in terms of the distribution of ADC differences obtained from different b-value subsets. The proposed algorithm is able to effectively deal with both the presence of motion and the geometric distortions, increasing accuracy and reproducibility in diffusion parameter estimation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
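The mono-exponential signal model behind ADC estimation, S(b) = S0·exp(−b·ADC), can be fitted by weighted log-linear regression. The sketch below uses weights proportional to S², a standard choice that counteracts the noise amplification of the log transform at low signal; the paper's joint metric uses a related but not identical signal-based weighting:

```python
import math

def fit_adc(bvals, signals):
    """Weighted log-linear fit of S(b) = S0 * exp(-b * ADC).
    Weights proportional to S^2 offset the variance inflation of
    log(S) at low signal (a standard choice, not the paper's exact
    weighting). Returns (ADC, S0)."""
    w = [s * s for s in signals]
    y = [math.log(s) for s in signals]
    sw = sum(w)
    mx = sum(wi * bi for wi, bi in zip(w, bvals)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (bi - mx) ** 2 for wi, bi in zip(w, bvals))
    sxy = sum(wi * (bi - mx) * (yi - my) for wi, bi, yi in zip(w, bvals, y))
    slope = sxy / sxx          # slope of ln S vs b is -ADC
    return -slope, math.exp(my - slope * mx)
```

On noise-free synthetic data the fit recovers the generating parameters exactly, which makes it a convenient building block to verify before adding registration.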
NASA Technical Reports Server (NTRS)
Schultz, Howard
1990-01-01
The retrieval algorithm for spaceborne scatterometry proposed by Schultz (1985) is extended. A circular median filter (CMF) method is presented, which operates on wind directions independently of wind speed, removing any implicit wind speed dependence. A cell weighting scheme is included in the algorithm, permitting greater weights to be assigned to more reliable data. The mathematical properties of the ambiguous solutions to the wind retrieval problem are reviewed. The CMF algorithm is tested on twelve simulated data sets. The effects of spatially correlated likelihood assignment errors on the performance of the CMF algorithm are examined. Also, consideration is given to a wind field smoothing technique that uses a CMF.
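A circular median operates on angles, where ordinary medians fail across the 0°/360° wrap. One common definition, sketched below, picks the sample angle that minimizes the summed circular distance to the others; the original CMF algorithm may use a different variant:

```python
def circ_dist(a, b):
    """Smallest absolute angular difference between two headings, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def circular_median(angles):
    """Return the sample angle minimizing the summed circular distance
    to all other samples -- one common circular-median definition,
    not necessarily the exact one used in the CMF algorithm."""
    return min(angles, key=lambda a: sum(circ_dist(a, b) for b in angles))
```

Note that an ordinary median of [350, 355, 0, 5, 10] would return 10, while the circular median correctly identifies 0 as the central heading.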
High-Order Energy Stable WENO Schemes
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2009-01-01
A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables 'energy stable' modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.
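For reference, the classical Jiang-Shu fifth-order WENO nonlinear weights, which the ESWENO weight functions modify, can be computed as follows. This sketch shows only the weight computation for one five-point stencil, not a full scheme, and it is not the Yamaleev-Carpenter formulation:

```python
def weno5_weights(f, eps=1e-6):
    """Classical Jiang-Shu fifth-order WENO weights for the five-point
    stencil f = [f0..f4]. Returns the three nonlinear stencil weights
    for the reconstruction at the right cell face."""
    f0, f1, f2, f3, f4 = f
    # Smoothness indicators (betas) for the three candidate sub-stencils
    b0 = 13/12 * (f0 - 2*f1 + f2)**2 + 0.25 * (f0 - 4*f1 + 3*f2)**2
    b1 = 13/12 * (f1 - 2*f2 + f3)**2 + 0.25 * (f1 - f3)**2
    b2 = 13/12 * (f2 - 2*f3 + f4)**2 + 0.25 * (3*f2 - 4*f3 + f4)**2
    d = (0.1, 0.6, 0.3)  # optimal linear weights
    alpha = [dk / (eps + bk)**2 for dk, bk in zip(d, (b0, b1, b2))]
    s = sum(alpha)
    return [a / s for a in alpha]
```

On smooth data the nonlinear weights revert to the optimal linear weights, restoring fifth-order accuracy; near a discontinuity, the weight of any sub-stencil crossing the jump collapses toward zero.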
The co-seismic slip distribution of the Landers earthquake
Freymueller, J.; King, N.E.; Segall, P.
1994-01-01
We derived a model for the co-seismic slip distribution on the faults which ruptured during the Landers earthquake sequence of 28 June 1992. The model is based on the inversion of surface geodetic measurements, primarily vector displacements measured using the Global Positioning System (GPS). The inversion procedure assumes that the slip distribution is to some extent smooth and purely right-lateral strike slip. For a given fault geometry, a family of solutions of varying smoothness can be generated. We choose the optimal model from this family based on cross-validation, which measures the predictive power of the data, and the trade-off of misfit and roughness. Solutions which give roughly equal weight to misfit and smoothness are preferred and have certain features in common: (1) there are two main patches of slip, on the Johnson Valley fault, and on the Homestead Valley, Emerson, and Camp Rock faults; (2) virtually all slip is in the upper 10 to 12 km; and (3) the model reproduces the general features of the geologically measured surface displacements, without prior constraints on the surface slip. In all models, regardless of smoothing, very little slip is required on the fault that represents the Big Bear event, and the total moment of the Landers event is 9 × 10¹⁹ N·m. The nearly simultaneous rupture of multiple distinct faults suggests that much of the crust in this region must have been close to failure prior to the earthquake.
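The family of solutions of varying smoothness can be generated with standard damped least squares. The sketch below stacks the data equations with scaled roughness equations; the cross-validation step used above to pick the optimal model is omitted, and the matrix shapes are illustrative:

```python
import numpy as np

def smooth_inversion(G, d, L, lam):
    """Damped least squares: minimize ||G m - d||^2 + lam^2 ||L m||^2
    by solving the augmented (stacked) system. Increasing lam trades
    data misfit for model smoothness -- the generic mechanism behind
    the family of slip models described above."""
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m
```

With a second-difference roughening matrix L, sweeping lam traces out the misfit-roughness trade-off curve from which a preferred model can be chosen.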
Censored quantile regression with recursive partitioning-based weights
Wey, Andrew; Wang, Lan; Rudser, Kyle
2014-01-01
Censored quantile regression provides a useful alternative to the Cox proportional hazards model for analyzing survival data. It directly models the conditional quantile of the survival time and hence is easy to interpret. Moreover, it relaxes the proportionality constraint on the hazard function associated with the popular Cox model and is natural for modeling heterogeneity of the data. Recently, Wang and Wang (2009. Locally weighted censored quantile regression. Journal of the American Statistical Association 103, 1117–1128) proposed a locally weighted censored quantile regression approach that allows for covariate-dependent censoring and is less restrictive than other censored quantile regression methods. However, their kernel smoothing-based weighting scheme requires all covariates to be continuous and encounters practical difficulty with even a moderate number of covariates. We propose a new weighting approach that uses recursive partitioning, e.g. survival trees, that offers greater flexibility in handling covariate-dependent censoring in moderately high dimensions and can incorporate both continuous and discrete covariates. We prove that this new weighting scheme leads to consistent estimation of the quantile regression coefficients and demonstrate its effectiveness via Monte Carlo simulations. We also illustrate the new method using a widely recognized data set from a clinical trial on primary biliary cirrhosis. PMID:23975800
Shi, Xun; Ayotte, Joseph D; Onda, Akikazu; Miller, Stephanie; Rees, Judy; Gilbert-Diamond, Diane; Onega, Tracy; Gui, Jiang; Karagas, Margaret; Moeschler, John
2015-04-01
There is increasing evidence of the role of arsenic in the etiology of adverse human reproductive outcomes. Because drinking water can be a major source of arsenic to pregnant women, the effect of arsenic exposure through drinking water on human birth may be revealed by a geospatial association between arsenic concentration in groundwater and birth problems, particularly in a region where private wells substantially account for water supply, like New Hampshire, USA. We calculated town-level rates of preterm birth and term low birth weight (term LBW) for New Hampshire, by using data for 1997-2009 stratified by maternal age. We smoothed the rates by using a locally weighted averaging method to increase the statistical stability. The town-level groundwater arsenic probability values are from three GIS data layers generated by the US Geological Survey: probability of local groundwater arsenic concentration >1 µg/L, probability >5 µg/L, and probability >10 µg/L. We calculated Pearson's correlation coefficients (r) between the reproductive outcomes (preterm birth and term LBW) and the arsenic probability values, at both state and county levels. For preterm birth, younger mothers (maternal age <20) have a statewide r = 0.70 between the rates smoothed with a threshold = 2,000 births and the town mean arsenic level based on the data of probability >10 µg/L; for older mothers, r = 0.19 when the smoothing threshold = 3,500; a majority of county level r values are positive based on the arsenic data of probability >10 µg/L. For term LBW, younger mothers (maternal age <25) have a statewide r = 0.44 between the rates smoothed with a threshold = 3,500 and town minimum arsenic concentration based on the data of probability >1 µg/L; for older mothers, r = 0.14 when the rates are smoothed with a threshold = 1,000 births and also adjusted by town median household income in 1999, and the arsenic values are the town minimum based on probability >10 µg/L. 
At the county level for younger mothers, positive r values prevail, but for older mothers, it is a mix. For both birth problems, the several most populous counties (with 60-80% of the state's population, clustering at the southwest corner of the state) are largely consistent in having a positive r across different smoothing thresholds. We found evident spatial associations between the two adverse human reproductive outcomes and groundwater arsenic in New Hampshire, USA. However, the degree of associations and their sensitivity to different representations of arsenic level are variable. Generally, preterm birth has a stronger spatial association with groundwater arsenic than term LBW, suggesting an inconsistency in the impact of arsenic on the two reproductive outcomes. For both outcomes, younger maternal age has stronger spatial associations with groundwater arsenic.
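The locally weighted averaging idea, pooling neighboring towns until a birth-count threshold is reached, can be sketched as follows. The data layout and pooling rule are simplified assumptions for illustration, not the authors' exact method:

```python
def smooth_rate(towns, idx, threshold):
    """Spatially adaptive rate smoothing: starting from town `idx`,
    pool events and births from nearest towns (by distance) until the
    pooled birth count reaches `threshold`, then return the pooled
    rate. towns: list of (x, y, events, births) tuples. A simplified
    version of locally weighted averaging; the paper's pooling rules
    may differ."""
    x0, y0 = towns[idx][0], towns[idx][1]
    order = sorted(range(len(towns)),
                   key=lambda j: (towns[j][0] - x0) ** 2 + (towns[j][1] - y0) ** 2)
    ev = bi = 0
    for j in order:
        ev += towns[j][2]
        bi += towns[j][3]
        if bi >= threshold:
            break
    return ev / bi
```

Larger thresholds pool more towns, which stabilizes rates in sparsely populated areas at the cost of spatial resolution; this is why the correlations above are reported for several smoothing thresholds.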
A framework for small infrared target real-time visual enhancement
NASA Astrophysics Data System (ADS)
Sun, Xiaoliang; Long, Gucan; Shang, Yang; Liu, Xiaolin
2015-03-01
This paper proposes a framework for real-time visual enhancement of small infrared targets. The framework consists of three parts: energy accumulation for small infrared target enhancement, noise suppression, and weighted fusion. A dynamic-programming-based track-before-detection algorithm is adopted in the energy accumulation step to detect the target accurately and enhance the target's intensity notably. In the noise suppression step, the target region is weighted by a Gaussian mask according to the target's Gaussian shape. In order to fuse the processed target region and the unprocessed background smoothly, the intensity in the target region is treated as the weight in the fusion. Experiments on real small-infrared-target images indicate that the proposed framework enhances the small infrared target markedly and improves the image's visual quality notably. The proposed framework outperforms traditional algorithms in enhancing small infrared targets, especially for images in which the target is barely visible.
Eigenvectors of optimal color spectra.
Flinkman, Mika; Laamanen, Hannu; Tuomela, Jukka; Vahimaa, Pasi; Hauta-Kasari, Markku
2013-09-01
Principal component analysis (PCA) and weighted PCA were applied to spectra of optimal colors belonging to the outer surface of the object-color solid, the so-called MacAdam limits. The correlation matrix formed from these data is a circulant matrix whose biggest eigenvalue is simple, with a constant corresponding eigenvector. All other eigenvalues are double, and the eigenvectors can be expressed with trigonometric functions. These trigonometric functions can be used as a general basis to reconstruct all possible smooth reflectance spectra. When the spectral data are weighted with an appropriate weight function, the essential part of the color information is compressed into the first three components, and the shapes of the first three eigenvectors correspond to one achromatic response function and two chromatic response functions, the latter corresponding approximately to the Munsell opponent-hue directions 9YR-9B and 2BG-2R.
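The circulant structure and its consequences are easy to verify numerically. The small sketch below builds a symmetric circulant matrix with positive entries and confirms that the largest eigenvalue is simple with a constant eigenvector and that the next eigenvalues come in pairs; the matrix entries are arbitrary illustrative values, not optimal-color data:

```python
import numpy as np

def circulant(c):
    """Build an n x n circulant matrix whose first row is c."""
    n = len(c)
    return np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)])

# Symmetric circulant "correlation-like" matrix: c[k] == c[n-k].
n = 8
c = np.array([1.0, 0.6, 0.3, 0.1, 0.05, 0.1, 0.3, 0.6])
M = circulant(c)
vals, vecs = np.linalg.eigh(M)        # ascending eigenvalues
top = vecs[:, np.argmax(vals)]        # eigenvector of the largest eigenvalue
```

For a circulant matrix the eigenvalues are the discrete Fourier transform of the first row, so the largest one equals the row sum (with the constant eigenvector), and the remaining ones pair up as cosine/sine modes, matching the structure described in the abstract.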
Seismic hazard in the Nation's breadbasket
Boyd, Oliver; Haller, Kathleen; Luco, Nicolas; Moschetti, Morgan P.; Mueller, Charles; Petersen, Mark D.; Rezaeian, Sanaz; Rubinstein, Justin L.
2015-01-01
The USGS National Seismic Hazard Maps were updated in 2014 and included several important changes for the central United States (CUS). Background seismicity sources were improved using a new moment-magnitude-based catalog; a new adaptive, nearest-neighbor smoothing kernel was implemented; and maximum magnitudes for background sources were updated. Areal source zones developed by the Central and Eastern United States Seismic Source Characterization for Nuclear Facilities project were simplified and adopted. The weighting scheme for ground motion models was updated, giving more weight to models with a faster attenuation with distance compared to the previous maps. Overall, hazard changes (2% probability of exceedance in 50 years, across a range of ground-motion frequencies) were smaller than 10% in most of the CUS relative to the 2008 USGS maps despite new ground motion models and their assigned logic tree weights that reduced the probabilistic ground motions by 5–20%.
1986-11-01
We can formulate the general weighted resampling formulas by giving an interpolation formula and a sampling formula. Specifically... tessellation grids. 4.1. One-dimensional Adaptive Pyramid. We suggest an interest operator based on the local "busyness" of the data. It has been observed that in human perception a line with higher "busyness" seems longer than a straight line segment [6], as in Figure 7. Here, we will use a smoothed
On differences of linear positive operators
NASA Astrophysics Data System (ADS)
Aral, Ali; Inoan, Daniela; Raşa, Ioan
2018-04-01
In this paper we consider two different general linear positive operators defined on unbounded interval and obtain estimates for the differences of these operators in quantitative form. Our estimates involve an appropriate K-functional and a weighted modulus of smoothness. Similar estimates are obtained for Chebyshev functional of these operators as well. All considerations are based on rearrangement of the remainder in Taylor's formula. The obtained results are applied for some well known linear positive operators.
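For orientation, a representative first-order weighted modulus of smoothness on an unbounded interval, together with the associated Peetre K-functional, has the following shape. The paper's exact weight function and order of the modulus may differ:

```latex
% Representative first-order weighted modulus of smoothness on [0, \infty):
\omega_{w}(f;\delta) \;=\; \sup_{0 < h \le \delta}\; \sup_{x \ge 0}\;
  w(x)\,\bigl| f(x+h) - f(x) \bigr|,
\qquad \text{with, e.g., } w(x) = \frac{1}{1 + x^{2}},
% and the associated Peetre K-functional:
K(f;\delta) \;=\; \inf_{g}\,\Bigl( \bigl\| w\,(f - g) \bigr\|_{\infty}
  + \delta\,\bigl\| w\, g' \bigr\|_{\infty} \Bigr).
```

Quantitative estimates for differences of two linear positive operators are then typically expressed as bounds in terms of such a K-functional and modulus evaluated at a step size determined by the operators' moments.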
Laxy, M; Teuner, C; Holle, R; Kurz, C
2018-03-01
Obesity is a major public health problem. Detailed knowledge about the relationship between body mass index (BMI) and health-related quality of life (HRQL) is important for deriving effective and cost-effective prevention and weight management strategies. This study aims to describe the sex-, age- and ethnicity-specific association between BMI and HRQL in the US adult population. Analyses are based on pooled cross-sectional data from 41 459 participants of the Medical Expenditure Panel Survey (MEPS) Household Component (HC) for the years 2000-2003. BMI was calculated using self-reported height and weight, and HRQL was assessed with the EuroQol five-dimensional questionnaire. Generalized additive models were fitted with a smooth function for BMI and a smooth-factor interaction for BMI with sex, adjusted for age, ethnicity, poverty, smoking and physical activity. Models were further stratified by age and ethnicity. The association between BMI and HRQL is inverse U-shaped, with a HRQL high point at a BMI of 22 kg m⁻² in women and a HRQL high plateau at BMI values of 22-30 kg m⁻² in men. Men aged 50 years and older with a BMI of 29 kg m⁻² reported on average five-point higher visual analog scale (VAS) scores than peers with a BMI of 20 kg m⁻². The inverse U-shaped association is more pronounced in older people, and the BMI-HRQL relationship differs between ethnicities. In Hispanics, the BMI associated with the highest HRQL is higher than in white people and, in black women, the BMI-HRQL association has an almost linear negative slope. The results show that a more differentiated use of BMI cutoffs in scientific discussions and daily practice is indicated. The findings should be considered in the design of future weight loss and weight management programs.
Garg, Hari G; Hales, Charles A; Yu, Lunyin; Butler, Melissa; Islam, Tasneem; Xie, Jin; Linhardt, Robert J
2006-11-06
Proliferation of pulmonary artery smooth muscle cells (PASMCs) appears to play a significant role in chronic pulmonary hypertension. The proliferation of PASMCs is strongly inhibited by some commercial heparin preparations. Heparin fragments were prepared by periodate treatment, followed by sodium borohydride reduction, to enhance potency. The tributylammonium salt of this fragmented heparin was O-acylated with hexanoic anhydride. Gradient polyacrylamide gel electrophoresis showed that the major heparin fragment contained eight disaccharide units. NMR analysis showed that approximately one hexanoyl group per disaccharide residue was present. The O-hexanoyl heparin fragments were assayed for growth inhibitory effect on bovine PASMCs in culture. This derivative was found to be more effective in growth inhibition of bovine PASMCs in culture than the heparin from which it was derived. In the future, it is envisioned that this or similar derivatives may be an effective treatment for pulmonary hypertension.
An improved local radial point interpolation method for transient heat conduction analysis
NASA Astrophysics Data System (ADS)
Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang
2013-06-01
The smoothing thin plate spline (STPS) interpolation, using the penalty function method according to optimization theory, is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and derivatives can be satisfied so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM) because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three numerical examples are presented to demonstrate the applicability and accuracy of the present approach compared with the traditional thin plate spline (TPS) radial basis functions.
He, Xianmin; Wei, Qing; Sun, Meiqian; Fu, Xuping; Fan, Sichang; Li, Yao
2006-05-01
Biological techniques such as array-comparative genomic hybridization (array-CGH), fluorescent in situ hybridization (FISH) and Affymetrix single nucleotide polymorphism (SNP) arrays have been used to detect cytogenetic aberrations. However, on a genomic scale, these techniques are labor intensive and time consuming. Comparative genomic microarray analysis (CGMA) has been used to identify cytogenetic changes in hepatocellular carcinoma (HCC) using gene expression microarray data. However, the CGMA algorithm cannot give precise localization of aberrations, fails to identify small cytogenetic changes, and exhibits false negatives and positives. Locally un-weighted smoothing cytogenetic aberrations prediction (LS-CAP), based on local smoothing and the binomial distribution, can be expected to address these problems. The LS-CAP algorithm was built and applied to HCC microarray profiles. Eighteen cytogenetic abnormalities were identified; among them, 5 were reported previously and 12 were confirmed by CGH studies. LS-CAP effectively reduced false negatives and positives, and precisely located small fragments with cytogenetic aberrations.
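The binomial component of such a prediction can be sketched as an upper-tail probability: under the null of no aberration, each gene in a chromosomal window is equally likely to shift expression up or down, so a window with an unusually one-sided count is flagged. This is a minimal stand-in for LS-CAP's statistic, not its exact formulation:

```python
from math import comb

def upper_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance that at least k of
    n genes in a window shift expression in the same direction under
    the null of no cytogenetic aberration. Small values flag candidate
    aberrant regions in this simplified sketch."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))
```

Sliding such a window along the genome and thresholding the tail probability is the basic mechanism by which local smoothing plus a binomial test can localize small aberrant fragments.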
Moving from Descriptive to Causal Analytics: Case Study of the Health Indicators Warehouse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schryver, Jack C.; Shankar, Mallikarjun; Xu, Songhua
The KDD community has described a multitude of methods for knowledge discovery on large datasets. We consider some of these methods and integrate them into an analyst's workflow that proceeds from the data-centric descriptive level to the model-centric causal level. Examples of the workflow are shown for the Health Indicators Warehouse (HIW), a public database for community health information that is a potent resource for conducting data science on a medium scale. We demonstrate the potential of the HIW as a source of serious visual analytics efforts by showing correlation matrix visualizations, multivariate outlier analysis, multiple linear regression of Medicare costs, and scatterplot matrices for a broad set of health indicators. We conclude by sketching the first steps toward a causal dependence hypothesis.
Visualizing tumor evolution with the fishplot package for R.
Miller, Christopher A; McMichael, Joshua; Dang, Ha X; Maher, Christopher A; Ding, Li; Ley, Timothy J; Mardis, Elaine R; Wilson, Richard K
2016-11-07
Massively parallel sequencing at depth is now enabling tumor heterogeneity and evolution to be characterized in unprecedented detail. Tracking these changes in clonal architecture often provides insight into therapeutic response and resistance. In complex cases involving multiple timepoints, standard visualizations, such as scatterplots, can be difficult to interpret. Current data visualization methods are also typically manual and laborious, and often only approximate subclonal fractions. We have developed an R package that accurately and intuitively displays changes in clonal structure over time. It requires simple input data and produces illustrative and easy-to-interpret graphs suitable for diagnosis, presentation, and publication. The simplicity, power, and flexibility of this tool make it valuable for visualizing tumor evolution, and it has potential utility in both research and clinical settings. The fishplot package is available at https://github.com/chrisamiller/fishplot.
Schuck, Peter; Gillis, Richard B.; Besong, Tabot M.D.; Almutairi, Fahad; Adams, Gary G.; Rowe, Arthur J.; Harding, Stephen E.
2014-01-01
Sedimentation equilibrium (analytical ultracentrifugation) is one of the most inherently suitable methods for the determination of average molecular weights and molecular weight distributions of polymers, because of its absolute basis (no conformation assumptions) and inherent fractionation ability (without the need for columns or membranes and their associated assumptions of inertness). With modern instrumentation it is also possible to run up to 21 samples simultaneously in a single run. Its application has nevertheless been severely hampered by difficulties in baseline determination (incorporating estimation of the concentration at the air/solution meniscus) and by the complexity of the analysis procedures. We describe a new method for baseline determination based on a smart-smoothing principle, built into the highly popular platform SEDFIT for the analysis of the sedimentation behavior of natural and synthetic polymer materials. The SEDFIT-MSTAR procedure, which takes only a few minutes to perform, is tested with four synthetic data sets (including a significantly non-ideal system), a naturally occurring protein (human IgG1) and two naturally occurring carbohydrate polymers (pullulan and λ-carrageenan) in terms of (i) the weight-average molecular weight for the whole distribution of species in the sample, (ii) the variation in “point” average molecular weight with local concentration in the ultracentrifuge cell, and (iii) the molecular weight distribution. PMID:24244936
Erlacher-Reid, Claire; Dunn, J Lawrence; Camp, Tracy; Macha, Laurie; Mazzaro, Lisa; Tuttle, Allison D
2012-01-01
Bumblefoot (pododermatitis), often described as the most significant environmental disease of captive penguins, is commonly due to excessive pressure or trauma on the plantar surface of the avian foot, resulting in inflammation or necrosis and causing severe swelling, abrasions, or cracks in the skin. Although not formally evaluated in penguins, contributing factors for bumblefoot are thought to be similar to those initiating the condition in raptors and poultry. These factors include substrate, body weight, and lack of exercise. The primary purpose of this retrospective study was to evaluate variables potentially contributing to the development and duration of plantar lesions in aquarium-maintained African penguins (Spheniscus demersus), including sex, weight, age, season, exhibit activity, and territory substrate. Results indicate that males develop significantly more plantar lesions than females. Penguins weighing between 3.51 and 4.0 kg develop plantar lesions significantly more often than penguins weighing between 2.5 and 3.5 kg, and because male African penguins ordinarily weigh significantly more than females, weight is likely a contributing factor in the development of lesions in males compared with females. Significantly more plantar lesions were observed in penguins that spent more than 50% of their time on exhibit standing rather than swimming. Penguins occupying smooth concrete territories developed more plantar lesions than penguins occupying grate territories. Recommendations for minimizing bumblefoot in African penguins include training penguins for monthly foot examinations to allow early detection of plantar lesions predisposing to the disease, encouraging swimming activity, and replacing smooth exhibit surfaces with surfaces providing variable degrees of pressure and texture on the feet. © 2011 Wiley Periodicals, Inc.
Kanabar, V; Page, C P; Simcock, D E; Karner, C; Mahn, K; O'Connor, B J; Hirst, S J
2008-06-01
The glycosaminoglycan heparin has anti-inflammatory activity and is exclusively found in mast cells, which are localized within airway smooth muscle (ASM) bundles of asthmatic airways. Interleukin (IL)-13 induces the production of multiple inflammatory mediators from ASM including the eosinophil chemoattractant chemokine, eotaxin-1. Heparin and related glycosaminoglycan polymers having structurally heterogeneous polysaccharide side chains that varied in molecular weight, sulphation and anionic charge were used to identify features of the heparin molecule linked to anti-inflammatory activity. Cultured human ASM cells were stimulated with interleukin (IL)-13 in the absence or presence of heparin and related polymers. Eotaxin-1 was quantified using chemokine antibody arrays and ELISA. Unfractionated heparin attenuated IL-13-dependent eotaxin-1 production and this effect was reproduced with low molecular weight heparins (3 and 6 kDa), demonstrating a minimum activity fragment of at least 3 kDa. N-desulphated, 20% re-N-acetylated heparin (anticoagulant) was ineffective against IL-13-dependent eotaxin-1 production compared with 90% re-N-acetylated (anticoagulant) or O-desulphated (non-anticoagulant) heparin, suggesting a requirement for N-sulphation independent of anticoagulant activity. Other sulphated molecules with variable anionic charge and molecular weight exceeding 3 kDa (dextran sulphate, fucoidan, chondroitin sulphate B) inhibited IL-13-stimulated eotaxin-1 release to varying degrees. However, non-sulphated dextran had no effect. Inhibition of IL-13-dependent eotaxin-1 release by heparin involved but did not depend upon sulphation, though loss of N-sulphation reduced the attenuating activity, which could be restored by N-acetylation. This anti-inflammatory effect was also partially dependent on anionic charge, but independent of molecular size above 3 kDa and the anticoagulant action of heparin.
NASA Astrophysics Data System (ADS)
Tirani, M. D.; Maleki, M.; Kajani, M. T.
2014-11-01
A numerical method is introduced for solving the Lane-Emden equation with polytropic index α for 4.75 ≤ α ≤ 5. The method is based upon nonclassical Gauss-Radau collocation points and Freud-type weights. Nonclassical orthogonal polynomials, nonclassical Radau points and weighted interpolation are introduced and utilized in the interval [0,1]. A smooth, strictly monotonic transformation is used to map the infinite domain x ∈ [0,∞) onto the half-open interval t ∈ [0,1). The resulting problem on the finite interval is then transcribed to a system of nonlinear algebraic equations using collocation. The method is easy to implement and yields very accurate results.
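The domain-mapping step described above can be illustrated with a simple algebraic transformation. The specific map used in the paper is not given in this abstract, so t = x/(x + L) is an assumed stand-in with the same qualitative properties: smooth, strictly monotonic, and taking [0,∞) onto [0,1).

```python
def to_unit(x, L=1.0):
    """Map x in [0, inf) to t in [0, 1): smooth and strictly monotonic.
    Illustrative; the paper's actual transformation may differ."""
    return x / (x + L)

def from_unit(t, L=1.0):
    """Inverse map: t in [0, 1) back to x in [0, inf)."""
    return L * t / (1.0 - t)

# Collocation nodes chosen uniformly in t-space cluster near x = 0 after
# inversion, where Lane-Emden solutions vary most rapidly.
nodes_t = [0.1 * i for i in range(10)]        # stand-ins for Radau points
nodes_x = [from_unit(t) for t in nodes_t]
assert all(abs(to_unit(x) - t) < 1e-12 for t, x in zip(nodes_t, nodes_x))
```

The scale parameter L tunes where resolution is concentrated; the collocation itself would then be carried out in t on [0,1).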
A Systematic Methodology for Constructing High-Order Energy-Stable WENO Schemes
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2008-01-01
A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter (AIAA 2008-2876, 2008) was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables "energy stable" modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.
Self-Tuning of Design Variables for Generalized Predictive Control
NASA Technical Reports Server (NTRS)
Lin, Chaung; Juang, Jer-Nan
2000-01-01
Three techniques are introduced to determine the order and control weighting for the design of a generalized predictive controller. These techniques are based on the application of fuzzy logic, genetic algorithms, and simulated annealing to conduct an optimal search on specific performance indexes or objective functions. Fuzzy logic is found to be feasible for real-time and on-line implementation due to its smooth and quick convergence. On the other hand, genetic algorithms and simulated annealing are applicable for initial estimation of the model order and control weighting, and for final fine-tuning within a small region of the solution space. Several numerical simulations for a multiple-input and multiple-output system are given to illustrate the techniques developed in this paper.
Magnetically induced orientation of mesochannels in mesoporous silica films at 30 tesla.
Yamauchi, Yusuke; Sawada, Makoto; Komatsu, Masaki; Sugiyama, Atsushi; Osaka, Tetsuya; Hirota, Noriyuki; Sakka, Yoshio; Kuroda, Kazuyuki
2007-12-03
We demonstrate the magnetically induced orientation of mesochannels in mesoporous silica films prepared with low-molecular-weight surfactants under an extremely high magnetic field of 30 T. This process is principally applicable to any type of surfactant that has magnetic anisotropy because such a high magnetic field provides sufficient magnetic energy for smooth magnetic orientation. Hexadecyltrimethylammonium bromide (CTAB) and polyoxyethylene-10-cetyl ether (Brij 56) were used as cationic and nonionic surfactants, respectively. According to XRD and cross-sectional TEM, mesochannels aligned perpendicular to the substrates were observed in films prepared with low-molecular-weight surfactants, although the effect was incomplete. The evolution of these types of films should lead to future applications such as highly sensitive chemical sensors and selective separation.
Formulating Spatially Varying Performance in the Statistical Fusion Framework
Landman, Bennett A.
2012-01-01
To date, label fusion methods have primarily relied either on global (e.g. STAPLE, globally weighted vote) or voxelwise (e.g. locally weighted vote) performance models. Optimality of the statistical fusion framework hinges upon the validity of the stochastic model of how a rater errs (i.e., the labeling process model). Hitherto, approaches have tended to focus on the extremes of potential models. Herein, we propose an extension to the STAPLE approach to seamlessly account for spatially varying performance by extending the performance level parameters to account for a smooth, voxelwise performance level field that is unique to each rater. This approach, Spatial STAPLE, provides significant improvements over state-of-the-art label fusion algorithms in both simulated and empirical data sets. PMID:22438513
A Systematic Methodology for Constructing High-Order Energy Stable WENO Schemes
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2009-01-01
A third-order Energy Stable Weighted Essentially Non-Oscillatory (ESWENO) finite difference scheme developed by Yamaleev and Carpenter [1] was proven to be stable in the energy norm for both continuous and discontinuous solutions of systems of linear hyperbolic equations. Herein, a systematic approach is presented that enables "energy stable" modifications for existing WENO schemes of any order. The technique is demonstrated by developing a one-parameter family of fifth-order upwind-biased ESWENO schemes; ESWENO schemes up to eighth order are presented in the appendix. New weight functions are also developed that provide (1) formal consistency, (2) much faster convergence for smooth solutions with an arbitrary number of vanishing derivatives, and (3) improved resolution near strong discontinuities.
Optimal interpolation analysis of leaf area index using MODIS data
Gu, Yingxin; Belair, Stephane; Mahfouf, Jean-Francois; Deblonde, Godelieve
2006-01-01
A simple data analysis technique for vegetation leaf area index (LAI) using Moderate Resolution Imaging Spectroradiometer (MODIS) data is presented. The objective is to generate LAI data that is appropriate for numerical weather prediction. A series of techniques and procedures which includes data quality control, time-series data smoothing, and simple data analysis is applied. The LAI analysis is an optimal combination of the MODIS observations and derived climatology, depending on their associated errors σo and σc. The “best estimate” LAI is derived from a simple three-point smoothing technique combined with a selection of maximum LAI (after data quality control) values to ensure a higher quality. The LAI climatology is a time smoothed mean value of the “best estimate” LAI during the years of 2002–2004. The observation error is obtained by comparing the MODIS observed LAI with the “best estimate” of the LAI, and the climatological error is obtained by comparing the “best estimate” of LAI with the climatological LAI value. The LAI analysis is the result of a weighting between these two errors. Demonstration of the method described in this paper is presented for the 15-km grid of Meteorological Service of Canada (MSC)'s regional version of the numerical weather prediction model. The final LAI analyses have a relatively smooth temporal evolution, which makes them more appropriate for environmental prediction than the original MODIS LAI observation data. They are also more realistic than the LAI data currently used operationally at the MSC which is based on land-cover databases.
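The "weighting between these two errors" described above is the standard inverse-variance (optimal interpolation) combination: the source with the smaller error receives the larger weight. The sketch below shows that combination alongside a simple three-point running mean of the kind mentioned for the "best estimate"; the function names and numbers are illustrative, not taken from the paper.

```python
def oi_lai(lai_obs, lai_clim, sigma_o, sigma_c):
    """Optimal (inverse-variance) combination of a MODIS LAI observation
    and its climatology, given observation error sigma_o and
    climatological error sigma_c."""
    w_obs = sigma_c**2 / (sigma_o**2 + sigma_c**2)
    return w_obs * lai_obs + (1.0 - w_obs) * lai_clim

def three_point_smooth(series):
    """Simple three-point running mean (endpoints left unchanged)."""
    out = list(series)
    for i in range(1, len(series) - 1):
        out[i] = (series[i - 1] + series[i] + series[i + 1]) / 3.0
    return out

# A noisy observation against a well-trusted climatology: the analysis
# leans toward the climatology.
print(oi_lai(lai_obs=4.0, lai_clim=3.0, sigma_o=1.0, sigma_c=0.5))  # 3.2
```

With sigma_o twice sigma_c, the observation gets weight 0.25/(1+0.25) = 0.2, so the analysis (3.2) sits much closer to the climatological value, which is exactly the smooth temporal behavior the abstract reports.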
FEFsem neuronal response during combined volitional and reflexive pursuit.
Bakst, Leah; Fleuriet, Jérome; Mustari, Michael J
2017-05-01
Although much is known about volitional and reflexive smooth eye movements individually, much less is known about how they are coordinated. It is hypothesized that separate cortico-ponto-cerebellar loops subserve these different types of smooth eye movements. Specifically, the MT-MST-DLPN pathway is thought to be critical for ocular following eye movements, whereas the FEF-NRTP pathway is understood to be vital for volitional smooth pursuit. However, the role that these loops play in combined volitional and reflexive behavior is unknown. We used a large, textured background moving in conjunction with a small target spot to investigate the eye movements evoked by a combined volitional and reflexive pursuit task. We also assessed the activity of neurons in the smooth eye movement subregion of the frontal eye field (FEFsem). We hypothesized that the pursuit system would show less contribution from the volitional pathway in this task, owing to the increased involvement of the reflexive pathway. In accordance with this hypothesis, a majority of FEFsem neurons (63%) were less active during pursuit maintenance in a combined volitional and reflexive pursuit task than during purely volitional pursuit. Interestingly and surprisingly, the neuronal response to the addition of the large-field motion was highly correlated with the neuronal response to a target blink. This suggests that FEFsem neuronal responses to these different perturbations-whether the addition or subtraction of retinal input-may be related. We conjecture that these findings are due to changing weights of both the volitional and reflexive pathways, as well as retinal and extraretinal signals.
Compare diagnostic tests using transformation-invariant smoothed ROC curves
Tang, Liansheng; Du, Pang; Wu, Chengqing
2012-01-01
The receiver operating characteristic (ROC) curve, which plots the true positive rate against the false positive rate as the threshold varies, is an important tool for evaluating biomarkers in diagnostic medicine studies. By definition, the ROC curve is monotone increasing from 0 to 1 and is invariant to any monotone transformation of the test results, and it is typically smooth when test results from the diseased and non-diseased subjects follow continuous distributions. Most existing ROC curve estimation methods do not guarantee all of these properties. One of the exceptions is Du and Tang (2009), which applies a monotone spline regression procedure to empirical ROC estimates. However, their method does not account for the inherent correlations between empirical ROC estimates, which makes the derivation of asymptotic properties very difficult. In this paper we propose a penalized weighted least-squares estimation method, which incorporates the covariance between empirical ROC estimates as a weight matrix. The resulting estimator satisfies all the aforementioned properties, and we show that it is also consistent. A resampling approach is then used to extend our method to comparisons of two or more diagnostic tests. Our simulations show a significantly improved performance over the existing method, especially for steep ROC curves. We then apply the proposed method to a cancer diagnostic study that compares several newly developed diagnostic biomarkers to a traditional one. PMID:22639484
Hyaluronan mediates airway hyperresponsiveness in oxidative lung injury
Lazrak, Ahmed; Creighton, Judy; Yu, Zhihong; Komarova, Svetlana; Doran, Stephen F.; Aggarwal, Saurabh; Emala, Charles W.; Stober, Vandy P.; Trempus, Carol S.; Garantziotis, Stavros
2015-01-01
Chlorine (Cl2) inhalation induces severe oxidative lung injury and airway hyperresponsiveness (AHR) that lead to asthmalike symptoms. When inhaled, Cl2 reacts with epithelial lining fluid, forming by-products that damage hyaluronan, a constituent of the extracellular matrix, causing the release of low-molecular-weight fragments (L-HA, <300 kDa), which initiate a series of proinflammatory events. Cl2 (400 ppm, 30 min) exposure to mice caused an increase of L-HA and its binding partner, inter-α-trypsin-inhibitor (IαI), in the bronchoalveolar lavage fluid. Airway resistance following methacholine challenge was increased 24 h post-Cl2 exposure. Intratracheal administration of high-molecular-weight hyaluronan (H-HA) or an antibody against IαI post-Cl2 exposure decreased AHR. Exposure of human airway smooth muscle (HASM) cells to Cl2 (100 ppm, 10 min) or incubation with Cl2-exposed H-HA (which fragments it to L-HA) increased membrane potential depolarization, intracellular Ca2+, and RhoA activation. Inhibition of RhoA, chelation of intracellular Ca2+, blockade of cation channels, as well as postexposure addition of H-HA, reversed membrane depolarization in HASM cells. We propose a paradigm in which oxidative lung injury generates reactive species and L-HA that activates RhoA and Ca2+ channels of airway smooth muscle cells, increasing their contractility and thus causing AHR. PMID:25747964
Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality.
Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel
2015-12-01
Many people living in low- and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data, including many household sample surveys, are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991-2010 and two demographic surveillance system sites. We derive a variance estimator of under-five child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey, and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA).
Gu, Zi; Rolfe, Barbara E; Xu, Zhi P; Thomas, Anita C; Campbell, Julie H; Lu, Gao Q M
2010-07-01
Surgical procedures to remove atherosclerotic lesions and restore blood flow also injure the artery wall, promoting vascular smooth muscle cell (SMC) phenotypic change, migration, proliferation, matrix production and, ultimately, restenosis of the artery. Hence identification of effective anti-restenotic strategies is a high priority in cardiovascular research, and SMCs are a key target for intervention. This paper presents an in vitro study of layered double hydroxides (LDHs) as a delivery system for an anti-restenotic drug (low molecular weight heparin, LMWH). Cytotoxicity tests showed that LDH itself had very limited toxicity at concentrations below 50 microg/mL over a 6-day incubation. LDH nanoparticles loaded with LMWH (LMWH-LDHs) were prepared and tested on rat vascular SMCs. When conjugated to LDH particles, LMWH showed an enhanced ability to inhibit SMC proliferation and migration, with a greater than 60% reduction compared with the control (growth medium) over 3- or 7-day incubations. Cellular uptake studies showed that, compared with LMWH alone, LMWH-LDH hybrids were internalized by SMCs more rapidly, and uptake was sustained over a longer time, possibly revealing the mechanisms underlying the enhanced biological function of LMWH-LDH. The results suggest the potential of LMWH-LDH as an efficient anti-restenotic drug for clinical application. Copyright 2010 Elsevier Ltd. All rights reserved.
Vosgerau, Uwe; Lauer, Diljara; Unger, Thomas; Kaschina, Elena
2010-01-15
We previously reported that Brown Norway Katholiek rats, which feature a deficiency of plasma kininogens, develop severe abdominal aortic aneurysm. Increased activity of matrix metalloproteinases (MMPs) in the aortic wall, leading to degradation of extracellular matrix components, is considered to play a crucial role in aneurysm formation. Using an in vitro model of vascular smooth muscle cells (VSMCs) cultured from the rat aorta, we investigated whether the cleaved form of high molecular weight kininogen, designated HKa, affects the expression of MMP-9 and MMP-2 and their tissue inhibitors (TIMPs). Treatment of VSMCs with HKa reduced IL-1alpha-induced release of MMP-9 and MMP-2 in a concentration-dependent manner, associated with decreased MMP enzymatic activity levels in conditioned media, as demonstrated by gelatin zymography and fluorescein-labeled gelatin substrate assay, respectively. Real-time PCR revealed that HKa reduced corresponding MMP-9 mRNA levels. Further investigations showed that this effect did not result from a modified rate of MMP-9 mRNA degradation. TIMP-1 mRNA levels, already increased as a result of cytokine stimulation, were significantly enhanced by HKa. Furthermore, we found elevated basal mRNA expression levels of MMP-2 and TIMP-2 in VSMCs derived from kininogen-deficient Brown Norway Katholiek rats. These results demonstrate for the first time that HKa affects the regulation of MMPs in VSMCs.
Smooth muscle cell-specific knockout of androgen receptor: a new model for prostatic disease.
Welsh, Michelle; Moffat, Lindsey; McNeilly, Alan; Brownstein, David; Saunders, Philippa T K; Sharpe, Richard M; Smith, Lee B
2011-09-01
Androgen-driven stromal-epithelial interactions play a key role in normal prostate development and function as well as in the progression of common prostatic diseases such as benign prostatic hyperplasia and prostate cancer. However, exactly how, and via which cell type, androgens mediate their effects in the adult prostate remains unclear. This study investigated the role of smooth muscle (SM) androgen signaling in normal adult prostate homeostasis and function using mice in which the androgen receptor was selectively ablated from prostatic SM cells. In adulthood the knockout (KO) mice displayed a 44% reduction in prostate weight and exhibited histological abnormalities such as hyperplasia, inflammation, fibrosis, and reduced expression of epithelial, SM, and stem cell identity markers (e.g. p63 reduced by 27% and Pten by 31%). These changes emerged beyond puberty and were not explained by changes in serum hormones. Furthermore, in response to exogenous estradiol, adult KO mice displayed an 8.5-fold greater increase in prostate weight than controls and developed urinary retention. KO mice also demonstrated a reduced response to castration compared with controls. Together these results demonstrate that prostate SM cells are vital in mediating androgen-driven stromal-epithelial interactions in adult mouse prostates, determining cell identity and function and limiting hormone-dependent epithelial cell proliferation. This novel mouse model provides new insight into the possible role of SM androgen action in prostate disease.
MO-DE-207A-11: Sparse-View CT Reconstruction Via a Novel Non-Local Means Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Z; Qi, H; Wu, S
2016-06-15
Purpose: Sparse-view computed tomography (CT) reconstruction is an effective strategy to reduce the radiation dose delivered to patients. Because of the insufficient number of measurements, traditional non-local means (NLM) based reconstruction methods often lead to over-smoothness at image edges. To address this problem, an adaptive NLM reconstruction method based on rotational invariance (RIANLM) is proposed. Methods: The method consists of four steps: 1) initialize parameters; 2) reconstruct from the raw projection data with the algebraic reconstruction technique (ART); 3) apply a positivity constraint to the image reconstructed by ART; 4) update the reconstructed image using RIANLM filtering. In RIANLM, a novel rotation-invariant similarity metric is proposed and used to calculate the distance between two patches. In this way, any patch with structure similar to the reference patch but in a different orientation receives a relatively large weight, avoiding over-smoothed images. Moreover, the parameter h in RIANLM, which controls the decay of the weights, is adaptive to avoid over-smoothness, whereas in NLM it is fixed throughout the reconstruction process. The proposed method, named ART-RIANLM, is validated on the Shepp-Logan phantom and on clinical projection data. Results: In our experiments, the search neighborhood is set to 15 by 15 and the similarity window to 3 by 3. For the simulated case with a 256 by 256 Shepp-Logan phantom, ART-RIANLM produces a reconstructed image with higher SNR (35.38 dB vs. 24.00 dB) and lower MAE (0.0006 vs. 0.0023) than ART-NLM. Visual inspection demonstrated that the proposed method suppresses artifacts and noise more effectively and preserves image edges better. Similar results were found for the clinical data case. Conclusion: A novel ART-RIANLM method for sparse-view CT reconstruction is presented with superior image quality. Compared to the conventional ART-NLM method, the SNR from ART-RIANLM increases by 47% and the MAE decreases by 74%.
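A rotation-invariant patch distance of the kind the abstract describes can be sketched by minimizing an ordinary squared patch distance over the four 90-degree rotations of the candidate patch. The exact metric is not specified in the abstract, so the functions below are an assumed illustration rather than the authors' formulation.

```python
import numpy as np

def patch_distance_ri(p, q):
    """Rotation-invariant patch distance: the smallest squared Euclidean
    distance over the four 90-degree rotations of q (illustrative)."""
    return min(float(np.sum((p - np.rot90(q, k))**2)) for k in range(4))

def nlm_weight(p, q, h):
    """NLM-style weight: patches similar in any orientation get a weight
    near 1, so edge structure is preserved instead of averaged away."""
    return np.exp(-patch_distance_ri(p, q) / (h * h))

# An edge patch and its rotated copy: a plain NLM distance would be large,
# but the rotation-invariant distance is zero, so the weight is 1.
edge = np.array([[0., 0., 0.], [1., 1., 1.], [0., 0., 0.]])
rotated = np.rot90(edge)
print(nlm_weight(edge, rotated, h=0.5))  # 1.0
```

This is why similarly structured but differently oriented patches "win a relatively large weight": the minimization over orientations removes the rotation penalty that makes standard NLM over-smooth edges.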
Roles of polyuria and hyperglycemia in bladder dysfunction in diabetes.
Xiao, Nan; Wang, Zhiping; Huang, Yexiang; Daneshgari, Firouz; Liu, Guiming
2013-03-01
Diabetes mellitus causes diabetic bladder dysfunction. We identified the pathogenic roles of polyuria and hyperglycemia in diabetic bladder dysfunction in rats. A total of 72 female Sprague-Dawley® rats were divided into 6 groups, including age matched controls, and rats with sham urinary diversion, urinary diversion, streptozotocin induced diabetes mellitus after sham urinary diversion, streptozotocin induced diabetes mellitus after urinary diversion and 5% sucrose induced diuresis after sham urinary diversion. Urinary diversion was performed by ureterovaginostomy 10 days before diabetes mellitus induction. Animals were evaluated 20 weeks after diabetes mellitus or diuresis induction. We measured 24-hour drinking and voiding volumes, and cystometry. Bladders were harvested to quantify smooth muscle, urothelium and collagen. We measured nitrotyrosine and Mn superoxide dismutase in the bladder. Diabetes and diuresis caused increases in drinking and voiding volume, and bladder weight. Bladder weight decreased in the urinary diversion group and the urinary diversion plus diabetes group. The intercontractile interval, voided volume and compliance increased in the diuresis and diabetes groups, decreased in the urinary diversion group and further decreased in the urinary diversion plus diabetes group. Total cross-sectional tissue, smooth muscle and urothelium areas increased in the diuresis and diabetes groups, and decreased in the urinary diversion and urinary diversion plus diabetes groups. As a percent of total tissue area, collagen decreased in the diuresis and diabetes groups, and increased in the urinary diversion and urinary diversion plus diabetes groups. Smooth muscle and urothelium decreased in the urinary diversion and urinary diversion plus diabetes groups. Nitrotyrosine and Mn superoxide dismutase increased in rats with diabetes and urinary diversion plus diabetes. 
Polyuria induced bladder hypertrophy, while hyperglycemia induced substantial oxidative stress in the bladder, which may have a pathogenic role in late stage diabetic bladder dysfunction. Copyright © 2013 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Tianfang; Wang, Jing; Wen, Junhai; Li, Xiang; Lu, Hongbing; Hsieh, Jiang; Liang, Zhengrong
2004-05-01
To treat the noise in low-dose x-ray CT projection data more accurately, analysis of the noise properties of the data and development of a corresponding efficient noise treatment method are two major problems to be addressed. In order to obtain an accurate and realistic model to describe the x-ray CT system, we acquired thousands of repeated measurements on different phantoms at several fixed scan angles using a GE high-speed multi-slice spiral CT scanner. The collected data were calibrated and log-transformed by the sophisticated system software, which converts the detected photon energy into sinogram data that satisfies the Radon transform. From the analysis of these experimental data, a nonlinear relation between mean and variance for each datum of the sinogram was obtained. In this paper, we integrated this nonlinear relation into a penalized likelihood statistical framework for SNR (signal-to-noise ratio) adaptive smoothing of noise in the sinogram. After the proposed preprocessing, the sinograms were reconstructed with the unapodized FBP (filtered backprojection) method. The resulting images were evaluated quantitatively, in terms of noise uniformity and noise-resolution tradeoff, in comparison with other noise smoothing methods such as the Hanning and Butterworth filters at different cutoff frequencies. Significant improvements in the noise-resolution tradeoff and noise properties were demonstrated.
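The mean-variance estimation step described above can be sketched in a few lines. The following toy example uses synthetic repeated measurements and a hypothetical exponential mean-variance law (not the GE scanner data): it computes per-bin means and variances across repeats and fits the nonlinear relation on a log scale.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical sinogram bins with an assumed exponential mean-variance law
true_mean = np.linspace(1.0, 6.0, 200)          # calibrated, log-transformed data
true_var = 0.01 * np.exp(true_mean / 2.5)       # variance grows nonlinearly with the mean
repeats = true_mean + rng.normal(0.0, np.sqrt(true_var), (500, 200))

m = repeats.mean(axis=0)                        # per-bin sample mean
v = repeats.var(axis=0, ddof=1)                 # per-bin sample variance
# Fit log(variance) as a linear function of the mean: log v ~ a + b*m
b, a = np.polyfit(m, np.log(v), 1)
```

Up to sampling noise, `b` recovers 1/2.5 and `a` recovers log(0.01); such a fitted curve supplies the per-datum variance needed to weight a penalized-likelihood smoother.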
Assessing a 3D smoothed seismicity model of induced earthquakes
NASA Astrophysics Data System (ADS)
Zechar, Jeremy; Király, Eszter; Gischig, Valentin; Wiemer, Stefan
2016-04-01
As more energy exploration and extraction efforts cause earthquakes, it becomes increasingly important to control induced seismicity. Risk management schemes must be improved and should ultimately be based on near-real-time forecasting systems. With this goal in mind, we propose a test bench to evaluate models of induced seismicity based on metrics developed by the CSEP community. To illustrate the test bench, we consider a model based on the so-called seismogenic index and a rate decay; to produce three-dimensional forecasts, we smooth past earthquakes in space and time. We explore four variants of this model using the Basel 2006 and Soultz-sous-Forêts 2004 datasets to make short-term forecasts, test their consistency, and rank the model variants. Our results suggest that such a smoothed seismicity model is useful for forecasting induced seismicity within three days, and giving more weight to recent events improves forecast performance. Moreover, the location of the largest induced earthquake is forecast well by this model. Despite the good spatial performance, the model does not estimate the seismicity rate well: it frequently overestimates during stimulation and during the early post-stimulation period, and it systematically underestimates around shut-in. In this presentation, we also describe a robust estimate of information gain, a modification that can also benefit forecast experiments involving tectonic earthquakes.
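A minimal sketch of the space-time smoothing idea is given below (our illustration with hypothetical kernel parameters; the paper's model additionally uses the seismogenic index and a rate decay).

```python
import numpy as np

def smoothed_rate(events_xyt, query_xy, t_now, sigma=0.5, tau=1.0):
    """Smooth past earthquakes in space and time: a Gaussian kernel in
    space and exponential down-weighting of older events, so recent
    events contribute more to the forecast."""
    xy, t = events_xyt[:, :2], events_xyt[:, 2]
    w_time = np.exp(-(t_now - t) / tau)                       # recency weight
    d2 = ((query_xy[None, :] - xy) ** 2).sum(axis=1)
    w_space = np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return float((w_time * w_space).sum())
```

Increasing the weight on recent events (smaller `tau`) mirrors the variant the authors found to improve forecast performance.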
High-order conservative finite difference GLM-MHD schemes for cell-centered MHD
NASA Astrophysics Data System (ADS)
Mignone, Andrea; Tzeferacos, Petros; Bodo, Gianluigi
2010-08-01
We present and compare third- as well as fifth-order accurate finite difference schemes for the numerical solution of the compressible ideal MHD equations in multiple spatial dimensions. The selected methods lean on four different reconstruction techniques based on recently improved versions of the weighted essentially non-oscillatory (WENO) schemes, monotonicity preserving (MP) schemes as well as slope-limited polynomial reconstruction. The proposed numerical methods are highly accurate in smooth regions of the flow, avoid loss of accuracy in proximity of smooth extrema and provide sharp non-oscillatory transitions at discontinuities. We suggest a numerical formulation based on a cell-centered approach where all of the primary flow variables are discretized at the zone center. The divergence-free condition is enforced by augmenting the MHD equations with a generalized Lagrange multiplier yielding a mixed hyperbolic/parabolic correction, as in Dedner et al. [J. Comput. Phys. 175 (2002) 645-673]. The resulting family of schemes is robust, cost-effective and straightforward to implement. Compared to existing approaches, it completely avoids the CPU intensive workload associated with an elliptic divergence cleaning step and the additional complexities required by staggered mesh algorithms. Extensive numerical testing demonstrates the robustness and reliability of the proposed framework for computations involving both smooth and discontinuous features.
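As one concrete example of the reconstruction step, a fifth-order WENO interface reconstruction with the standard Jiang-Shu weights can be sketched as follows (a generic textbook sketch, not the specific improved WENO variants compared in the paper).

```python
import numpy as np

def weno5_left(v):
    """WENO5 left-biased reconstruction of the interface value v_{i+1/2}
    from the five-point stencil v = (v_{i-2}, v_{i-1}, v_i, v_{i+1}, v_{i+2})."""
    vm2, vm1, v0, vp1, vp2 = v
    # three candidate third-order reconstructions
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6
    p2 = (2*v0 + 5*vp1 - vp2) / 6
    # Jiang-Shu smoothness indicators
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 1/4*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 1/4*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 1/4*(3*v0 - 4*vp1 + vp2)**2
    eps = 1e-6
    g = np.array([0.1, 0.6, 0.3])                 # optimal linear weights
    w = g / (eps + np.array([b0, b1, b2]))**2     # nonlinear weights
    w /= w.sum()
    return float(w @ np.array([p0, p1, p2]))
```

On smooth data the nonlinear weights revert to the optimal linear ones (fifth order); near a discontinuity the stencil crossing it gets a large smoothness indicator and is effectively switched off, which is what produces the sharp non-oscillatory transitions mentioned above.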
Birth weight centiles by gestational age for twins born in south India.
Premkumar, Prasanna; Antonisamy, Belavendra; Mathews, Jiji; Benjamin, Santhosh; Regi, Annie; Jose, Ruby; Kuruvilla, Anil; Mathai, Mathews
2016-03-24
Birth weight centile curves are commonly used as a screening tool and to assess the position of a newborn on a given reference distribution. Birth weights of twins are known to be lower than those of comparable singletons, and twin-specific birth weight centile curves are recommended for use. In this study, we aim to construct gestational age specific birth weight centile curves for twins born in south India. The study was conducted at the Christian Medical College, Vellore, south India. The birth records of all consecutive pregnancies resulting in twin births between 1991 and 2005 were reviewed. Only live twin births between 24 and 42 weeks of gestation were included. Birth weight centiles for gestational age were obtained using the methodology of generalized additive models for location, scale and shape (GAMLSS). Centile curves were obtained separately for monochorionic and dichorionic twins. Of 1530 twin pregnancies delivered during the study period (1991-2005), 1304 were included in the analysis. The median gestational age at birth was 36 weeks (1st quartile 34, 3rd quartile 38 weeks). Smoothed percentile curves for birth weight by gestational age increased progressively until 38 weeks and levelled off thereafter. Compared with dichorionic twins, monochorionic twins had lower birth weight for gestational age from 27 weeks onwards. We provide centile values of birth weight at 24 to 42 completed weeks of gestation for twins born in south India. These charts could be used both in routine clinical assessments and epidemiological studies.
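The centile construction can be illustrated with a crude empirical stand-in for GAMLSS: pool adjacent gestational weeks for stability and take percentiles. The data below are synthetic and the numbers hypothetical, not the Vellore cohort.

```python
import numpy as np

def smoothed_centiles(weeks, bw, qs=(3, 10, 50, 90, 97), window=1):
    """Empirical birth-weight centiles by gestational week, pooling
    neighbouring weeks (+/- window) for stability."""
    grid = np.arange(weeks.min(), weeks.max() + 1)
    cent = {q: np.array([np.percentile(bw[np.abs(weeks - w) <= window], q)
                         for w in grid])
            for q in qs}
    return grid, cent

# hypothetical synthetic cohort: weight rises roughly linearly with gestation
rng = np.random.default_rng(0)
weeks = rng.integers(28, 41, 2000)
bw = 1.0 + 0.15 * (weeks - 24) + rng.normal(0.0, 0.35, 2000)   # kg
grid, cent = smoothed_centiles(weeks, bw)
```

GAMLSS instead fits smooth functions for location, scale, and shape jointly, which gives far better-behaved curves in sparse tails; the sketch only conveys the idea of gestational-age-specific centiles.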
Model-based inference for small area estimation with sampling weights
Vandendijck, Y.; Faes, C.; Kirby, R.S.; Lawson, A.; Hens, N.
2017-01-01
Obtaining reliable estimates about health outcomes for areas or domains where only few to no samples are available is the goal of small area estimation (SAE). Often, we rely on health surveys to obtain information about health outcomes. Such surveys are often characterised by a complex design, stratification, and unequal sampling weights as common features. Hierarchical Bayesian models are well recognised in SAE as a spatial smoothing method, but often ignore the sampling weights that reflect the complex sampling design. In this paper, we focus on data obtained from a health survey where the sampling weights of the sampled individuals are the only information available about the design. We develop a predictive model-based approach to estimate the prevalence of a binary outcome for both the sampled and non-sampled individuals, using hierarchical Bayesian models that take into account the sampling weights. A simulation study is carried out to compare the performance of our proposed method with other established methods. The results indicate that our proposed method achieves great reductions in mean squared error when compared with standard approaches. It performs equally well or better when compared with more elaborate methods when there is a relationship between the responses and the sampling weights. The proposed method is applied to estimate asthma prevalence across districts. PMID:28989860
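The basic ingredient, using the sampling weights rather than ignoring them, is the design-weighted (Hajek) prevalence estimator per area. A minimal sketch follows; the paper's hierarchical Bayesian smoothing machinery is not reproduced here.

```python
import numpy as np

def hajek_prevalence(y, w):
    """Design-weighted (Hajek) prevalence estimate for one small area,
    from binary outcomes y and sampling weights w (the weights being
    the only design information assumed available, as in the setting above)."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    return float((w * y).sum() / w.sum())
```

When the weights are informative about the response, this estimator can differ markedly from the unweighted mean, which is the gap the model-based approach above is designed to close.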
Bakst, Leah; Fleuriet, Jérome
2017-01-01
Neurons in the smooth eye movement subregion of the frontal eye field (FEFsem) are known to play an important role in voluntary smooth pursuit eye movements. Underlying this function are projections to parietal and prefrontal visual association areas and subcortical structures, all known to play vital but differing roles in the execution of smooth pursuit. Additionally, the FEFsem has been shown to carry a diverse array of signals (e.g., eye velocity, acceleration, gain control). We hypothesized that distinct subpopulations of FEFsem neurons subserve these diverse functions and projections, and that the relative weights of retinal and extraretinal signals could form the basis for categorization of units. To investigate this, we used a step-ramp tracking task with a target blink to determine the relative contributions of retinal and extraretinal signals in individual FEFsem neurons throughout pursuit. We found that the contributions of retinal and extraretinal signals to neuronal activity and behavior change throughout the time course of pursuit. A clustering algorithm revealed three distinct neuronal subpopulations: cluster 1 was defined by a higher sensitivity to eye velocity, acceleration, and retinal image motion; cluster 2 had greater activity during blinks; and cluster 3 had significantly greater eye position sensitivity. We also performed a comparison with a sample of medial superior temporal neurons to assess similarities and differences between the two areas. Our results indicate the utility of simple tests such as the target blink for parsing the complex and multifaceted roles of cortical areas in behavior. NEW & NOTEWORTHY The frontal eye field (FEF) is known to play a critical role in volitional smooth pursuit, carrying a variety of signals that are distributed throughout the brain. 
This study used a novel application of a target blink task during step-ramp tracking to determine, in combination with a clustering algorithm, the relative contributions of retinal and extraretinal signals to FEF activity and the extent to which these contributions could form the basis for a categorization of neurons. PMID:28202571
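The clustering step can be sketched with a plain k-means on a neuron-by-feature matrix, where the columns stand in for the sensitivities reported above (eye velocity, acceleration, position, blink-period activity); the naive initialisation and data are purely illustrative.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means on a neuron-by-feature matrix.
    Naive initialisation from the first k rows, for illustration only."""
    C = X[:k].astype(float)                 # initial centroids (copy of first k rows)
    labels = np.zeros(len(X), int)
    for _ in range(iters):
        # assign each neuron to its nearest centroid
        labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):                  # recompute centroids
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return labels, C
```

The study's clustering recovered three subpopulations; with k-means the number of clusters is chosen in advance, so in practice one would compare several k values against a validity criterion.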
Shi, Xun; Ayotte, Joseph; Onda, Akikazu; Miller, Stephanie; Rees, Judy; Gilbert-Diamond, Diane; Onega, Tracy L; Gui, Jiang; Karagas, Margaret R.; Moeschler, John B
2015-01-01
There is increasing evidence of the role of arsenic in the etiology of adverse human reproductive outcomes. Because drinking water can be a major source of arsenic to pregnant women, the effect of arsenic exposure through drinking water on human birth may be revealed by a geospatial association between arsenic concentration in groundwater and birth problems, particularly in a region where private wells substantially account for water supply, like New Hampshire, USA. We calculated town-level rates of preterm birth and term low birth weight (term LBW) for New Hampshire, by using data for 1997–2009 stratified by maternal age. We smoothed the rates by using a locally weighted averaging method to increase the statistical stability. The town-level groundwater arsenic probability values are from three GIS data layers generated by the US Geological Survey: probability of local groundwater arsenic concentration >1 µg/L, probability >5 µg/L, and probability >10 µg/L. We calculated Pearson’s correlation coefficients (r) between the reproductive outcomes (preterm birth and term LBW) and the arsenic probability values, at both state and county levels. For preterm birth, younger mothers (maternal age <20) have a statewide r = 0.70 between the rates smoothed with a threshold = 2,000 births and the town mean arsenic level based on the data of probability >10 µg/L; for older mothers, r = 0.19 when the smoothing threshold = 3,500; a majority of county level r values are positive based on the arsenic data of probability >10 µg/L. For term LBW, younger mothers (maternal age <25) have a statewide r = 0.44 between the rates smoothed with a threshold = 3,500 and town minimum arsenic concentration based on the data of probability >1 µg/L; for older mothers, r = 0.14 when the rates are smoothed with a threshold = 1,000 births and also adjusted by town median household income in 1999, and the arsenic values are the town minimum based on probability >10 µg/L.
At the county level for younger mothers, positive r values prevail, but for older mothers, it is a mix. For both birth problems, the several most populous counties—with 60–80% of the state’s population and clustering at the southwest corner of the state—are largely consistent in having a positive r across different smoothing thresholds. We found evident spatial associations between the two adverse human reproductive outcomes and groundwater arsenic in New Hampshire, USA. However, the degree of associations and their sensitivity to different representations of arsenic level are variable. Generally, preterm birth has a stronger spatial association with groundwater arsenic than term LBW, suggesting an inconsistency in the impact of arsenic on the two reproductive outcomes. For both outcomes, younger maternal age has stronger spatial associations with groundwater arsenic.
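The locally weighted averaging used to stabilise town-level rates can be sketched as pooling each town with its nearest neighbours until a births threshold is reached; this is our simplified reading of the method, with hypothetical parameter values.

```python
import numpy as np

def smooth_rates(cases, births, coords, threshold=2000):
    """Locally weighted averaging: for each town, pool the nearest
    towns (by distance) until pooled births reach the threshold,
    then recompute the rate over the pooled towns."""
    rates = np.empty(len(cases), float)
    for i, c in enumerate(coords):
        order = np.argsort(((coords - c) ** 2).sum(axis=1))   # self first
        cum_b = np.cumsum(births[order])
        k = np.searchsorted(cum_b, threshold) + 1             # towns needed
        sel = order[:k]
        rates[i] = cases[sel].sum() / births[sel].sum()
    return rates
```

Pearson's r between the smoothed rates and a town-level arsenic measure then follows directly from `np.corrcoef`; raising the threshold trades spatial resolution for statistical stability, which is why the abstract reports r at several thresholds.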
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
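The key computational trick, handling the linearly entering parameters analytically while searching only over the nonlinear ones, can be shown on a toy model. A grid search stands in for the paper's Monte Carlo sampler, and the model and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 100)

def design(theta):
    """Design matrix: columns depend nonlinearly on theta,
    while the coefficients m enter linearly."""
    return np.column_stack([np.ones_like(x), np.exp(-theta * x)])

m_true, theta_true = np.array([1.0, 2.0]), 3.0
d = design(theta_true) @ m_true + rng.normal(0.0, 0.01, x.size)

def misfit(theta):
    # analytic least-squares solution for the linear part at fixed theta
    A = design(theta)
    m, *_ = np.linalg.lstsq(A, d, rcond=None)
    return ((A @ m - d) ** 2).sum()

thetas = np.linspace(0.5, 6.0, 200)
best = thetas[np.argmin([misfit(t) for t in thetas])]
```

Replacing the grid with a Monte Carlo sampler over the nonlinear parameters, and adding priors and data-weight hyperparameters to the objective, recovers the structure of the method described above.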
Displacement of Tethered Hydro-Acoustic Modems by Uniform Horizontal Currents
2009-12-01
[Fragmentary record: the available text is figure-list residue, including captions on drag over smooth plane surfaces in incompressible flow in air and in water, drag of streamlined bodies, and the static configuration of a moored cable tethered to a stationary sea-surface buoy or Unmanned Surface Vehicle (USV), weighted by a dense ballast at the free end and carrying a free-moving sub-surface buoy at depth.]
Thomas, K A; Burr, R
1999-06-01
Incubator thermal environments produced by skin versus air servo-control were compared. Infant abdominal skin and incubator air temperatures were recorded from 18 infants in skin servo-control and 14 infants in air servo-control (26- to 29-week gestational age, 14 +/- 2 days postnatal age) for 24 hours. Differences in incubator and infant temperature, neutral thermal environment (NTE) maintenance, and infant and incubator circadian rhythm were examined using analysis of variance and scatterplots. Skin servo-control resulted in more variable air temperature, yet more stable infant temperature, and more time within the NTE. Circadian rhythm of both infant and incubator temperature differed by control mode and the relationship between incubator and infant temperature rhythms was a function of control mode. The differences between incubator control modes extend beyond temperature stability and maintenance of NTE. Circadian rhythm of incubator and infant temperatures is influenced by incubator control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.
2000-05-19
Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment for the Waste Isolation Pilot Plant are presented for two-phase flow in the vicinity of the repository under undisturbed conditions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformation are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure is potentially the most important due to its influence on spallings and direct brine releases, with the uncertainty in its value being dominated by the extent to which the microbial degradation of cellulose takes place, the rate at which the corrosion of steel takes place, and the amount of brine that drains from the surrounding disturbed rock zone into the repository.
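Two of the named techniques are easy to sketch: Latin hypercube sampling, and rank transformation followed by ordinary correlation. These are generic illustrations, not the WIPP analysis code.

```python
import numpy as np

def latin_hypercube(n, k, rng):
    """n samples in k dimensions: exactly one sample per equal-probability
    stratum in each dimension, with strata shuffled independently."""
    u = (rng.random((n, k)) + np.arange(n)[:, None]) / n   # one point per stratum
    for j in range(k):
        u[:, j] = rng.permutation(u[:, j])                 # decouple the dimensions
    return u

def rank_corr(a, b):
    """Spearman rank correlation via rank transformation (no tie handling)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])
```

Rank transformation makes the correlation measure robust to monotone nonlinearity, which is why it is paired with stepwise regression in sampling-based sensitivity analyses like the one described above.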
Classification of pollen species using autofluorescence image analysis.
Mitsumoto, Kotaro; Yabusaki, Katsumi; Aoyagi, Hideki
2009-01-01
A new method to classify pollen species was developed by monitoring autofluorescence images of pollen grains. The pollens of nine species were selected, and their autofluorescence images were captured by a microscope equipped with a digital camera. The pollen size and the ratio of the blue to red pollen autofluorescence spectra (the B/R ratio) were calculated by image processing. The B/R ratios and pollen size varied among the species. Furthermore, the scatter-plot of pollen size versus the B/R ratio showed that pollen could be classified to the species level using both parameters. The pollen size and B/R ratio were confirmed by means of particle flow image analysis and the fluorescence spectra, respectively. These results suggest that a flow system capable of measuring both scattered light and the autofluorescence of particles could classify and count pollen grains in real time.
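The two derived features, grain size and the B/R autofluorescence ratio, and a nearest-centroid classification in that plane can be sketched as follows; the species names and centroid values are hypothetical.

```python
import numpy as np

def br_ratio(rgb):
    """Blue-to-red ratio of a grain's autofluorescence image.
    rgb is an (H, W, 3) array; grain pixels are assumed nonzero."""
    mask = rgb.sum(axis=2) > 0
    red = rgb[..., 0][mask].mean()
    blue = rgb[..., 2][mask].mean()
    return blue / red

def classify(size_um, ratio, centroids, size_scale=10.0):
    """Nearest-centroid classification in the (size, B/R) plane.
    centroids: {species: (mean_size_um, mean_BR)}; size is rescaled
    so both axes contribute comparably to the distance."""
    return min(centroids,
               key=lambda s: ((size_um - centroids[s][0]) / size_scale) ** 2
                             + (ratio - centroids[s][1]) ** 2)
```

In a flow system, the same two measurements per particle (scattered-light size and fluorescence ratio) would feed this classifier in real time, as the abstract suggests.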
Explicit formula for the Holevo bound for two-parameter qubit-state estimation problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, Jun
The main contribution of this paper is to derive an explicit expression for the fundamental precision bound, the Holevo bound, for estimating any two-parameter family of qubit mixed-states in terms of quantum versions of Fisher information. The obtained formula depends solely on the symmetric logarithmic derivative (SLD), the right logarithmic derivative (RLD) Fisher information, and a given weight matrix. This result immediately provides necessary and sufficient conditions for the following two important classes of quantum statistical models; the Holevo bound coincides with the SLD Cramér-Rao bound and it does with the RLD Cramér-Rao bound. One of the important results of this paper is that a general model other than these two special cases exhibits an unexpected property: the structure of the Holevo bound changes smoothly when the weight matrix varies. In particular, it always coincides with the RLD Cramér-Rao bound for a certain choice of the weight matrix. Several examples illustrate these findings.
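In standard notation (our paraphrase, not the paper's exact expressions; W is the weight matrix, J_S and J_R the SLD and RLD quantum Fisher information matrices), the two scalar bounds being compared are commonly written as

```latex
C^{\mathrm{S}}_{\theta}[W] \;=\; \operatorname{Tr}\!\bigl[\, W\, J_{\mathrm{S}}(\theta)^{-1} \bigr],
\qquad
C^{\mathrm{R}}_{\theta}[W] \;=\; \operatorname{Tr}\!\bigl[\, W \operatorname{Re} J_{\mathrm{R}}(\theta)^{-1} \bigr]
\;+\; \bigl\lVert \sqrt{W}\, \operatorname{Im} J_{\mathrm{R}}(\theta)^{-1}\, \sqrt{W} \bigr\rVert_{1},
```

and the Holevo bound satisfies C_H[W] ≥ max{C_S[W], C_R[W]}; the paper's contribution is an explicit formula for C_H in the two-parameter qubit case and a characterisation of when it coincides with either of these simpler bounds.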
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Ambra, P.; Vassilevski, P. S.
2014-05-30
Adaptive Algebraic Multigrid (or Multilevel) Methods (αAMG) are introduced to improve robustness and efficiency of classical algebraic multigrid methods in dealing with problems where no a priori knowledge or assumptions on the near-null kernel of the underlying matrix are available. Recently we proposed an adaptive (bootstrap) AMG method, αAMG, aimed at obtaining a composite solver with a desired convergence rate. Each new multigrid component relies on a current (general) smooth vector and exploits pairwise aggregation based on weighted matching in a matrix graph to define a new automatic, general-purpose coarsening process, which we refer to as “the compatible weighted matching”. In this work, we present results that broaden the applicability of our method to different finite element discretizations of elliptic PDEs. In particular, we consider systems arising from displacement methods in linear elasticity problems and saddle-point systems that appear in the application of the mixed method to Darcy problems.
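The pairwise-aggregation idea can be sketched with a greedy matching on the matrix graph, using the scaled strength measure |a_ij| / sqrt(a_ii a_jj); this is a simplified stand-in for the compatible weighted matching of the paper, which uses an optimal matching driven by the current smooth vector.

```python
import numpy as np

def greedy_pairwise_aggregates(A):
    """Greedy pairwise aggregation on the graph of an s.p.d. matrix A:
    match each unaggregated node with its strongest unmatched neighbour;
    unmatched leftovers become singleton aggregates."""
    n = A.shape[0]
    d = np.abs(np.diag(A))
    S = np.abs(A) / np.sqrt(np.outer(d, d))   # edge strengths
    np.fill_diagonal(S, 0.0)
    matched = np.zeros(n, bool)
    aggregates = []
    for i in range(n):
        if matched[i]:
            continue
        cand = np.where(~matched & (S[i] > 0))[0]
        if cand.size:
            j = int(cand[np.argmax(S[i, cand])])
            aggregates.append((i, j))
            matched[[i, j]] = True
        else:
            aggregates.append((i,))
            matched[i] = True
    return aggregates
```

Each aggregate then defines one coarse degree of freedom; a piecewise-constant (or smooth-vector-weighted) prolongation over the aggregates yields the coarse-level operator by the usual Galerkin product.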
Spatio-temporal modeling of chronic PM 10 exposure for the Nurses' Health Study
NASA Astrophysics Data System (ADS)
Yanosky, Jeff D.; Paciorek, Christopher J.; Schwartz, Joel; Laden, Francine; Puett, Robin; Suh, Helen H.
2008-06-01
Chronic epidemiological studies of airborne particulate matter (PM) have typically characterized the chronic PM exposures of their study populations using city- or county-wide ambient concentrations, which limit the studies to areas where nearby monitoring data are available and which ignore within-city spatial gradients in ambient PM concentrations. To provide more spatially refined and precise chronic exposure measures, we used a Geographic Information System (GIS)-based spatial smoothing model to predict monthly outdoor PM10 concentrations in the northeastern and midwestern United States. This model included monthly smooth spatial terms and smooth regression terms of GIS-derived and meteorological predictors. Using cross-validation and other pre-specified selection criteria, terms for distance to road by road class, urban land use, block group and county population density, point- and area-source PM10 emissions, elevation, wind speed, and precipitation were found to be important determinants of PM10 concentrations and were included in the final model. Final model performance was strong (cross-validation R2=0.62), with little bias (-0.4 μg m-3) and high precision (6.4 μg m-3). The final model (with monthly spatial terms) performed better than a model with seasonal spatial terms (cross-validation R2=0.54). The addition of GIS-derived and meteorological predictors improved predictive performance over spatial smoothing (cross-validation R2=0.51) or inverse distance weighted interpolation (cross-validation R2=0.29) methods alone and increased the spatial resolution of predictions. The model performed well in both rural and urban areas, across seasons, and across the entire time period. The strong model performance demonstrates its suitability as a means to estimate individual-specific chronic PM10 exposures for large populations.
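The reported performance numbers correspond to simple held-out statistics; the following sketch shows how cross-validation R², bias, and precision are typically computed from observations and held-out predictions (our reading of the metrics, not the authors' code).

```python
import numpy as np

def cv_metrics(y_obs, y_pred_heldout):
    """Cross-validation R^2 (skill relative to the observed mean),
    mean bias, and precision (SD of held-out prediction errors)."""
    y, p = np.asarray(y_obs, float), np.asarray(y_pred_heldout, float)
    ss_res = ((y - p) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    bias = float((p - y).mean())
    precision = float((y - p).std())
    return r2, bias, precision
```

With monthly monitor concentrations held out in turn, these three numbers summarise a spatio-temporal exposure model in exactly the terms quoted above (e.g. R² = 0.62, bias -0.4 μg m⁻³, precision 6.4 μg m⁻³).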
On Heels and Toes: How Ants Climb with Adhesive Pads and Tarsal Friction Hair Arrays
Endlein, Thomas; Federle, Walter
2015-01-01
Ants are able to climb effortlessly on vertical and inverted smooth surfaces. When climbing, their feet touch the substrate not only with their pretarsal adhesive pads but also with dense arrays of fine hairs on the ventral side of the 3rd and 4th tarsal segments. To understand what role these different attachment structures play during locomotion, we analysed leg kinematics and recorded single-leg ground reaction forces in Weaver ants (Oecophylla smaragdina) climbing vertically on a smooth glass substrate. We found that the ants engaged different attachment structures depending on whether their feet were above or below their Centre of Mass (CoM). Legs above the CoM pulled and engaged the arolia (‘toes’), whereas legs below the CoM pushed with the 3rd and 4th tarsomeres (‘heels’) in surface contact. Legs above the CoM carried a significantly larger proportion of the body weight than legs below the CoM. Force measurements on individual ant tarsi showed that friction increased with normal load as a result of the bending and increasing side contact of the tarsal hairs. On a rough sandpaper substrate, the tarsal hairs generated higher friction forces in the pushing than in the pulling direction, whereas the reverse effect was found on the smooth substrate. When the tarsal hairs were pushed, buckling was observed for forces exceeding the shear forces found in climbing ants. Adhesion forces were small but not negligible, and higher on the smooth substrate. Our results indicate that the dense tarsal hair arrays produce friction forces when pressed against the substrate, and help the ants to push outwards during horizontal and vertical walking. PMID:26559941
Optimization-based scatter estimation using primary modulation for computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yi; Ma, Jingchen; Zhao, Jun
Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. Simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of the CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy in scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.
Effect of chronic low-dose tadalafil on penile cavernous tissues in diabetic rats.
Mostafa, Mohamed E; Senbel, Amira M; Mostafa, Taymour
2013-06-01
To assess the effect of chronic low-dose administration of tadalafil (Td) on penile cavernous tissue in induced diabetic rats. The study investigated 48 adult male albino rats, comprising a control group, sham controls, streptozotocin-induced diabetic rats, and induced diabetic rats that received Td low-dose daily (0.09 mg/200 g weight) for 2 months. The rats were euthanized 1 day after the last dose. Cavernous tissues were subjected to histologic, immunohistochemical, morphometric studies, and measurement of intracavernosal pressure and mean arterial pressure in anesthetized rats. Diabetic rats demonstrated dilated cavernous spaces, smooth muscles with heterochromatic nuclei, degenerated mitochondria, vacuolated cytoplasm, and negative smooth muscle immunoreactivity. Nerve fibers demonstrated a thick myelin sheath and intra-axonal edema, where blood capillaries exhibited thick basement membrane. Diabetic rats on Td showed improved cavernous organization with significant morphometric increases in the area percentage of smooth muscles and elastic tissue and a significant decrease of fibrous tissue. The Td-treated group showed enhanced erectile function (intracavernosal pressure/mean arterial pressure) at 0.3, 0.5, 1, 3, and 5 Hz compared with diabetic group values at the respective frequencies (P <.05) that approached control values. Chronic low-dose administration of Td in diabetic rats is associated with substantial improvement of the structure of penile cavernous tissue, with increased smooth muscles and elastic tissue, decreased fibrous tissue, and functional enhancement of the erectile function. This raises the idea that the change in penile architecture with Td treatment improves erectile function beyond its half-life and its direct pharmacologic action on phosphodiesterase type 5. Copyright © 2013 Elsevier Inc. All rights reserved.
Wong, W. S.; Bloomquist, S. L.; Bendele, A. M.; Fleisch, J. H.
1992-01-01
1. Parenchymal lung strip preparations have been widely used as an in vitro model of peripheral airway smooth muscle. The present study examined functional responses of 4 consecutive guinea-pig lung parenchymal strips isolated from the central region (segment 1) to the distal edge (segment 4) of the lower lung lobe. The middle two segments were designated as segments 2 and 3. 2. Lung segments 1 and 4 exhibited significantly greater contraction than the other 2 segments to KCl when responses were expressed as mg force per mg tissue weight. Contractile responses to bronchospastic agents including histamine, carbachol, endothelin-1, leukotrienes (LT) B4 and D4, and the thromboxane A2-mimetic U46619 demonstrated no significant difference in EC50 values among the 4 lung segments. 3. Contractile responses of segments 1 and 4 to antigen-challenge (ovalbumin), ionophore A23187 and substance P were significantly greater than the other 2 segments with respect to either sensitivity or maximum responsiveness. 4. U46619-induced contractions of the 4 lung segments were relaxed in a similar manner by papaverine and theophylline up to 100%, salbutamol up to 80%, and sodium nitroprusside by only 20%. In contrast, sodium nitroprusside markedly reversed U46619-induced contraction of pulmonary arterial rings and bronchial rings. 5. Histological studies identified 2-4 layers of smooth muscle cells underlying the lung pleural surface. Mast cells were prominent in this area. Moreover, morphometric studies showed that segment 4 possessed the least amount of smooth muscle structures from bronchial/bronchiolar wall and vasculatures as compared to the other 3 segments, and a significant difference in this respect was evident between segment 1 and segment 4. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:1378341
Biddinger, Jessica E.; Baquet, Zachary C.; Jones, Kevin R.; McAdams, Jennifer
2013-01-01
A large proportion of vagal afferents are dependent on neurotrophin-3 (NT-3) for survival. NT-3 is expressed in developing gastrointestinal (GI) smooth muscle, a tissue densely innervated by vagal mechanoreceptors, and thus could regulate their survival. We genetically ablated NT-3 from developing GI smooth muscle and examined the pattern of loss of NT-3 expression in the GI tract and whether this loss altered vagal afferent signaling or feeding behavior. Meal-induced c-Fos activation was reduced in the solitary tract nucleus and area postrema in mice with a smooth muscle-specific NT-3 knockout (SM-NT-3KO) compared with controls, suggesting a decrease in vagal afferent signaling. Daily food intake and body weight of SM-NT-3KO mice and controls were similar. Meal pattern analysis revealed that mutants, however, had increases in average and total daily meal duration compared with controls. Mutants maintained normal meal size by decreasing eating rate compared with controls. Although microstructural analysis did not reveal a decrease in the rate of decay of eating in SM-NT-3KO mice, they ate continuously during the 30-min meal, whereas controls terminated feeding after 22 min. This led to a 74% increase in first daily meal size of SM-NT-3KO mice compared with controls. The increases in meal duration and first meal size of SM-NT-3KO mice are consistent with reduced satiation signaling by vagal afferents. This is the first demonstration of a role for GI NT-3 in short-term controls of feeding, most likely involving effects on development of vagal GI afferents that regulate satiation. PMID:24068045
On Heels and Toes: How Ants Climb with Adhesive Pads and Tarsal Friction Hair Arrays.
Endlein, Thomas; Federle, Walter
2015-01-01
Ants are able to climb effortlessly on vertical and inverted smooth surfaces. When climbing, their feet touch the substrate not only with their pretarsal adhesive pads but also with dense arrays of fine hairs on the ventral side of the 3rd and 4th tarsal segments. To understand what role these different attachment structures play during locomotion, we analysed leg kinematics and recorded single-leg ground reaction forces in Weaver ants (Oecophylla smaragdina) climbing vertically on a smooth glass substrate. We found that the ants engaged different attachment structures depending on whether their feet were above or below their Centre of Mass (CoM). Legs above the CoM pulled and engaged the arolia ('toes'), whereas legs below the CoM pushed with the 3rd and 4th tarsomeres ('heels') in surface contact. Legs above the CoM carried a significantly larger proportion of the body weight than legs below the CoM. Force measurements on individual ant tarsi showed that friction increased with normal load as a result of the bending and increasing side contact of the tarsal hairs. On a rough sandpaper substrate, the tarsal hairs generated higher friction forces in the pushing than in the pulling direction, whereas the reverse effect was found on the smooth substrate. When the tarsal hairs were pushed, buckling was observed for forces exceeding the shear forces found in climbing ants. Adhesion forces were small but not negligible, and higher on the smooth substrate. Our results indicate that the dense tarsal hair arrays produce friction forces when pressed against the substrate, and help the ants to push outwards during horizontal and vertical walking.
Spering, Miriam; Montagnini, Anna; Gegenfurtner, Karl R
2008-11-24
Visual processing of color and luminance for smooth pursuit and saccadic eye movements was investigated using a target selection paradigm. In two experiments, stimuli were varied along the dimensions color and luminance, and selection of the more salient target was compared in pursuit and saccades. Initial pursuit was biased in the direction of the luminance component whereas saccades showed a relative preference for color. An early pursuit response toward luminance was often reversed to color by a later saccade. Observers' perceptual judgments of stimulus salience, obtained in two control experiments, were clearly biased toward luminance. This choice bias in perceptual data implies that the initial short-latency pursuit response agrees with perceptual judgments. In contrast, saccades, which have a longer latency than pursuit, do not seem to follow the perceptual judgment of salience but instead show a stronger relative preference for color. These substantial differences in target selection imply that target selection processes for pursuit and saccadic eye movements use distinctly different weights for color and luminance stimuli.
Effects of Laser Energies on Wear and Tensile Properties of Biomimetic 7075 Aluminum Alloy
NASA Astrophysics Data System (ADS)
Yuan, Yuhuan; Zhang, Peng; Zhao, Guoping; Gao, Yang; Tao, Lixi; Chen, Heng; Zhang, Jianlong; Zhou, Hong
2018-03-01
Inspired by the non-smooth surface of certain animals, a biomimetic coupling unit with various sizes, microstructure, and hardness was prepared on the surface of 7075 aluminum alloy. Experimental studies were then conducted to investigate the wear and tensile properties at various laser energy inputs. The results demonstrated that the non-smooth surface with biomimetic coupling units had a positive effect on both the wear resistance and the tensile properties of 7075 aluminum alloy. In addition, the sample with the unit fabricated at a laser energy of 420.1 J/cm2 exhibited the most significant improvement in wear and tensile properties, owing to the minimum grain size and the highest microhardness. The weight loss of this sample was one-third that of the untreated one, and its yield strength, ultimate tensile strength, and elongation improved by 20%, 20%, and 34%, respectively. The mechanisms underlying the improvements in wear and tensile properties were also analyzed.
Stemshorn, B; Nielsen, K; Samagh, B
1981-01-01
Two methods are described for the partial purification of a high molecular weight, heat-resistant component (CO1) of sonicates of smooth and rough Brucella abortus which is precipitated by sera of some infected cattle. Method 1, a combination of gel filtration chromatography and polyacrylamide gel electrophoresis, was used to prepare CO1 from sonicates of a smooth field strain of B. abortus. Method 2, a combination of gel filtration chromatography and heat treatment, was used to obtain CO1 from sonicates of rough B. abortus strain 45/20. Rabbit antisera produced against CO1 prepared by either method contained only CO1 precipitins but were negative in standard agglutination and complement fixation tests conducted with whole cell antigens. Evidence is presented that CO1 is identical to Brucella antigen A2, and it is proposed that in future the designation A2 be employed. PMID:6791797
Increasing viscosity and inertia using a robotically-controlled pen improves handwriting in children
Ben-Pazi, Hilla; Ishihara, Abraham; Kukke, Sahana; Sanger, Terence D
2010-01-01
The aim of this study was to determine the effect of the mechanical properties of the pen on the quality of handwriting in children. Twenty-two school-aged children (ages 8–14 years) wrote in cursive using a pen attached to a robot. The robot was programmed to increase the effective weight (inertia) and stiffness (viscosity) of the pen. Speed, frequency, variability, and quality of the two handwriting samples were compared. Increased inertia and viscosity improved handwriting quality in 85% of children (p<0.05). Handwriting quality did not correlate with changes in speed, suggesting that improvement was not due to reduced speed. Measures of movement variability remained unchanged, suggesting that improvement was not due to mechanical smoothing of pen movement by the robot. Since improvement was not explained by reduced speed or mechanical smoothing, we conclude that children alter handwriting movements in response to pen mechanics. Altered movement could be caused by changes in proprioceptive sensory feedback. PMID:19794098
Multispectral image enhancement for H&E stained pathological tissue specimens
NASA Astrophysics Data System (ADS)
Bautista, Pinky A.; Abe, Tokiya; Yamaguchi, Masahiro; Ohyama, Nagaaki; Yagi, Yukako
2008-03-01
The presence of a liver disease such as cirrhosis can be determined by examining the proliferation of collagen fiber from a tissue slide stained with a special stain such as the Masson's trichrome (MT) stain. Collagen fiber and smooth muscle, which are both stained the same in an H&E stained slide, are stained blue and pink respectively in an MT-stained slide. In this paper we show that with multispectral imaging the difference between collagen fiber and smooth muscle can be visualized even from an H&E stained image. In the method, M KL (Karhunen-Loève) bases are derived using the spectral data of those H&E stained tissue components that can be easily differentiated from each other (i.e. nucleus, cytoplasm, red blood cells, etc.), and weighting factors are determined based on the spectral residual error of fiber to enhance spectral features at certain wavelengths. Results of our experiment demonstrate the capability of multispectral imaging and its advantage compared to conventional RGB imaging systems to delineate tissue structures with subtle colorimetric differences.
NASA Astrophysics Data System (ADS)
Martin, Bradley; Fornberg, Bengt
2017-04-01
In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.
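The core of any RBF-FD method is the computation of stencil weights by solving a small linear system per node. The sketch below is a generic 1-D illustration with a Gaussian basis approximating a second derivative; the function name, node set, and shape parameter are illustrative, and the interface-aware weight modifications discussed above are not reproduced here.

```python
import numpy as np

def rbf_fd_weights(nodes, center, eps=0.1):
    """Stencil weights w with sum(w * u(nodes)) ~ u''(center), built from
    Gaussian RBFs phi(r) = exp(-(eps*r)^2) centred at the nodes."""
    nodes = np.asarray(nodes, dtype=float)
    # Interpolation matrix A_ij = phi(|x_i - x_j|)
    A = np.exp(-(eps * (nodes[:, None] - nodes[None, :]))**2)
    # Right-hand side: d^2/dx^2 phi(|x - x_j|) evaluated at x = center
    d = center - nodes
    b = (4 * eps**4 * d**2 - 2 * eps**2) * np.exp(-(eps * d)**2)
    return np.linalg.solve(A, b)

# A 5-node stencil for u''(0); in the small-eps (flat) limit the weights
# approach the classical central-difference stencil for this node set.
w = rbf_fd_weights([-2, -1, 0, 1, 2], 0.0)
```

By construction the stencil is exact for any function in the span of the chosen basis, which is a convenient correctness check.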
BMI curves for preterm infants.
Olsen, Irene E; Lawson, M Louise; Ferguson, A Nicole; Cantrell, Rebecca; Grabich, Shannon C; Zemel, Babette S; Clark, Reese H
2015-03-01
Preterm infants experience disproportionate growth failure postnatally and may be large weight for length despite being small weight for age by hospital discharge. The objective of this study was to create and validate intrauterine weight-for-length growth curves using the contemporary, large, racially diverse US birth parameters sample used to create the Olsen weight-, length-, and head-circumference-for-age curves. Data from 391 681 US infants (Pediatrix Medical Group) born at 22 to 42 weeks' gestational age between 1998 and 2006 included birth weight, length, and head circumference, estimated gestational age, and gender. Separate subsamples were used to create and validate curves. Established methods were used to determine the weight-for-length ratio that was most highly correlated with weight and uncorrelated with length. Final smoothed percentile curves (3rd to 97th) were created by the Lambda Mu Sigma (LMS) method. The validation sample was used to confirm results. The final sample included 254 454 singleton infants (57.2% male) who survived to discharge. BMI was the best overall weight-for-length ratio for both genders and a majority of gestational ages. Gender-specific BMI-for-age curves were created (n = 127 446) and successfully validated (n = 126 988). Mean z scores for the validation sample were ∼0 (∼1 SD). BMI was different across gender and gestational age. We provide a set of validated reference curves (gender-specific) to track changes in BMI for prematurely born infants cared for in the NICU, for use with weight-, length-, and head-circumference-for-age intrauterine growth curves. Copyright © 2015 by the American Academy of Pediatrics.
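The LMS (Lambda Mu Sigma) method mentioned above summarises each age's distribution by a Box-Cox power L, a median M and a coefficient of variation S; a measurement is then placed on the curves via a z-score. A minimal sketch of that transformation (Cole's formula; the parameter values in the usage line are made up for illustration):

```python
import math

def lms_z(x, L, M, S):
    """Cole's LMS z-score: SDs separating measurement x from the median M,
    after Box-Cox normalisation of skewness with power L and CV S."""
    if abs(L) < 1e-9:               # the L -> 0 limit is logarithmic
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

# Hypothetical parameters: a BMI of 17.6 against median 16.0, L = 1, S = 0.1
z = lms_z(17.6, L=1.0, M=16.0, S=0.1)   # one SD above the median
```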
Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality
Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel
2016-01-01
Many people living in low and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data, including many household sample surveys, are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor, spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought, since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991-2010 and two demographic surveillance system sites. We derive a variance estimator of under-five child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey, and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA). PMID:27468328
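The starting point for design-based estimation with survey weights can be sketched as follows. This is a simplified weighted estimator with a with-replacement linearised variance, ignoring the clustering and space-time random effects that the estimator derived in the paper handles; all names are illustrative.

```python
import numpy as np

def weighted_rate(outcomes, weights):
    """Survey-weighted estimate of a binary indicator (e.g. 1 = died before
    age five) with a crude design-based variance from weighted residuals."""
    y = np.asarray(outcomes, dtype=float)
    w = np.asarray(weights, dtype=float)
    p = float(np.sum(w * y) / np.sum(w))
    n = len(y)
    r = w * (y - p) / np.mean(w)          # scaled residuals, sum to zero
    var = float(np.sum(r**2) / (n * (n - 1)))
    return p, var
```

With equal weights this reduces to the ordinary sample proportion and its usual variance estimate.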
Wu, Jindan; Mao, Zhengwei; Gao, Changyou
2012-01-01
Cell migration is an important biological activity. Regulating the migration of vascular smooth muscle cells (VSMCs) is critical in tissue engineering and therapy of cardiovascular disease. In this work, methoxy poly(ethylene glycol) (mPEG) brushes of different molecular weight (Mw 2 kDa, 5 kDa and 10 kDa) and grafting mass (0-859 ng/cm(2)) were prepared on aldehyde-activated glass slides, and were characterized by X-ray photoelectron spectrometer (XPS) and quartz crystal microbalance with dissipation (QCM-d). Adhesion and migration processes of VSMCs were studied as a function of different mPEG Mw and grafting density. We found that these events were mainly regulated by the grafting mass of mPEG regardless of mPEG Mw and grafting density. The VSMCs migrated on the surfaces randomly without a preferential direction. Their migration rates increased initially and then decreased along with the increase of mPEG grafting mass. The fastest rates (~24 μm/h) appeared on the mPEG brushes with grafting mass of 300-500 ng/cm(2) depending on the Mw. Cell adhesion strength, arrangement of cytoskeleton, and gene and protein expression levels of adhesion related proteins were studied to unveil the intrinsic mechanism. It was found that the cell-substrate interaction controlled the cell mobility, and the highest migration rate was achieved on the surfaces with appropriate adhesion force. Copyright © 2011 Elsevier Ltd. All rights reserved.
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-08-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision-making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps, i.e. the spatial probability of a future vent opening given the past eruptive activity of a volcano. This challenging issue is generally tackled using probabilistic methods that evaluate a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source Geographic Information System Quantum GIS, that is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the user to select an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input datasets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is shown here through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
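The susceptibility computation described above reduces to a kernel density estimate over past vent locations. A minimal 2-D Gaussian-kernel sketch, with a fixed scalar bandwidth (QVAST's bandwidth-selection methods and Poisson modelling are not reproduced; names are illustrative):

```python
import numpy as np

def susceptibility(vents, grid, bandwidth):
    """2-D Gaussian KDE of vent-opening spatial probability.
    vents: (n, 2) past vent coordinates; grid: (m, 2) evaluation points."""
    vents = np.asarray(vents, dtype=float)
    grid = np.asarray(grid, dtype=float)
    # Squared distances between every grid point and every vent
    d2 = np.sum((grid[:, None, :] - vents[None, :, :])**2, axis=-1)
    pdf = np.exp(-d2 / (2.0 * bandwidth**2)).sum(axis=1)
    return pdf / (2.0 * np.pi * bandwidth**2 * len(vents))

# When several datasets are available, a total map is a weighted sum of
# per-dataset PDFs (weights chosen to sum to 1), as the abstract describes:
# total = 0.7 * pdf_vents + 0.3 * pdf_fissures
```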
Past and projected trends of body mass index and weight status in South Australia: 2003 to 2019.
Hendrie, Gilly A; Ullah, Shahid; Scott, Jane A; Gray, John; Berry, Narelle; Booth, Sue; Carter, Patricia; Cobiac, Lynne; Coveney, John
2015-12-01
Functional data analysis (FDA) is a forecasting approach that, to date, has not been applied to obesity, and that may provide more accurate forecasting analysis to manage uncertainty in public health. This paper uses FDA to provide projections of Body Mass Index (BMI), overweight and obesity in an Australian population through to 2019. Data from the South Australian Monitoring and Surveillance System (January 2003 to December 2012, n=51,618 adults) were collected via telephone interview survey. FDA was conducted in four steps: 1) age-gender specific BMIs for each year were smoothed using a weighted regression; 2) the functional principal components decomposition was applied to estimate the basis functions; 3) an exponential smoothing state space model was used for forecasting the coefficient series; and 4) forecast coefficients were combined with the basis function. The forecast models suggest that between 2012 and 2019 average BMI will increase from 27.2 kg/m(2) to 28.0 kg/m(2) in males and 26.4 kg/m(2) to 27.6 kg/m(2) in females. The prevalence of obesity is forecast to increase by 6-7 percentage points by 2019 (to 28.7% in males and 29.2% in females). Projections identify age-gender groups at greatest risk of obesity over time. The novel approach will be useful to facilitate more accurate planning and policy development. © 2015 Public Health Association of Australia.
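Step 3 of the pipeline above forecasts each coefficient series with an exponential smoothing state space model. The recursion at the core of the simplest such model can be sketched as follows; the full ETS models used in the paper also handle trend and error structure, and alpha here is an illustrative smoothing constant:

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing: level = alpha*x + (1 - alpha)*level.
    The flat forecast for all future horizons is the final level."""
    level = float(series[0])
    for x in series[1:]:
        level = alpha * float(x) + (1.0 - alpha) * level
    return level
```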
Fatigue Properties of the Ultra-High Strength Steel TM210A
Kang, Xia; Zhao, Gui-ping
2017-01-01
This paper presents the results of an experiment to investigate the high cycle fatigue properties of the ultra-high strength steel TM210A. A constant amplitude rotating bending fatigue experiment was performed at room temperature at stress ratio R = −1. In order to evaluate the notch effect, the fatigue experiment was carried out upon two sets of specimens, smooth and notched, respectively. In the experiment, the rotating bending fatigue life was tested using the group method, and the rotating bending fatigue limit was tested using the staircase method at 1 × 107 cycles. A double weighted least square method was then used to fit the stress-life (S–N) curve. The S–N curves of the two sets of specimens were obtained and the morphologies of the fractures of the two sets of specimens were observed with scanning electron microscopy (SEM). The results showed that the fatigue limit of the smooth specimen for rotating bending fatigue was 615 MPa; the ratio of the fatigue limit to tensile strength was 0.29, and the cracks initiated at the surface of the smooth specimen; while the fatigue limit of the notched specimen for rotating bending fatigue was 363 MPa, and the cracks initiated at the edge of the notch. The fatigue notch sensitivity index of the ultra-high strength maraging steel TM210A was 0.69. PMID:28891934
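Fitting an S-N curve typically starts from the Basquin relation S = A * N^b, which is linear in log-log coordinates. A plain least-squares sketch of that linearisation (the paper uses a double weighted least-squares fit; the weighting is omitted here, and the function name is illustrative):

```python
import math

def fit_basquin(stresses, lives):
    """Fit S = A * N**b by ordinary least squares on log S = log A + b log N."""
    xs = [math.log(n) for n in lives]
    ys = [math.log(s) for s in stresses]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    # Slope and intercept of the log-log regression line
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    A = math.exp(ybar - b * xbar)
    return A, b
```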
Mean-field theory of a plastic network of integrate-and-fire neurons.
Chen, Chun-Chung; Jasnow, David
2010-01-01
We consider a noise-driven network of integrate-and-fire neurons. The network evolves as a result of the activities of the neurons, following spike-timing-dependent plasticity rules. We apply a self-consistent mean-field theory to the system to obtain the mean activity level for the system as a function of the mean synaptic weight, which predicts a first-order transition and hysteresis between a noise-dominated regime and a regime of persistent neural activity. Assuming Poisson firing statistics for the neurons, the plasticity dynamics of a synapse under the influence of the mean-field environment can be mapped to the dynamics of an asymmetric random walk in synaptic-weight space. Using a master equation for small steps, we predict a narrow distribution of synaptic weights that scales with the square root of the plasticity rate for the stationary state of the system, given plausible physiological parameter values describing neural transmission and plasticity. The dependence of the distribution on the synaptic weight of the mean-field environment allows us to determine the mean synaptic weight self-consistently. The effects of fluctuations in the total synaptic conductance and plasticity step sizes are also considered. Such fluctuations result in a smoothing of the first-order transition for a low number of afferent synapses per neuron and a broadening of the synaptic-weight distribution, respectively.
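The mapping of plasticity dynamics to an asymmetric random walk makes the stationary synaptic-weight distribution computable from the master equation by detailed balance. A minimal sketch for a finite chain of weight states; the rates below are illustrative, not the paper's physiological values:

```python
def stationary_distribution(p_up, q_down):
    """Stationary distribution of a birth-death chain (asymmetric random walk)
    on states 0..K, from detailed balance pi[k+1]*q_down[k+1] == pi[k]*p_up[k].
    p_up[k]: rate of k -> k+1 (length K); q_down[k]: rate of k -> k-1
    (length K+1, entry 0 unused)."""
    K = len(p_up)
    pi = [1.0]
    for k in range(K):
        pi.append(pi[-1] * p_up[k] / q_down[k + 1])
    total = sum(pi)
    return [x / total for x in pi]

# A small upward bias skews the weight distribution toward the upper bound:
dist = stationary_distribution([2.0, 2.0, 2.0], [0.0, 1.0, 1.0, 1.0])
```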
Weighted image de-fogging using luminance dark prior
NASA Astrophysics Data System (ADS)
Kansal, Isha; Kasana, Singara Singh
2017-10-01
In this work, the weighted image de-fogging process based upon the dark channel prior is modified by using a luminance dark prior. The dark channel prior estimates the transmission using three colour channels, whereas the luminance dark prior does the same using only the Y component of the YUV colour space. For each pixel in a patch of ? size, the luminance dark prior uses ? pixels, rather than the ? pixels used in the DCP technique, which speeds up the de-fogging process. To estimate the transmission map, a weighted approach based upon a difference prior is used, which mitigates halo artefacts at the time of transmission estimation. The major drawback of the weighted technique is that it does not maintain the constancy of the transmission in a local patch even if there are no significant depth disruptions, due to which the de-fogged image looks over-smooth and has low contrast. Apart from this, in some images the weighted transmission still carries faint but visible halo artefacts. Therefore, a Gaussian filter is used to blur the estimated weighted transmission map, which enhances the contrast of the de-fogged images. In addition, a novel approach is proposed to remove the pixels belonging to bright light source(s) during the atmospheric light estimation process, based upon the histogram of the YUV colour space. To show its effectiveness, the proposed technique is compared with existing techniques; this comparison shows that the proposed technique performs better than the existing ones.
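A single-channel (luminance) analogue of the dark-channel transmission estimate can be sketched as below. The function and parameter names are illustrative, `A` stands for the estimated atmospheric light, and the weighting and Gaussian-blur refinements described above are not reproduced:

```python
import numpy as np

def transmission_luminance(Y, A, patch=3, omega=0.95):
    """Transmission map t = 1 - omega * min over a local patch of (Y / A),
    computed from the Y (luminance) channel only, so a k x k patch needs
    k*k values instead of the 3*k*k needed by a three-channel dark channel."""
    h, w = Y.shape
    r = patch // 2
    norm = Y / A
    t = np.empty_like(norm)
    for i in range(h):
        for j in range(w):
            window = norm[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            t[i, j] = 1.0 - omega * float(window.min())
    return np.clip(t, 0.1, 1.0)   # lower bound avoids blow-up when dividing later
```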
Song, Qi; Wu, Xiaodong; Liu, Yunlong; Smith, Mark; Buatti, John; Sonka, Milan
2009-01-01
We present a novel method for globally optimal surface segmentation of multiple mutually interacting objects, incorporating both edge and shape knowledge in a 3-D graph-theoretic approach. Hard surface interaction constraints are enforced in the interacting regions, preserving the geometric relationship of those partially interacting surfaces. A soft smoothness a priori shape-compliance term is introduced into the energy functional to provide shape guidance. The globally optimal surfaces can be simultaneously achieved by solving a maximum flow problem based on an arc-weighted graph representation. Representing the segmentation problem in an arc-weighted graph, one can incorporate a wider spectrum of constraints into the formulation, thus increasing segmentation accuracy and robustness in volumetric image data. To the best of our knowledge, our method is the first attempt to introduce the arc-weighted graph representation into the graph-searching approach for simultaneous segmentation of multiple partially interacting objects, which admits a globally optimal solution in low-order polynomial time. Our new approach was applied to the simultaneous surface detection of the bladder and prostate. The result was quite encouraging in spite of the low saliency of the bladder and prostate in CT images.
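The optimization step named above, a maximum-flow computation on an arc-weighted graph, can be illustrated with a compact Edmonds-Karp solver on a toy graph; real surface-segmentation graphs are far larger and encode the geometric constraints in their arc capacities.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp maximum flow; capacity is {u: {v: cap}}. The saturated
    arcs of the final residual graph define the minimum cut, which is what
    graph-based segmentation reads off as the optimal surface."""
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():          # add reverse residual arcs
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {s: None}                    # BFS for a shortest augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t                       # walk the path back to the source
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:                     # augment along the path
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck
```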
Colombian reference growth curves for height, weight, body mass index and head circumference.
Durán, Paola; Merker, Andrea; Briceño, Germán; Colón, Eugenia; Line, Dionne; Abad, Verónica; Del Toro, Kenny; Chahín, Silvia; Matallana, Audrey Mary; Lema, Adriana; Llano, Mauricio; Céspedes, Jaime; Hagenäs, Lars
2016-03-01
Published growth studies from Latin America are limited to growth references from Argentina and Venezuela. The aim of this study was to construct reference growth curves for height, weight, body mass index (BMI) and head circumference of Colombian children, in a format that is useful for following the growth of the individual child and as a tool for public health. Prospective measurements from 27 209 Colombian children from middle and upper socio-economic level families were processed using generalised additive models for location, scale and shape (GAMLSS). Descriptive statistics for length and height, weight, BMI and head circumference for age are given as raw and smoothed values. Final height was 172.3 cm for boys and 159.4 cm for girls. Weight at 18 years of age was 64.0 kg for boys and 54 kg for girls. Growth curves are presented in a ± 3 SD format using logarithmic axes. The constructed reference growth curves are a start for following secular trends in Colombia and, in the presented layout, are also an optimal clinical tool for health care. ©2015 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas
2018-06-01
In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.
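For intuition, the weighted-TV regularization idea can be sketched in 1-D with a smoothed (and therefore differentiable) surrogate of the TV term, minimized by plain gradient descent. The paper instead treats the exact non-smooth functional through its saddle-point formulation, so everything below (names, step size, smoothing eps) is an illustrative simplification.

```python
import numpy as np

def weighted_tv_denoise(f, w, lam=1.0, eps=0.1, step=0.02, iters=2000):
    """Gradient descent on a smoothed 1-D weighted-TV energy
        E(u) = 0.5*sum((u - f)**2) + lam*sum_i w[i]*sqrt((u[i+1]-u[i])**2 + eps**2),
    where the weights w[i] localise the strength of the regularisation."""
    f = np.asarray(f, dtype=float)
    w = np.asarray(w, dtype=float)
    u = f.copy()
    for _ in range(iters):
        d = np.diff(u)
        g = w * d / np.sqrt(d**2 + eps**2)   # derivative of the smoothed TV terms
        grad = u - f
        grad[:-1] -= lam * g
        grad[1:] += lam * g
        u -= step * grad
    return u
```

Setting some w[i] to zero reproduces the "structural" idea: jumps known from a guide image are left unpenalised.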
The, Bertram; Hosman, Anton; Kootstra, Johan; Kralj-Iglic, Veronika; Flivik, Gunnar; Verdonschot, Nico; Diercks, Ron
2008-01-01
The main long-term concern with total hip replacements is aseptic loosening of the prosthesis, so optimizing the biomechanics of the hip joint is necessary for long-term success. A widely implementable tool to predict the biomechanical consequences of preoperatively planned reconstructions still has to be developed. A potentially useful model for this purpose has been developed previously. The aim of this study is to quantify the association between the hip joint contact force estimated by this biomechanical model and RSA-measured wear rates in a clinical setting. Thirty-one patients with a total hip replacement were measured with RSA, the gold standard for clinical wear measurements. The reference examination was done within 1 week of the operation and the follow-up examinations were done at 1, 2 and 5 years. Conventional pelvic X-rays were taken on the same day. The contact stress distribution in the hip joint was determined by the computer program HIPSTRESS. The procedure for the determination of the hip joint contact stress distribution is based on the mathematical model of the resultant hip force in the one-legged stance and the mathematical model of the contact stress distribution. The model for the force requires as input data several geometrical parameters of the hip and the body weight, while the model for stress requires as input data the magnitude and direction of the resultant hip force. The stress distribution is characterized by the peak stress, the maximal value of stress on the weight-bearing area (p(max)), and by the peak stress normalized by body weight (p(max)/W(B)), which isolates the effect of hip geometry. Visualization of the relations between the values predicted by the model and the wear at different points in the follow-up was done using scatterplots. Correlations were expressed as Pearson r values.
The predicted p(max) and wear were clearly correlated in the first year post-operatively (r = 0.58, p = 0.002), while this correlation is weaker after 2 years (r = 0.19, p = 0.337) and 5 years (r = 0.24, p = 0.235). The wear values at 1, 2 and 5 years post-operatively correlate with each other in the way that is expected considering the wear velocity curve of the whole group. The correlation between the predicted p(max) values of two observers who were blinded for each other's results was very good (r = 0.93, p < 0.001). We conclude that the biomechanical model used in this paper provides a scientific foundation for the development of a new way of constructing preoperative biomechanical plans for total hip replacements.
Poly (ricinoleic acid) based novel thermosetting elastomer.
Ebata, Hiroki; Yasuda, Mayumi; Toshima, Kazunobu; Matsumura, Shuichi
2008-01-01
A novel bio-based thermosetting elastomer was prepared by the lipase-catalyzed polymerization of methyl ricinoleate with subsequent vulcanization. Some mechanical properties of the cured carbon black-filled polyricinoleate compounds were evaluated as a thermosetting elastomer. It was found that the carbon black-filled polyricinoleate compounds were readily cured by sulfur curatives to produce a thermosetting elastomer that formed a rubber-like sheet with a smooth and non-sticky surface. The curing behaviors and mechanical properties were dependent on both the molecular weight of the polyricinoleate and the amount of the sulfur curatives. Cured compounds consisting of polyricinoleate with a molecular weight of 100,800 showed good mechanical properties, such as a hardness of 48 Shore A (durometer A measurement), a tensile strength at break of 6.91 MPa and an elongation at break of 350%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glover, W. J., E-mail: williamjglover@gmail.com
2014-11-07
State averaged complete active space self-consistent field (SA-CASSCF) is a workhorse for determining the excited-state electronic structure of molecules, particularly for states with multireference character; however, the method suffers from known issues that have prevented its wider adoption. One issue is the presence of discontinuities in potential energy surfaces when a state that is not included in the state averaging crosses with one that is. In this communication I introduce a new dynamical weight with spline (DWS) scheme that mimics SA-CASSCF while removing energy discontinuities due to unweighted state crossings. In addition, analytical gradients for DWS-CASSCF (and other dynamically weighted schemes) are derived for the first time, enabling energy-conserving excited-state ab initio molecular dynamics in instances where SA-CASSCF fails.
Statistical Optimality in Multipartite Ranking and Ordinal Regression.
Uematsu, Kazuki; Lee, Yoonkyung
2015-05-01
Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that for a certain class of convex loss functions including exponential loss, the optimal ranking function can be represented as a ratio of weighted conditional probability of upper categories to lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods such as the proportional odds model in statistics with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth list-wise ranking measures such as the discounted cumulative gain and preference learning. We illustrate our findings with a simulation study and real data analysis.
Hosseini, Sayed-Mohsen; Maracy, Mohamad-Reza; Sarrafzade, Sheida; Kelishadi, Roya
2014-01-01
Background: Growth is one of the most important indices in child health. The best and most effective way to investigate child health is measuring physical growth indices such as weight, height and head circumference. Among these measures, weight growth is the simplest and most effective way to determine child growth status. Weight trend at a given age is the result of cumulative growth experience, whereas growth velocity represents what is happening at the time. Methods: This longitudinal study was conducted among 606 children repeatedly measured from birth until 2 years of age. We used a linear mixed model to analyze repeated measures and to determine factors affecting the growth trajectory. A LOWESS smooth curve was used to draw velocity curves. Results: Gender, child rank, birth status and feeding mode had a significant effect on the weight trajectory. Boys had higher weight during the study. Infants with exclusive breast feeding had higher weight than other infants. Boys had higher growth velocity up to the age of 6 months. Breast-fed infants had higher growth velocity up to 6 months, but thereafter the velocity was higher in other infants. Conclusions: Many studies have investigated child growth, but most of them used a cross-sectional design. In this study, we used a longitudinal method to determine the factors affecting the weight trend in children from birth until 2 years of age. The effects of perinatal factors on further growth should be considered for prevention of growth disorders and their late complications. PMID:24829720
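The LOWESS smoothing used above to draw the velocity curves can be sketched in a few lines. This is a minimal, degree-1 variant with tricube weights and no robustness iterations (the classic algorithm adds them), not the authors' actual code; the neighbourhood fraction `frac` is an illustrative choice.

```python
import numpy as np

def lowess(x, y, frac=0.4):
    """Locally weighted scatterplot smoothing: for each point, fit a
    weighted straight line through its nearest frac*n neighbours using
    tricube weights, and take the fit's value at that point."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(2, int(np.ceil(frac * n)))
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]            # k nearest neighbours
        h = d[idx].max() or 1.0            # local bandwidth
        w = (1.0 - (d[idx] / h) ** 3) ** 3 # tricube kernel weights
        sw = np.sqrt(w)
        A = np.column_stack([np.ones(k), x[idx]])
        beta = np.linalg.lstsq(A * sw[:, None], y[idx] * sw, rcond=None)[0]
        fitted[i] = beta[0] + beta[1] * x[i]
    return fitted
```

On exactly linear data the local linear fits reproduce the input, which makes a convenient sanity check.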
Approach for Structurally Clearing an Adaptive Compliant Trailing Edge Flap for Flight
NASA Technical Reports Server (NTRS)
Miller, Eric J.; Lokos, William A.; Cruz, Josue; Crampton, Glen; Stephens, Craig A.; Kota, Sridhar; Ervin, Gregory; Flick, Pete
2015-01-01
The Adaptive Compliant Trailing Edge (ACTE) flap was flown on the NASA Gulfstream GIII test bed at the NASA Armstrong Flight Research Center. This smoothly curving flap replaced the existing Fowler flaps creating a seamless control surface. This compliant structure, developed by FlexSys Inc. in partnership with Air Force Research Laboratory, supported NASA objectives for airframe structural noise reduction, aerodynamic efficiency, and wing weight reduction through gust load alleviation. A thorough structures airworthiness approach was developed to move this project safely to flight.
Clinical and histopathological features of adenomas of the ciliary pigment epithelium.
Chang, Ying; Wei, Wen Bin; Shi, Ji Tong; Xian, Jun Fang; Yang, Wen Li; Xu, Xiao Lin; Bai, Hai Xia; Li, Bin; Jonas, Jost B
2016-11-01
Adenomas of the ciliary pigment epithelium (CPE) are rare benign tumours that mainly have to be differentiated from malignant ciliary body melanomas. Here we report on a consecutive series of patients with CPE adenomas and describe their characteristics. The retrospective hospital-based case series study included all patients who were consecutively operated on for CPE adenomas. Of the 110 patients treated for ciliary body tumours, five patients (4.5%) had a CPE adenoma. Mean age was 59.0 ± 9.9 years (range: 46-72 years). Mean tumour apical thickness was 6.6 ± 1.7 mm. Tumour colour was mostly homogeneously brown to black, and the tumour surface was smooth. The tumour masses pushed the iris tissue forward without infiltrating the iris or anterior chamber angle. Sonography revealed an irregular echogram with sharp lesion borders and signs of blood flow in Color Doppler flow imaging. Ultrasonographic biomicroscopy demonstrated medium-low internal reflectivity and acoustic attenuation. In magnetic resonance imaging (MRI), the tumours, as compared to brain tissue, were hyperintense on T1-weighted images and hypointense on T2-weighted images. Tumour tissue consisted of cords and nests of pigment epithelium cells separated by septa of vascularized fibrous connective tissue, leading to a pseudo-glandular appearance. The melanin granules in the cytoplasm were large and mostly spherical in shape. In four patients, the tumours were hyperpigmented. Tumour cells were large with round or oval nuclei and clearly detectable nucleoli. These clinical characteristics of CPE adenomas, such as homogeneous dark brown colour, smooth surface, iris dislocation and anterior chamber angle narrowing without iris infiltration, segmental cataract, pigment dispersion, and, on T2-weighted MRI images, hypointensity compared to brain tissue and hyperintensity compared to extraocular muscles or the lacrimal gland, may be helpful for the differentiation from ciliary body malignant melanomas.
© 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Bakst, Leah; Fleuriet, Jérome; Mustari, Michael J
2017-05-01
Neurons in the smooth eye movement subregion of the frontal eye field (FEFsem) are known to play an important role in voluntary smooth pursuit eye movements. Underlying this function are projections to parietal and prefrontal visual association areas and subcortical structures, all known to play vital but differing roles in the execution of smooth pursuit. Additionally, the FEFsem has been shown to carry a diverse array of signals (e.g., eye velocity, acceleration, gain control). We hypothesized that distinct subpopulations of FEFsem neurons subserve these diverse functions and projections, and that the relative weights of retinal and extraretinal signals could form the basis for categorization of units. To investigate this, we used a step-ramp tracking task with a target blink to determine the relative contributions of retinal and extraretinal signals in individual FEFsem neurons throughout pursuit. We found that the contributions of retinal and extraretinal signals to neuronal activity and behavior change throughout the time course of pursuit. A clustering algorithm revealed three distinct neuronal subpopulations: cluster 1 was defined by a higher sensitivity to eye velocity, acceleration, and retinal image motion; cluster 2 had greater activity during blinks; and cluster 3 had significantly greater eye position sensitivity. We also performed a comparison with a sample of medial superior temporal neurons to assess similarities and differences between the two areas. Our results indicate the utility of simple tests such as the target blink for parsing the complex and multifaceted roles of cortical areas in behavior. NEW & NOTEWORTHY The frontal eye field (FEF) is known to play a critical role in volitional smooth pursuit, carrying a variety of signals that are distributed throughout the brain. 
This study used a novel application of a target blink task during step ramp tracking to determine, in combination with a clustering algorithm, the relative contributions of retinal and extraretinal signals to FEF activity and the extent to which these contributions could form the basis for a categorization of neurons. Copyright © 2017 the American Physiological Society.
Kovic, Bruno; Guyatt, Gordon; Brundage, Michael; Thabane, Lehana; Bhatnagar, Neera; Xie, Feng
2016-01-01
Introduction There is an increasing number of new oncology drugs being studied, approved and put into clinical practice based on improvement in progression-free survival, when no overall survival benefits exist. In oncology, the association between progression-free survival and health-related quality of life is currently unknown, despite its importance for patients with cancer, and the unverified assumption that longer progression-free survival indicates improved health-related quality of life. Thus far, only 1 study has investigated this association, providing insufficient evidence and inconclusive results. The objective of this study protocol is to provide increased transparency in supporting a systematic summary of the evidence bearing on this association in oncology. Methods and analysis Using the OVID platform in MEDLINE, Embase and Cochrane databases, we will conduct a systematic review of randomised controlled human trials addressing oncology issues published starting in 2000. A team of reviewers will, in pairs, independently screen and abstract data using standardised, pilot-tested forms. We will employ numerical integration to calculate mean incremental area under the curve between treatment groups in studies for health-related quality of life, along with total related error estimates, and a 95% CI around incremental area. To describe the progression-free survival to health-related quality of life association, we will construct a scatterplot for incremental health-related quality of life versus incremental progression-free survival. To estimate the association, we will use a weighted simple regression approach, comparing mean incremental health-related quality of life with either median incremental progression-free survival time or the progression-free survival HR, in the absence of overall survival benefit. 
Discussion Identifying direction and magnitude of association between progression-free survival and health-related quality of life is critically important in interpreting results of oncology trials. Systematic evidence produced from our study will contribute to improvement of patient care and practice of evidence-based medicine in oncology. PMID:27591026
Neethling, Ian; Jelsma, Jennifer; Ramma, Lebogang; Schneider, Helen; Bradshaw, Debbie
2016-01-01
The global burden of disease (GBD) 2010 study used a universal set of disability weights to estimate disability adjusted life years (DALYs) by country. However, it is not clear whether these weights can be applied universally in calculating DALYs to inform local decision-making. This study derived disability weights for a resource-constrained community in Cape Town, South Africa, and interrogated whether the GBD 2010 disability weights necessarily represent the preferences of economically disadvantaged communities. A household survey was conducted in Lavender Hill, Cape Town, to assess the health state preferences of the general public. The responses from a paired comparison valuation method were assessed using a probit regression. The probit coefficients were anchored onto the 0 to 1 disability weight scale by running a lowess regression on the GBD 2010 disability weights and interpolating the coefficients between the upper and lower limit of the smoothed disability weights. Heroin and opioid dependence had the highest disability weight of 0.630, whereas intellectual disability had the lowest (0.040). Untreated injuries ranked higher than severe mental disorders. There were some counterintuitive results, such as moderate (15th) and severe vision impairment (16th) ranking higher than blindness (20th). A moderate correlation between the disability weights of the local study and those of the GBD 2010 study was observed (R² = 0.440, p < 0.05). This indicates that there was a relationship, although some conditions, such as untreated fracture of the radius or ulna, showed large variability in disability weights (0.488 in the local study and 0.043 in GBD 2010). Respondents seemed to value physical mobility higher than cognitive functioning, which is in contrast to the GBD 2010 study. This study shows that not all health state preferences are universal.
Studies estimating DALYs need to derive local disability weights using methods that are less cognitively demanding for respondents.
Hartmann-Boyce, Jamie; Jebb, Susan; Albury, Charlotte; Nourse, Rebecca; Aveyard, Paul
2017-01-01
Background Significant weight loss takes several months to achieve, and behavioral support can enhance weight loss success. Weight loss apps could provide ongoing support and deliver innovative interventions, but to do so, developers must ensure user satisfaction. Objective The aim of this study was to conduct a review of Google Play Store apps to explore what users like and dislike about weight loss and weight-tracking apps and to examine qualitative feedback through analysis of user reviews. Methods The Google Play Store was searched and screened for weight loss apps using the search terms weight loss and weight track*, resulting in 179 mobile apps. A content analysis was conducted based on the Oxford Food and Activity Behaviors taxonomy. Correlational analyses were used to assess the association between complexity of mobile health (mHealth) apps and popularity indicators. The sample was then screened for popular apps that primarily focus on weight-tracking. For the resulting subset of 15 weight-tracking apps, 569 user reviews were sampled from the Google Play Store. Framework and thematic analysis of user reviews was conducted to assess which features users valued and how design influenced users’ responses. Results The complexity (number of components) of weight loss apps was significantly positively correlated with the rating (r=.25; P=.001), number of reviews (r=.28; P<.001), and number of downloads (r=.48; P<.001) of the app. In contrast, in the qualitative analysis of weight-tracking apps, users expressed preference for simplicity and ease of use. In addition, we found that positive reinforcement through detailed feedback fostered users’ motivation for further weight loss. Smooth functioning and reliable data storage emerged as critical prerequisites for long-term app usage. 
Conclusions Users of weight-tracking apps valued simplicity, whereas users of comprehensive weight loss apps appreciated availability of more features, indicating that complexity demands are specific to different target populations. The provision of feedback on progress can motivate users to continue their weight loss attempts. Users value seamless functioning and reliable data storage. PMID:29273575
Hybrid Weighted Minimum Norm Method: a new method based on LORETA to solve the EEG inverse problem.
Song, C; Zhuang, T; Wu, Q
2005-01-01
This paper brings forward a new method to solve the EEG inverse problem. It is based on the following physiological characteristics of neural electrical activity sources: first, neighbouring neurons are prone to activate synchronously; second, the distribution of the source space is sparse; third, the activity of the sources is highly centralized. We take this prior knowledge as the prerequisite for developing the EEG inverse solution, without assuming other characteristics of the solution, to realize the most common 3D EEG reconstruction map. The proposed algorithm takes advantage of LORETA, a low-resolution method that emphasizes 'localization', and FOCUSS, a high-resolution method that emphasizes 'separability'. The method remains within the framework of the weighted minimum norm method. The keystone is to construct a weighting matrix that draws on the existing smoothness operator, a competition mechanism and a learning algorithm. The basic procedure is to first obtain an initial estimate of the solution, then construct a new estimate using the information in the initial solution, repeating this process until the solutions from the last two estimation steps remain unchanged.
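The iterative scheme described above, estimate, reweight, re-estimate until the solution stops changing, can be sketched as a FOCUSS-style iteratively reweighted minimum-norm loop. The lead field `L`, regularization `lam` and fixed iteration count are illustrative assumptions; the paper's actual weighting matrix also incorporates a smoothness operator, competition mechanism and learning algorithm, which are omitted here.

```python
import numpy as np

def reweighted_min_norm(L, b, n_iter=30, lam=1e-6):
    """FOCUSS-style loop: each pass solves a weighted minimum-norm
    problem whose weights come from the previous estimate, so energy
    concentrates onto a sparse set of sources."""
    m, n = L.shape
    x = np.ones(n)                           # initial uniform estimate
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + 1e-12)       # weights from last solution
        A = L @ W
        # weighted minimum-norm step: x = W A^T (A A^T + lam I)^{-1} b
        x = W @ (A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), b))
    return x
```

Whatever support the iteration settles on, the returned solution still reproduces the measurements, so the residual is a useful sanity check.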
High-performance functional ecopolymers based on flora and fauna.
Kaneko, Tatsuo
2007-01-01
Liquid crystalline (LC) polymers of rigid monomers based on flora and fauna were prepared by in-bulk polymerization. Para-coumaric (p-coumaric) acid [4-hydroxycinnamic acid (4HCA)] and its derivatives were selected as phytomonomers and bile acids were selected as biomonomers. The 4HCA homopolymer showed a thermotropic LC phase only in a state of low molecular weight. The copolymers of 4HCA with bile acids such as lithocholic acid (LCA) and cholic acid (CA) showed excellent cell compatibilities but low molecular weights. However, P(4HCA-co-CA)s allowed LC spinning to create molecularly oriented biofibers, presumably due to the chain entanglement that occurs during in-bulk chain propagation into hyperbranching architecture. P[4HCA-co-3,4-dihydroxycinnamic acid (DHCA)]s showed high molecular weight, high mechanical strength, high Young's modulus, and high softening temperature, which may be achieved through the entanglement by in-bulk formation of hyperbranching, rigid structures. P(4HCA-co-DHCA)s showed a smooth hydrolysis, in-soil degradation, and photo-tunable hydrolysis. Thus, P(4HCA-co-DHCA)s might be applied as an environmentally degradable plastic with extremely high performance.
Cholesterol Curves to Identify Population Norms by Age and Sex in Healthy Weight Children
Skinner, Asheley Cockrell; Steiner, Michael J.; Chung, Arlene E.; Perrin, Eliana M.
2012-01-01
Objective Develop clinically applicable charts of lipid values illustrating fluctuations throughout childhood and by sex among healthy weight children. Methods The National Health and Nutrition Examination Survey (1999–2008) was used to estimate total cholesterol, high-density lipoprotein (HDL), low-density lipoprotein (LDL), and triglycerides by age and sex in healthy weight children age 3 to 17 years. Using LMS procedures, the authors created smoothed curves demonstrating population-based 50th percentile for age and the 75th and 95th percentiles. Results The curves were based on 7681 children meeting inclusion criteria. Total cholesterol, HDL, and LDL demonstrated peaks at approximately 8 to 12 years for boys. Similar peaks were evident for girls at slightly younger ages, approximately 7 to 11 years. Triglycerides showed peaks for girls, but values were similar across ages for boys. Conclusions The use of fixed lipid value cutoffs in established guidelines regardless of age or sex likely mislabels many children as abnormal. The authors’ charts may allow for a more nuanced interpretation based on population norms. PMID:22157422
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-11-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps (i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano). This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source geographic information system Quantum GIS, which is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the selection of an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input data sets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
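The core computation, a Gaussian-kernel PDF over past vent locations with several PDFs combined by a weighted summation, can be sketched as below. This is not QVAST's actual code (QVAST is a Quantum GIS plugin); the bandwidth value and evaluation grid are illustrative.

```python
import numpy as np

def vent_pdf(vents, grid, bandwidth):
    """Gaussian-kernel spatial PDF of past vent locations, evaluated on
    grid points and normalised so the values sum to 1 over the grid."""
    diff = grid[:, None, :] - vents[None, :, :]           # (G, V, 2)
    sq = (diff ** 2).sum(axis=-1) / (2.0 * bandwidth ** 2)
    dens = np.exp(-sq).sum(axis=1)
    return dens / dens.sum()

def susceptibility(pdfs, weights):
    """Total susceptibility map: weighted summation of the per-dataset
    PDFs, with the weights normalised to sum to 1."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, pdfs))
```

Because each input PDF sums to 1 and the weights are normalised, the combined map is itself a valid probability distribution over the grid.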
NASA Astrophysics Data System (ADS)
Verma, Aditya; Kumar, Manoj; Patil, Anil Kumar
2018-04-01
The application of compact heat exchangers in any thermal system improves overall performance with a considerable reduction in size and weight. Inserts of different geometrical features have been used as turbulence-promoting devices to increase heat transfer rates. The present study deals with the experimental investigation of heat transfer and fluid flow characteristics of a tubular heat exchanger fitted with modified helical coiled inserts. Experiments have been carried out for a smooth tube without an insert, a tube fitted with helical coiled inserts, and modified helical coiled inserts. The helical coiled inserts are tested by varying the pitch ratio and wire diameter ratio from 0.5-1.5 and 0.063-0.125, respectively, for the Reynolds number range of 1400 to 11,000. Experimental data have also been collected for the modified helical coiled inserts with gradually increasing pitch (GIP) and gradually decreasing pitch (GDP) configurations. The Nusselt number and friction factor values for helical coiled inserts are enhanced in the ranges of 1.42-2.62 and 3.4-27.4, respectively, relative to the smooth tube. The modified helical coiled inserts showed enhancements in Nusselt number and friction factor values in the ranges of 1.49-3.14 and 11.2-19.9, respectively, relative to the smooth tube. The helical coiled and modified helical coiled inserts have thermo-hydraulic performance factors in the ranges of 0.59-1.29 and 0.6-1.39, respectively. Empirical correlations of Nusselt number and friction factor for the helical coiled inserts are proposed.
Jahani, Sahar; Setarehdan, Seyed K; Boas, David A; Yücel, Meryem A
2018-01-01
Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent in removing motion-induced sharp spikes, the baseline shifts in the signal remain after this type of filtering. Methods, such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies the baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, on the other hand, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We have compared our new approach with the existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error ([Formula: see text]), Pearson's correlation ([Formula: see text]), and the area under the receiver operator characteristic curve. We found that spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.
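The hybrid idea, fix baseline shifts first and then smooth the residual spikes, can be sketched as follows. Shift detection, the spline/tPCA baseline step and the exact filter settings are simplified assumptions here: detected shift indices are passed in, the baseline is re-aligned by subtracting the jump (standing in for spline interpolation), and a hand-rolled Savitzky-Golay filter smooths what remains.

```python
import numpy as np

def savgol(sig, window=11, order=3):
    """Savitzky-Golay smoothing: fit a local polynomial in a sliding
    window and take its value at the window centre."""
    half = window // 2
    pad = np.pad(sig, half, mode='edge')
    t = np.arange(window) - half
    out = np.empty(len(sig))
    for i in range(len(sig)):
        coef = np.polyfit(t, pad[i:i + window], order)
        out[i] = np.polyval(coef, 0.0)
    return out

def hybrid_correct(sig, shift_idx, window=11, order=3):
    """Hybrid sketch: correct baseline shifts at the given (assumed
    pre-detected) indices by re-aligning the signal, then smooth the
    remaining sharp spikes with the Savitzky-Golay filter."""
    out = np.asarray(sig, float).copy()
    for i in shift_idx:
        out[i:] -= out[i] - out[i - 1]   # remove the jump at i
    return savgol(out, window, order)
```

A step-plus-spike test signal shows the division of labour: the re-alignment removes the step exactly, and the filter attenuates the spike.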
Using the LMS method to calculate z-scores for the Fenton preterm infant growth chart.
Fenton, T R; Sauve, R S
2007-12-01
The use of exact percentiles and z-scores permits optimal assessment of infants' growth. In addition, z-scores allow the precise description of size outside of the 3rd and 97th percentiles of a growth reference. To calculate percentiles and z-scores, health professionals require the LMS parameters (Lambda for the skew, Mu for the median, and Sigma for the generalized coefficient of variation; Cole, 1990). The objective of this study was to calculate the LMS parameters for the Fenton preterm growth chart (2003). Secondary data analysis of the Fenton preterm growth chart data. The Cole methods were used to produce the LMS parameters and to smooth the L parameter. New percentiles were generated from the smoothed LMS parameters, which were then compared with the original growth chart percentiles. The maximum differences between the original percentile curves and the percentile curves generated from the LMS parameters were: for weight, a difference of 66 g (2.9%) at 32 weeks along the 90th percentile; for head circumference, differences of 0.3 cm (0.6-1.0%); and for length, a difference of 0.5 cm (1.6%) at 22 weeks on the 97th percentile. The percentile curves generated from the smoothed LMS parameters for the Fenton growth chart are similar to the original curves. These LMS parameters for the Fenton preterm growth chart facilitate the calculation of z-scores, which will permit the more precise assessment of growth of infants who are born preterm.
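Cole's LMS formulas make the z-score calculation a one-liner: given Lambda (L), Mu (M) and Sigma (S) at an age, the z-score of a measurement x, and the inverse (the measurement at a given z, i.e., a percentile curve value), are:

```python
import math

def lms_z(x, L, M, S):
    """z-score from LMS parameters (Cole's method):
    z = ((x/M)**L - 1) / (L*S) when L != 0, else ln(x/M) / S."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

def lms_x(z, L, M, S):
    """Inverse: the measurement at a given z-score, i.e., a point on a
    percentile curve: x = M * (1 + L*S*z)**(1/L), or M*exp(S*z) at L=0."""
    if L == 0:
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)
```

The parameter values used in the check below are made up for illustration, not taken from the Fenton chart.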
Adaptive regularization of the NL-means: application to image and video denoising.
Sutour, Camille; Deledalle, Charles-Alban; Aujol, Jean-François
2014-08-01
Image denoising is a central problem in image processing and it is often a necessary step prior to higher level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image; they perform a weighted average of pixels whose neighborhoods (patches) are close to each other. This reduces significantly the noise while preserving most of the image content. While it performs well on flat areas and textures, it suffers from two opposite drawbacks: it might over-smooth low-contrasted areas or leave a residual noise around edges and singular structures. Denoising can also be performed by total variation minimization-the Rudin, Osher and Fatemi model-which leads to restore regular images, but it is prone to over-smooth textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defaults by minimizing an adaptive total variation with a nonlocal data fidelity term. Besides, this model adapts to different noise statistics and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and we adapt it to video denoising with 3D patches.
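The NL-means weighted average described above can be sketched per pixel: candidate pixels whose surrounding patches resemble the target pixel's patch receive large weights. Patch size, search window and the filtering parameter `h` are illustrative choices; practical implementations vectorize this heavily.

```python
import numpy as np

def nl_means_pixel(img, i, j, patch=3, search=7, h=0.1):
    """Denoise one pixel by the NL-means weighted average: weights decay
    with the mean squared distance between the pixel's patch and the
    patches of candidate pixels in a surrounding search window."""
    r, s = patch // 2, search // 2
    pad = np.pad(img, r + s, mode='reflect')
    ci, cj = i + r + s, j + r + s
    ref = pad[ci - r:ci + r + 1, cj - r:cj + r + 1]
    num = den = 0.0
    for di in range(-s, s + 1):
        for dj in range(-s, s + 1):
            cand = pad[ci + di - r:ci + di + r + 1,
                       cj + dj - r:cj + dj + r + 1]
            d2 = ((ref - cand) ** 2).mean()   # patch dissimilarity
            w = np.exp(-d2 / h ** 2)
            num += w * pad[ci + di, cj + dj]
            den += w
    return num / den
```

On a constant image every patch matches perfectly, so the weighted average returns the constant unchanged.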
Extraction and electrospinning of gelatin from fish skin.
Songchotikunpan, Panida; Tattiyakul, Jirarat; Supaphol, Pitt
2008-04-01
Ultra-fine gelatin fibers were successfully fabricated by electrospinning from the solutions of Nile tilapia (Oreochromis niloticus) skin-extracted gelatin in either acetic acid or formic acid aqueous solutions. The extracted gelatin contained 7.3% moisture, 89.4% protein, 0.3% lipid, and 0.4% ash contents (on the basis of wet weight), while the bloom gel strength, the shear viscosity, and the pH values were 328 g, 17.8 mPa s, and 5.0, respectively. Both the acid concentration and the concentration of the gelatin solutions strongly influenced the properties of the as-prepared solutions and the obtained gelatin fibers. At low acid concentrations (i.e., 15% (w/v) extracted gelatin solutions in 10 and 20% (v/v) acetic acid solvents or 10-60% (v/v) formic acid solvents), a combination between smooth and beaded fibers was observed. At low concentrations of the gelatin solutions in either 40% (v/v) acetic acid solvent or 80% (v/v) formic acid solvent (i.e., 5-11%, w/v), either discrete beads or beaded fibers were obtained, while, at higher concentrations (i.e., 14-29%, w/v), only smooth or a combination of smooth and beaded fibers were obtained. The average diameters of the obtained fibers, regardless of the types of the acid solvents used, ranged between 109 and 761 nm. Lastly, cross-linking of the obtained gelatin fiber mats with glutaraldehyde vapor caused slight shrinkage from their original dimension, and the cross-linked gelatin fiber mats became stiffer.
MARD—A moving average rose diagram application for the geosciences
NASA Astrophysics Data System (ADS)
Munro, Mark A.; Blenkinsop, Thomas G.
2012-12-01
MARD 1.0 is a computer program for generating smoothed rose diagrams by using a moving average, which is designed for use across the wide range of disciplines encompassed within the Earth Sciences. Available in MATLAB®, Microsoft® Excel and GNU Octave formats, the program is fully compatible with both Microsoft® Windows and Macintosh operating systems. Each version has been implemented in a user-friendly way that requires no prior experience in programming with the software. MARD conducts a moving average smoothing, a form of signal processing low-pass filter, upon the raw circular data according to a set of pre-defined conditions selected by the user. This form of signal processing filter smoothes the angular dataset, emphasising significant circular trends whilst reducing background noise. Customisable parameters include whether the data is uni- or bi-directional, the angular range (or aperture) over which the data is averaged, and whether an unweighted or weighted moving average is to be applied. In addition to the uni- and bi-directional options, the MATLAB® and Octave versions also possess a function for plotting 2-dimensional dips/pitches in a single, lower, hemisphere. The rose diagrams from each version are exportable as one of a selection of common graphical formats. Frequently employed statistical measures that determine the vector mean, mean resultant (or length), circular standard deviation and circular variance are also included. MARD's scope is demonstrated via its application to a variety of datasets within the Earth Sciences.
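MARD itself ships as MATLAB®, Excel and Octave code; the unweighted moving-average smoothing it applies to binned circular data can be sketched in Python as below (bin width and aperture values are illustrative). The averaging wraps around the circle, so there are no edge effects and bin totals are preserved.

```python
import numpy as np

def smoothed_rose(angles_deg, bin_width=10, aperture=30, bidirectional=False):
    """Moving-average rose diagram: bin the angles, then replace each bin
    count by the unweighted mean of the bins within the chosen angular
    aperture, wrapping around the circle."""
    period = 180 if bidirectional else 360
    a = np.mod(np.asarray(angles_deg, float), period)
    nbins = int(period // bin_width)
    counts, _ = np.histogram(a, bins=nbins, range=(0, period))
    half = max(1, aperture // (2 * bin_width))   # bins on each side
    idx = np.arange(nbins)
    smooth = np.zeros(nbins)
    for off in range(-half, half + 1):
        smooth += counts[(idx + off) % nbins]    # circular window
    return smooth / (2 * half + 1)
```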
Automatic intelligibility classification of sentence-level pathological speech
Kim, Jangwon; Kumar, Naveen; Tsiartas, Andreas; Li, Ming; Narayanan, Shrikanth S.
2014-01-01
Pathological speech usually refers to the condition of speech distortion resulting from atypicalities in voice and/or in the articulatory mechanisms owing to disease, illness or other physical or biological insult to the production system. Although automatic evaluation of speech intelligibility and quality could come in handy in these scenarios to assist experts in diagnosis and treatment design, the many sources and types of variability often make it a very challenging computational processing problem. In this work we propose novel sentence-level features to capture abnormal variation in the prosodic, voice quality and pronunciation aspects in pathological speech. In addition, we propose a post-classification posterior smoothing scheme which refines the posterior of a test sample based on the posteriors of other test samples. Finally, we perform feature-level fusions and subsystem decision fusion for arriving at a final intelligibility decision. The performances are tested on two pathological speech datasets, the NKI CCRT Speech Corpus (advanced head and neck cancer) and the TORGO database (cerebral palsy or amyotrophic lateral sclerosis), by evaluating classification accuracy without overlapping subjects’ data among training and test partitions. Results show that the feature sets of each of the voice quality subsystem, prosodic subsystem, and pronunciation subsystem, offer significant discriminating power for binary intelligibility classification. We observe that the proposed posterior smoothing in the acoustic space can further reduce classification errors. The smoothed posterior score fusion of subsystems shows the best classification performance (73.5% for unweighted, and 72.8% for weighted, average recalls of the binary classes). PMID:25414544
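One plausible reading of the post-classification posterior smoothing, refining a test sample's posterior using the posteriors of other test samples, is a k-nearest-neighbour blend in the acoustic feature space. The neighbourhood size `k` and blending weight `alpha` are assumptions, not the paper's values.

```python
import numpy as np

def smooth_posteriors(feats, post, k=3, alpha=0.5):
    """Blend each test sample's class posterior with the mean posterior
    of its k nearest neighbours in feature space, then renormalise."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude the sample itself
    nn = np.argsort(d, axis=1)[:, :k]      # k nearest neighbours
    neigh_mean = post[nn].mean(axis=1)
    out = (1.0 - alpha) * post + alpha * neigh_mean
    return out / out.sum(axis=1, keepdims=True)
```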
SU-E-I-01: Iterative CBCT Reconstruction with a Feature-Preserving Penalty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lyu, Q; Li, B; Southern Medical University, Guangzhou
2015-06-15
Purpose: Low-dose CBCT is desired in various clinical applications. Iterative image reconstruction algorithms have shown advantages in suppressing noise in low-dose CBCT. However, due to the smoothness constraint enforced during the reconstruction process, edges may be blurred and image features may be lost in the reconstructed image. In this work, we proposed a new penalty design to preserve image features in images reconstructed by iterative algorithms. Methods: Low-dose CBCT is reconstructed by minimizing the penalized weighted least-squares (PWLS) objective function. Binary Robust Independent Elementary Features (BRIEF) of the image were integrated into the penalty of PWLS. BRIEF is a general-purpose point descriptor that can be used to identify important features of an image. In this work, the BRIEF distance between two neighboring pixels was used to weight the smoothing parameter in PWLS. For pixel pairs with a large BRIEF distance, a weaker smoothness constraint is enforced, so that image features are better preserved. The performance of the PWLS algorithm with the BRIEF penalty was evaluated using a CatPhan 600 phantom. Results: The image quality reconstructed by the proposed PWLS-BRIEF algorithm is superior to that of the conventional PWLS method and the standard FDK method. At matched noise levels, edges in the PWLS-BRIEF reconstructed image are better preserved. Conclusion: This study demonstrated that the proposed PWLS-BRIEF algorithm has great potential for preserving image features in low-dose CBCT.
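One way such a feature-dependent smoothing weight could be formed is sketched below. The Hamming distance on binary descriptors is the standard comparison for BRIEF, but the exponential mapping and its scale here are illustrative assumptions, not the paper's exact design:

```python
import math

def hamming(d1, d2):
    """Hamming distance between two equal-length binary descriptors."""
    return sum(a != b for a, b in zip(d1, d2))

def smoothing_weight(d1, d2, beta=1.0, scale=4.0):
    """Smoothness-penalty weight for a neighbouring pixel pair.

    Pairs whose BRIEF-like descriptors differ strongly (likely an edge or
    image feature) receive a weaker smoothing constraint; similar pairs
    are smoothed at the full strength beta.  The exponential decay and
    scale are hypothetical choices for illustration.
    """
    return beta * math.exp(-hamming(d1, d2) / scale)

flat_pair = ("00110101", "00110101")  # identical descriptors: full smoothing
edge_pair = ("00110101", "11001010")  # maximally different: weight near zero
print(smoothing_weight(*flat_pair), smoothing_weight(*edge_pair))
```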
NASA Astrophysics Data System (ADS)
Nasir, Ahmad Fakhri Ab; Suhaila Sabarudin, Siti; Majeed, Anwar P. P. Abdul; Ghani, Ahmad Shahrizan Abdul
2018-04-01
Chicken eggs are a food in high demand by humans. Human operators cannot work perfectly and continuously when conducting egg grading. Instead of an egg grading system based on weight, an automatic system for egg grading using computer vision (based on egg shape parameters) can be used to improve the productivity of egg grading. However, an early hypothesis indicated that many egg class assignments change when grading by egg shape parameters rather than by weight. This paper presents a comparison of egg classification by the two above-mentioned methods. Firstly, 120 images of chicken eggs of various grades (A–D) produced in Malaysia are captured. Then, the egg images are processed using image pre-processing techniques such as image cropping, smoothing and segmentation. Thereafter, eight egg shape features, including area, major axis length, minor axis length, volume, diameter and perimeter, are extracted. Lastly, feature selection (information gain ratio) and feature extraction (principal component analysis) are performed using a k-nearest neighbour classifier in the classification process. Two methods, namely, supervised learning (using weight-based grades assigned by the egg supplier) and unsupervised learning (using egg shape parameters as graded by ourselves), are used in the experiment. Clustering results reveal many changes in egg classes after performing shape-based grading. On average, the best recognition result using shape-based grade labels is 94.16%, while that using weight-based labels is 44.17%. In conclusion, an automated egg grading system using computer vision is better served by shape-based features, since it works from images, whereas the weight parameter is more suitable for a weight-based grading system.
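The k-nearest-neighbour classification step can be sketched as follows; the toy feature vectors and grade labels are hypothetical, not the study's data:

```python
import math
from collections import Counter

def knn_classify(query, samples, k=3):
    """Classify a shape-feature vector by majority vote among its k nearest
    training samples under Euclidean distance, as in a k-NN classifier.
    `samples` is a list of (feature_vector, grade) pairs."""
    nearest = sorted(samples, key=lambda s: math.dist(query, s[0]))[:k]
    votes = Counter(grade for _, grade in nearest)
    return votes.most_common(1)[0][0]

# Toy shape features (major axis, minor axis, in mm); grades are illustrative.
training = [
    ((60.1, 45.0), "A"), ((59.5, 44.2), "A"), ((58.8, 44.9), "A"),
    ((52.3, 39.1), "C"), ((51.7, 38.5), "C"), ((53.0, 39.8), "C"),
]
print(knn_classify((59.0, 44.5), training))
```

In the paper the feature vectors would first pass through information-gain-ratio selection or PCA; here the raw two-feature toy vectors are used directly.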
Lee, Ju-Hee; Lee, Hyunseung; Joung, Yoon Ki; Jung, Kyung Hee; Choi, Jong-Hoon; Lee, Don-Haeng; Park, Ki Dong; Hong, Soon-Sun
2011-02-01
Low molecular weight heparin (LH) has been reported to have anti-fibrotic and anti-cancer effects. To enhance the efficacy and minimize adverse effects of LH, a low molecular weight heparin-pluronic nanogel (LHP) was synthesized by conjugating carboxylated pluronic F127 to LH. The LHP reduced anti-coagulant activity to about 33% of the innate activity. Liver fibrosis was induced by the injection of 1% dimethylnitrosamine (DMN) in rats, and LH or LHP (1000 IU/kg body weight) was administered once daily for 4 weeks. LHP administration prevented DMN-mediated liver weight loss and decreased the values of aspartate transaminase, alanine transaminase, total bilirubin, and direct bilirubin. LHP markedly reduced the fibrotic area compared to LH. Also, LHP potently inhibited mRNA or protein expression of alpha-smooth muscle actin, collagen type I, matrix metalloproteinase-2, and tissue inhibitor of metalloproteinase-1 compared to LH, in DMN-induced liver fibrosis. In addition, LHP decreased the expression of transforming growth factor-β(1) (TGF-β(1)), p-Smad 2, and p-Smad 3, which are all important molecules of the TGF-β/Smad signaling pathway. The results indicate that LHP exerts an anti-fibrotic effect in the liver via inhibition of the TGF-β/Smad pathway as well as by the elimination of the extracellular matrix. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zeng, Huihui
2017-10-01
For the gas-vacuum interface problem with physical singularity and the sound speed being {C^{{1}/{2}}}-Hölder continuous near vacuum boundaries of the isentropic compressible Euler equations with damping, the global existence of smooth solutions and the convergence to Barenblatt self-similar solutions of the corresponding porous media equation are proved in this paper for spherically symmetric motions in three dimensions; this is done by overcoming the analytical difficulties caused by the coordinate's singularity near the center of symmetry, and the physical vacuum singularity to which standard methods of symmetric hyperbolic systems do not apply. Various weights are identified to resolve the singularity near the vacuum boundary and the center of symmetry globally in time. The results obtained here contribute to the theory of global solutions to vacuum boundary problems of compressible inviscid fluids, for which the currently available results are mainly for the local-in-time well-posedness theory, and also to the theory of global smooth solutions of dissipative hyperbolic systems which fail to be strictly hyperbolic.
EXPLICIT LEAST-DEGREE BOUNDARY FILTERS FOR DISCONTINUOUS GALERKIN.
Nguyen, Dang-Manh; Peters, Jörg
2017-01-01
Convolving the output of Discontinuous Galerkin (DG) computations using spline filters can improve both smoothness and accuracy of the output. At domain boundaries, these filters have to be one-sided for non-periodic boundary conditions. Recently, position-dependent smoothness-increasing accuracy-preserving (PSIAC) filters were shown to be a superset of the well-known one-sided RLKV and SRV filters. Since PSIAC filters can be formulated symbolically, PSIAC filtering amounts to forming linear products with local DG output and so offers a more stable and efficient implementation. The paper introduces a new class of PSIAC filters NP 0 that have small support and are piecewise constant. Extensive numerical experiments for the canonical hyperbolic test equation show NP 0 filters outperform the more complex known boundary filters. NP 0 filters typically reduce the L ∞ error in the boundary region below that of the interior where optimally superconvergent symmetric filters of the same support are applied. NP 0 filtering can be implemented as forming linear combinations of the data with short rational weights. Exact derivatives of the convolved output are easy to compute.
Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.
Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng
2015-02-01
This paper presents a general framework for solving low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver, which smooths the objective function and minimizes it by alternately updating the variables and their weights. However, traditional IRLS can only solve a sparse-only or low-rank-only minimization problem with a squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p,q ≥ 1). Our convergence proof of IRLS is more general than previous ones, which depend on special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient.
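The core IRLS idea, smoothing a nonsmooth term and then alternating weighted least-squares solves, can be shown on a toy scalar problem. This sketch minimises a plain l1 objective (whose solution is the sample median), not the paper's joint low-rank and sparse matrix formulation:

```python
def irls_median(xs, eps=1e-8, iters=100):
    """Minimise the nonsmooth objective sum_i |x_i - m| by iteratively
    reweighted least squares: smooth |r| as sqrt(r^2 + eps), then repeatedly
    solve the resulting weighted least-squares problem (a weighted mean).
    A toy scalar instance of the IRLS scheme the paper generalises."""
    m = sum(xs) / len(xs)  # initialise at the mean
    for _ in range(iters):
        # Weight 1/sqrt(r^2 + eps) comes from smoothing |r|; eps avoids
        # division by zero when a residual hits exactly zero.
        w = [1.0 / ((x - m) ** 2 + eps) ** 0.5 for x in xs]
        m = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return m

# The l1 minimiser of these points is their median, 3; the outlier 100
# barely moves it, unlike the ordinary (least-squares) mean.
print(round(irls_median([1.0, 2.0, 3.0, 4.0, 100.0]), 3))
```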
Decidualized endometrioma during pregnancy: recognizing an imaging mimic of ovarian malignancy.
Poder, Liina; Coakley, Fergus V; Rabban, Joseph T; Goldstein, Ruth B; Aziz, Seerat; Chen, Lee-may
2008-01-01
Decidualization of the ectopic endometrial stroma of an endometrioma during pregnancy is rare and can mimic ovarian cancer at imaging; we present the ultrasound and magnetic resonance imaging findings that may allow for a prospective diagnosis and expectant management of decidualized endometriomas. Smooth lobulated mural nodules with prominent internal vascularity were noted in an apparent right ovarian endometrioma on serial ultrasound studies in a 34-year-old woman at 12, 21, 27, and 30 weeks of gestation. Magnetic resonance imaging demonstrated the nodules to be strikingly similar in intensity and texture to the decidualized endometrium in the uterus on T2-weighted sequences. A provisional diagnosis of decidualized endometrioma allowed for expectant management, with immediate postpartum resection and confirmation of the diagnosis. Decidualized endometrioma can mimic ovarian malignancy during pregnancy, but a prospective diagnosis may be possible when solid, smoothly lobulated nodules with prominent internal vascularity within an endometrioma are seen from early in pregnancy, and the nodules demonstrate marked similarity in signal intensity and texture to the decidualized endometrium in the uterus at magnetic resonance imaging.
Abdelhedi, Ola; Mora, Leticia; Jemil, Ines; Jridi, Mourad; Toldrá, Fidel; Nasri, Moncef; Nasri, Rim
2017-09-01
The effect of ultrasound (US) pre-treatment on the evolution of Maillard reaction (MR), induced between low molecular weight (LMW) peptides and sucrose, was studied. LMW peptides (<1kDa) were obtained by the ultrafiltration of smooth hound viscera protein hydrolysates, produced by Neutrase, Esperase and Purafect. MR was induced by heating the LMW peptides in the presence of sucrose for 2h at 90°C, without or with US pre-treatment. During the reaction, a marked decrease in pH values, coupled to the increase in colour of the Maillard reaction products (MRPs), were recorded. In addition, after sonication, the glycation degree was significantly enhanced in Esperase-derived peptides/sucrose conjugates (p<0.05). Moreover, results showed that thermal heating, particularly after US treatment, reduced the bitter taste and enhanced the antioxidant capacities of the resulting conjugates. Hence, it could be concluded that US leads to efficient mixing of sugar-protein solution and efficient heat/mass transfer, contributing to increase the MR rate. Copyright © 2017 Elsevier Ltd. All rights reserved.
Local Composite Quantile Regression Smoothing for Harris Recurrent Markov Processes
Li, Degui; Li, Runze
2016-01-01
In this paper, we study the local polynomial composite quantile regression (CQR) smoothing method for nonlinear and nonparametric models under the Harris recurrent Markov chain framework. The local polynomial CQR method is a robust alternative to the widely used local polynomial method and has been well studied for stationary time series. In this paper, we relax the stationarity restriction on the model and allow the regressors to be generated by a general Harris recurrent Markov process, which includes both the stationary (positive recurrent) and nonstationary (null recurrent) cases. Under some mild conditions, we establish the asymptotic theory for the proposed local polynomial CQR estimator of the mean regression function, and show that the convergence rate for the estimator in the nonstationary case is slower than in the stationary case. Furthermore, a weighted local polynomial CQR estimator is provided to improve the estimation efficiency, and a data-driven bandwidth selection is introduced to choose the optimal bandwidth involved in the nonparametric estimators. Finally, we give some numerical studies to examine the finite sample performance of the developed methodology and theory. PMID:27667894
Spectral saliency via automatic adaptive amplitude spectrum analysis
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Dai, Jialun; Zhu, Yafei; Zheng, Haiyong; Qiao, Xiaoyan
2016-03-01
Suppressing nonsalient patterns by smoothing the amplitude spectrum at an appropriate scale has been shown to effectively detect the visual saliency in the frequency domain. Different filter scales are required for different types of salient objects. We observe that the optimal scale for smoothing amplitude spectrum shares a specific relation with the size of the salient region. Based on this observation and the bottom-up saliency detection characterized by spectrum scale-space analysis for natural images, we propose to detect visual saliency, especially with salient objects of different sizes and locations via automatic adaptive amplitude spectrum analysis. We not only provide a new criterion for automatic optimal scale selection but also reserve the saliency maps corresponding to different salient objects with meaningful saliency information by adaptive weighted combination. The performance of quantitative and qualitative comparisons is evaluated by three different kinds of metrics on the four most widely used datasets and one up-to-date large-scale dataset. The experimental results validate that our method outperforms the existing state-of-the-art saliency models for predicting human eye fixations in terms of accuracy and robustness.
Dynamo-based scheme for forecasting the magnitude of solar activity cycles
NASA Technical Reports Server (NTRS)
Layden, A. C.; Fox, P. A.; Howard, J. M.; Sarajedini, A.; Schatten, K. H.
1991-01-01
This paper presents a general framework for forecasting the smoothed maximum level of solar activity in a given cycle, based on a simple understanding of the solar dynamo. This type of forecasting requires knowledge of the sun's polar magnetic field strength at the preceding activity minimum. Because direct measurements of this quantity are difficult to obtain, the quality of a number of proxy indicators already used by other authors is evaluated, which are physically related to the sun's polar field. These indicators are subjected to a rigorous statistical analysis, and the analysis technique for each indicator is specified in detail in order to simplify and systematize reanalysis for future use. It is found that several of these proxies are in fact poorly correlated or uncorrelated with solar activity, and thus are of little value for predicting activity maxima. Also presented is a scheme in which the predictions of the individual proxies are combined via an appropriately weighted mean to produce a compound prediction. The scheme is then applied to the current cycle 22, and a maximum smoothed international sunspot number of 171 ± 26 is estimated.
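An inverse-variance weighted mean is one plausible reading of the "appropriately weighted mean" used to combine the proxies; the proxy forecasts and their uncertainties below are hypothetical, not the paper's data:

```python
def compound_prediction(preds):
    """Combine independent proxy predictions by an inverse-variance
    weighted mean.  `preds` is a list of (estimate, one_sigma_error)
    pairs; returns the combined estimate and its standard error."""
    weights = [1.0 / sigma ** 2 for _, sigma in preds]
    mean = sum(w * est for w, (est, _) in zip(weights, preds)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5  # error of the weighted mean
    return mean, sigma

# Three hypothetical proxy forecasts of the smoothed sunspot maximum.
proxies = [(160.0, 30.0), (180.0, 20.0), (170.0, 40.0)]
m, s = compound_prediction(proxies)
print(round(m, 1), round(s, 1))
```

The more precise proxies pull the compound estimate toward themselves, and the combined uncertainty is smaller than any single proxy's, which is the point of pooling them.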
C1,1 regularity for degenerate elliptic obstacle problems
NASA Astrophysics Data System (ADS)
Daskalopoulos, Panagiota; Feehan, Paul M. N.
2016-03-01
The Heston stochastic volatility process is a degenerate diffusion process where the degeneracy in the diffusion coefficient is proportional to the square root of the distance to the boundary of the half-plane. The generator of this process with killing, called the elliptic Heston operator, is a second-order, degenerate-elliptic partial differential operator, where the degeneracy in the operator symbol is proportional to the distance to the boundary of the half-plane. In mathematical finance, solutions to the obstacle problem for the elliptic Heston operator correspond to value functions for perpetual American-style options on the underlying asset. With the aid of weighted Sobolev spaces and weighted Hölder spaces, we establish the optimal C1,1 regularity (up to the boundary of the half-plane) for solutions to obstacle problems for the elliptic Heston operator when the obstacle functions are sufficiently smooth.
Hepatic perivascular epithelioid cell tumor: five case reports and literature review.
Liu, Zhen; Qi, Yafei; Wang, Chuanzhuo; Zhang, Xiaobo; Wang, Baosheng
2015-01-01
Perivascular epithelioid cell tumor (PEComa) is a rare tumor. Here, we present data regarding clinical presentations, diagnoses, management, and prognosis of five cases of hepatic PEComa between January 2002 and December 2008. Ultrasonography showed hyperechoic masses in all patients. Precontrast computed tomography (CT) showed that all lesions scanned were heterogeneous in density and were heterogeneously enhanced in arterial phase images. In two cases, magnetic resonance imaging showed hypointensity on T1-weighted images and hyperintensity on T2-weighted images. In enhanced scanning, lesions showed asymmetrical enhancement during arterial phase imaging. All tumors were composed of varying proportions of smooth muscle, adipose tissue, and thick-walled blood vessels, and showed positive immunohistochemical staining for Human Melanoma Black-45. All patients underwent hepatectomy, and there was no evidence of recurrence or metastasis during the follow-up period. Copyright © 2012. Published by Elsevier Taiwan.
NASA Astrophysics Data System (ADS)
Park, Dubok; Han, David K.; Ko, Hanseok
2017-05-01
Optical imaging systems are often degraded by scattering due to atmospheric particles, such as haze, fog, and mist. Imaging under nighttime haze conditions may suffer especially from the glows near active light sources as well as scattering. We present a methodology for nighttime image dehazing based on an optical imaging model which accounts for varying light sources and their glow. First, glow effects are decomposed using relative smoothness. Atmospheric light is then estimated by assessing global and local atmospheric light using a local atmospheric selection rule. The transmission of light is then estimated by maximizing an objective function designed on the basis of weighted entropy. Finally, haze is removed using two estimated parameters, namely, atmospheric light and transmission. The visual and quantitative comparison of the experimental results with the results of existing state-of-the-art methods demonstrates the significance of the proposed approach.
NASA Astrophysics Data System (ADS)
Hardy, Jason; Campbell, Mark; Miller, Isaac; Schimpf, Brian
2008-10-01
The local path planner implemented on Cornell's 2007 DARPA Urban Challenge entry vehicle Skynet utilizes a novel mixture of discrete and continuous path planning steps to facilitate a safe, smooth, and human-like driving behavior. The planner first solves for a feasible path through the local obstacle map using a grid based search algorithm. The resulting path is then refined using a cost-based nonlinear optimization routine with both hard and soft constraints. The behavior of this optimization is influenced by tunable weighting parameters which govern the relative cost contributions assigned to different path characteristics. This paper studies the sensitivity of the vehicle's performance to these path planner weighting parameters using a data driven simulation based on logged data from the National Qualifying Event. The performance of the path planner in both the National Qualifying Event and in the Urban Challenge is also presented and analyzed.
Non-Abelian vortices of higher winding numbers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eto, Minoru; Konishi, Kenichi; Vinci, Walter
2006-09-15
We make a detailed study of the moduli space of winding number two (k=2) axially symmetric vortices (or equivalently, of coaxial composites of two fundamental vortices), occurring in U(2) gauge theory with two flavors in the Higgs phase, recently discussed by Hashimoto and Tong and by Auzzi, Shifman, and Yung. We find that it is a weighted projective space WCP^2_(2,1,1) ≅ CP^2/Z_2. This manifold contains an A_1-type (Z_2) orbifold singularity even though the full moduli space including the relative position moduli is smooth. The SU(2) transformation properties of such vortices are studied. Our results are then generalized to U(N) gauge theory with N flavors, where the internal moduli space of k=2 axially symmetric vortices is found to be a weighted Grassmannian manifold. It contains singularities along a submanifold.
Optical properties of flexible fluorescent films prepared by screen printing technology
NASA Astrophysics Data System (ADS)
Chen, Yan; Ke, Taiyan; Chen, Shuijin; He, Xin; Zhang, Mei; Li, Dong; Deng, Jinfeng; Zeng, Qingguang
2018-05-01
In this work, we prepared a fluorescent film comprising phosphors and silicone on a flexible polyethylene terephthalate (PET) substrate using screen printing technology. The effects of mesh number and of the weight ratio of phosphors to silicone on the optical properties of the flexible films were investigated. The results indicate that the emission intensity of the film increases as the mesh number decreases from 400 to 200, but the film surface gradually becomes uneven. A fluorescent film with high emission intensity and a smooth surface can be obtained when the weight ratio of phosphor to gel is 2:1 and the mesh number is 300. The luminous efficiency of LEDs fabricated by combining the fluorescent films with a 460 nm Ga(In)N chip module can reach 75 lm/W. The investigation indicates that the approach can be applied to remote fluorescent-film conversion and relaxes the requirements on the particle size and dispersion state of fluorescent materials.
[Weight and height local growth charts of Algerian children and adolescents (6-18 years of age)].
Bahchachi, N; Dahel-Mekhancha, C C; Rolland-Cachera, M F; Badis, N; Roelants, M; Hauspie, R; Nezzal, L
2016-04-01
Measurements of height and weight provide important information on growth and development, puberty, and nutritional status in children and adolescents. The aim of this study was to develop contemporary reference growth centiles for Algerian children and adolescents (6-18 years of age). A cross-sectional growth survey was conducted in government schools on 7772 healthy schoolchildren (45.1% boys and 54.9% girls) aged 6-18 years in Constantine (eastern Algeria) in 2008. Height and weight were measured with portable stadiometers and calibrated scales, respectively. Smooth reference curves of height and weight were estimated with the LMS method. These height and weight curves are presented together with local data from Arab countries and with the growth references of France, Belgium (Flanders), and the World Health Organization (WHO) 2007. In girls, median height and weight increased until 16 and 17 years of age, respectively, whereas in boys, they increased through age 18 years. Between ages 11 and 13 years (puberty), girls were taller and heavier than boys. After puberty, boys became taller than girls, by up to 13 cm by the age of 18 years. Median height and weight of Algerian boys and girls were generally intermediate between those observed in other Arab countries. They were higher than the French reference values up to the age of 13 years and lower than Belgian and WHO reference values at all ages. The present study provides Algerian height- and weight-for-age growth charts, which should be recommended as a national reference for monitoring growth and development in children and adolescents. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
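The LMS method used above defines each centile curve from three age-specific parameters: skewness (L), median (M), and coefficient of variation (S). A minimal sketch of how a centile value is recovered from them, using hypothetical parameter values rather than the Algerian reference data:

```python
import math

def lms_centile(L, M, S, z):
    """Measurement value (e.g. weight in kg) at z-score z for one age,
    from the LMS parameters: y = M * (1 + L*S*z)**(1/L) when L != 0,
    and y = M * exp(S*z) in the limiting case L == 0."""
    if L == 0:
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

# Hypothetical LMS parameters for a single age group.
L, M, S = -1.2, 32.0, 0.14
median = lms_centile(L, M, S, 0.0)  # z = 0 recovers the median M
p97 = lms_centile(L, M, S, 1.881)   # z for the 97th centile
print(median, round(p97, 1))
```

Smoothing L, M, and S across ages (as the LMS method prescribes) is what produces the smooth reference curves the abstract describes.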
OpinionSeer: interactive visualization of hotel customer feedback.
Wu, Yingcai; Wei, Furu; Liu, Shixia; Au, Norman; Cui, Weiwei; Zhou, Hong; Qu, Huamin
2010-01-01
The rapid development of Web technology has resulted in an increasing number of hotel customers sharing their opinions on the hotel services. Effective visual analysis of online customer opinions is needed, as it has a significant impact on building a successful business. In this paper, we present OpinionSeer, an interactive visualization system that could visually analyze a large collection of online hotel customer reviews. The system is built on a new visualization-centric opinion mining technique that considers uncertainty for faithfully modeling and analyzing customer opinions. A new visual representation is developed to convey customer opinions by augmenting well-established scatterplots and radial visualization. To provide multiple-level exploration, we introduce subjective logic to handle and organize subjective opinions with degrees of uncertainty. Several case studies illustrate the effectiveness and usefulness of OpinionSeer on analyzing relationships among multiple data dimensions and comparing opinions of different groups. Aside from data on hotel customer feedback, OpinionSeer could also be applied to visually analyze customer opinions on other products or services.
Rodríguez-Fuentes, Gabriela; Marín-López, Valeria; Hernández-Márquez, Esperanza
2016-12-01
Since several reports have indicated that cholinesterase (ChE) type and distribution are species specific and that in some species there is a relationship among gender, size and ChE activities, characterization has been suggested. The aim of the present study was to characterize the ChE present in the head and muscle of Gambusia yucatana (using selective substrates and inhibitors) and to examine its relationship with total length and gender. Results indicated that the ChE present in G. yucatana is an acetylcholinesterase (AChE) with high sensitivity to BW284C51 and an atypically smaller Km with butyrylthiocholine. Scatterplots indicated that there is no linear relationship between total length and AChE in male or female wild mosquitofish. There were no sex differences in AChE activities. Results indicated significant differences between a single collection site in the Yucatan peninsula and depurated organisms. This study emphasizes the importance of characterizing ChE before use in biomonitoring.
Bastone, Alessandra de Carvalho; Moreira, Bruno de Souza; Vieira, Renata Alvarenga; Kirkwood, Renata Noce; Dias, João Marcos Domingues; Dias, Rosângela Corrêa
2014-07-01
The purpose of this study was to assess the validity of the Human Activity Profile (HAP) by comparing scores with accelerometer data and by objectively testing its cutoff points. This study included 120 older women (age 60-90 years). Average daily time spent in sedentary, moderate, and hard activity; counts; number of steps; and energy expenditure were measured using an accelerometer. Spearman rank order correlations were used to evaluate the correlation between the HAP scores and accelerometer variables. Significant relationships were detected (rho = .47-.75, p < .001), indicating that the HAP estimates physical activity at a group level well; however, scatterplots showed individual errors. Receiver operating characteristic curves were constructed to determine HAP cutoff points on the basis of physical activity level recommendations, and the cutoff points found were similar to the original HAP cutoff points. The HAP is a useful indicator of physical activity levels in older women.
Atmospheric correction of SeaWiFS imagery for turbid coastal and inland waters.
Ruddick, K G; Ovidio, F; Rijkeboer, M
2000-02-20
The standard SeaWiFS atmospheric correction algorithm, designed for open ocean water, has been extended for use over turbid coastal and inland waters. Failure of the standard algorithm over turbid waters can be attributed to invalid assumptions of zero water-leaving radiance for the near-infrared bands at 765 and 865 nm. In the present study these assumptions are replaced by the assumptions of spatial homogeneity of the 765:865-nm ratios for aerosol reflectance and for water-leaving reflectance. These two ratios are imposed as calibration parameters after inspection of the Rayleigh-corrected reflectance scatterplot. The performance of the new algorithm is demonstrated for imagery of Belgian coastal waters and yields physically realistic water-leaving radiance spectra. A preliminary comparison with in situ radiance spectra for the Dutch Lake Markermeer shows significant improvement over the standard atmospheric correction algorithm. An analysis is made of the sensitivity of results to the choice of calibration parameters, and perspectives for application of the method to other sensors are briefly discussed.
Orme, Geoffrey J; Kehoe, E James
2014-04-01
This study tested whether cognitive hardiness moderates the adverse effects of deployment-related stressors on health and well-being of soldiers on short-tour (4-7 months), peacekeeping operations. Australian Army reservists (N = 448) were surveyed at the start, end, and up to 24 months after serving as peacekeepers in Timor-Leste or the Solomon Islands. They retained sound mental health throughout (Kessler 10, Post-Traumatic Checklist-Civilian, Depression Anxiety Stress Scale 42). Ratings of either traumatic or nontraumatic stress were low. Despite range restrictions, scores on the Cognitive Hardiness Scale moderated the relationship between deployment stressors and a composite measure of psychological distress. Scatterplots revealed an asymmetric pattern for hardiness scores and measures of psychological distress. When hardiness scores were low, psychological distress scores were widely dispersed. However, when hardiness scores were higher, psychological distress scores became concentrated at a uniformly low level. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
NASA Astrophysics Data System (ADS)
Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi
2012-07-01
The present study deals with daily total ozone concentration time series over four metro cities of India namely Kolkata, Mumbai, Chennai, and New Delhi in the multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration are suitable for principal component analysis. Subsequently, by introducing rotated component matrix for the principal components, the predictors suitable for generating artificial neural network (ANN) for daily total ozone prediction are identified. The multicollinearity is removed in this way. Models of ANN in the form of multilayer perceptron trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics like Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.
Spangenberg, J E; Dionisi, F
2001-09-01
The fatty acids from cocoa butters of different origins, varieties, and suppliers and a number of cocoa butter equivalents (Illexao 30-61, Illexao 30-71, Illexao 30-96, Choclin, Coberine, Chocosine-Illipé, Chocosine-Shea, Shokao, Akomax, Akonord, and Ertina) were investigated by bulk stable carbon isotope analysis and compound specific isotope analysis. The interpretation is based on principal component analysis combining the fatty acid concentrations and the bulk and molecular isotopic data. The scatterplot of the first two principal components allowed detection of the addition of vegetable fats to cocoa butters. Enrichment in the heavy carbon isotope ((13)C) of the bulk cocoa butter and of the individual fatty acids is related to mixing with other vegetable fats and possibly to thermally or oxidatively induced degradation during processing (e.g., drying and roasting of the cocoa beans or deodorization of the pressed fat) or storage. The feasibility of the analytical approach for authenticity assessment is discussed.
Thresholds for small for gestational age among newborns of East Asian and South Asian ancestry.
Ray, Joel G; Jiang, Depeng; Sgro, Michael; Shah, Rajiv; Singh, Gita; Mamdani, Muhammad M
2009-04-01
To determine the risk that newborn infants of East Asian and South Asian ancestry may be misclassified as small for gestational age (SGA). We performed a single-centre, cross-sectional study of a cohort of liveborn infants born to women who had been born in Canada (n = 2362), East Asia (n = 1565) and South Asia (n = 753) and generated smoothed birth weight curves for males and females. We determined the rate of misclassification of infants of East Asian and South Asian maternal origin as SGA, using conventional weight centile cut-offs, rather than those specific to each ethnic group. Infants of Canadian-born mothers had a mean birth weight that was 144 g and 218 g greater than newborns of mothers of East Asian and South Asian origin, respectively. Using the 3rd centile cut-off for infants of Canadian-born mothers, 7 per 1000 female and 14 per 1000 male infants of East Asian maternal origin were potentially miscategorized as SGA at birth. Among female and male infants of mothers of South Asian origin, the corresponding rates were 29 and 46 per 1000. Birth weight curves may need to be modified for newborns of East Asian and South Asian parentage to make a more accurate diagnosis of SGA.
Using Perturbation Theory to Reduce Noise in Diffusion Tensor Fields
Bansal, Ravi; Staib, Lawrence H.; Xu, Dongrong; Laine, Andrew F.; Liu, Jun; Peterson, Bradley S.
2009-01-01
We propose the use of Perturbation theory to reduce noise in Diffusion Tensor (DT) fields. Diffusion Tensor Imaging (DTI) encodes the diffusion of water molecules along different spatial directions in a positive-definite, 3 × 3 symmetric tensor. Eigenvectors and eigenvalues of DTs allow the in vivo visualization and quantitative analysis of white matter fiber bundles across the brain. The validity and reliability of these analyses are limited, however, by the low spatial resolution and low Signal-to-Noise Ratio (SNR) in DTI datasets. Our procedures can be applied to improve the validity and reliability of these quantitative analyses by reducing noise in the tensor fields. We model a tensor field as a three-dimensional Markov Random Field and then compute the likelihood and the prior terms of this model using Perturbation theory. The prior term constrains the tensor field to be smooth, whereas the likelihood term constrains the smoothed tensor field to be similar to the original field. Thus, the proposed method generates a smoothed field that is close in structure to the original tensor field. We evaluate the performance of our method both visually and quantitatively using synthetic and real-world datasets. We quantitatively assess the performance of our method by computing the SNR for eigenvalues and the coherence measures for eigenvectors of DTs across tensor fields. In addition, we quantitatively compare the performance of our procedures with the performance of one method that uses a Riemannian distance to compute the similarity between two tensors, and with another method that reduces noise in tensor fields by anisotropically filtering the diffusion weighted images that are used to estimate diffusion tensors. 
These experiments demonstrate that our method significantly increases the coherence of the eigenvectors and the SNR of the eigenvalues, while simultaneously preserving the fine structure and boundaries between homogeneous regions, in the smoothed tensor field. PMID:19540791
Dictionary-based fiber orientation estimation with improved spatial consistency.
Ye, Chuyang; Prince, Jerry L
2018-02-01
Diffusion magnetic resonance imaging (dMRI) has enabled in vivo investigation of white matter tracts. Fiber orientation (FO) estimation is a key step in tract reconstruction and has been a popular research topic in dMRI analysis. In particular, the sparsity assumption has been used in conjunction with a dictionary-based framework to achieve reliable FO estimation with a reduced number of gradient directions. Because image noise can have a deleterious effect on the accuracy of FO estimation, previous works have incorporated spatial consistency of FOs in the dictionary-based framework to improve the estimation. However, because FOs are only indirectly determined from the mixture fractions of dictionary atoms and not modeled as variables in the objective function, these methods do not incorporate FO smoothness directly, and their ability to produce smooth FOs could be limited. In this work, we propose an improvement to Fiber Orientation Reconstruction using Neighborhood Information (FORNI), which we call FORNI+; this method estimates FOs in a dictionary-based framework where FO smoothness is better enforced than in FORNI alone. We describe an objective function that explicitly models the actual FOs and the mixture fractions of dictionary atoms. Specifically, it consists of data fidelity between the observed signals and the signals represented by the dictionary, pairwise FO dissimilarity that encourages FO smoothness, and weighted ℓ1-norm terms that ensure the consistency between the actual FOs and the FO configuration suggested by the dictionary representation. The FOs and mixture fractions are then jointly estimated by minimizing the objective function using an iterative alternating optimization strategy. FORNI+ was evaluated on a simulation phantom, a physical phantom, and real brain dMRI data. In particular, in the real brain dMRI experiment, we have qualitatively and quantitatively evaluated the reproducibility of the proposed method.
Results demonstrate that FORNI+ produces FOs with better quality compared with competing methods. Copyright © 2017 Elsevier B.V. All rights reserved.
Comparing interpolation techniques for annual temperature mapping across Xinjiang region
NASA Astrophysics Data System (ADS)
Ren-ping, Zhang; Jing, Guo; Tian-gang, Liang; Qi-sheng, Feng; Aimaiti, Yusupujiang
2016-11-01
Interpolating climatic variables such as temperature is challenging due to the highly variable nature of meteorological processes and the difficulty in establishing a representative network of stations. In this paper, based on monthly temperature data obtained from 154 official meteorological stations in the Xinjiang region and surrounding areas, we compared five spatial interpolation techniques: inverse distance weighting (IDW), ordinary kriging, cokriging, thin-plate smoothing splines (ANUSPLIN), and empirical Bayesian kriging (EBK). Error metrics were used to validate interpolations against independent data. Results indicated that ANUSPLIN performed better than the other four interpolation methods.
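Of the five methods compared, inverse distance weighting is simple enough to sketch directly. A minimal version on toy station data (invented coordinates and temperatures, not the Xinjiang dataset) could look like:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0, eps=1e-12):
    """Inverse distance weighting: each prediction is a weighted mean of the
    observations, with weights proportional to 1 / distance**power."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
    w = 1.0 / (d + eps) ** power
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)

# Toy "stations" on a unit square; temperature decreases with the x-coordinate.
obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
temps = np.array([10.0, 6.0, 10.0, 6.0])

# Predict at the center and at a point nearer the cold/warm boundary.
grid = np.array([[0.5, 0.5], [0.0, 0.5]])
pred = idw(obs, temps, grid)
```

A known property of IDW, visible in this sketch, is that predictions always stay within the range of the observed values, which is one reason spline- or kriging-based methods can outperform it on smooth fields with trends.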
[The antitussive effect of theophylline].
Nemceková, E; Nosál'ová, G; Rybár, A
1994-08-01
Theophylline belongs to a group of medicaments used in asthma therapy. It yields an antiinflammatory effect, reduces allergic reactions, and in the respiratory airways it improves mucociliary clearance and markedly dilates smooth muscle. Therefore, the main aim of our interest is its effect on the cough reflex. Cough was evoked by mechanical irritation of the airways in cats with a chronic tracheal cannula. Theophylline, when dosed at 10 mg per kg of body weight i.p., achieved a more intensive antitussive effect than dextromethorphan, as assessed by cough parameters, but it had a lower suppressive effect than codeine. (Fig. 3, Ref. 13.)
Polishing compound for plastic surfaces
Stowell, Michael S.
1995-01-01
A polishing compound for plastic surfaces. The compound contains by weight approximately 4 to 17 parts of at least one petroleum distillate lubricant, 1 to 6 parts mineral spirits, 2.5 to 15 parts abrasive particles, and 2.5 to 10 parts water. The abrasive is tripoli or a similar material that contains fine particles of silica. Preferably, most of the abrasive particles are less than approximately 10 microns, more preferably less than approximately 5 microns, in size. The compound is used on PLEXIGLAS™, LEXAN™, LUCITE™, polyvinyl chloride (PVC), and similar plastic materials whenever a smooth, clear polished surface is desired.
Polishing compound for plastic surfaces
Stowell, M.S.
1993-01-01
A polishing compound for plastic surfaces is disclosed. The compound contains by weight approximately 4 to 17 parts of at least one petroleum distillate lubricant, 1 to 6 parts mineral spirits, 2.5 to 15 parts abrasive particles, and 2.5 to 10 parts water. The abrasive is tripoli or a similar material that contains colloidal silica. Preferably, most of the abrasive particles are less than approximately 10 microns, more preferably less than approximately 5 microns, in size. The compound is used on PLEXIGLAS™, LEXAN™, LUCITE™, polyvinyl chloride (PVC), and similar plastic materials whenever a smooth, clear polished surface is desired.
Polishing compound for plastic surfaces
Stowell, M.S.
1995-08-22
A polishing compound for plastic surfaces is disclosed. The compound contains by weight approximately 4 to 17 parts of at least one petroleum distillate lubricant, 1 to 6 parts mineral spirits, 2.5 to 15 parts abrasive particles, and 2.5 to 10 parts water. The abrasive is tripoli or a similar material that contains fine particles of silica. Preferably, most of the abrasive particles are less than approximately 10 microns, more preferably less than approximately 5 microns, in size. The compound is used on PLEXIGLAS™, LEXAN™, LUCITE™, polyvinyl chloride (PVC), and similar plastic materials whenever a smooth, clear polished surface is desired. 5 figs.
Noise-reduction measurements of stiffened and unstiffened cylindrical models of an airplane fuselage
NASA Technical Reports Server (NTRS)
Willis, C. M.; Mayes, W. H.
1984-01-01
Noise-reduction measurements are presented for a stiffened and an unstiffened model of an airplane fuselage. The cylindrical models were tested in a reverberant-field noise environment over a frequency range from 20 Hz to 6 kHz. An unstiffened metal fuselage provided more noise reduction than a fuselage having the same sidewall weight divided between skin and stiffening stringers and ring frames. The addition of acoustic insulation to the models tended to smooth out the interior-noise spectrum by reducing or masking the noise associated with the structural response at some of the resonant frequencies.
Muntean, C T; Herring, A D; Riley, D G; Gill, C A; Sawyer, J E; Sanders, J O
2018-05-12
This study evaluated reproductive, maternal performance, and longevity traits of 143 F1 cows sired by Brahman (Br), Boran (Bo), or Tuli (T) bulls from Angus or Hereford cows from 1994 to 2011. Cow traits were measured at 7 yr of age in 1999 and 2000 for 1992- and 1993-born cows, respectively. From 2004 to 2010, excluding 2008, incisor condition (solid, broken, smooth) and condition scores were assigned to cows remaining in production; scores were evaluated with two models. When broken and solid mouths were each assigned a score of 1 and smooth a score of 0, Br-sired (0.76) and Bo-sired cows (0.71) had higher scores (P < 0.05) than T-sired cows (0.54). When solid mouths were scored 1 and smooth and broken scored 0, Br-sired cows (0.34) were higher (P < 0.05) than T-sired cows (0.01), and Bo-sired cows (0.23) were not different from either (P > 0.05). Age level of the cow within birth year was important for all modeled calf traits (P < 0.05). Birth weights were not different among cow inheritance groups (P > 0.05). Cow type influenced (P < 0.05) 205-d adjusted weaning weight of calves; Br-sired dams (228.1 ± 2.37 kg) produced the greatest weaning weight, followed by Bo- (213.7 ± 3.10 kg) and T-sired (201.6 ± 2.69 kg) dams (P < 0.05). The adjusted mean calving rate for Bo-sired cows (0.92 ± 0.02) was higher (P < 0.05) than for Br- (0.86 ± 0.02) and T-sired (0.86 ± 0.02) cows. Adjusted mean weaning rate was greater (P < 0.05) for Bo-sired cows (0.86 ± 0.02) than for cows sired by Br bulls (0.77 ± 0.02), but the weaning rate for T-sired cows (0.80 ± 0.02) was similar to both (P > 0.05). Cow weight was greater (P < 0.05) for Br-sired cows (590.5 ± 8.35 kg) than for Bo-sired (505.8 ± 10.46 kg) or T-sired cows (508.5 ± 9.37 kg). Body condition score at weaning for 7-yr-old cows was similar (P = 0.08) for Br-sired and Bo-sired cows and lower for T-sired cows (P = 0.0005; condition scores 6.0, 6.3, and 5.8, respectively).
Bo-sired cows were older on average when they were removed from the herd (12.7 ± 0.74 y, P = 0.03) than T-sired cows (10.6 ± 0.61 y); Br-sired cow persistence was intermediate and not different (11.05 ± 0.60 y, P > 0.06) from the others. Bo-sired cows had higher calving and weaning rates and better mouth scores than the other groups; consequently, they had greater longevity as well. Bo- and T-sired cows were moderate in size and weighed less than Br-sired cows throughout the study. T-sired cows weaned the lightest calves and had the most tooth deterioration as they aged.
NASA Astrophysics Data System (ADS)
Cairns, Iver H.; Robinson, P. A.; Anderson, Roger R.; Strangeway, R. J.
1997-10-01
Plasma wave data are compared with ISEE 1's position in the electron foreshock for an interval with unusually constant (but otherwise typical) solar wind magnetic field and plasma characteristics. For this period, temporal variations in the wave characteristics can be confidently separated from sweeping of the spatially varying foreshock back and forth across the spacecraft. The spacecraft's location, particularly the coordinate Df downstream from the foreshock boundary (often termed DIFF), is calculated by using three shock models and the observed solar wind magnetometer and plasma data. Scatterplots of the wave field versus Df are used to constrain viable shock models, to investigate the observed scatter in the wave fields at constant Df, and to test the theoretical predictions of linear instability theory. The scatterplots confirm the abrupt onset of the foreshock waves near the upstream boundary, the narrow width in Df of the region with high fields, and the relatively slow falloff of the fields at large Df, as seen in earlier studies, but with much smaller statistical scatter. The plots also show an offset of the high-field region from the foreshock boundary. It is shown that an adaptive, time-varying shock model with no free parameters, determined by the observed solar wind data and published shock crossings, is viable but that two alternative models are not. Foreshock wave studies can therefore remotely constrain the bow shock's location. The observed scatter in wave field at constant Df is shown to be real and to correspond to real temporal variations, not to unresolved changes in Df. By comparing the wave data with a linear instability theory based on a published model for the electron beam it is found that the theory can account qualitatively and semiquantitatively for the abrupt onset of the waves near Df=0, for the narrow width and offset of the high-field region, and for the decrease in wave intensity with increasing Df. 
Quantitative differences between observations and theory remain, including large overprediction of the wave fields and the slower than predicted falloff at large Df of the wave fields. These differences, as well as the unresolved issue of the electron beam speed in the high-field region of the foreshock, are discussed. The intrinsic temporal variability of the wave fields, as well as their overprediction based on homogeneous plasma theory, are indicative of stochastic growth physics, which causes wave growth to be random and varying in sign, rather than secular.
Lim, Jung Sub; Lim, Se Won; Ahn, Ju Hyun; Song, Bong Sub; Shim, Kye Shik; Hwang, Il Tae
2014-09-01
To construct new Korean reference curves for birth weight by sex and gestational age using contemporary Korean birth weight data and to compare them with the Lubchenco and the 2010 United States (US) intrauterine growth curves. Data on 2,336,727 newborns from the Korean Statistical Information Service (2008-2012) were used. Smoothed percentile curves were created by the Lambda Mu Sigma (LMS) method using a subsample of singletons. The new Korean reference curves were compared with the Lubchenco and the 2010 US intrauterine growth curves. Reference values for the 3rd, 10th, 25th, 50th, 75th, 90th, and 97th percentiles of birth weight by gestational age were constructed using 2,249,804 (male, 1,159,070) singleton newborns with gestational ages of 23-43 weeks. Separate birth weight curves were constructed for males and females. The new Korean reference curves are similar to the 2010 US intrauterine growth curves. However, the cutoff values for small for gestational age (<10th percentile) of the new Korean curves differed from those of the Lubchenco curves at each gestational age. The Lubchenco curves underestimated the percentage of infants who were born small for gestational age. The new Korean reference curves for birth weight show a different pattern from the Lubchenco curves, which were made from white neonates more than 60 years ago. Further research on short-term and long-term health outcomes of small for gestational age babies based on the new Korean reference data is needed.
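The percentile-cutoff idea behind SGA classification can be illustrated with plain empirical percentiles on synthetic data; note that this study fitted LMS-smoothed curves rather than raw empirical percentiles, and the numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic birth weights (g) for one sex across gestational ages 36-40 weeks:
# mean weight rises with gestational age.
ga = rng.integers(36, 41, size=5000)
weight = rng.normal(2500 + 200 * (ga - 36), 300)

# Empirical 10th-percentile cutoff per gestational age week; an infant below
# the cutoff for its own week would be classified as small for gestational age.
cutoff = {w: np.percentile(weight[ga == w], 10) for w in range(36, 41)}
sga = weight < np.array([cutoff[w] for w in ga])
```

Because the cutoff is computed per gestational-age stratum, roughly 10% of infants in every week fall below it; applying a curve derived from a different population (as with the Lubchenco curves) shifts these cutoffs and misclassifies infants.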
Constructing statistically unbiased cortical surface templates using feature-space covariance
NASA Astrophysics Data System (ADS)
Parvathaneni, Prasanna; Lyu, Ilwoo; Huo, Yuankai; Blaber, Justin; Hainline, Allison E.; Kang, Hakmook; Woodward, Neil D.; Landman, Bennett A.
2018-03-01
The choice of surface template plays an important role in cross-sectional subject analyses involving cortical brain surfaces because there is a tendency toward registration bias given variations in inter-individual and inter-group sulcal and gyral patterns. In order to account for this bias and for spatial smoothing, we propose a feature-based unbiased average template surface. In contrast to prior approaches, we factor in the sample population covariance and assign weights based on feature information to minimize the influence of covariance in the sampled population. The mean surface is computed by applying the weights obtained from an inverse covariance matrix, which guarantees that multiple representations from similar groups (e.g., involving imaging, demographic, or diagnosis information) are down-weighted to yield an unbiased mean in feature space. Results are validated by applying this approach in two different applications. For evaluation, the proposed unbiased weighted surface mean is compared with unweighted means both qualitatively and quantitatively (mean squared error and absolute relative distance of both means from a baseline). In the first application, we validated the stability of the proposed optimal mean on a scan-rescan reproducibility dataset by incrementally adding duplicate subjects. In the second application, we used clinical research data to evaluate the difference between the weighted and unweighted means when different numbers of subjects were included in the control versus schizophrenia groups. In both cases, the proposed method achieved greater stability, indicating reduced impact of sampling bias. The weighted mean is built on covariance information in feature space, as opposed to spatial location, making this a generic approach applicable to any feature of interest.
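The inverse-covariance weighting described above can be sketched in a few lines: with a ridge-regularized subject-by-subject covariance (Gram) matrix K in feature space, weights proportional to K^(-1) applied to a vector of ones down-weight near-duplicate representations. This is a toy example with three hypothetical subjects, not the paper's cortical-surface features:

```python
import numpy as np

# Feature vectors for three "subjects": subjects 0 and 1 are exact duplicates.
F = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Subject-by-subject Gram matrix in feature space, ridge-regularized so the
# system stays invertible even with duplicated rows.
K = F @ F.T + 0.1 * np.eye(3)

# Minimum-variance weights w proportional to K^{-1} 1: correlated
# representations share weight instead of dominating the mean.
w = np.linalg.solve(K, np.ones(3))
w /= w.sum()

unweighted_mean = F.mean(axis=0)  # duplicates pull this toward their group
weighted_mean = w @ F             # duplicates jointly count like one subject
```

With equal (unweighted) averaging the duplicated group contributes 2/3 of the mean; with inverse-covariance weights the two duplicates together receive roughly the same total weight as the single distinct subject, which is the de-biasing effect the abstract describes.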
Corradi, Lara S; Góes, Rejane M; Carvalho, Hernandes F; Taboga, Sebastião R
2004-06-01
Prostatic differentiation during embryogenesis and its further homeostatic state maintenance during adult life depend on androgens. Dihydrotestosterone, which is synthesized from testosterone by 5 alpha-reductase (5 alpha-r), is the active molecule triggering androgen action within the prostate. In the present work, we examined the effects of 5 alpha-reductase inhibition by finasteride in the ventral prostate (VP) of the adult gerbil, employing histochemical and electron microscopy techniques to demonstrate the morphological and organizational changes of the organ. After 10 days of finasteride treatment at a dose of 100 mg/kg/day, the prostatic complex (VP and dorsolateral prostate) absolute weight was reduced to about 18%. The epithelial cells became short and cuboidal, with less secretory blebs and reduced acid phosphatase activity. The luminal sectional area diminished, suggestive of decreased secretory activity. The stromal/epithelial ratio increased, the stroma becoming thicker but less cellular. There was a striking accumulation of collagen fibrils, which was accompanied by an increase in deposits of amorphous granular material adjacent to the basal lamina and in the clefts between smooth muscle cells (SMC). Additionally, the periacinar smooth muscle became loosely packed. Some SMC were atrophic and showed a denser array of the cytoskeleton, whereas other SMC had a highly irregular outline with numerous spine-like projections. The present data indicate that 5 alpha-r inhibition causes epithelial and stromal changes by affecting intra-prostatic hormone levels. These alterations are probably the result of an imbalance of the homeostatic interaction between the epithelium and the underlying stroma.
Football boot insoles and sensitivity to extent of ankle inversion movement.
Waddington, G; Adams, R
2003-04-01
The capacity of the plantar sole of the foot to convey information about foot position is reduced by conventional smooth boot insoles, compared with barefoot surface contact. To test the hypothesis that movement discrimination can be restored by inserting textured replacement insoles, footwear conditions were varied and the accuracy of judgments of the extent of ankle inversion movement was measured. An automated testing device, the ankle movement extent discrimination apparatus (AMEDA), developed to assess active ankle function in weight bearing without a balance demand, was used to test the effects of sole inserts in soccer boots. Seventeen elite soccer players, the members of the 2000 Australian Women's soccer squad (34 ankles), took part in the study. Subjects were randomly allocated to start testing in bare feet, their own football boots, or their own football boots with replacement insoles, and on the left or right side. Subjects underwent six 50-trial blocks, in which they completed all footwear conditions. The sole inserts were cut to size for each foot from textured rubber "finger profile" sheeting. Movement discrimination scores were significantly worse when subjects wore their football boots and socks, compared with barefoot data collected at the same time. The substitution of textured insoles for conventional smooth insoles in the football boots was found to restore movement discrimination to barefoot levels. The lower active movement discrimination scores of athletes wearing football boots with smooth insoles suggest that the insole is one aspect of football boot and sport shoe design that could be modified to provide the sensory feedback needed for accurate foot positioning.
Stuart, Deborah; Chapman, Mark; Rees, Sara; Woodward, Stephanie; Kohan, Donald E
2013-08-01
Endothelin-1 binding to endothelin A receptors (ETA) elicits profibrogenic, proinflammatory, and proliferative effects that can promote a wide variety of diseases. Although ETA antagonists are approved for the treatment of pulmonary hypertension, their clinical utility in several other diseases has been limited by fluid retention. ETA blocker-induced fluid retention could be due to inhibition of ETA activation in the heart, vasculature, and/or kidney; consequently, the current study was designed to define which of these sites are involved. Mice were generated with absence of ETA specifically in cardiomyocytes (heart), smooth muscle, the nephron, the collecting duct, or no deletion (control). Administration of the ETA antagonist ambrisentan or atrasentan for 2 weeks caused fluid retention in control mice on a high-salt diet as assessed by increases in body weight, total body water, and extracellular fluid volume (using impedance plethysmography), as well as decreases in hematocrit (hemodilution). Mice with heart ETA knockout retained fluid in a similar manner as controls when treated with ambrisentan or atrasentan. Mice with smooth muscle ETA knockout had substantially reduced fluid retention in response to either ETA antagonist. Mice with nephron or collecting duct ETA disruption were completely prevented from ETA blocker-induced fluid retention. Taken together, these findings suggest that ETA antagonist-induced fluid retention is due to a direct effect of this class of drug on the collecting duct, is partially related to the vascular action of the drugs, and is not due to alterations in cardiac function.
Somvipart, Siraporn; Kanokpanont, Sorada; Rangkupan, Rattapol; Ratanavaraporn, Juthamas; Damrongsakkul, Siriporn
2013-04-01
Thai silk fibroin and gelatin are attractive biomaterials for tissue engineering and controlled release applications due to their biocompatibility, biodegradability, and bioactive properties. The development of electrospun fiber mats from silk fibroin and gelatin was reported previously. However, burst drug release from such fiber mats remained a problem. In this study, the formation of beads on the fibers, intended for the sustained release of drugs, was of interest. The beaded fiber mats were fabricated using an electrospinning technique by controlling the solution concentration, the weight blending ratio of the Thai silk fibroin/gelatin blend, and the applied voltage. It was found that the optimal conditions, a solution concentration of 8-10% (w/v), a Thai silk fibroin/gelatin weight blending ratio of 70/30, and an applied voltage of 18 kV, provided fibers with homogeneous bead formation. The beaded fiber mats obtained were then crosslinked by the reaction of carbodiimide hydrochloride (EDC)/N-hydroxysuccinimide (NHS). Methylene blue, as a model active compound, was loaded on the fiber mats. The release of methylene blue from the beaded fiber mats was tested in comparison with that from smooth fiber mats without beads. It was found that the beaded fiber mats prolonged the release of methylene blue compared with the smooth fiber mats, possibly because the beaded fiber mats absorb and retain a higher amount of methylene blue than the fiber mats without beads. Thai silk fibroin/gelatin beaded fiber mats were established as an effective carrier for controlled release applications. Copyright © 2013 Elsevier B.V. All rights reserved.
Tsanas, Athanasios; Clifford, Gari D
2015-01-01
Sleep spindles are critical in characterizing sleep and have been associated with cognitive function and pathophysiological assessment. Typically, their detection relies on the subjective and time-consuming visual examination of electroencephalogram (EEG) signal(s) by experts, and has led to large inter-rater variability as a result of poor definition of sleep spindle characteristics. Hitherto, many algorithmic spindle detectors inherently make signal stationarity assumptions (e.g., Fourier transform-based approaches) which are inappropriate for EEG signals, and frequently rely on additional information which may not be readily available in many practical settings (e.g., more than one EEG channels, or prior hypnogram assessment). This study proposes a novel signal processing methodology relying solely on a single EEG channel, and provides objective, accurate means toward probabilistically assessing the presence of sleep spindles in EEG signals. We use the intuitively appealing continuous wavelet transform (CWT) with a Morlet basis function, identifying regions of interest where the power of the CWT coefficients corresponding to the frequencies of spindles (11-16 Hz) is large. The potential for assessing the signal segment as a spindle is refined using local weighted smoothing techniques. We evaluate our findings on two databases: the MASS database comprising 19 healthy controls and the DREAMS sleep spindle database comprising eight participants diagnosed with various sleep pathologies. We demonstrate that we can replicate the experts' sleep spindles assessment accurately in both databases (MASS database: sensitivity: 84%, specificity: 90%, false discovery rate 83%, DREAMS database: sensitivity: 76%, specificity: 92%, false discovery rate: 67%), outperforming six competing automatic sleep spindle detection algorithms in terms of correctly replicating the experts' assessment of detected spindles.
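A heavily simplified stand-in for the spindle-band detection step can be sketched with a band-pass filter and a smoothed power envelope. This swaps the paper's Morlet continuous wavelet transform and probabilistic refinement for a Butterworth filter and a fixed threshold, and runs on a synthetic signal rather than real EEG:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200  # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)

# Synthetic single-channel "EEG": slow 2 Hz background plus a 13 Hz
# spindle-like burst between 4 s and 5 s.
eeg = np.sin(2 * np.pi * 2 * t)
burst = (t >= 4) & (t < 5)
eeg[burst] += np.sin(2 * np.pi * 13 * t[burst])

# Band-pass to the spindle band (11-16 Hz), then smooth the squared signal
# with a 0.25 s moving average to obtain a power envelope.
b, a = butter(4, [11, 16], btype="bandpass", fs=fs)
sigma_band = filtfilt(b, a, eeg)
win = int(0.25 * fs)
power = np.convolve(sigma_band ** 2, np.ones(win) / win, mode="same")

# A crude fixed threshold stands in for the probabilistic assessment step.
detected = power > 0.5 * power.max()
```

The band-pass/envelope approach shares the core idea of concentrating on 11-16 Hz power but lacks the time-frequency localization of the CWT, which is one reason wavelet-based detectors handle nonstationary EEG better than Fourier- or filter-based ones.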
Kulick, Michael I
2010-06-01
Cellulite is a condition usually limited to women. The most common location for this surface irregularity is the thigh. Evaluation of treatment efficacy is difficult because of the reliance on patient satisfaction surveys and flash photography, which can "flatten" surface texture. Reproducibility of photographs is also difficult, as subtle changes in body position can affect appearance. Twenty women with mild to moderate cellulite of their lateral thighs were enrolled. Pretreatment and posttreatment assessment included patient weight, body mass index, percentage body fat, standard digital photographs, VECTRA three-dimensional images, and patient questionnaire. Patients received two treatments per week for 4 weeks. Treatment time was 15 minutes per thigh using the SmoothShapes device. Patients were evaluated 1, 3, and 6 months after their last treatment. To be considered improved after treatment, both thighs needed clear improvement in contour as determined by the "untextured" images obtained with the VECTRA camera system. This device depicts skin contour independent of incident lighting. There were no complications. Seventeen patients had complete data for analysis. Ninety-four percent of the patients felt their cellulite was improved. VECTRA analysis showed 82 percent improvement at 1 month, 76 percent improvement at 3 months, and 76 percent improvement at 6 months. Initial cellulite irregularities and improvement were more difficult to discern using standard digital photographs. There was an average increase in patient weight, body mass index, and percentage body fat at 6 months. The SmoothShapes device provided improvement in surface contour (cellulite) 6 months after the last treatment in the majority of the patients based on patient survey and VECTRA analysis.
Terrestrial cross-calibrated assimilation of various datasources
NASA Astrophysics Data System (ADS)
Groß, André; Müller, Richard; Schömer, Elmar; Trentmann, Jörg
2014-05-01
We introduce a novel software tool, ANACLIM, for the efficient assimilation of multiple two-dimensional data sets using a variational approach. We consider a single objective function in two spatial coordinates with higher derivatives. This function measures the deviation of the input data from the target data set. By using the Euler-Lagrange formalism, the minimization of this objective function can be transformed into a sparse system of linear equations, which can be efficiently solved by a conjugate gradient solver on a desktop workstation. The objective function allows for a series of physically motivated constraints. The user can control the relative global weights, as well as the individual weight of each constraint at the per-grid-point level. The different constraints are realized as separate terms of the objective function: one similarity term for each input data set and two additional smoothness terms, penalizing high gradient and curvature values. ANACLIM is designed to combine similarity and smoothness operators easily and to allow a choice of different solvers. We performed a series of benchmarks to calibrate and verify our solution. We use, for example, terrestrial stations of BSRN and GEBA for the solar incoming flux and AERONET stations for aerosol optical depth. First results show that combining these data sources yields a significant benefit over the individual input datasets. ANACLIM also includes a region-growing algorithm for the assimilation of ground-based data. The region-growing algorithm computes the maximum area around a station that represents the station data. The regions are grown under several constraints, such as the homogeneity of the area. The resulting dataset is then used within the assimilation process. Verification is performed by cross-validation. The method and validation results will be presented and discussed.
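In one dimension, the variational scheme described above reduces to a small quadratic problem: the Euler-Lagrange equations of a weighted similarity-plus-smoothness objective form a sparse linear system that a conjugate gradient solver handles directly. The sketch below uses assumed weights and only a gradient penalty, not ANACLIM's actual constraint set:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 200
truth = np.sin(np.linspace(0, 2 * np.pi, n))
d1 = truth + 0.3 * rng.normal(size=n)   # first noisy input data set
d2 = truth + 0.3 * rng.normal(size=n)   # second noisy input data set
w1, w2, lam = 1.0, 1.0, 20.0            # similarity weights and smoothness weight

# First-difference operator D; penalizing ||D x||^2 suppresses high gradients.
D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))

# Euler-Lagrange / normal equations of
#   J(x) = w1 ||x - d1||^2 + w2 ||x - d2||^2 + lam ||D x||^2
# give the sparse SPD system (w1 I + w2 I + lam D^T D) x = w1 d1 + w2 d2,
# solved here by conjugate gradients.
A = (w1 + w2) * sp.eye(n) + lam * (D.T @ D)
x, info = cg(A, w1 * d1 + w2 * d2)
```

Because each term of J is quadratic, the combined field is a compromise between fidelity to both inputs and smoothness, exactly the structure of the similarity and smoothness terms described in the abstract; a curvature penalty would simply add a second-difference operator to A.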
Shark fisheries in the Southeast Pacific: A 61-year analysis from Peru
Gonzalez-Pestana, Adriana; Kouri J., Carlos; Velez-Zuazo, Ximena
2016-01-01
Peruvian waters exhibit high conservation value for sharks. This contrasts with a lag in initiatives for their management and a lack of studies about their biology, ecology and fishery. We investigated the dynamics of Peruvian shark fishery and its legal framework identifying information gaps for recommending actions to improve management. Further, we investigated the importance of the Peruvian shark fishery from a regional perspective. From 1950 to 2010, 372,015 tons of sharks were landed in Peru. From 1950 to 1969, we detected a significant increase in landings; but from 2000 to 2011 there was a significant decrease in landings, estimated at 3.5% per year. Six species represented 94% of landings: blue shark ( Prionace glauca), shortfin mako ( Isurus oxyrinchus), smooth hammerhead ( Sphyrna zygaena), common thresher ( Alopias vulpinus), smooth-hound ( Mustelus whitneyi) and angel shark ( Squatina californica). Of these, the angel shark exhibits a strong and significant decrease in landings: 18.9% per year from 2000 to 2010. Peru reports the highest accumulated historical landings in the Pacific Ocean; but its contribution to annual landings has decreased since 1968. Still, Peru is among the top 12 countries exporting shark fins to the Hong Kong market. Although the government collects total weight by species, the number of specimens landed as well as population parameters (e.g. sex, size and weight) are not reported. Further, for some genera, species-level identification is deficient and so overestimates the biomass landed by species and underestimates the species diversity. Recently, regional efforts to regulate shark fishery have been implemented to support the conservation of sharks but in Peru work remains to be done. PMID:27635216
Shark fisheries in the Southeast Pacific: A 61-year analysis from Peru.
Gonzalez-Pestana, Adriana; Kouri J, Carlos; Velez-Zuazo, Ximena
2014-01-01
Peruvian waters exhibit high conservation value for sharks. This contrasts with a lag in initiatives for their management and a lack of studies about their biology, ecology and fishery. We investigated the dynamics of Peruvian shark fishery and its legal framework identifying information gaps for recommending actions to improve management. Further, we investigated the importance of the Peruvian shark fishery from a regional perspective. From 1950 to 2010, 372,015 tons of sharks were landed in Peru. From 1950 to 1969, we detected a significant increase in landings; but from 2000 to 2011 there was a significant decrease in landings, estimated at 3.5% per year. Six species represented 94% of landings: blue shark ( Prionace glauca), shortfin mako ( Isurus oxyrinchus), smooth hammerhead ( Sphyrna zygaena), common thresher ( Alopias vulpinus), smooth-hound ( Mustelus whitneyi) and angel shark ( Squatina californica). Of these, the angel shark exhibits a strong and significant decrease in landings: 18.9% per year from 2000 to 2010. Peru reports the highest accumulated historical landings in the Pacific Ocean; but its contribution to annual landings has decreased since 1968. Still, Peru is among the top 12 countries exporting shark fins to the Hong Kong market. Although the government collects total weight by species, the number of specimens landed as well as population parameters (e.g. sex, size and weight) are not reported. Further, for some genera, species-level identification is deficient and so overestimates the biomass landed by species and underestimates the species diversity. Recently, regional efforts to regulate shark fishery have been implemented to support the conservation of sharks but in Peru work remains to be done.
Multicomponent ensemble models to forecast induced seismicity
NASA Astrophysics Data System (ADS)
Király-Proag, E.; Gischig, V.; Zechar, J. D.; Wiemer, S.
2018-01-01
In recent years, human-induced seismicity has become an increasingly relevant topic due to its economic and social implications. Several models and approaches have been developed to explain the underlying physical processes or to forecast induced seismicity. They range from simple statistical models to coupled numerical models incorporating complex physics. We advocate the need for forecast testing as currently the best method for ascertaining whether models are capable of reasonably accounting for the key governing physical processes. Moreover, operational forecast models are of great interest for on-site decision-making in projects entailing induced earthquakes. We previously introduced a standardized framework following the guidelines of the Collaboratory for the Study of Earthquake Predictability, the Induced Seismicity Test Bench, to test, validate, and rank induced seismicity models. In this study, we describe how to construct multicomponent ensemble models based on Bayesian weightings that deliver more accurate forecasts than individual models in the case of the Basel 2006 and Soultz-sous-Forêts 2004 enhanced geothermal stimulation projects. For this, we examine five calibrated variants of two significantly different model groups: (1) Shapiro and Smoothed Seismicity, based on the seismogenic index, a simple modified Omori-law-type seismicity decay, and temporally weighted smoothed seismicity; (2) Hydraulics and Seismicity, based on numerically modelled pore pressure evolution that triggers seismicity using the Mohr-Coulomb failure criterion. We also demonstrate how the individual and ensemble models would perform as part of an operational Adaptive Traffic Light System. Investigating seismicity forecasts based on a range of potential injection scenarios, we use forecast periods of different durations to compute the occurrence probabilities of seismic events M ≥ 3.
We show that in the case of the Basel 2006 geothermal stimulation the models forecast hazardous levels of seismicity days before the occurrence of felt events.
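A minimal sketch of likelihood-based ensemble weighting in the spirit of the abstract above; the paper's exact Bayesian weighting is not reproduced here, and the function names are assumptions. The sketch simply normalizes exponentiated log-likelihood scores of each model against observed seismicity and combines the per-model forecast rates with those weights:

```python
import numpy as np

def ensemble_weights(log_likelihoods):
    """Weight each forecast model by the (normalized) likelihood of the
    observed seismicity under that model."""
    ll = np.asarray(log_likelihoods, dtype=float)
    w = np.exp(ll - ll.max())  # subtract the max for numerical stability
    return w / w.sum()

def ensemble_forecast(rates, weights):
    """Weighted combination of per-model forecast rate arrays."""
    return np.tensordot(weights, np.asarray(rates, dtype=float), axes=1)
```

A model whose log-likelihood is higher by a few units dominates the ensemble, which mirrors the idea that better-performing variants should receive more weight.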
NASA Astrophysics Data System (ADS)
Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng
2017-06-01
A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency is poorer than expected, and existing smoothing models cannot explicitly specify methods to improve this efficiency. We presented an explicit time-dependent smoothing evaluation model containing specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were o
Baya Botti, A; Pérez-Cueto, F J A; Vasquez Monllor, P A; Kolsteren, P W
2009-01-01
Anthropometry is important as a clinical tool for individual follow-up as well as for planning and health policy-making at the population level. Recent references for Bolivian adolescents are not available. The aim of this cross-sectional study was to provide age- and sex-specific centile values and charts of body mass index, height, weight, and arm, wrist and abdominal circumference for Bolivian adolescents. Data from the MEtabolic Syndrome in Adolescents (MESA) study were used. Thirty-two Bolivian clusters from urban and rural areas were selected randomly considering population proportions; 3445 school-going adolescents, 12 to 18 y (45% males, 55% females), underwent anthropometric evaluation by trained personnel using standardized protocols for all interviews and examinations. Weight, height, wrist, arm and abdominal circumference data were collected. Body mass index was calculated. Smoothed age- and gender-specific 3rd, 5th, 10th, 25th, 50th, 75th, 85th, 90th, 95th and 97th Bolivian adolescent percentiles (BAP) and charts (BAC) were derived using LMS regression. Percentile-based reference data for the anthropometrics of Bolivian adolescents are presented for the first time.
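The LMS method used in the abstract above defines each smoothed centile curve through three age-specific parameters: a Box-Cox power L, the median M, and a coefficient of variation S. A minimal sketch of how centile values and z-scores follow from published L, M and S values (these are the standard LMS formulas, not the MESA fitting procedure; the function names are assumptions):

```python
import numpy as np
from scipy.stats import norm

def lms_centile(L, M, S, p):
    """Measurement value at the p-th centile: X = M * (1 + L*S*z_p)^(1/L),
    with the log-normal limit M * exp(S*z_p) when L is (near) zero."""
    z = norm.ppf(p)  # standard-normal quantile for centile p
    if abs(L) < 1e-8:
        return M * np.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

def lms_zscore(x, L, M, S):
    """Z-score of a measurement x: Z = ((x/M)^L - 1) / (L*S)."""
    if abs(L) < 1e-8:
        return np.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)
```

The median centile (p = 0.5) always maps back to M, and the two functions are exact inverses of one another, which is how a chart derived by LMS regression is read in both directions.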
Murase, E; Siegelman, E S; Outwater, E K; Perez-Jaffe, L A; Tureck, R W
1999-01-01
Leiomyomas are the most common uterine neoplasm and are composed of smooth muscle with varying amounts of fibrous connective tissue. As leiomyomas enlarge, they may outgrow their blood supply, resulting in various types of degeneration: hyaline or myxoid degeneration, calcification, cystic degeneration, and red degeneration. Leiomyomas are classified as submucosal, intramural, or subserosal; the latter may become pedunculated and simulate ovarian neoplasms. Although most leiomyomas are asymptomatic, patients may present with abnormal uterine bleeding, pressure on adjacent organs, pain, infertility, or a palpable abdominopelvic mass. Magnetic resonance (MR) imaging is the most accurate imaging technique for detection and localization of leiomyomas. On T2-weighted images, nondegenerated leiomyomas appear as well-circumscribed masses of decreased signal intensity; however, cellular leiomyomas can have relatively higher signal intensity on T2-weighted images and demonstrate enhancement on contrast material-enhanced images. Degenerated leiomyomas have variable appearances on T2-weighted images and contrast-enhanced images. The differential diagnosis of leiomyomas includes adenomyosis, solid adnexal mass, focal myometrial contraction, and uterine leiomyosarcoma. For patients with symptoms, medical or surgical treatment may be indicated. MR imaging also has a role in treatment of leiomyomas by assisting in surgical planning and monitoring the response to medical therapy.
Chen, Zhiru; Hong, Wenxue
2016-02-01
Considering the low prediction accuracy for positive samples and the poor overall classification caused by the unbalanced sample data of microRNA (miRNA) targets, we propose in this paper a support vector machine integration of under-sampling and weight (SVM-IUSW) algorithm, an under-sampling-based ensemble learning algorithm. The algorithm adopts SVM as the base learner and AdaBoost as the integration framework, and embeds clustering-based under-sampling into the iterative process, aiming at reducing the degree of unbalanced distribution of positive and negative samples. Meanwhile, in the process of adaptive weight adjustment of the samples, the SVM-IUSW algorithm eliminates abnormal negative samples with a robust sample-weight smoothing mechanism so as to avoid over-learning. Finally, the prediction of the miRNA target integrated classifier is achieved by combining multiple weak classifiers through a voting mechanism. Experiments revealed that SVM-IUSW, compared with other algorithms on unbalanced dataset collections, could not only improve the accuracy on positive targets and the overall classification effect, but also enhance the generalization ability of the miRNA target classifier.
Liu, Dong-jun; Li, Li
2015-01-01
PM2.5 is the main factor in haze-fog pollution in China. The trend of PM2.5 concentration was analyzed from a qualitative point of view based on mathematical models and simulation in this study. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted with the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of each single prediction method and had better applicability. It offers a new prediction method for the air quality forecasting field. PMID:26110332
Liu, Dong-jun; Li, Li
2015-06-23
PM2.5 is the main factor in haze-fog pollution in China. The trend of PM2.5 concentration was analyzed from a qualitative point of view based on mathematical models and simulation in this study. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted with the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of each single prediction method and had better applicability. It offers a new prediction method for the air quality forecasting field.
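A hedged sketch of how entropy-based weights might combine several forecasts, in the spirit of the Entropy Weighting Method named above. This is a simplified reading: normalizing each model's absolute-error series into a probability vector and weighting by one minus its entropy is an assumption, not the paper's exact decision matrix, and it presumes at least one model has a non-uniform error series:

```python
import numpy as np

def entropy_weights(errors):
    """errors[k, t]: absolute forecast error of model k at time t.
    A model whose normalized error series is far from uniform carries more
    'information' and here receives correspondingly more weight."""
    e = np.asarray(errors, dtype=float)
    p = e / e.sum(axis=1, keepdims=True)  # normalize each model's error series
    k = 1.0 / np.log(p.shape[1])
    # Shannon entropy per model; zero entries contribute nothing
    H = -k * np.sum(np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0), axis=1)
    d = 1.0 - H  # divergence degree of each model
    return d / d.sum()

def combined_forecast(forecasts, weights):
    """Weighted sum of the individual model forecasts (ARIMA, ANN, ESM, ...)."""
    return np.tensordot(weights, np.asarray(forecasts, dtype=float), axes=1)
```

The combination balances the deviations of the single methods, which is the stated motivation for the CFM.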
Ignition of Hydrogen-Oxygen Rocket Combustor with Chlorine Trifluoride and Triethylaluminum
NASA Technical Reports Server (NTRS)
Gregory, John W.; Straight, David M.
1961-01-01
Ignition of a nominal-125-pound-thrust cold (2000 R) gaseous-hydrogen - liquid-oxygen rocket combustor with chlorine trifluoride (hypergolic with hydrogen) and triethylaluminum (hypergolic with oxygen) resulted in consistently smooth starting transients for a wide range of combustor operating conditions. The combustor exhaust nozzle discharged into air at ambient conditions. Each starting transient consisted of the following sequence of events: injection of the lead main propellant, injection of the igniter chemical, ignition of these two chemicals, injection of the second main propellant, ignition of the two main propellants, increase in chamber pressure to its terminal value, and cutoff of igniter-chemical flow. Smooth ignition was obtained with an ignition delay of less than 100 milliseconds for the reaction of the lead propellant with the igniter chemical using approximately 0.5 cubic inch (0.038 lb) of chlorine trifluoride or 1.0 cubic inch (0.031 lb) of triethylaluminum. These quantities of igniter chemical were sufficient to ignite a 20-percent-fuel hydrogen-oxygen mixture with a delay time of less than 15 milliseconds. Test results indicated that a simple, lightweight chemical ignition system for hydrogen-oxygen rocket engines may be possible.
NASA Astrophysics Data System (ADS)
Al-Ajmi, R. M.; Abou-Ziyan, H. Z.; Mahmoud, M. A.
2012-01-01
This paper reports the results of a comprehensive study aimed at identifying the best neural network architecture and parameters to predict subcooled boiling characteristics of engine oils. A total of 57 different neural networks (NNs), derived from 14 different NN architectures, were evaluated for four different prediction cases. The NNs were trained on experimental datasets obtained for five engine oils of different chemical compositions. The performance of each NN was evaluated using a rigorous statistical analysis as well as careful examination of the smoothness of the predicted boiling curves. One NN, out of the 57 evaluated, correctly predicted the boiling curves for all cases considered, whether for individual oils or for all oils taken together. It was found that the pattern selection and weight update techniques strongly affect the performance of the NNs. It was also revealed that the use of descriptive statistical analysis, such as R2, mean error, standard deviation, and T and slope tests, is a necessary but not sufficient condition for evaluating NN performance. The performance criteria should also include inspection of the smoothness of the predicted curves, either visually or by plotting the slopes of these curves.
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2018-02-13
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a proposed multi-scale joint decomposition framework (MJDF) and shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are weighted and combined into a detail-enhanced layer. As directional filters are effective in capturing salient information, SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual-saliency-map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity level measurement for directional coefficient fusion. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with its shift-invariance, directional selectivity, and detail-enhanced property, is efficient in preserving and enhancing the detail information of multimodality medical images. Graphical abstract: The detailed implementation of the proposed medical image fusion algorithm.
Robust pulmonary lobe segmentation against incomplete fissures
NASA Astrophysics Data System (ADS)
Gu, Suicheng; Zheng, Qingfeng; Siegfried, Jill; Pu, Jiantao
2012-03-01
As important anatomical landmarks of the human lung, accurately segmented lobes may be useful for characterizing specific lung diseases (e.g., inflammatory, granulomatous, and neoplastic diseases). A number of investigations have shown that pulmonary fissures are often incompletely depicted in images, thereby making the computerized identification of individual lobes a challenging task. Our purpose is to develop a fully automated algorithm for accurate identification of individual lobes regardless of the integrity of the pulmonary fissures. The underlying idea of the developed lobe segmentation scheme is to use piecewise planes to approximate the detected fissures. After a rotation and a global smoothing, a number of small planes are fitted using local fissure points. The local surfaces are finally combined for lobe segmentation using a quadratic B-spline weighting strategy to ensure that the segmentation is smooth. The performance of the developed scheme was assessed by comparison with a manually created reference standard on a dataset of 30 lung CT examinations. These examinations covered a number of lung diseases and were selected from a large chronic obstructive pulmonary disease (COPD) dataset. The results indicate that our lobe segmentation scheme is efficient and accurate in the presence of incomplete fissures.
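The piecewise-planar fissure approximation described above rests on fitting small planes to local fissure points. A minimal least-squares sketch of that single building block (the rotation, global smoothing, and quadratic B-spline blending steps are omitted, and the function name is an assumption):

```python
import numpy as np

def fit_local_plane(points):
    """Least-squares plane z = a*x + b*y + c through local fissure points
    given as (x, y, z) triples; returns (a, b, c)."""
    pts = np.asarray(points, dtype=float)
    # design matrix [x, y, 1] for the linear model z = a*x + b*y + c
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coef
```

In the full scheme many such local planes would be fitted over neighbourhoods of detected fissure points and then blended into a smooth surface separating the lobes.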
Vascular Responsiveness in Adrenalectomized Rats with Corticosterone Replacement
NASA Technical Reports Server (NTRS)
Darlington, Daniel N.; Kaship, Kapil; Keil, Lanny C.; Dallman, Mary F.
1989-01-01
To determine, under resting, unstressed conditions, the circulating glucocorticoid concentrations that best maintain sensitivity of the vascular smooth muscle and baroreceptor responses to vasoactive agents, rats with vascular cannulas were sham-adrenalectomized (sham) or adrenalectomized (ADRX) and provided with four levels of corticosterone replacement (100-mg fused pellets of corticosterone:cholesterol at 0, 20, 40, and 80% corticosterone, implanted subcutaneously at the time of adrenal surgery). Changes in vascular and baroreflex responses were determined after intravenous injection of varying doses of phenylephrine and nitroglycerin, with measurement of arterial blood pressure and heart rate in the conscious, chronically cannulated rats. Vascular sensitivity was decreased, and resting arterial blood pressure tended to be decreased, in the adrenalectomized rats; both were restored to normal with the level of corticosterone (40%) that also maintained body weight gain, thymus weight, and plasma corticosteroid-binding globulin concentrations at normal values. The baroreflex curve generated from the sham group was different from the curves generated from the ADRX+0, 20, and 40% groups, but not different from that of the ADRX+80% group, suggesting that the baroreflex is maintained by higher levels of corticosterone than are necessary for the maintenance of the other variables. These data demonstrate that physiological levels of corticosterone (40% pellet) restore vascular responsiveness, body weight, thymus weight, and transcortin levels to normal in ADRX rats, whereas higher levels (80% pellet) are necessary for restoration of the baroreflex.
de Carvalho, Wellington Roberto Gomes; de Moraes, Anderson Marques; Roman, Everton Paulo; Santos, Keila Donassolo; Medaets, Pedro Augusto Rodrigues; Veiga-Junior, Nélio Neves; Coelho, Adrielle Caroline Lace de Moraes; Krahenbühl, Tathyane; Sewaybricker, Leticia Esposito; Barros-Filho, Antonio de Azevedo; Morcillo, Andre Moreno; Guerra-Júnior, Gil
2015-01-01
Aims To establish normative data for phalangeal quantitative ultrasound (QUS) measures in Brazilian students. Methods The sample was composed of 6870 students (3688 females and 3182 males), aged 6 to 17 years. The bone status parameter, Amplitude Dependent Speed of Sound (AD-SoS) was assessed by QUS of the phalanges using DBM Sonic BP (IGEA, Carpi, Italy) equipment. Skin color was obtained by self-evaluation. The LMS method was used to derive smoothed percentiles reference charts for AD-SoS according to sex, age, height and weight and to generate the L, M, and S parameters. Results Girls showed higher AD-SoS values than boys in the age groups 7–16 (p<0.001). There were no differences on AD-SoS Z-scores according to skin color. In both sexes, the obese group showed lower values of AD-SoS Z-scores compared with subjects classified as thin or normal weight. Age (r2 = 0.48) and height (r2 = 0.35) were independent predictors of AD-SoS in females and males, respectively. Conclusion AD-SoS values in Brazilian children and adolescents were influenced by sex, age and weight status, but not by skin color. Our normative data could be used for monitoring AD-SoS in children or adolescents aged 6–17 years. PMID:26043082
Effect of Blade-surface Finish on Performance of a Single-stage Axial-flow Compressor
NASA Technical Reports Server (NTRS)
Moses, Jason J.; Serovy, George K.
1951-01-01
A set of modified NACA 5509-34 rotor and stator blades was investigated with rough-machined, hand-filed, and highly polished surface finishes over a range of weight flows at six equivalent tip speeds from 672 to 1092 feet per second to determine the effect of blade-surface finish on the performance of a single-stage axial-flow compressor. Surface-finish effects decreased with increasing compressor speed and with decreasing flow at a given speed. In general, finishing blade surfaces below the roughness that may be considered aerodynamically smooth on the basis of an admissible-roughness formula will have no effect on compressor performance.
Generalized Momentum Control of the Spin-Stabilized Magnetospheric Multiscale Formation
NASA Technical Reports Server (NTRS)
Queen, Steven Z.; Shah, Neerav; Benegalrao, Suyog S.; Blackman, Kathie
2015-01-01
The Magnetospheric Multiscale (MMS) mission consists of four identically instrumented, spin-stabilized observatories elliptically orbiting the Earth in a tetrahedron formation. The on-board attitude control system adjusts the angular momentum of the system using a generalized thruster-actuated control system that simultaneously manages precession, nutation, and spin. Originally developed using Lyapunov control theory with rate feedback, a published algorithm has been augmented to provide a balanced attitude/rate response using a single weighting parameter. This approach overcomes an orientation sign ambiguity in the existing formulation and also allows for a smoothly tuned response applicable both to a compact, agile spacecraft and to one with large articulating appendages.
Solar concentrators for advanced solar-dynamic power systems in space
NASA Technical Reports Server (NTRS)
Rockwell, Richard
1993-01-01
This report summarizes the results of a study performed by Hughes Danbury Optical Systems, HDOS (formerly Perkin-Elmer), to design, fabricate, and test a lightweight (2 kg/sq m), self-supporting, and highly reflective sub-scale concentrating mirror panel suitable for use in space. The HDOS panel design utilizes Corning's 'micro sheet' glass as the top layer of a composite honeycomb sandwich. This approach, whose manufacturability was previously demonstrated under an earlier NASA contract, provides a smooth (specular) reflective surface without the weight of a conventional glass panel. The primary result of this study is a point design and its performance assessment.
Large-aperture focusing of x rays with micropore optics using dry etching of silicon wafers.
Ezoe, Yuichiro; Moriyama, Teppei; Ogawa, Tomohiro; Kakiuchi, Takuya; Mitsuishi, Ikuyuki; Mitsuda, Kazuhisa; Aoki, Tatsuhiko; Morishita, Kohei; Nakajima, Kazuo
2012-03-01
Large-aperture focusing of Al Kα (1.49 keV) x-ray photons using micropore optics made from a dry-etched 4 in. (100 mm) silicon wafer is demonstrated. Sidewalls of the micropores are smoothed with high-temperature annealing to work as x-ray mirrors. The wafer is bent to a spherical shape to collect parallel x rays into a focus. Our results indicate that this new type of optics allows for the manufacturing of ultra-lightweight, high-performance x-ray imaging optics with large apertures at low cost. © 2012 Optical Society of America
Multi-sensor physical activity recognition in free-living.
Ellis, Katherine; Godbole, Suneeta; Kerr, Jacqueline; Lanckriet, Gert
Physical activity monitoring in free-living populations has many applications for public health research, weight-loss interventions, context-aware recommendation systems, and assistive technologies. We present a system for physical activity recognition that is learned from a free-living dataset of 40 women who wore multiple sensors for seven days. The multi-level classification system first learns low-level codebook representations for each sensor and uses a random forest classifier to produce minute-level probabilities for each activity class. Then a higher-level HMM layer learns patterns of transitions and durations of activities over time to smooth the minute-level predictions.
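The higher-level HMM layer described above can be sketched as Viterbi decoding over the minute-level class probabilities. This is a simplification, not the paper's implementation: the transition matrix and prior are assumed to have been estimated from labelled sequences, and treating the random-forest outputs directly as emission likelihoods is itself an assumption:

```python
import numpy as np

def hmm_smooth(probs, trans, prior):
    """Viterbi decoding: probs[t, k] is the minute-level probability of
    activity k at minute t; trans[i, j] is the learned transition
    probability from activity i to j. Returns the smoothed label path."""
    T, K = probs.shape
    logd = np.log(prior) + np.log(probs[0])   # log Viterbi scores at t = 0
    back = np.zeros((T, K), dtype=int)        # backpointers
    for t in range(1, T):
        cand = logd[:, None] + np.log(trans)  # score of each (prev, cur) pair
        back[t] = cand.argmax(axis=0)
        logd = cand.max(axis=0) + np.log(probs[t])
    path = np.zeros(T, dtype=int)
    path[-1] = logd.argmax()
    for t in range(T - 1, 0, -1):             # trace back the best path
        path[t - 1] = back[t, path[t]]
    return path
```

With a "sticky" transition matrix, a single weakly classified minute inside a stable activity bout is smoothed over rather than flipping the label.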
A new optimal seam method for seamless image stitching
NASA Astrophysics Data System (ADS)
Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng
2017-07-01
A novel optimal seam method that aims to stitch images with an overlapping area more seamlessly has been proposed. Considering that the traditional gradient-domain optimal seam method gives a poor color-difference measurement and that the fusion algorithm takes a long time, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are utilized individually. The proposed method exhibits better performance in eliminating the stitching seam than the traditional gradient optimal seam method and higher efficiency than the multi-band blending algorithm.
NASA Astrophysics Data System (ADS)
Itatani, Keiichi; Okada, Takashi; Uejima, Tokuhisa; Tanaka, Tomohiko; Ono, Minoru; Miyaji, Kagami; Takenaka, Katsu
2013-07-01
We have developed a system to estimate velocity vector fields inside the cardiac ventricle by echocardiography and to evaluate several flow dynamical parameters to assess the pathophysiology of cardiovascular diseases. A two-dimensional continuity equation was applied to color Doppler data, using speckle-tracking data as boundary conditions, and the velocity component perpendicular to the echo beam line was obtained. We determined the optimal smoothing method for the color Doppler data: a Gaussian filter with an 8-pixel standard deviation provided vorticity without nonphysiological stripe-shaped noise. We also determined the weight function at the bilateral boundaries given by the speckle-tracking data of ventricular or vascular wall motion: a weight function linear in the distance from the boundary provided accurate flow velocities not only inside the vortex flow but also in near-wall regions, on the basis of validation against a digital phantom of a pipe flow model.
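A rough sketch of the continuity-equation step with linear boundary weighting, as summarized above. The axis conventions (beam direction along the second array axis), the function name, and the cumulative-sum integration scheme are all assumptions; the published method additionally uses speckle-tracking data along the walls:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perpendicular_velocity(v_beam, dx, dy, v_top, v_bot, sigma=8):
    """Recover the cross-beam velocity from 2-D incompressibility
    dvx/dx + dvy/dy = 0. v_beam(y, x) is the colour-Doppler (beam-wise)
    velocity; v_top and v_bot are boundary values from speckle tracking.
    Integrations from the two boundaries are blended with weights linear
    in the distance to each boundary, as in the abstract."""
    vb = gaussian_filter(v_beam, sigma)        # 8-pixel-SD smoothing, per the study
    dvx = np.gradient(vb, dx, axis=1)          # along-beam derivative
    ny = vb.shape[0]
    up = v_top + np.cumsum(-dvx * dy, axis=0)              # integrate from top
    down = v_bot - np.cumsum((-dvx * dy)[::-1], axis=0)[::-1]  # from bottom
    w = np.linspace(0.0, 1.0, ny)[:, None]     # weight linear in distance
    return (1.0 - w) * up + w * down
```

For a uniform beam-wise field the divergence term vanishes and the result reduces to a linear blend of the two boundary velocities, which is the intended limiting behaviour of the weighting.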
Photo-enhanced performance and photo-tunable degradation in LC ecopolymers
NASA Astrophysics Data System (ADS)
Kaneko, Tatsuo
2007-05-01
Photosensitive, liquid crystalline (LC) polymers were prepared by in-bulk polymerization of phytomonomers such as cinnamic acid derivatives. The p-coumaric acid (4HCA) homopolymer showed a thermotropic LC phase in which a [2+2] cycloaddition photoreaction occurred under ultraviolet irradiation. The LC phase was exhibited only in a low-molecular-weight state, but the polymer was too brittle to be processed into a material. We therefore copolymerized 4HCA with a multifunctional cinnamate, 3,4-dihydroxycinnamic acid (caffeic acid; DHCA), to prepare a hyperbranching architecture. The many branches increased the apparent size of the polymer chain while keeping the number-average molecular weight low. P(4HCA-co-DHCA)s showed high performance, which may be attained through entanglement by the in-bulk formation of hyperbranched, rigid structures. P(4HCA-co-DHCA)s showed smooth hydrolysis, in-soil degradation, and a photoreaction cross-linking from conjugated cinnamate esters to aliphatic esters. The change in photoconversion degree tuned the polymer performance and chain hydrolysis.
Runge-Kutta discontinuous Galerkin method using a new type of WENO limiters on unstructured meshes
NASA Astrophysics Data System (ADS)
Zhu, Jun; Zhong, Xinghui; Shu, Chi-Wang; Qiu, Jianxian
2013-09-01
In this paper we generalize a new type of limiters based on the weighted essentially non-oscillatory (WENO) finite volume methodology for the Runge-Kutta discontinuous Galerkin (RKDG) methods solving nonlinear hyperbolic conservation laws, which were recently developed in [32] for structured meshes, to two-dimensional unstructured triangular meshes. The key idea of such limiters is to use the entire polynomials of the DG solutions from the troubled cell and its immediate neighboring cells, and then apply the classical WENO procedure to form a convex combination of these polynomials based on smoothness indicators and nonlinear weights, with suitable adjustments to guarantee conservation. The main advantage of this new limiter is its simplicity in implementation, especially for the unstructured meshes considered in this paper, as only information from immediate neighbors is needed and the usage of complicated geometric information of the meshes is largely avoided. Numerical results for both scalar equations and Euler systems of compressible gas dynamics are provided to illustrate the good performance of this procedure.
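The classical WENO building block referenced above (a convex combination of sub-stencil polynomials formed from smoothness indicators and nonlinear weights) is easiest to see in its one-dimensional, fifth-order finite-volume form; the limiter in the paper applies the same idea to entire DG polynomials on unstructured triangles. The sketch below uses the standard Jiang-Shu indicators and linear weights, not the paper's multi-dimensional construction:

```python
import numpy as np

def weno5(v, eps=1e-6):
    """5th-order WENO reconstruction of the cell-interface value x_{i+1/2}
    from five cell averages v[0..4] centered on cell i = 2."""
    # candidate 3rd-order reconstructions on the three sub-stencils
    p0 = (2 * v[0] - 7 * v[1] + 11 * v[2]) / 6.0
    p1 = (-v[1] + 5 * v[2] + 2 * v[3]) / 6.0
    p2 = (2 * v[2] + 5 * v[3] - v[4]) / 6.0
    # Jiang-Shu smoothness indicators for each sub-stencil
    b0 = 13 / 12 * (v[0] - 2 * v[1] + v[2]) ** 2 + 0.25 * (v[0] - 4 * v[1] + 3 * v[2]) ** 2
    b1 = 13 / 12 * (v[1] - 2 * v[2] + v[3]) ** 2 + 0.25 * (v[1] - v[3]) ** 2
    b2 = 13 / 12 * (v[2] - 2 * v[3] + v[4]) ** 2 + 0.25 * (3 * v[2] - 4 * v[3] + v[4]) ** 2
    # nonlinear weights from the linear (optimal) weights 1/10, 6/10, 3/10
    a = np.array([0.1 / (eps + b0) ** 2, 0.6 / (eps + b1) ** 2, 0.3 / (eps + b2) ** 2])
    w = a / a.sum()
    return w[0] * p0 + w[1] * p1 + w[2] * p2
```

On smooth data all indicators agree and the nonlinear weights revert to the linear ones, giving full fifth-order accuracy; near a discontinuity the weight of any sub-stencil crossing it collapses, which is exactly the non-oscillatory property the DG limiter borrows.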
Use of MAGSAT anomaly data for crustal structure and mineral resources in the US Midcontinent
NASA Technical Reports Server (NTRS)
Carmichael, R. S. (Principal Investigator)
1981-01-01
The analysis and preliminary interpretation of investigator-B MAGSAT data are addressed. The data processing included: (1) removal of spurious data points; (2) statistical smoothing along individual data tracks to reduce the effect of geomagnetic transient disturbances; (3) comparison of data profiles spatially coincident in track location but acquired at different times; (4) reduction of data by weighted averaging to a grid with 1 deg x 1 deg latitude/longitude spacing, with elevations interpolated and weighted to a common datum of 400 km; (5) wavelength filtering; and (6) reduction of the anomaly map to the magnetic pole. Agreement was found between a magnitude data anomaly map and a reduced-to-the-pole map, supporting the general assumption that, on a large scale (long wavelength), induced crustal magnetization is responsible for the major anomalies. Anomalous features are identified, and explanations are suggested with regard to crustal structure, petrologic characteristics, and Curie temperature isotherms.
Hyperviscosity for unstructured ALE meshes
NASA Astrophysics Data System (ADS)
Cook, Andrew W.; Ulitsky, Mark S.; Miller, Douglas S.
2013-01-01
An artificial viscosity, originally designed for Eulerian schemes, is adapted for use in arbitrary Lagrangian-Eulerian simulations. Changes to the Eulerian model (dubbed 'hyperviscosity') are discussed, which enable it to work within a Lagrangian framework. New features include a velocity-weighted grid scale and a generalised filtering procedure, applicable to either structured or unstructured grids. The model employs an artificial shear viscosity for treating small-scale vorticity and an artificial bulk viscosity for shock capturing. The model is based on the Navier-Stokes form of the viscous stress tensor, including the diagonal rate-of-expansion tensor. A second-order version of the model is presented, in which Laplacian operators act on the velocity divergence and the grid-weighted strain-rate magnitude to ensure that the velocity field remains smooth at the grid scale. Unlike sound-speed-based artificial viscosities, the hyperviscosity model is compatible with the low Mach number limit. The new model outperforms a commonly used Lagrangian artificial viscosity on a variety of test problems.
NASA Astrophysics Data System (ADS)
Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.
2013-12-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U.S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km, as well as adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing method is to reduce smoothing distances where seismicity rates are high, locally increasing the modeled rates, and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models.
We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
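The adaptive bandwidth selection described above (each epicenter's smoothing distance set to the distance to its n-th nearest neighbor, then used in a kernel-smoothed rate model) can be sketched as below; the planar coordinates and isotropic Gaussian kernel are simplifying assumptions for illustration:

```python
import numpy as np

def adaptive_bandwidths(epicenters, n=3):
    """Smoothing distance per earthquake: distance to the n-th nearest
    neighbour (Helmstetter-style), so dense clusters get short bandwidths
    and isolated events get long ones."""
    pts = np.asarray(epicenters, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d.sort(axis=1)                # column 0 is the self-distance (0)
    return d[:, n]                # distance to the n-th nearest neighbour

def smoothed_rate(grid_pts, epicenters, bandwidths):
    """Smoothed seismicity rate: sum of 2-D Gaussian kernels, one per
    epicenter, each with its own bandwidth, evaluated at the grid points."""
    g = np.asarray(grid_pts, float)[:, None, :]
    e = np.asarray(epicenters, float)[None, :, :]
    r2 = ((g - e) ** 2).sum(-1)
    s2 = np.asarray(bandwidths) ** 2
    return (np.exp(-r2 / (2 * s2)) / (2 * np.pi * s2)).sum(axis=1)
```

An isolated event receives a much larger bandwidth than events inside a cluster, which is exactly the behavior described in the abstract: rates concentrate where seismicity is dense and spread out where it is sparse.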
Nationwide singleton birth weight percentiles by gestational age in Taiwan, 1998-2002.
Hsieh, Wu-Shiun; Wu, Hui-Chen; Jeng, Suh-Fang; Liao, Hua-Fang; Su, Yi-Ning; Lin, Shio-Jean; Hsieh, Chia-Jung; Chen, Pau-Chung
2006-01-01
There are limited nationwide population-based data about birth weight percentiles by gestational age in Taiwan. The purpose of this study was to develop updated intrauterine growth charts that are population based and contain birth weight percentiles by gestational age for singleton newborns in Taiwan. We abstracted and analyzed the birth registration database from the Ministry of the Interior in Taiwan during the period 1998-2002, which consisted of over one million singleton births. Percentiles of birth weight for each increment of gestational week from 21 to 44 weeks were estimated using smoothed means and standard deviations. The analyses revealed that birth weight rose with advancing gestational age, with steeper slopes during the third trimester, and leveled off beyond 40 weeks of gestational age. The male to female ratio ranged from 1.088 to 1.096. The mean birth weights during the period 1998-2002 were higher than those previously reported for the period 1945-1967, while the birth weight distribution and percentiles during 1998-2002 were similar to those reported for 1979-1989. The 10th, 50th, and 90th percentiles of birth weight at 40 weeks of gestational age among male newborns were 2914, 3374, and 3890 g, respectively, and for female newborns 2816, 3250, and 3747 g. At the gestational age of 37 weeks, the 10th, 50th, and 90th percentiles of birth weight among male newborns were 2499, 2941, and 3433 g, respectively, and for female newborns 2391, 2832, and 3334 g. From 1998 to 2002, there was a gradual increase in the prevalence of low birth weight and preterm birth, together with the percentage of infants born to foreign-born mothers. This study provides the first nationwide singleton intrauterine growth charts in Taiwan that are population-based and gender-specific. The normative data are particularly useful for the investigation of predictors and outcomes of altered fetal growth.
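Deriving percentiles from smoothed means and standard deviations, as described above, can be illustrated under a normal approximation; the function name and the mean/SD values in the test are hypothetical round numbers for illustration, not values from the Taiwanese registry:

```python
from statistics import NormalDist

def weight_percentiles(mean_g, sd_g, pcts=(10, 50, 90)):
    """Birth-weight percentiles (grams) at one gestational week from a
    smoothed mean and SD, assuming an approximately normal distribution:
    P_p = mean + z_p * sd, where z_p is the standard-normal quantile."""
    z = {p: NormalDist().inv_cdf(p / 100) for p in pcts}
    return {p: mean_g + z[p] * sd_g for p in pcts}
```

Under this approximation the 50th percentile equals the smoothed mean, and the 10th and 90th percentiles sit symmetrically around it; skewed distributions would need a method such as Cole's LMS instead.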
Anticipatory scaling of grip forces when lifting objects of everyday life.
Hermsdörfer, Joachim; Li, Yong; Randerath, Jennifer; Goldenberg, Georg; Eidenmüller, Sandra
2011-07-01
The ability to predict and anticipate the mechanical demands of the environment promotes smooth and skillful motor actions. Thus, the finger forces produced to grasp and lift an object are scaled to its physical properties, such as weight. While grip force scaling is well established for neutral objects, only a few studies have analyzed objects known from daily routine, and none measured grip forces for such objects. In the present study, eleven healthy subjects each lifted twelve objects of everyday life that encompassed a wide range of weights. The finger pads were covered with force sensors that enabled the measurement of grip force, and a scale registered load forces. In a control experiment, the objects were wrapped in paper to prevent recognition by the subjects. Data from the first lift of each object confirmed that object weight was anticipated by adequately scaled forces. The maximum grip force rate during the force increase phase emerged as the most reliable measure to verify that weight was actually predicted and to characterize the precision of this prediction, while other force measures were scaled to object weight even when object identity was not known. Variability and linearity of the grip force-weight relationship improved for time points after liftoff, suggesting that sensory information refined the force adjustment. The same mechanism seemed to be involved with unrecognizable objects, though with lower precision. Repeated lifting of the same object within a second and third presentation block did not improve the precision of the grip force scaling. Either practice was too variable, or the motor system does not prioritize optimization of the internal representation when objects are highly familiar.
Dorvel, Brian; Boopalachandran, Praveenkumar; Chen, Ida; Bowling, Andrew; Williams, Kerry; King, Steve
2018-05-01
Decking is one of the largest applications for the treated wood market. The most challenging property to obtain for treated wood is dimensional stability, which can be achieved, in part, by cell wall bulking, cell wall polymer crosslinking and removal of hygroscopic components in the cell wall. A commonly accepted key requirement is for the actives to infuse through the cell wall, which has a microporosity of ∼5-13 nm. Equally challenging is measuring and quantifying this cell wall penetration. Branched polyethylenimine (PEI) was studied as a model polymer for penetration due to its water solubility, polarity, variable molecular weight ranges, and ability to form a chelation complex with the preservative metals used to treat lumber. Two polyethylenimines (PEI) of different molecular weight, one with a weight average molecular weight (Mw) of 800 Da and the other of 750 000 Da, were investigated for penetration by microscopy and spectroscopy techniques. Analytical methods were developed both to create smooth interfaces and to provide relative quantitation and visualisation of PEI penetration into the wood. The results showed that both the 800 Da and the 750 000 Da PEI coated the lumens in high density; however, only the 800 Da PEI appeared to penetrate the cell walls at sufficient levels. The literature has shown that the hydrodynamic radius of PEI 750 000 is near 29 nm, whereas a smaller PEI at 25 kDa showed 4.5 nm. Most importantly, the results, based on the methods developed, show how the molecular weight and tertiary structure of the polymer can affect its penetration, with the microporosity of the wood being the main barrier. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmüller, U.; Strozzi, T.
2012-12-01
The Lost Hills oil field, located in Kern County, California, ranks sixth in total remaining reserves in California. Hundreds of densely packed wells characterize the field, with one well every 5000 to 20000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable both in space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2-year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for development and testing of deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise and phase unwrapping errors. Particularly challenging is the separation of non-linear deformation from variations in tropospheric delay and phase unwrapping errors. In our algorithm a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as to minimize temporal decorrelation. Large baselines are also avoided to minimize the consequence of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite difference smoothing constraints.
A set of linear equations is formulated for each spatial point as a function of the deformation velocities during the time intervals spanned by the interferograms and a DEM height correction. The sensitivity of the phase to the height correction depends on the length of the perpendicular baseline of each interferogram. This design matrix is augmented with a set of additional weighted constraints on the acceleration that penalize rapid velocity variations. The weighting factor γ can be varied from 0 (no smoothing) to large values (> 10) that yield an essentially linear time-series solution. The factor can be tuned to take into account a priori knowledge of the deformation non-linearity. The difference between the constrained time-series solution and the unconstrained time series can be interpreted as a combination of tropospheric path delay and baseline error. Spatial smoothing of the residual phase leads to an improved atmospheric model that can be fed back into the model and iterated. Our analysis shows non-linear deformation related to changes in oil extraction, as well as local height corrections improving on the low-resolution 3 arc-sec SRTM DEM.
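The γ-weighted augmentation of the design matrix described above can be sketched as a small least-squares problem; the simplified phase model, the interval incidence matrix, and the height-sensitivity constant `k_height` below are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def invert_timeseries(A, phase, dt, perp_baselines, gamma, k_height=1.0):
    """Solve for interval velocities and a DEM height correction.

    Simplified model: phase_i = sum_j A[i,j] * dt[j] * v[j]
                                + k_height * B_perp[i] * dz.
    Rows penalizing the acceleration (gamma-weighted second differences
    of the velocities) are appended to the design matrix, so large gamma
    drives the solution toward a linear time series.
    """
    Av = np.asarray(A, float) * np.asarray(dt, float)   # velocity columns
    G = np.hstack([Av, k_height * np.asarray(perp_baselines, float)[:, None]])
    n = Av.shape[1]
    # finite-difference smoothing constraints on the velocities
    D = np.zeros((n - 2, n + 1))
    for i in range(n - 2):
        D[i, i:i + 3] = gamma * np.array([1.0, -2.0, 1.0])
    Gaug = np.vstack([G, D])
    d = np.concatenate([np.asarray(phase, float), np.zeros(n - 2)])
    sol, *_ = np.linalg.lstsq(Gaug, d, rcond=None)
    return sol[:n], sol[n]      # velocities, height correction
```

For constant true velocity the acceleration penalty is zero at the true solution, so the constraints do not bias the fit; for strongly non-linear deformation, γ trades smoothness against fidelity.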
Parallel algorithms for the molecular conformation problem
NASA Astrophysics Data System (ADS)
Rajan, Kumar
Given a set of objects, and some of the pairwise distances between them, the problem of identifying the positions of the objects in the Euclidean space is referred to as the molecular conformation problem. This problem is known to be computationally difficult. One of the most important applications of this problem is the determination of the structure of molecules. In the case of molecular structure determination, usually only the lower and upper bounds on some of the interatomic distances are available. The process of obtaining a tighter set of bounds between all pairs of atoms, using the available interatomic distance bounds, is referred to as bound-smoothing. One method for bound-smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality---the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. The sequential tetrangle-inequality bound-smoothing algorithm considers a quadruple of atoms at a time, and tightens the bounds on each of its six distances. The sequential algorithm is computationally expensive, and its application is limited to molecules with up to a few hundred atoms. Here, we conduct an experimental study of tetrangle-inequality bound-smoothing and reduce the sequential time by identifying the most computationally expensive portions of the process. We also present a simple criterion to determine which of the quadruples of atoms are likely to be tightened the most by tetrangle-inequality bound-smoothing. This test could be used to enhance the applicability of this process to large molecules. We map the problem of parallelizing tetrangle-inequality bound-smoothing to that of generating disjoint packing designs of a certain kind.
We map this, in turn, to a regular-graph coloring problem, and present a simple, parallel algorithm for tetrangle-inequality bound-smoothing. We implement the parallel algorithm on the Intel Paragon XP/S, and apply it to real-life molecules. Our results show that with this parallel algorithm, tetrangle inequality can be applied to large molecules in a reasonable amount of time. We extend the regular graph to represent more general packing designs, and present a coloring algorithm for this graph. This can be used to generate constant-weight binary codes in parallel. Once a tighter set of distance bounds is obtained, the molecular conformation problem is usually formulated as a non-linear optimization problem, and a global optimization algorithm is then used to solve the problem. Here we present a parallel, deterministic algorithm for the optimization problem based on Interval Analysis. We implement our algorithm, using dynamic load balancing, on a network of Sun UltraSPARC workstations. Our experience with this algorithm shows that its application is limited to small instances of the molecular conformation problem, where the number of measured, pairwise distances is close to the maximum value. However, since the interval method eliminates a substantial portion of the initial search space very quickly, it can be used to prune the search space before any of the more efficient, nondeterministic methods can be applied.
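The triangle-inequality bound-smoothing stage that precedes the tetrangle stage can be sketched as a Floyd-Warshall-style pass over the bound matrices; this is a minimal dense-matrix illustration, not the dissertation's parallel algorithm:

```python
import numpy as np

def triangle_bound_smoothing(lower, upper):
    """Tighten interatomic distance bounds with the triangle inequality.

    Upper bounds shrink via a Floyd-Warshall pass (u_ij <= u_ik + u_kj);
    lower bounds grow via the inverse triangle inequality
    (l_ij >= max(l_ik - u_kj, l_kj - u_ik)). Raises if the bounds
    become inconsistent (a lower bound exceeding an upper bound).
    """
    L = np.array(lower, float)
    U = np.array(upper, float)
    n = len(U)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if U[i, j] > U[i, k] + U[k, j]:
                    U[i, j] = U[i, k] + U[k, j]
                L[i, j] = max(L[i, j], L[i, k] - U[k, j], L[k, j] - U[i, k])
                if L[i, j] > U[i, j]:
                    raise ValueError("inconsistent distance bounds")
    return L, U
```

For example, if atoms 0-1 and 1-2 each have an upper bound of 1, the loose 0-2 upper bound of 10 is tightened to 2. The tetrangle stage then applies the analogous Cayley-Menger conditions to quadruples.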
Chen, Jing; Du, Yuzhang; Que, Wenxiu; Xing, Yonglei; Chen, Xiaofeng; Lei, Bo
2015-12-01
Crack-free organic-inorganic hybrid monoliths with controlled biomineralization activity and mechanical properties play an important role in highly efficient bone tissue regeneration. Here, biomimetic and crack-free polydimethylsiloxane (PDMS)-modified bioactive glass (BG)-poly(ethylene glycol) (PEG) (PDMS-BG-PEG) hybrid monoliths were prepared by a facile sol-gel technique. Results indicate that, with the assistance of co-solvents, the BG sol, PDMS, and PEG could be hybridized at a molecular level, and the effects of the PEG molecular weight on the structure, biomineralization activity, and mechanical properties of the as-prepared hybrid monoliths were investigated in detail. It is found that the addition of low molecular weight PEG can significantly prevent the formation of cracks and speed up the gelation of the hybrid monoliths, and that the surface microstructure of the hybrid monoliths changes from porous to smooth as the PEG molecular weight increases. Additionally, the hybrid monoliths with low molecular weight PEG readily form a biological apatite layer, while the hybrids with high molecular weight PEG exhibit negligible biomineralization ability in simulated body fluid (SBF). Furthermore, the PDMS-BG-PEG 600 hybrid monolith has significantly high compressive strength (32 ± 3 MPa) and modulus (153 ± 11 MPa), as well as good cell biocompatibility, supporting osteoblast (MC3T3-E1) attachment and proliferation. These results indicate that the as-prepared PDMS-BG-PEG hybrid monoliths may have promising applications for bone tissue regeneration. Copyright © 2015 Elsevier B.V. All rights reserved.
Bong, YB; Shariff, AA; Majid, AM; Merican, AF
2012-01-01
Background: Reference charts are widely used in healthcare as a screening tool. This study aimed to produce reference growth charts for school children from West Malaysia in comparison with the United States Centers for Disease Control and Prevention (CDC) charts. Methods: Data on a total of 14,360 school children aged 7 to 17 years from six states in West Malaysia were collected. A two-stage stratified random sampling technique was used to recruit the subjects. Curves were adjusted using Cole's LMS method. The LOWESS method was used to smooth the data. Results: The means and standard deviations for height and weight for both genders are presented. The results showed good agreement with growth patterns in other countries, i.e., males tend to be taller and heavier than females for most age groups. Height and weight of females reached a plateau at 17 years of age; however, males were still growing at this age. The growth charts for West Malaysian school children were compared with the CDC 2000 growth charts for school children in the United States. Conclusion: The height and weight of males and females at the start of the school-going ages were similar. The comparison between the growth charts from this study and the CDC 2000 growth charts indicated that the growth patterns of West Malaysian school children have improved, although the height and weight of American school children remained higher than those of West Malaysian school children. PMID:23113132
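Cole's LMS method used for the curves above summarizes each age with three parameters: skew (L), median (M), and coefficient of variation (S). A minimal sketch of the two standard LMS conversions (measurement to z-score, and centile to measurement; the parameter values in the test are hypothetical):

```python
import math
from statistics import NormalDist

def lms_zscore(x, L, M, S):
    """Cole LMS z-score: ((x/M)**L - 1) / (L*S), or ln(x/M)/S when L == 0."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

def lms_centile(p, L, M, S):
    """Measurement at the p-th centile: M * (1 + L*S*z_p)**(1/L),
    where z_p is the standard-normal quantile for p/100."""
    z = NormalDist().inv_cdf(p / 100.0)
    if L == 0:
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)
```

The two functions are inverses of one another, and the 50th centile is always the median M, which is what makes the LMS summary convenient for chart construction.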
A priori motion models for four-dimensional reconstruction in gated cardiac SPECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lalush, D.S.; Tsui, B.M.W.; Cui, Lin
1996-12-31
We investigate the benefit of incorporating a priori assumptions about cardiac motion in a fully four-dimensional (4D) reconstruction algorithm for gated cardiac SPECT. Previous work has shown that non-motion-specific 4D Gibbs priors enforcing smoothing in time and space can control noise while preserving resolution. In this paper, we evaluate methods for incorporating known heart motion in the Gibbs prior model. The new model is derived by assigning motion vectors to each 4D voxel, defining the movement of that volume of activity into the neighboring time frames. Weights for the Gibbs cliques are computed based on these "most likely" motion vectors. To evaluate, we employ the mathematical cardiac-torso (MCAT) phantom with a new dynamic heart model that simulates the beating and twisting motion of the heart. Sixteen realistically-simulated gated datasets were generated, with noise simulated to emulate a real Tl-201 gated SPECT study. Reconstructions were performed using several different reconstruction algorithms, all modeling nonuniform attenuation and three-dimensional detector response. These include ML-EM with 4D filtering, 4D MAP-EM without prior motion assumption, and 4D MAP-EM with prior motion assumptions. The prior motion assumptions included both the correct motion model and incorrect models. Results show that reconstructions using the 4D prior model can smooth noise and preserve time-domain resolution more effectively than 4D linear filters. We conclude that modeling of motion in 4D reconstruction algorithms can be a powerful tool for smoothing noise and preserving temporal resolution in gated cardiac studies.
Bisphenol A Exposure Enhances Atherosclerosis in WHHL Rabbits
Fang, Chao; Ning, Bo; Waqar, Ahmed Bilal; Niimi, Manabu; Li, Shen; Satoh, Kaneo; Shiomi, Masashi; Ye, Ting; Dong, Sijun; Fan, Jianglin
2014-01-01
Bisphenol A (BPA) is an environmental endocrine disrupter. Excess exposure to BPA may increase susceptibility to many metabolic disorders, but it is unclear whether BPA exposure has any adverse effects on the development of atherosclerosis. To determine whether there are such effects, we investigated the response of Watanabe heritable hyperlipidemic (WHHL) rabbits to 400-µg/kg BPA per day, administered orally by gavage, over the course of 12 weeks and compared aortic and coronary atherosclerosis in these rabbits to the vehicle group using histological and morphometric methods. In addition, serum BPA, cytokines levels and plasma lipids as well as pathologic changes in liver, adipose and heart were analyzed. Moreover, we treated human umbilical cord vein endothelial cells (HUVECs) and rabbit aortic smooth muscle cells (SMCs) with different doses of BPA to investigate the underlying molecular mechanisms involved in BPA action(s). BPA treatment did not change the plasma lipids and body weights of the WHHL rabbits; however, the gross atherosclerotic lesion area in the aortic arch was increased by 57% compared to the vehicle group. Histological and immunohistochemical analyses revealed marked increases in advanced lesions (37%) accompanied by smooth muscle cells (60%) but no significant changes in the numbers of macrophages. With regard to coronary atherosclerosis, incidents of coronary stenosis increased by 11% and smooth muscle cells increased by 73% compared to the vehicle group. Furthermore, BPA-treated WHHL rabbits showed increased adipose accumulation and hepatic and myocardial injuries accompanied by up-regulation of endoplasmic reticulum (ER) stress and inflammatory and lipid metabolism markers in livers. Treatment with BPA also induced the expression of ER stress and inflammation related genes in cultured HUVECs. These results demonstrate for the first time that BPA exposure may increase susceptibility to atherosclerosis in WHHL rabbits. PMID:25333893
A systematic review and meta-analysis to revise the Fenton growth chart for preterm infants.
Fenton, Tanis R; Kim, Jae H
2013-04-20
The aim of this study was to revise the 2003 Fenton Preterm Growth Chart, specifically to: a) harmonize the preterm growth chart with the new World Health Organization (WHO) Growth Standard; b) smooth the data between the preterm and WHO estimates, informed by the Preterm Multicentre Growth (PreM Growth) study, while maintaining data integrity from 22 to 36 weeks and at 50 weeks; and c) re-scale the chart x-axis to actual age (rather than completed weeks) to support growth monitoring. Systematic review, meta-analysis, and growth chart development. We systematically searched published and unpublished literature to find population-based references for preterm size at birth (weight, length, and/or head circumference) from developed countries with: corrected gestational ages through infant assessment and/or statistical correction; data percentiles as low as 24 weeks gestational age or lower; and a sample of more than 500 infants below 30 weeks. Growth curves for males and females were produced using cubic splines to 50 weeks post-menstrual age. LMS parameters (skew, median, and standard deviation) were calculated. Six large population-based surveys of size at preterm birth, representing 3,986,456 births (34,639 births < 30 weeks) from Germany, the United States, Italy, Australia, Scotland, and Canada, were combined in meta-analyses. Smooth growth chart curves were developed, while ensuring close agreement with the data between 24 and 36 weeks and at 50 weeks. The revised sex-specific actual-age growth charts are based on the recommended growth goal for preterm infants, the fetus, followed by the term infant. These preterm growth charts, with the disjunction between the datasets smoothed as informed by the international PreM Growth study, may support an improved transition of preterm infant growth monitoring to the WHO growth charts.
NASA Astrophysics Data System (ADS)
Chen, Qiujie; Shen, Yunzhong; Chen, Wu; Zhang, Xingfu; Hsu, Houze
2016-06-01
The main contribution of this study is to improve the GRACE gravity field solution by taking errors of non-conservative acceleration and attitude observations into account. Unlike previous studies, the errors of the attitude and non-conservative acceleration data, and gravity field parameters, as well as accelerometer biases are estimated by means of weighted least squares adjustment. Then we compute a new time series of monthly gravity field models complete to degree and order 60 covering the period Jan. 2003 to Dec. 2012 from the twin GRACE satellites' data. The derived GRACE solution (called Tongji-GRACE02) is compared in terms of geoid degree variances and temporal mass changes with the other GRACE solutions, namely CSR RL05, GFZ RL05a, and JPL RL05. The results show that (1) the global mass signals of Tongji-GRACE02 are generally consistent with those of CSR RL05, GFZ RL05a, and JPL RL05; (2) compared to CSR RL05, the noise of Tongji-GRACE02 is reduced by about 21 % over ocean when only using 300 km Gaussian smoothing, and 60 % or more over deserts (Australia, Kalahari, Karakum and Thar) without using Gaussian smoothing and decorrelation filtering; and (3) for all examples, the noise reductions are more significant than signal reductions, no matter whether smoothing and filtering are applied or not. The comparison with GLDAS data supports that the signals of Tongji-GRACE02 over St. Lawrence River basin are close to those from CSR RL05, GFZ RL05a and JPL RL05, while the GLDAS result shows the best agreement with the Tongji-GRACE02 result.
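The 300 km Gaussian smoothing mentioned above is typically applied as degree-dependent weights multiplying the spherical-harmonic coefficients. A sketch using a Jekeli-type recursion is given below; normalization conventions and numerical stability at high degree vary between implementations, so treat this as illustrative rather than the authors' exact processing:

```python
import math

def gaussian_degree_weights(radius_km, nmax=60, earth_radius_km=6371.0):
    """Degree-dependent Gaussian smoothing weights (Jekeli-type recursion,
    normalized here so that W_0 = 1). Higher degrees, i.e. shorter
    wavelengths where GRACE noise dominates, are damped more strongly.
    Note: the three-term recursion can lose accuracy at high degree for
    small smoothing radii."""
    b = math.log(2.0) / (1.0 - math.cos(radius_km / earth_radius_km))
    w = [1.0]
    w.append((1.0 + math.exp(-2.0 * b)) / (1.0 - math.exp(-2.0 * b)) - 1.0 / b)
    for n in range(1, nmax):
        w.append(-(2 * n + 1) / b * w[n] + w[n - 1])
    return w   # w[n] multiplies all degree-n coefficients
```

For a degree-60 solution such as Tongji-GRACE02, a 300 km radius leaves low degrees nearly untouched while substantially attenuating the highest degrees, which is why smoothing suppresses the striping noise discussed above.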
Obusez, E C; Hui, F; Hajj-Ali, R A; Cerejo, R; Calabrese, L H; Hammad, T; Jones, S E
2014-08-01
High-resolution MR imaging is an emerging tool for evaluating intracranial artery disease. It has the advantage of defining vessel wall characteristics of intracranial vascular diseases. We investigated high-resolution MR imaging arterial wall characteristics of CNS vasculitis and reversible cerebral vasoconstriction syndrome to determine wall pattern changes during a follow-up period. We retrospectively reviewed 3T high-resolution MR imaging vessel wall studies performed on 26 patients with a confirmed diagnosis of CNS vasculitis or reversible cerebral vasoconstriction syndrome during a follow-up period. The vessel wall imaging protocol included black-blood contrast-enhanced T1-weighted sequences with fat suppression and a saturation band, and time-of-flight MRA of the circle of Willis. Vessel wall characteristics including enhancement, wall thickening, and lumen narrowing were collected. Thirteen patients with CNS vasculitis and 13 patients with reversible cerebral vasoconstriction syndrome were included. In the CNS vasculitis group, 9 patients showed smooth, concentric wall enhancement and thickening; 3 patients had smooth, eccentric wall enhancement and thickening; and 1 patient was without wall enhancement and thickening. Six of 13 patients had follow-up imaging: 4 patients showed stable smooth, concentric enhancement and thickening, and 2 patients had resolution of the initial imaging findings. In the reversible cerebral vasoconstriction syndrome group, 10 patients showed diffuse, uniform wall thickening with negligible-to-mild enhancement. Nine patients had follow-up imaging, with 8 patients showing complete resolution of the initial findings. Postgadolinium 3T high-resolution MR imaging appears to be a feasible tool for differentiating the vessel wall patterns of CNS vasculitis and reversible cerebral vasoconstriction syndrome and for following their changes over time. © 2014 by American Journal of Neuroradiology.
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. Several studies have shown that data assimilation can reduce parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization, with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques, with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.
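The smoother itself is beyond a short sketch, but the SIR (sequential importance resampling) particle filter step it builds on can be illustrated. The code below is a generic, hypothetical implementation for a scalar state with a Gaussian observation likelihood; the toy model, noise levels, and resampling threshold are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(particles, weights, obs, forward, obs_sd):
    """One SIR (sequential importance resampling) assimilation step."""
    # Propagate each particle through the stochastic forward model
    particles = forward(particles)
    # Re-weight by the Gaussian observation likelihood
    weights = weights * np.exp(-0.5 * ((obs - particles) / obs_sd) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * particles.size:
        idx = rng.choice(particles.size, size=particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

# Toy demo: a constant hidden state of 1.0 observed without bias
particles = rng.normal(0.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
model = lambda x: x + rng.normal(0.0, 0.05, x.size)   # random-walk forecast
for _ in range(30):
    particles, weights = sir_step(particles, weights, 1.0, model, obs_sd=0.2)
estimate = float(np.sum(weights * particles))          # posterior mean
```

A particle smoother extends this filter by letting all observations inside a time window re-weight past states and parameters jointly, rather than updating one time step at a time.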
Castable Amorphous Metal Mirrors and Mirror Assemblies
NASA Technical Reports Server (NTRS)
Hofmann, Douglas C.; Davis, Gregory L.; Agnes, Gregory S.; Shapiro, Andrew A.
2013-01-01
A revolutionary way to produce a mirror and mirror assembly is to cast the entire part at once from a metal alloy that combines all of the desired features into the final part: optical smoothness, curvature, flexures, tabs, isogrids, low CTE, and toughness. In this work, it has been demonstrated that castable mirrors are possible using bulk metallic glasses (BMGs, also called amorphous metals) and BMG matrix composites (BMGMCs). These novel alloys have all of the desired mechanical and thermal properties to fabricate an entire mirror assembly without machining, bonding, brazing, welding, or epoxy. BMGs are multi-component metal alloys that have been cooled in such a manner as to avoid crystallization, leading to an amorphous (non-crystalline) microstructure. This lack of crystal structure, and the fact that these alloys are glasses, leads to a wide assortment of mechanical and thermal properties that are unlike those observed in crystalline metals. Among these are high yield strength, carbide-like hardness, low melting temperatures (making them castable like aluminum), a thermoplastic processing region (for improving smoothness), low stiffness, high strength-to-weight ratios, relatively low CTE, density similar to titanium alloys, high elasticity, and ultra-smooth cast parts (surface roughness as low as 0.2 nm has been demonstrated in cast BMGs). BMGMCs are composite alloys that consist of a BMG matrix with crystalline dendrites embedded throughout. BMGMCs are used to overcome the typically brittle failure observed in monolithic BMGs by adding a soft phase that arrests the formation of cracks in the BMG matrix. In some cases, BMGMCs offer superior castability, toughness, and fatigue resistance, if not as good a surface finish as BMGs. This work has demonstrated that BMGs and BMGMCs can be cast into prototype mirrors and mirror assemblies without difficulty.
Hwuang, Eileen; Danish, Shabbar; Rusu, Mirabela; Sparks, Rachel; Toth, Robert; Madabhushi, Anant
2013-01-01
MRI-guided laser-induced interstitial thermal therapy (LITT) is a form of laser ablation and a potential alternative to craniotomy in treating glioblastoma multiforme (GBM) and epilepsy patients, but its effectiveness has yet to be fully evaluated. One way of assessing short-term treatment of LITT is by evaluating changes in post-treatment MRI as a measure of response. Alignment of pre- and post-LITT MRI in GBM and epilepsy patients via nonrigid registration is necessary to detect subtle localized treatment changes on imaging, which can then be correlated with patient outcome. A popular deformable registration scheme in the context of brain imaging is Thirion's Demons algorithm, but its flexibility often introduces artifacts without physical significance, which has conventionally been corrected by Gaussian smoothing of the deformation field. In order to prevent such artifacts, we instead present the Anisotropic smoothing regularizer (AnSR) which utilizes edge-detection and denoising within the Demons framework to regularize the deformation field at each iteration of the registration more aggressively in regions of homogeneously oriented displacements while simultaneously regularizing less aggressively in areas containing heterogeneous local deformation and tissue interfaces. In contrast, the conventional Gaussian smoothing regularizer (GaSR) uniformly averages over the entire deformation field, without carefully accounting for transitions across tissue boundaries and local displacements in the deformation field. In this work we employ AnSR within the Demons algorithm and perform pairwise registration on 2D synthetic brain MRI with and without noise after inducing a deformation that models shrinkage of the target region expected from LITT. We also applied Demons with AnSR for registering clinical T1-weighted MRI for one epilepsy and one GBM patient pre- and post-LITT. 
Our results demonstrate that by maintaining select displacements in the deformation field, AnSR outperforms both GaSR and no regularizer (NoR) in terms of normalized sum of squared differences (NSSD), with values of 0.743, 0.807, and 1.000, respectively, for the GBM case.
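The NSSD figures above read naturally as the post-registration SSD normalized so that the unregistered baseline scores 1.000. A minimal sketch under that assumed convention (the paper's exact normalization may differ):

```python
import numpy as np

def nssd(fixed, registered, initial):
    # SSD after registration divided by the pre-registration SSD, so the
    # no-registration baseline scores exactly 1.0 (assumed convention).
    ssd_after = float(np.sum((fixed - registered) ** 2))
    ssd_before = float(np.sum((fixed - initial) ** 2))
    return ssd_after / ssd_before

# Toy 1-D "images" with hypothetical intensities
fixed = np.array([1.0, 2.0, 3.0])
initial = np.zeros(3)                    # moving image before registration
registered = np.array([1.0, 2.0, 2.0])   # moving image after registration
score = nssd(fixed, registered, initial) # well below 1.0: registration helped
```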
Four types of ensemble coding in data visualizations.
Szafir, Danielle Albers; Haroz, Steve; Gleicher, Michael; Franconeri, Steven
2016-01-01
Ensemble coding supports rapid extraction of visual statistics about distributed visual information. Researchers typically study this ability with the goal of drawing conclusions about how such coding extracts information from natural scenes. Here we argue that a second domain can serve as another strong inspiration for understanding ensemble coding: graphs, maps, and other visual presentations of data. Data visualizations allow observers to leverage their ability to perform visual ensemble statistics on distributions of spatial or featural visual information to estimate actual statistics on data. We survey the types of visual statistical tasks that occur within data visualizations across everyday examples, such as scatterplots, and more specialized images, such as weather maps or depictions of patterns in text. We divide these tasks into four categories: identification of sets of values, summarization across those values, segmentation of collections, and estimation of structure. We point to unanswered questions for each category and give examples of such cross-pollination in the current literature. Increased collaboration between the data visualization and perceptual psychology research communities can inspire new solutions to challenges in visualization while simultaneously exposing unsolved problems in perception research.
Scanning fluorescent microscopy is an alternative for quantitative fluorescent cell analysis.
Varga, Viktor Sebestyén; Bocsi, József; Sipos, Ferenc; Csendes, Gábor; Tulassay, Zsolt; Molnár, Béla
2004-07-01
Fluorescent measurements on cells are performed today with FCM and laser scanning cytometry. The scientific community dealing with quantitative cell analysis would benefit from the development of a new digital multichannel and virtual microscopy based scanning fluorescent microscopy technology and from its evaluation on routine standardized fluorescent beads and clinical specimens. We applied a commercial motorized fluorescent microscope system. The scanning was done at 20× (0.5 NA) magnification, on three channels (Rhodamine, FITC, Hoechst). The SFM (scanning fluorescent microscopy) software included the following features: scanning area, exposure time, and channel definition; autofocused scanning; densitometric and morphometric cellular feature determination; gating on scatterplots and frequency histograms; and preparation of galleries of the gated cells. For calibration and standardization, Immuno-Brite beads were used. With application of shading compensation, the CV of fluorescence of the beads decreased from 24.3% to 3.9%. Standard JPEG image compression at ratios up to 1:150 resulted in no significant change. Changes of focus influenced the CV significantly only beyond a ±5 µm error. SFM is a valuable method for the evaluation of fluorescently labeled cells. Copyright 2004 Wiley-Liss, Inc.
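The 24.3% and 3.9% figures are coefficients of variation of bead fluorescence. As a reminder of the computation (generic formula, not the authors' software; the bead intensities below are hypothetical):

```python
import numpy as np

def coefficient_of_variation(intensities):
    # Percent CV: sample standard deviation over the mean
    x = np.asarray(intensities, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

cv = coefficient_of_variation([9.0, 10.0, 11.0])   # hypothetical bead intensities
```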
Olea, Ricardo A.; Luppens, James A.
2012-01-01
There are multiple ways to characterize uncertainty in the assessment of coal resources, but not all of them are equally satisfactory. Increasingly, the tendency is toward borrowing from the statistical tools developed in the last 50 years for the quantitative assessment of other mineral commodities. Here, we briefly review the most recent of such methods and formulate a procedure for the systematic assessment of multi-seam coal deposits, taking into account several geological factors such as fluctuations in thickness, erosion, oxidation, and bed boundaries. A lignite deposit explored in three stages is used for validating models based on comparing a first set of drill holes against data from infill and development drilling. Results were fully consistent with reality, providing a variety of maps, histograms, and scatterplots characterizing the deposit and the associated uncertainty in the assessments. The geostatistical approach was particularly informative in providing a probability distribution modeling deposit-wide uncertainty about total resources and a cumulative distribution of coal tonnage as a function of local uncertainty.
ExAtlas: An interactive online tool for meta-analysis of gene expression data.
Sharov, Alexei A; Schlessinger, David; Ko, Minoru S H
2015-12-01
We have developed ExAtlas, an on-line software tool for meta-analysis and visualization of gene expression data. In contrast to existing software tools, ExAtlas compares multi-component data sets and generates results for all combinations (e.g. all gene expression profiles versus all Gene Ontology annotations). ExAtlas handles both users' own data and data extracted semi-automatically from the public repository (GEO/NCBI database). ExAtlas provides a variety of tools for meta-analyses: (1) standard meta-analysis (fixed effects, random effects, z-score, and Fisher's methods); (2) analyses of global correlations between gene expression data sets; (3) gene set enrichment; (4) gene set overlap; (5) gene association by expression profile; (6) gene specificity; and (7) statistical analysis (ANOVA, pairwise comparison, and PCA). ExAtlas produces graphical outputs, including heatmaps, scatter-plots, bar-charts, and three-dimensional images. Some of the most widely used public data sets (e.g. GNF/BioGPS, Gene Ontology, KEGG, GAD phenotypes, BrainScan, ENCODE ChIP-seq, and protein-protein interaction) are pre-loaded and can be used for functional annotations.
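Of the meta-analysis methods listed, Fisher's method is compact enough to sketch. For even degrees of freedom the chi-square survival function has a closed form, so no statistics library is needed. This is a generic illustration of the method, not ExAtlas code:

```python
import math

def fisher_combine(pvals):
    """Fisher's method for combining k independent p-values.

    X = -2 * sum(ln p_i) is chi-square with 2k degrees of freedom under
    the global null; for even df the survival function has a closed form:
    P(X > x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!
    """
    k = len(pvals)
    half = -sum(math.log(p) for p in pvals)   # X / 2
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

combined = fisher_combine([0.05, 0.05])   # two marginal results reinforce each other
```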
Igene, Helen
2008-01-01
The aim of the study was to provide information on the global health inequality pattern produced by the increasing incidence of breast cancer and its relationship with the health expenditure of developing countries, with emphasis on sub-Saharan Africa. It examines the difference between the health expenditure of developed and developing countries, and how this affects breast cancer incidence and mortality. The data collected from the World Health Organization and World Bank were examined using bivariate analysis, through scatterplots and Pearson's product-moment correlation coefficient. Multivariate analysis was carried out by multiple regression analysis. National income and health expenditure affect breast cancer incidence, particularly the differences between developed and developing countries. However, these factors do not adequately explain variations in mortality rates. The study reveals the challenges developing countries face in addressing the present and predicted burden of breast cancer, currently characterized by late presentation, inadequate health care systems, and high mortality. Findings from this study contribute to the knowledge of the burden of disease in developing countries, especially sub-Saharan Africa, and how that is related to globalization and health inequalities.
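Pearson's product-moment correlation used in the bivariate analysis above can be computed directly from its definition. A stdlib-only sketch with hypothetical country-level values (not the study's data):

```python
import math

def pearson_r(x, y):
    # Product-moment correlation from the definition:
    # r = sum((x-mx)(y-my)) / sqrt(sum((x-mx)^2) * sum((y-my)^2))
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical values: per-capita expenditure vs. recorded incidence
r = pearson_r([1.0, 2.0, 4.0, 8.0], [10.0, 25.0, 38.0, 80.0])
```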
Titan's surface from the Cassini RADAR radiometry data during SAR mode
Paganelli, F.; Janssen, M.A.; Lopes, R.M.; Stofan, E.; Wall, S.D.; Lorenz, R.D.; Lunine, J.I.; Kirk, R.L.; Roth, L.; Elachi, C.
2008-01-01
We present initial results on the calibration and interpretation of the high-resolution radiometry data acquired during the Synthetic Aperture Radar (SAR) mode (SAR-radiometry) of the Cassini Radar Mapper during its first five flybys of Saturn's moon Titan. We construct maps of the brightness temperature at the 2-cm wavelength coincident with SAR swath imaging. A preliminary radiometry calibration shows that brightness temperature in these maps varies from 64 to 89 K. Surface features and physical properties derived from the SAR-radiometry maps and SAR imaging are strongly correlated; in general, we find that surface features with high radar reflectivity are associated with radiometrically cold regions, while surface features with low radar reflectivity correlate with radiometrically warm regions. We examined scatterplots of the normalized radar cross-section σ0 versus brightness temperature, outlining signatures that characterize various terrains and surface features. The results indicate that volume scattering is important in many areas of Titan's surface, particularly Xanadu, while other areas exhibit complex brightness temperature variations consistent with variable slopes or surface material and compositional properties. © 2007.
DOE Office of Scientific and Technical Information (OSTI.GOV)
HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.
2000-05-22
Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) are presented for two-phase flow in the vicinity of the repository under disturbed conditions resulting from drilling intrusions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformations are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure and brine flow from the repository to the Culebra Dolomite are potentially the most important in PA for the WIPP. Subsequent to a drilling intrusion, repository pressure was dominated by borehole permeability and was generally below the level (i.e., 8 MPa) that could potentially produce spallings and direct brine releases. Brine flow from the repository to the Culebra Dolomite tended to be small or nonexistent, with its occurrence and size also dominated by borehole permeability.
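Latin hypercube sampling, the first technique named above, stratifies each input dimension into n equal-probability bins and draws exactly one sample per bin, with the bin order shuffled independently per dimension. A generic sketch (illustrative, not the WIPP PA code):

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, d):
    # One stratified draw per equal-probability bin in each dimension;
    # the bin order is shuffled independently per dimension so the
    # pairing of bins across dimensions is random.
    shuffled_bins = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
    return (shuffled_bins + rng.random((n, d))) / n

sample = latin_hypercube(10, 2)   # 10 samples over 2 uncertain inputs on [0, 1)
```

Mapping each column through an input's inverse CDF turns these uniform strata into samples of that uncertain parameter.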
Geomorphometric comparative analysis of Latin-American volcanoes
NASA Astrophysics Data System (ADS)
Camiz, Sergio; Poscolieri, Maurizio; Roverato, Matteo
2017-07-01
The geomorphometric classifications of three groups of volcanoes situated in the Andes Cordillera, Central America, and Mexico are performed and compared. Input data are eight local topographic gradients (i.e. elevation differences) obtained by processing each volcano's raster ASTER-GDEM data. The pixels of each volcano DEM have been classified into 17 classes through a K-means clustering procedure following principal component analysis of the gradients. The spatial distribution of the classes, representing homogeneous terrain units, is shown on thematic colour maps, where colours are assigned according to mean slope and aspect class values. The interpretation of the geomorphometric classification of the volcanoes is based on the statistics of both gradients and morphometric parameters (slope, aspect and elevation). The latter were used for a comparison of the volcanoes, performed through classes' slope/aspect scatterplots and multidimensional methods. In this paper, we apply this methodology to 21 volcanoes, randomly chosen from Mexico to Patagonia, to show how it may help detect geomorphological similarities and differences among them. As such, both its descriptive and graphical abilities may be a useful complement to future volcanological studies.
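The pipeline described (PCA of the eight per-pixel gradients, then K-means) can be sketched generically. Everything below is illustrative, not the authors' implementation: the two-cluster toy data stands in for gradient vectors, k=2 stands in for the paper's 17 classes, and the deterministic farthest-point seeding replaces whatever initialization they used.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_kmeans(X, n_components, k, iters=50):
    # PCA via SVD of the centered data matrix
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T
    # Deterministic farthest-point seeding, then plain Lloyd iterations
    centers = [scores[0]]
    for _ in range(1, k):
        dist = np.min([np.sum((scores - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(scores[np.argmax(dist)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((scores[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = scores[labels == j].mean(axis=0)
    return labels

# Toy stand-in for per-pixel 8-gradient vectors: two well-separated groups
X = np.vstack([rng.normal(0.0, 0.1, (20, 8)), rng.normal(5.0, 0.1, (20, 8))])
labels = pca_kmeans(X, n_components=2, k=2)
```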
Bispectral infrared forest fire detection and analysis using classification techniques
NASA Astrophysics Data System (ADS)
Aranda, Jose M.; Melendez, Juan; de Castro, Antonio J.; Lopez, Fernando
2004-01-01
Infrared cameras are well established as a useful tool for fire detection, but their use for quantitative forest fire measurements faces difficulties due to the complex spatial and spectral structure of fires. In this work it is shown that some of these difficulties can be overcome by applying classification techniques, a standard tool for the analysis of satellite multispectral images, to bi-spectral images of fires. Images were acquired by two cameras that operate in the medium infrared (MIR) and thermal infrared (TIR) bands. They provide simultaneous and co-registered images, calibrated in brightness temperatures. The MIR-TIR scatterplot of these images can be used to classify the scene into different fire regions (background, ashes, and several ember and flame regions). It is shown that classification makes it possible to obtain quantitative measurements of physical fire parameters such as rate of spread, ember temperature, and radiated power in the MIR and TIR bands. An estimation of total radiated power and heat release per unit area is also made and compared with values derived from heat of combustion and fuel consumption.
A preliminary analysis of quantifying computer security vulnerability data in "the wild"
NASA Astrophysics Data System (ADS)
Farris, Katheryn A.; McNamara, Sean R.; Goldstein, Adam; Cybenko, George
2016-05-01
A system of computers, networks and software has some level of vulnerability exposure that puts it at risk to criminal hackers. Presently, most vulnerability research uses data from software vendors, and the National Vulnerability Database (NVD). We propose an alternative path forward through grounding our analysis in data from the operational information security community, i.e. vulnerability data from "the wild". In this paper, we propose a vulnerability data parsing algorithm and an in-depth univariate and multivariate analysis of the vulnerability arrival and deletion process (also referred to as the vulnerability birth-death process). We find that vulnerability arrivals are best characterized by the log-normal distribution and vulnerability deletions are best characterized by the exponential distribution. These distributions can serve as prior probabilities for future Bayesian analysis. We also find that over 22% of the deleted vulnerability data have a rate of zero, and that the arrival vulnerability data is always greater than zero. Finally, we quantify and visualize the dependencies between vulnerability arrivals and deletions through a bivariate scatterplot and statistical observations.
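The maximum-likelihood fits behind the distributional findings are simple in both cases: the log-normal is fit by the mean and standard deviation of the log data, and the exponential by the reciprocal of the sample mean. A generic sketch with synthetic data (the true parameters below are arbitrary, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_lognormal(x):
    # MLE of the underlying normal: mean and std of the log-transformed data
    logs = np.log(x)
    return float(logs.mean()), float(logs.std())

def fit_exponential(x):
    # MLE of the rate parameter is the reciprocal of the sample mean
    return 1.0 / float(np.mean(x))

arrivals = rng.lognormal(mean=1.0, sigma=0.5, size=5000)   # synthetic "arrivals"
deletions = rng.exponential(scale=2.0, size=5000)          # synthetic "deletions"
mu_hat, sigma_hat = fit_lognormal(arrivals)
rate_hat = fit_exponential(deletions)                      # true rate = 0.5
```

The fitted parameters can then serve as the prior probabilities the abstract mentions for subsequent Bayesian analysis.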
Income Smoothing: Methodology and Models.
1986-05-01
studies have all followed a similar research process (Figure 1). All were ex post studies and included the following steps: 1. A smoothing technique(s) or ... researcher methodological decisions used in past empirical studies of income smoothing (design type, smoothing device norm, and income target) are discussed ... behavior. The identification of smoothing, and consequently the conclusions to be drawn from smoothing studies, is found to be sensitive to the three
Proximal-distal differences in movement smoothness reflect differences in biomechanics.
Salmond, Layne H; Davidson, Andrew D; Charles, Steven K
2017-03-01
Smoothness is a hallmark of healthy movement. Past research indicates that smoothness may be a side product of a control strategy that minimizes error. However, this is not the only reason for smooth movements. Our musculoskeletal system itself contributes to movement smoothness: the mechanical impedance (inertia, damping, and stiffness) of our limbs and joints resists sudden change, resulting in a natural smoothing effect. How the biomechanics and neural control interact to result in an observed level of smoothness is not clear. The purpose of this study is to 1) characterize the smoothness of wrist rotations, 2) compare it with the smoothness of planar shoulder-elbow (reaching) movements, and 3) determine the cause of observed differences in smoothness. Ten healthy subjects performed wrist and reaching movements involving different targets, directions, and speeds. We found wrist movements to be significantly less smooth than reaching movements and to vary in smoothness with movement direction. To identify the causes underlying these observations, we tested a number of hypotheses involving differences in bandwidth, signal-dependent noise, speed, impedance anisotropy, and movement duration. Our simulations revealed that proximal-distal differences in smoothness reflect proximal-distal differences in biomechanics: the greater impedance of the shoulder-elbow filters neural noise more than the wrist. In contrast, differences in signal-dependent noise and speed were not sufficiently large to recreate the observed differences in smoothness. We also found that the variation in wrist movement smoothness with direction appears to be caused by, or at least correlated with, differences in movement duration, not impedance anisotropy. NEW & NOTEWORTHY This article presents the first thorough characterization of the smoothness of wrist rotations (flexion-extension and radial-ulnar deviation) and comparison with the smoothness of reaching (shoulder-elbow) movements.
We found wrist rotations to be significantly less smooth than reaching movements and determined that this difference reflects proximal-distal differences in biomechanics: the greater impedance (inertia, damping, stiffness) of the shoulder-elbow filters noise in the command signal more than the impedance of the wrist. Copyright © 2017 the American Physiological Society.
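The abstract does not name the smoothness metric used. One standard choice in the motor-control literature is the log dimensionless jerk of the speed profile, sketched here purely as a hypothetical stand-in (higher values indicate smoother movement):

```python
import numpy as np

def log_dimensionless_jerk(speed, dt):
    # -ln( T^3 / v_peak^2 * integral of squared jerk of the speed profile );
    # the scaling makes the quantity dimensionless and duration-invariant.
    speed = np.asarray(speed, dtype=float)
    jerk = np.gradient(np.gradient(speed, dt), dt)
    duration = dt * (len(speed) - 1)
    dj = duration ** 3 / np.max(speed) ** 2 * np.sum(jerk ** 2) * dt
    return -np.log(dj)

t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
smooth = np.sin(np.pi * t)                            # bell-shaped speed profile
jittery = smooth + 0.02 * np.sin(40.0 * np.pi * t)    # added tremor-like ripple
ldj_smooth = log_dimensionless_jerk(smooth, dt)
ldj_jittery = log_dimensionless_jerk(jittery, dt)
```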
Kovic, Bruno; Guyatt, Gordon; Brundage, Michael; Thabane, Lehana; Bhatnagar, Neera; Xie, Feng
2016-09-02
There is an increasing number of new oncology drugs being studied, approved and put into clinical practice based on improvement in progression-free survival, when no overall survival benefits exist. In oncology, the association between progression-free survival and health-related quality of life is currently unknown, despite its importance for patients with cancer, and the unverified assumption that longer progression-free survival indicates improved health-related quality of life. Thus far, only 1 study has investigated this association, providing insufficient evidence and inconclusive results. The objective of this study protocol is to provide increased transparency in supporting a systematic summary of the evidence bearing on this association in oncology. Using the OVID platform in MEDLINE, Embase and Cochrane databases, we will conduct a systematic review of randomised controlled human trials addressing oncology issues published starting in 2000. A team of reviewers will, in pairs, independently screen and abstract data using standardised, pilot-tested forms. We will employ numerical integration to calculate mean incremental area under the curve between treatment groups in studies for health-related quality of life, along with total related error estimates, and a 95% CI around incremental area. To describe the progression-free survival to health-related quality of life association, we will construct a scatterplot for incremental health-related quality of life versus incremental progression-free survival. To estimate the association, we will use a weighted simple regression approach, comparing mean incremental health-related quality of life with either median incremental progression-free survival time or the progression-free survival HR, in the absence of overall survival benefit. 
Identifying direction and magnitude of association between progression-free survival and health-related quality of life is critically important in interpreting results of oncology trials. Systematic evidence produced from our study will contribute to improvement of patient care and practice of evidence-based medicine in oncology. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
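The incremental area-under-the-curve computation described in the protocol reduces to trapezoidal integration of each arm's HRQoL scores over the assessment times. A sketch with a hypothetical schedule and hypothetical scores (not trial data):

```python
def auc_trapezoid(times, scores):
    # Trapezoidal rule over (assessment time, HRQoL score) pairs
    return sum((t2 - t1) * (s1 + s2) / 2.0
               for t1, t2, s1, s2 in zip(times, times[1:], scores, scores[1:]))

weeks = [0.0, 8.0, 16.0, 24.0]       # hypothetical assessment schedule
arm_a = [70.0, 75.0, 75.0, 70.0]     # hypothetical HRQoL scores, arm A
arm_b = [70.0, 68.0, 66.0, 64.0]     # hypothetical HRQoL scores, arm B
incremental_auc = auc_trapezoid(weeks, arm_a) - auc_trapezoid(weeks, arm_b)
```

Plotting such incremental AUC values against incremental progression-free survival across trials gives the scatterplot the protocol describes.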
SeeSway - A free web-based system for analysing and exploring standing balance data.
Clark, Ross A; Pua, Yong-Hao
2018-06-01
Computerised posturography can be used to assess standing balance, and can predict poor functional outcomes in many clinical populations. A key limitation is the disparate signal filtering and analysis techniques, with many methods requiring custom computer programs. This paper discusses the creation of a freely available web-based software program, SeeSway (www.rehabtools.org/seesway), which was designed to provide powerful tools for pre-processing, analysing and visualising standing balance data in an easy-to-use and platform-independent website. SeeSway links an interactive web platform with file upload capability to software systems including LabVIEW, Matlab, Python and R to perform the data filtering, analysis and visualisation of standing balance data. Input data can consist of any signal that comprises an anterior-posterior and medial-lateral coordinate trace, such as centre of pressure or mass displacement. This allows it to be used with systems including criterion-reference commercial force platforms and three-dimensional motion analysis, smartphones, accelerometers and low-cost technology such as the Nintendo Wii Balance Board and Microsoft Kinect. Filtering options include Butterworth, weighted and unweighted moving average, and discrete wavelet transforms. Analysis methods include standard techniques such as path length, amplitude, and root mean square in addition to less common but potentially promising methods such as sample entropy, detrended fluctuation analysis and multiresolution wavelet analysis. These data are visualised using scalograms, which chart the change in frequency content over time, scatterplots and standard line charts. This provides the user with a detailed understanding of their results, and how their different pre-processing and analysis method selections affect their findings.
An example of the data analysis techniques is provided in the paper, with graphical representation of how advanced analysis methods can better discriminate between someone with neurological impairment and a healthy control. The goal of SeeSway is to provide a simple yet powerful educational and research tool to explore how standing balance is affected in aging and clinical populations. Copyright © 2018 Elsevier B.V. All rights reserved.
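Two of the standard techniques listed, path length and root-mean-square amplitude, are easy to sketch for an anterior-posterior/medial-lateral coordinate trace. These are generic textbook formulas, not SeeSway's source:

```python
import numpy as np

def path_length(ap, ml):
    # Total sway-path length of the anterior-posterior / medial-lateral trace
    return float(np.sum(np.hypot(np.diff(ap), np.diff(ml))))

def rms_amplitude(signal):
    # Root-mean-square amplitude about the mean position
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    return float(np.sqrt(np.mean(x ** 2)))

pl = path_length([0.0, 3.0], [0.0, 4.0])        # a single 3-4-5 step
rms = rms_amplitude([1.0, -1.0, 1.0, -1.0])
```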
Use of multiple relocation techniques to better understand seismotectonic structure in Greece
NASA Astrophysics Data System (ADS)
Bozionelos, George; Ganas, Athanassios; Karastathis, Vassilios; Moshou, Alexandra
2015-04-01
The identification of the structure of seismicity associated with active faults is of great significance, particularly for the densely populated areas of Greece such as the Corinth Gulf, SW Peloponnese and central Crete. Manual analysis of the seismicity that has been recorded by the Hellenic Unified Seismological Network (HUSN) in recent years provides the opportunity to determine accurate hypocentral solutions using the weighted P and S wave arrival times for these regions. The purpose is to perform precise event location and relative relocation so as to obtain the spatial distribution of the recorded seismicity with the needed resolution. In order to investigate the influence of the velocity model on the seismicity distribution and to find the most reliable hypocentral locations, different velocity models (both 1-D and 3-D) and location schemes are adopted and thoroughly tested. Initially, to test the models, the hypocentral locations, including the determination of the location uncertainties, are obtained by applying the non-linear location tool NonLinLoc. To approximate the likelihood function, the Equal Differential Time (EDT) formulation, which is much more robust in the presence of outliers, is selected. To locate the earthquakes, the Oct-tree search is used. Histograms of the RMS error, the spatial errors and the maximum half-axis (LEN3) of the 68% confidence ellipsoid are created. Moreover, the form of density scatterplots and the difference between maximum likelihood and expectation locations are taken into account. As an additional procedure, the travel-time residuals are examined separately for each station as a function of epicentral distance. Finally, several cross sections are constructed at various azimuths, and the spatial distribution of the earthquakes is evaluated and compared with the active fault structures.
In order to highlight the activated faults, an additional relocation procedure is performed, using the double-difference algorithm HYPODD and incorporating the traveltime data of the best fitting velocity models. The accurate determination of seismicity will play a key role in revealing the mechanisms that contribute to the crustal deformation and to active tectonics. Note: this research was funded by the ASPIDA project.
Torstrick, F Brennan; Klosterhoff, Brett S; Westerlund, L Erik; Foley, Kevin T; Gochuico, Joanna; Lee, Christopher S D; Gall, Ken; Safranski, David L
2018-05-01
Various surface modifications, often incorporating roughened or porous surfaces, have recently been introduced to enhance osseointegration of interbody fusion devices. However, these topographical features can be vulnerable to damage during clinical impaction. Despite the potential negative impact of surface damage on clinical outcomes, current testing standards do not replicate clinically relevant impaction loading conditions. The purpose of this study was to compare the impaction durability of conventional smooth polyether-ether-ketone (PEEK) cervical interbody fusion devices with two surface-modified PEEK devices that feature either a porous structure or plasma-sprayed titanium coating. A recently developed biomechanical test method was adapted to simulate clinically relevant impaction loading conditions during cervical interbody fusion procedures. Three cervical interbody fusion devices were used in this study: smooth PEEK, plasma-sprayed titanium-coated PEEK, and porous PEEK (n=6). Following Kienle et al., devices were impacted between two polyurethane blocks mimicking vertebral bodies under a constant 200 N preload. The posterior tip of the device was placed at the entrance between the polyurethane blocks, and a guided 1-lb weight was impacted upon the anterior face with a maximum speed of 2.6 m/s to represent the strike force of a surgical mallet. Impacts were repeated until the device was fully impacted. Porous PEEK durability was assessed using micro-computed tomography (µCT) pre- and postimpaction. Titanium-coating coverage pre- and postimpaction was assessed using scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy. Changes to the surface roughness of smooth and titanium-coated devices were also evaluated. Porous PEEK and smooth PEEK devices showed minimal macroscopic signs of surface damage, whereas the titanium-coated devices exhibited substantial visible coating loss. 
Quantification of the porous PEEK deformation demonstrated that the porous structure maintained a high porosity (>65%) following impaction that would be available for bone ingrowth, and exhibited minimal changes to pore size and depth. SEM and energy dispersive X-ray spectroscopy analysis of titanium-coated devices demonstrated substantial titanium coating loss after impaction that was corroborated with a decrease in surface roughness. Smooth PEEK showed minimal signs of damage using SEM, but demonstrated a decrease in surface roughness. Although recent surface modifications to interbody fusion devices are beneficial for osseointegration, they may be susceptible to damage and wear during impaction. The current study found porous PEEK devices to show minimal damage during simulated cervical impaction, whereas titanium-coated PEEK devices lost substantial titanium coverage. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
A new third order finite volume weighted essentially non-oscillatory scheme on tetrahedral meshes
NASA Astrophysics Data System (ADS)
Zhu, Jun; Qiu, Jianxian
2017-11-01
In this paper a third order finite volume weighted essentially non-oscillatory (WENO) scheme is designed for solving hyperbolic conservation laws on tetrahedral meshes. Compared with other finite volume WENO schemes designed on tetrahedral meshes, the crucial advantages of this new WENO scheme are its simplicity and compactness, with the application of only six unequal-size spatial stencils for reconstructing polynomials of unequal degree in the WENO-type spatial procedures, and the easy choice of positive linear weights without considering the topology of the meshes. The key innovation of the scheme is to use a quadratic polynomial defined on a big central spatial stencil to obtain a third order numerical approximation at any point inside the target tetrahedral cell in smooth regions, and to switch to at least one of five linear polynomials defined on small biased/central spatial stencils to sustain sharp shock transitions while keeping the essentially non-oscillatory property. By performing these new procedures in the spatial reconstructions and adopting a third order TVD Runge-Kutta time discretization method for solving the ordinary differential equation (ODE), the new scheme's memory occupancy is decreased and its computing efficiency is increased, so it is suitable for large scale engineering computations on tetrahedral meshes. Some numerical results are provided to illustrate the good performance of the scheme.
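The stencil-blending step above follows the usual WENO recipe: candidate polynomials are combined with nonlinear weights built from linear (optimal) weights and smoothness indicators. A minimal sketch in the classical 1D third-order setting (two 2-point stencils, rather than the paper's six stencils on tetrahedra; function and variable names are illustrative):

```python
def weno3_reconstruct(um1, u0, up1, eps=1e-6):
    """Third-order WENO reconstruction of the interface value u_{i+1/2}
    from cell averages u_{i-1}, u_i, u_{i+1} on a uniform 1D grid."""
    # Candidate linear polynomials evaluated at the right interface
    p0 = -0.5 * um1 + 1.5 * u0     # stencil {i-1, i}
    p1 = 0.5 * u0 + 0.5 * up1      # stencil {i, i+1}
    # Smoothness indicators: squared first differences
    b0 = (u0 - um1) ** 2
    b1 = (up1 - u0) ** 2
    # Linear weights give third order in smooth regions
    d0, d1 = 1.0 / 3.0, 2.0 / 3.0
    a0 = d0 / (eps + b0) ** 2
    a1 = d1 / (eps + b1) ** 2
    w0, w1 = a0 / (a0 + a1), a1 / (a0 + a1)
    return w0 * p0 + w1 * p1
```

In smooth data both candidates nearly agree and the nonlinear weights approach the linear ones; near a discontinuity the weight of the non-smooth stencil collapses, which is the non-oscillatory mechanism the abstract describes.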
Plasma zinc's alter ego is a low-molecular-weight humoral factor.
Ou, Ou; Allen-Redpath, Keith; Urgast, Dagmar; Gordon, Margaret-Jane; Campbell, Gill; Feldmann, Jörg; Nixon, Graeme F; Mayer, Claus-Dieter; Kwun, In-Sook; Beattie, John H
2013-09-01
Mild dietary zinc deprivation in humans and rodents has little effect on blood plasma zinc levels, and yet cellular consequences of zinc depletion can be detected in vascular and other tissues. We proposed that a zinc-regulated humoral factor might mediate the effects of zinc deprivation. Using a novel approach, primary rat vascular smooth muscle cells (VSMCs) were treated with plasma from zinc-deficient (<1 mg Zn/kg) or zinc-adequate (35 mg Zn/kg, pair-fed) adult male rats, and zinc levels were manipulated to distinguish direct and indirect effects of plasma zinc. Gene expression changes were analyzed by microarray and qPCR; incubation of VSMCs with blood plasma from zinc-deficient rats strongly changed the expression of >2500 genes, compared with incubation of cells with zinc-adequate rat plasma. We demonstrated that this effect was caused by a low-molecular-weight (∼2-kDa) zinc-regulated humoral factor and that the changes in gene expression were mostly reversed by adding zinc back to the zinc-deficient plasma. Strongly regulated genes were overrepresented in pathways associated with immune function and development. We conclude that zinc deficiency induces the production of a low-molecular-weight humoral factor whose influence on VSMC gene expression is blocked by plasma zinc. This factor is therefore under dual control by zinc.
DOT National Transportation Integrated Search
2006-12-18
This study investigated the effect of pavement smoothness on fuel efficiency, specifically examining the miles per gallon in fuel savings for smooth versus rough pavement. The study found that a 53% improvement in smoothness resulted in over 2.4% im...
The Dynamic Actin Cytoskeleton in Smooth Muscle.
Tang, Dale D
2018-01-01
Smooth muscle contraction requires both myosin activation and actin cytoskeletal remodeling. Actin cytoskeletal reorganization facilitates smooth muscle contraction by promoting force transmission between the contractile unit and the extracellular matrix (ECM), and by enhancing intercellular mechanical transduction. Myosin may be viewed to serve as an "engine" for smooth muscle contraction whereas the actin cytoskeleton may function as a "transmission system" in smooth muscle. The actin cytoskeleton in smooth muscle also undergoes restructuring upon activation with growth factors or the ECM, which controls smooth muscle cell proliferation and migration. Abnormal smooth muscle contraction, cell proliferation, and motility contribute to the development of vascular and pulmonary diseases. A number of actin-regulatory proteins including protein kinases have been discovered to orchestrate actin dynamics in smooth muscle. In particular, Abelson tyrosine kinase (c-Abl) is an important molecule that controls actin dynamics, contraction, growth, and motility in smooth muscle. Moreover, c-Abl coordinates the regulation of blood pressure and contributes to the pathogenesis of airway hyperresponsiveness and vascular/airway remodeling in vivo. Thus, c-Abl may be a novel pharmacological target for the development of new therapy to treat smooth muscle diseases such as hypertension and asthma. © 2018 Elsevier Inc. All rights reserved.
Buhendwa, Rudahaba Augustin; Roelants, Mathieu; Thomis, Martine; Nkiama, Constant E
2017-09-01
The last study to establish centiles of the main anthropometric measurements in Kinshasa was conducted over 60 years ago, which calls into question its current adequacy for describing or monitoring growth in this population. To assess the nutritional status of school-aged children and adolescents and to estimate centile curves of height, weight and body mass index (BMI), a representative sample of 7541 school-aged children and adolescents (48% boys) aged 6-18 years was measured between 2010 and 2013. Smooth centiles of height, weight and BMI-for-age were estimated with the LMS method and compared with the WHO 2007 reference. Nutritional status was assessed by comparing measurements of height and BMI against the appropriate WHO cut-offs. Compared with the WHO reference, percentiles of height and BMI were generally lower. This difference was larger in boys than in girls and increased as they approached adolescence. The prevalence of short stature (< -2 SD) and thinness (< -2 SD) was higher in boys (9.8% and 12%) than in girls (3.4% and 6.1%), but the prevalence of overweight (> 1 SD) was higher in girls (8.6%) than in boys (4.5%). Children from Kinshasa fall below WHO centile references. This study established up-to-date centile curves for height, weight and BMI by age in children and adolescents. These reference curves describe the current status of these anthropometric markers and can be used as a basis for comparison in future studies.
NASA Astrophysics Data System (ADS)
Jiménez-Ruano, Adrián; Rodrigues Mimbrero, Marcos; de la Riva Fernández, Juan
2017-04-01
Understanding fire regime is a crucial step towards achieving a better knowledge of the wildfire phenomenon. This study proposes a method for the analysis of fire regime based on multidimensional scatterplots (MDS). MDS are a visual approach that allows direct comparison among several variables and fire regime features, so that we are able to unravel spatial patterns and relationships within the region of analysis. Our analysis is conducted in Spain, one of the most fire-affected areas within the Mediterranean region. Specifically, the Spanish territory has been split into three regions - Northwest, Hinterland and Mediterranean - considered as representative fire regime zones according to MAGRAMA (Spanish Ministry of Agriculture, Environment and Food). The main goal is to identify key relationships of fire frequency and burnt area, two of the most common fire regime features, with socioeconomic activity and climate, so that fire activity within each fire region can be better characterized. Fire data for the period 1974-2010 were retrieved from the General Statistics Forest Fires database (EGIF). Specifically, fire frequency and burnt area size were examined for each region and fire season (summer and winter). Socioeconomic activity was defined in terms of human pressure on wildlands, i.e. the presence and intensity of anthropogenic activity near wildland or forest areas. Human pressure was built from GIS spatial information about land use (wildland-agriculture and wildland-urban interface) and demographic potential. Climate variables (average maximum temperature and annual precipitation) were extracted from the MOTEDAS (Monthly Temperature Dataset of Spain) and MOPREDAS (Monthly Precipitation Dataset of Spain) datasets and later reclassified into ten categories. All these data were resampled to fit the 10 × 10 km grid used as the spatial reference for fire data. 
Climate and socioeconomic variables were then explored by means of MDS to find the extent to which fire frequency and burnt areas are controlled by environmental factors, human factors, or both. Results reveal a noticeable link between fire frequency and human activity, especially in the Northwest area during winter. On the other hand, in the Hinterland and Mediterranean regions, human and climate factors 'work' together in terms of their relationship with fire activity, with the concurrence of high human pressure and favourable climate conditions being the main driver. In turn, burned area shows a similar behaviour except in the Hinterland region, where fire-affected area depends mostly on climate factors. Overall, we can conclude that the visual analysis of multidimensional scatterplots is a powerful tool that facilitates the characterization and investigation of fire regimes.
NASA Technical Reports Server (NTRS)
Cairns, Iver H.; Robinson, P. A.; Anderson, Roger R.; Strangeway, R. J.
1997-01-01
Plasma wave data are compared with ISEE 1's position in the electron foreshock for an interval with unusually constant (but otherwise typical) solar wind magnetic field and plasma characteristics. For this period, temporal variations in the wave characteristics can be confidently separated from sweeping of the spatially varying foreshock back and forth across the spacecraft. The spacecraft's location, particularly the coordinate D(sub f) downstream from the foreshock boundary (often termed DIFF), is calculated by using three shock models and the observed solar wind magnetometer and plasma data. Scatterplots of the wave field versus D(sub f) are used to constrain viable shock models, to investigate the observed scatter in the wave fields at constant D(sub f), and to test the theoretical predictions of linear instability theory. The scatterplots confirm the abrupt onset of the foreshock waves near the upstream boundary, the narrow width in D(sub f) of the region with high fields, and the relatively slow falloff of the fields at large D(sub f), as seen in earlier studies, but with much smaller statistical scatter. The plots also show an offset of the high-field region from the foreshock boundary. It is shown that an adaptive, time-varying shock model with no free parameters, determined by the observed solar wind data and published shock crossings, is viable but that two alternative models are not. Foreshock wave studies can therefore remotely constrain the bow shock's location. The observed scatter in wave field at constant D(sub f) is shown to be real and to correspond to real temporal variations, not to unresolved changes in D(sub f). 
By comparing the wave data with a linear instability theory based on a published model for the electron beam it is found that the theory can account qualitatively and semiquantitatively for the abrupt onset of the waves near D(sub f) = 0, for the narrow width and offset of the high-field region, and for the decrease in wave intensity with increasing D(sub f). Quantitative differences between observations and theory remain, including large overprediction of the wave fields and the slower than predicted falloff at large D(sub f) of the wave fields. These differences, as well as the unresolved issue of the electron beam speed in the high-field region of the foreshock, are discussed. The intrinsic temporal variability of the wave fields, as well as their overprediction based on homogeneous plasma theory, are indicative of stochastic growth physics, which causes wave growth to be random and varying in sign, rather than secular.
An image mosaic method based on corner
NASA Astrophysics Data System (ADS)
Jiang, Zetao; Nie, Heting
2015-08-01
In view of the shortcomings of traditional image mosaic methods, this paper describes a new algorithm for image mosaicking based on the Harris corner. Firstly, a Harris operator, combined with a low-pass smoothing filter constructed from spline functions and a circular-window search, is applied to detect image corners, which gives better localisation performance and effectively avoids clustering of detected corners. Secondly, correlation-based feature registration is used to find registration pairs, and false registrations are removed using random sample consensus (RANSAC). Finally, a weighted trigonometric function combined with interpolation is used for image fusion. The experiments show that this method can effectively remove splicing ghosting and improve the accuracy of the image mosaic.
NASA Technical Reports Server (NTRS)
Edwards, B. F.; Waligora, J. M.; Horrigan, D. J., Jr.
1985-01-01
This analysis was done to determine whether various decompression response groups could be characterized by the pooled nitrogen (N2) washout profiles of the group members; pooling individual washout profiles provided a smooth, time-dependent function of means representative of each decompression response group. No statistically significant differences were detected. The statistical comparisons of the profiles were performed by means of univariate weighted t-tests at each 5-minute profile point, with significance levels of 5 and 10 percent. The estimated powers of the tests (i.e., the probabilities of detecting the observed differences in the pooled profiles) were of the order of 8 to 30 percent.
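The abstract does not spell out the exact weighted t-test used at each 5-minute point. One common Welch-type formulation with observation weights and effective sample sizes might look like the sketch below (the weighting scheme and bias correction here are assumptions, not the authors' exact procedure):

```python
import math

def weighted_mean_var(x, w):
    """Weighted mean, bias-corrected weighted variance, and the
    effective sample size for reliability-style weights."""
    sw = sum(w)
    m = sum(wi * xi for wi, xi in zip(w, x)) / sw
    n_eff = sw ** 2 / sum(wi ** 2 for wi in w)   # effective sample size
    var = sum(wi * (xi - m) ** 2 for wi, xi in zip(w, x)) / sw
    var *= n_eff / (n_eff - 1.0)                 # small-sample correction
    return m, var, n_eff

def weighted_t(x1, w1, x2, w2):
    """Welch-type t statistic comparing two weighted samples."""
    m1, v1, n1 = weighted_mean_var(x1, w1)
    m2, v2, n2 = weighted_mean_var(x2, w2)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
```

With all weights equal this reduces to the ordinary Welch two-sample statistic, which makes it easy to sanity-check against standard software.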
Geochemical landscapes of the conterminous United States; new map presentations for 22 elements
Gustavsson, N.; Bolviken, B.; Smith, D.B.; Severson, R.C.
2001-01-01
Geochemical maps of the conterminous United States have been prepared for seven major elements (Al, Ca, Fe, K, Mg, Na, and Ti) and 15 trace elements (As, Ba, Cr, Cu, Hg, Li, Mn, Ni, Pb, Se, Sr, V, Y, Zn, and Zr). The maps are based on an ultra low-density geochemical survey consisting of 1,323 samples of soils and other surficial materials collected from approximately 1960-1975. The data were published by Boerngen and Shacklette (1981) and black-and-white point-symbol geochemical maps were published by Shacklette and Boerngen (1984). The data have been reprocessed using weighted-median and Bootstrap procedures for interpolation and smoothing.
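The weighted-median step of the map interpolation can be sketched as follows. The bootstrap component and the spatial weighting scheme are not specified in the abstract, so this shows only the generic weighted-median computation that such smoothing is built on:

```python
def weighted_median(values, weights):
    """Weighted median: the smallest value whose cumulative weight
    reaches half of the total weight."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    cum = 0.0
    for v, w in pairs:
        cum += w
        if cum >= half:
            return v
```

In an interpolation setting, `values` would be the element concentrations at nearby sample sites and `weights` would typically decay with distance from the grid node being estimated.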
[A giant myxoid leiomyoma mimicking an inguinal hernia].
Huszár, Orsolya; Zaránd, Attila; Szántó, Gyöngyi; Juhász, Viktória; Székely, Eszter; Novák, András; Molnár, Béla Ákos; Harsányi, László
2016-03-06
Leiomyoma is a rare smooth muscle tumour that can occur anywhere in the human body. The authors present the history of a 60-year-old female who had a giant, Mullerian-type myxoid leiomyoma in the inguinal region mimicking acute abdominal symptoms. After examination, the authors removed the soft tissue mass in the right femoral region, which in the supine position reached down to the middle third of the leg, measured 335 × 495 × 437 mm in greatest diameters, and weighed 33 kg. Reconstruction of the tissue defect was performed following oncoplastic guidelines. During the follow-up period no tumour recurrence was detected and the quality of life of the patient improved significantly.
General Multivariate Linear Modeling of Surface Shapes Using SurfStat
Chung, Moo K.; Worsley, Keith J.; Nacewicz, Brendon M.; Dalton, Kim M.; Davidson, Richard J.
2010-01-01
Although there are many imaging studies on traditional ROI-based amygdala volumetry, there are very few studies on modeling amygdala shape variations. This paper presents a unified computational and statistical framework for modeling amygdala shape variations in a clinical population. The weighted spherical harmonic representation is used to parameterize, smooth, and normalize amygdala surfaces. The representation is subsequently used as an input for multivariate linear models accounting for nuisance covariates such as age and brain size differences, using the SurfStat package, which completely avoids the complexity of specifying design matrices. The methodology has been applied to quantifying abnormal local amygdala shape variations in 22 high-functioning autistic subjects. PMID:20620211
NASA Technical Reports Server (NTRS)
Kimsey, D. B.
1978-01-01
The effect on the life cycle cost of the timing subsystem was examined, when these optional features were included in various combinations. The features included mutual control, directed control, double-ended reference links, independence of clock error measurement and correction, phase reference combining, self-organization, smoothing for link and nodal dropouts, unequal reference weightings, and a master in a mutual control network. An overall design of a microprocessor-based timing subsystem was formulated. The microprocessor (8080) implements the digital filter portion of a digital phase locked loop, as well as other control functions such as organization of the network through communication with processors at neighboring nodes.
Extraction and characterization of the Auricularia auricula polysaccharide
NASA Astrophysics Data System (ADS)
Zhang, Q. T.
2016-07-01
To study a new carrier for protein drugs, the Auricularia auricula polysaccharide (AAP) was extracted and purified from Auricularia auricula, and then characterized by micrOTOF-Q mass spectrometry, UV/Vis spectrophotometry, moisture analysis and SEM. The results showed that the AAP sample was water-soluble white flocculence, its molecular weight ranged from 20,506.9 Da to 63,923.7 Da, and the yield, moisture, and total sugar contents of the AAP were 4.5%, 6.2% and 90.12% (w/w), respectively. The SEM results revealed that the vacuum-dried AAP consisted of spherical particles with a smooth surface, whereas the freeze-dried AAP had a continuous porous sheet shape with a loose structure.
Polishing compound for plastic surfaces
Stowell, M.S.
1991-01-01
This invention is a polishing compound for plastic materials. The compound includes, approximately by weight, 25 to 80 parts of at least one petroleum distillate lubricant, 1 to 12 parts mineral spirits, 50 to 155 parts abrasive paste, and 15 to 60 parts water. Preferably, the compound includes approximately 37 to 42 parts of at least one petroleum distillate lubricant, up to 8 parts mineral spirits, 95 to 110 parts abrasive paste, and 50 to 55 parts water. The proportions of the ingredients are varied in accordance with the particular application. The compound is used on PLEXIGLAS{trademark}, LEXAN{trademark}, LUCITE{trademark}, polyvinyl chloride (PVC), and similar plastic materials whenever a smooth, clear polished surface is desired.
FORMATION BY IRRADIATION OF AN EXPANDED, CELLULAR, POLYMERIC BODY
Charlesby, A.; Ross, M.
1958-12-01
The treatment of polymeric esters of methacrylic acid having a softening point above 40 °C to form an expanded cellular mass with a smooth skin is discussed. The disclosed method comprises the steps of subjecting the body, at a temperature below the softening point, to a dose of at least 5 × 10^6 roentgen of gamma radiation from a cobalt-60 source until its average molecular weight is reduced to a value within the range of 3 × 10^5 to 10^4, and heating at a temperature within the range of 0 to 10 °C above its softening point to effect expansion.
Model predictive control for spacecraft rendezvous in elliptical orbit
NASA Astrophysics Data System (ADS)
Li, Peng; Zhu, Zheng H.
2018-05-01
This paper studies the control of spacecraft rendezvous with attitude stable or spinning targets in an elliptical orbit. The linearized Tschauner-Hempel equation is used to describe the motion of spacecraft and the problem is formulated by model predictive control. The control objective is to maximize control accuracy and smoothness simultaneously to avoid unexpected change or overshoot of trajectory for safe rendezvous. It is achieved by minimizing the weighted summations of control errors and increments. The effects of two sets of horizons (control and predictive horizons) in the model predictive control are examined in terms of fuel consumption, rendezvous time and computational effort. The numerical results show the proposed control strategy is effective.
Interacting multiple model forward filtering and backward smoothing for maneuvering target tracking
NASA Astrophysics Data System (ADS)
Nandakumaran, N.; Sutharsan, S.; Tharmarasa, R.; Lang, Tom; McDonald, Mike; Kirubarajan, T.
2009-08-01
The Interacting Multiple Model (IMM) estimator has been proven to be effective in tracking agile targets. Smoothing or retrodiction, which uses measurements beyond the current estimation time, provides better estimates of target states. Various methods have been proposed for multiple model smoothing in the literature. In this paper, a new smoothing method, which involves forward filtering followed by backward smoothing while maintaining the fundamental spirit of the IMM, is proposed. The forward filtering is performed using the standard IMM recursion, while the backward smoothing is performed using a novel interacting smoothing recursion. This backward recursion mimics the IMM estimator in the backward direction, where each mode-conditioned smoother uses the standard Kalman smoothing recursion. The resulting algorithm provides improved but delayed estimates of target states. Simulation studies are performed to demonstrate the improved performance with a maneuvering target scenario. The comparison with existing methods confirms the improved smoothing accuracy. This improvement results from avoiding the augmented state vector used by other algorithms. In addition, the new technique to account for model switching in smoothing is a key in improving the performance.
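Each mode-conditioned smoother in the method above runs a standard Kalman forward pass followed by a Rauch-Tung-Striebel (RTS) backward pass. A minimal scalar random-walk sketch of that per-mode recursion (the IMM mixing steps are omitted, and the noise parameters are purely illustrative):

```python
import numpy as np

def kf_rts_scalar(z, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Scalar random-walk (F = 1) Kalman filter followed by an RTS
    backward smoother; returns filtered and smoothed means/variances."""
    n = len(z)
    xf = np.zeros(n); pf = np.zeros(n)   # filtered estimates
    xp = np.zeros(n); pp = np.zeros(n)   # one-step predictions
    x, p = x0, p0
    for k in range(n):
        x_pred, p_pred = x, p + q        # predict
        xp[k], pp[k] = x_pred, p_pred
        K = p_pred / (p_pred + r)        # Kalman gain
        x = x_pred + K * (z[k] - x_pred) # update with measurement z[k]
        p = (1.0 - K) * p_pred
        xf[k], pf[k] = x, p
    # Rauch-Tung-Striebel backward pass
    xs = xf.copy(); ps = pf.copy()
    for k in range(n - 2, -1, -1):
        C = pf[k] / pp[k + 1]            # smoother gain
        xs[k] = xf[k] + C * (xs[k + 1] - xp[k + 1])
        ps[k] = pf[k] + C ** 2 * (ps[k + 1] - pp[k + 1])
    return xf, pf, xs, ps
```

The backward pass can only reduce the posterior variance relative to the forward filter, which is the sense in which smoothing "provides better estimates of target states" in the abstract.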
Mapping Shoreline Change Using Digital Orthophotogrammetry on Maui, Hawaii
Fletcher, C.; Rooney, J.; Barbee, M.; Lim, S.-C.; Richmond, B.
2003-01-01
Digital, aerial orthophotomosaics with 0.5-3.0 m horizontal accuracy, used with NOAA topographic maps (T-sheets), document past shoreline positions on Maui Island, Hawaii. Outliers in the shoreline position database are determined using a least median of squares regression. Least squares linear regression of the reweighted data (outliers excluded) is used to determine a shoreline trend termed the reweighted linear squares (RLS). To determine the annual erosion hazard rate (AEHR) for use by shoreline managers, the RLS data are smoothed in the longshore direction using a weighted moving average five transects wide, with the smoothed rate applied to the center transect. Weightings within each five-transect group are 1, 3, 5, 3, 1. AEHRs (smoothed RLS values) are plotted on a 1:3000 map series for use by shoreline managers and planners. These maps are displayed on the web for public reference at http://www.co.maui.hi.us/departments/Planning/erosion.htm. An end-point rate of change is also calculated using the earliest T-sheet and the latest collected shoreline (1997 or 2002). The resulting database consists of 3565 separate erosion rates spaced every 20 m along 90 km of sandy shoreline. Three regions are analyzed: the Kihei, West Maui, and North Shore coasts. The Kihei Coast has an average AEHR of about 0.3 m/yr, an end point rate (EPR) of 0.2 m/yr, 2.8 km of beach loss and 19 percent beach narrowing in the period 1949-1997. Over the same period the West Maui coast has an average AEHR of about 0.2 m/yr, an average EPR of about 0.2 m/yr, about 4.5 km of beach loss and 25 percent beach narrowing. The North Shore has an average AEHR of about 0.4 m/yr, an average EPR of about 0.3 m/yr, 0.8 km of beach loss and 15 percent beach narrowing. The mean, island-wide EPR of eroding shorelines is 0.24 m/yr and the average AEHR of eroding shorelines is about 0.3 m/yr. 
The overall shoreline change rate, erosion and accretion included, as measured using the unsmoothed RLS technique, is 0.21 m/yr. Island-wide changes in beach width show a 19 percent decrease over the period 1949/1950 to 1997/2002. Island-wide, about 8 km of dry beach has been lost since 1949 (i.e., high water against hard engineering structures and natural rock substrate).
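The longshore smoothing described above is a five-transect weighted moving average with weights 1, 3, 5, 3, 1 applied to the center transect. A small sketch of that computation (the text does not say how the window is handled at the ends of a shoreline segment, so the truncate-and-renormalize behaviour here is an assumption):

```python
def smooth_rates(rates, weights=(1, 3, 5, 3, 1)):
    """Five-transect weighted moving average of erosion rates, with the
    smoothed value assigned to the centre transect. At segment ends the
    window is truncated and the weights renormalized (an assumption)."""
    half = len(weights) // 2
    out = []
    for i in range(len(rates)):
        num = den = 0.0
        for j, w in enumerate(weights):
            k = i + j - half          # neighbouring transect index
            if 0 <= k < len(rates):
                num += w * rates[k]
                den += w
        out.append(num / den)
    return out
```

Because the weights sum to 13 with 5/13 on the centre transect, a single anomalous transect is damped but not erased, which suits a hazard-rate map meant for planners.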
Jin, Haifeng; Liu, Mingcheng; Zhang, Xin; Pan, Jinjin; Han, Jinzhen; Wang, Yudong; Lei, Haixin; Ding, Yanchun; Yuan, Yuhui
2016-10-01
Hypoxia-induced oxidative stress and excessive proliferation of pulmonary artery smooth muscle cells (PASMCs) play important roles in the pathological process of hypoxic pulmonary hypertension (HPH). Grape seed procyanidin extract (GSPE) possesses antioxidant properties and has beneficial effects on the cardiovascular system. However, the effect of GSPE on HPH remains unclear. In this study, adult Sprague-Dawley rats were exposed to intermittent chronic hypoxia for 4 weeks to mimic a severe HPH condition. Hemodynamic and pulmonary pathomorphology data showed that chronic hypoxia significantly increased right ventricular systolic pressures (RVSP), weight of the right ventricle/left ventricle plus septum (RV/LV+S) ratio and median width of pulmonary arteries. GSPE attenuated the elevation of RVSP, RV/LV+S, and reduced the pulmonary vascular structure remodeling. GSPE also increased the levels of SOD and reduced the levels of MDA in hypoxia-induced HPH model. In addition, GSPE suppressed Nox4 mRNA levels, ROS production and PASMCs proliferation. Meanwhile, increased expression of phospho-STAT3, cyclin D1, cyclin D3 and Ki67 in PASMCs caused by hypoxia was down-regulated by GSPE. These results suggested that GSPE might potentially prevent HPH via antioxidant and antiproliferative mechanisms. Copyright © 2016. Published by Elsevier Inc.
Super-hydrophobic coatings based on non-solvent induced phase separation during electro-spraying.
Gao, Jiefeng; Huang, Xuewu; Wang, Ling; Zheng, Nan; Li, Wan; Xue, Huaiguo; Li, Robert K Y; Mai, Yiu-Wing
2017-11-15
The polymer solution concentration determines whether electrospinning or electro-spraying occurs, while the addition of a non-solvent into the polymer solution strongly influences the surface morphology of the obtained products. Both smooth and porous surfaces of the electro-sprayed microspheres can be obtained by choosing different non-solvents and amounts, as well as by incorporating polymeric additives. The influences of the solution concentration, the weight ratio between the non-solvent and the copolymer, and the polymeric additives on the surface morphology and wettability of the electro-sprayed products were systematically studied. Surface pores and/or asperities on the microsphere surface were mainly caused by non-solvent induced phase separation (NIPS) and subsequent evaporation of the non-solvent during electro-spraying. With increasing polymer solution concentration, the microspheres gradually changed to a bead-on-string geometry and finally to a nanofiber form, leading to a sustained decrease of the contact angle (CA). It was found that substrate coatings derived from microspheres possessing hierarchical surface pores or dense asperities had high surface roughness and super-hydrophobicity, with CAs larger than 150° and sliding angles smaller than 10°, whereas coatings composed of microspheres with smooth surfaces gave relatively low CAs. Copyright © 2017 Elsevier Inc. All rights reserved.
Obtaining reliable phase-gradient delays from otoacoustic emission data.
Shera, Christopher A; Bergevin, Christopher
2012-08-01
Reflection-source otoacoustic emission phase-gradient delays are widely used to obtain noninvasive estimates of cochlear function and properties, such as the sharpness of mechanical tuning and its variation along the length of the cochlear partition. Although different data-processing strategies are known to yield different delay estimates and trends, their relative reliability has not been established. This paper uses in silico experiments to evaluate six methods for extracting delay trends from reflection-source otoacoustic emissions (OAEs). The six methods include both previously published procedures (e.g., phase smoothing, energy-weighting, data exclusion based on signal-to-noise ratio) and novel strategies (e.g., peak-picking, all-pass factorization). Although some of the methods perform well (e.g., peak-picking), others introduce substantial bias (e.g., phase smoothing) and are not recommended. In addition, since standing waves caused by multiple internal reflection can complicate the interpretation and compromise the application of OAE delays, this paper develops and evaluates two promising signal-processing strategies, the first based on time-frequency filtering using the continuous wavelet transform and the second on cepstral analysis, for separating the direct emission from its subsequent reflections. Altogether, the results help to resolve previous disagreements about the frequency dependence of human OAE delays and the sharpness of cochlear tuning while providing useful analysis methods for future studies.
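Among the previously published strategies evaluated above, energy weighting combines local phase-gradient (group) delays using the emission's power as the weight. A generic sketch of that idea (not the authors' code; the frequency grid, units, and weighting are illustrative):

```python
import numpy as np

def energy_weighted_delay(freqs, spectrum):
    """Energy-weighted phase-gradient delay from a complex emission
    spectrum: tau(f) = -dphi/domega, averaged with |P(f)|^2 weights."""
    phase = np.unwrap(np.angle(spectrum))      # continuous phase (rad)
    # local group delay at each frequency bin (seconds)
    tau = -np.gradient(phase, 2.0 * np.pi * freqs)
    w = np.abs(spectrum) ** 2                  # energy weights
    return np.sum(w * tau) / np.sum(w)
```

For a spectrum that is a pure delay, P(f) = exp(-i 2π f τ), the estimator returns τ exactly, which makes it a convenient sanity check before applying it to noisy OAE data.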
Wang, Songquan; Zhang, Dekun; Hu, Ningning; Zhang, Jialu
2016-01-01
In this work, the effects of loading condition and corrosion solution on the corrosion fatigue behavior of smooth steel wire were discussed. The results of polarization curves and weight loss curves showed that the corrosion of steel wire in acid solution was more severe than that in neutral and alkaline solutions. With the extension of immersion time in acid solution, the cathodic reaction of the steel wire gradually changed from the reduction of hydrogen ions to the reduction of oxygen, but in neutral and alkaline solutions it was always the reduction of hydrogen ions. The corrosion kinetic parameters and equivalent circuits of the steel wires were also obtained by fitting the Nyquist diagrams. In the corrosion fatigue tests, the effect of stress ratio and loading frequency on the crack initiation mechanism was emphasized. The strong corrosivity of the acid solution could accelerate nucleation at the crack tip. The crack initiation mechanisms under the different conditions were summarized according to the side and fracture surface morphologies. For the anodic dissolution mechanism of crack initiation, the stronger the corrosivity of the solution, the more easily the fatigue crack source formed; for the deformation activation mechanism, a lower stress ratio and higher frequency accelerated the generation of the corrosion fatigue crack source. PMID:28773869
Wang, Rui-Rong; Yu, Xiao-Qing; Zheng, Shu-Wang; Ye, Yang
2016-01-01
Location based services (LBS) provided by wireless sensor networks have garnered a great deal of attention from researchers and developers in recent years. Chirp spread spectrum (CSS) signal formatting with time difference of arrival (TDOA) ranging technology is an effective LBS technique in regards to positioning accuracy, cost, and power consumption. The design and implementation of the location engine and location management based on TDOA location algorithms were the focus of this study; as the core of the system, the location engine was designed as a series of location algorithms and smoothing algorithms. To enhance the location accuracy, a Kalman filter algorithm and a moving weighted average technique were applied, respectively, to smooth the TDOA range measurements and the location results, which are calculated through the cooperation of a Kalman TDOA algorithm and a Taylor TDOA algorithm. The location management server, the information center of the system, was designed with Data Server and Mclient. To evaluate the performance of the location algorithms and the stability of the system software, we used a Nanotron nanoLOC Development Kit 3.0 to conduct indoor and outdoor location experiments. The results indicated that the location system runs stably with high accuracy, with absolute error below 0.6 m.
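The Taylor TDOA algorithm mentioned above is conventionally an iterative Gauss-Newton linearization of the range-difference equations. A minimal 2D sketch under that assumption (the anchor layout, initial guess, and iteration count are illustrative, not the system's actual configuration):

```python
import numpy as np

def taylor_tdoa(anchors, tdoa_ranges, x0, iters=10):
    """Taylor-series (Gauss-Newton) TDOA position solver.
    tdoa_ranges[i] is the measured range difference
    |x - anchors[i+1]| - |x - anchors[0]|, in the same units as x."""
    x = np.asarray(x0, dtype=float)
    a = np.asarray(anchors, dtype=float)
    r = np.asarray(tdoa_ranges, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(a - x, axis=1)        # distances to anchors
        pred = d[1:] - d[0]                      # predicted range differences
        # Jacobian of the range differences w.r.t. the position x
        J = (x - a[1:]) / d[1:, None] - (x - a[0]) / d[0]
        dx, *_ = np.linalg.lstsq(J, r - pred, rcond=None)
        x = x + dx                               # Gauss-Newton update
    return x
```

In a deployed system this local refinement would typically be seeded by a coarse closed-form estimate (here the role the Kalman TDOA stage appears to play), since Gauss-Newton only converges from a reasonable initial guess.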
A covalently cross-linked gel derived from the epidermis of the pilot whale Globicephala melas.
Baum, C; Fleischer, L-G; Roessner, D; Meyer, W; Siebers, D
2002-01-01
The rheological properties of the stratum corneum of the pilot whale (Globicephala melas) were investigated with emphasis on their significance to the self-cleaning abilities of the skin surface, which is smoothed by a jelly material enriched with various hydrolytic enzymes. The gel formation of the collected fluid was monitored by applying periodic-harmonic oscillating loads using a stress-controlled rheometer. In the mechanical spectrum of the gel, the plateau region of the storage modulus G' (<1200 Pa) and the loss modulus G" (>120 Pa) were independent of frequency (ω = 43.98 to 0.13 rad·s⁻¹, τ = 15 Pa, T = 20 °C), indicating the high elastic performance of a covalently cross-linked viscoelastic solid. In addition, multi-angle laser light scattering (MALLS) experiments were performed to analyse potential time-dependent changes in the weight-average molar mass of the samples. The observed increase showed that gel formation is based on the assembly of covalently cross-linked aggregates. The viscoelastic properties and shear resistance of the gel ensure that the enzyme-containing jelly material smoothing the skin surface is not removed from the stratum corneum by the shear regimes that occur during dolphin jumping. The even skin surface is considered to be most important for the self-cleaning abilities of the dolphin skin against biofouling.
Chen, Shiu-Jau; Lee, Ching-Ju; Lin, Tzer-Bin; Liu, Hsiang-Jui; Huang, Shuan-Yu; Chen, Jia-Zeng; Tseng, Kuang-Wen
2016-01-07
Ultraviolet B (UVB) irradiation is the most common cause of radiation damage to the eyeball and is a risk factor for human corneal damage. We determined the protective effect of fucoxanthin, which is a carotenoid found in common edible seaweed, on ocular tissues against oxidative UVB-induced corneal injury. The experimental rats were intravenously injected with fucoxanthin at doses of 0.5, 5 mg/kg body weight/day or with a vehicle before UVB irradiation. Lissamine green for corneal surface staining showed that UVB irradiation caused serious damage on the corneal surface, including severe epithelial exfoliation and deteriorated epithelial smoothness. Histopathological lesion examination revealed that levels of proinflammatory cytokines, including tumor necrosis factor-α (TNF-α) and vascular endothelial growth factor (VEGF), significantly increased. However, pretreatment with fucoxanthin inhibited UVB radiation-induced corneal disorders including evident preservation of corneal surface smoothness, downregulation of proinflammatory cytokine expression, and decrease of infiltrated polymorphonuclear leukocytes from UVB-induced damage. Moreover, significant preservation of the epithelial integrity and inhibition of stromal swelling were also observed after UVB irradiation in fucoxanthin-treated groups. Pretreatment with fucoxanthin may protect against UVB radiation-induced corneal disorders by inhibiting expression of proinflammatory factors, TNF-α, and VEGF and by blocking polymorphonuclear leukocyte infiltration.
Uniform Foam Crush Testing for Multi-Mission Earth Entry Vehicle Impact Attenuation
NASA Technical Reports Server (NTRS)
Patterson, Byron W.; Glaab, Louis J.
2012-01-01
Multi-Mission Earth Entry Vehicles (MMEEVs) are blunt-body vehicles designed with the purpose of transporting payloads from outer space to the surface of the Earth. To achieve high reliability and minimum weight, MMEEVs avoid use of limited-reliability systems, such as parachutes and retro-rockets, instead using built-in impact attenuators to absorb the energy remaining at impact and meet landing loads requirements. The Multi-Mission Systems Analysis for Planetary Entry (M-SAPE) parametric design tool is used to facilitate the design of MMEEVs and develop the trade space. Testing was conducted to characterize the material properties of several candidate impact foam attenuators to enhance M-SAPE analysis. In the current effort, four different Rohacell foams are tested at three different uniform strain rates (approximately 0.17, 100, and 13,600 %/s). The primary data analysis method uses a global data smoothing technique in the frequency domain to remove noise and system natural frequencies. The results from the data indicate that the filter and smoothing technique are successful in identifying the foam crush event and removing aberrations. The effect of strain rate increases with increasing foam density. The 71-WF-HT foam may support Mars Sample Return requirements. Several recommendations to improve the drop tower test technique are identified.
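A frequency-domain smoothing step of the kind described (removing noise and system natural frequencies from a test record) can be sketched as a hard low-pass filter via the FFT; the cutoff frequency and the synthetic signal below are illustrative assumptions, not the exact filter used in the foam-crush analysis.

```python
import numpy as np

def lowpass_fft(signal, dt, cutoff_hz):
    """Global frequency-domain smoothing: zero all spectral content above
    cutoff_hz, then invert the FFT. A sketch of the idea, not the exact
    filter from the foam-crush study."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Illustrative: a slow "crush" pulse contaminated by 150 Hz ringing
dt = 1.0 / 1000.0
t = np.arange(0.0, 1.0, dt)
clean = np.sin(2.0 * np.pi * 2.0 * t)
noisy = clean + 0.3 * np.sin(2.0 * np.pi * 150.0 * t)
smoothed = lowpass_fft(noisy, dt, cutoff_hz=20.0)
```

Zeroing bins above the cutoff removes a fixed-frequency resonance exactly, which is why this style of filter is attractive when the system's natural frequencies are known from the drop-tower setup.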
The Effect of Spatial Smoothing on Representational Similarity in a Simple Motor Paradigm
Hendriks, Michelle H. A.; Daniels, Nicky; Pegado, Felipe; Op de Beeck, Hans P.
2017-01-01
Multi-voxel pattern analyses (MVPA) are often performed on unsmoothed data, which is very different from the general practice of large smoothing extents in standard voxel-based analyses. In this report, we studied the effect of smoothing on MVPA results in a motor paradigm. Subjects pressed four buttons with two different fingers of the two hands in response to auditory commands. Overall, independent of the degree of smoothing, correlational MVPA showed distinctive patterns for the different hands in all studied regions of interest (motor cortex, prefrontal cortex, and auditory cortices). With regard to the effect of smoothing, our findings suggest that results from correlational MVPA show a minor sensitivity to smoothing. Moderate amounts of smoothing (in this case, 1−4 times the voxel size) improved MVPA correlations, from a slight improvement to large improvements depending on the region involved. None of the regions showed signs of a detrimental effect of moderate levels of smoothing. Even higher amounts of smoothing sometimes had a positive effect, most clearly in low-level auditory cortex. We conclude that smoothing seems to have a minor positive effect on MVPA results, thus researchers should be mindful about the choices they make regarding the level of smoothing. PMID:28611726
7 CFR 51.636 - Smooth texture.
Code of Federal Regulations, 2010 CFR
2010-01-01
... FRESH FRUITS, VEGETABLES AND OTHER PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States...) Definitions § 51.636 Smooth texture. Smooth texture means that the skin is thin and smooth for the variety and...
7 CFR 51.698 - Smooth texture.
Code of Federal Regulations, 2010 CFR
2010-01-01
... FRESH FRUITS, VEGETABLES AND OTHER PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States... § 51.698 Smooth texture. Smooth texture means that the skin is thin and smooth for the variety and size...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berger, R.L.; Lefebvre, E.; Langdon, A.B.
1999-04-01
Control of filamentation and stimulated Raman and Brillouin scattering is shown to be possible by use of both spatial and temporal smoothing schemes. The spatial smoothing is accomplished by the use of phase plates [Y. Kato and K. Mima, Appl. Phys. 329, 186 (1982)] and polarization smoothing [Lefebvre et al., Phys. Plasmas 5, 2701 (1998)], in which the plasma is irradiated with two orthogonally polarized, uncorrelated speckle patterns. The temporal smoothing considered here is smoothing by spectral dispersion [Skupsky et al., J. Appl. Phys. 66, 3456 (1989)], in which the speckle pattern changes on the laser coherence time scale. At the high instability gains relevant to laser fusion experiments, the effect of smoothing must include the competition among all three instabilities. © 1999 American Institute of Physics.
Cytotoxicity associated with electrospun polyvinyl alcohol.
Pathan, Saif G; Fitzgerald, Lisa M; Ali, Syed M; Damrauer, Scott M; Bide, Martin J; Nelson, David W; Ferran, Christiane; Phaneuf, Tina M; Phaneuf, Matthew D
2015-11-01
Polyvinyl alcohol (PVA) is a synthetic, water-soluble polymer with applications in industries ranging from textiles to biomedical devices. Research on electrospinning of PVA has been targeted toward optimizing or finding novel applications in the biomedical field. However, the effects of electrospinning on PVA biocompatibility have not been thoroughly evaluated. In this study, the cytotoxicity of electrospun PVA (nPVA) that was not crosslinked after electrospinning was assessed. PVA polymers of several molecular weights were dissolved in distilled water and electrospun using the same parameters. Electrospun PVA materials of varying molecular weights were then dissolved in tissue culture medium and directly compared against solutions of non-electrospun PVA polymer in cultures of human coronary artery smooth muscle cells (HCASMC) and human coronary artery endothelial cells. All nPVA solutions were cytotoxic at a threshold molar concentration that correlated with the molecular weight of the starting PVA polymer. In contrast, none of the non-electrospun PVA solutions caused any cytotoxicity, regardless of their concentration in the cell culture. Evaluation of the nPVA material by differential scanning calorimetry confirmed that polymer degradation had occurred during electrospinning. To elucidate the identity of the nPVA component that caused the cytotoxicity, nPVA materials were dissolved and fractionated using size exclusion columns, and the different fractions were added to HCASMC and human coronary artery endothelial cell cultures. These studies indicated that the cytotoxic components of the different nPVA solutions were present in the low-molecular-weight fraction. Additionally, the amount of PVA present in the 3-10 kg/mol fraction was approximately sixfold greater than that in the non-electrospun samples. In conclusion, electrospinning of PVA resulted in small-molecular-weight fractions that were cytotoxic to cells.
This result demonstrates that biocompatibility of electrospun biodegradable polymers should not be assumed on the basis of success of their nonelectrospun predecessors. © 2015 Wiley Periodicals, Inc.
Cloud Forecast Simulation Model.
1981-10-01
...decreasing the kurtosis of the distribution, i.e., making it more negative (more platykurtic). Case (a) might be the distribution of forecast cloud cover before smoothing, and (b) might be the distribution after smoothing. Characteristically, smoothing makes cloud cover distributions less platykurtic ... this effect of smoothing can be described in terms of making the smoothed distribution less platykurtic than the unsmoothed distribution
Villar, José; Cheikh Ismail, Leila; Victora, Cesar G; Ohuma, Eric O; Bertino, Enrico; Altman, Doug G; Lambert, Ann; Papageorghiou, Aris T; Carvalho, Maria; Jaffer, Yasmin A; Gravett, Michael G; Purwar, Manorama; Frederick, Ihunnaya O; Noble, Alison J; Pang, Ruyan; Barros, Fernando C; Chumlea, Cameron; Bhutta, Zulfiqar A; Kennedy, Stephen H
2014-09-06
In 2006, WHO published international growth standards for children younger than 5 years, which are now accepted worldwide. In the INTERGROWTH-21(st) Project, our aim was to complement them by developing international standards for fetuses, newborn infants, and the postnatal growth period of preterm infants. INTERGROWTH-21(st) is a population-based project that assessed fetal growth and newborn size in eight geographically defined urban populations. These groups were selected because most of the health and nutrition needs of mothers were met, adequate antenatal care was provided, and there were no major environmental constraints on growth. As part of the Newborn Cross-Sectional Study (NCSS), a component of INTERGROWTH-21(st) Project, we measured weight, length, and head circumference in all newborn infants, in addition to collecting data prospectively for pregnancy and the perinatal period. To construct the newborn standards, we selected all pregnancies in women meeting (in addition to the underlying population characteristics) strict individual eligibility criteria for a population at low risk of impaired fetal growth (labelled the NCSS prescriptive subpopulation). Women had a reliable ultrasound estimate of gestational age using crown-rump length before 14 weeks of gestation or biparietal diameter if antenatal care started between 14 weeks and 24 weeks or less of gestation. Newborn anthropometric measures were obtained within 12 h of birth by identically trained anthropometric teams using the same equipment at all sites. Fractional polynomials assuming a skewed t distribution were used to estimate the fitted centiles. We identified 20,486 (35%) eligible women from the 59,137 pregnant women enrolled in NCSS between May 14, 2009, and Aug 2, 2013. We calculated sex-specific observed and smoothed centiles for weight, length, and head circumference for gestational age at birth. The observed and smoothed centiles were almost identical. 
We present the 3rd, 10th, 50th, 90th, and 97th centile curves according to gestational age and sex. We have developed, for routine clinical practice, international anthropometric standards to assess newborn size that are intended to complement the WHO Child Growth Standards and allow comparisons across multiethnic populations. Bill & Melinda Gates Foundation. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Q; Cheng, P; Tan, S
2016-06-15
Purpose: To combine total variation (TV) and Hessian penalties in a structure-adaptive way for cone-beam CT (CBCT) reconstruction. Methods: TV is a widely used first-order penalty with good ability to suppress noise and preserve edges, but it leads to the staircase effect in regions with smooth intensity transitions. The second-order Hessian penalty can effectively suppress the staircase effect, at the extra cost of blurring object edges. To take the best of both penalties, we proposed a novel method to combine them for CBCT reconstruction in a structure-adaptive way. The proposed method adaptively determined the weight of each penalty according to the geometry of local regions. A specially designed exponent term involving the image gradient was used to characterize the local geometry such that the weights for Hessian and TV were 1 and 0, respectively, at uniform local regions and 0 and 1 at edge regions. For other local regions, the weights varied from 0 to 1. The objective functional was minimized using the majorization-minimization approach. We evaluated the proposed method on a modified 3D Shepp-Logan phantom and a CatPhan 600 phantom. The full-width-at-half-maximum (FWHM) and contrast-to-noise ratio (CNR) were calculated. Results: For the 3D Shepp-Logan phantom, the reconstructed images using TV had an obvious staircase effect, while those using the proposed method and the Hessian penalty preserved the smooth transition regions well. FWHMs of the proposed method, TV, and Hessian penalty were 1.75, 1.61, and 3.16, respectively, indicating that both TV and the proposed method are able to preserve edges. For the CatPhan 600, CNR values of the proposed method were similar to those of TV and Hessian. Conclusion: The proposed method retains favorable properties of TV, such as preserving edges, and is also better at preserving gradual transition structures, as the Hessian penalty does. All methods perform similarly in suppressing noise.
This work was supported in part by the National Natural Science Foundation of China (NNSFC) under Grant Nos. 60971112 and 61375018, grants from the Cancer Prevention and Research Institute of Texas (RP130109 and RP110562-P2), the National Institute of Biomedical Imaging and Bioengineering (R01 EB020366), and a grant from the American Cancer Society (RSG-13-326-01-CCE).
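The structure-adaptive weighting can be illustrated with a gradient-based exponent term; the Gaussian-of-gradient form and the scale parameter `sigma` below are assumptions standing in for the paper's unspecified exponent term, not its actual formula.

```python
import numpy as np

def adaptive_weights(image, sigma=0.1):
    """Per-pixel weights in [0, 1]: ~1 for the Hessian penalty in flat
    regions, ~0 at edges (where the TV weight is ~1). The Gaussian of the
    gradient magnitude is an assumed stand-in for the paper's exponent term."""
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    w_hessian = np.exp(-(grad_mag / sigma) ** 2)  # ~1 where gradient ~0
    w_tv = 1.0 - w_hessian                        # ~1 at strong edges
    return w_hessian, w_tv
```

The two weights sum to 1 at every pixel, so the combined penalty interpolates smoothly between pure Hessian (uniform regions) and pure TV (edge regions), as described in the abstract.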
Robust Smoothing: Smoothing Parameter Selection and Applications to Fluorescence Spectroscopy
Lee, Jong Soo; Cox, Dennis D.
2009-01-01
Fluorescence spectroscopy has emerged in recent years as an effective way to detect cervical cancer. Investigation of the data preprocessing stage uncovered a need for robust smoothing to extract the signal from the noise. Various robust smoothing methods for estimating fluorescence emission spectra are compared, and data-driven methods for the selection of the smoothing parameter are suggested. The methods currently implemented in R for smoothing parameter selection proved to be unsatisfactory, and a computationally efficient procedure that approximates robust leave-one-out cross-validation is presented. PMID:20729976
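A generic version of this approach (a robust smoother plus a data-driven bandwidth choice that approximates leave-one-out cross-validation) might look like the following; the Gaussian kernel, bisquare weights, and median-based CV score are illustrative choices, not the procedures from the paper.

```python
import numpy as np

def robust_kernel_smooth(x, y, bandwidth, iters=3):
    """Nadaraya-Watson smoother with bisquare robustness reweighting,
    a generic stand-in for the robust smoothers compared in the paper."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    robust_w = np.ones_like(y)
    for _ in range(iters + 1):
        K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
        W = K * robust_w[None, :]
        fit = (W @ y) / W.sum(axis=1)
        resid = y - fit
        s = np.median(np.abs(resid)) + 1e-12
        u = np.clip(resid / (6.0 * s), -1.0, 1.0)
        robust_w = (1.0 - u ** 2) ** 2   # bisquare: outliers get weight ~0
    return fit

def pick_bandwidth(x, y, candidates):
    """Approximate leave-one-out CV: score each bandwidth using the kernel
    average with the self-weight removed, and a robust (median) loss."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    best, best_score = None, np.inf
    for h in candidates:
        K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        np.fill_diagonal(K, 0.0)           # drop self-influence (LOO)
        fit = (K @ y) / K.sum(axis=1)
        score = np.median((y - fit) ** 2)  # robust CV score
        if score < best_score:
            best, best_score = h, score
    return best
```

Using the median of the squared LOO residuals, rather than the mean, keeps a single spectral artifact from dominating the bandwidth choice, which is the motivation for robust selection in this setting.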
Factors affecting minimum push and pull forces of manual carts.
Al-Eisawi, K W; Kerk, C J; Congleton, J J; Amendola, A A; Jenkins, O C; Gaines, W
1999-06-01
The minimum forces needed to manually push or pull a 4-wheel cart of differing weights with similar wheel sizes from a stationary state were measured on four floor materials under different conditions of wheel width, diameter, and orientation. Cart load was increased from 0 to 181.4 kg in increments of 36.3 kg. The floor materials were smooth concrete, tile, asphalt, and industrial carpet. Two wheel widths were tested: 25 and 38 mm. Wheel diameters were 51, 102, and 153 mm. Wheel orientation was tested at four levels: F0R0 (all four wheels aligned in the forward direction), F0R90 (the two front wheels, the wheels furthest from the cart handle, aligned in the forward direction and the two rear wheels, the wheels closest to the cart handle, aligned at 90 degrees to the forward direction), F90R0 (the two front wheels aligned at 90 degrees to the forward direction and the two rear wheels aligned in the forward direction), and F90R90 (all four wheels aligned at 90 degrees to the forward direction). Wheel width did not have a significant effect on the minimum push/pull forces. The minimum push/pull forces were linearly proportional to cart weight, and inversely proportional to wheel diameter. The coefficients of rolling friction were estimated as 2.2, 2.4, 3.3, and 4.5 mm for hard rubber wheels rolling on smooth concrete, tile, asphalt, and industrial carpet floors, respectively. The effect of wheel orientation was not consistent over the tested conditions, but, in general, the smallest minimum push/pull forces were measured with all four wheels aligned in the forward direction, whereas the largest minimum push/pull forces were measured when all four wheels were aligned at 90 degrees to the forward direction. There was no significant difference between the push and pull forces when all four wheels were aligned in the forward direction.
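The reported proportionalities (force linear in cart weight, inverse in wheel diameter) match the classic rolling-resistance relation F = (f/r)·W, which can be sketched as follows; reading the tabulated coefficients this way is our interpretation, so the worked numbers are illustrative.

```python
def min_push_force(cart_mass_kg, wheel_diameter_mm, rolling_coeff_mm, g=9.81):
    """Rolling-resistance estimate F = (f / r) * W, consistent with the
    reported linearity in cart weight and inverse proportionality to wheel
    diameter. f: coefficient of rolling friction (mm, as reported),
    r: wheel radius (mm), W: cart weight (N)."""
    weight_n = cart_mass_kg * g
    radius_mm = wheel_diameter_mm / 2.0
    return rolling_coeff_mm / radius_mm * weight_n

# Illustrative: fully loaded cart (181.4 kg) on smooth concrete
# (f = 2.2 mm from the abstract) with 102 mm wheels.
force = min_push_force(181.4, 102.0, 2.2)
```

With these inputs the estimate is on the order of tens of newtons; by construction, doubling the load doubles the force and doubling the wheel diameter halves it, matching the trends reported in the study.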
Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization
Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.
2014-01-01
High radiation dose in CT scans increases a lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, the low contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term is proposed to preferentially perform smoothing only on non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight to the original total variation norm. During the reconstruction process, the pixels at edges would be gradually identified and given small penalty weight. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without edge preserving penalty are also presented for comparison purpose. The experimental results illustrate that both TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing the streaking artifacts and image noise under the low dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information of low contrast structures and therefore maintain acceptable spatial resolution. PMID:21860076
NASA Astrophysics Data System (ADS)
Fukuda, J.; Johnson, K. M.
2009-12-01
Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is not strong a priori information on the geometry of the fault that produced the earthquake, and the data are not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g., InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through a combination of analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California, earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane.
We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress drop.
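The mixed linear/nonlinear strategy (analytical least squares for slip inside a Monte Carlo loop over geometry) can be sketched with a toy one-parameter model; the exponential Green's function, the prior bounds, the damping, and all numbers below are illustrative assumptions, not the San Simeon setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: surface displacement d = G(theta) @ s, where the
# Green's function matrix G depends nonlinearly on a geometry parameter
# theta (here, a decay length) and linearly on the slip vector s.
x_obs = np.linspace(0.0, 10.0, 25)      # observation points
patches = np.linspace(1.0, 9.0, 5)      # fault patch positions

def greens(theta):
    return np.exp(-np.abs(x_obs[:, None] - patches[None, :]) / theta)

true_theta = 2.0
true_slip = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
data = greens(true_theta) @ true_slip + 0.01 * rng.standard_normal(25)

def solve_slip(theta, damping=1e-2):
    """Inner linear step: damped (regularized) least squares for slip."""
    G = greens(theta)
    A = G.T @ G + damping * np.eye(len(patches))
    return np.linalg.solve(A, G.T @ data)

def log_post(theta, sigma=0.01):
    if not 0.5 < theta < 5.0:           # uniform prior bounds (assumed)
        return -np.inf
    r = data - greens(theta) @ solve_slip(theta)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

# Outer Monte Carlo step: Metropolis random walk over the nonlinear
# geometry parameter only; slip is resolved analytically at each step.
samples, theta, lp = [], 1.0, log_post(1.0)
for _ in range(3000):
    prop = theta + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post_mean = np.mean(samples[1000:])
```

Marginalizing the linear parameters analytically at each geometry sample is what keeps the Monte Carlo search low-dimensional, which is the point of the mixed linear/nonlinear treatment described above.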
Zhang, Rong; Jack, Gregory S; Rao, Nagesh; Zuk, Patricia; Ignarro, Louis J; Wu, Benjamin; Rodríguez, Larissa V
2012-03-01
Human adipose-derived stem cells hASC have been isolated and were shown to have multilineage differentiation capacity. Although both plasticity and cell fusion have been suggested as mechanisms for cell differentiation in vivo, the effect of the local in vivo environment on the differentiation of adipose-derived stem cells has not been evaluated. We previously reported the in vitro capacity of smooth muscle differentiation of these cells. In this study, we evaluate the effect of an in vivo smooth muscle environment in the differentiation of hASC. We studied this by two experimental designs: (a) in vivo evaluation of smooth muscle differentiation of hASC injected into a smooth muscle environment and (b) in vitro evaluation of smooth muscle differentiation capacity of hASC exposed to bladder smooth muscle cells. Our results indicate a time-dependent differentiation of hASC into mature smooth muscle cells when these cells are injected into the smooth musculature of the urinary bladder. Similar findings were seen when the cells were cocultured in vitro with primary bladder smooth muscle cells. Chromosomal analysis demonstrated that microenvironment cues rather than nuclear fusion are responsible for this differentiation. We conclude that cell plasticity is present in hASCs, and their differentiation is accomplished in the absence of nuclear fusion. Copyright © 2011 AlphaMed Press.
Novel treatment strategies for smooth muscle disorders: Targeting Kv7 potassium channels.
Haick, Jennifer M; Byron, Kenneth L
2016-09-01
Smooth muscle cells provide crucial contractile functions in visceral, vascular, and lung tissues. The contractile state of smooth muscle is largely determined by its electrical excitability, which is in turn influenced by the activity of potassium channels. The activity of potassium channels sustains smooth muscle cell membrane hyperpolarization, reducing cellular excitability and thereby promoting smooth muscle relaxation. Research over the past decade has indicated an important role for Kv7 (KCNQ) voltage-gated potassium channels in the regulation of the excitability of smooth muscle cells. Expression of multiple Kv7 channel subtypes has been demonstrated in smooth muscle cells from viscera (gastrointestinal, bladder, myometrial), from the systemic and pulmonary vasculature, and from the airways of the lung, from multiple species, including humans. A number of clinically used drugs, some of which were developed to target Kv7 channels in other tissues, have been found to exert robust effects on smooth muscle Kv7 channels. Functional studies have indicated that Kv7 channel activators and inhibitors have the ability to relax and contract smooth muscle preparations, respectively, suggesting a wide range of novel applications for the pharmacological tool set. This review summarizes recent findings regarding the physiological functions of Kv7 channels in smooth muscle, and highlights potential therapeutic applications based on pharmacological targeting of smooth muscle Kv7 channels throughout the body. Published by Elsevier Inc.
Regeneration and Maintenance of Intestinal Smooth Muscle Phenotypes
NASA Astrophysics Data System (ADS)
Walthers, Christopher M.
Tissue engineering is an emerging field of biomedical engineering that involves growing artificial organs to replace those lost to disease or injury. Within tissue engineering, there is a demand for artificial smooth muscle to repair tissues of the digestive tract, bladder, and vascular systems. Attempts to develop engineered smooth muscle tissues capable of contracting with sufficient strength to be clinically relevant have so far proven unsatisfactory. The goal of this research was to develop and sustain mature, contractile smooth muscle. Survival of implanted smooth muscle cells (SMCs) is critical to sustain the benefits of engineered smooth muscle. Survival of implanted smooth muscle cells was studied with layered, electrospun polycaprolactone implants with laser-cut holes ranging from 0-25% porosity. It was found that greater angiogenesis was associated with increased survival of implanted cells, with a large increase at a threshold between 20% and 25% porosity. Heparan sulfate coatings improved the speed of blood vessel infiltration after 14 days of implantation. With these considerations, thicker engineered tissues may be possible. An improved smooth muscle tissue culture technique was utilized. Contracting smooth muscle was produced in culture by maintaining the native smooth muscle tissue organization, specifically by sustaining intact smooth muscle strips rather than dissociating the tissue into isolated smooth muscle cells. Isolated cells showed a decrease in maturity and contained fewer enteric neural and glial cells. Muscle strips also exhibited periodic contraction and regular fluctuation of intracellular calcium. The muscle strip maturity persisted after implantation in omentum for 14 days on polycaprolactone scaffolds.
A low-cost, disposable bioreactor was developed to further improve the maturity of cultured smooth muscle cells in an environment of controlled cyclical stress. The bioreactor consistently applied repeated mechanical strain with controllable inputs for strain, frequency, and duty cycle. Cells grown on protein-conjugated silicone membranes showed a morphological change while undergoing bioreactor stress. Analyzing changes in muscle strips undergoing bioreactor stress is an area for future research. The overall goal of this research was to move engineered smooth muscle towards tissues capable of contracting with physiologically relevant strength and frequency. This approach first increased survival of smooth muscle constructs, and then sought to improve the contractile ability of smooth muscle cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Debojyoti; Constantinescu, Emil M.
The numerical simulation of meso-, convective-, and microscale atmospheric flows requires the solution of the Euler or the Navier-Stokes equations. Nonhydrostatic weather prediction algorithms often solve the equations in terms of derived quantities such as Exner pressure and potential temperature (and are thus not conservative) and/or as perturbations to the hydrostatically balanced equilibrium state. This paper presents a well-balanced, conservative finite difference formulation for the Euler equations with a gravitational source term, where the governing equations are solved as conservation laws for mass, momentum, and energy. Preservation of the hydrostatic balance to machine precision by the discretized equations is essential because atmospheric phenomena are often small perturbations to this balance. The proposed algorithm uses the weighted essentially nonoscillatory and compact-reconstruction weighted essentially nonoscillatory schemes for spatial discretization, which yield high-order accurate solutions for smooth flows and are essentially nonoscillatory across strong gradients; however, the well-balanced formulation may be used with other conservative finite difference methods. The performance of the algorithm is demonstrated on test problems as well as benchmark atmospheric flow problems, and the results are verified with those in the literature.
Markov random field model-based edge-directed image interpolation.
Li, Min; Nguyen, Truong Q
2008-07-01
This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistical approach. Instead of explicit edge directions, the local edge directions are indicated by length-16 weighting vectors. The weighting vectors are used implicitly to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through a Markov random field (MRF) model. Furthermore, under the maximum a posteriori-MRF framework, the desired interpolated image corresponds to the minimal-energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search the state space for the minimal-energy state. To lower the computational complexity of the MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.
Haas, John L.
1978-01-01
The total pressure for the system H2O-CH4 is given by p(total) = P(H2O,t) + 10^[log x(CH4) - a - b x(CH4)], where P(H2O,t) is the vapor pressure of H2O liquid at the temperature t (°C) and x(CH4) is the molal concentration of methane in the solution. The terms a and b are functions of temperature only. Where the total pressure and temperature are known, the concentration of methane, x(CH4), is found by iteration. The concentration of methane in a sodium chloride brine, y(CH4), is estimated using the function log y(CH4) = log x(CH4) - A·I, where A is the salting-out constant and I is the ionic strength. For sodium chloride solutions, the ionic strength is equal to the molality of the salt. The equations are valid to 360 °C, 138 MPa, and 25 weight percent sodium chloride.
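The equations above translate directly into a short calculation; the numerical values of a, b, and A used below are placeholders (the abstract gives only their functional roles), so this is a sketch of the procedure rather than Haas's calibration.

```python
import math

def total_pressure(p_h2o, x_ch4, a, b):
    """p(total) = P(H2O,t) + 10**(log10 x(CH4) - a - b*x(CH4)).
    a and b are the temperature-dependent terms from the paper; their
    values are not given in the abstract, so they are inputs here."""
    return p_h2o + 10.0 ** (math.log10(x_ch4) - a - b * x_ch4)

def x_ch4_from_pressure(p_total, p_h2o, a, b, tol=1e-10):
    """Invert for x(CH4) by bisection (the paper: 'found by iteration').
    Assumes p(total) increases with x over (0, 5), which holds while
    b * x * ln(10) < 1."""
    lo, hi = 1e-12, 5.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total_pressure(p_h2o, mid, a, b) < p_total:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def y_ch4_in_brine(x_ch4, A, ionic_strength):
    """Salting out: log10 y(CH4) = log10 x(CH4) - A*I."""
    return 10.0 ** (math.log10(x_ch4) - A * ionic_strength)
```

Given measured total pressure and temperature (hence P(H2O,t), a, b), the bisection recovers the methane molality, and the salting-out correction then converts it to the brine concentration.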
Hambardzumyan, Arayik; Foulon, Laurence; Chabbert, Brigitte; Aguié-Béghin, Véronique
2012-12-10
Novel nanocomposite coatings composed of cellulose nanocrystals (CNCs) and lignin (either synthetic or fractionated from spruce and corn stalks) were prepared without chemical modification or functionalization (via covalent attachment) of one of the two biopolymers. The spectroscopic properties of these coatings were investigated by UV-visible spectrophotometry and spectroscopic ellipsometry. When using the appropriate weight ratio of CNC/lignin (R), these nanocomposite systems exhibited high-performance optical properties, high transmittance in the visible spectrum, and high blocking in the UV spectrum. Atomic force microscopy analysis demonstrated that these coatings were smooth and homogeneous, with visible dispersed lignin nodules in a cellulosic matrix. It was also demonstrated that the introduction of nanoparticles into the medium increases the weight ratio and the CNC-specific surface area, which allows better dispersion of the lignin molecules throughout the solid film. Consequently, the larger molecular expansion of these aromatic polymers on the surface of the cellulosic nanoparticles dislocates the π-π aromatic aggregates, which increases the extinction coefficient and decreases the transmittance in the UV region. These nanocomposite coatings were optically transparent at visible wavelengths.
Li, Xinpeng; Li, Xiaohong; Zhang, Quanbin; Zhao, Tingting
2017-12-01
We investigated the renal protective effects of low molecular weight fucoidan (LMWF) and its two fractions (F0.5 and F1.0), which were extracted from Laminaria japonica, on the epithelial-mesenchymal transition (EMT) induced by transforming growth factor beta 1 (TGF-β1) and fibroblast growth factor 2 (FGF-2) in HK-2 human renal proximal tubular cells. Cell morphology and EMT markers (fibronectin and alpha-smooth muscle actin) demonstrated that cells treated with TGF-β1 or FGF-2 developed EMT to a significant extent. Treatment with LMWF or its fractions markedly attenuated the EMT and decreased expression of the EMT markers. The F1.0 fraction, the sulfated fucan fraction, was found to be the main active component of LMWF, and heparanase (HPSE) was a key factor in renal tubular epithelial trans-differentiation. The F1.0 fraction inhibited elevated HPSE and matrix metallopeptidase 9 expression, thereby attenuating the progress of EMT. Copyright © 2017 Elsevier B.V. All rights reserved.
McRoy, Susan; Jones, Sean; Kurmally, Adam
2016-09-01
This article examines methods for automated question classification applied to cancer-related questions that people have asked on the web. This work is part of a broader effort to provide automated question answering for health education. We created a new corpus of consumer-health questions related to cancer and a new taxonomy for those questions. We then compared the effectiveness of different statistical methods for developing classifiers, including weighted classification and resampling. Basic methods for building classifiers were limited by the high variability in the natural distribution of questions, and typical refinement approaches such as feature selection and merging categories achieved only small improvements in classifier accuracy. The best performance was achieved using weighted classification and resampling methods, the latter yielding F1 = 0.963. Thus, it would appear that statistical classifiers can be trained on natural data, but only if natural distributions of classes are smoothed. Such classifiers would be useful for automated question answering, for enriching web-based content, or for assisting clinical professionals in answering questions. © The Author(s) 2015.
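The resampling idea above, smoothing a skewed natural class distribution before training, can be sketched as naive random oversampling; this is one simple form of resampling, not necessarily the exact procedure the authors used.

```python
import random

def oversample(examples, labels, seed=0):
    """Randomly duplicate examples of minority classes until every class
    matches the majority-class count, flattening the class distribution."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(examples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        picks = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        for x in picks:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y
```

A classifier trained on the rebalanced set no longer learns to favor the majority class purely from its prior frequency.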
Prediction of a service demand using combined forecasting approach
NASA Astrophysics Data System (ADS)
Zhou, Ling
2017-08-01
Forecasting helps a logistics service provider cut operational and management costs while ensuring service levels. Our case study investigates how to forecast short-term logistics demand for a less-than-truckload (LTL) carrier. A combined approach relies on several forecasting methods simultaneously, instead of a single method. It can offset the weakness of one forecasting method with the strength of another, which can improve prediction precision. The main issues in combined forecast modeling are how to select methods for combination and how to determine the weight coefficients among them. The principles of method selection are that each method should apply to the forecasting problem itself and that the methods should differ in categorical features as much as possible. Based on these principles, exponential smoothing, ARIMA, and a neural network are chosen to form the combined approach. The least-squares technique is employed to determine the optimal weight coefficients among the forecasting methods. Simulation results show the advantage of the combined approach over the three single methods. The work helps managers select prediction methods in practice.
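The least-squares weighting step above can be sketched as follows: given each method's historical forecasts and the observed series, solve the normal equations for the combination weights. This is a minimal unconstrained sketch; the paper may additionally constrain the weights (e.g., to sum to one).

```python
def combine_weights(forecasts, actual):
    """Least-squares weights w minimizing sum_t (sum_i w_i * f_i[t] - y[t])^2.
    forecasts: list of per-method forecast series; actual: observed series.
    Solves the normal equations (F^T F) w = F^T y by Gaussian elimination."""
    m, n = len(forecasts), len(actual)
    a = [[sum(forecasts[i][t] * forecasts[j][t] for t in range(n))
          for j in range(m)] for i in range(m)]
    b = [sum(forecasts[i][t] * actual[t] for t in range(n)) for i in range(m)]
    # forward elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = a[r][col] / a[col][col]
            for c in range(col, m):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    # back substitution
    w = [0.0] * m
    for r in range(m - 1, -1, -1):
        w[r] = (b[r] - sum(a[r][c] * w[c] for c in range(r + 1, m))) / a[r][r]
    return w
```

With a perfect method among the candidates, the solver puts all the weight on it, which is a quick sanity check of the normal-equations setup.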
Predicting hepatitis B monthly incidence rates using weighted Markov chains and time series methods.
Shahdoust, Maryam; Sadeghifar, Majid; Poorolajal, Jalal; Javanrooh, Niloofar; Amini, Payam
2015-01-01
Hepatitis B (HB) is a major cause of mortality worldwide. Accurately predicting the trend of the disease can inform health policy for disease prevention. This paper aimed to apply three different methods to predict monthly incidence rates of HB. This historical cohort study was conducted on the HB incidence data of Hamadan Province, in the west of Iran, from 2004 to 2012. The Weighted Markov Chain (WMC) method, based on Markov chain theory, and two time series models, Holt Exponential Smoothing (HES) and SARIMA, were applied to the data. The results of the applied methods were compared in terms of the percentage of correctly predicted incidence rates. The monthly incidence rates were clustered into two clusters as states of the Markov chain. The correct prediction percentages for the first and second clusters were (100, 0) for WMC, (84, 67) for HES, and (79, 47) for SARIMA. The overall incidence rate of HBV is estimated to decrease over time. The comparison of the three models indicated that, given the existing seasonality and non-stationarity, HES gave the most accurate prediction of the incidence rates.
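Holt exponential smoothing, one of the two time series methods above, maintains a level and a trend that are updated recursively. A minimal sketch (with illustrative smoothing constants, not the paper's fitted parameters):

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """Holt's linear (double) exponential smoothing.
    level_t = alpha*y_t + (1-alpha)*(level_{t-1} + trend_{t-1})
    trend_t = beta*(level_t - level_{t-1}) + (1-beta)*trend_{t-1}
    Forecast h steps ahead: level_T + h * trend_T."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend
```

On an exactly linear series the level and trend track the data perfectly, so the one-step forecast extrapolates the line.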
New Tetra-Schiff Bases as Efficient Photostabilizers for Poly(vinyl chloride).
Ahmed, Dina S; El-Hiti, Gamal A; Hameed, Ayad S; Yousif, Emad; Ahmed, Ahmed
2017-09-09
Three new tetra-Schiff bases were synthesized and characterized for use as photostabilizers for poly(vinyl chloride) (PVC) films. The photostability of PVC films (40 μm thickness) in the presence of the Schiff bases (0.5 wt %) upon irradiation (300 h) with UV light (λmax = 365 nm and light intensity = 6.43 × 10^-9 ein·dm^-3·s^-1) was examined using various spectroscopic measurements and surface morphology analysis. The changes in various functional group indices, weight, and viscosity-average molecular weight of the PVC films were monitored against irradiation time. The additives improved the photostability of the PVC films, with Schiff base 1 being the most effective additive upon irradiation, followed by 2 and 3. The atomic force microscopy (AFM) images of the PVC surface containing Schiff base 1 after irradiation were smooth, with a roughness factor (Rq) of 36.8, compared to 132.2 for the PVC blank. Several possible mechanisms that explain PVC photostabilization upon irradiation in the presence of tetra-Schiff bases were proposed.
Fan, Chong; Chen, Xushuai; Zhong, Lei; Zhou, Min; Shi, Yun; Duan, Yulin
2017-03-18
A sub-block algorithm is usually applied in the super-resolution (SR) reconstruction of images because of limitations in computer memory. However, the sub-block SR images can hardly achieve seamless mosaicking because of the uneven distribution of brightness and contrast among the sub-blocks. An improved weighted Wallis dodging algorithm is proposed, exploiting the fact that the SR reconstructed images are gray-scale images of the same size with overlapping regions. This algorithm achieves consistency of image brightness and contrast. Meanwhile, a weighted adjustment sequence is presented to avoid the spatial propagation and accumulation of errors and the loss of image information caused by excessive computation. A seam-line elimination method distributes the partial dislocation at the seam line across the entire overlapping region with a smooth transition effect. Subsequently, the improved method is employed to remove uneven illumination in 900 SR reconstructed images of ZY-3. Then, the overlapping image mosaic method is adopted to accomplish a seamless image mosaic based on the optimal seam line.
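The core of Wallis dodging is mapping each sub-block's gray values so that its mean and standard deviation match common targets; a minimal, global (non-weighted) sketch of that transform follows. The paper's weighted variant additionally blends these adjustments across sub-blocks.

```python
import statistics

def wallis_adjust(block, target_mean, target_std):
    """Basic Wallis transform on a flat list of gray values:
    g' = (g - m) * (s_t / s) + m_t, so the output has mean m_t and std s_t."""
    m = statistics.fmean(block)
    s = statistics.pstdev(block)
    if s == 0:  # constant block: no contrast to rescale
        return [float(target_mean)] * len(block)
    return [(g - m) * (target_std / s) + target_mean for g in block]
```

Applying the same targets to every sub-block equalizes brightness (mean) and contrast (standard deviation) across the mosaic.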
Robust head pose estimation via supervised manifold learning.
Wang, Chao; Song, Xubo
2014-05-01
Head poses can be automatically estimated using manifold learning algorithms, with the assumption that with the pose being the only variable, the face images should lie in a smooth and low-dimensional manifold. However, this estimation approach is challenging due to other appearance variations related to identity, head location in image, background clutter, facial expression, and illumination. To address the problem, we propose to incorporate supervised information (pose angles of training samples) into the process of manifold learning. The process has three stages: neighborhood construction, graph weight computation and projection learning. For the first two stages, we redefine inter-point distance for neighborhood construction as well as graph weight by constraining them with the pose angle information. For Stage 3, we present a supervised neighborhood-based linear feature transformation algorithm to keep the data points with similar pose angles close together but the data points with dissimilar pose angles far apart. The experimental results show that our method has higher estimation accuracy than the other state-of-the-art algorithms and is robust to identity and illumination variations. Copyright © 2014 Elsevier Ltd. All rights reserved.
Longitudinal-control design approach for high-angle-of-attack aircraft
NASA Technical Reports Server (NTRS)
Ostroff, Aaron J.; Proffitt, Melissa S.
1993-01-01
This paper describes a control synthesis methodology that emphasizes a variable-gain output feedback technique that is applied to the longitudinal channel of a high-angle-of-attack aircraft. The aircraft is a modified F/A-18 aircraft with thrust-vectored controls. The flight regime covers a range up to a Mach number of 0.7; an altitude range from 15,000 to 35,000 ft; and an angle-of-attack (alpha) range up to 70 deg, which is deep into the poststall region. A brief overview is given of the variable-gain mathematical formulation as well as a description of the discrete control structure used for the feedback controller. This paper also presents an approximate design procedure with relationships for the optimal weights for the selected feedback control structure. These weights are selected to meet control design guidelines for high-alpha flight controls. Those guidelines that apply to the longitudinal-control design are also summarized. A unique approach is presented for the feed-forward command generator to obtain smooth transitions between load factor and alpha commands. Finally, representative linear analysis results and nonlinear batch simulation results are provided.
NASA Astrophysics Data System (ADS)
Wu, Shaofeng; Gao, Dianrong; Liang, Yingna; Chen, Bo
2015-11-01
With the development of bionics, bionic non-smooth surfaces have been introduced to the field of tribology. Although non-smooth surfaces have been studied widely, studies of non-smooth surfaces under natural seawater lubrication are still very few, especially experimental research. The influences of smooth and non-smooth surfaces on the frictional properties of a glass fiber-epoxy resin composite (GF/EPR) coupled with stainless steel 316L are investigated under natural seawater lubrication in this paper. The tested non-smooth surfaces include surfaces with semi-spherical pits, conical pits, cone-cylinder combined pits, cylindrical pits, and through holes. The friction and wear tests are performed using a ring-on-disc test rig under a 60 N load and a 1000 r/min rotational speed. The test results show that GF/EPR with a bionic non-smooth surface has a considerably lower friction coefficient and better wear resistance than GF/EPR with a smooth, pit-free surface. The average friction coefficient of GF/EPR with semi-spherical pits is 0.088, a reduction of approximately 63.18% relative to GF/EPR with a smooth surface. In addition, the wear debris on the worn surfaces of GF/EPR was observed by confocal scanning laser microscopy, which showed that the primary wear mechanism is abrasive wear. The research results provide some design parameters for non-smooth surfaces, and the experimental results serve as a beneficial supplement to non-smooth surface study.
Bifurcation theory for finitely smooth planar autonomous differential systems
NASA Astrophysics Data System (ADS)
Han, Maoan; Sheng, Lijuan; Zhang, Xiang
2018-03-01
In this paper we establish a bifurcation theory of limit cycles for planar Ck smooth autonomous differential systems, with k ∈ N. The key point is to study the smoothness of bifurcation functions, which are a basic and important tool in the study of Hopf bifurcation at a fine focus or a center, and of Poincaré bifurcation in a period annulus. We especially study the smoothness of the first-order Melnikov function in degenerate Hopf bifurcation at an elementary center. The smoothness problem was solved for analytic and C∞ differential systems, but it had not been tackled for finitely smooth differential systems. Here, we present the optimal regularity of these bifurcation functions and their asymptotic expressions in the finitely smooth case.
7 CFR 51.768 - Smooth texture.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Smooth texture. Smooth texture means that the skin is thin and smooth for the variety and size of the fruit. “Thin” means that the skin thickness does not average more than 3/8 inch (9.5 mm), on a central...
7 CFR 51.768 - Smooth texture.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Smooth texture. Smooth texture means that the skin is thin and smooth for the variety and size of the fruit. “Thin” means that the skin thickness does not average more than 3/8 inch (9.5 mm), on a central...
Leiomodin and tropomodulin in smooth muscle
NASA Technical Reports Server (NTRS)
Conley, C. A.
2001-01-01
Evidence is accumulating to suggest that actin filament remodeling is critical for smooth muscle contraction, which implicates actin filament ends as important sites for regulation of contraction. Tropomodulin (Tmod) and smooth muscle leiomodin (SM-Lmod) have been found in many tissues containing smooth muscle by protein immunoblot and immunofluorescence microscopy. Both proteins cofractionate with tropomyosin in the Triton-insoluble cytoskeleton of rabbit stomach smooth muscle and are solubilized by high salt. SM-Lmod binds muscle tropomyosin, a biochemical activity characteristic of Tmod proteins. SM-Lmod staining is present along the length of actin filaments in rat intestinal smooth muscle, while Tmod stains in a punctate pattern distinct from that of actin filaments or the dense body marker alpha-actinin. After smooth muscle is hypercontracted by treatment with 10 mM Ca(2+), both SM-Lmod and Tmod are found near alpha-actinin at the periphery of actin-rich contraction bands. These data suggest that SM-Lmod is a novel component of the smooth muscle actin cytoskeleton and, furthermore, that the pointed ends of actin filaments in smooth muscle may be capped by Tmod in localized clusters.
Potential roles for BMP and Pax genes in the development of iris smooth muscle.
Jensen, Abbie M
2005-02-01
The embryonic optic cup generates four types of tissue: neural retina, pigmented epithelium, ciliary epithelium, and iris smooth muscle. Remarkably little attention has focused on the development of the iris smooth muscle since Lewis ([1903] J. Am. Anat. 2:405-416) described its origins from the anterior rim of the optic cup neuroepithelium. As an initial step toward understanding iris smooth muscle development, I first determined the spatial and temporal pattern of the development of the iris smooth muscle in the chick by using the HNK1 antibody, which labels developing iris smooth muscle. HNK1 labeling shows that iris smooth muscle development is correlated in time and space with the development of the ciliary epithelial folds. Second, because neural crest is the only other neural tissue that has been shown to generate smooth muscle (Le Lievre and Le Douarin [1975] J. Embryo. Exp. Morphol. 34:125-154), I sought to determine whether iris smooth muscle development shares similarities with neural crest development. Two members of the BMP superfamily, BMP4 and BMP7, which may regulate neural crest development, are highly expressed by cells at the site of iris smooth muscle generation. Third, because humans and mice that are heterozygous for Pax6 mutations have no irides (Hill et al. [1991] Nature 354:522-525; Hanson et al. [1994] Nat. Genet. 6:168-173), I determined the expression of Pax6. I also examined the expression of Pax3 in the developing anterior optic cup. The developing iris smooth muscle coexpresses Pax6 and Pax3. I suggest that some of the eye defects caused by mutations in Pax6, BMP4, and BMP7 may be due to abnormal iris smooth muscle. Copyright 2004 Wiley-Liss, Inc.
Schürch, W.; Skalli, O.; Lagacé, R.; Seemayer, T. A.; Gabbiani, G.
1990-01-01
Intermediate filament proteins and actin isoforms of a series of 12 malignant hemangiopericytomas and five glomus tumors were examined by light microscopy, transmission electron microscopy, two-dimensional gel electrophoresis (2D-GE), and by immunohistochemistry, the latter using monoclonal or affinity-purified polyclonal antibodies to desmin, vimentin, cytokeratins, alpha-smooth muscle, and alpha-sarcomeric actins. By light microscopy, all hemangiopericytomas disclosed a predominant vascular pattern with scant storiform, myxoid and spindle cell areas, and with variable degrees of perivascular fibrosis. By ultrastructure, smooth muscle differentiation was observed in each hemangiopericytoma. Immunohistochemically, neoplastic cells of hemangiopericytomas expressed vimentin as the sole intermediate filament protein and lacked alpha-smooth muscle or alpha-sarcomeric actins. 2D-GE revealed only beta and gamma actins, in proportions typical for fibroblastic tissues. Glomus tumors revealed vimentin and alpha-smooth muscle actin within glomus cells by immunohistochemical techniques and disclosed ultrastructurally distinct smooth muscle differentiation. Therefore hemangiopericytomas represent a distinct soft-tissue neoplasm with uniform morphologic, immunohistochemical, and biochemical features most likely related to glomus tumors, the former representing an aggressive and potentially malignant neoplasm of vascular smooth muscle cells and the latter a well-differentiated neoplasm of vascular smooth muscle cells. 
Because malignant hemangiopericytomas disclose smooth muscle differentiation by ultrastructure but do not express alpha-smooth muscle actin, as normal pericytes and glomus cells do, it is suggested that these neoplasms represent highly vascularized smooth muscle neoplasms, i.e., poorly differentiated leiomyosarcomas derived from vascular smooth muscle cells or their equivalent, the pericytes, which have lost alpha-smooth muscle actin as a differentiation marker, similar to many conventional poorly differentiated leiomyosarcomas. PMID:2158236
Polo-like Kinase 1 Regulates Vimentin Phosphorylation at Ser-56 and Contraction in Smooth Muscle*
Li, Jia; Wang, Ruping; Gannon, Olivia J.; Rezey, Alyssa C.; Jiang, Sixin; Gerlach, Brennan D.; Liao, Guoning
2016-01-01
Polo-like kinase 1 (Plk1) is a serine/threonine-protein kinase that has been implicated in mitosis, cytokinesis, and smooth muscle cell proliferation. The role of Plk1 in smooth muscle contraction has not been investigated. Here, stimulation with acetylcholine induced Plk1 phosphorylation at Thr-210 (an indication of Plk1 activation) in smooth muscle. Contractile stimulation also activated Plk1 in live smooth muscle cells as evidenced by changes in fluorescence resonance energy transfer signal of a Plk1 sensor. Moreover, knockdown of Plk1 in smooth muscle attenuated force development. Smooth muscle conditional knock-out of Plk1 also diminished contraction of mouse tracheal rings. Plk1 knockdown inhibited acetylcholine-induced vimentin phosphorylation at Ser-56 without affecting myosin light chain phosphorylation. Expression of T210A Plk1 inhibited the agonist-induced vimentin phosphorylation at Ser-56 and contraction in smooth muscle. However, myosin light chain phosphorylation was not affected by T210A Plk1. Ste20-like kinase (SLK) is a serine/threonine-protein kinase that has been implicated in spindle orientation and microtubule organization during mitosis. In this study knockdown of SLK inhibited Plk1 phosphorylation at Thr-210 and activation. Finally, asthma is characterized by airway hyperresponsiveness, which largely stems from airway smooth muscle hyperreactivity. Here, smooth muscle conditional knock-out of Plk1 attenuated airway resistance and airway smooth muscle hyperreactivity in a murine model of asthma. Taken together, these findings suggest that Plk1 regulates smooth muscle contraction by modulating vimentin phosphorylation at Ser-56. Plk1 activation is regulated by SLK during contractile activation. Plk1 contributes to the pathogenesis of asthma. PMID:27662907
Shi, Feng; Long, Xiaochun; Hendershot, Allison; Miano, Joseph M.; Sottile, Jane
2014-01-01
Smooth muscle cells are maintained in a differentiated state in the vessel wall, but can be modulated to a synthetic phenotype following injury. Smooth muscle phenotypic modulation is thought to play an important role in the pathology of vascular occlusive diseases. Phenotypically modulated smooth muscle cells exhibit increased proliferative and migratory properties that accompany the downregulation of smooth muscle cell marker proteins. Extracellular matrix proteins, including fibronectin, can regulate the smooth muscle phenotype when used as adhesive substrates. However, cells produce and organize a 3-dimensional fibrillar extracellular matrix, which can affect cell behavior in distinct ways from the protomeric 2-dimensional matrix proteins that are used as adhesive substrates. We previously showed that the deposition/polymerization of fibronectin into the extracellular matrix can regulate the deposition and organization of other extracellular matrix molecules in vitro. Further, our published data show that the presence of a fibronectin polymerization inhibitor results in increased expression of smooth muscle cell differentiation proteins and inhibits vascular remodeling in vivo. In this manuscript, we used an in vitro cell culture system to determine the mechanism by which fibronectin polymerization affects smooth muscle phenotypic modulation. Our data show that fibronectin polymerization decreases the mRNA levels of multiple smooth muscle differentiation genes, and downregulates the levels of smooth muscle α-actin and calponin proteins by a Rac1-dependent mechanism. The expression of smooth muscle genes is transcriptionally regulated by fibronectin polymerization, as evidenced by the increased activity of luciferase reporter constructs in the presence of a fibronectin polymerization inhibitor. Fibronectin polymerization also promotes smooth muscle cell growth, and decreases the levels of actin stress fibers. 
These data define a Rac1-dependent pathway wherein fibronectin polymerization promotes the SMC synthetic phenotype by modulating the expression of smooth muscle cell differentiation proteins. PMID:24752318
DOT National Transportation Integrated Search
2006-11-01
It's widely accepted that smooth roads provide greater driver comfort and satisfaction, decreased vehicle maintenance costs, and better fuel economy. Now, thanks to a recently completed study, the effect of pavement smoothness on fuel efficiency has...
Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS
NASA Astrophysics Data System (ADS)
Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin
2015-08-01
Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors. Quantifying the smoothing effect allows improvements in efficiency when finishing precision optics. A series of experiments in spin motion was performed to study the smoothing effects in correcting mid-spatial-frequency errors. Some experiments used the same pitch tool at different spinning speeds, and others used different tools at the same spinning speed. Shu's model was introduced and improved to describe and compare the smoothing efficiency at different spinning speeds and with different tools. From the experimental results, the mid-spatial-frequency errors on the initial surface were nearly smoothed out after the process in spin motion, and the number of smoothing iterations can be estimated by the model before the process. Meanwhile, this method was also applied to smooth an aspherical component with an obvious mid-spatial-frequency error after magnetorheological finishing. As a result, a high-precision aspheric optical component was obtained with PV = 0.1λ and RMS = 0.01λ.
Tsai, J C; Jain, M; Hsieh, C M; Lee, W S; Yoshizumi, M; Patterson, C; Perrella, M A; Cooke, C; Wang, H; Haber, E; Schlegel, R; Lee, M E
1996-02-16
Pyrrolidinedithiocarbamate (PDTC) and N-acetylcysteine (NAC) have been used as antioxidants to prevent apoptosis in lymphocytes, neurons, and vascular endothelial cells. We report here that PDTC and NAC induce apoptosis in rat and human smooth muscle cells. In rat aortic smooth muscle cells, PDTC induced cell shrinkage, chromatin condensation, and DNA strand breaks consistent with apoptosis. In addition, overexpression of Bcl-2 suppressed vascular smooth muscle cell death caused by PDTC and NAC. The viability of rat aortic smooth muscle cells decreased within 3 h of treatment with PDTC and was reduced to 30% at 12 h. The effect of PDTC and NAC on smooth muscle cells was not species specific because PDTC and NAC both caused dose-dependent reductions in viability in rat and human aortic smooth muscle cells. In contrast, neither PDTC nor NAC reduced viability in human aortic endothelial cells. The use of antioxidants to induce apoptosis in vascular smooth muscle cells may help prevent their proliferation in arteriosclerotic lesions.
Smooth halos in the cosmic web
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaite, José, E-mail: jose.gaite@upm.es
Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economics and with the advantage of being connected with standard methods of multifractal analysis already used for characterizing the cosmic web structure in cold dark matter N-body simulations. These entropic measures provide us with a quantitative description of the transition from the small scales portrayed as a distribution of halos to the larger scales portrayed as a cosmic web and, therefore, allow us to assign definite sizes to halos. However, these "smoothness sizes" have no direct relation to the virial radii. Finally, we discuss the influence of N-body discreteness parameters on smoothness.
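One simple entropic smoothness index in the spirit described above is the Shannon entropy of the cell-mass fractions, normalized by its uniform-distribution maximum; this is an illustrative sketch, not the paper's multifractal-connected measures.

```python
import math

def normalized_entropy(masses):
    """Shannon entropy of cell-mass fractions, divided by log(N).
    Equals 1 for a perfectly smooth (uniform) distribution and decreases
    toward 0 as mass concentrates into clumps."""
    total = sum(masses)
    fracs = [m / total for m in masses if m > 0]
    h = -sum(p * math.log(p) for p in fracs)
    return h / math.log(len(masses))
```

Scanning this index over smoothing scales gives one way to locate the transition from clumpy halos to a smoother large-scale field.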
Computer programs for smoothing and scaling airfoil coordinates
NASA Technical Reports Server (NTRS)
Morgan, H. L., Jr.
1983-01-01
Detailed descriptions are given of the theoretical methods and associated computer codes of a program to smooth and a program to scale arbitrary airfoil coordinates. The smoothing program utilizes both least-squares polynomial and least-squares cubic-spline techniques to iteratively smooth the second derivatives of the y-axis airfoil coordinates with respect to a transformed x-axis system that unwraps the airfoil and stretches the nose and trailing-edge regions. The corresponding smooth airfoil coordinates are then determined by solving a tridiagonal matrix of simultaneous cubic-spline equations relating the y-axis coordinates and their corresponding second derivatives. A technique for computing the camber and thickness distribution of the smoothed airfoil is also discussed. The scaling program can then be used to scale the thickness distribution generated by the smoothing program to a specific maximum thickness, which is then combined with the camber distribution to obtain the final scaled airfoil contour. Computer listings of the smoothing and scaling programs are included.
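The tridiagonal solve mentioned above is the standard Thomas algorithm; a minimal sketch follows. The spline equations themselves (coupling coordinates and second derivatives) are not reproduced here, only the generic solver such a program would rely on.

```python
def solve_tridiagonal(lower, diag, upper, rhs):
    """Thomas algorithm for a tridiagonal system A x = rhs, where lower,
    diag, upper hold the sub-, main, and super-diagonals (lower[0] and
    upper[-1] are unused). O(n) forward elimination plus back substitution."""
    n = len(diag)
    d, r = diag[:], rhs[:]
    for i in range(1, n):
        f = lower[i] / d[i - 1]
        d[i] -= f * upper[i - 1]
        r[i] -= f * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (r[i] - upper[i] * x[i + 1]) / d[i]
    return x
```

Cubic-spline systems of this form are diagonally dominant, so the elimination is stable without pivoting.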
Spline-Based Smoothing of Airfoil Curvatures
NASA Technical Reports Server (NTRS)
Li, W.; Krist, S.
2008-01-01
Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS, as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance.
CFACS has been extensively tested on a number of supercritical airfoil data sets generated by inverse design and optimization computer programs. All of the smoothing results show that CFACS is able to generate unbiased smooth fits of curvature profiles, trading small modifications of geometry for increasing curvature smoothness by eliminating curvature oscillations and bumps (see figure).
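The third-derivative-jump measure described above can be sketched directly: for a cubic spline, the third derivative is piecewise constant, so its jump at an interior knot is a second difference of the knot second derivatives. This sketch assumes uniform knot spacing h and that the spline's second derivatives at the knots are already known.

```python
def curvature_roughness(second_derivs, h=1.0):
    """Sum of squared jumps in a cubic spline's third derivative at interior
    knots. On interval i the third derivative is (M[i+1]-M[i])/h, so the
    jump at knot i is (M[i+1] - 2*M[i] + M[i-1]) / h."""
    return sum(
        ((second_derivs[i + 1] - 2.0 * second_derivs[i] + second_derivs[i - 1]) / h) ** 2
        for i in range(1, len(second_derivs) - 1)
    )
```

A spline whose second derivative varies linearly along the chord has zero roughness by this measure, matching the intuition that its curvature profile is already smooth.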
Optimum structural design with plate bending elements - A survey
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Prasad, B.
1981-01-01
A survey is presented of recently published papers in the field of optimum structural design of plates, largely with respect to the minimum-weight design of plates subject to such constraints as fundamental frequency maximization. It is shown that, due to the availability of powerful computers, the trend in optimum plate design is away from methods tailored to specific geometry and loads and toward methods that can be easily programmed for any kind of plate, such as finite element methods. A corresponding shift is seen in optimization from variational techniques to numerical optimization algorithms. Among the topics covered are fully stressed design and optimality criteria, mathematical programming, smooth and ribbed designs, design against plastic collapse, buckling constraints, and vibration constraints.
A homogeneous, recyclable polymer support for Rh(I)-catalyzed C-C bond formation.
Jana, Ranjan; Tunge, Jon A
2011-10-21
A robust and practical polymer-supported, homogeneous, recyclable biphephos rhodium(I) catalyst has been developed for C-C bond formation reactions. Control of polymer molecular weight allowed tuning of the polymer solubility such that the polymer-supported catalyst is soluble in nonpolar solvents and insoluble in polar solvents. Using the supported rhodium catalysts, addition of aryl- and vinylboronic acids to electrophiles such as enones, aldehydes, N-sulfonyl aldimines, and alkynes occurs smoothly to provide products in high yields. Additions of terminal alkynes to enones and industrially relevant hydroformylation reactions have also been carried out successfully. Studies show that leaching of Rh from the polymer support is low and that catalyst recycle can be achieved by simple precipitation and filtration.
An efficient unstructured WENO method for supersonic reactive flows
NASA Astrophysics Data System (ADS)
Zhao, Wen-Geng; Zheng, Hong-Wei; Liu, Feng-Jun; Shi, Xiao-Tian; Gao, Jun; Hu, Ning; Lv, Meng; Chen, Si-Cong; Zhao, Hong-Da
2018-03-01
An efficient high-order numerical method for supersonic reactive flows is proposed in this article. The reactive source term and the convection term are solved separately by a splitting scheme. In the reaction step, an adaptive time-step method is presented, which greatly improves efficiency. In the convection step, a third-order accurate weighted essentially non-oscillatory (WENO) method is adopted to reconstruct the solution on unstructured grids. Numerical results show that the new method captures the correct propagation speed of the detonation wave even on coarse grids, while high-order accuracy is achieved in smooth regions. In addition, the proposed adaptive splitting method greatly reduces the computational cost compared with the traditional splitting method.
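A minimal sketch of the splitting idea on a 1-D linear advection-reaction model problem; this is not the paper's scheme (first-order upwind stands in for third-order WENO, and the linear reaction term is integrated exactly rather than with the adaptive time step):

```python
import numpy as np

def advect_upwind(u, c):
    """One upwind step for u_t + a u_x = 0 on a periodic grid (c = a*dt/dx > 0)."""
    return u - c * (u - np.roll(u, 1))

def react_exact(u, k, dt):
    """Exact integration of the linear reaction step u_t = -k u."""
    return u * np.exp(-k * dt)

def split_step(u, c, k, dt):
    """Godunov (first-order) splitting: reaction step, then convection step."""
    return advect_upwind(react_exact(u, k, dt), c)
```

For smooth initial data the split solution tracks the exact solution u(x, t) = u0(x - a t) e^(-k t), up to the upwind scheme's numerical diffusion.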
Aerodynamic load control strategy of wind turbine in microgrid
NASA Astrophysics Data System (ADS)
Wang, Xiangming; Liu, Heshun; Chen, Yanfei
2017-12-01
A control strategy is proposed in this paper to optimize the aerodynamic load of a wind turbine in a microgrid. In grid-connected mode, the wind turbine adopts a new individual variable-pitch control strategy: the pitch angle of the blades is set rapidly by the controller, and the pitch angle of each blade is fine-tuned by a weight-coefficient distributor. In islanded mode, to meet the requirements of the energy storage system, a given-power tracking control method based on fuzzy PID control is proposed. Simulation results show that this control strategy can effectively improve the axial aerodynamic load of the blade under rated wind speed in grid-connected mode, and ensure smooth operation of the microgrid in islanded mode.
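The given-power tracking loop can be illustrated with a plain fixed-gain PID driving a first-order plant toward a power reference; the paper uses fuzzy-tuned gains and a real turbine/storage model, so the gains and plant below are purely illustrative assumptions.

```python
class PID:
    """Textbook PID loop (the paper's fuzzy tuning is replaced by fixed gains)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def track_power(setpoint=1.0, steps=1000, dt=0.01):
    """Drive a first-order plant (a stand-in for the storage-side power loop)
    toward the given power reference; returns the final plant output."""
    pid = PID(kp=2.0, ki=1.0, kd=0.0, dt=dt)
    p = 0.0  # plant output ("power", normalized)
    for _ in range(steps):
        u = pid.step(setpoint, p)
        p += dt * (-p + u)  # first-order plant: dp/dt = -p + u
    return p
```

The integral term removes the steady-state tracking error, which is the property the power-tracking mode relies on.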
Creep feeding nursing beef calves.
Lardy, Gregory P; Maddock, Travis D
2007-03-01
Creep feeding can be used to increase calf weaning weights. However, the gain efficiency of free-choice, energy-based creep feeds is relatively poor. Generally, limit-feeding, high-protein creep feeds are more efficient, and gains may be similar to those produced by creep feeds offered free choice. Creep feeding can increase total organic matter intake and improve the overall energy status of the animal. Creep-fed calves tend to acclimate to the feedlot more smoothly than unsupplemented calves. Furthermore, provision of a high-starch creep feed may have a positive influence on subsequent carcass quality traits. Creep feeding can be applied to numerous environmental situations to maximize calf performance; however, beef cattle producers should consider their individual situations carefully before making the decision to creep feed.
Control of brown and beige fat development
Wang, Wenshan; Seale, Patrick
2017-01-01
Brown and beige adipocytes expend chemical energy to produce heat and are therefore important in regulating body temperature and body weight. Brown adipocytes develop in discrete and relatively homogeneous depots of brown adipose tissue, whereas beige adipocytes are induced to develop in white adipose tissue in response to certain stimuli, notably exposure to cold. Fate-mapping analyses have identified progenitor populations that give rise to brown and beige fat cells and revealed unanticipated cell-lineage relationships between vascular smooth muscle and beige adipocytes, and between brown fat and skeletal muscle cells. Additionally, non-adipocyte cells in adipose tissue, including neurons, blood vessel-associated cells and immune cells, play crucial roles in regulating the differentiation and function of brown and beige fat. PMID:27552974
Simulated Annealing in the Variable Landscape
NASA Astrophysics Data System (ADS)
Hasegawa, Manabu; Kim, Chang Ju
An experimental analysis is conducted to test whether an appropriate smoothness-temperature schedule enhances the optimizing ability of the MASSS method, the combination of the Metropolis algorithm (MA) and the search-space smoothing (SSS) method. The test is performed on two types of random traveling salesman problems. The results show that the optimization performance of the MA is substantially improved by a single smoothing alone, and slightly more so by a single smoothing with cooling and by a de-smoothing process with heating. The performance is compared with that of the parallel tempering method, and a clear advantage of the smoothing idea is observed, depending on the problem.
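The smooth-then-desmooth schedule can be caricatured on a one-dimensional rugged landscape. The paper's SSS operates on TSP instances; shrinking the ripple amplitude below is an illustrative stand-in for the smoothing transform, and all parameters are assumptions.

```python
import numpy as np

def rugged(x, amp=2.0):
    """Toy rugged landscape: a quadratic valley plus high-frequency ripples."""
    return x ** 2 + amp * np.sin(10 * x) ** 2

def metropolis(f, x0, n_steps, temp, step, rng):
    """Plain Metropolis search at fixed temperature; returns the best point seen."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for _ in range(n_steps):
        xn = x + rng.normal(0.0, step)
        fn = f(xn)
        if fn < fx or rng.random() < np.exp((fx - fn) / temp):
            x, fx = xn, fn
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

def smoothed_then_original(x0=5.0, seed=0):
    """Sketch of search-space smoothing: first search the smoothed landscape
    (ripples removed), then continue on the original rugged one."""
    rng = np.random.default_rng(seed)
    x, _ = metropolis(lambda x: rugged(x, amp=0.0), x0, 1000, 1.0, 0.5, rng)
    return metropolis(lambda x: rugged(x, amp=2.0), x, 1000, 0.1, 0.2, rng)
```

The smoothed phase guides the walker into the global valley before the ripples are restored, which is the intuition behind the observed advantage of smoothing.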
Walking smoothness is associated with self-reported function after accounting for gait speed.
Lowry, Kristin A; Vanswearingen, Jessie M; Perera, Subashan; Studenski, Stephanie A; Brach, Jennifer S
2013-10-01
Gait speed has been shown to be an indicator of functional status in older adults; however, there may be aspects of physical function not represented by speed but by the quality of movement. The purpose of this study was to determine the relations between walking smoothness, an indicator of the quality of movement based on trunk accelerations, and physical function. Thirty older adults (mean age, 77.7±5.1 years) participated. Usual gait speed was measured using an instrumented walkway. Walking smoothness was quantified by harmonic ratios derived from anteroposterior, vertical, and mediolateral trunk accelerations recorded during overground walking. Self-reported physical function was recorded using the function subscales of the Late-Life Function and Disability Instrument. Anteroposterior smoothness was positively associated with all function components of the Late-Life Function and Disability Instrument, whereas mediolateral smoothness exhibited negative associations. Adjusting for gait speed, anteroposterior smoothness remained associated with the overall and lower extremity function subscales, whereas mediolateral smoothness remained associated with only the advanced lower extremity subscale. These findings indicate that walking smoothness, particularly the smoothness of forward progression, represents aspects of the motor control of walking important for physical function not represented by gait speed alone.
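Harmonic ratios of the kind used here are computed from the stride-frequency harmonics of the acceleration spectrum. The sketch below assumes the signal spans an integer number of strides (so harmonics fall on FFT bins) and uses ten harmonic pairs; the study's exact windowing and harmonic count may differ.

```python
import numpy as np

def harmonic_ratio(acc, fs, stride_freq, n_harmonics=10, even_over_odd=True):
    """Harmonic ratio of a trunk-acceleration signal.
    For AP/vertical axes the ratio is (sum of even-harmonic amplitudes) /
    (sum of odd-harmonic amplitudes); for ML it is conventionally inverted."""
    n = len(acc)
    spec = np.abs(np.fft.rfft(acc)) / n        # one-sided amplitude spectrum
    df = np.fft.rfftfreq(n, d=1.0 / fs)[1]     # frequency resolution (Hz/bin)
    amps = [spec[int(round(h * stride_freq / df))]
            for h in range(1, 2 * n_harmonics + 1)]
    even = sum(amps[1::2])  # harmonics 2, 4, ...
    odd = sum(amps[0::2])   # harmonics 1, 3, ...
    return even / odd if even_over_odd else odd / even
```

A signal dominated by the step frequency (the 2nd stride harmonic) yields a high ratio, i.e. smoother walking.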
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozlovskaya, Veronika; Zavgorodnya, Oleksandra; Ankner, John F.
Here, we report on tailoring the internal architecture of multilayer-derived poly(methacrylic acid) (PMAA) hydrogels by controlling the molecular weight of poly(N-vinylpyrrolidone) (PVPON) in hydrogen-bonded (PMAA/PVPON) layer-by-layer precursor films. The hydrogels are produced by cross-linking PMAA in the spin-assisted multilayers followed by PVPON release. We found that the thickness, morphology, and architecture of hydrogen-bonded films and the corresponding hydrogels are significantly affected by PVPON chain length. For all systems, an increase in PVPON molecular weight from Mw = 2.5 to 1300 kDa resulted in increased total film thickness. We also show that increasing polymer Mw smooths the hydrogen-bonded film surfaces but roughens those of the hydrogels. Using deuterated dPMAA marker layers in neutron reflectometry measurements, we found that hydrogen-bonded films reveal a high degree of stratification which is preserved in the cross-linked films. We observed dPMAA to be distributed more widely in the hydrogen-bonded films prepared with small-Mw PVPON due to the greater mobility of short-chain PVPON. Furthermore, these variations in the distribution of PMAA are erased after cross-linking, resulting in a distribution of dPMAA over about two bilayers for all Mw but being somewhat more widely distributed in the films templated with higher-Mw PVPON. Finally, our results yield new insights into controlling the organization of nanostructured polymer networks using polymer molecular weight and open opportunities for fabrication of thin films with well-organized architecture and controllable function.
Insaf, Tabassum Z; Talbot, Thomas
2016-07-01
To assess the geographic distribution of low birth weight (LBW) in New York State among singleton births using a spatial regression approach, in order to identify priority areas for public health actions. LBW was defined as birth weight less than 2500 g. Geocoded data from 562,586 birth certificates in New York State (years 2008-2012) were merged with 2010 census data at the tract level. To provide stable estimates and maintain confidentiality, data were aggregated to yield 1268 areas of analysis. LBW prevalence among singleton births was related to area-level behavioral, socioeconomic and demographic characteristics using a Poisson mixed-effects spatial error regression model. Observed low birth weight showed statistically significant autocorrelation in our study area (Moran's I = 0.16, p = 0.0005). After over-dispersion correction and accounting for fixed effects for selected social determinants, spatial autocorrelation was fully accounted for (Moran's I = -0.007, p = 0.241). The proportion of LBW was higher in areas with larger Hispanic or Black populations and high smoking prevalence. Smoothed maps with predicted prevalence were developed to identify areas at high risk of LBW. Spatial patterns of residual variation were analyzed to identify unique risk factors. Neighborhood racial composition contributes to disparities in LBW prevalence beyond differences in behavioral and socioeconomic factors. Small-area analyses of LBW can identify areas for targeted interventions and display unique local patterns that should be accounted for in prevention strategies. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
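Moran's I, the autocorrelation statistic reported above, can be computed directly. The sketch below assumes binary rook-contiguity weights on a regular grid; the study's actual weight specification is not given here.

```python
import numpy as np

def rook_weights(rows, cols):
    """Binary rook-contiguity (shared-edge) weight matrix for a regular grid."""
    n = rows * cols
    W = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols:            # right neighbor
                W[i, i + 1] = W[i + 1, i] = 1.0
            if r + 1 < rows:            # lower neighbor
                W[i, i + cols] = W[i + cols, i] = 1.0
    return W

def morans_i(x, W):
    """Moran's I: (n / S0) * sum_ij w_ij z_i z_j / sum_i z_i^2, z = x - mean(x)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return len(x) / W.sum() * (z @ W @ z) / (z @ z)
```

Clustered values give positive I (like the observed 0.16 before adjustment) and alternating values give negative I (values near 0, like the residual -0.007, indicate no autocorrelation).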
Rocca, Maria A; Valsasina, Paola; Damjanovic, Dusan; Horsfield, Mark A; Mesaros, Sarlota; Stosic-Opincal, Tatjana; Drulovic, Jelena; Filippi, Massimo
2013-01-01
To apply voxel-based methods to map the regional distribution of atrophy and T2 hyperintense lesions in the cervical cord of multiple sclerosis (MS) patients with different clinical phenotypes. Brain and cervical cord 3D T1-weighted and T2-weighted scans were acquired from 31 healthy controls (HC) and 77 MS patients (15 clinically isolated syndromes (CIS), 15 relapsing-remitting (RR), 19 benign (B), 15 primary progressive (PP) and 13 secondary progressive (SP) MS). Hyperintense cord lesions were outlined on T2-weighted scans. The T2- and 3D T1-weighted cord images were then analysed using an active surface method which created output images reformatted in planes perpendicular to the estimated cord centre line. These unfolded cervical cord images were co-registered into a common space; then smoothed binary cord masks and lesion masks underwent spatial statistic analysis (SPM8). No cord atrophy was found in CIS patients versus HC, while PPMS had significant cord atrophy. Clusters of cord atrophy were found in BMS versus RRMS, and in SPMS versus RRMS, BMS and PPMS patients, mainly involving the posterior and lateral cord segments. Cord lesion probability maps showed a significantly greater likelihood of abnormalities in RRMS, PPMS and SPMS than in CIS and BMS patients. The spatial distributions of cord atrophy and cord lesions were not correlated. In progressive MS, regional cord atrophy was correlated with clinical disability and impairment in the pyramidal system. Voxel-based assessment of cervical cord damage is feasible and may contribute to a better characterisation of the clinical heterogeneity of MS patients.
Kozlovskaya, Veronika; Zavgorodnya, Oleksandra; Ankner, John F.; ...
2015-11-16
Here, we report on tailoring the internal architecture of multilayer-derived poly(methacrylic acid) (PMAA) hydrogels by controlling the molecular weight of poly(N-vinylpyrrolidone) (PVPON) in hydrogen-bonded (PMAA/PVPON) layer-by-layer precursor films. The hydrogels are produced by cross-linking PMAA in the spin-assisted multilayers followed by PVPON release. We found that the thickness, morphology, and architecture of hydrogen-bonded films and the corresponding hydrogels are significantly affected by PVPON chain length. For all systems, an increase in PVPON molecular weight from Mw = 2.5 to 1300 kDa resulted in increased total film thickness. We also show that increasing polymer Mw smooths the hydrogen-bonded film surfaces but roughens those of the hydrogels. Using deuterated dPMAA marker layers in neutron reflectometry measurements, we found that hydrogen-bonded films reveal a high degree of stratification which is preserved in the cross-linked films. We observed dPMAA to be distributed more widely in the hydrogen-bonded films prepared with small-Mw PVPON due to the greater mobility of short-chain PVPON. Furthermore, these variations in the distribution of PMAA are erased after cross-linking, resulting in a distribution of dPMAA over about two bilayers for all Mw but being somewhat more widely distributed in the films templated with higher-Mw PVPON. Finally, our results yield new insights into controlling the organization of nanostructured polymer networks using polymer molecular weight and open opportunities for fabrication of thin films with well-organized architecture and controllable function.
Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction
Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.
2018-01-01
Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify the optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains, a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm3, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm3, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm3 and 4 mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant.
When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, outperforming the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates the use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
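The Bayesian-optimization loop described above (a probabilistic surrogate plus an acquisition function that adaptively samples the parameter space) can be sketched as follows; the RBF-kernel Gaussian process, expected-improvement acquisition, and toy one-dimensional objective are illustrative assumptions, not the study's setup.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel matrix between two 1-D point sets."""
    d = np.asarray(a).reshape(-1, 1) - np.asarray(b).reshape(1, -1)
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, ls=1.0, noise=1e-6):
    """Gaussian-process posterior mean and std at candidate points Xs."""
    ym = y.mean()
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    Ks = rbf(X, Xs, ls)
    mu = Ks.T @ np.linalg.solve(K, y - ym) + ym
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, y_best, xi=0.01):
    """EI acquisition for minimization."""
    z = (y_best - mu - xi) / sigma
    cdf = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z])
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (y_best - mu - xi) * cdf + sigma * pdf

def bayes_opt(objective, candidates, x_init, n_iter=10):
    """Evaluate the EI-maximizing candidate each round; return best point seen."""
    X, y = list(x_init), [objective(x) for x in x_init]
    for _ in range(n_iter):
        mu, sigma = gp_posterior(np.array(X), np.array(y), candidates)
        x_next = float(candidates[np.argmax(expected_improvement(mu, sigma, min(y)))])
        X.append(x_next)
        y.append(objective(x_next))
    i = int(np.argmin(y))
    return X[i], y[i]
```

Here the "objective" would be cross-validated prediction error as a function of a pre-processing parameter (e.g. smoothing kernel width); the toy quadratic below stands in for it.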
Araujo, Layanne C. da Cunha; de Souza, Iara L. L.; Vasconcelos, Luiz H. C.; Brito, Aline de Freitas; Queiroga, Fernando R.; Silva, Alexandre S.; da Silva, Patrícia M.; Cavalcante, Fabiana de Andrade; da Silva, Bagnólia A.
2016-01-01
Aerobic exercise promotes short-term physiological changes in intestinal smooth muscle associated with the ischemia-reperfusion process; however, few studies have demonstrated its effect on intestinal contractile function. Thus, this work describes our observations regarding the influence of acute aerobic swimming exercise on the contractile reactivity, oxidative stress, and morphology of rat ileum. Wistar rats were divided into sedentary (SED) and acutely exercised (EX-AC) groups. Animals were acclimated with 10, 10, and 30 min of swimming exercise on alternate days during the week before the exercise, and were then submitted to forced swimming for 1 h with a metal load equivalent to 3% of their body weight attached to the body. Animals were euthanized immediately after the exercise session and the ileum was suspended in organ baths for monitoring of isotonic contractions. Lipid peroxidation was analyzed to determine malondialdehyde (MDA) levels as a marker of oxidative stress, and intestinal smooth muscle morphology was assessed by histological staining. Cumulative concentration-response curves to KCl were altered in the EX-AC group, with an increase in both efficacy and potency (Emax = 153.2 ± 2.8%, EC50 = 1.3 ± 0.1 × 10⁻² M) compared to the SED group (Emax = 100%, EC50 = 1.8 ± 0.1 × 10⁻² M). Interestingly, carbachol had its efficacy and potency reduced in the EX-AC group (Emax = 67.1 ± 1.4%, EC50 = 9.8 ± 1.4 × 10⁻⁷ M) compared to the SED group (Emax = 100%, EC50 = 2.0 ± 0.2 × 10⁻⁷ M). The exercise did not alter MDA levels in the ileum of the EX-AC group (5.4 ± 0.6 μmol/mL) compared to the SED group (8.4 ± 1.7 μmol/mL). Moreover, neither the circular nor the longitudinal smooth muscle layer thickness was modified by the exercise (66.2 ± 6.0 and 40.2 ± 2.6 μm, respectively), compared to the SED group (61.6 ± 6.4 and 34.8 ± 3.7 μm, respectively).
Therefore, the ileum sensitivity to contractile agents is differentially altered by the acute aerobic swimming exercise, without affecting the oxidative stress and the morphology of ileum smooth muscle. PMID:27047389
Neurophysiology and Neuroanatomy of Smooth Pursuit in Humans
ERIC Educational Resources Information Center
Lencer, Rebekka; Trillenberg, Peter
2008-01-01
Smooth pursuit eye movements enable us to focus our eyes on moving objects by utilizing well-established mechanisms of visual motion processing, sensorimotor transformation and cognition. Novel smooth pursuit tasks and quantitative measurement techniques can help unravel the different smooth pursuit components and complex neural systems involved…
Radial Basis Function Based Quadrature over Smooth Surfaces
2016-03-24
Radial basis functions φ(r) include the piecewise-smooth (conditionally positive definite) families, namely the monomial (MN) |r|^(2m+1) and the thin-plate spline (TPS) |r|^(2m) ln|r|, as well as infinitely smooth kernels. ...smooth surfaces using polynomial interpolants, while [27] couples thin-plate spline interpolation (see table 1) with Green's integral formula [29]
Accurate interlaminar stress recovery from finite element analysis
NASA Technical Reports Server (NTRS)
Tessler, Alexander; Riggs, H. Ronald
1994-01-01
The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing-element discretization, producing strains and first strain gradients of superior accuracy. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply supported rectangular plate under a doubly sinusoidal load, which has an exact analytic solution that serves as a measure of the goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.
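The combination of discrete least-squares and penalty functionals can be illustrated with a one-dimensional analogue: minimize ||f - d||² + λ||f''||² over the sampled field, then differentiate the smoothed result. This is a sketch of the variational idea only, not the paper's smoothing-element method, and the penalty weight is an arbitrary choice.

```python
import numpy as np

def smooth_least_squares(d, h, lam):
    """1-D least-squares-plus-penalty smoothing: solve
    (I + lam * D2^T D2) f = d, where D2 approximates the second derivative,
    i.e. minimize ||f - d||^2 + lam * ||f''||^2."""
    n = len(d)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = np.array([1.0, -2.0, 1.0]) / h ** 2
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, d)
```

Differentiating the smoothed field (e.g. with `np.gradient`) recovers far more accurate gradients than differentiating the noisy data directly, which is the property exploited when integrating equilibrium equations for interlaminar stresses.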
Lejarraga, Horacio; del Pino, Mariana; Fano, Virginia; Caino, Silvia; Cole, Timothy J
2009-04-01
Argentine growth references have been widely used by paediatricians in the country for the last 20 years. Two main difficulties were detected during this period: the lack of data on breast-fed children in the first months of life, and problems in the calculation of z-scores. On this basis, local data on weight and height during the first two years of life were replaced by data from the longitudinal international study recently carried out by the WHO. L, M and S values were obtained from the original percentile data for ages 2 years to maturity and smoothed with cubic splines. Selected percentiles for weight and height from birth to maturity were then re-calculated using the LMS values. Charts were designed in two formats: birth to maturity and birth to 6.0 years. Users can now calculate z-scores automatically at the new site provided by the Department of Growth and Development, Hospital Garrahan, which enables use of the LMS growth programme. We have also incorporated into the new charts percentiles of the age of attaining menarche and Tanner stage II of breast, genitalia and pubic hair for Argentine children. We believe the new references represent an improvement in the assessment of growth in our country.
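The LMS method underlying these charts converts a measurement to a z-score via the Box-Cox form z = ((X/M)^L - 1)/(L·S), with z = ln(X/M)/S when L = 0. A minimal sketch; the parameter values in the test are hypothetical, not taken from the Argentine references.

```python
import numpy as np

def lms_zscore(x, L, M, S):
    """z-score from LMS parameters (L: Box-Cox power, M: median, S: coefficient
    of variation), as in Cole's LMS method."""
    if abs(L) < 1e-12:
        return np.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

def lms_centile(z, L, M, S):
    """Inverse transform: the measurement at a given z (z = 0 returns the median M)."""
    if abs(L) < 1e-12:
        return M * np.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)
```

Plotting `lms_centile` at fixed z values (e.g. z = -2, 0, +2) across age-specific L, M, S tables is exactly how the smoothed percentile curves are drawn.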
An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion
Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng
2015-01-01
The rapid development of mobile Internet has offered the opportunity for WiFi indoor positioning to come under the spotlight due to its low cost. However, the accuracy of WiFi indoor positioning currently cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm based on weighted fusion. The proposed algorithm builds on traditional location-fingerprinting algorithms and consists of two stages: offline acquisition and online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition and forms a fingerprint database through error classification and handling. To further improve positioning accuracy, the online positioning process first uses a pre-match method to select candidate fingerprints and shorten the positioning time. It then uses an improved Euclidean distance and an improved joint probability to calculate two intermediate results, and calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth WiFi signal fluctuation, and the improved joint probability introduces a logarithmic calculation to reduce the difference between probability values. Comparing the proposed algorithm with the Euclidean-distance-based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm has higher positioning accuracy. PMID:26334278
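The fingerprinting core (a distance weighted by per-AP signal standard deviation, the k nearest fingerprints, and a convex fusion of two intermediate position estimates) can be sketched as follows; the fusion weight and the exact role of the standard deviation are assumptions here, not the paper's formulas.

```python
import numpy as np

def wknn_locate(rss, fingerprints, positions, k=3, std=None):
    """Weighted k-nearest-neighbor fingerprinting. `std`, if given, plays the
    role of the paper's improved Euclidean distance: per-AP signal standard
    deviations down-weight noisy access points (an assumption in this sketch)."""
    std = np.ones(len(rss)) if std is None else np.asarray(std)
    d = np.sqrt(np.sum(((fingerprints - rss) / std) ** 2, axis=1))
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)   # closer fingerprints get larger weights
    w /= w.sum()
    return w @ positions[idx]

def weighted_fusion(est_a, est_b, w_a=0.5):
    """Fuse two intermediate position estimates by a convex weight."""
    return w_a * np.asarray(est_a, dtype=float) + (1 - w_a) * np.asarray(est_b, dtype=float)
```

In the paper's scheme, `est_a` and `est_b` would be the distance-based and joint-probability-based estimates respectively.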