Variability in Non-Target Terrestrial Plant Studies Should Inform Endpoint Selection.
Staveley, J P; Green, J W; Nusz, J; Edwards, D; Henry, K; Kern, M; Deines, A M; Brain, R; Glenn, B; Ehresman, N; Kung, T; Ralston-Hooper, K; Kee, F; McMaster, S
2018-05-04
Inherent variability in Non-Target Terrestrial Plant (NTTP) testing of pesticides creates challenges for using and interpreting these data for risk assessment. Standardized NTTP testing protocols were initially designed to calculate the application rate causing a 25% effect (ER25, used in the U.S.) or a 50% effect (ER50, used in Europe) for various measures based on the observed dose-response. More recently, the requirement to generate a no-observed-effect rate (NOER), or, in the absence of a NOER, the rate causing a 5% effect (ER05), has raised questions about the inherent variability in, and statistical detectability of, these tests. Statistically significant differences observed between test and control groups may be a product of this inherent variability and may not represent biological relevance. Attempting to derive an ER05 and the associated risk assessment conclusions drawn from these values can overestimate risk. To address these concerns, we evaluated historical data from approximately 100 seedling emergence and vegetative vigor guideline studies on pesticides to assess the variability of control results across studies for each plant species, examined potential causes for the variation in control results, and defined the minimum percent effect that can be reliably detected. The results indicate that with current test design and implementation, the ER05 cannot be reliably estimated. This article is protected by copyright. All rights reserved.
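To make the ERx endpoints concrete, here is a minimal sketch of how an effect rate is estimated from a fitted dose-response curve. The two-parameter log-logistic form and the grid-search fit are illustrative assumptions; guideline analyses use maximum-likelihood regression, but the inversion step is the same.

```python
def log_logistic(rate, er50, slope):
    """Two-parameter log-logistic dose-response: fraction of the control
    response remaining at a given application rate."""
    if rate == 0:
        return 1.0
    return 1.0 / (1.0 + (rate / er50) ** slope)

def erx(x_percent, er50, slope):
    """Invert the curve: application rate causing an x% effect."""
    remaining = 1.0 - x_percent / 100.0
    return er50 * ((1.0 - remaining) / remaining) ** (1.0 / slope)

def fit_er50_slope(rates, responses, er50_grid, slope_grid):
    """Crude least-squares grid search; real analyses fit by maximum
    likelihood, but the idea is the same."""
    best = None
    for er50 in er50_grid:
        for slope in slope_grid:
            sse = sum((r - log_logistic(d, er50, slope)) ** 2
                      for d, r in zip(rates, responses))
            if best is None or sse < best[0]:
                best = (sse, er50, slope)
    return best[1], best[2]
```

Inverting at x = 5 lands far down the shallow tail of the curve (for ER50 = 100 and slope = 2, ER05 is about 23), where a few percent of control variability is as large as the effect being estimated, which is the detectability problem the study quantifies.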
Spray drift and off-target loss reduction with a precision air-assisted sprayer
USDA-ARS's Scientific Manuscript database
Spray drift and off-target losses are inherent problems of conventional air-assisted sprayers. Their low efficiency causes environmental pollution, resulting in public concern. A new drift reduction technology incorporating laser scanning capabilities with a variable-rate air-assisted sprayer w...
X-ray crystal structures of native HIV-1 capsid protein reveal conformational variability
Gres, Anna T.; Kirby, Karen A.; KewalRamani, Vineet N.; ...
2015-06-04
The detailed molecular interactions between native HIV-1 capsid protein (CA) hexamers that shield the viral genome and proteins have been elusive. In this paper, we report crystal structures describing interactions between CA monomers related by sixfold symmetry within hexamers (intrahexamer) and threefold and twofold symmetry between neighboring hexamers (interhexamer). The structures describe how CA builds hexagonal lattices, the foundation of mature capsids. Lattice structure depends on an adaptable hydration layer modulating interactions among CA molecules. Disruption of this layer alters interhexamer interfaces, highlighting an inherent structural variability. A CA-targeting antiviral affects capsid stability by binding across CA molecules and subtly altering interhexamer interfaces remote to the ligand-binding site. Finally, inherent structural plasticity, hydration layer rearrangement, and effector binding affect capsid stability and have functional implications for the retroviral life cycle.
Combatting Inherent Vulnerabilities of CFAR Algorithms and a New Robust CFAR Design
1993-09-01
CFAR detectors are essential elements of any automatic radar system. Unfortunately, CFAR systems are inherently vulnerable to degradation caused by large clutter edges, multiple targets, and electronic countermeasures (ECM) environments. This thesis presents eight popular and studied ...
Milde, Moritz B.; Blum, Hermann; Dietmüller, Alexander; Sumislawska, Dora; Conradt, Jörg; Indiveri, Giacomo; Sandamirskaya, Yulia
2017-01-01
Neuromorphic hardware emulates dynamics of biological neural networks in electronic circuits offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware makes it possible to implement neural-network-based robotic controllers in an energy-efficient way with low latency, but requires solving the problem of device variability, characteristic of analog electronic circuits. In this work, we interfaced a mixed-signal analog/digital neuromorphic processor ROLLS to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that is able to perform neurally inspired obstacle avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, moving target, clutter, and poor light conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates an implementation of working obstacle avoidance and target acquisition using mixed-signal analog/digital neuromorphic hardware. PMID:28747883
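The Dynamic Neural Field used above for target acquisition has simple defining dynamics that can be sketched in software (the spiking hardware implements an analog of this). The interaction kernel below (local self-excitation plus global inhibition), the sigmoid slope, and all gains are illustrative choices, not ROLLS parameters.

```python
import math

def dnf_step(u, inp, dt=0.05, tau=1.0, h=-2.0, w_exc=2.0, w_inh=0.5):
    """One Euler step of a 1-D Dynamic Neural Field:
    tau * du/dt = -u + h + input + lateral interaction."""
    f = [1.0 / (1.0 + math.exp(-4.0 * ui)) for ui in u]  # sigmoidal firing rate
    total = sum(f)                                       # global inhibitory pool
    return [ui + dt * (-ui + h + inp[i] + w_exc * f[i] - w_inh * total) / tau
            for i, ui in enumerate(u)]
```

Driving one field site with a localized input makes that site, and only that site, cross threshold, which is the selection behavior that lets a field lock onto a single target.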
Impedance modulation and feedback corrections in tracking targets of variable size and frequency.
Selen, Luc P J; van Dieën, Jaap H; Beek, Peter J
2006-11-01
Humans are able to adjust the accuracy of their movements to the demands posed by the task at hand. The variability in task execution caused by the inherent noisiness of the neuromuscular system can be tuned to task demands by both feedforward (e.g., impedance modulation) and feedback mechanisms. In this experiment, we studied both mechanisms, using mechanical perturbations to estimate stiffness and damping as indices of impedance modulation and submovement scaling as an index of feedback driven corrections. Eight subjects tracked three differently sized targets (0.0135, 0.0270, and 0.0405 rad) moving at three different frequencies (0.20, 0.25, and 0.33 Hz). Movement variability decreased with both decreasing target size and movement frequency, whereas stiffness and damping increased with decreasing target size, independent of movement frequency. These results are consistent with the theory that mechanical impedance acts as a filter of noisy neuromuscular signals but challenge stochastic theories of motor control that do not account for impedance modulation and only partially for feedback control. Submovements during unperturbed cycles were quantified in terms of their gain, i.e., the slope between their duration and amplitude in the speed profile. Submovement gain decreased with decreasing movement frequency and increasing target size. The results were interpreted to imply that submovement gain is related to observed tracking errors and that those tracking errors are expressed in units of target size. We conclude that impedance and submovement gain modulation contribute additively to tracking accuracy.
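The submovement gain defined above (the slope between submovement duration and amplitude in the speed profile) is an ordinary least-squares slope. A minimal sketch, leaving out how submovements are segmented from the speed trace:

```python
def submovement_gain(durations, amplitudes):
    """Least-squares slope of submovement amplitude against submovement
    duration, the 'gain' index used in the tracking study above."""
    n = len(durations)
    md = sum(durations) / n
    ma = sum(amplitudes) / n
    num = sum((d - md) * (a - ma) for d, a in zip(durations, amplitudes))
    den = sum((d - md) ** 2 for d in durations)
    return num / den
```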
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, L; Braunstein, S; Chiu, J
2016-06-15
Purpose: Spinal cord tolerance for SBRT has been recommended at the maximum point dose level or at irradiated volumes such as 0.35 mL or 10% of the contoured volume. In this study, we investigated an inherent functional relationship that associates these dose surrogates for irradiated spinal cord volumes of up to 3.0 mL. Methods: A hidden variable termed the Effective Dose Radius (EDR) was formulated based on a dose fall-off model to correlate dose at irradiated spinal cord volumes ranging from 0 mL (point maximum) to 3.0 mL. A cohort of 15 spine SBRT cases was randomly selected to derive an EDR-parameterized formula. The mean prescription dose for the studied cases was 21.0±8.0 Gy (range, 10–40 Gy) delivered in 3±1 fractions with target volumes of 39.1±70.6 mL. Linear regression and variance analysis were performed for the fitting parameters of variable EDR values. Results: No direct correlation was found between the dose at the maximum point and doses at variable spinal cord volumes. For example, Pearson R² = 0.643 and R² = 0.491 were obtained when correlating the point maximum dose with the spinal cord dose at 1 mL and 3 mL, respectively. However, near-perfect correlation (R² ≥ 0.99) was obtained when correlating the corresponding parameterized EDRs. Specifically, Pearson R² = 0.996 and R² = 0.990 were obtained when correlating EDR (maximum point dose) with EDR (dose at 1 mL) and EDR (dose at 3 mL), respectively. As a result, high-confidence look-up tables were established to correlate spinal cord doses at the maximum point to any finite irradiated volume. Conclusion: An inherent functional relationship was demonstrated for spine SBRT. Such a relationship unifies dose surrogates at variable cord volumes and proves that a single dose surrogate (e.g., the point maximum dose) is mathematically sufficient in constraining the overall spinal cord dose tolerance for SBRT.
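The correlation step above can be reproduced in outline with a plain Pearson R². The paper's EDR transform itself depends on its dose fall-off model, which is not reproduced here; the square-root transform in the test is only a stand-in showing how a suitable reparameterization can linearize a monotone relation between two dose surrogates.

```python
def pearson_r2(x, y):
    """Squared Pearson correlation between two dose surrogates across cases."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)
```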
Statistics of optimal information flow in ensembles of regulatory motifs
NASA Astrophysics Data System (ADS)
Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan
2018-02-01
Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N, (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
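Maximizing mutual information over the input distribution only, as in the motif ensemble above, is what the Blahut-Arimoto iteration does for a discrete memoryless channel. The sketch below is the generic algorithm, not the field-theoretic ensemble calculation of the paper.

```python
import math

def blahut_arimoto(P, iters=300):
    """Capacity (in nats) of a discrete memoryless channel P[x][y] = p(y|x),
    found by iteratively optimizing the input distribution q."""
    nx, ny = len(P), len(P[0])
    q = [1.0 / nx] * nx
    for _ in range(iters):
        r = [sum(q[x] * P[x][y] for x in range(nx)) for y in range(ny)]
        c = [math.exp(sum(P[x][y] * math.log(P[x][y] / r[y])
                          for y in range(ny) if P[x][y] > 0.0))
             for x in range(nx)]
        z = sum(q[x] * c[x] for x in range(nx))
        q = [q[x] * c[x] / z for x in range(nx)]
    return math.log(z)  # lower bound that converges to the capacity
```

For a noiseless binary channel this returns ln 2 (one bit); for a binary symmetric channel with crossover 0.1 it returns ln 2 minus the noise entropy.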
Quantifying Disease Progression in Amyotrophic Lateral Sclerosis
Simon, Neil G; Turner, Martin R; Vucic, Steve; Al-Chalabi, Ammar; Shefner, Jeremy; Lomen-Hoerth, Catherine; Kiernan, Matthew C
2014-01-01
Amyotrophic lateral sclerosis (ALS) exhibits characteristic variability of onset and rate of disease progression, with inherent clinical heterogeneity making disease quantitation difficult. Recent advances in understanding pathogenic mechanisms linked to the development of ALS impose an increasing need to develop strategies to predict and more objectively measure disease progression. This review explores phenotypic and genetic determinants of disease progression in ALS, and examines established and evolving biomarkers that may contribute to robust measurement in longitudinal clinical studies. With targeted neuroprotective strategies on the horizon, developing efficiencies in clinical trial design may facilitate timely entry of novel treatments into the clinic. PMID:25223628
Scheduling admissions and reducing variability in bed demand.
Bekker, René; Koeleman, Paulien M
2011-09-01
Variability in admissions and lengths of stay inherently leads to variability in bed occupancy. The aim of this paper is to analyse the impact of these sources of variability on the required amount of capacity and to determine admission quota for scheduled admissions to regulate the occupancy pattern. For the impact of variability on the required number of beds, we use a heavy-traffic limit theorem for the G/G/∞ queue yielding an intuitively appealing approximation in case the arrival process is not Poisson. Also, given a structural weekly admission pattern, we apply a time-dependent analysis to determine the mean offered load per day. This time-dependent analysis is combined with a Quadratic Programming model to determine the optimal number of elective admissions per day, such that an average desired daily occupancy is achieved. From the mathematical results, practical scenarios and guidelines are derived that can be used by hospital managers and support the method of quota scheduling. In practice, the results can be implemented by providing admission quota prescribing the target number of admissions for each patient group.
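Two ingredients above can be sketched directly: the time-dependent mean offered load per day for a cyclic weekly admission pattern, and a square-root staffing rule motivated by the heavy-traffic limit for G/G/∞ queues. The quadratic-programming step that chooses admission quotas is omitted, and the peakedness parameterization below is an assumption of this sketch, not the paper's formula.

```python
import math

def daily_offered_load(weekly_admissions, los_survival):
    """Mean beds occupied per weekday for a cyclic weekly admission pattern.
    los_survival[k] = P(length of stay > k days), so a patient admitted on
    day d contributes los_survival[k] expected occupancy on day d + k."""
    load = [0.0] * 7
    for d, a in enumerate(weekly_admissions):
        for k, s in enumerate(los_survival):
            load[(d + k) % 7] += a * s
    return load

def required_beds(offered_load, peakedness, beta):
    """Square-root staffing heuristic: beds ~ rho + beta * sqrt(z * rho),
    where the peakedness z captures non-Poisson arrival variability
    (z = 1 recovers the Poisson case)."""
    rho = offered_load
    return math.ceil(rho + beta * math.sqrt(peakedness * rho))
```

Note how a peakedness above 1 (more variable than Poisson arrivals) inflates the slack capacity needed on top of the mean load.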
A framework for monitoring social process and outcomes in environmental programs.
Chapman, Sarah
2014-12-01
When environmental programs frame their activities as being in the service of human wellbeing, social variables need to be integrated into monitoring and evaluation (M&E) frameworks. This article draws upon ecosystem services theory to develop a framework to guide the M&E of collaborative environmental programs with anticipated social benefits. The framework has six components: program need, program activities, pathway process variables, moderating process variables, outcomes, and program value. Needs are defined in terms of ecosystem services, as well as other human needs that must be addressed to achieve outcomes. The pathway variable relates to the development of natural resource governance capacity in the target community. Moderating processes can be externalities such as the inherent capacity of the natural system to service ecosystem needs, local demand for natural resources, policy or socio-economic drivers. Internal program-specific processes relate to program service delivery, targeting and participant responsiveness. Ecological outcomes are expressed in terms of changes in landscape structure and function, which in turn influence ecosystem service provision. Social benefits derived from the program are expressed in terms of the value of the eco-social service to user-specified goals. The article provides suggestions from the literature for identifying indicators and measures for components and component variables, and concludes with an example of how the framework was used to inform the M&E of an adaptive co-management program in western Kenya. Copyright © 2014 Elsevier Ltd. All rights reserved.
Programming and execution of movement in Parkinson's disease.
Sheridan, M R; Flowers, K A; Hurrell, J
1987-10-01
Programming and execution of arm movements in Parkinson's disease were investigated in choice and simple reaction time (RT) situations in which subjects made aimed movements at a target. A no-aiming condition was also studied. Reaction time was fractionated using surface EMG recording into premotor (central) and motor (peripheral) components. Premotor RT was found to be greater for parkinsonian patients than normal age-matched controls in the simple RT condition, but not in the choice condition. This effect did not depend on the parameters of the impending movement. Thus, paradoxically, parkinsonian patients were not inherently slower at initiating aiming movements from the starting position, but seemed unable to use advance information concerning motor task demands to speed up movement initiation. For both groups, low velocity movements took longer to initiate than high velocity ones. In the no-aiming condition parkinsonian RTs were markedly shorter than when aiming, but were still significantly longer than control RTs. Motor RT was constant across all conditions and was not different for patient and control subjects. In all conditions, parkinsonian movements were around 37% slower than control movements, and their movement times were more variable, the differences showing up early on in the movement, that is, during the initial ballistic phase. The within-subject variability of movement endpoints was also greater in patients. The motor dysfunction displayed in Parkinson's disease involves a number of components: (1) a basic central problem with simply initiating movements, even when minimal programming is required (no-aiming condition); (2) difficulty in maintaining computed forces for motor programs over time (simple RT condition); (3) a basic slowness of movement (bradykinesia) in all conditions; and (4) increased variability of movement in both time and space, presumably caused by inherent variability in force production.
Cancer heterogeneity: origins and implications for genetic association studies
Urbach, Davnah; Lupien, Mathieu; Karagas, Margaret R.; Moore, Jason H.
2012-01-01
Genetic association studies have become standard approaches to characterize the genetic and epigenetic variability associated with cancer development, including predispositions and mutations. However, the bewildering genetic and phenotypic heterogeneity inherent in cancer both magnifies the conceptual and methodological problems associated with these approaches and renders the translation of available genetic information into a knowledge that is both biologically sound and clinically relevant difficult. Here, we elaborate on the underlying causes of this complexity, illustrate why it represents a challenge for genetic association studies, and briefly discuss how it can be reconciled with the ultimate goal of identifying targetable disease pathways and successfully treating individual patients. PMID:22858414
Accounting for and predicting the influence of spatial autocorrelation in water quality modeling
NASA Astrophysics Data System (ADS)
Miralha, L.; Kim, D.
2017-12-01
Although many studies have attempted to investigate the spatial trends of water quality, more attention is yet to be paid to the consequences of considering or ignoring the spatial autocorrelation (SAC) that exists in water quality parameters. Several studies have mentioned the importance of accounting for SAC in water quality modeling, as well as the differences in outcomes between models that account for and ignore SAC. However, the capacity to predict the magnitude of such differences is still ambiguous. In this study, we hypothesized that the SAC inherently possessed by a response variable (i.e., a water quality parameter) influences the outcomes of spatial modeling. We evaluated whether the level of inherent SAC is associated with changes in R-squared (R²), Akaike Information Criterion (AIC), and residual SAC (rSAC) after accounting for SAC during the modeling procedure. The main objective was to analyze whether water quality parameters with higher Moran's I values (a measure of inherent SAC) undergo a greater increase in R² and a greater reduction in both AIC and rSAC. We compared a non-spatial model (OLS) to two spatial regression approaches (spatial lag and error models). Predictor variables were the principal components of topographic (elevation and slope), land cover, and hydrological soil group variables. We acquired these data from federal online sources (e.g., USGS). Ten watersheds were selected, each in a different state of the USA. Results revealed that water quality parameters with higher inherent SAC showed a substantial increase in R² and decrease in rSAC after performing spatial regressions. However, AIC values did not show significant changes. Overall, the higher the level of inherent SAC in water quality variables, the greater the improvement in model performance. This indicates a linear and direct relationship between the spatial model outcomes (R² and rSAC) and the degree of SAC in each water quality variable.
Therefore, our study suggests that the inherent level of SAC in response variables can predict improvements in models even before performing spatial regression approaches. We also recognize the constraints of this research and suggest that further studies focus on better ways of defining spatial neighborhoods, considering the differences among stations set in tributaries near to each other and in upstream areas.
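The inherent SAC of a response variable is measured above by Moran's I, which is straightforward to compute. A minimal sketch with a dense weight matrix (real analyses use sparse, row-standardized neighborhood weights):

```python
def morans_i(values, weights):
    """Global Moran's I of a response variable; weights[i][j] is the spatial
    weight between sites i and j (zero on the diagonal). Positive values
    indicate positive spatial autocorrelation."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_sum = sum(weights[i][j] for i in range(n) for j in range(n))
    return (n / w_sum) * (num / den)
```

On a four-site chain, spatially clustered values score positive and alternating values score negative, matching the intuition that clustered water quality parameters carry high inherent SAC.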
How reliable are ligand-centric methods for Target Fishing?
NASA Astrophysics Data System (ADS)
Peon, Antonio; Dang, Cuong; Ballester, Pedro
2016-04-01
Computational methods for Target Fishing (TF), also known as Target Prediction or Polypharmacology Prediction, can be used to discover new targets for small-molecule drugs. This may result in repositioning the drug in a new indication or improving our current understanding of its efficacy and side effects. While there is a substantial body of research on TF methods, there is still a need to improve their validation, which is often limited to a small part of the available targets and not easily interpretable by the user. Here we discuss how target-centric TF methods are inherently limited by the number of targets that they can possibly predict (this number is by construction much larger in ligand-centric techniques). We also propose a new benchmark to validate TF methods, which is particularly suited to analyse how predictive performance varies with the query molecule. On average over approved drugs, we estimate that only five predicted targets will have to be tested to find two true targets with submicromolar potency (a strong variability in performance is however observed). In addition, we find that an approved drug has currently an average of eight known targets, which reinforces the notion that polypharmacology is a common and strong event. Furthermore, with the assistance of a control group of randomly-selected molecules, we show that the targets of approved drugs are generally harder to predict.
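The headline figure above (about five predicted targets assayed to find two true ones) is a rank-based quantity. A hypothetical helper, not from the paper, that computes it per query molecule:

```python
def assays_needed(ranked_hits, k):
    """Number of top-ranked predicted targets that must be assayed before k
    true targets are confirmed; ranked_hits is a boolean list in rank order.
    Returns None if the ranking contains fewer than k true targets."""
    found = 0
    for i, hit in enumerate(ranked_hits, start=1):
        if hit:
            found += 1
            if found == k:
                return i
    return None
```

Averaging this quantity over a benchmark of query drugs gives exactly the kind of user-interpretable validation statistic the abstract argues for.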
NASA Technical Reports Server (NTRS)
Ludwig, David A.; Convertino, Victor A.; Goldwater, Danielle J.; Sandler, Harold
1987-01-01
Small sample size (n less than 10) and inappropriate analysis of multivariate data have hindered previous attempts to describe which physiologic and demographic variables are most important in determining how long humans can tolerate acceleration. Data from previous centrifuge studies conducted at NASA/Ames Research Center, utilizing a 7-14 d bed rest protocol to simulate weightlessness, were included in the current investigation. After review, data on 25 women and 22 men were available for analysis. Study variables included gender, age, weight, height, percent body fat, resting heart rate, mean arterial pressure, VO2max, and plasma volume. Since the dependent variable was time to greyout (failure), two contemporary biostatistical modeling procedures (proportional hazards and logistic discriminant function) were used to estimate risk, given a particular subject's profile. After adjusting for pre-bed-rest tolerance time, none of the profile variables remained in the risk equation for post-bed-rest greyout. However, prior to bed rest, risk of greyout could be predicted with 91% accuracy. All of the profile variables except weight, MAP, and those related to inherent aerobic capacity (VO2max, percent body fat, resting heart rate) entered the risk equation for pre-bed-rest greyout. A cross-validation using 24 new subjects indicated a very stable model for risk prediction, accurate within 5% of the original equation. The result for the inherent fitness variables is significant in that a consensus as to whether an increased aerobic capacity is beneficial or detrimental has not been satisfactorily established. We conclude that tolerance to +Gz acceleration before and after simulated weightlessness is independent of inherent aerobic fitness.
NASA Astrophysics Data System (ADS)
Kassem, Hussein A.; Chehab, Ghassan R.; Najjar, Shadi S.
2017-08-01
Advanced material characterization of asphalt concrete is essential for realistic and accurate performance prediction of flexible pavements. However, such characterization requires rigorous testing regimes that involve mechanical testing of a large number of laboratory samples at various conditions and set-ups. Advanced measurement instrumentation, together with meticulous data analysis and analytical representation, is also of high importance. These steps, as well as the heterogeneous nature of asphalt concrete (AC), constitute major sources of inherent variability. Thus, it is imperative to model and quantify the variability of the needed asphalt material properties, mainly the linear viscoelastic response functions: the relaxation modulus, E(t), and the creep compliance, D(t). The objective of this paper is to characterize the inherent uncertainty of both E(t) and D(t) over the time domain of their master curves. This is achieved through a probabilistic framework using Monte Carlo simulations and first-order approximations, utilizing E* data for six AC mixes with at least eight replicates per mix. The study shows that the inherent variability, represented by the coefficient of variation (COV), in E(t) and D(t) is low at small reduced times and increases with reduced time. At small reduced times, the COVs of E(t) and D(t) are similar in magnitude; however, differences become significant at large reduced times. Additionally, the probability distributions and COVs of E(t) and D(t) are mix dependent. Finally, a case study is considered in which the inherent uncertainty in D(t) is forward propagated to assess the effect of variability on the predicted number of cycles to fatigue failure of an asphalt mix.
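The growth of COV with reduced time can be illustrated by forward-propagating replicate-to-replicate parameter variability through a sigmoidal master curve. The functional form is a common choice for AC master curves; the parameter values in the test are invented so that the long-time equilibrium plateau varies across replicates while the short-time glassy plateau does not, mimicking the reported trend.

```python
import math

def sigmoid_master_curve(log_t, delta, alpha, beta, gamma):
    """log10 E(t) as a sigmoidal function of log10 reduced time; parameter
    names follow the usual sigmoidal master-curve convention."""
    return delta + alpha / (1.0 + math.exp(beta + gamma * log_t))

def cov_of_modulus(log_t, param_samples):
    """Coefficient of variation of E(t) at one reduced time, propagated by
    Monte Carlo over replicate parameter sets (one per lab replicate)."""
    es = [10.0 ** sigmoid_master_curve(log_t, *p) for p in param_samples]
    n = len(es)
    mean = sum(es) / n
    var = sum((e - mean) ** 2 for e in es) / (n - 1)
    return math.sqrt(var) / mean
```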
Hyperspectral target detection using manifold learning and multiple target spectra
Ziemann, Amanda K.; Theiler, James; Messinger, David W.
2016-03-31
Imagery collected from satellites and airborne platforms provides an important tool for remotely analyzing the content of a scene. In particular, the ability to remotely detect a specific material within a scene is of critical importance in nonproliferation and other applications. The sensor systems that process hyperspectral images collect the high-dimensional spectral information necessary to perform these detection analyses. For a d-dimensional hyperspectral image, however, where d is the number of spectral bands, it is common for the data to inherently occupy an m-dimensional space with m << d. In the remote sensing community, this has led to recent interest in the use of manifold learning, which seeks to characterize the embedded lower-dimensional, nonlinear manifold that the data discretely approximate. The research presented in this paper focuses on a graph theory and manifold learning approach to target detection, using an adaptive version of locally linear embedding that is biased to separate target pixels from background pixels. Finally, this approach incorporates multiple target signatures for a particular material, accounting for the spectral variability that is often present within a solid material of interest.
Advances of the smooth variable structure filter: square-root and two-pass formulations
NASA Astrophysics Data System (ADS)
Gadsden, S. Andrew; Lee, Andrew S.
2017-01-01
The smooth variable structure filter (SVSF) has seen significant development and research activity in recent years. It is based on sliding mode concepts, which utilize a switching gain that brings an inherent amount of stability to the estimation process. In an effort to improve upon the numerical stability of the SVSF, a square-root formulation is derived. The square-root SVSF is based on Potter's algorithm. The proposed formulation is computationally more efficient and reduces the risks of failure due to numerical instability. The new strategy is applied on target tracking scenarios for the purposes of state estimation, and the results are compared with the popular Kalman filter. In addition, the SVSF is reformulated to present a two-pass smoother based on the SVSF gain. The proposed method is applied on an aerospace flight surface actuator, and the results are compared with the Kalman-based two-pass smoother.
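Potter's algorithm, on which the square-root formulation above is based, replaces the covariance update with an update of its square root. The sketch below shows the measurement step in plain Kalman form for a scalar observation; the SVSF combines this square-root machinery with its switching gain, which is not reproduced here.

```python
import numpy as np

def potter_update(x, S, h, R, z):
    """Potter's square-root measurement update for a scalar observation
    z = h @ x + v with var(v) = R. S is any square root of the state
    covariance (P = S @ S.T); propagating S instead of P guarantees the
    implied covariance stays positive semidefinite."""
    phi = S.T @ h                          # measurement in square-root space
    a = 1.0 / (phi @ phi + R)              # innovation precision
    gamma = a / (1.0 + np.sqrt(a * R))     # Potter's scalar
    K = a * (S @ phi)                      # Kalman gain
    x_new = x + K * (z - h @ x)
    S_new = S - gamma * np.outer(S @ phi, phi)
    return x_new, S_new
```

One can verify that S_new @ S_new.T equals the standard posterior covariance P - K h P, which is the algebraic identity Potter's gamma is chosen to satisfy.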
Expanding understanding of optical variability in Lake Superior with a 4-year dataset
NASA Astrophysics Data System (ADS)
Mouw, Colleen B.; Ciochetto, Audrey B.; Grunert, Brice; Yu, Angela
2017-07-01
Lake Superior is one of the largest freshwater lakes on our planet, but few optical observations have been made to allow for the development and validation of visible spectral satellite remote sensing products. The dataset described here focuses on coincidently observing inherent and apparent optical properties along with biogeochemical parameters. Specifically, we observe remote sensing reflectance, absorption, scattering, backscattering, attenuation, chlorophyll concentration, and suspended particulate matter over the ice-free months of 2013-2016. The dataset substantially increases the optical knowledge of the lake. In addition to visible spectral satellite algorithm development, the dataset is valuable for characterizing the variable light field, particle, phytoplankton, and colored dissolved organic matter distributions, and helpful in food web and carbon cycle investigations. The compiled data can be freely accessed at https://seabass.gsfc.nasa.gov/archive/URI/Mouw/LakeSuperior/.
CELL5M: A geospatial database of agricultural indicators for Africa South of the Sahara.
Koo, Jawoo; Cox, Cindy M; Bacou, Melanie; Azzarri, Carlo; Guo, Zhe; Wood-Sichra, Ulrike; Gong, Queenie; You, Liangzhi
2016-01-01
Recent progress in large-scale georeferenced data collection is widening opportunities for combining multi-disciplinary datasets from biophysical to socioeconomic domains, advancing our analytical and modeling capacity. Granular spatial datasets provide critical information necessary for decision makers to identify target areas, assess baseline conditions, prioritize investment options, set goals and targets, and monitor impacts. However, key challenges in reconciling data across themes, scales, and borders restrict our capacity to produce global and regional maps and time series. This paper provides the overview, structure, and coverage of CELL5M, an open-access database of geospatial indicators at 5 arc-minute grid resolution, and introduces a range of analytical applications and use cases. CELL5M covers a wide set of agriculture-relevant domains for all countries in Africa South of the Sahara and supports our understanding of the multi-dimensional spatial variability inherent in farming landscapes throughout the region.
Reducing inherent biases introduced during DNA viral metagenome analyses of municipal wastewater
Metagenomics is a powerful tool for characterizing viral composition within environmental samples, but sample and molecular processing steps can bias the estimation of viral community structure. The objective of this study is to understand the inherent variability introduced when...
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.
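The core idea of separating parametric from inherent variability can be illustrated with the law of total variance, Var(X) = Var_b(E[X|b]) + E_b[Var(X|b)]. The sketch below is a simplified nested Monte Carlo estimate for a birth-death process, not the authors' Sobol/random-time-change algorithm; the rates, time horizon, and Uniform prior on the birth rate are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def birth_death_ssa(b, d, x0=10, t_end=5.0, rng=rng):
    """Gillespie simulation of a birth-death process: birth at rate b, death at rate d*x."""
    x, t = x0, 0.0
    while True:
        rates = np.array([b, d * x])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return x
        if rng.random() < rates[0] / total:
            x += 1
        else:
            x -= 1

# Uncertain birth rate b ~ Uniform(1, 3); death rate d fixed at 0.1.
n_outer, n_inner = 200, 50
cond_means = np.empty(n_outer)
cond_vars = np.empty(n_outer)
for i in range(n_outer):
    b = rng.uniform(1.0, 3.0)
    samples = np.array([birth_death_ssa(b, 0.1) for _ in range(n_inner)])
    cond_means[i] = samples.mean()
    cond_vars[i] = samples.var()

var_parametric = cond_means.var()  # Var_b( E[X | b] ): parametric source
var_inherent = cond_vars.mean()    # E_b( Var[X | b] ): inherent stochasticity
total = var_parametric + var_inherent
print(f"parametric share: {var_parametric / total:.2f}")
print(f"inherent share:   {var_inherent / total:.2f}")
```

Note that the inner-loop Monte Carlo noise slightly inflates the parametric term; the paper's Sobol-based estimators are designed to handle such interactions more carefully.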
Effect of retransmission and retrodiction on estimation and fusion in long-haul sensor networks
Liu, Qiang; Wang, Xin; Rao, Nageswara S. V.; ...
2016-01-01
In a long-haul sensor network, sensors are remotely deployed over a large geographical area to perform certain tasks, such as target tracking. In this work, we study the scenario where sensors take measurements of one or more dynamic targets and send state estimates of the targets to a fusion center via satellite links. The severe loss and delay inherent over the satellite channels reduce the number of estimates successfully arriving at the fusion center, thereby limiting the potential fusion gain and resulting in suboptimal accuracy performance of the fused estimates. In addition, errors in target-sensor data association can also degrade the estimation performance. To mitigate the effect of imperfect communications on state estimation and fusion, we consider retransmission and retrodiction. The system adopts certain retransmission-based transport protocols so that lost messages can be recovered over time. In addition, retrodiction/smoothing techniques are applied so that the chances of incurring excess delay due to retransmission are greatly reduced. We analyze the extent to which retransmission and retrodiction can improve the performance of delay-sensitive target tracking tasks under variable communication loss and delay conditions. Lastly, simulation results of a ballistic target tracking application demonstrate the validity of our analysis.
Development and application of a microarray meter tool to optimize microarray experiments
Rouse, Richard JD; Field, Katrine; Lapira, Jennifer; Lee, Allen; Wick, Ivan; Eckhardt, Colleen; Bhasker, C Ramana; Soverchia, Laura; Hardiman, Gary
2008-01-01
Background: Successful microarray experimentation requires a complex interplay between the slide chemistry, the printing pins, the nucleic acid probes and targets, and the hybridization milieu. Optimization of these parameters and a careful evaluation of emerging slide chemistries are a prerequisite to any large scale array fabrication effort. We have developed a 'microarray meter' tool which assesses the inherent variations associated with microarray measurement prior to embarking on large scale projects. Findings: The microarray meter consists of nucleic acid targets (reference and dynamic range control) and probe components. Different plate designs containing identical probe material were formulated to accommodate different robotic and pin designs. We examined the variability in probe quality and quantity (as judged by the amount of DNA printed and remaining post-hybridization) using three robots equipped with capillary printing pins. Discussion: The generation of microarray data with minimal variation requires consistent quality control of the (DNA microarray) manufacturing and experimental processes. Spot reproducibility is a measure primarily of the variations associated with printing. The microarray meter assesses array quality by measuring the DNA content for every feature. It provides a post-hybridization analysis of array quality by scoring probe performance using three metrics: a) a measure of variability in the signal intensities, b) a measure of the signal dynamic range and c) a measure of variability of the spot morphologies. PMID:18710498
A New Variable Weighting and Selection Procedure for K-Means Cluster Analysis
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2008-01-01
A variance-to-range ratio variable weighting procedure is proposed. We show how this weighting method is theoretically grounded in the inherent variability found in data exhibiting cluster structure. In addition, a variable selection procedure is proposed to operate in conjunction with the variable weighting technique. The performances of these…
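The weighting idea can be sketched in a few lines: upweight variables whose variability reflects cluster structure before running k-means. The exact functional form below (variance divided by squared range) and the toy data are illustrative assumptions, not necessarily Steinley and Brusco's formula.

```python
import numpy as np

rng = np.random.default_rng(1)

def vr_weights(X):
    """Per-variable weights from a variance-to-range ratio (illustrative form)."""
    var = X.var(axis=0)
    span = X.max(axis=0) - X.min(axis=0)
    w = var / span**2  # large when values pile up at the ends of the range (clusterable)
    return w / w.sum()

def kmeans(X, k=2, iters=50, rng=rng):
    """Plain Lloyd's algorithm, keeping a center in place if its cluster empties."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any() else centers[j]
                            for j in range(k)])
    return labels

# Two clusters separated on variable 0; variable 1 is high-variance pure noise.
X = np.vstack([rng.normal([0, 0], [0.5, 3.0], (50, 2)),
               rng.normal([5, 0], [0.5, 3.0], (50, 2))])
w = vr_weights(X)
labels = kmeans(X * np.sqrt(w), k=2)
```

Scaling each column by the square root of its weight makes weighted Euclidean distance equal to ordinary distance in the transformed space, which is why the weights can be applied before, rather than inside, the clustering routine.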
Gcn4-Mediator Specificity Is Mediated by a Large and Dynamic Fuzzy Protein-Protein Complex.
Tuttle, Lisa M; Pacheco, Derek; Warfield, Linda; Luo, Jie; Ranish, Jeff; Hahn, Steven; Klevit, Rachel E
2018-03-20
Transcription activation domains (ADs) are inherently disordered proteins that often target multiple coactivator complexes, but the specificity of these interactions is not understood. Efficient transcription activation by yeast Gcn4 requires its tandem ADs and four activator-binding domains (ABDs) on its target, the Mediator subunit Med15. Multiple ABDs are a common feature of coactivator complexes. We find that the large Gcn4-Med15 complex is heterogeneous and contains nearly all possible AD-ABD interactions. Gcn4-Med15 forms via a dynamic fuzzy protein-protein interface, where ADs bind the ABDs in multiple orientations via hydrophobic regions that gain helicity. This combinatorial mechanism allows individual low-affinity and specificity interactions to generate a biologically functional, specific, and higher affinity complex despite lacking a defined protein-protein interface. This binding strategy is likely representative of many activators that target multiple coactivators, as it allows great flexibility in combinations of activators that can cooperate to regulate genes with variable coactivator requirements. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Kager, Simone; Budhota, Aamani; Deshmukh, Vishwanath A.; Kuah, Christopher W. K.; Yam, Lester H. L.; Xiang, Liming; Chua, Karen S. G.; Masia, Lorenzo; Campolo, Domenico
2017-01-01
Proprioception is a critical component for motor functions and directly affects motor learning after neurological injuries. Conventional methods for its assessment are generally ordinal in nature and hence lack sensitivity. Robotic devices designed to promote sensorimotor learning can potentially provide quantitative, precise, accurate, and reliable assessments of sensory impairments. In this paper, we investigate the clinical applicability and validity of using a planar 2 degrees of freedom robot to quantitatively assess proprioceptive deficits in post-stroke participants. Nine stroke survivors and nine healthy subjects participated in the study. Participants’ hand was passively moved to the target position guided by the H-Man robot (Criterion movement) and were asked to indicate during a second passive movement towards the same target (Matching movement) when they felt that they matched the target position. The assessment was carried out on a planar surface for movements in the forward and oblique directions in the contralateral and ipsilateral sides of the tested arm. The matching performance was evaluated in terms of error magnitude (absolute and signed) and its variability. Stroke patients showed higher variability in the estimation of the target position compared to the healthy participants. Further, an effect of target was found, with lower absolute errors in the contralateral side. Pairwise comparison between individual stroke participants and control participants showed significant proprioceptive deficits in two patients. The proposed assessment of passive joint position sense was inherently simple and all participants, regardless of motor impairment level, could complete it in less than 10 minutes. Therefore, the method can potentially be carried out to detect changes in proprioceptive deficits in clinical settings. PMID:29161264
M Dwarf Flares: Exoplanet Detection Implications
NASA Astrophysics Data System (ADS)
Tofflemire, B. M.; Wisniewski, J. P.; Hilton, E. J.; Kowalski, A. F.; Kundurthy, P.; Schmidt, S. J.; Hawley, S. L.; Holtzman, J. A.
2011-12-01
Low mass stars such as M dwarfs have become prime targets for exoplanet transit searches as their low luminosities and small stellar radii could enable the detection of super-Earths residing in their habitable zones. While promising transit targets, M dwarfs are also inherently variable and can exhibit up to ˜6 magnitude flux enhancements in the optical U-band. This is significantly higher than the predicted transit depths of habitable zone super-Earths (0.005 magnitude flux decrease). The behavior of flares at infrared (IR) wavelengths, particularly those likely to be used to study and characterize M dwarf exoplanets using facilities such as the James Webb Space Telescope (JWST), remains largely unknown. To address these uncertainties, we are executing a coordinated, contemporaneous monitoring program of the optical and IR flux of M dwarfs known to regularly flare. A suite of telescopes located at the Kitt Peak National Observatory and the Apache Point Observatory are used for the observations. We present the initial results of this program.
Sun, Lan; Irudayaraj, Joseph
2009-01-01
We demonstrate a surface enhanced Raman spectroscopy (SERS) based array platform to monitor gene expression in cancer cells in a multiplex and quantitative format without amplification steps. A strategy comprising DNA/RNA hybridization, S1 nuclease digestion, and alkaline hydrolysis was adopted to obtain DNA targets specific to two splice junction variants Δ(9, 10) and Δ(5) of the breast cancer susceptibility gene 1 (BRCA1) from MCF-7 and MDA-MB-231 breast cancer cell lines. These two targets were identified simultaneously and their absolute quantities were estimated by a SERS strategy utilizing the inherent plasmon-phonon Raman mode of gold nanoparticle probes as a self-referencing standard to correct for variability in surface enhancement. Results were then validated by reverse transcription PCR (RT-PCR). Our proposed methodology could be expanded to a higher level of multiplexing for quantitative gene expression analysis of any gene without any amplification steps. PMID:19780515
Yuen, Garmen; Khan, Fehad J.; Gao, Shaojian; Stommel, Jayne M.; Batchelor, Eric; Wu, Xiaolin
2017-01-01
CRISPR/Cas9 is a powerful gene editing tool for gene knockout studies and functional genomic screens. Successful implementation of CRISPR often requires Cas9 to elicit efficient target knockout in a population of cells. In this study, we investigated the role of several key factors, including variation in target copy number, inherent potency of sgRNA guides, and expression level of Cas9 and sgRNA, in determining CRISPR knockout efficiency. Using isogenic, clonal cell lines with variable copy numbers of an EGFP transgene, we discovered that CRISPR knockout is relatively insensitive to target copy number, but is highly dependent on the potency of the sgRNA guide sequence. Kinetic analysis revealed that most target mutation occurs between 5 and 10 days following Cas9/sgRNA transduction, while sgRNAs with different potencies differ by their knockout time course and by their terminal-phase knockout efficiency. We showed that prolonged, low level expression of Cas9 and sgRNA often fails to elicit target mutation, particularly if the potency of the sgRNA is also low. Our findings provide new insights into the behavior of CRISPR/Cas9 in mammalian cells that could be used for future improvement of this platform. PMID:29036671
Quantifying Forest Soil Physical Variables Potentially Important for Site Growth Analyses
John S. Kush; Douglas G. Pitt; Phillip J. Craul; William D. Boyer
2004-01-01
Accurate mean plot values of forest soil factors are required for use as independent variables in site-growth analyses. Adequate accuracy is often difficult to attain because soils are inherently widely variable. Estimates of the variability of appropriate soil factors influencing growth can be used to determine the sampling intensity required to secure accurate mean...
Arctic sea ice trends, variability and implications for seasonal ice forecasting
Serreze, Mark C.; Stroeve, Julienne
2015-01-01
September Arctic sea ice extent over the period of satellite observations has a strong downward trend, accompanied by pronounced interannual variability with a detrended 1 year lag autocorrelation of essentially zero. We argue that through a combination of thinning and associated processes related to a warming climate (a stronger albedo feedback, a longer melt season, the lack of especially cold winters) the downward trend itself is steepening. The lack of autocorrelation manifests both the inherent large variability in summer atmospheric circulation patterns and that oceanic heat loss in winter acts as a negative (stabilizing) feedback, albeit insufficient to counter the steepening trend. These findings have implications for seasonal ice forecasting. In particular, while advances in observing sea ice thickness and assimilating thickness into coupled forecast systems have improved forecast skill, there remains an inherent limit to predictability owing to the largely chaotic nature of atmospheric variability. PMID:26032315
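The detrended 1 year lag autocorrelation described above is straightforward to compute. The sketch below uses a simulated series with a linear downward trend and white-noise interannual variability, standing in for the satellite record, which is not reproduced here.

```python
import numpy as np

def detrended_lag1_autocorr(y):
    """Remove a linear trend, then compute the lag-1 autocorrelation of the residuals."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    r = y - (slope * t + intercept)
    r = r - r.mean()
    return (r[:-1] * r[1:]).sum() / (r * r).sum()

# Synthetic September extent (million km^2): downward trend plus white noise,
# mimicking the near-zero autocorrelation the abstract describes.
rng = np.random.default_rng(2)
t = np.arange(37)  # roughly the length of the satellite era
extent = 7.5 - 0.08 * t + rng.normal(0, 0.4, t.size)
rho = detrended_lag1_autocorr(extent)
print(f"detrended lag-1 autocorrelation: {rho:.2f}")
```

An essentially zero value of rho is what makes the previous September a poor predictor of the next, which is the forecasting limit the authors emphasize.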
Intelligibility of clear speech: effect of instruction.
Lam, Jennifer; Tjaden, Kris
2013-10-01
The authors investigated how clear speech instructions influence sentence intelligibility. Twelve speakers produced sentences in habitual, clear, hearing impaired, and overenunciate conditions. Stimuli were amplitude normalized and mixed with multitalker babble for orthographic transcription by 40 listeners. The main analysis investigated percentage-correct intelligibility scores as a function of the 4 conditions and speaker sex. Additional analyses included listener response variability, individual speaker trends, and an alternate intelligibility measure: proportion of content words correct. Relative to the habitual condition, the overenunciate condition was associated with the greatest intelligibility benefit, followed by the hearing impaired and clear conditions. Ten speakers followed this trend. The results indicated different patterns of clear speech benefit for male and female speakers. Greater listener variability was observed for speakers with inherently low habitual intelligibility compared to speakers with inherently high habitual intelligibility. Stable proportions of content words were observed across conditions. Clear speech instructions affected the magnitude of the intelligibility benefit. The instruction to overenunciate may be most effective in clear speech training programs. The findings may help explain the range of clear speech intelligibility benefit previously reported. Listener variability analyses suggested the importance of obtaining multiple listener judgments of intelligibility, especially for speakers with inherently low habitual intelligibility.
The Use of Race-Related Variables in Counseling Research
ERIC Educational Resources Information Center
Strom, Thad Q.; Lee, D. John; Trahan, Emily; Kaufman, Aimee; Pritchett, Tiffany
2009-01-01
This study provides a detailed analysis of all race-related articles published in prominent counseling journals between 1995 and 2004. Findings indicate that 75% of articles did not define race variables and in the absence of an operational definition, authors tended to conceptualize race as an inherent biological variable. (Contains 3 tables.)
USDA-ARS?s Scientific Manuscript database
In vitro neutral detergent fiber (NDF) digestibility (NDFD) is an empirical measurement used to describe fermentability of NDF by rumen microbes. Variability is inherent in assays and affects the precision that can be expected for replicated samples. The study objective was to evaluate variability w...
A linear-encoding model explains the variability of the target morphology in regeneration
Lobo, Daniel; Solano, Mauricio; Bubenik, George A.; Levin, Michael
2014-01-01
A fundamental assumption of today's molecular genetics paradigm is that complex morphology emerges from the combined activity of low-level processes involving proteins and nucleic acids. An inherent characteristic of such nonlinear encodings is the difficulty of creating the genetic and epigenetic information that will produce a given self-assembling complex morphology. This ‘inverse problem’ is vital not only for understanding the evolution, development and regeneration of bodyplans, but also for synthetic biology efforts that seek to engineer biological shapes. Importantly, the regenerative mechanisms in deer antlers, planarian worms and fiddler crabs can solve an inverse problem: their target morphology can be altered specifically and stably by injuries in particular locations. Here, we discuss the class of models that use pre-specified morphological goal states and propose the existence of a linear encoding of the target morphology, making the inverse problem easy for these organisms to solve. Indeed, many model organisms such as Drosophila, hydra and Xenopus also develop according to nonlinear encodings producing linear encodings of their final morphologies. We propose the development of testable models of regeneration regulation that combine emergence with a top-down specification of shape by linear encodings of target morphology, driving transformative applications in biomedicine and synthetic bioengineering. PMID:24402915
Weather variability and adaptive management for rangeland restoration
USDA-ARS?s Scientific Manuscript database
Inherent weather variability in upland rangeland systems requires relatively long-term goal setting, and contingency planning for partial success or failure in any given year. Rangeland plant communities are dynamic systems and successional planning is essential for achieving and maintaining system...
Targeted and non-targeted detection of lemon juice adulteration by LC-MS and chemometrics.
Wang, Zhengfang; Jablonski, Joseph E
2016-01-01
Economically motivated adulteration (EMA) of lemon juice was detected by LC-MS and principal component analysis (PCA). Twenty-two batches of freshly squeezed lemon juice were adulterated by adding an aqueous solution containing 5% citric acid and 6% sucrose to pure lemon juice to obtain 30%, 60% and 100% lemon juice samples. Their total titratable acidities, °Brix and pH values were measured, and then all the lemon juice samples were subject to LC-MS analysis. Concentrations of hesperidin and eriocitrin, major phenolic components of lemon juice, were quantified. The PCA score plots for LC-MS datasets were used to preview the classification of pure and adulterated lemon juice samples. Results showed a large inherent variability in the chemical properties among 22 batches of 100% lemon juice samples. Measurement or quantitation of one or several chemical properties (targeted detection) was not effective in detecting lemon juice adulteration. However, by using the LC-MS datasets, including both chromatographic and mass spectrometric information, 100% lemon juice samples were successfully differentiated from adulterated samples containing 30% lemon juice in the PCA score plot. LC-MS coupled with chemometric analysis can be a complement to existing methods for detecting juice adulteration.
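The non-targeted step, running PCA on the full feature matrix rather than one or two marker compounds, can be sketched with synthetic data. The feature intensities below are simulated stand-ins for LC-MS peak tables, not the study's measurements.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Center the data and project onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(3)
n_features = 200
pure_profile = rng.lognormal(0, 1, n_features)  # mean feature intensities, "100% juice"
batch_noise = 0.15                              # large batch-to-batch variability

# 22 batches of pure juice, and 22 adulterated to 30% juice plus a small
# adulterant signature on the first 10 features.
pure = pure_profile * rng.lognormal(0, batch_noise, (22, n_features))
adulterant = np.zeros(n_features)
adulterant[:10] = pure_profile[:10].mean()
adulterated = (0.3 * pure_profile + adulterant) * rng.lognormal(0, batch_noise, (22, n_features))

scores = pca_scores(np.vstack([pure, adulterated]))
gap = scores[:22, 0].mean() - scores[22:, 0].mean()
print(f"PC1 separation between group means: {abs(gap):.1f}")
```

Even with substantial batch-to-batch variability, the dilution shifts every feature at once, so the groups separate along the first principal component, mirroring the score-plot separation reported for the 30% samples.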
ASSESSING THE RISKS OF NON-TARGET TERRESTRIAL PLANTS FROM HERBICIDES
Use of chemical herbicides to reduce weed competition is a major contributing factor to the high productivity of conventional intensive agricultural cropping systems. However, because of their inherent phytotoxicity, movement of herbicides from target crops and soils can adverse...
Ramasesha, Krupa; De Marco, Luigi; Horning, Andrew D; Mandal, Aritra; Tokmakoff, Andrei
2012-04-07
We present an approach for calculating nonlinear spectroscopic observables, which overcomes the approximations inherent to current phenomenological models without requiring the computational cost of performing molecular dynamics simulations. The trajectory mapping method uses the semi-classical approximation to linear and nonlinear response functions, and calculates spectra from trajectories of the system's transition frequencies and transition dipole moments. It rests on identifying dynamical variables important to the problem, treating the dynamics of these variables stochastically, and then generating correlated trajectories of spectroscopic quantities by mapping from the dynamical variables. This approach allows one to describe non-Gaussian dynamics, correlated dynamics between variables of the system, and nonlinear relationships between spectroscopic variables of the system and the bath such as non-Condon effects. We illustrate the approach by applying it to three examples that are often not adequately treated by existing analytical models: the non-Condon effect in the nonlinear infrared spectra of water, non-Gaussian dynamics inherent to strongly hydrogen bonded systems, and chemical exchange processes in barrier crossing reactions. The methods described are generally applicable to nonlinear spectroscopy throughout the optical, infrared and terahertz regions.
RNAi control of aflatoxins in peanut plants, a multifactorial system
USDA-ARS?s Scientific Manuscript database
RNA-interference (RNAi)-mediated control of aflatoxin contamination in peanut plants is a multifactorial and hyper variable system. The use of RNAi biotechnology to silence single genes in plants has inherently high-variability among transgenic events. Also the level of expression of small interfe...
The Challenge of Reproducibility and Accuracy in Nutrition Research: Resources and Pitfalls
Kuszak, Adam J; Williamson, John S; Hopp, D Craig; Betz, Joseph M
2016-01-01
Inconsistent and contradictory results from nutrition studies conducted by different investigators continue to emerge, in part because of the inherent variability of natural products, as well as the unknown and therefore uncontrolled variables in study populations and experimental designs. Given these challenges inherent in nutrition research, it is critical for the progress of the field that researchers strive to minimize variability within studies and enhance comparability between studies by optimizing the characterization, control, and reporting of products, reagents, and model systems used, as well as the rigor and reporting of experimental designs, protocols, and data analysis. Here we describe some recent developments relevant to research on plant-derived products used in nutrition research, highlight some resources for optimizing the characterization and reporting of research using these products, and describe some of the pitfalls that may be avoided by adherence to these recommendations. PMID:26980822
Selimkhanov, Jangir; Thompson, W. Clayton; Guo, Juen; Hall, Kevin D.; Musante, Cynthia J.
2017-01-01
The design of well-powered in vivo preclinical studies is a key element in building knowledge of disease physiology for the purpose of identifying and effectively testing potential anti-obesity drug targets. However, as a result of the complexity of the obese phenotype, there is limited understanding of the variability within and between study animals of macroscopic endpoints such as food intake and body composition. This, combined with limitations inherent in the measurement of certain endpoints, presents challenges to study design that can have significant consequences for an anti-obesity program. Here, we analyze a large, longitudinal study of mouse food intake and body composition during diet perturbation to quantify the variability and interaction of key metabolic endpoints. To demonstrate how conclusions can change as a function of study size, we show that a simulated pre-clinical study properly powered for one endpoint may lead to false conclusions based on secondary endpoints. We then propose guidelines for endpoint selection and study size estimation under different conditions to facilitate proper power calculation for a more successful in vivo study design. PMID:28392555
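The paper's warning, that a study properly powered for one endpoint may be underpowered for another, can be reproduced with a toy power simulation. The effect size and the two endpoint standard deviations below are invented for illustration, not taken from the mouse dataset.

```python
import numpy as np

rng = np.random.default_rng(6)

def power_mc(n, delta, sigma, sims=4000, alpha_z=1.96):
    """Monte Carlo power of a two-group comparison with true difference `delta`
    and per-animal standard deviation `sigma` (normal-approximation z-test)."""
    a = rng.normal(0.0, sigma, (sims, n))
    b = rng.normal(delta, sigma, (sims, n))
    se = np.sqrt(a.var(1, ddof=1) / n + b.var(1, ddof=1) / n)
    z = (b.mean(1) - a.mean(1)) / se
    return (np.abs(z) > alpha_z).mean()

# Same group size and same raw effect, but one endpoint is much noisier.
n = 10
p_primary = power_mc(n, delta=2.0, sigma=1.5)   # e.g. body composition endpoint
p_secondary = power_mc(n, delta=2.0, sigma=4.0) # e.g. food intake endpoint
print(f"power, primary endpoint:   {p_primary:.2f}")
print(f"power, secondary endpoint: {p_secondary:.2f}")
```

With ten animals per group, the low-variability endpoint is well powered while the noisy one is not, so a null result on the secondary endpoint would be exactly the kind of false conclusion the abstract cautions against.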
Matano, Francesca; Sambucini, Valeria
2016-11-01
In phase II single-arm studies, the response rate of the experimental treatment is typically compared with a fixed target value that should ideally represent the true response rate for the standard of care therapy. Generally, this target value is estimated through previous data, but the inherent variability in the historical response rate is not taken into account. In this paper, we present a Bayesian procedure to construct single-arm two-stage designs that allows to incorporate uncertainty in the response rate of the standard treatment. In both stages, the sample size determination criterion is based on the concepts of conditional and predictive Bayesian power functions. Different kinds of prior distributions, which play different roles in the designs, are introduced, and some guidelines for their elicitation are described. Finally, some numerical results about the performance of the designs are provided and a real data example is illustrated. Copyright © 2016 John Wiley & Sons, Ltd.
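The idea of propagating uncertainty in the historical control rate can be sketched with a Monte Carlo single-stage calculation. The two-stage machinery and the exact criteria in the paper are more involved; the Beta prior, decision cutoff, and effect size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def power_fixed(p1, n, cutoff, sims=20000):
    """P(responses >= cutoff) when the true experimental rate is exactly p1."""
    return (rng.binomial(n, p1, sims) >= cutoff).mean()

def power_predictive(a, b, delta, n, cutoff, sims=20000):
    """Average power when the standard rate is uncertain: p0 ~ Beta(a, b),
    and the experimental treatment adds an absolute improvement `delta`."""
    p0 = rng.beta(a, b, sims)
    p1 = np.clip(p0 + delta, 0.0, 1.0)
    return (rng.binomial(n, p1) >= cutoff).mean()

# Historical data: 20 responders out of 100 -> Beta(20, 80) prior, mean 0.20.
n, cutoff, delta = 40, 13, 0.15  # declare success if >= 13/40 respond
fixed = power_fixed(0.20 + delta, n, cutoff)
pred = power_predictive(20, 80, delta, n, cutoff)
print(f"power at fixed p0 = 0.20:        {fixed:.2f}")
print(f"predictive power, p0 uncertain:  {pred:.2f}")
```

Averaging over the prior rather than plugging in a point estimate is what the abstract means by incorporating the variability of the historical response rate into the design.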
A Personalized Approach to Managing Inflammatory Bowel Disease
Kingsley, Michael J.
2016-01-01
The management of inflammatory bowel disease (IBD) requires a personalized approach to treat what is a heterogeneous group of patients with inherently variable disease courses. In its current state, personalized care of the IBD patient involves identifying patients at high risk for rapid progression to complications, selecting the most appropriate therapy for a given patient, using therapeutic drug monitoring, and achieving the individualized goal that is most appropriate for that patient. The growing body of research in this area allows clinicians to better predict outcomes for individual patients. Some paradigms, especially within the realm of therapeutic drug monitoring, have begun to change as therapy is targeted to individual patient results and goals. Future personalized medical decisions may allow specific therapeutic plans to draw on serologic, genetic, and microbial data for Crohn’s disease and ulcerative colitis patients. PMID:27499713
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
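A minimal version of such a simulation can be written in a few lines. The sketch below assumes momentary time sampling scores the instant at the end of each interval and partial-interval recording scores any overlap; the observation length, interval width, and event parameters are arbitrary, and the program is far simpler than the study's:

```python
import random

def simulate(obs_len=600.0, interval=10.0, n_events=20, event_len=3.0, seed=1):
    """Place fixed-length events at random and score them with two methods."""
    rng = random.Random(seed)
    starts = sorted(rng.uniform(0, obs_len - event_len) for _ in range(n_events))
    events = [(s, s + event_len) for s in starts]

    def occurring(t):          # is any event in progress at instant t?
        return any(s <= t < e for s, e in events)

    def overlaps(lo, hi):      # does any event touch the interval [lo, hi)?
        return any(s < hi and e > lo for s, e in events)

    n_int = int(obs_len / interval)
    # Momentary time sampling: score the instant at the end of each interval.
    mts = sum(occurring((i + 1) * interval) for i in range(n_int)) / n_int
    # Partial-interval recording: score if the event touches the interval at all.
    pir = sum(overlaps(i * interval, (i + 1) * interval) for i in range(n_int)) / n_int

    true_prop = sum(e - s for s, e in events) / obs_len  # ignores overlap; sketch only
    return true_prop, mts, pir

true_prop, mts, pir = simulate()
print(f"true proportion ~{true_prop:.2f}, MTS ~{mts:.2f}, PIR ~{pir:.2f}")
```

Even this toy run reproduces the textbook pattern: partial-interval recording overestimates event duration relative to momentary time sampling, whose estimate sits near the true proportion.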
Levy, Oren; Zhao, Weian; Mortensen, Luke J; Leblanc, Sarah; Tsang, Kyle; Fu, Moyu; Phillips, Joseph A; Sagar, Vinay; Anandakumaran, Priya; Ngai, Jessica; Cui, Cheryl H; Eimon, Peter; Angel, Matthew; Lin, Charles P; Yanik, Mehmet Fatih; Karp, Jeffrey M
2013-10-03
Mesenchymal stem cells (MSCs) are promising candidates for cell-based therapy to treat several diseases and are compelling to consider as vehicles for delivery of biological agents. However, MSCs appear to act through a seemingly limited "hit-and-run" mode to quickly exert their therapeutic impact, mediated by several mechanisms, including a potent immunomodulatory secretome. Furthermore, MSC immunomodulatory properties are highly variable and the secretome composition following infusion is uncertain. To determine whether a transiently controlled antiinflammatory MSC secretome could be achieved at target sites of inflammation, we harnessed mRNA transfection to generate MSCs that simultaneously express functional rolling machinery (P-selectin glycoprotein ligand-1 [PSGL-1] and Sialyl-Lewis(x) [SLeX]) to rapidly target inflamed tissues and that express the potent immunosuppressive cytokine interleukin-10 (IL-10), which is not inherently produced by MSCs. Indeed, triple-transfected PSGL-1/SLeX/IL-10 MSCs transiently increased levels of IL-10 in the inflamed ear and showed a superior antiinflammatory effect in vivo, significantly reducing local inflammation following systemic administration. This was dependent on rapid localization of MSCs to the inflamed site. Overall, this study demonstrates that despite the rapid clearance of MSCs in vivo, engineered MSCs can be harnessed via a "hit-and-run" action for the targeted delivery of potent immunomodulatory factors to treat distant sites of inflammation.
Michell L. Thomey
2012-01-01
Although the Earth's climate system has always been inherently variable, the magnitude and rate of anthropogenic climate change is subjecting ecosystems and the populations that they contain to novel environmental conditions. Because water is the most limiting resource, arid-semiarid ecosystems are likely to be highly responsive to future climate variability. The...
Learning a common dictionary for subject-transfer decoding with resting calibration.
Morioka, Hiroshi; Kanemura, Atsunori; Hirayama, Jun-ichiro; Shikauchi, Manabu; Ogawa, Takeshi; Ikeda, Shigeyuki; Kawanabe, Motoaki; Ishii, Shin
2015-05-01
Brain signals measured over a series of experiments have inherent variability because of different physical and mental conditions among multiple subjects and sessions. Such variability complicates the analysis of data from multiple subjects and sessions in a consistent way, and degrades the performance of subject-transfer decoding in a brain-machine interface (BMI). To accommodate the variability in brain signals, we propose 1) a method for extracting spatial bases (or a dictionary) shared by multiple subjects, by employing a signal-processing technique of dictionary learning modified to compensate for variations between subjects and sessions, and 2) an approach to subject-transfer decoding that uses the resting-state activity of a previously unseen target subject as calibration data for compensating for variations, eliminating the need for a standard calibration based on task sessions. Applying our methodology to a dataset of electroencephalography (EEG) recordings during a selective visual-spatial attention task from multiple subjects and sessions, where the variability compensation was essential for reducing the redundancy of the dictionary, we found that the extracted common brain activities were reasonable in the light of neuroscience knowledge. The applicability to subject-transfer decoding was confirmed by improved performance over existing decoding methods. These results suggest that analyzing multisubject brain activities on common bases by the proposed method enables information sharing across subjects with low-burden resting calibration, and is effective for practical use of BMI in variable environments. Copyright © 2015 Elsevier Inc. All rights reserved.
Effect of inherent location uncertainty on detection of stationary targets in noisy image sequences.
Manjeshwar, R M; Wilson, D L
2001-01-01
The effect of inherent location uncertainty on the detection of stationary targets was determined in noisy image sequences. Targets were thick and thin projected cylinders mimicking arteries, catheters, and guide wires in medical x-ray fluoroscopy. With the use of an adaptive forced-choice method, detection contrast sensitivity (the inverse of contrast) was measured both with and without marker cues that directed the attention of observers to the target location. With the probability correct clamped at 80%, contrast sensitivity increased an average of 77% when the marker was added to the thin-cylinder target. There was an insignificant effect on the thick cylinder. The large enhancement with the thin cylinder was obtained even though the target was located exactly in the center of a small panel, giving observers the impression that it was well localized. Psychometric functions consisting of d' plotted as a function of the square root of the signal-energy-to-noise ratio gave a positive x intercept for the case of the thin cylinder without a marker. This x intercept, characteristic of uncertainty in other types of detection experiments, disappeared when the marker was added or when the thick cylinder was used. Inherent location uncertainty was further characterized by using four different markers with varying proximity to the target. Visual detection by human observers increased monotonically as the markers better localized the target. Human performance was modeled as a matched-filter detector with an uncertainty in the placement of the template. The removal of a location cue was modeled by introducing a location uncertainty of approximately 0.4 mm on the display device, or only 7 µm on the retina, a size on the order of a single photoreceptor field. We conclude that detection is affected by target location uncertainty on the order of cellular dimensions, an observation with important implications for detection mechanisms in humans. In medical imaging, the results argue strongly for inclusion of high-contrast visualization markers on catheters and other interventional devices.
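The x-intercept analysis described above amounts to fitting a line to d' versus the square root of the signal-energy-to-noise ratio and reading off where it crosses zero. The d' values below are invented solely to show the fitting step, with the uncued condition given a shifted (positive-intercept) line:

```python
def line_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical psychometric data: d' vs sqrt(signal-energy-to-noise ratio).
snr_sqrt = [1.0, 1.5, 2.0, 2.5, 3.0]
dprime_no_cue = [0.4, 1.2, 2.0, 2.8, 3.6]  # shifted: location uncertainty
dprime_cue = [1.2, 1.8, 2.4, 3.0, 3.6]     # passes near the origin: cued

m1, b1 = line_fit(snr_sqrt, dprime_no_cue)
m2, b2 = line_fit(snr_sqrt, dprime_cue)
print(f"x-intercept without cue: {-b1 / m1:.2f}")  # positive: uncertainty
print(f"x-intercept with cue:    {-b2 / m2:.2f}")  # near zero
```

The positive x-intercept in the uncued condition is the signature of location uncertainty the study reports; adding the marker cue drives it toward zero.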
Day, Ryan; Joo, Hyun; Chavan, Archana; Lennox, Kristin P.; Chen, Ann; Dahl, David B.; Vannucci, Marina; Tsai, Jerry W.
2012-01-01
As an alternative to the common template based protein structure prediction methods based on main-chain position, a novel side-chain centric approach has been developed. Together with a Bayesian loop modeling procedure and a combination scoring function, the Stone Soup algorithm was applied to the CASP9 set of template based modeling targets. Although the method did not generate perturbations to the template structures as large as necessary, the analysis of the results gives unique insights into the differences in packing between the target structures and their templates. Considerable variation in packing is found between target and template structures even when the structures are close, and this variation is due to 2- and 3-body packing interactions. Outside the inherent restrictions in packing representation of the PDB, the first steps in correctly defining those regions of variable packing have been mapped primarily to local interactions, as the packing at the secondary and tertiary structure levels is largely conserved. Of the scoring functions used, a loop scoring function based on water structure exhibited some promise for discrimination. These results present a clear structural path for further development of a side-chain centered approach to template based modeling. PMID:23266765
Science Goal Driven Observing and Spacecraft Autonomy
NASA Technical Reports Server (NTRS)
Koratkar, Amuradha; Grosvenor, Sandy; Jones, Jeremy; Wolf, Karl
2002-01-01
Spacecraft autonomy will be an integral part of mission operations in the coming decade. While recent missions have made great strides in the ability to autonomously monitor and react to changing health and physical status of spacecraft, little progress has been made in responding quickly to science driven events. For observations of inherently variable targets and targets of opportunity, the ability to recognize early if an observation will meet the science goals of a program, and react accordingly, can have a major positive impact on the overall scientific returns of an observatory and on its operational costs. If the onboard software can reprioritize the schedule to focus on alternate targets, discard uninteresting observations prior to downloading, or download a subset of observations at a reduced resolution, the spacecraft's overall efficiency will be dramatically increased. The science goal monitoring (SGM) system is a proof-of-concept effort to address the above challenge. The SGM will have an interface to help capture higher level science goals from the scientists and translate them into a flexible observing strategy that SGM can execute and monitor. We are developing an interactive distributed system that will use on-board processing and storage combined with event-driven interfaces with ground-based processing and operations, to enable fast re-prioritization of observing schedules, and to minimize time spent on non-optimized observations.
Rios Piedra, Edgar A; Taira, Ricky K; El-Saden, Suzie; Ellingson, Benjamin M; Bui, Alex A T; Hsu, William
2016-02-01
Brain tumor analysis is moving towards volumetric assessment of magnetic resonance imaging (MRI), providing a more precise description of disease progression to better inform clinical decision-making and treatment planning. While a multitude of segmentation approaches exist, inherent variability in the results of these algorithms may incorrectly indicate changes in tumor volume. In this work, we present a systematic approach to characterize variability in tumor boundaries that utilizes equivalence tests as a means to determine whether a tumor volume has significantly changed over time. To demonstrate these concepts, 32 MRI studies from 8 patients were segmented using four different approaches (statistical classifier, region-based, edge-based, knowledge-based) to generate different regions of interest representing tumor extent. We showed that across all studies, the average Dice coefficient for the superset of the different methods was 0.754 (95% confidence interval 0.701-0.808) when compared to a reference standard. We illustrate how variability obtained by different segmentations can be used to identify significant changes in tumor volume between sequential time points. Our study demonstrates that variability is an inherent part of interpreting tumor segmentation results and should be considered as part of the interpretation process.
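The Dice coefficient used above to compare segmentations is straightforward to compute from two binary masks. The two "segmentations" below are toy voxel index sets, not the study's data:

```python
def dice(a, b):
    """Dice similarity between two binary masks given as voxel index sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Two hypothetical segmentations of the same tumor, as (x, y) voxel sets:
seg_region = {(x, y) for x in range(10) for y in range(10)}     # 100 voxels
seg_edge = {(x, y) for x in range(1, 12) for y in range(10)}    # 110 voxels

print(f"Dice = {dice(seg_region, seg_edge):.3f}")
```

Running several segmentation algorithms and examining the spread of such pairwise scores is one simple way to expose the boundary variability the study characterizes.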
Plant community variability on a small area in southeastern Montana
James G. MacCracken; Daniel W. Uresk; Richard M. Hansen
1984-01-01
Plant communities are inherently variable due to a number of environmental and biological forces. Canopy cover and aboveground biomass were determined for understory vegetation in plant communities of a prairie grassland-forest ecotone in southeastern Montana. Vegetation units were described using polar ordination and stepwise discriminant analysis. Nine of a total of...
Results of a Multi-Institutional Benchmark Test for Cranial CT/MR Image Registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulin, Kenneth; Urie, Marcia M., E-mail: murie@qarc.or; Cherlow, Joel M.
2010-08-01
Purpose: Variability in computed tomography/magnetic resonance imaging (CT/MR) cranial image registration was assessed using a benchmark case developed by the Quality Assurance Review Center to credential institutions for participation in Children's Oncology Group Protocol ACNS0221 for treatment of pediatric low-grade glioma. Methods and Materials: Two DICOM image sets, an MR and a CT of the same patient, were provided to each institution. A small target in the posterior occipital lobe was readily visible on two slices of the MR scan and not visible on the CT scan. Each institution registered the two scans using whatever software system and method it ordinarily uses for such a case. The target volume was then contoured on the two MR slices, and the coordinates of the center of the corresponding target in the CT coordinate system were reported. The average of all submissions was used to determine the true center of the target. Results: Results are reported from 51 submissions representing 45 institutions and 11 software systems. The average error in the position of the center of the target was 1.8 mm (1 standard deviation = 2.2 mm). The least variation in position was in the lateral direction. Manual registration gave significantly better results than did automatic registration (p = 0.02). Conclusion: When MR and CT scans of the head are registered with currently available software, there is inherent uncertainty of approximately 2 mm (1 standard deviation), which should be considered when defining planning target volumes and PRVs for organs at risk on registered image sets.
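The benchmark's summary statistics — each submission's distance from the consensus (mean) target center, then the mean and standard deviation of those distances — can be computed directly. The submitted coordinates below are fabricated for illustration:

```python
import math
import statistics

def registration_errors(points):
    """Mean and SD of each submission's 3-D distance from the consensus center."""
    center = [statistics.fmean(p[i] for p in points) for i in range(3)]
    errs = [math.dist(p, center) for p in points]
    return statistics.fmean(errs), statistics.stdev(errs)

# Hypothetical submitted CT coordinates (mm) for the target center:
submissions = [
    (10.2, -4.1, 33.0), (11.0, -3.8, 32.5), (9.5, -4.6, 33.8),
    (10.8, -4.0, 32.9), (10.1, -3.5, 33.4), (10.4, -4.9, 33.1),
]
mean_err, sd_err = registration_errors(submissions)
print(f"mean error = {mean_err:.2f} mm, SD = {sd_err:.2f} mm")
```

With real submissions this is exactly the quantity the study reports (1.8 mm mean, 2.2 mm SD); the toy coordinates here merely show the arithmetic.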
1999-10-01
instrument, and was deemed unsuitable because it was altered by several variables (patient condition, planned surgical procedure, experience and skill...index of anesthetic risk could be developed. Variables in addition to the patient physical status (experience of the surgeon, anesthetist, and hospital...and other concurrent illnesses), but also by risk inherent to the specific surgery. Surgical risks include experience of the surgical team, the...
Wang, Dingyi; Huang, Xiu; Li, Jie; He, Bin; Liu, Qian; Hu, Ligang; Jiang, Guibin
2018-03-13
We report a graphene-doped resin target fabricated via a 3D printing technique for laser desorption/ionization mass spectrometry analysis. The graphene doped in the target acts as an inherent laser absorber and ionization promoter, thus permitting the direct analysis of samples without adding matrix. This work reveals a new strategy for easy designing and fabrication of functional mass spectrometry devices.
Fisher, Joseph A
2016-06-01
Cerebrovascular reactivity (CVR) studies have elucidated the physiology and pathophysiology of cerebral blood flow regulation. A non-invasive, high spatial resolution approach uses carbon dioxide (CO2) as the vasoactive stimulus and magnetic resonance techniques to estimate the cerebral blood flow response. CVR is assessed as the ratio of response change to stimulus change. Precise control of the stimulus is sought to minimize CVR variability between tests, and show functional differences. Computerized methods targeting end-tidal CO2 partial pressures are precise, but expensive. Simpler, improvised methods that fix the inspired CO2 concentrations have been recommended as less expensive, and so more widely accessible. However, these methods have drawbacks that have not been previously presented by those that advocate their use, or those that employ them in their studies. As one of the developers of a computerized method, I provide my perspective on the trade-offs between these two methods. The main concern is that declaring the precision of a fixed inspired concentration of CO2 is misleading: it does not, as implied, translate to precise control of the actual vasoactive stimulus, the arterial partial pressure of CO2. The inherent test-to-test, and therefore subject-to-subject, variability precludes clinical application of findings. Moreover, improvised methods imply widespread duplication of development, assembly time, and costs, yet lack uniformity and quality control. A tabular comparison between approaches is provided. © The Author(s) 2016.
Beam-specific planning volumes for scattered-proton lung radiotherapy
NASA Astrophysics Data System (ADS)
Flampouri, S.; Hoppe, B. S.; Slopsema, R. L.; Li, Z.
2014-08-01
This work describes the clinical implementation of a beam-specific planning treatment volume (bsPTV) calculation for lung cancer proton therapy and its integration into the treatment planning process. Uncertainties incorporated in the calculation of the bsPTV included setup errors, machine delivery variability, breathing effects, inherent proton range uncertainties, and combinations of the above. Margins were added for translational and rotational setup errors and breathing motion variability during the course of treatment, as well as for their effect on the proton range of each treatment field. The effect of breathing motion and deformation on the proton range was calculated from 4D computed tomography data. Range uncertainties were considered taking into account the individual voxel HU uncertainty along each proton beamlet. Beam-specific treatment volumes generated for 12 patients were used: a) as planning targets, b) for routine plan evaluation, c) to aid beam angle selection, and d) to create beam-specific margins for organs at risk to ensure sparing. The alternative planning technique based on the bsPTVs produced similar target coverage as the conventional proton plans while better sparing the surrounding tissues. Conventional proton plans were evaluated by comparing the dose distributions per beam with the corresponding bsPTV. The bsPTV volume as a function of beam angle revealed some unexpected sources of uncertainty and could help the planner choose more robust beams. Beam-specific planning volume for the spinal cord was used for dose distribution shaping to ensure organ sparing laterally and distally to the beam.
High-efficiency free-form condenser overcoming rotational symmetry limitations.
Miñano, Juan C; Benítez, Pablo; Blen, José; Santamaría, Asunción
2008-12-08
Conventional condensers using rotationally symmetric devices perform far from their theoretical limits when transferring optical power from sources such as arc lamps or halogen bulbs to the rectangular entrance of homogenizing prisms (the target). We present a free-form condenser design (calculated with the SMS method) that overcomes the limitations inherent to rotational devices and can send to the target 1.8 times the power sent by an equivalent elliptical condenser for a 4:1 target aspect ratio, and 1.5 times for a 16:9 target, for practical values of target etendue.
Predictability and preparedness in influenza control.
Smith, Derek J
2006-04-21
The threat of pandemic human influenza looms as we survey the ongoing avian influenza pandemic and wonder if and when it will jump species. What are the risks and how can we plan? The nub of the problem lies in the inherent variability of the virus, which makes prediction difficult. However, it is not impossible; mathematical models can help determine and quantify critical parameters and thresholds in the relationships of those parameters, even if the relationships are nonlinear and obscure to simple reasoning. Mathematical models can derive estimates for the levels of drug stockpiles needed to buy time, how and when to modify vaccines, whom to target with vaccines and drugs, and when to enforce quarantine measures. Regardless, the models used for pandemic planning must be tested, and for this we must continue to gather data, not just for exceptional scenarios but also for seasonal influenza.
Demand, Support, and Perception in Family-Related Stress among Protestant Clergy.
ERIC Educational Resources Information Center
Lee, Cameron; Iverson-Gilbert, Judith
2003-01-01
Studies of clergy have emphasized the effects of stressors inherent to the profession and the impact of these on the minister's personal and family life. A model of family stress was employed to extend the focus to include three classes of variables: demands, social support, and perception. Results indicated that perception variables are more…
Brian J. Palik; Robert J. Mitchell; J. Kevin Hiers
2002-01-01
Modeling silviculture after natural disturbance to maintain biodiversity is a popular concept, yet its application remains elusive. We discuss difficulties inherent to this idea, and suggest approaches to facilitate implementation, using longleaf pine (Pinus palustris) as an example. Natural disturbance regimes are spatially and temporally variable. Variability...
Jamie M. Lydersen; Brandon M. Collins; Eric E. Knapp; Gary B. Roller; Scott Stephens
2015-01-01
Although knowledge of surface fuel loads is critical for evaluating potential fire behaviour and effects, their inherent variability makes these difficult to quantify. Several studies relate fuel loads to vegetation type, topography and spectral imaging, but little work has been done examining relationships between forest overstorey variables and surface fuel...
Is the inherent potential of maize roots efficient for soil phosphorus acquisition?
Deng, Yan; Chen, Keru; Teng, Wan; Zhan, Ai; Tong, Yiping; Feng, Gu; Cui, Zhenling; Zhang, Fusuo; Chen, Xinping
2014-01-01
Sustainable agriculture requires improved phosphorus (P) management to reduce the overreliance on P fertilization. Despite intensive research of root adaptive mechanisms for improving P acquisition, the inherent potential of roots for efficient P acquisition remains unfulfilled, especially in intensive agriculture, while current P management generally focuses on agronomic and environmental concerns. Here, we investigated how levels of soil P affect the inherent potential of maize (Zea mays L.) roots to obtain P from soil. Responses of root morphology, arbuscular mycorrhizal colonization, and phosphate transporters were characterized and related to agronomic traits in pot and field experiments with soil P supply from deficiency to excess. Critical soil Olsen-P level for maize growth approximated 3.2 mg kg(-1), and the threshold indicating a significant environmental risk was about 15 mg kg(-1), which represented the lower and upper levels of soil P recommended in current P management. However, most root adaptations involved with P acquisition were triggered when soil Olsen-P was below 10 mg kg(-1), indicating a threshold for maximum root inherent potential. Therefore, to maintain efficient inherent potential of roots for P acquisition, we suggest that the target upper level of soil P in intensive agriculture should be reduced from the environmental risk threshold to the point maximizing the inherent potential of roots.
Exploratory Long-Range Models to Estimate Summer Climate Variability over Southern Africa.
NASA Astrophysics Data System (ADS)
Jury, Mark R.; Mulenga, Henry M.; Mason, Simon J.
1999-07-01
Teleconnection predictors are explored using multivariate regression models in an effort to estimate southern African summer rainfall and climate impacts one season in advance. The preliminary statistical formulations include many variables influenced by the El Niño-Southern Oscillation (ENSO), such as tropical sea surface temperatures (SST) in the Indian and Atlantic Oceans. Atmospheric circulation responses to ENSO include the alternation of tropical zonal winds over Africa and changes in convective activity within oceanic monsoon troughs. Numerous hemispheric-scale datasets are employed to extract predictors and include global indexes (Southern Oscillation index and quasi-biennial oscillation), SST principal component scores for the global oceans, indexes of tropical convection (outgoing longwave radiation), air pressure, and surface and upper winds over the Indian and Atlantic Oceans. Climatic targets include subseasonal, area-averaged rainfall over South Africa and the Zambezi river basin, and South Africa's annual maize yield. Predictors and targets overlap in the years 1971-93, the defined training period. Each target time series is fitted by an optimum group of predictors from the preceding spring, in a linear multivariate formulation. To limit artificial skill, predictors are restricted to three, providing 17 degrees of freedom. Models with colinear predictors are screened out, and persistence of the target time series is considered. The late summer rainfall models achieve a mean r² fit of 72%, contributed largely through ENSO modulation. Early summer rainfall cross-validation correlations are lower (61%). A conceptual understanding of the climate dynamics and ocean-atmosphere coupling processes inherent in the exploratory models is outlined. Seasonal outlooks based on the exploratory models could help mitigate the impacts of southern Africa's fluctuating climate. It is believed that an advance warning of drought risk and seasonal rainfall prospects will improve the economic growth potential of southern Africa and provide additional security for food and water supplies.
Accident hazard evaluation and control decisions on forested recreation sites
Lee A. Paine
1971-01-01
Accident hazard associated with trees on recreation sites is inherently concerned with probabilities. The major factors include the probabilities of mechanical failure and of target impact if failure occurs, the damage potential of the failure, and the target value. Hazard may be evaluated as the product of these factors; i.e., expected loss during the current...
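The product formulation of hazard described above reduces to a one-line expected-loss calculation. The probabilities and dollar value below are invented purely for illustration:

```python
def expected_loss(p_failure, p_impact_given_failure, damage_fraction, target_value):
    """Hazard as the product of the factors named in the abstract."""
    return p_failure * p_impact_given_failure * damage_fraction * target_value

# Hypothetical tree beside a campsite structure: 2% annual failure chance,
# 10% chance a failure strikes the target, expected damage 50% of a
# $40,000 structure.
loss = expected_loss(0.02, 0.10, 0.5, 40_000)
print(f"expected annual loss: ${loss:,.0f}")
```

Ranking trees by this expected loss is one way such an evaluation can drive control decisions, e.g. removing the few trees that dominate the total.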
Biology-Culture Co-evolution in Finite Populations.
de Boer, Bart; Thompson, Bill
2018-01-19
Language is the result of two concurrent evolutionary processes: biological and cultural inheritance. An influential evolutionary hypothesis known as the moving target problem implies inherent limitations on the interactions between our two inheritance streams that result from a difference in pace: the speed of cultural evolution is thought to rule out cognitive adaptation to culturally evolving aspects of language. We examine this hypothesis formally by casting it as a problem of adaptation in time-varying environments. We present a mathematical model of biology-culture co-evolution in finite populations: a generalisation of the Moran process, treating co-evolution as coupled non-independent Markov processes, providing a general formulation of the moving target hypothesis in precise probabilistic terms. Rapidly varying culture decreases the probability of biological adaptation. However, we show that this effect declines with population size and with stronger links between biology and culture: in realistically sized finite populations, stochastic effects can carry cognitive specialisations to fixation in the face of variable culture, especially if the effects of those specialisations are amplified through cultural evolution. These results support the view that language arises from interactions between our two major inheritance streams, rather than from one primary evolutionary process that dominates another.
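The moving-target effect can be sketched with a bare-bones Moran-process simulation in which a single mutant's selective advantage applies only while its trait matches a cultural state that alternates over time. This is far simpler than the paper's coupled-Markov model, and every parameter below (population size, advantage s, alternation period) is illustrative:

```python
import random

def fixation_prob(pop_size, culture_period, s=0.2, trials=2000, seed=0):
    """Monte Carlo fixation probability of one mutant in a Moran process.

    The mutant's advantage s applies only while culture 'matches'; the
    cultural state flips every culture_period birth-death events.
    """
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        k, step = 1, 0  # mutant count, birth-death events elapsed
        while 0 < k < pop_size:
            match = (step // culture_period) % 2 == 0
            w = 1.0 + (s if match else 0.0)
            p_mut_reproduces = k * w / (k * w + (pop_size - k))
            if rng.random() < p_mut_reproduces:          # mutant offspring...
                if rng.random() < (pop_size - k) / pop_size:  # ...replaces a resident
                    k += 1
            else:                                        # resident offspring...
                if rng.random() < k / pop_size:               # ...replaces a mutant
                    k -= 1
            step += 1
        fixed += (k == pop_size)
    return fixed / trials

slow = fixation_prob(pop_size=50, culture_period=10_000)  # effectively static culture
fast = fixation_prob(pop_size=50, culture_period=5)       # rapidly moving target
print(f"fixation probability, slow culture: {slow:.3f}")
print(f"fixation probability, fast culture: {fast:.3f}")
```

The rapidly alternating culture roughly halves the effective advantage and lowers the fixation probability, yet it stays well above the neutral value 1/N — the stochastic fixation in the face of variable culture that the paper emphasizes.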
Eddie L. Shea; Lisa A. Schulte; Brian J. Palik
2017-01-01
Structural complexity is widely recognized as an inherent characteristic of unmanaged forests critical to their function and resilience, but often reduced in their managed counterparts. Variable retention harvesting (VRH) has been proposed as a way to restore or enhance structural complexity in managed forests, and thereby sustain attendant biodiversity and ecosystem...
NASA Technical Reports Server (NTRS)
Stewart, Elwood C.; Druding, Frank; Nishiura, Togo
1959-01-01
A study has been made to determine the relative importance of those factors which place an inherent limitation on the minimum obtainable miss distance for a beam-rider navigation system operating in the presence of glint noise and target evasive maneuver. Target and missile motions are assumed to be coplanar. The factors considered are the missile natural frequencies and damping ratios, missile steady-state acceleration capabilities, target evasive maneuver characteristics, and angular scintillation noise characteristics.
Nomura, Yayoi; Sato, Yumi; Suno, Ryoji; Horita, Shoichiro
2016-01-01
Abstract Fv antibody fragments have been used as co‐crystallization partners in structural biology, particularly in membrane protein crystallography. However, there are inherent technical issues associated with the large‐scale production of soluble, functional Fv fragments through conventional methods in various expression systems. To circumvent these problems, we developed a new method, in which a single synthetic polyprotein consisting of a variable light (VL) domain, an intervening removable affinity tag (iRAT), and a variable heavy (VH) domain is expressed by a Gram‐positive bacterial secretion system. This method ensures stoichiometric expression of VL and VH from the monocistronic construct followed by proper folding and assembly of the two variable domains. The iRAT segment can be removed by a site‐specific protease during the purification process to yield tag‐free Fv fragments suitable for crystallization trials. An in vitro refolding step is not required to obtain correctly folded Fv fragments. As a proof of concept, we tested the iRAT‐based production of multiple Fv fragments, including a crystallization chaperone for a mammalian membrane protein as well as FDA‐approved therapeutic antibodies. The resulting Fv fragments were functionally active and crystallized in complex with the target proteins. The iRAT system is a reliable, rapid and broadly applicable means of producing milligram quantities of Fv fragments for structural and biochemical studies. PMID:27595817
NASA Astrophysics Data System (ADS)
Watkins, Wendell R.; Bean, Brent L.; Munding, Peter D.
1994-06-01
Recent field tests have provided excellent opportunities to use a new characterization tool associated with the Mobile Imaging Spectroscopy Laboratory (MISL) of the Battlefield Environment Directorate, formerly the U.S. Army Atmospheric Sciences Laboratory. The MISL large area (1.8 by 1.8 m, uniform temperature, thermal target) was used for characterization and isolation of phenomena which impact target contrast. By viewing the target board from closeup and distant ranges simultaneously with the MISL thermal imagers, the inherent scene content could be calibrated and the degrading effects of atmospheric propagation could be isolated. The target board is equipped with several spatial frequency bar patterns, but only the largest 3.5-cycle full area bar pattern was used for the distant range of 1.6 km. The quantities measured with the target board include the inherent background change, the contrast transmission, and the atmospheric modulation transfer function. The MISL target board has a unique design which makes it lightweight with near perfect transition between the hot and cold portions of the bar pattern. The heated portion of the target is an elongated rectangular oven which is tilted back at a 30 deg angle to form a 1.8 by 1.8 m square when viewed from the front. The cold bars are positioned in front of the heated oven surface and can be oriented in either the vertical or horizontal direction. The oven is mounted on a lightweight trailer for one- or two-man positioning. An attached metal and canvas structure is used to shield the entire target from both solar loading and cooling winds. The target board has a thin aluminum sheet front surface which is insulated from the oven's heating structure.
Panarchy and environmental policy
Environmental law plays a key role in shaping policy for sustainability. In particular, the types of legal instruments, institutions, and the response of law to the inherent variability in socio-ecological systems is critical. Sustainability likely must occur via the institutions...
Resilience and environmental management
Environmental law plays a key role in shaping policy for sustainability. In particular, the types of legal instruments, institutions, and the response of law to the inherent variability in socio-ecological systems is critical. Sustainability likely must occur via the institutions...
Zhang, Yingwei; Tian, Jingqi; Li, Hailong; Wang, Lei; Sun, Xuping
2012-01-01
We develop a novel single fluorophore-labeled double-stranded oligonucleotide (OND) probe for rapid, nanostructure-free, fluorescence-enhanced nucleic acid detection for the first time. We further demonstrate that this probe can discriminate single-base mutations in nucleic acids. The design takes advantage of an inherent quenching ability of guanine bases. The short strand of the probe is designed with an end-labeled fluorophore that is placed adjacent to two guanines as the quencher located on the long opposite strand, resulting in strong quenching of dye fluorescence. In the presence of a target complementary to the long strand of the probe, a competitive strand-displacement reaction occurs and the long strand forms a more stable duplex with the target, resulting in the two strands of the probe being separated from each other. As a consequence of this displacement, the fluorophore and the quencher are no longer in close proximity and dye fluorescence increases, signaling the presence of target.
Nakamura, Masanobu; Yoneyama, Masami; Tabuchi, Takashi; Takemura, Atsushi; Obara, Makoto; Sawano, Seishi
2012-01-01
Detailed information on anatomy and hemodynamics in cerebrovascular disorders such as AVM and Moyamoya disease is mandatory for defined diagnosis and treatment planning. Arterial spin labeling technique has come to be applied to magnetic resonance angiography (MRA) and perfusion imaging in recent years. However, those non-contrast techniques are mostly limited to single frame images. Recently we have proposed a non-contrast time-resolved MRA technique termed contrast inherent inflow enhanced multi phase angiography combining spatial resolution echo planar imaging based signal targeting and alternating radiofrequency (CINEMA-STAR). CINEMA-STAR can extract the blood flow in the major intracranial arteries at an interval of 70 ms and thus permits us to observe vascular construction in full by preparing MIP images of axial acquisitions with high spatial resolution. This preliminary study demonstrates the usefulness of the CINEMA-STAR technique in evaluating the cerebral vasculature.
Quantification of HCV RNA in Clinical Specimens by Branched DNA (bDNA) Technology.
Wilber, J C; Urdea, M S
1999-01-01
The diagnosis and monitoring of hepatitis C virus (HCV) infection have been aided by the development of HCV RNA quantification assays. A direct measure of viral load, HCV RNA quantification has the advantage of providing information on viral kinetics and provides unique insight into the disease process. Branched DNA (bDNA) signal amplification technology provides a novel approach for the direct quantification of HCV RNA in patient specimens. The bDNA assay measures HCV RNA at physiological levels by boosting the reporter signal, rather than by replicating target sequences as the means of detection, and thus avoids the errors inherent in the extraction and amplification of target sequences. Inherently quantitative and nonradioactive, the bDNA assay is amenable to routine use in a clinical research setting, and has been used by several groups to explore the natural history, pathogenesis, and treatment of HCV infection.
Terza, Joseph V; Bradford, W David; Dismuke, Clara E
2008-01-01
Objective To investigate potential bias in the use of the conventional linear instrumental variables (IV) method for the estimation of causal effects in inherently nonlinear regression settings. Data Sources Smoking Supplement to the 1979 National Health Interview Survey, National Longitudinal Alcohol Epidemiologic Survey, and simulated data. Study Design Potential bias from the use of the linear IV method in nonlinear models is assessed via simulation studies and real world data analyses in two commonly encountered regression settings: (1) models with a nonnegative outcome (e.g., a count) and a continuous endogenous regressor; and (2) models with a binary outcome and a binary endogenous regressor. Principal Findings The simulation analyses show that substantial bias in the estimation of causal effects can result from applying the conventional IV method in inherently nonlinear regression settings. Moreover, the bias is not attenuated as the sample size increases. This point is further illustrated in the survey data analyses in which IV-based estimates of the relevant causal effects diverge substantially from those obtained with appropriate nonlinear estimation methods. Conclusions We offer this research as a cautionary note to those who would opt for the use of linear specifications in inherently nonlinear settings involving endogeneity. PMID:18546544
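A small simulation of setting (2), a binary outcome with a binary endogenous regressor, illustrates the kind of non-vanishing gap the authors describe: the linear IV (Wald) estimand settles away from the average treatment effect, and the discrepancy does not shrink with sample size. The parameter values below are my own assumptions for illustration, not the paper's data:

```python
import numpy as np

def simulate(n, seed=0):
    """Binary instrument z, binary endogenous treatment d, binary outcome y,
    all driven by a shared unobserved confounder u."""
    rng = np.random.default_rng(seed)
    z = rng.integers(0, 2, n)                 # binary instrument
    u = rng.normal(size=n)                    # unobserved confounder
    d = (0.5 * z + u > 0).astype(float)       # endogenous binary treatment
    y1 = (0.2 + 1.0 + u > 0).astype(float)    # potential outcome if treated
    y0 = (0.2 + u > 0).astype(float)          # potential outcome if untreated
    y = d * y1 + (1 - d) * y0                 # observed outcome
    ate = (y1 - y0).mean()                    # true average treatment effect
    wald = (y[z == 1].mean() - y[z == 0].mean()) / \
           (d[z == 1].mean() - d[z == 0].mean())  # linear IV (Wald) estimate
    return ate, wald

ate, wald = simulate(200_000)  # the wald-ate gap persists as n grows
```

Here the Wald ratio converges to a complier-specific effect rather than the ATE, so taking it as "the" causal effect in this nonlinear setting is biased no matter how large n becomes.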
Testing of a variable-stroke Stirling engine
NASA Technical Reports Server (NTRS)
Thieme, Lanny G.; Allen, David J.
1986-01-01
Testing of a variable-stroke Stirling engine at NASA Lewis has been completed. In support of the DOE Stirling Engine Highway Vehicle Systems Program, the engine was tested for about 70 hours total with both He and H2 as working fluids over a range of pressures and strokes. A direct comparison was made of part-load efficiencies obtained with variable-stroke (VS) and variable-pressure operation. Two failures with the variable-angle swash-plate drive system limited testing to low power levels. These failures are not thought to be caused by problems inherent with the VS concept but do emphasize the need for careful design in the area of the crossheads.
Panarchy, adaptive management and environmental policy
Environmental law plays a key role in shaping policy for sustainability. In particular, the types of legal instruments, institutions, and the response of law to the inherent variability in socio-ecological systems is critical. Sustainability likely must occur via the institutions...
Influences on Group Productivity 1: Factors Inherent in the Task. A bibliographic Synopsis
1983-04-15
organization structure and job attitudes and job behavior. Variables: Structure defined as the positions and parts of organizations and their systematic and...relatively enduring relationship to each other. Attitudes defined in the broadest sense of "opinion concerning some object." Job behavior is...theory specifying the relations among task structure, leadership behavior and group performance. Independent variables: Degree of structure of task (ac
NASA Technical Reports Server (NTRS)
Curran, R. J.; Kropfil, R.; Hallett, J.
1984-01-01
Techniques for remote sensing of particles, from cloud droplet to hailstone size, using optical and microwave frequencies are reviewed. The inherent variability of atmospheric particulates is examined to delineate conditions when the signal can give information to be effectively utilized in a forecasting context. The physical limitations resulting from the phase, size, orientation and concentration variability of the particulates are assessed.
Venkatraman, Prasanna
2010-06-01
Natural products are an abundant source of anticancer agents. They act as cytotoxic drugs and as inhibitors of apoptosis, transcription, cell proliferation and angiogenesis. While pathways targeted by natural products have been well studied, there is a paucity of information about the in vivo molecular targets of these compounds. This review summarizes some of the natural compounds for which the molecular targets, mechanism of action and structural basis of specificity have been well documented. These examples illustrate that 'off target' binding can be explained on the basis of diversity inherent to biomolecular interactions. There is enough evidence to suggest that natural compounds are potent and versatile warheads that can be optimized for a multitargeted therapeutic intervention in cancer.
Enhancing emotional-based target prediction
NASA Astrophysics Data System (ADS)
Gosnell, Michael; Woodley, Robert
2008-04-01
This work extends existing agent-based target movement prediction to include key ideas of behavioral inertia, steady states, and catastrophic change from existing psychological, sociological, and mathematical work. Existing target prediction work inherently assumes a single steady state for target behavior, and attempts to classify behavior based on a single emotional state set. The enhanced, emotional-based target prediction maintains up to three distinct steady states, or typical behaviors, based on a target's operating conditions and observed behaviors. Each steady state has an associated behavioral inertia, similar to the standard deviation of behaviors within that state. The enhanced prediction framework also allows steady state transitions through catastrophic change and individual steady states could be used in an offline analysis with additional modeling efforts to better predict anticipated target reactions.
Energy and carbon accounting to compare bioenergy crops
USDA-ARS?s Scientific Manuscript database
To compare the utility of current and future biofuels and biofuel feedstocks in an objective manner can be extremely challenging. This challenge exists because agricultural data are inherently variable, experimental techniques are cropdependent,and the literatures usually report relative, rather tha...
LeBouf, Ryan; Yesse, Liesel; Rossner, Alan
2008-05-01
It is well known that characterization of airborne bioaerosols in indoor environments is a challenge because of inherent irregularity in concentrations, which are influenced by many environmental factors. The primary aim of this study was to quantify the day-to-day variability of airborne fungal levels in a single residential environment over multiple seasons. Indoor air quality practitioners must recognize the inherent variability in airborne bioaerosol measurements during data analysis of mold investigations. Changes in airborne fungi due to varying season and day are important to recognize when considering health impacts of these contaminants and when establishing effective controls. Using an Andersen N6 impactor, indoor and outdoor bioaerosol samples were collected on malt extract agar plates for 18 weekdays and 19 weekdays in winter and summer, respectively. Interday and intraday variability for the bioaerosols were determined for each sampler. Average fungal concentrations were 26 times higher during the summer months. Day-to-day fungal samples showed a relatively high inconsistency, suggesting airborne fungal levels are very episodic and are influenced by several environmental factors. Summer bioaerosol variability ranged from 7 to 36% and winter variability from 24 to 212%; these should be incorporated into results of indoor mold investigations. The second objective was to observe the relationship between biological and nonbiological particulate matter (PM). No correlation was observed between biological and nonbiological PM. Six side-by-side particulate samplers collected coarse PM (PM10) and fine PM (PM2.5) levels in both seasons. PM2.5 particulate concentrations were found to be statistically higher during summer months. Interday variability observed during this study suggests that indoor air quality practitioners must adjust their exposure assessment strategies to reflect the temporal variability in bioaerosol concentrations.
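If the day-to-day variability percentages quoted above are expressed as coefficients of variation (an assumption on my part), the underlying computation is straightforward. The counts below are hypothetical, not the study's data:

```python
def cv_percent(daily_counts):
    """Percent coefficient of variation: sample SD / mean * 100."""
    n = len(daily_counts)
    mean = sum(daily_counts) / n
    sd = (sum((c - mean) ** 2 for c in daily_counts) / (n - 1)) ** 0.5
    return 100 * sd / mean

# Hypothetical CFU/m^3 counts over five sampling days in one season:
summer_days = [120, 340, 95, 410, 180]
interday_variability = cv_percent(summer_days)
```

Reporting this per-season CV alongside a single-day measurement is one way to convey how episodic the airborne fungal levels are.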
NetCDF file of the SREF standard deviation of wind speed and direction that was used to inject variability into the FDDA input. Variable U_NDG_OLD contains the standard deviation of wind speed (m/s); variable V_NDG_OLD contains the standard deviation of wind direction (deg). This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. Journal of Geophysical Research-Atmospheres, American Geophysical Union, Washington, DC, USA, 120(23): 12,259–12,280, (2015).
Science Goal Driven Observing: A Step Towards Maximizing Science Returns and Spacecraft Autonomy
NASA Technical Reports Server (NTRS)
Koratkar, Anuradha; Grosvenor, Sandy; Jones, Jeremy; Memarsadeghi, Nargess; Wolf, Karl
2002-01-01
In the coming decade, the drive to increase the scientific returns on capital investment and to reduce costs will force automation to be implemented in many of the scientific tasks that have traditionally been manually overseen. Thus, spacecraft autonomy will become an even greater part of mission operations. While recent missions have made great strides in the ability to autonomously monitor and react to changing health and physical status of spacecraft, little progress has been made in responding quickly to science driven events. The new generation of space-based telescopes/observatories will see deeper, with greater clarity, and they will generate data at an unprecedented rate. Yet, while onboard data processing and storage capability will increase rapidly, bandwidth for downloading data will not increase as fast and can become a significant bottleneck and cost of a science program. For observations of inherently variable targets and targets of opportunity, the ability to recognize early if an observation will not meet the science goals of variability or minimum brightness, and react accordingly, can have a major positive impact on the overall scientific returns of an observatory and on its operational costs. If the observatory can reprioritize the schedule to focus on alternate targets, discard uninteresting observations prior to downloading, or download them at a reduced resolution its overall efficiency will be dramatically increased. We are investigating and developing tools for a science goal monitoring (SGM) system. The SGM will have an interface to help capture higher-level science goals from scientists and translate them into a flexible observing strategy that SGM can execute and monitor. SGM will then monitor the incoming data stream and interface with data processing systems to recognize significant events. 
When an event occurs, the system will use the science goals given it to reprioritize observations, and react appropriately and/or communicate with ground systems - both human and machine - for confirmation and/or further high priority analyses.
Moran, Kevin
2014-01-01
In high-income countries, death as a consequence of recreational jumping into water from height has not been well investigated, partly because it traditionally has been a covert activity within youth culture. An observational study of video recordings posted on the YouTube web site was used to gather data on the nature of jumping activity in New Zealand and Australia. An analytical framework was developed to identify site-, participant-, and social characteristics (10 variables) and online feedback (4 variables). Of the 389 videos recorded in New Zealand (n = 210) and Australia (n = 179), 929 jumpers were observed, and rivers were the most frequently reported site of jumping activity (New Zealand 47%; Australia 35%). One fifth (20%) of the jumps in New Zealand and one third (33%) in Australia were from heights estimated to be more than 12 m. The YouTube videos portraying jumps from height were visited almost half a million times (495,686 hits). Ways of reducing recreational jumping risk via targeted education interventions may be best directed at young male adults. Use of social network sites to foster safe behaviours may be an effective way to educate young people of the inherent risks of jumping from height into water.
2013-01-01
Recombinant immunoglobulins comprise an important class of human therapeutics. Although specific immunoglobulins can be purposefully raised against desired antigen targets by various methods, identifying an immunoglobulin clone that simultaneously possesses potent therapeutic activities and desirable manufacturing-related attributes often turns out to be challenging. The variable domains of individual immunoglobulins primarily define the unique antigen specificities and binding affinities inherent to each clone. The primary sequence of the variable domains also specifies the unique physicochemical properties that modulate various aspects of individual immunoglobulin life cycle, starting from the biosynthetic steps in the endoplasmic reticulum, secretory pathway trafficking, secretion, and the fate in the extracellular space and in the endosome-lysosome system. Because of the diverse repertoire of immunoglobulin physicochemical properties, some immunoglobulin clones' intrinsic properties may manifest as intriguing cellular phenotypes, unusual solution behaviors, and serious pathologic outcomes that are of scientific and clinical importance. To gain renewed insights into identifying manufacturable therapeutic antibodies, this paper catalogs important intracellular and extracellular phenotypes induced by various subsets of immunoglobulin clones occupying different niches of diverse physicochemical repertoire space. Both intrinsic and extrinsic factors that make certain immunoglobulin clones desirable or undesirable for large-scale manufacturing and therapeutic use are summarized. PMID:23533417
Silvestrini, Matthew T; Yin, Dali; Martin, Alastair J; Coppes, Valerie G; Mann, Preeti; Larson, Paul S; Starr, Philip A; Zeng, Xianmin; Gupta, Nalin; Panter, S S; Desai, Tejal A; Lim, Daniel A
2015-01-01
Intracerebral cell transplantation is being pursued as a treatment for many neurological diseases, and effective cell delivery is critical for clinical success. To facilitate intracerebral cell transplantation at the scale and complexity of the human brain, we developed a platform technology that enables radially branched deployment (RBD) of cells to multiple target locations at variable radial distances and depths along the initial brain penetration tract with real-time interventional magnetic resonance image (iMRI) guidance. iMRI-guided RBD functioned as an "add-on" to standard neurosurgical and imaging workflows, and procedures were performed in a commonly available clinical MRI scanner. Multiple deposits of superparamagnetic iron oxide beads were safely delivered to the striatum of live swine, and distribution to the entire putamen was achieved via a single cannula insertion in human cadaveric heads. Human embryonic stem cell-derived dopaminergic neurons were biocompatible with the iMRI-guided RBD platform and successfully delivered with iMRI guidance into the swine striatum. Thus, iMRI-guided RBD overcomes some of the technical limitations inherent to the use of straight cannulas and standard stereotactic targeting. This platform technology could have a major impact on the clinical translation of a wide range of cell therapeutics for the treatment of many neurological diseases.
Panarchy, Adaptive Management and Governance: Policy Options for Building Resilience
Environmental law plays a key role in shaping policy for sustainability. In particular, the types of legal instruments, institutions, and the response of law to the inherent variability in socio-ecological systems is critical. Environmental protection has typically involved a com...
DESIGN AND COST REDUCTION OF REMEDIATION TECHNOLOGY PILOT TESTING
In order to effectively address the inherent variability of MTBE concentrations at a small fuel contamination site chosen for an in-situ remedial technology test demonstration, curtain walls for metering mixtures of conservative and non-conservative tracers into an aquifer were u...
Normalizing biomedical terms by minimizing ambiguity and variability
Tsuruoka, Yoshimasa; McNaught, John; Ananiadou, Sophia
2008-01-01
Background One of the difficulties in mapping biomedical named entities, e.g. genes, proteins, chemicals and diseases, to their concept identifiers stems from the potential variability of the terms. Soft string matching is a possible solution to the problem, but its inherent heavy computational cost discourages its use when the dictionaries are large or when real time processing is required. A less computationally demanding approach is to normalize the terms by using heuristic rules, which enables us to look up a dictionary in a constant time regardless of its size. The development of good heuristic rules, however, requires extensive knowledge of the terminology in question and thus is the bottleneck of the normalization approach. Results We present a novel framework for discovering a list of normalization rules from a dictionary in a fully automated manner. The rules are discovered in such a way that they minimize the ambiguity and variability of the terms in the dictionary. We evaluated our algorithm using two large dictionaries: a human gene/protein name dictionary built from BioThesaurus and a disease name dictionary built from UMLS. Conclusions The experimental results showed that automatically discovered rules can perform comparably to carefully crafted heuristic rules in term mapping tasks, and the computational overhead of rule application is small enough that a very fast implementation is possible. This work will help improve the performance of term-concept mapping tasks in biomedical information extraction especially when good normalization heuristics for the target terminology are not fully known. PMID:18426547
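A toy version of the rule-based normalization idea makes the constant-time lookup concrete. The rules below are hypothetical, hard-coded stand-ins; the paper's algorithm discovers such rules automatically from the dictionary:

```python
import re
from collections import defaultdict

# Hypothetical normalization rules, each collapsing one axis of term
# variability (case, hyphenation, whitespace).
RULES = [
    str.lower,                                   # case variation
    lambda t: t.replace("-", " "),               # hyphen/space variation
    lambda t: re.sub(r"\s+", " ", t).strip(),    # whitespace variation
]

def normalize(term):
    """Apply every rule in order to produce the canonical surface form."""
    for rule in RULES:
        term = rule(term)
    return term

def build_index(entries):
    """Map normalized surface forms to sets of concept identifiers."""
    index = defaultdict(set)
    for term, concept_id in entries:
        index[normalize(term)].add(concept_id)
    return index

# Two spelling variants of the same (illustrative) gene/protein entry
# collide in one dictionary slot after normalization:
index = build_index([("IL-2", "P60568"), ("Il 2", "P60568")])
hit = index[normalize("il-2")]   # a single dict access, whatever the dict size
```

Because lookup is one hash access after normalization, the cost is independent of dictionary size, which is exactly the advantage over soft string matching that the abstract highlights.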
NASA Astrophysics Data System (ADS)
Sonam; Jain, Vikrant
2018-03-01
Long profiles of rivers provide a platform to analyse interaction between geological and geomorphic processes operating at different time scales. Identification of an appropriate model for the river long profile becomes important in order to establish a quantitative relationship between the profile shape, its geomorphic effectiveness, and inherent geological characteristics. This work highlights the variability in the long profile shape of the Ganga River and its major tributaries, its impact on the stream power distribution pattern, and the role of geological controls on it. Long profile shapes are represented by the sum of two exponential functions through the curve fitting method. We have shown that coefficients of river long profile equations are governed by the geological characteristics of subbasins. These equations further define the spatial distribution pattern of stream power and help to understand stream power variability in different geological terrains. Spatial distribution of stream power in different geological terrains successfully explains spatial variability in geomorphic processes within the Himalayan hinterland area. In general, the stream power peaks of larger rivers lie in the Higher Himalaya, and rivers in the eastern hinterland area are characterised by the highest magnitude of stream power.
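One simple way to fit the "sum of two exponential functions" representation is a grid search over the two decay rates, with the amplitudes solved by linear least squares at each candidate pair. This is a generic sketch on synthetic data, not the authors' fitting procedure or the Ganga profiles:

```python
import numpy as np

def fit_two_exponentials(x, z, rates):
    """Fit z ~ a*exp(-k1*x) + b*exp(-k2*x).

    For each candidate (k1, k2) pair from `rates`, the model is linear in
    (a, b), so those are solved exactly; the pair with the lowest sum of
    squared errors wins.
    """
    best = None
    for k1 in rates:
        for k2 in rates:
            A = np.column_stack([np.exp(-k1 * x), np.exp(-k2 * x)])
            coef, *_ = np.linalg.lstsq(A, z, rcond=None)
            sse = float(((A @ coef - z) ** 2).sum())
            if best is None or sse < best[0]:
                best = (sse, k1, k2, coef)
    _, k1, k2, (a, b) = best
    return k1, k2, a, b

# Synthetic long profile: elevation (m) against downstream distance (km).
x = np.linspace(0.0, 500.0, 200)
z = 4000 * np.exp(-0.02 * x) + 800 * np.exp(-0.002 * x)
k1, k2, a, b = fit_two_exponentials(x, z, rates=np.logspace(-4, -1, 25))
```

The fitted (k1, k2, a, b) are the profile coefficients that, per the abstract, can then be related to subbasin geology and to the stream power distribution.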
Opfer, Roland; Suppa, Per; Kepp, Timo; Spies, Lothar; Schippling, Sven; Huppertz, Hans-Jürgen
2016-05-01
Fully-automated regional brain volumetry based on structural magnetic resonance imaging (MRI) plays an important role in quantitative neuroimaging. In clinical trials as well as in clinical routine, multiple MRIs of individual patients at different time points need to be assessed longitudinally. Measures of inter- and intrascanner variability are crucial to understand the intrinsic variability of the method and to distinguish volume changes due to biological or physiological effects from inherent noise of the methodology. To measure regional brain volumes, an atlas-based volumetry (ABV) approach was deployed using a highly elastic registration framework and an anatomical atlas in a well-defined template space. We assessed inter- and intrascanner variability of the method in 51 cognitively normal subjects and 27 Alzheimer dementia (AD) patients from the Alzheimer's Disease Neuroimaging Initiative by studying volumetric results of repeated scans for 17 compartments and brain regions. Median percentage volume differences of scan-rescans from the same scanner ranged from 0.24% (whole brain parenchyma in healthy subjects) to 1.73% (occipital lobe white matter in AD), with generally higher differences in AD patients as compared to normal subjects (e.g., 1.01% vs. 0.78% for the hippocampus). Minimum percentage volume differences detectable with an error probability of 5% were in the one-digit percentage range for almost all structures investigated, with most of them being below 5%. Intrascanner variability was independent of magnetic field strength. The median interscanner variability was up to ten times higher than the intrascanner variability.
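A "minimum percentage volume difference detectable with an error probability of 5%" can be derived from scan-rescan differences under a normal approximation. This is a generic sketch of that kind of calculation, not necessarily the paper's exact statistics; the difference values are hypothetical:

```python
def min_detectable_change_pct(rescan_pct_diffs, z=1.96):
    """Smallest percent volume change distinguishable from scan-rescan noise
    at ~5% error probability, assuming approximately normal test-retest
    differences."""
    n = len(rescan_pct_diffs)
    mean = sum(rescan_pct_diffs) / n
    sd = (sum((d - mean) ** 2 for d in rescan_pct_diffs) / (n - 1)) ** 0.5
    return z * sd

# Hypothetical scan-rescan percent differences for one brain region:
diffs = [0.3, -0.8, 1.1, -0.2, 0.6, -1.0, 0.4]
threshold = min_detectable_change_pct(diffs)
```

Any longitudinal volume change smaller than this threshold cannot be separated from methodological noise, which is the practical point of the inter- and intrascanner analysis above.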
Glint-induced false alarm reduction in signature adaptive target detection
NASA Astrophysics Data System (ADS)
Crosby, Frank J.
2002-07-01
The signature adaptive target detection algorithm developed by Crosby and Riley uses target geometry to discern anomalies in local backgrounds. Detection is not restricted based on specific target signatures. The robustness of the algorithm is limited by an increased false alarm potential. The base algorithm is extended here to eliminate one common source of false alarms in a littoral environment: glint reflected on the surface of water. The spectral and spatial transience of glint prevents straightforward characterization and complicates exclusion. However, the statistical basis of the detection algorithm and its inherent computations allow for glint discernment and the removal of its influence.
Food plant toxicants and safety: Risk assessment and regulation of inherent toxicants in plant foods.
Essers, A J; Alink, G M; Speijers, G J; Alexander, J; Bouwmeister, P J; van den Brandt, P A; Ciere, S; Gry, J; Herrman, J; Kuiper, H A; Mortby, E; Renwick, A G; Shrimpton, D H; Vainio, H; Vittozzi, L; Koeman, J H
1998-05-01
The ADI as a tool for risk management and regulation of food additives and pesticide residues is not readily applicable to inherent food plant toxicants: The margin between actual intake and potentially toxic levels is often small; application of the default uncertainty factors used to derive ADI values, particularly when extrapolating from animal data, would prohibit the utilisation of the food, which may have an overall beneficial health effect. Levels of inherent toxicants are difficult to control; their complete removal is not always wanted, due to their function for the plant or for human health. The health impact of the inherent toxicant is often modified by factors in the food, e.g. the bioavailability from the matrix and interaction with other inherent constituents. Risk-benefit analysis should be made for different consumption scenarios, without the use of uncertainty factors. Crucial in this approach is analysis of the toxicity of the whole foodstuff. The relationship between the whole foodstuff and the pure toxicant is expressed in the 'product correction factor' (PCF). Investigations in humans are essential so that biomarkers of exposure and for effect can be used to analyse the difference between animals and humans and between the food and the pure toxicant. A grid of the variables characterising toxicity is proposed, showing their inter-relationships. A flow diagram for risk estimation is provided, using both toxicological and epidemiological studies.
Inherent uncertainties in meteorological parameters for wind turbine design
NASA Technical Reports Server (NTRS)
Doran, J. C.
1982-01-01
Major difficulties associated with meteorological measurements, such as the inability to duplicate experimental conditions from one day to the next, are discussed. This lack of consistency is compounded by the stochastic nature of many of the meteorological variables of interest. Moreover, simple relationships derived in one location may be significantly altered by topographical or synoptic differences encountered at another. The effect of such factors is a degree of inherent uncertainty if an attempt is made to describe the atmosphere in terms of universal laws. Some of these uncertainties and their causes are examined, examples are presented, and some implications for wind turbine design are suggested.
Ather, Jennifer L.; Chung, Michael; Hoyt, Laura R.; Randall, Matthew J.; Georgsdottir, Anna; Daphtary, Nirav A.; Aliyeva, Minara I.; Suratt, Benjamin T.; Bates, Jason H. T.; Irvin, Charles G.; Russell, Sheila R.; Forgione, Patrick M.; Dixon, Anne E.
2016-01-01
Obese asthma presents with inherent hyperresponsiveness to methacholine or augmented allergen-driven allergic asthma, with an even greater magnitude of methacholine hyperresponsiveness. These physiologic parameters and accompanying obese asthma symptoms can be reduced by successful weight loss, yet the underlying mechanisms remain incompletely understood. We implemented mouse models of diet-induced obesity, dietary and surgical weight loss, and environmental allergen exposure to examine the mechanisms and mediators of inherent and allergic obese asthma. We report that the methacholine hyperresponsiveness in these models of inherent obese asthma and obese allergic asthma manifests in distinct anatomical compartments but that both are amenable to interventions that induce substantial weight loss. The inherent obese asthma phenotype, with characteristic increases in distal airspace tissue resistance and tissue elastance, is associated with elevated proinflammatory cytokines that are reduced with dietary weight loss. Surprisingly, bariatric surgery–induced weight loss further elevates these cytokines while reducing methacholine responsiveness to levels similar to those in lean mice or in formerly obese mice rendered lean through dietary intervention. In contrast, the obese allergic asthma phenotype, with characteristic increases in central airway resistance, is not associated with increased adaptive immune responses, yet diet-induced weight loss reduces methacholine hyperresponsiveness without altering immunological variables. Diet-induced weight loss is effective in models of both inherent and allergic obese asthma, and our examination of the fecal microbiome revealed that the obesogenic Firmicutes/Bacteroidetes ratio was normalized after diet-induced weight loss. Our results suggest that structural, immunological, and microbiological factors contribute to the manifold presentations of obese asthma. PMID:27064658
When compared to traditional approaches, the utilization of molecular and genomic techniques to soil and groundwater cleanup investigations can reduce inherent parameter variability when conducting bench and pilot-scale investigations or carrying out full-scale field applications...
New particle formation (NPF) can potentially alter regional climate by increasing aerosol particle (hereafter particle) number concentrations and ultimately cloud condensation nuclei. The large scales on which NPF is manifest indicate potential to use satellite-based (inherently ...
Panarchy, adaptive management and governance: policy options for building resilience
Environmental law plays a key role in shaping policy for sustainability. In particular, the types of legal instruments, institutions, and the response of law to the inherent variability in socio-ecological systems are critical. Environmental protection has typically involved a c...
Field oriented control of induction motors
NASA Technical Reports Server (NTRS)
Burrows, Linda M.; Zinger, Don S.; Roth, Mary Ellen
1990-01-01
Induction motors have always been known for their simple rugged construction, but until lately were not suitable for variable speed or servo drives due to the inherent complexity of the controls. With the advent of field oriented control (FOC), however, the induction motor has become an attractive option for these types of drive systems. An FOC system which utilizes the pulse population modulation method to synthesize the motor drive frequencies is examined. This system allows for a variable voltage to frequency ratio and enables the user to have independent control of both the speed and torque of an induction motor. A second generation of the control boards were developed and tested with the next point of focus being the minimization of the size and complexity of these controls. Many options were considered with the best approach being the use of a digital signal processor (DSP) due to its inherent ability to quickly evaluate control algorithms. The present test results of the system and the status of the optimization process using a DSP are discussed.
Small-scale electrical resistivity tomography of wet fractured rocks.
LaBrecque, Douglas J; Sharpe, Roger; Wood, Thomas; Heath, Gail
2004-01-01
This paper describes a series of experiments that tested the ability of the electrical resistivity tomography (ERT) method to correctly locate wet and dry fractures in a meso-scale model. The goal was to develop a method of monitoring the flow of water through a fractured rock matrix. The model was a four-by-six array of limestone blocks equipped with 28 stainless steel electrodes. Dry fractures were created by placing pieces of vinyl between one or more blocks. Wet fractures were created by injecting tap water into a joint between blocks. In electrical terms, the dry fractures are resistive and the wet fractures are conductive. The quantities measured by the ERT system are current and voltage around the outside edge of the model. The raw ERT data were translated to resistivity values inside the model using a three-dimensional Occam's inversion routine. This routine was one of the key components of ERT being tested. The model presented several challenges. First, the resistivity of both the blocks and the joints was highly variable. Second, the resistive targets introduced extreme changes that the software could not precisely quantify. Third, the abrupt changes inherent in a fracture system were contrary to the smoothly varying changes expected by the Occam's inversion routine. Fourth, the response of the conductive fractures was small compared to the background variability. In general, ERT was able to correctly locate resistive fractures. Problems occurred, however, when a resistive fracture was near the edges of the model or when multiple fractures were close together. In particular, ERT tended to position the fracture closer to the model center than its true location. Conductive fractures yielded much smaller responses than the resistive case. A difference-inversion method was able to correctly locate these targets.
Timeseries Signal Processing for Enhancing Mobile Surveys: Learning from Field Studies
NASA Astrophysics Data System (ADS)
Risk, D. A.; Lavoie, M.; Marshall, A. D.; Baillie, J.; Atherton, E. E.; Laybolt, W. D.
2015-12-01
Vehicle-based surveys using laser and other analyzers are now commonplace in research and industry. In many cases when these studies target biologically-relevant gases like methane and carbon dioxide, the minimum detection limits are often coarse (ppm) relative to the analyzer's capabilities (ppb), because of the inherent variability in the ambient background concentrations across the landscape that creates noise and uncertainty. This variation arises from localized biological sinks and sources, but also atmospheric turbulence, air pooling, and other factors. Computational processing routines are widely used in many fields to increase resolution of a target signal in temporally dense data, and offer promise for enhancing mobile surveying techniques. Signal processing routines can both help identify anomalies at very low levels, or can be used inversely to remove localized industrially-emitted anomalies from ecological data. This presentation integrates learnings from various studies in which simple signal processing routines were used successfully to isolate different temporally-varying components of 1 Hz timeseries measured with laser- and UV fluorescence-based analyzers. As illustrative datasets, we present results from industrial fugitive emission studies from across Canada's western provinces and other locations, and also an ecological study that aimed to model near-surface concentration variability across different biomes within eastern Canada. In these cases, signal processing algorithms contributed significantly to the clarity of both industrial, and ecological processes. In some instances, signal processing was too computationally intensive for real-time in-vehicle processing, but we identified workarounds for analyzer-embedded software that contributed to an improvement in real-time resolution of small anomalies. 
Signal processing is a natural accompaniment to these datasets, and many avenues are open to researchers who wish to enhance existing and future datasets.
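The baseline-removal approach described above can be sketched in a few lines. This is an illustrative toy, not the authors' processing pipeline: the running-median window, the detection threshold, and the synthetic methane trace are all assumptions chosen for demonstration.

```python
# Toy sketch (not the authors' code): isolate short-lived concentration
# anomalies from a slowly varying ambient background by subtracting a
# running-median baseline from a 1 Hz timeseries. Window size, threshold,
# and the synthetic trace below are illustrative assumptions.
import statistics

def running_median(series, window):
    """Centered running median; edges use a truncated window."""
    half = window // 2
    return [statistics.median(series[max(0, i - half):i + half + 1])
            for i in range(len(series))]

def flag_anomalies(series, window=11, threshold=0.5):
    """Indices where the residual (signal minus baseline) exceeds threshold."""
    baseline = running_median(series, window)
    return [i for i, (x, b) in enumerate(zip(series, baseline))
            if x - b > threshold]

# Synthetic 1 Hz methane trace (ppm): a slow background drift plus a 5 s plume.
trace = [1.9 + 0.001 * t for t in range(60)]
for t in range(28, 33):
    trace[t] += 2.0  # injected anomaly

print(flag_anomalies(trace))  # -> [28, 29, 30, 31, 32]
```

Because the median baseline tracks the drifting background but not the brief plume, the residual isolates the anomaly even though its absolute concentration is close to ambient levels.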
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katz, Jessica; Denholm, Paul; Pless, Jacquelyn
Wind and solar are inherently more variable and uncertain than the traditional dispatchable thermal and hydro generators that have historically provided a majority of grid-supplied electricity. The unique characteristics of variable renewable energy (VRE) resources have resulted in many misperceptions regarding their contribution to a low-cost and reliable power grid. Common areas of concern include: 1) The potential need for increased operating reserves, 2) The impact of variability and uncertainty on operating costs and pollutant emissions of thermal plants, and 3) The technical limits of VRE penetration rates to maintain grid stability and reliability. This fact sheet corrects misperceptions in these areas.
The Dynamic Model and Inherent Variability: The Case of Northern France.
ERIC Educational Resources Information Center
Hornsby, David
1999-01-01
Explores the claims of the "dynamic" model of variation by testing against data recorded in Avion, Northern France. Parallels are drawn between "langue d'oil" areas of France and decreolization situations in which proponents of the dynamic model have generally worked. (Author/VWL)
Site-specific variable rate irrigation a means to enhance water use efficiency
USDA-ARS?s Scientific Manuscript database
The majority of irrigated cropland in the US is watered with sprinkler irrigation systems. These systems are inherently more efficient in distributing water than furrow or flood irrigation. Appropriate system design of sprinkler irrigation equipment, application methods, and farming practices (e.g. ...
Advanced miniature processing hardware for ATR applications
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Daud, Taher (Inventor); Thakoor, Anikumar (Inventor)
2003-01-01
A Hybrid Optoelectronic Neural Object Recognition System (HONORS) is disclosed, comprising two major building blocks: (1) an advanced grayscale optical correlator (OC) and (2) a massively parallel three-dimensional neural processor. The optical correlator, with its inherent advantages in parallel processing and shift invariance, is used for target of interest (TOI) detection and segmentation. The three-dimensional neural processor, with its robust neural learning capability, is used for target classification and identification. The hybrid optoelectronic neural object recognition system, with its powerful combination of optical processing and neural networks, enables real-time, large frame, automatic target recognition (ATR).
Converse, Sarah J.; Royle, J. Andrew; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.
2012-01-01
An ecological monitoring program should be viewed as a component of a larger framework designed to advance science and/or management, rather than as a stand-alone activity. Monitoring targets (the ecological variables of interest; e.g. abundance or occurrence of a species) should be set based on the needs of that framework (Nichols and Williams 2006; e.g. Chapters 2–4). Once such monitoring targets are set, the subsequent step in monitoring design involves consideration of the field and analytical methods that will be used to measure monitoring targets with adequate accuracy and precision. Long-term monitoring programs will involve replication of measurements over time, and possibly over space; that is, one location or each of multiple locations will be monitored multiple times, producing a collection of site visits (replicates). Clearly this replication is important for addressing spatial and temporal variability in the ecological resources of interest (Chapters 7–10), but it is worth considering how this replication can further be exploited to increase the effectiveness of monitoring. In particular, defensible monitoring of the majority of animal, and to a lesser degree plant, populations and communities will generally require investigators to account for imperfect detection (Chapters 4, 18). Raw indices of population state variables, such as abundance or occupancy (sensu MacKenzie et al. 2002), are rarely defensible when detection probabilities are < 1, because in those cases detection may vary over time and space in unpredictable ways. Myriad authors have discussed the risks inherent in making inference from monitoring data while failing to correct for differences in detection, resulting in indices that have an unknown relationship to the parameters of interest (e.g. Nichols 1992, Anderson 2001, MacKenzie et al. 2002, Williams et al. 2002, Anderson 2003, White 2005, Kéry and Schmidt 2008). 
While others have argued that indices may be preferable in some cases due to the challenges associated with estimating detection probabilities (e.g. McKelvey and Pearson 2001, Johnson 2008), we do not attempt to resolve this debate here. Rather, we are more apt to agree with MacKenzie and Kendall (2002) that the burden of proof ought to be on the assertion that detection probabilities are constant. Furthermore, given the wide variety of field methods available for estimating detection probabilities and the inability for an investigator to know, a priori, if detection probabilities will be constant over time and space, we believe that development of monitoring programs ought to include field and analytical methods to account for the imperfect detection of organisms.
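The correction the authors advocate reduces, in its simplest form, to the canonical estimator N-hat = C / p-hat (raw count divided by estimated detection probability). A minimal sketch with invented numbers shows why a raw index can mislead when detectability changes:

```python
# Minimal sketch of the detection-probability correction discussed above:
# the canonical estimator N_hat = C / p_hat. The counts and detection
# probabilities are invented for illustration.

def corrected_abundance(count, p_detect):
    """Estimate abundance from a raw count and a detection probability."""
    if not 0.0 < p_detect <= 1.0:
        raise ValueError("detection probability must be in (0, 1]")
    return count / p_detect

# Two surveys with identical raw counts but different detectability:
# the raw index suggests no change, while the corrected estimates differ.
year1 = corrected_abundance(count=40, p_detect=0.5)   # -> 80.0
year2 = corrected_abundance(count=40, p_detect=0.25)  # -> 160.0
print(year1, year2)
```

In practice p-hat must itself be estimated from field designs (e.g. repeat visits or capture-recapture), which is why the chapter stresses building such methods into the monitoring program from the start.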
Resisting the "Employability" Doctrine through Anarchist Pedagogies & Prefiguration
ERIC Educational Resources Information Center
Osborne, Natalie; Grant-Smith, Deanna
2017-01-01
Increasingly those working in higher education are tasked with targeting their teaching approaches and techniques to improve the "employability" of graduates. However, this approach is promoted with little recognition that enhanced employability does not guarantee employment outcomes or the tensions inherent in pursuing this agenda. The…
NASA Astrophysics Data System (ADS)
Zhai, Jiali; Scoble, Judith A.; Li, Nan; Lovrecz, George; Waddington, Lynne J.; Tran, Nhiem; Muir, Benjamin W.; Coia, Gregory; Kirby, Nigel; Drummond, Calum J.; Mulet, Xavier
2015-02-01
Next generation drug delivery utilising nanoparticles incorporates active targeting to specific sites. In this work, we combined targeting with the inherent advantages of self-assembled lipid nanoparticles containing internal nano-structures. Epidermal growth factor receptor (EGFR)-targeting, PEGylated lipid nanoparticles using phytantriol and 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-PEG-maleimide amphiphiles were created. The self-assembled lipid nanoparticles presented here have internal lyotropic liquid crystalline nano-structures, verified by synchrotron small angle X-ray scattering and cryo-transmission electron microscopy, that offer the potential of high drug loading and enhanced cell penetration. Anti-EGFR Fab' fragments were conjugated to the surface of nanoparticles via a maleimide-thiol reaction at a high conjugation efficiency and retained specificity following conjugation to the nanoparticles. The conjugated nanoparticles were demonstrated to have high affinity for an EGFR target in a ligand binding assay. Electronic supplementary information (ESI) available: Fig. S1-S4. See DOI: 10.1039/c4nr05200e
Environmental law plays a key role in shaping policy for sustainability. In particular, the types of legal instruments, institutions, and the response of law to the inherent variability in social-ecological systems is critical. Sustainability likely must occur via the institution...
Connecting Achievement Motivation to Performance in General Chemistry
ERIC Educational Resources Information Center
Ferrell, Brent; Phillips, Michael M.; Barbera, Jack
2016-01-01
Student success in chemistry is inherently tied to motivational and other affective processes. We investigated three distinct constructs tied to motivation: self-efficacy, interest, and effort beliefs. These variables were measured twice over the course of a semester in three sections of a first-semester general chemistry course (n = 170). We…
Relative radiometric calibration of LANDSAT TM reflective bands
NASA Technical Reports Server (NTRS)
Barker, J. L.
1984-01-01
A common scientific methodology and terminology are outlined for characterizing the radiometry of both TM sensors. The magnitudes of the most significant sources of radiometric variability are discussed, and methods are recommended for achieving the exceptional potential inherent in the radiometric precision and accuracy of the TM sensors.
Environmental law plays a key role in shaping approaches to sustainability. In particular, the role of legal instruments, institutions, and the relationship of law to the inherent variability in social-ecological systems is critical. Sustainability likely must occur via the insti...
Minimizing field time to get reasonable greenhouse gas flux estimates from many chambers
USDA-ARS?s Scientific Manuscript database
Greenhouse gas measurements from soil are typically derived from static chambers placed in several replicate field plots and in multiple locations within a plot. Inherent variability in emissions is due to a number of known and unknown factors. Getting robust emission estimates from numerous chamber...
Theory of Mind Predicts Emotion Knowledge Development in Head Start Children
ERIC Educational Resources Information Center
Seidenfeld, Adina M.; Johnson, Stacy R.; Cavadel, Elizabeth Woodburn; Izard, Carroll E.
2014-01-01
Research Findings: Emotion knowledge (EK) enables children to identify emotions in themselves and others, and its development facilitates emotion recognition in complex social situations. Sociocognitive processes, such as theory of mind (ToM), may contribute to developing EK by helping children realize the inherent variability associated with…
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
USDA-ARS?s Scientific Manuscript database
Bioengineering of lignin to contain atypical components derived from other metabolic pathways is increasingly being pursued to custom design lignified cell walls that are inherently more digestible by livestock or more easily pretreated and saccharified for biofuel production. Because plants produce...
There is a paucity of relevant experimental information available for the evaluation of the potential health and environmental effects of many man made chemicals. Knowledge of the potential pathways for activity provides a rational basis for the extrapolations inherent in the pre...
USDA-ARS?s Scientific Manuscript database
Statistically robust sampling strategies form an integral component of grain storage and handling activities throughout the world. Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult due to species biology and behavioral characteristics. ...
NASA Astrophysics Data System (ADS)
Redolfi, M.; Bertoldi, W.; Tubino, M.; Welber, M.
2018-02-01
Measurement and estimation of bed load transport in gravel bed rivers are highly affected by its temporal fluctuations. Such variability is primarily driven by the flow regime but is also associated with a variety of inherent channel processes, such as flow turbulence, grain entrainment, and bed form migration. These internal and external controls often act at comparable time scales, and are therefore difficult to disentangle, thus hindering the study of bed load variability under unsteady flow regime. In this paper, we report on laboratory experiments performed in a large, mobile bed flume where typical hydromorphological conditions of gravel bed rivers were reproduced. Data from a large number of replicated runs, including triangular and square-wave hydrographs, were used to build a statistically sound description of sediment transport processes. We found that the inherent variability of bed load flux strongly depends on the sampling interval, and it is significantly higher in complex, wandering or braided channels. This variability can be filtered out by computing the mean response over the experimental replicates, which allows us to highlight two distinctive phenomena: (i) an overshooting (undershooting) response of the mean bed load flux to a sudden increase (decrease) of discharge, and (ii) a clockwise hysteresis in the sediment rating curve. We then provide an interpretation of these findings through a conceptual mathematical model, showing how both phenomena are associated with a lagging morphological adaptation to unsteady flow. Overall, this work provides basic information for evaluating, monitoring, and managing gravel transport in morphologically active rivers.
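The replicate-averaging strategy described above can be illustrated with a toy simulation (not the flume data): each replicate run is the same underlying mean response plus random fluctuations, and averaging across replicates filters the inherent variability out. The noise level and the hydrograph-like series are invented for demonstration.

```python
# Toy simulation (not the experimental data): averaging bed load flux
# across replicated runs filters the inherent fluctuations and recovers
# the mean response to the hydrograph. Values are invented.
import random

random.seed(1)

def noisy_flux(mean_series, noise=0.5):
    """One synthetic replicate: mean response plus inherent fluctuations."""
    return [m + random.gauss(0, noise) for m in mean_series]

def ensemble_mean(replicates):
    """Mean response across replicated runs at each time step."""
    return [sum(vals) / len(vals) for vals in zip(*replicates)]

# A triangular-hydrograph-like mean bed load response (arbitrary units).
mean_response = [0.2, 0.5, 1.0, 1.5, 1.0, 0.5, 0.2]
replicates = [noisy_flux(mean_response) for _ in range(200)]
recovered = ensemble_mean(replicates)

# With 200 replicates the ensemble mean tracks the underlying response
# closely, while any single run is dominated by fluctuations.
print([round(x, 2) for x in recovered])
```

The standard error of the ensemble mean shrinks as 1/sqrt(n replicates), which is why a large number of replicated runs was needed before the overshoot and hysteresis signals became visible.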
Designing polymers with sugar-based advantages for bioactive delivery applications.
Zhang, Yingyue; Chan, Jennifer W; Moretti, Alysha; Uhrich, Kathryn E
2015-12-10
Sugar-based polymers have been extensively explored as a means to increase drug delivery systems' biocompatibility and biodegradation. Here, we review the use of sugar-based polymers for drug delivery applications, with a particular focus on the utility of the sugar component(s) to provide benefits for drug targeting and stimuli-responsive systems. Specifically, numerous synthetic methods have been developed to reliably modify naturally occurring polysaccharides, conjugate sugar moieties to synthetic polymer scaffolds to generate glycopolymers, and utilize sugars as multifunctional building blocks to develop sugar-linked polymers. The design of sugar-based polymer systems has tremendous implications for both the physiological and biological properties imparted by the saccharide units, which are distinct from those of synthetic polymers. These features include the ability of glycopolymers to preferentially target various cell types and tissues through receptor interactions, exhibit bioadhesion for prolonged residence time, and be rapidly recognized and internalized by cancer cells. Also discussed are the distinct stimuli-sensitive properties of saccharide-modified polymers to mediate drug release under desired conditions. Saccharide-based systems with inherent pH- and temperature-sensitive properties, as well as enzyme-cleavable polysaccharides for targeted bioactive delivery, are covered. Overall, this work emphasizes the inherent benefits of sugar-containing polymer systems for bioactive delivery.
Duran, Rafael; Sharma, Karun; Dreher, Matthew R; Ashrafi, Koorosh; Mirpour, Sahar; Lin, MingDe; Schernthaner, Ruediger E; Schlachter, Todd R; Tacher, Vania; Lewis, Andrew L; Willis, Sean; den Hartog, Mark; Radaelli, Alessandro; Negussie, Ayele H; Wood, Bradford J; Geschwind, Jean-François H
2016-01-01
Embolotherapy using microspheres is currently performed with soluble contrast to aid in visualization. However, administered payload visibility diminishes soon after delivery due to soluble contrast washout, leaving the radiolucent beads' location unknown. The objective of our study was to characterize inherently radiopaque beads (RO Beads) in terms of physicomechanical properties, deliverability, and imaging visibility in a rabbit VX2 liver tumor model. RO Beads, which are based on the LC Bead® platform, were compared to LC Bead. Bead size (light microscopy), equilibrium water content (EWC), density, X-ray attenuation and iodine distribution (micro-CT), suspension (settling times), deliverability, and in vitro penetration were investigated. Fifteen rabbits were embolized with either LC Bead or RO Beads + soluble contrast (iodixanol-320), or RO Beads + dextrose. Appearance was evaluated with fluoroscopy, X-ray single shot, and cone-beam CT (CBCT). Both bead types had a similar size distribution. RO Beads had lower EWC (60-72%) and higher density (1.21-1.36 g/cc), with a homogeneous iodine distribution within the bead's interior. RO Beads' suspension time was shorter than LC Bead's, with durable suspension (>5 min) in 100% iodixanol. RO Beads ≤300 µm were deliverable through a 2.3-Fr microcatheter. Both bead types showed similar penetration. Soluble contrast could identify target and non-target embolization on fluoroscopy during administration. However, the imaging appearance vanished quickly for LC Bead as contrast washed out. RO Beads + contrast significantly increased visibility on X-ray single shot compared to LC Bead + contrast in target and non-target arteries (P=0.0043). Similarly, RO Beads demonstrated better visibility on CBCT in target arteries (P=0.0238), with a trend in non-target arteries (P=0.0519). RO Beads + dextrose were not sufficiently visible to monitor embolization using fluoroscopy.
RO Beads provide better conspicuity to determine target and non-target embolization compared to LC Bead which may improve intra-procedural monitoring and post-procedural evaluation of transarterial embolization.
Health and household air pollution from solid fuel use: the need for improved exposure assessment.
Clark, Maggie L; Peel, Jennifer L; Balakrishnan, Kalpana; Breysse, Patrick N; Chillrud, Steven N; Naeher, Luke P; Rodes, Charles E; Vette, Alan F; Balbus, John M
2013-10-01
Nearly 3 billion people worldwide rely on solid fuel combustion to meet basic household energy needs. The resulting exposure to air pollution causes an estimated 4.5% of the global burden of disease. Large variability and a lack of resources for research and development have resulted in highly uncertain exposure estimates. We sought to identify research priorities for exposure assessment that will more accurately and precisely define exposure-response relationships of household air pollution necessary to inform future cleaner-burning cookstove dissemination programs. As part of an international workshop in May 2011, an expert group characterized the state of the science and developed recommendations for exposure assessment of household air pollution. The following priority research areas were identified to explain variability and reduce uncertainty of household air pollution exposure measurements: improved characterization of spatial and temporal variability for studies examining both short- and long-term health effects; development and validation of measurement technology and approaches to conduct complex exposure assessments in resource-limited settings with a large range of pollutant concentrations; and development and validation of biomarkers for estimating dose. Addressing these priority research areas, which will inherently require an increased allocation of resources for cookstove research, will lead to better characterization of exposure-response relationships. Although the type and extent of exposure assessment will necessarily depend on the goal and design of the cookstove study, without improved understanding of exposure-response relationships, the level of air pollution reduction necessary to meet the health targets of cookstove interventions will remain uncertain.
Developing a model for hospital inherent safety assessment: Conceptualization and validation.
Yari, Saeed; Akbari, Hesam; Gholami Fesharaki, Mohammad; Khosravizadeh, Omid; Ghasemi, Mohammad; Barsam, Yalda; Akbari, Hamed
2018-01-01
Paying attention to the safety of hospitals, as the most crucial institutions for providing medical and health services, wherein a bundle of facilities, equipment, and human resources exist, is of significant importance. The present research aims at developing a model for assessing hospital safety based on principles of inherent safety design. Face validity (30 experts), content validity (20 experts), construct validity (268 samples), convergent validity, and divergent validity were employed to validate the prepared questionnaire; item analysis, Cronbach's alpha, the ICC test (to measure test reliability), and the composite reliability coefficient were used to measure primary reliability. The relationship between variables and factors was confirmed at the 0.05 significance level by conducting confirmatory factor analysis (CFA) and structural equation modeling (SEM) with Smart-PLS. R-square and factor loading values, which were higher than 0.67 and 0.300 respectively, indicated a strong fit. Moderation (0.970), simplification (0.959), substitution (0.943), and minimization (0.5008) had the greatest weights in determining the inherent safety of a hospital, respectively. Moderation, simplification, and substitution carry more weight on inherent safety, while minimization carries less, which could be due to its definition as minimizing the risk.
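As a purely hypothetical illustration of how the reported dimension weights might be combined into a single score, one could form a weighted average. The weighting scheme and the example per-dimension scores below are assumptions for demonstration, not the authors' scoring method.

```python
# Hypothetical sketch only: combining the four inherent-safety dimensions
# with the weights reported above. The weighted-average scheme and the
# example per-dimension scores are assumptions, not the paper's method.

WEIGHTS = {
    "moderation": 0.970,
    "simplification": 0.959,
    "substitution": 0.943,
    "minimization": 0.5008,
}

def composite_safety_score(dimension_scores):
    """Weighted average of per-dimension scores, each in [0, 1]."""
    total = sum(WEIGHTS.values())
    return sum(WEIGHTS[k] * dimension_scores[k] for k in WEIGHTS) / total

example = {"moderation": 0.8, "simplification": 0.7,
           "substitution": 0.9, "minimization": 0.6}
print(round(composite_safety_score(example), 2))  # -> 0.77
```

Because minimization carries the smallest weight, a low minimization score drags the composite down far less than an equally low moderation score would.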
Space Transportation System Availability Requirements and Its Influencing Attributes Relationships
NASA Technical Reports Server (NTRS)
Rhodes, Russel E.; Adams, TImothy C.
2008-01-01
It is essential that management and engineering understand the need for an availability requirement for the customer's space transportation system, as it enables meeting the customer's needs, goals, and objectives. There are three types of availability: operational availability, achieved availability, and inherent availability. The basic definition of availability is the mean uptime divided by the sum of the mean uptime plus the mean downtime. The major difference among the three types is the inclusiveness of the functions within the mean downtime and the mean uptime. This paper addresses inherent availability, in which the mean downtime comprises only the mean time to repair: the time to determine the failed article, remove it, install a replacement article, and verify the functionality of the repaired system. The definitions of operational availability include replacement hardware supply or maintenance delays and other non-design factors in the mean downtime. Also, with inherent availability the mean uptime considers only the mean time between failures that require repair of the system to be functional (other availability definitions consider this the mean time between maintenance, both preventive and corrective). It is also essential that management and engineering understand how all influencing attributes relate to each other and to the resultant inherent availability requirement. This visibility will provide the decision makers with the understanding necessary to place constraints on the design definition for the major drivers that will determine the inherent availability, safety, reliability, maintainability, and life cycle cost of the fielded system provided to the customer. This inherent availability requirement may be driven by the need to use a multiple launch approach to placing humans on the moon, or by the desire to control the number of spare parts required to support long stays in orbit or on the surface of the moon or Mars.
The intent of this paper is not only to provide visibility into the relationships of these major attribute drivers (variables) to each other and to the resultant system inherent availability, but also to provide the capability to bound the variables, giving engineering the insight required to control the system's engineering solution. One example of this visibility is the need to integrate similar discipline functions in order to control the total parts count of the space transportation system. Similarly, visibility into these relationships shows that selecting a reliability requirement places a constraint on the parts count allowable to achieve a given inherent availability requirement, or conversely that accepting a larger parts count demands a higher reliability requirement. This paper provides an understanding of the relationship of mean repair time (mean downtime) to maintainability (e.g., accessibility for repair), and of mean time between failures (i.e., hardware reliability) to the system inherent availability. Understanding these relationships and the resulting requirements before starting the architectural design concept definition avoids the considerable time and money otherwise needed to iterate the design through the redesign and assessment process required to achieve the results the customer expects of the space transportation system. Indeed, the schedule impact of delivering a system that meets the customer's needs, goals, and objectives may force the customer to compromise the desired operational goals and objectives, resulting in considerably increased life cycle cost of the fielded space transportation system.
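The availability definition given above (mean uptime over mean uptime plus mean downtime, with inherent availability using only MTBF and MTTR) can be written down directly. A minimal sketch; the function name and the sample MTBF/MTTR values are illustrative, not from the paper:

```python
def inherent_availability(mtbf: float, mttr: float) -> float:
    """Inherent availability: mean uptime (MTBF) over mean uptime plus
    mean downtime, where downtime counts only active repair time (MTTR),
    excluding supply and maintenance delays."""
    if mtbf < 0 or mttr < 0:
        raise ValueError("MTBF and MTTR must be non-negative")
    return mtbf / (mtbf + mttr)

# Example: 1000 h mean time between failures, 8 h mean time to repair.
a_i = inherent_availability(1000.0, 8.0)
print(f"Inherent availability: {a_i:.4f}")
```

Bounding the variables as the paper suggests then amounts to solving this relation for the MTBF (hence reliability and parts count) needed to hit a required availability at a given repair time.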
Uniform hydrogen fuel layers for inertial fusion targets by microgravity
NASA Technical Reports Server (NTRS)
Parks, P. B.; Fagaly, Robert L.
1994-01-01
A critical concern in the fabrication of targets for inertial confinement fusion (ICF) is ensuring that the hydrogenic (D2 or DT) fuel layer maintains spherical symmetry. Solid layered targets have structural integrity, but lack the needed surface smoothness. Liquid targets are inherently smooth, but suffer from gravitationally induced sagging. One method to reduce the effective gravitational field environment is free-fall insertion into the target chamber. Another method, to counterbalance the gravitational force, is to use an applied magnetic field combined with a gradient field to induce a magnetic dipole force on the liquid fuel layer. Based on time-dependent calculations of the dynamics of the liquid fuel layer in microgravity environments, we show that it may be possible to produce a liquid layered ICF target that satisfies both smoothness and symmetry requirements.
Robarts, Daniel W H; Wolfe, Andrea D
2014-07-01
In the past few decades, many investigations in the field of plant biology have employed selectively neutral, multilocus, dominant markers such as inter-simple sequence repeat (ISSR), random-amplified polymorphic DNA (RAPD), and amplified fragment length polymorphism (AFLP) to address hypotheses at lower taxonomic levels. More recently, sequence-related amplified polymorphism (SRAP) markers have been developed, which are used to amplify coding regions of DNA with primers targeting open reading frames. These markers have proven to be robust and highly variable, on par with AFLP, and are attained through a significantly less technically demanding process. SRAP markers have been used primarily for agronomic and horticultural purposes, developing quantitative trait loci in advanced hybrids and assessing genetic diversity of large germplasm collections. Here, we suggest that SRAP markers should be employed for research addressing hypotheses in plant systematics, biogeography, conservation, ecology, and beyond. We provide an overview of the SRAP literature to date, review descriptive statistics of SRAP markers in a subset of 171 publications, and present relevant case studies to demonstrate the applicability of SRAP markers to the diverse field of plant biology. Results of these selected works indicate that SRAP markers have the potential to enhance the current suite of molecular tools in a diversity of fields by providing an easy-to-use, highly variable marker with inherent biological significance.
2014-01-01
Background: With over 50 different disorders and a combined incidence of up to 1/3000 births, lysosomal storage diseases (LSDs) constitute a major public health problem and place an enormous burden on affected individuals and their families. Many factors make LSD diagnosis difficult, including phenotype and penetrance variability, shared signs and symptoms, and problems inherent to biochemical diagnosis. Developing a powerful diagnostic tool could mitigate the protracted diagnostic process for these families, lead to better outcomes for current and proposed therapies, and provide the basis for more appropriate genetic counseling. Methods: We have designed a targeted resequencing assay for the simultaneous testing of 57 lysosomal genes, using in-solution capture as the enrichment method and two different sequencing platforms. A total of 84 patients with high to moderate or low suspicion index for LSD were enrolled in different centers in Spain and Portugal, including 18 positive controls. Results: We correctly diagnosed 18 positive blinded controls, provided genetic diagnosis to 25 potential LSD patients, and ended 18 diagnostic odysseys. Conclusion: We report the assessment of a next-generation-sequencing-based approach as an accessory tool in the diagnosis of LSDs, a group of disorders which have overlapping clinical profiles and genetic heterogeneity. We have also identified and quantified the strengths and limitations of next-generation sequencing (NGS) technology applied to diagnosis. PMID:24767253
Atmospheric effects on active illumination
NASA Astrophysics Data System (ADS)
Shaw, Scot E. J.; Kansky, Jan E.
2005-08-01
For some beam-control applications, we can rely on the cooperation of the target when gathering information about the target location and the state of the atmosphere between the target and the beam-control system. The typical example is a cooperative point-source beacon on the target. Light from such a beacon allows the beam-control system to track the target accurately, and, if higher-order adaptive optics is to be employed, to make wave-front measurements and apply appropriate corrections with a deformable mirror. In many applications, including directed-energy weapons, the target is not cooperative. In the absence of a cooperative beacon, we must find other ways to collect the relevant information. This can be accomplished with an active-illumination system. Typically, this means shining one or more lasers at the target and observing the reflected light. In this paper, we qualitatively explore a number of difficulties inherent to active illumination, and suggest some possible mitigation techniques.
Evolutionary Multiobjective Design Targeting a Field Programmable Transistor Array
NASA Technical Reports Server (NTRS)
Aguirre, Arturo Hernandez; Zebulum, Ricardo S.; Coello, Carlos Coello
2004-01-01
This paper introduces the ISPAES algorithm for circuit design targeting a Field Programmable Transistor Array (FPTA). The use of evolutionary algorithms is common in circuit design problems, where a single fitness function drives the evolution process. Frequently, the design problem is subject to several goals or operating constraints; thus, designing a suitable fitness function that captures all requirements becomes an issue. Such a problem is amenable to multi-objective optimization; however, evolutionary algorithms lack an inherent mechanism for constraint handling. This paper introduces ISPAES, an evolutionary optimization algorithm enhanced with a constraint-handling technique. Several design problems targeting an FPTA show the potential of our approach.
Plasticity models of material variability based on uncertainty quantification techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Reese E.; Rizzi, Francesco; Boyce, Brad
The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.
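The general pattern of representing a distribution of mechanical responses, rather than a single mean curve, can be sketched with a Monte Carlo ensemble over a sampled material parameter. This is only an illustration of the UQ idea: the 1D elastic-perfectly-plastic law, the modulus, and the yield-stress distribution below are assumptions, not the paper's calibrated models:

```python
import numpy as np

rng = np.random.default_rng(0)

E = 200e3                             # elastic modulus, MPa (illustrative)
yield_mean, yield_std = 350.0, 25.0   # assumed yield-stress variability, MPa

def stress(strain, sigma_y):
    """1D elastic-perfectly-plastic response for one material realization."""
    return np.minimum(E * strain, sigma_y)

strains = np.linspace(0.0, 0.01, 50)
# Each sampled yield stress is one realization of material variability.
sigma_y_samples = rng.normal(yield_mean, yield_std, size=1000)
curves = np.array([stress(strains, sy) for sy in sigma_y_samples])

# The ensemble mean and a 95% band summarize the predicted *distribution*
# of responses, which is what a robust design methodology needs.
mean_curve = curves.mean(axis=0)
lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)
print(f"stress at 1% strain: mean {mean_curve[-1]:.0f} MPa, "
      f"95% band [{lo[-1]:.0f}, {hi[-1]:.0f}] MPa")
```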
Hydrologic processes in the pinyon-juniper woodlands: A literature review
Peter F. Ffolliott; Gerald J. Gottfried
2012-01-01
Hydrologic processes in the pinyon-juniper woodlands of the western region of the United States are variable because of the inherent interactions among the occurring precipitation regimes, geomorphological settings, and edaphic conditions that characterize the ecosystem. A wide range of past and present land-use practices further complicates comprehensive evaluations...
Much of the variability inherent in crude oil bioremediation field studies can be eliminated by normalizing analyte concentrations to the concentration of a nonbiodegradable biomarker such as hopane. This was demonstrated with data from a field study in which crude oil was intent...
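The hopane-normalization idea can be sketched in a few lines: dividing each analyte concentration by the co-measured concentration of the conserved biomarker cancels sample-to-sample differences in oil loading, so percent depletion can be computed against the source oil. The concentration values below are illustrative, not from the field study:

```python
def hopane_normalize(analyte_conc, hopane_conc):
    """Normalize an analyte concentration to the nonbiodegradable
    biomarker hopane, removing variability in oil loading per sample."""
    return analyte_conc / hopane_conc

# Percent depletion of a target analyte relative to the source oil,
# computed on hopane-normalized concentrations (illustrative numbers).
c0 = hopane_normalize(120.0, 10.0)   # source oil: analyte/hopane ratio
ct = hopane_normalize(30.0, 12.0)    # weathered field sample
depletion = 100.0 * (1.0 - ct / c0)
print(f"Depletion: {depletion:.1f}%")
```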
ERIC Educational Resources Information Center
Immordino-Yang, Mary Helen; Gotlieb, Rebecca
2017-01-01
Social-affective neuroscience is revealing that human brain development is inherently social--our very nature is organized by nurture. To explore the implications for human development and education, we present a series of interdisciplinary studies documenting individual and cultural variability in the neurobiological correlates of emotional…
Integrating Micro-Macro Organizational Communication Research: Rationale, Issues, and Mechanisms.
ERIC Educational Resources Information Center
Miller, Vernon; And Others
The integration of micro-macro variables is critical to the development of organizational communication as an academic field. Mixed-level analysis is inherent in organizational phenomena, and its neglect perpetuates the gap in and fragmentation of organizational communication theories. Three of the many ways to design mixed-level analyses are…
Interpreting Significant Discrete-Time Periods in Survival Analysis.
ERIC Educational Resources Information Center
Schumacker, Randall E.; Denson, Kathleen B.
Discrete-time survival analysis is a new method for educational researchers to employ when looking at the timing of certain educational events. Previous continuous-time methods do not allow for the flexibility inherent in a discrete-time method. Because both time-invariant and time-varying predictor variables can now be used, the interaction of…
Multiple effects of genetic background on variegated transgene expression in mice.
Opsahl, Margaret L; McClenaghan, Margaret; Springbett, Anthea; Reid, Sarah; Lathe, Richard; Colman, Alan; Whitelaw, C Bruce A
2002-01-01
BLG/7 transgenic mice express an ovine beta-lactoglobulin transgene during lactation. Unusually, transgene expression levels in milk differ between siblings. This variable expression is due to variegated transgene expression in the mammary gland and is reminiscent of position-effect variegation. The BLG/7 line was created and maintained on a mixed CBA x C57BL/6 background. We have investigated the effect on transgene expression of backcrossing for 13 generations into these backgrounds. Variable transgene expression was observed in all populations examined, confirming that it is an inherent property of the transgene array at its site of integration. There were also strain-specific effects on transgene expression that appear to be independent of the inherent variegation. The transgene, compared to endogenous milk protein genes, is specifically susceptible to inbreeding depression. Outcrossing restored transgene expression levels to that of the parental population; thus suppression was not inherited. Finally, no generation-dependent decrease in mean expression levels was observed in the parental population. Thus, although the BLG/7 transgene is expressed in a variegated manner, there was no generation-associated accumulated silencing of transgene expression. PMID:11901126
Yocgo, Rosita E; Geza, Ephifania; Chimusa, Emile R; Mazandu, Gaston K
2017-11-23
Advances in forward and reverse genetic techniques have enabled the discovery and identification of several plant defence genes based on quantifiable disease phenotypes in mutant populations. Existing models for testing the effect of gene inactivation or genes causing these phenotypes do not take into account eventual uncertainty of these datasets and potential noise inherent in the biological experiment used, which may mask downstream analysis and limit the use of these datasets. Moreover, elucidating biological mechanisms driving the induced disease resistance and influencing these observable disease phenotypes has never been systematically tackled, eliciting the need for an efficient model to characterize completely the gene target under consideration. We developed a post-gene silencing bioinformatics (post-GSB) protocol which accounts for potential biases related to the disease phenotype datasets in assessing the contribution of the gene target to the plant defence response. The post-GSB protocol uses Gene Ontology semantic similarity and pathway dataset to generate enriched process regulatory network based on the functional degeneracy of the plant proteome to help understand the induced plant defence response. We applied this protocol to investigate the effect of the NPR1 gene silencing to changes in Arabidopsis thaliana plants following Pseudomonas syringae pathovar tomato strain DC3000 infection. Results indicated that the presence of a functionally active NPR1 reduced the plant's susceptibility to the infection, with about 99% of variability in Pseudomonas spore growth between npr1 mutant and wild-type samples. Moreover, the post-GSB protocol has revealed the coordinate action of target-associated genes and pathways through an enriched process regulatory network, summarizing the potential target-based induced disease resistance mechanism. 
This protocol can improve the characterization of the gene target and, potentially, elucidate induced defence response by more effectively utilizing available phenotype information and plant proteome functional knowledge.
"But What about the Dinosaurs?": A Response to Damien Riggs
ERIC Educational Resources Information Center
Payne, Robert
2013-01-01
This article is written in response to the work of Damien Riggs, providing a critical reading of his analysis of the inherent homophobia and gender normativity present in a selection of sexuality education websites targeted at children. Taking up the productive possibilities of both silences and narrative disruptions, the author examines a moment…
Sensing Surveillance & Navigation
2012-03-07
Removing Atmospheric Turbulence. Goal: to restore a single high-quality image from the observed sequence. Prof. Peyman... Computer Sciences: higher wavelet studies, time-scale and time-frequency transformations, Reduced Signature Targets, Low Probability of Intercept... Range-dependent beam patterns; electronic steering with frequency offsets; inherent countermeasure capability.
Byrne-Nash, Rose; Lucero, Danielle M; Osbaugh, Niki A; Melander, Roberta J; Melander, Christian; Feldheim, Daniel L
2017-07-19
The unrelenting rise of antimicrobial-resistant bacteria has necessitated the search for novel antibiotic solutions. Herein we describe further mechanistic studies on a 2.0-nm-diameter gold nanoparticle-based antibiotic (designated LAL-32). This antibiotic exhibits bactericidal activity against the Gram-negative bacterium Escherichia coli at 1.0 μM, a concentration significantly lower than several clinically available antibiotics (such as ampicillin and gentamicin), and acute treatment with LAL-32 does not give rise to spontaneous resistant mutants. LAL-32 treatment inhibits cellular division, daughter cell separation, and twin-arginine translocation (Tat) pathway dependent shuttling of proteins to the periplasm. Furthermore, we have found that the cedA gene imparts increased resistance to LAL-32, and shown that an E. coli cedA transposon mutant exhibits increased susceptibility to LAL-32. Taken together, these studies further implicate cell division pathways as the target for this nanoparticle-based antibiotic and demonstrate that there may be inherently higher barriers for resistance evolution against nanoscale antibiotics in comparison to their small molecule counterparts.
Coccanari De Fornari, Maria Antonietta; Piccione, Michele; Giampà, Alessio
2010-01-01
Within the general debate on categorical versus dimensional diagnosis, and on where neurotic and psychotic personalities should be placed among the chapters of the discipline, there is a never-ending discussion of the similarities and differences between clinical pictures classified under separate entries (consider the comings and goings from one cluster to another between schizoid and avoidant personality disorder). Another cogent discussion has focused on the nosographic criteria, aimed at a modified classification that takes dimensional rather than descriptive criteria into account. Regarding personality disorders, consider the debate on their degree of severity, assessed by criteria so dissimilar across authors that rankings differ greatly depending on the variables considered (e.g., the classifications of Kernberg and Millon). Following the established tradition that psychological study also draws contributions from literary and artistic forms, we propose, through the interpretation of literary cases, a dimensional affinity between schizoid and narcissistic disorders. The dimensions taken into account are affectivity and intersubjectivity, which are impaired in both disorders.
Validation of the use of synthetic imagery for camouflage effectiveness assessment
NASA Astrophysics Data System (ADS)
Newman, Sarah; Gilmore, Marilyn A.; Moorhead, Ian R.; Filbee, David R.
2002-08-01
CAMEO-SIM was developed as a laboratory method to assess the effectiveness of aircraft camouflage schemes. It is a physically accurate synthetic image generator, rendering in any waveband between 0.4 and 14 microns. Camouflage schemes are assessed by displaying imagery to observers under controlled laboratory conditions or by analyzing the digital image and calculating the contrast statistics between the target and background. Code verification has taken place during development. However, validation of CAMEO-SIM is essential to ensure that the imagery produced is suitable to be used for camouflage effectiveness assessment. Real world characteristics are inherently variable, so exact pixel to pixel correlation is unnecessary. For camouflage effectiveness assessment it is more important to be confident that the comparative effects of different schemes are correct, but prediction of detection ranges is also desirable. Several different tests have been undertaken to validate CAMEO-SIM for the purpose of assessing camouflage effectiveness. Simple scenes have been modeled and measured. Thermal and visual properties of the synthetic and real scenes have been compared. This paper describes the validation tests and discusses the suitability of CAMEO-SIM for camouflage assessment.
Theory of Mind Predicts Emotion Knowledge Development in Head Start Children
Seidenfeld, Adina M.; Johnson, Stacy R.; Cavadel, Elizabeth Woodburn; Izard, Carroll E.
2014-01-01
Research Findings Emotion knowledge (EK) enables children to identify emotions in themselves and others and its development facilitates emotion recognition in complex social situations. Social-cognitive processes, such as theory of mind (ToM), may contribute to developing EK by helping children realize the inherent variability associated with emotion expression across individuals and situations. The present study explored how ToM, particularly false belief understanding, in preschool predicts children’s developing EK in kindergarten. Participants were 60 3- to 5-year-old Head Start children. ToM and EK measures were obtained from standardized child tasks. ToM scores were positively related to performance on an EK task in kindergarten after controlling for preschool levels of EK and verbal ability. Exploratory analyses provided preliminary evidence that ToM serves as an indirect effect between verbal ability and EK. Practice or Policy Early intervention programs may benefit from including lessons on ToM to help promote socio-emotional learning, specifically EK. This consideration may be the most fruitful when the targeted population is at-risk. PMID:25364212
NASA Astrophysics Data System (ADS)
Athanasiadis, Panos; Gualdi, Silvio; Scaife, Adam A.; Bellucci, Alessio; Hermanson, Leon; MacLachlan, Craig; Arribas, Alberto; Materia, Stefano; Borelli, Andrea
2014-05-01
Low-frequency variability is a fundamental component of the atmospheric circulation. Extratropical teleconnections, the occurrence of blocking and the slow modulation of the jet streams and storm tracks are all different aspects of low-frequency variability. Part of the latter is attributed to the chaotic nature of the atmosphere and is inherently unpredictable. On the other hand, primarily as a response to boundary forcings, tropospheric low-frequency variability includes components that are potentially predictable. Seasonal forecasting faces the difficult task of predicting these components. Particularly in the extratropics, the current generation of seasonal forecasting systems seems to be approaching this target by realistically initializing most components of the climate system, using higher resolution and utilizing large ensemble sizes. Two seasonal prediction systems (Met-Office GloSea and CMCC-SPS-v1.5) are analyzed in terms of their representation of different aspects of extratropical low-frequency variability. The current operational Met-Office system achieves unprecedentedly high scores in predicting the winter-mean phase of the North Atlantic Oscillation (NAO, corr. 0.74 at 500 hPa) and the Pacific-N. American pattern (PNA, corr. 0.82). The CMCC system, considering its small ensemble size and coarse resolution, also achieves good scores (0.42 for NAO, 0.51 for PNA). Despite these positive features, both models suffer from biases in low-frequency variance, particularly in the N. Atlantic. Consequently, it is found that their intrinsic variability patterns (sectoral EOFs) differ significantly from those observed, and the known teleconnections are underrepresented. Regarding the representation of N. hemisphere blocking, after bias correction both systems exhibit a realistic climatology of blocking frequency.
In this assessment, instantaneous blocking and large-scale persistent blocking events are identified using daily geopotential height fields at 500 hPa. Given a documented strong relationship between high-latitude N. Atlantic blocking and the NAO, one would expect a predictive skill for the seasonal frequency of blocking comparable to that of the NAO. However, this remains elusive. Future efforts should be in the direction of reducing model biases not only in the mean but also in variability (band-passed variances).
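Instantaneous blocking detection from daily 500 hPa geopotential height fields is commonly done with a meridional gradient-reversal criterion in the spirit of Tibaldi and Molteni. The sketch below uses that conventional recipe; the reference latitudes, the -10 m/deg northern-gradient threshold, and the synthetic field are standard or illustrative choices, not necessarily the exact configuration used in this assessment:

```python
import numpy as np

def instantaneous_blocking(z500, lats, lat0=60.0, lat_s=40.0, lat_n=80.0):
    """Flag blocked longitudes in one daily Z500 field (lat x lon, metres)
    via a meridional gradient-reversal test (Tibaldi-Molteni style)."""
    i0, i_s, i_n = (int(np.argmin(np.abs(lats - p))) for p in (lat0, lat_s, lat_n))
    ghgs = (z500[i0] - z500[i_s]) / (lat0 - lat_s)  # southern gradient, m/deg
    ghgn = (z500[i_n] - z500[i0]) / (lat_n - lat0)  # northern gradient, m/deg
    return (ghgs > 0.0) & (ghgn < -10.0)            # reversal + strong decay

# Synthetic example: a zonal height field plus a high-latitude anticyclonic
# anomaly near 20E that reverses the poleward gradient, i.e. a block.
lats = np.arange(30.0, 85.0, 2.5)
lons = np.arange(0.0, 360.0, 2.5)
zonal = 5500.0 - 8.0 * (lats[:, None] - 55.0) + 0.0 * lons[None, :]
block = 400.0 * np.exp(-(((lats[:, None] - 62.0) / 8.0) ** 2
                         + ((lons[None, :] - 20.0) / 10.0) ** 2))
flags = instantaneous_blocking(zonal + block, lats)
print("blocked longitudes (deg E):", lons[flags])
```

Large-scale persistent events are then typically obtained by requiring the flag over a minimum longitude span and a minimum number of consecutive days.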
Discerning strain effects in microbial dose-response data.
Coleman, Margaret E; Marks, Harry M; Golden, Neal J; Latimer, Heejeong K
In order to estimate the risk or probability of adverse events in risk assessment, it is necessary to identify the important variables that contribute to the risk and provide descriptions of distributions of these variables for well-defined populations. One component of modeling dose response that can create uncertainty is the inherent genetic variability among pathogenic bacteria. For many microbial risk assessments, the "default" assumption used for dose response does not account for strain or serotype variability in pathogenicity and virulence, other than perhaps, recognizing the existence of avirulent strains. However, an examination of data sets from human clinical trials in which Salmonella spp. and Campylobacter jejuni strains were administered reveals significant strain differences. This article discusses the evidence for strain variability and concludes that more biologically based alternatives are necessary to replace the default assumptions commonly used in microbial risk assessment, specifically regarding strain variability.
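The contrast between a single "default" dose-response and strain-specific alternatives can be illustrated with the approximate beta-Poisson model widely used in microbial risk assessment. The parameter values below are purely illustrative stand-ins for two fitted strains, not estimates from the clinical-trial data discussed here:

```python
def beta_poisson(dose, alpha, beta):
    """Approximate beta-Poisson dose-response: probability of response at
    a given dose; alpha and beta absorb host-pathogen variability."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

# Two hypothetical strains of the same pathogen: strain variability shows
# up as different fitted parameters, not merely as avirulence.
for name, alpha, beta in [("strain A", 0.145, 7.59), ("strain B", 0.145, 50.0)]:
    p = beta_poisson(1000.0, alpha, beta)
    print(f"{name}: P(response | dose=1000) = {p:.2f}")
```

A risk assessment that pools both strains under one parameter set would misstate the risk for either strain alone, which is the article's central point.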
Xu, Dan; King, Kevin F; Liang, Zhi-Pei
2007-10-01
A new class of spiral trajectories called variable slew-rate spirals is proposed. The governing differential equations for a variable slew-rate spiral are derived, and both numeric and analytic solutions to the equations are given. The primary application of variable slew-rate spirals is peak B1 amplitude reduction in 2D RF pulse design. The reduction of peak B1 amplitude is achieved by changing the gradient slew-rate profile, and gradient amplitude and slew-rate constraints are inherently satisfied by the design of variable slew-rate spiral gradient waveforms. A design example of 2D RF pulses is given, which shows that under the same hardware constraints the RF pulse using a properly chosen variable slew-rate spiral trajectory can be much shorter than that using a conventional constant slew-rate spiral trajectory, thus having greater immunity to resonance frequency offsets.
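The gradient and slew-rate waveforms implied by any spiral angle profile theta(t) can be checked numerically, which is where the hardware constraints in the abstract come from. This sketch uses an assumed quadratic theta(t) and an Archimedean spiral k(theta) = a*theta*exp(i*theta) purely for illustration; the paper's contribution, by contrast, is solving differential equations so that the slew-rate profile itself follows a prescribed variable shape:

```python
import numpy as np

GAMMA = 4257.6  # gyromagnetic ratio for 1H, Hz/G

def spiral_waveforms(theta, dt, a=0.1):
    """For the spiral k(theta) = a*theta*exp(i*theta) (k in cycles/cm),
    compute the gradient (G/cm) and slew rate (G/cm/ms) required to
    traverse it with the given angle-vs-time samples."""
    k = a * theta * np.exp(1j * theta)
    g = np.diff(k) / (GAMMA * dt)      # gradient waveform, G/cm
    s = np.diff(g) / (dt * 1e3)        # slew-rate waveform, G/cm/ms
    return g, s

dt = 4e-6                              # gradient raster time, s
t = np.arange(0.0, 8e-3, dt)
# Assumed quadratic angle profile (four turns); the resulting peaks can be
# compared against hardware gradient and slew-rate limits.
theta = 2.0 * np.pi * 4.0 * (t / t[-1]) ** 2
g, s = spiral_waveforms(theta, dt)
print(f"peak |g| = {np.max(np.abs(g)):.2f} G/cm, "
      f"peak slew = {np.max(np.abs(s)):.1f} G/cm/ms")
```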
High-resolution three-dimensional imaging radar
NASA Technical Reports Server (NTRS)
Cooper, Ken B. (Inventor); Chattopadhyay, Goutam (Inventor); Siegel, Peter H. (Inventor); Dengler, Robert J. (Inventor); Schlecht, Erich T. (Inventor); Mehdi, Imran (Inventor); Skalare, Anders J. (Inventor)
2010-01-01
A three-dimensional imaging radar operating at high frequency (e.g., 670 GHz) is disclosed. The active target illumination inherent in radar solves the problem of low signal power and narrow-band detection by using submillimeter heterodyne mixer receivers. A submillimeter imaging radar may use low phase-noise synthesizers and a fast chirper to generate a frequency-modulated continuous-wave (FMCW) waveform. Three-dimensional images are generated through range information derived for each pixel scanned over a target. A peak finding algorithm may be used in processing for each pixel to differentiate material layers of the target. Improved focusing is achieved through a compensation signal sampled from a point-source calibration target and applied to received signals from active targets prior to FFT-based range compression to extract and display high-resolution target images. Such an imaging radar has particular application in detecting concealed weapons or contraband.
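The FFT-based range compression mentioned above follows from the standard FMCW relation R = c * f_beat / (2 * chirp slope). A minimal simulation of that step; the chirp parameters and target range below are illustrative, not the instrument's actual 670 GHz front-end values:

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def fmcw_range(beat, fs, bandwidth, t_chirp):
    """Estimate target range from one FMCW beat-note record by FFT
    range compression: R = c * f_beat / (2 * chirp slope)."""
    n = len(beat)
    spectrum = np.abs(np.fft.rfft(beat * np.hanning(n)))
    f_beat = np.fft.rfftfreq(n, 1.0 / fs)[np.argmax(spectrum)]
    slope = bandwidth / t_chirp
    return C * f_beat / (2.0 * slope)

# Simulate a point target at 4.0 m: the dechirped return is a tone whose
# frequency is proportional to range.
fs, bw, tc = 1.0e6, 10e9, 1e-3        # sample rate, bandwidth, chirp time
r_true = 4.0
f_b = 2.0 * r_true * (bw / tc) / C    # expected beat frequency, Hz
t = np.arange(int(fs * tc)) / fs
beat = np.cos(2.0 * np.pi * f_b * t)
print(f"estimated range: {fmcw_range(beat, fs, bw, tc):.2f} m")
```

The per-pixel peak-finding step in the patent corresponds to locating multiple maxima in this spectrum, one per material layer.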
Degree of target utilization influences the location of movement endpoint distributions.
Slifkin, Andrew B; Eder, Jeffrey R
2017-03-01
According to dominant theories of motor control, speed and accuracy are optimized when, on the average, movement endpoints are located at the target center and when the variability of the movement endpoint distributions is matched to the width of the target (viz., Meyer, Abrams, Kornblum, Wright, & Smith, 1988). The current study tested those predictions. According to the speed-accuracy trade-off, expanding the range of variability to the amount permitted by the limits of the target boundaries allows for maximization of movement speed while centering the distribution on the target center prevents movement errors that would have occurred had the distribution been off center. Here, participants (N=20) were required to generate 100 consecutive targeted hand movements under each of 15 unique conditions: There were three movement amplitude requirements (80, 160, 320mm) and within each there were five target widths (5, 10, 20, 40, 80mm). According to the results, it was only at the smaller target widths (5, 10mm) that movement endpoint distributions were centered on the target center and the range of movement endpoint variability matched the range specified by the target boundaries. As target width increased (20, 40, 80mm), participants increasingly undershot the target center and the range of movement endpoint variability increasingly underestimated the variability permitted by the target region. The degree of target center undershooting was strongly predicted by the difference between the size of the target and the amount of movement endpoint variability, i.e., the amount of unused space in the target. The results suggest that participants have precise knowledge of their variability relative to that permitted by the target, and they use that knowledge to systematically reduce the travel distance to targets. The reduction in travel distance across the larger target widths might have resulted in greater cost savings than those associated with increases in speed. 
Copyright © 2017. Published by Elsevier B.V.
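The optimality prediction tested in the study (endpoint variability matched to target width) can be sketched numerically. This is a minimal illustration, assuming normally distributed endpoints centered on the target; the 95% hit-rate criterion and the bisection search are illustrative assumptions, not values from the paper:

```python
import math

def hit_probability(sigma, width):
    """P(endpoint lands inside a centered target of the given width),
    assuming normally distributed endpoints centered on the target."""
    z = (width / 2.0) / sigma
    return math.erf(z / math.sqrt(2.0))  # equals 2*Phi(z) - 1

def max_sigma_for_hit_rate(width, p_hit):
    """Largest endpoint SD that still achieves p_hit -- the 'variability
    matched to target width' prediction (found by bisection)."""
    lo, hi = 1e-6, width * 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if hit_probability(mid, width) > p_hit:
            lo = mid  # still accurate enough; variability may grow
        else:
            hi = mid
    return (lo + hi) / 2.0

# Doubling target width doubles the tolerated endpoint variability,
# which is what permits faster (more variable) movements.
s10 = max_sigma_for_hit_rate(10.0, 0.95)
s20 = max_sigma_for_hit_rate(20.0, 0.95)
```

The study's finding is that participants exploit this slack only at small widths; at larger widths they instead spend the unused target space on shorter travel distances.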
The eukaryotic signal sequence, YGRL, targets the chlamydial inclusion
Kabeiseman, Emily J.; Cichos, Kyle H.; Moore, Elizabeth R.
2014-01-01
Understanding how host proteins are targeted to pathogen-specified organelles, like the chlamydial inclusion, is fundamentally important to understanding the biogenesis of these unique subcellular compartments and how they maintain autonomy within the cell. Syntaxin 6, which localizes to the chlamydial inclusion, contains a YGRL signal sequence. The YGRL functions to return syntaxin 6 to the trans-Golgi from the plasma membrane, and deletion of the YGRL signal sequence from syntaxin 6 also prevents the protein from localizing to the chlamydial inclusion. YGRL is one of three YXXL (YGRL, YQRL, and YKGL) signal sequences which target proteins to the trans-Golgi. We designed various constructs of eukaryotic proteins to test the specificity and propensity of YXXL sequences to target the inclusion. The YGRL signal sequence redirects proteins (e.g., Tgn38, furin, syntaxin 4) that normally do not localize to the chlamydial inclusion. Further, the requirement of the YGRL signal sequence for syntaxin 6 localization to inclusions formed by different species of Chlamydia is conserved. These data indicate that there is an inherent property of the chlamydial inclusion, which allows it to recognize the YGRL signal sequence. To examine whether this “inherent property” was protein or lipid in nature, we asked if deletion of the YGRL signal sequence from syntaxin 6 altered the ability of the protein to interact with proteins or lipids. Deletion or alteration of the YGRL from syntaxin 6 does not appreciably impact syntaxin 6-protein interactions, but does decrease syntaxin 6-lipid interactions. Intriguingly, data also demonstrate that YKGL or YQRL can successfully substitute for YGRL in localization of syntaxin 6 to the chlamydial inclusion. Importantly and for the first time, we are establishing that a eukaryotic signal sequence targets the chlamydial inclusion. PMID:25309881

Arachidonic Acid Metabolite as a Novel Therapeutic Target in Breast Cancer Metastasis
Borin, Thaiz F.; Angara, Kartik; Rashid, Mohammad H.; Achyut, Bhagelu R.; Arbab, Ali S.
2017-01-01
Metastatic breast cancer (BC) (also referred to as stage IV) spreads beyond the breast to the bones, lungs, liver, or brain and is a major contributor to the deaths of cancer patients. Interestingly, metastasis is a result of stroma-coordinated hallmarks such as invasion and migration of the tumor cells from the primary niche, regrowth of the invading tumor cells in the distant organs, proliferation, vascularization, and immune suppression. Targeted therapies, when used as monotherapies or combination therapies, have shown limited success in decreasing the established metastatic growth and improving survival. Thus, novel therapeutic targets are warranted to improve the metastasis outcomes. We have been actively investigating the cytochrome P450 4 (CYP4) family of enzymes that can biosynthesize 20-hydroxyeicosatetraenoic acid (20-HETE), an important signaling eicosanoid involved in the regulation of vascular tone and angiogenesis. We have shown that 20-HETE can activate several intracellular protein kinases, pro-inflammatory mediators, and chemokines in cancer. This review article is focused on understanding the role of the arachidonic acid metabolic pathway in BC metastasis with an emphasis on 20-HETE as a novel therapeutic target to decrease BC metastasis. We have discussed all the significant investigational mechanisms and put forward studies showing how 20-HETE can promote angiogenesis and metastasis, and how its inhibition could affect the metastatic niches. Potential adjuvant therapies targeting the tumor microenvironment showing anti-tumor properties against BC and its lung metastasis are discussed at the end. This review will highlight the importance of exploring tumor-inherent and stromal-inherent metabolic pathways in the development of novel therapeutics for treating BC metastasis. PMID:29292756
Breaking into the epithelial apical–junctional complex — news from pathogen hackers
Vogelmann, Roger; Amieva, Manuel R; Falkow, Stanley; Nelson, W James
2004-02-01
The epithelial apical–junctional complex is a key regulator of cellular functions. In addition, it is an important target for microbial pathogens that manipulate the cell to survive, proliferate and sometimes persist within a host. Out of a myriad of potential molecular targets, some bacterial and viral pathogens have selected a subset of protein targets at the apical–junctional complex of epithelial cells. Studying how microbes use these targets also teaches us about the inherent physiological properties of host molecules in the context of normal junctional structure and function. Thus, we have learned that three recently uncovered components of the apical–junctional complex of the Ig superfamily — junctional adhesion molecule, Nectin and the coxsackievirus and adenovirus receptor — are important regulators of junction structure and function and represent critical targets of microbial virulence gene products. PMID:15037310
Kinematic properties of the helicopter in coordinated turns
NASA Technical Reports Server (NTRS)
Chen, R. T. N.; Jeske, J. A.
1981-01-01
A study on the kinematic relationship of the variables of helicopter motion in steady, coordinated turns involving inherent sideslip is described. A set of exact kinematic equations which govern a steady coordinated helical turn about an Earth referenced vertical axis is developed. A precise definition for the load factor parameter that best characterizes a coordinated turn is proposed. Formulas are developed which relate the aircraft angular rates and pitch and roll attitudes to the turn parameters, angle of attack, and inherent sideslip. A steep, coordinated helical turn at extreme angles of attack with inherent sideslip is of primary interest. The bank angle of the aircraft can differ markedly from the tilt angle of the normal load factor. The normal load factor can also differ substantially from the accelerometer reading along the vertical body axis of the aircraft. Sideslip has a strong influence on the pitch attitude and roll rate of the helicopter. Pitch rate is independent of angle of attack in a coordinated turn and in the absence of sideslip, angular rates about the stability axes are independent of the aerodynamic characteristics of the aircraft.
Trial-to-trial adaptation in control of arm reaching and standing posture
Pienciak-Siewert, Alison; Horan, Dylan P.; Ahmed, Alaa A.
2016-01-01
Classical theories of motor learning hypothesize that adaptation is driven by sensorimotor error; this is supported by studies of arm and eye movements that have shown that trial-to-trial adaptation increases with error. Studies of postural control have shown that anticipatory postural adjustments increase with the magnitude of a perturbation. However, differences in adaptation have been observed between the two modalities, possibly due to either the inherent instability or sensory uncertainty in standing posture. Therefore, we hypothesized that trial-to-trial adaptation in posture should be driven by error, similar to what is observed in arm reaching, but the nature of the relationship between error and adaptation may differ. Here we investigated trial-to-trial adaptation of arm reaching and postural control concurrently; subjects made reaching movements in a novel dynamic environment of varying strengths, while standing and holding the handle of a force-generating robotic arm. We found that error and adaptation increased with perturbation strength in both arm and posture. Furthermore, in both modalities, adaptation showed a significant correlation with error magnitude. Our results indicate that adaptation scales proportionally with error in the arm and near proportionally in posture. In posture only, adaptation was not sensitive to small error sizes, which were similar in size to errors experienced in unperturbed baseline movements due to inherent variability. This finding may be explained as an effect of uncertainty about the source of small errors. Our findings suggest that in rehabilitation, postural error size should be considered relative to the magnitude of inherent movement variability. PMID:27683888
Soil carbon storage estimation in a forested watershed using quantitative soil-landscape modeling
James A. Thompson; Randall K. Kolka
2005-01-01
Carbon storage in soils is important to forest ecosystems. Moreover, forest soils may serve as important C sinks for ameliorating excess atmospheric CO2. Spatial estimates of soil organic C (SOC) storage have traditionally relied upon soil survey maps and laboratory characterization data. This approach does not account for inherent variability...
USDA-ARS?s Scientific Manuscript database
One of the most important and least understood properties of carbohydrates is their conformational profile in solution. The study of carbohydrates in solution is a most difficult computational problem, a result of the many soft conformational variables (hydroxyl groups) inherent in the structures of...
Lack of Precision of Burn Surface Area Calculation by UK Armed Forces Medical Personnel
2014-03-01
computer screen or tablet, and therefore the variability in perception and representation inherent in having a human assess and draw the burn remains... Potential solutions to this source of error include 3D MRI and TeraHertz scanning technologies [40], but at the time of writing, these are not yet
Save money by understanding variance and tolerancing.
Stuart, K
2007-01-01
Manufacturing processes are inherently variable, which results in component and assembly variance. Unless process capability, variance and tolerancing are fully understood, incorrect design tolerances may be applied, which will lead to more expensive tooling, inflated production costs, high reject rates, product recalls and excessive warranty costs. A methodology is described for correctly allocating tolerances and performing appropriate analyses.
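The cost argument rests on the gap between worst-case and statistical tolerance stack-up, which a few lines make concrete (the component values below are illustrative):

```python
import math

def stack_tolerances(tols):
    """Compare worst-case and statistical (root-sum-square) assembly
    tolerance for a linear stack of independent component tolerances."""
    worst_case = sum(tols)
    rss = math.sqrt(sum(t * t for t in tols))
    return worst_case, rss

# Four components at +/-0.1 mm each: worst case 0.4 mm, RSS 0.2 mm.
# Designing to the statistical stack permits looser, cheaper component
# tolerances for the same assembly tolerance.
wc, rss = stack_tolerances([0.1, 0.1, 0.1, 0.1])
```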
USDA-ARS?s Scientific Manuscript database
In the Southwest US, the southern Rocky Mountains provide a significant orographic barrier to prevailing moisture-laden Westerly winds, which results in snow accumulation and melt, both vitally important to the region’s water resources. The inherent variability of meteorological conditions in the Sou...
John B. Bradford; Peter Weishampel; Marie-Louise Smith; Randall Kolka; Richard A. Birdsey; Scott V. Ollinger; Michael G. Ryan
2010-01-01
Assessing forest carbon storage and cycling over large areas is a growing challenge that is complicated by the inherent heterogeneity of forest systems. Field measurements must be conducted and analyzed appropriately to generate precise estimates at scales large enough for mapping or comparison with remote sensing data. In this study we examined...
Section Height Determination Methods of the Isotopographic Surface in a Complex Terrain Relief
ERIC Educational Resources Information Center
Syzdykova, Guldana D.; Kurmankozhaev, Azimhan K.
2016-01-01
A new method for determining the vertical interval of isotopographic surfaces on rugged terrain was developed. The method is based on the concept of determining the differentiated size of the vertical interval using spatial-statistical properties inherent in the modal characteristic, the degree of variability of apical heights and the chosen map…
USDA-ARS?s Scientific Manuscript database
We evaluated the influence of pack stock (i.e., horse and mule) use on meadow plant communities in Sequoia and Yosemite National Parks in the Sierra Nevada mountains of California. Meadows were sampled to account for inherent variability across multiple scales by: 1) controlling for among-meadow var...
The E-Portfolio Continuum: Discovering Variables for E-Portfolio Adoption within Music Education
ERIC Educational Resources Information Center
Taylor, John; Dunbar-Hall, Peter; Rowley, Jennifer
2012-01-01
This article presents the results of audit data compiled from a case study introducing e-portfolios into a Music Education degree program, and highlights the key challenges faced from the initial stages of student use to curricular embedding and student adoption. This article discusses the technological, social and educational impacts inherent in…
Variable selection for marginal longitudinal generalized linear models.
Cantoni, Eva; Flemming, Joanna Mills; Ronchetti, Elvezio
2005-06-01
Variable selection is an essential part of any statistical analysis and yet has been somewhat neglected in the context of longitudinal data analysis. In this article, we propose a generalized version of Mallows' Cp (GCp) suitable for use with both parametric and nonparametric models. GCp provides an estimate of a measure of a model's adequacy for prediction. We examine its performance with popular marginal longitudinal models (fitted using GEE) and contrast results with what is typically done in practice: variable selection based on Wald-type or score-type tests. An application to real data further demonstrates the merits of our approach while at the same time emphasizing some important robust features inherent to GCp.
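GCp generalizes the classical Mallows Cp. As a point of reference, here is a minimal sketch of the classical criterion for ordinary least squares on synthetic data (not the paper's GEE setting); submodels with Cp near the number of parameters are adequate for prediction:

```python
import numpy as np

def mallows_cp(X_full, y, cols):
    """Classical Mallows Cp for the submodel using the given columns,
    with the error variance estimated from the full model."""
    n, p_full = X_full.shape
    def sse(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)
    s2 = sse(X_full) / (n - p_full)  # full-model variance estimate
    p = len(cols)
    return sse(X_full[:, cols]) / s2 - n + 2 * p

rng = np.random.default_rng(0)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
y = X @ np.array([1.0, 2.0, 0.0, 0.0]) + rng.normal(size=n)
# Identity check: for the full model, Cp equals the parameter count.
cp_full = mallows_cp(X, y, [0, 1, 2, 3])
```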
Applications of Geostatistics in Plant Nematology
Wallace, M. K.; Hawkins, D. M.
1994-01-01
Geostatistics was applied to plant nematology by evaluating soil and nematode data acquired from 200 soil samples collected from the Ap horizon of a reed canary-grass field in northern Minnesota. Geostatistical concepts relevant to nematology include semi-variogram modelling, kriging, and change of support calculations. Soil and nematode data generally followed a spherical semi-variogram model, with little random variability associated with soil data and large inherent variability for nematode data. Block kriging of soil and nematode data provided useful contour maps of the data. Change of support calculations indicated that most of the random variation in nematode data was due to short-range spatial variability in the nematode population densities. PMID:19279938
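The spherical semi-variogram model referred to above has a standard closed form; the nugget, partial-sill, and range values below are illustrative, not the ones fitted in the study:

```python
def spherical_semivariogram(h, nugget, partial_sill, range_):
    """Spherical semi-variogram model: nugget effect at the origin,
    rising to the total sill (nugget + partial sill) at the range,
    constant beyond it."""
    if h <= 0:
        return 0.0
    if h >= range_:
        return nugget + partial_sill
    x = h / range_
    return nugget + partial_sill * (1.5 * x - 0.5 * x ** 3)

# Nematode-like case: a large nugget reflecting inherent short-range
# variability relative to the spatially structured component.
gamma_near = spherical_semivariogram(1.0, nugget=0.6, partial_sill=0.4, range_=10.0)
gamma_far = spherical_semivariogram(25.0, nugget=0.6, partial_sill=0.4, range_=10.0)
```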
NASA Technical Reports Server (NTRS)
Cetinkunt, Sabri; Book, Wayne J.
1990-01-01
The performance limitations of manipulators under joint variable-feedback control are studied as a function of the mechanical flexibility inherent in the manipulator structure. A finite-dimensional time-domain dynamic model of a two-link two-joint planar manipulator is used in the study. Emphasis is placed on determining the limitations of control algorithms that use only joint variable-feedback information in calculations of control decisions, since most motion control systems in practice are of this kind. Both fine and gross motion cases are studied. Results for fine motion agree well with previously reported results in the literature and are also helpful in explaining the performance limitations in fast gross motions.
A robust close-range photogrammetric target extraction algorithm for size and type variant targets
NASA Astrophysics Data System (ADS)
Nyarko, Kofi; Thomas, Clayton; Torres, Gilbert
2016-05-01
The Photo-G program conducted by Naval Air Systems Command at the Atlantic Test Range in Patuxent River, Maryland, uses photogrammetric analysis of large amounts of real-world imagery to characterize the motion of objects in a 3-D scene. Current approaches involve several independent processes including target acquisition, target identification, 2-D tracking of image features, and 3-D kinematic state estimation. Each process has its own inherent complications and corresponding degrees of both human intervention and computational complexity. One approach being explored for automated target acquisition relies on exploiting the pixel intensity distributions of photogrammetric targets, which tend to be patterns with bimodal intensity distributions. The bimodal distribution partitioning algorithm utilizes this distribution to automatically deconstruct a video frame into regions of interest (ROI) that are merged and expanded to target boundaries, from which ROI centroids are extracted to mark target acquisition points. This process has proved to be scale, position and orientation invariant, as well as fairly insensitive to global uniform intensity disparities.
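The bimodal-distribution partitioning idea can be illustrated with Otsu's classical threshold-selection method, used here as a standard stand-in (the program's actual partitioning algorithm may differ):

```python
def otsu_threshold(histogram):
    """Otsu's method: choose the intensity threshold that maximizes the
    between-class variance of a (bimodal) histogram. Bins 0..t form the
    first class, the remaining bins the second."""
    total = float(sum(histogram))
    grand = sum(i * c for i, c in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t, count in enumerate(histogram[:-1]):
        w0 += count
        cum += t * count
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum / w0                 # mean of the lower class
        m1 = (grand - cum) / w1       # mean of the upper class
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Two well-separated intensity clusters: the threshold lands at the
# upper edge of the first cluster.
split = otsu_threshold([0, 0, 5, 10, 5, 0, 0, 0, 0, 0, 0, 5, 10, 5, 0, 0])
```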
ERIC Educational Resources Information Center
Hopko, D. R.; Robertson, S. M. C.; Colman, L.
2008-01-01
In recent years there has been increased focus on evaluating the efficacy of psychosocial interventions for cancer patients. Among the several limitations inherent to these programs of research, few studies have targeted patients with well-diagnosed clinical depression and little is known about factors that best predict treatment outcome and…
ERIC Educational Resources Information Center
Wharton, Tracy; Alexander, Neil
2013-01-01
This article describes lessons learned about implementing evaluations in hospital settings. In order to overcome the methodological dilemmas inherent in this environment, we used a practical participatory evaluation (P-PE) strategy to engage as many stakeholders as possible in the process of evaluating a clinical demonstration project.…
ERIC Educational Resources Information Center
Cross, Terry L.; And Others
This monograph provides a philosophical framework and practical ideas for improving service delivery to children of color who are severely emotionally disturbed. The monograph targets four sociocultural groups (African Americans, Asian Americans, Hispanic Americans, and Native Americans). The document emphasizes the cultural strengths inherent in…
Heng, Boon Chin; Cao, Tong
2005-01-01
Over the past decade, there has been growing interest in the use of antibodies against intracellular targets. This is currently achieved through recombinant expression of the single chain variable fragment (scFv) antibody format within the cell, which is commonly referred to as an intrabody. This possesses a number of inherent advantages over RNA interference (iRNA). Firstly, the high specificity and affinity of intrabodies to target antigens is well-established, whereas iRNA has been frequently shown to exert multiple non-specific effects. Secondly, intrabodies, being proteins, possess a much longer active half-life compared to iRNA. Thirdly, when the active half-life of the intracellular target molecule is long, gene silencing through iRNA would be slow to yield any effect, whereas the effects of intrabody expression would be almost instantaneous. Lastly, it is possible to design intrabodies to block certain binding interactions of a particular target molecule, while sparing others. There are, however, various technical challenges associated with intrabody expression through the application of recombinant DNA technology. In particular, protein conformational folding and structural stability of the newly-synthesized intrabody within the cell is affected by reducing conditions of the intracellular environment. Also, there are overwhelming safety concerns surrounding the application of transfected recombinant DNA in human clinical therapy, which is required to achieve intrabody expression within the cell. Of particular concern are the various viral-based vectors that are commonly used in genetic manipulation. A novel approach around these problems would be to look at the possibility of fusing protein transduction domains (PTD) to scFv antibodies, to create a 'cell-permeable' antibody or 'Transbody'. 
PTD are short peptide sequences that enable proteins to translocate across the cell membrane and be internalized within the cytosol, through atypical secretory and internalization pathways. There are a number of distinct advantages that a 'Transbody' would possess over conventional intrabodies expressed within the cell. For a start, 'correct' conformational folding and disulfide bond formation can take place prior to introduction into the target cell. More importantly, the use of cell-permeable antibodies or 'Transbodies' would avoid the overwhelming safety and ethical concerns surrounding the direct application of recombinant DNA technology in human clinical therapy, which is required for intrabody expression within the cell. 'Transbodies' introduced into the cell would possess only a limited active half-life, without resulting in any permanent genetic alteration. This would allay any safety concerns with regards to their application in human clinical therapy.
Optimal placement of actuators and sensors in control augmented structural optimization
NASA Technical Reports Server (NTRS)
Sepulveda, A. E.; Schmit, L. A., Jr.
1990-01-01
A control-augmented structural synthesis methodology is presented in which actuator and sensor placement is treated in terms of (0,1) variables. Structural member sizes and control variables are treated simultaneously as design variables. A multiobjective utopian approach is used to obtain a compromise solution for inherently conflicting objective functions such as structural mass, control effort, and number of actuators. Constraints are imposed on transient displacements, natural frequencies, actuator forces, and dynamic stability, as well as controllability and observability of the system. The combinatorial aspects of the mixed-(0,1) continuous variable design optimization problem are made tractable by combining approximation concepts with branch and bound techniques. Some numerical results for example problems are presented to illustrate the efficacy of the design procedure set forth.
Ellingson, Roger M.; Gallun, Frederick J.; Bock, Guillaume
2015-01-01
It can be problematic to measure stationary acoustic sound pressure level in any environment when the target level approaches or lies below the minimum measurable sound pressure level of the measurement system itself. This minimum measurable level, referred to as the inherent measurement system noise floor, is generally established by noise emission characteristics of measurement system components such as microphones, preamplifiers, and other system circuitry. In this paper, methods are presented and shown to be accurate for measuring stationary levels within 20 dB above or below this system noise floor. Methodology includes (1) measuring inherent measurement system noise, (2) subtractive, energy-based adjustment of levels affected by the system noise floor, and (3) verifying the accuracy of the inherent-noise adjustment technique. While generalizable to other purposes, the techniques presented here were specifically developed to quantify ambient noise levels in very quiet rooms used to evaluate free-field human hearing thresholds. Results obtained applying the methods to objectively measure and verify the ambient noise level in an extremely quiet room, using various measurement system noise floors and analysis bandwidths, are presented and discussed. The verified results demonstrate that the adjustment method can accurately extend measurement range to 20 dB below the measurement system noise floor, and how measurement system frequency bandwidth can affect accuracy of reported noise levels. PMID:25786932
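The subtractive, energy-based adjustment in step (2) amounts to subtracting mean-square pressures in linear units and converting back to decibels. A minimal sketch with illustrative levels:

```python
import math

def db_sum(*levels):
    """Energy (mean-square) sum of sound pressure levels in dB."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels))

def noise_adjusted_level(measured_db, system_noise_db):
    """Subtractive energy-based adjustment: remove the measurement
    system's inherent noise-floor contribution from a measured level."""
    diff = 10.0 ** (measured_db / 10.0) - 10.0 ** (system_noise_db / 10.0)
    if diff <= 0:
        raise ValueError("measured level does not exceed system noise")
    return 10.0 * math.log10(diff)

# A -10 dB ambient level combined with a 0 dB noise floor measures
# about 0.41 dB; the adjustment recovers the ambient level even though
# it lies below the floor.
measured = db_sum(-10.0, 0.0)
ambient = noise_adjusted_level(measured, 0.0)
```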
Boahen, Kwabena
2013-01-01
A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436
System properties, feedback control and effector coordination of human temperature regulation.
Werner, Jürgen
2010-05-01
The aim of human temperature regulation is to protect body processes by establishing a relative constancy of deep body temperature (regulated variable), in spite of external and internal influences on it. This is basically achieved by a distributed multi-sensor, multi-processor, multi-effector proportional feedback control system. The paper explains why proportional control implies inherent deviations of the regulated variable from the value in the thermoneutral zone. The concept of feedback of the thermal state of the body, conveniently represented by a high-weighted core temperature (Tc) and low-weighted peripheral temperatures (Ts), is equivalent to the control concept of "auxiliary feedback control", using a main (regulated) variable (Tc), supported by an auxiliary variable (Ts). This concept implies neither regulation of Ts nor feedforward control. Steady-states result in the closed control-loop, when the open-loop properties of the (heat transfer) process are compatible with those of the thermoregulatory processors. They are called operating points or balance points and are achieved due to the inherent property of dynamical stability of the thermoregulatory feedback loop. No set-point and no comparison of signals (e.g. actual-set value) are necessary. Metabolic heat production and sweat production, though receiving the same information about the thermal state of the body, are independent effectors with different thresholds and gains. Coordination between one of these effectors and the vasomotor effector is achieved by the fact that changes in the (heat transfer) process evoked by vasomotor control are taken into account by the metabolic/sweat processor.
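Two elements of this control framework can be sketched in a few lines: the weighted core/skin feedback signal, and the steady-state offset inherent to pure proportional control. The weights, reference values, and gain below are illustrative assumptions, not the paper's fitted parameters:

```python
def thermal_error(t_core, t_skin, w_core=0.8, w_skin=0.2,
                  t_core_ref=37.0, t_skin_ref=34.0):
    """Auxiliary-feedback signal: high-weighted core temperature plus
    low-weighted skin temperature (weights and references illustrative)."""
    return w_core * (t_core - t_core_ref) + w_skin * (t_skin - t_skin_ref)

def steady_state_deviation(open_loop_shift, loop_gain):
    """Offset left by pure proportional feedback at steady state:
    deviation = open-loop shift / (1 + loop gain). This is the inherent
    deviation from the thermoneutral value noted in the abstract."""
    return open_loop_shift / (1.0 + loop_gain)

# A cold load that would shift core temperature 1.0 K open loop leaves
# a residual 0.1 K deviation at a loop gain of 9: reduced, not cancelled,
# so no set-point comparison is needed to reach a balance point.
residual = steady_state_deviation(1.0, 9.0)
```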
Reliability of COPVs Accounting for Margin of Safety on Design Burst
NASA Technical Reports Server (NTRS)
Murthy, Pappu L.N.
2012-01-01
In this paper, the stress rupture reliability of Carbon/Epoxy Composite Overwrapped Pressure Vessels (COPVs) is examined utilizing the classic Phoenix model and accounting for the differences between the design and the actual burst pressure, and the liner contribution effects. Stress rupture life primarily depends upon the fiber stress ratio, which is defined as the ratio of stress in fibers at the maximum expected operating pressure to actual delivered fiber strength. The actual delivered fiber strength is calculated using the actual burst pressures of vessels established through burst tests. However, during the design phase the actual burst pressure is generally not known, and to estimate the reliability of the vessels, calculations are usually performed based upon the design burst pressure only. Since the design burst is lower than the actual burst, this process yields a much higher value for the stress ratio and consequently a conservative estimate for the reliability. Other complications arise due to the fact that the actual burst pressure and the liner contributions have inherent variability and therefore must be treated as random variables in order to compute the stress rupture reliability. Furthermore, the model parameters, which have to be established based on stress rupture tests of subscale vessels or coupons, have significant variability as well due to limited available data and hence must be properly accounted for. In this work an assessment of reliability of COPVs including both parameter uncertainties and physical variability inherent in liner and overwrap material behavior is made and estimates are provided in terms of degree of uncertainty in the actual burst pressure and the liner load sharing.
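The central point can be sketched with a small Monte Carlo: treat actual burst pressure and liner contribution as random variables, then compare against a stress ratio computed from the (lower) design burst. The normal distributions and all parameter values below are illustrative assumptions, not the paper's Phoenix-model formulation:

```python
import random
import statistics

def stress_ratio_samples(meop, burst_mean, burst_sd, liner_mean, liner_sd,
                         n=20_000, seed=1):
    """Monte Carlo sketch of the fiber stress ratio: fiber load at the
    maximum expected operating pressure (MEOP) over fiber load at actual
    burst, with the liner carrying part of the pressure. Burst pressure
    and liner contribution are treated as (illustrative) normal random
    variables."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        burst = rng.gauss(burst_mean, burst_sd)
        liner = rng.gauss(liner_mean, liner_sd)
        out.append((meop - liner) / (burst - liner))
    return out

samples = stress_ratio_samples(meop=10.0, burst_mean=20.0, burst_sd=1.0,
                               liner_mean=1.0, liner_sd=0.2)
# Using the lower design burst (here 15.0) instead of the actual burst
# distribution yields a higher stress ratio, hence a conservative
# (lower) reliability estimate.
design_ratio = (10.0 - 1.0) / (15.0 - 1.0)
```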
Composite Wavelet Filters for Enhanced Automated Target Recognition
NASA Technical Reports Server (NTRS)
Chiang, Jeffrey N.; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin
2012-01-01
Automated Target Recognition (ATR) systems aim to automate target detection, recognition, and tracking. The current project applies a JPL ATR system to low-resolution sonar and camera videos taken from unmanned vehicles. These sonar images are inherently noisy and difficult to interpret, and pictures taken underwater are unreliable due to murkiness and inconsistent lighting. The ATR system breaks target recognition into three stages: 1) Videos of both sonar and camera footage are broken into frames and preprocessed to enhance images and detect Regions of Interest (ROIs). 2) Features are extracted from these ROIs in preparation for classification. 3) ROIs are classified as true or false positives using a standard Neural Network based on the extracted features. Several preprocessing, feature extraction, and training methods are tested and discussed in this paper.
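The three-stage pipeline can be sketched as follows (the detector, features, and classifier here are trivial placeholders, not the JPL system's actual algorithms):

```python
# Toy sketch of the three-stage ATR pipeline described above.

def preprocess(frame):
    """Stage 1: enhance the frame and return candidate Regions of Interest.
    Placeholder: any pixel above a threshold becomes a 1x1 ROI."""
    return [(r, c) for r, row in enumerate(frame)
            for c, v in enumerate(row) if v > 0.5]

def extract_features(frame, roi):
    """Stage 2: extract features for one ROI (here, just its intensity)."""
    r, c = roi
    return [frame[r][c]]

def classify(features, threshold=0.8):
    """Stage 3: stand-in for the neural network: true/false positive."""
    return features[0] > threshold

frame = [[0.1, 0.9], [0.6, 0.2]]
rois = preprocess(frame)
hits = [roi for roi in rois if classify(extract_features(frame, roi))]
```

The point of the structure is that each stage can be swapped out independently, which is how the paper tests several preprocessing, feature extraction, and training methods.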
NASA Astrophysics Data System (ADS)
Greczynski, G.; Mráz, S.; Schneider, J. M.; Hultman, L.
2018-02-01
The nitride layer formed in the target race track during the deposition of stoichiometric TiN thin films is a factor of 2.5 thicker for high power impulse magnetron sputtering (HIPIMS) than for conventional dc magnetron sputtering (DCMS). The phenomenon is explained using x-ray photoelectron spectroscopy analysis of the as-operated Ti target surface chemistry supported by sputter depth profiles, dynamic Monte Carlo simulations employing the TRIDYN code, and plasma chemical investigations by ion mass spectrometry. The target chemistry and the thickness of the nitride layer are found to be determined by the implantation of nitrogen ions, predominantly N+ and N2+ for HIPIMS and DCMS, respectively. Knowledge of this method-inherent difference enables robust processing of high-quality functional coatings.
Plasticity, Variability and Age in Second Language Acquisition and Bilingualism
Birdsong, David
2018-01-01
Much of what is known about the outcome of second language acquisition and bilingualism can be summarized in terms of inter-individual variability, plasticity and age. The present review looks at variability and plasticity with respect to their underlying sources, and at age as a modulating factor in variability and plasticity. In this context we consider critical period effects vs. bilingualism effects, early and late bilingualism, nativelike and non-nativelike L2 attainment, cognitive aging, individual differences in learning, and linguistic dominance in bilingualism. Non-uniformity is an inherent characteristic of both early and late bilingualism. This review shows how plasticity and age connect with biological and experiential sources of variability, and underscores the value of research that reveals and explains variability. In these ways the review suggests how plasticity, variability and age conspire to frame fundamental research issues in L2 acquisition and bilingualism, and provides points of reference for discussion of the present Frontiers in Psychology Research Topic. PMID:29593590
Vorrink, Sabine U.; Ullah, Shahid; Schmidt, Staffan; Nandania, Jatin; Velagapudi, Vidya; Beck, Olof; Ingelman-Sundberg, Magnus; Lauschke, Volker M.
2017-01-01
Adverse reactions or lack of response to medications are important concerns for drug development programs. However, faithful predictions of drug metabolism and toxicity are difficult because animal models show only limited translatability to humans. Furthermore, current in vitro systems, such as hepatic cell lines or primary human hepatocyte (PHH) 2-dimensional (2D) monolayer cultures, can be used only for acute toxicity tests because of their immature phenotypes and inherent instability. Therefore, the migration to novel phenotypically stable models is of prime importance for the pharmaceutical industry. Novel 3-dimensional (3D) culture systems have been shown to accurately mimic in vivo hepatic phenotypes on transcriptomic and proteomic level, but information about their metabolic stability is lacking. Using a combination of targeted and untargeted high-resolution mass spectrometry, we found that PHHs in 3D spheroid cultures remained metabolically stable for multiple weeks, whereas metabolic patterns of PHHs from the same donors cultured as conventional 2D monolayers rapidly deteriorated. Furthermore, pharmacokinetic differences between donors were maintained in 3D spheroid cultures, enabling studies of interindividual variability in drug metabolism and toxicity. We conclude that the 3D spheroid system is metabolically stable and constitutes a suitable model for in vitro studies of long-term drug metabolism and pharmacokinetics.—Vorrink, S. U., Ullah, S., Schmid, S., Nandania, J., Velagapudi, V., Beck, O., Ingelman-Sundberg, M., Lauschke, V. M. Endogenous and xenobiotic metabolic stability of primary human hepatocytes in long-term 3D spheroid cultures revealed by a combination of targeted and untargeted metabolomics. PMID:28264975
Inherently safe in situ uranium recovery
Krumhansl, James L; Brady, Patrick V
2014-04-29
An in situ recovery of uranium operation involves circulating reactive fluids through an underground uranium deposit. These fluids contain chemicals that dissolve the uranium ore. Uranium is recovered from the fluids after they are pumped back to the surface. Chemicals used to accomplish this include complexing agents that are organic, readily degradable, and/or have a predictable lifetime in an aquifer. Efficiency is increased through development of organic agents targeted to complexing tetravalent uranium rather than hexavalent uranium. The operation provides for in situ immobilization of some oxy-anion pollutants under oxidizing conditions as well as reducing conditions. The operation also artificially reestablishes reducing conditions on the aquifer after uranium recovery is completed. With the ability to have the impacted aquifer reliably remediated, the uranium recovery operation can be considered inherently safe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prakash, A.; Song, J.; Hwang, H.
In order to obtain reliable multilevel cell (MLC) characteristics, resistance controllability between the different resistance levels is required, especially in resistive random access memory (RRAM), which is prone to resistance variability mainly due to the intrinsic random nature of defect generation and filament formation. In this study, we have thoroughly investigated the multilevel resistance variability in a TaOx-based nanoscale (<30 nm) RRAM operated in MLC mode. It is found that the resistance variability not only depends on the conductive filament size but also is a strong function of the oxygen vacancy concentration in it. Based on the insights gained through experimental observations and simulation, it is suggested that forming a thinner but denser conductive filament may greatly improve the temporal resistance variability even at low operation current, despite the inherent stochastic nature of the resistance switching process.
Enhancing and targeting nucleic acid delivery by magnetic force.
Plank, Christian; Anton, Martina; Rudolph, Carsten; Rosenecker, Joseph; Krötz, Florian
2003-08-01
Insufficient contact of inherently highly active nucleic acid delivery systems with target cells is a primary reason for their often observed limited efficacy. Physical methods of targeting can overcome this limitation and reduce the risk of undesired side effects due to non-target site delivery. The authors and others have developed a novel means of physical targeting, exploiting magnetic force acting on nucleic acid vectors associated with magnetic particles in order to mediate the rapid contact of vectors with target cells. Here, the principles of magnetic drug and nucleic acid delivery are reviewed, and the facts and potentials of the technique for research and therapeutic applications are discussed. Magnetically enhanced nucleic acid delivery - magnetofection - is universally applicable to viral and non-viral vectors, is extraordinarily rapid, simple and yields saturation level transfection at low dose in vitro. The method is useful for site-specific vector targeting in vivo. Exploiting the full potential of the technique requires an interdisciplinary research effort in magnetic field physics, magnetic particle chemistry, pharmaceutical formulation and medical application.
A manifold learning approach to target detection in high-resolution hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ziemann, Amanda K.
Imagery collected from airborne platforms and satellites provides an important medium for remotely analyzing the content in a scene. In particular, the ability to detect a specific material within a scene is of high importance to both civilian and defense applications. This may include identifying "targets" such as vehicles, buildings, or boats. Sensors that process hyperspectral images provide the high-dimensional spectral information necessary to perform such analyses. However, for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m << d. In the remote sensing community, this has led to a recent increase in the use of manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data; this is particularly true when implementing traditional target detection approaches, and the limitations of these models are well-documented. With manifold learning based approaches, the only assumption is that the data reside on an underlying manifold that can be discretely modeled by a graph. The research presented here focuses on the use of graph theory and manifold learning in hyperspectral imagery. Early work explored various graph-building techniques with application to the background model of the Topological Anomaly Detection (TAD) algorithm, which is a graph theory based approach to anomaly detection. This led towards a focus on target detection, and to the development of a specific graph-based model of the data and subsequent dimensionality reduction using manifold learning. An adaptive graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE).
We artificially induce a target manifold and incorporate it into the adaptive LLE transformation; the artificial target manifold helps to guide the separation of the target data from the background data in the new, lower-dimensional manifold coordinates. Then, target detection is performed in the manifold space.
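The graph-building step can be illustrated with a minimal k-nearest-neighbor graph on toy "spectra" (the thesis uses an adaptive graph, which this sketch does not reproduce):

```python
import math

# Sketch: discretely model the data manifold with a k-nearest-neighbor
# graph, as in the graph-based manifold learning approach described above.
# The points below stand in for pixel spectra (toy data, 2 "bands").

def knn_graph(X, k=2):
    """Return a symmetric edge set linking each point to its k nearest
    neighbors by Euclidean distance."""
    n = len(X)
    nbrs = {i: sorted(range(n), key=lambda j: math.dist(X[i], X[j]))[1:k + 1]
            for i in range(n)}          # index 0 of the sort is the point itself
    edges = {(i, j) for i, js in nbrs.items() for j in js}
    return edges | {(j, i) for (i, j) in edges}   # symmetrize

X = [(0.0, 0.1), (0.1, 0.0), (0.9, 1.0), (1.0, 0.9), (0.5, 0.5), (0.45, 0.55)]
A = knn_graph(X, k=2)
```

In an LLE-style method, this graph fixes each point's neighborhood, and the reconstruction weights over those neighbors define the lower-dimensional embedding.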
The Creation of Inequality: Myths of Potential and Ability
ERIC Educational Resources Information Center
Dorling, Danny; Tomlinson, Sally
2016-01-01
The old myth about the ability and variability of potential in children is a comforting myth, for those who are uneasy with the degree of inequality they see and would rather seek to justify it than confront it. The myth of inherent potential helps some explain to themselves why they are privileged. Extend the myth to believe in inherited ability…
Making statistical inferences about software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1988-01-01
Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model, inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.
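The model can be illustrated by simulation (rates are invented; one exponential is drawn per fault, and the sorted draws play the role of the successive observed failure times):

```python
import random

# Sketch of the model described: failure times behave like order statistics
# of independent, non-identically distributed exponential random variables.

def simulate_failure_times(rates, seed=0):
    """Draw one exponential variate per fault and sort the draws; the sorted
    values are the successive observed failure times."""
    rng = random.Random(seed)
    draws = [rng.expovariate(lam) for lam in rates]
    return sorted(draws)

rates = [5.0, 3.0, 2.0, 1.0, 0.5]   # one rate per fault (invented values)
times = simulate_failure_times(rates)
# Faults with small rates tend to surface late, which is why demonstrating
# very high reliability statistically requires impractically long testing.
```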
Impacts of wildfire on runoff and sediment loads at Little Granite Creek, western Wyoming
Sandra E. Ryan; Kathleen A. Dwire; Mark K. Dixon
2011-01-01
Baseline data on rates of sediment transport provide useful information on the inherent variability of stream processes and may be used to assess departure in channel form or process from disturbances. In August 2000, wildfire burned portions of the Little Granite Creek watershed near Bondurant, WY where bedload and suspended sediment measurements had been collected...
Justin S. Crotteau; Martin W. Ritchie; J. Morgan Varner
2014-01-01
Many western USA fire regimes are typified by mixed-severity fire, which compounds the variability inherent to natural regeneration densities in associated forests. Tree regeneration data are often discrete and nonnegative; accordingly, we fit a series of Poisson and negative binomial variation models to conifer seedling counts across four distinct burn severities and...
Impact of Classroom Design on Teacher Pedagogy and Student Engagement and Performance in Mathematics
ERIC Educational Resources Information Center
Imms, Wesley; Byers, Terry
2017-01-01
A resurgence in interest in classroom and school design has highlighted how little we know about the impact of learning environments on student and teacher performance. This is partly because of a lack of research methods capable of controlling the complex variables inherent to space and education. In a unique study that overcame such difficulties…
Bollen, Kenneth A
2007-06-01
R. D. Howell, E. Breivik, and J. B. Wilcox (2007) have argued that causal (formative) indicators are inherently subject to interpretational confounding. That is, they have argued that using causal (formative) indicators leads the empirical meaning of a latent variable to be other than that assigned to it by a researcher. Their critique of causal (formative) indicators rests on several claims: (a) A latent variable exists apart from the model when there are effect (reflective) indicators but not when there are causal (formative) indicators, (b) causal (formative) indicators need not have the same consequences, (c) causal (formative) indicators are inherently subject to interpretational confounding, and (d) a researcher cannot detect interpretational confounding when using causal (formative) indicators. This article shows that each claim is false. Rather, interpretational confounding is more a problem of structural misspecification of a model combined with an underidentified model that leaves these misspecifications undetected. Interpretational confounding does not occur if the model is correctly specified whether a researcher has causal (formative) or effect (reflective) indicators. It is the validity of a model not the type of indicator that determines the potential for interpretational confounding. Copyright 2007 APA, all rights reserved.
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Mahanama, Sarith P.
2012-01-01
The inherent soil moisture-evaporation relationships used in today's land surface models (LSMs) arguably reflect a lot of guesswork given the lack of contemporaneous evaporation and soil moisture observations at the spatial scales represented by regional and global models. The inherent soil moisture-runoff relationships used in the LSMs are also of uncertain accuracy. Evaluating these relationships is difficult but crucial given that they have a major impact on how the land component contributes to hydrological and meteorological variability within the climate system. The relationships, it turns out, can be examined efficiently and effectively with a simple water balance model framework. The simple water balance model, driven with multi-decadal observations covering the conterminous United States, shows how different prescribed relationships lead to different manifestations of hydrological variability, some of which can be compared directly to observations. Through the testing of a wide suite of relationships, the simple model provides estimates for the underlying relationships that operate in nature and that should be operating in LSMs. We examine the relationships currently used in a number of different LSMs in the context of the simple water balance model results and make recommendations for potential first-order improvements to these LSMs.
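A minimal bucket sketch of such a simple water balance model might look like this (the functional forms and numbers are invented for illustration, not those of the paper):

```python
# One-bucket water balance sketch: prescribed soil moisture-evaporation and
# soil moisture-runoff relationships drive the balance.

def step(w, precip, pet, w_max=100.0):
    """Advance soil water w [mm] one step.
    Evaporation = pet * (w / w_max)         (prescribed E(w) relationship)
    Runoff      = precip * (w / w_max)**2   (prescribed R(w) relationship)"""
    evap = pet * (w / w_max)
    runoff = precip * (w / w_max) ** 2
    w_new = w + precip - evap - runoff
    return min(max(w_new, 0.0), w_max)

w = 50.0
for precip, pet in [(10, 4), (0, 5), (20, 3)]:   # forcing in mm per step
    w = step(w, precip, pet)
```

Swapping in different E(w) and R(w) shapes and comparing the resulting runoff variability against observations is the kind of experiment the framework enables.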
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Salvucci, Guido D.; Rigden, Angela J.; Jung, Martin; Collatz, G. James; Schubert, Siegfried D.
2015-01-01
The spatial pattern across the continental United States of the interannual variance of warm season water-dependent evapotranspiration, a pattern of relevance to land-atmosphere feedback, cannot be measured directly. Alternative and indirect approaches to estimating the pattern, however, do exist, and given the uncertainty of each, we use several such approaches here. We first quantify the water-dependent evapotranspiration variance pattern inherent in two derived evapotranspiration datasets available from the literature. We then search for the pattern in proxy geophysical variables (air temperature, stream flow, and NDVI) known to have strong ties to evapotranspiration. The variances inherent in all of the different (and mostly independent) data sources show some differences but are generally strongly consistent: they all show a large variance signal down the center of the U.S., with lower variances toward the east and (for the most part) toward the west. The robustness of the pattern across the datasets suggests that it indeed represents the pattern operating in nature. Using Budyko's hydroclimatic framework, we show that the pattern can largely be explained by the relative strength of water and energy controls on evapotranspiration across the continent.
A variable partially polarizing beam splitter.
Flórez, Jefferson; Carlson, Nathan J; Nacke, Codey H; Giner, Lambert; Lundeen, Jeff S
2018-02-01
We present designs for variably polarizing beam splitters. These are beam splitters allowing the complete and independent control of the horizontal and vertical polarization splitting ratios. They have quantum optics and quantum information applications, such as quantum logic gates for quantum computing and non-local measurements for quantum state estimation. At the heart of each design is an interferometer. We experimentally demonstrate one particular implementation, a displaced Sagnac interferometer configuration, which provides inherent stability against air currents and vibrations. Furthermore, this design does not require any custom-made optics but only common components which can be easily found in an optics laboratory.
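The independent splitting ratios can be illustrated with an idealized, lossless intensity model (values invented; the interferometric implementation itself is not modeled):

```python
# Idealized sketch: a variable partially polarizing beam splitter applies
# independent intensity transmission ratios T_h and T_v to the horizontal
# and vertical polarization components (lossless; toy numbers).

def split(I_h, I_v, T_h, T_v):
    """Return ((transmitted H, V), (reflected H, V)) intensities."""
    transmitted = (I_h * T_h, I_v * T_v)
    reflected = (I_h * (1.0 - T_h), I_v * (1.0 - T_v))
    return transmitted, reflected

# e.g. transmit 80% of H but only 30% of V:
(t_h, t_v), (r_h, r_v) = split(I_h=1.0, I_v=1.0, T_h=0.8, T_v=0.3)
```

An ordinary beam splitter ties T_h and T_v together (or fixes one at 0 or 1, as in a polarizing beam splitter); the point of the design is that both can be dialed continuously and independently.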
New Galactic Candidate Luminous Blue Variables and Wolf-Rayet Stars
NASA Astrophysics Data System (ADS)
Stringfellow, Guy S.; Gvaramadze, Vasilii V.; Beletsky, Yuri; Kniazev, Alexei Y.
2012-04-01
We have undertaken a near-infrared spectral survey of stars associated with compact mid-IR shells recently revealed by the MIPSGAL (24 μm) and GLIMPSE (8 μm) Spitzer surveys, whose morphologies are typical of circumstellar shells produced by massive evolved stars. Through spectral similarity with known Luminous Blue Variable (LBV) and Wolf-Rayet (WR) stars, a large population of candidate LBVs (cLBVs) and a smaller number of new WR stars are being discovered. This significantly increases the Galactic cLBV population and confirms that nebulae are inherent to most (if not all) objects of this class.
ERIC Educational Resources Information Center
Heyne, David A.; Vreeke, Leonie J.; Maric, Marija; Boelens, Harrie; Van Widenfelt, Brigit M.
2017-01-01
The "School Refusal Assessment Scale" (SRAS) was developed to identify four factors that might maintain a youth's school attendance problem (SAP), and thus be targeted for treatment. There is still limited support for the four-factor model inherent to the SRAS and its revision (SRAS-R). Recent studies indicate problems with the wording…
2012-03-22
Physical characteristics of the shapes tested were captured when the objective parameter set was confined to a dictionary's defined parameter space. Topics include hypothesis testing and detection theory and 3-D SAR scattering models. A basis pursuit de-noising (BPDN) algorithm is chosen to perform extraction due to its inherent efficiency and error tolerance. Multiple shape dictionaries
EUV patterning using CAR or MOX photoresist at low dose exposure for sub 36nm pitch
NASA Astrophysics Data System (ADS)
Thibaut, Sophie; Raley, Angélique; Lazarrino, Frederic; Mao, Ming; De Simone, Danilo; Piumi, Daniele; Barla, Kathy; Ko, Akiteru; Metz, Andrew; Kumar, Kaushik; Biolsi, Peter
2018-04-01
The semiconductor industry has been pushing the limits of scalability by combining 193nm immersion lithography with multi-patterning techniques for several years. These integrations have been offered in a wide variety of options to lower their cost, but they retain their inherent variability and process complexity. EUV lithography offers a much desired path that allows for direct printing of lines and spaces at 36nm pitch and below, and effectively addresses issues like cycle time, intra-level overlay, and the mask count costs associated with multi-patterning. However, it also brings its own set of challenges. One of the major barriers to high volume manufacturing implementation has been reaching the 250W exposure power required for adequate throughput [1]. Enabling patterning using a lower-dose resist could help move us closer to the HVM throughput targets, assuming the required roughness and pattern-transfer performance can be met. As plasma etching is known to reduce line edge roughness on features printed with 193nm lithography [2], we investigate in this paper the level of roughness that can be achieved on EUV photoresist exposed at a lower dose, through etch process optimization into a typical back end of line film stack. We will study 16nm lines printed at 32 and 34nm pitch. MOX and CAR photoresist performance will be compared. We will review step-by-step etch chemistry development to reach adequate selectivity and roughness reduction to successfully pattern the target layer.
Man-in-the-loop study of filtering in airborne head tracking tasks
NASA Technical Reports Server (NTRS)
Lifshitz, S.; Merhav, S. J.
1992-01-01
A human-factors study is conducted of problems due to vibrations during the use of a helmet-mounted display (HMD) in tracking tasks whose major factors are target motion and head vibration. A method is proposed for improving aiming accuracy in such tracking tasks on the basis of (1) head-motion measurement and (2) the shifting of the reticle in the HMD in ways that inhibit much of the involuntary apparent motion of the reticle, relative to the target, and the nonvoluntary motion of the teleoperated device. The HMD inherently furnishes the visual feedback required by this scheme.
SFR test fixture for hemispherical and hyperhemispherical camera systems
NASA Astrophysics Data System (ADS)
Tamkin, John M.
2017-08-01
Optical testing of camera systems in volume production environments can often require expensive tooling and test fixturing. Wide field (fish-eye, hemispheric and hyperhemispheric) optical systems create unique challenges because of the inherent distortion, and difficulty in controlling reflections from front-lit high resolution test targets over the hemisphere. We present a unique design for a test fixture that uses low-cost manufacturing methods and equipment such as 3D printing and an Arduino processor to control back-lit multi-color (VIS/NIR) targets and sources. Special care with LED drive electronics is required to accommodate both global and rolling shutter sensors.
Subcellular Redox Targeting: Bridging in Vitro and in Vivo Chemical Biology.
Long, Marcus J C; Poganik, Jesse R; Ghosh, Souradyuti; Aye, Yimon
2017-03-17
Networks of redox sensor proteins within discrete microdomains regulate the flow of redox signaling. Yet, the inherent reactivity of redox signals complicates the study of specific redox events and pathways by traditional methods. Herein, we review designer chemistries capable of measuring flux and/or mimicking subcellular redox signaling at the cellular and organismal level. Such efforts have begun to decipher the logic underlying organelle-, site-, and target-specific redox signaling in vitro and in vivo. These data highlight chemical biology as a perfect gateway to interrogate how nature choreographs subcellular redox chemistry to drive precision redox biology.
Fusion genes in solid tumors: an emerging target for cancer diagnosis and treatment.
Parker, Brittany C; Zhang, Wei
2013-11-01
Studies over the past decades have uncovered fusion genes, a class of oncogenes that provide immense diagnostic and therapeutic advantages because of their tumor-specific expression. Originally associated with hematologic cancers, fusion genes have recently been discovered in a wide array of solid tumors, including sarcomas, carcinomas, and tumors of the central nervous system. Fusion genes are attractive as both therapeutic targets and diagnostic tools due to their inherent expression in tumor tissue alone. Therefore, the discovery and elucidation of fusion genes in various cancer types may provide more effective therapies in the future for cancer patients.
Progress with variable cycle engines
NASA Technical Reports Server (NTRS)
Westmoreland, J. S.
1980-01-01
The evaluation of components of an advanced propulsion system for a future supersonic cruise vehicle is discussed. These components, a high performance duct burner for thrust augmentation and a low jet noise coannular exhaust nozzle, are part of the variable stream control engine. An experimental test program involving both isolated component and complete engine tests was conducted for the high performance, low emissions duct burner with excellent results. Nozzle model tests were completed which substantiate the inherent jet noise benefit associated with the unique velocity profile made possible by a coannular exhaust nozzle system on a variable stream control engine. Additional nozzle model performance tests have established high thrust efficiency levels at takeoff and supersonic cruise for this nozzle system. Large scale testing of these two critical components is conducted using an F100 engine as the testbed for simulating the variable stream control engine.
A Potential Function Derivation of a Constitutive Equation for Inelastic Material Response
NASA Technical Reports Server (NTRS)
Stouffer, D. C.; Elfoutouh, N. A.
1983-01-01
Physical and thermodynamic concepts are used to develop a potential function for application to high temperature polycrystalline material response. Inherent in the formulation is a differential relationship between the potential function and constitutive equation in terms of the state variables. Integration of the differential relationship produces a state variable evolution equation that requires specification of the initial value of the state variable and its time derivative. It is shown that the initial loading rate, which is directly related to the initial hardening rate, can significantly influence subsequent material response. This effect is consistent with observed material behavior on the macroscopic and microscopic levels, and may explain the wide scatter in response often found in creep testing.
Barbu, Stéphanie; Martin, Nathael; Chevrot, Jean-Pierre
Barbu, Stéphanie; Martin, Nathael; Chevrot, Jean-Pierre
2014-01-01
The persistence of linguistic diversity beyond institutional pressures and social prejudices against non-standard dialects raises questions about the social forces influencing language maintenance across generations and how children contribute to this process. Children encounter multi-dialectal interactions in their early environment, and increasing evidence shows that the acquisition of sociolinguistic variation is not a side issue but an inherent part of the general acquisition process. Despite these recent advances, children's sociolinguistic uses remain under-studied in relation to peer social networks and the ability to use dialect for identity purposes. Our study focused on a grammatical sociolinguistic variable consisting of the alternation between a regional and a standard variant of the third person object pronoun in French. The regional variant is a remnant of the Francoprovençal language, and its usage by adults is strongly associated with local identity in the French Alps. Using questionnaires, we described the social networks of 117 10–11-year-old girls and boys living in the same restricted rural area. Thirteen native target children (7 girls and 6 boys) were selected from the sample, as well as 39 same-sex friends chosen according to their place of birth (native vs. non-native) and the duration of their friendship with the targets (number of years they had known each other). The target children were recorded during spontaneous dyadic conversations during free play at school with each category of friends. Target boys, but not girls, used the regional variant significantly more frequently with their long-term native friends than with their non-native friends. This adjustment mirrored their partners' uses. Moreover, with long-term native friends, boys used the regional variant twice as frequently as girls. Boys thus appeared as key actors in the maintenance and diffusion of regional cues in local social networks. PMID:25400617
Molecular Mechanisms of Aldehyde Toxicity: A Chemical Perspective
2015-01-01
Aldehydes are electrophilic compounds to which humans are pervasively exposed. Despite a significant health risk due to exposure, the mechanisms of aldehyde toxicity are poorly understood. This ambiguity is likely due to the structural diversity of aldehyde derivatives and corresponding differences in chemical reactions and biological targets. To gain mechanistic insight, we have used parameters based on the hard and soft acids and bases (HSAB) theory to profile the different aldehyde subclasses with respect to electronic character (softness, hardness), electrophilic reactivity (electrophilic index), and biological nucleophilic targets. Our analyses indicate that short-chain aldehydes and longer-chain saturated alkanals are hard electrophiles that cause toxicity by forming adducts with hard biological nucleophiles, e.g., primary nitrogen groups on lysine residues. In contrast, α,β-unsaturated carbonyl derivatives, alkenals, and the α-oxoaldehydes are soft electrophiles that preferentially react with soft nucleophilic thiolate groups on cysteine residues. The aldehydes can therefore be grouped into subclasses according to common electronic characteristics (softness/hardness) and molecular mechanisms of toxicity. As we will discuss, the toxic potencies of these subgroups are generally related to the corresponding electrophilicities. For some aldehydes, however, predictions of toxicity based on electrophilicity are less accurate due to inherent physicochemical variables that limit target accessibility, e.g., steric hindrance and solubility. The unsaturated aldehydes are also members of the conjugated type-2 alkene chemical class that includes α,β-unsaturated amide, ketone, and ester derivatives. Type-2 alkenes are electrophiles of varying softness and electrophilicity that share a common mechanism of toxicity. Therefore, exposure to an environmental mixture of unsaturated carbonyl derivatives could cause “type-2 alkene toxicity” through additive interactions.
Finally, we propose that environmentally derived aldehydes can accelerate diseases by interacting with endogenous aldehydes generated during oxidative stress. This review provides a basis for understanding aldehyde mechanisms and environmental toxicity through the context of electronic structure, electrophilicity, and nucleophile target selectivity. PMID:24911545
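The HSAB profiling described above rests on standard conceptual-DFT definitions of hardness, softness, and the electrophilicity index. As a minimal sketch (the frontier orbital energies below are illustrative assumptions, not values from the review), the global descriptors can be computed from HOMO/LUMO energies via Koopmans-type approximations:

```python
def hsab_descriptors(e_homo, e_lumo):
    """Global HSAB reactivity descriptors from frontier orbital energies (eV).

    Koopmans-type approximations: ionization energy I ~ -E_HOMO,
    electron affinity A ~ -E_LUMO.
    """
    ionization = -e_homo
    affinity = -e_lumo
    mu = -(ionization + affinity) / 2.0   # chemical potential
    eta = (ionization - affinity) / 2.0   # hardness
    sigma = 1.0 / eta                     # softness
    omega = mu ** 2 / (2.0 * eta)         # electrophilicity index (Parr)
    return {"mu": mu, "eta": eta, "softness": sigma, "omega": omega}

# Illustrative (not measured) orbital energies for two hypothetical aldehydes:
hard = hsab_descriptors(e_homo=-10.9, e_lumo=-0.5)  # e.g., a short-chain alkanal
soft = hsab_descriptors(e_homo=-9.7, e_lumo=-1.8)   # e.g., a conjugated alkenal

# The softer, conjugated electrophile scores higher on softness and omega.
assert soft["softness"] > hard["softness"]
assert soft["omega"] > hard["omega"]
```

Under these assumed energies, the conjugated electrophile comes out softer and more electrophilic, consistent with its preference for soft thiolate nucleophiles.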
Nanocarriers for cancer-targeted drug delivery.
Kumari, Preeti; Ghosh, Balaram; Biswas, Swati
2016-01-01
Nanoparticles as drug delivery systems have received much attention in recent years, especially for cancer treatment. In addition to improving the pharmacokinetics of loaded, poorly soluble hydrophobic drugs by solubilizing them in their hydrophobic compartments, nanoparticles enable cancer-specific drug delivery through inherent passive targeting phenomena and adopted active targeting strategies. For this reason, nanoparticle-drug formulations can enhance the safety, pharmacokinetic profiles, and bioavailability of the administered drugs, leading to improved therapeutic efficacy compared with conventional therapy. This review provides an overview of various nanoparticle formulations in both research and clinical applications, focusing on chemotherapeutic drug delivery systems for the treatment of cancer. The use of liposomes, polymeric nanoparticles, dendrimers, magnetic and other inorganic nanoparticles for targeted drug delivery in cancer is detailed.
Testing of a Composite Wavelet Filter to Enhance Automated Target Recognition in SONAR
NASA Technical Reports Server (NTRS)
Chiang, Jeffrey N.
2011-01-01
Automated Target Recognition (ATR) systems aim to automate target detection, recognition, and tracking. The current project applies a JPL ATR system to low resolution SONAR and camera videos taken from Unmanned Underwater Vehicles (UUVs). These SONAR images are inherently noisy and difficult to interpret, and pictures taken underwater are unreliable due to murkiness and inconsistent lighting. The ATR system breaks target recognition into three stages: 1) Videos of both SONAR and camera footage are broken into frames and preprocessed to enhance images and detect Regions of Interest (ROIs). 2) Features are extracted from these ROIs in preparation for classification. 3) ROIs are classified as true or false positives using a standard Neural Network based on the extracted features. Several preprocessing, feature extraction, and training methods are tested and discussed in this report.
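The three-stage pipeline above can be sketched as follows. This is a toy stand-in, not the JPL system's actual algorithms: thresholding for ROI detection, summary statistics as features, and a single sigmoid neuron standing in for the neural network classifier.

```python
import numpy as np

def detect_rois(frame, thresh=0.5):
    """Stage 1: crude ROI detection -- bound the pixels above a threshold."""
    mask = frame > thresh
    if not mask.any():
        return []
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return [(rows.min(), rows.max() + 1, cols.min(), cols.max() + 1)]

def extract_features(frame, roi):
    """Stage 2: summary statistics of the ROI patch."""
    r0, r1, c0, c1 = roi
    patch = frame[r0:r1, c0:c1]
    return np.array([patch.mean(), patch.std(), patch.size])

def classify(features, weights, bias):
    """Stage 3: single-neuron stand-in for the neural network classifier."""
    score = 1.0 / (1.0 + np.exp(-(features @ weights + bias)))
    return score > 0.5

frame = np.zeros((32, 32))
frame[10:14, 8:16] = 0.9  # synthetic bright target in an otherwise dark frame
rois = detect_rois(frame)
feats = extract_features(frame, rois[0])
print(classify(feats, weights=np.array([2.0, 1.0, 0.01]), bias=-1.0))
```

The weights here are hand-picked for the synthetic frame; in the real system they would come from training the network on labeled ROIs.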
The use of groundwater age as a calibration target
Konikow, Leonard F.; Hornberger, G.Z.; Putnam, L.D.; Shapiro, A.M.; Zinn, B.A.
2008-01-01
Groundwater age (or residence time), as estimated on the basis of concentrations of one or more environmental tracers, can provide a useful and independent calibration target for groundwater models. However, concentrations of environmental tracers are affected by the complexities and mixing inherent in groundwater flow through heterogeneous media, especially in the presence of pumping wells. An analysis of flow and age distribution in the Madison aquifer in South Dakota, USA, illustrates the additional benefits and difficulties of using age as a calibration target. Alternative numerical approaches to estimating travel time and age with backward particle tracking are assessed, and the resulting estimates are used to refine estimates of effective porosity and to help assess the adequacy and credibility of the flow model.
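One common way tracer-based ages enter calibration is through an advective (piston-flow) travel-time relation, whose inversion refines effective porosity. The sketch below uses hypothetical numbers and deliberately ignores the mixing and dispersion effects the abstract warns about:

```python
def advective_age(porosity, path_length_m, darcy_flux_m_per_yr):
    """Piston-flow travel time along a flow path: t = n_e * L / q."""
    return porosity * path_length_m / darcy_flux_m_per_yr

def porosity_from_age(obs_age_yr, path_length_m, darcy_flux_m_per_yr):
    """Invert the same relation to refine effective porosity from a tracer age."""
    return obs_age_yr * darcy_flux_m_per_yr / path_length_m

# Hypothetical Darcy flux (m/yr) and flow-path length (m):
q, L = 0.02, 5000.0
print(advective_age(0.10, L, q))         # simulated age for an assumed porosity
print(porosity_from_age(25000.0, L, q))  # porosity implied by an observed age
```

In a heterogeneous, pumped aquifer like the Madison, the observed age is a mixture over many such paths, which is why backward particle tracking rather than a single piston-flow path is needed in practice.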
Impulse-variability theory: implications for ballistic, multijoint motor skill performance.
Urbin, M A; Stodden, David F; Fischman, Mark G; Weimar, Wendi H
2011-01-01
Impulse-variability theory (R. A. Schmidt, H. N. Zelaznik, B. Hawkins, J. S. Frank, & J. T. Quinn, 1979) accounts for the curvilinear relationship between the magnitude and resulting variability of the muscular forces that influence the success of goal-directed limb movements. The historical roots of impulse-variability theory are reviewed in the 1st part of this article, including the relationship between movement speed and spatial error. The authors then address the relevance of impulse-variability theory for the control of ballistic, multijoint skills, such as throwing, striking, and kicking. These types of skills provide a stark contrast to the relatively simple, minimal degrees of freedom movements that characterized early research. However, the inherent demand for ballistic force generation is a strong parallel between these simple laboratory tasks and multijoint motor skills. Therefore, the authors conclude by recommending experimental procedures for evaluating the adequacy of impulse variability as a theoretical model within the context of ballistic, multijoint motor skill performance. Copyright © Taylor & Francis Group, LLC
Humidity: A review and primer on atmospheric moisture and human health.
Davis, Robert E; McGregor, Glenn R; Enfield, Kyle B
2016-01-01
Research examining associations between weather and human health frequently includes the effects of atmospheric humidity. A large number of humidity variables have been developed for numerous purposes, but little guidance is available to health researchers regarding appropriate variable selection. We examine a suite of commonly used humidity variables and summarize both the medical and biometeorological literature on associations between humidity and human health. As an example of the importance of humidity variable selection, we correlate numerous hourly humidity variables to daily respiratory syncytial virus isolates in Singapore from 1992 to 1994. Most water-vapor mass based variables (specific humidity, absolute humidity, mixing ratio, dewpoint temperature, vapor pressure) exhibit comparable correlations. Variables that include a thermal component (relative humidity, dewpoint depression, saturation vapor pressure) exhibit strong diurnality and seasonality. Humidity variable selection must be dictated by the underlying research question. Despite being the most commonly used humidity variable, relative humidity should be used sparingly and avoided in cases when the proximity to saturation is not medically relevant. Care must be taken in averaging certain humidity variables daily or seasonally to avoid statistical biasing associated with variables that are inherently diurnal through their relationship to temperature. Copyright © 2015 Elsevier Inc. All rights reserved.
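As a quick reference for the variables compared above, the sketch below derives several humidity measures from temperature and dewpoint using the Magnus formula. The coefficients are one common parameterization, and the factor 622 converts the dimensionless mass ratios to g/kg:

```python
import math

def sat_vapor_pressure_hpa(t_c):
    """Saturation vapor pressure (hPa) via the Magnus formula."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

def humidity_suite(t_c, dewpoint_c, pressure_hpa=1013.25):
    """Derive several common humidity variables from temperature and dewpoint."""
    e = sat_vapor_pressure_hpa(dewpoint_c)      # actual vapor pressure (hPa)
    es = sat_vapor_pressure_hpa(t_c)            # saturation vapor pressure (hPa)
    rh = 100.0 * e / es                         # relative humidity (%)
    w = 622.0 * e / (pressure_hpa - e)          # mixing ratio (g/kg)
    q = 622.0 * e / (pressure_hpa - 0.378 * e)  # specific humidity (g/kg)
    return {"vapor_pressure": e, "rh": rh, "mixing_ratio": w, "specific_humidity": q}

print(humidity_suite(30.0, 25.0))  # warm, humid tropical case
```

Note how relative humidity depends on both moisture content and temperature, whereas the mass-based variables (mixing ratio, specific humidity) track the vapor alone; this is the diurnality/seasonality issue the abstract raises.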
Landslide Hazard Probability Derived from Inherent and Dynamic Determinants
NASA Astrophysics Data System (ADS)
Strauch, Ronda; Istanbulluoglu, Erkan
2016-04-01
Landslide hazard research has typically been conducted independently from hydroclimate research. We unify these two lines of research to provide regional scale landslide hazard information for risk assessments and resource management decision-making. Our approach combines an empirical inherent landslide probability with a numerical dynamic probability, generated by combining routed recharge from the Variable Infiltration Capacity (VIC) macro-scale land surface hydrologic model with a finer resolution probabilistic slope stability model run in a Monte Carlo simulation. Landslide hazard mapping is advanced by adjusting the dynamic model of stability with an empirically-based scalar representing the inherent stability of the landscape, creating a probabilistic quantitative measure of geohazard prediction at a 30-m resolution. Climatology, soil, and topography control the dynamic nature of hillslope stability and the empirical information further improves the discriminating ability of the integrated model. This work will aid resource management decision-making in current and future landscape and climatic conditions. The approach is applied as a case study in North Cascade National Park Complex, a rugged terrain with nearly 2,700 m (9,000 ft) of vertical relief, covering 2757 sq km (1064 sq mi) in northern Washington State, U.S.A.
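A minimal sketch of the coupling described above: a Monte Carlo estimate of the dynamic failure probability (factor of safety below 1 under sampled wetness and soil strength), scaled by an empirical inherent-stability scalar. The factor-of-safety model and all parameters are hypothetical, not those of the VIC-based study:

```python
import random

def dynamic_failure_probability(fs_model, n=10000, seed=42):
    """Monte Carlo probability that the factor of safety falls below 1."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n) if fs_model(rng) < 1.0)
    return fails / n

def combined_hazard(p_dynamic, inherent_scalar):
    """Scale the dynamic probability by an empirical inherent-stability scalar in [0, 1]."""
    return p_dynamic * inherent_scalar

def fs_sample(rng):
    """Hypothetical normalized factor of safety with noisy cohesion and wetness."""
    cohesion = rng.gauss(1.2, 0.3)   # normalized soil strength (assumed)
    wetness = rng.uniform(0.0, 1.0)  # relative saturation from routed recharge
    return cohesion - 0.5 * wetness

p_dyn = dynamic_failure_probability(fs_sample)
print(combined_hazard(p_dyn, inherent_scalar=0.7))
```

In the study this scalar comes from a frequency-ratio analysis of mapped landslides, so cells that have rarely failed historically discount an otherwise high modeled probability.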
Landslide Hazard from Coupled Inherent and Dynamic Probabilities
NASA Astrophysics Data System (ADS)
Strauch, R. L.; Istanbulluoglu, E.; Nudurupati, S. S.
2015-12-01
Landslide hazard research has typically been conducted independently from hydroclimate research. We sought to unify these two lines of research to provide regional scale landslide hazard information for risk assessments and resource management decision-making. Our approach couples an empirical inherent landslide probability, based on a frequency ratio analysis, with a numerical dynamic probability, generated by combining subsurface water recharge and surface runoff from the Variable Infiltration Capacity (VIC) macro-scale land surface hydrologic model with a finer resolution probabilistic slope stability model. Landslide hazard mapping is advanced by combining static and dynamic models of stability into a probabilistic measure of geohazard prediction in both space and time. This work will aid resource management decision-making in current and future landscape and climatic conditions. The approach is applied as a case study in North Cascade National Park Complex in northern Washington State.
Localization and loss of coherence in molecular double slit experiments
NASA Astrophysics Data System (ADS)
Langer, Burkhard; Becker, Uwe
2009-11-01
In their famous 1935 paper, Einstein, Podolsky and Rosen questioned the completeness of quantum mechanics concerning a local realistic description of our reality. They argued on the basis of superpositions of position and momentum states against the inherent non-locality and loss of information on prior conditions in quantum mechanics. This pioneering proposal was, however, too vague to be implemented in any experimental proof. Consequently, angular-momentum-related variables such as the polarization of light became the workhorse of all experiments testing the EPR predictions. However, the spin and its related polarization properties are abstract quantities compared to position and momentum. Here we present the first evidence that non-locality and loss of prior quantum state information also occur for position in ordinary space. This shows that the tunnelling effect and entanglement are inherently correlated.
Nonlinear optimal control for the synchronization of chaotic and hyperchaotic finance systems
NASA Astrophysics Data System (ADS)
Rigatos, G.; Siano, P.; Loia, V.; Ademi, S.; Ghosh, T.
2017-11-01
It is possible to make specific finance systems synchronize with other finance systems exhibiting chaotic and hyperchaotic dynamics by applying nonlinear optimal (H-infinity) control. This signifies that chaotic behavior can be induced in finance systems by exerting a suitable control input. A lead finance system is considered which exhibits inherently chaotic dynamics. Moreover, a follower finance system is introduced whose model parameters inherently prohibit the appearance of chaotic dynamics. Through the application of a suitable nonlinear optimal (H-infinity) control input, it is proven that the follower finance system can replicate the chaotic dynamics of the lead finance system. By applying Lyapunov analysis, it is proven that the follower finance system asymptotically synchronizes with the lead system and that the tracking error between the state variables of the two systems vanishes.
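A rough numerical illustration of lead-follower synchronization, using the well-known three-variable chaotic finance model and plain proportional full-state feedback as a stand-in for the paper's H-infinity controller (parameters, gains, and initial states are assumptions):

```python
import math

def finance_f(x, y, z, a=0.9, b=0.2, c=1.2):
    """Chaotic finance model: interest rate x, investment demand y, price index z."""
    return (z + (y - a) * x, 1.0 - b * y - x * x, -x - c * z)

def simulate_sync(k=10.0, dt=0.001, steps=5000):
    """Euler-integrate lead and follower systems with full-state feedback coupling."""
    m = [0.1, 4.0, 0.5]   # lead (master) system state
    s = [2.0, 1.0, -1.0]  # follower (slave) system state
    for _ in range(steps):
        fm = finance_f(*m)
        fs = finance_f(*s)
        # proportional feedback as a stand-in for the H-infinity control law
        u = [-k * (s[i] - m[i]) for i in range(3)]
        m = [m[i] + dt * fm[i] for i in range(3)]
        s = [s[i] + dt * (fs[i] + u[i]) for i in range(3)]
    return math.sqrt(sum((s[i] - m[i]) ** 2 for i in range(3)))

print(simulate_sync())  # tracking error after coupling
```

With a sufficiently large gain the tracking error shrinks toward zero, mirroring the Lyapunov-based synchronization result, though the actual H-infinity design also accounts for disturbances and optimality.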
Regad, Leslie; Chéron, Jean-Baptiste; Triki, Dhoha; Senac, Caroline; Flatters, Delphine; Camproux, Anne-Claude
2017-01-01
Protein flexibility is often implied in binding with different partners and is essential for protein function. The growing number of macromolecular structures in Protein Data Bank entries, and their redundancy, has become a major source of structural knowledge of the protein universe. The analysis of structural variability through available redundant structures of a target, called multiple target conformations (MTC), obtained using experimental or modeling methods under different biological conditions or from different sources, is one way to explore protein flexibility. This analysis is essential to improve the understanding of various mechanisms associated with protein target function and flexibility. In this study, we explored the structural variability of three biological targets by analyzing different MTC sets associated with these targets. To facilitate the study of these MTC sets, we developed an efficient tool, SA-conf, dedicated to capturing and linking amino acid and local structure variability and to analyzing the target structural variability space. The advantage of SA-conf is that it can be applied to diverse sets composed of MTCs available in the PDB obtained using NMR and crystallography or homology models. The tool can also be applied to analyze MTC sets obtained by dynamics approaches. Our results showed that SA-conf effectively quantifies the structural variability of an MTC set and localizes the structurally variable positions and regions of the target. By selecting adapted MTC subsets and comparing their variability detected by SA-conf, we highlighted different sources of target flexibility, such as flexibility induced by a binding partner, flexibility induced by mutation, and intrinsic flexibility. Our results support the value of mining the available structures associated with a target to gain insight into target flexibility and interaction mechanisms. The SA-conf executable script, with a set of pre-compiled binaries, is available at http://www.mti.univ-paris-diderot.fr/recherche/plateformes/logiciels. PMID:28817602
Montoya, Isaac D; Bell, David C
2006-11-01
This article examines the effect of target, perceiver, and relationship characteristics on the perceiver's assessment that the target may be HIV seropositive (HIV+). A sample of 267 persons was recruited from low income, high drug use neighborhoods. Respondents (perceivers) were asked to name people (targets) with whom they had a social, drug sharing, or sexual relationship. Perceivers described 1,640 such relationships. Perceivers were asked about the targets' age, gender, and race/ethnicity, whether the targets were good-looking, their level of trust with the target, and how long they had known them. Perceivers were then asked to evaluate the chances that the target mentioned was HIV+. Two regression models were estimated on the 1,640 relationships mentioned. Model 1 included variables reflecting only target characteristics as independent variables. Model 2 included variables reflecting target characteristics as well as variables reflecting perceivers and perceiver-target relationship characteristics. The results showed that targets that were female, younger, and good-looking were perceived as being less likely to be HIV+. However, when accounting for perceiver and relationship effects, some of the target characteristic effects disappeared. Copyright 2006 APA, all rights reserved.
Eric S. Fabio; Mary A. Arthur; Charles C. Rhoades
2009-01-01
Understanding how natural factors interact across the landscape to influence nitrogen (N) cycling is an important focus in temperate forests because of the great inherent variability in these forests. Site-specific attributes, including local topography, soils, and vegetation, can exert important controls on N processes and retention. Seasonal monitoring of N cycling...
ERIC Educational Resources Information Center
Bollen, Kenneth A.
2007-01-01
R. D. Howell, E. Breivik, and J. B. Wilcox (2007) have argued that causal (formative) indicators are inherently subject to interpretational confounding. That is, they have argued that using causal (formative) indicators leads the empirical meaning of a latent variable to be other than that assigned to it by a researcher. Their critique of causal…
B.M. Collins; S.L. Stephens
2010-01-01
The complexity inherent in variable, or mixed-severity fire regimes makes quantitative characterization of important fire regime attributes (e.g., proportion of landscape burned at different severities, size and distribution of stand-replacing patches) difficult. As a result, there is ambiguity associated with the term ‘mixed-severity’. We address...
ERIC Educational Resources Information Center
Bundotich, Sarah; Kimaiyo, Lilian
2015-01-01
Academic performance is a function of many interrelated variables including inherent study efforts, modes of teaching, school environment and students ability. Many gifted students may face myriads of academic problems, which may however, be masked by their academic prowess, yet research into this realm is limited in Kenya. The objectives of the…
A phenomenological calculus of Wiener description space.
Richardson, I W; Louie, A H
2007-10-01
The phenomenological calculus is a categorical example of Robert Rosen's modeling relation. This paper is an alligation of the phenomenological calculus and generalized harmonic analysis, another categorical example. Our epistemological exploration continues into the realm of Wiener description space, in which constitutive parameters are extended from vectors to vector-valued functions of a real variable. Inherent in the phenomenology are fundamental representations of time and nearness to equilibrium.
Soil variability in engineering applications
NASA Astrophysics Data System (ADS)
Vessia, Giovanna
2014-05-01
Natural geomaterials, such as soils and rocks, show spatial variability and heterogeneity of physical and mechanical properties, which can be measured by in-field and laboratory testing. Heterogeneity refers to different values of litho-technical parameters in similar lithological units placed close to each other. Variability, by contrast, is inherent to the formation and evolution processes experienced by each geological unit (a homogeneous geomaterial on average) and is captured as a spatial structure of fluctuation of physical property values about their mean trend, e.g. the unit weight, the hydraulic permeability, the friction angle, and the cohesion, among others. These spatial variations must be handled by engineering models to achieve reliable design of structures and infrastructures. Matheron (1962) introduced Geostatistics as the most comprehensive tool for managing the spatial correlation of parameter measurements in a wide range of earth science applications. In engineering geology, Vanmarcke (1977) made the first pioneering attempts to describe and manage the inherent variability of geomaterials, although Terzaghi (1943) had already noted that spatial fluctuations of the physical and mechanical parameters used in geotechnical design cannot be neglected. A few years later, Mandelbrot (1983) and Turcotte (1986) interpreted the internal arrangement of geomaterials according to fractal theory. In the same years, Vanmarcke (1983) proposed Random Field Theory, providing mathematical tools to deal with the inherent variability of each geological unit or stratigraphic succession that can be treated as one material. In this approach, measurement fluctuations of physical parameters are interpreted through a spatial variability structure consisting of the correlation function and the scale of fluctuation.
Fenton and Griffiths (1992) combined random field simulation with the finite element method to produce the Random Finite Element Method (RFEM). This method has been used to investigate the random behavior of soils in a variety of classical geotechnical problems. Subsequent studies collected worldwide variability values of many technical parameters of soils (Phoon and Kulhawy 1999a) and their spatial correlation functions (Phoon and Kulhawy 1999b). In Italy, Cherubini et al. (2007) calculated the spatial variability structure of sandy and clayey soils from standard cone penetration test readings. The large extent of the measured spatial variability of soils and rocks heavily affects the reliability of geotechnical design, as do other uncertainties introduced by testing devices and engineering models. Several methods have been developed to handle these sources of uncertainty in engineering design models (e.g. the First Order Reliability Method, Second Order Reliability Method, Response Surface Method, High Dimensional Model Representation). Current efforts in this field focus on (1) measuring the spatial variability of different rocks and soils and (2) developing numerical models that treat spatial variability as an additional physical variable.
References
Cherubini C., Vessia G. and Pula W. 2007. Statistical soil characterization of Italian sites for reliability analyses. Proc. 2nd Int. Workshop on Characterization and Engineering Properties of Natural Soils, 3-4: 2681-2706.
Griffiths D.V. and Fenton G.A. 1993. Seepage beneath water retaining structures founded on spatially random soil. Géotechnique, 43(6): 577-587.
Mandelbrot B.B. 1983. The Fractal Geometry of Nature. San Francisco: W.H. Freeman.
Matheron G. 1962. Traité de Géostatistique appliquée. Tome 1, Editions Technip, Paris, 334 p.
Phoon K.K. and Kulhawy F.H. 1999a. Characterization of geotechnical variability. Can Geotech J, 36(4): 612-624.
Phoon K.K. and Kulhawy F.H. 1999b. Evaluation of geotechnical property variability. Can Geotech J, 36(4): 625-639.
Terzaghi K. 1943. Theoretical Soil Mechanics. New York: John Wiley and Sons.
Turcotte D.L. 1986. Fractals and fragmentation. J Geophys Res, 91: 1921-1926.
Vanmarcke E.H. 1977. Probabilistic modeling of soil profiles. J Geotech Eng Div, ASCE, 103: 1227-1246.
Vanmarcke E.H. 1983. Random fields: analysis and synthesis. MIT Press, Cambridge.
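Vanmarcke's description of inherent variability (a correlation function plus a scale of fluctuation) can be illustrated by sampling a one-dimensional Gaussian random field. The exponential autocorrelation form and the soil parameters below are assumptions chosen for demonstration:

```python
import numpy as np

def correlated_field(n, dx, scale_of_fluctuation, sd, seed=0):
    """Sample a 1-D stationary Gaussian random field with exponential
    autocorrelation rho(tau) = exp(-2|tau| / theta), where theta is the
    scale of fluctuation (Vanmarcke's convention), via Cholesky
    factorization of the covariance matrix."""
    rng = np.random.default_rng(seed)
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) * dx
    cov = sd ** 2 * np.exp(-2.0 * lags / scale_of_fluctuation)
    chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
    return chol @ rng.standard_normal(n)

# e.g., friction-angle fluctuations (degrees) about a mean trend along a
# 50 m profile; theta and sd are hypothetical, not measured values:
field = correlated_field(n=200, dx=0.25, scale_of_fluctuation=2.0, sd=3.0)
print(field.std())
```

Fields like this are exactly what RFEM feeds into finite element analyses, so that each Monte Carlo realization sees a different but statistically consistent soil profile.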
Multipotency of Adult Hippocampal NSCs In Vivo Is Restricted by Drosha/NFIB.
Rolando, Chiara; Erni, Andrea; Grison, Alice; Beattie, Robert; Engler, Anna; Gokhale, Paul J; Milo, Marta; Wegleiter, Thomas; Jessberger, Sebastian; Taylor, Verdon
2016-11-03
Adult neural stem cells (NSCs) are defined by their inherent capacity to self-renew and give rise to neurons, astrocytes, and oligodendrocytes. In vivo, however, hippocampal NSCs do not generate oligodendrocytes for reasons that have remained enigmatic. Here, we report that deletion of Drosha in adult dentate gyrus NSCs activates oligodendrogenesis and reduces neurogenesis at the expense of gliogenesis. We further find that Drosha directly targets NFIB to repress its expression independently of Dicer and microRNAs. Knockdown of NFIB in Drosha-deficient hippocampal NSCs restores neurogenesis, suggesting that the Drosha/NFIB mechanism robustly prevents oligodendrocyte fate acquisition in vivo. Taken together, our findings establish that adult hippocampal NSCs inherently possess multilineage potential but that Drosha functions as a molecular barrier preventing oligodendrogenesis. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Everson, Jeffrey H.; Kopala, Edward W.; Lazofson, Laurence E.; Choe, Howard C.; Pomerleau, Dean A.
1995-01-01
Optical sensors are used for several ITS applications, including lateral control of vehicles, traffic sign recognition, car following, autonomous vehicle navigation, and obstacle detection. This paper treats the performance assessment of a sensor/image processor used as part of an on-board countermeasure system to prevent single-vehicle roadway departure crashes. Sufficient image contrast between objects of interest and backgrounds is an essential factor influencing overall system performance. Contrast is determined by material properties affecting reflected/radiated intensities, as well as weather and visibility conditions. This paper discusses the modeling of these parameters and characterizes the contrast performance effects due to reduced visibility. The analysis process first involves generation of inherent road/off-road contrasts, followed by weather effects as a contrast modification. The sensor is modeled as a charge-coupled device (CCD) with variable parameters. The results of the sensor/weather modeling are used to predict the performance of an in-vehicle warning system under various levels of adverse weather. Software employed in this effort was previously developed for the U.S. Air Force Wright Laboratory to determine target/background detection and recognition ranges for different sensor systems operating under various mission scenarios.
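The visibility-dependent contrast modification can be sketched with a Koschmieder-type attenuation law, which ties the atmospheric extinction coefficient to meteorological visibility. The inherent contrast and threshold values below are hypothetical:

```python
import math

def apparent_contrast(inherent_contrast, range_m, visibility_m):
    """Koschmieder-type attenuation: C(R) = C0 * exp(-beta * R), with the
    extinction coefficient tied to meteorological visibility by beta = 3.912 / V."""
    beta = 3.912 / visibility_m
    return inherent_contrast * math.exp(-beta * range_m)

def detection_range(inherent_contrast, threshold, visibility_m):
    """Range at which apparent contrast decays to the sensor's contrast threshold."""
    beta = 3.912 / visibility_m
    return math.log(inherent_contrast / threshold) / beta

# Hypothetical road/off-road contrast viewed at 100 m through fog vs. clear air:
print(apparent_contrast(0.6, range_m=100, visibility_m=300))    # fog
print(apparent_contrast(0.6, range_m=100, visibility_m=10000))  # clear
```

Inverting the same law gives the maximum range at which the CCD-based processor can still resolve the road edge, which is how weather enters the warning-system performance prediction.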
A multiscale modeling approach to inflammation: A case study in human endotoxemia
NASA Astrophysics Data System (ADS)
Scheff, Jeremy D.; Mavroudis, Panteleimon D.; Foteinou, Panagiota T.; An, Gary; Calvano, Steve E.; Doyle, John; Dick, Thomas E.; Lowry, Stephen F.; Vodovotz, Yoram; Androulakis, Ioannis P.
2013-07-01
Inflammation is a critical component in the body's response to injury. A dysregulated inflammatory response, in which either the injury is not repaired or the inflammatory response does not appropriately self-regulate and end, is associated with a wide range of inflammatory diseases such as sepsis. Clinical management of sepsis is a significant problem, but progress in this area has been slow. This may be due to the inherent nonlinearities and complexities in the interacting multiscale pathways that are activated in response to systemic inflammation, motivating the application of systems biology techniques to better understand the inflammatory response. Here, we review our past work on a multiscale modeling approach applied to human endotoxemia, a model of systemic inflammation, consisting of a system of compartmentalized differential equations operating at different time scales and through a discrete model linking inflammatory mediators with changing patterns in the beating of the heart, which has been correlated with outcome and severity of inflammatory disease despite unclear mechanistic underpinnings. Working towards unraveling the relationship between inflammation and heart rate variability (HRV) may enable greater understanding of clinical observations as well as novel therapeutic targets.
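The two-scale structure described above (continuous mediator kinetics linked to a discrete beating pattern) can be caricatured as below. The coupling that suppresses inter-beat variability as inflammation rises is a hypothetical stand-in for the mechanistically unclear HRV link the abstract mentions:

```python
import math
import random
import statistics

def mediator_level(t, k_in=1.0, k_out=0.5, t_stim=2.0):
    """Closed-form response of dP/dt = k_in*u(t) - k_out*P to a square
    endotoxin input on [0, t_stim], starting from P(0) = 0."""
    if t <= t_stim:
        return (k_in / k_out) * (1.0 - math.exp(-k_out * t))
    p_end = (k_in / k_out) * (1.0 - math.exp(-k_out * t_stim))
    return p_end * math.exp(-k_out * (t - t_stim))

def beat_intervals(p, n=500, seed=7):
    """Discrete beat scale: inter-beat intervals (s) whose spread shrinks
    as the mediator level p rises (hypothetical coupling constants)."""
    rng = random.Random(seed)
    sd = 0.05 / (1.0 + 2.0 * p)
    return [0.8 + rng.gauss(0.0, sd) for _ in range(n)]

# HRV (SD of inter-beat intervals) at baseline vs. the end of the endotoxin input:
hrv_baseline = statistics.stdev(beat_intervals(mediator_level(0.0)))
hrv_peak = statistics.stdev(beat_intervals(mediator_level(2.0)))
print(hrv_baseline, hrv_peak)
```

The qualitative output (HRV depressed while mediators are elevated) matches the clinical observation the review builds on, though the real model uses compartmentalized differential equations rather than a single closed-form mediator.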
Robinson, Beatrice Bean E; Uhl, Gary; Miner, Michael; Bockting, Walter O; Scheltema, Karen E; Rosser, B R Simon; Westover, Bonita
2002-06-01
This randomized controlled trial evaluated an innovative culturally specific sexual health intervention-targeting, but not limited to, low-income African American women-in which HIV and sexually transmitted disease prevention strategies were combined with comprehensive sexuality education. The intervention was delivered and evaluated in community-based settings to 218 participants randomly assigned to treatment or a no-treatment control group. Participants were interviewed at pretest and 3 and 9 months after the intervention to assess changes in both sexuality and HIV risk variables. The intervention was effective in improving sexual anatomy knowledge at both 3- and 9-month follow-up. For a subset of women engaging in unprotected sex at pretest, the intervention group reported an increase in positive attitudes toward the female condom at 9-month follow-up. Reasons for the weak treatment effect are discussed in the context of challenges inherent in conducting community-based research with high-risk populations and sensitive topics. Recommendations are provided for improving sample attrition, statistical power, and response bias and for altering the intervention so as to strengthen its impact.
Mabey, S.; Watts, B.; Paxton, B.; Smith, F.; Truitt, B.; Dawson, D.
2005-01-01
Many conservation organizations and initiatives including Partners-in-Flight and the U.S. Fish and Wildlife Service's regional Joint Ventures have identified migratory songbird stopover habitat as a priority conservation target. However, the spatial and temporal variability inherent in migration presents a number of challenges to both identifying and characterizing stopover habitat. Scarce conservation resources further demand that stopover sites be classified on a scale of priority so that conservation action can proceed according to ecological value. We are applying weather surveillance radar data collected from the National Weather Service WSR-88D at Wakefield, VA, and NASA's Doppler radar, NPOL, in Oyster, VA, to identify passerine stopover sites in the lower Chesapeake Bay region and develop spatial models to characterize these sites based on relative migrant abundance and consistency of use between and within seasons. We are using the stopover patterns to generate hypotheses regarding the habitat, geographic, and stochastic factors contributing to the distribution of migrants at a regional scale. We are testing these hypotheses with detailed habitat data and ground surveys of migrating birds with the goal of creating a generalized prioritization system for stopover site conservation.
Jang, SoRi; Marjanovic, Jasmina; Gornicki, Piotr
2013-03-01
Eleven spontaneous mutations of acetyl-CoA carboxylase have been identified in many herbicide-resistant populations of 42 species of grassy weeds, hampering application of aryloxyphenoxypropionate, cyclohexadione and phenylpyrazoline herbicides in agriculture. IC50 shifts (resistance indices) caused by herbicide-resistance mutations were determined using a recombinant yeast system that allows comparison of the effects of single amino acid mutations in the same biochemical background, avoiding the complexity inherent in in planta experiments. The effect of six mutations on the sensitivity of acetyl-CoA carboxylase to nine herbicides representing the three chemical classes was studied. A combination of partially overlapping binding sites of the three classes of herbicides and the structure of their variable parts explains cross-resistance among and between the three classes of inhibitors, as well as differences in their specificity. Some degree of resistance was detected for 51 of 54 herbicide/mutation combinations. Introduction of new herbicides targeting acetyl-CoA carboxylase will depend on their ability to overcome the high degree of cross-resistance already existing in weed populations. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.
IT and Activity Displacement: Behavioral Evidence from the U.S. General Social Survey (GSS)
ERIC Educational Resources Information Center
Robinson, John P.; Martin, Steven
2009-01-01
In order to track social change during a period of the rapid advances brought about by new information technologies (IT), a targeted module of IT-relevant and Internet questions was added to the 2000, 2002 and 2004 samples of the General Social Survey (GSS). The general issue inherent in and guiding the questions asked (as well as the analyses…
ERIC Educational Resources Information Center
Rice, Stephen; McCarley, Jason S.
2011-01-01
Automated diagnostic aids prone to false alarms often produce poorer human performance in signal detection tasks than equally reliable miss-prone aids. However, it is not yet clear whether this is attributable to differences in the perceptual salience of the automated aids' misses and false alarms or is the result of inherent differences in…
Sources of variability and systematic error in mouse timing behavior.
Gallistel, C R; King, Adam; McDonald, Robert
2004-01-01
In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.
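The defining signature of scalar variability, a constant coefficient of variation across target times, can be demonstrated with a small simulation. This is an illustrative sketch using the study's three target intervals; the multiplicative Gaussian noise and the 15% CV are assumptions, not the authors' estimates.

```python
import random

def timing_cv(target, n=5000, cv=0.15, seed=0):
    """Coefficient of variation of simulated peak-procedure centers when
    timing noise is multiplicative (scalar). The 15% CV is an assumed
    value, not an estimate from the study."""
    rng = random.Random(seed)
    centers = [rng.gauss(target, cv * target) for _ in range(n)]
    mean = sum(centers) / n
    sd = (sum((c - mean) ** 2 for c in centers) / n) ** 0.5
    return sd / mean

cvs = [timing_cv(t) for t in (5.0, 15.0, 45.0)]
print([round(c, 3) for c in cvs])  # near-identical CVs across targets
```

Because the noise scales with the target, the spread grows ninefold from 5 s to 45 s while the relative spread stays flat, which is the scalar property the study tests.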
False-Positive Rate of AKI Using Consensus Creatinine-Based Criteria
Lin, Jennie; Fernandez, Hilda; Shashaty, Michael G.S.; Negoianu, Dan; Testani, Jeffrey M.; Berns, Jeffrey S.; Parikh, Chirag R.
2015-01-01
Background and objectives: Use of small changes in serum creatinine to diagnose AKI allows for earlier detection but may increase diagnostic false-positive rates because of inherent laboratory and biologic variabilities of creatinine. Design, setting, participants, & measurements: We examined serum creatinine measurement characteristics in a prospective observational clinical reference cohort of 2267 adult patients with AKI by Kidney Disease Improving Global Outcomes creatinine criteria and used these data to create a simulation cohort to model AKI false-positive rates. We simulated up to seven successive blood draws on an equal population of hypothetical patients with unchanging true serum creatinine values. Error terms generated from laboratory and biologic variabilities were added to each simulated patient's true serum creatinine value to obtain the simulated measured serum creatinine for each blood draw. We determined the proportion of patients who would be erroneously diagnosed with AKI by Kidney Disease Improving Global Outcomes creatinine criteria. Results: Within the clinical cohort, 75.0% of patients received four serum creatinine draws within at least one 48-hour period during hospitalization. After four simulated creatinine measurements that accounted for laboratory variability calculated from assay characteristics and 4.4% biologic variability determined from the clinical cohort and publicly available data, the overall false-positive rate for AKI diagnosis was 8.0% (interquartile range = 7.9%-8.1%), whereas patients with true serum creatinine ≥1.5 mg/dl (representing 21% of the clinical cohort) had a false-positive AKI diagnosis rate of 30.5% (interquartile range = 30.1%-30.9%) versus 2.0% (interquartile range = 1.9%-2.1%) in patients with true serum creatinine values <1.5 mg/dl (P<0.001).
Conclusions: Use of small serum creatinine changes to diagnose AKI is limited by high false-positive rates caused by inherent variability of serum creatinine at higher baseline values, potentially misclassifying patients with CKD in AKI studies. PMID:26336912
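The simulation logic can be sketched in a few lines: draw repeated noisy measurements around a fixed true creatinine and apply an absolute rise criterion. This is a minimal sketch, not the authors' simulation; it uses only the 0.3 mg/dl absolute KDIGO criterion, a single combined 4.4% coefficient of variation, and Gaussian multiplicative noise, all simplifying assumptions.

```python
import random

def simulate_false_positive_rate(true_scr, n_patients=20000, n_draws=4,
                                 cv=0.044, seed=0):
    """Fraction of simulated patients with an unchanging true serum
    creatinine (mg/dl) flagged as AKI by the absolute KDIGO criterion
    (a rise >= 0.3 mg/dl between draws), given multiplicative
    measurement noise with coefficient of variation `cv`."""
    rng = random.Random(seed)
    flagged = 0
    for _ in range(n_patients):
        draws = [rng.gauss(true_scr, cv * true_scr) for _ in range(n_draws)]
        # flag if any later draw exceeds any earlier draw by >= 0.3 mg/dl
        if any(draws[j] - draws[i] >= 0.3
               for i in range(n_draws) for j in range(i + 1, n_draws)):
            flagged += 1
    return flagged / n_patients

low = simulate_false_positive_rate(true_scr=1.0)
high = simulate_false_positive_rate(true_scr=2.0)
print(f"baseline 1.0 mg/dl: {low:.1%} | baseline 2.0 mg/dl: {high:.1%}")
```

Because the noise is multiplicative, the same relative variability produces larger absolute swings at higher baselines, reproducing the qualitative effect reported above: markedly higher false-positive rates when true creatinine is elevated.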
Li, Lucia M.; Uehara, Kazumasa; Hanakawa, Takashi
2015-01-01
There has been an explosion of research using transcranial direct current stimulation (tDCS) for investigating and modulating human cognitive and motor function in healthy populations. It has also been used in many studies seeking to improve deficits in disease populations. With the slew of studies reporting “promising results” for everything from motor recovery after stroke to boosting memory function, one could be easily seduced by the idea of tDCS being the next panacea for all neurological ills. However, huge variability exists in the reported effects of tDCS, with great variability in the effect sizes and even contradictory results reported. In this review, we consider the interindividual factors that may contribute to this variability. In particular, we discuss the importance of baseline neuronal state and features, anatomy, age and the inherent variability in the injured brain. We additionally consider how interindividual variability affects the results of motor-evoked potential (MEP) testing with transcranial magnetic stimulation (TMS), which, in turn, can lead to apparent variability in response to tDCS in motor studies. PMID:26029052
A Bayesian Measurement Error Model for Misaligned Radiographic Data
Lennox, Kristin P.; Glascoe, Lee G.
2013-09-06
An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.
Universal structures of normal and pathological heart rate variability.
Gañán-Calvo, Alfonso M; Fajardo-López, Juan
2016-02-25
The circulatory system of living organisms is an autonomous mechanical system softly tuned to the respiratory system, both developed by evolution as a response to the complex oxygen demand patterns associated with motion. Circulatory health is rooted in adaptability, which entails an inherent variability. Here, we show that a generalized N-dimensional normalized graph representing heart rate variability reveals two universal arrhythmic patterns as specific signatures of health: one reflects cardiac adaptability, and the other the cardiac-respiratory rate tuning. In addition, we identify at least three universal arrhythmic profiles whose prevalence rises, at the expense of the two healthy patterns, in pathological conditions (myocardial infarction, heart failure, and recovery from sudden death). The presence of the identified universal arrhythmic structures, together with the position of the centre of mass of the heart rate variability graph, provides a unique quantitative assessment of the health-pathology gradient.
Chotirmall, S H; Low, T B; Hassan, T; Branagan, P; Kernekamp, C; Flynn, M G; Gunaratnam, C; McElvaney, N G
2011-06-01
Cystic fibrosis (CF) is of particular importance in Ireland, as the Irish population has both the highest incidence (2.98/10,000) and the highest carrier rate (1 in 19) in the world. Primary immunodeficiency has not previously been reported as co-existing with CF. We report a unique case of CF associated with a primary immunodeficiency syndrome--common variable immunodeficiency (CVID). Our patient has CF, CVID and the additional comorbidity of Asperger's syndrome. The challenges inherent in diagnosing and treating such a case are outlined herein, and the successful management of this case is evidenced by the well-preserved lung function of our patient.
Plate falling in a fluid: Regular and chaotic dynamics of finite-dimensional models
NASA Astrophysics Data System (ADS)
Kuznetsov, Sergey P.
2015-05-01
Results are reviewed concerning the planar problem of a plate falling in a resisting medium studied with models based on ordinary differential equations for a small number of dynamical variables. A unified model is introduced to conduct a comparative analysis of the dynamical behaviors of models of Kozlov, Tanabe-Kaneko, Belmonte-Eisenberg-Moses and Andersen-Pesavento-Wang using common dimensionless variables and parameters. It is shown that the overall structure of the parameter spaces for the different models manifests certain similarities caused by the same inherent symmetry and by the universal nature of the phenomena involved in nonlinear dynamics (fixed points, limit cycles, attractors, and bifurcations).
Speech-discrimination scores modeled as a binomial variable.
Thornton, A R; Raffin, M J
1978-09-01
Many studies have reported variability data for tests of speech discrimination, and the disparate results of these studies have not been given a simple explanation. Arguments over the relative merits of 25- vs. 50-word tests have ignored the basic mathematical properties inherent in the use of percentage scores. The present study models performance on clinical tests of speech discrimination as a binomial variable. A binomial model was developed, and some of its characteristics were tested against data from 4120 scores obtained on the CID Auditory Test W-22. A table for determining significant deviations between scores was generated and compared to observed differences in half-list scores for the W-22 tests. Good agreement was found between predicted and observed values. Implications of the binomial characteristics of speech-discrimination scores are discussed.
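The binomial argument can be made concrete with a simplified pooled two-proportion z-test: for a list of n words, the variance p(1-p)/n of a proportion score determines how far two scores must differ before the difference exceeds chance. This sketch is not the exact table generated in the study; the 95% z-criterion and pooled-variance form are assumptions.

```python
import math

def scores_differ(p1, p2, n, z=1.96):
    """Rough 95% test of whether two discrimination scores (proportions
    correct on n-word lists) differ by more than binomial variability
    alone. A simplified pooled two-proportion z-test, not the exact
    table from the original study."""
    p_pool = (p1 + p2) / 2.0
    se = math.sqrt(2.0 * p_pool * (1.0 - p_pool) / n)
    return abs(p1 - p2) > z * se

# A 20-point spread is within chance variation on 25-word lists,
# but exceeds it on 50-word lists.
print(scores_differ(0.80, 0.60, 25))  # False
print(scores_differ(0.80, 0.60, 50))  # True
```

This is the crux of the 25- vs. 50-word debate: halving the list length inflates the critical difference by roughly a factor of √2, so apparently large score changes can be pure binomial noise.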
A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes
Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-yung
2016-01-01
Wilderness search and rescue entails performing a wide range of work in complex environments and large regions. Given the concerns inherent in large regions due to limited rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location, and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency. PMID:27792156
Ranji, Peyman; Salmani Kesejini, Tayyebali; Saeedikhoo, Sara; Alizadeh, Ali Mohammad
2016-10-01
Cancer stem cells (CSCs) are a small subpopulation of tumor cells with capabilities of self-renewal, dedifferentiation, tumorigenicity, and inherent resistance to chemo- and radiotherapy. Tumor resistance is believed to be caused by CSCs that are intrinsically refractory to common treatments. A number of CSC markers, including CD44, CD133, receptor tyrosine kinase, aldehyde dehydrogenases, epithelial cell adhesion molecule/epithelial specific antigen, and ATP-binding cassette subfamily G member 2, have been proved useful targets for defining the CSC population in solid tumors. Furthermore, targeting CSC markers through new therapeutic strategies will ultimately improve treatments and overcome cancer drug resistance. Therefore, the identification of novel strategies to increase the sensitivity of CSC markers has major clinical implications. This review will focus on innovative treatment methods such as nano-, immuno-, gene-, and chemotherapy approaches for targeting CSC-specific markers and/or their associated signaling pathways.
Continuous movement decoding using a target-dependent model with EMG inputs.
Sachs, Nicholas A; Corbett, Elaine A; Miller, Lee E; Perreault, Eric J
2011-01-01
Trajectory-based models that incorporate target position information have been shown to accurately decode reaching movements from bio-control signals, such as muscle (EMG) and cortical activity (neural spikes). One major hurdle in implementing such models for neuroprosthetic control is that they are inherently designed to decode single reaches from a position of origin to a specific target. Gaze direction can be used to identify appropriate targets, however information regarding movement intent is needed to determine when a reach is meant to begin and when it has been completed. We used linear discriminant analysis to classify limb states into movement classes based on recorded EMG from a sparse set of shoulder muscles. We then used the detected state transitions to update target information in a mixture of Kalman filters that incorporated target position explicitly in the state, and used EMG activity to decode arm movements. Updating the target position initiated movement along new trajectories, allowing a sequence of appropriately timed single reaches to be decoded in series and enabling highly accurate continuous control.
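The core idea, a decoder whose state prediction is pulled toward the currently selected target and then corrected by incoming measurements, can be sketched in scalar form. This is a toy stand-in for the paper's mixture of Kalman filters: the fixed blending weight, fixed gain, and one-dimensional state are all illustrative assumptions, not the authors' implementation.

```python
def decode_reach(measurements, target, gain=0.2, weight=0.5):
    """Scalar sketch of target-dependent decoding: each step predicts
    movement toward the current target, then corrects the estimate with
    a noisy measurement. A toy stand-in for a target-in-state Kalman
    filter; gain and weight are fixed illustrative constants."""
    est = 0.0
    path = []
    for z in measurements:
        pred = est + gain * (target - est)  # target-driven prediction
        est = pred + weight * (z - pred)    # measurement correction
        path.append(est)
    return path

# Noisy position readings from a reach toward target = 10
zs = [1.0, 2.5, 4.2, 6.1, 7.0, 8.4, 9.1, 9.6]
path = decode_reach(zs, target=10.0)
print(f"final estimate: {path[-1]:.2f}")
```

Updating `target` mid-sequence (as the paper does when an EMG-detected state transition signals a new reach) would immediately redirect the prediction term, which is what lets a series of single reaches be decoded continuously.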
Cardiotoxicity of the new cancer therapeutics: mechanisms of, and approaches to, the problem
Force, Thomas; Kerkelä, Risto
2009-01-01
Cardiotoxicity of some targeted therapeutics, including monoclonal antibodies and small molecule inhibitors, is a reality. Herein we will examine why it occurs, focusing on molecular mechanisms to better understand the issue. We will also examine how big the problem is and, more importantly, how big it may become in the future. We will review models for detecting cardiotoxicity in the pre-clinical phase. We will also focus on two key areas that drive cardiotoxicity: multi-targeting and the inherent lack of selectivity of ATP-competitive antagonists. Finally, we will examine the issue of reversibility and discuss possible approaches to keeping patients on therapy. PMID:18617014
NASA Astrophysics Data System (ADS)
Kotowich, Steven
Studies of a non-thermal atmospheric pressure plasma source acting on an organic heterocycle were conducted to determine reaction parameters and rearrangement conditions. The target compound 3,7-bis(dimethylamino)-phenothiazin-5-ium chloride, commonly referred to as methylene blue, was determined to polymerize after exposure to a non-thermal atmospheric pressure plasma source. The presence of charge retention and a free electron radical was detected inherent to the polymer. An evaluation of the structure and mechanism of the polymer was also presented for evidence and clarification. An additional description of the plasma source environment was correlated to the manipulation of the target compound.
Byeon, Ji-Yeon; Bailey, Ryan C
2011-09-07
High affinity capture agents recognizing biomolecular targets are essential in the performance of many proteomic detection methods. Herein, we report the application of a label-free silicon photonic biomolecular analysis platform for simultaneously determining kinetic association and dissociation constants for two representative protein capture agents: a thrombin-binding DNA aptamer and an anti-thrombin monoclonal antibody. The scalability and inherent multiplexing capability of the technology make it an attractive platform for simultaneously evaluating the binding characteristics of multiple capture agents recognizing the same target antigen, and thus a tool complementary to emerging high-throughput capture agent generation strategies.
Exploiting the epigenome to control cancer promoting gene expression programs
Brien, Gerard L.; Valerio, Daria G.; Armstrong, Scott A.
2016-01-01
Summary The epigenome is a key determinant of transcriptional output. Perturbations within the epigenome are thought to be a key feature of many, perhaps all cancers, and it is now clear that epigenetic changes are instrumental in cancer development. The inherent reversibility of these changes makes them attractive targets for therapeutic manipulation and a number of small molecules targeting chromatin-based mechanisms are currently in clinical trials. In this perspective we discuss how understanding the cancer epigenome is providing insights into disease pathogenesis and informing drug development. We also highlight additional opportunities to further unlock the therapeutic potential within the cancer epigenome. PMID:27070701
Porphyrin as an ideal biomarker in the search for extraterrestrial life.
Suo, Zhiyong; Avci, Recep; Schweitzer, Mary Higby; Deliorman, Muhammedin
2007-08-01
A key issue in astrobiological research is identifying target molecules that are unambiguously biological in origin and can be easily detected and recognized. We suggest porphyrin derivatives as an ideal target, because these chromophores are global in distribution and found in virtually all living organisms on Earth, including microorganisms that may approximate the early evolution of life on Earth. We discuss the inherent qualities that make porphyrin ideally suited for astrobiological research and discuss methods for detecting porphyrin molecules in terrestrial sedimentary environments. We present preliminary data to support the use of ToF-SIMS as a powerful technique in the identification of porphyrins.
Hilal, Talal; Gea Banacloche, Juan C; Leis, Jose F
2018-03-16
Chronic lymphocytic leukemia (CLL) is the most common adult leukemia in the world. Patients with CLL are at particular risk for infections due to inherent disease-related immune dysfunction in addition to the effect of certain systemic therapies on the immune system. The advent of B-cell receptor (BCR) inhibitors such as ibrutinib and idelalisib has led to a practice change that utilizes these targeted agents in the treatment of CLL, either in place of chemoimmunotherapy (CIT) or in later-line settings. In this paper, we review the pathophysiology of immune dysfunction in CLL and the spectrum of immunodeficiency with the various therapeutic agents, along with prevention strategies, with a focus on targeted therapies. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Nagpal, Vinod K.; Tong, Michael; Murthy, P. L. N.; Mital, Subodh
1998-01-01
An integrated probabilistic approach has been developed to assess composites for high-temperature applications. This approach was used to determine thermal and mechanical properties, and their probabilistic distributions, of a 5-harness 0/90 Sylramic fiber/CVI-SiC/MI-SiC woven Ceramic Matrix Composite (CMC) at high temperatures. The purpose of developing this approach was to generate quantitative probabilistic information on this CMC to help complete the evaluation of its potential application for the HSCT combustor liner. This approach quantified the influences of uncertainties inherent in constituent properties, called primitive variables, on selected key response variables of the CMC at 2200 °F. The quantitative information is presented in the form of Cumulative Density Functions (CDFs), Probability Density Functions (PDFs), and primitive variable sensitivities on response. Results indicate that the scatter in the response variables was reduced by 30-50% when the uncertainties in the most influential primitive variables were reduced by 50%.
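The general recipe, propagating primitive-variable uncertainty to a response distribution by Monte Carlo and observing how output scatter shrinks when a dominant input uncertainty is halved, can be sketched as follows. The rule-of-mixtures response and all numeric values are hypothetical stand-ins, not the CMC model or data from the study.

```python
import random
import statistics

def response(e_fiber, e_matrix, v_fiber):
    """Toy rule-of-mixtures stiffness; a hypothetical stand-in for the
    CMC thermo-mechanical response model."""
    return v_fiber * e_fiber + (1.0 - v_fiber) * e_matrix

def response_scatter(cv_fiber, n=20000, seed=0):
    """Monte Carlo scatter of the response when the primitive variables
    (fiber/matrix moduli, fiber volume fraction) vary about nominal
    values; all nominal values and CVs here are assumed."""
    rng = random.Random(seed)
    samples = [response(rng.gauss(380.0, 380.0 * cv_fiber),  # GPa
                        rng.gauss(300.0, 300.0 * 0.05),
                        rng.gauss(0.40, 0.40 * 0.03))
               for _ in range(n)]
    return statistics.pstdev(samples)

base = response_scatter(cv_fiber=0.10)
reduced = response_scatter(cv_fiber=0.05)  # halve the dominant uncertainty
print(f"scatter reduction: {1.0 - reduced / base:.0%}")
```

With these assumed numbers the fiber modulus dominates the output variance, so halving its uncertainty cuts the response scatter by roughly a third, the same order as the 30-50% reductions reported above.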
Ecological advice for the global fisher crisis.
Roberts, C M
1997-01-01
Fisheries science was the precursor of population ecology and continues to contribute important theoretical advances. Despite this, fishery scientists have a poor record for applying their insights to real-world fisheries management. Is there a gulf between theory and application or does the high variability inherent in fish populations and complexity of multispecies fisheries demand a different approach to management? Perhaps the solution to the world fisheries crisis is obvious after all?
Basic Research on Seismic and Infrasonic Monitoring of the European Arctic
2008-09-01
characteristics as well as the inherent variability among these signals. We have used available recordings both from the Apatity infrasound array and from...experimentally attempt to generate an infrasonic event bulletin using only the estimated azimuths and detection times of infrasound phases recorded by... detection. Our studies have shown a remarkably efficient wave propagation from events near Novaya Zemlya across the Barents Sea. Significant signal
Determining How to Best Predict Navy Recruiting Using Various Economic Variables
2015-03-01
M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley. Leonhardt, D... Fishbein & Ajzen, 1975), behavioral intentions are strong predictors of actual behavior. The higher propensity rates are at a given time inherently...J. R., Arkes, J., Fair, C. C., Sharp, J., & Totten, M. (2002). Military recruiting and retention after the fiscal year 2000 military pay (MR-1532-OSD
Review of Literature on Probability of Detection for Liquid Penetrant Nondestructive Testing
2011-11-01
increased maintenance costs, or catastrophic failure of safety-critical structure. Knowledge of the reliability achieved by NDT methods, including...representative components to gather data for statistical analysis, which can be prohibitively expensive. To account for sampling variability inherent in any...Sioux City and Pensacola. (Those recommendations were discussed in Section 3.4.) Drury et al report on a factorial experiment aimed at identifying the
Accounting for inherent variability of growth in microbial risk assessment.
Marks, H M; Coleman, M E
2005-04-15
Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. To determine the possible risks that occur when there are small numbers of cells, stochastic models of growth are needed that capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initial small number of cells assumed to be transforming from a stationary to an exponential phase. Two basic sets of microbial assumptions are considered: serial, where cells are assumed to transform through a lag phase before entering the exponential phase of growth; and parallel, where lag and exponential phases are assumed to develop in parallel. The model is based on first determining the distribution of the time when growth commences and then modelling the conditional distribution of the number of cells. For the latter distribution, a Weibull distribution is found to provide a simple approximation to the conditional distribution of the relative growth, so that the model developed in this paper can be easily implemented in risk assessments using commercial software packages.
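A minimal version of the serial assumption (each cell waits out a random lag, then grows exponentially) can be simulated directly. This sketch assumes exponentially distributed lags and deterministic post-lag growth with arbitrary parameter values; the paper's Weibull-approximation step is omitted.

```python
import math
import random

def relative_growth(n0, lag_mean, doubling_time, t, rng):
    """One replicate under the serial assumption: each of n0 cells waits
    an exponentially distributed lag, then grows deterministically with
    the given doubling time. Returns growth relative to n0."""
    total = 0.0
    for _ in range(n0):
        lag = rng.expovariate(1.0 / lag_mean)
        total += 2.0 ** (max(0.0, t - lag) / doubling_time)
    return total / n0

rng = random.Random(1)
reps = [relative_growth(n0=5, lag_mean=2.0, doubling_time=1.0, t=6.0, rng=rng)
        for _ in range(2000)]
mean = sum(reps) / len(reps)
sd = math.sqrt(sum((r - mean) ** 2 for r in reps) / len(reps))
print(f"mean relative growth {mean:.1f}, CV {sd / mean:.2f}")
```

Because only a handful of cells seed each replicate, the random lags produce substantial replicate-to-replicate spread in relative growth, which is exactly the inherent variability the stochastic model is meant to capture.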
Han, Jijun; Yang, Deqiang; Sun, Houjun; Xin, Sherman Xuegang
2017-01-01
The inverse method is inherently suitable for calculating the distribution of source current density related to an irregularly structured electromagnetic target field. However, the present form of the inverse method cannot calculate complex field-tissue interactions. A novel hybrid inverse/finite-difference time domain (FDTD) method that can calculate the complex field-tissue interactions for the inverse design of source current density related to an irregularly structured electromagnetic target field is proposed. A Huygens' equivalent surface is established as a bridge to combine the inverse and FDTD methods. The distribution of the radiofrequency (RF) magnetic field on the Huygens' equivalent surface is obtained using the FDTD method by considering the complex field-tissue interactions within the human body model. The obtained magnetic field distributed on the Huygens' equivalent surface is regarded as the next target. The current density on the designated source surface is then derived using the inverse method. The homogeneity of the target magnetic field and the specific energy absorption rate are calculated to verify the proposed method.
CRISPR Approaches to Small Molecule Target Identification.
A long-standing challenge in drug development is the identification of the mechanisms of action of small molecules with therapeutic potential. A number of methods have been developed to address this challenge, each with inherent strengths and limitations. We here provide a brief review of these methods with a focus on chemical-genetic methods that are based on systematically profiling the effects of genetic perturbations on drug sensitivity.
Wayne D. Shepperd
2007-01-01
One of the difficulties of apportioning growing stock across diameter classes in multi- or uneven-aged forests is estimating how closely the target stocking value compares to the maximum stocking that could occur in a particular forest type and eco-region. Although the BDQ method has been used to develop uneven-aged prescriptions, it is not inherently related to any...
Multiple man-machine interfaces
NASA Technical Reports Server (NTRS)
Stanton, L.; Cook, C. W.
1981-01-01
The multiple man-machine interfaces inherent in military pilot training, their social implications, and the issue of possible negative feedback were explored. Modern technology has produced machines which can see, hear, and touch with greater accuracy and precision than human beings. Consequently, the military pilot is more a systems manager, often doing battle against a target he never sees. It is concluded that unquantifiable human activity requires motivation that is not intrinsic in a machine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Creasy, John T
2015-05-12
This project has the objective to reduce and/or eliminate the use of HEU in commerce. Steps in the process include developing a target testing methodology that is bounding for all Mo-99 target irradiators, establishing a maximum target LEU-foil mass, developing a LEU-foil target qualification document, developing a bounding target failure analysis methodology (failure in reactor containment), optimizing safety vs. economics (goal is to manufacture a safe, but relatively inexpensive target to offset the inherent economic disadvantage of using LEU in place of HEU), and developing target material specifications and manufacturing QC test criteria. The slide presentation is organized under the following topics: Objective, Process Overview, Background, Team Structure, Key Achievements, Experiment and Activity Descriptions, and Conclusions. The High Density Target project has demonstrated: approx. 50 targets irradiated through domestic and international partners; proof of concept for two front end processing methods; fabrication of uranium foils for target manufacture; quality control procedures and steps for manufacture; multiple target assembly techniques; multiple target disassembly devices; welding of targets; thermal, hydraulic, and mechanical modeling; robust target assembly parametric studies; and target qualification analysis for insertion into very high flux environment. The High Density Target project has tested and proven several technologies that will benefit current and future Mo-99 producers.
Robot-assisted vitreoretinal surgery: current perspectives
Roizenblatt, Marina; Edwards, Thomas L; Gehlbach, Peter L
2018-01-01
Vitreoretinal microsurgery is among the most technically challenging of the minimally invasive surgical techniques. Exceptional precision is required to operate on micron scale targets presented by the retina while also maneuvering in a tightly constrained and fragile workspace. These challenges are compounded by inherent limitations of the unassisted human hand with regard to dexterity, tremor and precision in positioning instruments. The limited human ability to visually resolve targets on the single-digit micron scale is a further limitation. The inherent attributes of robotic approaches therefore, provide logical, strategic and promising solutions to the numerous challenges associated with retinal microsurgery. Robotic retinal surgery is a rapidly emerging technology that has witnessed an exponential growth in capabilities and applications over the last decade. There is now a worldwide movement toward evaluating robotic systems in an expanding number of clinical applications. Coincident with this expanding application is growth in the number of laboratories committed to “robotic medicine”. Recent technological advances in conventional retina surgery have also led to tremendous progress in the surgeon’s capabilities, enhanced outcomes, a reduction of patient discomfort, limited hospitalization and improved safety. The emergence of robotic technology into this rapidly advancing domain is expected to further enhance important aspects of the retinal surgery experience for the patients, surgeons and society. PMID:29527537
Ohta, Yoichi
2017-12-01
The present study aimed to clarify the effects of oncoming target velocity on the ability of rapid force production and on the accuracy and variability of simultaneously controlling both force production intensity and timing. Twenty male participants (age: 21.0 ± 1.4 years) performed rapid gripping with a handgrip dynamometer to coincide with the arrival of an oncoming target by using a horizontal electronic trackway. The oncoming target velocities were 4, 8, and 12 m·s⁻¹, presented in random order. The grip force required was 30% of the maximal voluntary contraction. Although the peak force (Pf) and rate of force development (RFD) increased with increasing target velocity, the RFD-to-Pf ratio was constant across the 3 target velocities. The accuracy of both force production intensity and timing decreased at higher target velocities. Moreover, the intrapersonal variability in temporal parameters was lower in the fast target velocity condition, but constant variability across the 3 target velocities was observed in force intensity parameters. These results suggest that oncoming target velocity does not intrinsically affect the ability for rapid force production. However, oncoming target velocity does affect the accuracy and variability of force production intensity and timing during rapid force production.
Woerly, Eric M; Roy, Jahnabi; Burke, Martin D
2014-06-01
The inherent modularity of polypeptides, oligonucleotides and oligosaccharides has been harnessed to achieve generalized synthesis platforms. Importantly, like these other targets, most small-molecule natural products are biosynthesized via iterative coupling of bifunctional building blocks. This suggests that many small molecules also possess inherent modularity commensurate with systematic building block-based construction. Supporting this hypothesis, here we report that the polyene motifs found in >75% of all known polyene natural products can be synthesized using just 12 building blocks and one coupling reaction. Using the same general retrosynthetic algorithm and reaction conditions, this platform enabled both the synthesis of a wide range of polyene frameworks that covered all of this natural-product chemical space and the first total syntheses of the polyene natural products asnipyrone B, physarigin A and neurosporaxanthin β-D-glucopyranoside. Collectively, these results suggest the potential for a more generalized approach to making small molecules in the laboratory.
Antibody Protein Array Analysis of the Tear Film Cytokines
Li, Shimin; Sack, Robert; Vijmasi, Trinka; Sathe, Sonal; Beaton, Ann; Quigley, David; Gallup, Marianne; McNamara, Nancy A.
2013-01-01
Purpose Many bioactive proteins including cytokines are reported to increase in dry eye disease, although the specific profile and concentration of inflammatory mediators varies considerably from study to study. In part this variability results from inherent difficulties in quantifying low-abundance proteins in a limited sample volume using relatively low-sensitivity dot ELISA methods. Additional complexity comes with the use of pooled samples collected using a variety of techniques and intrinsic variation in the diurnal pattern of individual tear proteins. The current study describes a recent advance in the area of proteomics that has allowed the identification of dozens of low-abundance proteins in human tear samples. Methods Commercially available stationary-phase antibody protein arrays were adapted to improve suitability for use in small-volume biological fluid analysis, with particular emphasis on tear film proteomics. Arrays were adapted to allow simultaneous screening for a panel of inflammatory cytokines in low-volume tear samples collected from individual eyes. Results A preliminary study comparing tear array results in a small population of Sjögren's syndrome patients was conducted. Multiplex microplate array assays of cytokines in tear fluid present an unanticipated challenge due to the unique nature of tear fluid: it contains factors that exhibit an affinity for plastic, capture antibodies, and IgG, creating a complex series of matrix effects that profoundly impact the reliability of dot ELISA, including elevated levels of background reactivity and a reduced capacity to bind the targeted protein. Conclusions Preliminary results using tears collected from patients with Sjögren's syndrome reveal methodological advantages of protein array technology and support the concept that autoimmune-mediated dry eye disease has an inflammatory component.
They also emphasize the inherent difficulties one can face when interpreting the results of micro-well arrays that result from blooming effects, matrix effects, image saturation and cross-talk between capture and probe antibodies that can greatly reduce signal-to-noise and limit the ability to obtain meaningful results. PMID:18677223
Donovan, Dennis M.; Bigelow, George E.; Brigham, Gregory S.; Carroll, Kathleen M.; Cohen, Allan J.; Gardin, John G.; Hamilton, John A.; Huestis, Marilyn A.; Hughes, John R.; Lindblad, Robert; Marlatt, G. Alan; Preston, Kenzie L.; Selzer, Jeffrey A.; Somoza, Eugene C.; Wakim, Paul G.; Wells, Elizabeth A.
2012-01-01
Aims Clinical trials test the safety and efficacy of behavioral and pharmacological interventions in drug-dependent individuals. However, there is no consensus about the most appropriate outcome(s) to consider in determining treatment efficacy or on the most appropriate methods for assessing selected outcome(s). We summarize the discussion and recommendations of treatment and research experts, convened by the US National Institute on Drug Abuse, to select appropriate primary outcomes for drug dependence treatment clinical trials, and in particular the feasibility of selecting a common outcome to be included in all or most trials. Methods A brief history of outcomes employed in prior drug dependence treatment research, incorporating perspectives from tobacco and alcohol research, is included. The relative merits and limitations of focusing on drug-taking behavior, as measured by self-report and qualitative or quantitative biological markers, are evaluated. Results Drug-taking behavior, measured ideally by a combination of self-report and biological indicators, is seen as the most appropriate proximal primary outcome in drug dependence treatment clinical trials. Conclusions We conclude that the most appropriate outcome will vary as a function of salient variables inherent in the clinical trial, such as the type of intervention, its target, treatment goals (e.g. abstinence or reduction of use) and the perspective being taken (e.g. researcher, clinical program, patient, society). It is recommended that a decision process, based on such trial variables, be developed to guide the selection of primary and secondary outcomes as well as the methods to assess them. PMID:21781202
The research of edge extraction and target recognition based on inherent feature of objects
NASA Astrophysics Data System (ADS)
Xie, Yu-chan; Lin, Yu-chi; Huang, Yin-guo
2008-03-01
Current research on computer vision often needs specific techniques for particular problems. Little use has been made of high-level aspects of computer vision, such as three-dimensional (3D) object recognition, that are appropriate for large classes of problems and situations. In particular, high-level vision often focuses mainly on the extraction of symbolic descriptions and pays little attention to the speed of processing. In order to extract and recognize targets intelligently and rapidly, in this paper we developed a new 3D target recognition method based on the inherent features of objects, taking a cuboid as the model. On the basis of analyzing the cuboid's natural contour and grey-level distribution characteristics, an overall fuzzy evaluation technique was utilized to recognize and segment the target. The Hough transform was then used to extract and match the model's main edges, and finally the target edges were reconstructed by stereo techniques. There are three major contributions in this paper. Firstly, the corresponding relations between the parameters of the cuboid model's straight edge lines in the image field and in the transform field are summarized. With these, the aimless computations and searches in Hough transform processing can be greatly reduced and efficiency improved. Secondly, as the prior knowledge about the geometry of the cuboid's contour is already known, the intersections of the extracted component edges are taken, and the geometry of candidate edge matches is assessed based on these intersections rather than on the extracted edges themselves. The outlines are thereby enhanced and noise is suppressed. Finally, a 3D target recognition method is proposed. Compared with other recognition methods, this new method has a quick response time and can be achieved with high-level computer vision.
The method presented here can be used widely in vision-guided techniques to strengthen their intelligence and generalization, and can also play an important role in object tracking, port AGVs, and robotics. The results of simulation experiments and theoretical analysis demonstrate that the proposed method can suppress noise effectively, extract target edges robustly, and meet real-time requirements. Theoretical analysis and experiment show that the method is reasonable and efficient.
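The Hough voting step that underpins this edge-matching approach can be sketched in a few lines. This is a generic illustration of the standard line parameterization ρ = x·cosθ + y·sinθ, not the authors' optimized variant; the bin counts and image size are arbitrary choices:

```python
import numpy as np

def hough_lines(edge_points, img_shape, n_theta=180, n_rho=200):
    """Minimal Hough accumulator for straight lines. Each edge pixel
    votes for every (rho, theta) line passing through it; peaks in the
    accumulator indicate dominant straight edges."""
    h, w = img_shape
    diag = np.hypot(h, w)                                  # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        rho_vals = x * cos_t + y * sin_t                   # rho for each theta
        idx = np.round((rho_vals + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1                  # one vote per theta bin
    return acc, rhos, thetas
```

The strongest line is recovered from the accumulator maximum, e.g. `r_i, t_i = np.unravel_index(acc.argmax(), acc.shape)`; restricting the searched (ρ, θ) ranges using a known model, as the paper does for the cuboid, prunes exactly these votes.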
TU-AB-BRB-00: New Methods to Ensure Target Coverage
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust-plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning.
Learning Objectives: To understand robust-planning as a clinical alternative to using margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.
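The coverage-probability idea discussed in this session can be illustrated with a deliberately simplified 1-D Monte Carlo sketch: draw random setup errors for each fraction and count how often the course-mean error stays within a given CTV-to-PTV margin. This is a toy model only (Gaussian errors, no dose accumulation); `sigma_mm` and the coverage criterion are illustrative assumptions, not part of the symposium's methods:

```python
import random

def coverage_probability(margin_mm, sigma_mm=3.0, n_courses=20_000,
                         n_fractions=30, rng=None):
    """Toy coverage probability: a simulated course 'covers' the CTV if
    the mean absolute setup error over its fractions stays within the
    CTV-to-PTV margin. Real CP evaluation accumulates dose instead."""
    rng = rng or random.Random(1)
    covered = 0
    for _ in range(n_courses):
        shifts = [rng.gauss(0.0, sigma_mm) for _ in range(n_fractions)]
        mean_err = abs(sum(shifts) / n_fractions)
        covered += mean_err <= margin_mm
    return covered / n_courses
```

Even this toy model shows the key behavior: coverage probability degrades sharply as the margin shrinks toward the scale of the residual (fraction-averaged) error, which is why few-fraction treatments are more sensitive to uncertainty.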
Conservative-variable average states for equilibrium gas multi-dimensional fluxes
NASA Technical Reports Server (NTRS)
Iannelli, G. S.
1992-01-01
Modern split component evaluations of the flux vector Jacobians are thoroughly analyzed for equilibrium-gas average-state determinations. It is shown that all such derivations satisfy a fundamental eigenvalue consistency theorem. A conservative-variable average state is then developed for arbitrary equilibrium-gas equations of state and curvilinear-coordinate fluxes. Original expressions for eigenvalues, sound speed, Mach number, and eigenvectors are then determined for a general average Jacobian, and it is shown that the average eigenvalues, Mach number, and eigenvectors may not coincide with their classical pointwise counterparts. A general equilibrium-gas equation of state is then discussed for conservative-variable computational fluid dynamics (CFD) Euler formulations. The associated derivations lead to unique compatibility relations that constrain the pressure Jacobian derivatives. Thereafter, alternative forms for the pressure variation and average sound speed are developed in terms of two average pressure Jacobian derivatives. Significantly, no additional degree of freedom exists in the determination of these two average partial derivatives of pressure. Therefore, they are simultaneously computed exactly without any auxiliary relation, hence without any geometric solution projection or arbitrary scale factors. Several alternative formulations are then compared and key differences highlighted with emphasis on the determination of the pressure variation and average sound speed. The relevant underlying assumptions are identified, including some subtle approximations that are inherently employed in published average-state procedures. Finally, a representative test case is discussed for which an intrinsically exact average state is determined. This exact state is then compared with the predictions of recent methods, and their inherent approximations are appropriately quantified.
Liang, Xiaoyun; Vaughan, David N; Connelly, Alan; Calamante, Fernando
2018-05-01
The conventional way to estimate functional networks is primarily based on Pearson correlation along with the classic Fisher Z test. In general, networks are calculated at the individual level and subsequently aggregated to obtain group-level networks. However, such estimated networks are inevitably affected by the inherent, large inter-subject variability. A joint graphical model with stability selection (JGMSS) method was recently shown to effectively reduce inter-subject variability, mainly caused by confounding variations, by simultaneously estimating individual-level networks from a group. However, its benefits may be compromised when two groups are being compared, given that JGMSS is blinded to other groups when it is applied to estimate networks from a given group. We propose a novel method for robustly estimating networks from two groups by using group-fused multiple graphical lasso combined with stability selection, named GMGLASS. Specifically, by simultaneously estimating similar within-group networks and the between-group difference, it is possible to address both the inter-subject variability of estimated individual networks inherent in existing methods such as the Fisher Z test, and the issues arising from JGMSS ignoring between-group information in group comparisons. To evaluate the performance of GMGLASS in terms of a few key network metrics, as well as to compare it with JGMSS and the Fisher Z test, all three methods were applied to both simulated and in vivo data. As a method aimed at group comparison studies, our study involves two groups for each case, i.e., normal control and patient groups; for the in vivo data, we focus on a group of patients with right mesial temporal lobe epilepsy.
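The conventional baseline mentioned above — Pearson correlation thresholded with the Fisher r-to-z test — can be sketched as follows. This is a generic illustration of the standard procedure, not the GMGLASS implementation; the significance level and the clipping constant used to keep `arctanh` finite are illustrative choices:

```python
import numpy as np
from math import erfc, sqrt

def fisher_z_network(ts, alpha=0.05):
    """Individual-level functional network from a time-by-region matrix:
    Pearson correlations, Fisher r-to-z transform, two-sided z test
    against zero correlation, binary adjacency at level alpha."""
    n_t, _ = ts.shape
    r = np.corrcoef(ts, rowvar=False)                # region-by-region correlations
    z = np.arctanh(np.clip(r, -0.999999, 0.999999))  # Fisher transform
    se = 1.0 / sqrt(n_t - 3)                         # standard error of z
    # two-sided p-value: p = erfc(|z/se| / sqrt(2))
    p = np.array([[erfc(abs(v) / se / sqrt(2)) for v in row] for row in z])
    adj = (p < alpha).astype(int)
    np.fill_diagonal(adj, 0)                         # no self-connections
    return adj
```

Because each subject's network is thresholded independently, edge estimates inherit the full inter-subject variability — the limitation that motivates joint estimation methods such as JGMSS and GMGLASS.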
NASA Astrophysics Data System (ADS)
Le, Chengfeng; Hu, Chuanmin; English, David; Cannizzaro, Jennifer; Chen, Zhiqiang; Kovach, Charles; Anastasiou, Christopher J.; Zhao, Jun; Carder, Kendall L.
2013-01-01
Inherent and apparent optical properties (IOPs and AOPs) of Tampa Bay (Florida, USA) were measured during fourteen cruises between February 1998 and October 2010 to understand how these properties relate to one another and what controls light absorption and diffuse attenuation in this moderately sized (~1000 km²), shallow estuary (average depth ~4 m). The IOPs and AOPs included: 1) absorption coefficients of three optically significant constituents: phytoplankton pigments, detrital particles, and colored dissolved organic matter (CDOM); 2) particulate backscattering coefficients; 3) chlorophyll-a concentrations; 4) above-water remote sensing reflectance; 5) downwelling diffuse attenuation coefficients (Kd) at eight wavelengths and photosynthetically active radiation (PAR). Results showed substantial variability in all IOPs and AOPs in both space and time, with most IOPs spanning more than two orders of magnitude and showing strong co-variations. Of all four bay segments, Old Tampa Bay showed unique optical characteristics. During the wet season, the magnitude of blue-green-light absorption was dominated by CDOM, while during the dry season all three constituents contributed significantly. However, the variability in Kd (PAR, 490 nm, 555 nm) was driven mainly by the variability of detrital particles and phytoplankton as opposed to CDOM. This observation explained, at least to first order, why a nutrient reduction management strategy used by the Tampa Bay Estuary Program since the 1990s led to improved water clarity in most of Tampa Bay. The findings of this study provided the optical basis to fine tune existing or develop new algorithms to estimate the various optical water quality parameters from space.
Kretova, Olga V; Chechetkin, Vladimir R; Fedoseeva, Daria M; Kravatsky, Yuri V; Sosin, Dmitri V; Alembekov, Ildar R; Gorbacheva, Maria A; Gashnikova, Natalya M; Tchurikov, Nickolai A
2017-02-01
Any method for silencing the activity of the HIV-1 retrovirus must tackle the extremely high variability of HIV-1 sequences and mutational escape. We studied sequence variability in the vicinity of selected RNA interference (RNAi) targets from isolates of HIV-1 subtype A in Russia, and we propose that using artificial RNAi is a potential alternative to traditional antiretroviral therapy. We show that using multiple RNAi targets overcomes the variability in HIV-1 isolates. The optimal number of targets critically depends on the conservation of the target sequences: the total number of targets that are conserved with a probability of 0.7-0.8 should be at least 2. Combining deep sequencing and multitarget RNAi may provide an efficient approach to cure HIV/AIDS.
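The requirement on the number of conserved targets can be illustrated with a simple binomial calculation. This sketch assumes each target is conserved independently with the same probability — an idealization introduced here for illustration, not a claim from the study:

```python
from math import comb

def p_at_least_k_conserved(n_targets, p_conserved, k=1):
    """P(at least k of n independent RNAi targets remain conserved in an
    isolate), assuming each target is conserved with probability p."""
    return sum(comb(n_targets, j) * p_conserved**j
               * (1 - p_conserved)**(n_targets - j)
               for j in range(k, n_targets + 1))
```

Under this idealization, a single target conserved with probability 0.7 fails in 30% of isolates, while two such targets already leave at least one intact with probability 1 - 0.3² = 0.91, consistent with the recommendation of at least 2 well-conserved targets.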
Hilario, Eric C; Stern, Alan; Wang, Charlie H; Vargas, Yenny W; Morgan, Charles J; Swartz, Trevor E; Patapoff, Thomas W
2017-01-01
Concentration determination is an important method of protein characterization required in the development of protein therapeutics. There are many known methods for determining the concentration of a protein solution, but the easiest to implement in a manufacturing setting is absorption spectroscopy in the ultraviolet region. For typical proteins composed of the standard amino acids, absorption at wavelengths near 280 nm is due to the three amino acid chromophores tryptophan, tyrosine, and phenylalanine in addition to a contribution from disulfide bonds. According to the Beer-Lambert law, absorbance is proportional to concentration and path length, with the proportionality constant being the extinction coefficient. Typically the extinction coefficient of proteins is experimentally determined by measuring a solution absorbance then experimentally determining the concentration, a measurement with some inherent variability depending on the method used. In this study, extinction coefficients were calculated based on the measured absorbance of model compounds of the four amino acid chromophores. These calculated values for an unfolded protein were then compared with an experimental concentration determination based on enzymatic digestion of proteins. The experimentally determined extinction coefficient for the native proteins was consistently found to be 1.05 times the calculated value for the unfolded proteins for a wide range of proteins with good accuracy and precision under well-controlled experimental conditions. The value of 1.05 times the calculated value was termed the predicted extinction coefficient. Statistical analysis shows that the differences between predicted and experimentally determined coefficients are scattered randomly, indicating no systematic bias between the values among the proteins measured. The predicted extinction coefficient was found to be accurate and not subject to the inherent variability of experimental methods. 
We propose the use of a predicted extinction coefficient for determining the protein concentration of therapeutic proteins starting from early development through the lifecycle of the product. LAY ABSTRACT: Knowing the concentration of a protein in a pharmaceutical solution is important to the drug's development and posology. There are many ways to determine the concentration, but the easiest one to use in a testing lab employs absorption spectroscopy. Absorbance of ultraviolet light by a protein solution is proportional to its concentration and path length; the proportionality constant is the extinction coefficient. The extinction coefficient of a protein therapeutic is usually determined experimentally during early product development and has some inherent method variability. In this study, extinction coefficients of several proteins were calculated based on the measured absorbance of model compounds. These calculated values for an unfolded protein were then compared with experimental concentration determinations based on enzymatic digestion of the proteins. The experimentally determined extinction coefficient for the native protein was 1.05 times the calculated value for the unfolded protein with good accuracy and precision under controlled experimental conditions, so the value of 1.05 times the calculated coefficient was called the predicted extinction coefficient. Comparison of predicted and measured extinction coefficients indicated that the predicted value was very close to the experimentally determined values for the proteins. The predicted extinction coefficient was accurate and removed the variability inherent in experimental methods. © PDA, Inc. 2017.
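The Beer-Lambert arithmetic described above can be sketched as follows. The per-residue coefficients are the widely used Edelhoch/Pace values for unfolded proteins (Trp 5500, Tyr 1490, cystine 125 M⁻¹cm⁻¹), and the 1.05 scaling is the native-protein factor reported in this study; the function names and example numbers are illustrative:

```python
def predicted_extinction_coefficient(n_trp, n_tyr, n_cystine,
                                     folded_factor=1.05):
    """Calculated 280 nm extinction coefficient (M^-1 cm^-1) for the
    unfolded chain from residue counts, scaled by the ~1.05 factor
    reported for native proteins."""
    eps_unfolded = 5500 * n_trp + 1490 * n_tyr + 125 * n_cystine
    return folded_factor * eps_unfolded

def concentration_mg_ml(a280, eps, mw, path_cm=1.0):
    """Beer-Lambert: A = eps * c * l, so c (molar) = A / (eps * l);
    multiplying by molecular weight (g/mol) gives g/L = mg/mL."""
    return a280 / (eps * path_cm) * mw
```

Using a predicted coefficient this way replaces the experimentally determined value, avoiding the method-dependent variability the study describes.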
Blum, Mathias; Gamper, Hannes A; Waldner, Maya; Sierotzki, Helge; Gisi, Ulrich
2012-04-01
Proper disease control is very important to minimize yield losses caused by oomycetes in many crops. Today, oomycete control is partially achieved by breeding for resistance, but mainly by application of single-site mode of action fungicides including the carboxylic acid amides (CAAs). Despite having mostly specific targets, fungicidal activity can differ even in species belonging to the same phylum but the underlying mechanisms are often poorly understood. In an attempt to elucidate the phylogenetic basis and underlying molecular mechanism of sensitivity and tolerance to CAAs, the cellulose synthase 3 (CesA3) gene was isolated and characterized, encoding the target site of this fungicide class. The CesA3 gene was present in all 25 species included in this study representing the orders Albuginales, Leptomitales, Peronosporales, Pythiales, Rhipidiales and Saprolegniales, and based on phylogenetic analyses, enabled good resolution of all the different taxonomic orders. Sensitivity assays using the CAA fungicide mandipropamid (MPD) demonstrated that only species belonging to the Peronosporales were inhibited by the fungicide. Molecular data provided evidence, that the observed difference in sensitivity to CAAs between Peronosporales and CAA tolerant species is most likely caused by an inherent amino acid configuration at position 1109 in CesA3 possibly affecting fungicide binding. The present study not only succeeded in linking CAA sensitivity of various oomycetes to the inherent CesA3 target site configuration, but could also relate it to the broader phylogenetic context. Copyright © 2012 The British Mycological Society. Published by Elsevier Ltd. All rights reserved.
Truong, Trong-Kha; Guidon, Arnaud
2014-01-01
Purpose: To develop and compare three novel reconstruction methods designed to inherently correct for motion-induced phase errors in multi-shot spiral diffusion tensor imaging (DTI) without requiring a variable-density spiral trajectory or a navigator echo. Theory and Methods: The first method simply averages magnitude images reconstructed with sensitivity encoding (SENSE) from each shot, whereas the second and third methods rely on SENSE to estimate the motion-induced phase error for each shot, and subsequently use either a direct phase subtraction or an iterative conjugate gradient (CG) algorithm, respectively, to correct for the resulting artifacts. Numerical simulations and in vivo experiments on healthy volunteers were performed to assess the performance of these methods. Results: The first two methods suffer from a low signal-to-noise ratio (SNR) or from residual artifacts in the reconstructed diffusion-weighted images and fractional anisotropy maps. In contrast, the third method provides high-quality, high-resolution DTI results, revealing fine anatomical details such as a radial diffusion anisotropy in cortical gray matter. Conclusion: The proposed SENSE+CG method can inherently and effectively correct for phase errors, signal loss, and aliasing artifacts caused by both rigid and nonrigid motion in multi-shot spiral DTI, without increasing the scan time or reducing the SNR. PMID:23450457
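The direct phase subtraction idea in the second method can be illustrated with a toy simulation. This is a drastic simplification (a uniform magnitude image and a purely global phase per shot, no coil sensitivities, no k-space model), so treat it as a sketch of the principle rather than the authors' SENSE-based reconstruction.

```python
import numpy as np

# Toy illustration of "direct phase subtraction": each shot acquires the
# same magnitude image but with a motion-induced phase error; estimating
# and removing that phase before complex averaging avoids the signal
# cancellation that plain complex averaging suffers.

rng = np.random.default_rng(0)
image = np.ones((32, 32))                      # true magnitude image

shots = []
for _ in range(4):
    phase = rng.uniform(-np.pi, np.pi)         # rigid-motion global phase error
    shots.append(image * np.exp(1j * phase))

naive = np.abs(np.mean(shots, axis=0))         # complex average: cancellation

corrected = []
for s in shots:
    phase_est = np.angle(s.mean())             # estimate the (here global) phase
    corrected.append(s * np.exp(-1j * phase_est))  # subtract it out
recon = np.abs(np.mean(corrected, axis=0))     # cancellation removed
```

With a smoothly varying (rather than global) phase error, the same subtraction would be applied pixelwise using a low-resolution phase estimate.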
Mangado, Nerea; Piella, Gemma; Noailly, Jérôme; Pons-Prats, Jordi; Ballester, Miguel Ángel González
2016-01-01
Computational modeling has become a powerful tool in biomedical engineering thanks to its potential to simulate coupled systems. However, real parameters are usually not accurately known, and variability is inherent in living organisms. To cope with this, probabilistic tools, statistical analysis and stochastic approaches have been used. This article aims to review the analysis of uncertainty and variability in the context of finite element modeling in biomedical engineering. Characterization techniques and propagation methods are presented, as well as examples of their applications in biomedical finite element simulations. Uncertainty propagation methods, both non-intrusive and intrusive, are described. Finally, pros and cons of the different approaches and their use in the scientific community are presented. This leads us to identify future directions for research and methodological development of uncertainty modeling in biomedical engineering. PMID:27872840
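Of the propagation methods surveyed above, non-intrusive Monte Carlo is the simplest to sketch: the model is treated as a black box and re-evaluated for each sample of the uncertain input. The cantilever-deflection "model" below and all parameter values are hypothetical stand-ins for an actual finite element solver.

```python
import numpy as np

# Non-intrusive uncertainty propagation by plain Monte Carlo sampling.

def tip_deflection(E, P=100.0, L=1.0, I=1e-6):
    """Cantilever beam tip deflection: delta = P*L^3 / (3*E*I).
    Stand-in for a black-box FE model with uncertain Young's modulus E."""
    return P * L**3 / (3.0 * E * I)

rng = np.random.default_rng(42)
E_samples = rng.normal(loc=200e9, scale=10e9, size=10_000)  # uncertain input

deltas = tip_deflection(E_samples)           # propagate samples through the model
mean, std = deltas.mean(), deltas.std()      # output statistics
lo, hi = np.percentile(deltas, [2.5, 97.5])  # 95% interval on the prediction
```

An intrusive method (e.g., stochastic Galerkin) would instead modify the model's equations; the non-intrusive form shown here only requires repeated solves.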
The Potential for Predicting Precipitation on Seasonal-to-Interannual Timescales
NASA Technical Reports Server (NTRS)
Koster, R. D.
1999-01-01
The ability to predict precipitation several months in advance would have a significant impact on water resource management. This talk provides an overview of a project aimed at developing this prediction capability. NASA's Seasonal-to-Interannual Prediction Project (NSIPP) will generate seasonal-to-interannual sea surface temperature predictions through detailed ocean circulation modeling and will then translate these SST forecasts into forecasts of continental precipitation through the application of an atmospheric general circulation model and a "SVAT"-type land surface model. As part of the process, ocean variables (e.g., height) and land variables (e.g., soil moisture) will be updated regularly via data assimilation. The overview will include a discussion of the variability inherent in such a modeling system and will provide some quantitative estimates of the absolute upper limits of seasonal-to-interannual precipitation predictability.
Wind and Solar on the Power Grid: Myths and Misperceptions, Greening the Grid (Spanish Version)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Authors: Denholm, Paul; Cochran, Jaquelin; Brancucci Martinez-Anido, Carlo
This is the Spanish version of 'Greening the Grid - Wind and Solar on the Power Grid: Myths and Misperceptions'. Wind and solar are inherently more variable and uncertain than the traditional dispatchable thermal and hydro generators that have historically provided a majority of grid-supplied electricity. The unique characteristics of variable renewable energy (VRE) resources have resulted in many misperceptions regarding their contribution to a low-cost and reliable power grid. Common areas of concern include: 1) the potential need for increased operating reserves, 2) the impact of variability and uncertainty on operating costs and pollutant emissions of thermal plants, and 3) the technical limits of VRE penetration rates to maintain grid stability and reliability. This fact sheet corrects misperceptions in these areas.
Bodies with noncircular cross sections and bank-to-turn missiles
NASA Technical Reports Server (NTRS)
Jackson, C. M., Jr.; Sawyer, W. C.
1992-01-01
A development status evaluation is presented for the aerodynamics of missile configurations with noncircular cross-sections and bank-to-turn maneuvering systems, giving attention to cases with elliptical and square cross-sections, as well as bodies with variable cross-sections. The assessment of bank-to-turn missile performance notes inherent stability/control problems. A summary and index are provided for aerodynamic data on monoplanar configurations, including those which incorporate airbreathing propulsion systems.
1976-08-01
can easily change any of the parameters controlling the experiment. B.2.3.3 The PLATO Laboratory. A block diagram of the laboratory is...the parameters of an adaptive filter, or to perform the computations required by the more complex displays. In addition to its role as the prime...by the inherent response variability which precludes reliable estimates of attention-sensitive parameters from a single observation. Thus
Chekmarev, Sergei F
2013-10-14
Using the Helmholtz decomposition of the vector field of folding fluxes in a two-dimensional space of collective variables, a potential of the driving force for protein folding is introduced. The potential has two components. One component is responsible for the source and sink of the folding flows, which represent, respectively, the unfolded states and the native state of the protein; the other, which accounts for the flow vorticity inherently generated at the periphery of the flow field, is responsible for the canalization of the flow between the source and sink. The theoretical consideration is illustrated by calculations for a model β-hairpin protein.
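In two dimensions the decomposition referenced above takes a compact explicit form. The sign conventions below are the standard ones and are an assumption here, since the abstract does not state them:

```latex
% 2D Helmholtz decomposition of the folding flux field j(x_1, x_2):
% an irrotational part (potential phi of the driving force) plus a
% solenoidal part (stream function psi carrying the vorticity).
\mathbf{j}(x_1, x_2) = -\nabla \phi + \nabla \times (\psi \,\hat{\mathbf{z}}),
\qquad
\nabla^2 \phi = -\nabla \cdot \mathbf{j},
\qquad
\nabla^2 \psi = -\left(\nabla \times \mathbf{j}\right)_{z} .
```

The source and sink of the text then correspond to the divergence captured by $\phi$ (unfolded states and native state), while $\psi$ carries the peripheral vorticity that canalizes the flow between them.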
Model-based analyses: Promises, pitfalls, and example applications to the study of cognitive control
Mars, Rogier B.; Shea, Nicholas J.; Kolling, Nils; Rushworth, Matthew F. S.
2011-01-01
We discuss a recent approach to investigating cognitive control, which has the potential to deal with some of the challenges inherent in this endeavour. In a model-based approach, the researcher defines a formal, computational model that performs the task at hand and whose performance matches that of a research participant. The internal variables in such a model might then be taken as proxies for latent variables computed in the brain. We discuss the potential advantages of such an approach for the study of the neural underpinnings of cognitive control and its pitfalls, and we make explicit the assumptions underlying the interpretation of data obtained using this approach. PMID:20437297
Rapid Design of Gravity Assist Trajectories
NASA Technical Reports Server (NTRS)
Carrico, J.; Hooper, H. L.; Roszman, L.; Gramling, C.
1991-01-01
Several International Solar Terrestrial Physics (ISTP) missions require the design of complex gravity-assisted trajectories in order to investigate the interaction of the solar wind with the Earth's magnetic field. These trajectories present a formidable trajectory design and optimization problem. The philosophy and methodology that enable an analyst to design and analyse such trajectories are discussed. So-called 'floating end point' targeting, which allows the inherently nonlinear multiple-body problem to be solved with simple linear techniques, is described. The combination of floating end point targeting and analytic approximations with a Newton-method targeter to achieve trajectory design goals quickly, even for the very sensitive double lunar swingby trajectories used by the ISTP missions, is demonstrated. A multiconic orbit integration scheme allows fast and accurate orbit propagation. A prototype software tool, Swingby, built for trajectory design and launch window analysis, is described.
Experimental and computational investigation of lateral gauge response in polycarbonate
NASA Astrophysics Data System (ADS)
Eliot, Jim; Harris, Ernst; Hazell, Paul; Appleby-Thomas, Gareth; Winter, Ronald; Wood, David; Owen, Gareth
2011-06-01
Polycarbonate's use in personal armour systems means its high strain-rate response has been extensively studied. Interestingly, embedded lateral manganin stress gauges in polycarbonate have shown gradients behind incident shocks, suggestive of increasing shear strength. However, such gauges need to be embedded in a central (typically epoxy) interlayer - an inherently invasive approach. Recently, research has suggested that in such metal systems interlayer/target impedance may contribute to observed gradients in lateral stress. Here, experimental T-gauge (Vishay Micro-Measurements® type J2M-SS-580SF-025) traces from polycarbonate targets are compared to computational simulations. This work extends previous efforts such that similar impedance exists between the interlayer and matrix (target) interface. Further, experiments and simulations are presented investigating the effects of a "dry joint" in polycarbonate, in which no encapsulating medium is employed.
Current Advances and Future Challenges in Adenoviral Vector Biology and Targeting
Campos, Samuel K.; Barry, Michael A.
2008-01-01
Gene delivery vectors based on Adenoviral (Ad) vectors have enormous potential for the treatment of both hereditary and acquired disease. Detailed structural analysis of the Ad virion, combined with functional studies has broadened our knowledge of the structure/function relationships between Ad vectors and host cells/tissues and substantial achievement has been made towards a thorough understanding of the biology of Ad vectors. The widespread use of Ad vectors for clinical gene therapy is compromised by their inherent immunogenicity. The generation of safer and more effective Ad vectors, targeted to the site of disease, has therefore become a great ambition in the field of Ad vector development. This review provides a synopsis of the structure/function relationships between Ad vectors and host systems and summarizes the many innovative approaches towards achieving Ad vector targeting. PMID:17584037
The Case for Targeting Leadership in War
1992-02-13
to the four-fold increase in terrorist incidents worldwide during Desert Shield, 35% of which were in the U.S.72 Others warn of the risk that our...inherently disadvantaged -- both in the means by which they can assassinate the President, and in the potential impact a successful assassination would...mechanisms for ensuring survival of the Communist Party and its leadership cadres," the U.S. has placed at risk 'those political entities the Soviet
Effects of Base Cavity Depth on a Free Spinning Wrap-Around Fin Missile Configuration
1995-12-01
packaging problem. Current missile systems which possess wrap-around fin designs are the Army’s Multiple Launch Rocket System (MLRS) and the Hard Target...aerodynamic irregularities (2). Of particular importance to projectile designers is the side force/moment inherent to wrap-around fin configurations. During...virtual instrument programs integrated to perform all necessary aspects of calibration, data collection, and reduction. The details surrounding the design
Space Transportation System Availability Relationships to Life Cycle Cost
NASA Technical Reports Server (NTRS)
Rhodes, Russel E.; Donahue, Benjamin B.; Chen, Timothy T.
2009-01-01
Future space transportation architectures and designs must be affordable. Consequently, their Life Cycle Cost (LCC) must be controlled. For the LCC to be controlled, it is necessary to identify all the requirements and elements of the architecture at the beginning of the concept phase. Controlling LCC requires the establishment of the major operational cost drivers. Two of these major cost drivers are reliability and maintainability, in other words, the system's availability (responsiveness). Potential reasons that may drive the inherent availability requirement are the need to control the number of unique parts and the spare parts required to support the transportation system's operation. For more typical space transportation systems used to place satellites in space, the productivity of the system will drive the launch cost. This system productivity is the resultant output of the system availability. Availability is equal to the mean uptime divided by the sum of the mean uptime plus the mean downtime. Since many operational factors cannot be projected early in the definition phase, the focus will be on inherent availability which is equal to the mean time between a failure (MTBF) divided by the MTBF plus the mean time to repair (MTTR) the system. The MTBF is a function of reliability or the expected frequency of failures. When the system experiences failures the result is added operational flow time, parts consumption, and increased labor with an impact to responsiveness resulting in increased LCC. The other function of availability is the MTTR, or maintainability. In other words, how accessible is the failed hardware that requires replacement and what operational functions are required before and after change-out to make the system operable. This paper will describe how the MTTR can be equated to additional labor, additional operational flow time, and additional structural access capability, all of which drive up the LCC. 
A methodology will be presented that provides the decision makers with the understanding necessary to place constraints on the design definition. This methodology for the major drivers will determine the inherent availability, safety, reliability, maintainability, and the life cycle cost of the fielded system. This methodology will focus on the achievement of an affordable, responsive space transportation system. It is the intent of this paper to not only provide the visibility of the relationships of these major attribute drivers (variables) to each other and the resultant system inherent availability, but also to provide the capability to bound the variables, thus providing the insight required to control the system's engineering solution. An example of this visibility is the need to provide integration of similar discipline functions to allow control of the total parts count of the space transportation system. Also, selecting a reliability requirement will place a constraint on parts count to achieve a given inherent availability requirement, or require accepting a larger parts count with the resulting higher individual part reliability requirements. This paper will provide an understanding of the relationship of mean repair time (mean downtime) to maintainability (accessibility for repair), and both mean time between failure (reliability of hardware) and the system inherent availability.
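The availability relationships stated in the two paragraphs above reduce to simple arithmetic, sketched below. The MTBF/MTTR numbers are hypothetical; only the relation A = MTBF / (MTBF + MTTR) comes from the text.

```python
# Inherent availability and the reliability/maintainability trade it implies.

def inherent_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """A_i = MTBF / (MTBF + MTTR), as defined in the paper."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical system: a failure every 500 hours on average,
# with a 20-hour mean repair (downtime) per failure.
a_i = inherent_availability(500.0, 20.0)   # ~0.9615

# The same relation can be inverted: to hit an availability target with a
# given repair time, the required reliability (MTBF) is
#   MTBF = A_i * MTTR / (1 - A_i)
def required_mtbf(target_availability: float, mttr_hours: float) -> float:
    return target_availability * mttr_hours / (1.0 - target_availability)

mtbf_needed = required_mtbf(0.99, 20.0)    # 1980 hours
```

The inversion makes the paper's point concrete: holding MTTR fixed, a tighter availability requirement directly constrains the reliability (and hence parts count and part-level reliability) the design must deliver.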
Mechanistic systems modeling to guide drug discovery and development
Schmidt, Brian J.; Papin, Jason A.; Musante, Cynthia J.
2013-01-01
A crucial question that must be addressed in the drug development process is whether the proposed therapeutic target will yield the desired effect in the clinical population. Pharmaceutical and biotechnology companies place a large investment on research and development, long before confirmatory data are available from human trials. Basic science has greatly expanded the computable knowledge of disease processes, both through the generation of large omics data sets and a compendium of studies assessing cellular and systemic responses to physiologic and pathophysiologic stimuli. Given inherent uncertainties in drug development, mechanistic systems models can better inform target selection and the decision process for advancing compounds through preclinical and clinical research. PMID:22999913
Ogden, Jane
2016-09-01
Within any discipline there is always a degree of variability. For medicine it takes the form of health professionals' behaviour; for education it is the style and content of the classroom; and for health psychology, it can be found in patients' behaviour, the theories used and clinical practice. Over recent years, attempts have been made to reduce this variability through the use of the Behaviour Change Technique Taxonomy, the COM-B and the Behaviour Change Wheel. This paper argues that although the call for better descriptions of what is done is useful for clarity and replication, this systematisation may be neither feasible nor desirable. In particular, it is suggested that the gaps inherent in the translational process from coding a protocol to behaviour will limit the effectiveness of reducing patient variability, that theory variability is necessary for the health and well-being of a discipline, and that practice variability is central to the professional status of our practitioners. It is therefore argued that we should celebrate rather than remove this variability in order for our discipline to thrive and for us to remain as professionals rather than as technicians.
The Speed of Serial Attention Shifts in Visual Search: Evidence from the N2pc Component.
Grubert, Anna; Eimer, Martin
2016-02-01
Finding target objects among distractors in a visual search display is often assumed to be based on sequential movements of attention between different objects. However, the speed of such serial attention shifts is still under dispute. We employed a search task that encouraged the successive allocation of attention to two target objects in the same search display and measured N2pc components to determine how fast attention moved between these objects. Each display contained one digit in a known color (fixed-color target) and another digit whose color changed unpredictably across trials (variable-color target), together with two gray distractor digits. Participants' task was to find the fixed-color digit and compare its numerical value with that of the variable-color digit. N2pc components to fixed-color targets preceded N2pc components to variable-color digits, demonstrating that these two targets were indeed selected in a fixed serial order. The N2pc to variable-color digits emerged approximately 60 msec after the N2pc to fixed-color digits, which shows that attention can be reallocated very rapidly between different target objects in the visual field. When search display durations were increased, thereby relaxing the temporal demands on serial selection, the two N2pc components to fixed-color and variable-color targets were elicited within 90 msec of each other. Results demonstrate that sequential shifts of attention between different target locations can operate very rapidly, at speeds that are in line with the assumptions of serial selection models of visual search.
Health and Household Air Pollution from Solid Fuel Use: The Need for Improved Exposure Assessment
Peel, Jennifer L.; Balakrishnan, Kalpana; Breysse, Patrick N.; Chillrud, Steven N.; Naeher, Luke P.; Rodes, Charles E.; Vette, Alan F.; Balbus, John M.
2013-01-01
Background: Nearly 3 billion people worldwide rely on solid fuel combustion to meet basic household energy needs. The resulting exposure to air pollution causes an estimated 4.5% of the global burden of disease. Large variability and a lack of resources for research and development have resulted in highly uncertain exposure estimates. Objective: We sought to identify research priorities for exposure assessment that will more accurately and precisely define exposure–response relationships of household air pollution necessary to inform future cleaner-burning cookstove dissemination programs. Data Sources: As part of an international workshop in May 2011, an expert group characterized the state of the science and developed recommendations for exposure assessment of household air pollution. Synthesis: The following priority research areas were identified to explain variability and reduce uncertainty of household air pollution exposure measurements: improved characterization of spatial and temporal variability for studies examining both short- and long-term health effects; development and validation of measurement technology and approaches to conduct complex exposure assessments in resource-limited settings with a large range of pollutant concentrations; and development and validation of biomarkers for estimating dose. Addressing these priority research areas, which will inherently require an increased allocation of resources for cookstove research, will lead to better characterization of exposure–response relationships. Conclusions: Although the type and extent of exposure assessment will necessarily depend on the goal and design of the cookstove study, without improved understanding of exposure–response relationships, the level of air pollution reduction necessary to meet the health targets of cookstove interventions will remain uncertain. Citation: Clark ML, Peel JL, Balakrishnan K, Breysse PN, Chillrud SN, Naeher LP, Rodes CE, Vette AF, Balbus JM. 2013. 
Health and household air pollution from solid fuel use: the need for improved exposure assessment. Environ Health Perspect 121:1120–1128; http://dx.doi.org/10.1289/ehp.1206429 PMID:23872398
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atalar, Banu; Modlin, Leslie A.; Choi, Clara Y.H.
Purpose: We sought to determine the risk of leptomeningeal disease (LMD) in patients treated with stereotactic radiosurgery (SRS) targeting the postsurgical resection cavity of a brain metastasis, deferring whole-brain radiation therapy (WBRT) in all patients. Methods and Materials: We retrospectively reviewed 175 brain metastasis resection cavities in 165 patients treated from 1998 to 2011 with postoperative SRS. The cumulative incidence rates, with death as a competing risk, of LMD, local failure (LF), and distant brain parenchymal failure (DF) were estimated. Variables associated with LMD were evaluated, including LF, DF, posterior fossa location, resection type (en-bloc vs piecemeal or unknown), and histology (lung, colon, breast, melanoma, gynecologic, other). Results: With a median follow-up of 12 months (range, 1-157 months), median overall survival was 17 months. Twenty-one of 165 patients (13%) developed LMD at a median of 5 months (range, 2-33 months) following SRS. The 1-year cumulative incidence rates, with death as a competing risk, were 10% (95% confidence interval [CI], 6%-15%) for developing LF, 54% (95% CI, 46%-61%) for DF, and 11% (95% CI, 7%-17%) for LMD. On univariate analysis, only breast cancer histology (hazard ratio, 2.96) was associated with an increased risk of LMD. The 1-year cumulative incidence of LMD was 24% (95% CI, 9%-41%) for breast cancer compared to 9% (95% CI, 5%-14%) for non-breast histology (P=.004). Conclusions: In patients treated with SRS targeting the postoperative cavity following resection, those with breast cancer histology were at higher risk of LMD. It is unknown whether the inclusion of whole-brain irradiation or novel strategies such as preresection SRS would improve this risk or if the rate of LMD is inherently higher with breast histology.
Klimstra, J.D.; O'Connell, A.F.; Pistrang, M.J.; Lewis, L.M.; Herrig, J.A.; Sauer, J.R.
2007-01-01
Science-based monitoring of biological resources is important for a greater understanding of ecological systems and for assessment of the target population using theoretic-based management approaches. When selecting variables to monitor, managers first need to carefully consider their objectives, the geographic and temporal scale at which they will operate, and the effort needed to implement the program. Generally, monitoring can be divided into two categories: index and inferential. Although index monitoring is usually easier to implement, analysis of index data requires strong assumptions about consistency in detection rates over time and space, and parameters are often biased when detectability and spatial variation are not accounted for. In most cases, individuals are not always available for detection during sampling periods, and the entire area of interest cannot be sampled. Conversely, inferential monitoring is more rigorous because it is based on nearly unbiased estimators of spatial distribution. Thus, we recommend that detectability and spatial variation be considered for all monitoring programs that intend to make inferences about the target population or the area of interest. Application of these techniques is especially important for the monitoring of Threatened and Endangered (T&E) species because it is critical to determine if population size is increasing or decreasing with some level of certainty. Use of estimation-based methods and probability sampling will reduce many of the biases inherently associated with index data and provide meaningful information with respect to changes that occur in target populations. We incorporated inferential monitoring into protocols for T&E species spanning a wide range of taxa on the Cherokee National Forest in the Southern Appalachian Mountains.
We review the various approaches employed for different taxa and discuss design issues, sampling strategies, data analysis, and the details of estimating detectability using site occupancy. These techniques provide a science-based approach for monitoring and can be of value to all resource managers responsible for management of T&E species.
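The detectability argument above can be made concrete with the standard constant-detection-probability correction. The numbers and function names below are hypothetical, and this simple back-of-envelope form is not the study's own site-occupancy analysis, which estimates detection probability from the survey data itself.

```python
# Why raw index counts understate occupancy when detection is imperfect.

def prob_detected_at_least_once(p_per_visit: float, k_visits: int) -> float:
    """Pr(detected at least once in K visits) = 1 - (1 - p)^K,
    assuming a constant per-visit detection probability p."""
    return 1.0 - (1.0 - p_per_visit) ** k_visits

def corrected_occupancy(naive_occupancy: float, p_per_visit: float,
                        k_visits: int) -> float:
    """Divide the naive (index) occupancy by the detection probability."""
    return naive_occupancy / prob_detected_at_least_once(p_per_visit, k_visits)

# Hypothetical survey: species recorded at 30% of sites, 3 visits per site,
# 40% chance of detecting it on any one visit when present.
p_star = prob_detected_at_least_once(0.4, 3)   # 1 - 0.6^3 = 0.784
psi_hat = corrected_occupancy(0.30, 0.4, 3)    # ~0.383: the index undercounts
```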
Methods for measuring, enhancing, and accounting for medication adherence in clinical trials.
Vrijens, B; Urquhart, J
2014-06-01
Adherence to rationally prescribed medications is essential for effective pharmacotherapy. However, widely variable adherence to protocol-specified dosing regimens is prevalent among participants in ambulatory drug trials, mostly manifested in the form of underdosing. Drug actions are inherently dose and time dependent, and as a result, variable underdosing diminishes the actions of trial medications by various degrees. The ensuing combination of increased variability and decreased magnitude of trial drug actions reduces statistical power to discern between-group differences in drug actions. Variable underdosing has many adverse consequences, some of which can be mitigated by the combination of reliable measurements of ambulatory patients' adherence to trial and nontrial medications, measurement-guided management of adherence, statistically and pharmacometrically sound analyses, and modifications in trial design. Although nonadherence is prevalent across all therapeutic areas in which the patients are responsible for treatment administration, the significance of the adverse consequences depends on the characteristics of both the disease and the medications.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo
2018-03-01
In view of the Fourier-Stieltjes integral formula of multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenges of high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
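A minimal sketch of the classical spectral representation method that these dimension-reduction schemes build on is shown below, in its single-variable cosine-series (Shinozuka-type) form. The target spectrum, frequency cutoff, and all parameters are hypothetical stand-ins, not the bridge wind field of the study.

```python
import numpy as np

def srm_sample(S, w_max, n_freq, t, rng):
    """One realization X(t) = sum_k sqrt(2*S(w_k)*dw) * cos(w_k*t + phi_k),
    with independent phases phi_k ~ U(0, 2*pi). For a one-sided target
    spectrum S, the process variance equals the area under S."""
    dw = w_max / n_freq
    w = (np.arange(n_freq) + 0.5) * dw            # midpoint frequencies
    phi = rng.uniform(0.0, 2.0 * np.pi, n_freq)   # the elementary random variables
    amp = np.sqrt(2.0 * S(w) * dw)
    return (amp[:, None] * np.cos(np.outer(w, t) + phi[:, None])).sum(axis=0)

S = lambda w: 1.0 / (1.0 + w**2)   # hypothetical one-sided target spectrum
t = np.linspace(0.0, 200.0, 4096)
x = srm_sample(S, w_max=8.0, n_freq=256, t=t, rng=np.random.default_rng(1))

# Variance check: area under S on [0, 8] is arctan(8); the time-average
# variance of one long realization should land in that neighborhood.
target_var = float(np.arctan(8.0))
```

Note that this plain SRM needs one random phase per retained frequency (256 here); the abstract's dimension-reduction schemes aim to replace that high-dimensional set with a handful of elementary random variables via random-function constraints.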
NASA Astrophysics Data System (ADS)
Lavely, Adam; Vijayakumar, Ganesh; Brasseur, James; Paterson, Eric; Kinzel, Michael
2011-11-01
Using large-eddy simulation (LES) of the neutral and moderately convective atmospheric boundary layers (NBL, MCBL), we analyze the impact of coherent turbulence structure of the atmospheric surface layer on the short-time statistics that are commonly collected from wind turbines. The incoming winds are conditionally sampled with a filtering and thresholding algorithm into high/low horizontal and vertical velocity fluctuation coherent events. The time scales of these events are ~5 - 20 blade rotations and are roughly twice as long in the MCBL as the NBL. Horizontal velocity events are associated with greater variability in rotor power, lift and blade-bending moment than vertical velocity events. The variability in the industry standard 10 minute average for rotor power, sectional lift and wind velocity had a standard deviation of ~5% relative to the "infinite time" statistics for the NBL and ~10% for the MCBL. We conclude that turbulence structure associated with atmospheric stability state contributes considerable, quantifiable, variability to wind turbine statistics. Supported by NSF and DOE.
NASA Astrophysics Data System (ADS)
Elag, M.; Kumar, P.
2014-12-01
Often, scientists and small research groups collect data that address targeted issues and have limited geographic or temporal range. A large number of such collections together constitute a large database that is of immense value to Earth Science studies. Complexities of integrating these data include heterogeneity in dimensions, coordinate systems, scales, variables, providers, users and contexts. They have been defined as long-tail data. Similarly, we use "long-tail models" to characterize a heterogeneous collection of models and/or modules developed for targeted problems by individuals and small groups, which together provide a large valuable collection. Complexities of integrating across these models include differing variable names and units for the same concept, model runs at different time steps and spatial resolution, use of differing naming and reference conventions, etc. The ability to "integrate long-tail models and data" will provide an opportunity for the interoperability and reusability of communities' resources, where not only can models be combined in a workflow, but each model will be able to discover and (re)use data in an application-specific context of space, time and questions. This capability is essential to represent, understand, predict, and manage heterogeneous and interconnected processes and activities by harnessing the complex, heterogeneous, and extensive set of distributed resources. Because of the staggering production rate of long-tail models and data resulting from the advances in computational, sensing, and information technologies, an important challenge arises: how can geoinformatics bring together these resources seamlessly, given the inherent complexity among model and data resources that span various domains? We will present a semantic-based framework to support integration of "long-tail" models and data.
This builds on existing technologies including: (i) SEAD (Sustainable Environmental Actionable Data) which supports curation and preservation of long-tail data during its life-cycle; (ii) BrownDog, which enhances the machine interpretability of large unstructured and uncurated data; and (iii) CSDMS (Community Surface Dynamics Modeling System), which "componentizes" models by providing plug-and-play environment for models integration.
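The core harmonization problem above (differing variable names and units for the same concept) can be sketched as a lookup against a shared concept vocabulary. The concept names, aliases, and conversion factors below are invented for illustration and are not part of the SEAD, BrownDog, or CSDMS interfaces.

```python
# Toy semantic harmonization: map heterogeneous (name, value, unit) triples
# onto a canonical concept and unit so one model's output can feed another's
# input. All vocabulary entries here are illustrative assumptions.

CONCEPTS = {
    "air_temperature": {
        "aliases": {"temp", "t_air", "airtemp", "air_temperature"},
        "canonical_unit": "K",
        # conversion callables: value in the given unit -> Kelvin
        "to_canonical": {"K": lambda v: v, "C": lambda v: v + 273.15},
    },
}

def harmonize(name, value, unit):
    """Resolve a triple to (concept, value_in_canonical_unit)."""
    for concept, spec in CONCEPTS.items():
        if name.lower() in spec["aliases"]:
            conv = spec["to_canonical"].get(unit)
            if conv is None:
                raise ValueError(f"no conversion from {unit} for {concept}")
            return concept, conv(value)
    raise KeyError(f"unknown variable name: {name}")
```

For example, a model reporting `t_air` in Celsius and another reporting `TEMP` in Kelvin both resolve to the same concept in Kelvin.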
Jowsey, Ian R; Clapp, Catherine J; Safford, Bob; Gibbons, Ben T; Basketter, David A
2008-01-01
The identification and characterization of chemicals that possess skin-sensitizing potential are typically performed using predictive tests. However, human exposure to skin-sensitizing chemicals often occurs via a matrix (vehicle) that differs from that used in these tests. It is thus important to account for the potential impact of vehicle differences when undertaking quantitative risk assessment for skin sensitization. This is achieved through the application of a specific sensitization assessment factor (SAF), scaled between 1 and 10, when identifying an acceptable exposure level. The objective of the analysis described herein is to determine the impact of vehicle differences on local lymph node assay (LLNA) EC3 values (concentrations of test chemical required to provoke a 3-fold increase in lymph node cell proliferation). Initially, the inherent variability of the LLNA was investigated by examining the reproducibility of EC3 values for 14 chemicals that have been tested more than once in the same vehicle (4:1 acetone:olive oil, AOO). This analysis reveals that the variability in EC3 value for these chemicals following multiple assessments is <5-fold. Next, data from the literature and previously unpublished studies were compiled for 18 chemicals that had been assessed in the LLNA using at least 2 of 15 different vehicles. These data demonstrate that often the variability in EC3 values observed for a given chemical in different vehicles is no greater than the 5-fold inherent variability observed when assessing a chemical in the same vehicle on multiple occasions. However, there are examples where EC3 values for a chemical differ by a factor of more than 10 between different vehicles. These observations were often associated with an apparent underestimation of potency (higher EC3 values) with predominantly aqueous vehicles or propylene glycol. These data underscore the need to consider vehicle effects in the context of skin-sensitization risk assessments.
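The fold-variability comparison described above reduces to a max/min ratio per chemical, flagged against the ~5-fold inherent-variability envelope. A minimal sketch, with made-up EC3 values rather than the study's data:

```python
# Sketch of the EC3 fold-variability screen: for each chemical, the ratio of
# largest to smallest EC3 across repeat tests or vehicles, flagged when it
# exceeds the inherent assay variability (~5-fold in the text above).

def fold_range(ec3_values):
    """Max/min ratio of EC3 values (concentrations, in %) for one chemical."""
    return max(ec3_values) / min(ec3_values)

def flag_vehicle_effect(ec3_by_chemical, inherent_fold=5.0):
    """Chemicals whose EC3 spread exceeds the inherent assay variability."""
    return {chem: fold_range(vals)
            for chem, vals in ec3_by_chemical.items()
            if fold_range(vals) > inherent_fold}
```

A chemical spanning 0.5-6.0% EC3 (12-fold) would be flagged as a likely vehicle effect; one spanning 1.0-4.0% (4-fold) sits within inherent variability.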
Novel Magnetic Resonance Detection and Profiling of Ovarian Cancer Across Specimens
2012-10-01
Cancer Cells in Fine-Needle Aspirates. Proc. Natl. Acad. Sci. U.S.A. 2009, 106, 12459-12464. 25. Han, H. S.; Devaraj, N. K.; Lee, J.; Hilderbrand, S. A. ... fine needle aspirates, biopsies, ascites, blood, sputum), which are inherently complex in composition, as well as heterogeneous and variable in cell ... in ascitic fluid, we anticipate that this method could similarly be applied to fine needle aspirates, blood, biopsy specimens, sputum, and other
Greaves, Mel; Maley, Carlo C.
2012-01-01
Cancers evolve by a reiterative process of clonal expansion, genetic diversification and clonal selection within the adaptive landscapes of tissue ecosystems. The dynamics are complex with highly variable patterns of genetic diversity and resultant clonal architecture. Therapeutic intervention may decimate cancer clones, and erode their habitats, but inadvertently provides potent selective pressure for the expansion of resistant variants. The inherently Darwinian character of cancer lies at the heart of therapeutic failure but perhaps also holds the key to more effective control. PMID:22258609
Aircraft Pitch Control With Fixed Order LQ Compensators
NASA Technical Reports Server (NTRS)
Green, James; Ashokkumar, C. R.; Homaifar, Abdollah
1997-01-01
This paper considers a given set of fixed-order compensators for the aircraft pitch control problem. By augmenting compensator variables to the original state equations of the aircraft, a new dynamic model is considered to seek an LQ controller. While the fixed-order compensators can achieve a set of desired poles in a specified region, the LQ formulation provides inherent robustness properties. The time response for ride quality is significantly improved with a set of dynamic compensators.
Beyond scene gist: Objects guide search more than scene background.
Koehler, Kathryn; Eckstein, Miguel P
2017-06-01
Although the facilitation of visual search by contextual information is well established, there is little understanding of the independent contributions of different types of contextual cues in scenes. Here we manipulated 3 types of contextual information: object co-occurrence, multiple object configurations, and background category. We isolated the benefits of each contextual cue to target detectability, its impact on decision bias, confidence, and the guidance of eye movements. We find that object-based information guides eye movements and facilitates perceptual judgments more than scene background. The degree of guidance and facilitation of each contextual cue can be related to its inherent informativeness about the target spatial location as measured by human explicit judgments about likely target locations. Our results improve the understanding of the contributions of distinct contextual scene components to search and suggest that the brain's utilization of cues to guide eye movements is linked to the cue's informativeness about the target's location. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Variations in Scientific Data Production: What Can We Learn from #Overlyhonestmethods?
Bezuidenhout, Louise
2015-12-01
In recent months, the hashtag #overlyhonestmethods has steadily been gaining popularity. Posts under this hashtag--presumably by scientists--detail aspects of daily scientific research that differ considerably from the idealized interpretation of scientific experimentation as standardized, objective and reproducible. Over and above its entertainment value, the popularity of this hashtag raises two important points for those who study both science and scientists. First, the posts highlight that the generation of data through experimentation is often far less standardized than is commonly assumed. Second, the popularity of the hashtag, together with its relatively blasé reception by the scientific community, reveals that the actions reported in the tweets are far from shocking and indeed may be considered just "part of scientific research". Such observations give considerable pause for thought, and suggest that current conceptions of data might be limited by failing to recognize this "inherent variability" within the actions of generation--and thus within data themselves. Is it possible, we must ask, that epistemic virtues such as standardization, consistency, reportability and reproducibility need to be reevaluated? Such considerations are, of course, of particular importance to data-sharing discussions and the Open Data movement. This paper suggests that the notion of a "moral professionalism" for data generation and sharing needs to be considered in more detail if the inherent variability of data is to be addressed in any meaningful manner.
A Decadal Climate Cycle in the North Atlantic Ocean as Simulated by the ECHO Coupled GCM.
NASA Astrophysics Data System (ADS)
Grötzner, A.; Latif, M.; Barnett, T. P.
1998-05-01
In this paper a decadal climate cycle in the North Atlantic, derived from an extended-range integration with a coupled ocean-atmosphere general circulation model, is described. The decadal mode shares many features with the observed decadal variability in the North Atlantic. The period of the simulated oscillation, however, is somewhat longer than that estimated from observations: while the observations indicate a period of about 12 yr, the coupled model simulation yields a period of about 17 yr. The cyclic nature of the decadal variability implies some inherent predictability at these timescales. The decadal mode is based on unstable air-sea interactions and must therefore be regarded as an inherently coupled mode. It involves the subtropical gyre and the North Atlantic oscillation. The memory of the coupled system, however, resides in the ocean and is related to horizontal advection and to the oceanic adjustment to low-frequency wind stress curl variations. In particular, it is found that variations in the intensity of the Gulf Stream and its extension are crucial to the oscillation. Although differing in details, the North Atlantic decadal mode and the North Pacific mode described by M. Latif and T. P. Barnett are based on the same fundamental mechanism: a feedback loop between the wind-driven subtropical gyre and the extratropical atmospheric circulation.
Estimations of natural variability between satellite measurements of trace species concentrations
NASA Astrophysics Data System (ADS)
Sheese, P.; Walker, K. A.; Boone, C. D.; Degenstein, D. A.; Kolonjari, F.; Plummer, D. A.; von Clarmann, T.
2017-12-01
In order to validate satellite measurements of atmospheric states, it is necessary to understand the range of random and systematic errors inherent in the measurements. On occasions where the measurements do not agree within those errors, a common "go-to" explanation is that the unexplained difference can be chalked up to "natural variability". However, the expected natural variability is often left ambiguous and rarely quantified. This study will look to quantify the expected natural variability of both O3 and NO2 between two satellite instruments: ACE-FTS (Atmospheric Chemistry Experiment - Fourier Transform Spectrometer) and OSIRIS (Optical Spectrograph and Infrared Imaging System). By sampling the CMAM30 (30-year specified dynamics simulation of the Canadian Middle Atmosphere Model) climate chemistry model throughout the upper troposphere and stratosphere at times and geolocations of coincident ACE-FTS and OSIRIS measurements at varying coincidence criteria, height-dependent expected values of O3 and NO2 variability will be estimated and reported on. The results could also be used to better optimize the coincidence criteria used in satellite measurement validation studies.
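The coincident-sampling step described above can be sketched as pairing observations within a time window and a great-circle distance cutoff. The criteria below (3 h, 500 km) are typical illustrative values, not those used in the study, and the data layout is an assumption.

```python
# Toy coincidence matching between two satellite measurement sets:
# pair observations within a time window and a distance cutoff.
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Spherical law of cosines, clamped for numerical safety."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    c = math.sin(p1) * math.sin(p2) + math.cos(p1) * math.cos(p2) * math.cos(dlon)
    return radius_km * math.acos(max(-1.0, min(1.0, c)))

def coincidences(obs_a, obs_b, max_hours=3.0, max_km=500.0):
    """obs_* are lists of (time_hours, lat, lon); return matched index pairs."""
    pairs = []
    for i, (ta, la, lo_a) in enumerate(obs_a):
        for j, (tb, lb, lo_b) in enumerate(obs_b):
            if abs(ta - tb) <= max_hours and \
               great_circle_km(la, lo_a, lb, lo_b) <= max_km:
                pairs.append((i, j))
    return pairs
```

Tightening `max_hours` and `max_km` trades the number of coincident pairs against the natural variability folded into each pair, which is the trade-off the study quantifies.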
Prediction of Indian Summer-Monsoon Onset Variability: A Season in Advance.
Pradhan, Maheswar; Rao, A Suryachandra; Srivastava, Ankur; Dakate, Ashish; Salunke, Kiran; Shameera, K S
2017-10-27
Monsoon onset is an inherent transient phenomenon of the Indian Summer Monsoon, and it was never envisaged that this transience could be predicted at long lead times. Though onset is precipitous, its variability exhibits strong teleconnections with large-scale forcing such as ENSO and the IOD and hence may be predictable. Despite the tremendous skill achieved by state-of-the-art models in predicting such large-scale processes, model prediction of monsoon onset variability is still limited to just 2-3 weeks in advance. Using an objective definition of onset in a global coupled ocean-atmosphere model, it is shown that skillful prediction of onset variability is feasible within a seasonal prediction framework. Better representation of not only the large-scale processes but also the synoptic and intraseasonal features during the evolution of monsoon onset underlies the skillful simulation of onset variability. The changes observed in convection, tropospheric circulation and moisture availability prior to and after the onset are evidenced in the model simulations, which resulted in a high hit rate for early/delayed monsoon onset in the high-resolution model.
Sonntag, Darrell B; Gao, H Oliver; Holmén, Britt A
2008-08-01
A linear mixed model was developed to quantify the variability of particle number emissions from transit buses tested in real-world driving conditions. Two conventional diesel buses and two hybrid diesel-electric buses were tested throughout 2004 under different aftertreatments, fuels, drivers, and bus routes. The mixed model controlled for the confounding influence of factors inherent to on-board testing. Statistical tests showed that particle number emissions varied significantly according to the aftertreatment, bus route, driver, bus type, and daily temperature, with only minor variability attributable to differences between fuel types. The daily setup and operation of the sampling equipment (electrical low-pressure impactor) and mini-dilution system contributed 30-84% of the total random variability of particle measurements among tests with diesel oxidation catalysts. By controlling for the sampling-day variability, the model better defined the differences in particle emissions among bus routes. In contrast, the low particle number emissions measured with diesel particle filters (decreased by over 99%) did not vary according to operating conditions or bus type but did vary substantially with ambient temperature.
Fujii, Shinya; Lulic, Tea; Chen, Joyce L.
2016-01-01
Motor learning is a process whereby the acquisition of new skills occurs with practice, and can be influenced by the provision of feedback. An important question is what frequency of feedback facilitates motor learning. The guidance hypothesis assumes that the provision of less augmented feedback is better than more because a learner can use his/her own inherent feedback. However, it is unclear whether this hypothesis holds true for all types of augmented feedback, including for example sonified information about performance. Thus, we aimed to test what frequency of augmented sonified feedback facilitates the motor learning of a novel joint coordination pattern. Twenty healthy volunteers first reached to a target with their arm (baseline phase). We manipulated this baseline kinematic data for each individual to create a novel target joint coordination pattern. Participants then practiced to learn the novel target joint coordination pattern, receiving either feedback on every trial i.e., 100% feedback (n = 10), or every other trial, i.e., 50% feedback (n = 10; acquisition phase). We created a sonification system to provide the feedback. This feedback was a pure tone that varied in intensity in proportion to the error of the performed joint coordination relative to the target pattern. Thus, the auditory feedback contained information about performance in real-time (i.e., “concurrent, knowledge of performance feedback”). Participants performed the novel joint coordination pattern with no-feedback immediately after the acquisition phase (immediate retention phase), and on the next day (delayed retention phase). The root-mean squared error (RMSE) and variable error (VE) of joint coordination were significantly reduced during the acquisition phase in both 100 and 50% feedback groups. There was no significant difference in VE between the groups at immediate and delayed retention phases. 
However, at both these retention phases, the 100% feedback group showed significantly smaller RMSE than the 50% group. Thus, contrary to the guidance hypothesis, our findings suggest that the provision of more, concurrent knowledge of performance auditory feedback during the acquisition of a novel joint coordination pattern, may result in better skill retention. PMID:27375414
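The two error measures reported above have standard definitions that can be sketched directly: RMSE against the target coordination pattern, and variable error (VE) as the trial-to-trial spread about the performer's own mean trajectory. Trajectory shapes and sample values are assumptions for illustration.

```python
# Sketch of the error measures used above for a joint-coordination trial.

def rmse(performed, target):
    """Root-mean-squared error of one trajectory against the target."""
    n = len(performed)
    return (sum((p - t) ** 2 for p, t in zip(performed, target)) / n) ** 0.5

def variable_error(trials):
    """VE: RMS deviation of each trial from the across-trial mean trajectory."""
    n_trials, n_pts = len(trials), len(trials[0])
    mean_traj = [sum(tr[i] for tr in trials) / n_trials for i in range(n_pts)]
    sq = sum((tr[i] - mean_traj[i]) ** 2
             for tr in trials for i in range(n_pts))
    return (sq / (n_trials * n_pts)) ** 0.5
```

Note the distinction the abstract relies on: RMSE measures accuracy relative to the target, while VE measures consistency, so the two can diverge between feedback groups.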
Rich, Ryan M; Stankowska, Dorota L; Maliwal, Badri P; Sørensen, Thomas Just; Laursen, Bo W; Krishnamoorthy, Raghu R; Gryczynski, Zygmunt; Borejdo, Julian; Gryczynski, Ignacy; Fudala, Rafal
2013-02-01
Sample autofluorescence (fluorescence of inherent components of tissue and fixative-induced fluorescence) is a significant problem in direct imaging of molecular processes in biological samples. A large variety of naturally occurring fluorescent components in tissue results in broad emission that overlaps the emission of typical fluorescent dyes used for tissue labeling. In addition, autofluorescence is characterized by complex fluorescence intensity decay composed of multiple components whose lifetimes range from sub-nanoseconds to a few nanoseconds. For these reasons, the real fluorescence signal of the probe is difficult to separate from the unwanted autofluorescence. Here we present a method for reducing the autofluorescence problem by utilizing an azadioxatriangulenium (ADOTA) dye with a fluorescence lifetime of approximately 15 ns, much longer than those of most of the components of autofluorescence. A probe with such a long lifetime enables us to use time-gated intensity imaging to separate the signal of the targeting dye from the autofluorescence. We have shown experimentally that by discarding photons detected within the first 20 ns of the excitation pulse, the signal-to-background ratio is improved fivefold. This time-gating eliminates over 96 % of autofluorescence. Analysis using a variable time-gate may enable quantitative determination of the bound probe without the contributions from the background.
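The time-gating idea above is simple to sketch: photons detected within the gate delay after the excitation pulse are discarded, suppressing short-lifetime autofluorescence while retaining most of the long-lifetime (~15 ns) probe signal. The arrival times below are illustrative, not measured data.

```python
# Sketch of time-gated detection: drop photons arriving before the gate.

def time_gate(arrival_times_ns, gate_ns=20.0):
    """Keep only photons detected after the gate delay."""
    return [t for t in arrival_times_ns if t > gate_ns]

def signal_to_background(probe_times, background_times, gate_ns=20.0):
    """Ratio of retained probe photons to retained background photons."""
    sig = len(time_gate(probe_times, gate_ns))
    bkg = len(time_gate(background_times, gate_ns))
    return float("inf") if bkg == 0 else sig / bkg
```

Because autofluorescence lifetimes are sub-nanosecond to a few nanoseconds, almost all background photons fall inside a 20 ns gate, which is the mechanism behind the fivefold signal-to-background improvement reported above.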
Hsu, David
2015-09-27
Clustering methods are often used to model energy consumption for two reasons. First, clustering is often used to process data and to improve the predictive accuracy of subsequent energy models. Second, stable clusters that are reproducible with respect to non-essential changes can be used to group, target, and interpret observed subjects. However, it is well known that clustering methods are highly sensitive to the choice of algorithms and variables. This can lead to misleading assessments of predictive accuracy and misinterpretation of clusters in policymaking. This paper therefore introduces two methods to the modeling of energy consumption in buildings: clusterwise regression, also known as latent class regression, which integrates clustering and regression simultaneously; and cluster validation methods to measure stability. Using a large dataset of multifamily buildings in New York City, clusterwise regression is compared to common two-stage algorithms that use K-means and model-based clustering with linear regression. Predictive accuracy is evaluated using 20-fold cross validation, and the stability of the perturbed clusters is measured using the Jaccard coefficient. These results show that there seems to be an inherent tradeoff between prediction accuracy and cluster stability. This paper concludes by discussing which clustering methods may be appropriate for different analytical purposes.
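The Jaccard-based stability check mentioned above can be sketched as follows: for each original cluster, take the best-matching Jaccard similarity against the clusters obtained from perturbed data. The membership sets below are illustrative.

```python
# Sketch of cluster stability via the Jaccard coefficient.

def jaccard(a, b):
    """Jaccard coefficient of two membership sets (1.0 = identical)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def cluster_stability(original_clusters, perturbed_clusters):
    """For each original cluster, the best-matching Jaccard score among
    the clusters of the perturbed solution."""
    return [max(jaccard(c, p) for p in perturbed_clusters)
            for c in original_clusters]
```

Scores near 1.0 indicate clusters that survive perturbation and can be safely interpreted; low scores flag clusters that are artifacts of the particular algorithm or sample.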
Raychaudhuri, Subhadip; Raychaudhuri, Somkanya C
2013-01-01
Apoptotic cell death is coordinated through two distinct (type 1 and type 2) intracellular signaling pathways. How the type 1/type 2 choice is made remains a central problem in the biology of apoptosis and has implications for apoptosis related diseases and therapy. We study the problem of type 1/type 2 choice in silico utilizing a kinetic Monte Carlo model of cell death signaling. Our results show that the type 1/type 2 choice is linked to deterministic versus stochastic cell death activation, elucidating a unique regulatory control of the apoptotic pathways. Consistent with previous findings, our results indicate that caspase 8 activation level is a key regulator of the choice between deterministic type 1 and stochastic type 2 pathways, irrespective of cell types. Expression levels of signaling molecules downstream also regulate the type 1/type 2 choice. A simplified model of DISC clustering elucidates the mechanism of increased active caspase 8 generation and type 1 activation in cancer cells having increased sensitivity to death receptor activation. We demonstrate that rapid deterministic activation of the type 1 pathway can selectively target such cancer cells, especially if XIAP is also inhibited; while inherent cell-to-cell variability would allow normal cells stay protected. PMID:24709706
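The deterministic-versus-stochastic activation contrast above can be caricatured with a seeded Monte Carlo toy: a cell "activates" when its caspase-8 signal exceeds a threshold, so a strong mean signal gives near-certain (type 1-like) activation while a weak signal gives rare, noise-driven (type 2-like) activation. All parameters and names here are invented; this is not the authors' kinetic Monte Carlo model.

```python
# Toy threshold-crossing Monte Carlo illustrating deterministic vs
# stochastic activation regimes. Parameters are illustrative assumptions.
import random

def activation_fraction(mean_signal, noise, threshold=1.0,
                        n_cells=2000, seed=42):
    """Fraction of simulated cells whose noisy signal crosses threshold."""
    rng = random.Random(seed)
    activated = sum(1 for _ in range(n_cells)
                    if rng.gauss(mean_signal, noise) > threshold)
    return activated / n_cells

high = activation_fraction(mean_signal=2.0, noise=0.3)  # type 1-like regime
low = activation_fraction(mean_signal=0.5, noise=0.3)   # type 2-like regime
```

In the high-signal regime essentially every cell activates (deterministic behavior); in the low-signal regime only the noise tail activates, reproducing the cell-to-cell variability the abstract attributes to the type 2 pathway.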
Natural Killer Cells for Therapy of Leukemia
Suck, Garnet; Linn, Yeh Ching; Tonn, Torsten
2016-01-01
Clinical application of natural killer (NK) cells against leukemia is an area of intense investigation. In human leukocyte antigen-mismatched allogeneic hematopoietic stem cell transplantations (HSCT), alloreactive NK cells exert powerful anti-leukemic activity in preventing relapse in the absence of graft-versus-host disease, particularly in acute myeloid leukemia patients. Adoptive transfer of donor NK cells post-HSCT or in non-transplant scenarios may be superior to the currently widely used unmanipulated donor lymphocyte infusion. This concept could be further improved through transfusion of activated NK cells. Significant progress has been made in good manufacturing practice (GMP)-compliant large-scale production of stimulated effectors. However, inherent limitations remain. These include differing yields and compositions of the end-product due to donor variability and inefficient means for cryopreservation. Moreover, the impact of the various novel activation strategies on NK cell biology and in vivo behavior are barely understood. In contrast, reproduction of the third-party NK-92 drug from a cryostored GMP-compliant master cell bank is straightforward and efficient. Safety for the application of this highly cytotoxic cell line was demonstrated in first clinical trials. This novel ‘off-the-shelf’ product could become a treatment option for a broad patient population. For specific tumor targeting chimeric-antigen-receptor-engineered NK-92 cells have been designed. PMID:27226791
NASA Astrophysics Data System (ADS)
Permar, W.; Hu, L.; Fischer, E. V.
2017-12-01
Despite being the second largest primary source of tropospheric volatile organic compounds (VOCs), biomass burning is poorly understood relative to other sources due in part to its large variability and the difficulty inherent to sampling smoke. In light of this, several field campaigns are planned to better characterize wildfire plume emissions and chemistry through airborne sampling of smoke plumes. As part of this effort, we will deploy a high-resolution proton-transfer-reaction time-of-flight mass spectrometer (PTR-ToF-MS) on the NSF/NCAR C-130 research aircraft during the collaborative Western wildfire Experiment for Cloud chemistry, Aerosol absorption and Nitrogen (WE-CAN) mission. PTR-ToF-MS is well suited for airborne measurements of VOC in wildfire smoke plumes due to its ability to collect real time, high-resolution data for the full mass range of ionizable organic species, many of which remain uncharacterized or unidentified. In this work, we will report on our initial measurements from the WE-CAN test flights in September 2017. We will also discuss challenges associated with deploying the instrument for airborne missions targeting wildfire smoke and goals for further study in WE-CAN 2018.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fish, Vincent L.; Doeleman, Sheperd S.; Lu, Ru-Sen
The Galactic Center black hole Sagittarius A* (Sgr A*) is a prime observing target for the Event Horizon Telescope (EHT), which can resolve the 1.3 mm emission from this source on angular scales comparable to that of the general relativistic shadow. Previous EHT observations have used visibility amplitudes to infer the morphology of the millimeter-wavelength emission. Potentially much richer source information is contained in the phases. We report on 1.3 mm phase information on Sgr A* obtained with the EHT on a total of 13 observing nights over four years. Closure phases, which are the sum of visibility phases along a closed triangle of interferometer baselines, are used because they are robust against phase corruptions introduced by instrumentation and the rapidly variable atmosphere. The median closure phase on a triangle including telescopes in California, Hawaii, and Arizona is nonzero. This result conclusively demonstrates that the millimeter emission is asymmetric on scales of a few Schwarzschild radii and can be used to break 180° rotational ambiguities inherent from amplitude data alone. The stability of the sign of the closure phase over most observing nights indicates persistent asymmetry in the image of Sgr A* that is not obscured by refraction due to interstellar electrons along the line of sight.
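The robustness of closure phases described above follows from simple algebra: a station-based phase error theta_i corrupts baseline ij as phi_ij + theta_i - theta_j, and these terms cancel around a closed triangle. A minimal numerical sketch (phase values are arbitrary illustrations):

```python
# Sketch of closure-phase invariance to station-based phase errors.
import math

def closure_phase(phi_12, phi_23, phi_31):
    """Sum of three baseline phases, wrapped into (-pi, pi]."""
    s = phi_12 + phi_23 + phi_31
    return math.atan2(math.sin(s), math.cos(s))

# True baseline phases for some source structure (arbitrary, radians).
true_12, true_23, true_31 = 0.7, -0.2, 0.4

# Station-based errors corrupt baseline ij as phi_ij + theta_i - theta_j.
th1, th2, th3 = 1.3, -2.1, 0.6
obs_12 = true_12 + th1 - th2
obs_23 = true_23 + th2 - th3
obs_31 = true_31 + th3 - th1
```

Each theta appears once with a plus sign and once with a minus sign around the triangle, so the observed closure phase equals the true one regardless of the (large) per-station corruptions.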
Development of Ion Chemosensors Based on Porphyrin Analogues.
Ding, Yubin; Zhu, Wei-Hong; Xie, Yongshu
2017-02-22
Sensing of metal ions and anions is of great importance because of their widespread distribution in environmental systems and biological processes. Colorimetric and fluorescent chemosensors based on organic molecular species have been demonstrated to be effective for the detection of various ions and possess the significant advantages of low cost, high sensitivity, and convenient implementation. Of the available classes of organic molecules, porphyrin analogues possess inherently many advantageous features, making them suitable for the design of ion chemosensors, with the targeted sensing behavior achieved and easily modulated based on their following characteristics: (1) NH moieties properly disposed for binding of anions through cooperative hydrogen-bonding interactions; (2) multiple pyrrolic N atoms or other heteroatoms for selectively chelating metal ions; (3) variability of macrocycle size and peripheral substitution for modulation of ion selectivity and sensitivity; and (4) tunable near-infrared emission and good biocompatibility. In this Review, design strategies, sensing mechanisms, and sensing performance of ion chemosensors based on porphyrin analogues are described by use of extensive examples. Ion chemosensors based on normal porphyrins and linear oligopyrroles are also briefly described. This Review provides valuable information for researchers of related areas and thus may inspire the development of more practical and effective approaches for designing high-performance ion chemosensors based on porphyrin analogues and other relevant compounds.
Target Detection Routine (TADER). User’s Guide.
1987-09-01
o System range capability subset (one record - omitted for standoff SLAR and penetrating system)
o System inherent detection probability subset (IELT records, i.e., one per element type)
o System capability modifier subset/A=1, E=1 (IELT records)
o System capability modifier subset/A=1, E=2 (IELT records)
o System capability modifier subset/A=2, E=1 (IELT records)
o System capability modifier subset/A=2, E=2 (IELT records)
Unit Data Set (one set
Mitigating the Shortage of Special Operations Aviation By an Unconventional Approach
2017-12-01
Second World War, and the majority of air power theorists suggested that when technology finally caught up with the inherent ability of aviation, air...assessment of an American expert [Richard D. Newton, Joint Special Operations University] in air special operations at the Air Force’s annual Air Power ...scope and time in order to “seize, destroy, disrupt, capture, exploit, recover, or damage high value or high pay-off targets.”48 When these operations
Zhang, Hongbo; Qu, Xiangmeng; Chen, Hong; Kong, Haixin; Ding, Ruihua; Chen, Dong; Zhang, Xu; Pei, Hao; Santos, Hélder A; Hai, Mingtan; Weitz, David A
2017-10-01
DNA origami is designed by folding DNA strands at the nanoscale with arbitrary control. Due to its inherent biological nature, DNA origami is used in drug delivery for enhancement of synergism and multidrug resistance inhibition, cancer diagnosis, and many other biomedical applications, where it shows great potential. However, the inherent instability and low payload capacity of DNA origami restrict its biomedical applications. Here, this paper reports the fabrication of an advanced biocompatible nano-in-nanocomposite, which protects DNA origami from degradation and facilitates drug loading. The DNA origami, gold nanorods, and molecular targeted drugs are co-incorporated into pH-responsive calcium phosphate [Ca3(PO4)2] nanoparticles. Subsequently, a thin layer of phospholipid is coated onto the Ca3(PO4)2 nanoparticle to offer better biocompatibility. The fabricated nanocomposite shows high drug loading capacity, good biocompatibility, and a photothermal and pH-responsive payload release profile, and it fully protects DNA origami from degradation. The codelivery of DNA origami with cancer drugs synergistically induces cancer cell apoptosis, reduces multidrug resistance, and enhances the targeted killing efficiency toward human epidermal growth factor receptor 2-positive cells. This nanocomposite is foreseen to open new horizons for a variety of clinical and biomedical applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Combinatorial Approaches for the Identification of Brain Drug Delivery Targets
Stutz, Charles C.; Zhang, Xiaobin; Shusta, Eric V.
2018-01-01
The blood-brain barrier (BBB) represents a large obstacle for the treatment of central nervous system diseases. Targeting endogenous nutrient transporters that transcytose the BBB is one promising approach to selectively and noninvasively deliver a drug payload to the brain. The main limitations of the currently employed transcytosing receptors are their ubiquitous expression in the peripheral vasculature and the inherent low levels of transcytosis mediated by such systems. In this review, approaches designed to increase the repertoire of transcytosing receptors which can be targeted for the purpose of drug delivery are discussed. In particular, combinatorial protein libraries can be screened on BBB cells in vitro or in vivo to isolate targeting peptides or antibodies that can trigger transcytosis. Once these targeting reagents are discovered, the cognate BBB transcytosis system can be identified using techniques such as expression cloning or immunoprecipitation coupled with mass spectrometry. Continued technological advances in BBB genomics and proteomics, membrane protein manipulation, and in vitro BBB technology promise to further advance the capability to identify and optimize peptides and antibodies capable of mediating drug transport across the BBB. PMID:23789958
Harris, M.S.; Gayes, P.T.; Kindinger, J.L.; Flocks, J.G.; Krantz, D.E.; Donovan, P.
2005-01-01
Coastal landscapes evolve over wide-ranging spatial and temporal scales in response to physical and biological processes that interact with a wide range of variables. To develop better predictive models for these dynamic areas, we must understand the influence of these variables on coastal morphologies and ultimately how they influence coastal processes. This study defines the influence of geologic framework variability on a classic mixed-energy coastline, and establishes four categorical scales of spatial and temporal influence on the coastal system. The near-surface geologic framework was delineated using high-resolution seismic profiles, shallow vibracores, detailed geomorphic maps, historical shorelines, aerial photographs, and existing studies, and compared to the long- and short-term development of two coastal compartments near Charleston, South Carolina. Although it is clear that the imprint of a mixed-energy tidal and wave signal (basin-scale) dictates formation of drumstick barriers and that immediate responses to wave climate are dramatic, island size, position, and longer-term dynamics are influenced by a series of inherent, complex near-surface stratigraphic geometries. Major near-surface Tertiary geometries influence inlet placement and drainage development (island-scale) through multiple interglacial cycles and overall channel morphology (local-scale). During the modern marine transgression, the halo of ebb-tidal deltas greatly influences inlet region dynamics, while truncated beach ridges and exposed, differentially erodable Cenozoic deposits in the active system influence historical shoreline dynamics and active shoreface morphologies (block-scale). This study concludes that the mixed-energy imprint of wave and tide theories dominates general coastal morphology, but that underlying stratigraphic influences on the coast provide site-specific, long-standing imprints on coastal evolution.
NASA Astrophysics Data System (ADS)
Tatchyn, Roman
1992-01-01
Insertion devices that are tuned by electrical period variation are particularly suited for the design of flexible polarized-light sources [R. Tatchyn, J. Appl. Phys. 65, 4107 (1989); R. Tatchyn and T. Cremer, IEEE Trans. Mag. 26, 3102 (1990)]. Important advantages vis-a-vis mechanical or hybrid variable field designs include: (1) significantly more rapid modulation of both polarization and energy, (2) an inherently larger set of polarization modulation capabilities and (3) polarization/energy modulation at continuously optimized values of K. In this paper we outline some of the general considerations that enter into the design of hysteresis-free variable-period/polarizing undulator structures and present the parameters of a recently-completed prototype design capable of generating intense levels of UV/VUV photon flux on SPEAR running at 3 GeV.
Probabilistic liquefaction triggering based on the cone penetration test
Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Tokimatsu, K.
2005-01-01
Performance-based earthquake engineering requires a probabilistic treatment of potential failure modes in order to accurately quantify the overall stability of the system. This paper is a summary of the application portions of the probabilistic liquefaction triggering correlations recently proposed by Moss and co-workers. To enable probabilistic treatment of liquefaction triggering, the variables comprising the seismic load and the liquefaction resistance were treated as inherently uncertain. Supporting data from an extensive Cone Penetration Test (CPT)-based liquefaction case history database were used to develop a probabilistic correlation. The methods used to measure the uncertainty of the load and resistance variables, how the interactions of these variables were treated using Bayesian updating, and how reliability analysis was applied to produce curves of equal probability of liquefaction are presented. The normalization for effective overburden stress, the magnitude-correlated duration weighting factor, and the non-linear shear mass participation factor used are also discussed.
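The load-and-resistance treatment summarized above can be illustrated with a minimal first-order reliability sketch, assuming lognormal capacity (CRR) and demand (CSR); the dispersion values below are illustrative placeholders, not the fitted coefficients of Moss and co-workers:

```python
import math

def prob_liquefaction(crr_median, csr, beta_ln_crr=0.3, beta_ln_csr=0.2):
    """First-order reliability sketch: P[liq] = Phi(-beta), where the limit
    state is ln(CRR) - ln(CSR) with lognormal capacity (CRR) and demand (CSR).
    The lognormal standard deviations here are illustrative assumptions."""
    beta = (math.log(crr_median) - math.log(csr)) / math.hypot(beta_ln_crr, beta_ln_csr)
    # Standard normal CDF evaluated at -beta, via the error function
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))
```

Sweeping `csr` for a fixed `crr_median` traces a curve of equal probability of liquefaction analogous in spirit to those described in the paper.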
ERPs and Psychopathology. I. Behavioral process issues.
Roth, W T; Tecce, J J; Pfefferbaum, A; Rosenbloom, M; Callaway, E
1984-01-01
The clinical study of ERPs has an inherent defect--a self-selection of clinical populations that hampers equating of clinically defined groups on factors extraneous to the independent variables. Such ex post facto studies increase the likelihood of confounding variables in the interpretation of findings. Hence, the development of lawful relationships between clinical variables and ERPs is impeded and the fulfillment of description, explanation, prediction, and control in brain science is thwarted. Proper methodologies and theory development can increase the likelihood of establishing these lawful relationships. One methodology of potential value in the clinical application of ERPs, particularly in studies of aging, is that of divided attention. Two promising theoretical developments in the understanding of brain functioning and aging are the distraction-arousal hypothesis and the controlled-automatic attention model. The evaluation of ERPs in the study of brain-behavior relations in clinical populations might be facilitated by the differentiation of concurrent, predictive, content, and construct validities.
Best (but oft-forgotten) practices: mediation analysis.
Fairchild, Amanda J; McDaniel, Heather L
2017-06-01
This contribution in the "Best (but Oft-Forgotten) Practices" series considers mediation analysis. A mediator (sometimes referred to as an intermediate variable, surrogate endpoint, or intermediate endpoint) is a third variable that explains how or why ≥2 other variables relate in a putative causal pathway. The current article discusses mediation analysis with the ultimate intention of helping nutrition researchers to clarify the rationale for examining mediation, avoid common pitfalls when using the model, and conduct well-informed analyses that can contribute to improving causal inference in evaluations of underlying mechanisms of effects on nutrition-related behavioral and health outcomes. We give specific attention to underevaluated limitations inherent in common approaches to mediation. In addition, we discuss how to conduct a power analysis for mediation models and offer an applied example to demonstrate mediation analysis. Finally, we provide an example write-up of mediation analysis results as a model for applied researchers. © 2017 American Society for Nutrition.
Servant teaching: the power and promise for nursing education.
Robinson, F Patrick
2009-01-01
The best theoretical or practical approaches to achieving learning outcomes in nursing likely depend on multiple variables, including instructor-related variables. This paper explores one such variable and its potential impact on learning. Application of the principles inherent in servant leadership to teaching/learning in nursing education is suggested as a way to produce professional nurses who are willing and able to transform the health care environment to achieve higher levels of quality and safety. Thus, the concept of servant teaching is introduced with discussion of the following principles and their application to teaching in nursing: judicious use of power, listening and empathy, willingness to change, reflection and contemplation, collaboration and consensus, service learning, healing, conceptualization, stewardship, building community, and commitment to the growth of people. Faculty colleagues are invited to explore the use of servant teaching and its potential for nursing education.
TU-AB-BRB-01: Coverage Evaluation and Probabilistic Treatment Planning as a Margin Alternative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, J.
The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust-plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning.
Learning Objectives: To understand robust-planning as a clinical alternative to using margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.
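As a rough illustration of the coverage-probability idea discussed in this session, the following toy Monte Carlo treats a 1D CTV under a Gaussian setup error; the geometry, margin, and sigma values are invented for illustration and carry no clinical meaning:

```python
import random

def coverage_probability(target=(40, 60), margin=5, sigma=2.0, n_sims=10000, seed=1):
    """Monte Carlo CP estimate for a 1D CTV under Gaussian setup error:
    the static dose region is the target expanded by `margin`; the target is
    'covered' in a scenario if the shifted target stays inside the dose region.
    A toy model of CTV-to-PTV margining, not a clinical dose engine."""
    rng = random.Random(seed)
    lo, hi = target[0] - margin, target[1] + margin
    covered = 0
    for _ in range(n_sims):
        shift = rng.gauss(0.0, sigma)
        if lo <= target[0] + shift and target[1] + shift <= hi:
            covered += 1
    return covered / n_sims
```

Comparing `margin=5` to `margin=0` shows why a margin (or an explicit CP objective) is needed at all: with no margin, essentially any nonzero setup error loses coverage.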
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H.
TU-AB-BRB-02: Stochastic Programming Methods for Handling Uncertainty and Motion in IMRT Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unkelbach, J.
Characterizing the Diversity and Biological Relevance of the MLPCN Assay Manifold and Screening Set
Zhang, Jintao; Lushington, Gerald H.; Huan, Jun
2011-01-01
The NIH Molecular Libraries Initiative (MLI), launched in 2004 with initial goals of identifying chemical probes for characterizing gene function and druggability, has produced PubChem, a chemical genomics knowledgebase for fostering translation of basic research into new therapeutic strategies. This paper assesses progress toward these goals by evaluating MLI target novelty and propensity for undergoing biochemically or therapeutically relevant modulations and the degree of chemical diversity and biogenic bias inherent in the MLI screening set. Our analyses suggest that while MLI target selection has not yet been fully optimized for biochemical diversity, it covers biologically interesting pathway space that complements established drug targets. We find the MLI screening set to be chemically diverse and to have greater biogenic bias than comparable collections of commercially available compounds. Biogenic enhancements such as incorporation of more metabolite-like chemotypes are suggested. PMID:21568288
Targeting receptor-mediated endocytotic pathways with nanoparticles: rationale and advances
Xu, Shi; Olenyuk, Bogdan Z.; Okamoto, Curtis T.; Hamm-Alvarez, Sarah F.
2012-01-01
Targeting of drugs and their carrier systems by using receptor-mediated endocytotic pathways was in its nascent stages 25 years ago. In the intervening years, an explosion of knowledge focused on the design and synthesis of nanoparticulate delivery systems, together with elucidation of the cellular complexity of what was previously termed receptor-mediated endocytosis, has made it possible to design and test the feasibility of delivering highly specific nanoparticle drug carriers to specific cells and tissues. This review outlines the mechanisms governing the major modes of receptor-mediated endocytosis used in drug delivery and highlights recent approaches using these as targets for in vivo drug delivery of nanoparticles. The review also discusses some of the inherent complexity associated with the simple shift from a ligand-drug conjugate to a ligand-nanoparticle conjugate, in terms of ligand valency and its relationship to the mode of receptor-mediated internalization. PMID:23026636
Method and apparatus for executing an asynchronous clutch-to-clutch shift in a hybrid transmission
Demirovic, Besim; Gupta, Pinaki; Kaminsky, Lawrence A.; Naqvi, Ali K.; Heap, Anthony H.; Sah, Jy-Jen F.
2014-08-12
A hybrid transmission includes first and second electric machines. A method for operating the hybrid transmission in response to a command to execute a shift from an initial continuously variable mode to a target continuously variable mode includes increasing torque of an oncoming clutch associated with operating in the target continuously variable mode and correspondingly decreasing a torque of an off-going clutch associated with operating in the initial continuously variable mode. Upon deactivation of the off-going clutch, torque outputs of the first and second electric machines and the torque of the oncoming clutch are controlled to synchronize the oncoming clutch. Upon synchronization of the oncoming clutch, the torque for the oncoming clutch is increased and the transmission is operated in the target continuously variable mode.
Conspicuous plumage colours are highly variable.
Delhey, Kaspar; Szecsenyi, Beatrice; Nakagawa, Shinichi; Peters, Anne
2017-01-25
Elaborate ornamental traits are often under directional selection for greater elaboration, which in theory should deplete underlying genetic variation. Despite this, many ornamental traits appear to remain highly variable and how this essential variation is maintained is a key question in evolutionary biology. One way to address this question is to compare differences in intraspecific variability across different types of traits to determine whether high levels of variation are associated with specific trait characteristics. Here we assess intraspecific variation in more than 100 plumage colours across 55 bird species to test whether colour variability is linked to their level of elaboration (indicated by degree of sexual dichromatism and conspicuousness) or their condition dependence (indicated by mechanism of colour production). Conspicuous colours had the highest levels of variation and conspicuousness was the strongest predictor of variability, with high explanatory power. After accounting for this, there were no significant effects of sexual dichromatism or mechanisms of colour production. Conspicuous colours may entail higher production costs or may be more sensitive to disruptions during production. Alternatively, high variability could also be related to increased perceptual difficulties inherent to discriminating highly elaborate colours. Such psychophysical effects may constrain the exaggeration of animal colours. © 2017 The Author(s).
Automatic identification of variables in epidemiological datasets using logic regression.
Lorenz, Matthias W; Abdi, Negin Ashtiani; Scheckenbach, Frank; Pflug, Anja; Bülbül, Alpaslan; Catapano, Alberico L; Agewall, Stefan; Ezhov, Marat; Bots, Michiel L; Kiechl, Stefan; Orth, Andreas
2017-04-13
For an individual participant data (IPD) meta-analysis, multiple datasets must be transformed into a consistent format, e.g. using uniform variable names. When large numbers of datasets have to be processed, this can be a time-consuming and error-prone task. Automated or semi-automated identification of variables can help to reduce the workload and improve the data quality. For semi-automation, high sensitivity in the recognition of matching variables is particularly important, because it allows creating software which for a target variable presents a choice of source variables, from which a user can choose the matching one, with only low risk of having missed a correct source variable. For each variable in a set of target variables, a number of simple rules were manually created. With logic regression, an optimal Boolean combination of these rules was sought for every target variable, using a random subset of a large database of epidemiological and clinical cohort data (construction subset). In a second subset of this database (validation subset), these optimal combination rules were validated. In the construction sample, 41 target variables were allocated on average with a positive predictive value (PPV) of 34%, and a negative predictive value (NPV) of 95%. In the validation sample, PPV was 33%, whereas NPV remained at 94%. In the construction sample, PPV was 50% or less in 63% of all variables, in the validation sample in 71% of all variables. We demonstrated that the application of logic regression in a complex data management task in large epidemiological IPD meta-analyses is feasible. However, the performance of the algorithm is poor, which may require backup strategies.
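A minimal sketch of the rule-combination idea, assuming hypothetical rules and a hypothetical target variable ("systolic blood pressure"); a real logic-regression fit would search over Boolean combinations of the rules rather than hand-picking one:

```python
import re

# Hypothetical simple rules for recognising a "systolic blood pressure"
# variable from its name and label. Each rule is a cheap predicate.
RULES = [
    lambda name, label: bool(re.search(r"\bsys", name + " " + label, re.I)),
    lambda name, label: "bp" in name.lower() or "pressure" in label.lower(),
    lambda name, label: bool(re.search(r"diastol", name + " " + label, re.I)),
]

def matches_sbp(name, label):
    # Hand-picked Boolean combination: (rule1 OR rule2) AND NOT rule3.
    r1, r2, r3 = (r(name, label) for r in RULES)
    return (r1 or r2) and not r3

def ppv_npv(candidates):
    """candidates: list of (name, label, is_truly_sbp). Returns (PPV, NPV)."""
    tp = fp = tn = fn = 0
    for name, label, truth in candidates:
        pred = matches_sbp(name, label)
        if pred and truth: tp += 1
        elif pred and not truth: fp += 1
        elif not pred and truth: fn += 1
        else: tn += 1
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    npv = tn / (tn + fn) if tn + fn else float("nan")
    return ppv, npv
```

In the semi-automated workflow described above, a combination like this would be used to shortlist candidate source variables for a human to confirm, so NPV (not missing true matches) matters more than PPV.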
The Figure.tar.gz contains a directory for each WRF ensemble run. In these directories are *.csv files for each meteorology variable examined. These are comma-delimited text files that contain statistics for each observation site. Also provided is an R script that reads these files (the user would need to change directory pointers) and computes the variability of error and bias of the ensemble at each site and plots these for reproduction of figure 3. This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. JOURNAL OF GEOPHYSICAL RESEARCH-ATMOSPHERES. American Geophysical Union, Washington, DC, USA, 120(23): 12,259–12,280, (2015).
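A small Python sketch of the per-site bias/variability computation that the archived R script performs, assuming a hypothetical column layout (`site`, `model`, `obs`); the actual *.csv files may use different headers:

```python
import csv
import io
import statistics

def site_bias_stats(csv_text):
    """Compute mean bias (model - obs) and its population standard deviation
    per observation site from a comma-delimited table. The column names
    'site', 'model', 'obs' are assumptions for illustration."""
    by_site = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        bias = float(row["model"]) - float(row["obs"])
        by_site.setdefault(row["site"], []).append(bias)
    return {site: (statistics.mean(b), statistics.pstdev(b))
            for site, b in by_site.items()}
```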
tICA-Metadynamics: Accelerating Metadynamics by Using Kinetically Selected Collective Variables.
M Sultan, Mohammad; Pande, Vijay S
2017-06-13
Metadynamics is a powerful enhanced molecular dynamics sampling method that accelerates simulations by adding history-dependent multidimensional Gaussians along selective collective variables (CVs). In practice, choosing a small number of slow CVs remains challenging due to the inherent high dimensionality of biophysical systems. Here we show that time-structure based independent component analysis (tICA), a recent advance in the Markov state model literature, can be used to identify a set of variationally optimal slow coordinates for use as CVs for Metadynamics. We show that linear and nonlinear tICA-Metadynamics can complement existing MD studies by explicitly sampling the system's slowest modes, and can drive transitions along those modes even when no such transitions are observed in unbiased simulations.
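The tICA step can be sketched as a generalized eigenvalue problem between the instantaneous and time-lagged covariance matrices; this simplified version (symmetrized estimators, no regularization or kinetic mapping) illustrates the idea and is not the production implementation used with Metadynamics codes:

```python
import numpy as np

def tica_components(X, lag=10):
    """Sketch of tICA: given a trajectory X (n_frames, n_features), solve the
    generalized eigenproblem C_lag v = lam C_0 v. Eigenvectors with the
    largest eigenvalues (slowest autocorrelation decay) are candidate CVs."""
    X = X - X.mean(axis=0)
    A, B = X[:-lag], X[lag:]
    C0 = (A.T @ A + B.T @ B) / (2 * len(A))      # instantaneous covariance
    Clag = (A.T @ B + B.T @ A) / (2 * len(A))    # symmetrized lagged covariance
    # Whiten with C0^{-1/2}, then solve an ordinary symmetric eigenproblem
    w, U = np.linalg.eigh(C0)
    L = U @ np.diag(1.0 / np.sqrt(w)) @ U.T
    lam, V = np.linalg.eigh(L @ Clag @ L)
    order = np.argsort(lam)[::-1]
    return lam[order], (L @ V)[:, order]         # eigenvalues, tICA vectors
```

On synthetic data mixing a slow and a fast autoregressive coordinate, the top tIC recovers the slow coordinate, which is exactly the property that makes it a useful CV for biasing.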
NASA Astrophysics Data System (ADS)
Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien
2006-03-01
Subdivision surfaces and parameterization are desirable for many algorithms that are commonly used in Medical Image Analysis. However, extracting an accurate surface and parameterization can be difficult for many anatomical objects of interest, due to noisy segmentations and the inherent variability of the object. The thin cartilages of the knee are an example of this, especially after damage is incurred from injuries or conditions like osteoarthritis. As a result, the cartilages can have different topologies or exist in multiple pieces. In this paper we present a topology preserving (genus 0) subdivision-based parametric deformable model that is used to extract the surfaces of the patella and tibial cartilages in the knee. These surfaces have minimal thickness in areas without cartilage. The algorithm inherently incorporates several desirable properties, including: shape based interpolation, sub-division remeshing and parameterization. To illustrate the usefulness of this approach, the surfaces and parameterizations of the patella cartilage are used to generate a 3D statistical shape model.
The shallow water equation and the vorticity equation for a change in height of the topography.
Da, ChaoJiu; Shen, BingLu; Yan, PengCheng; Ma, DeShan; Song, Jian
2017-01-01
We consider the shallow water equation and the vorticity equations for a variable height of topography. On the assumption that the atmosphere is incompressible and of constant density, we simplify the coupled dynamic equations. The change in topographic height is handled as the sum of the inherent and changing topography using the perturbation method, together with appropriate boundary conditions of the atmosphere, to obtain the relationship between the relative height of the flow, the inherent topography and the changing topography. We generalize the conservation of the function of relative position, and quantify the relationship between the height of the topography and the relative position of a fluid element. If the height of the topography increases (decreases), the relative position of a fluid element descends (ascends). On this basis, we also study the relationship between the vorticity and the topography, finding that the vorticity decreases (increases) for an increasing (decreasing) height of the topography.
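A compact way to state the height-vorticity relationship summarized above is shallow-water potential vorticity conservation (standard symbols assumed here, not necessarily the paper's notation):

```latex
\frac{D}{Dt}\!\left(\frac{\zeta + f}{h}\right) = 0,
\qquad h = H + \eta - h_b ,
```

where \(\zeta\) is the relative vorticity, \(f\) the Coriolis parameter, and \(h\) the fluid-column thickness between the free surface \(\eta\) and the topography \(h_b\). If \(h_b\) increases, \(h\) shrinks, so \(\zeta + f\) must decrease for the ratio to stay constant, consistent with the stated result that vorticity decreases as topographic height increases.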
Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH.
Volk, Jochen; Herrmann, Torsten; Wüthrich, Kurt
2008-07-01
MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness.
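The interplay of population-based (global) search and local optimization that MATCH relies on can be sketched with a toy memetic algorithm on a bit-string problem; the NMR-specific scoring and the dynamic-transition and inherent-mutation mechanisms are not reproduced here:

```python
import random

def memetic_max(fitness, n_bits=20, pop_size=12, gens=30, seed=3):
    """Toy memetic algorithm: genetic search (truncation selection, one-point
    crossover, point mutation) where every individual is refined by a greedy
    bit-flip local search before re-entering the population."""
    rng = random.Random(seed)

    def local_search(ind):
        # Greedy hill climbing: keep any single-bit flip that improves fitness.
        best = fitness(ind)
        improved = True
        while improved:
            improved = False
            for i in range(n_bits):
                ind[i] ^= 1
                f = fitness(ind)
                if f > best:
                    best, improved = f, True
                else:
                    ind[i] ^= 1  # revert the flip
        return ind

    pop = [local_search([rng.randint(0, 1) for _ in range(n_bits)])
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_bits)] ^= 1  # point mutation
            children.append(local_search(child))
        pop = parents + children
    return max(pop, key=fitness)
```

On a trivially smooth objective like one-max the local search alone suffices; the population layer earns its keep on rugged assignment landscapes such as sequence-specific resonance mapping, where local moves get trapped.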
Wirth, Brian D.; Hu, Xunxiang; Kohnert, Aaron; ...
2015-03-02
Exposure of metallic structural materials to irradiation environments results in significant microstructural evolution, property changes, and performance degradation, which limits the extended operation of current generation light water reactors and restricts the design of advanced fission and fusion reactors. Further, it is well recognized that these irradiation effects are a classic example of inherently multiscale phenomena and that the mix of radiation-induced features formed and the corresponding property degradation depend on a wide range of material and irradiation variables. This inherently multiscale evolution emphasizes the importance of closely integrating models with high-resolution experimental characterization of the evolving radiation-damaged microstructure. Lastly, this article provides a review of recent models of the defect microstructure evolution in irradiated body-centered cubic materials, which provide good agreement with experimental measurements, and presents some outstanding challenges, which will require coordinated high-resolution characterization and modeling to resolve.
Development of Water Target for Radioisotope Production
NASA Astrophysics Data System (ADS)
Tripp, Nathan
2011-10-01
Ongoing studies of plant physiology at TUNL require a supply of nitrogen-13 for use as a radiotracer. Production of nitrogen-13 using a water target and a proton beam follows the nuclear reaction 16-O(p,a)13-N. Unfortunately the irradiation of trace amounts of oxygen-18 within a natural water target produces fluorine-18 by the reaction 18-O(p,n)18-F. The presence of this second radioisotope reduces the efficacy of nitrogen-13 as a radiotracer. Designing a natural water target for nitrogen-13 production at TUNL required several new systems to address the problems inherent in nitrogen-13 production. A heat exchanger cools the target water after irradiation within the target cell. The resulting improved thermal regulation of the target water prevents the system from overheating and minimizes the effect of the cavitation occurring within the target. Alumina pellets within a scrubbing unit remove the fluorine-18 contamination from the irradiated water. The modular design of the water target apparatus makes the system highly adaptable, allowing for easy reuse and adaptation of the different components into future projects. The newly designed and constructed water target should meet the current and future needs of TUNL researchers in the production of nitrogen-13. This TUNL REU project was funded in part by a grant from the National Science Foundation (NSF) NSF-PHY-08-51813.
Thin film surface treatments for lowering dust adhesion on Mars Rover calibration targets
NASA Astrophysics Data System (ADS)
Sabri, F.; Werhner, T.; Hoskins, J.; Schuerger, A. C.; Hobbs, A. M.; Barreto, J. A.; Britt, D.; Duran, R. A.
The current generation of calibration targets on the Mars Rover serves as a color and radiometric reference for the panoramic camera. They consist of a transparent silicon-based polymer tinted with either color or grey-scale pigments and cast with a microscopically rough Lambertian surface for a diffuse reflectance pattern. This material has successfully withstood the harsh conditions on Mars. However, the inherent roughness of the Lambertian surface (relative to the particle size of the Martian airborne dust) and the tackiness of the polymer in the calibration targets have led to a serious dust accumulation problem. In this work, non-invasive thin film technology was successfully implemented in the design of future-generation calibration targets, leading to a significant reduction in dust adhesion and capture. The new design consists of a μm-thick interfacial layer capped with a nm-thick optically transparent layer of pure metal. The combination of these two additional layers is effective in burying the relatively rough Lambertian surface while maintaining the diffuse properties of the samples, which is central to their correct operation as calibration targets. A set of these targets is scheduled for flight on the Mars Phoenix mission.
Space moving target detection and tracking method in complex background
NASA Astrophysics Data System (ADS)
Lv, Ping-Yue; Sun, Sheng-Li; Lin, Chang-Qing; Liu, Gao-Rui
2018-06-01
The background seen by space-borne detectors in a real space-based environment is extremely complex and the signal-to-clutter ratio is very low (SCR ≈ 1), which increases the difficulty of detecting space moving targets. To solve this problem, an algorithm combining background suppression based on a two-dimensional least mean square filter (TDLMS) with target enhancement based on neighborhood gray-scale difference (GSD) is proposed in this paper. The latter can filter out most of the residual background clutter left by the former, such as cloud edges. Through this procedure, both global and local SCR obtain a substantial improvement, indicating that the target has been greatly enhanced. After removing the detector's inherent clutter regions through connected-domain processing, the image contains only the target point and isolated noise, and the isolated noise can be filtered out effectively through multi-frame association. The proposed algorithm has been compared with some state-of-the-art algorithms on moving target detection and tracking tasks. The experimental results show that its performance is the best in terms of SCR gain, background suppression factor (BSF) and detection results.
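The TDLMS background-suppression stage described above can be sketched as a per-pixel adaptive predictor: each pixel is predicted from its neighborhood, and the prediction error (residual) retains point targets that the smooth background model cannot track. The following is a minimal illustration assuming a normalized LMS update and a 5×5 window; the step size, window size, and test image are hypothetical, not values from the paper.

```python
import numpy as np

def tdlms_residual(img, k=2, mu=0.5):
    """Suppress a smooth background with a 2-D (normalised) LMS predictor.

    Each pixel is predicted from its (2k+1)x(2k+1) neighbourhood with the
    centre excluded; the prediction error keeps point-like targets that
    the slowly varying background model cannot track.
    """
    h, w = img.shape
    size = 2 * k + 1
    centre = size * size // 2
    wts = np.zeros(size * size - 1)          # adaptive filter weights
    res = np.zeros_like(img, dtype=float)
    for i in range(k, h - k):
        for j in range(k, w - k):
            patch = img[i - k:i + k + 1, j - k:j + k + 1].astype(float).ravel()
            x = np.delete(patch, centre)     # neighbourhood inputs
            y = wts @ x                      # predicted background
            e = float(img[i, j]) - y         # residual (possible target)
            wts += mu * e * x / (x @ x + 1e-12)  # normalised LMS update
            res[i, j] = e
    return res

# Smooth ramp background plus one point target at (40, 40)
img = np.add.outer(np.linspace(0.0, 5.0, 64), np.linspace(0.0, 5.0, 64))
img[40, 40] += 50.0
res = tdlms_residual(img)
```

With the ramp background, the residual image is near zero except at the injected target, which dominates the map; the SCR gain of the suppression stage can then be measured on `res`.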
Mulroy, Sara J; Winstein, Carolee J; Kulig, Kornelia; Beneck, George J; Fowler, Eileen G; DeMuth, Sharon K; Sullivan, Katherine J; Brown, David A; Lane, Christianne J
2011-12-01
Each of the 4 randomized clinical trials (RCTs) hosted by the Physical Therapy Clinical Research Network (PTClinResNet) targeted a different disability group (low back disorder in the Muscle-Specific Strength Training Effectiveness After Lumbar Microdiskectomy [MUSSEL] trial, chronic spinal cord injury in the Strengthening and Optimal Movements for Painful Shoulders in Chronic Spinal Cord Injury [STOMPS] trial, adult stroke in the Strength Training Effectiveness Post-Stroke [STEPS] trial, and pediatric cerebral palsy in the Pediatric Endurance and Limb Strengthening [PEDALS] trial for children with spastic diplegic cerebral palsy) and tested the effectiveness of a muscle-specific or functional activity-based intervention on primary outcomes that captured pain (STOMPS, MUSSEL) or locomotor function (STEPS, PEDALS). The focus of these secondary analyses was to determine causal relationships among outcomes across levels of the International Classification of Functioning, Disability and Health (ICF) framework for the 4 RCTs. With the database from PTClinResNet, we used 2 separate secondary statistical approaches (mediation analysis for the MUSSEL and STOMPS trials and regression analysis for the STEPS and PEDALS trials) to test relationships among muscle performance, primary outcomes (pain related and locomotor related), activity and participation measures, and overall quality of life. Predictive models were stronger for the 2 studies with pain-related primary outcomes. Change in muscle performance mediated or predicted reductions in pain for the MUSSEL and STOMPS trials and, to some extent, walking speed for the STEPS trial. Changes in primary outcome variables were significantly related to changes in activity and participation variables for all 4 trials. Improvement in activity and participation outcomes mediated or predicted increases in overall quality of life for the 3 trials with adult populations.
Variables included in the statistical models were limited to those measured in the 4 RCTs. It is possible that other variables also mediated or predicted the changes in outcomes. The relatively small sample size in the PEDALS trial limited statistical power for those analyses. Evaluating the mediators or predictors of change between each ICF level and for 2 fundamentally different outcome variables (pain versus walking) provided insights into the complexities inherent across 4 prevalent disability groups.
To belong, contribute, and hope: first stage development of a measure of social recovery.
Marino, Casadi Khaki
2015-04-01
Recovery from mental health challenges is beginning to be explored as an inherently social process, and there is a need to measure social recovery. Targeted measures would be used in needs assessment, service delivery, and program evaluation. This paper reports on the first stage of development of a social recovery measure, whose aim was to explore the social aspects of recovery as reported by individuals with lived experience. A qualitative study was conducted using thematic analysis of data from focus groups with 41 individuals in recovery. Three meta-themes of social recovery emerged: community, self-concept, and capacities. Each theme contained a number of sub-themes concerned with a sense of belonging, inherent acceptability of the self, and the ability to cope with mental distress and engage socially. Study participants clearly spoke to the common human needs to belong, contribute, and have hope for one's future. Findings converged with results of consumer-led research that emphasize the importance of overcoming the impact of illness on the self and social context.
NASA Technical Reports Server (NTRS)
Vaughan, Arthur H. (Inventor)
1993-01-01
A strip imaging wide angle optical system is provided. The optical system is provided with a 'virtual' material stop to avoid aberrational effects inherent in wide angle optical systems. The optical system includes a spherical mirror section for receiving light from a 180 deg strip or arc of a target image. Light received by the spherical mirror section is reflected to a frustoconical mirror section for subsequent rereflection to a row of optical fibers. Each optical fiber transmits a portion of the received light to a detector. The optical system exploits the narrow cone of acceptance associated with optical fibers to substantially eliminate vignetting effects inherent in wide angle systems. Further, the optical system exploits the narrow cone of acceptance of the optical fibers to substantially limit spherical aberration. The optical system is ideally suited for any application wherein a 180 deg strip image need be detected, and is particularly well adapted for use in hostile environments such as in planetary exploration.
Wide field strip-imaging optical system
NASA Technical Reports Server (NTRS)
Vaughan, Arthur H. (Inventor)
1994-01-01
A strip imaging wide angle optical system is provided. The optical system is provided with a 'virtual' material stop to avoid aberrational effects inherent in wide angle optical systems. The optical system includes a spherical mirror section for receiving light from a 180-degree strip or arc of a target image. Light received by the spherical mirror section is reflected to a frusto-conical mirror section for subsequent rereflection to a row of optical fibers. Each optical fiber transmits a portion of the received light to a detector. The optical system exploits the narrow cone of acceptance associated with optical fibers to substantially eliminate vignetting effects inherent in wide-angle systems. Further, the optical system exploits the narrow cone of acceptance of the optical fibers to substantially limit spherical aberration. The optical system is ideally suited for any application wherein a 180-degree strip image need be detected, and is particularly well adapted for use in hostile environments such as in planetary exploration.
Woerly, Eric M.; Roy, Jahnabi; Burke, Martin D.
2014-01-01
The inherent modularity of polypeptides, oligonucleotides, and oligosaccharides has been harnessed to achieve generalized building block-based synthesis platforms. Importantly, like these other targets, most small molecule natural products are biosynthesized via iterative coupling of bifunctional building blocks. This suggests that many small molecules also possess inherent modularity commensurate with systematic building block-based construction. Supporting this hypothesis, here we report that the polyene motifs found in >75% of all known polyene natural products can be synthesized using just 12 building blocks and one coupling reaction. Using the same general retrosynthetic algorithm and reaction conditions, this platform enabled the synthesis of a wide range of polyene frameworks covering all of this natural product chemical space, and first total syntheses of the polyene natural products asnipyrone B, physarigin A, and neurosporaxanthin β-D-glucopyranoside. Collectively, these results suggest the potential for a more generalized approach for making small molecules in the laboratory. PMID:24848233
Potential predictability of Northern America surface temperature in AGCMs and CGCMs
NASA Astrophysics Data System (ADS)
Tang, Youmin; Chen, Dake; Yan, Xiaoqin
2015-07-01
In this study, the potential predictability of the Northern America (NA) surface air temperature (SAT) was explored using an information-based predictability framework and two multiple model ensemble products: a one-tier prediction by coupled models (T1), and a two-tier prediction by atmospheric models only (T2). Furthermore, the potential predictability was optimally decomposed into different modes for both T1 and T2, by extracting the most predictable structures. Emphasis was placed on the comparison of the predictability between T1 and T2. It was found that the potential predictability of the NA SAT is seasonally and spatially dependent in both T1 and T2. Higher predictability occurs in spring and winter and over the southeastern US and northwestern Canada. There is no significant difference in potential predictability between T1 and T2 for most areas of NA, although T1 has higher potential predictability than T2 in the southeastern US. Both T1 and T2 display similar most predictable components (PrCs) for the NA SAT, characterized by the inter-annual variability mode and the long-term trend mode. The first is driven by tropical Pacific sea surface temperature forcing, such as the El Niño-Southern Oscillation, whereas the second is closely associated with global warming. In general, the PrC modes can better characterize the predictability in T1 than in T2, in particular for the inter-annual variability mode in the fall. The prediction skill against observations is better measured by predictable component analysis (PrCA) than by principal component analysis for all seasons, indicating the stronger capability of PrCA in extracting prediction targets.
NASA Astrophysics Data System (ADS)
Ahmad, J. A.; Forman, B. A.
2017-12-01
High Mountain Asia (HMA) serves as a water supply source for over 1.3 billion people, primarily in south-east Asia. Most of this water originates as snow (or ice) that melts during the summer months and contributes to run-off downstream. In spite of its critical role, there is still considerable uncertainty regarding the total amount of snow in HMA and its spatial and temporal variation. In this study, the NASA Land Information System (LIS) is used to model the hydrologic cycle over the Indus basin. In addition, the ability of support vector machines (SVMs), a machine learning technique, to predict passive microwave brightness temperatures at a specific frequency and polarization as a function of LIS-derived land surface model output is explored in a sensitivity analysis. Multi-frequency, multi-polarization passive microwave brightness temperatures as measured by the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) over the Indus basin are used as training targets during the SVM training process. Normalized sensitivity coefficients (NSCs) are then computed to assess the sensitivity of a well-trained SVM to each LIS-derived state variable. Preliminary results conform to the known first-order physics. For example, input states directly linked to physical temperature, such as snow temperature, air temperature, and vegetation temperature, have positive NSCs, whereas input states that increase volume scattering, such as snow water equivalent or snow density, yield negative NSCs. Air temperature exhibits the largest sensitivity coefficients due to its inherent, high-frequency variability. Adherence of this machine learning algorithm to the first-order physics bodes well for its potential use in LIS as the observation operator within a radiance data assimilation system aimed at improving regional- and continental-scale snow estimates.
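The normalized sensitivity coefficients used in this kind of analysis can be computed generically by central finite differences, NSC_i = (∂y/∂x_i)(x_i/y): a positive NSC means the predicted brightness temperature rises with that state variable, a negative one that it falls, and |NSC| is roughly the percent change in output per percent change in input. The sketch below applies this to a deliberately simple linear stand-in emulator (`toy_tb`, with made-up coefficients) rather than a trained SVM.

```python
import numpy as np

def normalized_sensitivity(f, x0, rel_step=1e-4):
    """Central-difference normalised sensitivity coefficients,
    NSC_i = (dy/dx_i) * (x_i / y), evaluated at the nominal state x0."""
    x0 = np.asarray(x0, dtype=float)
    y0 = f(x0)
    nsc = np.zeros_like(x0)
    for i, xi in enumerate(x0):
        h = rel_step * max(abs(xi), 1e-12)   # relative perturbation
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        dydx = (f(xp) - f(xm)) / (2.0 * h)   # central difference
        nsc[i] = dydx * xi / y0
    return nsc

# Toy stand-in for a trained emulator of brightness temperature:
# Tb rises with air temperature, falls with snow water equivalent.
def toy_tb(state):                 # state = [air_temp_K, swe_mm]
    t_air, swe = state
    return 0.9 * t_air - 0.5 * swe

nsc = normalized_sensitivity(toy_tb, [270.0, 100.0])
```

As in the abstract's findings, the temperature-like input yields a positive NSC and the scattering-like input a negative one; for a real application `f` would wrap the trained SVM's prediction function.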
Sound scattering by several zooplankton groups. II. Scattering models.
Stanton, T K; Chu, D; Wiebe, P H
1998-01-01
Mathematical scattering models are derived and compared with data from zooplankton from several gross anatomical groups: fluidlike, elastic shelled, and gas bearing. The models are based upon the acoustically inferred boundary conditions determined from laboratory backscattering data presented in part I of this series [Stanton et al., J. Acoust. Soc. Am. 103, 225-235 (1998)]. The models use a combination of ray theory, modal-series solution, and distorted wave Born approximation (DWBA). The formulations, which are inherently approximate, are designed to include only the dominant scattering mechanisms as determined from the experiments. The models for the fluidlike animals (euphausiids in this case) ranged from the simplest case involving two rays, which could qualitatively describe the structure of target strength versus frequency for single pings, to the most complex case involving a rough inhomogeneous asymmetrically tapered bent cylinder using the DWBA-based formulation which could predict echo levels over all angles of incidence (including the difficult region of end-on incidence). The model for the elastic shelled body (gastropods in this case) involved development of an analytical model which takes into account irregularities and discontinuities of the shell. The model for gas-bearing animals (siphonophores) is a hybrid model which is composed of the summation of the exact solution to the gas sphere and the approximate DWBA-based formulation for arbitrarily shaped fluidlike bodies. There is also a simplified ray-based model for the siphonophore. The models are applied to data involving single pings, ping-to-ping variability, and echoes averaged over many pings. There is reasonable qualitative agreement between the predictions and single ping data, and reasonable quantitative agreement between the predictions and variability and averages of echo data.
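The simplest member of the model family above, the two-ray model for fluidlike animals, can be sketched as the coherent sum of a front-interface echo and a back-interface echo; their interference produces the null structure in target strength versus frequency. The ray amplitudes and body size below are illustrative placeholders, not values fitted to euphausiid data.

```python
import numpy as np

def two_ray_ts(freq_hz, body_size_m, c=1500.0, a1=1.0, a2=0.8):
    """Target strength (dB) from two interfering rays: one reflected at
    the front interface and one from the back interface, with an extra
    round-trip path of roughly twice the body size."""
    k = 2.0 * np.pi * freq_hz / c                  # acoustic wavenumber
    f_bs = a1 - a2 * np.exp(4j * k * body_size_m)  # coherent two-ray sum
    return 20.0 * np.log10(np.abs(f_bs))

c, L = 1500.0, 0.03                     # 3 cm body, illustrative
ts_null = two_ray_ts(c / (4 * L), L)    # rays nearly cancel
ts_peak = two_ray_ts(c / (8 * L), L)    # rays add constructively
```

Sweeping `freq_hz` traces out the characteristic peak-null pattern, with null spacing set by the body size, which is the qualitative single-ping structure the two-ray model is meant to capture.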
NASA Astrophysics Data System (ADS)
Jones, William I.
This study examined the understanding of the nature of science (NOS) among participants in their final year of a 4-year undergraduate teacher education program at a Midwest liberal arts university. The Logic Model Process was used as an integrative framework to focus the collection, organization, analysis, and interpretation of the data for the purpose of (1) describing participant understanding of NOS and (2) identifying participant characteristics and teacher education program features related to those understandings. The Views of Nature of Science Questionnaire form C (VNOS-C) was used to survey participant understanding of 7 target aspects of NOS. A rubric was developed from a review of the literature to categorize and score participant understanding of the target aspects of NOS. Participants' high school and college transcripts, planning guides for their respective teacher education program majors, and science content and science teaching methods course syllabi were examined to identify and categorize participant characteristics and teacher education program features. The R software (R Project for Statistical Computing, 2010) was used to conduct an exploratory analysis to determine correlations of the antecedent and transaction predictor variables with participants' scores on the 7 target aspects of NOS. Fourteen participant characteristics and teacher education program features were moderately and significantly (p < .01) correlated with participant scores on the target aspects of NOS. The 6 antecedent predictor variables were entered into multiple regression analyses to determine the best-fit model of antecedent predictor variables for each target NOS aspect. The transaction predictor variables were entered into separate multiple regression analyses to determine the best-fit model of transaction predictor variables for each target NOS aspect.
Variables from the best-fit antecedent and best-fit transaction models for each target aspect of NOS were then combined. A regression analysis for each of the combined models was conducted to determine the relative effect of these variables on the target aspects of NOS. Findings from the multiple regression analyses revealed that each of the fourteen predictor variables was present in the best-fit model for at least 1 of the 7 target aspects of NOS. However, not all of the predictor variables were statistically significant (p < .007) in the models and their effect (beta) varied. Participants in the teacher education program who had higher ACT Math scores, completed more high school science credits, and were enrolled either in the Middle Childhood with a science concentration program major or in the Adolescent/Young Adult Science Education program major were more likely to have an informed understanding on each of the 7 target aspects of NOS. Analyses of the planning guides and the course syllabi in each teacher education program major revealed differences between the program majors that may account for the results.
Heavy particle transport in sputtering systems
NASA Astrophysics Data System (ADS)
Trieschmann, Jan
2015-09-01
This contribution aims to discuss the theoretical background of heavy particle transport in plasma sputtering systems such as direct current magnetron sputtering (dcMS), high power impulse magnetron sputtering (HiPIMS), or multi-frequency capacitively coupled plasmas (MFCCP). Due to inherently low process pressures below one Pa, only kinetic simulation models are suitable. In this work a model appropriate for the description of the transport of film-forming particles sputtered off a target material has been devised within the frame of the OpenFOAM software (specifically dsmcFoam). The three-dimensional model comprises the ejection of sputtered particles into the reactor chamber, their collisional transport through the volume, and their deposition onto the surrounding surfaces (i.e. substrates, walls). An angular-dependent Thompson energy distribution fitted to results from Monte-Carlo simulations is assumed initially. Binary collisions are treated via the M1 collision model, a modified variable hard sphere (VHS) model. The dynamics of sputtered and background gas species can be resolved self-consistently following the direct simulation Monte-Carlo (DSMC) approach or, whenever possible, simplified based on the test particle method (TPM) with the assumption of a constant, non-stationary background at a given temperature. Using the example of an MFCCP research reactor, the transport of sputtered aluminum is specifically discussed. For this particular configuration and under typical process conditions with argon as process gas, the transport of aluminum sputtered off a circular target is shown to be governed by a one-dimensional interaction of the imposed and backscattered particle fluxes. The results are analyzed and discussed on the basis of the obtained velocity distribution functions (VDF). This work is supported by the German Research Foundation (DFG) in the frame of the Collaborative Research Centre TRR 87.
Cooke, Steven J; Martins, Eduardo G; Struthers, Daniel P; Gutowsky, Lee F G; Power, Michael; Doka, Susan E; Dettmers, John M; Crook, David A; Lucas, Martyn C; Holbrook, Christopher M; Krueger, Charles C
2016-04-01
Freshwater fish move vertically and horizontally through the aquatic landscape for a variety of reasons, such as to find and exploit patchy resources or to locate essential habitats (e.g., for spawning). Inherent challenges exist with the assessment of fish populations because they are moving targets. We submit that quantifying and describing the spatial ecology of fish and their habitat is an important component of freshwater fishery assessment and management. With a growing number of tools available for studying the spatial ecology of fishes (e.g., telemetry, population genetics, hydroacoustics, otolith microchemistry, stable isotope analysis), new knowledge can now be generated and incorporated into biological assessment and fishery management. For example, knowing when, where, and how to deploy assessment gears is essential to inform, refine, or calibrate assessment protocols. Such information is also useful for quantifying or avoiding bycatch of imperiled species. Knowledge of habitat connectivity and usage can identify critically important migration corridors and habitats and can be used to improve our understanding of variables that influence spatial structuring of fish populations. Similarly, demographic processes are partly driven by the behavior of fish and mediated by environmental drivers. Information on these processes is critical to the development and application of realistic population dynamics models. Collectively, biological assessment, when informed by knowledge of spatial ecology, can provide managers with the ability to understand how and when fish and their habitats may be exposed to different threats. Naturally, this knowledge helps to better evaluate or develop strategies to protect the long-term viability of fishery production. Failure to understand the spatial ecology of fishes and to incorporate spatiotemporal data can bias population assessments and forecasts and potentially lead to ineffective or counterproductive management actions.
Evaluation of a direct blood culture disk diffusion antimicrobial susceptibility test.
Doern, G V; Scott, D R; Rashad, A L; Kim, K S
1981-01-01
A total of 556 unique blood culture isolates of nonfastidious aerobic and facultatively anaerobic bacteria were examined by direct and standardized disk susceptibility test methods (4,234 antibiotic-organism comparisons). When discrepancies which could be accounted for by the variability inherent in disk diffusion susceptibility tests were excluded, the direct method demonstrated 96.8% overall agreement with the standardized method. A total of 1.6% minor, 1.5% major, and 0.1% very major discrepancies were noted. PMID:7325634
A public health model of Medicaid emergency room use.
de Alteriis, M; Fanning, T
1991-01-01
This study builds a public health model of Medicaid emergency room use for 57 upstate counties in New York from 1985 to 1987. The principal explanatory variables are primary care use (based in physicians' offices, freestanding clinics, and hospital outpatient departments), the concentration of poverty, and geographic and hospital availability. These factors influence the emergency room use of all Medicaid aid categories apart from the Supplemental Security Income recipients. Inherent in these findings are a number of policy implications that are explored in this article.
Two-pass smoother based on the SVSF estimation strategy
NASA Astrophysics Data System (ADS)
Gadsden, S. A.; Al-Shabi, M.; Kirubarajan, T.
2015-05-01
The smooth variable structure filter (SVSF) has seen significant development and research activity in recent years. It is based on sliding mode concepts and utilizes a switching gain that brings an inherent degree of stability to the estimation process. In this paper, the SVSF is reformulated to present a two-pass smoother based on the SVSF gain. The proposed method is applied to an aerospace flight surface actuator, and the results are compared with the popular Kalman-based two-pass smoother.
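For context, the Kalman-based two-pass smoother used as the comparison baseline is the Rauch-Tung-Striebel form: a forward Kalman filter followed by a backward correction sweep. Below is a minimal scalar sketch; the system model and noise values are illustrative, not the flight surface actuator model from the paper.

```python
import numpy as np

def rts_smooth(z, F, H, Q, R, x0, P0):
    """Two-pass smoother: forward Kalman filter, then a backward
    Rauch-Tung-Striebel correction sweep (scalar state model)."""
    n = len(z)
    xp = np.zeros(n); Pp = np.zeros(n)    # one-step predictions
    xf = np.zeros(n); Pf = np.zeros(n)    # filtered estimates
    x, P = x0, P0
    for k in range(n):
        xp[k] = F * x                     # predict
        Pp[k] = F * P * F + Q
        K = Pp[k] * H / (H * Pp[k] * H + R)   # Kalman gain
        x = xp[k] + K * (z[k] - H * xp[k])    # measurement update
        P = (1.0 - K * H) * Pp[k]
        xf[k], Pf[k] = x, P
    xs = xf.copy()
    for k in range(n - 2, -1, -1):        # backward pass
        C = Pf[k] * F / Pp[k + 1]         # smoother gain
        xs[k] = xf[k] + C * (xs[k + 1] - xp[k + 1])
    return xs

# Noisy measurements of a constant state
rng = np.random.default_rng(0)
z = 1.0 + rng.normal(0.0, 1.0, 200)
xs = rts_smooth(z, F=1.0, H=1.0, Q=1e-4, R=1.0, x0=0.0, P0=1.0)
```

Because the backward sweep uses future as well as past measurements, the smoothed trajectory `xs` has markedly lower error than the raw measurements; the SVSF-based smoother of the paper keeps this two-pass structure but replaces the Kalman gain with the SVSF switching gain.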
Photographic investigation into the mechanism of combustion in irregular detonation waves
NASA Astrophysics Data System (ADS)
Kiyanda, C. B.; Higgins, A. J.
2013-03-01
Irregular detonations are supersonic combustion waves in which the inherent multi-dimensional structure is highly variable. In such waves, it is questionable whether auto-ignition induced by shock compression is the only combustion mechanism present. Through the use of high-speed schlieren and self-emitted light photography, the velocity of the different components of detonation waves in a CH₄ + 2O₂ mixture is analyzed. The observed burn-out of unreacted pockets is hypothesized to be due to turbulent combustion.
Analyzing nonstationary financial time series via hilbert-huang transform (HHT)
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2008-01-01
An apparatus, computer program product and method of analyzing non-stationary time varying phenomena. A representation of a non-stationary time varying phenomenon is recursively sifted using Empirical Mode Decomposition (EMD) to extract intrinsic mode functions (IMFs). The representation is filtered to extract intrinsic trends by combining a number of IMFs. The intrinsic trend is inherent in the data and identifies an IMF indicating the variability of the phenomenon. The trend may also be used to detrend the data.
Activated Microglia Targeting Dendrimer-Minocycline Conjugate as Therapeutics for Neuroinflammation.
Sharma, Rishi; Kim, Soo-Young; Sharma, Anjali; Zhang, Zhi; Kambhampati, Siva Pramodh; Kannan, Sujatha; Kannan, Rangaramanujam M
2017-11-15
Brain-related disorders have surpassed cancer and cardiovascular diseases worldwide as the leading cause of morbidity and mortality. The lack of effective therapies and the relatively dry central nervous system (CNS) drug pipeline pose a formidable challenge. Superior, targeted delivery of current clinically approved drugs may offer significant potential. Minocycline has shown promise for the treatment of neurological diseases owing to its potency and its ability to penetrate the blood-brain barrier (BBB). Despite its potential in the clinic and in preclinical models, the high doses needed to produce a positive therapeutic response have led to side effects. Targeted delivery of minocycline to the injured site and injured cells in the brain can be highly beneficial. Systemically administered hydroxyl poly(amidoamine) (PAMAM) generation-6 (G6) dendrimers have a longer blood circulation time and have been shown to cross the impaired BBB. We have successfully prepared and characterized the in vitro efficacy and in vivo targeting ability of a hydroxyl-G6 PAMAM dendrimer-9-amino-minocycline conjugate (D-mino). Minocycline is a challenging drug on which to carry out chemical transformations because of its inherent instability. We used a combination of a highly efficient and mild copper-catalyzed azide-alkyne cycloaddition (CuAAC) click reaction and microwave energy to conjugate 9-amino-minocycline (mino) to the dendrimer surface via enzyme-responsive linkages. D-mino was further evaluated for anti-inflammatory and antioxidant activity in lipopolysaccharide-activated murine microglial cells. D-mino conjugates enhanced the intracellular availability of the drug due to their rapid uptake, suppressed inflammatory cytokine tumor necrosis factor α (TNF-α) production, and reduced oxidative stress by suppressing nitric oxide production, all significantly better than the free drug.
Fluorescently labeled dendrimer conjugate (Cy5-D-mino) was systemically administered (intravenous, 55 mg/kg) on postnatal day 1 to rabbit kits with a clinically relevant phenotype of cerebral palsy. The in vivo imaging study indicates that Cy5-D-mino crossed the impaired blood-brain barrier and co-localized with activated microglia at the periventricular white matter areas, including the corpus callosum and the angle of the lateral ventricle, with significant implications for positive therapeutic outcomes. The enhanced efficacy of D-mino, when combined with the inherent neuroinflammation-targeting capability of the PAMAM dendrimers, may provide new opportunities for targeted drug delivery to treat neurological disorders.
Analysis on influencing factors and decision-making of pedestrian crossing at intersections
NASA Astrophysics Data System (ADS)
Liu, Likun; Wang, Ziyang
2017-10-01
Signalized urban intersections carry complex traffic flows and account for many traffic accidents. As vulnerable road users, pedestrians make up a persistently high proportion of those involved in these accidents, and widespread unsafe crossing behavior further reduces intersection safety. It is therefore necessary to study pedestrian crossing characteristics in depth, reveal the inherent laws of pedestrian crossing behavior, and then propose targeted measures to improve the pedestrian traffic environment, protect pedestrian crossing safety and improve traffic efficiency.
RNA interference in the clinic: challenges and future directions
Pecot, Chad V.; Calin, George A.; Coleman, Robert L.; Lopez-Berestein, Gabriel; Sood, Anil K.
2011-01-01
Inherent difficulties with blocking many desirable targets using conventional approaches have prompted many to consider using RNA interference (RNAi) as a therapeutic approach. Although exploitation of RNAi has immense potential as a cancer therapeutic, many physiological obstacles stand in the way of successful and efficient delivery. This Review explores current challenges to the development of synthetic RNAi-based therapies and considers new approaches to circumvent biological barriers, to avoid intolerable side effects and to achieve controlled and sustained release. PMID:21160526
2014-09-01
college student alongside you, little sis! To Jessika Miller, Lauren Garcia and Caity White, my closest friends and confidants of ten years, who... arena corresponding coverage to the GUI is outlined in white. 2.1.3 Challenges in the Model: There are inherent challenges with any model that implements... source middleware originally maintained by Willow Garage [36] and now managed by the Open Source Robotics Foundation [37]. It provides a framework for
1986-06-10
the solution of the base could be the solution of the target. If expert systems are to mimic humans, then they should inherently utilize analogy. In the... expert systems environment, the theory of frames for representing knowledge developed partly because humans usually solve problems by first seeing if... Goals," Computer, May 1975, p. 17. 8. A.I. Wasserman, "Some Principles of User Software Engineering for Information Systems," Digest of Papers, COMPCON
Epigenetic Regulation of ZBTB18 Promotes Glioblastoma Progression. | Office of Cancer Genomics
Glioblastoma (GBM) comprises distinct subtypes characterized by their molecular profile. Mesenchymal identity in GBM has been associated with a comparatively unfavorable prognosis, primarily due to inherent resistance of these tumors to current therapies. The identification of molecular determinants of mesenchymal transformation could potentially allow for the discovery of new therapeutic targets. Zinc Finger and BTB Domain Containing 18 (ZBTB18/ZNF238/RP58) is a zinc finger transcriptional repressor with a crucial role in brain development and neuronal differentiation.
Quiet Eye Duration Is Responsive to Variability of Practice and to the Axis of Target Changes
ERIC Educational Resources Information Center
Horn, Robert R.; Okumura, Michelle S.; Alexander, Melissa G. F.; Gardin, Fredrick A.; Sylvester, Curtis T.
2012-01-01
We tested the hypothesis that quiet eye, the final fixation before the initiation of a movement in aiming tasks, is used to scale the movement's parameters. Two groups of 12 participants (N = 24) threw darts to targets in the horizontal and vertical axes under conditions of higher (random) or lower (blocked) target variability. Supporting our…
Variable Distance Angular Symbology Reader
NASA Technical Reports Server (NTRS)
Schramm, Harry F., Jr. (Inventor); Corder, Eric L. (Inventor)
2006-01-01
A variable distance angular symbology reader utilizes at least one light source to direct light through a beam splitter and onto a target. The target may be angled relative to the impinging light beam up to, and perhaps even greater than, 45°. A reflected beam from the target passes through the beam splitter and is preferably directed 90° relative to the light source through a telecentric lens to a scanner, which records an image of the target, such as a direct part marking code.
NASA Astrophysics Data System (ADS)
Holsman, Kirstin K.; Ianelli, James; Aydin, Kerim; Punt, André E.; Moffitt, Elizabeth A.
2016-12-01
Multi-species statistical catch-at-age models (MSCAA) can quantify interacting effects of climate and fisheries harvest on species populations, and evaluate management trade-offs for fisheries that target several species in a food web. We modified an existing MSCAA model to include temperature-specific growth and predation rates and applied the modified model to three fish species, walleye pollock (Gadus chalcogrammus), Pacific cod (Gadus macrocephalus) and arrowtooth flounder (Atheresthes stomias), from the eastern Bering Sea (USA). We fit the model to data from 1979 through 2012, with and without trophic interactions and temperature effects, and use projections to derive single- and multi-species biological reference points (BRP and MBRP, respectively) for fisheries management. The multi-species model achieved a higher overall goodness of fit to the data (i.e., lower negative log-likelihood) for pollock and Pacific cod. Variability from water temperature typically resulted in 5-15% changes in spawning, survey, and total biomasses, but did not strongly impact recruitment estimates or mortality. Despite this, inclusion of temperature in projections did have a strong effect on BRPs, including recommended yield, which were higher in single-species models for Pacific cod and arrowtooth flounder that included temperature compared to the same models without temperature effects. While the temperature-driven multi-species model resulted in higher yield MBRPs for arrowtooth flounder than the same model without temperature, we did not observe the same patterns in multi-species models for pollock and Pacific cod, where variability between harvest scenarios and predation greatly exceeded temperature-driven variability in yield MBRPs.
Annual predation on juvenile pollock (primarily cannibalism) in the multi-species model was 2-5 times the annual harvest of adult fish in the system; predation thus represents a strong control on population dynamics that exceeds temperature-driven changes to growth and is attenuated through harvest-driven reductions in predator populations. Additionally, although we observed differences in spawning biomasses at the acceptable biological catch (ABC) proxy between harvest scenarios and single- and multi-species models, discrepancies in spawning stock biomass estimates did not translate to large differences in yield. We found that multi-species models produced higher estimates of combined yield for aggregate maximum sustainable yield (MSY) targets than single-species models, but were more conservative than single-species models when individual MSY targets were used, with the exception of scenarios where minimum biomass thresholds were imposed. Collectively, our results suggest that climate and trophic drivers can interact to affect MBRPs, but for prey species with high predation rates, trophic- and management-driven changes may exceed direct effects of temperature on growth and predation. Additionally, MBRPs are not inherently more conservative than single-species BRPs. This framework provides a basis for the application of MSCAA models for tactical ecosystem-based fisheries management decisions under changing climate conditions.
Individual Movement Variability Magnitudes Are Explained by Cortical Neural Variability.
Haar, Shlomi; Donchin, Opher; Dinstein, Ilan
2017-09-13
Humans exhibit considerable motor variability even across trivial reaching movements. This variability can be separated into specific kinematic components such as extent and direction that are thought to be governed by distinct neural processes. Here, we report that individual subjects (males and females) exhibit different magnitudes of kinematic variability, which are consistent (within individual) across movements to different targets and regardless of which arm (right or left) was used to perform the movements. Simultaneous fMRI recordings revealed that the same subjects also exhibited different magnitudes of fMRI variability across movements in a variety of motor system areas. These fMRI variability magnitudes were also consistent across movements to different targets when performed with either arm. Cortical fMRI variability in the posterior-parietal cortex of individual subjects explained their movement-extent variability. This relationship was apparent only in posterior-parietal cortex and not in other motor system areas, thereby suggesting that individuals with more variable movement preparation exhibit larger kinematic variability. We therefore propose that neural and kinematic variability are reliable and interrelated individual characteristics that may predispose individual subjects to exhibit distinct motor capabilities. SIGNIFICANCE STATEMENT Neural activity and movement kinematics are remarkably variable. Although intertrial variability is rarely studied, here, we demonstrate that individual human subjects exhibit distinct magnitudes of neural and kinematic variability that are reproducible across movements to different targets and when performing these movements with either arm. Furthermore, when examining the relationship between cortical variability and movement variability, we find that cortical fMRI variability in parietal cortex of individual subjects explained their movement extent variability. 
This enabled us to explain why some subjects performed more variable movements than others based on their cortical variability magnitudes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fruth, T.; Cabrera, J.; Csizmadia, Sz.
2013-11-01
A photometric survey of three southern target fields with BEST II yielded the detection of 2406 previously unknown variable stars and an additional 617 stars with suspected variability. This study presents a catalog including their coordinates, magnitudes, light curves, ephemerides, amplitudes, and type of variability. In addition, the variability of 17 known objects is confirmed, thus validating the results. The catalog contains a number of known and new variables that are of interest for further astrophysical investigations, in order to, e.g., search for additional bodies in eclipsing binary systems, or to test stellar interior models. Altogether, 209,070 stars were monitored with BEST II during a total of 128 nights in 2009/2010. The overall variability fraction of 1.2%-1.5% in these target fields is well comparable to similar ground-based photometric surveys. Within the main magnitude range of R in [11, 17], we identify 0.67(3)% of all stars to be eclipsing binaries, which indicates a completeness of about one third for this particular type in comparison to space surveys.
Yao, Lei; Chen, Liding; Wei, Wei
2017-01-01
In the context of global urbanization, urban flood risk in many cities has become a serious environmental issue, threatening the health of residents and the environment. A number of hydrological studies have linked urban flooding issues closely to the spectrum of spatial patterns of urbanization, but relatively little attention has been given to small-scale catchments within the realm of urban systems. This study aims to explore the hydrological effects of small-scale urbanized catchments with various landscape patterns. Twelve typical residential catchments in Beijing were selected as the study areas. Total Impervious Area (TIA), Directly Connected Impervious Area (DCIA), and a drainage index were used as the catchment spatial metrics. Three scenarios were designed as different spatial arrangements of catchment imperviousness. Runoff variables including total and peak runoff depth (Qt and Qp) were simulated by using the Storm Water Management Model (SWMM). The relationship between catchment spatial patterns and runoff variables was determined, and the results demonstrated that spatial patterns have inherent influences on flood risks in small urbanized catchments. Specifically: (1) imperviousness acts as an effective indicator in affecting both Qt and Qp; (2) reducing the number of rainwater inlets appropriately will benefit the catchment peak flow mitigation; (3) different spatial concentrations of impervious surfaces have inherent influences on Qp. These findings provide insights into the role of urban spatial patterns in driving rainfall-runoff processes in small urbanized catchments, which is essential for urban planning and flood management. PMID:28264521
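The two imperviousness metrics used in the study (TIA and DCIA) are simple area ratios, conventionally reported as percentages of total catchment area. A minimal sketch, with hypothetical area values:

```python
def imperviousness_metrics(total_area, impervious_area, connected_impervious_area):
    """Return (TIA, DCIA) as percentages of total catchment area."""
    tia = 100.0 * impervious_area / total_area
    dcia = 100.0 * connected_impervious_area / total_area
    return tia, dcia

# Hypothetical 12 ha residential catchment: 7.2 ha impervious (roofs, roads),
# of which 5.4 ha drains directly to the storm sewer without crossing
# pervious surfaces.
tia, dcia = imperviousness_metrics(total_area=12.0,
                                   impervious_area=7.2,
                                   connected_impervious_area=5.4)
```

DCIA is always at most TIA; the gap between them is one way spatial arrangement (rather than total imperviousness) enters the runoff response.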
Longo, Alessia; Federolf, Peter; Haid, Thomas; Meulenbroek, Ruud
2018-06-01
In many daily jobs, repetitive arm movements are performed for extended periods of time under continuous cognitive demands. Even highly monotonous tasks exhibit an inherent motor variability and subtle fluctuations in movement stability. Variability and stability are different aspects of system dynamics, whose magnitude may be further affected by a cognitive load. Thus, the aim of the study was to explore and compare the effects of a cognitive dual task on the variability and local dynamic stability in a repetitive bimanual task. Thirteen healthy volunteers performed the repetitive motor task with and without a concurrent cognitive task of counting aloud backwards in multiples of three. Upper-body 3D kinematics were collected and postural reconfigurations (the variability related to the volunteer's postural change) were determined through a principal component analysis-based procedure. Subsequently, the most salient component was selected for the analysis of (1) cycle-to-cycle spatial and temporal variability, and (2) local dynamic stability as reflected by the largest Lyapunov exponent. Finally, end-point variability was evaluated as a control measure. The dual cognitive task proved to increase the temporal variability and reduce the local dynamic stability, marginally decrease endpoint variability, and substantially lower the incidence of postural reconfigurations. Particularly, the latter effect is considered to be relevant for the prevention of work-related musculoskeletal disorders since reduced variability in sustained repetitive tasks might increase the risk of overuse injuries.
The variability of the rainfall rate as a function of area
NASA Astrophysics Data System (ADS)
Jameson, A. R.; Larsen, M. L.
2016-01-01
Distributions of drop sizes can be expressed as DSD = Nt × PSD, where Nt is the total number of drops in a sample and PSD is the frequency distribution of drop diameters (D). This factorization permitted remote sensing techniques for rainfall estimation using radars and satellites measuring over large domains of several kilometers. Because these techniques depend heavily on higher moments of the PSD, there has been a bias toward attributing the variability of the intrinsic rainfall rates R over areas (σR) to the variability of the PSDs. While this variability does increase up to a point with increasing domain dimension L, the variability of the rainfall rate R also depends upon the variability in the total number of drops Nt. We show that while the importance of PSDs looms large for small domains used in past studies, it is the variability of Nt that dominates the variability of R as L increases to 1 km and beyond. The PSDs contribute to the variability of R through the relative dispersion of χ = D³Vt, where Vt is the terminal fall speed of drops of diameter D. However, the variability of χ is inherently limited because drop sizes and fall speeds are physically limited. In contrast, the variance of Nt continuously increases as the domain expands, for physical reasons explained in the paper. Over domains larger than around 1 km, Nt dominates the variance of the rainfall rate with increasing L regardless of the PSD.
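The claimed dominance of Nt can be illustrated with a small Monte Carlo sketch. All distributions here are hypothetical, and the fall-speed power law is a commonly used approximation, not the paper's data: because drop diameter (and hence χ = D³Vt) is bounded, its relative dispersion saturates, while inflating the dispersion of Nt (mimicking drop clustering over larger domains) directly inflates the relative variance of the rain-rate proxy.

```python
import random

random.seed(1)

def chi(d):
    # chi = D^3 * Vt(D), with a commonly used power-law fall speed
    # Vt ~ 3.78 * D^0.67 (m/s for D in mm); bounded because D is bounded.
    return d ** 3 * 3.78 * d ** 0.67

def sample_R(mean_nt, clustering):
    # One sample of a quantity proportional to the rain rate over a domain:
    # the sum of chi over Nt drops. "clustering" inflates the dispersion of
    # Nt, mimicking its growing variance over larger domains L.
    nt = max(0, int(random.gauss(mean_nt, clustering * mean_nt ** 0.5)))
    return sum(chi(random.uniform(0.5, 3.0)) for _ in range(nt))

def rel_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs) / m ** 2

small = [sample_R(100, 1.0) for _ in range(2000)]   # small domain: Nt near-Poisson
large = [sample_R(100, 10.0) for _ in range(2000)]  # large domain: clustered Nt
# rel_var(large) greatly exceeds rel_var(small) even though the PSD is unchanged.
```

The PSD (the uniform diameter distribution) is identical in both cases; only the Nt dispersion differs, yet the rain-rate variability changes by an order of magnitude.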
Operational Assessment of Apollo Lunar Surface Extravehicular Activity
NASA Technical Reports Server (NTRS)
Miller, Matthew James; Claybrook, Austin; Greenlund, Suraj; Marquez, Jessica J.; Feigh, Karen M.
2017-01-01
Quantifying the operational variability of extravehicular activity (EVA) execution is critical to help design and build future support systems to enable astronauts to monitor and manage operations in deep-space, where ground support operators will no longer be able to react instantly and manage execution deviations due to the significant communication latency. This study quantifies the operational variability exhibited during Apollo 14-17 lunar surface EVA operations to better understand the challenges and natural tendencies of timeline execution and life support system performance involved in surface operations. Each EVA (11 in total) is individually summarized as well as aggregated to provide descriptive trends exhibited throughout the Apollo missions. This work extends previous EVA task analyses by calculating deviations between planned and as-performed timelines as well as examining metabolic rate and consumables usage throughout the execution of each EVA. The intent of this work is to convey the natural variability of EVA operations and to provide operational context for coping with the variability inherent to EVA execution as a means to support future concepts of operations.
Social vulnerability and climate variability in southern Brazil: a TerraPop case study
NASA Astrophysics Data System (ADS)
Adamo, S. B.; Fitch, C. A.; Kugler, T.; Doxsey-Whitfield, E.
2014-12-01
Climate variability is an inherent characteristic of the Earth's climate, including but not limited to climate change. It affects and impacts human society in different ways, depending on the underlying socioeconomic vulnerability of specific places, social groups, households and individuals. This differential vulnerability presents spatial and temporal variations, and is rooted in historical patterns of development and relations between human and ecological systems. This study aims to assess the impact of climate variability on livelihoods and well-being, as well as their changes over time and across space, and for rural and urban populations. The geographic focus is Southern Brazil (the states of Parana, Santa Catarina and Rio Grande do Sul), and the objectives include (a) to identify and map critical areas or hotspots of exposure to climate variability (temperature and precipitation), and (b) to identify internal variation or differential vulnerability within these areas and its evolution over time (1980-2010), using newly available integrated data from the Terra Populus project. These data include geo-referenced climate and agricultural data, and data describing demographic and socioeconomic characteristics of individuals, households and places.
Barycentric parameterizations for isotropic BRDFs.
Stark, Michael M; Arvo, James; Smits, Brian
2005-01-01
A bidirectional reflectance distribution function (BRDF) is often expressed as a function of four real variables: two spherical coordinates in each of the "incoming" and "outgoing" directions. However, many BRDFs reduce to functions of fewer variables. For example, isotropic reflection can be represented by a function of three variables. Some BRDF models can be reduced further. In this paper, we introduce new sets of coordinates which we use to reduce the dimensionality of several well-known analytic BRDFs as well as empirically measured BRDF data. The proposed coordinate systems are barycentric with respect to a triangular support with a direct physical interpretation. One coordinate set is based on the BRDF model proposed by Lafortune. Another set, based on a model of Ward, is associated with the "halfway" vector common in analytical BRDF formulas. Through these coordinate sets we establish lower bounds on the approximation error inherent in the models on which they are based. We present a third set of coordinates, not based on any analytical model, that performs well in approximating measured data. Finally, our proposed variables suggest novel ways of constructing and visualizing BRDFs.
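For readers unfamiliar with coordinates "barycentric with respect to a triangular support": a point inside a triangle is expressed as a convex combination of the three vertices, with weights summing to one. The sketch below is the standard 2D construction, not the paper's specific Lafortune- or Ward-based coordinate sets:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of 2D point p w.r.t. triangle (a, b, c),
    so that p = u*a + v*b + w*c with u + v + w = 1."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)  # twice signed area
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return u, v, 1.0 - u - v

# The centroid of any triangle has coordinates (1/3, 1/3, 1/3).
u, v, w = barycentric((1.0, 1.0), (0.0, 0.0), (3.0, 0.0), (0.0, 3.0))
```

A parameterization in such coordinates stays inside a bounded triangular domain, which is convenient for tabulating and visualizing measured BRDF data.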
Gaussian Process Kalman Filter for Focal Plane Wavefront Correction and Exoplanet Signal Extraction
NASA Astrophysics Data System (ADS)
Sun, He; Kasdin, N. Jeremy
2018-01-01
Currently, the ultimate limitation of space-based coronagraphy is the ability to subtract the residual PSF after wavefront correction to reveal the planet. Called reference difference imaging (RDI), the technique consists of conducting wavefront control to collect the reference point spread function (PSF) by observing a bright star, and then extracting target planet signals by subtracting a weighted sum of reference PSFs. Unfortunately, this technique is inherently inefficient because it spends a significant fraction of the observing time on the reference star rather than the target star with the planet. Recent progress in model based wavefront estimation suggests an alternative approach. A Kalman filter can be used to estimate the stellar PSF for correction by the wavefront control system while simultaneously estimating the planet signal. Without observing the reference star, the (extended) Kalman filter directly utilizes the wavefront correction data and combines the time series observations and model predictions to estimate the stellar PSF and planet signals. Because wavefront correction is used during the entire observation with no slewing, the system has inherently better stability. In this poster we show our results aimed at further improving our Kalman filter estimation accuracy by including not only temporal correlations but also spatial correlations among neighboring pixels in the images. This technique is known as a Gaussian process Kalman filter (GPKF). We also demonstrate the advantages of using a Kalman filter rather than RDI by simulating a real space exoplanet detection mission.
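The estimation principle described here, combining a model prediction with noisy observations in proportion to their uncertainties, is the core Kalman update. The following is a textbook scalar sketch, not the Gaussian process Kalman filter of the poster:

```python
def kalman_update(x_pred, p_pred, z, r):
    """One scalar Kalman measurement update.
    x_pred, p_pred: predicted state and its variance (from the model);
    z, r: measurement and its noise variance."""
    k = p_pred / (p_pred + r)          # Kalman gain: trust in the measurement
    x = x_pred + k * (z - x_pred)      # corrected state estimate
    p = (1.0 - k) * p_pred             # reduced posterior variance
    return x, p

# Equal confidence in prediction and measurement -> estimate lands halfway.
x, p = kalman_update(x_pred=2.0, p_pred=1.0, z=4.0, r=1.0)
# x == 3.0, p == 0.5
```

The GPKF extends this by letting the state covariance encode spatial correlations between neighboring pixels, so each pixel's update also borrows information from its neighbors.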
Genetic relationships among some hawthorn (Crataegus spp.) species and genotypes.
Yilmaz, Kadir Ugurtan; Yanar, Makbule; Ercisli, Sezai; Sahiner, Hatice; Taskin, Tuncer; Zengin, Yasar
2010-10-01
The genus Crataegus is well distributed in Turkey as a wild plant, with numerous, inherently variable species and genotypes. RAPD markers were used to study 17 hawthorn genotypes belonging to Crataegus monogyna ssp. monogyna Jacq (2 genotypes), C. monogyna ssp. azarella Jacq (1), Crataegus pontica K.Koch (3), Crataegus orientalis var. orientalis Pallas Ex Bieb (3), Crataegus pseudoheterophylla Pojark (1), Crataegus aronia var. dentata Browicz (1), C. aronia var. aronia Browicz (4), and Crateagus x bornmuelleri Zabel (2). The 10 RAPD primers produced 72 polymorphic bands (88% polymorphism). A dendrogram based on Jaccard's index included four major groups and one outgroup according to taxa. The lowest genetic variability was observed within C. aronia var. aronia genotypes. The study demonstrated that RAPD analysis is efficient for genotyping wild-grown hawthorns.
Zhao, Lin; Guan, Dongxue; Landry, René Jr.; Cheng, Jianhua; Sydorenko, Kostyantyn
2015-01-01
Target positioning systems based on MEMS gyros and laser rangefinders (LRs) have extensive prospects due to their advantages of low cost, small size and easy realization. The target positioning accuracy is mainly determined by the LR's attitude derived from the gyros. However, the attitude error is large due to the inherent noises from isolated MEMS gyros. In this paper, both accelerometer/magnetometer and LR attitude aiding systems are introduced to aid MEMS gyros. A no-reset Federated Kalman Filter (FKF) is employed, which consists of two local Kalman Filters (KF) and a Master Filter (MF). The local KFs are designed by using the Direction Cosine Matrix (DCM)-based dynamic equations and the measurements from the two aiding systems. The KFs can estimate the attitude simultaneously to limit the attitude errors resulting from the gyros. Then, the MF fuses the redundant attitude estimates to yield globally optimal estimates. Simulation and experimental results demonstrate that the FKF-based system can improve the target positioning accuracy effectively and provide good fault-tolerant capability. PMID:26512672
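The master-filter fusion step can be sketched as an information-weighted average of the local estimates. This is the scalar, no-reset form with hypothetical numbers; the paper's filters operate on full DCM-based attitude states:

```python
def fuse(estimates):
    """No-reset federated fusion of independent local estimates (x_i, p_i):
    x = P * sum(x_i / p_i), with P = 1 / sum(1 / p_i).
    The fused variance is never worse than the best local one."""
    info = sum(1.0 / p for _, p in estimates)      # total information
    x = sum(x_i / p for x_i, p in estimates) / info
    return x, 1.0 / info

# Hypothetical local attitude estimates (degrees, variance) from the two
# aiding systems: accelerometer/magnetometer KF and laser-rangefinder KF.
x, p = fuse([(10.2, 4.0), (9.8, 1.0)])
# The fused estimate leans toward the more certain LR-aided value.
```

Because the master filter only averages, a faulty local filter can be dropped from the list without reinitializing the others, which is the source of the fault tolerance the abstract mentions.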
Insights into the key roles of proteoglycans in breast cancer biology and translational medicine
Theocharis, Achilleas D.; Skandalis, Spyros S.; Neill, Thomas; Multhaupt, Hinke A. B.; Hubo, Mario; Frey, Helena; Gopal, Sandeep; Gomes, Angélica; Afratis, Nikos; Lim, Hooi Ching; Couchman, John R.; Filmus, Jorge; Sanderson, Ralph D.; Schaefer, Liliana; Iozzo, Renato V.; Karamanos, Nikos K.
2015-01-01
Proteoglycans control numerous normal and pathological processes, among which are morphogenesis, tissue repair, inflammation, vascularization and cancer metastasis. During tumor development and growth, proteoglycan expression is markedly modified in the tumor microenvironment. Altered expression of proteoglycans on tumor and stromal cell membranes affects cancer cell signaling, growth and survival, cell adhesion, migration and angiogenesis. Despite the high complexity and heterogeneity of breast cancer, the rapid evolution in our knowledge that proteoglycans are among the key players in the breast tumor microenvironment suggests their potential as pharmacological targets in this type of cancer. It has been recently suggested that pharmacological treatment may target proteoglycan metabolism, their utilization as targets for immunotherapy or their direct use as therapeutic agents. The diversity inherent in the proteoglycans that will be presented herein provides the potential for multiple layers of regulation of breast tumor behavior. This review summarizes recent developments concerning the biology of selected proteoglycans in breast cancer, and presents potential targeted therapeutic approaches based on their novel key roles in breast cancer. PMID:25829250
Rotary target method to prepare thin films of CdS/SiO2 by pulsed laser deposition
NASA Astrophysics Data System (ADS)
Wang, H.; Zhu, Y.; Ong, P. P.
2000-12-01
Thin films of CdS-doped SiO2 glass were prepared by using the conventional pulsed laser deposition (PLD) technique. The laser target consisted of a specially constructed rotary wheel which provided easy control of the exposure-area ratio to expose alternately the two materials to the laser beam. The physical target assembly avoided the potential complications inherent in chemically mixed targets such as in the sol-gel method. Time-of-flight (TOF) spectra confirmed the existence of the SiO2 and CdS components in the thin-film samples so produced. X-ray diffraction (XRD) and atomic force microscopy (AFM) results showed the different sizes and structures of the as-deposited and annealed films. The wurtzite phase of CdS was found in the sample annealed at 600 °C, while the as-deposited film showed a cubic-hexagonal mixed structure. In the corresponding PL (photoluminescence) spectra, a red shift of the CdS band edge emission was found, which may be a result of the interaction between the CdS nanocrystallites and SiO2 at their interface.
Pro-Tumoral Inflammatory Myeloid Cells as Emerging Therapeutic Targets.
Szebeni, Gabor J; Vizler, Csaba; Nagy, Lajos I; Kitajka, Klara; Puskas, Laszlo G
2016-11-23
Since the observation of Virchow, it has long been known that the tumor microenvironment constitutes the soil for the infiltration of inflammatory cells and for the release of inflammatory mediators. Under certain circumstances, inflammation remains unresolved and promotes cancer development. Here, we review some of the indisputable experimental and clinical evidence of cancer-related smouldering inflammation. The most common myeloid infiltrate in solid tumors is composed of myeloid-derived suppressor cells (MDSCs) and tumor-associated macrophages (TAMs). These cells promote tumor growth by several mechanisms, including their inherent immunosuppressive activity, promotion of neoangiogenesis, mediation of epithelial-mesenchymal transition and alteration of cellular metabolism. The pro-tumoral functions of TAMs and MDSCs are further enhanced by their cross-talk, offering a myriad of potential anti-cancer therapeutic targets. We highlight these main pro-tumoral mechanisms of myeloid cells and give a general overview of their phenotypical and functional diversity, offering examples of possible therapeutic targets. Pharmacological targeting of inflammatory cells and molecular mediators may result in therapies improving patient condition and prognosis. Here, we review experimental and clinical findings on cancer-related inflammation with a major focus on creating an inventory of current small molecule-based therapeutic interventions targeting cancer-related inflammatory cells: TAMs and MDSCs.
Nano-aggregates: emerging delivery tools for tumor therapy.
Sharma, Vinod Kumar; Jain, Ankit; Soni, Vandana
2013-01-01
A plethora of formulation techniques have been reported in the literature for site-specific targeting of water-soluble and -insoluble anticancer drugs. Along with other vesicular and particulate carrier systems, nano-aggregates have recently emerged as a novel supramolecular colloidal carrier with promise for using poorly water-soluble drugs in molecular targeted therapies. Nano-aggregates possess some inherent properties such as nanometer-scale size, high loading efficiency, and in vivo stability. Nano-aggregates can provide site-specific drug delivery via either a passive or active targeting mechanism. Nano-aggregates are formed from a polymer-drug conjugated amphiphilic block copolymer. They are suitable for encapsulation of poorly water-soluble drugs by covalent conjugation as well as physical encapsulation. Because of physical encapsulation, a maximum amount of drug can be loaded in nano-aggregates, which helps to achieve a sufficiently high drug concentration at the target site. Active transport can be achieved by conjugating a drug with vectors or ligands that bind specifically to receptors being overexpressed in the tumor cells. In this review, we explore the synthesis and tumor targeting potential of nano-aggregates with active and passive mechanisms, and we discuss various characterization parameters, ex vivo studies, biodistribution studies, clinical trials, and patents.
A universal entropy-driven mechanism for thioredoxin–target recognition
Palde, Prakash B.; Carroll, Kate S.
2015-01-01
Cysteine residues in cytosolic proteins are maintained in their reduced state, but can undergo oxidation owing to posttranslational modification during redox signaling or under conditions of oxidative stress. In large part, the reduction of oxidized protein cysteines is mediated by a small 12-kDa thiol oxidoreductase, thioredoxin (Trx). Trx provides reducing equivalents for central metabolic enzymes and is implicated in redox regulation of a wide number of target proteins, including transcription factors. Despite its importance in cellular redox homeostasis, the precise mechanism by which Trx recognizes target proteins, especially in the absence of any apparent signature binding sequence or motif, remains unknown. Knowledge of the forces associated with the molecular recognition that governs Trx–protein interactions is fundamental to our understanding of target specificity. To gain insight into Trx–target recognition, we have thermodynamically characterized the noncovalent interactions between Trx and target proteins before S-S reduction using isothermal titration calorimetry (ITC). Our findings indicate that Trx recognizes the oxidized form of its target proteins with exquisite selectivity, compared with their reduced counterparts. Furthermore, we show that recognition is dependent on the conformational restriction inherent to oxidized targets. Significantly, the thermodynamic signatures for multiple Trx targets reveal favorable entropic contributions as the major recognition force dictating these protein–protein interactions. Taken together, our data afford significant new insight into the molecular forces responsible for Trx–target recognition and should aid the design of new strategies for thiol oxidoreductase inhibition. PMID:26080424
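The "favorable entropic contributions" reported by the ITC experiments refer to the standard decomposition of binding free energy, ΔG = ΔH − TΔS with ΔG = −RT ln K. A minimal sketch with hypothetical values (not the paper's data):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def binding_thermodynamics(dH, dS, T=298.15):
    """Binding free energy (J/mol) and association constant from
    calorimetric dH (J/mol) and dS (J/(mol*K)),
    via dG = dH - T*dS and dG = -R*T*ln(K)."""
    dG = dH - T * dS
    K = math.exp(-dG / (R * T))
    return dG, K

# An entropy-driven interaction: slightly unfavorable enthalpy (+5 kJ/mol)
# outweighed by a large positive entropy change (+100 J/(mol*K)), as in
# recognition dominated by conformational/solvent entropy rather than
# new enthalpic contacts.
dG, K = binding_thermodynamics(dH=5000.0, dS=100.0)
# dG < 0: binding is favorable despite the unfavorable enthalpy.
```

An entropy-dominated signature of this kind is consistent with recognition driven by the conformational restriction of oxidized targets rather than by specific bonding interactions.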
Experimental and computational investigation of lateral gauge response in polycarbonate
NASA Astrophysics Data System (ADS)
Eliot, Jim; Harris, Ernest Joseph; Hazell, Paul; Appleby-Thomas, Gareth James; Winter, Ron; Wood, David Christopher
2012-03-01
The shock behaviour of polycarbonate is of interest due to its extensive use in defence applications. Interestingly, embedded lateral manganin stress gauges in polycarbonate have shown gradients behind incident shocks, suggestive of increasing shear strength. However, such gauges are commonly embedded in a central epoxy interlayer. This is an inherently invasive approach. Recently, research has suggested that in such systems interlayer/target impedance may contribute to observed gradients in lateral stress. Here, experimental T-gauge (Vishay Micro-Measurements® type J2M-SS-580SF-025) traces from polycarbonate targets are compared to computational simulations. The effects of gauge environment are investigated by looking at the response of lateral gauges with both standard "glued-joint" and a "dry joint" encapsulation, where no encapsulating medium is employed.
High-resolution streaming video integrated with UGS systems
NASA Astrophysics Data System (ADS)
Rohrer, Matthew
2010-04-01
Imagery has proven to be a valuable complement to Unattended Ground Sensor (UGS) systems. It provides ultimate verification of the nature of detected targets. However, due to the power, bandwidth, and technological limitations inherent to UGS, sacrifices have been made to the imagery portion of such systems. The result is that these systems produce lower-resolution images in small quantities. Currently, a high-resolution, wireless imaging system is being developed to bring megapixel, streaming video to remote locations and to operate in concert with UGS. This paper provides an overview of how using Wi-Fi radios, new image-based Digital Signal Processors (DSPs) running advanced target detection algorithms, and high-resolution cameras gives the user an opportunity to take high-powered video imagers to areas where power conservation is a necessity.
Retroviral integration: Site matters
Demeulemeester, Jonas; De Rijck, Jan
2015-01-01
Here, we review genomic target site selection during retroviral integration as a multistep process in which specific biases are introduced at each level. The first asymmetries are introduced when the virus takes a specific route into the nucleus. Next, by co‐opting distinct host cofactors, the integration machinery is guided to particular chromatin contexts. As the viral integrase captures a local target nucleosome, specific contacts introduce fine‐grained biases in the integration site distribution. In vivo, the established population of proviruses is subject to both positive and negative selection, thereby continuously reshaping the integration site distribution. By affecting stochastic proviral expression as well as the mutagenic potential of the virus, integration site choice may be an inherent part of the evolutionary strategies used by different retroviruses to maximise reproductive success. PMID:26293289
Addressing variability in the acoustic startle reflex for accurate gap detection assessment.
Longenecker, Ryan J; Kristaponyte, Inga; Nelson, Gregg L; Young, Jesse W; Galazyuk, Alexander V
2018-06-01
The acoustic startle reflex (ASR) is subject to substantial variability. This inherent variability consequently shapes the conclusions drawn from gap-induced prepulse inhibition of the acoustic startle reflex (GPIAS) assessments. Recent studies have cast doubt on the efficacy of this methodology as it pertains to tinnitus assessment, partly due to variability within and between data sets. The goal of this study was to examine the variance associated with several common data collection variables and data analyses, with the aim of improving GPIAS reliability. To study this, GPIAS tests were conducted in adult male and female CBA/CaJ mice. Factors such as inter-trial interval, circadian rhythm, sex differences, and sensory adaptation were each evaluated. We then examined various data analysis factors which influence GPIAS assessment. Gap-induced facilitation, data processing options, and assessments of tinnitus were studied. We found that the startle reflex is highly variable in CBA/CaJ mice, but this can be minimized by certain data collection factors. We also found that careful consideration of temporal fluctuations of the ASR and controlling for facilitation can lead to more accurate GPIAS results. This study provides a guide for reducing variance in the GPIAS methodology, thereby improving the diagnostic power of the test. Copyright © 2018 Elsevier B.V. All rights reserved.
Isazadeh, Siavash; Jauffur, Shameem; Frigon, Dominic
2016-12-01
The effects of ecological variables on the community assembly of heterotrophic bacteria at eight full-scale and two pilot-scale activated sludge wastewater treatment plants (AS-WWTPs) were explored by pyrosequencing of 16S rRNA gene amplicons. In total, 39 samples covering a range of abiotic factors spread over space and time were analyzed. A core bacterial community of 24 families detected in at least six of the eight AS-WWTPs was defined. In addition to the core families, plant-specific families (observed at <50% of AS-WWTPs) were also found to be important in the community structure. Observed beta diversity was partitioned with respect to ecological variables. Specifically, the following variables were considered: influent wastewater characteristics, season (winter vs. summer), process operations (conventional, oxidation ditch, and sequencing batch reactor), reactor sizes (pilot-scale vs. full-scale reactors), chemical stresses defined by ozonation of return activated sludge, interannual variation, and geographical locations. Among the assessed variables, influent wastewater characteristics and geographical locations contributed the most to explaining the differences between AS-WWTP bacterial communities, with a maximum of approximately 26% of the observed variations. Partitioning of beta diversity is necessary to interpret the inherent variability in microbial community assembly and to identify the driving forces at play in engineered microbial ecosystems. © 2016 The Authors. MicrobiologyOpen published by John Wiley & Sons Ltd.
Katsanos, Dimitris; Koneru, Sneha L.; Mestek Boukhibar, Lamia; Gritti, Nicola; Ghose, Ritobrata; Appleford, Peter J.; Doitsidou, Maria; Woollard, Alison; van Zon, Jeroen S.; Poole, Richard J.
2017-01-01
Biological systems are subject to inherent stochasticity. Nevertheless, development is remarkably robust, ensuring the consistency of key phenotypic traits such as correct cell numbers in a given tissue. It is currently unclear which genes modulate phenotypic variability, what their relationship is to core components of developmental gene networks, and what the developmental basis of variable phenotypes is. Here, we start addressing these questions using the robust number of Caenorhabditis elegans epidermal stem cells, known as seam cells, as a readout. We employ genetics, cell lineage tracing, and single-molecule imaging to show that mutations in lin-22, a Hes-related basic helix-loop-helix (bHLH) transcription factor, increase seam cell number variability. We show that the increase in phenotypic variability is due to stochastic conversion of normally symmetric cell divisions to asymmetric and vice versa during development, which affects the terminal seam cell number in opposing directions. We demonstrate that LIN-22 acts within the epidermal gene network to antagonise the Wnt signalling pathway. However, lin-22 mutants exhibit cell-to-cell variability in Wnt pathway activation, which correlates with and may drive phenotypic variability. Our study demonstrates the feasibility of studying phenotypic trait variance in tractable model organisms using unbiased mutagenesis screens. PMID:29108019
Stochastic inversion of ocean color data using the cross-entropy method.
Salama, Mhd Suhyb; Shen, Fang
2010-01-18
Improving the inversion of ocean color data is an ever-continuing effort to increase the accuracy of derived inherent optical properties. In this paper we present a stochastic inversion algorithm to derive inherent optical properties from ocean color, shipborne, and spaceborne data. The inversion algorithm is based on the cross-entropy method, in which sets of inherent optical properties are generated and converged to the optimal set using an iterative process. The algorithm is validated against four data sets: simulated, noisy simulated, in situ measured, and satellite match-up data sets. Statistical analysis of validation results is based on model-II regression using five goodness-of-fit indicators; only R2 and the root mean square error (RMSE) are mentioned hereafter. Accurate values of the total absorption coefficient are derived with R2 > 0.91 and an RMSE, of log-transformed data, of less than 0.55. Reliable values of the total backscattering coefficient are also obtained with R2 > 0.7 (after removing outliers) and RMSE < 0.37. The developed algorithm has the ability to derive reliable results from noisy data, with R2 above 0.96 for the total absorption and above 0.84 for the backscattering coefficients. The algorithm is self-contained and easy to implement and modify to derive the variability of chlorophyll-a absorption that may correspond to different phytoplankton species. It gives consistently accurate results and is therefore worth considering for ocean color global products.
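The cross-entropy iteration described in this abstract (generate candidate parameter sets, keep the best-fitting fraction, refit the sampling distribution, repeat) can be sketched generically. This is a minimal illustration under stated assumptions, not the authors' implementation: the quadratic `forward` model, the parameter names, and all tuning constants are hypothetical stand-ins for a real bio-optical reflectance model.

```python
import numpy as np

def cross_entropy_invert(forward, observed, n_params, n_samples=200,
                         elite_frac=0.1, n_iter=50, seed=0):
    """Generic cross-entropy parameter search (toy sketch).

    Samples candidate parameter sets from a Gaussian, keeps the elite
    fraction with the lowest misfit to the observation, and refits the
    Gaussian to the elites until it converges on an optimal set.
    """
    rng = np.random.default_rng(seed)
    mu = np.zeros(n_params)
    sigma = np.ones(n_params)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iter):
        candidates = rng.normal(mu, sigma, size=(n_samples, n_params))
        misfit = np.array([np.sum((forward(c) - observed) ** 2)
                           for c in candidates])
        elites = candidates[np.argsort(misfit)[:n_elite]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-8
    return mu

# Toy forward model standing in for a reflectance model: the "inherent
# optical properties" here are just two coefficients of a quadratic
# spectrum over hypothetical normalized wavelengths.
wavelengths = np.linspace(0.0, 1.0, 20)
true_params = np.array([0.3, 1.2])

def forward(p):
    return p[0] + p[1] * wavelengths ** 2

observed = forward(true_params)
estimate = cross_entropy_invert(forward, observed, n_params=2)
```

In the real algorithm, the forward model would map inherent optical properties to remote-sensing reflectance, and the shrinkage of `sigma` would serve as the stopping criterion.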
Westendorp, Hendrik; Surmann, Kathrin; van de Pol, Sandrine M G; Hoekstra, Carel J; Kattevilder, Robert A J; Nuver, Tonnis T; Moerland, Marinus A; Slump, Cornelis H; Minken, André W
The quality of permanent prostate brachytherapy can be increased by the addition of imaging modalities to the intraoperative procedure. This addition involves image registration, which inherently has inter- and intraobserver variabilities. We sought to quantify the inter- and intraobserver variabilities in geometry and dosimetry for contouring and image registration and to analyze the results for our dynamic 125I brachytherapy procedure. Five observers contoured 11 transrectal ultrasound (TRUS) data sets three times and 11 CT data sets once. The observers registered 11 TRUS and MRI data sets to cone beam CT (CBCT) using fiducial gold markers. Geometrical and dosimetrical inter- and intraobserver variabilities were assessed. For the contouring study, structures were subdivided into three parts along the craniocaudal axis. We analyzed 165 observations. Interobserver geometrical variability for the prostate was 1.1 mm, resulting in a dosimetric variability of 1.6% for V100 and 9.3% for D90. The geometric intraobserver variability was 0.6 mm with a V100 of 0.7% and D90 of 1.1%. TRUS-CBCT registration showed an interobserver variability in V100 of 2.0% and D90 of 3.1%. Intraobserver variabilities were 0.9% and 1.6%, respectively. For MRI-CBCT registration, V100 and D90 variabilities were 1.3% and 2.1%, with intraobserver variabilities of 0.7% and 1.1%, respectively. Prostate dosimetry is affected by interobserver contouring and registration variability. The observed variability is smaller than the underdosages that are adapted during our dynamic brachytherapy procedure. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
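The dose metrics reported above can be computed directly from per-voxel doses. A minimal sketch with hypothetical numbers (the voxel doses and the 145 Gy prescription are illustrative, not values from the study; treating D90 as the 10th percentile assumes equal voxel volumes):

```python
import numpy as np

def v100_d90(voxel_doses, prescription):
    """Dose-volume metrics from per-voxel doses (equal voxel volumes).

    V100: percentage of the volume receiving at least the prescription
    dose.  D90: the minimum dose received by the hottest 90% of the
    volume, i.e. the 10th percentile of the voxel doses.
    """
    doses = np.asarray(voxel_doses, dtype=float)
    v100 = 100.0 * np.mean(doses >= prescription)
    d90 = np.percentile(doses, 10)
    return v100, d90

# Hypothetical prostate voxel doses (Gy) around a 145 Gy prescription.
doses = [150.0, 160.0, 145.0, 140.0, 170.0,
         155.0, 148.0, 142.0, 165.0, 158.0]
v100, d90 = v100_d90(doses, prescription=145.0)
```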
Synthetic aperture radar operator tactical target acquisition research
NASA Technical Reports Server (NTRS)
Hershberger, M. L.; Craig, D. W.
1978-01-01
A radar target acquisition research study was conducted to assess the effects of two levels of 13 radar sensor, display, and mission parameters on operator tactical target acquisition. A saturated fractional-factorial screening design was employed to examine these parameters. Data analysis computed eta-squared values for main and second-order effects for the variables tested. Ranking of the research parameters in terms of importance to system design revealed that four variables (radar coverage, radar resolution/multiple looks, display resolution, and display size) accounted for 50 percent of the target acquisition probability variance.
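Eta squared, the effect-size measure used in the analysis above, is simply the between-level sum of squares divided by the total sum of squares. A minimal sketch with hypothetical data (the acquisition probabilities and the two-level factor coding below are illustrative, not values from the study):

```python
import numpy as np

def eta_squared(response, factor_levels):
    """Eta squared for a single factor: SS_between / SS_total."""
    response = np.asarray(response, dtype=float)
    factor_levels = np.asarray(factor_levels)
    grand_mean = response.mean()
    ss_total = np.sum((response - grand_mean) ** 2)
    ss_between = 0.0
    for level in np.unique(factor_levels):
        group = response[factor_levels == level]
        ss_between += len(group) * (group.mean() - grand_mean) ** 2
    return ss_between / ss_total

# Hypothetical acquisition probabilities at two levels of one radar
# parameter (e.g. coverage), four runs per level.
levels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
prob = np.array([0.52, 0.55, 0.50, 0.53, 0.71, 0.74, 0.69, 0.72])
effect = eta_squared(prob, levels)
```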
EEG and Eye Tracking Signatures of Target Encoding during Structured Visual Search
Brouwer, Anne-Marie; Hogervorst, Maarten A.; Oudejans, Bob; Ries, Anthony J.; Touryan, Jonathan
2017-01-01
EEG and eye tracking variables are potential sources of information about the underlying processes of target detection and storage during visual search. Fixation duration, pupil size and event related potentials (ERPs) locked to the onset of fixation or saccade (saccade-related potentials, SRPs) have been reported to differ dependent on whether a target or a non-target is currently fixated. Here we focus on the question of whether these variables also differ between targets that are subsequently reported (hits) and targets that are not (misses). Observers were asked to scan 15 locations that were consecutively highlighted for 1 s in pseudo-random order. Highlighted locations displayed either a target or a non-target stimulus with two, three or four targets per trial. After scanning, participants indicated which locations had displayed a target. To induce memory encoding failures, participants concurrently performed an aurally presented math task (high load condition). In a low load condition, participants ignored the math task. As expected, more targets were missed in the high compared with the low load condition. For both conditions, eye tracking features distinguished better between hits and misses than between targets and non-targets (with larger pupil size and shorter fixations for missed compared with correctly encoded targets). In contrast, SRP features distinguished better between targets and non-targets than between hits and misses (with average SRPs showing larger P300 waveforms for targets than for non-targets). Single trial classification results were consistent with these averages. This work suggests complementary contributions of eye and EEG measures in potential applications to support search and detect tasks. SRPs may be useful to monitor what objects are relevant to an observer, and eye variables may indicate whether the observer should be reminded of them later. PMID:28559807
A new FOD recognition algorithm based on multi-source information fusion and experiment analysis
NASA Astrophysics Data System (ADS)
Li, Yu; Xiao, Gang
2011-08-01
Foreign Object Debris (FOD) is any substance, debris, or article alien to an aircraft or system that can cause severe damage when it appears on an airport runway. Given the airport's complex environment, quick and precise detection of FOD targets on the runway is an important protection for aircraft safety. A multi-sensor system including millimeter-wave radar and infrared (IR) image sensors is introduced, and a new FOD detection and recognition algorithm based on inherent features of FOD is proposed in this paper. Firstly, the FOD's location and coordinates are accurately obtained by the millimeter-wave radar; according to these coordinates, the IR camera then takes target images and background images. Secondly, the runway's edges, which are straight lines, are extracted from the IR image using the Hough transform. The potential target region, that is, the runway region, can then be segmented from the whole image. Thirdly, background subtraction is utilized to localize the FOD target within the runway region. Finally, in the detailed small images of the FOD target, a new characteristic is discussed and used for target classification. The experimental results show that this algorithm can effectively reduce computational complexity, satisfy the real-time requirement, and achieve high detection and recognition probability.
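The background-subtraction step of such a pipeline can be sketched with plain NumPy. This is an illustrative toy, not the authors' implementation: the threshold, minimum-pixel count, and synthetic images are hypothetical, and the preceding Hough-transform runway segmentation is omitted.

```python
import numpy as np

def detect_fod(background, current, diff_thresh=30, min_pixels=5):
    """Localize a FOD candidate by background subtraction.

    Subtracts a stored runway-region background image from the current
    IR frame, thresholds the absolute difference, and returns the
    centroid (row, col) of the changed pixels, or None if too few
    pixels changed to count as a detection.
    """
    diff = np.abs(current.astype(int) - background.astype(int))
    mask = diff > diff_thresh
    if mask.sum() < min_pixels:
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic example: a flat runway background, with a small bright
# object added at (40, 60) in the current frame.
background = np.full((100, 100), 80, dtype=np.uint8)
current = background.copy()
current[38:43, 58:63] = 200
centroid = detect_fod(background, current)
```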
Attentional Demand of a Virtual Reality-Based Reaching Task in Nondisabled Older Adults.
Chen, Yi-An; Chung, Yu-Chen; Proffitt, Rachel; Wade, Eric; Winstein, Carolee
2015-12-01
Attention during exercise is known to affect performance; however, the attentional demand inherent to virtual reality (VR)-based exercise is not well understood. We used a dual-task paradigm to compare the attentional demands of VR-based and non-VR-based (conventional, real-world) exercise: 22 non-disabled older adults performed a primary reaching task to virtual and real targets in a counterbalanced block order while verbally responding to an unanticipated auditory tone in one third of the trials. The attentional demand of the primary reaching task was inferred from the voice response time (VRT) to the auditory tone. Participants' engagement level and task experience were also obtained using questionnaires. The virtual target condition was more attention demanding (significantly longer VRT) than the real target condition. Secondary analyses revealed a significant interaction between engagement level and target condition on attentional demand. For participants who were highly engaged, attentional demand was high and independent of target condition. However, for those who were less engaged, attentional demand was low and depended on target condition (i.e., virtual > real). These findings add important knowledge to the growing body of research pertaining to the development and application of technology-enhanced exercise for elders and for rehabilitation purposes.
Lee, Kyoung Jin; Shin, Seol Hwa; Lee, Jae Hee; Ju, Eun Jin; Park, Yun-Yong; Hwang, Jung Jin; Suh, Young-Ah; Hong, Seung-Mo; Jang, Se Jin; Lee, Jung Shin; Song, Si Yeol; Jeong, Seong-Yun; Choi, Eun Kyung
2017-10-01
Designing nanocarriers with active targeting has been increasingly emphasized as an ideal delivery mechanism for anti-cancer therapeutic agents, but its realization has been constrained by the lack of a reliable, ultimately applicable strategy. Here, we designed and verified a strategy to achieve active-targeting nanomedicine that works in a living body, utilizing animal models bearing a patient's tumor tissue and subjected to the same treatments that would be used in the clinic. The concept behind this strategy was to identify a novel peptide probe and its counterpart protein that respond to a therapy, and then to use the peptide's inherent ability to target the designated tumor protein for active targeting in vivo. An initial dose of ionizing radiation was locally delivered to the gastric cancer (GC) tumor of a patient-derived xenograft mouse model, and a phage-displayed peptide library was intravenously injected. The peptides tightly bound to the tumor were recovered, and the counterpart protein was subsequently identified. The peptide-conjugated liposomal drug showed dramatically improved therapeutic efficacy and the possibility of diagnostic imaging with radiation. These results strongly suggest the potential of our strategy to achieve functional active targeting in vivo and to be applied clinically for human cancer treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimal Path to a Laser Fusion Energy Power Plant
NASA Astrophysics Data System (ADS)
Bodner, Stephen
2013-10-01
There was a decision in the mid 1990s to attempt ignition using indirect-drive targets. It is now obvious that this decision was unjustified. The target design was too geometrically complex, too inefficient, and too far above plasma instability thresholds. By that same time, the mid 1990s, there had also been major advances in the direct-drive target concept. It also was not yet ready for a major test. Now, finally, because of significant advances in target designs, laser-target experiments, and laser development, the direct-drive fusion concept is ready for significant enhancements in funding, on the path to commercial fusion energy. There are two laser contenders. A KrF laser is attractive because of its shortest wavelength, broad bandwidth, and superb beam uniformity. A frequency-converted DPSSL has the disadvantage of inherently narrow bandwidth and longer wavelength, but by combining many beams in parallel one might be able to produce at the target the equivalent of an ultra-broad bandwidth. One or both of these lasers may also meet all of the engineering and economic requirements for a reactor. It is time to further develop and evaluate these two lasers as rep-rate systems, in preparation for a future high-gain fusion test.
Strategies for target identification of antimicrobial natural products.
Farha, Maya A; Brown, Eric D
2016-05-04
Covering: 2000 to 2015. Despite a pervasive decline in natural product research at many pharmaceutical companies over the last two decades, natural products have undeniably been a prolific and unsurpassed source for new lead antibacterial compounds. Due to their inherent complexity, natural extracts face several hurdles in high-throughput discovery programs, including target identification. Target identification and validation is a crucial process for advancing hits through the discovery pipeline, but has remained a major bottleneck. In the case of natural products, extremely low yields and limited compound supply further impede the process. Here, we review the wealth of target identification strategies that have been proposed and implemented for the characterization of novel antibacterials. Traditionally, these have included genomic and biochemical approaches, which, in recent years, have been improved with modern-day technology and better honed for natural product discovery. Further, we discuss the more recent innovative approaches for uncovering the targets of new antibacterial natural products, which have resulted from modern advances in chemical biology tools. Finally, we present unique screening platforms implemented to streamline the process of target identification. The different innovative methods to respond to the challenge of characterizing the mode of action of antibacterial natural products have cumulatively built useful frameworks that may renew interest in natural product drug discovery programs.
Collioud, A; Clémence, J F; Sänger, M; Sigrist, H
1993-01-01
Light-dependent oriented and covalent immobilization of target molecules has been achieved by combining two modification procedures: light-dependent coupling of target molecules to inert surfaces and thiol-selective reactions occurring at macromolecule or substrate surfaces. For immobilization purposes the heterobifunctional reagent N-[m-[3-(trifluoromethyl)diazirin-3-yl]phenyl]-4-maleimidobutyramide was synthesized and chemically characterized. The photosensitivity of the carbene-generating reagent and its reactivity toward thiols were ascertained. Light-induced cross-linking properties of the reagent were documented (i) by reacting first the maleimide function with a thiolated surface, followed by carbene insertion into applied target molecules, (ii) by photochemical coupling of the reagent to an inert support followed by thermochemical reactions with thiol functions, and (iii) by thermochemical modification of target molecules prior to carbene-mediated insertion into surface materials. Procedures mentioned led to light-dependent covalent immobilization of target molecules including amino acids, a synthetic peptide, and antibody-derived F(ab') fragments. Topically selective, light-dependent immobilization was attained with the bifunctional reagent by irradiation of coated surfaces through patterned masks. Glass and polystyrene served as substrates. Molecular orientation is asserted by inherently available or selectively introduced terminal thiol functions in F(ab') fragments and synthetic polypeptides, respectively.
Kang, S; Lu, K; Leelawattanachai, J; Hu, X; Park, S; Park, T; Min, I M; Jin, M M
2013-11-01
Systemic and target-specific delivery of large genetic contents has been difficult to achieve. Although viruses effortlessly deliver kilobase-long genome into cells, its clinical use has been hindered by serious safety concerns and the mismatch between native tropisms and desired targets. Nonviral vectors, in contrast, are limited by low gene transfer efficiency and inherent cytotoxicity. Here we devised virus-mimetic polyplex particles (VMPs) based on electrostatic self-assembly among polyanionic peptide (PAP), cationic polymer polyethyleneimine (PEI) and nucleic acids. We fused PAP to the engineered ligand-binding domain of integrin αLβ2 to target intercellular adhesion molecule-1 (ICAM-1), an inducible marker of inflammation. Fully assembled VMPs packaged large genetic contents, bound specifically to target molecules, elicited receptor-mediated endocytosis and escaped endosomal pathway, resembling intracellular delivery processes of viruses. Unlike conventional PEI-mediated transfection, molecular interaction-dependent gene delivery of VMPs was unaffected by the presence of serum and achieved higher efficiency without toxicity. By targeting overexpressed ICAM-1, VMPs delivered genes specifically to inflamed endothelial cells and macrophages both in vitro and in vivo. Simplicity and versatility of the platform and inflammation-specific delivery may open up opportunities for multifaceted gene therapy that can be translated into the clinic and treat a broad range of debilitating immune and inflammatory diseases.
Development of variable-rate precision spraying systems for tree crop production
USDA-ARS?s Scientific Manuscript database
Excessive pesticides are often applied to target and non-target areas in orchards and nurseries, resulting in greater production costs, worker exposure to unnecessary pesticide risks, and adverse contamination of the environment. To improve spray application efficiency, two types of variable-rate pr...
Detection and identification of human targets in radar data
NASA Astrophysics Data System (ADS)
Gürbüz, Sevgi Z.; Melvin, William L.; Williams, Douglas B.
2007-04-01
Radar offers unique advantages over other sensors, such as visual or seismic sensors, for human target detection. Many situations, especially military applications, prevent the placement of video cameras or the implantation of seismic sensors in the area being observed, because of security or other threats. However, radar can operate far away from potential targets, and functions during daytime as well as nighttime, in virtually all weather conditions. In this paper, we examine the problem of human target detection and identification using single-channel, airborne, synthetic aperture radar (SAR). Human targets are differentiated from other detected slow-moving targets by analyzing the spectrogram of each potential target. Human spectrograms are unique, and can be used not just to identify targets as human, but also to determine features about the human target being observed, such as size, gender, action, and speed. A 12-point human model, together with kinematic equations of motion for each body part, is used to calculate the expected target return and spectrogram. A MATLAB simulation environment is developed, including ground clutter and human and non-human targets, for the testing of spectrogram-based detection and identification algorithms. Simulations show that spectrograms have some ability to detect and identify human targets in low noise. An example gender discrimination system correctly detected 83.97% of males and 91.11% of females. The problems and limitations of spectrogram-based methods in high clutter environments are discussed. The SNR loss inherent to spectrogram-based methods is quantified. An alternate detection and identification method that will be used as a basis for future work is proposed.
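A micro-Doppler spectrogram of the kind analyzed here can be illustrated with a toy signal. This sketch (written in Python rather than the MATLAB environment the paper describes) replaces the 12-point kinematic model with a single sinusoidally modulated Doppler tone; all frequencies and amplitudes are hypothetical.

```python
import numpy as np
from scipy.signal import spectrogram

# Simulated radar return from a moving body with sinusoidally varying
# Doppler (gait-induced micro-Doppler): a stand-in for summing returns
# over a kinematic body model.
fs = 1000.0                      # pulse repetition frequency, Hz
t = np.arange(0, 2.0, 1 / fs)
f_doppler = 100.0                # mean Doppler of the moving body, Hz
f_gait = 2.0                     # gait cycle frequency, Hz
beta = 10.0                      # modulation index (+/- 20 Hz swing)
phase = 2 * np.pi * f_doppler * t + beta * np.sin(2 * np.pi * f_gait * t)
rng = np.random.default_rng(0)
signal = np.cos(phase) + 0.1 * rng.standard_normal(t.size)

f, times, Sxx = spectrogram(signal, fs=fs, nperseg=128, noverlap=96)
# The dominant frequency in each time slice traces the micro-Doppler
# oscillation around the 100 Hz body line.
ridge = f[np.argmax(Sxx, axis=0)]
```

Classification features (such as the gait frequency or the width of the micro-Doppler swing) would then be extracted from the `ridge` trace.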
Kantak, Shailesh S; Sullivan, Katherine J; Fisher, Beth E; Knowlton, Barbara J; Winstein, Carolee J
2011-01-01
The authors investigated how brain activity during motor-memory consolidation contributes to transfer of learning to novel versions of a motor skill following distinct practice structures. They used 1 Hz repetitive Transcranial Magnetic Stimulation (rTMS) immediately after constant or variable practice of an arm movement skill to interfere with primary motor cortex (M1) or dorsolateral prefrontal cortex (DLPFC). The effect of interference was assessed through skill performance on two transfer targets: one within and one outside the range of practiced movement parameters for the variable practice group. For the control (no rTMS) group, variable practice benefited delayed transfer performance more than constant practice. The rTMS effect on delayed transfer performance differed for the two transfer targets. For the within-range target, rTMS interference had no significant effect on the delayed transfer after either practice structure. However, for the outside-range target, rTMS interference to DLPFC but not M1 attenuated the delayed transfer benefit following variable practice. Additionally, for the outside-range target, rTMS interference to M1 but not DLPFC attenuated delayed transfer following constant practice. This suggests that variable practice may promote reliance on DLPFC for memory consolidation associated with outside-range transfer of learning, whereas constant practice may promote reliance on M1 for consolidation and long-term transfer.
The influence of phonological priming on variability in articulation
NASA Astrophysics Data System (ADS)
Babel, Molly E.; Munson, Benjamin
2004-05-01
Previous research [Sevald and Dell, Cognition 53, 91-127 (1994)] has found that reiterant sequences of CVC words are produced more quickly when the prime word and target word share VC sequences (i.e., sequences like sit sick) than when they are identical (sequences like sick sick). Even slower production rates are found when primes and targets share a CV sequence (sequences like kick sick). These data have been used to support a model of speech production in which lexical items and their constituent phonemes are activated sequentially. The current experiment investigated whether phonological priming also influences variability in the acoustic characteristics of words. Specifically, we examined whether greater variability in the acoustic characteristics of target words was noted in the CV-related prime context than in the identical-prime context, and whether less variability was noted in the VC-related context. Thirty adult subjects with typical speech, language, and hearing ability produced reiterant two-word sequences that varied in their phonological similarity. The duration and the first and second formant frequencies of the target words' vowels were measured. Preliminary analyses indicate that phonological priming does not have a systematic effect on variability in these acoustic parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carle, S F
Compositional data are represented as vector variables with individual vector components ranging between zero and a positive maximum value representing a constant sum constraint, usually unity (or 100 percent). The earth sciences are flooded with spatial distributions of compositional data, such as concentrations of major ion constituents in natural waters (e.g. mole, mass, or volume fractions), mineral percentages, ore grades, or proportions of mutually exclusive categories (e.g. a water-oil-rock system). While geostatistical techniques have become popular in earth science applications since the 1970s, very little attention has been paid to the unique mathematical properties of geostatistical formulations involving compositional variables. The book 'Geostatistical Analysis of Compositional Data' by Vera Pawlowsky-Glahn and Ricardo Olea (Oxford University Press, 2004), unlike any previous book on geostatistics, directly confronts the mathematical difficulties inherent to applying geostatistics to compositional variables. The book rightly justifies itself with prodigious referencing to previous work addressing nonsensical ranges of estimated values and error, spurious correlation, and singular cross-covariance matrices.
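The standard remedy for these difficulties is to move to log-ratio coordinates before applying conventional (geo)statistics. A minimal sketch of the centered log-ratio (clr) transform, assuming strictly positive parts:

```python
import numpy as np

def clr(composition):
    """Centered log-ratio transform of a composition.

    Maps a vector of positive parts summing to a constant into
    unconstrained real coordinates: the log of each part divided by
    the geometric mean of all parts. Variograms, covariances, and
    kriging can then be computed in the transformed space without the
    spurious correlation induced by the constant-sum constraint.
    """
    composition = np.asarray(composition, dtype=float)
    geometric_mean = np.exp(np.mean(np.log(composition)))
    return np.log(composition / geometric_mean)

# A hypothetical three-part mineral composition (fractions summing to 1).
x = np.array([0.6, 0.3, 0.1])
z = clr(x)
```

The clr coordinates sum to zero rather than to a constant, and the original composition is recovered by exponentiating and renormalizing.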
On the inherent competition between valid and spurious inductive inferences in Boolean data
NASA Astrophysics Data System (ADS)
Andrecut, M.
Inductive inference is the process of extracting general rules from specific observations. This problem also arises in the analysis of biological networks, such as genetic regulatory networks, where the interactions are complex and the observations are incomplete. A typical task in these problems is to extract general interaction rules as combinations of Boolean covariates, that explain a measured response variable. The inductive inference process can be considered as an incompletely specified Boolean function synthesis problem. This incompleteness of the problem will also generate spurious inferences, which are a serious threat to valid inductive inference rules. Using random Boolean data as a null model, here we attempt to measure the competition between valid and spurious inductive inference rules from a given data set. We formulate two greedy search algorithms, which synthesize a given Boolean response variable in a sparse disjunct normal form, and respectively a sparse generalized algebraic normal form of the variables from the observation data, and we evaluate numerically their performance.
DWPF Melter Off-Gas Flammability Assessment for Sludge Batch 9
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, A. S.
2016-07-11
The slurry feed to the Defense Waste Processing Facility (DWPF) melter contains several organic carbon species that decompose in the cold cap and produce flammable gases that could accumulate in the off-gas system and create a potential flammability hazard. To mitigate such a hazard, DWPF has implemented a strategy to impose the Technical Safety Requirement (TSR) limits on all key operating variables affecting off-gas flammability and operate the melter within those limits using both hardwired/software interlocks and administrative controls. The operating variables that are currently being controlled include: (1) total organic carbon (TOC), (2) air purges for combustion and dilution, (3) melter vapor space temperature, and (4) feed rate. The safety basis limits for these operating variables are determined using two computer models, 4-stage cold cap and Melter Off-Gas (MOG) dynamics models, under the baseline upset scenario: a surge in off-gas flow due to the inherent cold cap instabilities in the slurry-fed melter.
Perturbing engine performance measurements to determine optimal engine control settings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan
Methods and systems for optimizing a performance of a vehicle engine are provided. The method includes determining an initial value for a first engine control parameter based on one or more detected operating conditions of the vehicle engine, determining a value of an engine performance variable, and artificially perturbing the determined value of the engine performance variable. The initial value for the first engine control parameter is then adjusted based on the perturbed engine performance variable, causing the engine performance variable to approach a target engine performance variable. Operation of the vehicle engine is controlled based on the adjusted initial value for the first engine control parameter. These acts are repeated until the engine performance variable approaches the target engine performance variable.
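One plausible reading of the perturb-and-adjust loop described above, sketched with an invented single-parameter plant model, gains, and stopping rule (none of which are specified in the abstract): perturb the operating point, estimate the local slope of the performance variable, and step the control parameter until the measurement approaches the target.

```python
# Perturb-and-adjust tuning loop. The plant model, gains, and the
# spark-advance example are hypothetical, for illustration only.

def plant(spark_advance):
    # hypothetical torque response, peaking at spark_advance = 20
    return 100.0 - (spark_advance - 20.0) ** 2

def tune(param, target, gain=0.05, perturb=0.5, steps=300):
    for _ in range(steps):
        # perturb the operating point to estimate the local slope
        hi = plant(param + perturb)
        lo = plant(param - perturb)
        grad = (hi - lo) / (2 * perturb)
        param += gain * grad  # step the parameter up the slope
        if abs(plant(param) - target) < 0.1:
            break             # performance variable reached the target
    return param

param = tune(5.0, target=100.0)
assert abs(plant(param) - 100.0) < 0.1
```

The loop mirrors the claim's structure: an initial parameter value, a perturbed performance measurement, repeated adjustment toward the target.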
Snäll, Tord; Lehtomäki, Joona; Arponen, Anni; Elith, Jane; Moilanen, Atte
2016-02-01
There is high-level political support for the use of green infrastructure (GI) across Europe, to maintain viable populations and to provide ecosystem services (ES). Even though GI is inherently a spatial concept, the modern tools for spatial planning have not been recognized, such as in the recent European Environment Agency (EEA) report. We outline a toolbox of methods useful for GI design that explicitly accounts for biodiversity and ES. Data on species occurrence, habitats, and environmental variables are increasingly available via open-access internet platforms. Such data can be synthesized by statistical species distribution modeling, producing maps of biodiversity features. These, together with maps of ES, can form the basis for GI design. We argue that spatial conservation prioritization (SCP) methods are effective tools for GI design, as the overall SCP goal is cost-effective allocation of conservation efforts. Corridors are currently promoted by the EEA as the means for implementing GI design, but they typically target the needs of only a subset of the regional species pool. SCP methods would help to ensure that GI provides a balanced solution for the requirements of many biodiversity features (e.g., species, habitat types) and ES simultaneously in a cost-effective manner. Such tools are necessary to make GI into an operational concept for combating biodiversity loss and promoting ES.
Effective peer education in HIV: defining factors that maximise success.
Lambert, Steven M; Debattista, Joseph; Bodiroza, Aleksandar; Martin, Jack; Staunton, Shaun; Walker, Rebecca
2013-08-01
Background: Peer education is considered an effective health promotion and education strategy, particularly for populations traditionally resistant to conventional forms of health information dissemination. This has made it very applicable to HIV education and prevention, where those who are affected or at risk are often amongst the most vulnerable in society. However, there still remains uncertainty as to the reasons for its effectiveness, what constitutes an effective methodology and why a consistent methodology can often result in widely variable outcomes. Between 2008 and 2010, three separate reviews of peer education were undertaken across more than 30 countries in three distinct geographical regions across the globe. The reviews sought to identify determinants of the strengths and weaknesses inherent in approaches to peer education, particularly targeting young people and the most at-risk populations. By assessing the implementation of peer education programs across a variety of social environments, it was possible to develop a contextual understanding of peer education's effectiveness and provide a picture of the social, cultural, political, legal and geographic enablers and disablers to effective peer education. Several factors were significant contributors to program success, not as strategies of methodology, but as elements of the social, cultural, political and organisational context in which peer education was situated. Contextual elements create environments supportive of peer education. Consequently, adherence to a methodology or strategy without proper regard to its situational context rarely contributes to effective peer education.
Drug nano-reservoirs synthesized using layer-by-layer technologies.
Costa, Rui R; Alatorre-Meda, Manuel; Mano, João F
2015-11-01
The pharmaceutical industry has been able to tackle the emergence of new microorganisms and diseases by synthesizing new specialized drugs to counter them. Their administration must ensure that a drug is effectively encapsulated and protected until it reaches its target, and that it is released in a controlled way. Herein, the potential of layer-by-layer (LbL) structures to act as drug reservoirs is presented with an emphasis on "nano"-devices of various geometries, from planar coatings to fibers and capsules. The inherent versatile nature of this technique allows producing carriers resorting to distinct classes of materials, variable geometry and customized release profiles that fit within adequate criteria required for disease treatment or for novel applications in the tissue engineering field. The production methods of LbL reservoirs are varied and allow for different kinds of molecules to be incorporated, such as antibiotics, growth factors and biosensing substances, not limited to water-soluble molecules but including hydrophobic drugs. We will also debate the future of LbL in the pharmaceutical industry. Currently, multilayered structures are yet to be covered by the regulatory guidelines that govern the fabrication of nanotechnology products. However, as they stand now, LbL nanodevices have already shown usefulness for antifouling applications, gene therapy, nanovaccines and the formation of de novo tissues. Copyright © 2015 Elsevier Inc. All rights reserved.
Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann
2011-10-07
The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated as it is demonstrated that EPs reflect underlying myocardial activity. It is a well-known ill-posed problem since small noise in the input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem. But the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method in order to reduce the computational complexity and make rapid convergence possible. Variable splitting is employed to make the L1-norm penalty function differentiable, based on the observation that both positive and negative potentials exist on the epicardial surface. Then, the inverse problem of ECG is further formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic data and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtain more accurate results than previous L2- and L1-norm regularization methods, especially when the noises are large.
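The variable-splitting device mentioned here is a standard one: write x = u - v with u, v >= 0, so the L1 norm becomes the differentiable sum of all components of u and v, and the problem turns into a bound-constrained quadratic solvable by gradient projection. A minimal sketch of that general formulation on toy data (step size, iteration count, and the synthetic problem are assumptions; this is not the paper's ECG-specific solver):

```python
# L1-regularized least squares via variable splitting x = u - v,
# u, v >= 0, solved by projected gradient descent.

import numpy as np

def l1_ls_split(A, b, lam, iters=2000):
    m, n = A.shape
    # conservative step for the joint (u, v) quadratic
    step = 0.5 / np.linalg.norm(A, 2) ** 2
    u = np.zeros(n)
    v = np.zeros(n)
    for _ in range(iters):
        r = A @ (u - v) - b
        g = A.T @ r                                  # gradient w.r.t. x
        u = np.maximum(u - step * (g + lam), 0.0)    # project onto u >= 0
        v = np.maximum(v - step * (-g + lam), 0.0)   # project onto v >= 0
    return u - v

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]                         # sparse ground truth
b = A @ x_true
x = l1_ls_split(A, b, lam=0.05)
assert np.argmax(np.abs(x)) in (2, 7)
```

Because the penalty is differentiable on the split variables, each iteration is just a gradient step followed by clipping at zero, which is why this family of methods converges quickly compared with non-smooth L1 solvers.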
Rich, Ryan M.; Stankowska, Dorota L.; Maliwal, Badri P.; Sørensen, Thomas Just; Laursen, Bo W.; Krishnamoorthy, Raghu R.; Gryczynski, Zygmunt; Borejdo, Julian
2013-01-01
Sample autofluorescence (fluorescence of inherent components of tissue and fixative-induced fluorescence) is a significant problem in direct imaging of molecular processes in biological samples. A large variety of naturally occurring fluorescent components in tissue results in broad emission that overlaps the emission of typical fluorescent dyes used for tissue labeling. In addition, autofluorescence is characterized by complex fluorescence intensity decay composed of multiple components whose lifetimes range from sub-nanoseconds to a few nanoseconds. For these reasons, the real fluorescence signal of the probe is difficult to separate from the unwanted autofluorescence. Here we present a method for reducing the autofluorescence problem by utilizing an azadioxatriangulenium (ADOTA) dye with a fluorescence lifetime of approximately 15 ns, much longer than those of most of the components of autofluorescence. A probe with such a long lifetime enables us to use time-gated intensity imaging to separate the signal of the targeting dye from the autofluorescence. We have shown experimentally that by discarding photons detected within the first 20 ns of the excitation pulse, the signal-to-background ratio is improved fivefold. This time-gating eliminates over 96 % of autofluorescence. Analysis using a variable time-gate may enable quantitative determination of the bound probe without the contributions from the background. PMID:23254457
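The time-gating step is simple to state numerically: with a ~15 ns probe lifetime and sub-nanosecond to few-nanosecond autofluorescence lifetimes, discarding photons that arrive within the first 20 ns of the pulse removes nearly all background while keeping a usable fraction of the probe signal. A sketch with simulated mono-exponential arrival times (the lifetimes follow the abstract; the counts and the single-exponential background are simplifying assumptions):

```python
# Time-gated separation of a long-lifetime probe from short-lived
# autofluorescence: drop photons arriving before the gate opens.

import random

def gated(arrival_ns, gate_ns=20.0):
    """Keep only photons detected after the gate opens."""
    return [t for t in arrival_ns if t >= gate_ns]

random.seed(1)
probe = [random.expovariate(1 / 15.0) for _ in range(10_000)]  # tau = 15 ns
auto = [random.expovariate(1 / 2.0) for _ in range(10_000)]    # tau = 2 ns

kept_probe = gated(probe)
kept_auto = gated(auto)

# exp(-20/2) ~ 5e-5 of a 2 ns background survives the gate, while
# exp(-20/15) ~ 26% of the 15 ns probe signal is retained.
assert len(kept_auto) < 0.01 * len(auto)
assert len(kept_probe) > 0.2 * len(probe)
```

The ratio of those two survival fractions is what drives the signal-to-background improvement the authors report.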
Sadras, Teresa; Heatley, Susan L; Kok, Chung H; McClure, Barbara J; Yeung, David; Hughes, Timothy P; Sutton, Rosemary; Ziegler, David S; White, Deborah L
2017-10-01
We report a novel somatic mutation in the kinase domain of JAK2 (R938Q) in a high-risk pediatric case of B-cell acute lymphoblastic leukemia (ALL). The patient developed on-therapy relapse at 12 months, and interestingly, the JAK2 locus acquired loss of heterozygosity during treatment resulting in 100% mutation load. Furthermore, we show that primary ALL mononuclear cells harboring the JAK2 R938Q mutation display reduced sensitivity to the JAK1/2 ATP-competitive inhibitor ruxolitinib in vitro, compared to ALL cells that carry a more common JAK2 pseudokinase domain mutation. Our findings are in line with previous reports that demonstrate that mutations within the kinase domain of JAK2 are associated with resistance to type I JAK inhibitors. Importantly, given the recent inclusion of ruxolitinib in trial protocols for children with JAK pathway alterations, we predict that inter-patient genetic variability may result in suboptimal responses to JAK inhibitor therapy in a subset of cases. The need for alternate targeted and/or combination therapies for patients who display inherent or developed resistance to JAK inhibitor therapy will be warranted, and we propose that kinase-mutants less sensitive to type I JAK inhibitors may present a currently unexplored platform for investigation of improved therapies. Copyright © 2017. Published by Elsevier Inc.
Luks, Lisanne; Maier, Marcia Y; Sacchi, Silvia; Pollegioni, Loredano; Dietrich, Daniel R
2017-11-01
Proper subcellular trafficking is essential to prevent protein mislocalization and aggregation. Transport of the peroxisomal enzyme D-amino acid oxidase (DAAO) appears dysregulated by specific pharmaceuticals, e.g., the anti-overactive bladder drug propiverine or a norepinephrine/serotonin reuptake inhibitor (NSRI), resulting in massive cytosolic and nuclear accumulations in rat kidney. To assess the underlying molecular mechanism of the latter, we aimed to characterize the nature of peroxisomal and cyto-nuclear shuttling of human and rat DAAO overexpressed in three cell lines using confocal microscopy. Indeed, interference with peroxisomal transport via deletion of the PTS1 signal or PEX5 knockdown resulted in induced nuclear DAAO localization. Having demonstrated the absence of active nuclear import and employing variably sized mCherry- and/or EYFP-fusion proteins of DAAO and catalase, we showed that peroxisomal proteins ≤134 kDa can passively diffuse into mammalian cell nuclei, thereby contradicting the often-cited 40 kDa diffusion limit. Moreover, their inherent nuclear presence and nuclear accumulation subsequent to proteasome inhibition or abrogated peroxisomal transport suggests that nuclear localization is a characteristic in the lifecycle of peroxisomal proteins. Based on this molecular trafficking analysis, we suggest that pharmaceuticals like propiverine or an NSRI may interfere with peroxisomal protein targeting and import, consequently resulting in massive nuclear protein accumulation in vivo.
Jarnuczak, Andrew F; Lee, Dave C H; Lawless, Craig; Holman, Stephen W; Eyers, Claire E; Hubbard, Simon J
2016-09-02
Quantitative mass spectrometry-based proteomics of complex biological samples remains challenging in part due to the variability and charge competition arising during electrospray ionization (ESI) of peptides and the subsequent transfer and detection of ions. These issues preclude direct quantification from signal intensity alone in the absence of a standard. A deeper understanding of the governing principles of peptide ionization and exploitation of the inherent ionization and detection parameters of individual peptides is thus of great value. Here, using the yeast proteome as a model system, we establish the concept of peptide F-factor as a measure of detectability, closely related to ionization efficiency. F-factor is calculated by normalizing peptide precursor ion intensity by the absolute abundance of the parent protein. We investigated F-factor characteristics in different shotgun proteomics experiments, including across multiple ESI-based LC-MS platforms. We show that F-factors mirror previously observed physicochemical predictors of peptide detectability but demonstrate a nonlinear relationship between hydrophobicity and peptide detectability. Similarly, we use F-factors to show how peptide ion coelution adversely affects detectability and ionization. We suggest that F-factors have great utility for understanding peptide detectability and gas-phase ion chemistry in complex peptide mixtures, selection of surrogate peptides in targeted MS studies, and for calibration of peptide ion signal in label-free workflows. Data are available via ProteomeXchange with identifier PXD003472.
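The F-factor definition in the abstract is a straightforward ratio: precursor ion intensity divided by the absolute abundance of the parent protein. A toy illustration of how that normalization ranks candidate surrogate peptides (the sequences, intensities, and copy numbers below are invented for the example):

```python
# F-factor: peptide precursor ion intensity normalized by the absolute
# abundance of the parent protein. All numbers here are toy values.

def f_factor(precursor_intensity, protein_copies):
    return precursor_intensity / protein_copies

peptides = [
    {"seq": "LVNELTEFAK", "intensity": 8.0e8, "protein_copies": 1.0e6},
    {"seq": "YLYEIAR",     "intensity": 2.0e8, "protein_copies": 1.0e6},
]
for p in peptides:
    p["f"] = f_factor(p["intensity"], p["protein_copies"])

# Higher F-factor means the peptide ionizes/detects more readily for the
# same parent-protein abundance, so it is the better surrogate peptide.
best = max(peptides, key=lambda p: p["f"])
assert best["seq"] == "LVNELTEFAK"
```

Dividing out parent-protein abundance is what lets F-factors compare detectability across peptides from proteins of very different expression levels.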
Streamtube expansion effects on the Darrieus wind turbine
NASA Astrophysics Data System (ADS)
Paraschivoiu, I.; Fraunie, P.; Beguier, C.
1985-04-01
The purpose of the work described in this paper was to determine the aerodynamic loads and performance of a Darrieus wind turbine by including the expansion effects of the streamtubes through the rotor. The double-multiple streamtube model with variable interference factors was used to estimate the induced velocities with a modified CARDAAV computer code. Comparison with measured data and predictions shows that the streamtube expansion effects are relatively significant at high tip-speed ratios, allowing a more realistic modeling of the upwind/downwind flowfield asymmetries inherent in the Darrieus rotor.
The conception of the alternative and the decision to divorce.
Kalb, M
1983-07-01
Despite soaring divorce rates and the effect of divorce on the individual, family, and society, professional scientific literature examining the factors governing the decision to divorce has been scant. The author suggests that the key variable affecting the decision to divorce can best be understood through an exploration of the individual's conception of the alternative. The factors that comprise the conception of the alternative are discussed and the problems inherent in its valid construction by the patient are examined. The therapeutic implementation of this conception is outlined.
NASA Technical Reports Server (NTRS)
Kolyer, J. M.; Mann, N. R.
1977-01-01
Methods of accelerated and abbreviated testing were developed and applied to solar cell encapsulants. These encapsulants must provide protection for as long as 20 years outdoors at different locations within the United States. Consequently, encapsulants were exposed for increasing periods of time to the inherent climatic variables of temperature, humidity, and solar flux. Property changes in the encapsulants were observed. The goal was to predict long term behavior of encapsulants based upon experimental data obtained over relatively short test periods.
Cell-based Assays for Assessing Toxicity: A Basic Guide.
Parboosing, Raveen; Mzobe, Gugulethu; Chonco, Louis; Moodley, Indres
2016-01-01
Assessment of toxicity is an important component of the drug discovery process. Cell-based assays are a popular choice for assessing cytotoxicity. However, these assays are complex because of the wide variety of formats and methods that are available, lack of standardization, confusing terminology and the inherent variability of biological systems and measurement. This review is intended as a guide on how to take these factors into account when planning, conducting and/or interpreting cell-based toxicity assays. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Competition in a Social Structure
NASA Astrophysics Data System (ADS)
Legara, Erika Fille; Longjas, Anthony; Batac, Rene
Complex adaptive agents develop strategies in the presence of competition. In modern human societies, there is an inherent sense of locality when describing inter-agent dynamics because of its network structure. One then wonders whether the traditional advertising schemes that are globally publicized and target random individuals are as effective in attracting a larger portion of the population as those that take advantage of local neighborhoods, such as "word-of-mouth" marketing schemes. Here, we demonstrate using a differential equation model that schemes targeting local cliques within the network are more successful at gaining a larger share of the population than those that target users randomly at a global scale (e.g., television commercials, print ads, etc.). This suggests that success in the competition is dependent not only on the number of individuals in the population but also on how they are connected in the network. We further show that the model is general in nature by considering examples of competition dynamics, particularly those of business competition and language death.
On the Effectiveness of Security Countermeasures for Critical Infrastructures.
Hausken, Kjell; He, Fei
2016-04-01
A game-theoretic model is developed where an infrastructure of N targets is protected against terrorism threats. An original threat score is determined by the terrorist's threat against each target and the government's inherent protection level and original protection. The final threat score is impacted by the government's additional protection. We investigate and verify the effectiveness of countermeasures using empirical data and two methods. The first is to estimate the model's parameter values to minimize the sum of the squared differences between the government's additional resource investment predicted by the model and the empirical data. The second is to develop a multivariate regression model where the final threat score varies approximately linearly relative to the original threat score, sectors, and threat scenarios, and depends nonlinearly on the additional resource investment. The model and method are offered as tools, and as a way of thinking, to determine optimal resource investments across vulnerable targets subject to terrorism threats. © 2014 Society for Risk Analysis.
Exploratory Spectroscopy of Magnetic Cataclysmic Variables Candidates and Other Variable Objects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira, A. S.; Palhares, M. S.; Rodrigues, C. V.
2017-04-01
The increasing number of synoptic surveys made by small robotic telescopes, such as the photometric Catalina Real-Time Transient Survey (CRTS), provides a unique opportunity to discover variable sources and improve the statistical samples of such classes of objects. Our goal is the discovery of magnetic Cataclysmic Variables (mCVs). These are rare objects that probe interesting accretion scenarios controlled by the white-dwarf magnetic field. In particular, improved statistics of mCVs would help to address open questions on their formation and evolution. We performed an optical spectroscopy survey to search for signatures of magnetic accretion in 45 variable objects selected mostly from the CRTS. In this sample, we found 32 CVs, 22 being mCV candidates, 13 of which were previously unreported as such. If the proposed classifications are confirmed, it would represent an increase of 4% in the number of known polars and 12% in the number of known IPs. A fraction of our initial sample was classified as extragalactic sources or other types of variable stars by the inspection of the identification spectra. Despite the inherent complexity in identifying a source as an mCV, variability-based selection, followed by spectroscopic snapshot observations, has proved to be an efficient strategy for their discovery, being a relatively inexpensive approach in terms of telescope time.
Deconstructed transverse mass variables
Ismail, Ahmed; Schwienhorst, Reinhard; Virzi, Joseph S.; ...
2015-04-02
Traditional searches for R-parity conserving natural supersymmetry (SUSY) require large transverse mass and missing energy cuts to separate the signal from large backgrounds. SUSY models with compressed spectra inherently produce signal events with small amounts of missing energy that are hard to explore. We use this difficulty to motivate the construction of "deconstructed" transverse mass variables, which are designed to preserve information on both the norm and direction of the missing momentum. Here, we demonstrate the effectiveness of these variables in searches for the pair production of supersymmetric top-quark partners which subsequently decay into a final state with an isolated lepton, jets and missing energy. We show that the use of deconstructed transverse mass variables extends the accessible compressed spectra parameter space beyond the region probed by traditional methods. The parameter space can further be expanded to neutralino masses that are larger than the difference between the stop and top masses. In addition, we also discuss how these variables allow for novel searches of single stop production, in order to directly probe unconstrained stealth stops in the small stop- and neutralino-mass regime. We also demonstrate the utility of these variables for generic gluino and stop searches in all-hadronic final states. Overall, we demonstrate that deconstructed transverse variables are essential to any search aiming to maximize signal separation from the background when the signal has undetected particles in the final state.
Late Holocene sea level variability and Atlantic Meridional Overturning Circulation
Cronin, Thomas M.; Farmer, Jesse R.; Marzen, R. E.; Thomas, E.; Varekamp, J.C.
2014-01-01
Pre-twentieth century sea level (SL) variability remains poorly understood due to limits of tide gauge records, low temporal resolution of tidal marsh records, and regional anomalies caused by dynamic ocean processes, notably multidecadal changes in Atlantic Meridional Overturning Circulation (AMOC). We examined SL and AMOC variability along the eastern United States over the last 2000 years, using an SL curve constructed from proxy sea surface temperature (SST) records from Chesapeake Bay, and twentieth century SL-SST relations derived from tide gauges and instrumental SST. The SL curve shows multidecadal-scale variability (20–30 years) during the Medieval Climate Anomaly (MCA) and Little Ice Age (LIA), as well as the twentieth century. During these SL oscillations, short-term rates ranged from 2 to 4 mm yr−1, roughly similar to those of the last few decades. These oscillations likely represent internal modes of climate variability related to AMOC variability and originating at high latitudes, although the exact mechanisms remain unclear. Results imply that dynamic ocean changes, in addition to thermosteric, glacio-eustatic, or glacio-isostatic processes, are an inherent part of SL variability in coastal regions, even during millennial-scale climate oscillations such as the MCA and LIA, and should be factored into efforts that use tide gauges and tidal marsh sediments to understand global sea level rise.
Non-Bayesian Inference: Causal Structure Trumps Correlation
ERIC Educational Resources Information Center
Bes, Benedicte; Sloman, Steven; Lucas, Christopher G.; Raufaste, Eric
2012-01-01
The study tests the hypothesis that conditional probability judgments can be influenced by causal links between the target event and the evidence even when the statistical relations among variables are held constant. Three experiments varied the causal structure relating three variables and found that (a) the target event was perceived as more…
NASA Technical Reports Server (NTRS)
Kim, Edward
2011-01-01
Passive microwave remote sensing at L-band (1.4 GHz) is sensitive to soil moisture and sea surface salinity, both important climate variables. Science studies involving these variables can now take advantage of new satellite L-band observations. The first mission with regular global passive microwave observations at L-band is the European Space Agency's Soil Moisture and Ocean Salinity (SMOS), launched November, 2009. A second mission, NASA's Aquarius, was launched June, 2011. A third mission, NASA's Soil Moisture Active Passive (SMAP) is scheduled to launch in 2014. Together, these three missions may provide a decade-long data record--provided that they are intercalibrated. The intercalibration is best performed at the radiance (brightness temperature) level, and Antarctica is proving to be a key calibration target. However, Antarctica has thus far not been fully characterized as a potential target. This paper will present evaluations of Antarctica as a microwave calibration target for the above satellite missions. Preliminary analyses have identified likely target areas, such as the vicinity of Dome-C and larger areas within East Antarctica. Physical sources of temporal and spatial variability of polar firn are key to assessing calibration uncertainty. These sources include spatial variability of accumulation rate, compaction, surface characteristics (dunes, micro-topography), wind patterns, and vertical profiles of density and temperature. Using primarily SMOS data, variability is being empirically characterized and attempts are being made to attribute observed variability to physical sources. One expected outcome of these studies is the potential discovery of techniques for remotely sensing--over all of Antarctica--parameters such as surface temperature.
An Evaluation of Antarctica as a Calibration Target for Passive Microwave Satellite Missions
NASA Technical Reports Server (NTRS)
Kim, Edward
2012-01-01
Passive microwave remote sensing at L-band (1.4 GHz) is sensitive to soil moisture and sea surface salinity, both important climate variables. Science studies involving these variables can now take advantage of new satellite L-band observations. The first mission with regular global passive microwave observations at L-band is the European Space Agency's Soil Moisture and Ocean Salinity (SMOS), launched November, 2009. A second mission, NASA's Aquarius, was launched June, 2011. A third mission, NASA's Soil Moisture Active Passive (SMAP) is scheduled to launch in 2014. Together, these three missions may provide a decade-long data record -- provided that they are intercalibrated. The intercalibration is best performed at the radiance (brightness temperature) level, and Antarctica is proving to be a key calibration target. However, Antarctica has thus far not been fully characterized as a potential target. This paper will present evaluations of Antarctica as a microwave calibration target for the above satellite missions. Preliminary analyses have identified likely target areas, such as the vicinity of Dome-C and larger areas within East Antarctica. Physical sources of temporal and spatial variability of polar firn are key to assessing calibration uncertainty. These sources include spatial variability of accumulation rate, compaction, surface characteristics (dunes, micro-topography), wind patterns, and vertical profiles of density and temperature. Using primarily SMOS data, variability is being empirically characterized and attempts are being made to attribute observed variability to physical sources. One expected outcome of these studies is the potential discovery of techniques for remotely sensing--over all of Antarctica--parameters such as surface temperature.
Inferring climate variability from skewed proxy records
NASA Astrophysics Data System (ADS)
Emile-Geay, J.; Tingley, M.
2013-12-01
Many paleoclimate analyses assume a linear relationship between the proxy and the target climate variable, and that both the climate quantity and the errors follow normal distributions. An ever-increasing number of proxy records, however, are better modeled using distributions that are heavy-tailed, skewed, or otherwise non-normal, on account of the proxies reflecting non-normally distributed climate variables, or having non-linear relationships with a normally distributed climate variable. The analysis of such proxies requires a different set of tools, and this work serves as a cautionary tale on the danger of making conclusions about the underlying climate from applications of classic statistical procedures to heavily skewed proxy records. Inspired by runoff proxies, we consider an idealized proxy characterized by a nonlinear, thresholded relationship with climate, and describe three approaches to using such a record to infer past climate: (i) applying standard methods commonly used in the paleoclimate literature, without considering the non-linearities inherent to the proxy record; (ii) applying a power transform prior to using these standard methods; (iii) constructing a Bayesian model to invert the mechanistic relationship between the climate and the proxy. We find that neglecting the skewness in the proxy leads to erroneous conclusions and often exaggerates changes in climate variability between different time intervals. In contrast, an explicit treatment of the skewness, using either power transforms or a Bayesian inversion of the mechanistic model for the proxy, yields significantly better estimates of past climate variations. We apply these insights in two paleoclimate settings: (1) a classical sedimentary record from Laguna Pallcacocha, Ecuador (Moy et al., 2002). 
Our results agree with the qualitative aspects of previous analyses of this record, but quantitative departures are evident and hold implications for how such records are interpreted and compared to other proxy records. (2) a multiproxy reconstruction of temperature over the Common Era (Mann et al., 2009), where we find that about one third of the records display significant departures from normality. Accordingly, accounting for skewness in proxy predictors has a notable influence on both the reconstructed global mean and spatial patterns of temperature change. Inferring climate variability from skewed proxy records thus requires care, but can be done with relatively simple tools. References - Mann, M. E., Z. Zhang, S. Rutherford, R. S. Bradley, M. K. Hughes, D. Shindell, C. Ammann, G. Faluvegi, and F. Ni (2009), Global signatures and dynamical origins of the Little Ice Age and Medieval Climate Anomaly, Science, 326(5957), 1256-1260, doi:10.1126/science.1177303. - Moy, C., G. Seltzer, D. Rodbell, and D. Anderson (2002), Variability of El Niño/Southern Oscillation activity at millennial timescales during the Holocene epoch, Nature, 420(6912), 162-165.
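The contrast between approaches (i) and (ii) above can be sketched in a few lines of Python. The log-normal "runoff proxy" and all numbers here are illustrative assumptions, not the paper's data: the proxy is modeled as the exponential of a Gaussian climate signal, and a log transform plays the role of the power transform.

```python
import math
import random

def summarize(values):
    """Sample mean and standard deviation."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, math.sqrt(var)

random.seed(0)
# Idealized skewed proxy: a nonlinear (log-normal) response to a
# normally distributed climate variable with true SD = 1.0.
climate = [random.gauss(0.0, 1.0) for _ in range(10_000)]
proxy = [math.exp(c) for c in climate]

# (i) naive: treat the skewed proxy as if it were the climate variable
_, sd_naive = summarize(proxy)
# (ii) transform first, then apply the same standard method
_, sd_transformed = summarize([math.log(p) for p in proxy])
# The transformed estimate recovers the underlying climate SD (~1.0);
# the naive estimate is inflated by the skewness of the proxy.
```

This is the cautionary tale in miniature: the naive estimate exaggerates variability for purely statistical reasons, which is why the abstract warns against comparing variance between intervals of a skewed record without an explicit treatment of the skewness.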
The Advanced Solid Rocket Motor
NASA Technical Reports Server (NTRS)
Mitchell, Royce E.
1992-01-01
The Advanced Solid Rocket Motor will utilize improved design features and automated manufacturing methods to produce an inherently safer propulsive system for the Space Shuttle and future launch systems. This second-generation motor will also provide an additional 12,000 pounds of payload to orbit, enhancing the utility and efficiency of the Shuttle system. The new plant will feature strip-wound, asbestos-free insulation; continuous propellant mixing and casting; and extensive robotic systems. Following a series of static tests at the Stennis Space Center, MS, flights are targeted to begin in early 1997.
The mathematical theory of signal processing and compression-designs
NASA Astrophysics Data System (ADS)
Feria, Erlan H.
2006-05-01
The mathematical theory of signal processing, named processor coding, will be shown to arise inherently as the computational-time dual of Shannon's mathematical theory of communication, which is also known as source coding. Source coding is concerned with signal source memory space compression, while processor coding deals with signal processor computational time compression. Their combination is named compression-designs, referred to as Conde for short. A compelling and pedagogically appealing diagram will be discussed, highlighting Conde's remarkably successful application to real-world knowledge-aided (KA) airborne moving target indicator (AMTI) radar.
Wang, Shan-Chun; Zeng, Li-Li; Ding, Yu-Yang; Zeng, Shao-Gao; Song, Hong-Rui; Hu, Wen-Hui; Xie, Hui
2014-01-01
Though all the marketed dipeptidyl peptidase IV (DPP-IV) inhibitor drugs are structurally different, their inherent correlation is worthy of further investigation. Herein we rapidly discovered a novel DPP-IV inhibitor, 8g (IC50 = 4.9 nmol·L-1), which exhibits activity and selectivity as good as the marketed drugs, through scaffold hopping and drug splicing strategies based on alogliptin and linagliptin. This study demonstrates that applying classic medicinal chemistry strategies to marketed drugs with a specific target is an efficient approach to discovering novel bioactive molecules.
Wang, Wensheng; Nie, Ting; Fu, Tianjiao; Ren, Jianyue; Jin, Longxu
2017-05-06
In target detection for optical remote sensing images, the two main obstacles for aircraft detection are extracting candidates from complex, multi-gray-level backgrounds and confirming targets whose shapes are deformed, irregular, or asymmetric, whether from natural conditions (low signal-to-noise ratio, illumination, or swaying during photography) or from occlusion by surrounding objects (boarding bridges, equipment). To solve these issues, an improved active contours algorithm, namely region-scalable fitting energy based threshold (TRSF), and a corner-convex hull based segmentation algorithm (CCHS) are proposed in this paper. Firstly, the maximal between-cluster variance algorithm (Otsu's algorithm) and the region-scalable fitting energy (RSF) algorithm are combined to address the difficulty of extracting targets from complex, multi-gray-level backgrounds. Secondly, based on their inherent shapes and prominent corners, aircraft are divided into five fragments by utilizing convex hulls and Harris corner points. Furthermore, a series of new structural features, which describe the proportion of the target part within each fragment and the proportion of each fragment within the whole hull, are defined to judge whether the targets are true or not. Experimental results show that the TRSF algorithm improves extraction accuracy in complex backgrounds and is faster than some traditional active contours algorithms. CCHS effectively suppresses the detection difficulties caused by irregular shapes.
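Otsu's between-cluster variance criterion, the first ingredient of TRSF above, is straightforward to sketch in pure Python. The bimodal toy histogram below is a hypothetical stand-in for bright aircraft pixels against darker ground; it is not from the paper.

```python
def otsu_threshold(hist):
    """Otsu's method: pick the gray level that maximizes the
    between-class variance for a 256-bin intensity histogram."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = 0      # background pixel count so far
    sum_b = 0.0  # background intensity sum so far
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mu_b = sum_b / w_b
        mu_f = (sum_all - sum_b) / w_f
        # unnormalized between-class variance (same argmax)
        var_between = w_b * w_f * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy histogram: dark background near 50, bright targets near 200.
hist = [0] * 256
for v in range(40, 61):
    hist[v] = 100
for v in range(190, 211):
    hist[v] = 30
t = otsu_threshold(hist)  # threshold lands in the valley between the modes
```

TRSF then refines this global threshold with the RSF active-contour energy; the sketch covers only the Otsu stage.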
[Feminists' approach to population problems: new paradigm or Utopia?].
Kono, S
1997-05-01
The author first notes that, partly because of events occurring at the International Conference on Population and Development that took place in Cairo in 1994, a consensus has emerged that population programs based on a philosophy of empowering women and focusing on reproductive health are more likely to be effective than programs that focus on providing family planning services and achieving demographic targets. Some reservations about this consensus are then expressed. The author points out the difficulties inherent in widening the mandate of family planning programs in an era of diminished resources for international assistance, the past success of such programs in reducing fertility with limited resources, and the inherent contradictions in following a laissez-faire attitude toward reproduction in regions such as Sub-Saharan Africa, where economies and political systems are often in crisis, health services are minimal, and desired levels of fertility are both well above current levels and far above the replacement level. While not challenging the value of the Cairo philosophy, the author stresses the need to move from rhetoric to reality in the face of the world's current population problems.
Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.
Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui
2017-01-01
Underwater inherent optical properties (IOPs) are fundamental clues for many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal IOP measurement methods, but they are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method using only a single underwater image with the help of a deep artificial neural network. The power of artificial neural networks has been proven in image processing and computer vision with deep learning technology. However, image-based IOP estimation is a quite different and challenging task. Unlike traditional applications such as image classification or localization, IOP estimation looks at the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation based on a single RGB image, even a noisy one. The imaging depth information is used as an aided input to help our model make better decisions.
NASA Astrophysics Data System (ADS)
Saito, Asaki; Yasutomi, Shin-ichi; Tamura, Jun-ichi; Ito, Shunji
2015-06-01
We introduce a true orbit generation method enabling exact simulations of dynamical systems defined by arbitrary-dimensional piecewise linear fractional maps, including piecewise linear maps, with rational coefficients. This method can generate sufficiently long true orbits which reproduce typical behaviors (inherent behaviors) of these systems, by properly selecting algebraic numbers in accordance with the dimension of the target system, and involving only integer arithmetic. By applying our method to three dynamical systems—that is, the baker's transformation, the map associated with a modified Jacobi-Perron algorithm, and an open flow system—we demonstrate that it can reproduce their typical behaviors that have been very difficult to reproduce with conventional simulation methods. In particular, for the first two maps, we show that we can generate true orbits displaying the same statistical properties as typical orbits, by estimating the marginal densities of their invariant measures. For the open flow system, we show that an obtained true orbit correctly converges to the stable period-1 orbit, which is inherently possessed by the system.
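The exact-arithmetic idea can be illustrated with Python's `fractions` module on the baker's transformation. Note an important caveat: this sketch uses a rational seed, which yields an eventually periodic orbit rather than a typical one; that is precisely why the paper selects algebraic-number seeds. The sketch shows only the exactness of integer/rational iteration, free of floating-point roundoff.

```python
from fractions import Fraction

def baker(x, y):
    """One exact step of the baker's transformation on the unit square,
    computed with rational arithmetic (integer arithmetic underneath)."""
    if x < Fraction(1, 2):
        return 2 * x, y / 2
    return 2 * x - 1, (y + 1) / 2

x, y = Fraction(1, 3), Fraction(1, 7)  # illustrative rational seed
for _ in range(100):
    x, y = baker(x, y)
# Every iterate is an exact rational; x cycles 1/3 -> 2/3 -> 1/3 ...
```

A floating-point simulation of the same map loses all significant digits of x after roughly 50 doublings; the exact orbit has no such error growth.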
Visual analytics of inherently noisy crowdsourced data on ultra high resolution displays
NASA Astrophysics Data System (ADS)
Huynh, Andrew; Ponto, Kevin; Lin, Albert Yu-Min; Kuester, Falko
The increasing prevalence of distributed human microtasking, or crowdsourcing, has followed the exponential increase in data collection capabilities. The large scale and distributed nature of these microtasks produces overwhelming amounts of information that is inherently noisy due to the nature of human input. Furthermore, these inputs create a constantly changing dataset, with additional information added on a daily basis. Methods to quickly visualize, filter, and understand this information under temporal and geospatial constraints are key to the success of crowdsourcing. This paper presents novel methods to visually analyze geospatial data collected through crowdsourcing on top of remote sensing satellite imagery. An ultra high resolution tiled display system is used to explore the relationship between human and satellite remote sensing data at scale. A case study is provided that evaluates the presented technique in the context of an archaeological field expedition. A team in the field communicated in real time with, and was guided by, researchers in the remote visual analytics laboratory, swiftly sifting through incoming crowdsourced data to identify target locations flagged as viable archaeological sites.
A scoring mechanism for the rank aggregation of network robustness
NASA Astrophysics Data System (ADS)
Yazdani, Alireza; Dueñas-Osorio, Leonardo; Li, Qilin
2013-10-01
To date, a number of metrics have been proposed to quantify the inherent robustness of network topology against failures. However, each single metric usually offers only a limited view of network vulnerability to different types of random failures and targeted attacks. When applied to certain network configurations, different metrics rank network topology robustness in inconsistent orders, and no single metric fully characterizes network robustness against different modes of failure. To overcome this inconsistency, this work proposes a multi-metric approach as the basis for evaluating an aggregate ranking of network topology robustness. It is based on the simultaneous use of a minimal set of distinct robustness metrics that are standardized so as to allow direct comparison of vulnerability across networks with different sizes and configurations, leading to an initial scoring of inherent topology robustness. Subsequently, based on the initial scores, a rank aggregation method is employed to allocate an overall robustness ranking to each network topology. A discussion is presented in support of the multi-metric approach and its application to more realistically assess and rank network topology robustness.
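The two stages described above, standardization followed by rank aggregation, can be sketched minimally with z-scores and a Borda count. The paper does not specify Borda as its aggregation method, and the metric values for the four hypothetical topologies below are invented for illustration.

```python
def standardize(scores):
    """Z-score one metric so its values are comparable across metrics."""
    n = len(scores)
    mean = sum(scores) / n
    sd = (sum((s - mean) ** 2 for s in scores) / n) ** 0.5
    return [(s - mean) / sd for s in scores]

def borda_aggregate(metric_tables):
    """Borda-count aggregation: each metric ranks the networks and
    awards (n - 1 - rank) points; totals give the overall ranking."""
    n = len(metric_tables[0])
    points = [0] * n
    for scores in metric_tables:
        order = sorted(range(n), key=lambda i: scores[i], reverse=True)
        for rank, i in enumerate(order):
            points[i] += n - 1 - rank
    return points

# Three hypothetical robustness metrics for four topologies (higher =
# more robust); note the metrics disagree on the ordering.
metrics = [
    standardize([0.9, 0.4, 0.6, 0.2]),
    standardize([0.5, 0.8, 0.6, 0.1]),
    standardize([0.7, 0.6, 0.9, 0.2]),
]
totals = borda_aggregate(metrics)
best = max(range(4), key=lambda i: totals[i])
```

Here topology 2 wins the aggregate ranking even though it tops only one individual metric, which is exactly the kind of inconsistency the multi-metric approach is meant to resolve.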
Performance improvement CME for quality: challenges inherent to the process.
Vakani, Farhan Saeed; O'Beirne, Ronan
2015-01-01
The purpose of this paper is to discuss the real-time challenges of a three-staged Performance Improvement Continuing Medical Education (PI-CME) model, an innovative and potentially important approach for future CME, so that providers can think, prepare, and act proactively. The challenges associated with adopting the American Medical Association's three-staged PI-CME model are reported. Few institutions in the USA use a three-staged performance improvement model and customize it to their own healthcare context for a specific target audience; those that do integrate traditional CME methods with performance and quality initiatives and link them to CME credits. Overall, the US health system is interested in a structured PI-CME model with the potential to improve physicians' practice behaviors. Evidence for applying this structured performance improvement methodology to the design of CME activities is scarce, and the challenges inherent to the process that learners and providers encounter lack clarity. This paper establishes the all-important first step of setting out the challenges of a three-staged PI-CME model.
The Use of Match Statistics that Discriminate Between Successful and Unsuccessful Soccer Teams
Castellano, Julen; Casamichana, David; Lago, Carlos
2012-01-01
Three soccer World Cups were analysed with the aim of identifying the match statistics which best discriminated between winning, drawing and losing teams. The analysis was based on 177 matches played during the three most recent World Cup tournaments: Korea/Japan 2002 (59), Germany 2006 (59) and South Africa 2010 (59). Two categories of variables were studied: 1) those related to attacking play: goals scored, total shots, shots on target, shots off target, ball possession, number of off-sides committed, fouls received and corners; and 2) those related to defence: total shots received, shots on target received, shots off target received, off-sides received, fouls committed, corners against, yellow cards and red cards. Discriminant analysis of these matches revealed the following: (a) the variables related to attacking play that best differentiated between winning, drawing and losing teams were total shots, shots on target and ball possession; and (b) the most discriminating variables related to defence were total shots received and shots on target received. These results suggest that winning, drawing and losing national teams may be discriminated from one another on the basis of variables such as ball possession and the effectiveness of their attacking play. This information may be of benefit to both coaches and players, adding to their knowledge about soccer performance indicators and helping to guide the training process. PMID:23487020
The precision problem in conservation and restoration
Hiers, J. Kevin; Jackson, Stephen T.; Hobbs, Richard J.; Bernhardt, Emily S.; Valentine, Leonie E.
2016-01-01
Within the varied contexts of environmental policy, conservation of imperilled species populations, and restoration of damaged habitats, an emphasis on idealized optimal conditions has led to increasingly specific targets for management. Overly-precise conservation targets can reduce habitat variability at multiple scales, with unintended consequences for future ecological resilience. We describe this dilemma in the context of endangered species management, stream restoration, and climate-change adaptation. Inappropriate application of conservation targets can be expensive, with marginal conservation benefit. Reduced habitat variability can limit options for managers trying to balance competing objectives with limited resources. Conservation policies should embrace habitat variability, expand decision-space appropriately, and support adaptation to local circumstances to increase ecological resilience in a rapidly changing world.
Probabilistic analysis of preload in the abutment screw of a dental implant complex.
Guda, Teja; Ross, Thomas A; Lang, Lisa A; Millwater, Harry R
2008-09-01
Screw loosening is a problem for a percentage of implants. A probabilistic analysis to determine the cumulative probability distribution of the preload, the probability of obtaining an optimal preload, and the probabilistic sensitivities identifying important variables is lacking. The purpose of this study was to examine the inherent variability of material properties, surface interactions, and applied torque in an implant system to determine the probability of obtaining desired preload values and to identify the significant variables that affect the preload. Using software programs, an abutment screw was subjected to a tightening torque and the preload was determined from finite element (FE) analysis. The FE model was integrated with probabilistic analysis software. Two probabilistic analysis methods (advanced mean value and Monte Carlo sampling) were applied to determine the cumulative distribution function (CDF) of preload. The coefficient of friction, elastic moduli, Poisson's ratios, and applied torque were modeled as random variables and defined by probability distributions. Separate probability distributions were determined for the coefficient of friction in well-lubricated and dry environments. The probabilistic analyses were performed and the cumulative distribution of preload was determined for each environment. A distinct difference was seen between the preload probability distributions generated in a dry environment (normal distribution, mean (SD): 347 (61.9) N) compared to a well-lubricated environment (normal distribution, mean (SD): 616 (92.2) N). The probability of obtaining a preload value within the target range was approximately 54% for the well-lubricated environment and only 0.02% for the dry environment. 
The preload is predominately affected by the applied torque and coefficient of friction between the screw threads and implant bore at lower and middle values of the preload CDF, and by the applied torque and the elastic modulus of the abutment screw at high values of the preload CDF. Lubrication at the threaded surfaces between the abutment screw and implant bore affects the preload developed in the implant complex. For the well-lubricated surfaces, only approximately 50% of implants will have preload values within the generally accepted range. This probability can be improved by applying a higher torque than normally recommended or a more closely controlled torque than typically achieved. It is also suggested that materials with higher elastic moduli be used in the manufacture of the abutment screw to achieve a higher preload.
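A Monte Carlo version of this idea can be sketched with the short-form torque equation T = K*d*F, solving for preload F. This is a deliberate simplification of the study's finite-element model: the nut-factor relation K = 0.16 + 1.04*mu, the 2 mm screw diameter, the torque and friction distributions, and the 550-750 N target range are all illustrative assumptions, not the study's inputs.

```python
import random

def preload(torque, friction, d):
    """Preload from the short-form torque equation T = K * d * F,
    i.e. F = T / (K * d). The nut-factor model K = 0.16 + 1.04*mu
    is an illustrative assumption, not the paper's FE model."""
    k = 0.16 + 1.04 * friction
    return torque / (k * d)

random.seed(1)
d = 2.0  # hypothetical abutment-screw diameter, mm
samples = []
for _ in range(20_000):
    torque = random.gauss(350.0, 20.0)        # applied torque, N*mm
    mu = max(0.01, random.gauss(0.12, 0.02))  # lubricated friction coeff.
    samples.append(preload(torque, mu, d))

mean = sum(samples) / len(samples)
# Fraction of sampled preloads inside a hypothetical target window:
in_target = sum(550.0 <= f <= 750.0 for f in samples) / len(samples)
```

Even this toy model reproduces the qualitative conclusion: the spread induced by torque and friction variability leaves a substantial fraction of preloads outside any fixed target window.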
Dickey, C; Santella, R M; Hattis, D; Tang, D; Hsu, Y; Cooper, T; Young, T L; Perera, F P
1997-10-01
Biomarkers such as DNA adducts have significant potential to improve quantitative risk assessment by characterizing individual differences in metabolism of genotoxins and DNA repair and accounting for some of the factors that could affect interindividual variation in cancer risk. Inherent uncertainty in laboratory measurements and within-person variability of DNA adduct levels over time are putatively unrelated to cancer risk and should be subtracted from observed variation to better estimate interindividual variability of response to carcinogen exposure. A total of 41 volunteers, both smokers and nonsmokers, were asked to provide a peripheral blood sample every 3 weeks for several months in order to specifically assess intraindividual variability of polycyclic aromatic hydrocarbon (PAH)-DNA adduct levels. The intraindividual variance in PAH-DNA adduct levels, together with measurement uncertainty (laboratory variability and unaccounted for differences in exposure), constituted roughly 30% of the overall variance. An estimated 70% of the total variance was contributed by interindividual variability and is probably representative of the true biologic variability of response to carcinogenic exposure in lymphocytes. The estimated interindividual variability in DNA damage after subtracting intraindividual variability and measurement uncertainty was 24-fold. Inter-individual variance was higher (52-fold) in persons who constitutively lack the Glutathione S-Transferase M1 (GSTM1) gene which is important in the detoxification pathway of PAH. Risk assessment models that do not consider the variability of susceptibility to DNA damage following carcinogen exposure may underestimate risks to the general population, especially for those people who are most vulnerable.
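The variance partition described above can be sketched with a crude one-way decomposition of repeated measures into within-person (intra-individual) and between-person (inter-individual) components. The adduct levels below are made-up numbers, and this sketch ignores the measurement-uncertainty term the study subtracts separately.

```python
def variance_components(groups):
    """Crude one-way decomposition of repeated measures per person:
    average within-person variance vs variance of person means.
    (The between-person term still contains a small within share.)"""
    means = [sum(g) / len(g) for g in groups]
    grand = sum(means) / len(means)
    within = sum(
        sum((x - m) ** 2 for x in g) / (len(g) - 1)
        for g, m in zip(groups, means)
    ) / len(groups)
    between = sum((m - grand) ** 2 for m in means) / (len(means) - 1)
    return within, between

# Hypothetical adduct levels: 3 people, 4 repeated blood samples each.
people = [
    [1.0, 1.2, 0.9, 1.1],
    [3.0, 3.3, 2.8, 3.1],
    [6.1, 5.8, 6.0, 6.3],
]
within, between = variance_components(people)
share_inter = between / (between + within)  # analog of the study's ~70%
```

In this toy data the inter-individual share dominates, mirroring the study's finding that most observed variance reflects true biological differences between people rather than within-person fluctuation.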
Dripps, W.R.; Bradbury, K.R.
2010-01-01
Recharge varies spatially and temporally as it depends on a wide variety of factors (e.g. vegetation, precipitation, climate, topography, geology, and soil type), making it one of the most difficult, complex, and uncertain hydrologic parameters to quantify. Despite its inherent variability, groundwater modellers, planners, and policy makers often ignore recharge variability and assume a single average recharge value for an entire watershed. Relatively few attempts have been made to quantify or incorporate spatial and temporal recharge variability into water resource planning or groundwater modelling efforts. In this study, a simple, daily soil-water balance model was developed and used to estimate the spatial and temporal distribution of groundwater recharge of the Trout Lake basin of northern Wisconsin for 1996-2000 as a means to quantify recharge variability. For the 5 years of study, annual recharge varied spatially by as much as 18 cm across the basin; vegetation was the predominant control on this variability. Recharge also varied temporally with a threefold annual difference over the 5-year period. Intra-annually, recharge was limited to a few isolated events each year and exhibited a distinct seasonal pattern. The results suggest that ignoring recharge variability may not only be inappropriate, but also, depending on the application, may invalidate model results and predictions for regional and local water budget calculations, water resource management, nutrient cycling, and contaminant transport studies. Recharge is spatially and temporally variable, and should be modelled as such. Copyright © 2009 John Wiley & Sons, Ltd.
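A daily soil-water balance of the kind described can be sketched as a single bucket: evapotranspiration draws water first, and anything above field capacity becomes recharge. The capacity, initial storage, and forcing series below are hypothetical, and the real model distributes such buckets over the basin's vegetation and soil classes.

```python
def daily_recharge(precip, pet, capacity=100.0, soil0=50.0):
    """Minimal daily soil-water bucket (all values in mm):
    ET (limited by available water) is removed first, then any
    storage above field capacity drains to groundwater as recharge."""
    soil = soil0
    recharge = []
    for p, e in zip(precip, pet):
        soil += p - min(e, soil + p)  # ET cannot exceed available water
        if soil > capacity:
            recharge.append(soil - capacity)
            soil = capacity
        else:
            recharge.append(0.0)
    return recharge

precip = [0, 30, 0, 80, 5, 0, 60]  # mm/day, hypothetical week
pet = [3, 3, 4, 3, 4, 5, 3]        # mm/day potential ET, hypothetical
r = daily_recharge(precip, pet)
# Recharge occurs only on the few days when the bucket overflows,
# echoing the study's finding of isolated recharge events.
```

Running the same bucket with vegetation-specific ET series is what produces the spatial variability the study quantifies.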
Modeling variability in porescale multiphase flow experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ling, Bowen; Bao, Jie; Oostrom, Mart
Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in the microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using the commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture the variability in the experiments associated with the varying pump injection rate.
Quantifying the uncertainty in heritability.
Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph
2014-05-01
The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.
77 FR 45381 - Cash Account Trust, et al.; Notice of Application
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-31
..., DWS Market Trust, DWS Money Funds, DWS Money Market Trust, DWS Municipal Trust, DWS Portfolio Trust, DWS Securities Trust, DWS State Tax-Free Income Series, DWS Target Date Series, DWS Target Fund, DWS Tax Free Trust, DWS Value Series, Inc., DWS Variable Series I, DWS Variable Series II, Investors Cash...
USDA-ARS?s Scientific Manuscript database
Sensors that can accurately measure canopy structures are prerequisites for development of advanced variable-rate sprayers. A 270° radial range laser sensor was evaluated for its accuracy to measure dimensions of target surfaces with complex shapes and sizes. An algorithm for data acquisition and 3-...
Sustainability of a Targeted Intervention Package: First Step to Success in Oregon
ERIC Educational Resources Information Center
Loman, Sheldon L.; Rodriguez, Billie Jo; Horner, Robert H.
2010-01-01
Variables affecting the sustained implementation of evidence-based practices are receiving increased attention. A descriptive analysis of the variables associated with sustained implementation of First Step to Success (FSS), a targeted intervention for young students at risk for behavior disorders, is provided. Measures based on a conceptual model…
NASA Technical Reports Server (NTRS)
1975-01-01
Results are discussed of a study to define a radar and antenna system that best suits the Space Shuttle rendezvous requirements. Topics considered include antenna characteristics and antenna size tradeoffs, fundamental sources of measurement error inherent in the target itself, backscattering cross-section models of the target, and three basic candidate radar types. Antennas up to 1.5 meters in diameter are within the specified installation constraints; however, a 1 meter diameter paraboloid and a folding, four-slot backfeed on a two-gimbal mount, implemented for a spiral acquisition scan, is recommended. The candidate radar types discussed are: (1) noncoherent pulse radar, (2) coherent pulse radar, and (3) pulse Doppler radar with linear FM ranging. The recommended type is the pulse Doppler radar with linear FM ranging. Block diagrams of each radar system are shown.
Specific features of goal setting in road traffic safety
NASA Astrophysics Data System (ADS)
Kolesov, V. I.; Danilov, O. F.; Petrov, A. I.
2017-10-01
Road traffic safety (RTS) management is inherently a branch of cybernetics and therefore requires clear formalization of the task. The paper aims to identify the specific features of goal setting in RTS management under the system approach. The paper presents the results of cybernetic modeling of the cause-and-effect mechanism of a road traffic accident (RTA); here, the mechanism itself is viewed as a complex system. The designed management goal function focuses on minimizing the difficulty of achieving the target goal, and its optimization has been performed using the Lagrange principle. The resulting algorithms have passed software testing. The key role of the obtained solution in tactical and strategic RTS management is considered. The dynamics of the management effectiveness indicator has been analyzed based on ten years of statistics for Russia.
A survey of landmine detection using hyperspectral imaging
NASA Astrophysics Data System (ADS)
Makki, Ihab; Younes, Rafic; Francis, Clovis; Bianchi, Tiziano; Zucchetti, Massimo
2017-02-01
Hyperspectral imaging is a trending technique in remote sensing that finds application in many different areas, such as agriculture, mapping, target detection, and food quality monitoring. This technique gives the ability to remotely identify the composition of each pixel of the image. Therefore, it is a natural candidate for landmine detection, thanks to its inherent safety and fast response time. In this paper, we present the results of several studies that employed hyperspectral imaging for landmine detection, discussing the different signal processing techniques used in this framework for hyperspectral image processing and target detection. Our purpose is to highlight the progress attained in the detection of landmines using hyperspectral imaging and to identify possible perspectives for future work, in order to achieve better detection in real-time operation.
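One of the simplest per-pixel target-detection techniques used with hyperspectral data is the spectral angle mapper (SAM): the angle between a pixel's spectrum and a reference spectrum, small angles indicating a likely match. The four-band spectra below are invented for illustration and are not from any survey discussed in the paper.

```python
import math

def spectral_angle(pixel, target):
    """Spectral Angle Mapper: angle (radians) between a pixel spectrum
    and a reference spectrum; insensitive to overall brightness."""
    dot = sum(a * b for a, b in zip(pixel, target))
    na = math.sqrt(sum(a * a for a in pixel))
    nb = math.sqrt(sum(b * b for b in target))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

mine = [0.31, 0.40, 0.52, 0.48]  # hypothetical mine-casing reflectance
soil = [0.10, 0.12, 0.15, 0.18]  # hypothetical background soil reflectance
pixel = [0.30, 0.41, 0.50, 0.49]  # observed pixel spectrum
# The pixel's shape matches the mine reference far better than the soil,
# so its angle to `mine` is much smaller than its angle to `soil`.
```

Real detectors in the surveyed literature are more elaborate (matched filters, subspace and anomaly detectors), but SAM conveys the core idea of classifying each pixel by spectral shape.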
Bramsen, Jesper B.; Pakula, Malgorzata M.; Hansen, Thomas B.; Bus, Claus; Langkjær, Niels; Odadzic, Dalibor; Smicius, Romualdas; Wengel, Suzy L.; Chattopadhyaya, Jyoti; Engels, Joachim W.; Herdewijn, Piet; Wengel, Jesper; Kjems, Jørgen
2010-01-01
Small interfering RNAs (siRNAs) are now established as the preferred tool to inhibit gene function in mammalian cells, yet they trigger unintended gene silencing due to their inherent miRNA-like behavior. Such off-target effects are primarily mediated by the sequence-specific interaction between the siRNA seed regions (positions 2–8 of either siRNA strand, counting from the 5′-end) and complementary sequences in the 3′UTR of (off-) targets. It was previously shown that chemical modification of siRNAs can reduce off-targeting, but only very few modifications have been tested, leaving more to be identified. Here we developed a luciferase reporter-based assay suitable for monitoring siRNA off-targeting in a high-throughput manner using stable cell lines. We investigated the impact of chemically modifying single nucleotide positions within the siRNA seed on siRNA function and off-targeting, using 10 different types of chemical modifications, three different target sequences, and three siRNA concentrations. We found several differently modified siRNAs to exhibit reduced off-targeting, yet incorporation of the strongly destabilizing unlocked nucleic acid (UNA) modification into position 7 of the siRNA most potently reduced off-targeting for all tested sequences. Notably, such position-specific destabilization of siRNA–target interactions did not significantly reduce siRNA potency and is therefore well suited for future siRNA designs, especially for applications in vivo where siRNA concentrations, expectedly, will be low. PMID:20453030
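The seed-match rule (positions 2-8 of the guide strand pairing with complementary 3′UTR sequence) can be sketched as a toy scan. The UTR string below is invented; the guide is the canonical let-7 sequence, used here only as a familiar example, and real off-target prediction involves far more than counting perfect seed sites.

```python
def seed(guide_rna):
    """Seed region: nucleotides 2-8 from the 5' end of the guide strand."""
    return guide_rna[1:8]

def complement_dna(rna):
    """Reverse-complement of an RNA fragment, written as DNA, i.e. the
    genomic sequence a 3'UTR must contain to base-pair with the seed."""
    pair = {"A": "T", "U": "A", "G": "C", "C": "G"}
    return "".join(pair[b] for b in reversed(rna))

def seed_matches(guide_rna, utr):
    """Count potential miRNA-like off-target sites: occurrences of the
    seed's reverse-complement in a 3'UTR sequence (toy model)."""
    site = complement_dna(seed(guide_rna))
    return utr.count(site)

guide = "UGAGGUAGUAGGUUGUAUAGUU"  # canonical let-7 guide strand
utr = "AAACTACCTCAAATTTCTACCTCGG"  # invented 3'UTR with two seed sites
# seed = GAGGUAG, so the scanner looks for CTACCTC in the UTR.
```

Chemically destabilizing position 7, as the paper does with UNA, weakens exactly this seed pairing, which is why it suppresses off-targeting without abolishing the fully complementary on-target interaction.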
Biomechanical models for radial distance determination by the rat vibrissal system.
Birdwell, J Alexander; Solomon, Joseph H; Thajchayapong, Montakan; Taylor, Michael A; Cheely, Matthew; Towal, R Blythe; Conradt, Jorg; Hartmann, Mitra J Z
2007-10-01
Rats use active, rhythmic movements of their whiskers to acquire tactile information about three-dimensional object features. There are no receptors along the length of the whisker; therefore all tactile information must be mechanically transduced back to receptors at the whisker base. This raises the question: how might the rat determine the radial contact position of an object along the whisker? We developed two complementary biomechanical models showing that the rat could determine radial object distance by monitoring the rate of change of moment (or, equivalently, the rate of change of curvature) at the whisker base. The first model is used to explore the effects of taper and inherent whisker curvature on whisker deformation and to predict the shapes of real rat whiskers during deflections at different radial distances. Predicted shapes closely matched experimental measurements. The second model describes the relationship between radial object distance and the rate of change of moment at the base of a tapered, inherently curved whisker. Together, these models can account for recent recordings showing that some trigeminal ganglion (Vg) neurons encode closer radial distances with increased firing rates. The models also suggest that four and only four physical variables at the whisker base -- angular position, angular velocity, moment, and rate of change of moment -- are needed to describe the dynamic state of a whisker. We interpret these results in the context of our evolving hypothesis that neural responses in Vg can be represented using a state-encoding scheme that includes combinations of these four variables.
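The core idea can be illustrated with a deliberately simplified beam model (my own sketch, not the paper's tapered, curved model): for small deflections of an untapered cylindrical whisker rotated against a point object, cantilever theory gives the base moment as M = 3EIθ/d, so the rate of change of moment with push angle, dM/dθ = 3EI/d, directly encodes the radial contact distance d:

```python
import numpy as np

def base_moment(E, I, theta, d):
    """Moment at the whisker base for push angle theta (rad) and contact
    distance d, under the small-deflection untapered-beam assumption."""
    return 3.0 * E * I * theta / d

def radial_distance_from_dM(E, I, dM_dtheta):
    """Invert the relationship: closer contacts give a larger dM/dtheta."""
    return 3.0 * E * I / dM_dtheta

E = 3.0e9        # Young's modulus, Pa (order of magnitude for keratin)
I = 1.0e-19      # area moment of inertia, m^4 (illustrative, whisker-scale)
d_true = 0.02    # contact 20 mm out along the whisker

# Simulate a small rotation against the object and estimate dM/dtheta
# from the slope of moment versus push angle.
thetas = np.linspace(0.0, 0.05, 50)
moments = base_moment(E, I, thetas, d_true)
dM_dtheta = np.polyfit(thetas, moments, 1)[0]
d_est = radial_distance_from_dM(E, I, dM_dtheta)
```

This matches the abstract's account of why closer contacts (smaller d) produce a larger rate of change of moment, and hence higher Vg firing rates.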
Robust Statistical Fusion of Image Labels
Landman, Bennett A.; Asman, Andrew J.; Scoggins, Andrew G.; Bogovic, John A.; Xing, Fangxu; Prince, Jerry L.
2011-01-01
Image labeling and parcellation (i.e., assigning structure to a collection of voxels) are critical tasks for the assessment of volumetric and morphometric features in medical imaging data. The process of image labeling is inherently error prone as images are corrupted by noise and artifacts. Even expert interpretations are subject to the subjectivity and precision of individual raters. Hence, all labels must be considered imperfect with some degree of inherent variability. One may seek multiple independent assessments to both reduce this variability and quantify the degree of uncertainty. Existing techniques have exploited maximum a posteriori statistics to combine data from multiple raters and simultaneously estimate rater reliabilities. Although quite successful, wide-scale application has been hampered by unstable estimation with practical datasets, for example, with label sets with small or thin objects to be labeled or with partial or limited datasets. In addition, these approaches have required each rater to generate a complete dataset, which is often impossible given both human foibles and the typical turnover rate of raters in a research or clinical environment. Herein, we propose a robust approach to improve estimation performance with small anatomical structures, allow for missing data, account for repeated label sets, and utilize training/catch trial data. With this approach, numerous raters can label small, overlapping portions of a large dataset, and rater heterogeneity can be robustly controlled while simultaneously estimating a single, reliable label set and characterizing uncertainty. The proposed approach enables many individuals to collaborate in the construction of large datasets for labeling tasks (e.g., human parallel processing) and reduces the otherwise detrimental impact of rater unavailability. PMID:22010145
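The family of methods the abstract builds on can be sketched with a much-simplified binary fusion loop (in the spirit of STAPLE-like algorithms, not the authors' exact method; all names and the data are my own): alternately estimate a consensus label set from reliability-weighted votes, then re-estimate each rater's reliability as agreement with that consensus.

```python
import numpy as np

def fuse_labels(ratings, n_iter=10):
    """ratings: (n_raters, n_voxels) binary array.
    Returns (consensus labels, per-rater reliabilities)."""
    n_raters, n_voxels = ratings.shape
    reliabilities = np.full(n_raters, 0.8)           # initial guess
    for _ in range(n_iter):
        # E-step: reliability-weighted vote for each voxel.
        weights = np.log(reliabilities / (1 - reliabilities))
        score = weights @ (2 * ratings - 1)          # signed votes
        consensus = (score > 0).astype(int)
        # M-step: reliability = agreement with current consensus.
        agreement = (ratings == consensus).mean(axis=1)
        reliabilities = np.clip(agreement, 0.01, 0.99)
    return consensus, reliabilities

# Three reliable raters (90% accurate) and one who labels at random.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, 200)
good = [np.where(rng.random(200) < 0.9, truth, 1 - truth) for _ in range(3)]
bad = rng.integers(0, 2, 200)
ratings = np.array(good + [bad])
consensus, rel = fuse_labels(ratings)
```

The unreliable rater's estimated reliability drops toward chance, so the consensus is dominated by the reliable raters — the same principle that lets the paper's robust variant down-weight heterogeneous raters.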
Scattered Light Polarimetry of Exoplanets
NASA Astrophysics Data System (ADS)
Wiktorowicz, S.
2014-12-01
The last decade has witnessed an explosion in atmospheric characterization of spatially unresolved exoplanets using transmission spectra of transiting planets, but understanding has been hampered by degeneracies resolvable through blue optical observations. Here, scattered light is more important than the Wien tail of re-radiated thermal emission. Therefore, the next frontier in exoplanet characterization lies in the direct detection of scattered light. The polarization state of starlight scattered by a planetary atmosphere distinguishes it from the direct light from the host star, and the inherently differential nature of polarimetry reduces systematic effects to the point where ground-based detections are possible. Furthermore, polarimetry is uniquely sensitive to the size distribution, shape, and chemical composition of atmospheric cloud particles as well as to the scattering optical depth. I will review the current state of exoplanet polarimetry, which is dominated not by photon noise but by non-Gaussian systematic effects. Ground-based detection of order ten exoplanet photons relative to the host star's million requires dense orbital phase coverage and therefore long observing programs. However, variability inherent in the host star, interstellar medium, Earth's atmosphere, the telescope, and the instrument at all timescales must be measured and subtracted in order to definitively uncover scattered light from the exoplanet. While polarimetry is in principle sensitive to exoplanets regardless of orbital inclination, repeated observations of transiting exoplanet systems during secondary eclipse events are required to measure the polarimetric variability of the system that cannot be due to the planet. The emergence of scattered light polarimetry as a robust tool for the study of exoplanet atmospheres, and eventually surfaces, therefore requires diligent attention to the role of systematic effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carroll, Mark Christopher
2015-07-01
This report details the initial comparison of mechanical strength properties between the cylindrical nuclear-grade graphite specimens irradiated in the second Advanced Graphite Creep (AGC-2) experiment and the established baseline, or unirradiated, mechanical properties compiled in the Baseline Graphite Characterization program. The overall comparative analysis will describe the development of an appropriate test protocol for irradiated specimens and the execution of the mechanical tests on the AGC-2 sample population, and will further discuss the data in terms of developing an accurate irradiated property distribution from the limited amount of irradiated data by leveraging the considerably larger property datasets being captured in the Baseline Graphite Characterization program. Integrating information on the inherent variability in nuclear-grade graphite with more complete datasets is one of the goals of the VHTR Graphite Materials program. Between "sister" specimens (specimens with the same geometry machined from the same sub-block of graphite from which the irradiated AGC specimens were extracted) and the Baseline datasets, a comprehensive body of data will exist that can provide both a direct and indirect indication of the full irradiated property distributions that can be expected of irradiated nuclear-grade graphite while in service in a VHTR system. While the most critical data will remain the actual irradiated property measurements, expansion of these data into accurate distributions based on the inherent variability in graphite properties will be a crucial step in qualifying graphite for nuclear use as a structural material in a VHTR environment.
Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang
2011-01-01
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452
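The abstract framework the paper starts from can be sketched with the standard reference method it then replaces: Gibbs sampling from a Boltzmann distribution over binary "neuron" states (the paper's contribution is a non-reversible chain better matched to spiking dynamics; the parameters below are my own illustrative choices):

```python
import numpy as np

def gibbs_sample(W, b, n_steps, rng):
    """Sample binary states z from p(z) proportional to exp(z'Wz/2 + b'z)
    by sweeping over units and resampling each from its conditional."""
    n = len(b)
    z = rng.integers(0, 2, n)
    samples = []
    for _ in range(n_steps):
        for i in range(n):
            # Conditional of unit i given all others is a sigmoid of its field.
            field = W[i] @ z - W[i, i] * z[i] + b[i]
            z[i] = int(rng.random() < 1.0 / (1.0 + np.exp(-field)))
        samples.append(z.copy())
    return np.array(samples)

rng = np.random.default_rng(1)
# Two units with a strong excitatory coupling: their states should correlate.
W = np.array([[0.0, 2.0], [2.0, 0.0]])
b = np.array([-1.0, -1.0])
samples = gibbs_sample(W, b, 5000, rng)
corr = np.corrcoef(samples[:, 0], samples[:, 1])[0, 1]
```

Gibbs sampling requires instantaneous, sequential state swaps, which is exactly the feature the authors argue is inconsistent with spiking dynamics and replace with a non-reversible Markov chain.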
NASA Technical Reports Server (NTRS)
Bhatt, Rajendra; Doelling, David R.; Wu, Aisheng; Xiong, Xiaoxiong (Jack); Scarino, Benjamin R.; Haney, Conor O.; Gopalan, Arun
2014-01-01
The latest CERES FM-5 instrument launched onboard the S-NPP spacecraft will use the VIIRS visible radiances from the NASA Land Product Evaluation and Analysis Tool Elements (PEATE) product for retrieving the cloud properties associated with its TOA flux measurement. In order for CERES to provide climate-quality TOA flux datasets, the retrieved cloud properties must be consistent throughout the record, which is dependent on the calibration stability of the VIIRS imager. This paper assesses the NASA calibration stability of the VIIRS reflective solar bands using the Libya-4 desert and deep convective clouds (DCC). The invariant targets are first evaluated for temporal natural variability. It is found for visible (VIS) bands that DCC targets have half of the variability of Libya-4. For the shortwave infrared (SWIR) bands, the desert has less variability. The brief VIIRS record and target variability inhibit high confidence in identifying any trends that are less than 0.6%/yr for most VIS bands and 2.5%/yr for SWIR bands. None of the observed invariant target reflective solar band trends exceeded these trend thresholds. Initial assessment results show that the VIIRS data have been consistently calibrated and that the VIIRS instrument stability is similar to or better than that of the MODIS instrument.
Effects of body lean and visual information on the equilibrium maintenance during stance.
Duarte, Marcos; Zatsiorsky, Vladimir M
2002-09-01
Maintenance of equilibrium was tested in conditions when humans assume different leaning postures during upright standing. Subjects (n=11) stood in 13 different body postures specified by visual center of pressure (COP) targets within their base of support (BOS). Different types of visual information were tested: continuous presentation of a visual target, no vision after target presentation, and simultaneous visual feedback of the COP. The following variables were used to describe the equilibrium maintenance: the mean of the COP position, the area of the ellipse covering the COP sway, and the resultant median frequency of the power spectral density of the COP displacement. The variability of the COP displacement, quantified by the COP area variable, increased when subjects occupied leaning postures, irrespective of the kind of visual information provided. This variability also increased when vision was removed relative to when vision was present. Without vision, drifts in the COP data were observed, which were larger for COP targets farther away from the neutral position. When COP feedback was given in addition to the visual target, the postural control system did not control stance better than in the condition with only visual information. These results indicate that visual information is used by the postural control system at both short and long time scales.
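The sway-area variable can be computed in several ways; a common one (my own sketch with synthetic data, not necessarily the exact formulation used in the study) is the area of a 95% prediction ellipse fitted to the COP time series via its covariance eigenvalues:

```python
import numpy as np

CHI2_95_2DOF = 5.991  # 95th percentile of chi-square with 2 degrees of freedom

def sway_ellipse_area(cop_x, cop_y):
    """Area of the 95% prediction ellipse fitted to COP displacement data:
    pi * chi2 * sqrt(lambda1 * lambda2), from the covariance eigenvalues."""
    cov = np.cov(cop_x, cop_y)
    eigvals = np.linalg.eigvalsh(cov)
    return np.pi * CHI2_95_2DOF * np.sqrt(eigvals[0] * eigvals[1])

# Synthetic COP traces (mm): sway amplitude doubles in a leaning posture.
rng = np.random.default_rng(2)
quiet = rng.normal(0.0, 2.0, (2, 2000))
leaning = rng.normal(0.0, 4.0, (2, 2000))
area_quiet = sway_ellipse_area(*quiet)
area_lean = sway_ellipse_area(*leaning)
```

Doubling the sway amplitude roughly quadruples the ellipse area, consistent with the study's finding that the COP area variable increases in leaning postures.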
Speaking-rate-induced variability in F2 trajectories.
Tjaden, K; Weismer, G
1998-10-01
This study examined speaking-rate-induced spectral and temporal variability of F2 formant trajectories for target words produced in a carrier phrase at speaking rates ranging from fast to slow. F2 onset frequency, measured at the first glottal pulse following the stop consonant release in target words, was used to quantify the extent to which adjacent consonantal and vocalic gestures overlapped; F2 target frequency was operationally defined as the first occurrence of a frequency minimum or maximum following F2 onset frequency. Regression analyses indicated that 70% of functions relating F2 onset and vowel duration were statistically significant. The strength of the effect was variable, however, and the direction of significant functions often differed from that predicted by a simple model of overlapping, sliding gestures. Results of a partial correlation analysis examining interrelationships among F2 onset, F2 target frequency, and vowel duration across the speaking rate range indicated that covariation of F2 target with vowel duration may obscure the relationship between F2 onset and vowel duration across rate. The results further suggested that a sliding-gesture model of acoustic variability associated with speaking rate change only partially accounts for the present data, and that such a view accounts for some speakers' data better than others'.
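The regression analysis described above can be sketched as an ordinary least-squares fit of F2 onset frequency on vowel duration; the data below are synthetic and the parameter values are my own illustrative choices, not the study's measurements:

```python
import numpy as np

def f2_onset_regression(vowel_dur_ms, f2_onset_hz):
    """Fit F2 onset (Hz) as a linear function of vowel duration (ms);
    return the slope, intercept, and variance explained (R^2)."""
    slope, intercept = np.polyfit(vowel_dur_ms, f2_onset_hz, 1)
    pred = slope * vowel_dur_ms + intercept
    ss_res = np.sum((f2_onset_hz - pred) ** 2)
    ss_tot = np.sum((f2_onset_hz - f2_onset_hz.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Synthetic speaker: at faster rates (shorter vowels) gestures overlap more,
# raising F2 onset; slower rates lower it.
rng = np.random.default_rng(3)
dur = rng.uniform(80, 300, 60)                 # vowel durations, fast-to-slow, ms
f2 = 1800 - 1.5 * dur + rng.normal(0, 40, 60)  # F2 onset falls as duration grows
slope, intercept, r2 = f2_onset_regression(dur, f2)
```

In the study only about 70% of such per-speaker functions were significant, and their slopes varied in sign, which is the evidence against a simple sliding-gesture account.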
A Random Variable Approach to Nuclear Targeting and Survivability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Undem, Halvor A.
We demonstrate a common mathematical formalism for analyzing problems in nuclear survivability and targeting. This formalism, beginning with a random variable approach, can be used to interpret past efforts in nuclear-effects analysis, including targeting analysis. It can also be used to analyze new problems brought about by the post Cold War Era, such as the potential effects of yield degradation in a permanently untested nuclear stockpile. In particular, we illustrate the formalism through four natural case studies or illustrative problems, linking these to actual past data, modeling, and simulation, and suggesting future uses. In the first problem, we illustrate the case of a deterministically modeled weapon used against a deterministically responding target. Classic "Cookie Cutter" damage functions result. In the second problem, we illustrate, with actual target test data, the case of a deterministically modeled weapon used against a statistically responding target. This case matches many of the results of current nuclear targeting modeling and simulation tools, including the result of distance damage functions as complementary cumulative lognormal functions in the range variable. In the third problem, we illustrate the case of a statistically behaving weapon used against a deterministically responding target. In particular, we show the dependence of target damage on weapon yield for an untested nuclear stockpile experiencing yield degradation. Finally, and using actual unclassified weapon test data, we illustrate in the fourth problem the case of a statistically behaving weapon used against a statistically responding target.
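The distance damage function described in the second problem — a complementary cumulative lognormal in the range variable — can be sketched directly; the median range and log-standard-deviation below are illustrative values of my own, not weapons data:

```python
import math

def damage_probability(r, r50, sigma):
    """Complementary cumulative lognormal damage function:
    P(damage) = 1 - Phi(ln(r / r50) / sigma), where r50 is the range at
    which damage probability is 50% and sigma is the lognormal spread."""
    z = math.log(r / r50) / sigma
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

r50, sigma = 1000.0, 0.3     # illustrative: 50% damage probability at 1 km
p_near = damage_probability(200.0, r50, sigma)    # well inside r50
p_mid = damage_probability(1000.0, r50, sigma)    # exactly at r50
p_far = damage_probability(5000.0, r50, sigma)    # well outside r50
```

As sigma shrinks toward zero this smooth function approaches the deterministic "Cookie Cutter" of the first problem, which is how the random variable formalism unifies the two cases.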
Impact of region contouring variability on image-based focal therapy evaluation
NASA Astrophysics Data System (ADS)
Gibson, Eli; Donaldson, Ian A.; Shah, Taimur T.; Hu, Yipeng; Ahmed, Hashim U.; Barratt, Dean C.
2016-03-01
Motivation: Focal therapy is an emerging low-morbidity treatment option for low-intermediate risk prostate cancer; however, challenges remain in accurately delivering treatment to specified targets and determining treatment success. Registered multi-parametric magnetic resonance imaging (MPMRI) acquired before and after treatment can support focal therapy evaluation and optimization; however, contouring variability, when defining the prostate, the clinical target volume (CTV) and the ablation region in images, reduces the precision of quantitative image-based focal therapy evaluation metrics. To inform the interpretation and clarify the limitations of such metrics, we investigated inter-observer contouring variability and its impact on four metrics. Methods: Pre-therapy and 2-week-post-therapy standard-of-care MPMRI were acquired from 5 focal cryotherapy patients. Two clinicians independently contoured, on each slice, the prostate (pre- and post-treatment) and the dominant index lesion CTV (pre-treatment) in the T2-weighted MRI, and the ablated region (post-treatment) in the dynamic-contrast-enhanced MRI. For each combination of clinician contours, post-treatment images were registered to pre-treatment images using a 3D biomechanical-model-based registration of prostate surfaces, and four metrics were computed: the proportion of the target tissue region that was ablated and the target:ablated region volume ratio for each of two targets (the CTV and an expanded planning target volume). Variance components analysis was used to measure the contribution of each type of contour to the variance in the therapy evaluation metrics. Conclusions: 14-23% of evaluation metric variance was attributable to contouring variability (including 6-12% from ablation region contouring); reducing this variability could improve the precision of focal therapy evaluation metrics.
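The two evaluation metrics named in the abstract are straightforward to compute on binary voxel masks once the images are registered; this sketch uses a toy 1D "volume" of my own for illustration:

```python
import numpy as np

def proportion_ablated(target_mask, ablation_mask):
    """Fraction of target voxels that fall inside the ablation region."""
    return np.logical_and(target_mask, ablation_mask).sum() / target_mask.sum()

def target_ablated_volume_ratio(target_mask, ablation_mask):
    """Target volume divided by ablated volume (in voxel counts)."""
    return target_mask.sum() / ablation_mask.sum()

# Toy 1D volume: a 10-voxel target, 8 voxels of which fall inside a
# 20-voxel ablation region that is slightly misaligned with the target.
target = np.zeros(100, dtype=bool)
target[40:50] = True
ablation = np.zeros(100, dtype=bool)
ablation[28:48] = True
prop = proportion_ablated(target, ablation)          # 8 of 10 voxels -> 0.8
ratio = target_ablated_volume_ratio(target, ablation)  # 10 / 20 -> 0.5
```

Shifting either contour by a few voxels changes both numbers, which is exactly why contouring variability propagates into the evaluation metrics the study quantifies.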
How to find what you don't know: Visualising variability in 3D geological models
NASA Astrophysics Data System (ADS)
Lindsay, Mark; Wellmann, Florian; Jessell, Mark; Ailleres, Laurent
2014-05-01
Uncertainties in input data can have compounding effects on the predictive reliability of three-dimensional (3D) geological models. Resource exploration, tectonic studies and environmental modelling can be compromised by using 3D models that misrepresent the target geology, and drilling campaigns that attempt to intersect particular geological units guided by 3D models are at risk of failure if the exploration geologist is unaware of inherent uncertainties. In addition, the visual inspection of 3D models is often the first contact decision makers have with the geology, thus visually communicating the presence and magnitude of uncertainties contained within geological 3D models is critical. Unless uncertainties are presented early in the relationship between decision maker and model, the model will be considered more truthful than the uncertainties allow with each subsequent viewing. We present a selection of visualisation techniques that provide the viewer with an insight to the location and amount of uncertainty contained within a model, and the geological characteristics which are most affected. A model of the Gippsland Basin, southeastern Australia is used as a case study to demonstrate the concepts of information entropy, stratigraphic variability and geodiversity. Central to the techniques shown here is the creation of a model suite, performed by creating similar (but not the same) version of the original model through perturbation of the input data. Specifically, structural data in the form of strike and dip measurements is perturbed in the creation of the model suite. The visualisation techniques presented are: (i) information entropy; (ii) stratigraphic variability and (iii) geodiversity. Information entropy is used to analyse uncertainty in a spatial context, combining the empirical probability distributions of multiple outcomes with a single quantitative measure. 
Stratigraphic variability displays the number of possible lithologies that may exist at a given point within the model volume. Geodiversity analyses various model characteristics (or 'geodiversity metrics'), including the depth, volume of unit, the curvature of an interface, the geological complexity of a contact and the contact relationships units have with each other. Principal component analysis, a multivariate statistical technique, is used to simultaneously examine each of the geodiversity metrics to determine the boundaries of model space, and identify which metrics contribute most to model uncertainty. The combination of information entropy, stratigraphic variability and geodiversity analysis provides a descriptive and thorough representation of uncertainty with effective visualisation techniques that clearly communicate the geological uncertainty contained within the geological model.
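The information-entropy measure described above can be sketched per model cell: across the suite of perturbed models, compute the empirical probability of each lithology at that cell and take the Shannon entropy, so unanimous cells score zero and maximally disputed cells score log2 of the number of lithologies (the array layout below is my own choice for illustration):

```python
import numpy as np

def voxel_entropy(model_suite):
    """model_suite: (n_models, n_cells) integer lithology IDs.
    Returns per-cell Shannon entropy (bits) across the model suite."""
    n_models, n_cells = model_suite.shape
    entropy = np.zeros(n_cells)
    for v in range(n_cells):
        _, counts = np.unique(model_suite[:, v], return_counts=True)
        p = counts / n_models
        entropy[v] = -np.sum(p * np.log2(p))
    return entropy

# 4 perturbed models, 3 cells: unanimous, split 2-2, and split 3-1.
suite = np.array([[1, 1, 1],
                  [1, 2, 1],
                  [1, 1, 2],
                  [1, 2, 1]])
H = voxel_entropy(suite)
```

Rendering H over the model volume gives the uncertainty visualisation the paper advocates: zero-entropy regions are robust to input perturbation, high-entropy regions flag where drilling guided by the model is most at risk.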