Determination of nitrogen balance in agroecosystems
USDA-ARS's Scientific Manuscript database
Nitrogen balance in agroecosystems provides a quantitative framework of N inputs, outputs, and retention in the soil that examines the sustainability of agricultural productivity and of soil and environmental quality. Nitrogen inputs include N additions from manures and fertilizers, atmospheric deposition...
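To make the bookkeeping concrete, here is a minimal sketch of the soil-surface mass balance such a framework rests on; the categories follow the abstract, but every value is hypothetical.

```python
# Minimal sketch of a soil-surface N balance; categories follow the
# abstract, values are hypothetical (kg N / ha / yr).
inputs = {
    "fertilizer": 120.0,
    "manure": 30.0,
    "atmospheric_deposition": 8.0,
    "biological_fixation": 15.0,
}
outputs = {
    "crop_harvest": 110.0,
    "leaching": 20.0,
    "gaseous_losses": 12.0,
}
balance = sum(inputs.values()) - sum(outputs.values())
print(f"N balance (soil retention if positive): {balance:+.1f} kg N/ha/yr")
```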
Huff, G.F.
2004-01-01
The tendency of solutes in input water to precipitate efficiency-lowering scale deposits on the membranes of reverse osmosis (RO) desalination systems is an important factor in determining the suitability of input water for desalination. Simulated input water evaporation can be used as a technique to quantitatively assess the potential for scale formation in RO desalination systems. The technique was demonstrated by simulating the increase in solute concentrations required to form calcite, gypsum, and amorphous silica scales at 25°C and 40°C from 23 desalination input waters taken from the literature. Simulation results could be used to quantitatively assess the potential of a given input water to form scale or to compare the potential of a number of input waters to form scale during RO desalination. Simulated evaporation of input waters cannot accurately predict the conditions under which scale will form owing to the effects of potentially stable supersaturated solutions, solution velocity, and residence time inside RO systems. However, the simulated scale-forming potential of proposed input waters could be compared with the simulated scale-forming potentials and actual scale-forming properties of input waters having documented operational histories in RO systems. This may provide a technique to estimate the actual performance and suitability of proposed input waters during RO.
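The scale-formation reasoning above can be made concrete with a saturation-index calculation. The sketch below ignores activity coefficients and ion pairing (which a geochemical code would handle) and estimates the evaporative concentration factor at which a hypothetical input water first saturates with gypsum; the Ksp value is an assumed textbook figure.

```python
import math

# Hedged sketch: at what evaporative concentration factor does a feed water
# first saturate with gypsum (CaSO4·2H2O)? Activity corrections are ignored.
KSP_GYPSUM_25C = 10 ** -4.58   # assumed solubility product at 25 °C

ca = 2.0e-3    # mol/L Ca2+ in the input water (hypothetical)
so4 = 5.0e-3   # mol/L SO4^2- in the input water (hypothetical)

iap = ca * so4                              # ion activity product of the raw water
si = math.log10(iap / KSP_GYPSUM_25C)       # saturation index; < 0 = undersaturated
# Both ions scale linearly with the concentration factor f, so IAP ~ f**2.
f_sat = math.sqrt(KSP_GYPSUM_25C / iap)
print(f"SI = {si:.2f}; gypsum saturation reached at ~{f_sat:.1f}x concentration")
```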
Field Research Facility Data Integration Framework Data Management Plan: Survey Lines Dataset
2016-08-01
CHL and its District partners. The beach morphology surveys on which this report focuses provide quantitative measures of the dynamic nature of...topography • volume change 1.4 Data description The morphology surveys are conducted over a series of 26 shore-perpendicular profile lines spaced 50...dataset input data and products. Table 1. FRF survey lines dataset input data and products. Input Data FDIF Product Description ASCII LARC survey text
Spotsizer: High-throughput quantitative analysis of microbial growth.
Bischof, Leanne; Převorovský, Martin; Rallis, Charalampos; Jeffares, Daniel C; Arzhaeva, Yulia; Bähler, Jürg
2016-10-01
Microbial colony growth can serve as a useful readout in assays for studying complex genetic interactions or the effects of chemical compounds. Although computational tools for acquiring quantitative measurements of microbial colonies have been developed, their utility can be compromised by inflexible input image requirements, non-trivial installation procedures, or complicated operation. Here, we present the Spotsizer software tool for automated colony size measurements in images of robotically arrayed microbial colonies. Spotsizer features a convenient graphical user interface (GUI), has both single-image and batch-processing capabilities, and works with multiple input image formats and different colony grid types. We demonstrate how Spotsizer can be used for high-throughput quantitative analysis of fission yeast growth. The user-friendly Spotsizer tool provides rapid, accurate, and robust quantitative analyses of microbial growth in a high-throughput format. Spotsizer is freely available at https://data.csiro.au/dap/landingpage?pid=csiro:15330 under a proprietary CSIRO license.
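Spotsizer itself is the tool linked above; as an illustration of the underlying idea only, a thresholding-and-labeling sketch like the following measures colony areas in a synthetic arrayed image.

```python
import numpy as np
from scipy import ndimage

# Thresholding-and-labeling sketch of colony measurement on a synthetic
# image; Spotsizer's actual pipeline is more sophisticated than this.
rng = np.random.default_rng(0)
image = rng.normal(0.1, 0.02, (200, 200))   # synthetic plate background
image[40:60, 40:60] += 0.5                  # fake colony 1
image[120:150, 100:130] += 0.5              # fake colony 2

mask = image > 0.3                          # global intensity threshold
labels, n = ndimage.label(mask)             # connected-component labeling
areas = ndimage.sum(mask, labels, index=range(1, n + 1))
print(f"{n} colonies, areas (px):", areas)
```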
Input and Output in Code Switching: A Case Study of a Japanese-Chinese Bilingual Infant
ERIC Educational Resources Information Center
Meng, Hairong; Miyamoto, Tadao
2012-01-01
Code switching (CS) (or language mixing) generally takes place in bilingual children's utterances, even if their parents adhere to the "one parent-one language" principle. The present case study of a Japanese-Chinese bilingual infant provides both quantitative and qualitative analyses on the impact of input on output, as manifested in CS. The…
The Inter-Sectoral Impact Model Intercomparison Project (ISI–MIP): Project framework
Warszawski, Lila; Frieler, Katja; Huber, Veronika; Piontek, Franziska; Serdeczny, Olivia; Schewe, Jacob
2014-01-01
The Inter-Sectoral Impact Model Intercomparison Project offers a framework to compare climate impact projections in different sectors and at different scales. Consistent climate and socio-economic input data provide the basis for a cross-sectoral integration of impact projections. The project is designed to enable quantitative synthesis of climate change impacts at different levels of global warming. This report briefly outlines the objectives and framework of the first, fast-tracked phase of the Inter-Sectoral Impact Model Intercomparison Project, based on global impact models, and provides an overview of the participating models, input data, and scenario set-up. PMID:24344316
Ex Priori: Exposure-based Prioritization across Chemical Space
EPA's Exposure Prioritization (Ex Priori) is a simplified, quantitative visual dashboard that makes use of data from various inputs to provide a rank-ordered internalized dose metric. This complements other high-throughput screening by viewing exposures within all chemical space si...
Report of the Field and Laboratory Utilization Study Group. Appendix
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1975-12-01
These appendices (ERDA organization and management, summary of other resources, and FLU study considerations/inputs) provide detailed and quantitative information in support of the findings and recommendations presented in the report of the field and laboratory utilization study group. (RWR)
QFT Multi-Input, Multi-Output Design with Non-Diagonal, Non-Square Compensation Matrices
NASA Technical Reports Server (NTRS)
Hess, R. A.; Henderson, D. K.
1996-01-01
A technique for obtaining a non-diagonal compensator for the control of a multi-input, multi-output plant is presented. The technique, which uses Quantitative Feedback Theory, provides guaranteed stability and performance robustness in the presence of parametric uncertainty. An example is given involving the lateral-directional control of an uncertain model of a high-performance fighter aircraft in which redundant control effectors are in evidence, i.e. more control effectors than output variables are used.
Marra, Kristen R.; Charpentier, Ronald R.; Schenk, Christopher J.; Lewan, Michael D.; Leathers-Miller, Heidi M.; Klett, Timothy R.; Gaswirth, Stephanie B.; Le, Phuong A.; Mercier, Tracey J.; Pitman, Janet K.; Tennyson, Marilyn E.
2016-07-15
In 2015, the U.S. Geological Survey (USGS) released an updated assessment of undiscovered, technically recoverable shale gas and shale oil resources of the Mississippian Barnett Shale in north-central Texas (Marra and others, 2015). The Barnett Shale was assessed using the standard continuous (unconventional) methodology established by the USGS for two assessment units (AUs): (1) Barnett Continuous Gas AU, and (2) Barnett Mixed Continuous Gas and Oil AU. A third assessment unit, the Western Barnett Continuous Oil AU, was also defined but was not quantitatively assessed because of limited data within the extent of the AU. The purpose of this report is to provide supplemental documentation of the quantitative input parameters applied in the Barnett Shale assessment.
Industrial ecology: Quantitative methods for exploring a lower carbon future
NASA Astrophysics Data System (ADS)
Thomas, Valerie M.
2015-03-01
Quantitative methods for environmental and cost analyses of energy, industrial, and infrastructure systems are briefly introduced and surveyed, with the aim of encouraging broader utilization and development of quantitative methods in sustainable energy research. Material and energy flow analyses can provide an overall system overview. The methods of engineering economics and cost-benefit analysis, such as net present values, are the most straightforward approach for evaluating investment options, with the levelized cost of energy being a widely used metric in electricity analyses. Environmental lifecycle assessment has been extensively developed, with both detailed process-based and comprehensive input-output approaches available. Optimization methods provide an opportunity to go beyond engineering economics to develop detailed least-cost or least-impact combinations of many different choices.
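As a worked example of one metric named above, the levelized cost of energy divides discounted lifetime costs by discounted lifetime energy output; all plant numbers below are hypothetical.

```python
# Worked example of the levelized cost of energy (LCOE): discounted
# lifetime costs divided by discounted lifetime output.
def lcoe(capex, opex_per_yr, energy_per_yr_mwh, years, rate):
    disc = [(1 + rate) ** -t for t in range(1, years + 1)]
    costs = capex + sum(opex_per_yr * d for d in disc)
    energy = sum(energy_per_yr_mwh * d for d in disc)
    return costs / energy  # $/MWh

# Hypothetical plant: $1.2M capex, $30k/yr O&M, 8,000 MWh/yr, 25 yr, 7%.
print(f"LCOE = ${lcoe(1.2e6, 3.0e4, 8000.0, 25, 0.07):.2f}/MWh")
```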
Clinical application of a light-pen computer system for quantitative angiography
NASA Technical Reports Server (NTRS)
Alderman, E. L.
1975-01-01
The paper describes an angiographic analysis system which uses a video disk for recording and playback, a light-pen for data input, minicomputer processing, and an electrostatic printer/plotter for hardcopy output. The method is applied to quantitative analysis of ventricular volumes, sequential ventriculography for assessment of physiologic and pharmacologic interventions, analysis of instantaneous time sequence of ventricular systolic and diastolic events, and quantitation of segmental abnormalities. The system is shown to provide the capability for computation of ventricular volumes and other measurements from operator-defined margins by greatly reducing the tedium and errors associated with manual planimetry.
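For readers unfamiliar with the computation being automated, a standard single-plane area-length formula estimates ventricular volume from a traced silhouette; this classic formula is illustrative and is not necessarily the paper's exact algorithm or calibration.

```python
import math

# Standard single-plane "area-length" estimate of left-ventricular volume
# from a traced ventriculogram silhouette; inputs are hypothetical tracings.
def lv_volume_ml(area_cm2, long_axis_cm):
    return 8.0 * area_cm2 ** 2 / (3.0 * math.pi * long_axis_cm)

edv = lv_volume_ml(area_cm2=32.0, long_axis_cm=8.5)   # end-diastolic tracing
esv = lv_volume_ml(area_cm2=18.0, long_axis_cm=7.0)   # end-systolic tracing
print(f"EDV ~ {edv:.0f} ml, ESV ~ {esv:.0f} ml, EF ~ {(edv - esv) / edv:.0%}")
```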
Dyal, Jonathan; Akampurira, Andrew; Rhein, Joshua; Morawski, Bozena M; Kiggundu, Reuben; Nabeta, Henry W; Musubire, Abdu K; Bahr, Nathan C; Williams, Darlisha A; Bicanic, Tihana; Larsen, Robert A; Meya, David B; Boulware, David R
2016-05-01
Quantitative cerebrospinal fluid (CSF) cultures provide a measure of disease severity in cryptococcal meningitis. The fungal clearance rate by quantitative cultures has become a primary endpoint for phase II clinical trials. This study determined the inter-assay accuracy of three different quantitative culture methodologies. Among 91 participants with meningitis symptoms in Kampala, Uganda, during August-November 2013, 305 CSF samples were prospectively collected from patients at multiple time points during treatment. Samples were simultaneously cultured by three methods: (1) St. George's 100 mcl input volume of CSF with five 1:10 serial dilutions; (2) AIDS Clinical Trials Group (ACTG) method using 1000, 100, and 10 mcl input volumes, and two 1:100 dilutions with 100 and 10 mcl input volume per dilution, on seven agar plates; and (3) 10 mcl calibrated loop of undiluted and 1:100 diluted CSF (loop). Quantitative culture values did not statistically differ between the St. George and ACTG methods (P = .09) but did between the St. George and 10 mcl loop methods (P < .001). Repeated-measures pairwise correlation between any of the methods was high (r ≥ 0.88). For detecting sterility, the ACTG method had the highest negative predictive value of 97% (91% St. George, 60% loop), but the ACTG method had occasional (∼10%) difficulties in quantification due to colony clumping. For CSF clearance rate, the St. George and ACTG methods did not differ overall (mean -0.05 ± 0.07 log10 CFU/ml/day; P = .14) on a group level; however, individual-level clearance varied. The St. George and ACTG quantitative CSF culture methods produced comparable but not identical results. Quantitative cultures can inform treatment management strategies.
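The arithmetic behind such quantitative cultures is simple back-calculation from colony count, plated volume, and dilution; the numbers below are hypothetical, not data from the study.

```python
import math

# Back-calculating CFU/ml from a colony count, plated volume, and dilution;
# hypothetical numbers, not data from the study above.
def cfu_per_ml(colonies, plated_volume_ml, dilution_factor):
    return colonies * dilution_factor / plated_volume_ml

# e.g., 47 colonies from 100 mcl (0.1 ml) of a 1:100 dilution:
count = cfu_per_ml(colonies=47, plated_volume_ml=0.1, dilution_factor=100)
print(f"{count:.0f} CFU/ml; log10 = {math.log10(count):.2f}")  # clearance is tracked in log10 units
```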
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Tao; Tsui, Benjamin M. W.; Li, Xin
Purpose: The radioligand 11C-KR31173 has been introduced for positron emission tomography (PET) imaging of the angiotensin II subtype 1 receptor in the kidney in vivo. To study the biokinetics of 11C-KR31173 with a compartmental model, the input function is needed. Collection and analysis of arterial blood samples are the established approach to obtain the input function, but they are not feasible in patients with renal diseases. The goal of this study was to develop a quantitative technique that can provide an accurate image-derived input function (ID-IF) to replace the conventional invasive arterial sampling, and to test the method in pigs with the goal of translation into human studies. Methods: The experimental animals were injected with 11C-KR31173 and scanned up to 90 min with dynamic PET. Arterial blood samples were collected for the artery-derived input function (AD-IF) and used as a gold standard for the ID-IF. Before PET, magnetic resonance angiography of the kidneys was obtained to provide the anatomical information required for derivation of the recovery coefficients in the abdominal aorta, a requirement for partial volume correction of the ID-IF. Different image reconstruction methods, filtered back projection (FBP) and ordered subset expectation maximization (OS-EM), were investigated for the best trade-off between bias and variance of the ID-IF. The effects of kidney uptake on the quantitative accuracy of the ID-IF were also studied. Biological variables such as red blood cell binding and radioligand metabolism were also taken into consideration. A single blood sample was used for calibration in the later phase of the input function. Results: In the first 2 min after injection, the OS-EM based ID-IF was found to be biased, and the bias was found to be induced by the kidney uptake. No such bias was found with the FBP based image reconstruction method. However, the OS-EM based image reconstruction was found to reduce variance in the subsequent phase of the ID-IF. The combined use of FBP and OS-EM resulted in reduced bias and noise. After performing all the necessary corrections, the areas under the curves (AUCs) of the ID-IF were close to those of the AD-IF (average AUC ratio = 1 ± 0.08) during the early phase. When applied in a two-tissue-compartmental kinetic model, the average difference between the estimated model parameters from the ID-IF and AD-IF was 10%, which was within the error of the estimation method. Conclusions: The bias of radioligand concentration in the aorta from the OS-EM image reconstruction is significantly affected by radioligand uptake in the adjacent kidney and cannot be neglected for quantitative evaluation. With careful calibrations and corrections, the ID-IF derived from quantitative dynamic PET images can be used as the input function of the compartmental model to quantify the renal kinetics of 11C-KR31173 in experimental animals, and the authors intend to evaluate this method in future human studies.
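The partial-volume step described above can be sketched as dividing the measured aortic curve by a recovery coefficient and rescaling with the single late blood sample; the curve, RC value, and sample value below are all hypothetical.

```python
import numpy as np

# Hypothetical aortic time-activity curve, corrected by an assumed recovery
# coefficient (RC) and calibrated with one late blood sample.
rc = 0.62                                    # assumed RC from MR-measured aorta size
t = np.arange(0, 90, 1.0)                    # minutes
measured = 5.0 * t * np.exp(-t / 3.0) + 0.1  # hypothetical image-derived curve
corrected = measured / rc                    # partial-volume correction

late_sample = 0.18                           # single blood sample at t = 60 min
scale = late_sample / corrected[t == 60][0]  # single-sample calibration
id_if = corrected * scale
print(f"calibration scale factor: {scale:.3f}")
```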
Quantitative myocardial perfusion from static cardiac and dynamic arterial CT
NASA Astrophysics Data System (ADS)
Bindschadler, Michael; Branch, Kelley R.; Alessio, Adam M.
2018-05-01
Quantitative myocardial blood flow (MBF) estimation by dynamic contrast enhanced cardiac computed tomography (CT) requires multi-frame acquisition of contrast transit through the blood pool and myocardium to inform the arterial input and tissue response functions. Both the input and the tissue response functions for the entire myocardium are sampled with each acquisition. However, the long breath holds and frequent sampling can result in significant motion artifacts and relatively high radiation dose. To address these limitations, we propose and evaluate a new static cardiac and dynamic arterial (SCDA) quantitative MBF approach where (1) the input function is well sampled, either predicted from pre-scan timing bolus data or measured from dynamic thin-slice ‘bolus tracking’ acquisitions, and (2) the whole-heart tissue response data is limited to one contrast enhanced CT acquisition. A perfusion model uses the dynamic arterial input function to generate a family of possible myocardial contrast enhancement curves corresponding to a range of MBF values. Combined with the timing of the single whole-heart acquisition, these curves generate a lookup table relating myocardial contrast enhancement to quantitative MBF. We tested the SCDA approach in 28 patients who underwent a full dynamic CT protocol at both rest and vasodilator stress conditions. Using the measured input function plus single (enhanced CT only) or double (enhanced plus contrast-free baseline CT) myocardial acquisitions yielded MBF estimates with root mean square (RMS) error of 1.2 ml/min/g and 0.35 ml/min/g, and radiation dose reductions of 90% and 83%, respectively. The prediction of the input function based on timing bolus data and the static acquisition had an RMS error, compared to the measured input function, of 26.0%, which led to MBF estimation errors more than threefold higher than those obtained using the measured input function. SCDA presents a new, simplified approach for quantitative perfusion imaging with an acquisition strategy offering substantial radiation dose and computational complexity savings over dynamic CT.
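The lookup-table idea can be illustrated under a deliberately simple retention (no-washout) kinetic assumption, which is not the paper's model: generate enhancement-versus-MBF values from a sampled input function, then invert the single static measurement.

```python
import numpy as np

# SCDA-style lookup sketch under a retention (no-washout) assumption; the
# AIF, acquisition time, and measured enhancement are all hypothetical.
dt = 0.5                                        # s
t = np.arange(0, 40, dt)
aif = 300.0 * (t / 8.0) * np.exp(1 - t / 8.0)   # hypothetical gamma-variate AIF (HU)

def enhancement_at(t_acq_s, mbf_ml_min_g):
    f = mbf_ml_min_g / 60.0                     # ml/s/g
    i = int(t_acq_s / dt)
    return f * np.cumsum(aif)[i] * dt           # retention model: flow x integrated AIF

flows = np.linspace(0.3, 4.0, 75)               # candidate MBF values (ml/min/g)
lookup = np.array([enhancement_at(30.0, f) for f in flows])

measured_hu = 55.0                              # hypothetical myocardial enhancement
mbf = np.interp(measured_hu, lookup, flows)     # invert the lookup table
print(f"estimated MBF ~ {mbf:.2f} ml/min/g")
```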
Geisler, B P; Widerberg, K F; Berghöfer, A; Willich, S N
2010-01-01
This paper aims to identify existing, and develop new, concepts of organization, management, and leadership at a large European university hospital, and to evaluate whether mixed qualitative-quantitative methods with both internal and external input can provide helpful views of the possible future of large health care providers. Using the Delphi method in semi-structured, semi-quantitative interviews, with managers and employees as experts, the authors performed a vertical and a horizontal internal analysis. In addition, input from innovative faculties in other countries was obtained through structured power questions. These two sources were used to create three final scenarios, which were evaluated using traditional strategic planning methods. The result is a collaboration scenario, in which faculty and hospital are separated; a split scenario, in which the organization is divided into three independent hospitals; and a corporation scenario, in which corporate activities are bundled in three separate entities. In complex mergers of knowledge-driven organizations, an organization's own employees (in addition to external consultants) might be tapped as a knowledge resource for successful future business models. The paper uses a real-world consulting case to present a new set of methods for strategic planning in large health care provider organizations.
Wotton, Karl R; Jiménez-Guri, Eva; Crombach, Anton; Janssens, Hilde; Alcaine-Colet, Anna; Lemke, Steffen; Schmidt-Ott, Urs; Jaeger, Johannes
2015-01-01
The segmentation gene network in insects can produce equivalent phenotypic outputs despite differences in upstream regulatory inputs between species. We investigate the mechanistic basis of this phenomenon through a systems-level analysis of the gap gene network in the scuttle fly Megaselia abdita (Phoridae). It combines quantification of gene expression at high spatio-temporal resolution with systematic knock-downs by RNA interference (RNAi). Initiation and dynamics of gap gene expression differ markedly between M. abdita and Drosophila melanogaster, while the output of the system converges to equivalent patterns at the end of the blastoderm stage. Although the qualitative structure of the gap gene network is conserved, there are differences in the strength of regulatory interactions between species. We term such network rewiring ‘quantitative system drift’. It provides a mechanistic explanation for the developmental hourglass model in the dipteran lineage. Quantitative system drift is likely to be a widespread mechanism for developmental evolution. DOI: http://dx.doi.org/10.7554/eLife.04785.001 PMID:25560971
Target-Based Maintenance of Privacy Preserving Association Rules
ERIC Educational Resources Information Center
Ahluwalia, Madhu V.
2011-01-01
In the context of association rule mining, the state-of-the-art in privacy preserving data mining provides solutions for categorical and Boolean association rules but not for quantitative association rules. This research fills this gap by describing a method based on discrete wavelet transform (DWT) to protect input data privacy while preserving…
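As an illustration of wavelet-based perturbation only (not the dissertation's exact method), zeroing DWT detail coefficients hides individual values while preserving aggregate structure:

```python
import numpy as np
import pywt

# Wavelet perturbation sketch: zeroing the Haar detail coefficients replaces
# each pair of values with its mean, hiding individual records.
values = np.array([23.0, 25.0, 31.0, 40.0, 41.0, 44.0, 52.0, 60.0])
approx, detail = pywt.dwt(values, "haar")
perturbed = pywt.idwt(approx, np.zeros_like(detail), "haar")
print(np.round(perturbed, 1))   # pairwise means: individual values are masked
```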
ERIC Educational Resources Information Center
Levitt-Merin, Marta; Sutter, Sharon Kingdon
This final report provides a descriptive overview of three approaches which the Hawaii Demonstration Project initiated to reduce unintended teenage pregnancies. Project evaluation findings are summarized; both qualitative and quantitative data are presented for a comprehensive picture of the project and its input. Project limitations and successes…
Factors Facilitating Implicit Learning: The Case of the Sesotho Passive
ERIC Educational Resources Information Center
Kline, Melissa; Demuth, Katherine
2010-01-01
Researchers have long debated the mechanisms underlying the learning of syntactic structure. Of significant interest has been the fact that passive constructions appear to be learned earlier in Sesotho than English. This paper provides a comprehensive, quantitative analysis of the passive input Sesotho-speaking children hear, how it differs from…
An open tool for input function estimation and quantification of dynamic PET FDG brain scans.
Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro
2016-08-01
Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. The main contribution of this article is the development of an open-source, free to use tool that encapsulates several well-known methods for the estimation of the input function and the quantification of dynamic PET FDG studies. Some alternative strategies are also proposed and implemented in the tool for the segmentation of blood pools and parameter estimation. The tool was tested on phantoms with encouraging results that suggest that even bloodless estimators could provide a viable alternative to blood sampling for quantification using graphical analysis. The open tool is a promising opportunity for collaboration among investigators and further validation on real studies.
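Once an input function is available, graphical analysis can yield a quantitative influx map. A minimal Patlak sketch on synthetic, noiseless data (irreversible uptake assumed):

```python
import numpy as np

# Patlak graphical analysis sketch: recover the influx constant Ki from a
# plasma input function and a tissue curve. Synthetic, noiseless data.
t = np.linspace(0.5, 60, 120)                   # min
cp = 100.0 * np.exp(-t / 20.0) + 5.0            # hypothetical plasma input (kBq/ml)
ki_true, v0 = 0.03, 0.4
integral_cp = np.cumsum(cp) * (t[1] - t[0])
ct = ki_true * integral_cp + v0 * cp            # tissue curve consistent with Patlak

x = integral_cp / cp                            # "normalized time"
y = ct / cp
late = x > 10                                   # Patlak plot is linear at late times
ki, intercept = np.polyfit(x[late], y[late], 1)
print(f"recovered Ki = {ki:.4f} /min (true {ki_true})")
```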
NASA Technical Reports Server (NTRS)
Partridge, William P.; Laurendeau, Normand M.
1997-01-01
We have experimentally assessed the quantitative nature of planar laser-induced fluorescence (PLIF) measurements of NO concentration in a unique atmospheric pressure, laminar, axial inverse diffusion flame (IDF). The PLIF measurements were assessed relative to a two-dimensional array of separate laser saturated fluorescence (LSF) measurements. We demonstrated and evaluated several experimentally-based procedures for enhancing the quantitative nature of PLIF concentration images. Because these experimentally-based PLIF correction schemes require only the ability to make PLIF and LSF measurements, they produce a more broadly applicable PLIF diagnostic compared to numerically-based correction schemes. We experimentally assessed the influence of interferences on both narrow-band and broad-band fluorescence measurements at atmospheric and high pressures. Optimum excitation and detection schemes were determined for the LSF and PLIF measurements. Single-input and multiple-input, experimentally-based PLIF enhancement procedures were developed for application in test environments with both negligible and significant quench-dependent error gradients. Each experimentally-based procedure provides an enhancement of approximately 50% in the quantitative nature of the PLIF measurements, and results in concentration images nominally as quantitative as LSF point measurements. These correction procedures can be applied to other species, including radicals, for which no experimental data are available from which to implement numerically-based PLIF enhancement procedures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmittroth, F.
1979-09-01
A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples.
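FERRET's core is least squares with a full covariance treatment; a minimal generalized-least-squares sketch on a toy linear model (not FERRET's input format):

```python
import numpy as np

# Generalized least squares: combine measurements y with covariance V to
# estimate parameters x of y = A x, propagating uncertainty to the result.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])   # design matrix
y = np.array([1.1, 1.9, 3.2])                        # measurements
V = np.diag([0.04, 0.04, 0.09])                      # measurement covariance

Vinv = np.linalg.inv(V)
cov_x = np.linalg.inv(A.T @ Vinv @ A)                # parameter covariance
x_hat = cov_x @ A.T @ Vinv @ y
print("estimate:", x_hat)
print("1-sigma uncertainties:", np.sqrt(np.diag(cov_x)))
```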
The application of remote sensing to the development and formulation of hydrologic planning models
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L., Jr.; Fowler, T. R.
1976-01-01
A hydrologic planning model is developed based on remotely sensed inputs. Data from LANDSAT 1 are used to supply the model's quantitative parameters and coefficients. The use of LANDSAT data as information input to all categories of hydrologic models requiring quantitative surface parameters for their effective functioning is also investigated.
Rurkhamet, Busagarin; Nanthavanij, Suebsak
2004-12-01
One important factor that leads to the development of musculoskeletal disorders (MSD) and cumulative trauma disorders (CTD) among visual display terminal (VDT) users is their work posture. While operating a VDT, a user's body posture is strongly influenced by the task, VDT workstation settings, and layout of computer accessories. This paper presents an analytic and rule-based decision support tool called EQ-DeX (an ergonomics and quantitative design expert system) that is developed to provide valid and practical recommendations regarding the adjustment of a VDT workstation and the arrangement of computer accessories. The paper explains the structure and components of EQ-DeX, input data, rules, and adjustment and arrangement algorithms. From input information such as gender, age, body height, task, etc., EQ-DeX uses analytic and rule-based algorithms to estimate quantitative settings of a computer table and a chair, as well as locations of computer accessories such as monitor, document holder, keyboard, and mouse. With the input and output screens that are designed using the concept of usability, the interactions between the user and EQ-DeX are convenient. Examples are also presented to demonstrate the recommendations generated by EQ-DeX.
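A toy rule-based recommender in the spirit of EQ-DeX is sketched below; the anthropometric ratios are illustrative assumptions, not the system's validated rules.

```python
# Not EQ-DeX itself: a toy rule-based recommender in its spirit. The
# anthropometric ratios below are illustrative assumptions only.
def recommend_settings(body_height_cm: float) -> dict:
    return {
        "seat_height_cm": round(0.25 * body_height_cm, 1),    # ~popliteal height
        "table_height_cm": round(0.40 * body_height_cm, 1),   # ~seated elbow height
        "monitor_top_cm": round(0.67 * body_height_cm, 1),    # ~seated eye height
    }

print(recommend_settings(170.0))
```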
Hamood, Albert W.; Haddad, Sara A.; Otopalik, Adriane G.; Rosenbaum, Philipp
2015-01-01
The crustacean stomatogastric ganglion (STG) receives descending neuromodulatory inputs from three anterior ganglia: the paired commissural ganglia (CoGs) and the single esophageal ganglion (OG). In this paper, we provide the first detailed and quantitative analyses of the short- and long-term effects of removal of these descending inputs (decentralization) on the pyloric rhythm of the STG. Thirty minutes after decentralization, the mean frequency of the pyloric rhythm dropped from 1.20 Hz in control to 0.52 Hz. Whereas the relative phase of pyloric neuron activity was approximately constant across frequency in the controls, after decentralization this changed markedly. Nine control preparations kept for 5–6 d in vitro maintained pyloric rhythm frequencies close to their initial values. Nineteen decentralized preparations kept for 5–6 d dropped slightly in frequency from those seen at 30 min following decentralization, but then displayed stable activity over 6 d. Bouts of higher frequency activity were intermittently seen in both control and decentralized preparations, but the bouts began earlier and were more frequent in the decentralized preparations. Although the bouts may indicate that the removal of the modulatory inputs triggered changes in neuronal excitability, these changes did not produce obvious long-lasting changes in the frequency of the decentralized preparations. PMID:25914899
A sensitivity analysis of regional and small watershed hydrologic models
NASA Technical Reports Server (NTRS)
Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.
1975-01-01
Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurately remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.
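A one-at-a-time perturbation sensitivity, as used conceptually in such analyses, can be sketched with a stand-in model; the model and tolerances here are hypothetical.

```python
# One-at-a-time sensitivity sketch: perturb each input of a model by 10%
# and record the normalized effect on output. The model is a toy stand-in
# for a watershed simulation, not the study's actual models.
def model(params):
    return params["precip"] * params["runoff_coeff"] - params["infiltration"]

base = {"precip": 100.0, "runoff_coeff": 0.4, "infiltration": 5.0}
y0 = model(base)
for name in base:
    perturbed = dict(base, **{name: base[name] * 1.10})  # +10% input change
    s = ((model(perturbed) - y0) / y0) / 0.10            # relative sensitivity
    print(f"{name}: sensitivity = {s:+.2f}")
```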
Detection and segmentation of multiple touching product inspection items
NASA Astrophysics Data System (ADS)
Casasent, David P.; Talukder, Ashit; Cox, Westley; Chang, Hsuan-Ting; Weber, David
1996-12-01
X-ray images of pistachio nuts on conveyor trays for product inspection are considered. The first step in such a processor is to locate each individual item and place it in a separate file for input to a classifier to determine the quality of each nut. This paper considers new techniques to: detect each item (each nut can be in any orientation; we employ new rotation-invariant filters to locate each item independent of its orientation), produce separate image files for each item [a new blob coloring algorithm provides this for isolated (non-touching) input items], segment touching or overlapping input items into separate image files (we use a morphological watershed transform to achieve this), and remove the shell by morphological processing to produce an image of only the nutmeat. Each of these operations and algorithms is detailed, and quantitative data for each are presented for the x-ray image nut inspection problem noted. These techniques are of general use in many different product inspection problems in agriculture and other areas.
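The touching-item step can be illustrated with a distance-transform watershed on a synthetic pair of merged blobs; this is a generic sketch, not the paper's exact pipeline.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Distance-transform watershed on two synthetic touching blobs.
img = np.zeros((80, 120), bool)
rr, cc = np.ogrid[:80, :120]
img |= (rr - 40) ** 2 + (cc - 45) ** 2 < 20 ** 2    # blob 1
img |= (rr - 40) ** 2 + (cc - 75) ** 2 < 20 ** 2    # blob 2, touching blob 1

dist = ndimage.distance_transform_edt(img)
peaks = peak_local_max(dist, min_distance=15, num_peaks=2)  # one seed per item
markers = np.zeros(img.shape, int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-dist, markers, mask=img)
print("objects found:", labels.max())               # -> 2
```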
Motion-gated acquisition for in vivo optical imaging
Gioux, Sylvain; Ashitate, Yoshitomo; Hutteman, Merlijn; Frangioni, John V.
2009-01-01
Wide-field continuous wave fluorescence imaging, fluorescence lifetime imaging, frequency domain photon migration, and spatially modulated imaging have the potential to provide quantitative measurements in vivo. However, most of these techniques have not yet been successfully translated to the clinic due to challenging environmental constraints. In many circumstances, cardiac and respiratory motion greatly impair image quality and/or quantitative processing. To address this fundamental problem, we have developed a low-cost, field-programmable gate array–based, hardware-only gating device that delivers a phase-locked acquisition window of arbitrary delay and width that is derived from an unlimited number of pseudo-periodic and nonperiodic input signals. All device features can be controlled manually or via USB serial commands. The working range of the device spans the extremes of mouse electrocardiogram (1000 beats per minute) to human respiration (4 breaths per minute), with timing resolution ≤0.06%, and jitter ≤0.008%, of the input signal period. We demonstrate the performance of the gating device, including dramatic improvements in quantitative measurements, in vitro using a motion simulator and in vivo using near-infrared fluorescence angiography of a beating pig heart. This gating device should help to enable the clinical translation of promising new optical imaging technologies. PMID:20059276
2015-10-20
From 2000 to 2011, the U.S. Geological Survey conducted 139 quantitative assessments of continuous (unconventional) oil and gas accumulations within the United States. This report documents those assessments more fully than previously done by providing detailed documentation of both the assessment input and output. This report also compiles the data into spreadsheet tables that can be more readily used to provide analogs for future assessments, especially for hypothetical continuous accumulations.
An analysis of the input-output properties of neuroprosthetic hand grasps.
Memberg, W D; Crago, P E
2000-01-01
We measured the input-output properties of the hand grasps of 14 individuals with tetraplegia at the C5/C6 level who had received an implanted upper limb neuroprosthesis. The data provide a quantitative description of grasp-opening and grasp-force control with neuroprosthetic hand grasp systems. Static properties were estimated by slowly ramping the command (input) from 0 to 100%. A hand-held sensor monitored the outputs: grasp force and grasp opening. Trials were performed at different wrist positions, with two different-sized objects being held, and with both grasp modes (lateral and palmar grasps). Larger forces were produced when grasping larger objects, and greater opening was achieved with the wrist in flexion. Although active grasp force increased with wrist extension, the increase was not statistically significant. Lateral grasp produced larger forces than the palmar grasp. The command range can be divided into a portion that controls grasp opening and a portion that controls grasp force. The portion controlling force increased with spacer size, but did not depend significantly on grasp mode or wrist position. The force-command relationships were more linear than the position-command relationships. Grasp opening decreased significantly over a one-year period, while no significant change in grasp force was observed. These quantitative descriptions of neuroprosthetic hand grasps under varying conditions provide useful information about output capabilities that can be used to gauge the effectiveness of different control schemes and to design future control systems.
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L., Jr.; Fowler, T. R.
1977-01-01
Methods for the reduction of remotely sensed data and its application in hydrologic land use assessment, surface water inventory, and soil property studies are presented. LANDSAT data is used to provide quantitative parameters and coefficients to construct watershed transfer functions for a hydrologic planning model aimed at estimating peak outflow from rainfall inputs.
ERIC Educational Resources Information Center
O'Horo, Neal O.
2013-01-01
The purpose of this quantitative survey study was to test the Leontief input/output theory relating the input of IT certification to the output of English-speaking U.S. human resource professionals' perceived IT professional job performance. Participants (N = 104) rated their perceptions of IT-certified vs. non-IT-certified professionals' job…
Family medicine outpatient encounters are more complex than those of cardiology and psychiatry.
Katerndahl, David; Wood, Robert; Jaén, Carlos Roberto
2011-01-01
Comparison studies suggest that the guideline-concordant care provided for specific medical conditions is less optimal in primary care compared with cardiology and psychiatry settings. The purpose of this study is to estimate the relative complexity of patient encounters in general/family practice, cardiology, and psychiatry settings. Secondary analysis of the 2000 National Ambulatory Medical Care Survey data for ambulatory patients seen in general/family practice, cardiology, and psychiatry settings was performed. The complexity for each variable was estimated as the quantity weighted by variability and diversity. There is minimal difference in the unadjusted input and total encounter complexity of general/family practice and cardiology; psychiatry's input is less complex. Cardiology encounters involved more input quantitatively, but the diversity of general/family practice input eliminated the difference. Cardiology also involved more complex output. However, when the duration of visit is factored in, the complexity of care provided per hour in general/family practice is 33% more relative to cardiology and 5 times more relative to psychiatry. Care during family physician visits is more complex per hour than the care during visits to cardiologists or psychiatrists. This may account for a lower rate of completion of process items measured for quality of care.
On the usage of ultrasound computational models for decision making under ambiguity
NASA Astrophysics Data System (ADS)
Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron
2018-04-01
Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantify the differences using maximum amplitude and power spectrum density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.
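The two comparison metrics named above, maximum amplitude and power spectral density, can be sketched on a simulated versus "measured" waveform; the sampling rate and signals are hypothetical.

```python
import numpy as np
from scipy import signal

# Maximum-amplitude and PSD comparison of a simulated vs. measured waveform;
# both signals here are synthetic stand-ins.
fs = 25.0e6                                   # 25 MHz sampling (assumed)
t = np.arange(0, 20e-6, 1 / fs)
measured = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 5e-6) / 2e-6) ** 2)
simulated = 0.9 * measured + 0.02 * np.random.default_rng(1).normal(size=t.size)

amp_diff = abs(measured.max() - simulated.max()) / measured.max()
f, p_meas = signal.welch(measured, fs=fs, nperseg=128)
_, p_sim = signal.welch(simulated, fs=fs, nperseg=128)
psd_rms = np.sqrt(np.mean((p_meas - p_sim) ** 2)) / p_meas.max()
print(f"max-amplitude diff: {amp_diff:.1%}, normalized PSD RMS: {psd_rms:.3g}")
```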
GeLC-MRM quantitation of mutant KRAS oncoprotein in complex biological samples.
Halvey, Patrick J; Ferrone, Cristina R; Liebler, Daniel C
2012-07-06
Tumor-derived mutant KRAS (v-Ki-ras-2 Kirsten rat sarcoma viral oncogene) oncoprotein is a critical driver of cancer phenotypes and a potential biomarker for many epithelial cancers. Targeted mass spectrometry analysis by multiple reaction monitoring (MRM) enables selective detection and quantitation of wild-type and mutant KRAS proteins in complex biological samples. A recently described immunoprecipitation approach (Proc. Natl. Acad. Sci. 2011, 108, 2444-2449) can be used to enrich KRAS for MRM analysis, but requires large protein inputs (2-4 mg). Here, we describe sodium dodecyl sulfate-polyacrylamide gel electrophoresis-based enrichment of KRAS in a low molecular weight (20-25 kDa) protein fraction prior to MRM analysis (GeLC-MRM). This approach reduces background proteome complexity, thus allowing mutant KRAS to be reliably quantified in low protein inputs (5-50 μg). GeLC-MRM detected KRAS mutant variants (G12D, G13D, G12V, G12S) in a panel of cancer cell lines. GeLC-MRM analysis of wild-type and mutant KRAS was linear with respect to protein input and showed low variability across process replicates (CV = 14%). Concomitant analysis of a peptide from the highly similar HRAS and NRAS proteins enabled correction of KRAS-targeted measurements for contributions from these other proteins. KRAS peptides were also quantified in fluid from benign pancreatic cysts and pancreatic cancers at concentrations from 0.08 to 1.1 fmol/μg protein. GeLC-MRM provides a robust, sensitive approach to quantitation of mutant proteins in complex biological samples.
ERIC Educational Resources Information Center
Chapman, Jarrett Michael
2014-01-01
Teachers are expected to improve their planning, instruction, and assessment as they progress through their career. An important component to teachers knowing what to modify in their teaching style is being able to solicit meaningful feedback from students. This mixed-methods study was conducted to provide teachers with a quantitative method to…
Software for Automated Image-to-Image Co-registration
NASA Technical Reports Server (NTRS)
Benkelman, Cody A.; Hughes, Heidi
2007-01-01
The project objectives are: a) Develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) Create a reusable software development kit (SDK) to enable incorporation of these tools into other software; c) Provide automated testing for quantitative analysis; and d) Develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.
Segmentation and learning in the quantitative analysis of microscopy images
NASA Astrophysics Data System (ADS)
Ruggiero, Christy; Ross, Amy; Porter, Reid
2015-02-01
In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.
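A minimal sketch of learning from operator input: train a pixel classifier on a few labeled pixels and apply it to the rest of the image. The features and labels here are toy stand-ins.

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

# Train a pixel classifier on a handful of operator-labeled pixels, then
# segment the whole image; features are just intensity and a local mean.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64))
img[20:40, 20:40] += 0.4                      # the "object" of interest

feats = np.stack([img, ndimage.uniform_filter(img, 5)], -1).reshape(-1, 2)
labels = np.full(img.size, -1)                # -1 = unlabeled
clicks = np.ravel_multi_index(
    ([30, 25, 35, 5, 60, 30], [30, 25, 35, 5, 60, 5]), img.shape)
labels[clicks] = [1, 1, 1, 0, 0, 0]           # operator scribbles: 1=object, 0=background

known = labels >= 0
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(feats[known], labels[known])
mask = clf.predict(feats).reshape(img.shape)
print(f"segmented object pixels: {int(mask.sum())}")
```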
Assessment of input-output properties and control of neuroprosthetic hand grasp.
Hines, A E; Owens, N E; Crago, P E
1992-06-01
Three tests have been developed to evaluate rapidly and quantitatively the input-output properties and patient control of neuroprosthetic hand grasp. Each test utilizes a visual pursuit tracking task during which the subject controls the grasp force and grasp opening (position) of the hand. The first test characterizes the static input-output properties of the hand grasp, where the input is a slowly changing patient-generated command signal and the outputs are grasp force and grasp opening. Nonlinearities and inappropriate slopes have been documented in these relationships, and in some instances the need for system retuning has been indicated. For each subject, larger grasp forces were produced when grasping larger objects, and for some subjects the shapes of the relationships also varied with object size. The second test quantifies the ability of the subject to control the hand grasp outputs while tracking steps and ramps. Neuroprosthesis users had rms errors two to three times larger when tracking steps versus ramps, and had rms errors four to five times larger than normals when tracking ramps. The third test provides an estimate of the frequency response of the hand grasp system dynamics, from input and output data collected during a random tracking task. Transfer functions were estimated by spectral analysis after removal of the static input-output nonlinearities measured in the first test. The dynamics had low-pass filter characteristics with 3 dB cutoff frequencies from 1.0 to 1.4 Hz. The tests developed in this study provide a rapid evaluation of both the system and the user. They provide information to 1) help interpret subject performance of functional tasks, 2) evaluate the efficacy of system features such as closed-loop control, and 3) screen the neuroprosthesis to indicate the need for retuning.
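The third test's spectral method can be sketched as the H1 estimator H(f) = Pxy/Pxx on random-tracking data; the first-order dynamics below stand in for a hand-grasp system with an assumed 1.2 Hz cutoff.

```python
import numpy as np
from scipy import signal

# H1 transfer-function estimate from random-tracking data: H(f) = Pxy/Pxx.
# The first-order filter stands in for hand-grasp dynamics (assumed 1.2 Hz).
fs = 50.0
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(2)
command = rng.normal(size=t.size)                   # random command input
b, a = signal.butter(1, 1.2, fs=fs)                 # stand-in grasp dynamics
force = signal.lfilter(b, a, command) + 0.05 * rng.normal(size=t.size)

f, pxy = signal.csd(command, force, fs=fs, nperseg=1024)
_, pxx = signal.welch(command, fs=fs, nperseg=1024)
gain = np.abs(pxy / pxx)
f3db = f[np.argmin(np.abs(gain - gain[0] / np.sqrt(2)))]
print(f"estimated 3 dB cutoff ~ {f3db:.2f} Hz")
```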
Getting quantitative about consequences of cross-ecosystem resource subsidies on recipient consumers
Richardson, John S.; Wipfli, Mark S.
2016-01-01
Most studies of cross-ecosystem resource subsidies have demonstrated positive effects on recipient consumer populations, often with very large effect sizes. However, it is important to move beyond these initial addition–exclusion experiments to consider the quantitative consequences for populations across gradients in the rates and quality of resource inputs. In our introduction to this special issue, we describe at least four potential models that describe functional relationships between subsidy input rates and consumer responses, most of them asymptotic. Here we aim to advance our quantitative understanding of how subsidy inputs influence recipient consumers and their communities. In the papers following, fish were either the recipient consumers or the subsidy as carcasses of anadromous species. Advancing general, predictive models will enable us to further consider what other factors are potentially co-limiting (e.g., nutrients, other population interactions, physical habitat, etc.) and better integrate resource subsidies into consumer–resource, biophysical dynamics models.
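One of the asymptotic forms alluded to above, written out as a saturating (Michaelis-Menten-type) consumer response; parameter values are illustrative.

```python
# A saturating consumer response to subsidy input rate S -- one of the
# asymptotic functional forms discussed above; parameters are illustrative.
def consumer_response(s, r_max=10.0, k_half=2.0):
    """Rises ~linearly at low input rates and saturates at r_max."""
    return r_max * s / (k_half + s)

for s in (0.5, 2.0, 8.0, 32.0):
    print(f"input {s:5.1f} -> response {consumer_response(s):5.2f}")
```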
Marder, Eve
2015-01-01
For decades, the episodic gastric rhythm of the crustacean stomatogastric nervous system (STNS) has served as an important model system for understanding the generation of rhythmic motor behaviors. Here we quantitatively describe many features of the gastric rhythm of the crab Cancer borealis under several conditions. First, we analyzed spontaneous gastric rhythms produced by freshly dissected preparations of the STNS, including the cycle frequency and phase relationships among gastric units. We find that phase is relatively conserved across frequency, similar to the pyloric rhythm. We also describe relationships between these two rhythms, including a significant gastric/pyloric frequency correlation. We then performed continuous, days-long extracellular recordings of gastric activity from preparations of the STNS in which neuromodulatory inputs to the stomatogastric ganglion were left intact and also from preparations in which these modulatory inputs were cut (decentralization). This allowed us to provide quantitative descriptions of variability and phase conservation within preparations across time. For intact preparations, gastric activity was more variable than pyloric activity but remained relatively stable across 4–6 days, and many significant correlations were found between phase and frequency within animals. Decentralized preparations displayed fewer episodes of gastric activity, with altered phase relationships, lower frequencies, and reduced coordination both among gastric units and between the gastric and pyloric rhythms. Together, these results provide insight into the role of neuromodulation in episodic pattern generation and the extent of animal-to-animal variability in features of spontaneously occurring gastric rhythms. PMID:26156388
Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J
2011-09-01
When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel.
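A toy sympy sketch of the idea, far simpler than the paper's algorithm: for dx/dt = -(a+b)x with y = x, the input-output equation dy/dt + (a+b)y = 0 has a+b as its only coefficient, and a Gröbner basis of the coefficient-matching equations exposes that identifiable combination.

```python
import sympy as sp

# Model dx/dt = -(a + b) x, y = x. Two parameter sets (a, b) and (ah, bh)
# produce identical input-output data iff the I/O coefficients match.
a, b, ah, bh = sp.symbols("a b ah bh", positive=True)
coeff_match = [(ah + bh) - (a + b)]          # same coefficient of y in dy/dt
G = sp.groebner(coeff_match, ah, bh, order="lex")
print(G.exprs)                               # [ah + bh - a - b]: only a + b is pinned down
print(sp.solve(coeff_match, ah, dict=True))  # ah = a + b - bh: a one-parameter family
```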
Huang, Chi-Cheng; Wu, Chun-Hu; Huang, Ya-Yao; Tzen, Kai-Yuan; Chen, Szu-Fu; Tsai, Miao-Ling; Wu, Hsiao-Ming
2017-04-01
Performing quantitative small-animal PET with an arterial input function has been considered technically challenging. Here, we introduce a catheterization procedure that keeps a rat physiologically stable for 1.5 mo. We demonstrated the feasibility of quantitative small-animal 18F-FDG PET in rats by performing it repeatedly to monitor the time course of variations in the cerebral metabolic rate of glucose (CMRglc). Methods: Aseptic surgery was performed on 2 rats. Each rat underwent catheterization of the right femoral artery and left femoral vein. The catheters were sealed with microinjection ports and then implanted subcutaneously. Over the next 3 wk, each rat underwent 18F-FDG quantitative small-animal PET 6 times. The CMRglc of each brain region was calculated using a 3-compartment model and an operational equation that included a k4*. Results: On 6 mornings, we completed 12 18F-FDG quantitative small-animal PET studies on 2 rats. The rats grew steadily before and after the 6 quantitative small-animal PET studies. The CMRglc of the conscious brain (e.g., right parietal region, 99.6 ± 10.2 μmol/100 g/min; n = 6) was comparable to that for 14C-deoxyglucose autoradiographic methods. Conclusion: Maintaining good blood patency in catheterized rats is not difficult. Longitudinal quantitative small-animal PET imaging with an arterial input function can be performed routinely.
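For context, the standard FDG operational relation, omitting the k4* term the study includes, is CMRglc = (Cp/LC)·K1·k3/(k2+k3); the rate constants and lumped constant below are hypothetical.

```python
# Standard FDG relation behind such calculations, in simplified form:
# CMRglc = (Cp / LC) * K1*k3 / (k2 + k3). The study's operational equation
# additionally includes a k4* term, omitted here.
def cmr_glc(cp_umol_ml, k1, k2, k3, lumped_constant=0.48):
    return (cp_umol_ml / lumped_constant) * (k1 * k3) / (k2 + k3)

# Hypothetical rate constants (1/min) and plasma glucose (6 mmol/l = 6 umol/ml):
rate = cmr_glc(cp_umol_ml=6.0, k1=0.15, k2=0.25, k3=0.08)
print(f"CMRglc ~ {rate * 100:.0f} umol/100 g/min")
```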
Hybrid, experimental and computational, investigation of mechanical components
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
1996-07-01
Computational and experimental methodologies have unique features for the analysis and solution of a wide variety of engineering problems. Computations provide results that depend on selection of input parameters such as geometry, material constants, and boundary conditions which, for correct modeling purposes, have to be appropriately chosen. In addition, it is relatively easy to modify the input parameters in order to computationally investigate different conditions. Experiments provide solutions which characterize the actual behavior of the object of interest subjected to specific operating conditions. However, it is impractical to experimentally perform parametric investigations. This paper discusses the use of a hybrid, computational and experimental, approach for study and optimization of mechanical components. Computational techniques are used for modeling the behavior of the object of interest while it is experimentally tested using noninvasive optical techniques. Comparisons are performed through a fringe predictor program used to facilitate the correlation between both techniques. In addition, experimentally obtained quantitative information, such as displacements and shape, can be applied in the computational model in order to improve this correlation. The result is a validated computational model that can be used for performing quantitative analyses and structural optimization. Practical application of the hybrid approach is illustrated with a representative example which demonstrates the viability of the approach as an engineering tool for structural analysis and optimization.
McLeod, James; Othman, Maazuza Z; Parthasarathy, Rajarathinam
2018-05-26
The relationship between mixing energy input and biogas production was investigated by anaerobically digesting sewage sludge in lab scale, hydraulically mixed, batch mode digesters at six different specific energy inputs. The goal was to identify how mixing energy influenced digestion performance at quantitative levels to help explain the varying results in other published works. The results showed that digester homogeneity was largely uninfluenced by energy input, whereas cumulative biogas production and solids destruction were. With similar solids distributions between conditions, the observed differences were attributed to shear forces disrupting substrate-microbe flocs rather than the formation of temperature and/or concentration gradients. Disruption of the substrate-microbe flocs produced less favourable conditions for hydrolytic bacteria, resulting in less production of biomass and more biogas. Overall, this hypothesis explains the current body of research including the inhibitory conditions reported at extreme mixing power inputs. However, further work is required to definitively prove it.
Flexible Coding of Visual Working Memory Representations during Distraction.
Lorenc, Elizabeth S; Sreenivasan, Kartik K; Nee, Derek E; Vandenbroucke, Annelinde R E; D'Esposito, Mark
2018-06-06
Visual working memory (VWM) recruits a broad network of brain regions, including prefrontal, parietal, and visual cortices. Recent evidence supports a "sensory recruitment" model of VWM, whereby precise visual details are maintained in the same stimulus-selective regions responsible for perception. A key question in evaluating the sensory recruitment model is how VWM representations persist through distracting visual input, given that the early visual areas that putatively represent VWM content are susceptible to interference from visual stimulation. To address this question, we used a functional magnetic resonance imaging inverted encoding model approach to quantitatively assess the effect of distractors on VWM representations in early visual cortex and the intraparietal sulcus (IPS), another region previously implicated in the storage of VWM information. This approach allowed us to reconstruct VWM representations for orientation, both before and after visual interference, and to examine whether oriented distractors systematically biased these representations. In our human participants (both male and female), we found that orientation information was maintained simultaneously in early visual areas and IPS in anticipation of possible distraction, and these representations persisted in the absence of distraction. Importantly, early visual representations were susceptible to interference; VWM orientations reconstructed from visual cortex were significantly biased toward distractors, corresponding to a small attractive bias in behavior. In contrast, IPS representations did not show such a bias. These results provide quantitative insight into the effect of interference on VWM representations, and they suggest a dynamic tradeoff between visual and parietal regions that allows flexible adaptation to task demands in service of VWM. SIGNIFICANCE STATEMENT Despite considerable evidence that stimulus-selective visual regions maintain precise visual information in working memory, it remains unclear how these representations persist through subsequent input. Here, we used quantitative model-based fMRI analyses to reconstruct the contents of working memory and examine the effects of distracting input. Although representations in the early visual areas were systematically biased by distractors, those in the intraparietal sulcus appeared distractor-resistant. In contrast, early visual representations were most reliable in the absence of distraction. These results demonstrate the dynamic, adaptive nature of visual working memory processes, and provide quantitative insight into the ways in which representations can be affected by interference. Further, they suggest that current models of working memory should be revised to incorporate this flexibility.
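A minimal inverted encoding model sketch on synthetic data: estimate channel-to-voxel weights on training trials, then invert them to reconstruct channel responses for a test trial. Basis shape and dimensions are illustrative.

```python
import numpy as np

# Minimal inverted encoding model (IEM): learn channel->voxel weights from
# training data, invert them on test data, and read out the represented
# orientation. Synthetic voxels; basis and sizes are illustrative.
rng = np.random.default_rng(3)
n_chan, n_vox, n_trials = 8, 50, 200
centers = np.arange(0, 180, 180 / n_chan)            # channel centers (deg)

def channel_responses(theta_deg):
    d = np.deg2rad(2 * (theta_deg[:, None] - centers[None, :]))  # 180-deg periodic
    return np.cos(d / 2) ** 6 * (np.cos(d / 2) > 0)   # half-rectified cosine basis

thetas = rng.uniform(0, 180, n_trials)
C = channel_responses(thetas)                         # trials x channels
W = rng.normal(size=(n_chan, n_vox))                  # true channel->voxel weights
B = C @ W + 0.1 * rng.normal(size=(n_trials, n_vox))  # noisy "voxel" data

W_hat = np.linalg.lstsq(C, B, rcond=None)[0]          # train: B ~ C @ W
b_test = channel_responses(np.array([45.0])) @ W      # one test trial at 45 deg
c_test = b_test @ np.linalg.pinv(W_hat)               # invert: reconstruct channels
print("reconstructed peak near:", centers[np.argmax(c_test)], "deg")
```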
The Assay Development Working Group (ADWG) of the CPTAC Program is currently drafting a document to propose best practices for generation, quantification, storage, and handling of peptide standards used for mass spectrometry-based assays, as well as interpretation of quantitative proteomic data based on peptide standards. The ADWG is seeking input from commercial entities that provide peptide standards for mass spectrometry-based assays or that perform amino acid analysis.
What can posturography tell us about vestibular function?
NASA Technical Reports Server (NTRS)
Black, F. O.
2001-01-01
Patients with balance disorders want answers to the following basic questions: (1) What is causing my problem? and (2) What can be done about my problem? Information to fully answer these questions must include status of both sensory and motor components of the balance control systems. Computerized dynamic posturography (CDP) provides quantitative assessment of both sensory and motor components of postural control along with how the sensory inputs to the brain interact. This paper reviews the scientific basis and clinical applications of CDP. Specifically, studies describing the integration of vestibular inputs with other sensory systems for postural control are briefly summarized. Clinical applications, including assessment, rehabilitation, and management are presented. Effects of aging on postural control along with prevention and management strategies are discussed.
Wang, Lijuan; Zheng, Hua; Zhao, He; Robinson, Brian E
2017-01-01
With increases in cropland area and fertilizer nitrogen (N) application rates, the general N balance characteristics of regional agroecosystems have been widely documented. However, few studies have quantitatively analyzed the drivers of spatial changes in the N budget. We constructed a mass balance model of the N budget at the soil surface, using a database of county-level agricultural statistics, to analyze N input, N output, and the proportional contribution of various factors to overall N input changes in croplands during 2000-2010 in the Yangtze River Basin, the largest basin and the main agricultural production region in China. Over the period investigated, N input increased by 9%; of this, 87% came from fertilizer N. In the upper and middle reaches of the basin, the increased synthetic fertilizer N application rate accounted for 84% and 76% of the N input increase, respectively, mainly due to increased N input on cropland that previously had a low synthetic fertilizer N application rate. In the lower reaches of the basin, mainly due to urbanization, the decrease in cropland area and in synthetic fertilizer N application rate contributed nearly equally to decreases in N input. Quantifying spatial N inputs can provide the critical management information needed to optimize synthetic fertilizer N application rates and monitor the impacts of urbanization on agricultural production, helping to decrease agricultural environmental risk and maintain sustainable agricultural production in different areas.
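The soil-surface mass balance underlying such a budget is simple bookkeeping: N inputs minus N outputs per unit area. A toy sketch with invented values (the category names and numbers are illustrative assumptions, not the study's county data):

```python
# Soil-surface N budget (kg N per ha): inputs minus harvest removal.
inputs = {"synthetic_fertilizer": 210.0, "manure": 45.0,
          "atmospheric_deposition": 25.0, "biological_fixation": 15.0}
outputs = {"crop_harvest_removal": 180.0}

n_input = sum(inputs.values())
n_surplus = n_input - sum(outputs.values())
fert_share = inputs["synthetic_fertilizer"] / n_input

print(f"total N input: {n_input:.0f} kg/ha, surplus: {n_surplus:.0f} kg/ha")
print(f"fertilizer share of inputs: {fert_share:.0%}")
```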
NESTEM-QRAS: A Tool for Estimating Probability of Failure
NASA Technical Reports Server (NTRS)
Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.
2002-01-01
An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions, providing a needed input for estimating the success rate of any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, estimates the probability of failure of components under varying loading and environmental conditions; it performs sensitivity analysis of all input variables and expresses their influence on the response variables as cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or similar codes, together with a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and walks through the stepwise process the interface uses by means of an example.
Quantitative determination of Auramine O by terahertz spectroscopy with 2DCOS-PLSR model
NASA Astrophysics Data System (ADS)
Zhang, Huo; Li, Zhi; Chen, Tao; Qin, Binyi
2017-09-01
Residues of harmful dyes such as Auramine O (AO) in herbal and food products threaten human health, so fast and sensitive techniques for detecting these residues are needed. As a powerful tool for substance detection, terahertz (THz) spectroscopy was used here for the quantitative determination of AO, combined with an improved partial least-squares regression (PLSR) model. Absorbance spectra of herbal samples with different AO concentrations were acquired by THz time-domain spectroscopy (THz-TDS) between 0.2 THz and 1.6 THz. Two-dimensional correlation spectroscopy (2DCOS) was applied to improve the PLSR model: it highlighted the spectral differences between concentrations, provided a clear criterion for selecting the input interval, and improved the accuracy of the detection result. The experimental results indicated that the combination of THz spectroscopy and 2DCOS-PLSR is an effective quantitative analysis method.
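A hedged sketch of the PLSR step using scikit-learn, with a synthetic absorption band standing in for the AO feature; the 2DCOS-guided interval selection is replaced by a hard-coded band, which is an assumption made for illustration:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
freqs = np.linspace(0.2, 1.6, 141)                    # THz axis, as in the abstract
conc = rng.uniform(0, 10, 60)                         # AO concentration (arbitrary units)
peak = np.exp(-((freqs - 0.9) ** 2) / 0.02)           # synthetic absorption feature
X = conc[:, None] * peak[None, :] + 0.05 * rng.normal(size=(60, freqs.size))

# 2DCOS would guide selection of the informative interval; here it is hard-coded.
band = (freqs > 0.7) & (freqs < 1.1)                  # assumed informative interval
pls = PLSRegression(n_components=3)
r2 = cross_val_score(pls, X[:, band], conc, cv=5, scoring="r2").mean()
print(f"cross-validated R^2 on the selected band: {r2:.3f}")
```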
Developing a database for pedestrians' earthquake emergency evacuation in indoor scenarios.
Zhou, Junxue; Li, Sha; Nie, Gaozhong; Fan, Xiwei; Tan, Jinxian; Li, Huayue; Pang, Xiaoke
2018-01-01
With the booming development of evacuation simulation software, developing an extensive database in indoor scenarios for evacuation models is imperative. In this paper, we conduct a qualitative and quantitative analysis of the collected videotapes and aim to provide a complete and unitary database of pedestrians' earthquake emergency response behaviors in indoor scenarios, including human-environment interactions. Using the qualitative analysis method, we extract keyword groups and keywords that code the response modes of pedestrians and construct a general decision flowchart using chronological organization. Using the quantitative analysis method, we analyze data on the delay time, evacuation speed, evacuation route and emergency exit choices. Furthermore, we study the effect of classroom layout on emergency evacuation. The database for indoor scenarios provides reliable input parameters and allows the construction of real and effective constraints for use in software and mathematical models. The database can also be used to validate the accuracy of evacuation models.
Quantitative methods to direct exploration based on hydrogeologic information
Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.
2006-01-01
Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
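The FOSM step named above has a compact form: for a model with Jacobian J (output sensitivities to inputs) and input covariance Σ_in, the output covariance is approximated as Σ_out ≈ J Σ_in Jᵀ. A small numpy sketch; the sampling rule at the end is one plausible reading of the variance-contribution criterion, not the paper's exact algorithm:

```python
import numpy as np

def fosm_output_cov(jacobian, input_cov):
    """First-Order Second Moment: Cov_out ~ J @ Cov_in @ J.T."""
    J = np.atleast_2d(jacobian)
    return J @ input_cov @ J.T

# Toy example: two piezometric heads sensitive to three conductivity zones.
J = np.array([[0.8, 0.1, 0.05],      # d(head_i)/d(K_j), e.g. from perturbed runs
              [0.2, 0.6, 0.30]])
cov_K = np.diag([0.5, 0.5, 1.2])     # input (conductivity) covariance
cov_h = fosm_output_cov(J, cov_K)
print("head variances:", np.diag(cov_h))

# One QDE-style rule: sample the zone contributing most to total head variance.
contrib = (J ** 2 * np.diag(cov_K)).sum(axis=0)
print("next sample zone:", int(np.argmax(contrib)))
```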
Charpentier, R.R.; Gautier, D.L.
2011-01-01
The USGS has assessed undiscovered petroleum resources in the Arctic through geological mapping, basin analysis and quantitative assessment. The new map compilation provided the base from which geologists subdivided the Arctic for burial history modelling and quantitative assessment. The CARA (Circum-Arctic Resource Appraisal) was a probabilistic, geologically based study that used existing USGS methodology, modified somewhat for the circumstances of the Arctic. The assessment relied heavily on analogue modelling, with numerical input as lognormal distributions of sizes and numbers of undiscovered accumulations. Probabilistic results for individual assessment units were statistically aggregated taking geological dependencies into account. Fourteen papers in this Geological Society volume present summaries of various aspects of the CARA. © 2011 The Geological Society of London.
Dynamical Adaptation in Photoreceptors
Clark, Damon A.; Benichou, Raphael; Meister, Markus; Azeredo da Silveira, Rava
2013-01-01
Adaptation is at the heart of sensation and nowhere is it more salient than in early visual processing. Light adaptation in photoreceptors is doubly dynamical: it depends upon the temporal structure of the input and it affects the temporal structure of the response. We introduce a non-linear dynamical adaptation model of photoreceptors. It is simple enough that it can be solved exactly and simulated with ease; analytical and numerical approaches combined provide both intuition on the behavior of dynamical adaptation and quantitative results to be compared with data. Yet the model is rich enough to capture intricate phenomenology. First, we show that it reproduces the known phenomenology of light response and short-term adaptation. Second, we present new recordings and demonstrate that the model reproduces cone response with great precision. Third, we derive a number of predictions on the response of photoreceptors to sophisticated stimuli such as periodic inputs, various forms of flickering inputs, and natural inputs. In particular, we demonstrate that photoreceptors undergo rapid adaptation of response gain and time scale, over ∼300 ms, i.e., over the time scale of the response itself, and we confirm this prediction with data. For natural inputs, this fast adaptation can modulate the response gain more than tenfold and is hence physiologically relevant. PMID:24244119
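As a flavor of what "doubly dynamical" adaptation means, here is a toy divisive-gain model in which a slow state tracks the input and scales a fast response. This is an illustrative stand-in, not the authors' model, though the ∼300 ms gain time scale is borrowed from the abstract:

```python
import numpy as np

def adapt_response(stim, dt=1e-3, tau_r=0.02, tau_g=0.3, k=5.0):
    """Toy divisive-gain adaptation: a slow gain state g tracks the input and
    divisively scales a fast response r (illustrative, not the paper's model)."""
    r, g = 0.0, 1.0
    out = np.empty_like(stim)
    for i, s in enumerate(stim):
        r += dt / tau_r * (s / (1.0 + k * g) - r)   # fast response, gain-scaled
        g += dt / tau_g * (s - g)                   # slow adaptation state (~300 ms)
        out[i] = r
    return out

t = np.arange(0, 2, 1e-3)
step = np.where(t > 0.5, 5.0, 1.0)                  # step increase in light level
resp = adapt_response(step)
print(f"peak {resp[500:700].max():.2f} vs adapted {resp[-1]:.2f}")
```

The transient overshoot followed by a sag to a lower steady level is the gain-adaptation signature the model family is meant to capture.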
Lackey, Daniel P; Carruth, Eric D; Lasher, Richard A; Boenisch, Jan; Sachse, Frank B; Hitchcock, Robert W
2011-11-01
Gap junctions play a fundamental role in intercellular communication in cardiac tissue. Various types of heart disease, including hypertrophy and ischemia, are associated with alterations of the spatial arrangement of gap junctions. Previous studies applied two-dimensional optical and electron microscopy to visualize gap junction arrangements. In normal cardiomyocytes, gap junctions were found primarily at cell ends, but could also be found in more central regions. In this study, we extended these approaches toward three-dimensional reconstruction of gap junction distributions based on high-resolution scanning confocal microscopy and image processing. We developed methods for quantitative characterization of gap junction distributions based on analysis of intensity profiles along the principal axes of myocytes. The analyses characterized gap junction polarization at cell ends and higher-order statistical image moments of intensity profiles. The methodology was tested in rat ventricular myocardium. Our analysis yielded novel quantitative data on gap junction distributions. In particular, the analysis demonstrated that the distributions exhibit significant variability with respect to polarization, skewness, and kurtosis. We suggest that this methodology provides a quantitative alternative to current approaches based on visual inspection, with applications in particular in the characterization of engineered and diseased myocardium. Furthermore, we propose that these data provide improved input for computational modeling of cardiac conduction.
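The moment-based characterization lends itself to a short sketch: given a gap-junction intensity profile along a myocyte's principal axis, compute an end-polarization index plus higher-order moments. The polarization definition below (fraction of label in the two end fifths) is an assumption for illustration, not necessarily the paper's exact metric:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def profile_stats(intensity):
    """Summary statistics of a gap-junction intensity profile along the
    long axis of a myocyte (illustrative moment-based characterization)."""
    x = np.asarray(intensity, float)
    n = len(x)
    ends = x[: n // 5].sum() + x[-(n // 5):].sum()   # label in the two end fifths
    return {"polarization": ends / x.sum(),
            "skewness": skew(x), "kurtosis": kurtosis(x)}

# Synthetic profile: strong labeling at both cell ends, weak in the middle.
profile = np.concatenate([np.full(20, 5.0), np.full(60, 1.0), np.full(20, 7.0)])
print(profile_stats(profile))
```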
Temporal patterns of inputs to cerebellum necessary and sufficient for trace eyelid conditioning.
Kalmbach, Brian E; Ohyama, Tatsuya; Mauk, Michael D
2010-08-01
Trace eyelid conditioning is a form of associative learning that requires several forebrain structures and cerebellum. Previous work suggests that at least two conditioned stimulus (CS)-driven signals are available to the cerebellum via mossy fiber inputs during trace conditioning: one driven by and terminating with the tone and a second driven by medial prefrontal cortex (mPFC) that persists through the stimulus-free trace interval to overlap in time with the unconditioned stimulus (US). We used electric stimulation of mossy fibers to determine whether this pattern of dual inputs is necessary and sufficient for cerebellar learning to express normal trace eyelid responses. We find that presenting the cerebellum with one input that mimics persistent activity observed in mPFC and the lateral pontine nuclei during trace eyelid conditioning and another that mimics tone-elicited mossy fiber activity is sufficient to produce responses whose properties quantitatively match trace eyelid responses using a tone. Probe trials with each input delivered separately provide evidence that the cerebellum learns to respond to the mPFC-like input (that overlaps with the US) and learns to suppress responding to the tone-like input (that does not). This contributes to precisely timed responses and the well-documented influence of tone offset on the timing of trace responses. Computer simulations suggest that the underlying cerebellar mechanisms involve activation of different subsets of granule cells during the tone and during the stimulus-free trace interval. These results indicate that tone-driven and mPFC-like inputs are necessary and sufficient for the cerebellum to learn well-timed trace conditioned responses.
Guo, Rui; Wang, Yiqin; Yan, Hanxia; Yan, Jianjun; Yuan, Fengyin; Xu, Zhaoxia; Liu, Guoping; Xu, Wenjie
2015-01-01
Objective. This research provides objective and quantitative parameters of traditional Chinese medicine (TCM) pulse conditions for distinguishing between patients with coronary heart disease (CHD) and normal people, using a proposed classification approach based on the Hilbert-Huang transform (HHT) and random forest. Methods. Energy and sample entropy features were extracted by applying the HHT to TCM pulse signals treated as time series. Using a random forest classifier, the two types of features and their combination were each used as input data to establish classification models. Results. Statistical analysis showed significant differences in pulse energy and sample entropy between the CHD group and the normal group. Moreover, with the energy features, sample entropy features, and their combination input as pulse feature vectors, the corresponding average recognition rates were 84%, 76.35%, and 90.21%, respectively. Conclusion. The proposed approach is appropriate for analyzing the pulses of patients with CHD and can lay a foundation for research on objective and quantitative criteria for disease diagnosis or Zheng differentiation.
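A hedged sketch of the feature-plus-classifier pipeline on synthetic "pulse" waveforms, assuming the PyEMD package for empirical mode decomposition; per-IMF energies stand in for the paper's energy features, and the sample-entropy features are omitted for brevity:

```python
import numpy as np
from PyEMD import EMD                      # assumes the PyEMD package is installed
from sklearn.ensemble import RandomForestClassifier

def imf_energies(sig, n_imfs=4):
    """Decompose a signal with EMD and return normalized per-IMF energies."""
    imfs = EMD().emd(sig, max_imf=n_imfs)
    e = np.array([np.sum(m ** 2) for m in imfs[:n_imfs]])
    return np.pad(e, (0, n_imfs - len(e))) / e.sum()

rng = np.random.default_rng(2)
t = np.linspace(0, 5, 500)
# Two synthetic "pulse" classes differing in a high-frequency component.
make = lambda hf: (np.sin(2 * np.pi * 1.2 * t) + hf * np.sin(2 * np.pi * 8 * t)
                   + 0.1 * rng.normal(size=t.size))
X = np.array([imf_energies(make(hf)) for hf in ([0.1] * 30 + [0.6] * 30)])
y = np.array([0] * 30 + [1] * 30)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[::2], y[::2])
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```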
An, Selena J; George, Asha S; LeFevre, Amnesty E; Mpembeni, Rose; Mosha, Idda; Mohan, Diwakar; Yang, Ann; Chebet, Joy; Lipingu, Chrisostom; Baqui, Abdullah H; Killewo, Japhet; Winch, Peter J; Kilewo, Charles
2015-10-04
Integration of HIV into RMNCH (reproductive, maternal, newborn and child health) services is an important process addressing the disproportionate burden of HIV among mothers and children in sub-Saharan Africa. We assess the structural inputs and processes of care that support HIV testing and counselling in routine antenatal care to understand supply-side dynamics critical to scaling up further integration of HIV into RMNCH services prior to recent changes in HIV policy in Tanzania. This study, as a part of a maternal and newborn health program evaluation in Morogoro Region, Tanzania, drew from an assessment of health centers with 18 facility checklists, 65 quantitative and 57 qualitative provider interviews, and 203 antenatal care observations. Descriptive analyses were performed with quantitative data using Stata 12.0, and qualitative data were analyzed thematically with data managed by Atlas.ti. Limitations in structural inputs, such as infrastructure, supplies, and staffing, constrain the potential for integration of HIV testing and counselling into routine antenatal care services. While assessment of infrastructure, including waiting areas, appeared adequate, long queues and small rooms made private and confidential HIV testing and counselling difficult for individual women. Unreliable stocks of HIV test kits, essential medicines, and infection prevention equipment also had implications for provider-patient relationships, with reported decreases in women's care seeking at health centers. In addition, low staffing levels were reported to increase workloads and lower motivation for health workers. Despite adequate knowledge of counselling messages, antenatal counselling sessions were brief with incomplete messages conveyed to pregnant women. In addition, coping mechanisms, such as scheduling of clinical activities on different days, limited service availability. Antenatal care is a strategic entry point for the delivery of critical tests and counselling messages and the framing of patient-provider relations, which together underpin care seeking for the remaining continuum of care. Supply-side deficiencies in structural inputs and processes of delivering HIV testing and counselling during antenatal care indicate critical shortcomings in the quality of care provided. These must be addressed if integrating HIV testing and counselling into antenatal care is to result in improved maternal and newborn health outcomes.
Haines, Seth S.; Diffendorfer, James; Balistrieri, Laurie S.; Berger, Byron R.; Cook, Troy A.; Gautier, Donald L.; Gallegos, Tanya J.; Gerritsen, Margot; Graffy, Elisabeth; Hawkins, Sarah; Johnson, Kathleen; Macknick, Jordan; McMahon, Peter; Modde, Tim; Pierce, Brenda; Schuenemeyer, John H.; Semmens, Darius; Simon, Benjamin; Taylor, Jason; Walton-Day, Katherine
2013-01-01
Natural resource planning at all scales demands methods for assessing the impacts of resource development and use, and in particular it requires standardized methods that yield robust and unbiased results. Building from existing probabilistic methods for assessing the volumes of energy and mineral resources, we provide an algorithm for consistent, reproducible, quantitative assessment of resource development impacts. The approach combines probabilistic input data with Monte Carlo statistical methods to determine probabilistic outputs that convey the uncertainties inherent in the data. For example, one can utilize our algorithm to combine data from a natural gas resource assessment with maps of sage grouse leks and piñon-juniper woodlands in the same area to estimate possible future habitat impacts due to possible future gas development. As another example: one could combine geochemical data and maps of lynx habitat with data from a mineral deposit assessment in the same area to determine possible future mining impacts on water resources and lynx habitat. The approach can be applied to a broad range of positive and negative resource development impacts, such as water quantity or quality, economic benefits, or air quality, limited only by the availability of necessary input data and quantified relationships among geologic resources, development alternatives, and impacts. The framework enables quantitative evaluation of the trade-offs inherent in resource management decision-making, including cumulative impacts, to address societal concerns and policy aspects of resource development.
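The Monte Carlo mechanics are straightforward to sketch: draw resource quantities from probabilistic distributions, push them through assumed resource-to-impact relationships, and report percentiles. All distributions and coefficients below are invented for illustration (and, as a simplification, one size draw is shared by all accumulations in a trial):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
# Probabilistic resource input: numbers and lognormal sizes of gas accumulations.
n_accum = rng.poisson(lam=12, size=n)
size_bcf = rng.lognormal(mean=3.0, sigma=0.8, size=n)       # volume per accumulation
gas_bcf = n_accum * size_bcf

# Assumed quantified relationships: well pads per BCF, disturbed habitat per pad.
pads = gas_bcf * rng.triangular(0.02, 0.04, 0.07, size=n)
habitat_ha = pads * rng.triangular(2.0, 3.5, 6.0, size=n)

print("10th/50th/90th percentile habitat impact (ha):",
      np.percentile(habitat_ha, [10, 50, 90]).round(1))
```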
An Orbital "Virtual Radar" from TRMM Passive Microwave and Lightning Observations
NASA Technical Reports Server (NTRS)
Boccippio, Dennis J.
2004-01-01
The retrieval of vertical structure from joint passive microwave and lightning observations is demonstrated. Three years of data from the TRMM (Tropical Rainfall Measuring Mission) are used as a training dataset for regression and classification neural networks; the TMI (TRMM Microwave Imager) and LIS (Lightning Imaging Sensor) provide the inputs, the PR (Precipitation Radar) provides the training targets. Both vertical reflectivity profile categorization (into 9 convective, 7 stratiform, 2 mixed and 6 anvil types) and geophysical parameters (surface rainfall, vertically integrated liquid (VIL), ice water content (IWC) and echo tops) are retrieved. Retrievals are successful over both land and ocean surfaces. The benefit of using lightning observations as inputs to these retrievals is quantitatively demonstrated; lightning essentially provides an additional convective/stratiform discriminator, and is most important for isolation of midlevel (tops in the mixed phase region) convective profile types (this is because high frequency passive microwave observations already provide good convective/stratiform discrimination for deep convective profiles). This is highly relevant as midlevel convective profiles account for an extremely large fraction of tropical rainfall, and yet are most difficult to discriminate from comparable-depth stratiform profile types using passive microwave observations alone.
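A minimal sketch of the idea that lightning adds a convective/stratiform discriminator, using synthetic brightness temperatures and flash counts with a toy stand-in for the PR-derived labels; none of the thresholds or relationships below are from the paper:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
n = 3000
tb85 = rng.uniform(150, 290, n)             # synthetic 85-GHz brightness temps (K)
tb37 = tb85 + rng.normal(20, 10, n)
flash = rng.poisson(np.clip((240 - tb85) / 12, 0, None))  # lightning tracks ice scattering
# Toy rule standing in for PR labels: convective if strong scattering or lightning.
y = ((tb85 < 220) | (flash > 2)).astype(int)
X = np.column_stack([tb85, tb37, flash])

with_lis = MLPClassifier(hidden_layer_sizes=(16,), max_iter=800,
                         random_state=0).fit(X[:2000], y[:2000])
no_lis = MLPClassifier(hidden_layer_sizes=(16,), max_iter=800,
                       random_state=0).fit(X[:2000, :2], y[:2000])
print("with lightning:", with_lis.score(X[2000:], y[2000:]))
print("without lightning:", no_lis.score(X[2000:, :2], y[2000:]))
```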
NASA Astrophysics Data System (ADS)
Rothman, D. S.; Siraj, A.; Hughes, B.
2013-12-01
The international research community is currently in the process of developing new scenarios for climate change research. One component of these scenarios is the Shared Socio-economic Pathways (SSPs), which describe a set of possible future socioeconomic conditions. These are presented in narrative storylines with associated quantitative drivers. The core quantitative drivers include total population, average GDP per capita, educational attainment, and urbanization at the global, regional, and national levels. At the same time there have been calls, particularly by the IAV community, for the SSPs to include additional quantitative information on other key social factors, such as income inequality, governance, health, and access to key infrastructures, which are discussed in the narratives. The International Futures system (IFs), based at the Pardee Center at the University of Denver, is able to provide forecasts of many of these indicators. IFs cannot use the SSP drivers as exogenous inputs, but we are able to create development pathways that closely reproduce the core quantitative drivers defined by the different SSPs, as well as incorporating assumptions on other key driving factors described in the qualitative narratives. In this paper, we present forecasts for additional quantitative indicators based upon the implementation of the SSP development pathways in IFs. These results will be of value to many researchers.
Kha, Hung; Tuble, Sigrid C; Kalyanasundaram, Shankar; Williamson, Richard E
2010-02-01
We understand few details about how the arrangement and interactions of cell wall polymers produce the mechanical properties of primary cell walls. Consequently, we cannot quantitatively assess if proposed wall structures are mechanically reasonable or assess the effectiveness of proposed mechanisms to change mechanical properties. As a step to remedying this, we developed WallGen, a Fortran program (available on request) building virtual cellulose-hemicellulose networks by stochastic self-assembly whose mechanical properties can be predicted by finite element analysis. The thousands of mechanical elements in the virtual wall are intended to have one-to-one spatial and mechanical correspondence with their real wall counterparts of cellulose microfibrils and hemicellulose chains. User-defined inputs set the properties of the two polymer types (elastic moduli, dimensions of microfibrils and hemicellulose chains, hemicellulose molecular weight) and their population properties (microfibril alignment and volume fraction, polymer weight percentages in the network). This allows exploration of the mechanical consequences of variations in nanostructure that might occur in vivo and provides estimates of how uncertainties regarding certain inputs will affect WallGen's mechanical predictions. We summarize WallGen's operation and the choice of values for user-defined inputs and show that predicted values for the elastic moduli of multinet walls subject to small displacements overlap measured values. "Design of experiment" methods provide systematic exploration of how changed input values affect mechanical properties and suggest that changing microfibril orientation and/or the number of hemicellulose cross-bridges could change wall mechanical anisotropy.
User's Guide to Handlens - A Computer Program that Calculates the Chemistry of Minerals in Mixtures
Eberl, D.D.
2008-01-01
HandLens is a computer program, written in Excel macro language, that calculates the chemistry of minerals in mineral mixtures (for example, in rocks, soils and sediments) for related samples from inputs of quantitative mineralogy and chemistry. For best results, the related samples should contain minerals having the same chemical compositions; that is, the samples should differ only in the proportions of minerals present. This manual describes how to use the program, discusses the theory behind its operation, and presents test results of the program's accuracy. Required input for HandLens includes quantitative mineralogical data, obtained, for example, by RockJock analysis of X-ray diffraction (XRD) patterns, and quantitative chemical data, obtained, for example, by X-ray fluorescence (XRF) analysis of the same samples. Other quantitative data, such as sample depth, temperature, and surface area, can also be entered. The minerals present in the samples are selected from a list, and the program is started. The results of the calculation include: (1) a table of linear coefficients of determination (r²) which relate pairs of input data (for example, Si versus quartz weight percents); (2) a utility for plotting all input data, either as pairs of variables, or as sums of up to eight variables; (3) a table that presents the calculated chemical formulae for minerals in the samples; (4) a table that lists the calculated concentrations of major, minor, and trace elements in the various minerals; and (5) a table that presents chemical formulae for the minerals that have been corrected for possible systematic errors in the mineralogical and/or chemical analyses. In addition, the program contains a method for testing the assumption of constant chemistry of the minerals within a sample set.
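The core inversion HandLens performs can be sketched as a least-squares problem: given mineral weight fractions W (from quantitative XRD) and bulk element concentrations c (from XRF) for several related samples, solve W x ≈ c for the element's concentration x in each mineral. A toy example with invented numbers:

```python
import numpy as np

# Rows: samples; columns: minerals (weight fractions from quantitative XRD).
W = np.array([[0.60, 0.30, 0.10],
              [0.40, 0.45, 0.15],
              [0.25, 0.55, 0.20],
              [0.70, 0.20, 0.10]])
# Measured bulk chemistry of the same samples (wt% of one element, e.g. Si).
si = np.array([28.0, 24.5, 22.0, 29.5])

# Solve W @ x = si in the least-squares sense: x is Si wt% in each mineral.
x, *_ = np.linalg.lstsq(W, si, rcond=None)
print("estimated Si wt% in each mineral:", x.round(1))
```

With more samples than minerals, the overdetermined system also exposes inconsistent analyses, which is the basis of the error-correction table the program produces.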
Quantitative Characterization of Nanostructured Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. Frank
The two-and-a-half day symposium on the "Quantitative Characterization of Nanostructured Materials" will be the first comprehensive meeting on this topic held under the auspices of a major U.S. professional society. Spring MRS Meetings provide a natural venue for this symposium as they attract a broad audience of researchers that represents a cross-section of the state-of-the-art regarding synthesis, structure-property relations, and applications of nanostructured materials. Close interactions among the experts in local structure measurements and materials researchers will help both to identify measurement needs pertinent to real-world materials problems and to familiarize the materials research community with the state-of-the-art local structure measurement techniques. We have chosen invited speakers that reflect the multidisciplinary and international nature of this topic and the need to continually nurture productive interfaces among university, government and industrial laboratories. The intent of the symposium is to provide an interdisciplinary forum for discussion and exchange of ideas on the recent progress in quantitative characterization of structural order in nanomaterials using different experimental techniques and theory. The symposium is expected to facilitate discussions on optimal approaches for determining atomic structure at the nanoscale using combined inputs from multiple measurement techniques.
Viral-genetic tracing of the input-output organization of a central noradrenaline circuit.
Schwarz, Lindsay A; Miyamichi, Kazunari; Gao, Xiaojing J; Beier, Kevin T; Weissbourd, Brandon; DeLoach, Katherine E; Ren, Jing; Ibanes, Sandy; Malenka, Robert C; Kremer, Eric J; Luo, Liqun
2015-08-06
Deciphering how neural circuits are anatomically organized with regard to input and output is instrumental in understanding how the brain processes information. For example, locus coeruleus noradrenaline (also known as norepinephrine) (LC-NE) neurons receive input from and send output to broad regions of the brain and spinal cord, and regulate diverse functions including arousal, attention, mood and sensory gating. However, it is unclear how LC-NE neurons divide up their brain-wide projection patterns and whether different LC-NE neurons receive differential input. Here we developed a set of viral-genetic tools to quantitatively analyse the input-output relationship of neural circuits, and applied these tools to dissect the LC-NE circuit in mice. Rabies-virus-based input mapping indicated that LC-NE neurons receive convergent synaptic input from many regions previously identified as sending axons to the locus coeruleus, as well as from newly identified presynaptic partners, including cerebellar Purkinje cells. The 'tracing the relationship between input and output' method (or TRIO method) enables trans-synaptic input tracing from specific subsets of neurons based on their projection and cell type. We found that LC-NE neurons projecting to diverse output regions receive mostly similar input. Projection-based viral labelling revealed that LC-NE neurons projecting to one output region also project to all brain regions we examined. Thus, the LC-NE circuit overall integrates information from, and broadcasts to, many brain regions, consistent with its primary role in regulating brain states. At the same time, we uncovered several levels of specificity in certain LC-NE sub-circuits. These tools for mapping output architecture and input-output relationship are applicable to other neuronal circuits and organisms. More broadly, our viral-genetic approaches provide an efficient intersectional means to target neuronal populations based on cell type and projection pattern.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Getman, Dan; Bush, Brian; Inman, Danny
Data used by the National Renewable Energy Laboratory (NREL) in energy analysis are often produced by industry and licensed or purchased for analysis. While this practice provides needed flexibility in selecting data for analysis, it presents challenges in understanding the differences among multiple, ostensibly similar, datasets. As options for source data become more varied, it is important to be able to articulate why certain datasets were chosen and to ensure those include the data that best meet the boundaries and/or limitations of a particular analysis. This report represents the first of three phases of research intended to develop methods to quantitatively assess and compare both input datasets and the results of analyses performed at NREL. This capability is critical to identifying tipping points in the costs or benefits of achieving high spatial and temporal resolution of input data.
A Novel Approach to Noise-Filtering Based on a Gain-Scheduling Neural Network Architecture
NASA Technical Reports Server (NTRS)
Troudet, T.; Merrill, W.
1994-01-01
A gain-scheduling neural network architecture is proposed to enhance the noise-filtering efficiency of feedforward neural networks, in terms of both nominal performance and robustness. The synergistic benefits of the proposed architecture are demonstrated and discussed in the context of the noise-filtering of signals that are typically encountered in aerospace control systems. The synthesis of such a gain-scheduled neurofiltering provides the robustness of linear filtering, while preserving the nominal performance advantage of conventional nonlinear neurofiltering. Quantitative performance and robustness evaluations are provided for the signal processing of pitch rate responses to typical pilot command inputs for a modern fighter aircraft model.
Quantification of Plasma miRNAs by Digital PCR for Cancer Diagnosis
Ma, Jie; Li, Ning; Guarnera, Maria; Jiang, Feng
2013-01-01
Analysis of plasma microRNAs (miRNAs) by quantitative polymerase chain reaction (qPCR) provides a potential approach for cancer diagnosis. However, absolutely quantifying low abundant plasma miRNAs is challenging with qPCR. Digital PCR offers a unique means for assessment of nucleic acids presenting at low levels in plasma. This study aimed to evaluate the efficacy of digital PCR for quantification of plasma miRNAs and the potential utility of this technique for cancer diagnosis. We used digital PCR to quantify the copy number of plasma microRNA-21-5p (miR-21-5p) and microRNA-335-3p (miR-335-3p) in 36 lung cancer patients and 38 controls. Digital PCR showed a high degree of linearity and quantitative correlation with miRNAs in a dynamic range from 1 to 10,000 copies/μL of input, with high reproducibility. qPCR exhibited a dynamic range from 100 to 1×10⁷ copies/μL of input. Digital PCR had a higher sensitivity to detect copy number of the miRNAs compared with qPCR. In plasma, digital PCR could detect copy number of both miR-21-5p and miR-335-3p, whereas qPCR was only able to assess miR-21-5p. Quantification of the plasma miRNAs by digital PCR provided 71.8% sensitivity and 80.6% specificity in distinguishing lung cancer patients from cancer-free subjects. PMID:24277982
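Absolute quantification in digital PCR rests on a standard Poisson correction: if a fraction p of partitions is positive, the mean copies per partition is λ = -ln(1 - p). A short sketch; the 0.85 nl droplet volume is a typical value assumed here, not taken from the paper:

```python
import numpy as np

def dpcr_copies_per_ul(n_positive, n_total, partition_vol_nl=0.85, dilution=1.0):
    """Standard Poisson correction for digital PCR: mean copies per
    partition is lambda = -ln(fraction of negative partitions)."""
    lam = -np.log(1.0 - n_positive / n_total)
    copies_per_nl = lam / partition_vol_nl
    return copies_per_nl * 1e3 * dilution          # nl -> uL

# e.g., 4,500 positive droplets out of 15,000 at 0.85 nl each
print(f"{dpcr_copies_per_ul(4500, 15000):.0f} copies/uL of input")
```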
Primary Visual Cortex as a Saliency Map: A Parameter-Free Prediction and Its Test by Behavioral Data
Zhaoping, Li; Zhe, Li
2015-01-01
It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis’ first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location to guide attention exogenously. In a visual input containing many bars, one of them saliently different from all the other bars which are identical to each other, saliency at the singleton’s location can be measured by the shortness of the reaction time in a visual search for singletons. The hypothesis predicts quantitatively the whole distribution of the reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The prediction matches human reaction time data. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention so that they can be devoted to other functions like visual decoding and endogenous attention. PMID:26441341
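Under the V1 max-rule, the saliency of a location is the maximum response across feature-tuned neurons, so the search for a triple-feature singleton is predicted to end when the fastest single-feature signal arrives. A race-model sketch of that parameter-free prediction, with synthetic single-feature RT distributions and an assumed shared base time (the real test uses the measured component RT distributions):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
base = 0.25                                  # assumed shared non-search time (s)
# Synthetic stand-ins for measured single-feature singleton RT distributions.
rt_c = base + rng.gamma(3.0, 0.05, n)        # color
rt_o = base + rng.gamma(3.5, 0.06, n)        # orientation
rt_m = base + rng.gamma(4.0, 0.05, n)        # motion

# Max-rule/race prediction: the triple singleton's saliency signal is the max
# of the component responses, so search time is the min of the component times.
rt_com_pred = base + np.minimum.reduce([rt_c - base, rt_o - base, rt_m - base])
print("median predicted triple-singleton RT (s):", np.median(rt_com_pred).round(3))
```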
Survey of Quantitative Research Metrics to Assess Pilot Performance in Upset Recovery
NASA Technical Reports Server (NTRS)
Le Vie, Lisa R.
2016-01-01
Accidents attributable to in-flight loss of control are the primary cause of fatal commercial jet accidents worldwide. The National Aeronautics and Space Administration (NASA) conducted a literature review to identify the quantitative standards for assessing upset recovery performance. The review covers current recovery procedures for both military and commercial aviation and the metrics researchers use to assess aircraft recovery performance. Metrics include time to first input, recognition time, and recovery time, as well as whether that input was correct or incorrect. Other metrics are the state of the autopilot and autothrottle, control wheel/sidestick movement resulting in pitch and roll, and inputs to the throttle and rudder. In addition, airplane state measures, such as roll reversals, altitude loss/gain, maximum vertical speed, maximum/minimum air speed, maximum bank angle, and maximum g loading, are reviewed as well.
Interdependence of PRECIS Role Operators: A Quantitative Analysis of Their Associations.
ERIC Educational Resources Information Center
Mahapatra, Manoranjan; Biswas, Subal Chandra
1986-01-01
Analyzes associations among different role operators quantitatively by taking input strings from 200 abstracts, each related to subject fields of taxation, genetic psychology, and Shakespearean drama, and subjecting them to the Chi-square test. Significant associations by other differencing operators and connectives are discussed. A schema of role…
NASA Astrophysics Data System (ADS)
Kudomi, Nobuyuki; Watabe, Hiroshi; Hayashi, Takuya; Iida, Hidehiro
2007-04-01
Cerebral metabolic rate of oxygen (CMRO2), oxygen extraction fraction (OEF) and cerebral blood flow (CBF) images can be quantified using positron emission tomography (PET) by administering ¹⁵O-labelled water (H₂¹⁵O) and oxygen (¹⁵O₂). Conventionally, these images are measured with separate scans for three tracers, C¹⁵O for cerebral blood volume (CBV), H₂¹⁵O for CBF and ¹⁵O₂ for CMRO2, with additional waiting times between the scans to minimize the influence of radioactivity from the previous tracer, which results in a relatively long study period. We have proposed a dual tracer autoradiographic (DARG) approach (Kudomi et al 2005), which enabled us to measure CBF, OEF and CMRO2 rapidly by sequentially administering H₂¹⁵O and ¹⁵O₂ within a short time. Because quantitative CBF and CMRO2 values are sensitive to the arterial input function, an accurate input function is required; a drawback of this approach is that the measured arterial blood time-activity curve (TAC) must be separated into pure water and oxygen input functions in the presence of residual radioactivity from the first injected tracer. For this separation, frequent manual sampling was previously required. The present paper describes two calculation methods, a linear method and a model-based method, for separating the measured arterial TAC into its water and oxygen components. To validate these methods, we first generated a blood TAC for the DARG approach by combining the water and oxygen input functions obtained in a series of PET studies on normal human subjects. The combined data were then separated into water and oxygen components by the present methods, and CBF and CMRO2 were calculated using the separated input functions and tissue TACs. Errors in the CBF and CMRO2 values obtained by the DARG approach remained within the acceptable range, i.e., below 5%, when the area under the curve of the second tracer's input function was larger than half that of the first. Bias and deviation in those values were also comparable to those of the conventional method when noise was imposed on the arterial TAC. We conclude that the present calculation-based methods could be of use for quantitatively calculating CBF and CMRO2 with the DARG approach.
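A simplified stand-in for the separation step: fit a mono-exponential to the measured TAC just before the second injection, extrapolate that first-tracer tail forward, and subtract it to recover the second tracer's input. The window length and mono-exponential form are assumptions for illustration, not the paper's linear or model-based method:

```python
import numpy as np

def separate_inputs(t, tac, t_inject2):
    """Split a measured arterial TAC into first- and second-tracer components by
    extrapolating a mono-exponential fit of the first tracer's pre-injection tail
    (a simplified stand-in for the paper's separation methods)."""
    pre = (t > t_inject2 - 60) & (t < t_inject2)        # last 60 s before 2nd tracer
    slope, intercept = np.polyfit(t[pre], np.log(tac[pre]), 1)
    residual1 = np.exp(intercept + slope * t)           # extrapolated 1st-tracer tail
    second = np.where(t >= t_inject2, tac - residual1, 0.0)
    return tac - second, np.clip(second, 0.0, None)

# Synthetic demo: water injected at t=0, oxygen at t=180 s.
t = np.arange(0.0, 360.0, 1.0)
tac = 50 * np.exp(-0.01 * t) + np.where(t >= 180, 80 * np.exp(-0.02 * (t - 180)), 0.0)
water, oxygen = separate_inputs(t, tac, t_inject2=180)
print("recovered 2nd-tracer peak:", oxygen.max().round(1))   # ~80
```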
Using a quantitative risk register to promote learning from a patient safety reporting system.
Mansfield, James G; Caplan, Robert A; Campos, John S; Dreis, David F; Furman, Cathie
2015-02-01
Patient safety reporting systems are now used in most health care delivery organizations. These systems, such as the one in use at Virginia Mason (Seattle) since 2002, can provide valuable reports of risk and harm from the front lines of patient care. In response to the challenge of how to quantify and prioritize safety opportunities, a risk register system was developed and implemented. Basic risk register concepts were refined to provide a systematic way to understand risks reported by staff. The risk register uses a comprehensive taxonomy of patient risk and algorithmically assigns each patient safety report to 1 of 27 risk categories in three major domains (Evaluation, Treatment, and Critical Interactions). For each category, a composite score was calculated on the basis of event rate, harm, and cost. The composite scores were used to identify the "top five" risk categories, and patient safety reports in these categories were analyzed in greater depth to find recurrent patterns of risk and associated opportunities for improvement. The top five categories of risk were easy to identify and had distinctive "profiles" of rate, harm, and cost. The ability to categorize and rank risks across multiple dimensions yielded insights not previously available. These results were shared with leadership and served as input for planning quality and safety initiatives. This approach provided actionable input for the strategic planning process, while at the same time strengthening the Virginia Mason culture of safety. The quantitative patient safety risk register serves as one solution to the challenge of extracting valuable safety lessons from large numbers of incident reports and could profitably be adopted by other organizations.
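A sketch of how a composite score over rate, harm, and cost can rank risk categories; the equal weighting and the category names are assumptions for illustration, since the article's exact formula is not reproduced here:

```python
# Composite scoring of risk categories (rate, harm, cost each normalized to 0-1);
# equal weights and category names are assumed, not the article's published values.
categories = {
    "medication dosing": dict(rate=0.9, harm=0.6, cost=0.5),
    "patient falls":     dict(rate=0.7, harm=0.8, cost=0.6),
    "delayed diagnosis": dict(rate=0.4, harm=0.9, cost=0.9),
    "handoff breakdown": dict(rate=0.8, harm=0.5, cost=0.4),
}
scores = {k: (v["rate"] + v["harm"] + v["cost"]) / 3 for k, v in categories.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} composite = {s:.2f}")
```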
Modelling home televisiting services using systems dynamic theory.
Valero, M A; Arredondo, M T; del Nogal, F; Gallar, P; Insausti, J; Del Pozo, F
2001-01-01
A quantitative model was developed to study the provision of a home televisiting service. Systems dynamic theory was used to describe the relationships between quality of care, accessibility and cost-effectiveness. Input information was gathered from the telemedicine literature, as well as from over 75 sessions of a televisiting service provided by the Severo Ochoa Hospital to 18 housebound patients from three different medical specialties. The model allowed the Severo Ochoa Hospital to estimate the equipment needed to support increased medical contacts for intensive cardiac and other patients.
Tool for Ranking Research Options
NASA Technical Reports Server (NTRS)
Ortiz, James N.; Scott, Kelly; Smith, Harold
2005-01-01
Tool for Research Enhancement Decision Support (TREDS) is a computer program developed to assist managers in ranking options for research aboard the International Space Station (ISS). It could likely also be adapted to perform similar decision-support functions in industrial and academic settings. TREDS provides a ranking of the options, based on a quantifiable assessment of all the relevant programmatic decision factors of benefit, cost, and risk. The computation of the benefit for each option is based on a figure of merit (FOM) for ISS research capacity that incorporates both quantitative and qualitative inputs. Qualitative inputs are gathered and partly quantified by use of the time-tested analytic hierarchy process (AHP) and used to set weighting factors in the FOM corresponding to priorities determined by the cognizant decision maker(s). Then by use of algorithms developed specifically for this application, TREDS adjusts the projected benefit for each option on the basis of levels of technical implementation, cost, and schedule risk. Based partly on Excel spreadsheets, TREDS provides screens for entering cost, benefit, and risk information. Drop-down boxes are provided for entry of qualitative information. TREDS produces graphical output in multiple formats that can be tailored by users.
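The AHP step mentioned above turns pairwise judgments into weights via the principal eigenvector of the comparison matrix. A small numpy sketch; the judgments and option scores are invented, and scores are taken as pre-normalized so that higher is better:

```python
import numpy as np

# Pairwise-comparison matrix over three decision factors (benefit, cost, risk);
# the judgments below are illustrative, not TREDS defaults.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                        # principal eigenvector -> priority weights

options = np.array([[0.8, 0.4, 0.3],   # rows: research options; cols: factor scores
                    [0.6, 0.7, 0.6],   # (assumed already normalized, higher = better)
                    [0.9, 0.2, 0.8]])
print("weights:", w.round(3), "ranking (best first):", np.argsort(-(options @ w)))
```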
Preparation of metagenomic libraries from naturally occurring marine viruses.
Solonenko, Sergei A; Sullivan, Matthew B
2013-01-01
Microbes are now well recognized as major drivers of the biogeochemical cycling that fuels the Earth, and their viruses (phages) are known to be abundant and important in microbial mortality, horizontal gene transfer, and modulating microbial metabolic output. Investigation of environmental phages has been frustrated by an inability to culture the vast majority of naturally occurring diversity coupled with the lack of robust, quantitative, culture-independent methods for studying this uncultured majority. However, for double-stranded DNA phages, a quantitative viral metagenomic sample-to-sequence workflow now exists. Here, we review these advances with special emphasis on the technical details of preparing DNA sequencing libraries for metagenomic sequencing from environmentally relevant low-input DNA samples. Library preparation steps broadly involve manipulating the sample DNA by fragmentation, end repair and adaptor ligation, size fractionation, and amplification. One critical area of future research and development is parallel advances for alternate nucleic acid types such as single-stranded DNA and RNA viruses that are also abundant in nature. Combinations of recent advances in fragmentation (e.g., acoustic shearing and tagmentation), ligation reactions (adaptor-to-template ratio reference table availability), size fractionation (non-gel-sizing), and amplification (linear amplification for deep sequencing and linker amplification protocols) enhance our ability to generate quantitatively representative metagenomic datasets from low-input DNA samples. Such datasets are already providing new insights into the role of viruses in marine systems and will continue to do so as new environments are explored and synergies and paradigms emerge from large-scale comparative analyses. © 2013 Elsevier Inc. All rights reserved.
Mamouras, Konstantinos; Raghothaman, Mukund; Alur, Rajeev; Ives, Zachary G; Khanna, Sanjeev
2017-06-01
Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings.
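The flavor of per-key, constant-work-per-item stream aggregation that StreamQRE compiles to can be imitated in plain Python. This is a conceptual analogue for intuition only, not the StreamQRE Java API:

```python
from collections import defaultdict

class KeyedAverage:
    """Incremental per-key aggregation with O(1) work per item and O(#keys)
    memory -- the kind of cost bound StreamQRE's compiler guarantees."""
    def __init__(self):
        self.sum = defaultdict(float)
        self.cnt = defaultdict(int)
    def push(self, key, value):
        self.sum[key] += value
        self.cnt[key] += 1
    def query(self, key):
        return self.sum[key] / self.cnt[key]

agg = KeyedAverage()
for patient, reading in [("a", 120), ("b", 95), ("a", 130)]:
    agg.push(patient, reading)
print(agg.query("a"))   # 125.0
```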
Connecting Taxon-Specific Microbial Activities to Carbon Cycling in the Rhizosphere
NASA Astrophysics Data System (ADS)
Hungate, B. A.; Morrissey, E.; Schwartz, E.; Dijkstra, P.; Blazewicz, S.; Pett-Ridge, J.; Koch, G. W.; Marks, J.; Koch, B.; McHugh, T. A.; Mau, R. L.; Hayer, M.
2016-12-01
Plant carbon inputs influence microbial growth in the rhizosphere, but the quantitative details of these effects are not well understood, nor are their consequences for carbon cycling in the rhizosphere. With a new pulse of carbon input to soil, which microbial taxa increase their growth rates, and by how much? Do any microbial taxa respond negatively? And how does the extra carbon addition alter the utilization of other resources, including other carbon sources as well as inorganic nitrogen? This talk will present new research using quantitative stable isotope probing that reveals the distribution of growth responses among microbial taxa, from positive to neutral to negative, and how these growth responses are associated with various substrates. For example, decomposition of soil C in response to added labile carbon occurred as a phylogenetically diverse majority of taxa shifted toward soil C use for growth. In contrast, bacteria with suppressed growth or that relied directly on glucose for growth clustered strongly by phylogeny. These results suggest that priming is a prototypical response of bacteria to sustained labile C addition, consistent with the widespread occurrence of the priming effect in nature. These results also illustrate the potential power of molecular tools and models that seek to estimate metrics directly relevant to quantitative ecology and biogeochemistry, more so than is currently standard in microbial ecology. Tools that estimate growth rate, mortality rate, and rates of substrate use, all quantified with the taxonomic precision afforded by modern sequencing, provide a foundation for quantifying the biogeochemical significance of microbial biodiversity, and a more complete understanding of the rich ecosystem of the rhizosphere.
Prokushkin, S G; Prokushkin, A S; Sorokin, N D
2014-01-01
Based on the results of long-term investigations, a quantitative assessment of phytodetritus mineralization rates is provided, and its role in the biological cycle of larch stands growing in the permafrost zone of Central Evenkia is discussed. It is demonstrated that phytodetritus destruction in the subshrub-sphagnum and cowberry-green moss larch stands is extremely slow: the plant litter contains the most recalcitrant organic matter, with the lowest decomposition coefficient of 0.03-0.04 year(-1), whereas fresh components of the plant litter show 3- to 4-fold higher values. An insignificant input of N and C from the analyzed mortmass to the soil has been registered. It has been revealed that the changes in N and C in the decomposing components are closely related to the quantitative dynamics (biomass) of microorganisms, such as hydrolytics and, especially, micromycetes.
Du, Yan; Han, Xu; Wang, Chenxu; Li, Yunhui; Li, Bingling; Duan, Hongwei
2018-01-26
Recently, molecular keypad locks have received increasing attention. As a new subgroup of smart biosensors, they show great potential for protecting information as molecular security data processors, rather than merely performing molecular recognition and quantitation. Herein, label-free, electrochemically transduced Ag+ and cysteine (Cys) sensors were developed. A molecular keypad lock model with a reset function was successfully realized based on the balanced interaction of a metal ion with its nucleic acid and chemical ligands. The correct input order "1-2-3" (i.e., "Ag+-Cys-cDNA") is the only password for this molecular keypad lock. Moreover, resetting after either a correct or a wrong input order can easily be accomplished by Cys, buffer, and DI water treatment. Therefore, our system provides a smarter molecular keypad lock that can block access by unauthorized users, holding great promise for information protection at the molecular level.
MATLAB Stability and Control Toolbox Trim and Static Stability Module
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Crespo, Luis
2012-01-01
MATLAB Stability and Control Toolbox (MASCOT) utilizes geometric, aerodynamic, and inertial inputs to calculate air vehicle stability in a variety of critical flight conditions. The code is based on fundamental, nonlinear equations of motion and is able to translate results into a qualitative, graphical scale useful to the non-expert. MASCOT was created to provide the conceptual aircraft designer with accurate predictions of air vehicle stability and control characteristics. The code takes as input mass property data in the form of an inertia tensor, aerodynamic loading data, and propulsion (i.e., thrust) loading data. Using the fundamental nonlinear equations of motion, MASCOT then calculates vehicle trim and static stability data for the desired flight condition(s). Available flight conditions include six horizontal and six landing rotation conditions with varying options for engine out, crosswind, and sideslip, plus three take-off rotation conditions. Results are displayed through a graphical interface developed to give conceptual design engineers who are not stability and control experts a qualitative scale indicating whether the vehicle has acceptable, marginal, or unacceptable static stability characteristics. If desired, the user can also examine the detailed quantitative results.
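The trim computation such a tool performs can be illustrated in miniature: trim means finding control settings that zero the net forces and pitching moment. The sketch below is a toy longitudinal model with invented aerodynamic coefficients solved by a generic root finder; it is not MASCOT code:

    # Toy illustration (not MASCOT): longitudinal trim as a nonlinear
    # root-finding problem. All coefficients and numbers are placeholders.
    import numpy as np
    from scipy.optimize import fsolve

    W, S, rho, V = 50000.0, 30.0, 1.0, 120.0   # weight [N], wing area, density, speed
    q = 0.5 * rho * V**2                       # dynamic pressure

    def residuals(x):
        alpha, de, thrust = x                              # AoA [rad], elevator, thrust [N]
        CL = 0.1 * np.degrees(alpha) + 0.3 * de           # hypothetical lift slope
        CD = 0.02 + 0.05 * CL**2                          # hypothetical drag polar
        Cm = 0.05 - 0.02 * np.degrees(alpha) - 0.6 * de   # Cm_alpha < 0: statically stable
        return [q * S * CL - W,       # vertical force balance
                thrust - q * S * CD,  # horizontal force balance
                Cm]                   # pitching moment balance

    alpha, de, thrust = fsolve(residuals, x0=[0.05, 0.0, 3000.0])
    print(f"trim: alpha={np.degrees(alpha):.2f} deg, elevator={de:.3f}, thrust={thrust:.0f} N")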
Quantitative analysis of the anti-noise performance of an m-sequence in an electromagnetic method
NASA Astrophysics Data System (ADS)
Yuan, Zhe; Zhang, Yiming; Zheng, Qijia
2018-02-01
An electromagnetic method with a transmitted waveform coded by an m-sequence achieved better anti-noise performance than the conventional square-wave approach. The anti-noise performance of the m-sequence varied with multiple coding parameters; hence, a quantitative analysis of the anti-noise performance of m-sequences with different coding parameters was required to optimize them. This paper proposes the concept of an identification system in which the Earth impulse response is identified by measuring the system output with the voltage response as input. A quantitative analysis of the anti-noise performance of the m-sequence was achieved by analyzing the amplitude-frequency response of the corresponding identification system. The effects of the coding parameters on the anti-noise performance are summarized by numerical simulation, and their optimization is further discussed in our conclusions; the validity of the conclusions is verified by field experiment. The quantitative analysis method proposed in this paper provides new insight into the anti-noise mechanism of the m-sequence, and could be used to evaluate the anti-noise performance of artificial sources in other time-domain exploration methods, such as the seismic method.
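The anti-noise advantage rests on the sharp autocorrelation of m-sequences, which concentrates signal energy at zero lag during correlation detection. A minimal sketch, assuming SciPy's maximum-length-sequence generator and an illustrative register length of 7 (period 127):

    # Why m-sequences resist noise: their periodic autocorrelation is nearly a
    # delta function (N at zero lag, -1 elsewhere), so correlation detection
    # concentrates the signal energy. nbits is just an example coding parameter.
    import numpy as np
    from scipy.signal import max_len_seq

    nbits = 7                                  # register length -> period 2**7 - 1 = 127
    seq = max_len_seq(nbits)[0] * 2.0 - 1.0    # map {0,1} -> {-1,+1}

    # Circular autocorrelation: peak of 127 at zero lag, -1 at all other lags.
    acf = np.array([np.dot(seq, np.roll(seq, k)) for k in range(len(seq))])
    print(acf[:5])   # [127., -1., -1., -1., -1.]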
Robust, Decoupled, Flight Control Design with Rate Saturating Actuators
NASA Technical Reports Server (NTRS)
Snell, S. A.; Hess, R. A.
1997-01-01
Techniques for the design of control systems for manually controlled, high-performance aircraft must provide the following: (1) multi-input, multi-output (MIMO) solutions, (2) acceptable handling qualities including no tendencies for pilot-induced oscillations, (3) a tractable approach for compensator design, (4) performance and stability robustness in the presence of significant plant uncertainty, and (5) performance and stability robustness in the presence of actuator saturation (particularly rate saturation). A design technique built upon Quantitative Feedback Theory is offered as a candidate methodology which can provide flight control systems meeting these requirements, and do so over a considerable part of the flight envelope. An example utilizing a simplified model of a supermaneuverable fighter aircraft demonstrates the proposed design methodology.
Technical Management in an Age of Openness: The Political, Public, and Environmental Forest Ranger
ERIC Educational Resources Information Center
Anderson, Sarah E.; Hodges, Heather E.; Anderson, Terry L.
2013-01-01
Modern bureaucracy faces trade-offs between public and congressional input and agency expertise. The U.S. Forest Service offers an opportunity to quantitatively analyze whether an agency that is required to be more open to the public and congressional input will be forced to ignore its technical expertise in managing resources. This study uses…
Akam, Thomas E.; Kullmann, Dimitri M.
2012-01-01
The ‘communication through coherence’ (CTC) hypothesis proposes that selective communication among neural networks is achieved by coherence between firing rate oscillation in a sending region and gain modulation in a receiving region. Although this hypothesis has stimulated extensive work, it remains unclear whether the mechanism can in principle allow reliable and selective information transfer. Here we use a simple mathematical model to investigate how accurately coherent gain modulation can filter a population-coded target signal from task-irrelevant distracting inputs. We show that selective communication can indeed be achieved, although the structure of oscillatory activity in the target and distracting networks must satisfy certain previously unrecognized constraints. Firstly, the target input must be differentiated from distractors by the amplitude, phase or frequency of its oscillatory modulation. When distracting inputs oscillate incoherently in the same frequency band as the target, communication accuracy is severely degraded because of varying overlap between the firing rate oscillations of distracting inputs and the gain modulation in the receiving region. Secondly, the oscillatory modulation of the target input must be strong in order to achieve a high signal-to-noise ratio relative to stochastic spiking of individual neurons. Thus, whilst providing a quantitative demonstration of the power of coherent oscillatory gain modulation to flexibly control information flow, our results identify constraints imposed by the need to avoid interference between signals, and reveal a likely organizing principle for the structure of neural oscillations in the brain. PMID:23144603
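The core quantity in such a model can be sketched directly: the time-averaged drive transmitted to the receiver is the product of the sender's firing-rate oscillation and the receiver's gain modulation, and it depends on their phase relation. A minimal numerical illustration with invented parameters:

    # Minimal sketch (illustrative parameters only): a receiving population whose
    # gain oscillates passes an oscillating input with an efficacy set by their
    # phase relation, the central quantity in the CTC hypothesis.
    import numpy as np

    t = np.linspace(0.0, 1.0, 10000)
    f = 40.0                                   # gamma-band modulation frequency [Hz]

    def transferred_power(phase_offset):
        rate = 1.0 + np.cos(2 * np.pi * f * t)                  # sender rate oscillation
        gain = 1.0 + np.cos(2 * np.pi * f * t + phase_offset)   # receiver gain modulation
        return np.mean(rate * gain)            # time-averaged transmitted drive

    print(transferred_power(0.0))        # coherent, aligned: ~1.5 (enhanced)
    print(transferred_power(np.pi))      # anti-phase: ~0.5 (suppressed)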
The biological function of consciousness
Earl, Brian
2014-01-01
This research is an investigation of whether consciousness—one's ongoing experience—influences one's behavior and, if so, how. Analysis of the components, structure, properties, and temporal sequences of consciousness has established that (1) contrary to one's intuitive understanding, consciousness does not have an active, executive role in determining behavior; (2) consciousness does have a biological function; and (3) consciousness is solely information in various forms. Consciousness is associated with a flexible response mechanism (FRM) for decision-making, planning, and generally responding in nonautomatic ways. The FRM generates responses by manipulating information and, to function effectively, its data input must be restricted to task-relevant information. The properties of consciousness correspond to the various input requirements of the FRM; and when important information is missing from consciousness, functions of the FRM are adversely affected; both of which indicate that consciousness is the input data to the FRM. Qualitative and quantitative information (shape, size, location, etc.) is incorporated into the input data by a qualia array of colors, sounds, and so on, which makes the input conscious. This view of the biological function of consciousness explains why we have experiences; why we have emotional and other feelings, and why their loss is associated with poor decision-making; why blindsight patients do not spontaneously initiate responses to events in their blind field; why counter-habitual actions are only possible when the intended action is in mind; and the reason for inattentional blindness. PMID:25140159
Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model
NASA Astrophysics Data System (ADS)
Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.
2009-05-01
Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with the commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Imaging using the dynamic susceptibility contrast method was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar-image sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For quantitative analysis, CBF was calculated by deconvolution with an AIF obtained from the 10 voxels showing the greatest contrast enhancement. For semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curve. We observed that when the AIFs obtained in three different ROIs (whole brain, hemisphere without lesion, and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) from the quantitative and semi-quantitative analyses showed a similar trend across operative time points; when the AIFs differed, the CBF ratios differed as well. We conclude that, using local maxima, one can define a proper AIF without knowing the anatomical location of arteries in a stroke rat model.
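The deconvolution step referred to here is commonly implemented with a truncated singular value decomposition; the sketch below shows that standard approach on synthetic curves and is not necessarily the authors' exact pipeline:

    # Standard SVD deconvolution for DSC perfusion (generic illustration): the
    # tissue curve is modeled as c = dt * A @ r, with A the lower-triangular
    # convolution matrix built from the AIF; the residue function r is recovered
    # via truncated SVD, and CBF is proportional to max(r).
    import numpy as np

    def svd_deconvolve(aif, tissue, dt, threshold=0.2):
        n = len(aif)
        A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                           for i in range(n)])
        U, s, Vt = np.linalg.svd(A)
        s_inv = np.where(s > threshold * s.max(), 1.0 / s, 0.0)  # truncation regularizes
        r = Vt.T @ (s_inv * (U.T @ tissue))
        return r.max()                 # proportional to CBF

    # Toy gamma-variate-like curves, purely illustrative:
    t = np.arange(0, 30, 1.0)
    aif = t**2 * np.exp(-t / 2.0)
    tissue = np.convolve(aif, 0.5 * np.exp(-t / 8.0))[:len(t)]
    print(svd_deconvolve(aif, tissue, dt=1.0))   # recovers ~0.5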
Astrobiological complexity with probabilistic cellular automata.
Vukotić, Branislav; Ćirković, Milan M
2012-08-01
The search for extraterrestrial life and intelligence constitutes one of the major endeavors in science, yet it has been quantitatively modeled only rarely, and then in a cursory and superficial fashion. We argue that probabilistic cellular automata (PCA) represent the best quantitative framework for modeling the astrobiological history of the Milky Way and its Galactic Habitable Zone. The relevant astrobiological parameters are modeled as the elements of the input probability matrix for the PCA kernel. With the underlying simplicity of the cellular automata constructs, this approach enables a quick analysis of the large and ambiguous space of input parameters. We perform a simple clustering analysis of typical astrobiological histories under a "Copernican" choice of input parameters and discuss the relevant boundary conditions of practical importance for planning and guiding empirical astrobiological and SETI projects. In addition to showing how the present framework is adaptable to more complex situations and updated observational databases from current and near-future space missions, we demonstrate how numerical results could offer a cautious rationale for the continuation of practical SETI searches.
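A toy version of the proposed machinery, with invented states and probabilities purely for illustration, shows how an input probability matrix drives a PCA over a grid of Galactic sites:

    # Toy probabilistic cellular automaton in the spirit described (all states
    # and probabilities invented): each site is 0 (dead), 1 (simple life) or
    # 2 (technological); transitions follow a probability matrix plus a
    # neighbour-coupling term for colonization-like spread.
    import numpy as np

    rng = np.random.default_rng(0)
    P = np.array([[0.990, 0.009, 0.001],     # P[s, s']: baseline transition matrix
                  [0.010, 0.980, 0.010],
                  [0.005, 0.015, 0.980]])

    def step(grid, spread=0.02):
        new = grid.copy()
        for i in range(grid.shape[0]):
            for j in range(grid.shape[1]):
                p = P[grid[i, j]].copy()
                nbrs = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                p[2] += spread * np.count_nonzero(nbrs == 2)  # neighbour coupling
                p /= p.sum()
                new[i, j] = rng.choice(3, p=p)
        return new

    grid = np.zeros((20, 20), dtype=int)
    for _ in range(100):
        grid = step(grid)
    print(np.bincount(grid.ravel(), minlength=3))   # census of astrobiological states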
Teaching Fundamental Skills in Microsoft Excel to First-Year Students in Quantitative Analysis
ERIC Educational Resources Information Center
Rubin, Samuel J.; Abrams, Binyomin
2015-01-01
Despite their technological savvy, most students entering university lack the necessary computer skills to succeed in a quantitative analysis course, in which they are often expected to input, analyze, and plot results of experiments without any previous formal education in Microsoft Excel or similar programs. This lack of formal education results…
NASA Astrophysics Data System (ADS)
Lan, Ganhui; Tu, Yuhai
2016-05-01
Living systems have to constantly sense their external environment and adjust their internal state in order to survive and reproduce. Biological systems, from ones as complex as the brain to a single E. coli cell, have to process these data in order to make appropriate decisions. How do biological systems sense external signals? How do they process the information? How do they respond to signals? Through years of intense study by biologists, many key molecular players and their interactions have been identified in the different biological machineries that carry out these signaling functions. However, an integrated, quantitative understanding of the whole system is still lacking for most cellular signaling pathways, not to mention the more complicated neural circuits. To study signaling processes in biology, the key thing to measure is the input-output relationship. The input is the signal itself, such as chemical concentration, external temperature, light (intensity and frequency), and more complex signals such as the face of a cat. The output can be protein conformational changes and covalent modifications (phosphorylation, methylation, etc), gene expression, cell growth and motility, as well as more complex output such as neuron firing patterns and the behaviors of higher animals. Due to the inherent noise in biological systems, the measured input-output dependence is often noisy. These noisy data can be analysed by using powerful tools and concepts from information theory such as mutual information, channel capacity, and the maximum entropy hypothesis. This information theory approach has been successfully used to reveal the underlying correlations between key components of biological networks, to set bounds for network performance, and to understand possible network architectures in generating observed correlations. Although the information theory approach provides a general tool for analysing noisy biological data and may be used to suggest possible network architectures for preserving information, it does not reveal the underlying mechanism that leads to the observed input-output relationship, nor does it tell us much about which information is important for the organism and how biological systems use information to carry out specific functions. To do that, we need to develop models of the biological machineries, e.g. biochemical networks and neural networks, to understand the dynamics of biological information processes. This is a much more difficult task. It requires deep knowledge of the underlying biological network—the main players (nodes) and their interactions (links)—in sufficient detail to build a model with predictive power, as well as quantitative input-output measurements of the system under different perturbations (both genetic variations and different external conditions) to test the model predictions and guide further development of the model. Due to the recent growth of biological knowledge, thanks in part to high-throughput methods (sequencing, gene expression microarrays, etc) and the development of quantitative in vivo techniques such as various fluorescence technologies, these requirements are starting to be realized in different biological systems. The possibility of close interaction between quantitative experimentation and theoretical modeling has made systems biology an attractive field for physicists interested in quantitative biology.
In this review, we describe some of the recent work in developing a quantitative predictive model of bacterial chemotaxis, which can be considered the hydrogen atom of systems biology. Using statistical physics approaches, such as the Ising model and the Langevin equation, we study how bacteria, such as E. coli, sense and amplify external signals, how they keep a working memory of the stimuli, and how they use these data to compute the chemical gradient. In particular, we describe how E. coli cells avoid cross-talk in a heterogeneous receptor cluster to keep a ligand-specific memory. We also study the thermodynamic costs of adaptation for cells to maintain an accurate memory. The statistical physics based approach described here should be useful in understanding design principles for cellular biochemical circuits in general.
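For readers who want the skeleton of such a chemotaxis model, the sketch below implements the widely used MWC-type receptor activity with methylation feedback; the parameter values are typical textbook choices, not specific to this review:

    # MWC-type sketch of E. coli chemoreceptor activity with methylation-based
    # adaptation (typical illustrative parameters):
    #   a = 1 / (1 + exp(N * (eps(m) + log((1 + c/Ki) / (1 + c/Ka)))))
    # and slow feedback dm/dt = kR*(1 - a) - kB*a restores a to its set point,
    # implementing the "working memory" of past ligand concentration.
    import numpy as np

    N, Ki, Ka = 6.0, 18.0, 3000.0        # cluster size; inactive/active dissoc. consts [uM]
    kR, kB = 0.005, 0.010                # adaptation rates [1/s]

    def activity(c, m):
        eps = 1.0 - 0.5 * m              # methylation lowers the free-energy offset
        return 1.0 / (1.0 + np.exp(N * (eps + np.log((1 + c / Ki) / (1 + c / Ka)))))

    m, dt = 2.0, 0.1
    for step in range(12000):            # 20 minutes; ligand step-up at t = 100 s
        c = 100.0 if step * dt > 100.0 else 0.0
        a = activity(c, m)
        m += dt * (kR * (1 - a) - kB * a)
    print(activity(100.0, m))            # adapted back near the set point kR/(kR + kB) = 1/3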
Emerging Technologies for Environmental Remediation: Integrating Data and Judgment.
Bates, Matthew E; Grieger, Khara D; Trump, Benjamin D; Keisler, Jeffrey M; Plourde, Kenton J; Linkov, Igor
2016-01-05
Emerging technologies present significant challenges to researchers, decision-makers, industry professionals, and other stakeholder groups due to the lack of quantitative risk, benefit, and cost data associated with their use. Multi-criteria decision analysis (MCDA) can support early decisions for emerging technologies when data is too sparse or uncertain for traditional risk assessment. It does this by integrating expert judgment with available quantitative and qualitative inputs across multiple criteria to provide relative technology scores. Here, an MCDA framework provides preliminary insights into the suitability of emerging technologies for environmental remediation by comparing nanotechnology and synthetic biology to conventional remediation methods. Subject matter experts provided judgments regarding the importance of criteria used in the evaluations and scored the technologies with respect to those criteria. The results indicate that synthetic biology may be preferred over nanotechnology and conventional methods for high expected benefits and low deployment costs but that conventional technology may be preferred over emerging technologies for reduced risks and development costs. In the absence of field data regarding the risks, benefits, and costs of emerging technologies, structuring evidence-based expert judgment through a weighted hierarchy of topical questions may be helpful to inform preliminary risk governance and guide emerging technology development and policy.
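The scoring arithmetic of a weighted-sum MCDA is compact; the sketch below shows the structure only, with placeholder weights and scores rather than the study's elicited values:

    # Minimal weighted-sum MCDA sketch (placeholder numbers, not the study's
    # elicited values): expert scores on each criterion are combined into a
    # single relative technology score.
    import numpy as np

    # criteria: risk, benefit, deployment cost, development cost (higher = better)
    weights = np.array([0.35, 0.30, 0.20, 0.15])          # elicited importance, sums to 1
    scores = np.array([[4.0, 7.0, 6.0, 4.0],              # nanotech
                       [3.0, 9.0, 8.0, 3.0],              # synthetic biology
                       [8.0, 5.0, 4.0, 9.0]])             # conventional

    for name, total in zip(["nano", "synbio", "conventional"], scores @ weights):
        print(f"{name}: {total:.2f}")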
SplicePlot: a utility for visualizing splicing quantitative trait loci.
Wu, Eric; Nance, Tracy; Montgomery, Stephen B
2014-04-01
RNA sequencing has provided unprecedented resolution of alternative splicing and splicing quantitative trait loci (sQTL). However, there are few tools available for visualizing the genotype-dependent effects of splicing at a population level. SplicePlot is a simple command line utility that produces intuitive visualization of sQTLs and their effects. SplicePlot takes mapped RNA sequencing reads in BAM format and genotype data in VCF format as input and outputs publication-quality Sashimi plots, hive plots and structure plots, enabling better investigation and understanding of the role of genetics in alternative splicing and transcript structure. Source code and detailed documentation are available at http://montgomerylab.stanford.edu/spliceplot/index.html under Resources and at Github. SplicePlot is implemented in Python and is supported on Linux and Mac OS. A VirtualBox virtual machine running Ubuntu with SplicePlot already installed is also available.
Comparison of Dynamic Contrast Enhanced MRI and Quantitative SPECT in a Rat Glioma Model
Skinner, Jack T.; Yankeelov, Thomas E.; Peterson, Todd E.; Does, Mark D.
2012-01-01
Pharmacokinetic modeling of dynamic contrast enhanced (DCE)-MRI data provides measures of the extracellular volume fraction (ve) and the volume transfer constant (Ktrans) in a given tissue. These parameter estimates may be biased, however, by confounding issues such as contrast agent and tissue water dynamics, or assumptions of vascularization and perfusion made by the commonly used model. In contrast to MRI, radiotracer imaging with SPECT is insensitive to water dynamics. A quantitative dual-isotope SPECT technique was developed to obtain an estimate of ve in a rat glioma model for comparison to the corresponding estimates obtained using DCE-MRI with a vascular input function (VIF) and reference region model (RR). Both DCE-MRI methods produced consistently larger estimates of ve in comparison to the SPECT estimates, and several experimental sources were postulated to contribute to these differences. PMID:22991315
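For context, the "commonly used model" in such DCE-MRI analyses is often the standard Tofts model; the fitting sketch below runs on synthetic curves and is a generic illustration, not the authors' code:

    # Standard-Tofts-model fitting sketch for DCE-MRI:
    #   Ct(t) = Ktrans * integral Cp(u) * exp(-(t - u) * Ktrans / ve) du
    # Synthetic curves stand in for measured data.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.arange(0, 300, 2.0)                       # seconds
    Cp = 5.0 * (t / 60.0) * np.exp(-t / 80.0)        # toy vascular input function

    def tofts(t, Ktrans, ve):
        dt = t[1] - t[0]
        kernel = np.exp(-(Ktrans / ve) * t)
        return Ktrans * np.convolve(Cp, kernel)[:len(t)] * dt

    Ct = tofts(t, 0.25 / 60.0, 0.3)                  # "measured" tissue curve
    (Ktrans, ve), _ = curve_fit(tofts, t, Ct, p0=[0.1 / 60.0, 0.2])
    print(Ktrans * 60.0, ve)                         # ~0.25 per minute, ~0.3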
Evidence for ice-ocean albedo feedback in the Arctic Ocean shifting to a seasonal ice zone.
Kashiwase, Haruhiko; Ohshima, Kay I; Nihashi, Sohey; Eicken, Hajo
2017-08-15
Ice-albedo feedback due to the albedo contrast between water and ice is a major factor in seasonal sea ice retreat, and has received increasing attention with the Arctic Ocean shifting to a seasonal ice cover. However, quantitative evaluation of such feedbacks is still insufficient. Here we provide quantitative evidence that heat input through the open water fraction is the primary driver of seasonal and interannual variations in Arctic sea ice retreat. Analyses of satellite data (1979-2014) and a simplified ice-upper ocean coupled model reveal that divergent ice motion in the early melt season triggers large-scale feedback which subsequently amplifies summer sea ice anomalies. The magnitude of divergence controlling the feedback has doubled since 2000 due to a more mobile ice cover, which can partly explain the recent drastic ice reduction in the Arctic Ocean.
Energy structure of MHD flow coupling with outer resistance circuit
NASA Astrophysics Data System (ADS)
Huang, Z. Y.; Liu, Y. J.; Chen, Y. Q.; Peng, Z. L.
2015-08-01
The energy structure of MHD flow coupled with an outer resistance circuit is studied to illuminate, qualitatively and quantitatively, the energy relations of this basic MHD flow system with energy input and output. The energy structure is analytically derived from the Navier-Stokes equations for two-dimensional fully developed flow and the generalized Ohm's law. The influences of the applied magnetic field, the Hall parameter, and conductivity on the energy structure are discussed based on the analytical results. The associated energies in the MHD flow are deduced and validated by energy conservation. These results reveal that the energy structure consists of two substructures: an electrical energy structure and an internal energy structure. The energy structure and its substructures provide an integrated theoretical energy path for the MHD system. The applied magnetic field and conductivity decrease the input energy, the dissipation by fluid viscosity, and the internal energy, but increase the ratio of electrical energy to input energy, while the Hall parameter has the opposite effects. These effects arise from their different influences on the bulk velocity, velocity profiles, and the voltage and current in the outer circuit. Understanding the energy structure helps MHD application designers actively adjust the allocation of the different parts of the energy so that it is more reasonable and desirable.
Kim, Sangyong; Moon, Joon-Ho; Shin, Yoonseok; Kim, Gwang-Hee; Seo, Deok-Seok
2013-01-01
The objective of this research is to quantitatively measure and compare the environmental load and construction cost of different structural frame types. Construction cost also accounts for the costs of CO₂ emissions of input materials. The choice of structural frame type is a major consideration in construction, as this element represents about 33% of total building construction costs. In this research, four constructed buildings were analyzed, with these having either reinforced concrete (RC) or steel (S) structures. An input-output framework analysis was used to measure energy consumption and CO₂ emissions of input materials for each structural frame type. In addition, the CO₂ emissions cost was measured using the trading price of CO₂ emissions on the International Commodity Exchange. This research revealed that both energy consumption and CO₂ emissions were, on average, 26% lower with the RC structure than with the S structure, and the construction costs (including the CO₂ emissions cost) of the RC structure were about 9.8% lower, compared to the S structure. This research provides insights through which the construction industry will be able to respond to the carbon market, which is expected to continue to grow in the future.
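The input-output framework mentioned here follows the standard Leontief calculation; a sketch with a hypothetical three-sector economy (placeholder coefficients, not the study's data):

    # Standard Leontief input-output calculation: total output x = (I - A)^-1 y
    # for final demand y, and embodied CO2 = emission intensities . x.
    # All numbers below are placeholders for illustration.
    import numpy as np

    A = np.array([[0.10, 0.05, 0.02],    # inter-industry coefficients
                  [0.20, 0.15, 0.10],    # rows/cols: steel, cement, construction
                  [0.05, 0.05, 0.05]])
    y = np.array([0.0, 0.0, 1.0e6])      # final demand: $1M of construction
    e = np.array([2.5, 1.8, 0.3])        # kg CO2 per $ of sector output

    x = np.linalg.solve(np.eye(3) - A, y)        # total (direct + indirect) output
    print(f"embodied CO2: {e @ x:.3e} kg")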
Determination of nitrogen balance in agroecosystems.
Sainju, Upendra M
2017-01-01
Nitrogen balance in agroecosystems provides a quantitative framework of N inputs and outputs and retention in the soil that examines the sustainability of agricultural productivity and soil and environmental quality. Nitrogen inputs include N additions from manures and fertilizers, atmospheric depositions (wet and dry), irrigation water, and biological N fixation. Nitrogen outputs include N removal in crop grain and biomass and N losses through leaching, denitrification, volatilization, surface runoff, erosion, gas emissions, and plant senescence. Nitrogen balance, the difference between N inputs and outputs, is reflected in changes in soil total (organic + inorganic) N over the course of the experiment due to N immobilization and mineralization. While increased soil N retention and mineralization can enhance crop yields and decrease the N fertilization rate, reduced N losses through leaching and gas emissions (primarily NH₄ and NOₓ emissions, of which N₂O is a potent greenhouse gas) can improve water and air quality.
•This paper discusses measurements and estimations (for parameters that cannot be measured directly due to complexity) of all inputs and outputs of N, as well as changes in soil N storage over the course of the experiment, to calculate the N balance.
•The method shows N flows, retention in the soil, and losses to the environment from agroecosystems.
•The method can be used to measure agroecosystem performance and soil and environmental quality under agricultural practices.
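The balance itself is straightforward bookkeeping; a minimal sketch with hypothetical values (kg N/ha/yr):

    # N balance = sum(inputs) - sum(outputs); all values are hypothetical.
    inputs = {"fertilizer": 120, "manure": 20, "deposition": 8,
              "irrigation": 3, "biological_fixation": 15}
    outputs = {"grain_removal": 90, "biomass_removal": 25, "leaching": 12,
               "denitrification": 6, "volatilization": 9, "runoff_erosion": 4}

    n_balance = sum(inputs.values()) - sum(outputs.values())
    print(f"N balance: {n_balance:+d} kg N/ha/yr")   # positive -> net soil N retention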
Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy
2017-10-06
The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code, posted to Github, and (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using one core, and in 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
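Object-level parallelization of this kind is easy to sketch generically; the snippet below shows the pattern (independent tumors mapped over a process pool) and is not QIFE's MATLAB code:

    # Generic object-level parallelization pattern: each tumor is processed
    # independently, so a process pool scales with cores at the cost of
    # per-worker memory. compute_features is a stand-in for a real pipeline.
    from multiprocessing import Pool

    def compute_features(tumor_id):
        # stand-in for: load volume, preprocess, compute 3D radiomics features
        return tumor_id, {"volume": 1.0, "sphericity": 0.8}

    if __name__ == "__main__":
        with Pool(processes=4) as pool:                  # object-level parallelism
            results = dict(pool.map(compute_features, range(108)))
        print(len(results), "tumors processed")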
Chen, Lian; Zhou, Shenglu; Wu, Shaohua; Wang, Chunhui; Li, Baojie; Li, Yan; Wang, Junxiao
2018-08-01
Two quantitative methods (emission inventory and isotope ratio analysis) were combined to apportion source contributions of heavy metals entering agricultural soils in the Lihe River watershed (Taihu region, east China). Source apportionment based on the emission inventory method indicated that for Cd, Cr, Cu, Pb, and Zn, the mean percentage input from atmospheric deposition was highest (62-85%), followed by irrigation (12-27%) and fertilization (1-14%); thus, these heavy metals were derived mainly from industrial activities and traffic emissions. For Ni, the combined percentage input from irrigation and fertilization was approximately 20% higher than that from atmospheric deposition, indicating that Ni was mainly derived from agricultural activities. Based on isotope ratio analysis, atmospheric deposition accounted for 57-93% of Pb entering the soil, with a mean value of 69.3%, indicating that this was the major source of Pb entering soil in the study area. The mean contributions of irrigation and fertilization to Pb pollution of the soil ranged from 0% to 10%, indicating that they played only a marginal role. Overall, the results obtained using the two methods were similar. This study provides a reliable approach for source apportionment of heavy metals entering agricultural soils in the study area, and it clearly has potential application for future studies in other regions. Copyright © 2018 Elsevier Ltd. All rights reserved.
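The isotope-ratio apportionment rests on a two-end-member mixing calculation; a sketch with placeholder 206Pb/207Pb ratios (not the paper's measured values):

    # Two-end-member Pb isotope mixing: the fraction from source A follows from
    # linear mixing of 206Pb/207Pb ratios. All ratios below are placeholders.
    r_sample = 1.165          # measured 206Pb/207Pb in soil input
    r_deposition = 1.150      # atmospheric end member
    r_agro = 1.200            # irrigation/fertilizer end member

    f_deposition = (r_sample - r_agro) / (r_deposition - r_agro)
    print(f"atmospheric deposition share: {f_deposition:.0%}")   # 70%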
Summary of Quantitative Interpretation of Image Far Ultraviolet Auroral Data
NASA Technical Reports Server (NTRS)
Frey, H. U.; Immel, T. J.; Mende, S. B.; Gerard, J.-C.; Hubert, B.; Habraken, S.; Span, J.; Gladstone, G. R.; Bisikalo, D. V.; Shematovich, V. I.;
2002-01-01
Direct imaging of the magnetosphere by instruments on the IMAGE spacecraft is supplemented by simultaneous observations of the global aurora in three far ultraviolet (FUV) wavelength bands. The purpose of the multi-wavelength imaging is to study the global auroral particle and energy input from the magnetosphere into the atmosphere. This paper describes the method for quantitative interpretation of FUV measurements. The Wide-Band Imaging Camera (WIC) provides broadband ultraviolet images of the aurora with maximum spatial and temporal resolution by imaging the nitrogen lines and bands between 140 and 180 nm wavelength. The Spectrographic Imager (SI), a dual-wavelength monochromatic instrument, images both Doppler-shifted Lyman alpha emissions produced by precipitating protons, in the SI-12 channel, and OI 135.6 nm emissions, in the SI-13 channel. From the SI-12 Doppler-shifted Lyman alpha images it is possible to obtain the precipitating proton flux, provided assumptions are made regarding the mean energy of the protons. Knowledge of the proton component (flux and energy) allows calculation of the contribution produced by protons in the WIC and SI-13 instruments. Comparison of the corrected WIC and SI-13 signals provides a measure of the electron mean energy, which can then be used to determine the electron energy flux. To accomplish this, reliable emission modeling and instrument calibrations are required. In-flight calibration using early-type stars was used to validate the pre-flight laboratory calibrations and determine long-term trends in sensitivity. In general, very reasonable agreement is found between in-situ measurements and remote quantitative determinations.
Spatial cognition and navigation
NASA Technical Reports Server (NTRS)
Aretz, Anthony J.
1989-01-01
An experiment that provides data for the development of a cognitive model of pilot flight navigation is described. The experiment characterizes navigational awareness as the mental alignment of two frames of reference: (1) the ego-centered reference frame established by the forward view out of the cockpit and (2) the world-centered reference frame established by the aircraft's location on a map. The data support a model involving at least two components: (1) the perceptual encoding of the navigational landmarks and (2) the mental rotation of the map's world reference frame into alignment with the ego-centered reference frame. The quantitative relationships of these two factors are provided as possible inputs for a computational model of spatial cognition during flight navigation.
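The alignment step has a compact geometric analogue: rotating the world-frame offset to a landmark into the ego-centered frame by the aircraft heading. A toy sketch with illustrative numbers:

    # Rotating a world-frame landmark offset into the pilot's ego-centered frame;
    # the numbers are illustrative only.
    import numpy as np

    def world_to_ego(landmark_xy, aircraft_xy, heading_rad):
        """Rotate the world-frame offset to a landmark into the ego frame."""
        dx, dy = np.asarray(landmark_xy) - np.asarray(aircraft_xy)
        c, s = np.cos(-heading_rad), np.sin(-heading_rad)
        return np.array([c * dx - s * dy, s * dx + c * dy])

    print(world_to_ego(landmark_xy=(2.0, 3.0), aircraft_xy=(0.0, 0.0),
                       heading_rad=np.pi / 2))   # landmark bearing in the ego frame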
Pujol, Laure; Johnson, Nicholas Brian; Magras, Catherine; Albert, Isabelle; Membré, Jeanne-Marie
2015-10-15
In a previous study, a quantitative microbial exposure assessment (QMEA) model applied to an aseptic-UHT food process was developed [Pujol, L., Albert, I., Magras, C., Johnson, N. B., Membré, J. M. Probabilistic exposure assessment model to estimate aseptic UHT product failure rate. 2015. International Journal of Food Microbiology. 192, 124-141]. It quantified the Sterility Failure Rate (SFR) associated with Bacillus cereus and Geobacillus stearothermophilus per process module (nine modules in total, from raw material reception to end-product storage). Previously, the probabilistic model inputs were set by experts (using knowledge and in-house data), but only the variability dimension was taken into account. The model was then improved using expert elicitation in two ways. First, the model was refined by adding the uncertainty dimension to the probabilistic inputs, enabling a second-order Monte Carlo analysis. The following eight inputs, and their impact on SFR, are presented in detail in this study: the D-value of each bacterium of interest (B. cereus and G. stearothermophilus) in the inactivation model for the UHT treatment step (two inputs); the log (decimal) reduction number in the inactivation model for the packaging sterilization step, for each bacterium and each part of the packaging (product container and sealing component) (four inputs); and the bacterial spore air load of the aseptic tank and filler cabinet rooms (two inputs). Second, the model was developed further by leveraging expert knowledge: the proportion of bacteria in the product that settle on the surfaces of pipes (between the UHT treatment and the aseptic tank, and between the aseptic tank and the filler cabinet), potentially leading to biofilm formation for each bacterium, was better characterized. It was modeled as a function of the hygienic design level of the aseptic-UHT line; the experts provided the model structure and most of the model parameter values. The mean SFR was estimated at 10×10(-8) (95% Confidence Interval=[0×10(-8); 350×10(-8)]) and 570×10(-8) (95% CI=[380×10(-8); 820×10(-8)]) for B. cereus and G. stearothermophilus, respectively. These estimations were more accurate (since the confidence interval was provided) than those given by the model with only variability (for which the estimates were 15×10(-8) and 580×10(-8) for B. cereus and G. stearothermophilus, respectively). The updated model outputs were also compared with those obtained when inputs were described by a generic distribution, without specific information related to the case study. Results showed that using a generic distribution can lead to unrealistic estimations (e.g., 3,181,000 product units contaminated by G. stearothermophilus among 10(8) product units produced) and emphasized the added value of eliciting information from experts with relevant specialist knowledge. Copyright © 2015 Elsevier B.V. All rights reserved.
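The two-dimensional (second-order) Monte Carlo structure described here separates uncertainty from variability in nested loops; a generic sketch with placeholder distributions, not the paper's model:

    # Second-order Monte Carlo skeleton: the outer loop samples uncertain
    # parameters (e.g., a D-value's mean), the inner loop samples variability
    # given those parameters, yielding a confidence band on the output rate.
    # Distributions and numbers are placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    outer_means = []
    for _ in range(200):                             # uncertainty loop
        log_d_mean = rng.normal(loc=0.3, scale=0.1)  # uncertain parameter
        surv = []
        for _ in range(1000):                        # variability loop
            log_d = rng.normal(loc=log_d_mean, scale=0.05)
            surv.append(10 ** (-4.0 / 10 ** log_d))  # survival after a 4-log-target process
        outer_means.append(np.mean(surv))

    lo, hi = np.percentile(outer_means, [2.5, 97.5])
    print(f"failure-rate mean with 95% CI: [{lo:.2e}, {hi:.2e}]")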
Lautz, Jonathan D; Brown, Emily A; VanSchoiack, Alison A Williams; Smith, Stephen E P
2018-05-27
Cells utilize dynamic, network-level rearrangements in highly interconnected protein interaction networks to transmit and integrate information from distinct signaling inputs. Despite the importance of protein interaction network dynamics, the organizational logic underlying information flow through these networks is not well understood. Previously, we developed the quantitative multiplex co-immunoprecipitation platform, which allows for the simultaneous and quantitative measurement of the amount of co-association between large numbers of proteins in shared complexes. Here, we adapt quantitative multiplex co-immunoprecipitation to define the activity-dependent dynamics of an 18-member protein interaction network in order to better understand the underlying principles governing glutamatergic signal transduction. We first establish that immunoprecipitation detected by flow cytometry can detect activity-dependent changes in two known protein-protein interactions (Homer1-mGluR5 and PSD-95-SynGAP). We next demonstrate that neuronal stimulation elicits a coordinated change in our targeted protein interaction network, characterized by the initial dissociation of Homer1- and SynGAP-containing complexes followed by increased associations among glutamate receptors and PSD-95. Finally, we show that stimulation of distinct glutamate receptor types results in different modular sets of protein interaction network rearrangements, and that cells activate both modules in order to integrate complex inputs. This analysis demonstrates that cells respond to distinct types of glutamatergic input by modulating different combinations of protein co-associations among a targeted network of proteins. Our data support a model of synaptic plasticity in which synaptic stimulation elicits dissociation of preexisting multiprotein complexes, opening binding slots in scaffold proteins and allowing for the recruitment of additional glutamatergic receptors. This article is protected by copyright. All rights reserved.
Forkert, N D; Cheng, B; Kemmling, A; Thomalla, G; Fiehler, J
2014-01-01
The objective of this work is to present the software tool ANTONIA, which has been developed to facilitate a quantitative analysis of perfusion-weighted MRI (PWI) datasets in general as well as the subsequent multi-parametric analysis of additional datasets for the specific purpose of acute ischemic stroke patient dataset evaluation. Three different methods for the analysis of DSC or DCE PWI datasets are currently implemented in ANTONIA, which can be case-specifically selected based on the study protocol. These methods comprise a curve fitting method as well as a deconvolution-based and deconvolution-free method integrating a previously defined arterial input function. The perfusion analysis is extended for the purpose of acute ischemic stroke analysis by additional methods that enable an automatic atlas-based selection of the arterial input function, an analysis of the perfusion-diffusion and DWI-FLAIR mismatch as well as segmentation-based volumetric analyses. For reliability evaluation, the described software tool was used by two observers for quantitative analysis of 15 datasets from acute ischemic stroke patients to extract the acute lesion core volume, FLAIR ratio, perfusion-diffusion mismatch volume with manually as well as automatically selected arterial input functions, and follow-up lesion volume. The results of this evaluation revealed that the described software tool leads to highly reproducible results for all parameters if the automatic arterial input function selection method is used. Due to the broad selection of processing methods that are available in the software tool, ANTONIA is especially helpful to support image-based perfusion and acute ischemic stroke research projects.
New generation of hydraulic pedotransfer functions for Europe
Tóth, B; Weynants, M; Nemes, A; Makó, A; Bilas, G; Tóth, G
2015-01-01
A range of continental-scale soil datasets exists in Europe, with different spatial representation and based on different principles. We developed comprehensive pedotransfer functions (PTFs) principally for applications to spatial datasets with continental coverage. The PTF development included the prediction of soil water retention at various matric potentials and the prediction of parameters characterizing the soil moisture retention and hydraulic conductivity curves (MRC and HCC) of European soils. We developed PTFs with a hierarchical approach, determined by the input requirements. The PTFs were derived using three statistical methods: (i) linear regression where there were quantitative input variables, (ii) regression trees for qualitative, quantitative, and mixed types of information, and (iii) mean statistics of developer-defined soil groups (class PTFs) when only qualitative input parameters were available. Data from the recently established European Hydropedological Data Inventory (EU-HYDI), which holds the most comprehensive geographical and thematic coverage of hydropedological data in Europe, were used to train and test the PTFs. The applied modelling techniques and the EU-HYDI allowed the development of hydraulic PTFs that are more reliable and applicable to a greater variety of input parameters than those previously available for Europe. The new set of PTFs therefore offers tailored, advanced tools for a wide range of applications on the continent. PMID:25866465
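The first tier of such a hierarchy, linear regression on quantitative predictors, can be sketched with scikit-learn; the predictors and training rows below are placeholders, not the published EU-HYDI-derived coefficients:

    # Sketch of a regression-based PTF tier with hypothetical training data:
    # predicting water content at -1500 kPa from basic soil properties.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # columns: sand %, clay %, organic carbon %, bulk density (placeholders)
    X = np.array([[75, 10, 0.5, 1.55], [40, 30, 1.2, 1.40],
                  [20, 45, 2.0, 1.30], [60, 20, 0.8, 1.50]])
    y = np.array([0.10, 0.22, 0.30, 0.15])    # water content at wilting point

    ptf = LinearRegression().fit(X, y)
    print(ptf.predict([[50, 25, 1.0, 1.45]])) # predicted retention for a new soil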
Adamski, Mateusz G; Gumann, Patryk; Baird, Alison E
2014-01-01
Over the past decade, rapid advances have occurred in the understanding of RNA expression and its regulation. Quantitative polymerase chain reaction (qPCR) has become the gold standard for quantifying gene expression. Microfluidic, next-generation, high-throughput qPCR now permits the detection of transcript copy number in thousands of reactions simultaneously, dramatically increasing sensitivity over standard qPCR. Here we present a gene expression analysis method applicable to both standard and high-throughput qPCR. The technique is adjusted to the input sample quantity (e.g., the number of cells) and is independent of control gene expression. It is efficiency-corrected and, with the use of a universal reference sample (commercial complementary DNA (cDNA)), permits the normalization of results between different batches and between different instruments, regardless of potential differences in transcript amplification efficiency. Modifications of the input quantity method include (1) the achievement of absolute quantification and (2) a non-efficiency-corrected analysis. When compared to other commonly used algorithms, the input quantity method proved to be valid. This method is of particular value for clinical studies of whole blood and circulating leukocytes, where cell counts are readily available.
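The normalization logic, as we read the abstract, condenses to one formula: efficiency-corrected fold change versus the universal reference, scaled by input quantity. The sketch below is our reconstruction, with variable names of our choosing:

    # Reconstruction of the described logic (variable names ours): expression is
    # efficiency-corrected against the universal cDNA reference and scaled by
    # input sample quantity, with no control-gene term.
    def input_quantity_expression(cq_sample, cq_reference, efficiency, n_cells):
        """Relative transcript amount per input cell.
        efficiency: amplification efficiency (1.0 = perfect doubling per cycle)."""
        fold_vs_reference = (1.0 + efficiency) ** (cq_reference - cq_sample)
        return fold_vs_reference / n_cells

    # e.g. a target detected 3 cycles earlier than the reference, from 10,000 cells:
    print(input_quantity_expression(cq_sample=24.0, cq_reference=27.0,
                                    efficiency=0.95, n_cells=1e4))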
Fan, Jiadong; Sun, Zhibin; Zhang, Jian; Huang, Qingjie; Yao, Shengkun; Zong, Yunbing; Kohmura, Yoshiki; Ishikawa, Tetsuya; Liu, Hong; Jiang, Huaidong
2015-06-16
Coherent diffraction microscopy provides a powerful lensless imaging method for better understanding microorganisms at the nanoscale. Here we demonstrated quantitative imaging of intact, unstained magnetotactic bacteria using coherent X-ray diffraction microscopy combined with an iterative phase retrieval algorithm. Although the signal-to-noise ratio of the X-ray diffraction pattern from a single magnetotactic bacterium is weak due to the low scattering ability of biomaterials, an 18.6 nm half-period resolution of the reconstructed image was achieved by using a hybrid input-output phase retrieval algorithm. On the basis of the quantitative reconstructed images, the morphology and some intracellular structures, such as the nucleoid, poly-β-hydroxybutyrate granules, and magnetosomes, were identified and also confirmed by scanning electron microscopy and energy dispersive spectroscopy. Benefiting from the quantifiability of coherent diffraction imaging, for the first time to our knowledge, an average density of magnetotactic bacteria was calculated to be ∼1.19 g/cm(3). This technique has a wide range of applications, especially in quantitative imaging of low-scattering biomaterials and multicomponent materials at nanoscale resolution. Combined with cryogenic techniques or X-ray free electron lasers, the method could image cells in a hydrated condition, which helps to maintain their natural structure.
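The hybrid input-output iteration mentioned here is the classic Fienup update; the sketch below is a generic version (random start, Fourier-magnitude constraint, support plus positivity, feedback parameter beta), not the authors' tuned implementation:

    # Generic hybrid input-output (HIO) phase retrieval: enforce the measured
    # Fourier magnitudes, keep the image inside the support, and apply negative
    # feedback (beta) outside it.
    import numpy as np

    def hio(magnitudes, support, n_iter=500, beta=0.9, seed=0):
        rng = np.random.default_rng(seed)
        g = rng.random(magnitudes.shape) * support        # random start in support
        for _ in range(n_iter):
            G = np.fft.fft2(g)
            G = magnitudes * np.exp(1j * np.angle(G))     # Fourier-magnitude constraint
            g_prime = np.real(np.fft.ifft2(G))
            inside = support & (g_prime > 0)              # support + positivity
            g = np.where(inside, g_prime, g - beta * g_prime)
        return g

    # Toy test: recover a small object from its diffraction magnitudes.
    obj = np.zeros((64, 64)); obj[28:36, 30:34] = 1.0
    support = np.zeros((64, 64), dtype=bool); support[24:40, 26:38] = True
    rec = hio(np.abs(np.fft.fft2(obj)), support)
    print(float(np.corrcoef(rec[support], obj[support])[0, 1]))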
Evaluation of four commercial quantitative real-time PCR kits with inhibited and degraded samples.
Holmes, Amy S; Houston, Rachel; Elwick, Kyleen; Gangitano, David; Hughes-Stamm, Sheree
2018-05-01
DNA quantification is a vital step in forensic DNA analysis to determine the optimal input amount for DNA typing. A quantitative real-time polymerase chain reaction (qPCR) assay that can predict DNA degradation or inhibitors present in a sample prior to DNA amplification could help forensic laboratories create a more streamlined and efficient workflow. This study compares the results from four commercial qPCR kits, (1) the Investigator® Quantiplex® Pro Kit, (2) the Quantifiler® Trio DNA Quantification Kit, (3) the PowerQuant® System, and (4) InnoQuant® HY, with high molecular weight DNA, low-template samples, degraded samples, and DNA spiked with various inhibitors. The results of this study indicate that all kits were comparable in accurately predicting quantities of high-quality DNA down to the sub-picogram level. However, the InnoQuant® HY kit showed the highest precision across the DNA concentration range tested in this study. In addition, all kits performed similarly with low concentrations of forensically relevant PCR inhibitors. In general, the Investigator® Quantiplex® Pro Kit was the kit most tolerant to inhibitors and provided the most accurate quantification results at higher concentrations of inhibitors (except with salt). PowerQuant® and InnoQuant® HY were the most sensitive to inhibitors, but they did indicate significant levels of PCR inhibition. When quantifying degraded samples, each kit provided different degradation indices (DI), with Investigator® Quantiplex® Pro indicating the largest DI and Quantifiler® Trio indicating the smallest DI. When the qPCR kits were paired with their respective STR kits to genotype highly degraded samples, the Investigator® 24plex QS and GlobalFiler® kits generated more complete profiles when the small-target concentrations were used for calculating the input amount.
Networked Microgrids Scoping Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backhaus, Scott N.; Dobriansky, Larisa; Glover, Steve
2016-12-05
Much like individual microgrids, the range of opportunities and potential architectures of networked microgrids is very diverse. The goals of this scoping study are to provide an early assessment of research and development needs by examining the benefits of, risks created by, and risks to networked microgrids. At this time there are very few, if any, examples of deployed microgrid networks. In addition, there are very few tools to simulate or otherwise analyze the behavior of networked microgrids. In this setting, it is very difficult to evaluate networked microgrids systematically or quantitatively. At this early stage, this study relies on inputs, estimations, and literature reviews by subject matter experts who are engaged in individual microgrid research and development projects, i.e., the authors of this study. The initial step of the study gathered input from these subject matter experts about the potential opportunities provided by networked microgrids. These opportunities were divided among the subject matter experts for further review; Part 2 of this study comprises these reviews. Part 1 of this study is a summary of the benefits and risks identified in the reviews in Part 2 and a synthesis of the research needs required to enable networked microgrids.
Badawi, A M; Derbala, A S; Youssef, A M
1999-08-01
Computerized ultrasound tissue characterization has become an objective means for the diagnosis of liver diseases. It is difficult to differentiate diffuse liver diseases, namely cirrhotic and fatty liver, by visual inspection of ultrasound images. The visual criteria for differentiating diffuse diseases are rather confusing and highly dependent upon the sonographer's experience. This often introduces bias into the diagnostic procedure and limits its objectivity and reproducibility. Computerized tissue characterization that quantitatively assists the sonographer in accurate differentiation, and thereby minimizes the degree of risk, is thus justified. Fuzzy logic has emerged as one of the most active areas in classification. In this paper, we present an approach that employs fuzzy reasoning techniques to automatically differentiate diffuse liver diseases using numerical quantitative features measured from ultrasound images. Fuzzy rules were generated from over 140 cases consisting of normal, fatty, and cirrhotic livers. The input to the fuzzy system is an eight-dimensional vector of feature values: the mean gray level (MGL), the 10th percentile, the contrast (CON), the angular second moment (ASM), the entropy (ENT), the correlation (COR), the attenuation (ATTEN), and the speckle separation. The output of the fuzzy system is one of three categories: cirrhotic, fatty, or normal. The steps for differentiating the pathologies are data acquisition and feature extraction, then dividing the input spaces of the measured quantitative data into fuzzy sets. Based on expert knowledge, the fuzzy rules are generated and applied using fuzzy inference procedures to determine the pathology. Different membership functions are developed for the input spaces. This approach has resulted in very good sensitivity and specificity for classifying diffuse liver pathologies. This classification technique can be used in the diagnostic process together with history information and laboratory, clinical, and pathological examinations.
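A minimal sketch of the fuzzy-rule idea follows; the triangular membership shapes and the three rules are invented for illustration, whereas the paper uses eight features and an expert-elicited rule base:

    # Toy fuzzy classifier: triangular memberships, min as AND, and the class of
    # the strongest rule wins. All shapes and thresholds are invented.
    def triangular(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def classify(mgl, atten):
        fatty = min(triangular(mgl, 110, 150, 190), triangular(atten, 0.7, 1.0, 1.3))
        cirrhotic = min(triangular(mgl, 60, 90, 120), triangular(atten, 0.3, 0.5, 0.7))
        normal = min(triangular(mgl, 80, 105, 130), triangular(atten, 0.4, 0.6, 0.8))
        return max([(fatty, "fatty"), (cirrhotic, "cirrhotic"), (normal, "normal")])

    print(classify(mgl=145, atten=1.05))    # -> (strength, 'fatty')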
Robust estimation of adaptive tensors of curvature by tensor voting.
Tong, Wai-Shun; Tang, Chi-Keung
2005-03-01
Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (the sign of the Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision and rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been validated with detailed quantitative experiments, performing better in a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.
MacGregor, Duncan J.; Leng, Gareth
2012-01-01
Vasopressin neurons, responding to input generated by osmotic pressure, use an intrinsic mechanism to shift from slow irregular firing to a distinct phasic pattern, consisting of long bursts and silences lasting tens of seconds. With increased input, bursts lengthen, eventually shifting to continuous firing. The phasic activity remains asynchronous across the cells and is not reflected in the population output signal. Here we have used a computational vasopressin neuron model to investigate the functional significance of the phasic firing pattern. We generated a concise model of the synaptic input driven spike firing mechanism that gives a close quantitative match to vasopressin neuron spike activity recorded in vivo, tested against endogenous activity and experimental interventions. The integrate-and-fire based model provides a simple physiological explanation of the phasic firing mechanism involving an activity-dependent slow depolarising afterpotential (DAP) generated by a calcium-inactivated potassium leak current. This is modulated by the slower, opposing, action of activity-dependent dendritic dynorphin release, which inactivates the DAP, the opposing effects generating successive periods of bursting and silence. Model cells are not spontaneously active, but fire when perturbed by random perturbations mimicking synaptic input. We constructed one population of such phasic neurons, and another population of similar cells but which lacked the ability to fire phasically. We then studied how these two populations differed in the way that they encoded changes in afferent inputs. By comparison with the non-phasic population, the phasic population responds linearly to increases in tonic synaptic input. Non-phasic cells respond to transient elevations in synaptic input in a way that strongly depends on background activity levels, phasic cells in a way that is independent of background levels, and show a similar strong linearization of the response. These findings show large differences in information coding between the populations, and apparent functional advantages of asynchronous phasic firing. PMID:23093929
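The mechanism described can be caricatured in a few lines: an integrate-and-fire unit whose spikes build a depolarising afterpotential that slowly accumulating dynorphin inactivates. All constants below are invented, so this is a structural sketch rather than the authors' calibrated model:

    # Structural sketch of the phasic mechanism (all constants invented): spikes
    # build a fast DAP; slowly accumulating dynorphin inactivates it, producing
    # alternating bursts and silences on top of noisy synaptic drive.
    import numpy as np

    rng = np.random.default_rng(2)
    dt, v, dap, dyn = 1.0, 0.0, 0.0, 0.0           # ms; membrane, DAP, dynorphin
    spikes = []
    for step in range(200000):                     # 200 s of simulated time
        drive = dap * (1.0 - dyn)                  # dynorphin inactivates the DAP
        v += dt * (-v / 20.0 + drive) + rng.normal(0, 0.7)   # leaky integration + noise
        if v > 5.0:                                # spike threshold
            spikes.append(step * dt)
            v = 0.0
            dap = min(dap + 0.06, 0.6)             # fast activity-dependent DAP
            dyn = min(dyn + 0.004, 1.0)            # slow opposing dynorphin
        dap -= dt * dap / 3000.0                   # DAP decays over seconds
        dyn -= dt * dyn / 20000.0                  # dynorphin decays over tens of seconds

    isi = np.diff(spikes)
    print(len(spikes), "spikes;", int(np.sum(isi > 5000)), "silences > 5 s")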
Posterior Inferotemporal Cortex Cells Use Multiple Input Pathways for Shape Encoding.
Ponce, Carlos R; Lomber, Stephen G; Livingstone, Margaret S
2017-05-10
In the macaque monkey brain, posterior inferior temporal (PIT) cortex cells contribute to visual object recognition. They receive concurrent inputs from visual areas V4, V3, and V2. We asked how these different anatomical pathways shape PIT response properties by deactivating them while monitoring PIT activity in two male macaques. We found that cooling of V4 or V2|3 did not lead to consistent changes in population excitatory drive; however, population pattern analyses showed that V4-based pathways were more important than V2|3-based pathways. We did not find any image features that predicted decoding accuracy differences between both interventions. Using the HMAX hierarchical model of visual recognition, we found that different groups of simulated "PIT" units with different input histories (lacking "V2|3" or "V4" input) allowed for comparable levels of object-decoding performance and that removing a large fraction of "PIT" activity resulted in similar drops in performance as in the cooling experiments. We conclude that distinct input pathways to PIT relay similar types of shape information, with V1-dependent V4 cells providing more quantitatively useful information for overall encoding than cells in V2 projecting directly to PIT. SIGNIFICANCE STATEMENT Convolutional neural networks are the best models of the visual system, but most emphasize input transformations across a serial hierarchy akin to the primary "ventral stream" (V1 → V2 → V4 → IT). However, the ventral stream also comprises parallel "bypass" pathways: V1 also connects to V4, and V2 to IT. To explore the advantages of mixing long and short pathways in the macaque brain, we used cortical cooling to silence inputs to posterior IT and compared the findings with an HMAX model with parallel pathways.
2000-10-01
most enlightening sources found on how to approach the problem were as follows: 1. Eric A. Hanushek and Others, Making Schools Work, Improving... Hanushek traces the history of educational inputs and outputs in the United States. Since the 1950s, test scores have not increased, while...important inputs see Eric A. Hanushek and Others, Making Schools Work: Improving Performance and Controlling Costs, The Brookings Institution, 1994 and
Oscar Maturana; Daniele Tonina; James A. McKean; John M. Buffington; Charles H. Luce; Diego Caamano
2013-01-01
It is widely recognized that high supplies of fine sediment, largely sand, can negatively impact the aquatic habitat quality of gravel-bed rivers, but effects of the style of input (chronic vs. pulsed) have not been examined quantitatively. We hypothesize that a continuous (i.e. chronic) supply of sand will be more detrimental to the quality of aquatic habitat than an...
Luján, J L; Crago, P E
2004-11-01
Neuroprosthetic systems can be used to restore hand grasp and wrist control in individuals with C5/C6 spinal cord injury. A computer-based system was developed for the implementation, tuning and clinical assessment of neuroprosthetic controllers, using off-the-shelf hardware and software. The computer system turned a Pentium III PC running Windows NT into a non-dedicated, real-time system for the control of neuroprostheses. Software execution (written using the high-level programming languages LabVIEW and MATLAB) was divided into two phases: training and real-time control. During the training phase, the computer system collected input/output data by stimulating the muscles and measuring the muscle outputs in real time, analysed the recorded data, generated a set of training data and trained an artificial neural network (ANN)-based controller. During real-time control, the computer system stimulated the muscles using stimulus pulsewidths predicted by the ANN controller in response to a sampled input from an external command source, to provide independent control of hand grasp and wrist posture. System timing was stable, reliable and capable of providing muscle stimulation at frequencies up to 24 Hz. To demonstrate the application of the test-bed, an ANN-based controller was implemented with three inputs and two independent channels of stimulation. The ANN controller's ability to control hand grasp and wrist angle independently was assessed by quantitative comparison of the outputs of the stimulated muscles with a set of desired grasp or wrist postures determined by the command signal. Controller performance results were mixed, but the platform provided the tools to implement and assess future controller designs.
How input fluctuations reshape the dynamics of a biological switching system
NASA Astrophysics Data System (ADS)
Hu, Bo; Kessler, David A.; Rappel, Wouter-Jan; Levine, Herbert
2012-12-01
An important task in quantitative biology is to understand the role of stochasticity in biochemical regulation. Here, as an extension of our recent work [Phys. Rev. Lett. 107, 148101 (2011)], we study how input fluctuations affect the stochastic dynamics of a simple biological switch. In our model, the on transition rate of the switch is directly regulated by a noisy input signal, which is described as a non-negative mean-reverting diffusion process. This continuous process can be a good approximation of the discrete birth-death process and is much more analytically tractable. Within this setup, we apply the Feynman-Kac theorem to investigate the statistical features of the output switching dynamics. Consistent with our previous findings, the input noise is found to effectively suppress the input-dependent transitions. We show analytically that this effect becomes significant when the input signal fluctuates greatly in amplitude and reverts slowly to its mean.
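As a sanity-check companion to the analysis, the setup can be simulated directly. The sketch below drives a two-state switch with a non-negative mean-reverting (Cox-Ingersoll-Ross-type) input; all rate constants and noise parameters are invented for illustration.

```python
import numpy as np

def simulate_switch(T=2000.0, dt=0.01, mean=1.0, revert=0.1, sigma=0.3, seed=0):
    """Monte-Carlo sketch of the model class described above: a two-state
    switch whose ON rate is proportional to a noisy mean-reverting input."""
    rng = np.random.default_rng(seed)
    s, state, on_time = mean, 0, 0.0
    k_on_per_s, k_off = 1.0, 0.5            # rate constants (assumed)
    for _ in range(int(T / dt)):
        # CIR-type input: mean-reverting, with noise that vanishes at zero.
        s += revert * (mean - s) * dt + sigma * np.sqrt(max(s, 0.0) * dt) * rng.normal()
        s = max(s, 0.0)
        if state == 0 and rng.random() < k_on_per_s * s * dt:
            state = 1                        # input-dependent ON transition
        elif state == 1 and rng.random() < k_off * dt:
            state = 0                        # constant-rate OFF transition
        on_time += state * dt
    return on_time / T

print("fraction of time ON:", simulate_switch())
```

Sweeping sigma and revert in this loop gives a numerical view of the suppression effect derived analytically in the paper.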
A novel data storage logic in the cloud
Mátyás, Bence; Szarka, Máté; Járvás, Gábor; Kusper, Gábor; Argay, István; Fialowski, Alice
2016-01-01
Databases that store and manage long-term scientific information related to the life sciences hold huge amounts of quantitative attributes. Introducing a new entity attribute normally requires modification of the existing data tables and of the programs that use them. The proposed solution increases the number of virtual data tables while the number of screens remains the same. The main objective of the present study was to introduce a logic called Joker Tao (JT), which provides universal data storage for cloud-based databases: all types of input data can be interpreted as an entity and an attribute at the same time, in the same data table. PMID:29026521
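One way to read the JT idea is as an extreme entity-attribute-value layout in which attribute names are themselves first-class entities. The sketch below illustrates that reading with SQLite; the table and column names are our own, not the paper's schema.

```python
import sqlite3

# One table in which every row can serve as an entity or as an attribute.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE jt (
    id     INTEGER PRIMARY KEY,
    entity TEXT,   -- what the row describes
    attr   TEXT,   -- which property it records
    value  TEXT    -- the recorded value
)""")
rows = [
    ("sample_42", "species", "S. pombe"),
    ("sample_42", "od600", "0.83"),
    # 'od600' itself treated as an entity, carrying its own metadata:
    ("od600", "unit", "absorbance"),
]
con.executemany("INSERT INTO jt (entity, attr, value) VALUES (?, ?, ?)", rows)
# Adding a brand-new attribute needs no schema change, only a new row.
for row in con.execute("SELECT entity, attr, value FROM jt"):
    print(row)
```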
Multifocus image fusion using phase congruency
NASA Astrophysics Data System (ADS)
Zhan, Kun; Li, Qiaoqiao; Teng, Jicai; Wang, Mingying; Shi, Jinhui
2015-05-01
We address the problem of fusing multifocus images based on phase congruency (PC). PC provides a sharpness feature of a natural image. The focus measure (FM) is identified as strong PC near a distinctive image feature, evaluated by the complex Gabor wavelet. PC is more robust against noise than other FMs. The fused image is obtained by a new fusion rule (FR), with the focused region selected by the FR from one of the input images. Experimental results show that the proposed fusion scheme matches the fusion performance of state-of-the-art methods in terms of visual quality and quantitative evaluations.
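The per-pixel selection logic of such a fusion rule is easy to sketch. The code below substitutes local Laplacian energy for the paper's complex-Gabor phase congruency as the focus measure, so it illustrates the FR rather than the PC computation; the window size and test images are placeholders.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def laplacian(im):
    out = np.zeros_like(im)
    out[1:-1, 1:-1] = (im[:-2, 1:-1] + im[2:, 1:-1] + im[1:-1, :-2]
                       + im[1:-1, 2:] - 4.0 * im[1:-1, 1:-1])
    return out

def focus_measure(im, win=7):
    """Local energy of the Laplacian -- a stand-in for phase congruency."""
    pad = win // 2
    p = np.pad(laplacian(im) ** 2, pad, mode="edge")
    return sliding_window_view(p, (win, win)).mean(axis=(2, 3))

def fuse(im1, im2, win=7):
    """Fusion rule: at each pixel keep the input judged sharper by the FM."""
    mask = focus_measure(im1, win) >= focus_measure(im2, win)
    return np.where(mask, im1, im2)

a, b = np.random.rand(64, 64), np.random.rand(64, 64)
print(fuse(a, b).shape)   # (64, 64)
```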
An experimental approach to identify dynamical models of transcriptional regulation in living cells
NASA Astrophysics Data System (ADS)
Fiore, G.; Menolascina, F.; di Bernardo, M.; di Bernardo, D.
2013-06-01
We describe an innovative experimental approach, and a proof-of-principle investigation, for the application of System Identification techniques to derive quantitative dynamical models of transcriptional regulation in living cells. Specifically, we constructed an experimental platform for System Identification based on a microfluidic device, a time-lapse microscope, and a set of automated syringes, all controlled by a computer. The platform allows delivering a time-varying concentration of any molecule of interest to the cells trapped in the microfluidic device (input) and real-time monitoring of a fluorescent reporter protein (output) at a high sampling rate. We tested this platform on the GAL1 promoter in the yeast Saccharomyces cerevisiae driving expression of a green fluorescent protein (Gfp) fused to the GAL1 gene. We demonstrated that the System Identification platform enables accurate measurements of the input (sugar concentrations in the medium) and output (Gfp fluorescence intensity) signals, thus making it possible to apply System Identification techniques to obtain a quantitative dynamical model of the promoter. We explored and compared linear and nonlinear model structures in order to select the most appropriate for deriving a quantitative model of the promoter dynamics. Our platform can be used to quickly obtain quantitative models of eukaryotic promoters, currently a complex and time-consuming process.
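In the same spirit, the identification step itself can be sketched compactly: fit a first-order linear ARX model to input/output records (inducer concentration in, fluorescence out). The model order and the synthetic data below are our assumptions for illustration.

```python
import numpy as np

# Fit y[t] = a*y[t-1] + b*u[t-1] + e[t] by least squares.
rng = np.random.default_rng(0)
n = 500
u = (rng.random(n) > 0.5).astype(float)            # time-varying input profile
y = np.zeros(n)
for t in range(1, n):                              # "true" system: a=0.9, b=0.5
    y[t] = 0.9 * y[t - 1] + 0.5 * u[t - 1] + 0.05 * rng.normal()

X = np.column_stack([y[:-1], u[:-1]])              # regressors
a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(f"estimated a={a_hat:.3f}, b={b_hat:.3f}")   # ~0.9 and ~0.5
```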
Flight Demonstration of Integrated Airport Surface Movement Technologies
NASA Technical Reports Server (NTRS)
Young, Steven D.; Jones, Denise R.
1998-01-01
This document describes operations associated with a set of flight experiments and demonstrations using a Boeing-757-200 research aircraft as part of low visibility landing and surface operations (LVLASO) research activities. To support this experiment, the B-757 performed flight and taxi operations at the Atlanta Hartsfield International Airport in Atlanta, GA. The test aircraft was equipped with experimental displays that were designed to provide flight crews with sufficient information to enable safe, expedient surface operations in any weather condition down to a runway visual range of 300 feet. In addition to flight deck displays and supporting equipment onboard the B-757, there was also a ground-based component of the system that provided for ground controller inputs and surveillance of airport surface movements. Qualitative and quantitative results are discussed.
O'Sullivan, F; Kirrane, J; Muzi, M; O'Sullivan, J N; Spence, A M; Mankoff, D A; Krohn, K A
2010-03-01
Kinetic quantitation of dynamic positron emission tomography (PET) studies via compartmental modeling usually requires the time-course of the radio-tracer concentration in the arterial blood as an arterial input function (AIF). For human and animal imaging applications, significant practical difficulties are associated with direct arterial sampling and as a result there is substantial interest in alternative methods that require no blood sampling at the time of the study. A fixed population template input function derived from prior experience with directly sampled arterial curves is one possibility. Image-based extraction, including requisite adjustment for spillover and recovery, is another approach. The present work considers a hybrid statistical approach based on a penalty formulation in which the information derived from a priori studies is combined in a Bayesian manner with information contained in the sampled image data in order to obtain an input function estimate. The absolute scaling of the input is achieved by an empirical calibration equation involving the injected dose together with the subject's weight, height and gender. The technique is illustrated in the context of (18)F-Fluorodeoxyglucose (FDG) PET studies in humans. A collection of 79 arterially sampled FDG blood curves are used as a basis for a priori characterization of input function variability, including scaling characteristics. Data from a series of 12 dynamic cerebral FDG PET studies in normal subjects are used to evaluate the performance of the penalty-based AIF estimation technique. The focus of evaluations is on quantitation of FDG kinetics over a set of 10 regional brain structures. As well as the new method, a fixed population template AIF and a direct AIF estimate based on segmentation are also considered. Kinetics analyses resulting from these three AIFs are compared with those resulting from arterially sampled AIFs. The proposed penalty-based AIF extraction method is found to achieve significant improvements over the fixed template and the segmentation methods. As well as achieving acceptable kinetic parameter accuracy, the quality of fit of the region of interest (ROI) time-course data based on the extracted AIF matches results based on arterially sampled AIFs. In comparison, significant deviation in the estimation of FDG flux and degradation in ROI data fit are found with the template and segmentation methods. The proposed AIF extraction method is recommended for practical use.
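The penalty idea reduces, in its simplest quadratic form, to a weighted average of the image-derived curve and the population template: minimising ||y - c||² + λ||c - prior||² gives c = (y + λ·prior)/(1 + λ). The sketch below uses invented curves and weight; the real method additionally handles spillover, recovery, and calibration-based scaling, all omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 121)                   # minutes
prior = 5.0 * t * np.exp(-t / 3.0) + 0.5      # population template AIF
truth = 5.5 * t * np.exp(-t / 2.8) + 0.45     # this subject's actual AIF
y = truth + 0.4 * rng.normal(size=t.size)     # noisy image-derived curve

lam = 1.0                                     # prior weight (tunable)
c_hat = (y + lam * prior) / (1.0 + lam)       # closed-form penalised estimate
print("RMSE template: ", np.sqrt(np.mean((prior - truth) ** 2)))
print("RMSE penalised:", np.sqrt(np.mean((c_hat - truth) ** 2)))
```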
Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil
2011-01-01
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
NASA Astrophysics Data System (ADS)
Finzi, A.
2016-12-01
The rhizosphere is a hot spot and hot moment for biogeochemical cycles. Microbial activity, extracellular enzyme activity and element cycles are greatly enhanced by root-derived carbon inputs. As such, the rhizosphere may be an important driver of ecosystem responses to global changes such as rising temperatures and atmospheric CO2 concentrations. Empirical research on the rhizosphere is extensive, but extrapolation of rhizosphere processes to large spatial and temporal scales remains largely untested. Using a combination of field studies, meta-analysis and numerical models, we have found good reason to think that scaling is possible. In this talk I discuss the results of this research and focus on the results of a new modeling effort that explicitly links root distribution and architecture with a model of microbial physiology to assess the extent to which rhizosphere processes may affect ecosystem responses to global change. Results to date suggest that root inputs of C and possibly nutrients (i.e., nitrogen) affect the fate of new C inputs to the soil (i.e., accumulation or loss) in response to warming and enhanced productivity at elevated CO2. The model also provides qualitative guidance on incorporating the known effects of ectomycorrhizal fungi on decomposition and rates of soil C and N cycling.
Kim, Sangyong; Moon, Joon-Ho; Shin, Yoonseok; Kim, Gwang-Hee; Seo, Deok-Seok
2013-01-01
The objective of this research is to quantitatively measure and compare the environmental load and construction cost of different structural frame types. Construction cost also accounts for the costs of CO2 emissions of input materials. The choice of structural frame type is a major consideration in construction, as this element represents about 33% of total building construction costs. In this research, four constructed buildings were analyzed, with these having either reinforced concrete (RC) or steel (S) structures. An input-output framework analysis was used to measure energy consumption and CO2 emissions of input materials for each structural frame type. In addition, the CO2 emissions cost was measured using the trading price of CO2 emissions on the International Commodity Exchange. This research revealed that both energy consumption and CO2 emissions were, on average, 26% lower with the RC structure than with the S structure, and the construction costs (including the CO2 emissions cost) of the RC structure were about 9.8% lower, compared to the S structure. This research provides insights through which the construction industry will be able to respond to the carbon market, which is expected to continue to grow in the future. PMID:24227998
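The input-output step can be sketched with the standard Leontief calculation: total (direct plus indirect) output x solves (I - A)x = d, and embodied emissions follow by weighting x with emission intensities. The matrix, intensities, and demand vector below are made-up placeholders, not the study's data.

```python
import numpy as np

A = np.array([[0.10, 0.25],       # column j: inputs needed per unit of sector j
              [0.30, 0.05]])
e = np.array([1.2, 0.4])          # t CO2 per unit output of each sector
demand = np.array([100.0, 20.0])  # final demand from the construction project

x = np.linalg.solve(np.eye(2) - A, demand)  # total output, direct + indirect
total_co2 = e @ x                            # embodied emissions
print(f"embodied CO2: {total_co2:.1f} t")
```

Multiplying the embodied tonnage by a carbon trading price then yields the CO2 emissions cost folded into construction cost above.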
Electric energy costs and firm productivity in the countries of the Pacific Alliance
NASA Astrophysics Data System (ADS)
Camacho, Anamaria
This paper explores the relation between energy as an input of production and firm-level productivity for Chile, Colombia, Mexico and Peru, all country members of the Pacific Alliance economic bloc. The empirical literature has explored the impact of infrastructure on productivity; however, there is limited analysis of the impact of particular infrastructure variables, such as energy, on productivity at the firm level in Latin America. This study therefore conducts a quantitative assessment of the responsiveness of productivity to energy cost and quality for Chile, Colombia, Mexico and Peru. The empirical strategy is to estimate a Cobb-Douglas production function using the World Bank's Enterprise Survey to obtain comparable measures of output and inputs of production. This approach provides estimates of input factor elasticities for all of the factors of production, including energy. The results indicate that electric energy costs explain cross-country differences in firm-level productivity. For the particular case of Colombia, the country exhibits the lowest capital and labor productivity of the PA, and firm output is highly responsive to changes in energy use. As a result, the evidence suggests that policies reducing electric energy costs are an efficient alternative for increasing firm performance, particularly in the case of Colombia.
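The estimation strategy reduces to ordinary least squares after log-linearising the Cobb-Douglas form Y = A·K^a·L^b·E^c. The sketch below recovers assumed elasticities from synthetic firm data; the data and coefficient values are illustrative, not Enterprise Survey results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
K, L, E = [np.exp(rng.normal(size=n)) for _ in range(3)]    # capital, labor, energy
Y = 2.0 * K**0.3 * L**0.5 * E**0.15 * np.exp(0.1 * rng.normal(size=n))

# ln Y = ln A + a ln K + b ln L + c ln E + error  ->  OLS
X = np.column_stack([np.ones(n), np.log(K), np.log(L), np.log(E)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
print("elasticities (K, L, E):", np.round(coef[1:], 3))     # ~0.3, 0.5, 0.15
```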
Input-output characterization of fiber reinforced composites by P waves
NASA Technical Reports Server (NTRS)
Renneisen, John D.; Williams, James H., Jr.
1990-01-01
Input-output characterization of fiber composites is studied theoretically by tracing P waves in the media. A new means of tracing the P-wave and reflection-generated SV-wave paths in the continuum plate is developed. A theoretical output voltage from the receiving transducer is calculated for a tone burst. The study enhances the quantitative and qualitative understanding of the nondestructive evaluation of fiber composites that can be modeled as transversely isotropic media.
Butler, Blake E; Chabot, Nicole; Lomber, Stephen G
2016-09-01
The superior colliculus (SC) is a midbrain structure central to orienting behaviors. The organization of descending projections from sensory cortices to the SC has garnered much attention; however, rarely have projections from multiple modalities been quantified and contrasted, allowing for meaningful conclusions within a single species. Here, we examine corticotectal projections from visual, auditory, somatosensory, motor, and limbic cortices via retrograde pathway tracers injected throughout the superficial and deep layers of the cat SC. As anticipated, the majority of cortical inputs to the SC originate in the visual cortex. In fact, each field implicated in visual orienting behavior makes a substantial projection. Conversely, only one area of the auditory orienting system, the auditory field of the anterior ectosylvian sulcus (fAES), and no area involved in somatosensory orienting, shows significant corticotectal inputs. Although small relative to visual inputs, the projection from the fAES is of particular interest, as it represents the only bilateral cortical input to the SC. This detailed, quantitative study allows for comparison across modalities in an animal that serves as a useful model for both auditory and visual perception. Moreover, the differences in patterns of corticotectal projections between modalities inform the ways in which orienting systems are modulated by cortical feedback. J. Comp. Neurol. 524:2623-2642, 2016. © 2016 Wiley Periodicals, Inc.
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of the parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
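The static-nonlinearity half of the cascade can be sketched by simply measuring the stationary transfer function of an LIF neuron in background noise; in the full LN model that function is applied to the linearly filtered input. The parameters below are generic illustrative choices, not the paper's parameter-free analytic forms.

```python
import numpy as np

def lif_rate(i0, sigma=0.5, tau=0.02, vth=1.0, dt=2e-4, T=20.0, seed=0):
    """Mean firing rate of a leaky integrate-and-fire neuron with constant
    drive i0 plus white background noise (Euler-Maruyama simulation)."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt * (i0 - v) / tau + sigma * np.sqrt(dt / tau) * rng.normal()
        if v >= vth:
            v, spikes = 0.0, spikes + 1   # threshold crossing: spike and reset
    return spikes / T

# The measured rate-vs-drive curve is the cascade's static nonlinearity F;
# the LN prediction for a slowly varying input i(t) is then F(filtered i(t)).
for i0 in (0.6, 0.8, 1.0, 1.2):
    print(f"i0={i0:.1f}  rate={lif_rate(i0):.1f} Hz")
```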
Brewer, Elizabeth; Yarwood, Rockie; Lajtha, Kate; Myrold, David
2013-01-01
One explanation given for the high microbial diversity found in soils is that they contain a large inactive biomass that is able to persist in soils for long periods of time. This persistent microbial fraction may help to buffer the functionality of the soil community during times of low nutrients by providing a reservoir of specialized functions that can be reactivated when conditions improve. A study was designed to test the hypothesis that, in soils lacking fresh root or detrital inputs, microbial community composition may persist relatively unchanged, and that upon addition of new inputs this community will be stimulated to grow and break down litter similarly to control soils. Soils from two of the Detrital Input and Removal Treatments (DIRT) at the H. J. Andrews Experimental Forest, the no-input and control treatment plots, were used in a microcosm experiment where Douglas-fir needles were added to soils. After 3 and 151 days of incubation, soil microbial DNA and RNA were extracted and characterized using quantitative PCR (qPCR) and 454 pyrosequencing. The abundance of 16S and 28S gene copies and RNA copies did not vary with soil type or amendment; however, treatment differences were observed in the abundance of the archaeal ammonia-oxidizing amoA gene. Analysis of ∼110,000 bacterial sequences showed a significant change in the active (RNA-based) community between day 3 and day 151, but microbial composition was similar between soil types. These results show that even after 12 years of plant litter exclusion, the legacy of community composition was well buffered against a dramatic disturbance. PMID:23263952
Classical Molecular Dynamics Simulation of Nuclear Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devanathan, Ram; Krack, Matthias; Bertolus, Marjorie
2015-10-10
Molecular dynamics simulation is well suited to study primary damage production by irradiation, defect interactions with fission gas atoms, gas bubble nucleation, grain boundary effects on defect and gas bubble evolution in nuclear fuel, and the resulting changes in thermo-mechanical properties. In these simulations, the forces on the ions are dictated by interaction potentials generated by fitting properties of interest to experimental data. The results obtained from the present generation of potentials are qualitatively similar, but quantitatively different. There is a need to refine existing potentials to provide a better representation of the performance of polycrystalline fuel under a variety of operating conditions, and to develop models that are equipped to handle deviations from stoichiometry. In addition to providing insights into fundamental mechanisms governing the behaviour of nuclear fuel, MD simulations can also provide parameters that can be used as inputs for mesoscale models.
Pärs, Martti; Gradmann, Michael; Gräf, Katja; Bauer, Peter; Thelakkat, Mukundan; Köhler, Jürgen
2014-01-01
We investigated the capability of molecular triads, consisting of two strong fluorophores covalently linked to a photochromic molecule, for optical gating. To this end, we monitored the fluorescence intensity of the fluorophores as a function of the isomeric state of the photoswitch. From the analysis of our data we develop a kinetic model that allows us to predict quantitatively the degree of fluorescence modulation as a function of the mutual intensities of the lasers used to induce the fluorescence and the switching of the photochromic unit. We find that the achievable contrast for the modulation of the fluorescence depends mainly on the intensity ratio of the two light beams and appears to be very robust against absolute changes of these intensities. The latter result provides valuable information for the development of all-optical circuits, which would require handling different signal strengths for the input and output levels. PMID:24614963
NASA Astrophysics Data System (ADS)
Koeppe, Robert Allen
Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations were compared to those predicted from the expired air and venous blood samples. The glucose analog (18)F-3-deoxy-3-fluoro-D-glucose (3-FDG) was used for quantitating the membrane transport rate of glucose. The measured data indicated that the phosphorylation rate of 3-FDG was low enough to allow accurate estimation of the transport rate using a two compartment model.
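The core compartmental calculation is compact. The sketch below integrates dCt/dt = K1·Ca(t) - k2·Ct(t) for an assumed arterial input function and then recovers the rate constants by linear least squares on the discretised equation; all curves and values are illustrative.

```python
import numpy as np

dt, T = 0.1, 10.0                       # minutes
t = np.arange(0.0, T, dt)
Ca = 10.0 * t * np.exp(-t)              # assumed arterial input function
K1, k2 = 0.8, 0.4                       # "true" transport parameters

Ct = np.zeros_like(t)                   # tissue concentration (Euler scheme)
for i in range(1, t.size):
    Ct[i] = Ct[i - 1] + dt * (K1 * Ca[i - 1] - k2 * Ct[i - 1])

# Recover K1 and k2 by linear least squares on the discretised ODE.
X = np.column_stack([Ca[:-1], -Ct[:-1]])
K1_hat, k2_hat = np.linalg.lstsq(X, np.diff(Ct) / dt, rcond=None)[0]
print(f"K1={K1_hat:.2f}, k2={k2_hat:.2f}")
```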
A Physiologically Based Model of Orexinergic Stabilization of Sleep and Wake
Fulcher, Ben D.; Phillips, Andrew J. K.; Postnova, Svetlana; Robinson, Peter A.
2014-01-01
The orexinergic neurons of the lateral hypothalamus (Orx) are essential for regulating sleep-wake dynamics, and their loss causes narcolepsy, a disorder characterized by severe instability of sleep and wake states. However, the mechanisms through which Orx stabilize sleep and wake are not well understood. In this work, an explanation of the stabilizing effects of Orx is presented using a quantitative model of important physiological connections between Orx and the sleep-wake switch. In addition to Orx and the sleep-wake switch, which is composed of mutually inhibitory wake-active monoaminergic neurons in brainstem and hypothalamus (MA) and the sleep-active ventrolateral preoptic neurons of the hypothalamus (VLPO), the model also includes the circadian and homeostatic sleep drives. It is shown that Orx stabilizes prolonged waking episodes via its excitatory input to MA and by relaying a circadian input to MA, thus sustaining MA firing activity during the circadian day. During sleep, both Orx and MA are inhibited by the VLPO, and the subsequent reduction in Orx input to the MA indirectly stabilizes sustained sleep episodes. Simulating a loss of Orx, the model produces dynamics resembling narcolepsy, including frequent transitions between states, reduced waking arousal levels, and a normal daily amount of total sleep. The model predicts a change in sleep timing with differences in orexin levels, with higher orexin levels delaying the normal sleep episode, suggesting that individual differences in Orx signaling may contribute to chronotype. Dynamics resembling sleep inertia also emerge from the model as a gradual sleep-to-wake transition on a timescale that varies with that of Orx dynamics. The quantitative, physiologically based model developed in this work thus provides a new explanation of how Orx stabilizes prolonged episodes of sleep and wake, and makes a range of experimentally testable predictions, including a role for Orx in chronotype and sleep inertia. PMID:24651580
LFQuant: a label-free fast quantitative analysis tool for high-resolution LC-MS/MS proteomics data.
Zhang, Wei; Zhang, Jiyang; Xu, Changming; Li, Ning; Liu, Hui; Ma, Jie; Zhu, Yunping; Xie, Hongwei
2012-12-01
Database searching based methods for label-free quantification aim to reconstruct the peptide extracted ion chromatogram based on the identification information, which can limit the search space and thus make the data processing much faster. The random effect of the MS/MS sampling can be remedied by cross-assignment among different runs. Here, we present a new label-free fast quantitative analysis tool, LFQuant, for high-resolution LC-MS/MS proteomics data based on database searching. It is designed to accept raw data in two common formats (mzXML and Thermo RAW), and database search results from mainstream tools (MASCOT, SEQUEST, and X!Tandem), as input data. LFQuant can handle large-scale label-free data with fractionation such as SDS-PAGE and 2D LC. It is easy to use and provides handy user interfaces for data loading, parameter setting, quantitative analysis, and quantitative data visualization. LFQuant was compared with two common quantification software packages, MaxQuant and IDEAL-Q, on the replication data set and the UPS1 standard data set. The results show that LFQuant performs better than them in terms of both precision and accuracy, and consumes significantly less processing time. LFQuant is freely available under the GNU General Public License v3.0 at http://sourceforge.net/projects/lfquant/.
DIRECTIONAL CULTURAL CHANGE BY MODIFICATION AND REPLACEMENT OF MEMES
Cardoso, Gonçalo C.; Atwell, Jonathan W.
2017-01-01
Evolutionary approaches to culture remain contentious. A source of contention is that cultural mutation may be substantial and, if it drives cultural change, then current evolutionary models are not adequate. But we lack studies quantifying the contribution of mutations to directional cultural change. We estimated the contribution of one type of cultural mutations—modification of memes—to directional cultural change using an amenable study system: learned birdsongs in a species that recently entered an urban habitat. Many songbirds have higher minimum song frequency in cities, to alleviate masking by low-frequency noise. We estimated that the input of meme modifications in an urban songbird population explains about half the extent of the population divergence in song frequency. This contribution of cultural mutations is large, but insufficient to explain the entire population divergence. The remaining divergence is due to selection of memes or creation of new memes. We conclude that the input of cultural mutations can be quantitatively important, unlike in genetic evolution, and that it operates together with other mechanisms of cultural evolution. For this and other traits, in which the input of cultural mutations might be important, quantitative studies of cultural mutation are necessary to calibrate realistic models of cultural evolution. PMID:20722726
Real-time monitoring of volatile organic compounds using chemical ionization mass spectrometry
Mowry, Curtis Dale; Thornberg, Steven Michael
1999-01-01
A system for on-line quantitative monitoring of volatile organic compounds (VOCs) includes pressure reduction means for carrying a gaseous sample from a first location to a measuring input location maintained at a low pressure, the system utilizing active feedback to keep both the vapor flow and pressure to a chemical ionization mode mass spectrometer constant. A multiple input manifold for VOC and gas distribution permits a combination of calibration gases or samples to be applied to the spectrometer.
Compartmental and Data-Based Modeling of Cerebral Hemodynamics: Linear Analysis.
Henley, B C; Shin, D C; Zhang, R; Marmarelis, V Z
Compartmental and data-based modeling of cerebral hemodynamics are alternative approaches that utilize distinct model forms and have been employed in the quantitative study of cerebral hemodynamics. This paper examines the relation between a compartmental equivalent-circuit and a data-based input-output model of dynamic cerebral autoregulation (DCA) and CO2-vasomotor reactivity (DVR). The compartmental model is constructed as an equivalent-circuit utilizing putative first principles and previously proposed hypothesis-based models. The linear input-output dynamics of this compartmental model are compared with data-based estimates of the DCA-DVR process. This comparative study indicates that there are some qualitative similarities between the two-input compartmental model and experimental results.
Bilinearity in Spatiotemporal Integration of Synaptic Inputs
Li, Songting; Liu, Nan; Zhang, Xiao-hui; Zhou, Douglas; Cai, David
2014-01-01
Neurons process information via integration of synaptic inputs from dendrites. Many experimental results demonstrate that dendritic integration can be highly nonlinear, yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically. Based on asymptotic analysis of a two-compartment passive cable model, given a pair of time-dependent synaptic conductance inputs, we derive a bilinear spatiotemporal dendritic integration rule. The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately, plus a third, additional bilinear term proportional to their product. The rule is valid for a pair of synaptic inputs of all types, including excitation-inhibition, excitation-excitation, and inhibition-inhibition. In addition, the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations. The proportionality coefficient is demonstrated to be nearly independent of the input strengths but dependent on input times and input locations. This rule is then verified through simulation of a realistic pyramidal neuron model and in electrophysiological experiments of rat hippocampal CA1 neurons. The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs. The integration of multiple inputs can be decomposed into the sum of all possible pairwise integration, where each paired integration obeys the bilinear rule. This decomposition leads to a graph representation of dendritic integration, which can be viewed as functionally sparse. PMID:25521832
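The rule is easy to probe numerically. The sketch below drives a crude two-compartment passive model with an excitatory and an inhibitory conductance transient, separately and together, and estimates a bilinear coefficient from the residual; every parameter is an invented stand-in for the paper's calibrated model.

```python
import numpy as np

def soma_response(g1=0.0, g2=0.0, T=0.2, dt=1e-5):
    """Somatic potential deviation of a passive two-compartment model given
    transient synaptic conductances at the dendrite (g1, excitatory) and
    soma (g2, inhibitory). All parameters are illustrative."""
    vd = vs = 0.0
    gc, tau = 50.0, 0.02                       # coupling, membrane time constant
    out = np.zeros(int(T / dt))
    for i in range(out.size):
        ge = g1 * np.exp(-i * dt / 0.005)      # decaying conductance transients
        gi = g2 * np.exp(-i * dt / 0.005)
        vd += dt * (-vd / tau + ge * (70.0 - vd) + gc * (vs - vd))
        vs += dt * (-vs / tau + gi * (-10.0 - vs) + gc * (vd - vs))
        out[i] = vs
    return out

vE = soma_response(g1=20.0)                    # excitation alone
vI = soma_response(g2=20.0)                    # inhibition alone
vEI = soma_response(g1=20.0, g2=20.0)          # both together
# Bilinear rule: vEI ~ vE + vI + k * vE * vI for a time-dependent coefficient;
# estimate k at the moment of peak nonlinearity as a rough check.
resid = vEI - vE - vI
i = np.argmax(np.abs(resid))
print("k ~", resid[i] / (vE[i] * vI[i]))
```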
Highly linear, sensitive analog-to-digital converter
NASA Technical Reports Server (NTRS)
Cox, J.; Finley, W. R.
1969-01-01
Analog-to-digital converter converts 10 volt full scale input signal into 13 bit digital output. Advantages include high sensitivity, linearity, low quantizing error, high resistance to mechanical shock and vibration loads, and temporary data storage capabilities.
A Monte-Carlo game theoretic approach for Multi-Criteria Decision Making under uncertainty
NASA Astrophysics Data System (ADS)
Madani, Kaveh; Lund, Jay R.
2011-05-01
Game theory provides a useful framework for studying Multi-Criteria Decision Making problems. This paper suggests modeling Multi-Criteria Decision Making problems as strategic games and solving them using non-cooperative game theory concepts. The suggested method can be used to prescribe non-dominated solutions and also to predict the outcome of a decision making problem. Non-cooperative stability definitions for solving the games allow consideration of non-cooperative behaviors, often neglected by other methods which assume perfect cooperation among decision makers. To deal with the uncertainty in input variables, a Monte-Carlo Game Theory (MCGT) approach is suggested which maps the stochastic problem into many deterministic strategic games. The games are solved using non-cooperative stability definitions, and the results include the possible effects of uncertainty in input variables on outcomes. The method can handle multi-criteria multi-decision-maker problems with uncertainty. The suggested method does not require criteria weighting, the development of a compound decision objective, or accurate quantitative (cardinal) information, as it simplifies the decision analysis by solving problems based on qualitative (ordinal) information, reducing the computational burden substantially. The MCGT method is applied to analyze California's Sacramento-San Joaquin Delta problem. The suggested method provides insights, identifies non-dominated alternatives, and predicts likely decision outcomes.
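A minimal MCGT loop looks like this: draw payoff matrices from assumed uncertainty distributions, solve each sampled deterministic game for its pure-strategy Nash equilibria (one simple non-cooperative stability definition), and tally outcome frequencies. The 2x2 game and payoff distributions below are placeholders, not the Delta problem.

```python
import numpy as np
from itertools import product

def pure_nash(A, B):
    """Pure-strategy Nash equilibria of a 2x2 bimatrix game
    (A: row player's payoffs, B: column player's payoffs)."""
    eq = []
    for i, j in product(range(2), range(2)):
        if A[i, j] >= A[1 - i, j] and B[i, j] >= B[i, 1 - j]:
            eq.append((i, j))
    return eq

rng = np.random.default_rng(0)
counts = np.zeros((2, 2))
for _ in range(10_000):
    A = rng.normal([[3, 0], [5, 1]], 0.5)   # row player's uncertain payoffs
    B = rng.normal([[3, 5], [0, 1]], 0.5)   # column player's uncertain payoffs
    for i, j in pure_nash(A, B):
        counts[i, j] += 1
print(counts / counts.sum())                # frequency of each stable outcome
```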
NASA Astrophysics Data System (ADS)
Gadotti, Dimitri; Timer Team
2017-07-01
We report the serendipitous discovery of ongoing stellar feedback in the star-bursting nuclear ring of a nearby spiral galaxy, as part of the TIMER survey with MUSE. Combining MUSE and ALMA data, we show bubbles of ionised gas expanding from the ring and shocking with the cold ISM. We demonstrate how much energy is being released into the ISM corresponding to the observed star formation, how fast the heated ISM is expanding from the centre, and provide a physical description of the shocks happening at the interface between the heated and cold phases of the ISM. Further, we quantitatively show how the exchange of energy between the two phases impacts the dynamics of the cold ISM. Finally, applying a model to the spatially resolved spectral properties of this system, we find that about 60% of the energy input into the ISM is produced via the direct transfer of momentum from photons scattering onto dust grains, and 27% by mass loss in supernova explosions. The remaining energy input is produced via photoionisation heating (~12%) and stellar winds (~1%). These analyses provide invaluable measurements against which our theoretical understanding of stellar feedback can be compared, particularly state-of-the-art simulations that aim at reproducing star formation and stellar feedback in galaxies.
Economic valuation of landslide damage in hilly regions: a case study from Flanders, Belgium.
Vranken, Liesbet; Van Turnhout, Pieter; Van Den Eeckhaut, Miet; Vandekerckhove, Liesbeth; Poesen, Jean
2013-03-01
Several regions around the globe are at risk of incurring damage from landslides, but only a few studies have concentrated on a quantitative estimate of the overall damage caused by landslides at a regional scale. This study therefore starts with a quantitative economic assessment of the direct and indirect damage caused by landslides in a 2,910 km² study area located west of Brussels, a low-relief region susceptible to landslides. Based on focus interviews as well as on semi-structured interviews with homeowners, civil servants and the owners and providers of lifelines such as electricity and sewage, a quantitative damage assessment is provided. For private properties (houses, forest and pasture land) we estimate the real estate and production value losses for different damage scenarios, while for public infrastructure the costs of measures to repair and prevent landslide-induced damage are estimated. In addition, the increase in amenity value of forests and grasslands due to the occurrence of landslides is also calculated. The study illustrates that a minority of land (only 2.3%) within the study area is used for dwellings, roads and railway lines, but that these land use types are responsible for the vast majority of the economic damage due to the occurrence of landslides. The annual cost of direct damage due to landsliding amounts to 688,148 €/year, out of which 550,740 €/year is direct damage to houses, while the annual indirect damage amounts to 3,020,049 €/year, out of which 2,007,375 €/year is indirect damage to real estate. Next, the study illustrates that the increase of the amenity value of forests and grasslands outweighs the production value loss. As such, the study not only provides quantitative input data for the estimation of future risks, but also important information for government officials, as it clearly identifies the costs associated with certain land use types in landslide areas.
Auroral photometry from the atmosphere Explorer satellite
NASA Technical Reports Server (NTRS)
Rees, M. H.; Abreu, V. J.
1984-01-01
Attention is given to the ability of remote sensing from space to yield quantitative auroral and ionospheric parameters, in view of the auroral measurements made during two passes of the Explorer C satellite over the Poker Flat Optical Observatory and the Chatanika Radar Facility. The emission rate of the N2(+) 4278 A band computed from intensity measurements of energetic auroral electrons has tracked the same spectral feature that was measured remotely from the satellite over two decades of intensity, providing a stringent test for the measurement of atmospheric scattering effects. It also verifies the absolute intensity with respect to ground-based photometric measurements. In situ satellite measurements of ion densities and ground-based electron density profile radar measurements provide a consistent picture of the ionospheric response to auroral input, while also predicting the observed optical emission rate.
Predicting mesoscale microstructural evolution in electron beam welding
Rodgers, Theron M.; Madison, Jonathan D.; Tikare, Veena; ...
2016-03-16
Using the kinetic Monte Carlo simulator, Stochastic Parallel PARticle Kinetic Simulator, from Sandia National Laboratories, a user routine has been developed to simulate mesoscale predictions of a grain structure near a moving heat source. Here, we demonstrate the use of this user routine to produce voxelized, synthetic, three-dimensional microstructures for electron-beam welding by comparing them with experimentally produced microstructures. When simulation input parameters are matched to experimental process parameters, qualitative and quantitative agreement for both grain size and grain morphology are achieved. The method is capable of simulating both single- and multipass welds. As a result, the simulations provide an opportunity for not only accelerated design but also the integration of simulation and experiments in design, such that simulations can receive parameter bounds from experiments and, in turn, provide predictions of a resultant microstructure.
Schaeffel, Frank; Simon, Perikles; Feldkaemper, Marita; Ohngemach, Sibylle; Williams, Robert W
2003-09-01
Experiments in animal models of myopia have emphasised the importance of visual input in emmetropisation but it is also evident that the development of human myopia is influenced to some degree by genetic factors. Molecular genetic approaches can help to identify both the genes involved in the control of ocular development and the potential targets for pharmacological intervention. This review covers a variety of techniques that are being used to study the molecular biology of myopia. In the first part, we describe techniques used to analyse visually induced changes in gene expression: Northern Blot, polymerase chain reaction (PCR) and real-time PCR to obtain semi-quantitative and quantitative measures of changes in transcription level of a known gene, differential display reverse transcription PCR (DD-RT-PCR) to search for new genes that are controlled by visual input, rapid amplification of 5' cDNA (5'-RACE) to extend the 5' end of sequences that are regulated by visual input, in situ hybridisation to localise the expression of a given gene in a tissue and oligonucleotide microarray assays to simultaneously test visually induced changes in thousands of transcripts in single experiments. In the second part, we describe techniques that are used to localise regions in the genome that contain genes that are involved in the control of eye growth and refractive errors in mice and humans. These include quantitative trait loci (QTL) mapping, exploiting experimental test crosses of mice and transmission disequilibrium tests (TDT) in humans to find chromosomal intervals that harbour genes involved in myopia development. We review several successful applications of this battery of techniques in myopia research.
Combined RT-qPCR of mRNA and microRNA Targets within One Fluidigm Integrated Fluidic Circuit.
Baldwin, Don A; Horan, Annamarie D; Hesketh, Patrick J; Mehta, Samir
2016-07-01
The ability to profile expression levels of a large number of mRNAs and microRNAs (miRNAs) within the same sample, using a single assay method, would facilitate investigations of miRNA effects on mRNA abundance and streamline biomarker screening across multiple RNA classes. A protocol is described for reverse transcription of long RNA and miRNA targets, followed by preassay amplification of the pooled cDNAs and quantitative PCR (qPCR) detection for a mixed panel of candidate RNA biomarkers. The method provides flexibility for designing custom target panels, is robust over a range of input RNA amounts, and demonstrated a high assay success rate.
Wind Shear/Turbulence Inputs to Flight Simulation and Systems Certification
NASA Technical Reports Server (NTRS)
Bowles, Roland L. (Editor); Frost, Walter (Editor)
1987-01-01
The purpose of the workshop was to provide a forum for industry, universities, and government to assess current status and likely future requirements for application of flight simulators to aviation safety concerns and system certification issues associated with wind shear and atmospheric turbulence. Research findings presented included characterization of wind shear and turbulence hazards based on modeling efforts and quantitative results obtained from field measurement programs. Future research thrusts needed to maximally exploit flight simulators for aviation safety application involving wind shear and turbulence were identified. The conference contained sessions on: Existing wind shear data and simulator implementation initiatives; Invited papers regarding wind shear and turbulence simulation requirements; and Committee working session reports.
Analysis of cellular signal transduction from an information theoretic approach.
Uda, Shinsuke; Kuroda, Shinya
2016-03-01
Signal transduction processes the information of various cellular functions, including cell proliferation, differentiation, and death. The information for controlling cell fate is transmitted by concentrations of cellular signaling molecules. However, how much information is transmitted in signaling pathways has thus far not been investigated. Shannon's information theory paves the way to quantitatively analyze information transmission in signaling pathways. The theory has recently been applied to signal transduction, and mutual information of signal transduction has been determined to be a measure of information transmission. We review this work and provide an overview of how signal transduction transmits informational input and exerts biological output.
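Mutual information between pathway input and output can be estimated directly from paired samples with a histogram plug-in estimator, as sketched below on a toy additive-noise channel; the channel and noise level are invented for illustration.

```python
import numpy as np

def mutual_information(x, y, bins=20):
    """Histogram estimate of the mutual information I(X;Y) in bits between
    paired samples -- the quantity used to measure information transmission
    through a signaling pathway."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Toy "pathway": output = input + noise; the noise level caps the information
# the downstream readout can receive about the ligand concentration.
rng = np.random.default_rng(0)
ligand = rng.uniform(0, 1, 100_000)
response = ligand + 0.2 * rng.normal(size=ligand.size)
print(f"I(input; output) ~ {mutual_information(ligand, response):.2f} bits")
```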
Probabilistic Mass Growth Uncertainties
NASA Technical Reports Server (NTRS)
Plumer, Eric; Elliott, Darren
2013-01-01
Mass has been widely used as a variable input parameter for Cost Estimating Relationships (CER) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, hence adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBE) of the masses of space instruments as well as spacecraft, for both earth-orbiting and deep space missions, at various stages of a project's lifecycle. This paper also discusses the long-term strategy of NASA Headquarters of publishing similar results, using a variety of cost-driving metrics, on an annual basis. This paper provides quantitative results that show decreasing mass growth uncertainties as mass estimate maturity increases. The analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.
Miranda, Alcides Silva de; Carvalho, André Luis Bonifácio de; Cavalcante, Caio Garcia Correia Sá
2012-04-01
What do the leaders of the Municipal Health Service (SMS) report and say about the systematic monitoring and evaluation of their own government management? The purpose of this paper is to provide input for the formulation of plausible hypotheses about such institutional processes and practices, based on information produced in an exploratory study. This is a multiple case study with quantitative and qualitative analysis of answers to a semi-structured questionnaire given to government officials of a systematic sample of 577 Municipal Health Services (10.4% of the total in Brazil), selected and stratified by proportional distribution among states and by the population size of municipalities. In general, the study shows that approximately half of the respondents use information from health monitoring and evaluation to orient decision-making, planning and other management approaches. This proportion tends to decrease in cities with smaller populations. There are specific and significant gaps in financial, personnel and crisis management. The evidence suggests that these processes are still at an early stage.
Finite Element-Based Mechanical Assessment of Bone Quality on the Basis of In Vivo Images.
Pahr, Dieter H; Zysset, Philippe K
2016-12-01
Beyond bone mineral density (BMD), bone quality designates the mechanical integrity of bone tissue. In vivo images based on X-ray attenuation, such as CT reconstructions, provide size, shape, and local BMD distribution and may be exploited as input for finite element analysis (FEA) to assess bone fragility. Further key input parameters of FEA are the material properties of bone tissue. This review discusses the main determinants of bone mechanical properties and emphasizes the added value, as well as the important assumptions underlying finite element analysis. Bone tissue is a sophisticated, multiscale composite material that undergoes remodeling but exhibits a rather narrow band of tissue mineralization. Mechanically, bone tissue behaves elastically under physiologic loads and yields by cracking beyond critical strain levels. Through adequate cell-orchestrated modeling, trabecular bone tunes its mechanical properties by volume fraction and fabric. With proper calibration, these mechanical properties may be incorporated in quantitative CT-based finite element analysis that has been validated extensively with ex vivo experiments and has been applied increasingly in clinical trials to assess treatment efficacy against osteoporosis.
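A minimal sketch of how a calibrated density-to-modulus mapping feeds CT-based FEA. Both the HU-to-BMD calibration and the power-law coefficients below are placeholders of the commonly used general form E = a * rho**b; they are not values endorsed by this review.

```python
import numpy as np

def hounsfield_to_bmd(hu, slope=0.0008, intercept=0.0):
    """Calibrate CT numbers (HU) to apparent BMD in g/cm^3. Slope and
    intercept come from a calibration phantom; values here are
    placeholders, not from the review."""
    return slope * hu + intercept

def bmd_to_modulus(rho, a=6850.0, b=1.49):
    """Power-law density-to-modulus mapping, E = a * rho**b (MPa).
    Coefficients of this general form are calibration-dependent."""
    return a * np.power(rho, b)

hu = np.array([200.0, 600.0, 1200.0])      # example voxel values
E = bmd_to_modulus(hounsfield_to_bmd(hu))
print(E)  # element-wise Young's moduli to assign in the FE mesh
```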
TOND1 confers tolerance to nitrogen deficiency in rice
Zhang, Yangjun; Tan, Lubin; Zhu, Zuofeng; Yuan, Lixing; Xie, Daoxin; Sun, Chuanqing
2015-01-01
Nitrogen (N), the most important mineral nutrient for plants, is critical to agricultural production systems. N deficiency severely affects rice growth and decreases rice yields. However, excessive use of N fertilizer has caused severe pollution of agricultural and ecological environments. The necessity of breeding crops that require lower inputs of N fertilizer has been recognized. Here we identified a major quantitative trait locus on chromosome 12, Tolerance Of Nitrogen Deficiency 1 (TOND1), that confers tolerance to N deficiency in the indica cultivar Teqing. Sequence verification of 75 indica and 75 japonica cultivars from 18 countries and regions demonstrated that only 27.3% of the cultivars (41 indica cultivars) contain TOND1, whereas 72.7%, including the remaining 34 indica cultivars and all 75 japonica cultivars, do not harbor the TOND1 allele. Over-expression of TOND1 increased tolerance to N deficiency in TOND1-deficient rice cultivars. The identification of TOND1 provides a molecular basis for breeding rice varieties with improved grain yield despite decreased input of N fertilizers. PMID:25439309
A distributed analysis of human impact on global sediment dynamics
NASA Astrophysics Data System (ADS)
Cohen, S.; Kettner, A.; Syvitski, J. P.
2012-12-01
Understanding riverine sediment dynamics is an important undertaking both for socially relevant issues such as agriculture, water security and infrastructure management, and for scientific analysis of landscapes, river ecology, oceanography and other disciplines. Providing good quantitative and predictive tools is therefore timely, particularly in light of predicted climate and land-use changes. Ever-increasing human activity during the Anthropocene has affected sediment dynamics in two major ways: (1) an increase in hillslope erosion due to agriculture, deforestation and landscape engineering, and (2) trapping of sediment in dams and other man-made reservoirs. The intensity of, and interplay between, these man-made factors vary widely across the globe and through time and are therefore hard to predict, so the use of sophisticated numerical models is warranted. Here we use a distributed global riverine sediment flux and water discharge model (WBMsed) to compare pristine (without human input) and disturbed (with human input) simulations. Using these 50-year simulations, we show and discuss the complex spatial and temporal patterns of human effects on riverine sediment flux and water discharge.
Nielles-Vallespin, Sonia; Kellman, Peter; Hsu, Li-Yueh; Arai, Andrew E
2015-02-17
A low excitation flip angle (α < 10°) steady-state free precession (SSFP) proton-density (PD) reference scan is often used to estimate the B1-field inhomogeneity for surface coil intensity correction (SCIC) of the saturation-recovery (SR) prepared high flip angle (α = 40-50°) SSFP myocardial perfusion images. The different SSFP off-resonance response for these two flip angles might lead to suboptimal SCIC when there is a spatial variation in the background B0-field. The low flip angle SSFP-PD frames are more prone to parallel imaging banding artifacts in the presence of off-resonance. The use of FLASH-PD frames would eliminate both the banding artifacts and the uneven frequency response in the presence of off-resonance in the surface coil inhomogeneity estimate and improve homogeneity of semi-quantitative and quantitative perfusion measurements. B0-field maps, SSFP and FLASH-PD frames were acquired in 10 healthy volunteers to analyze the SSFP off-resonance response. Furthermore, perfusion scans preceded by both FLASH and SSFP-PD frames from 10 patients with no myocardial infarction were analyzed semi-quantitatively and quantitatively (rest n = 10 and stress n = 1). Intra-subject myocardial blood flow (MBF) coefficient of variation (CoV) over the whole left ventricle (LV), as well as intra-subject peak contrast (CE) and upslope (SLP) standard deviation (SD) over 6 LV sectors were investigated. In the 6 out of 10 cases where artifacts were apparent in the LV ROI of the SSFP-PD images, all three variability metrics were statistically significantly lower when using the FLASH-PD frames as input for the SCIC (CoV_MBF-FLASH = 0.3 ± 0.1, CoV_MBF-SSFP = 0.4 ± 0.1, p = 0.03; SD_CE-FLASH = 10 ± 2, SD_CE-SSFP = 32 ± 7, p = 0.01; SD_SLP-FLASH = 0.02 ± 0.01, SD_SLP-SSFP = 0.06 ± 0.02, p = 0.03). Example rest and stress data sets from the patient pool demonstrate that the low flip angle SSFP protocol can exhibit severe ghosting artifacts originating from off-resonance banding artifacts at the edges of the field of view that parallel imaging is not able to unfold. These artifacts lead to errors in the quantitative perfusion maps and the semi-quantitative perfusion indexes, such as false positives. It is shown that this can be avoided by using FLASH-PD frames as input for the SCIC. FLASH-PD images are recommended as input for SCIC of SSFP perfusion images instead of low flip angle SSFP-PD images.
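The SCIC idea can be sketched as follows, under the simplifying assumption that a heavily smoothed PD frame approximates the coil sensitivity profile; the function name and the smoothing sigma are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scic(perfusion_series, pd_frame, sigma=12.0, eps=1e-6):
    """Surface coil intensity correction (minimal sketch).

    perfusion_series: (t, y, x) saturation-recovery images
    pd_frame:         (y, x) proton-density reference image
    A heavily blurred PD frame approximates the smooth coil
    sensitivity profile; dividing by it flattens the shading."""
    sensitivity = gaussian_filter(pd_frame.astype(float), sigma)
    sensitivity /= sensitivity.max()        # unitless profile, peak = 1
    return perfusion_series / (sensitivity + eps)

# Hypothetical use: corrected = scic(frames, flash_pd)
```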
Brenner, Stephan; Muula, Adamson S; Robyn, Paul Jacob; Bärnighausen, Till; Sarker, Malabika; Mathanga, Don P; Bossert, Thomas; De Allegri, Manuela
2014-04-22
In this article we present a study design to evaluate the causal impact of providing supply-side performance-based financing incentives in combination with a demand-side cash transfer component on equitable access to and quality of maternal and neonatal healthcare services. This intervention is introduced to selected emergency obstetric care facilities and catchment area populations in four districts in Malawi. We here describe and discuss our study protocol with regard to the research aims, the local implementation context, and our rationale for selecting a mixed methods explanatory design with a quasi-experimental quantitative component. The quantitative research component consists of a controlled pre- and post-test design with multiple post-test measurements. This allows us to quantitatively measure 'equitable access to healthcare services' at the community level and 'healthcare quality' at the health facility level. Guided by a theoretical framework of causal relationships, we determined a number of input, process, and output indicators to evaluate both intended and unintended effects of the intervention. Overall causal impact estimates will result from a difference-in-differences analysis comparing selected indicators across intervention and control facilities/catchment populations over time. To further explain the heterogeneity of quantitatively observed effects and to understand the experiential dimensions of financial incentives on clients and providers, we designed a qualitative component in line with the overall explanatory mixed methods approach. This component consists of in-depth interviews and focus group discussions with providers, service users, non-users, and policy stakeholders. In this explanatory design, a comprehensive understanding of expected and unexpected effects of the intervention on both access and quality will emerge through careful triangulation at two levels: across multiple quantitative elements and across quantitative and qualitative elements. Combining a traditional quasi-experimental controlled pre- and post-test design with an explanatory mixed methods model permits an additional assessment of organizational and behavioral changes affecting complex processes. Through this approach, our design will not only create robust evidence measures for the outcomes of interest, but will also generate insights into how and why the investigated interventions produce certain intended and unintended effects, allowing for a more in-depth evaluation.
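A minimal sketch of the difference-in-differences estimator at the heart of the quantitative component, applied to an invented toy table; the column names and scores are hypothetical.

```python
import pandas as pd

def difference_in_differences(df):
    """DiD estimate from a tidy table with columns:
    'group' in {'intervention', 'control'},
    'period' in {'pre', 'post'}, and a numeric 'outcome'."""
    m = df.groupby(["group", "period"])["outcome"].mean()
    return ((m["intervention", "post"] - m["intervention", "pre"])
            - (m["control", "post"] - m["control", "pre"]))

# Hypothetical facility-level quality scores:
df = pd.DataFrame({
    "group":   ["intervention"] * 4 + ["control"] * 4,
    "period":  ["pre", "pre", "post", "post"] * 2,
    "outcome": [52, 48, 66, 62, 50, 54, 55, 57],
})
print(difference_in_differences(df))  # change beyond the secular trend
```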
Clinical application of a light-pen computer system for quantitative angiography
NASA Technical Reports Server (NTRS)
Alderman, E. L.
1975-01-01
The important features of a clinical system for quantitative angiography are examined. The human interface for data input, whether an electrostatic pen, sonic pen, or light-pen, must be engineered to optimize the quality of margin definition. The computer programs that the technician uses for data entry and computation of ventriculographic measurements must be convenient to use on a routine basis in a laboratory performing multiple studies per day. The method used for magnification correction must be continuously monitored.
Automated Quantitative Nuclear Cardiology Methods
Motwani, Manish; Berman, Daniel S.; Germano, Guido; Slomka, Piotr J.
2016-01-01
Quantitative analysis of SPECT and PET has become a major part of nuclear cardiology practice. Current software tools can automatically segment the left ventricle, quantify function, establish myocardial perfusion maps and estimate global and local measures of stress/rest perfusion – all with minimal user input. State-of-the-art automated techniques have been shown to offer high diagnostic accuracy for detecting coronary artery disease, as well as predict prognostic outcomes. This chapter briefly reviews these techniques, highlights several challenges and discusses the latest developments. PMID:26590779
Ludlow, Andrew T.; Robin, Jerome D.; Sayed, Mohammed; Litterst, Claudia M.; Shelton, Dawne N.; Shay, Jerry W.; Wright, Woodring E.
2014-01-01
The telomere repeat amplification protocol (TRAP) for the human reverse transcriptase, telomerase, is a PCR-based assay developed two decades ago and is still used for routine determination of telomerase activity. The TRAP assay can only reproducibly detect ∼2-fold differences and is only quantitative when compared to internal standards and reference cell lines. The method generally involves laborious radioactive gel electrophoresis and is not conducive to high-throughput analyses. Recently, droplet digital PCR (ddPCR) technologies have become available that allow for absolute quantification of input deoxyribonucleic acid molecules following PCR. We describe the reproducibility and provide several examples of a droplet digital TRAP (ddTRAP) assay for telomerase activity, including quantitation of telomerase activity in single cells, telomerase activity across several common telomerase-positive cancer cell lines, and in human primary peripheral blood mononuclear cells following mitogen stimulation. Adaptation of the TRAP assay to digital format allows accurate and reproducible quantification of the number of telomerase-extended products (i.e. telomerase activity; 57.8 ± 7.5) in a single HeLa cell. The tools developed in this study allow changes in telomerase enzyme activity to be monitored on a single cell basis and may have utility in designing novel therapeutic approaches that target telomerase. PMID:24861623
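The absolute quantification step in ddPCR follows from Poisson statistics: with a fraction p of positive droplets, each droplet contains on average lambda = -ln(1 - p) target molecules. A short sketch, with a typical (instrument-dependent) nominal droplet volume as an assumed value:

```python
import numpy as np

def ddpcr_copies(n_positive, n_total, droplet_volume_nl=0.85):
    """Absolute quantification from droplet counts via Poisson statistics.

    lambda = -ln(1 - p) is the mean number of target molecules per
    droplet when a fraction p of droplets is positive. The droplet
    volume is instrument-specific; 0.85 nL is a typical nominal value."""
    p = n_positive / n_total
    lam = -np.log(1.0 - p)                    # copies per droplet
    copies_per_ul = lam / (droplet_volume_nl * 1e-3)
    return lam, copies_per_ul, lam * n_total  # total input copies

print(ddpcr_copies(n_positive=4500, n_total=15000))
```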
Gandler, W; Shapiro, H
1990-01-01
Logarithmic amplifiers (log amps), which produce an output signal proportional to the logarithm of the input signal, are widely used in cytometry for measurements of parameters that vary over a wide dynamic range, e.g., cell surface immunofluorescence. Existing log amp circuits all deviate to some extent from ideal performance with respect to dynamic range and fidelity to the logarithmic curve; accuracy in quantitative analysis using log amps therefore requires that log amps be individually calibrated. However, accuracy and precision may be limited by photon statistics and system noise when very low level input signals are encountered.
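A minimal sketch of the per-unit calibration this implies: fit the ideal response V_out = slope * log10(V_in) + offset over several decades and inspect the residuals, which expose the deviation from the logarithmic curve. All voltages below are invented.

```python
import numpy as np

# Hypothetical calibration points: known input amplitudes spanning
# four decades and the output voltages they produce.
v_in  = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0])   # volts
v_out = np.array([0.05, 1.02, 2.01, 2.97, 4.00])  # volts (measured)

# Ideal response: v_out = slope * log10(v_in) + offset.
slope, offset = np.polyfit(np.log10(v_in), v_out, 1)
print(slope, offset)          # ~1 V per decade for these numbers

# Residuals quantify deviation from the ideal logarithmic curve,
# which is what individual calibration must correct.
print(v_out - (slope * np.log10(v_in) + offset))
```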
Directional cultural change by modification and replacement of memes.
Cardoso, Gonçalo C; Atwell, Jonathan W
2011-01-01
Evolutionary approaches to culture remain contentious. A source of contention is that cultural mutation may be substantial and, if it drives cultural change, then current evolutionary models are not adequate. But we lack studies quantifying the contribution of mutations to directional cultural change. We estimated the contribution of one type of cultural mutations--modification of memes--to directional cultural change using an amenable study system: learned birdsongs in a species that recently entered an urban habitat. Many songbirds have higher minimum song frequency in cities, to alleviate masking by low-frequency noise. We estimated that the input of meme modifications in an urban songbird population explains about half the extent of the population divergence in song frequency. This contribution of cultural mutations is large, but insufficient to explain the entire population divergence. The remaining divergence is due to selection of memes or creation of new memes. We conclude that the input of cultural mutations can be quantitatively important, unlike in genetic evolution, and that it operates together with other mechanisms of cultural evolution. For this and other traits, in which the input of cultural mutations might be important, quantitative studies of cultural mutation are necessary to calibrate realistic models of cultural evolution. © 2010 The Author(s). Evolution © 2010 The Society for the Study of Evolution.
Elliott, Jonathan T.; Samkoe, Kimberley S.; Davis, Scott C.; Gunn, Jason R.; Paulsen, Keith D.; Roberts, David W.; Pogue, Brian W.
2017-01-01
Receptor concentration imaging (RCI) with targeted-untargeted optical dye pairs has enabled in vivo immunohistochemistry analysis in preclinical subcutaneous tumors. Successful application of RCI to fluorescence guided resection (FGR), so that quantitative molecular imaging of tumor-specific receptors could be performed in situ, would have a high impact. However, assumptions of pharmacokinetics, permeability and retention, as well as the lack of a suitable reference region, limit the potential for RCI in human neurosurgery. In this study, an arterial input graphic analysis (AIGA) method is presented which is enabled by independent component analysis (ICA). The percent difference in arterial concentration between the image-derived arterial input function (AIF_ICA) and that obtained by an invasive method (ICA_CAR) was 2.0 ± 2.7% during the first hour of circulation of a targeted-untargeted dye pair in mice. Estimates of distribution volume and receptor concentration in tumor-bearing mice (n = 5) recovered using the AIGA technique did not differ significantly from values obtained using invasive AIF measurements (p = 0.12). The AIGA method, enabled by the subject-specific AIF_ICA, was also applied in a rat orthotopic model of U-251 glioblastoma to obtain the first reported receptor concentration and distribution volume maps during open craniotomy. PMID:26349671
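A heavily hedged sketch of the ICA step: unmix pixel time courses into independent temporal components and take the earliest-peaking one as the arterial signal. The component-selection heuristic below is ours for illustration, not the published AIGA procedure, and the scikit-learn FastICA implementation is an assumed substitute for whatever the authors used.

```python
import numpy as np
from sklearn.decomposition import FastICA

def image_derived_aif(dynamic, n_components=4, seed=0):
    """Unmix dye time courses into independent temporal components
    and return the one peaking earliest as the arterial signal.

    dynamic: (n_timepoints, n_pixels) array of fluorescence curves."""
    ica = FastICA(n_components=n_components, random_state=seed)
    sources = ica.fit_transform(dynamic)     # (n_timepoints, n_components)
    # Flip signs so each component's largest excursion is positive.
    peaks = np.abs(sources).argmax(axis=0)
    signs = np.sign(sources[peaks, np.arange(sources.shape[1])])
    sources = sources * signs
    earliest = sources.argmax(axis=0).argmin()  # component peaking first
    return sources[:, earliest]
```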
NASA Astrophysics Data System (ADS)
Brown, Robert Douglas
Several components of a system for the quantitative application of climatic statistics to landscape planning and design (CLIMACS) have been developed. One component model (MICROSIM) estimated the microclimate at the top of a remote crop using physically based models and inputs of weather station data. Temperatures at the top of unstressed, uniform crops on flat terrain within 1600 m of a recording weather station were estimated to within 1.0 °C 96% of the time for a corn crop and 92% of the time for a soybean crop. Crop-top winds were estimated to within 0.4 m/s 92% of the time for corn and 100% of the time for soybean. This is of sufficient accuracy for application in landscape planning and design models. A physically based model (COMFA) was developed for the determination of outdoor human thermal comfort from microclimate inputs. Estimated and measured comfort levels in a wide range of environments agreed with a correlation coefficient of r = 0.91. Using these components, the CLIMACS concept has been applied to a typical planning example. Microclimate data were generated from weather station information using MICROSIM, then input to COMFA and to a house energy consumption model called HOTCAN to derive quantitative climatic justification for design decisions.
Kujur, Alice; Saxena, Maneesha S; Bajaj, Deepak; Laxmi; Parida, Swarup K
2013-12-01
Enormous population growth, climate change and global warming are now considered major threats to agriculture and the world's food security. To improve the productivity and sustainability of agriculture, the development of high-yielding, durable abiotic- and biotic-stress-tolerant cultivars and/or climate-resilient crops is essential. Hence, understanding the molecular mechanisms and dissecting the complex quantitative yield and stress tolerance traits are prime objectives in current agricultural biotechnology research. In recent years, tremendous progress has been made in plant genomics and molecular breeding research pertaining to conventional and next-generation whole-genome, transcriptome and epigenome sequencing efforts, the generation of huge genomic, transcriptomic and epigenomic resources, and the development of modern genomics-assisted breeding approaches in diverse crop genotypes with contrasting yield and abiotic stress tolerance traits. Unfortunately, the detailed molecular mechanisms and gene regulatory networks controlling such complex quantitative traits are not yet well understood in crop plants. Therefore, we propose an integrated strategy involving the enormous and diverse traditional and modern -omics (structural, functional, comparative and epigenomics) approaches/resources and genomics-assisted breeding methods available, which agricultural biotechnologists can adopt/utilize to dissect and decode the molecular and gene regulatory networks involved in the complex quantitative yield and stress tolerance traits of crop plants. This would provide clues and much-needed inputs for rapid selection of novel functionally relevant molecular tags regulating such complex traits, to expedite traditional and modern marker-assisted genetic enhancement studies in target crop species for developing high-yielding stress-tolerant varieties.
Quantitative assessment of Pb sources in isotopic mixtures using a Bayesian mixing model.
Longman, Jack; Veres, Daniel; Ersek, Vasile; Phillips, Donald L; Chauvel, Catherine; Tamas, Calin G
2018-04-18
Lead (Pb) isotopes provide valuable insights into the origin of Pb within a sample, typically allowing for reliable fingerprinting of its source. This is useful for a variety of applications, from tracing sources of pollution-related Pb to determining the origins of Pb in archaeological artefacts. However, current approaches investigate source proportions via graphical means or simple mixing models. As such, an approach that quantitatively assesses source proportions and fingerprints the signature of analysed Pb, especially for larger numbers of sources, would be valuable. Here we use an advanced Bayesian isotope mixing model for three such applications: tracing dust sources in pre-anthropogenic environmental samples, tracking changing ore exploitation during the Roman period, and identifying the source of Pb in a Roman-age mining artefact. These examples indicate that this approach can resolve changing Pb sources deposited both during pre-anthropogenic times, when natural cycling of Pb dominated, and during the Roman period, one marked by significant anthropogenic pollution. Our archaeometric investigation indicates clear input of Pb from Romanian ores previously speculated, but not proven, to have been the Pb source. Our approach can be applied across a range of disciplines, providing a new method for robustly tracing sources of Pb observed in a variety of environments.
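A toy version of the Bayesian mixing idea (Dirichlet prior on source proportions, Gaussian likelihood on the mixed ratios, posterior by importance weighting). It assumes equal Pb concentrations across sources, so mixing is linear in the isotope ratios; the end-member values and error term are invented, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical end-member 206Pb/204Pb and 207Pb/204Pb signatures:
sources = np.array([[18.2, 15.6],   # local ore
                    [18.9, 15.7],   # distal dust
                    [17.8, 15.5]])  # bedrock
sample = np.array([18.4, 15.62])    # measured mixture
sigma = 0.05                        # assumed analytical + model error

props = rng.dirichlet(np.ones(3), size=200_000)   # Dirichlet prior draws
mixed = props @ sources                           # linear mixing
logw = -0.5 * np.sum(((mixed - sample) / sigma) ** 2, axis=1)
w = np.exp(logw - logw.max())                     # importance weights
posterior_mean = (props * w[:, None]).sum(axis=0) / w.sum()
print(posterior_mean)   # expected contribution of each source
```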
Semi-Automatic Segmentation Software for Quantitative Clinical Brain Glioblastoma Evaluation
Zhu, Y; Young, G; Xue, Z; Huang, R; You, H; Setayesh, K; Hatabu, H; Cao, F; Wong, S.T.
2012-01-01
Rationale and Objectives: Quantitative measurement provides essential information about disease progression and treatment response in patients with glioblastoma multiforme (GBM). The goal of this paper is to present and validate a software pipeline for semi-automatic GBM segmentation, called AFINITI (Assisted Follow-up in NeuroImaging of Therapeutic Intervention), using clinical data from GBM patients. Materials and Methods: Our software adopts the current state-of-the-art tumor segmentation algorithms and combines them into one clinically usable pipeline. The advantages of both traditional voxel-based and deformable shape-based segmentation are embedded into the software pipeline. The former provides an automatic tumor segmentation scheme based on T1- and T2-weighted MR brain data, and the latter refines the segmentation results with minimal manual input. Results: Twenty-six clinical MR brain images of GBM patients were processed and compared with manual results. The results can be visualized using the embedded graphical user interface (GUI). Conclusion: Validation results using clinical GBM data showed high correlation between the AFINITI results and manual annotation. Compared to voxel-wise segmentation, AFINITI yielded more accurate results in segmenting the enhanced GBM from multimodality MRI data. The proposed pipeline could be used as additional information to interpret MR brain images in neuroradiology. PMID:22591720
Airborne electromagnetic mapping of the base of aquifer in areas of western Nebraska
Abraham, Jared D.; Cannia, James C.; Bedrosian, Paul A.; Johnson, Michaela R.; Ball, Lyndsay B.; Sibray, Steven S.
2012-01-01
Airborne geophysical surveys of selected areas of the North and South Platte River valleys of Nebraska, including Lodgepole Creek valley, collected data to map aquifers and bedrock topography and thus improve the understanding of groundwater/surface-water relationships for use in water-management decisions. Frequency-domain helicopter electromagnetic surveys, using a unique survey flight-line design, collected resistivity data that can be related to lithologic information for refinement of groundwater model inputs. To make the geophysical data useful to multidimensional groundwater models, numerical inversion converted the measured data into a depth-dependent subsurface resistivity model. The inverted resistivity model, along with sensitivity analyses and test-hole information, was used to identify hydrogeologic features such as bedrock highs and paleochannels and to improve estimates of groundwater storage. The new hydrogeologic frameworks improve understanding of flow-path orientation by refining the locations of paleochannels and associated base-of-aquifer highs. These two- and three-dimensional interpretations provide resource managers and groundwater modelers with high-resolution hydrogeologic frameworks and quantitative estimates of framework uncertainty. The improved base-of-aquifer configuration represents the hydrogeology at a level of detail not achievable with previously available data.
Ecological periodic tables are an information organizing system. Their elements are categorical habitat types. Their attributes are quantitative, predictably recurring (periodic) properties of a target biotic community. Since they translate habitats as inputs into measures of ...
NASA Astrophysics Data System (ADS)
Wilson, J. P.; Fischer, W. W.
2010-12-01
Fossil plants provide useful proxies of Earth's climate because plants are closely connected, through physiology and morphology, to the environments in which they lived. Recent advances in quantitative hydraulic models of plant water transport provide new insight into the history of climate by allowing fossils to speak directly to environmental conditions based on preserved internal anatomy. We report results of a quantitative hydraulic model applied to one of the earliest terrestrial plants preserved in three dimensions, the ~396 million-year-old vascular plant Asteroxylon mackei. This model combines equations describing the rate of fluid flow through plant tissues with detailed observations of plant anatomy; this allows quantitative estimates of two critical aspects of plant function. First and foremost, results from these models quantify the supply of water to evaporative surfaces; second, results describe the ability of plant vascular systems to resist tensile damage from extreme environmental events, such as drought or frost. This approach permits quantitative comparisons of functional aspects of Asteroxylon with other extinct and extant plants, informs the quality of plant-based environmental proxies, and provides concrete data that can be input into climate models. Results indicate that despite their small size, water transport cells in Asteroxylon could supply a large volume of water to the plant's leaves--even greater than cells from some later-evolved seed plants. The smallest Asteroxylon tracheids have conductivities exceeding 0.015 m^2 MPa^-1 s^-1, whereas Paleozoic conifer tracheids do not reach this threshold until they are three times wider. However, this increase in conductivity came at the cost of little to no adaptations for transport safety, placing the plant's vegetative organs in jeopardy during drought events. Analysis of the thickness-to-span ratio of Asteroxylon's tracheids suggests that environmental conditions of reduced relative humidity (<20%) combined with elevated temperatures (>25°C) could cause sufficient cavitation to reduce hydraulic conductivity by 50%. This suggests that the Early Devonian environments that supported the earliest vascular plants were not subject to prolonged midseason droughts, or, alternatively, that the growing season was short. This places minimum constraints on water availability (e.g., groundwater hydration, relative humidity) in locations where Asteroxylon fossils are found; these environments must have had high relative humidities, comparable to tropical riparian environments. Given these constraints, biome-scale paleovegetation models that place early vascular plants distal to water sources can be revised to account for reduced drought tolerance. Paleoclimate proxies that treat early terrestrial plants as functionally interchangeable can incorporate physiological differences in a quantitatively meaningful way. Application of hydraulic models to fossil plants provides an additional perspective on the 475 million-year history of terrestrial photosynthetic environments and has potential to corroborate other plant-based paleoclimate proxies.
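The quoted conductivities can be rationalized with the Hagen-Poiseuille relation; the sketch below computes the idealized lumen-specific conductivity K_s = d^2 / (32 * mu), an upper bound that ignores pit and end-wall resistance, for a few conduit diameters.

```python
import numpy as np

def lumen_specific_conductivity(diameter_m, viscosity_pa_s=1.0e-3):
    """Hagen-Poiseuille conductivity of a water-filled conduit per unit
    lumen area, K_s = d^2 / (32 * mu), returned in m^2 MPa^-1 s^-1."""
    k = diameter_m ** 2 / (32.0 * viscosity_pa_s)   # m^2 Pa^-1 s^-1
    return k * 1.0e6                                # per MPa

for d_um in (10, 20, 30):
    print(d_um, lumen_specific_conductivity(d_um * 1e-6))
# An idealized open tube of diameter ~22 um already reaches the
# 0.015 m^2 MPa^-1 s^-1 quoted for the smallest Asteroxylon tracheids;
# real tracheids fall below this bound.
```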
NASA Astrophysics Data System (ADS)
Novikova, Y.; Zubanov, V.
2018-01-01
The article describes a numerical investigation of the influence of inlet flow nonuniformity on the characteristics of a turbofan engine. The investigated fan has wide-chord blades, an inlet diameter of about 2 m, a pressure ratio of about 1.6 and a bypass ratio of about 4.8. Flow nonuniformity was simulated by inserting a flap into the fan inlet channel, to depths of 10 to 22.5% of the inlet channel diameter in increments of 2.5%. A nonlinear harmonic analysis (NLH analysis) in the NUMECA Fine/Turbo software was used to study the flow nonuniformity. The behavior of the calculated LPC characteristics reproduces the experimental behavior, but with a quantitative difference: the calculated efficiency and pressure ratio of the booster agree with the experimental data within 3% and 2%, respectively, and the calculated efficiency and pressure ratio of the fan duct within 4% and 2.5%, respectively. Increasing the level of inlet nonuniformity reduces the calculated mass flow, maximum pressure ratio and efficiency. With a flap insertion of 12.5%, the maximum mass flow decreases by 1.44%, the maximum pressure ratio by 2.6% and the efficiency by 3.1%.
Measurement of regional cerebral blood flow with copper-62-PTSM and a three-compartment model.
Okazawa, H; Yonekura, Y; Fujibayashi, Y; Mukai, T; Nishizawa, S; Magata, Y; Ishizu, K; Tamaki, N; Konishi, J
1996-07-01
We quantitatively evaluated 62Cu-labeled pyruvaldehyde bis(N4-methylthiosemicarbazone) copper II (62Cu-PTSM) as a brain perfusion tracer for positron emission tomography (PET). For quantitative measurement, the octanol extraction method is needed to correct for arterial radioactivity in estimating the lipophilic input function, but the procedure is not practical for clinical studies. To measure regional cerebral blood flow (rCBF) with 62Cu-PTSM using simple arterial blood sampling, a standard curve of the octanol extraction ratio and a three-compartment model were applied. We performed both 15O-labeled water PET and 62Cu-PTSM PET with dynamic data acquisition and arterial sampling in six subjects. Data obtained in 10 subjects studied previously were used for the standard octanol extraction curve. Arterial activity was measured and corrected to obtain the true input function using the standard curve. Graphical analysis (Gjedde-Patlak plot), with the data for each subject fitted by a straight regression line, suggested that 62Cu-PTSM can be analyzed by the three-compartment model with negligible K4. Using this model, K1-K3 were estimated from curve fitting of the cerebral time-activity curve and the corrected input function. The fractional uptake of 62Cu-PTSM was corrected to rCBF with the individual extraction at steady state calculated from K1-K3. The influx rates (Ki) obtained from the three-compartment model and graphical analyses were compared for validation of the model. A comparison of rCBF values obtained from the 62Cu-PTSM and 15O-water studies demonstrated excellent correlation. The results suggest the potential feasibility of quantitation of cerebral perfusion with 62Cu-PTSM accompanied by dynamic PET and simple arterial sampling.
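A minimal sketch of the Gjedde-Patlak step: plot C_t/C_p against the normalized integral of C_p and take the late-time slope as the influx rate Ki. It assumes negligible k4 (as the study concludes) and C_p > 0 over the fitted range; the choice of "late portion" below is an arbitrary placeholder.

```python
import numpy as np

def patlak_ki(t, ct, cp):
    """Gjedde-Patlak influx rate Ki from a tissue curve ct and an
    arterial plasma input cp, both sampled at times t.

    After equilibration, ct/cp plotted against integral(cp)/cp falls
    on a line whose slope is Ki (three-compartment model, k4 ~ 0)."""
    x = np.array([np.trapz(cp[:i + 1], t[:i + 1])
                  for i in range(len(t))]) / cp
    y = ct / cp
    late = slice(len(t) // 2, None)     # fit only the linear, late part
    ki, intercept = np.polyfit(x[late], y[late], 1)
    return ki, intercept
```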
Space-Time Data Fusion for Remote Sensing Applications
NASA Technical Reports Server (NTRS)
Braverman, Amy; Nguyen, H.; Cressie, N.
2011-01-01
NASA has been collecting massive amounts of remote sensing data about Earth's systems for more than a decade. Missions are selected to be complementary in quantities measured, retrieval techniques, and sampling characteristics, so these datasets are highly synergistic. To fully exploit this, a rigorous methodology for combining data with heterogeneous sampling characteristics is required. For scientific purposes, the methodology must also provide quantitative measures of uncertainty that propagate input-data uncertainty appropriately. We view this as a statistical inference problem. The true but not directly observed quantities form a vector-valued field continuous in space and time. Our goal is to infer those true values, or some function of them, and to provide uncertainty quantification for those inferences. We use a spatiotemporal statistical model that relates the unobserved quantities of interest at point level to the spatially aggregated, observed data. We describe and illustrate our method using CO2 data from two NASA data sets.
Relating interesting quantitative time series patterns with text events and text features
NASA Astrophysics Data System (ADS)
Wanner, Franz; Schreck, Tobias; Jentner, Wolfgang; Sharalieva, Lyubka; Keim, Daniel A.
2013-12-01
In many application areas, the key to successful data analysis is the integrated analysis of heterogeneous data. One example is the financial domain, where time-dependent and highly frequent quantitative data (e.g., trading volume and price information) and textual data (e.g., economic and political news reports) need to be considered jointly. Data analysis tools need to support an integrated analysis, which allows studying the relationships between textual news documents and quantitative properties of stock market price series. In this paper, we describe a workflow and tool that allow a flexible formation of hypotheses about text features and their combinations, which reflect quantitative phenomena observed in stock data. To support such an analysis, we combine the analysis steps for frequent quantitative and text-oriented data using an existing a-priori method. First, based on heuristics, we extract interesting intervals and patterns in large time series data. The visual analysis supports the analyst in exploring parameter combinations and their results. The identified time series patterns are then input for the second analysis step, in which all identified intervals of interest are analyzed for frequent patterns co-occurring with financial news. An a-priori method supports the discovery of such sequential temporal patterns. Then, various text features, such as the degree of sentence nesting, noun phrase complexity, and vocabulary richness, are extracted from the news to obtain meta patterns. Meta patterns are defined by a specific combination of text features which significantly differ from the text features of the remaining news data. Our approach combines a portfolio of visualization and analysis techniques, including time-, cluster- and sequence-visualization and analysis functionality. We provide two case studies showing the effectiveness of our combined quantitative and textual analysis workflow. The workflow can also be generalized to other application domains, such as data analysis of smart grids, cyber-physical systems or the security of critical infrastructure, where the data consist of a combination of quantitative and textual time series data.
A quantitative theory of the functions of the hippocampal CA3 network in memory
Rolls, Edmund T.
2013-01-01
A quantitative computational theory of the operation of the hippocampal CA3 system as an autoassociation or attractor network in the episodic memory system is described. In this theory, the CA3 system operates as a single attractor or autoassociation network to enable rapid, one-trial associations between any spatial location (place in rodents, or spatial view in primates) and an object or reward, and to provide for completion of the whole memory during recall from any part. The theory is extended to associations between time and object or reward to implement temporal order memory, which is also important in episodic memory. The dentate gyrus (DG) performs pattern separation by competitive learning to produce sparse representations suitable for setting up new representations in CA3 during learning, producing for example neurons with place-like fields from entorhinal cortex grid cells. By virtue of the very small number of mossy fiber (MF) connections to CA3, the dentate granule cells produce a randomizing pattern separation effect, important during learning but not recall, that makes the patterns represented by CA3 firing very different from each other; this is optimal for an unstructured episodic memory system in which each memory must be kept distinct from the others. The direct perforant path (pp) input to CA3 is quantitatively appropriate to provide the cue for recall in CA3, but not for learning. Tests of the theory, including hippocampal subregion analyses and hippocampal NMDA receptor knockouts, are described and support the theory. PMID:23805074
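A toy autoassociative network in the spirit of the CA3 theory: sparse binary patterns are stored with a covariance learning rule, and a degraded cue is completed by recurrent recall. Sizes, the threshold rule, and all parameters are illustrative only, not the quantitative model of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Store sparse binary patterns in a recurrent (Hopfield-style) network.
n, n_patterns, sparsity = 200, 10, 0.1
patterns = (rng.random((n_patterns, n)) < sparsity).astype(float)
a = sparsity
W = (patterns - a).T @ (patterns - a) / n    # covariance learning rule
np.fill_diagonal(W, 0.0)                     # no self-connections

cue = patterns[0].copy()
cue[n // 2:] = 0.0                           # degrade half of the memory
state = cue
for _ in range(10):                          # recurrent recall iterations
    h = W @ state
    theta = np.quantile(h, 1 - sparsity)     # keep firing rate ~ sparsity
    state = (h >= theta).astype(float)

print(np.mean(state == patterns[0]))         # fraction of units recalled
```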
NASA Astrophysics Data System (ADS)
Kahveci, Ajda
2010-07-01
In this study, multiple thematically based and quantitative analysis procedures were utilized to explore the effectiveness of Turkish chemistry and science textbooks in terms of their reflection of reform. The themes of gender equity, questioning level, science vocabulary load, and readability level provided the conceptual framework for the analyses. An unobtrusive research method, content analysis, was used, coding the manifest content and counting the frequency of words, photographs, drawings, and questions by cognitive level. The context was an undergraduate chemistry teacher preparation program at a large public university in a metropolitan area in northwestern Turkey. Forty preservice chemistry teachers were guided to analyze 10 middle school science and 10 high school chemistry textbooks. Overall, the textbooks included unfair gender representations, considerably more input- and processing-level questions than output-level questions, and a high load of science terminology. The textbooks failed to provide sufficient empirical evidence to be considered gender-equitable and inquiry-based. The quantitative approach employed for evaluation contrasts with a more interpretive approach and has the potential to depict textbook profiles in a more reliable way, complementing the commonly employed qualitative procedures. The implications suggest that further work in this line is needed on calibrating the analysis procedures with science textbooks used in different international settings. The procedures could be modified and improved to meet specific evaluation needs. In the Turkish context, next-step research may concern the analysis of science textbooks being rewritten for the reform-based curricula, to make cross-comparisons and evaluate possible progression.
Huang, Zhenzhen; Wang, Haonan; Yang, Wensheng
2015-05-06
In this work, a facile colorimetric method is developed for quantitative detection of human serum albumin (HSA), based on the antiaggregation effect of HSA on gold nanoparticles (Au NPs). The citrate-capped Au NPs undergo a color change from red to blue when melamine is added as a cross-linker to induce aggregation of the NPs. This aggregation is efficiently suppressed upon adsorption of HSA on the particle surface. The method provides the advantages of simplicity and cost-efficiency for quantitative detection of HSA, with a detection limit of ∼1.4 nM, by monitoring the colorimetric changes of the Au NPs with UV-vis spectroscopy. In addition, the approach shows good selectivity for HSA over various amino acids, peptides, and proteins and is qualified for detection of HSA in a biological sample. The antiaggregation effect can be further exploited to fabricate an INHIBIT logic gate, using HSA and melamine as inputs and the color changes of the Au NPs as outputs, which may have potential applications in point-of-care medical diagnosis.
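The INHIBIT logic described at the end reduces to a two-input truth table: aggregation (blue) occurs only when melamine is present and HSA is absent. A trivial sketch of that logic, with the color mapping taken from the chemistry described above:

```python
# INHIBIT gate realized by the Au NP assay: melamine aggregates the
# particles (red -> blue) only when HSA is absent; adsorbed HSA
# blocks (inhibits) the aggregation.
def au_np_color(hsa: bool, melamine: bool) -> str:
    aggregated = melamine and not hsa   # INHIBIT logic
    return "blue" if aggregated else "red"

for hsa in (False, True):
    for mel in (False, True):
        print(f"HSA={hsa!s:5} melamine={mel!s:5} -> {au_np_color(hsa, mel)}")
```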
Tractography optimization using quantitative T1 mapping in the human optic radiation.
Schurr, Roey; Duan, Yiran; Norcia, Anthony M; Ogawa, Shumpei; Yeatman, Jason D; Mezer, Aviv A
2018-06-21
Diffusion MRI tractography is essential for reconstructing white-matter projections in the living human brain. Yet tractography results miss some projections and falsely identify others. A challenging example is the optic radiation (OR) that connects the thalamus and the primary visual cortex. Here, we tested whether OR tractography can be optimized using quantitative T1 mapping. Based on histology, we proposed that myelin-sensitive T1 values along the OR should remain consistently low compared with adjacent white matter. We found that complementary information from the T1 map allows for increasing the specificity of the reconstructed OR tract by eliminating falsely identified projections. This T1-filtering outperforms other, diffusion-based tractography filters. These results provide evidence that the smooth microstructural signature along the tract can be used as constructive input for tractography. Finally, we demonstrate that this approach can be generalized to the HCP-available MRI measurements. We conclude that multimodal MRI microstructural information can be used to eliminate spurious tractography results in the case of the OR. Copyright © 2018. Published by Elsevier Inc.
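A minimal sketch of the T1-filtering idea: sample the quantitative T1 map along each candidate streamline and discard those whose mean T1 is not consistently low, as expected for heavily myelinated OR white matter. The cutoff value is a hypothetical placeholder; the paper's actual criterion may differ.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def filter_streamlines_by_t1(streamlines, t1_map, t1_max=1.2):
    """Keep only streamlines with a low mean quantitative T1.

    streamlines: list of (n_points, 3) voxel-space coordinate arrays
    t1_map:      3-D quantitative T1 volume (seconds)
    t1_max:      cutoff in seconds (assumed value, not from the paper)"""
    kept = []
    for sl in streamlines:
        t1_along = map_coordinates(t1_map, sl.T, order=1)  # trilinear
        if t1_along.mean() < t1_max:
            kept.append(sl)
    return kept
```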
Geoscientific process monitoring with positron emission tomography (GeoPET)
NASA Astrophysics Data System (ADS)
Kulenkampff, Johannes; Gründig, Marion; Zakhnini, Abdelhamid; Lippmann-Pipke, Johanna
2016-08-01
Transport processes in geomaterials can be observed with input-output experiments, which yield no direct information on the impact of heterogeneities, or they can be assessed by model simulations based on structural imaging using µ-CT. Positron emission tomography (PET) provides an alternative experimental observation method which directly and quantitatively yields the spatio-temporal distribution of tracer concentration. Process observation with PET benefits from its extremely high sensitivity together with a resolution that is acceptable in relation to standard drill core sizes. We strongly recommend applying high-resolution PET scanners in order to achieve a resolution on the order of 1 mm. We discuss the particularities of PET applications in geoscientific experiments (GeoPET), which essentially are due to high material density. Although PET is rather insensitive to matrix effects, mass attenuation and Compton scattering have to be corrected thoroughly in order to derive quantitative values. Examples of process monitoring of advection and diffusion processes with GeoPET illustrate the procedure and the experimental conditions, as well as the benefits and limits of the method.
Directions for a Community College.
ERIC Educational Resources Information Center
Turner, Lewis O.
Because desired output goals should be the ruling criteria for the deployment of resources (inputs) and the selection of goal attainment strategies (processes), specific goal expectations and goal achievement evaluative methods must be determined. These output goals may be classified in two categories, quantitative (numbers of graduates, grade…
RICA: a reliable and image configurable arena for cyborg bumblebee based on CAN bus.
Gong, Fan; Zheng, Nenggan; Xue, Lei; Xu, Kedi; Zheng, Xiaoxiang
2014-01-01
In this paper, we designed a reliable and image-configurable flight arena, RICA, for developing cyborg bumblebees. To meet the spatial and temporal requirements of bumblebees, the Controller Area Network (CAN) bus is adopted to interconnect the LED display modules, ensuring the reliability and real-time performance of the arena system. Easily configurable interfaces, implemented as Python scripts on a desktop computer, transmit visual patterns to the LED distributor online and configure RICA dynamically. The new arena system will be a powerful tool for investigating the quantitative relationship between visual inputs and induced flight behaviors and will also be helpful to visual-motor research in other related fields.
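A hedged sketch of pushing one display frame over such a bus, assuming the python-can package and a SocketCAN interface; the arbitration IDs and 8-byte payload layout below are invented for illustration, not the actual RICA protocol.

```python
# Requires the python-can package and a configured SocketCAN channel.
import can

def send_pattern(rows):
    """rows: iterable of (module_id, eight_pixel_bytes) tuples,
    one CAN frame per LED module (hypothetical framing)."""
    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    try:
        for module_id, pixels in rows:
            msg = can.Message(arbitration_id=0x100 + module_id,
                              data=bytes(pixels[:8]),
                              is_extended_id=False)
            bus.send(msg)   # fixed-latency delivery is the point of CAN
    finally:
        bus.shutdown()

send_pattern([(0, [255, 0, 255, 0, 255, 0, 255, 0])])  # striped row
```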
Evaluating digital libraries in the health sector. Part 2: measuring impacts and outcomes.
Cullen, Rowena
2004-03-01
This is the second part of a two-part paper which explores methods that can be used to evaluate digital libraries in the health sector. Part 1 focuses on approaches to evaluation that have been proposed for mainstream digital information services. This paper investigates evaluative models developed for some innovative digital library projects, and some major national and international electronic health information projects. The value of ethnographic methods to provide qualitative data to explore outcomes, adding to quantitative approaches based on inputs and outputs is discussed. The paper concludes that new 'post-positivist' models of evaluation are needed to cover all the dimensions of the digital library in the health sector, and some ways of doing this are outlined.
Understanding Cervicogenic Headache
Chua, Nicholas H L; Suijlekom, Hans V; Wilder-Smith, Oliver H; Vissers, Kris C P
2012-01-01
The purported mechanism underlying the development and progression of cervicogenic headache (CEH) is the convergence of sensory inputs at the trigeminocervical nucleus. This mechanism explains the radiation of pain from the neck or the occipitonuchal area and its spread to the oculo-fronto-temporal region; it also explains the recurrent headaches caused by improper neck postures or external pressure to the structures in the neck and the occipital region. These neural connectivity mechanisms involving the trigeminal nucleus are also evident from the eyeblink reflex and findings of quantitative sensory testing (QST). Understanding the mechanisms underlying the development of CEH is important because it will not only provide a better treatment outcome but will also allow practitioners to appreciate the variability of symptomatic presentations in these patients. PMID:24223325
Security of BB84 with weak randomness and imperfect qubit encoding
NASA Astrophysics Data System (ADS)
Zhao, Liang-Yuan; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Fang, Xi; Han, Zheng-Fu; Huang, Wei
2018-03-01
The main threats to the well-known Bennett-Brassard 1984 (BB84) practical quantum key distribution (QKD) system are that its encoding is inaccurate and that its measurement device may be vulnerable to particular attacks. Thus, a general physical model or security proof that tackles these loopholes simultaneously and quantitatively is highly desired. Here we give a framework for the security of BB84 when imperfect qubit encoding and vulnerability of the measurement device are both considered. In our analysis, the potential attacks on the measurement device are generalized by the recently proposed weak randomness model, which assumes the input random numbers are partially biased depending on a hidden variable planted by an eavesdropper. The inevitable encoding inaccuracy is also introduced. From a fundamental view, our work reveals the potential information leakage due to encoding inaccuracy and weak randomness input. For applications, our result can be viewed as a useful tool to quantitatively evaluate the security of a practical QKD system.
Engineering modular and orthogonal genetic logic gates for robust digital-like synthetic biology.
Wang, Baojun; Kitney, Richard I; Joly, Nicolas; Buck, Martin
2011-10-18
Modular and orthogonal genetic logic gates are essential for building robust biologically based digital devices to customize cell signalling in synthetic biology. Here we constructed an orthogonal AND gate in Escherichia coli using a novel hetero-regulation module from Pseudomonas syringae. The device comprises two co-activating genes hrpR and hrpS controlled by separate promoter inputs, and a σ(54)-dependent hrpL promoter driving the output. The hrpL promoter is activated only when both genes are expressed, generating digital-like AND integration behaviour. The AND gate is demonstrated to be modular by applying new regulated promoters to the inputs, and connecting the output to a NOT gate module to produce a combinatorial NAND gate. The circuits were assembled using a parts-based engineering approach of quantitative characterization, modelling, followed by construction and testing. The results show that new genetic logic devices can be engineered predictably from novel native orthogonal biological control elements using quantitatively in-context characterized parts. © 2011 Macmillan Publishers Limited. All rights reserved.
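The digital-like AND behavior can be caricatured with a product of Hill functions for the two co-activators: the hrpL promoter output is high only when both HrpR and HrpS inputs are high. Parameter values below are illustrative placeholders, not the characterized part values from the paper.

```python
import numpy as np

def hill(x, k, n):
    """Activating Hill function, 0 at x = 0 and -> 1 for x >> k."""
    return x ** n / (k ** n + x ** n)

def hrpl_output(hrpR, hrpS, k1=1.0, k2=1.0, n=2.0, vmax=1.0):
    """Toy steady-state AND gate: sigma54-dependent hrpL activity
    approximated as a product of Hill functions of the two
    co-activator levels (illustrative parameters only)."""
    return vmax * hill(hrpR, k1, n) * hill(hrpS, k2, n)

for r in (0.0, 5.0):
    for s in (0.0, 5.0):
        print(r, s, round(hrpl_output(r, s), 3))
# Output is high only when both inputs are high: digital-like AND.
```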
Riss, Patrick J; Hong, Young T; Williamson, David; Caprioli, Daniele; Sitnikov, Sergey; Ferrari, Valentina; Sawiak, Steve J; Baron, Jean-Claude; Dalley, Jeffrey W; Fryer, Tim D; Aigbirhio, Franklin I
2011-01-01
The 5-hydroxytryptamine type 2a (5-HT2A) selective radiotracer [18F]altanserin has been subjected to a quantitative micro-positron emission tomography study in Lister Hooded rats. Metabolite-corrected plasma input modeling was compared with reference tissue modeling using the cerebellum as reference tissue. [18F]altanserin showed sufficient brain uptake in a distribution pattern consistent with the known distribution of 5-HT2A receptors. Full binding saturation and displacement was documented, and no significant uptake of radioactive metabolites was detected in the brain. Blood input as well as reference tissue models were equally appropriate to describe the radiotracer kinetics. [18F]altanserin is suitable for quantification of 5-HT2A receptor availability in rats. PMID:21750562
Characterization of relief printing
NASA Astrophysics Data System (ADS)
Liu, Xing; Chen, Lin; Ortiz-Segovia, Maria-Valezzka; Ferwerda, James; Allebach, Jan
2014-03-01
Relief printing technology developed by Océ allows the superposition of several layers of colorant on different types of media which creates a variation of the surface height defined by the input to the printer. Evaluating the reproduction accuracy of distinct surface characteristics is of great importance to the application of the relief printing system. Therefore, it is necessary to develop quality metrics to evaluate the relief process. In this paper, we focus on the third dimension of relief printing, i.e. height information. To achieve this goal, we define metrics and develop models that aim to evaluate relief prints in two aspects: overall fidelity and surface finish. To characterize the overall fidelity, three metrics are calculated: Modulation Transfer Function (MTF), difference and root-mean-squared error (RMSE) between the input height map and scanned height map, and print surface angle accuracy. For the surface finish property, we measure the surface roughness, generate surface normal maps and develop a light reflection model that serves as a simulation of the differences between ideal prints and real prints that may be perceived by human observers. Three sets of test targets are designed and printed by the Océ relief printer prototypes for the calculation of the above metrics: (i) twisted target, (ii) sinusoidal wave target, and (iii) ramp target. The results provide quantitative evaluations of the printing quality in the third dimension, and demonstrate that the height of relief prints is reproduced accurately with respect to the input design. The factors that affect the printing quality include: printing direction, frequency and amplitude of the input signal, shape of relief prints. Besides the above factors, there are two additional aspects that influence the viewing experience of relief prints: lighting condition and viewing angle.
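Two of the overall-fidelity metrics named above (the difference map and the RMSE between the commanded and scanned height maps) are straightforward to compute; the ramp profile below is an invented stand-in for a scanned target.

```python
import numpy as np

def height_fidelity(input_height, scanned_height):
    """Pointwise difference map and RMSE between the height map sent
    to the printer and the height map measured from the print."""
    diff = scanned_height - input_height
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    return diff, rmse

# Hypothetical 1-D ramp target: commanded vs. measured profile (mm).
x = np.linspace(0.0, 1.0, 256)
commanded = 0.5 * x
measured = 0.5 * x + 0.01 * np.random.default_rng(3).standard_normal(256)
print(height_fidelity(commanded, measured)[1])   # ~0.01 mm
```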
Bennett, Maxwell R.; Farnell, Les; Gibson, William G.; Lagopoulos, Jim
2015-01-01
Measurements of the cortical metabolic rate of glucose oxidation [CMRglc(ox)] have provided a number of interesting and, in some cases, surprising observations. One is the decline in CMRglc(ox) during anesthesia and non-rapid eye movement (NREM) sleep, and another, the inverse relationship between the resting-state CMRglc(ox) and the transient following input from the thalamus. The recent establishment of a quantitative relationship between synaptic and action potential activity on the one hand and CMRglc(ox) on the other allows neural network models of such activity to probe for possible mechanistic explanations of these phenomena. We have carried out such investigations using cortical models consisting of networks of modules with excitatory and inhibitory neurons, each receiving excitatory inputs from outside the network in addition to intermodular connections. Modules may be taken as regions of cortical interest, the inputs from outside the network as arising from the thalamus, and the intermodular connections as long associational fibers. The model shows that the impulse frequency of different modules can differ from each other by less than 10%, consistent with the relatively uniform CMRglc(ox) observed across different regions of cortex. The model also shows that, if correlations of the average impulse rate between different modules decreases, there is a concomitant decrease in the average impulse rate in the modules, consistent with the observed drop in CMRglc(ox) in NREM sleep and under anesthesia. The model also explains why a transient thalamic input to sensory cortex gives rise to responses with amplitudes inversely dependent on the resting-state frequency, and therefore resting-state CMRglc(ox). PMID:25775588
The Kinetics of Oxygen Atom Recombination in the Presence of Carbon Dioxide
NASA Astrophysics Data System (ADS)
Jamieson, C. S.; Garcia, R. M.; Pejakovic, D.; Kalogerakis, K.
2009-12-01
Understanding processes involving atomic oxygen is crucial for the study and modeling of composition, energy transfer, airglow, and transport dynamics in planetary atmospheres. Significant gaps and uncertainties exist in the understanding of these processes and often the relevant input from laboratory measurements is missing or outdated. We are conducting laboratory experiments to measure the rate coefficient for O + O + CO2 recombination and investigating the O2 excited states produced following the recombination. These measurements will provide key input for a quantitative understanding and reliable modeling of the atmospheres of the CO2 planets and their airglow. An excimer laser providing pulsed output at either 193 nm or 248 nm is employed to produce O atoms by dissociating carbon dioxide, nitrous oxide, or ozone. In an ambient-pressure background of CO2, O atoms recombine in a time scale of a few milliseconds. Detection of laser-induced fluorescence at 845 nm following two-photon excitation near 226 nm monitors the decay of the oxygen atom population. From the temporal evolution of the signal the recombination rate coefficient is extracted. Fluorescence spectroscopy is used to detect the products of O-atom recombination and subsequent relaxation in CO2. This work is supported by the US National Science Foundation’s (NSF) Planetary Astronomy Program. Rosanne Garcia’s participation was funded by the NSF Research Experiences for Undergraduates (REU) Program.
Ligmann-Zielinska, Arika; Kramer, Daniel B.; Spence Cheruvelil, Kendra; Soranno, Patricia A.
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system. PMID:25340764
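A compact sketch of the variance-decomposition workflow, assuming the SALib package (an assumption; the record does not name the tooling): quasi-random Saltelli sampling of the input space, a placeholder function standing in for a full ABM run, and Sobol first-order and total indices.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical three-input problem; names and bounds are invented.
problem = {
    "num_vars": 3,
    "names": ["subsidy", "land_price", "network_weight"],
    "bounds": [[0.0, 1.0]] * 3,
}

X = saltelli.sample(problem, 1024)        # quasi-random input design
Y = X[:, 0] * X[:, 1] + 0.1 * X[:, 2]     # placeholder for the ABM output

Si = sobol.analyze(problem, Y)
print(Si["S1"])   # first-order effects: each input's share of variance
print(Si["ST"])   # total effects: includes interaction contributions
```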
Real Time Calibration Method for Signal Conditioning Amplifiers
NASA Technical Reports Server (NTRS)
Medelius, Pedro J. (Inventor); Mata, Carlos T. (Inventor); Eckhoff, Anthony (Inventor); Perotti, Jose (Inventor); Lucena, Angel (Inventor)
2004-01-01
A signal conditioning amplifier receives an input signal from an input such as a transducer. The signal is amplified and processed through an analog to digital converter and sent to a processor. The processor estimates the input signal provided by the transducer to the amplifier via a multiplexer. The estimated input signal is provided as a calibration voltage to the amplifier immediately following the receipt of the amplified input signal. The calibration voltage is amplified by the amplifier and provided to the processor as an amplified calibration voltage. The amplified calibration voltage is compared to the amplified input signal, and if a significant error exists, the gain and/or offset of the amplifier may be adjusted as necessary.
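A minimal numerical sketch of the gain/offset check the patent describes: known calibration voltages are injected through the multiplexer, the digitized responses are fit for the amplifier's actual gain and offset, and raw readings are corrected; all voltage values are illustrative assumptions:

```python
import numpy as np

def estimate_gain_offset(v_cal, v_out):
    """Least-squares fit of v_out = gain * v_cal + offset from two or more
    known calibration voltages injected through the multiplexer."""
    A = np.vstack([v_cal, np.ones_like(v_cal)]).T
    gain, offset = np.linalg.lstsq(A, v_out, rcond=None)[0]
    return gain, offset

# Two known calibration levels and the digitized amplifier responses
v_cal = np.array([0.0, 2.5])          # volts injected
v_out = np.array([0.12, 25.4])        # volts after amplification and ADC scaling
gain, offset = estimate_gain_offset(v_cal, v_out)
corrected = (10.3 - offset) / gain    # undo amplifier drift on a raw reading
```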
High speed quantitative digital microscopy
NASA Technical Reports Server (NTRS)
Castleman, K. R.; Price, K. H.; Eskenazi, R.; Ovadya, M. M.; Navon, M. A.
1984-01-01
Modern digital image processing hardware makes possible quantitative analysis of microscope images at high speed. This paper describes an application to automatic screening for cervical cancer. The system uses twelve MC6809 microprocessors arranged in a pipeline multiprocessor configuration. Each processor executes one part of the algorithm on each cell image as it passes through the pipeline. Each processor communicates with its upstream and downstream neighbors via shared two-port memory. Thus no time is devoted to input-output operations as such. This configuration is expected to be at least ten times faster than previous systems.
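The pipeline arrangement can be sketched in software with one worker per algorithm stage and queues standing in for the shared two-port memories; the stage functions here are placeholders, not the original cell-screening algorithm:

```python
import threading, queue

def stage(worker, q_in, q_out):
    # Each pipeline processor repeatedly takes a cell image from its
    # upstream queue, applies one step of the algorithm, and passes the
    # result downstream (the shared two-port memory in the original hardware).
    while (item := q_in.get()) is not None:
        q_out.put(worker(item))
    q_out.put(None)                      # propagate shutdown downstream

steps = [lambda im: im, lambda im: im]   # placeholder segment/measure steps
queues = [queue.Queue() for _ in range(len(steps) + 1)]
threads = [threading.Thread(target=stage, args=(f, queues[i], queues[i + 1]))
           for i, f in enumerate(steps)]
for t in threads:
    t.start()

for image in [b"img0", b"img1"]:         # feed cell images into the pipeline
    queues[0].put(image)
queues[0].put(None)                      # sentinel shuts the pipeline down
print(list(iter(queues[-1].get, None)))  # drain processed results
```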
The Case for Open Source Software: The Interactional Discourse Lab
ERIC Educational Resources Information Center
Choi, Seongsook
2016-01-01
Computational techniques and software applications for the quantitative content analysis of texts are now well established, and many qualitative data software applications enable the manipulation of input variables and the visualization of complex relations between them via interactive and informative graphical interfaces. Although advances in…
40 CFR 97.76 - Additional requirements to provide heat input data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... heat input data. 97.76 Section 97.76 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Monitoring and Reporting § 97.76 Additional requirements to provide heat input data. The owner or operator of... a flow system shall also monitor and report heat input rate at the unit level using the procedures...
40 CFR 97.76 - Additional requirements to provide heat input data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... heat input data. 97.76 Section 97.76 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Monitoring and Reporting § 97.76 Additional requirements to provide heat input data. The owner or operator of... a flow system shall also monitor and report heat input rate at the unit level using the procedures...
40 CFR 97.76 - Additional requirements to provide heat input data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... heat input data. 97.76 Section 97.76 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Monitoring and Reporting § 97.76 Additional requirements to provide heat input data. The owner or operator of... a flow system shall also monitor and report heat input rate at the unit level using the procedures...
40 CFR 97.76 - Additional requirements to provide heat input data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... heat input data. 97.76 Section 97.76 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Monitoring and Reporting § 97.76 Additional requirements to provide heat input data. The owner or operator of... a flow system shall also monitor and report heat input rate at the unit level using the procedures...
40 CFR 97.76 - Additional requirements to provide heat input data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... heat input data. 97.76 Section 97.76 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Monitoring and Reporting § 97.76 Additional requirements to provide heat input data. The owner or operator of... a flow system shall also monitor and report heat input rate at the unit level using the procedures...
NASA Astrophysics Data System (ADS)
Xiong, Guoming; Cumming, Paul; Todica, Andrei; Hacker, Marcus; Bartenstein, Peter; Böning, Guido
2012-12-01
Absolute quantitation of the cerebral metabolic rate for glucose (CMRglc) can be obtained in positron emission tomography (PET) studies when serial measurements of the arterial [18F]-fluorodeoxyglucose (FDG) input are available. Since this is not always practical in PET studies of rodents, there has been considerable interest in defining an image-derived input function (IDIF) by placing a volume of interest (VOI) within the left ventricle of the heart. However, spill-in arising from trapping of FDG in the myocardium often leads to progressive contamination of the IDIF, which propagates to underestimation of the magnitude of CMRglc. We therefore developed a novel, non-invasive method for correcting the IDIF without scaling to a blood sample. To this end, we first obtained serial arterial samples and dynamic FDG-PET data of the head and heart in a group of eight anaesthetized rats. We fitted a bi-exponential function to the serial measurements of the IDIF, and then used the linear graphical Gjedde-Patlak method to describe the accumulation in myocardium. We next estimated the magnitude of myocardial spill-in reaching the left ventricle VOI by assuming a Gaussian point-spread function, and corrected the measured IDIF for this estimated spill-in. Finally, we calculated parametric maps of CMRglc using the corrected IDIF and, for the sake of comparison, relative to serial blood sampling from the femoral artery. The uncorrected IDIF resulted in 20% underestimation of the magnitude of CMRglc relative to the gold-standard arterial input method. However, there was no bias with the corrected IDIF, which was robust to the variable extent of myocardial tracer uptake, such that there was a very high correlation between individual CMRglc measurements using the corrected IDIF and gold-standard arterial input results. Based on simulations, we further find that electrocardiogram (ECG) gating is not necessary for IDIF quantitation using our approach.
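A compressed sketch of the correction pipeline described above: fit a bi-exponential to the measured left-ventricle curve, estimate the late-time myocardial trapping with a Patlak-style linear term, and subtract an assumed spill-in fraction. The curve shapes and the spill-in coefficient are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    # Bi-exponential model of the arterial FDG input function
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

t = np.linspace(0.5, 60, 40)                       # minutes
idif_meas = biexp(t, 80, 1.5, 12, 0.02) + 0.3 * t  # toy curve with spill-in ramp

# Myocardial trapping grows roughly linearly at late times (Patlak regime);
# estimate the slope and treat a fixed fraction of it as spill-in reaching
# the ventricular VOI (the Gaussian point-spread-function assumption).
late = t > 20
slope = np.polyfit(t[late], idif_meas[late], 1)[0]
spill_fraction = 0.8                               # assumed PSF-derived factor
idif_corr = idif_meas - spill_fraction * slope * t

params, _ = curve_fit(biexp, t, idif_corr, p0=(50, 1, 10, 0.01), maxfev=10000)
```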
Prioritizing Risks and Uncertainties from Intentional Release of Selected Category A Pathogens
Hong, Tao; Gurian, Patrick L.; Huang, Yin; Haas, Charles N.
2012-01-01
This paper synthesizes available information on five Category A pathogens (Bacillus anthracis, Yersinia pestis, Francisella tularensis, Variola major and Lassa) to develop quantitative guidelines for how environmental pathogen concentrations may be related to human health risk in an indoor environment. An integrated model of environmental transport and human health exposure to biological pathogens is constructed which 1) includes the effects of environmental attenuation, 2) considers fomite contact exposure as well as inhalational exposure, and 3) includes an uncertainty analysis to identify key input uncertainties, which may inform future research directions. The findings provide a framework for developing the many different environmental standards that are needed for making risk-informed response decisions, such as when prophylactic antibiotics should be distributed, and whether or not a contaminated area should be cleaned up. The approach is based on the assumption of uniform mixing in environmental compartments and is thus applicable to areas sufficiently removed in time and space from the initial release that mixing has produced relatively uniform concentrations. Results indicate that when pathogens are released into the air, risk from inhalation is the main component of the overall risk, while risk from ingestion (dermal contact for B. anthracis) is the main component of the overall risk when pathogens are present on surfaces. Concentrations sampled from untracked floor, walls and the filter of heating ventilation and air conditioning (HVAC) system are proposed as indicators of previous exposure risk, while samples taken from touched surfaces are proposed as indicators of future risk if the building is reoccupied. A Monte Carlo uncertainty analysis is conducted and input-output correlations used to identify important parameter uncertainties. An approach is proposed for integrating these quantitative assessments of parameter uncertainty with broader, qualitative considerations to identify future research priorities. PMID:22412915
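The Monte Carlo uncertainty analysis with input-output correlations can be sketched as follows; the three uncertain inputs, their distributions, and the exponential dose-response form are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 10_000
# Uncertain inputs (distributions are illustrative): environmental decay
# rate, surface-to-hand transfer efficiency, and dose-response parameter.
decay    = rng.lognormal(mean=-2.0, sigma=0.5, size=n)
transfer = rng.beta(2, 5, size=n)
k_resp   = rng.lognormal(mean=-6.0, sigma=1.0, size=n)

surface_conc = 1e4                                      # CFU/m^2, scenario input
dose = surface_conc * transfer * np.exp(-decay * 24)    # 24 h after release
risk = 1 - np.exp(-k_resp * dose)                       # exponential dose-response

for name, x in [("decay", decay), ("transfer", transfer), ("k_resp", k_resp)]:
    rho, _ = spearmanr(x, risk)   # input-output rank correlation
    print(f"{name:9s} rank correlation with risk: {rho:+.2f}")
```

Inputs with the largest rank correlations are the parameter uncertainties that would be flagged as research priorities.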
Moore, G W K; Semple, J L
2011-01-01
Cold injury is an acknowledged risk factor for those who venture into high altitude regions. There is, however, little quantitative information on this risk that can be used to implement mitigation strategies. Here we provide the first characterization of the risk of cold injury near the summit of Mount Everest. This is accomplished through the application of a meteorological dataset that has been demonstrated to characterize conditions in the region as inputs to new parameterizations of wind chill equivalent temperature (WCT) and facial frostbite time (FFT). Throughout the year, the typical WCT near the summit of Everest is always <-30°C, and the typical FFT is always less than 20 min. During the spring climbing season, WCTs of -50°C and FFTs of 5 min are typical; during severe storms, they approach -60°C and 1 min, respectively; values typically found during the winter. Further, we show that the summit barometric pressure is an excellent predictor of summit WCT and FFT. Our results provide the first quantitative characterization of the risk of cold injury on Mount Everest and also allow for the possibility of using barometric pressure, an easily observed parameter, in real time to characterize this risk and to implement mitigation strategies. The results also provide additional confirmation as to the extreme environment experienced by those attempting to summit Mount Everest and other high mountains.
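For illustration, a sketch using the standard JAG/TI wind chill equivalent temperature formula (the paper uses newer parameterizations of WCT and facial frostbite time that are not reproduced here); the summit conditions are assumed values:

```python
def wct_celsius(t_air_c, wind_kmh):
    """JAG/TI wind chill equivalent temperature in degrees C
    (valid for T <= 10 C and wind speeds >= 4.8 km/h)."""
    v = wind_kmh ** 0.16
    return 13.12 + 0.6215 * t_air_c - 11.37 * v + 0.3965 * t_air_c * v

# Illustrative spring-season summit conditions
print(wct_celsius(t_air_c=-27.0, wind_kmh=60.0))   # about -46 C
```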
Control-oriented reduced order modeling of dipteran flapping flight
NASA Astrophysics Data System (ADS)
Faruque, Imraan
Flying insects achieve flight stabilization and control in a manner that requires only small, specialized neural structures to perform the essential components of sensing and feedback, achieving unparalleled levels of robust aerobatic flight on limited computational resources. An engineering mechanism to replicate these control strategies could provide a dramatic increase in the mobility of small scale aerial robotics, but a formal investigation has not yet yielded tools that both quantitatively and intuitively explain flapping wing flight as an "input-output" relationship. This work uses experimental and simulated measurements of insect flight to create reduced order flight dynamics models. The framework presented here creates models that are relevant for the study of control properties. The work begins with automated measurement of insect wing motions in free flight, which are then used to calculate flight forces via an empirically-derived aerodynamics model. When paired with rigid body dynamics and experimentally measured state feedback, both the bare airframe and closed loop systems may be analyzed using frequency domain system identification. Flight dynamics models describing maneuvering about hover and cruise conditions are presented for example fruit flies (Drosophila melanogaster) and blowflies (Calliphorids). The results show that biologically measured feedback paths are appropriate for flight stabilization and sexual dimorphism is only a minor factor in flight dynamics. A method of ranking kinematic control inputs to maximize maneuverability is also presented, showing that the volume of reachable configurations in state space can be dramatically increased due to appropriate choice of kinematic inputs.
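Frequency-domain system identification of an input-output relationship can be sketched with Welch cross-spectral estimates; the synthetic first-order "airframe" below is an illustrative stand-in for the insect flight dynamics:

```python
import numpy as np
from scipy import signal

fs = 500.0                                  # Hz, sample rate
t = np.arange(0, 20, 1 / fs)
u = np.random.default_rng(0).standard_normal(t.size)   # wing-kinematic input
# Synthetic "airframe" response: first-order lag plus measurement noise
b, a = signal.butter(1, 10, fs=fs)          # stand-in dynamics, 10 Hz pole
y = signal.lfilter(b, a, u) \
    + 0.05 * np.random.default_rng(1).standard_normal(t.size)

f, Puu = signal.welch(u, fs=fs, nperseg=1024)
_, Puy = signal.csd(u, y, fs=fs, nperseg=1024)
H1 = Puy / Puu                              # H1 frequency-response estimate
gain_db = 20 * np.log10(np.abs(H1))         # Bode magnitude of the "airframe"
```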
Helioseismic and neutrino data-driven reconstruction of solar properties
NASA Astrophysics Data System (ADS)
Song, Ningqiang; Gonzalez-Garcia, M. C.; Villante, Francesco L.; Vinyoles, Nuria; Serenelli, Aldo
2018-06-01
In this work, we use Bayesian inference to quantitatively reconstruct the solar properties most relevant to the solar composition problem using as inputs the information provided by helioseismic and solar neutrino data. In particular, we use a Gaussian process to model the functional shape of the opacity uncertainty to gain flexibility and become as free as possible from prejudice in this regard. With these tools we first readdress the statistical significance of the solar composition problem. Furthermore, starting from a composition unbiased set of standard solar models (SSMs) we are able to statistically select those with solar chemical composition and other solar inputs which better describe the helioseismic and neutrino observations. In particular, we are able to reconstruct the solar opacity profile in a data-driven fashion, independently of any reference opacity tables, obtaining a 4 per cent uncertainty at the base of the convective envelope and 0.8 per cent at the solar core. When systematic uncertainties are included, results are 7.5 per cent and 2 per cent, respectively. In addition, we find that the values of most of the other inputs of the SSMs required to better describe the helioseismic and neutrino data are in good agreement with those adopted as the standard priors, with the exception of the astrophysical factor S11 and the microscopic diffusion rates, for which data suggests a 1 per cent and 30 per cent reduction, respectively. As an output of the study we derive the corresponding data-driven predictions for the solar neutrino fluxes.
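Modeling a functional shape with a Gaussian process, as done here for the opacity uncertainty, can be sketched as follows; the kernel, the normalized radial coordinate, and the three constraint points are illustrative assumptions, not the paper's inputs:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Fractional opacity perturbation delta_kappa(x) on a normalized radial
# coordinate x; the three "observations" are illustrative constraints.
x_obs = np.array([[0.2], [0.5], [0.71]])       # 0.71 ~ base of convective zone
y_obs = np.array([0.008, 0.02, 0.04])          # assumed fractional shifts

kernel = ConstantKernel(0.05**2) * RBF(length_scale=0.3)
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-4)
gp.fit(x_obs, y_obs)

x = np.linspace(0, 1, 50)[:, None]
mean, std = gp.predict(x, return_std=True)     # data-driven profile with a band
```

The GP prior imposes only smoothness, which is what lets the reconstruction stay independent of any reference opacity table.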
Sundar, Lalith Ks; Muzik, Otto; Rischka, Lucas; Hahn, Andreas; Rausch, Ivo; Lanzenberger, Rupert; Hienert, Marius; Klebermass, Eva-Maria; Füchsel, Frank-Günther; Hacker, Marcus; Pilz, Magdalena; Pataraia, Ekaterina; Traub-Weidinger, Tatjana; Beyer, Thomas
2018-01-01
Absolute quantification of PET brain imaging requires the measurement of an arterial input function (AIF), typically obtained invasively via an arterial cannulation. We present an approach to automatically calculate an image-derived input function (IDIF) and cerebral metabolic rates of glucose (CMRGlc) from the [18F]FDG PET data using an integrated PET/MRI system. Ten healthy controls underwent test-retest dynamic [18F]FDG-PET/MRI examinations. The imaging protocol consisted of a 60-min PET list-mode acquisition together with a time-of-flight MR angiography scan for segmenting the carotid arteries and intermittent MR navigators to monitor subject movement. AIFs were collected as the reference standard. Attenuation correction was performed using a separate low-dose CT scan. Assessment of the percentage difference between area-under-the-curve of IDIF and AIF yielded values within ±5%. Similar test-retest variability was seen between AIFs (9 ± 8) % and the IDIFs (9 ± 7) %. Absolute percentage difference between CMRGlc values obtained from AIF and IDIF across all examinations and selected brain regions was 3.2% (interquartile range: (2.4-4.3) %, maximum < 10%). High test-retest intravariability was observed between CMRGlc values obtained from AIF (14%) and IDIF (17%). The proposed approach provides an IDIF, which can be effectively used in lieu of AIF.
Changes in freshwater mussel communities linked to legacy pollution in the Lower Delaware River
Blakeslee, Carrie J.; Silldorff, Erik L.; Galbraith, Heather S.
2018-01-01
Freshwater mussels are among the most-imperiled organisms worldwide, although they provide a variety of important functions in the streams and rivers they inhabit. Among Atlantic-slope rivers, the Delaware River is known for its freshwater mussel diversity and biomass; however, limited data are available on the freshwater mussel fauna in the lower, non-tidal portion of the river. This section of the Delaware River has experienced decades of water-quality degradation from both industrial and municipal sources, primarily as a function of one of its major tributaries, the Lehigh River. We completed semi-quantitative snorkel surveys in 53.5 of the 121 km of the river to document mussel community composition and the continued impacts from pollution (particularly inputs from the Lehigh River) on mussel fauna. We detected changes in mussel catch per unit effort (CPUE) below the confluence of the Lehigh River, with significant declines in the dominant species Elliptio complanata (Eastern Elliptio) as we moved downstream from its confluence—CPUE dropped from 179 to 21 mussels/h. Patterns in mussel distribution around the Lehigh confluence matched chemical signatures of Lehigh water input. Specifically, Eastern Elliptio CPUE declined more quickly moving downstream on the Pennsylvania bank, where Lehigh River water input was more concentrated compared to the New Jersey bank. A definitive causal link remains to be established between the Lehigh River and the dramatic shifts in mussel community composition, warranting continued investigation as it relates to mussel conservation and restoration in the basin.
Mirzazadeh, Azim; Gandomkar, Roghayeh; Hejri, Sara Mortaz; Hassanzadeh, Gholamreza; Koochak, Hamid Emadi; Golestani, Abolfazl; Jafarian, Ali; Jalili, Mohammad; Nayeri, Fatemeh; Saleh, Narges; Shahi, Farhad; Razavi, Seyed Hasan Emami
2016-02-01
The purpose of this study was to utilize the Context, Input, Process and Product (CIPP) evaluation model as a comprehensive framework to guide initiating, planning, implementing and evaluating a revised undergraduate medical education programme. The eight-year longitudinal evaluation study consisted of four phases compatible with the four components of the CIPP model. In the first phase, we explored the strengths and weaknesses of the traditional programme as well as contextual needs, assets, and resources. For the second phase, we proposed a model for the programme considering contextual features. During the process phase, we provided formative information for revisions and adjustments. Finally, in the fourth phase, we evaluated the outcomes of the new undergraduate medical education programme in the basic sciences phase. Information was collected from different sources such as medical students, faculty members, administrators, and graduates, using various qualitative and quantitative methods including focus groups, questionnaires, and performance measures. The CIPP model has the potential to guide policy makers to systematically collect evaluation data and to manage stakeholders' reactions at each stage of the reform in order to make informed decisions. However, the model may result in evaluation burden and fail to address some unplanned evaluation questions.
Notes on stochastic (bio)-logic gates: computing with allosteric cooperativity
Agliari, Elena; Altavilla, Matteo; Barra, Adriano; Dello Schiavo, Lorenzo; Katz, Evgeny
2015-01-01
Recent experimental breakthroughs have finally allowed the implementation of in vitro reaction kinetics (so-called enzyme-based logic) that code for two-input logic gates and mimic the stochastic AND (and NAND) as well as the stochastic OR (and NOR). This accomplishment, together with the already-known single-input gates (performing as YES and NOT), provides a logic base and paves the way for the development of powerful biotechnological devices. However, as biochemical systems are always affected by the presence of noise (e.g. thermal), standard logic is not the correct theoretical reference framework; rather, we show that statistical mechanics can serve this purpose: here we formulate a complete statistical mechanical description of the Monod-Wyman-Changeux allosteric model for both single- and double-ligand systems, with the purpose of exploring their practical capabilities to express noisy logical operators and/or perform stochastic logical operations. Mixing statistical mechanics with logic, and testing the resulting findings quantitatively on the available biochemical data, we successfully revise the concept of cooperativity (and anti-cooperativity) for allosteric systems, with particular emphasis on its computational capabilities, the related ranges and scaling of the involved parameters, and its differences from classical cooperativity (and anti-cooperativity). PMID:25976626
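A toy stochastic AND gate can be sketched with Hill-type activation probabilities, a deliberate simplification of the paper's Monod-Wyman-Changeux treatment; all parameters are illustrative:

```python
import numpy as np

def p_active(s, K=1.0, n=2.0):
    # Hill-type activation probability for one ligand input
    return s**n / (K**n + s**n)

def stochastic_AND(s1, s2, trials=10_000, seed=0):
    """Fraction of trials in which both noisy inputs activate the gate."""
    rng = np.random.default_rng(seed)
    on1 = rng.random(trials) < p_active(s1)
    on2 = rng.random(trials) < p_active(s2)
    return np.mean(on1 & on2)

for s1, s2 in [(0.1, 0.1), (0.1, 5.0), (5.0, 5.0)]:
    print(s1, s2, stochastic_AND(s1, s2))   # approaches the AND truth table
```

Sharper (more cooperative) Hill exponents push the output closer to deterministic logic, which is the link between cooperativity and computational capability that the abstract emphasizes.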
Climate science and famine early warning
Verdin, James P.; Funk, Chris; Senay, Gabriel B.; Choularton, R.
2005-01-01
Food security assessment in sub-Saharan Africa requires simultaneous consideration of multiple socio-economic and environmental variables. Early identification of populations at risk enables timely and appropriate action. Since large and widely dispersed populations depend on rainfed agriculture and pastoralism, climate monitoring and forecasting are important inputs to food security analysis. Satellite rainfall estimates (RFE) fill in gaps in station observations, and serve as input to drought index maps and crop water balance models. Gridded rainfall time-series give historical context, and provide a basis for quantitative interpretation of seasonal precipitation forecasts. RFE are also used to characterize flood hazards, in both simple indices and stream flow models. In the future, many African countries are likely to see negative impacts on subsistence agriculture due to the effects of global warming. Increased climate variability is forecast, with more frequent extreme events. Ethiopia requires special attention. Already facing a food security emergency, troubling persistent dryness has been observed in some areas, associated with a positive trend in Indian Ocean sea surface temperatures. Increased African capacity for rainfall observation, forecasting, data management and modelling applications is urgently needed. Managing climate change and increased climate variability require these fundamental technical capacities if creative coping strategies are to be devised.
Effect of land use change on the carbon cycle in Amazon soils
NASA Technical Reports Server (NTRS)
Trumbore, Susan E.; Davidson, Eric A.
1994-01-01
The overall goal of this study was to provide a quantitative understanding of the cycling of carbon in the soils associated with deep-rooting Amazon forests. In particular, we wished to apply the understanding gained by answering two questions: (1) what changes will accompany the major land use change in this region, the conversion of forest to pasture? and (2) what is the role of carbon stored deeper than one meter in depth in these soils? To construct carbon budgets for pasture and forest soils we combined the following: measurements of carbon stocks in above-ground vegetation, root biomass, detritus, and soil organic matter; rates of carbon inputs to soil and detrital layers using litterfall collection and sequential coring to estimate fine root turnover; C-14 analyses of fractionated SOM and soil CO2 to estimate residence times; C-13 analyses to estimate C inputs to pasture soils from C-4 grasses; soil pCO2, volumetric water content, and radon gradients to estimate CO2 production as a function of soil depth; soil respiration to estimate total C outputs; and a model of soil C dynamics that defines SOM fractions cycling on annual, decadal, and millennial time scales.
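The multi-pool view of soil carbon cycling on annual, decadal, and millennial time scales can be sketched as a steady-state box model; pool turnover times and input rates are illustrative assumptions, not the Amazon measurements:

```python
import numpy as np

# Three-pool soil carbon model with annual, decadal, and millennial
# turnover times (illustrative values, in years).
turnover = np.array([1.0, 25.0, 1500.0])
k = 1.0 / turnover                       # decay rates, 1/yr
inputs = np.array([3.0, 0.8, 0.02])      # C inputs to each pool, MgC/ha/yr

stocks = inputs / k                      # steady state: inputs = k * stocks
resp_forest = np.sum(k * stocks)         # soil respiration balances inputs

# Pasture-conversion scenario: litter inputs to the fast pools are halved
inputs_pasture = inputs * np.array([0.5, 0.5, 1.0])
stocks_new = inputs_pasture / k          # new long-run equilibrium stocks
```

Radiocarbon constrains the turnover times, while the 13C shift from C4 pasture grasses constrains the new input terms.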
Poliovirus RNA recombination: mechanistic studies in the absence of selection.
Jarvis, T C; Kirkegaard, K
1992-01-01
Direct and quantitative detection of recombinant RNA molecules by polymerase chain reaction (PCR) provides a novel method for studying recombination in RNA viruses without selection for viable progeny. The parental poliovirus strains used in this study contained polymorphic marker loci approximately 600 bases apart; both exhibited wild-type growth characteristics. We established conditions under which the amount of PCR product was linearly proportional to the amount of input template, and the reproducibility was high. Recombinant progeny were predominantly homologous and arose at frequencies up to 2 × 10^-3. Recombination events increased in frequency throughout replication, indicating that there is no viral RNA sequestration or inhibition of recombination late in infection as proposed in earlier genetic studies. Previous studies have demonstrated that poliovirus recombination occurs by a copy-choice mechanism in which the viral polymerase switches templates during negative-strand synthesis. Varying the relative amount of input parental virus markedly altered reciprocal recombination frequencies. This, in conjunction with the kinetics data, indicated that acceptor template concentration is a determinant of template switching frequency. Since positive strands greatly outnumber negative strands throughout poliovirus infection, this would explain the bias toward recombination during negative-strand synthesis. PMID:1379178
Seasonal variability of the hydrogen exosphere of Mars
NASA Astrophysics Data System (ADS)
Halekas, J. S.
2017-05-01
The Mars Atmosphere and Volatile EvolutioN (MAVEN) mission measures both the upstream solar wind and collisional products from energetic neutral hydrogen atoms that precipitate into the upper atmosphere after their initial formation by charge exchange with exospheric hydrogen. By computing the ratio between the densities of these populations, we derive a robust measurement of the column density of exospheric hydrogen upstream of the Martian bow shock. By comparing with Chamberlain-type model exospheres, we place new constraints on the structure and escape rates of exospheric hydrogen, derived from observations sensitive to a different and potentially complementary column from most scattered sunlight observations. Our observations provide quantitative estimates of the hydrogen exosphere with nearly complete temporal coverage, revealing order of magnitude seasonal changes in column density and a peak slightly after perihelion, approximately at southern summer solstice. The timing of this peak suggests either a lag in the response of the Martian atmosphere to solar inputs or a seasonal effect driven by lower atmosphere dynamics. The high degree of seasonal variability implied by our observations suggests that the Martian atmosphere and the thermal escape of light elements depend sensitively on solar inputs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.
2014-02-01
This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the safety relief valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of cesium chemical form for different accident progressions.
Mass balance assessment for mercury in Lake Champlain
Gao, N.; Armatas, N.G.; Shanley, J.B.; Kamman, N.C.; Miller, E.K.; Keeler, G.J.; Scherbatskoy, T.; Holsen, T.M.; Young, T.; McIlroy, L.; Drake, S.; Olsen, Bill; Cady, C.
2006-01-01
A mass balance model for mercury in Lake Champlain was developed in an effort to understand the sources, inventories, concentrations, and effects of mercury (Hg) contamination in the lake ecosystem. To construct the mass balance model, air, water, and sediment were sampled as a part of this project and other research/monitoring projects in the Lake Champlain Basin. This project produced a STELLA-based computer model and quantitative apportionments of the principal input and output pathways of Hg for each of 13 segments in the lake. The model Hg concentrations in the lake were consistent with measured concentrations. Specifically, the modeling identified surface water inflows as the largest direct contributor of Hg into the lake. Direct wet deposition to the lake was the second largest source of Hg followed by direct dry deposition. Volatilization and sedimentation losses were identified as the two major removal mechanisms. This study significantly improves previous estimates of the relative importance of Hg input pathways and of wet and dry deposition fluxes of Hg into Lake Champlain. It also provides new estimates of volatilization fluxes across different lake segments and sedimentation loss in the lake. © 2006 American Chemical Society.
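A single-segment version of the STELLA-style mass balance can be sketched as a box model with first-order losses; all rate constants and loadings below are illustrative, not the Lake Champlain calibration:

```python
# Single-segment mercury mass balance (STELLA-style box model):
# dM/dt = inflow + wet_dep + dry_dep - (volatilization + sedimentation + outflow)
V = 2.0e10            # segment volume, m^3 (illustrative)
Q_in = 3.0e9          # inflow, m^3/yr
C_in = 2.0e-9         # inflow Hg concentration, kg/m^3 (2 ng/L)
wet, dry = 12.0, 5.0  # atmospheric deposition, kg/yr
k_vol, k_sed = 0.4, 0.3   # first-order loss rates, 1/yr

M = 50.0              # initial Hg mass in the water column, kg
dt = 0.01
for _ in range(int(50 / dt)):     # integrate 50 years toward steady state
    inflow = Q_in * C_in + wet + dry
    losses = (k_vol + k_sed + Q_in / V) * M
    M += dt * (inflow - losses)
print(M, "kg at quasi-steady state")
```

Linking 13 such segments with inter-segment exchange terms reproduces the whole-lake structure described in the abstract.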
40 CFR 96.76 - Additional requirements to provide heat input data for allocations purposes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... heat input data for allocations purposes. 96.76 Section 96.76 Protection of Environment ENVIRONMENTAL... to provide heat input data for allocations purposes. (a) The owner or operator of a unit that elects... also monitor and report heat input at the unit level using the procedures set forth in part 75 of this...
40 CFR 96.76 - Additional requirements to provide heat input data for allocations purposes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... heat input data for allocations purposes. 96.76 Section 96.76 Protection of Environment ENVIRONMENTAL... to provide heat input data for allocations purposes. (a) The owner or operator of a unit that elects... also monitor and report heat input at the unit level using the procedures set forth in part 75 of this...
40 CFR 96.76 - Additional requirements to provide heat input data for allocations purposes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... heat input data for allocations purposes. 96.76 Section 96.76 Protection of Environment ENVIRONMENTAL... to provide heat input data for allocations purposes. (a) The owner or operator of a unit that elects... also monitor and report heat input at the unit level using the procedures set forth in part 75 of this...
40 CFR 96.76 - Additional requirements to provide heat input data for allocations purposes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... heat input data for allocations purposes. 96.76 Section 96.76 Protection of Environment ENVIRONMENTAL... to provide heat input data for allocations purposes. (a) The owner or operator of a unit that elects... also monitor and report heat input at the unit level using the procedures set forth in part 75 of this...
Long-term Changes in Water Quality and Productivity in the Patuxent River Estuary: 1985 to 2003
We conducted a quantitative assessment of estuarine ecosystem responses to reduced phosphorus and nitrogen loading from sewage treatment facilities and to variability in freshwater flow and non-point nutrient inputs to the Patuxent River estuary. We analyzed a 19-year data set o...
Comparing the High School English Curriculum in Turkey through Multi-Analysis
ERIC Educational Resources Information Center
Batdi, Veli
2017-01-01
This study aimed to compare the High School English Curriculum (HSEC) in accordance with Stufflebeam's context, input, process and product (CIPP) model through multi-analysis. The research includes both quantitative and qualitative aspects. A descriptive analysis was operated through Rasch Measurement Model; SPSS program for the quantitative…
1986-04-14
[Figure residue from a program-phase chart: CONCEPT DEFINITION, DEVELOPMENT/TEST, OPERATION AND MAINTENANCE; track projected programs; review critical issues; prepare inputs to PMO] ...development and beyond, evaluation criteria must include quantitative goals (the desired value) and thresholds (the value beyond which the charac
Parenting Styles and Attachment in School-Aged Children Who Stutter
ERIC Educational Resources Information Center
Lau, Su Re; Beilby, Janet M.; Byrnes, Michelle L.; Hennessey, Neville W.
2012-01-01
Parental input has been described as influential in early childhood stuttering yet the exact nature of this influence remains equivocal. The present study aimed to examine whether quantitative measures of parenting styles, parent and peer attachment patterns, and parent- and self-reported child behaviour could differentiate between school-aged…
Analytical aids in land management planning
David R. Betters
1978-01-01
Quantitative techniques may be applied to aid in completing various phases of land management planning. Analytical procedures which have been used include a procedure for public involvement, PUBLIC; a matrix information generator, MAGE5; an allocation procedure, linear programming (LP); and an input-output economic analysis (EA). These techniques have proven useful in...
Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi
2015-01-01
Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate the input function without blood sampling. We performed 18F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. The standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization with body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIFNS) and (2) SIF calibrated by a single blood sampling as proposed previously (EIF1S). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIFNS-, and EIF1S-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIFNS was highly correlated with those derived from AIF and EIF1S. Preliminary comparison between AIF and EIFNS in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIFNS method might serve as a noninvasive substitute for individual AIF measurement. PMID:25966947
Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Hiroyuki; Yamamoto, Yuka; Hatakeyama, Tetsuhiro; Nishiyama, Yoshihiro
2018-05-01
CBF, OEF, and CMRO2 images can be quantitatively assessed using PET. Their calculation requires arterial input functions, which involve an invasive procedure. The aim of the present study was to develop a non-invasive approach with image-derived input functions (IDIFs) using an image from an ultra-rapid O2 and C15O2 protocol. Our technique consists of using a formula to express the input using the tissue curve with rate constants. For multiple tissue curves, the rate constants were estimated so as to minimize the differences of the inputs derived from the multiple tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as an IDIF. The method was tested in human subjects (n = 24). The estimated IDIFs were well reproduced against the measured ones. The difference in the calculated CBF, OEF, and CMRO2 values between the two methods was small (<10%) against the invasive method, and the values showed tight correlations (r = 0.97). The simulation showed that errors associated with the assumed parameters were less than ∼10%. Our results demonstrate that IDIFs can be reconstructed from tissue curves, suggesting the possibility of using a non-invasive technique to assess CBF, OEF, and CMRO2.
Paterson, Eric; Midwood, Andrew J; Millard, Peter
2009-01-01
For soils in carbon balance, losses of soil carbon from biological activity are balanced by organic inputs from vegetation. Perturbations, such as climate or land use change, have the potential to disrupt this balance and alter soil-atmosphere carbon exchanges. As the quantification of soil organic matter stocks is an insensitive means of detecting changes, certainly over short timescales, there is a need to apply methods that facilitate a quantitative understanding of the biological processes underlying soil carbon balance. We outline the processes by which plant carbon enters the soil and critically evaluate isotopic methods to quantify them. Then, we consider the balancing CO(2) flux from soil and detail the importance of partitioning the sources of this flux into those from recent plant assimilate and those from native soil organic matter. Finally, we consider the interactions between the inputs of carbon to soil and the losses from soil mediated by biological activity. We emphasize the key functional role of the microbiota in the concurrent processing of carbon from recent plant inputs and native soil organic matter. We conclude that quantitative isotope labelling and partitioning methods, coupled to those for the quantification of microbial community substrate use, offer the potential to resolve the functioning of the microbial control point of soil carbon balance in unprecedented detail.
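The isotopic partitioning of soil CO2 efflux described above reduces, in its simplest form, to a two-source mixing model; the delta-13C end-member values below are illustrative:

```python
def source_fraction(delta_sample, delta_new, delta_old):
    """Two-source delta-13C mixing model: fraction of respired CO2 derived
    from recent plant inputs versus native soil organic matter."""
    return (delta_sample - delta_old) / (delta_new - delta_old)

# Illustrative values (per mil): C4-labelled plant input vs. C3 native SOM
f_new = source_fraction(delta_sample=-21.0, delta_new=-12.0, delta_old=-27.0)
print(f"{f_new:.0%} of soil CO2 efflux from recent plant carbon")
```

The same arithmetic underlies label-based partitioning: the sharper the isotopic separation between end-members, the smaller the propagated uncertainty in the flux split.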
Muthu, Satish; Childress, Amy; Brant, Jonathan
2014-08-15
Membrane fouling was assessed from a fundamental standpoint within the context of the Derjaguin-Landau-Verwey-Overbeek (DLVO) model. The DLVO model requires that the properties of the membrane and foulant(s) be quantified. Membrane surface charge (zeta potential) and free energy values are characterized using streaming potential and contact angle measurements, respectively. Comparing theoretical assessments of membrane-colloid interactions between research groups requires that the variability of the measured inputs be established. The impact of such variability in input values on the outcomes of interfacial models must be quantified to determine an acceptable variance in inputs. An interlaboratory study was conducted to quantify the variability in streaming potential and contact angle measurements when using standard protocols. The propagation of uncertainty from these errors was evaluated in terms of its impact on the quantitative and qualitative conclusions drawn from extended DLVO (XDLVO) calculated interaction terms. The error introduced into XDLVO calculated values was of the same magnitude as the calculated free energy values at contact and at any given separation distance. For two independent laboratories to draw similar quantitative conclusions regarding membrane-foulant interfacial interactions, the standard error in contact angle values must be ⩽2.5°, while that for the zeta potential values must be ⩽7 mV. Copyright © 2014 Elsevier Inc. All rights reserved.
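The propagation of measurement uncertainty can be sketched by Monte Carlo, pushing the reported standard errors through an interaction-energy function; the energy model below is a simple stand-in, not the full XDLVO expressions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
# Measured surface properties with the interlaboratory standard errors
theta = np.radians(rng.normal(55.0, 2.5, n))     # contact angle, deg -> rad
zeta  = rng.normal(-20e-3, 7e-3, n)              # zeta potential, V

# Stand-in interaction-energy model: an acid-base term that scales with
# cos(theta) plus an electrostatic term that scales with zeta^2 (the full
# XDLVO expressions are not reproduced here).
U = -3.0 * np.cos(theta) - 1.5e3 * zeta**2       # arbitrary energy units

print(f"U at contact: {U.mean():.2f} +/- {U.std():.2f}")
```

Comparing the spread of U with its mean is the test the abstract applies: when the propagated error is the same magnitude as the energy itself, interlaboratory conclusions diverge.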
Ludlow, Andrew T; Robin, Jerome D; Sayed, Mohammed; Litterst, Claudia M; Shelton, Dawne N; Shay, Jerry W; Wright, Woodring E
2014-07-01
The telomere repeat amplification protocol (TRAP) for the human reverse transcriptase, telomerase, is a PCR-based assay developed two decades ago and is still used for routine determination of telomerase activity. The TRAP assay can only reproducibly detect ∼2-fold differences and is only quantitative when compared to internal standards and reference cell lines. The method generally involves laborious radioactive gel electrophoresis and is not conducive to high-throughput analyses. Recently, droplet digital PCR (ddPCR) technologies have become available that allow for absolute quantification of input deoxyribonucleic acid molecules following PCR. We describe the reproducibility and provide several examples of a droplet digital TRAP (ddTRAP) assay for telomerase activity, including quantitation of telomerase activity in single cells, telomerase activity across several common telomerase-positive cancer cell lines, and in human primary peripheral blood mononuclear cells following mitogen stimulation. Adaptation of the TRAP assay to digital format allows accurate and reproducible quantification of the number of telomerase-extended products (i.e. telomerase activity; 57.8 ± 7.5) in a single HeLa cell. The tools developed in this study allow changes in telomerase enzyme activity to be monitored on a single cell basis and may have utility in designing novel therapeutic approaches that target telomerase. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
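Absolute quantification in ddPCR rests on Poisson statistics over droplet counts, which can be sketched as follows; the droplet volume and counts are illustrative values:

```python
import numpy as np

def ddpcr_copies(n_total, n_negative, droplet_volume_nl=0.85):
    """Absolute quantification from droplet counts: Poisson-correct the
    fraction of negative droplets to copies per microliter of reaction."""
    lam = -np.log(n_negative / n_total)          # mean copies per droplet
    return lam / (droplet_volume_nl * 1e-3)      # copies per microliter

# e.g. 20,000 droplets of which 14,500 are negative
print(f"{ddpcr_copies(20000, 14500):.0f} copies/uL")
```

Because the estimate depends only on counting positive versus negative partitions, it needs no standard curve, which is what lets ddTRAP report telomerase-extended products in absolute numbers.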
Schram-Bijkerk, D; van Kempen, E; Knol, A B; Kruize, H; Staatsen, B; van Kamp, I
2009-10-01
Few quantitative health impact assessments (HIAs) of transport policies have been published so far, and a common methodology for such assessments is lacking. Our aim was to evaluate the usability of existing HIA methodology for quantifying the health effects of transport policies at the local level. The health impact of two simulated but realistic transport interventions - speed limit reduction and traffic re-allocation - was quantified by selecting traffic-related exposures and health endpoints, modelling population exposure, selecting exposure-effect relations, and estimating the number of local traffic-related cases and the disease burden, expressed in disability-adjusted life-years (DALYs), before and after the intervention. Exposure information was difficult to retrieve because of the local scale of the interventions, and exposure-effect relations for subgroups and combined effects were missing. Given the uncertainty in the outcomes originating from this kind of missing information, the simulated changes in population health from the two local traffic interventions were estimated to be small (<5%), except for the estimated 60% reduction in DALYs from fewer traffic accidents due to speed limit reduction. Quantitative HIA of transport policies at a local scale is possible, provided that data on exposures, the exposed population, and their baseline health status are available. The interpretation of the HIA information should be carried out in the context of the quality of input data and the assumptions and uncertainties of the analysis.
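The DALY bookkeeping behind such an assessment can be sketched in a few lines; the case counts and per-case disability weights are illustrative assumptions:

```python
# Minimal DALY bookkeeping for one traffic-related endpoint: each case
# contributes years lived with disability (YLD) and years of life lost (YLL).
def dalys(cases, yld_per_case, yll_per_case):
    return cases * (yld_per_case + yll_per_case)

before = dalys(cases=40, yld_per_case=0.2, yll_per_case=1.1)
after  = dalys(cases=16, yld_per_case=0.2, yll_per_case=1.1)  # 60% fewer accidents
print(before, after, f"{(before - after) / before:.0%} reduction")
```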
A quantitative risk assessment for the safety of carcase storage systems for scrapie infected farms.
Adkin, A; Jones, D L; Eckford, R L; Edwards-Jones, G; Williams, A P
2014-10-01
To determine the risk associated with the use of carcase storage vessels on a scrapie infected farm. A stochastic quantitative risk assessment was developed to determine the rate of accumulation and fate of scrapie in a novel low-input storage system. For an example farm infected with classical scrapie, a mean of 10^3.6 ovine oral ID50s was estimated to accumulate annually. Research indicates that the degradation of any prions present may range from insignificant to a magnitude of one or two logs over several months of storage. For infected farms, the likely partitioning of remaining prion into the sludge phase would necessitate the safe operation and removal of resulting materials from these systems. If complete mixing could be assumed, on average, the concentrations of infectivity are estimated to be slightly lower than that measured in placenta from infected sheep at lambing. This is the first quantitative assessment of the scrapie risk associated with fallen stock on farm and provides guidance to policy makers on the safety of one type of storage system and the relative risk when compared to other materials present on an infected farm. © 2014 Crown Copyright. Journal of Applied Microbiology © 2014 Society for Applied Microbiology. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
The UK waste input-output table: Linking waste generation to the UK economy.
Salemdeeb, Ramy; Al-Tabbaa, Abir; Reynolds, Christian
2016-10-01
In order to achieve a circular economy, there must be a greater understanding of the links between economic activity and waste generation. This study introduces the first version of the UK waste input-output table, which can be used to quantify both direct and indirect waste arisings across the supply chain. The proposed waste input-output table features 21 industrial sectors and 34 waste types and covers the 2010 time period. Using the waste input-output table, the study results quantitatively confirm that sectors with a long supply chain (i.e. manufacturing and services sectors) have higher indirect waste generation rates compared with industrial primary sectors (e.g. mining and quarrying) and sectors with a shorter supply chain (e.g. construction). Results also reveal that the construction sector and the mining and quarrying sector have the highest waste generation rates, at 742 and 694 tonnes per £1m of final demand, respectively. Owing to the aggregated format of this first version of the waste input-output table, the model does not address the relationship between waste generation and recycling activities. Therefore, an updated version of the waste input-output table is expected to be developed to consider this issue. Consequently, the expanded model would lead to a better understanding of waste and resource flows in the supply chain. © The Author(s) 2016.
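The direct-plus-indirect accounting of a waste input-output table follows the environmentally extended Leontief framework, sketched here for a toy two-sector economy rather than the 21-sector UK table:

```python
import numpy as np

# Inter-industry technical coefficients A (toy 2-sector economy) and
# direct waste coefficients w (tonnes of waste per £m of sector output).
A = np.array([[0.10, 0.25],
              [0.05, 0.15]])
w = np.array([300.0, 80.0])

L = np.linalg.inv(np.eye(2) - A)      # Leontief inverse (I - A)^-1
y = np.array([1.0, 0.0])              # £1m of final demand for sector 1

total_waste = w @ L @ y               # direct + indirect waste, tonnes
direct_waste = w @ y
print(direct_waste, total_waste)      # indirect share = total - direct
```

The gap between the two numbers is exactly the supply-chain (indirect) waste that the abstract attributes to long-chain sectors.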
Local and Long-Range Circuit Connections to Hilar Mossy Cells in the Dentate Gyrus
Sun, Yanjun; Grieco, Steven F.; Holmes, Todd C.
2017-01-01
Hilar mossy cells are the prominent glutamatergic cell type in the dentate hilus of the dentate gyrus (DG); they have been proposed to have critical roles in the DG network. To better understand how mossy cells contribute to DG function, we have applied new viral genetic and functional circuit mapping approaches to quantitatively map and compare local and long-range circuit connections of mossy cells and dentate granule cells in the mouse. The great majority of inputs to mossy cells consist of two parallel inputs from within the DG: an excitatory input pathway from dentate granule cells and an inhibitory input pathway from local DG inhibitory neurons. Mossy cells also receive a moderate degree of excitatory and inhibitory CA3 input from proximal CA3 subfields. Long range inputs to mossy cells are numerically sparse, and they are only identified readily from the medial septum and the septofimbrial nucleus. In comparison, dentate granule cells receive most of their inputs from the entorhinal cortex. The granule cells receive significant synaptic inputs from the hilus and the medial septum, and they also receive direct inputs from both distal and proximal CA3 subfields, which has been underdescribed in the existing literature. Our slice-based physiological mapping studies further supported the identified circuit connections of mossy cells and granule cells. Together, our data suggest that hilar mossy cells are major local circuit integrators and they exert modulation of the activity of dentate granule cells as well as the CA3 region through “back-projection” pathways. PMID:28451637
NASA Astrophysics Data System (ADS)
Wang, Weiguang; Li, Changni; Xing, Wanqiu; Fu, Jianyu
2017-12-01
Representing the atmospheric evaporative capability for a hypothetical reference surface, potential evapotranspiration (PET) determines the upper limit of actual evapotranspiration and is an important input to hydrological models. Because present climate models do not give direct estimates of PET when simulating the hydrological response to future climate change, PET must be estimated first, and the estimate is subject to uncertainty arising from the many existing formulae and the differing reliabilities of their input data. Using four different PET estimation approaches, i.e., the more physically based Penman (PN) equation with less reliable input variables, the more empirical radiation-based Priestley-Taylor (PT) equation with relatively dependable downscaled data, the simple temperature-based Hamon (HM) equation with the most reliable downscaled variable, and PET downscaled directly by a statistical downscaling model, this paper investigated the differences in runoff projections caused by the alternative PET methods using a well-calibrated abcd monthly hydrological model. Three catchments representing a large climatic diversity, i.e., the Luanhe River Basin, the Source Region of the Yellow River, and the Ganjiang River Basin, were chosen as examples to illustrate this issue. The results indicated that although the four methods produced similar monthly patterns of PET over the period 2021-2050 for each catchment, the magnitudes of PET still differed slightly, especially in spring and summer months in the Luanhe River Basin and the Source Region of the Yellow River, which have relatively dry climates. The apparent discrepancies in the magnitude of change in future runoff, and even the opposite directions of change for summer months in the Luanhe River Basin and spring months in the Source Region of the Yellow River, indicated that PET-method-related uncertainty occurred, especially in the Luanhe River Basin and the Source Region of the Yellow River, which have smaller aridity indices. Moreover, the possible reasons for the discrepancies in uncertainty between the three catchments were discussed quantitatively through a contribution analysis based on the climatic elasticity method. This study provides a useful reference for comprehensively understanding the impacts of climate change on hydrological regimes and thus improving regional strategies for future water resource management.
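Two of the PET formulations compared above can be sketched directly; the Priestley-Taylor form uses FAO-56 psychrometric terms, while the Hamon coefficients vary between published versions and the form used here is an assumption:

```python
import numpy as np

def priestley_taylor_pet(t_c, rn_mj, g_mj=0.0, alpha=1.26):
    """Priestley-Taylor PET (mm/day) from air temperature (C) and net
    radiation (MJ/m^2/day), using FAO-56 forms for the psychrometric terms."""
    es = 0.6108 * np.exp(17.27 * t_c / (t_c + 237.3))        # kPa
    delta = 4098.0 * es / (t_c + 237.3) ** 2                 # kPa/C
    gamma, lam = 0.066, 2.45                                 # kPa/C, MJ/kg
    return alpha * (delta / (delta + gamma)) * (rn_mj - g_mj) / lam

def hamon_pet(t_c, daylength_h):
    """One common form of the Hamon equation (mm/day); coefficients differ
    between published versions, so treat this as illustrative."""
    es = 0.6108 * np.exp(17.27 * t_c / (t_c + 237.3))        # kPa
    return 29.8 * daylength_h * es / (t_c + 273.2)

print(priestley_taylor_pet(t_c=20.0, rn_mj=12.0))   # ~4 mm/day
print(hamon_pet(t_c=20.0, daylength_h=13.0))        # ~3 mm/day, same order
```

The contrast illustrates the trade-off in the abstract: PT needs a radiation input that must itself be downscaled, whereas HM needs only temperature and daylength.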
Frequency and function in the basal ganglia: the origins of beta and gamma band activity.
Blenkinsop, Alexander; Anderson, Sean; Gurney, Kevin
2017-07-01
Neuronal oscillations in the basal ganglia have been observed to correlate with behaviours, although the causal mechanisms and functional significance of these oscillations remain unknown. We present a novel computational model of the healthy basal ganglia, constrained by single unit recordings from non-human primates. When the model is run using inputs that might be expected during performance of a motor task, the network shows emergent phenomena: it functions as a selection mechanism and shows spectral properties that match those seen in vivo. Beta frequency oscillations are shown to require pallido-striatal feedback, and occur with behaviourally relevant cortical input. Gamma oscillations arise in the subthalamic-globus pallidus feedback loop, and occur during movement. The model provides a coherent framework for the study of spectral, temporal and functional analyses of the basal ganglia and lays the foundation for an integrated approach to study basal ganglia pathologies such as Parkinson's disease in silico. Neural oscillations in the basal ganglia (BG) are well studied yet remain poorly understood. Behavioural correlates of spectral activity are well described, yet a quantitative hypothesis linking time domain dynamics and spectral properties to BG function has been lacking. We show, for the first time, that a unified description is possible by interpreting previously ignored structure in data describing globus pallidus interna responses to cortical stimulation. These data were used to expose a pair of distinctive neuronal responses to the stimulation. This observation formed the basis for a new mathematical model of the BG, quantitatively fitted to the data, which describes the dynamics in the data, and is validated against other stimulus protocol experiments. A key new result is that when the model is run using inputs hypothesised to occur during the performance of a motor task, beta and gamma frequency oscillations emerge naturally during static-force and movement, respectively, consistent with experimental local field potentials. This new model predicts that the pallido-striatum connection has a key role in the generation of beta band activity, and that the gamma band activity associated with motor task performance has its origins in the pallido-subthalamic feedback loop. The network's functionality as a selection mechanism also occurs as an emergent property, and closer fits to the data gave better selection properties. The model provides a coherent framework for the study of spectral, temporal and functional analyses of the BG and therefore lays the foundation for an integrated approach to study BG pathologies such as Parkinson's disease in silico. © 2017 The Authors. The Journal of Physiology © 2017 The Physiological Society.
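As an illustration of the kind of mechanism invoked here, the following sketch simulates a generic delayed excitatory-inhibitory rate-model loop (in the spirit of the subthalamo-pallidal circuit). All parameters and the sigmoidal rate function are illustrative; this is not the authors' quantitatively fitted model, and the emergent frequency depends entirely on the assumed delays and gains:

```python
import numpy as np

def f(x):  # sigmoidal rate function (illustrative)
    return 1.0 / (1.0 + np.exp(-x))

dt, tau, delay = 1e-4, 0.010, 0.006       # s; 6 ms conduction delay (assumed)
d = int(delay / dt)
n = 20000                                  # 2 s of simulated time
S = np.zeros(n); G = np.zeros(n)           # STN-like and GPe-like rates
S[0], G[0] = 0.1, 0.1
ctx = 1.5                                  # tonic cortical drive to S
for t in range(1, n):
    s_del = S[max(t - d, 0)]               # delayed excitation of G
    g_del = G[max(t - d, 0)]               # delayed inhibition of S
    S[t] = S[t-1] + dt / tau * (-S[t-1] + f(ctx - 8.0 * g_del))
    G[t] = G[t-1] + dt / tau * (-G[t-1] + f(10.0 * s_del - 2.0))

# Estimate the oscillation frequency from upward zero crossings
x = S[n//2:] - S[n//2:].mean()
crossings = np.where(np.diff(np.sign(x)) > 0)[0]
if len(crossings) > 1:
    print("approx. frequency: %.1f Hz" % (1.0 / (np.mean(np.diff(crossings)) * dt)))
```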
A Bayesian method to rank different model forecasts of the same volcanic ash cloud: Chapter 24
Denlinger, Roger P.; Webley, P.; Mastin, Larry G.; Schwaiger, Hans F.
2012-01-01
Volcanic eruptions often spew fine ash high into the atmosphere, where it is carried downwind, forming long ash clouds that disrupt air traffic and pose a hazard to air travel. To mitigate such hazards, the ash-hazards community must assess the risk of ash ingestion for any flight path and provide robust and accurate forecasts of volcanic ash dispersal. We provide a quantitative and objective method to evaluate the efficacy of ash dispersal estimates from different models, using Bayes' theorem to assess the predictions that each model makes about ash dispersal. We incorporate model and measurement uncertainty and produce a posterior probability for the model input parameters. The integral of the posterior over all possible combinations of model inputs determines the evidence for each model and is used to compare models. We compare two different types of transport models, an Eulerian model (Ash3d) and a Lagrangian model (PUFF), as applied to the 2010 eruptions of Eyjafjallajökull volcano in Iceland. The evidence for each model benefits from common physical characteristics of ash dispersal from an eruption column and provides a measure of how well each model forecasts cloud transport. Given the complexity of the wind fields, we find that the differences between these models depend upon the differences in the way the models disperse ash into the wind from the source plume. With continued observation, the accuracy of the estimates made by each model increases, increasing the efficacy of each model's ability to simulate ash dispersal.
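A minimal sketch of the evidence calculation described above, assuming a single scalar input parameter, a flat prior, Gaussian measurement error, and two hypothetical toy dispersal models (not Ash3d or PUFF):

```python
import numpy as np

def log_evidence(y, model, thetas, prior, sigma):
    """Grid approximation to the model evidence p(y | model).

    y      : observed concentrations (1-D array)
    model  : callable theta -> predicted concentrations
    thetas : 1-D grid over the model's input parameter
    prior  : prior density on the grid (integrates to 1)
    sigma  : assumed Gaussian measurement error
    """
    dtheta = thetas[1] - thetas[0]
    loglik = np.array([
        -0.5 * np.sum(((y - model(t)) / sigma) ** 2)
        - 0.5 * y.size * np.log(2 * np.pi * sigma ** 2)
        for t in thetas])
    m = loglik.max()  # log-sum-exp for numerical stability
    return m + np.log(np.sum(np.exp(loglik - m) * prior * dtheta))

thetas = np.linspace(0.1, 5.0, 500)
prior = np.ones_like(thetas) / (thetas[-1] - thetas[0])   # flat prior
y = np.array([1.0, 0.8, 0.5, 0.3])                        # synthetic observations
model_a = lambda t: np.exp(-t * np.arange(4))             # exponential decay
model_b = lambda t: 1.0 / (1.0 + t * np.arange(4))        # algebraic decay
bf = np.exp(log_evidence(y, model_a, thetas, prior, 0.1)
            - log_evidence(y, model_b, thetas, prior, 0.1))
print("Bayes factor A vs B:", bf)
```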
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahi-Anwar, M; Lo, P; Kim, H
Purpose: The use of Quantitative Imaging (QI) methods in clinical trials requires both verification of adherence to a specified protocol and an assessment of scanner performance under that protocol, which are currently accomplished manually. This work introduces automated phantom identification and image QA measure extraction towards a fully automated CT phantom QA system that performs these functions and facilitates the use of QI methods in clinical trials. Methods: This study used a retrospective cohort of CT phantom scans from existing clinical trial protocols, totaling 84 phantoms across 3 phantom types, acquired with various scanners and protocols. The QA system identifies the input phantom scan through an ensemble of threshold-based classifiers. Each classifier, corresponding to one phantom type, contains a template slice that is compared to the input scan on a slice-by-slice basis, yielding slice-wise similarity metric values. Pre-trained thresholds (established from a training set of phantom images matching the template type) are used to filter the similarity distribution, and the slice with the most optimal local mean similarity, with its local neighboring slices meeting the threshold requirement, is chosen as the classifier's matched slice (if it exists). The classifier whose matched slice possesses the most optimal local mean similarity is then chosen as the ensemble's best match. If a best matching slice exists, the image QA algorithm and ROIs corresponding to the matching classifier are used to extract the image QA measures. Results: Automated phantom identification performed with 84.5% accuracy and 88.8% sensitivity on 84 phantoms. Automated image quality measurements (following standard protocol) on identified water phantoms (n=35) matched user QA decisions with 100% accuracy. Conclusion: We provide a fully automated CT phantom QA system consistent with manual QA performance. Further work will include a parallel component to automatically verify image acquisition parameters and automated adherence to specifications. Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics; NIH Grant support from: U01 CA181156.
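A schematic sketch of the described slice-matching step, with a simple normalized cross-correlation similarity, a pre-trained threshold, and a local-neighbourhood mean criterion; the similarity metric and window size are assumptions, not taken from the abstract:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized slices."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def match_slice(volume, template, threshold, half_window=2):
    """Return the index of the best matching slice, or None if no window
    of neighbouring slices clears the pre-trained threshold."""
    sims = np.array([ncc(sl, template) for sl in volume])
    best, best_score = None, -np.inf
    for i in range(half_window, len(sims) - half_window):
        window = sims[i - half_window:i + half_window + 1]
        if window.min() >= threshold and window.mean() > best_score:
            best, best_score = i, window.mean()
    return best

# Toy volume with a small region resembling the template around slice 20
rng = np.random.default_rng(0)
volume = rng.random((40, 64, 64))
template = volume[20].copy()
volume[18:23] = template + 0.05 * rng.random((5, 64, 64))
print(match_slice(volume, template, threshold=0.5))  # -> 20
```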
Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun
2014-01-01
Quantitative analysis of the flue gas of a natural gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input based on a radial basis function neural network (RBFNN) is used for component prediction of flue gas. In the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden layer nodes of the RBFNN are the extension term of the original independent input matrix. Then, partial least squares regression is performed on the extended input matrix and the output matrix to establish the component prediction model of the flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used for estimating the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of the prediction values of the proposed method for methane, carbon monoxide, and carbon dioxide are reduced by 4.74%, 21.76%, and 5.32%, respectively, compared to those of PLS. Hence, the proposed method has higher predictive capability and better robustness.
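A minimal sketch of the input-extension idea using scikit-learn, assuming KMeans-chosen RBF centres (the abstract does not specify the centre-selection scheme) and toy spectra in place of the NIR dataset:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.random((100, 50))                        # toy spectra (samples x wavelengths)
y = np.sin(X[:, :5].sum(axis=1, keepdims=True))  # nonlinear target concentration

# Hidden layer of an RBF network: kernel responses to learned centres
centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
H = rbf_kernel(X, centers, gamma=0.5)

# Extended input matrix = [original inputs | RBF hidden outputs], then PLS
X_ext = np.hstack([X, H])
pls = PLSRegression(n_components=8).fit(X_ext, y)
rmse = np.sqrt(np.mean((pls.predict(X_ext) - y) ** 2))
print("training RMSE:", rmse)
```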
Evaluation of input devices for teleoperation of concentric tube continuum robots for surgical tasks
NASA Astrophysics Data System (ADS)
Fellmann, Carolin; Kashi, Daryoush; Burgner-Kahrs, Jessica
2015-03-01
For minimally invasive surgeries in which conventional surgical instruments cannot reach the surgical site because of their straight, rigid structure, concentric tube continuum robots are a promising technology owing to their small size (comparable to a needle) and maneuverability. These flexible, compliant manipulators can easily access hard-to-reach anatomical structures, e.g. by turning around corners. By teleoperating the robot, the surgeon stays in direct control at all times. In this paper, three off-the-shelf input devices are considered for teleoperation of a concentric tube continuum robot: a 3D mouse, a gamepad, and a 3-degree-of-freedom haptic input device. Three tasks that mimic relevant surgical maneuvers were performed by 12 subjects using each input device: reaching specific locations, picking and placing objects from one location to another, and approaching the surgical site through a restricted pathway. We present quantitative results (task completion time, accuracy, etc.), a statistical analysis, and empirical results (questionnaires). Overall, the performance of subjects using the 3D mouse was superior to their performance with the other input devices. The subjects' subjective ranking of the 3D mouse confirms this result.
Processing Oscillatory Signals by Incoherent Feedforward Loops
Zhang, Carolyn; You, Lingchong
2016-01-01
From the timing of amoeba development to the maintenance of stem cell pluripotency, many biological signaling pathways exhibit the ability to differentiate between pulsatile and sustained signals in the regulation of downstream gene expression. While the networks underlying this signal decoding are diverse, many are built around a common motif, the incoherent feedforward loop (IFFL), where an input simultaneously activates an output and an inhibitor of the output. With appropriate parameters, this motif can exhibit temporal adaptation, where the system is desensitized to a sustained input. This property serves as the foundation for distinguishing input signals with varying temporal profiles. Here, we use quantitative modeling to examine another property of IFFLs—the ability to process oscillatory signals. Our results indicate that the system’s ability to translate pulsatile dynamics is limited by two constraints. The kinetics of the IFFL components dictate the input range for which the network is able to decode pulsatile dynamics. In addition, a match between the network parameters and input signal characteristics is required for optimal “counting”. We elucidate one potential mechanism by which information processing occurs in natural networks, and our work has implications in the design of synthetic gene circuits for this purpose. PMID:27623175
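A minimal sketch of a type-1 IFFL with illustrative parameters, showing the temporal adaptation described above: on a sustained step input the output pulses and then relaxes as the inhibitor accumulates.

```python
import numpy as np

def simulate_iffl(u_of_t, t_end=50.0, dt=0.01, K=1.0, n=4):
    """Type-1 incoherent feedforward loop (illustrative parameters):
    the input u activates both the output Y and its inhibitor Z,
    and Z represses Y through a Hill term."""
    steps = int(t_end / dt)
    t = np.arange(steps) * dt
    Z = np.zeros(steps); Y = np.zeros(steps)
    for i in range(1, steps):
        u = u_of_t(t[i])
        dZ = u - 0.5 * Z[i-1]                            # input-driven inhibitor
        dY = u / (1.0 + (Z[i-1] / K) ** n) - Y[i-1]      # repressed output
        Z[i] = Z[i-1] + dt * dZ
        Y[i] = Y[i-1] + dt * dY
    return t, Y

# Sustained step: Y pulses, then adapts as Z accumulates
t, y_step = simulate_iffl(lambda t: 1.0 if t > 5 else 0.0)
print("peak:", y_step.max(), " adapted level:", y_step[-1])
```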
Diagnosable structured logic array
NASA Technical Reports Server (NTRS)
Whitaker, Sterling (Inventor); Miles, Lowell (Inventor); Gambles, Jody (Inventor); Maki, Gary K. (Inventor)
2009-01-01
A diagnosable structured logic array and associated process is provided. A base cell structure is provided comprising a logic unit comprising a plurality of input nodes, a plurality of selection nodes, and an output node; a plurality of switches coupled to the selection nodes, where each switch comprises a plurality of input lines, a selection line, and an output line; a memory cell coupled to the output node; and a test address bus and a program control bus coupled to the plurality of input lines and the selection lines of the plurality of switches. A state on each of the plurality of input nodes is verifiably loaded and read from the memory cell. A trusted memory block is provided. The associated process is provided for testing and verifying a plurality of truth table inputs of the logic unit.
User’s guide for MapMark4GUI—A graphical user interface for the MapMark4 R package
Shapiro, Jason
2018-05-29
MapMark4GUI is an R graphical user interface (GUI) developed by the U.S. Geological Survey to support user implementation of the MapMark4 R statistical software package. MapMark4 was developed by the U.S. Geological Survey to implement probability calculations for simulating undiscovered mineral resources in quantitative mineral resource assessments. The GUI provides an easy-to-use tool to input data, run simulations, and format output results for the MapMark4 package. The GUI is written and accessed in the R statistical programming language. This user’s guide includes instructions on installing and running MapMark4GUI and descriptions of the statistical output processes, output files, and test data files.
NASA Astrophysics Data System (ADS)
Anagnostopoulos, Christos Nikolaos; Vovoli, Eftichia
An emotion recognition framework based on sound processing could improve services in human-computer interaction. Various quantitative speech features obtained from sound processing of acted speech were tested as to whether they are sufficient to discriminate between seven emotions. Multilayered perceptrons were trained to classify gender and emotions on the basis of a 24-input vector, which provides information about the prosody of the speaker over the entire sentence using statistics of sound features. Several experiments were performed and the results are presented analytically. Emotion recognition was successful when speakers and utterances were "known" to the classifier. However, severe misclassifications occurred in the utterance-independent framework. Nevertheless, the proposed feature vector achieved promising results for utterance-independent recognition of high- and low-arousal emotions.
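For concreteness, the classifier setup (a multilayer perceptron over a 24-element prosodic feature vector with seven emotion classes) can be sketched with scikit-learn; the data here are random placeholders, so the score is near chance:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
X = rng.standard_normal((700, 24))   # 24 prosodic statistics per utterance
y = rng.integers(0, 7, 700)          # seven emotion labels (placeholder data)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))  # ~1/7 on random features
```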
Local facet approximation for image stitching
NASA Astrophysics Data System (ADS)
Li, Jing; Lai, Shiming; Liu, Yu; Wang, Zhengming; Zhang, Maojun
2018-01-01
Image stitching aims at eliminating multiview parallax and generating a seamless panorama given a set of input images. This paper proposes a local adaptive stitching method, which could achieve both accurate and robust image alignments across the whole panorama. A transformation estimation model is introduced by approximating the scene as a combination of neighboring facets. Then, the local adaptive stitching field is constructed using a series of linear systems of the facet parameters, which enables the parallax handling in three-dimensional space. We also provide a concise but effective global projectivity preserving technique that smoothly varies the transformations from local adaptive to global planar. The proposed model is capable of stitching both normal images and fisheye images. The efficiency of our method is quantitatively demonstrated in the comparative experiments on several challenging cases.
Liu, Hongtao; Xi, Youmin; Ren, Bingqun; Zhou, Heng
2012-01-01
Infrastructure has become an important topic in a variety of areas of the policy debate, including energy saving and climate change. In this paper, we use an energy input-output model to evaluate China's embodied energy use in infrastructure investment from 1992 to 2007. We also use a structural decomposition model to analyze the factors affecting the embodied energy use in infrastructure investment over the same time period. The results show that embodied energy use in infrastructure investment accounted for a significant and increasing proportion of China's total energy use, and they suggest that improper infrastructure investment represents an inefficient use of energy and other resources. Some quantitative information is provided for further determining the low-carbon development potential of China's economy. PMID:23365534
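The core of an energy input-output calculation is the Leontief inverse: embodied energy intensities are eps = e(I - A)^-1, applied to the final demand vector of infrastructure investment. A toy numerical sketch (coefficients are illustrative, not the paper's data):

```python
import numpy as np

# Toy 3-sector economy (illustrative numbers only)
A = np.array([[0.1, 0.2, 0.1],      # technical coefficients: inter-industry
              [0.2, 0.1, 0.3],      # inputs per unit of sector output
              [0.1, 0.1, 0.2]])
e = np.array([2.0, 5.0, 1.0])       # direct energy use per unit output (e.g. MJ/$)
y_infra = np.array([10.0, 30.0, 5.0])  # final demand from infrastructure investment

# Energy intensities via the Leontief inverse, then total embodied energy
eps = e @ np.linalg.inv(np.eye(3) - A)
print("embodied energy of infrastructure investment:", eps @ y_infra)
```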
Rail-to-rail differential input amplification stage with main and surrogate differential pairs
Britton, Jr., Charles Lanier; Smith, Stephen Fulton
2007-03-06
An operational amplifier input stage provides a symmetrical rail-to-rail input common-mode voltage without turning off either pair of complementary differential input transistors. Secondary, or surrogate, transistor pairs assume the function of the complementary differential transistors. The circuit also maintains essentially constant transconductance, constant slew rate, and constant signal-path supply current as it provides rail-to-rail operation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A
An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder modulator. A method of sampling the optical analog input signal is disclosed.
Quantitative analysis of four EMG amplifiers.
Perreault, E J; Hunter, I W; Kearney, R E
1993-09-01
Four typical EMG amplifiers were tested quantitatively to observe the diversity and specificity of available equipment. Gain, phase, common mode rejection ratio (CMRR) and noise characteristics were measured for each device. Various gain and phase responses were observed, each best suited to specific application areas. For all amplifiers, the CMRR was shown to decrease dramatically in the presence of input impedance mismatches of more than 10 kΩ between the two electrodes. Because such impedance mismatches are common on the skin surface, these results indicate that proper skin preparation is required to maximize the noise rejection capabilities of the tested amplifiers.
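For reference, the CMRR reported in such tests is conventionally expressed in decibels from the measured differential and common-mode gains; a minimal helper (values illustrative):

```python
import numpy as np

def cmrr_db(differential_gain, common_mode_gain):
    """Common-mode rejection ratio in dB: 20*log10(Ad / Acm)."""
    return 20.0 * np.log10(differential_gain / common_mode_gain)

print(cmrr_db(1000.0, 0.01))   # -> 100 dB
```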
Silicon solar cell process development, fabrication and analysis
NASA Technical Reports Server (NTRS)
Leung, D. C.; Iles, P. A.
1983-01-01
Measurements of minority carrier diffusion lengths were made on the small mesa diodes from HEM Si and SILSO Si. The results were consistent with previous Voc and Isc measurements. Only the medium-grain SILSO had a distinct advantage for the non-grain-boundary diodes. Substantial variations were observed for the HEM ingot 4141C. A quantitatively scaled light spot scan was also being developed for localized diffusion length measurements in polycrystalline silicon solar cells. A change to a more monochromatic input for the light spot scan yields greater sensitivity, and in principle quantitative measurement of local material quality is now possible.
Resonant inelastic scattering by use of geometrical optics.
Schulte, Jörg; Schweiger, Gustav
2003-02-01
We investigate the inelastic scattering on spherical particles that contain one concentric inclusion in the case of input and output resonances, using a geometrical optics method. The excitation of resonances is included in geometrical optics by use of the concept of tunneled rays. To get a quantitative description of optical tunneling on spherical surfaces, we derive appropriate Fresnel-type reflection and transmission coefficients for the tunneled rays. We calculate the inelastic scattering cross section in the case of input and output resonances and investigate the influence of the distribution of the active material in the particle as well as the influence of the inclusion on inelastic scattering.
Response of a lock-in amplifier to noise
NASA Astrophysics Data System (ADS)
Van Baak, D. A.; Herold, George
2014-08-01
The "lock-in" detection technique can extract, from a possibly noisy waveform, the amplitude of a signal that is synchronous with a known reference signal. This paper examines the effects of input noise on the output of a lock-in amplifier. We present quantitative predictions for the root-mean-square size of the resulting fluctuations and for the spectral density of the noise at the output of a lock-in amplifier. Our results show how a lock-in amplifier can be used to measure the spectral density of noise in the case of a noise-only input signal. Some implications of the theory, familiar and surprising, are tested against experimental data.
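A minimal numerical sketch of dual-phase lock-in detection — demodulate against in-phase and quadrature references, then low-pass (here simply average) — recovering a weak synchronous signal from broadband noise; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f_ref, T = 10_000.0, 100.0, 5.0           # sample rate (Hz), reference (Hz), duration (s)
t = np.arange(0.0, T, 1.0 / fs)
A, phi = 0.05, 0.3                             # weak synchronous signal
x = A * np.sin(2 * np.pi * f_ref * t + phi) + 0.5 * rng.standard_normal(t.size)

# Dual-phase demodulation; the long-time average plays the role of the
# lock-in's low-pass output filter
X = 2.0 * np.mean(x * np.sin(2 * np.pi * f_ref * t))
Y = 2.0 * np.mean(x * np.cos(2 * np.pi * f_ref * t))
print("R = %.4f (true %.4f), phase = %.2f rad (true %.2f)"
      % (np.hypot(X, Y), A, np.arctan2(Y, X), phi))
```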
Electromagnetic Pumps for Liquid Metal-Fed Electric Thrusters
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.; Markusic, Thomas E.
2007-01-01
Prototype designs of two separate pumps for use in electric propulsion systems with liquid lithium and bismuth propellants are presented. Both pumps are required to operate at elevated temperatures, and the lithium pump must additionally withstand the corrosive nature of the propellant. Compatibility of the pump materials and seals with lithium and bismuth were demonstrated through proof-of-concept experiments followed by post-experiment visual inspections. The pressure rise produced by the bismuth pump was found to be linear with input current and ranged from 0-9 kPa for corresponding input current levels of 0-30 A, showing good quantitative agreement with theoretical analysis.
40 CFR 60.4176 - Additional requirements to provide heat input data.
Code of Federal Regulations, 2011 CFR
2011-07-01
40 Protection of Environment, Vol. 6 (2011-07-01). Additional requirements to provide heat input data. The owner or operator of a Hg Budget unit that monitors and reports Hg... monitor and report heat input rate at the unit level using the procedures set forth in part 75 of this...
Qualitative and Quantitative Allocations of Program Funds in a Non-Profit Institution.
ERIC Educational Resources Information Center
Brown, Edward K.
Through a generalized application of the principles of planning-programming-budgeting (PPB), a process was devised for describing the methods of resource allocation in a nonprofit institution. By categorizing pupil service inputs according to basic skills, instruction, and supportive services, it became possible to identify meaningful service input…
USDA-ARS?s Scientific Manuscript database
Interspecific hybrids of tall caespitose Leymus cinereus (Scribn. & Merr.) A. Love and strongly rhizomatous Leymus triticoides (Buckley) Pilg. display a heterotic combination of traits important for perennial grass biomass production. The objectives of this study were to: 1) compare seasonal biomas...
Interviewing in Educational Research: A Bibliographic Essay.
ERIC Educational Resources Information Center
Chu, Felix T.
Different types of interviews serve different purposes; however, they all share a common goal of collecting data in different situations. The data may be factual in generating quantitative input for a research project, attitudinal in gauging public acceptance of a proposed educational policy, or used in gaining a better understanding of a certain…
Sociolinguistic Variation and Acquisition in Two-Way Language Immersion: Negotiating the Standard
ERIC Educational Resources Information Center
Starr, Rebecca Lurie
2016-01-01
This book investigates the acquisition of sociolinguistic knowledge in the early elementary school years of a Mandarin-English two-way immersion program in the United States. Using ethnographic observation and quantitative analysis of data, the author explores how input from teachers and classmates shapes students' language acquisition. The book…
A number of mathematical models have been developed to predict activated carbon column performance using single-solute isotherm data as inputs. Many assumptions are built into these models to account for kinetics of adsorption and competition for adsorption sites. This work...
Assessing Virtue: Measurement in Moral Education at Home and Abroad
ERIC Educational Resources Information Center
Alexander, Hanan A.
2016-01-01
How should we assess programs dedicated to education in virtue? One influential answer draws on quantitative research designs. By measuring the inputs and processes that produce the highest levels of virtue among participants according to some reasonable criterion, in this view, we can determine which programs engender the most desired results.…
USDA-ARS?s Scientific Manuscript database
The hop cultivar Cascade has been grown in the Pacific Northwestern U.S. with minimal input for management of powdery mildew (Podosphaera macularis) for nearly 20 years due to the putatively quantitative resistance in this cultivar. While partial resistance is generally thought to be more durable th...
NASA Astrophysics Data System (ADS)
Toman, Blaza; Nelson, Michael A.; Lippa, Katrice A.
2016-10-01
Chemical purity assessment using quantitative 1H-nuclear magnetic resonance spectroscopy is a method based on ratio references of mass and signal intensity of the analyte species to that of chemical standards of known purity. As such, it is an example of a calculation using a known measurement equation with multiple inputs. Though multiple samples are often analyzed during purity evaluations in order to assess measurement repeatability, the uncertainty evaluation must also account for contributions from inputs to the measurement equation. Furthermore, there may be other uncertainty components inherent in the experimental design, such as independent implementation of multiple calibration standards. As such, the uncertainty evaluation is not purely bottom up (based on the measurement equation) or top down (based on the experimental design), but inherently contains elements of both. This hybrid form of uncertainty analysis is readily implemented with Bayesian statistical analysis. In this article we describe this type of analysis in detail and illustrate it using data from an evaluation of chemical purity and its uncertainty for a folic acid material.
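The qHNMR measurement equation is a product of ratios, P_analyte = (I_a/I_s)(N_s/N_a)(M_a/M_s)(m_s/m_a)P_s. A Monte Carlo propagation sketch (a common practical stand-in for the full Bayesian treatment described in the article), with illustrative input values and benzoic acid assumed as the internal standard:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Measurement-equation inputs as (mean, standard uncertainty); values illustrative
I_ratio = rng.normal(1.467, 0.004, n)   # analyte/standard signal-intensity ratio
N_ratio = 2.0 / 3.0                     # proton-count ratio N_s/N_a (exact integers)
M_a, M_s = 441.40, 122.12               # molar masses g/mol (folic acid; benzoic acid assumed)
m_a = rng.normal(35.00, 0.02, n)        # weighed masses, mg
m_s = rng.normal(9.80, 0.02, n)
P_s = rng.normal(0.9999, 0.0002, n)     # certified purity of the standard

P_a = I_ratio * N_ratio * (M_a / M_s) * (m_s / m_a) * P_s
print("purity = %.4f +/- %.4f (k=1)" % (P_a.mean(), P_a.std()))
```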
SNP ID-info: SNP ID searching and visualization platform.
Yang, Cheng-Hong; Chuang, Li-Yeh; Cheng, Yu-Huei; Wen, Cheng-Hao; Chang, Phei-Lang; Chang, Hsueh-Wei
2008-09-01
Many association studies report relationships between single nucleotide polymorphisms (SNPs), diseases and cancers without, however, giving a SNP ID. Here, we developed the SNP ID-info freeware to provide SNP IDs from input genetic and physical information of genomes. The program provides an "SNP-ePCR" function to generate the full sequence from primer and template inputs. In "SNPosition," the sequence from SNP-ePCR or direct input is matched against SNP IDs from the SNP fasta sequence. The "SNP search" and "SNP fasta" functions accept information on SNPs within a cytogenetic band, a contig position, or keyword input. Finally, the SNP ID neighboring environment for the inputs is fully visualized in order of contig position and marked with SNP and flanking hits. The SNP identification problems inherent in NCBI SNP BLAST are also avoided. In conclusion, SNP ID-info provides a visualized SNP ID environment for multiple inputs and assists systematic SNP association studies. The server and user manual are available at http://bio.kuas.edu.tw/snpid-info.
SUS in nuclear medicine in Brazil: analysis and comparison of data provided by Datasus and CNEN
Pozzo, Lorena; Coura Filho, George; Osso Júnior, João Alberto; Squair, Peterson Lima
2014-01-01
Objective To investigate the outpatient access to nuclear medicine procedures by means of the Brazilian Unified Health System (SUS), analyzing the correspondence between data provided by this system and those from Comissão Nacional de Energia Nuclear (CNEN) (National Commission of Nuclear Energy). Materials and Methods Data provided by Datasus regarding number of scintillation chambers, outpatient procedures performed from 2008 to 2012, administrative responsibility for such procedures, type of service providers and outsourced services were retrieved and evaluated. Also, such data were compared with those from institutions certified by CNEN. Results The present study demonstrated that the system still lacks maturity in terms of correct data input, particularly regarding equipment available. It was possible to list the most common procedures and check the growth of the specialty along the study period. Private centers are responsible for most of the procedures covered and reimbursed by SUS. However, many healthcare facilities are not certified by CNEN. Conclusion Datasus provides relevant data for analysis as done in the present study, although some issues still require attention. The present study has quantitatively depicted the Brazilian reality regarding access to nuclear medicine procedures offered by/for SUS. PMID:25741070
DOE Office of Scientific and Technical Information (OSTI.GOV)
The purpose of the computer program is to generate system matrices that model the data acquisition process in dynamic single photon emission computed tomography (SPECT). The application is the reconstruction of dynamic data from projection measurements that provide the time evolution of activity uptake and washout in an organ of interest. The measurements of the time activity in the blood and organ tissue provide time-activity curves (TACs) that are used to estimate kinetic parameters. The program provides a correct model of the in vivo spatial and temporal distribution of radioactivity in organs: it accounts for the attenuation of the internally emitted radioactivity, accounts for the varying point response of the collimators, and correctly models the time variation of the activity in the organs. One important application in which the software is being used is measuring the arterial input function (AIF) in a dynamic SPECT study where the data are acquired from a slow camera rotation. Measurement of the AIF is essential to deriving quantitative estimates of regional myocardial blood flow using kinetic models. A study was performed to evaluate whether a slowly rotating SPECT system could provide accurate AIFs for myocardial perfusion imaging (MPI). Methods: Dynamic cardiac SPECT was first performed in human subjects at rest using a Philips Precedence SPECT/CT scanner. Dynamic measurements of Tc-99m-tetrofosmin in the myocardium were obtained using an infusion time of 2 minutes. Blood input, myocardium tissue, and liver TACs were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. Results: The spatiotemporal 4D ML-EM reconstructions gave more accurate reconstructions than did standard frame-by-frame 3D ML-EM reconstructions. From additional computer simulations and phantom studies, it was determined that a 1-minute infusion with a SPECT system rotation speed providing 180 degrees of projection data every 54 s can produce measurements of blood pool and myocardial TACs. This has important application in the calculation of coronary flow reserve using rest/stress dynamic cardiac SPECT. The system matrices are used in maximum likelihood and maximum a posteriori formulations in estimation theory, where iterative algorithms (conjugate gradient, expectation maximization, or maximum a posteriori probability algorithms) determine the solution that maximizes a likelihood or a posteriori probability function.
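A minimal sketch of the one-compartment TAC fitting step described above — tissue activity modeled as the AIF convolved with K1*exp(-k2*t) and fitted by nonlinear least squares; the input-function shape and all values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 10, 120)                 # minutes
dt = t[1] - t[0]
aif = 5.0 * t * np.exp(-1.5 * t)            # assumed arterial input function shape

def one_compartment(t, K1, k2):
    """Tissue TAC: C_t = K1 * (AIF convolved with exp(-k2*t))."""
    return K1 * np.convolve(aif, np.exp(-k2 * t))[:t.size] * dt

# Synthetic myocardial TAC with noise, then K1/k2 recovery
true = one_compartment(t, K1=0.8, k2=0.3)
noisy = true + 0.01 * np.random.default_rng(2).standard_normal(t.size)
(K1, k2), _ = curve_fit(one_compartment, t, noisy, p0=(0.5, 0.5))
print("K1 = %.3f /min, k2 = %.3f /min" % (K1, k2))
```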
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Rasztovits, Sascha; Roncat, Andreas; Zámolyi, András; Krawczyk, Dominik; Pfeifer, Norbert
2014-05-01
Aerial imagery derivatives collected by Unmanned Aerial Vehicle (UAV) technology can be used as input for the generation of high-resolution digital terrain model (DTM) data, alongside the Terrestrial Laser Scanning (TLS) method. Both types of datasets are suitable for detailed geological and geomorphometric analysis, because the data provide micro-topographical and structural geological information. Our study focuses on comparing the geological information that can be extracted from the two kinds of high-resolution DTMs. This research attempts to determine which technology is more effective for geological and geomorphological analysis. The measurements were taken at the Doren landslide (Vorarlberg, Austria), a complex rotational landslide situated in the Alpine molasse foreland. Several formations (Kojen Formation, Würmian glacial moraine sediments, Weissach Formation) were tectonized there in the course of the Alpine orogeny (Oberhauser et al, 2007). The typical fault direction is WSW-ENE. The UAV measurements, carried out simultaneously with the TLS campaign, focused on the landslide scarp. The original image resolution was 4 mm/pixel. Image matching was implemented at pyramid level 2, and the achieved resolution of the DTM was 0.05 m. The TLS dataset includes 18 scan positions and more than 300 million points for the whole landslide area; the achieved DTM has 0.2 m resolution. The steps of the geological and geomorphological analysis were: (1) visual interpretation based on field work and geological maps, and (2) quantitative DTM analysis. In the quantitative analysis, the input data provided by the different kinds of DTMs were used for further parameter calculations (e.g. slope, aspect, sigmaZ). In the next step, an automatic classification method was used for the detection of faults and the classification of different parts of the landslide. The conclusion was that for visual geological interpretation the UAV datasets are better, because the high-resolution texture information allows the extraction of digital geomorphological indicators. For quantitative analysis both datasets are informative, but the TLS DTM has the advantage of providing additional information on faults beneath the vegetation cover. These studies were carried out partly in the framework of the Hybrid 3D project financed by the Austrian Research Promotion Agency (FFG) and Von-Oben and 4D-IT; the contribution of ZsK was partly funded by Campus Hungary Internship TÁMOP-424B1; BSz contributed partly as an Alexander von Humboldt Research Fellow.
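As an illustration of the quantitative DTM parameters mentioned (slope, aspect), a short numpy sketch, assuming a grid with east = +x and rows increasing northward:

```python
import numpy as np

def slope_aspect(dtm, cellsize):
    """Slope (deg) and aspect (deg clockwise from north) from a gridded DTM.

    Assumes east = +x (columns) and rows increasing northward.
    """
    dz_dy, dz_dx = np.gradient(dtm, cellsize)   # axis 0 = y (rows), axis 1 = x
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, -dz_dy)) % 360.0  # downslope azimuth
    return slope, aspect

# Toy 0.2 m resolution DTM: a plane rising to the east, hence dipping west
x = np.arange(0, 20, 0.2)
dtm = np.tile(0.5 * x, (x.size, 1))
s, a = slope_aspect(dtm, cellsize=0.2)
print(s[50, 50], a[50, 50])   # ~26.6 deg slope, aspect ~270 deg (west)
```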
Event selection services in ATLAS
NASA Astrophysics Data System (ADS)
Cranshaw, J.; Cuhadar-Donszelmann, T.; Gallas, E.; Hrivnac, J.; Kenyon, M.; McGlone, H.; Malon, D.; Mambelli, M.; Nowak, M.; Viegas, F.; Vinek, E.; Zhang, Q.
2010-04-01
ATLAS has developed and deployed event-level selection services based upon event metadata records ("TAGS") and supporting file and database technology. These services allow physicists to extract events that satisfy their selection predicates from any stage of data processing and use them as input to later analyses. One component of these services is a web-based Event-Level Selection Service Interface (ELSSI). ELSSI supports event selection by integrating run-level metadata, luminosity-block-level metadata (e.g., detector status and quality information), and event-by-event information (e.g., triggers passed and physics content). The list of events that survive after some selection criterion is returned in a form that can be used directly as input to local or distributed analysis; indeed, it is possible to submit a skimming job directly from the ELSSI interface using grid proxy credential delegation. ELSSI allows physicists to explore ATLAS event metadata as a means to understand, qualitatively and quantitatively, the distributional characteristics of ATLAS data. In fact, the ELSSI service provides an easy interface to see the highest missing ET events or the events with the most leptons, to count how many events passed a given set of triggers, or to find events that failed a given trigger but nonetheless look relevant to an analysis based upon the results of offline reconstruction, and more. This work provides an overview of ATLAS event-level selection services, with an emphasis upon the interactive Event-Level Selection Service Interface.
Akl, Elie A.; El-Jardali, Fadi; Bou Karroum, Lama; El-Eid, Jamale; Brax, Hneine; Akik, Chaza; Osman, Mona; Hassan, Ghayda; Itani, Mira; Farha, Aida; Pottie, Kevin; Oliver, Sandy
2015-01-01
Background Effective coordination between organizations, agencies and bodies providing or financing health services in humanitarian crises is required to ensure efficiency of services, avoid duplication, and improve equity. The objective of this review was to assess how, during and after humanitarian crises, different mechanisms and models of coordination between organizations, agencies and bodies providing or financing health services compare in terms of access to health services and health outcomes. Methods We registered a protocol for this review in PROSPERO International prospective register of systematic reviews under number PROSPERO2014:CRD42014009267. Eligible studies included randomized and nonrandomized designs, process evaluations and qualitative methods. We electronically searched Medline, PubMed, EMBASE, Cochrane Central Register of Controlled Trials, CINAHL, PsycINFO, and the WHO Global Health Library and websites of relevant organizations. We followed standard systematic review methodology for the selection, data abstraction, and risk of bias assessment. We assessed the quality of evidence using the GRADE approach. Results Of 14,309 identified citations from databases and organizations' websites, we identified four eligible studies. Two studies used mixed-methods, one used quantitative methods, and one used qualitative methods. The available evidence suggests that information coordination between bodies providing health services in humanitarian crises settings may be effective in improving health systems inputs. There is additional evidence suggesting that management/directive coordination such as the cluster model may improve health system inputs in addition to access to health services. None of the included studies assessed coordination through common representation and framework coordination. The evidence was judged to be of very low quality. Conclusion This systematic review provides evidence of possible effectiveness of information coordination and management/directive coordination between organizations, agencies and bodies providing or financing health services in humanitarian crises. Our findings can inform the research agenda and highlight the need for improving conduct and reporting of research in this field. PMID:26332670
A single-layer platform for Boolean logic and arithmetic through DNA excision in mammalian cells
Weinberg, Benjamin H.; Hang Pham, N. T.; Caraballo, Leidy D.; Lozanoski, Thomas; Engel, Adrien; Bhatia, Swapnil; Wong, Wilson W.
2017-01-01
Genetic circuits engineered for mammalian cells often require extensive fine-tuning to perform their intended functions. To overcome this problem, we present a generalizable biocomputing platform that can engineer genetic circuits which function in human cells with minimal optimization. We used our Boolean Logic and Arithmetic through DNA Excision (BLADE) platform to build more than 100 multi-input-multi-output circuits. We devised a quantitative metric to evaluate the performance of the circuits in human embryonic kidney and Jurkat T cells. Of 113 circuits analysed, 109 functioned (96.5%) with the correct specified behavior without any optimization. We used our platform to build a three-input, two-output Full Adder and six-input, one-output Boolean Logic Look Up Table. We also used BLADE to design circuits with temporal small molecule-mediated inducible control and circuits that incorporate CRISPR/Cas9 to regulate endogenous mammalian genes. PMID:28346402
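For reference, the logical specification that a three-input, two-output Full Adder circuit must realize can be tabulated in software; this illustrates only the Boolean function, not the recombinase-based implementation:

```python
from itertools import product

def full_adder(a, b, cin):
    """Three-input, two-output Boolean function: returns (sum, carry)."""
    s = a ^ b ^ cin
    carry = (a & b) | (cin & (a ^ b))
    return s, carry

# The 8-row truth table a 3-input/2-output circuit must reproduce
for a, b, cin in product((0, 1), repeat=3):
    print(a, b, cin, "->", full_adder(a, b, cin))
```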
Heinmets, F; Leary, R H
1991-06-01
A model system (1) was established to analyze purine and pyrimidine metabolism. This system has been expanded to include macrosimulation of DNA synthesis and the study of its regulation by terminal deoxynucleoside triphosphates (dNTPs) via a complex set of interactions. Computer experiments reveal that our model exhibits adequate and reasonable sensitivity in terms of dNTP pool levels and rates of DNA synthesis when inputs to the system are varied. These simulation experiments reveal that in order to achieve maximum DNA synthesis (in terms of purine metabolism), a proper balance is required in guanine and adenine input into this metabolic system. Excessive inputs will become inhibitory to DNA synthesis. In addition, studies are carried out on rates of DNA synthesis when various parameters are changed quantitatively. The current system is formulated by 110 differential equations.
NASA Astrophysics Data System (ADS)
Fröhlich, Dominik; Matzarakis, Andreas
2016-04-01
Human thermal perception is best described through thermal indices. The most popular thermal indices applied in human bioclimatology are the perceived temperature (PT), the Universal Thermal Climate Index (UTCI), and the physiologically equivalent temperature (PET). They are analysed here with a focus on their sensitivity to single meteorological input parameters under the hot and windy conditions observed in Doha, Qatar. Notably, the results for the three indices are distributed quite differently, and the indices respond quite differently to modifications in the input conditions. All of them show particular limitations and shortcomings that have to be considered and discussed. While the results for PT are unevenly distributed, UTCI shows limitations concerning the input data it accepts, and PET seems to respond insufficiently to changes in vapour pressure. The indices should therefore be improved to be valid for several kinds of climates.
Big bang nucleosynthesis revisited via Trojan Horse method measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pizzone, R. G.; Spartá, R.; Spitaleri, C.
Nuclear reaction rates are among the most important inputs for understanding primordial nucleosynthesis and, therefore, for a quantitative description of the early universe. An up-to-date compilation of direct cross sections of the ²H(d, p)³H, ²H(d, n)³He, ⁷Li(p, α)⁴He, and ³He(d, p)⁴He reactions is given. These are among the most uncertain cross sections used as input for big bang nucleosynthesis calculations. Their measurements through the Trojan Horse method are also reviewed and compared with direct data. The reaction rates and the corresponding recommended errors in this work were used as input for primordial nucleosynthesis calculations to evaluate their impact on the ²H, ³,⁴He, and ⁷Li primordial abundances, which are then compared with observations.
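For background, a charged-particle cross section enters such calculations through the Maxwellian-averaged rate ⟨σv⟩. A sketch for a nonresonant reaction with a constant astrophysical S-factor; the S-factor value below is only roughly the d+d scale and is illustrative:

```python
import numpy as np
from scipy.integrate import quad

def sigma_v(S_keV_b, E_G_keV, mu_keV, kT_keV):
    """Maxwellian-averaged <sigma*v> (cm^3/s) for a nonresonant
    charged-particle reaction with constant astrophysical S-factor:
    sigma(E) = (S/E) * exp(-sqrt(E_G/E)), and
    <sigma*v> = c * sqrt(8/(pi*mu*c^2)) * (kT)^(-3/2)
                * integral of sigma(E)*E*exp(-E/kT) dE.
    mu_keV is the reduced mass energy (mu*c^2) in keV.
    """
    c = 2.998e10  # cm/s
    integrand = lambda E: S_keV_b * np.exp(-np.sqrt(E_G_keV / E) - E / kT_keV)
    integral, _ = quad(integrand, 1e-6, 50 * kT_keV)   # keV^2 * barn
    integral *= 1e-24                                   # barn -> cm^2
    return np.sqrt(8.0 / (np.pi * mu_keV)) * c * kT_keV ** -1.5 * integral

# d + d at T9 ~ 1 (kT ~ 86 keV); Gamow energy ~0.99 MeV for Z1=Z2=1
print(sigma_v(S_keV_b=56.0, E_G_keV=986.0, mu_keV=937.8e3, kT_keV=86.0))
```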
3D TOCSY-HSQC NMR for metabolic flux analysis using non-uniform sampling
Reardon, Patrick N.; Marean-Reardon, Carrie L.; Bukovec, Melanie A.; ...
2016-02-05
¹³C-Metabolic Flux Analysis (¹³C-MFA) is rapidly being recognized as the authoritative method for determining fluxes through metabolic networks. Site-specific ¹³C enrichment information obtained using NMR spectroscopy is a valuable input for ¹³C-MFA experiments. Chemical shift overlaps in the 1D or 2D NMR experiments typically used for ¹³C-MFA frequently hinder assignment and quantitation of site-specific ¹³C enrichment. Here we propose the use of a 3D TOCSY-HSQC experiment for ¹³C-MFA. We employ Non-Uniform Sampling (NUS) to reduce the acquisition time of the experiment to a few hours, making it practical for use in ¹³C-MFA experiments. Our data show that the NUS experiment is linear and quantitative. Identification of metabolites in complex mixtures, such as a biomass hydrolysate, is simplified by virtue of the ¹³C chemical shift obtained in the experiment. In addition, the experiment reports ¹³C-labeling information that reveals the position-specific labeling of subsets of isotopomers. As a result, the information provided by this technique will enable more accurate estimation of metabolic fluxes in larger metabolic networks.
Chen, Cheng; Wang, Wei; Ozolek, John A.; Rohde, Gustavo K.
2013-01-01
We describe a new supervised learning-based template matching approach for segmenting cell nuclei from microscopy images. The method uses examples selected by a user to build a statistical model that captures the texture and shape variations of the nuclear structures in a given dataset to be segmented. Segmentation of subsequent, unlabeled images is then performed by finding the model instance that best matches (in the normalized cross-correlation sense) the local neighborhood in the input image. We demonstrate the application of our method to segmenting nuclei from a variety of imaging modalities and quantitatively compare our results to several other methods. Quantitative results using both simulated and real image data show that, while certain methods may work well for certain imaging modalities, our software is able to obtain high accuracy across the several imaging modalities studied. Results also demonstrate that, relative to several existing methods, the proposed template-based method is more robust: it better handles variations in illumination and in texture from different imaging modalities, provides smoother and more accurate segmentation borders, and better handles cluttered nuclei. PMID:23568787
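The matching step itself — normalized cross-correlation of a template against an input image — can be sketched with OpenCV; the learned statistical texture/shape model of the full method is omitted here:

```python
import cv2
import numpy as np

def find_nuclei(image, template, threshold=0.6):
    """Locate template-like structures via normalized cross-correlation.

    A sketch of the matching step only; the described method additionally
    learns a statistical texture/shape model from user-selected examples.
    """
    ncc = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(ncc >= threshold)
    return list(zip(xs, ys)), ncc   # top-left corners of candidate matches

# Toy example: the template is a patch cut from the image itself
image = (np.random.rand(256, 256) * 255).astype(np.uint8)
template = image[100:132, 100:132].copy()
matches, _ = find_nuclei(image, template)
print(matches[:5])   # includes (100, 100)
```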
Inertial Sensor-Based Motion Analysis of Lower Limbs for Rehabilitation Treatments
Sun, Tongyang; Duan, Lihong; Wang, Yulong
2017-01-01
The diagnosis of hemiplegic rehabilitation state performed by therapists can be biased by their subjective experience, which may impair the rehabilitation effect. To improve this situation, a quantitative evaluation is proposed. Though many motion analysis systems are available, they are too complicated for practical application by therapists. In this paper, a method for detecting the motion of human lower limbs, including all degrees of freedom (DOFs), via inertial sensors is proposed, which permits analyzing the patient's motion ability. This method is applicable to arbitrary walking directions and tracks of the persons under study, and its results are unbiased compared with therapists' qualitative estimations. Using a simplified mathematical model of the human body, the rotation angles for each lower limb joint are calculated from the input signals acquired by the inertial sensors. Finally, the rotation angle versus joint displacement curves are constructed, and the estimated values of joint motion angle and motion ability are obtained. Experimental verification of the proposed motion detection and analysis method was performed, which proved that it can efficiently detect the differences between the motion behaviors of disabled and healthy persons and provide a reliable quantitative evaluation of the rehabilitation state. PMID:29065575
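One common way to obtain drift-free joint angles from inertial data is a complementary filter that fuses the gyroscope rate with an accelerometer-derived tilt angle; the abstract does not specify the authors' algorithm, so this is a generic sketch with synthetic data:

```python
import numpy as np

def complementary_filter(gyro, acc_angle, dt, alpha=0.98):
    """Fuse gyroscope rate (rad/s) with accelerometer-derived angle (rad).

    Gyro integration tracks fast motion but drifts; the accelerometer
    tilt angle is drift-free but noisy. The filter blends both.
    """
    angle = np.zeros(len(gyro))
    angle[0] = acc_angle[0]
    for k in range(1, len(gyro)):
        angle[k] = alpha * (angle[k-1] + gyro[k] * dt) + (1 - alpha) * acc_angle[k]
    return angle

# Synthetic knee flexion: slow oscillation with sensor imperfections
fs = 100.0
t = np.arange(0, 10, 1 / fs)
true = 0.6 * np.sin(2 * np.pi * 0.5 * t)
rng = np.random.default_rng(3)
gyro = np.gradient(true, 1 / fs) + 0.02                 # rate with constant bias
acc_angle = true + 0.05 * rng.standard_normal(t.size)   # noisy tilt estimate
est = complementary_filter(gyro, acc_angle, 1 / fs)
print("RMS error: %.3f rad" % np.sqrt(np.mean((est - true) ** 2)))
```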
A novel approach for evaluating the risk of health care failure modes.
Chang, Dong Shang; Chung, Jenq Hann; Sun, Kuo Lung; Yang, Fu Chiang
2012-12-01
Failure mode and effects analysis (FMEA) can be employed to reduce medical errors by identifying the risk ranking of health care failure modes and taking priority action for safety improvement. The purpose of this paper is to propose a novel approach to data analysis: integrating FMEA with a mathematical tool, data envelopment analysis (DEA) with the "slack-based measure" (SBM). The risk indexes of FMEA (severity, occurrence, and detection) are viewed as multiple inputs of DEA. The practicality and usefulness of the proposed approach is illustrated by one health care case. As a systematic approach for improving the service quality of health care, the approach can offer quantitative corrective information for the risk indexes that thereafter reduces failure possibility. For safety improvement, these new targets for the risk indexes could be used for management by objectives. FMEA alone cannot provide quantitative corrective information for the risk indexes; the novel approach overcomes this chief shortcoming of FMEA. By combining the DEA SBM model with FMEA, the two goals of increased patient safety and reduced medical cost can be achieved together.
Krølner, Rikke; Rasmussen, Mette; Brug, Johannes; Klepp, Knut-Inge; Wind, Marianne; Due, Pernille
2011-10-14
Large proportions of children do not fulfil the World Health Organization recommendation of eating at least 400 grams of fruit and vegetables (FV) per day. To promote an increased FV intake among children it is important to identify factors which influence their consumption. Both qualitative and quantitative studies are needed. Earlier reviews have analysed evidence from quantitative studies. The aim of this paper is to present a systematic review of qualitative studies of determinants of children's FV intake. Relevant studies were identified by searching Anthropology Plus, Cinahl, CSA illumine, Embase, International Bibliography of the Social Sciences, Medline, PsycINFO, and Web of Science using combinations of synonyms for FV intake, children/adolescents and qualitative methods as search terms. The literature search was completed by December 1st 2010. Papers were included if they applied qualitative methods to investigate 6-18-year-olds' perceptions of factors influencing their FV consumption. Quantitative studies, review studies, studies reported in other languages than English, and non-peer reviewed or unpublished manuscripts were excluded. The papers were reviewed systematically using standardised templates for summary of papers, quality assessment, and synthesis of findings across papers. The review included 31 studies, mostly based on US populations and focus group discussions. The synthesis identified the following potential determinants for FV intake which supplement the quantitative knowledge base: Time costs; lack of taste guarantee; satiety value; appropriate time/occasions/settings for eating FV; sensory and physical aspects; variety, visibility, methods of preparation; access to unhealthy food; the symbolic value of food for image, gender identity and social interaction with peers; short term outcome expectancies. The review highlights numerous potential determinants which have not been investigated thoroughly in quantitative studies. Future large scale quantitative studies should attempt to quantify the importance of these factors. Further, mechanisms behind gender, age and socioeconomic differences in FV consumption are proposed which should be tested quantitatively in order to better tailor interventions to vulnerable groups. Finally, the review provides input to the conceptualisation and measurements of concepts (i.e. peer influence, availability in schools) which may refine survey instruments and theoretical frameworks concerning eating behaviours.
Quantitative imaging of protein targets in the human brain with PET
NASA Astrophysics Data System (ADS)
Gunn, Roger N.; Slifstein, Mark; Searle, Graham E.; Price, Julie C.
2015-11-01
PET imaging of proteins in the human brain with high affinity radiolabelled molecules has a history stretching back over 30 years. During this period the portfolio of protein targets that can be imaged has increased significantly through successes in radioligand discovery and development. This portfolio now spans six major categories of proteins; G-protein coupled receptors, membrane transporters, ligand gated ion channels, enzymes, misfolded proteins and tryptophan-rich sensory proteins. In parallel to these achievements in radiochemical sciences there have also been significant advances in the quantitative analysis and interpretation of the imaging data including the development of methods for image registration, image segmentation, tracer compartmental modeling, reference tissue kinetic analysis and partial volume correction. In this review, we analyze the activity of the field around each of the protein targets in order to give a perspective on the historical focus and the possible future trajectory of the field. The important neurobiology and pharmacology is introduced for each of the six protein classes and we present established radioligands for each that have successfully transitioned to quantitative imaging in humans. We present a standard quantitative analysis workflow for these radioligands which takes the dynamic PET data, associated blood and anatomical MRI data as the inputs to a series of image processing and bio-mathematical modeling steps before outputting the outcome measure of interest on either a regional or parametric image basis. The quantitative outcome measures are then used in a range of different imaging studies including tracer discovery and development studies, cross sectional studies, classification studies, intervention studies and longitudinal studies. Finally we consider some of the confounds, challenges and subtleties that arise in practice when trying to quantify and interpret PET neuroimaging data including motion artifacts, partial volume effects, age effects, image registration and normalization, input functions and metabolites, parametric imaging, receptor internalization and genetic factors.
Jafari, Ramin; Chhabra, Shalini; Prince, Martin R; Wang, Yi; Spincemaille, Pascal
2018-04-01
To propose an efficient algorithm to perform dual-input compartment modeling for generating perfusion maps in the liver. We implemented whole-field-of-view linear least squares (LLS) to fit a delay-compensated dual-input single-compartment model to very high temporal resolution (four frames per second) contrast-enhanced 3D liver data, to calculate kinetic parameter maps. Using simulated data and experimental data in healthy subjects and patients, whole-field LLS was compared with the conventional voxel-wise nonlinear least-squares (NLLS) approach in terms of accuracy, performance, and computation time. Simulations showed good agreement between LLS and NLLS for a range of kinetic parameters. The whole-field LLS method allowed generating liver perfusion maps approximately 160-fold faster than voxel-wise NLLS, while obtaining similar perfusion parameters. Delay-compensated dual-input liver perfusion analysis using whole-field LLS allows generating perfusion maps with a considerable speedup compared with conventional voxel-wise NLLS fitting. Magn Reson Med 79:2415-2421, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
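A minimal sketch of the linearized fitting step is given below, assuming the standard linearization of a dual-input single-compartment model; the paper's delay compensation and whole-field vectorization are omitted for brevity, and the variable names are hypothetical.

```python
import numpy as np

def lls_dual_input(t, ct, ca, cp):
    """Fit C(t) = k1a*int(Ca) + k1p*int(Cp) - k2*int(C) by linear least
    squares; delay compensation and whole-field vectorization omitted."""
    dt = np.gradient(t)
    A = np.column_stack([np.cumsum(ca * dt),    # arterial input integral
                         np.cumsum(cp * dt),    # portal venous input integral
                         -np.cumsum(ct * dt)])  # tissue curve integral
    k1a, k1p, k2 = np.linalg.lstsq(A, ct, rcond=None)[0]
    return k1a, k1p, k2
```

Because the fit is a single linear solve per voxel (and can be batched as matrix operations), it avoids the iterative searches that make voxel-wise NLLS slow.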
Tan, M L H; Kok, K; Ganesh, V; Thomas, S S
2014-02-01
Breast cancer patients' expectations and choice of reconstruction are increasing, and patients often satisfy their information needs outside clinic time by searching the World Wide Web. The aim of our study was to analyse the quality of content and extent of information regarding breast reconstruction available in YouTube videos and whether this is an appropriate additional source of information for patients. A snapshot qualitative and quantitative analysis of the first 100 videos was performed after the term 'breast reconstruction' was input into the search window of the video-sharing website www.youtube.com on the 1st of September 2011. Qualitative categorical analysis included patient, oncological, and reconstruction factors. It was concluded that although videos uploaded onto YouTube do not provide comprehensive information, the site is a useful resource that can be utilised in patient education provided comprehensive and validated videos are made available. Copyright © 2013 Elsevier Ltd. All rights reserved.
A GIS-based modeling system for petroleum waste management
Chen, Z; Huang, G H; Li, J B
2003-01-01
With an urgent need for effective management of petroleum-contaminated sites, a GIS-aided simulation (GISSIM) system is presented in this study. The GISSIM contains two components: an advanced 3D numerical model and a geographical information system (GIS), which are integrated within a general framework. The modeling component undertakes simulation for the fate of contaminants in subsurface unsaturated and saturated zones. The GIS component is used in three areas throughout the system development and implementation process: (i) managing spatial and non-spatial databases; (ii) linking inputs, model, and outputs; and (iii) providing an interface between the GISSIM and its users. The developed system is applied to a North American case study. Concentrations of benzene, toluene, and xylenes in groundwater under a petroleum-contaminated site are dynamically simulated. Reasonable outputs have been obtained and presented graphically. They provide quantitative and scientific bases for further assessment of site-contamination impacts and risks, as well as decisions on practical remediation actions.
NASA Astrophysics Data System (ADS)
Sahul Hameed, Ruzanna; Thiruchelvam, Sivadass; Nasharuddin Mustapha, Kamal; Che Muda, Zakaria; Mat Husin, Norhayati; Ezanee Rusli, Mohd; Yong, Lee Choon; Ghazali, Azrul; Itam, Zarina; Hakimie, Hazlinda; Beddu, Salmia; Liyana Mohd Kamal, Nur
2016-03-01
This paper proposes a conceptual framework to compare the criteria/factors that influence supplier selection. A mixed-methods approach comprising qualitative and quantitative surveys will be used. The study intends to identify and define the metrics that key stakeholders at the Public Works Department (PWD) believe should be used for supplier selection. The outcomes would inform possible initiatives to bring procurement in PWD to a strategic level. The results will provide a deeper understanding of the drivers of supplier selection in the construction industry. The output obtained will benefit the many parties involved in supplier selection decision-making. The findings provide useful information and a greater understanding of the perceptions that PWD executives hold regarding supplier selection and the extent to which these perceptions are consistent with findings from prior studies. The findings from this paper can be utilized as input for policy makers to outline any changes in the current procurement code of practice in order to enhance the degree of transparency and integrity in decision-making.
Review of GEM Radiation Belt Dropout and Buildup Challenges
NASA Astrophysics Data System (ADS)
Tu, Weichao; Li, Wen; Morley, Steve; Albert, Jay
2017-04-01
In Summer 2015 the US NSF GEM (Geospace Environment Modeling) focus group named "Quantitative Assessment of Radiation Belt Modeling" started the "RB dropout" and "RB buildup" challenges, focused on quantitative modeling of radiation belt buildups and dropouts. This is a community effort which includes selecting challenge events, gathering the model inputs that are required to model radiation belt dynamics during these events (e.g., various magnetospheric waves, plasmapause and density models, electron phase space density data), simulating the challenge events using different types of radiation belt models, and validating the model results by comparison to in situ observations of radiation belt electrons (from Van Allen Probes, THEMIS, GOES, LANL/GEO, etc.). The goal is to quantitatively assess the relative importance of various acceleration, transport, and loss processes in the observed radiation belt dropouts and buildups. Since 2015, the community has selected four "challenge" events under four categories: "storm-time enhancements", "non-storm enhancements", "storm-time dropouts", and "non-storm dropouts". Model inputs and data for each selected event have been coordinated and shared within the community to establish a common basis for simulations and testing. Modelers within and outside the US with different types of radiation belt models (diffusion-type, diffusion-convection-type, test particle codes, etc.) have participated in our challenge and shared their simulation results and comparisons with spacecraft measurements. Significant progress has been made in quantitative modeling of radiation belt buildups and dropouts, as well as in assessing the models with new measures of model performance. In this presentation, I will review the activities from our "RB dropout" and "RB buildup" challenges and the progress achieved in understanding radiation belt physics and improving model validation and verification.
Modeling the Afferent Dynamics of the Baroreflex Control System
Mahdi, Adam; Sturdy, Jacob; Ottesen, Johnny T.; Olufsen, Mette S.
2013-01-01
In this study we develop a modeling framework for predicting baroreceptor (BR) firing rate as a function of blood pressure. We test models within this framework both quantitatively and qualitatively using data from rats. The models describe three components: arterial wall deformation, stimulation of mechanoreceptors located in the BR nerve endings, and modulation of the action potential frequency. The three sub-systems are modeled individually following well-established biological principles. The first submodel, predicting arterial wall deformation, takes blood pressure as input and outputs circumferential strain. The mechanoreceptor stimulation model takes circumferential strain as input and predicts receptor deformation as output. Finally, the neural model takes receptor deformation as input and predicts the BR firing rate as output. Our results show that the nonlinear dependence of firing rate on pressure can be accounted for by taking into account the nonlinear elastic properties of the arterial wall. This was observed when testing the models using multiple experiments with a single set of parameters. We find that to model the response to a square pressure stimulus, giving rise to post-excitatory depression, it is necessary to include an integrate-and-fire model, which allows the firing rate to cease when the stimulus falls below a given threshold. We show that our modeling framework, in combination with sensitivity analysis and parameter estimation, can be used to test and compare models. Finally, we demonstrate that our preferred model can exhibit all known dynamics and that it is advantageous to combine qualitative and quantitative analysis methods. PMID:24348231
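A minimal sketch of the three chained submodels is given below; the functional forms and parameter values are illustrative assumptions, not the paper's fitted rat models.

```python
import numpy as np

def wall_strain(p, p0=100.0, k=0.004):
    # nonlinear (saturating) elastic wall: strain rises sublinearly with p
    return 1.0 - np.exp(-k * (p - p0).clip(min=0))

def receptor_deformation(strain, tau=0.5, dt=0.01):
    # first-order viscoelastic coupling between wall and nerve endings
    d = np.zeros_like(strain)
    for i in range(1, len(strain)):
        d[i] = d[i - 1] + dt * (strain[i] - d[i - 1]) / tau
    return d

def firing_rate(deform, gain=120.0, threshold=0.05):
    # integrate-and-fire flavour: silent below threshold, which reproduces
    # post-excitatory depression after a square pressure step
    return np.where(deform > threshold, gain * (deform - threshold), 0.0)

t = np.arange(0, 10, 0.01)
p = np.where((t > 2) & (t < 6), 140.0, 100.0)   # square pressure stimulus
rate = firing_rate(receptor_deformation(wall_strain(p)))
```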
Expert review on poliovirus immunity and transmission.
Duintjer Tebbens, Radboud J; Pallansch, Mark A; Chumakov, Konstantin M; Halsey, Neal A; Hovi, Tapani; Minor, Philip D; Modlin, John F; Patriarca, Peter A; Sutter, Roland W; Wright, Peter F; Wassilak, Steven G F; Cochi, Stephen L; Kim, Jong-Hoon; Thompson, Kimberly M
2013-04-01
Successfully managing risks to achieve wild poliovirus (WPV) eradication and address the complexities of oral poliovirus vaccine (OPV) cessation to stop all cases of paralytic poliomyelitis depends strongly on our collective understanding of poliovirus immunity and transmission. With increased shifting from OPV to inactivated poliovirus vaccine (IPV), numerous risk management choices motivate the need to understand the tradeoffs and uncertainties and to develop models to help inform decisions. The U.S. Centers for Disease Control and Prevention hosted a meeting of international experts in April 2010 to review the available literature relevant to poliovirus immunity and transmission. This expert review evaluates 66 OPV challenge studies and other evidence to support the development of quantitative models of poliovirus transmission and potential outbreaks. This review focuses on characterization of immunity as a function of exposure history in terms of susceptibility to excretion, duration of excretion, and concentration of excreted virus. We also discuss the evidence of waning host immunity to poliovirus transmission, the relationship between the concentration of poliovirus excreted and infectiousness, the importance of different transmission routes, and the differences in transmissibility between OPV and WPV. We discuss the limitations of the available evidence for use in polio risk models and conclude that, despite the relatively large number of studies on immunity, very limited data exist to directly support quantification of model inputs related to transmission. Given the limitations in the evidence, we identify the need for expert input to derive quantitative model inputs from the existing data. © 2012 Society for Risk Analysis.
System and circuitry to provide stable transconductance for biasing
NASA Technical Reports Server (NTRS)
Garverick, Steven L. (Inventor); Yu, Xinyu (Inventor)
2012-01-01
An amplifier system can include an input amplifier configured to receive an analog input signal and provide an amplified signal corresponding to the analog input signal. A tracking loop is configured to employ delta modulation for tracking the amplified signal, the tracking loop providing a corresponding output signal. A biasing circuit is configured to adjust a bias current to maintain stable transconductance over temperature variations, the biasing circuit providing at least one bias signal for biasing at least one of the input amplifier and the tracking loop, whereby the circuitry receiving the at least one bias signal exhibits stable performance over the temperature variations. In another embodiment the biasing circuit can be utilized in other applications.
NASA Astrophysics Data System (ADS)
Hardie, Russell C.; Rucci, Michael A.; Dapore, Alexander J.; Karch, Barry K.
2017-07-01
We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.
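A minimal sketch of the average-then-deconvolve step is given below, assuming the frames are already registered; the flat noise-to-signal ratio and the externally supplied PSF stand in for the paper's parametric, registration-aware PSF model.

```python
import numpy as np

def wiener_deconvolve(img, psf, nsr=0.01):
    # psf assumed same shape as img and centered; shift origin to corner
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)     # Wiener transfer function
    return np.real(np.fft.ifft2(G * np.fft.fft2(img)))

def restore(frames, psf):
    # frames: registered short-exposure images of a static scene
    avg = np.mean(frames, axis=0)               # temporal average
    return wiener_deconvolve(avg, psf)
```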
Exploring the Underlying Mechanisms of the Xenopus laevis Embryonic Cell Cycle.
Zhang, Kun; Wang, Jin
2018-05-31
The cell cycle is an indispensable process in proliferation and development. Despite significant efforts, global quantification and physical understanding are still challenging. In this study, we explored the mechanisms of the Xenopus laevis embryonic cell cycle by quantifying the underlying landscape and flux. We uncovered the Mexican-hat landscape of the Xenopus laevis embryonic cell cycle, with several local basins and barriers on the oscillation path. The local basins characterize the different phases of the Xenopus laevis embryonic cell cycle, and the local barriers represent the checkpoints. The checkpoint mechanism of the cell cycle is revealed by the landscape basins and barriers. While the landscape shape determines the stabilities of the states on the oscillation path, the curl flux force determines the stability of the cell cycle flow. Replication is fundamental to the biology of living cells. We quantify the input energy (through the entropy production) as the thermodynamic requirement for the initiation and sustainability of single-cell life (the cell cycle). Furthermore, we also quantify the curl flux originating from the input energy as the dynamical requirement for the emergence of a new stable phase (the cell cycle). This can provide a new quantitative insight into the origin of single-cell life. In fact, the curl flux originating from the energy input or nutrition supply determines the speed and guarantees the progression of the cell cycle. The speed of the cell cycle is a hallmark of cancer. We characterized the quality of the cell cycle by the coherence time and found that it is supported by the flux and the energy cost. We are also able to quantify the degree of time irreversibility by the cross-correlation function forward and backward in time from stochastic traces in simulations or experiments, providing a way to quantify time irreversibility and the flux. Through global sensitivity analysis upon landscape and flux, we can identify the key elements controlling the cell cycle speed. This can help to design effective strategies for drug discovery against cancer.
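The forward/backward cross-correlation measure of time irreversibility described above can be sketched directly; the estimator below is a simple discrete version under assumed stationarity, not the authors' exact implementation.

```python
import numpy as np

def cross_corr(x, y, lag):
    # normalized <x(t) y(t+lag)> over a stationary trace
    x = x - x.mean(); y = y - y.mean()
    n = len(x) - lag
    return np.dot(x[:n], y[lag:lag + n]) / (n * x.std() * y.std())

def irreversibility(x, y, lag):
    forward = cross_corr(x, y, lag)      # <x(t) y(t+lag)>
    backward = cross_corr(y, x, lag)     # same correlation, time-reversed
    return forward - backward            # zero for an equilibrium process
```

A nonzero difference signals broken detailed balance, which in this framework reflects the nonzero curl flux driving the oscillation.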
Competitiveness, production, and productivity of cocoa in Indonesia
NASA Astrophysics Data System (ADS)
Fahmid, I. M.; Harun, H.; Fahmid, M. M.; Saadah; Busthanul, N.
2018-05-01
Cocoa is one of Indonesia's five main foreign exchange earners; cocoa must therefore stay competitive in the export market. The aims of this study are to analyze the cost structure, production, and productivity of cocoa farming; to assess the level of competitiveness; and to map the types of government policies that affect the competitiveness of cocoa. The method used is descriptive, both qualitative and quantitative. Data analysis is done using the PAM (Policy Analysis Matrix). The results showed that in the cost structure of cocoa farming in Indonesia, almost 50 percent goes to wages and 31.6 percent to land rental. The large share of wages indicates that cocoa farming is labor-intensive. In Indonesia only 27.6% of cocoa farms are productive, with a productivity level of 655.515 kg per hectare. Cocoa farming in Indonesia is carried out under protective policies: an EPC value of 4.29 indicates that the government's policy towards cocoa inputs and outputs has been effective. The PCR value of 0.51 indicates that cocoa farming has a competitive advantage, but it does not have a comparative advantage. In conclusion, productivity, output prices, and exchange rates should be raised, and input prices lowered, so that cocoa farming can provide higher net transfer values for farmers. To improve the competitiveness of cocoa farming, Sulawesi and Sumatra are the two islands that require special policies, especially on output prices, input prices, and productivity, as well as improvement of other cocoa farming systems, as these two islands contribute more than 80 percent of Indonesia's cocoa bean production.
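A minimal sketch of the PAM indicators mentioned above is given below; the input numbers are placeholders, not the study's data.

```python
# Policy Analysis Matrix indicators: value added at private vs. social prices.
def pam_indicators(revenue, tradables, factors,
                   revenue_soc, tradables_soc, factors_soc):
    pcr = factors / (revenue - tradables)            # private cost ratio
    drc = factors_soc / (revenue_soc - tradables_soc)  # domestic resource cost
    epc = (revenue - tradables) / (revenue_soc - tradables_soc)  # protection
    return pcr, drc, epc

pcr, drc, epc = pam_indicators(100, 40, 31, 80, 45, 30)
# PCR < 1 -> competitive advantage; DRC < 1 -> comparative advantage;
# EPC > 1 -> policy protects producers (as with the EPC of 4.29 above)
```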
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental Systems Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and to provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model: errors in the model input data and errors in the coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
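For illustration, a minimal Latin Hypercube Sampling routine and a toy error-propagation run are sketched below; REPTool's actual packages and the Relative Variance Contribution method are not reproduced here.

```python
import numpy as np

def lhs(n_samples, n_dims, rng=np.random.default_rng(0)):
    # one sample per stratum in each dimension, randomly paired
    perms = np.stack([rng.permutation(n_samples) for _ in range(n_dims)],
                     axis=1)
    return (rng.random((n_samples, n_dims)) + perms) / n_samples

# propagate input error through a toy per-cell model y = 3*x1 + x2**2
u = lhs(1000, 2)
x1 = 5 + 2 * u[:, 0]          # raster cell value with uniform error
x2 = 1 + 0.5 * u[:, 1]        # model coefficient with uniform error
y = 3.0 * x1 + x2 ** 2
print(y.mean(), y.std())      # output distribution for that cell
```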
NASA Astrophysics Data System (ADS)
Deng, H.; Wood, L.; Overeem, I.; Hutton, E.
2016-12-01
Submarine topography exerts a fundamental control on the movement of sediment gravity flows as well as on the distribution, morphology, and internal heterogeneity of the resultant overlying, healing-phase, deep-water reservoirs. Some of the most complex deep-water topography is generated through both destructive and constructive mass transport processes. A series of numerical models using the Sedflux software has been constructed over high-resolution paleobathymetric surfaces of mass transport complex (MTC) tops mapped from 3D seismic data in offshore Morocco and offshore eastern Trinidad. Morocco's margin is characterized by large, extant rafted blocks and a flow-perpendicular fabric. Trinidad's margin is characterized by muddier, plastic flows and isolated extrusive diapiric buttresses. In addition, Morocco's margin is a dry, northern-latitude margin that lacks major river inputs, while Trinidad's margin lies in an equatorial, wet climate and is fed by the Orinoco River and delta. These models quantitatively delineate the interaction of healing-phase gravity flows with the tops of two very different topographies and provide insights into healing-phase reservoir distribution and stratigraphic trap development. Slope roughness, curvature, and surface shape are measured relative to input points to quantify depositional surface character. A variety of sediment gravity flow types have been input and the resultant interval assessed for thickness and distribution relative to key topography parameters. Mathematical relationships are then analyzed and compared with seismic interpretation of healing-phase interval character, toward an improved model of gravity sedimentation and topography interactions.
Rudroff, Thorsten; Kindred, John H; Kalliokoski, Kari K
2015-05-15
Positron emission tomography (PET) with [(18)F]-fluorodeoxyglucose (FDG) is an established clinical tool primarily used to diagnose and evaluate disease status in patients with cancer. PET imaging using FDG can be a highly valuable tool to investigate normal human physiology by providing a noninvasive, quantitative measure of glucose uptake into various cell types. Over the past years it has also been increasingly used in exercise physiology studies to identify changes in glucose uptake, metabolism, and muscle activity during different exercise modalities. Metabolically active cells transport FDG, an 18F-labeled glucose analog tracer, from the blood into the cells, where it is phosphorylated but not further metabolized. This metabolic trapping process forms the basis of the method's use during exercise. The tracer is given to a participant during an exercise task, and the actual PET imaging is performed immediately after the exercise. Provided the uptake period is of sufficient duration and the imaging is performed shortly after the exercise, the captured image strongly reflects the metabolic activity of the cells used during the task. When combined with repeated blood sampling to determine the tracer blood concentration over time, also known as the input function, the glucose uptake rate of the tissues can be quantitatively calculated. This synthesis provides an accounting of studies using FDG-PET to measure acute exercise-induced skeletal muscle activity, describes the advantages and limitations of this imaging technique, and discusses its applications to the field of exercise physiology. Copyright © 2015 the American Physiological Society.
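Given an input function and tissue time-activity data, the uptake rate for an irreversibly trapped tracer like FDG is commonly computed with Patlak graphical analysis; a minimal sketch is given below. Patlak analysis is a standard approach in the field, named here as such rather than as this synthesis's specific method, and the variable names are illustrative.

```python
import numpy as np

def patlak_ki(t, c_tissue, c_plasma, t_star=20.0):
    """Slope of C_tissue/C_plasma vs. 'stretched time' after equilibration."""
    dt = np.gradient(t)
    x = np.cumsum(c_plasma * dt) / c_plasma   # int(Cp)/Cp, normalized time
    y = c_tissue / c_plasma
    mask = t >= t_star                        # use the linear portion only
    ki, intercept = np.polyfit(x[mask], y[mask], 1)
    return ki   # fractional uptake rate; MRglc = ki * [glucose] / LC
```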
NASA Technical Reports Server (NTRS)
Black, Jr., William C. (Inventor); Hermann, Theodore M. (Inventor)
1998-01-01
A current determiner having an output at which representations of input currents are provided, having an input conductor for the input current and a current sensor supported on a substrate, electrically isolated from one another but with the sensor positioned in the magnetic fields arising about the input conductor due to any input currents. The sensor extends along the substrate in a direction primarily perpendicular to the extent of the input conductor and is formed of at least a pair of thin-film ferromagnetic layers separated by a non-magnetic conductive layer. The sensor can be electrically connected to electronic circuitry formed in the substrate, including a nonlinearity adaptation circuit to provide representations of the input currents of increased accuracy despite nonlinearities in the current sensor, and can include further current sensors in bridge circuits.
Inverter ratio failure detector
NASA Technical Reports Server (NTRS)
Wagner, A. P.; Ebersole, T. J.; Andrews, R. E. (Inventor)
1974-01-01
A failure detector which detects the failure of a dc-to-ac inverter is disclosed. The inverter under failureless conditions is characterized by a known linear relationship between its input and output voltages and by a known linear relationship between its input and output currents. The detector includes circuitry which is responsive to the inverter's input and output voltages and which provides a failure-indicating signal only when the monitored output voltage is less, by a selected factor, than the expected output voltage for the monitored input voltage, based on the known voltage relationship. Similarly, the detector includes circuitry which is responsive to the input and output currents and provides a failure-indicating signal only when the input current exceeds, by a selected factor, the expected input current for the monitored output current, based on the known current relationship.
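The disclosed detection logic reduces to two simple comparisons; a minimal sketch follows, with the linear coefficients and margin factors as hypothetical placeholders.

```python
# Assumed no-failure relationships: Vout = a*Vin and Iin = b*Iout.
# kv, ki are the selected margin factors for the specific inverter.
def inverter_failed(vin, vout, iin, iout, a, b, kv=0.8, ki=1.2):
    undervoltage = vout < kv * (a * vin)   # output voltage lower than expected
    overcurrent = iin > ki * (b * iout)    # input current higher than expected
    return undervoltage or overcurrent
```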
The USA Nr Inventory: Dominant Sources and Primary Transport Pathways
NASA Astrophysics Data System (ADS)
Sabo, R. D.; Clark, C.; Sobota, D. J.; Compton, J.; Cooter, E. J.; Schwede, D. B.; Bash, J. O.; Rea, A.; Dobrowolski, J. P.
2016-12-01
Efforts to mitigate the deleterious effects of excess reactive nitrogen (Nr) on human health and ecosystem goods and services, while ensuring food, biofuel, and fiber availability, are among the most pressing environmental management challenges of this century. Effective management of Nr requires up-to-date inventories that quantitatively characterize the sources, transport, and transformation of Nr through the environment. The inherent complexity of the nitrogen cycle, however, with multiple exchange points across air, water, and terrestrial media, renders such inventories difficult to compile and manage. Previous Nr inventories are for 2002 and 2007 and used data sources that have since been improved. Thus, this recent inventory will substantially advance the methodology across many sectors of the inventory (e.g., deposition and biological fixation in crops and natural systems) and create a recent snapshot that is sorely needed for policy planning and trends analysis. Here we use a simple mass balance approach to estimate the input-output budgets for all United States Geological Survey Hydrologic Unit Code-8 watersheds. We focus on a recent year (2012) to update the Nr inventory, but apply the analytical approach to multiple years where possible to assess trends through time. We also compare various sector estimates using multiple methodologies. Assembling datasets that account for new Nr inputs into watersheds (e.g., atmospheric NOy deposition, food imports, biological N fixation) and internal fluxes of recycled Nr (e.g., manure, Nr emissions/volatilization) provides an unprecedented, data-driven computation of N flux. Input-output budgets will offer insight into 1) the dominant sources of Nr in a watershed (e.g., food imports, atmospheric N deposition, or fertilizer), 2) the primary loss pathways for Nr (e.g., crop N harvest, volatilization/emissions), and 3) which watersheds are net sources versus sinks of Nr. These insights will provide needed clarity for managers looking to minimize the loss of Nr to atmospheric and aquatic compartments, while also providing a foundational database for researchers assessing the dominant controls of N retention and loss in natural and anthropogenically dominated ecosystems. Disclaimer: Views expressed are the authors' and not views or policies of the U.S. EPA.
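A toy version of the watershed input-output budget is sketched below; the categories follow the abstract, but the field names and magnitudes are illustrative, not inventory values.

```python
# Toy watershed nitrogen budget (kg N/yr); schema is illustrative only.
inputs = {"atmospheric_deposition": 1.2e6, "fertilizer": 3.4e6,
          "biological_fixation": 0.9e6, "food_feed_imports": 0.5e6}
outputs = {"crop_harvest": 2.8e6, "volatilization_emissions": 1.1e6}

net = sum(inputs.values()) - sum(outputs.values())
print(f"net N balance: {net:.3g} kg N/yr "
      f"({'net source' if net > 0 else 'net sink'} of reactive N)")
```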
Dual Brushless Resolver Rate Sensor
NASA Technical Reports Server (NTRS)
Howard, David E. (Inventor)
1997-01-01
A resolver rate sensor is disclosed in which dual brushless resolvers are mechanically coupled to the same output shaft. Diverse inputs are provided to each resolver by providing the first resolver with a DC input and the second resolver with an AC sinusoidal input. The trigonometric identity in which the sum of the squares of the sine and cosine components equals one is used to advantage in providing a sensor of increased accuracy. The first resolver may have a fixed or variable DC input to permit dynamic adjustment of resolver sensitivity, thus permitting a wide range of coverage. In one embodiment of the invention, the outputs of the first resolver are input directly into two separate multipliers, and the outputs of the second resolver are input into the two separate multipliers after being demodulated in a pair of demodulator circuits. The multiplied signals are then added in an adder circuit to provide a direction-sensitive output. In another embodiment, the outputs from the first resolver are modulated in separate modulator circuits and the outputs from the modulator circuits are used to excite the second resolver. The outputs from the second resolver are demodulated in separate demodulator circuits and added in an adder circuit to provide a direction-sensitive rate output.
Quantitative Resistance to Plant Pathogens in Pyramiding Strategies for Durable Crop Protection.
Pilet-Nayel, Marie-Laure; Moury, Benoît; Caffier, Valérie; Montarry, Josselin; Kerlan, Marie-Claire; Fournet, Sylvain; Durel, Charles-Eric; Delourme, Régine
2017-01-01
Quantitative resistance has gained interest in plant breeding for pathogen control in low-input cropping systems. Although quantitative resistance frequently has only a partial effect and is difficult to select, it is considered more durable than major resistance (R) genes. With the exponential development of molecular markers over the past 20 years, resistance QTL have been more accurately detected and better integrated into breeding strategies for resistant varieties with increased potential for durability. This review summarizes current knowledge on the genetic inheritance, molecular basis, and durability of quantitative resistance. Based on this knowledge, we discuss how strategies that combine major R genes and QTL in crops can maintain the effectiveness of plant resistance to pathogens. Combining resistance QTL with complementary modes of action appears to be an interesting strategy for breeding effective and potentially durable resistance. Combining quantitative resistance with major R genes has proven to be a valuable approach for extending the effectiveness of major genes. In the plant genomics era, improved tools and methods are becoming available to better integrate quantitative resistance into breeding strategies. Nevertheless, optimal combinations of resistance loci will still have to be identified to preserve resistance effectiveness over time for durable crop protection.
Bondurant, Amy E; Huang, Zhiqing; Whitaker, Regina S; Simel, Lauren R; Berchuck, Andrew; Murphy, Susan K
2011-12-01
Detection of cell free tumor-specific DNA methylation has been proposed as a potentially useful noninvasive mechanism to detect malignancies, including ovarian cancer, and to monitor response to treatment. However, there are few easily implemented quantitative approaches available for DNA methylation analysis. Our objectives were to develop an absolute quantitative method for detection of DNA methylation using RASSF1A, a known target of promoter methylation in ovarian cancer, and test the ability to detect RASSF1A methylation in tumors and serum specimens of women with ovarian cancer. Bisulfite modified DNAs were subjected to real time PCR using nondiscriminatory PCR primers and a probe with sequence containing a single CpG site, theoretically able to capture the methylation status of that CpG for every allele within a given specimen. Input DNA was normalized to ACTB levels detected simultaneously by assay multiplexing. Methylation levels were established by comparison to results obtained from universally methylated DNA. The assay was able to detect one methylated RASSF1A allele in 100,000 unmethylated alleles. RASSF1A was methylated in 54 of 106 (51%) invasive serous ovarian cancers analyzed and methylation status was concordant in 20/20 matched preoperative serum-tumor pairs. Serial serum specimens taken over the course of treatment for 8 of 9 patients showed fluctuations in RASSF1A methylation concomitant with disease status. This novel assay provides a real-time PCR-based method for absolute quantitation of DNA methylation. Our results support feasibility of monitoring RASSF1A methylation from serum samples taken over the course of treatment from women with ovarian cancer. Copyright © 2011 Elsevier Inc. All rights reserved.
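A minimal sketch of absolute quantitation from real-time PCR Ct values is given below, assuming standard curves built from serial dilutions of universally methylated DNA; the function and variable names are hypothetical illustrations, not the assay's published protocol.

```python
# Standard curve: Ct = slope * log10(copies) + intercept (slope ~ -3.32
# at 100% PCR efficiency); curves are fitted elsewhere from dilutions.
def copies_from_ct(ct, slope, intercept):
    return 10 ** ((ct - intercept) / slope)

def percent_methylated(ct_rassf1a, ct_actb, curve_meth, curve_actb):
    meth = copies_from_ct(ct_rassf1a, *curve_meth)    # methylated RASSF1A
    total = copies_from_ct(ct_actb, *curve_actb)      # ACTB input normalizer
    return 100.0 * meth / total

# e.g. hypothetical Ct values and curve parameters (slope, intercept)
print(percent_methylated(34.1, 27.5, (-3.32, 40.0), (-3.32, 38.0)))
```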
Development of a Computerized In-Basket Exercise for the Classroom: A Sales Management Example
ERIC Educational Resources Information Center
Pearson, Michael M.; Barnes, John W.; Onken, Marina H.
2006-01-01
This article follows the development of a sales management in-basket exercise for use in the classroom. The authors have computerized the exercise and added features to allow for additional and more quantitative input from the students. The exercise has evolved and been tested in numerous classroom situations. The computerized in-basket exercise…
Privatising Public Schooling in Post-Apartheid South Africa: Equity Considerations
ERIC Educational Resources Information Center
Motala, Shireen
2009-01-01
Through an analysis of quantitative and qualitative data on school funding in South Africa, this paper aims to analyse the user fee policy option in public schooling in South Africa. Debate is ongoing about the role of private input into public schooling and whether this practice affects access (and the constitutional right) to basic education,…
ERIC Educational Resources Information Center
Marinac, Julie V.; Woodyatt, Gail C.; Ozanne, Anne E.
2008-01-01
This paper reports the design and trial of an original Observational Framework for quantitative investigation of young children's responses to adult language in their typical language learning environments. The Framework permits recording of both the response expectation of the adult utterances, and the degree of compliance in the child's…
Twelve example local data support files are automatically downloaded when the SDMProjectBuilder is installed on a computer. They allow the user to modify values to parameters that impact the release, migration, fate, and transport of microbes within a watershed, and control delin...
Studying Distance Students: Methods, Findings, Actions
ERIC Educational Resources Information Center
Wahl, Diane; Avery, Beth; Henry, Lisa
2013-01-01
University of North Texas (UNT) Libraries began studying the library needs of distance learners in 2009 using a variety of approaches to explore and confirm these needs as well as obtain input into how to meet them. Approaches used to date include analysis of both quantitative and qualitative responses by online students to the LibQUAL+[R] surveys…
Using a Tablet PC in the German Classroom to Enliven Teacher Input
ERIC Educational Resources Information Center
Van Orden, Stephen
2006-01-01
Providing students with lively, authentic comprehensible input is one of the most important tasks of introductory German teachers. Using a Tablet PC can enable teachers to improve the quality of the comprehensible input they provide their students. This article describes how integrating a Tablet PC into daily teaching processes allows classroom…
High input impedance amplifier
NASA Technical Reports Server (NTRS)
Kleinberg, Leonard L.
1995-01-01
High input impedance amplifiers are provided which reduce the input impedance solely to a capacitive reactance or, in a somewhat more complex design, provide an extremely high, essentially infinite, capacitive reactance. In one embodiment, where the input impedance is reduced, in essence, to solely a capacitive reactance, an operational amplifier in a follower configuration is driven at its non-inverting input and a resistor of a predetermined magnitude is connected between the inverting and non-inverting inputs. A second embodiment eliminates the capacitance from the input by adding a second stage to the first embodiment. The second stage is a second operational amplifier in a non-inverting gain-stage configuration, where the output of the first follower stage drives the non-inverting input of the second stage and the output of the second stage is fed back to the non-inverting input of the first stage through a capacitor of a predetermined magnitude. These amplifiers, while generally useful, are especially useful as sensor buffer amplifiers that may eliminate significant sources of error.
Sprague, Lori A.; Gronberg, Jo Ann M.
2013-01-01
Anthropogenic inputs of nitrogen and phosphorus to each county in the conterminous United States and to the watersheds of 495 surface-water sites studied as part of the U.S. Geological Survey National Water-Quality Assessment Program were quantified for the years 1992, 1997, and 2002. Estimates of inputs of nitrogen and phosphorus from biological fixation by crops (for nitrogen only), human consumption, crop production for human consumption, animal production for human consumption, animal consumption, and crop production for animal consumption for each county are provided in a tabular dataset. These county-level estimates were allocated to the watersheds of the surface-water sites to estimate watershed-level inputs from the same sources; these estimates also are provided in a tabular dataset, together with calculated estimates of net import of food and net import of feed and previously published estimates of inputs from atmospheric deposition, fertilizer, and recoverable manure. The previously published inputs are provided for each watershed so that final estimates of total anthropogenic nutrient inputs could be calculated. Estimates of total anthropogenic inputs are presented together with previously published estimates of riverine loads of total nitrogen and total phosphorus for reference.
NASA Astrophysics Data System (ADS)
Bosca, Ryan J.; Jackson, Edward F.
2016-01-01
Assessing and mitigating the various sources of bias and variance associated with image quantification algorithms is essential to the use of such algorithms in clinical research and practice. Assessment is usually accomplished with grid-based digital reference objects (DROs) or, more recently, digital anthropomorphic phantoms based on normal human anatomy. Publicly available digital anthropomorphic phantoms can provide a basis for generating realistic model-based DROs that incorporate the heterogeneity commonly found in pathology. Using a publicly available vascular input function (VIF) and a digital anthropomorphic phantom of a normal human brain, a methodology was developed to generate a DRO based on the general kinetic model (GKM) that represented realistic and heterogeneously enhancing pathology. GKM parameters were estimated from a deidentified clinical dynamic contrast-enhanced (DCE) MRI exam. This clinical imaging volume was co-registered with a discrete tissue model, and model parameters estimated from the clinical images were used to synthesize a DCE-MRI exam consisting of normal brain tissues and a heterogeneously enhancing brain tumor. An example application of spatial smoothing was used to illustrate potential applications in assessing quantitative imaging algorithms. A voxel-wise Bland-Altman analysis demonstrated negligible differences between the parameters estimated with and without spatial smoothing (using a small-radius Gaussian kernel). In this work, we report an extensible methodology for generating model-based anthropomorphic DROs containing normal and pathological tissue that can be used to assess quantitative imaging algorithms.
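A minimal sketch of synthesizing a tissue enhancement curve with the GKM (in its common Tofts form) is given below; the biexponential VIF and parameter values are placeholders, not the publicly available VIF used in the paper.

```python
import numpy as np

def gkm_tissue_curve(t, ktrans, kep, cp):
    """Ct(t) = Ktrans * Cp(t) convolved with exp(-kep*t) (Tofts form)."""
    dt = t[1] - t[0]                       # assumes uniform sampling
    return ktrans * np.convolve(cp, np.exp(-kep * t))[:len(t)] * dt

t = np.arange(0, 5, 1 / 60)                        # minutes, 1 s sampling
cp = 5.0 * (np.exp(-0.5 * t) - np.exp(-4.0 * t))   # toy biexponential VIF
tumor_voxel = gkm_tissue_curve(t, ktrans=0.25, kep=0.6, cp=cp)
```

Assigning different (Ktrans, kep) pairs per voxel of the tissue model is what produces the heterogeneously enhancing lesion in the synthesized exam.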
Clinical Investigation of the Dopaminergic System with PET and FLUORINE-18-FLUORO-L-DOPA.
NASA Astrophysics Data System (ADS)
Oakes, Terrence Rayford
1995-01-01
Positron Emission Tomography (PET) is a tool that provides quantitative physiological information. It is valuable both in a clinical environment, where information is sought for an individual, and in a research environment, to answer more fundamental questions about physiology and disease states. PET is particularly attractive compared to other nuclear medicine imaging techniques in cases where the anatomical regions of interest are small or when true metabolic rate constants are required. One example with both of these requirements is the investigation of Parkinson's Disease, which is characterized as a presynaptic motor function deficit affecting the striatum. As dopaminergic neurons die, the ability of the striatum to affect motor function decreases. The extent of functional neuronal damage in the small sub-structures may be ascertained by measuring the ability of the caudate and putamen to trap and store dopamine, a neurotransmitter. PET is able to utilize a tracer of dopamine activity, 18F-fluoro-L-DOPA, to quantitate the viability of the striatum. This thesis work deals with implementing and optimizing the many different elements that compose a PET study of the dopaminergic system, including: radioisotope production; conversion of aqueous [18F]fluoride into [18F]F2; synthesis of 18F-fluoro-L-DOPA; details of the PET scan itself; measurements to estimate the radiation dosimetry; accurate measurement of a plasma input function; and the quantitation of dopaminergic activity in normal human subjects as well as in Parkinson's Disease patients.
Case studies in Bayesian microbial risk assessments.
Kennedy, Marc C; Clough, Helen E; Turner, Joanne
2009-12-21
The quantification of uncertainty and variability is a key component of quantitative risk analysis. Recent advances in Bayesian statistics make it ideal for integrating multiple sources of information, of different types and quality, and providing a realistic estimate of the combined uncertainty in the final risk estimates. We present two case studies related to foodborne microbial risks. In the first, we combine models to describe the sequence of events resulting in illness from consumption of milk contaminated with VTEC O157. We used Monte Carlo simulation to propagate uncertainty in some of the inputs to computer models describing the farm and pasteurisation process. Resulting simulated contamination levels were then assigned to consumption events from a dietary survey. Finally we accounted for uncertainty in the dose-response relationship and uncertainty due to limited incidence data to derive uncertainty about yearly incidences of illness in young children. Options for altering the risk were considered by running the model with different hypothetical policy-driven exposure scenarios. In the second case study we illustrate an efficient Bayesian sensitivity analysis for identifying the most important parameters of a complex computer code that simulated VTEC O157 prevalence within a managed dairy herd. This was carried out in 2 stages, first to screen out the unimportant inputs, then to perform a more detailed analysis on the remaining inputs. The method works by building a Bayesian statistical approximation to the computer code using a number of known code input/output pairs (training runs). We estimated that the expected total number of children aged 1.5-4.5 who become ill due to VTEC O157 in milk is 8.6 per year, with 95% uncertainty interval (0,11.5). The most extreme policy we considered was banning on-farm pasteurisation of milk, which reduced the estimate to 6.4 with 95% interval (0,11). In the second case study the effective number of inputs was reduced from 30 to 7 in the screening stage, and just 2 inputs were found to explain 82.8% of the output variance. A combined total of 500 runs of the computer code were used. These case studies illustrate the use of Bayesian statistics to perform detailed uncertainty and sensitivity analyses, integrating multiple information sources in a way that is both rigorous and efficient.
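A toy two-stage Monte Carlo propagation in the spirit of the first case study is sketched below; all distributions and parameter values are illustrative assumptions, not the study's farm or pasteurisation models.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# stage 1: simulated contamination and consumption (toy distributions)
conc = rng.lognormal(mean=-2.0, sigma=1.0, size=n)       # organisms per mL
serving = rng.normal(200, 50, size=n).clip(min=0)        # mL consumed
dose = conc * serving

# stage 2: uncertain dose-response (exponential model, toy parameter)
r = rng.beta(2, 200, size=n)
p_ill = 1 - np.exp(-r * dose)

print(np.percentile(p_ill, [2.5, 50, 97.5]))   # uncertainty interval
```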
Arterial input function derived from pairwise correlations between PET-image voxels.
Schain, Martin; Benjaminsson, Simon; Varnäs, Katarina; Forsberg, Anton; Halldin, Christer; Lansner, Anders; Farde, Lars; Varrone, Andrea
2013-07-01
A metabolite-corrected arterial input function is a prerequisite for quantification of positron emission tomography (PET) data by compartmental analysis. This quantitative approach is also necessary for radioligands without suitable reference regions in the brain. The measurement is laborious and requires cannulation of a peripheral artery, a procedure that can be associated with patient discomfort and potential adverse events. A non-invasive procedure for obtaining the arterial input function is thus preferable. In this study, we present a novel method to obtain image-derived input functions (IDIFs). The method is based on calculation of the Pearson correlation coefficient between the time-activity curves of voxel pairs in the PET image to localize voxels displaying blood-like behavior. The method was evaluated using data obtained in human studies with the radioligands [(11)C]flumazenil and [(11)C]AZ10419369, and its performance was compared with three previously published methods. The distribution volumes (VT) obtained using IDIFs were compared with those obtained using traditional arterial measurements. Overall, the agreement in VT was good (∼3% difference) for input functions obtained using the pairwise correlation approach. This approach performed similarly to or even better than the other methods and could be considered in applied clinical studies. Applications to other radioligands are needed for further verification.
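A minimal sketch of the voxel-pair correlation idea is given below; the scoring heuristic and the top-voxel averaging are assumptions for illustration, not the authors' exact selection rule, and no metabolite correction is included.

```python
import numpy as np

def image_derived_input(tacs, n_blood=50):
    """tacs: (n_voxels, n_frames) time-activity curves.
    Score each voxel by how strongly it co-varies with its most
    correlated peers, then average the top scorers into an IDIF."""
    z = (tacs - tacs.mean(1, keepdims=True)) / tacs.std(1, keepdims=True)
    corr = z @ z.T / tacs.shape[1]              # pairwise Pearson matrix
    score = np.sort(corr, axis=1)[:, -n_blood:].mean(1)
    blood_voxels = np.argsort(score)[-n_blood:]
    return tacs[blood_voxels].mean(0)           # uncorrected IDIF
```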
On the contribution of PRIDE-JUICE to Jovian system ephemerides
NASA Astrophysics Data System (ADS)
Dirkx, D.; Gurvits, L. I.; Lainey, V.; Lari, G.; Milani, A.; Cimò, G.; Bocanegra-Bahamon, T. M.; Visser, P. N. A. M.
2017-11-01
The Jupiter Icy Moons Explorer (JUICE) mission will perform detailed measurements of the properties of the Galilean moons, with a nominal in-system science-mission duration of about 3.5 years. Using both the radio tracking data and (Earth- and JUICE-based) optical astrometry, the dynamics of the Galilean moons will be measured to unprecedented accuracy. This will provide crucial input to the determination of the ephemerides and physical properties of the system, most notably the dissipation in Io and Jupiter. The data from the Planetary Radio Interferometry and Doppler Experiment (PRIDE) will provide the lateral position of the spacecraft in the International Celestial Reference Frame (ICRF). In this article, we analyze the relative quantitative influence of the JUICE-PRIDE observables on the determination of the ephemerides of the Jovian system and the associated physical parameters. We perform a covariance analysis for a broad range of mission and system characteristics. We analyze the influence of VLBI data quality and observation planning, as well as the influence of JUICE orbit determination quality. This provides key input for the further development of PRIDE observational planning and ground segment development. Our analysis indicates that the VLBI data are especially important for constraining the dynamics of Ganymede and Callisto perpendicular to their orbital planes. Also, the use of the VLBI data makes the uncertainty in the ephemerides less dependent on the error in the orbit determination of the JUICE spacecraft itself. Furthermore, we find that optical astrometry data of especially Io using the JANUS instrument will be crucial for stabilizing the solution of the normal equations. Knowledge of the dissipation in the Jupiter system cannot be improved using satellite dynamics obtained from JUICE data alone; the uncertainty in Io's dissipation obtained from our simulations is similar to the present level of uncertainty.
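As a pocket illustration of a covariance analysis, the formal parameter covariance for a linearized observation model can be computed directly; the design matrix and weights below are toy stand-ins, not the actual JUICE-PRIDE partials.

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.normal(size=(200, 4))          # toy design matrix (partials)
W = np.eye(200) / 0.001 ** 2           # observation weights (toy sigma)
P = np.linalg.inv(H.T @ W @ H)         # formal parameter covariance
print(np.sqrt(np.diag(P)))             # formal 1-sigma uncertainties
```

Repeating this solve for different assumed data qualities and observation schedules is what lets a covariance study rank their influence without simulating the full estimation.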
Quantifying and managing uncertainty in operational modal analysis
NASA Astrophysics Data System (ADS)
Au, Siu-Kui; Brownjohn, James M. W.; Mottershead, John E.
2018-03-01
Operational modal analysis aims at identifying the modal properties (natural frequency, damping, etc.) of a structure using only the (output) vibration response measured under ambient conditions. Highly economical and feasible, it is becoming common practice in full-scale vibration testing. In the absence of (input) loading information, however, the modal properties have significantly higher uncertainty than their counterparts identified from free or forced vibration (known-input) tests. Mastering the relationship between identification uncertainty and test configuration is of great interest to both scientists and engineers, e.g., for achievable precision limits and test planning/budgeting. Addressing this challenge beyond the current state of the art, which is mostly concerned with identification algorithms, this work obtains closed-form analytical expressions for the identification uncertainty (variance) of modal parameters that fundamentally explain the effect of test configuration. Collectively referred to as 'uncertainty laws', these expressions are asymptotically correct for well-separated modes, small damping, and long data, and remain applicable in non-asymptotic situations. They provide a scientific basis for the planning and standardization of ambient vibration tests, where factors such as channel noise, sensor number, and location can be quantitatively accounted for. The work is reported comprehensively with verification through synthetic and experimental data (laboratory and field), scientific implications, and practical guidelines for planning ambient vibration tests.
NASA Astrophysics Data System (ADS)
Elliott, Jonathan T.; Wright, Eric A.; Tichauer, Kenneth M.; Diop, Mamadou; Morrison, Laura B.; Pogue, Brian W.; Lee, Ting-Yim; St. Lawrence, Keith
2012-12-01
In many cases, kinetic modeling requires that the arterial input function (AIF)—the time-dependent arterial concentration of a tracer—be characterized. A straightforward method to measure the AIF of red and near-infrared optical dyes (e.g., indocyanine green) using a pulse oximeter is presented. The method is motivated by the ubiquity of pulse oximeters used in both preclinical and clinical applications, as well as the gap in currently available technologies to measure AIFs in small animals. The method is based on quantifying the interference that is observed in the derived arterial oxygen saturation (SaO2) following a bolus injection of a light-absorbing dye. In other words, the change in SaO2 can be converted into dye concentration knowing the chromophore-specific extinction coefficients, the true arterial oxygen saturation, and total hemoglobin concentration. A simple error analysis was performed to highlight potential limitations of the approach, and a validation of the method was conducted in rabbits by comparing the pulse oximetry method with the AIF acquired using a pulse dye densitometer. Considering that determining the AIF is required for performing quantitative tracer kinetics, this method provides a flexible tool for measuring the arterial dye concentration that could be used in a variety of applications.
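A minimal sketch of inverting the two-wavelength pulsatile Beer-Lambert ratio for dye concentration is given below; the algebra follows the idea stated in the abstract, but the function and extinction-coefficient bookkeeping are hypothetical, not the authors' code, and the coefficients themselves must come from literature tables.

```python
# R: measured pulsatile absorbance ratio (red/infrared); s_true: true
# arterial saturation; thb: total hemoglobin. eps holds extinction
# coefficients at the red (r) and infrared (ir) wavelengths.
def dye_concentration(R, s_true, thb, eps):
    hb_r = eps["hb_r"] * (1 - s_true) + eps["hbo2_r"] * s_true
    hb_ir = eps["hb_ir"] * (1 - s_true) + eps["hbo2_ir"] * s_true
    # model: R = (thb*hb_r + eps_dye_r*C) / (thb*hb_ir + eps_dye_ir*C),
    # solved for the dye concentration C
    return thb * (hb_r - R * hb_ir) / (R * eps["dye_ir"] - eps["dye_r"])
```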
Løhre, Camilla; Vik Halleraker, Hilde; Barth, Tanja
2017-01-01
Interest in and ongoing research on the utilisation of lignin as a feedstock for the production of renewable and sustainable aromatics are expanding, and the approach shows great potential. This study investigates the applicability of semi-continuously organosolv-extracted lignin in Lignin-to-Liquid (LtL) solvolysis, using formic acid as hydrogen donor and water as solvent under high-temperature, high-pressure (HTHP) conditions. The high purity of the organosolv lignin provides high conversion yields of up to 94% based on lignin mass input. The formic acid input is a dominating parameter in lignin conversion. Carbon balance calculations of LtL-solvolysis experiments also indicate that formic acid can give a net carbon contribution to the bio-oils, in addition to acting as a hydrogenation agent. Compound-specific quantification of the ten most abundant components in the LtL-oils describes up to 10% of the bio-oil composition, and reaction temperature is shown to be the dominating parameter for the structures present. The structural and quantitative results from this study identify components of considerable value in the LtL-oil and support the position of this oil as a potentially important source of building blocks for the chemical and pharmaceutical industry. PMID:28124994
Enabling multi-faceted measures of success for protected area management in Trinidad and Tobago.
Granderson, Ainka A
2011-08-01
A key challenge has been to define and measure "success" in managing protected areas. A case study was conducted of efforts to evaluate the new protected area management system in Trinidad and Tobago using a participatory approach. The aim of the case study was to (1) examine whether stakeholder involvement better captures the multi-faceted nature of success and (2) identify the role and influence of various stakeholder groups in this process. An holistic and systematic framework was developed with stakeholder input that facilitated the integration of expert and lay knowledge, a broad emphasis on ecological, socio-economic, and institutional aspects, and the use of both quantitative and qualitative data allowing the evaluation to capture the multi-faceted nature and impacts of protected area management. Input from primary stakeholders, such as local communities, was critical as they have a high stake in protected area outcomes. Secondary and external stakeholders, including government agencies, non-governmental organizations, academia and the private sector, were also important in providing valuable technical assistance and serving as mediators. However, a lack of consensus over priorities, politics, and limited stakeholder capacity and data access pose significant barriers to engaging stakeholders to effectively measure the management success of protected areas. Copyright © 2011 Elsevier Ltd. All rights reserved.
A 16-Channel Nonparametric Spike Detection ASIC Based on EC-PC Decomposition.
Wu, Tong; Xu, Jian; Lian, Yong; Khalili, Azam; Rastegarnia, Amir; Guan, Cuntai; Yang, Zhi
2016-02-01
In extracellular neural recording experiments, detecting neural spikes is an important step for reliable information decoding. A successful implementation in integrated circuits can achieve substantial data volume reduction, potentially enabling wireless operation and closed-loop systems. In this paper, we report a 16-channel neural spike detection chip based on a customized spike detection method named the exponential component-polynomial component (EC-PC) algorithm. This algorithm features reliable prediction of spikes by applying a probability threshold. The chip takes raw data as input and outputs three data streams simultaneously: field potentials, band-pass filtered neural data, and spiking probability maps. The algorithm parameters are configured on-chip automatically based on the input data, which avoids manual parameter tuning. The chip has been tested with both in vivo experiments for functional verification and bench-top experiments for quantitative performance assessment. The system has a total power consumption of 1.36 mW and occupies an area of 6.71 mm² for 16 channels. When tested on synthesized datasets with spikes and noise segments extracted from in vivo preparations and scaled according to required precisions, the chip outperforms other detectors. A credit-card-sized prototype board has been developed to provide power and data management through a USB port.
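For orientation, a generic median-based amplitude threshold detector, a common software baseline in this field, is sketched below; this is explicitly not the EC-PC algorithm, whose probability maps require the exponential/polynomial decomposition described in the paper.

```python
import numpy as np

def detect_spikes(x, fs, k=4.5, refractory_ms=1.0):
    """Threshold crossings on a band-passed trace x sampled at fs Hz."""
    sigma = np.median(np.abs(x)) / 0.6745      # robust noise std estimate
    idx = np.flatnonzero(np.abs(x) > k * sigma)
    gap = int(refractory_ms * 1e-3 * fs)
    spikes, last = [], -np.inf
    for i in idx:
        if i - last > gap:                     # enforce refractory period
            spikes.append(i); last = i
    return np.array(spikes)
```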
NASA Astrophysics Data System (ADS)
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing the information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the parameter values are intended to be representative of current knowledge, and of the uncertainty in that knowledge, rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in complex, high-dimensional ecosystem models such as the RWMS model, the system dynamics can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, non-linearities, and non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used for global sensitivity analysis in these high-dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.
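For readers unfamiliar with variance-based sensitivity measures, a minimal sketch of the underlying idea follows, using a toy non-linear model and a crude conditional-variance estimate rather than the MARS or FAST machinery; the model function and parameter names are illustrative assumptions, not the RWMS model.

```python
# Crude first-order global sensitivity screen on a toy non-linear model.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
# Hypothetical inputs: burrow excavation rate, plant uptake rate, decay constant
x = rng.uniform(0.0, 1.0, size=(n, 3))
# Toy non-linear, non-monotonic response standing in for the transport model
y = np.sin(2 * np.pi * x[:, 0]) ** 2 + 5.0 * x[:, 1] * x[:, 2]

def first_order_index(xi, y, bins=50):
    """Variance of the conditional mean E[y|xi], normalized by Var[y]."""
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

for i, name in enumerate(["burrow_rate", "uptake_rate", "decay_const"]):
    print(f"S1[{name}] ~ {first_order_index(x[:, i], y):.2f}")
```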
Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking.
Yeoh, Ivan L; Reinhall, Per G; Berg, Martin C; Chizeck, Howard J; Seibel, Eric J
2017-09-01
A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward, exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided for a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.
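As an illustration of the run-to-run idea, here is a minimal sketch in which a single scalar energy-like measurement per repetition drives a naive hill-climbing update of one input parameter; the plant model, step size, and optimum are hypothetical, not the paper's device model or its exact-inversion scheme.

```python
# Naive run-to-run optimization: one scalar measurement per run updates the input.
import numpy as np

def run_process(u):
    # Hypothetical repetitive process: energy measure peaks at u = 0.7
    return -(u - 0.7) ** 2 + 1.0 + np.random.normal(0, 1e-3)

u, step = 0.2, 0.05
e_prev = run_process(u)
for k in range(40):
    u_new = u + step
    e_new = run_process(u_new)
    if e_new < e_prev:      # got worse: reverse direction and shrink the step
        step = -0.5 * step
    u, e_prev = u_new, e_new
print(f"converged input ~ {u:.3f}")
```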
Predicting Loss-of-Control Boundaries Toward a Piloting Aid
NASA Technical Reports Server (NTRS)
Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje
2012-01-01
This work presents an approach to predicting loss-of-control with the goal of providing the pilot a decision aid focused on maintaining the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based predictive control boundary estimation algorithm and an adaptive prediction method to estimate Markov model parameters in real-time. The data-based loss-of-control boundary estimation algorithm estimates the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction model generates estimates of the system Markov Parameters, which are used by the data-based loss-of-control boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.
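A minimal sketch of one ingredient named above, estimating system Markov parameters (impulse-response terms) from input/output data by least squares; the FIR model order and the toy data-generating system are assumptions, not the paper's aircraft model or adaptive update law.

```python
# Least-squares estimate of the first m Markov parameters from input/output data:
# y[k] ~ sum_{i=0}^{m-1} h[i] * u[k-i]
import numpy as np

rng = np.random.default_rng(5)
n, m = 400, 8                       # samples, number of Markov parameters
u = rng.normal(size=n)              # excitation input
h_true = 0.8 ** np.arange(m)        # toy impulse response
y = np.convolve(u, h_true)[:n] + 0.01 * rng.normal(size=n)

# Regression matrix of lagged inputs (column i is u delayed by i samples)
Phi = np.column_stack([np.concatenate([np.zeros(i), u[:n - i]]) for i in range(m)])
h_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("estimated Markov parameters:", h_hat.round(3))
```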
Remote sensing study of Maumee River effects on Lake Erie
NASA Technical Reports Server (NTRS)
Svehla, R.; Raquet, C.; Shook, D.; Salzman, J.; Coney, T.; Wachter, D.; Gedney, R.
1975-01-01
The effects of river inputs on boundary waters were studied in partial support of the task to assess the significance of river inputs into receiving waters, dispersion of pollutants, and water quality. The effects of the spring runoff of the Maumee River on Lake Erie were assessed by a combination of ship survey and remote sensing techniques. The imagery obtained from a multispectral scanner of the west basin of Lake Erie is discussed: this clearly showed the distribution of particulates throughout the covered area. This synoptic view, in addition to its qualitative value, is very useful in selecting sampling stations for shipboard in situ measurements, and for extrapolating these quantitative results throughout the area of interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreev, V. V., E-mail: vvandreev@mail.ru; Vasileska, I., E-mail: ivonavasileska@yahoo.com; Korneeva, M. A., E-mail: korneevama@mail.ru
A pulse-periodic 2.45-GHz electron-cyclotron resonance plasma source on the basis of a permanent-magnet mirror trap has been constructed and tested. Variations in the discharge parameters and the electron temperature of argon plasma have been investigated in the argon pressure range of 1 × 10⁻⁴ to 4 × 10⁻³ Torr at a net pulsed input microwave power of up to 600 W. The plasma electron temperature in the above ranges of gas pressures and input powers has been measured by a Langmuir probe and determined using optical emission spectroscopy (OES) from the intensity ratios of spectral lines. The OES results agree qualitatively and quantitatively with the data obtained using the double probe.
NASA Technical Reports Server (NTRS)
Jones, Denise R.
1990-01-01
A piloted simulation study was conducted comparing three different input methods for interfacing to a large-screen, multiwindow, whole-flight-deck display for management of transport aircraft systems. The thumball concept utilized a miniature trackball embedded in a conventional side-arm controller. The touch screen concept provided data entry through a capacitive touch screen. The voice concept utilized a speech recognition system with input through a head-worn microphone. No single input concept emerged as the most desirable method of interacting with the display. Subjective results, however, indicate that the voice concept was the most preferred method of data entry and had the most potential for future applications. The objective results indicate that, overall, the touch screen concept was the most effective input method. There were also significant differences in the time required to perform specific tasks depending on the input concept employed, with each concept providing better performance on a specific task. These results suggest that a system combining all three input concepts might provide the most effective method of interaction.
Brunie, Aurélie; Wamala-Mucheri, Patricia; Otterness, Conrad; Akol, Angela; Chen, Mario; Bufumbo, Leonard; Weaver, Mark
2014-01-01
Introduction: In the face of global health worker shortages, community health workers (CHWs) are an important health care delivery strategy for underserved populations. In Uganda, community-based programs often use volunteer CHWs to extend services, including family planning, in rural areas. This study examined factors related to CHW motivation and level of activity in 3 family planning programs in Uganda. Methods: Data were collected between July and August 2011, and sources comprised 183 surveys with active CHWs, in-depth interviews (IDIs) with 43 active CHWs and 5 former CHWs, and service statistics records. Surveys included a discrete choice experiment (DCE) to elicit CHW preferences for selected program inputs. Results: Service statistics indicated an average of 56 visits with family planning clients per surveyed CHW over the 3-month period prior to data collection. In the survey, new skills and knowledge, perceived impact on the community, and enhanced status were the main positive aspects of the job reported by CHWs; the main challenges related to transportation. Multivariate analyses identified 2 correlates of CHWs being highly vs. less active (in terms of number of client visits): experiencing problems with supplies and not collaborating with peers. DCE results showed that provision of a package including a T-shirt, badge, and bicycle was the program input CHWs preferred, followed by a mobile phone (without airtime). IDI data reinforced and supplemented these quantitative findings. Social prestige, social responsibility, and aspirations for other opportunities were important motivators, while main challenges related to transportation and commodity stockouts. CHWs had complex motivations for wanting better compensation, including offsetting time and transportation costs, providing for their families, and feeling appreciated for their efforts. Conclusion: Volunteer CHW programs in Uganda and elsewhere need to carefully consider appropriate combinations of financial and nonfinancial inputs for optimal results. PMID:25276566
Step-control of electromechanical systems
Lewis, Robert N.
1979-01-01
The response of an automatic control system to a general input signal is improved by applying a test input signal, observing the response to the test input signal and determining correctional constants necessary to provide a modified input signal to be added to the input to the system. A method is disclosed for determining correctional constants. The modified input signal, when applied in conjunction with an operating signal, provides a total system output exhibiting an improved response. This method is applicable to open-loop or closed-loop control systems. The method is also applicable to unstable systems, thus allowing controlled shut-down before dangerous or destructive response is achieved and to systems whose characteristics vary with time, thus resulting in improved adaptive systems.
Luo, Zhongkui; Feng, Wenting; Luo, Yiqi; Baldock, Jeff; Wang, Enli
2017-10-01
Soil organic carbon (SOC) dynamics are regulated by the complex interplay of climatic, edaphic and biotic conditions. However, the interrelation of SOC and these drivers and their potential connection networks are rarely assessed quantitatively. Using observations of SOC dynamics with detailed soil properties from 90 field trials at 28 sites under different agroecosystems across the Australian cropping regions, we investigated the direct and indirect effects of climate, soil properties, carbon (C) inputs and soil C pools (a total of 17 variables) on the SOC change rate (r_C, Mg C ha⁻¹ yr⁻¹). Among these variables, the most influential on r_C were the average C input amount, annual precipitation, and the total SOC stock at the beginning of the trials. Overall, C inputs (including C input amount and pasture frequency in the crop rotation system) accounted for 27% of the relative influence on r_C, followed by climate (precipitation and temperature) at 25%, soil C pools (pool size and composition) at 24%, and soil properties (such as cation exchange capacity, clay content and bulk density) at 24%. Path analysis identified a network of intercorrelations among climate, soil properties, C inputs and soil C pools in determining r_C. The direct correlation of r_C with climate was significantly weakened when the effects of soil properties and C pools were removed, and vice versa. These results reveal the relative importance of climate, soil properties, C inputs and C pools and their complex interconnections in regulating SOC dynamics. Neglecting the impact of changes in soil properties, C pool composition and C input (quantity and quality) on SOC dynamics is likely one of the main sources of uncertainty in SOC predictions from process-based SOC models. © 2017 John Wiley & Sons Ltd.
Hou, Xiao-bin; Hu, Yong-cheng; He, Jin-quan
2013-02-01
To investigate the feasibility of determining the surface density of arginine-glycine-aspartic acid (RGD) peptides grafted onto allogeneic bone by an isotopic tracing method involving labeling these peptides with (125)I, evaluating the impact of the input concentration of RGD peptides on surface density and establishing the correlation between surface density and input concentration. A synthetic RGD-containing polypeptide (EPRGDNYR) was labeled with (125)I and its specific radioactivity calculated. Reactive solutions of RGD peptide, with radioactive (125)I-RGD as probe, were prepared at input concentrations of 0.01 mg/mL, 0.10 mg/mL, 0.50 mg/mL, 1.00 mg/mL, 2.00 mg/mL and 4.00 mg/mL. Using 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide as a cross-linking agent, reactions were induced by placing allogeneic bone fragments into reactive solutions of RGD peptide of different input concentrations. On completion of the reactions, the surface densities of RGD peptides grafted onto the allogeneic bone fragments were calculated from the radioactivity and surface areas of the bone fragments. The impact of input concentration of RGD peptides on surface density was measured and a curve constructed. Measurements with a γ-counter showed that the RGD peptides had been labeled successfully with (125)I. The allogeneic bone fragments were radioactive after the reaction, demonstrating that the RGD peptides had been successfully grafted onto their surfaces. It was also found that surface density increased with increasing input concentration. It was concluded that the surface density of RGD peptides is quantitatively related to their input concentration: with increasing input concentration, the surface density gradually increases to a saturation value. © 2013 Chinese Orthopaedic Association and Wiley Publishing Asia Pty Ltd.
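The unit arithmetic behind the isotope-tracing calculation can be sketched as follows; all numbers are hypothetical placeholders, not values from the study.

```python
# Converting measured radioactivity on a bone fragment into a grafted-peptide
# surface density. All values below are hypothetical illustrations.
counts_per_min = 1.2e5          # gamma counts measured on the fragment (cpm)
specific_activity = 4.0e6       # cpm per mg of (125)I-labeled RGD peptide
surface_area_cm2 = 2.5          # estimated fragment surface area (cm^2)

grafted_mass_mg = counts_per_min / specific_activity
surface_density = grafted_mass_mg / surface_area_cm2   # mg per cm^2
print(f"surface density ~ {surface_density:.4f} mg/cm^2")
```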
Hormuth, David A; Skinner, Jack T; Does, Mark D; Yankeelov, Thomas E
2014-05-01
Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) can quantitatively and qualitatively assess physiological characteristics of tissue. Quantitative DCE-MRI requires an estimate of the time rate of change of the concentration of the contrast agent in the blood plasma, the vascular input function (VIF). Measuring the VIF in small animals is notoriously difficult as it requires high temporal resolution images, limiting the achievable number of slices, field-of-view, spatial resolution, and signal-to-noise ratio. Alternatively, a population-averaged VIF could be used to mitigate the acquisition demands in studies aimed at investigating, for example, tumor vascular characteristics. Thus, the overall goal of this manuscript is to determine how the kinetic parameters estimated with a population-based VIF differ from those estimated with an individual VIF. Eight rats bearing gliomas were imaged before, during, and after an injection of Gd-DTPA. K(trans), v(e), and v(p) were extracted from signal-time curves of tumor tissue using both individual and population-averaged VIFs. Extended model voxel estimates of K(trans) and v(e) in all animals had concordance correlation coefficients (CCC) ranging from 0.69 to 0.98 and Pearson correlation coefficients (PCC) ranging from 0.70 to 0.99. Additionally, standard model estimates resulted in CCCs ranging from 0.81 to 0.99 and PCCs ranging from 0.98 to 1.00, supporting the use of a population-based VIF when an individual VIF is not available. Copyright © 2014 Elsevier Inc. All rights reserved.
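The agreement statistic used above, Lin's concordance correlation coefficient, is straightforward to compute; a minimal sketch with hypothetical K(trans) estimates follows.

```python
# Lin's concordance correlation coefficient (CCC) between two estimate sets.
import numpy as np

def concordance_cc(x, y):
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population (1/n) variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical Ktrans values from individual vs. population-averaged VIFs
ktrans_individual = np.array([0.12, 0.30, 0.25, 0.41, 0.18])
ktrans_population = np.array([0.14, 0.28, 0.27, 0.38, 0.21])
print(f"CCC = {concordance_cc(ktrans_individual, ktrans_population):.3f}")
```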
An accelerated training method for back propagation networks
NASA Technical Reports Server (NTRS)
Shelton, Robert O. (Inventor)
1993-01-01
The principal objective is to provide a training procedure for a feed-forward, back propagation neural network which greatly accelerates the training process. A set of orthogonal singular vectors is determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. Novelty exists in the method of extracting, from the set of input data, a set of features which can serve to represent the input data in a simplified manner, thus greatly reducing the time and expense of training the system.
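A minimal sketch of the preprocessing idea, projecting the input matrix onto its leading singular vectors so that training sees decorrelated, variance-ordered features; the choice of how many components to keep is an assumption, not the patent's prescription.

```python
# Project network inputs onto the leading right singular vectors of the
# (centered) input matrix to obtain a simplified, variance-ordered representation.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))        # 200 training inputs, 10 raw features
Xc = X - X.mean(axis=0)               # center so variances are meaningful

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 4                                 # keep the k highest-variance directions
features = Xc @ Vt[:k].T              # simplified representation of the inputs

print("projection std-devs:", (s[:k] / np.sqrt(len(X) - 1)).round(3))
```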
Knierim, James J; Neunuebel, Joshua P; Deshmukh, Sachin S
2014-02-05
The hippocampus receives its major cortical input from the medial entorhinal cortex (MEC) and the lateral entorhinal cortex (LEC). It is commonly believed that the MEC provides spatial input to the hippocampus, whereas the LEC provides non-spatial input. We review new data which suggest that this simple dichotomy between 'where' versus 'what' needs revision. We propose a refinement of this model, which is more complex than the simple spatial-non-spatial dichotomy. MEC is proposed to be involved in path integration computations based on a global frame of reference, primarily using internally generated, self-motion cues and external input about environmental boundaries and scenes; it provides the hippocampus with a coordinate system that underlies the spatial context of an experience. LEC is proposed to process information about individual items and locations based on a local frame of reference, primarily using external sensory input; it provides the hippocampus with information about the content of an experience.
C. Mann; J.D. Absher
2007-01-01
The scientific inputs to management of recreation areas in Germany have been largely determined by ecologically oriented quantitative impact and conflict studies with an emphasis on nature protection. Today, however, Germany's recreational situation has changed. New activities and increased participation by people seeking different recreational experiences challenge...
Relationships between net primary productivity and forest stand age in U.S. forests
Liming He; Jing M. Chen; Yude Pan; Richard Birdsey; Jens Kattge
2012-01-01
Net primary productivity (NPP) is a key flux in the terrestrial ecosystem carbon balance, as it summarizes the autotrophic input into the system. Forest NPP varies predictably with stand age, and quantitative information on the NPP-age relationship for different regions and forest types is therefore fundamentally important for forest carbon cycle modeling. We used four...
Structural Uncertainties in Numerical Induction Models
2006-07-01
“divide and conquer” modelling approach. Analytical inputs are then assessments, quantitative or qualitative, of the value, performance, or some...said to be naïve because it relies heavily on the inductive method itself. Sophisticated Induction (Logical Positivism): This form of induction...falters. Popper's Falsification: Karl Popper around 1959 introduced a variant of the above Logical Positivism, known as the inductive-hypothetico
Scholey, J J; Wilcox, P D; Wisnom, M R; Friswell, M I
2009-06-01
A model for quantifying the performance of acoustic emission (AE) systems on plate-like structures is presented. Employing a linear transfer-function approach, the model is applicable to both isotropic and anisotropic materials. The model requires several inputs, including source waveforms, phase velocity and attenuation. It is recognised that these variables may not be readily available; efficient measurement techniques are therefore presented for obtaining phase velocity and attenuation in a form that can be exploited directly in the model. Inspired by previously documented methods, the application of these techniques is examined and some important implications for propagation characterisation in plates are discussed. Example measurements are made on isotropic and anisotropic plates and, where possible, comparisons with numerical solutions are made. By inputting experimentally obtained data into the model, quantitative system metrics are examined for different threshold values and sensor locations. By producing plots describing areas of hit success and source location error, the ability to measure the performance of different AE system configurations is demonstrated. This quantitative approach will help to place AE testing on a more solid foundation, underpinning its use in industrial AE applications.
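A minimal sketch of the linear transfer-function propagation step, advancing a source waveform through frequency-dependent attenuation and phase velocity; the dispersion and attenuation curves here are smooth hypothetical functions, not measured plate data.

```python
# Frequency-domain propagation of a source waveform over distance d:
# received spectrum = source spectrum * exp(-alpha(f)*d) * exp(-i*2*pi*f*d/c(f))
import numpy as np

fs, n, d = 5e6, 4096, 0.3                    # sample rate (Hz), samples, path (m)
t = np.arange(n) / fs
source = np.exp(-((t - 20e-6) / 3e-6) ** 2) * np.sin(2 * np.pi * 300e3 * t)

f = np.fft.rfftfreq(n, 1 / fs)
c = 3000.0 + 0.5e-3 * f                      # hypothetical phase velocity (m/s)
alpha = 1e-6 * f                             # hypothetical attenuation (Np/m)

H = np.exp(-alpha * d) * np.exp(-2j * np.pi * f * d / c)
received = np.fft.irfft(np.fft.rfft(source) * H, n)
print(f"peak amplitude: source {source.max():.2f}, received {received.max():.2f}")
```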
Knutsson, Linda; Bloch, Karin Markenroth; Holtås, Stig; Wirestam, Ronnie; Ståhlberg, Freddy
2008-05-01
To identify regional arterial input functions (AIFs) using factor analysis of dynamic studies (FADS) when quantification of perfusion is performed using model-free arterial spin labelling. Five healthy volunteers and one patient were examined on a 3-T Philips unit using quantitative STAR labelling of arterial regions (QUASAR). Two sets of images were retrieved, one in which the arterial signal had been crushed and another in which it was retained. FADS was applied to the arterial signal curves to acquire the AIFs. Perfusion maps were obtained using block-circulant SVD deconvolution and regional AIFs obtained by FADS. In the volunteers, the ASL experiment was repeated within 24 h. The patient was also examined using dynamic susceptibility contrast MRI. In the healthy volunteers, CBF was 64 ± 10 ml/(min·100 g) (mean ± SD) in GM and 24 ± 4 ml/(min·100 g) in WM, while the mean aBV was 0.94% in GM and 0.25% in WM. Good CBF image quality and reasonable quantitative CBF values were obtained using the combined QUASAR/FADS technique. We conclude that FADS may be a useful supplement in the evaluation of ASL data acquired using QUASAR.
Terrestrial litter inputs as determinants of food quality of organic matter in a forest stream
J.L. Meyer; C. Hax; J.B. Wallace; S.L. Eggert; J.R. Webster
2000-01-01
Inputs of leaf litter and other organic matter from the catchment exceed autochthonous production and provide an important food resource in most streams (WEBSTER & MEYER 1997, ANDERSON & SEDELL 1979). An experimental long-term exclusion of terrestrial litter inputs to a forested headwater stream (WALLACE et al. 1997) provided an opportunity to determine if the...
Methods, systems and apparatus for controlling operation of two alternating current (AC) machines
Gallegos-Lopez, Gabriel [Torrance, CA; Nagashima, James M [Cerritos, CA; Perisic, Milun [Torrance, CA; Hiti, Silva [Redondo Beach, CA
2012-02-14
A system is provided for controlling two AC machines. The system comprises a DC input voltage source that provides a DC input voltage, a voltage boost command control module (VBCCM), a five-phase PWM inverter module coupled to the two AC machines, and a boost converter coupled to the inverter module and the DC input voltage source. The boost converter is designed to supply a new DC input voltage to the inverter module having a value that is greater than or equal to a value of the DC input voltage. The VBCCM generates a boost command signal (BCS) based on modulation indexes from the two AC machines. The BCS controls the boost converter such that the boost converter generates the new DC input voltage in response to the BCS. When the two AC machines require additional voltage that exceeds the DC input voltage required to meet a combined target mechanical power required by the two AC machines, the BCS controls the boost converter to drive the new DC input voltage generated by the boost converter to a value greater than the DC input voltage.
The energy balance of the nighttime thermosphere
NASA Technical Reports Server (NTRS)
Glenar, D. A.
1977-01-01
The discrepancy between the input from the day hemisphere and the observed loss rates is discussed in terms of ion-neutral processes and gravity wave inputs. There has been considerable speculation as to the energy balance of the thermosphere and in particular about the fraction of the total energy input supplied by ultraviolet radiation. The problem is considerably simplified by considering the energy balance of the nighttime hemisphere alone. Sunrise and sunset vapor trail measurements provide data on the wind systems at the terminator boundary, and temperature measurements provide information on the vertical energy conduction. North-south winds from high latitude vapor trail measurements provide a measure of the energy input from auroral processes.
Kraus, Johanna M.; Pletcher, Leanna T.; Vonesh, James R.
2010-01-01
1. Cross-ecosystem movements of resources, including detritus, nutrients and living prey, can strongly influence food web dynamics in recipient habitats. Variation in resource inputs is thought to be driven by factors external to the recipient habitat (e.g. donor habitat productivity and boundary conditions). However, inputs of or by ‘active’ living resources may be strongly influenced by recipient habitat quality when organisms exhibit behavioural habitat selection while crossing ecosystem boundaries. 2. To examine whether behavioural responses to recipient habitat quality alter the relative inputs of ‘active’ living and ‘passive’ detrital resources to recipient food webs, we manipulated the presence of caged predatory fish and measured the biomass, energy and organic content of inputs to outdoor experimental pools from adult aquatic insects, frog eggs, terrestrial plant matter and terrestrial arthropods. 3. Caged fish reduced the biomass, energy and organic matter donated to pools by tree frog eggs by ∼70%, but did not alter insect colonisation or passive allochthonous inputs of terrestrial arthropods and plant material. Terrestrial plant matter and adult aquatic insects provided the most energy and organic matter inputs to the pools (40–50%), while terrestrial arthropods provided the least (7%). Inputs of frog eggs were relatively small but varied considerably among pools and over time (3%, range = 0–20%). Absolute and proportional amounts varied by input type. 4. Aquatic predators can thus strongly affect the magnitude of active, but not passive, inputs, and the effect of recipient habitat quality on active inputs is variable. Furthermore, some active inputs (i.e. aquatic insect colonists) can provide similar amounts of energy and organic matter as passive inputs of terrestrial plant matter, which are well known to be important. Because inputs differ in quality and in the trophic level they subsidise, proportional changes in input type could have strong effects on recipient food webs. 5. Cross-ecosystem resource inputs have previously been characterised as donor-controlled. However, control by the recipient food web could lead to greater feedback between resource flow and consumer dynamics than has been appreciated so far.
Haines, Seth S.
2015-07-13
The quantities of water and hydraulic fracturing proppant required for producing petroleum (oil, gas, and natural gas liquids) from continuous accumulations, and the quantities of water extracted during petroleum production, can be quantitatively assessed using a probabilistic approach. The water and proppant assessment methodology builds on the U.S. Geological Survey methodology for quantitative assessment of undiscovered technically recoverable petroleum resources in continuous accumulations. The U.S. Geological Survey assessment methodology for continuous petroleum accumulations includes fundamental concepts such as geologically defined assessment units, and probabilistic input values including well-drainage area, sweet- and non-sweet-spot areas, and success ratio within the untested area of each assessment unit. In addition to petroleum-related information, required inputs for the water and proppant assessment methodology include probabilistic estimates of per-well water usage for drilling, cementing, and hydraulic-fracture stimulation; the ratio of proppant to water for hydraulic fracturing; the percentage of hydraulic fracturing water that returns to the surface as flowback; and the ratio of produced water to petroleum over the productive life of each well. Water and proppant assessments combine information from recent or current petroleum assessments with water- and proppant-related input values for the assessment unit being studied, using Monte Carlo simulation, to yield probabilistic estimates of the volume of water for drilling, cementing, and hydraulic fracture stimulation; the quantity of proppant for hydraulic fracture stimulation; and the volumes of water produced as flowback shortly after well completion, and produced over the life of the well.
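A minimal sketch of the Monte Carlo combination step, drawing well counts and per-well water use from probabilistic inputs and reporting percentiles of the totals; all distributions are hypothetical illustrations, not USGS assessment inputs.

```python
# Monte Carlo combination of probabilistic inputs into total water/proppant volumes.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
wells = rng.triangular(500, 1200, 2500, n)              # untested-area well count
water_per_well = rng.lognormal(np.log(15_000), 0.4, n)  # m^3 per well (drill + frac)
proppant_ratio = rng.uniform(0.08, 0.12, n)             # tonnes proppant per m^3 water

total_water = wells * water_per_well
total_proppant = total_water * proppant_ratio
for p in (5, 50, 95):
    print(f"P{p:02d} water: {np.percentile(total_water, p):.3e} m^3, "
          f"proppant: {np.percentile(total_proppant, p):.3e} t")
```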
Piezoelectric particle accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kemp, Mark A.; Jongewaard, Erik N.; Haase, Andrew A.
2017-08-29
A particle accelerator is provided that includes a piezoelectric accelerator element, where the piezoelectric accelerator element includes a hollow cylindrical shape, and an input transducer, where the input transducer is disposed to provide an input signal to the piezoelectric accelerator element, where the input signal induces a mechanical excitation of the piezoelectric accelerator element, where the mechanical excitation is capable of generating a piezoelectric electric field proximal to an axis of the cylindrical shape, where the piezoelectric accelerator is configured to accelerate a charged particle longitudinally along the axis of the cylindrical shape according to the piezoelectric electric field.
Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor
NASA Technical Reports Server (NTRS)
Wilson, Scott D.; Reid, Terry; Schifer, Nicholas; Briggs, Maxwell
2011-01-01
Past methods of predicting net heat input needed to be validated. The validation effort pursued several paths, including improving model inputs, using test hardware to provide validation data, and validating high-fidelity models. Validation test hardware provided direct measurement of net heat input for comparison to predicted values. The predicted value of net heat input was 1.7 percent less than the measured value, and initial calculations of measurement uncertainty were 2.1 percent (under review). Lessons learned during the validation effort were incorporated into the convertor modeling approach, which improved predictions of convertor efficiency.
A study of remote sensing as applied to regional and small watersheds. Volume 1: Summary report
NASA Technical Reports Server (NTRS)
Ambaruch, R.
1974-01-01
The accuracy with which remotely sensed measurements can provide inputs to hydrologic models of watersheds is studied. A series of sensitivity analyses on continuous simulation models of three watersheds determined: (1) optimal values and permissible tolerances of inputs to achieve accurate simulation of streamflow from the watersheds; (2) which model inputs can be quantified from remote sensing, directly, indirectly, or by inference; and (3) how accurate remotely sensed measurements (from spacecraft or aircraft) must be to provide a basis for quantifying model inputs within permissible tolerances.
ParamAP: Standardized Parameterization of Sinoatrial Node Myocyte Action Potentials.
Rickert, Christian; Proenza, Catherine
2017-08-22
Sinoatrial node myocytes act as cardiac pacemaker cells by generating spontaneous action potentials (APs). Much information is encoded in sinoatrial AP waveforms, but both the analysis and the comparison of AP parameters between studies are hindered by the lack of standardized parameter definitions and the absence of automated analysis tools. Here we introduce ParamAP, a standalone cross-platform computational tool that uses a template-free detection algorithm to automatically identify and parameterize APs from text input files. ParamAP employs a graphic user interface with automatic and user-customizable input modes, and it outputs data files in text and PDF formats. ParamAP returns a total of 16 AP waveform parameters, including time intervals such as the AP duration, membrane potentials such as the maximum diastolic potential, and rates of change of the membrane potential such as the diastolic depolarization rate. ParamAP provides a robust AP detection algorithm in combination with a standardized AP parameter analysis over a wide range of AP waveforms and firing rates, owing in part to the use of an iterative algorithm for the determination of the threshold potential and the diastolic depolarization rate that is independent of the maximum upstroke velocity, a parameter that can vary significantly among sinoatrial APs. Because ParamAP is implemented in Python 3, it is also highly customizable and extensible. In conclusion, ParamAP is a powerful computational tool that facilitates quantitative analysis and enables comparison of sinoatrial APs by standardizing parameter definitions and providing an automated work flow. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
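To illustrate two of the waveform parameters ParamAP reports, here is a minimal sketch on a synthetic trace; the trace and the simple threshold-crossing detection are assumptions for illustration, not ParamAP's template-free algorithm.

```python
# Compute firing rate and maximum diastolic potential (MDP) from a synthetic trace.
import numpy as np

dt = 1e-4                                    # 10 kHz sampling (s)
t = np.arange(0.0, 2.0, dt)
vm = np.full_like(t, -60.0)                  # diastolic baseline (mV)
for t0 in np.arange(0.1, 2.0, 0.35):         # one crude "AP" every 350 ms
    vm += 85.0 * np.exp(-0.5 * ((t - t0) / 0.004) ** 2)

thr = -20.0                                  # detection threshold (mV)
above = vm >= thr
upstrokes = np.flatnonzero(~above[:-1] & above[1:])   # upward threshold crossings
cycle_lengths = np.diff(t[upstrokes])                 # s between successive APs
# MDP: most negative Vm between successive APs
mdp = [vm[i:j].min() for i, j in zip(upstrokes[:-1], upstrokes[1:])]
print(f"firing rate ~ {1 / cycle_lengths.mean():.2f} Hz, MDP ~ {np.mean(mdp):.1f} mV")
```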
NASA Astrophysics Data System (ADS)
Borsato, Eros; Marinello, Francesco; Tarolli, Paolo
2017-04-01
World population is increasing, and the human diet is becoming a considerable concern for human welfare. Natural resources are overexploited, and governments need policies for good management of the environment. Sustainable agriculture can provide some solutions, as it minimizes inputs, waste and pollution. The aim of the present study is to provide a combined analysis of different footprint approaches in order to allow comparison of different agricultural and livestock products in terms of efficiency of resource exploitation. Time is a key variable influencing the footprint. Water use efficiency, greenhouse gas emissions and energy indexes are included in this study. The study draws on indexes collected from a wide bibliography focused on different fresh agricultural products; the target is the definition of a timetable of footprints for agricultural products. Starting from a top-down perspective, an analysis of the environmental footprint of different products is an approach to understand which products can be more sustainable for the human diet. This study distinguishes different clusters and sub-clusters of vegetable and animal products. The classification is based on a comparison of water consumption in relation to yield, greenhouse gas emissions equivalent and energy for a given product quantity. Additionally, time is considered, as it affects sustainability in terms of inputs consumed over a period: the footprint is spread out in time, changing its relevance with respect to the exploitation of a resource. Ultimately, this work proposes a new basis for sustainability metrics, allowing an effective quantitative comparison of food products for a more conscious human diet.
NASA Technical Reports Server (NTRS)
Derkevorkian, Armen; Peterson, Lee; Kolaini, Ali R.; Hendricks, Terry J.; Nesmith, Bill J.
2016-01-01
An analytic approach is demonstrated to reveal potential pyroshock-driven dynamic effects causing power losses in the Thermo-Electric (TE) module bars of the Mars Science Laboratory (MSL) Multi-Mission Radioisotope Thermoelectric Generator (MMRTG). This study utilizes high-fidelity finite element analysis with SIERRA/PRESTO codes to estimate wave propagation effects due to large-amplitude, suddenly applied pyroshock loads in the MMRTG. A high-fidelity model of the TE module bar was created with approximately 30 million degrees-of-freedom (DOF). First, a quasi-static preload was applied on top of the TE module bar; then, transient tri-axial acceleration inputs were simultaneously applied to the preloaded module. The applied input acceleration signals were measured during MMRTG shock qualification tests performed at the Jet Propulsion Laboratory. An explicit finite element solver in the SIERRA/PRESTO computational environment, along with a 3000-processor parallel supercomputing framework at NASA Ames, was used for the simulation. The simulation results were investigated both qualitatively and quantitatively. The predicted shock wave propagation results provide detailed structural responses throughout the TE module bar, and key insights into the dynamic response (i.e., loads, displacements, accelerations) of critical internal spring/piston compression systems, TE materials, and internal component interfaces in the MMRTG TE module bar. They also provide confidence in the viability of this high-fidelity modeling scheme to accurately predict shock wave propagation patterns within complex structures. This analytic approach is envisioned for modeling shock-sensitive hardware susceptible to intense shock environments positioned near shock separation devices in modern space vehicles and systems.
Enhancement of CFD validation exercise along the roof profile of a low-rise building
NASA Astrophysics Data System (ADS)
Deraman, S. N. C.; Majid, T. A.; Zaini, S. S.; Yahya, W. N. W.; Abdullah, J.; Ismail, M. A.
2018-04-01
The aim of this study is to enhance the validation of a CFD exercise along the roof profile of a low-rise building. An isolated gabled-roof house with a 26.6° roof pitch was simulated to obtain the pressure coefficients around the house. Validation of CFD analysis against experimental data requires many input parameters. This study performed CFD simulations based on data from a previous study; where input parameters were not clearly stated, new input parameters were established from the open literature. The numerical simulations were performed in FLUENT 14.0 by applying the Computational Fluid Dynamics (CFD) approach based on the steady RANS equations together with the RNG k-ɛ model. The CFD results were then analysed using quantitative tests (statistical analysis) and compared with the CFD results of the previous study. The statistical results from the ANOVA test and error measures showed that the CFD results of the current study were in good agreement with, and exhibited the smallest error relative to, the previous study. All the input data used in this study can be extended to other types of CFD simulation involving wind flow over an isolated single-storey house.
670 GHz Schottky Diode Based Subharmonic Mixer with CPW Circuits and 70 GHz IF
NASA Technical Reports Server (NTRS)
Chattopadhyay, Goutam (Inventor); Schlecht, Erich T. (Inventor); Lee, Choonsup (Inventor); Lin, Robert H. (Inventor); Gill, John J. (Inventor); Sin, Seth (Inventor); Mehdi, Imran (Inventor)
2014-01-01
A coplanar waveguide (CPW) based subharmonic mixer working at 670 GHz using GaAs Schottky diodes. One example of the mixer has an LO input, an RF input and an IF output. Another possible mixer has an LO input, an IF input and an RF output. Each input or output is connected to a coplanar waveguide with a matching network. A pair of antiparallel diodes provides a signal at twice the LO frequency, which is then mixed with a second signal to produce signals having sum and difference frequencies. The output signal of interest is received after passing through a bandpass filter tuned to the frequency range of interest.
Andrew, David; Craig, A D (Bud)
2002-01-01
Nociceptive spinothalamic tract (STT) neurones in lamina I of the lumbosacral spinal cord of anaesthetized cats were characterized by recording their responses to graded mechanical stimulation with controlled forces of 10-120 g and probes of 5.0, 0.5 and 0.1 mm² contact area. Neurones were identified by antidromic activation from the contralateral thalamus, and cells that responded to noxious stimulation were categorized as either nociceptive specific (NS, n = 20) or as polymodal nociceptive (HPC, responsive to heat, pinch and cold, n = 19) based on their responses to quantitative thermal stimuli. The mean responses of the 39 units increased linearly as stimulus intensity increased, and the population stimulus-response curves evoked by each of the three probes were all significantly different from each other. Thresholds were 45 g for the 5.0 mm² probe, 30 g for the 0.5 mm² probe and 20 g for the 0.1 mm² probe. Further analysis showed that the NS neurones encoded both stimulus intensity and area (probe size) significantly better than HPC neurones in terms of their thresholds to individual probes, their peak discharge rates, their suprathreshold responsiveness and their ability to discriminate the three different probe sizes. These differences are consistent with the known differences between the mechanical encoding properties of A-fibre nociceptors, which provide the dominant inputs to NS neurones, and C-fibre nociceptors, which are the dominant inputs to HPC cells. Comparison of the stimulus-response curves of NS and HPC neurones indicated that the discharge of NS neurones better matches the psychophysics of mechanical pain sensations in humans than the discharge of the HPC neurones does. Our findings support the view that NS neurones have a prominent role in mechanical pain and sharpness, and they corroborate the concept that the lamina I STT projection comprises several discrete channels that are integrated in the forebrain to generate qualitatively distinct sensations. PMID:12482896
Rosa, Isabel M D; Ahmed, Sadia E; Ewers, Robert M
2014-06-01
Land-use and land-cover (LULC) change is one of the largest drivers of biodiversity loss and carbon emissions globally. We use the tropical rainforests of the Amazon, the Congo basin and South-East Asia as a case study to investigate spatial predictive models of LULC change. Current predictions differ in their modelling approaches, are highly variable and often poorly validated. We carried out a quantitative review of 48 modelling methodologies, considering model spatio-temporal scales, inputs, calibration and validation methods. In addition, we requested model outputs from each of the models reviewed and carried out a quantitative assessment of model performance for tropical LULC predictions in the Brazilian Amazon. We highlight existing shortfalls in the discipline and uncover three key points that need addressing to improve the transparency, reliability and utility of tropical LULC change models: (1) a lack of openness with regard to describing and making available the model inputs and model code; (2) the difficulties of conducting appropriate model validations; and (3) the difficulty that users of tropical LULC models face in obtaining the model predictions to help inform their own analyses and policy decisions. We further draw comparisons between tropical LULC change models and the modelling approaches and paradigms in other disciplines, and suggest that recent changes in the climate change and species distribution modelling communities may provide a pathway that tropical LULC change modellers may emulate to further improve the discipline. Climate change models have exerted considerable influence over public perceptions of climate change and now impact policy decisions at all political levels. We suggest that tropical LULC change models have an equally high potential to influence public opinion and impact the development of land-use policies based on plausible future scenarios but, to do so reliably, may require further improvements in the discipline. © 2014 John Wiley & Sons Ltd.
A general method for assessing brain-computer interface performance and its limitations
NASA Astrophysics Data System (ADS)
Hill, N. Jeremy; Häuser, Ann-Katrin; Schalk, Gerwin
2014-04-01
Objective. When researchers evaluate brain-computer interface (BCI) systems, we want quantitative answers to questions such as: How good is the system’s performance? How good does it need to be? and: Is it capable of reaching the desired level in future? In response to the current lack of objective, quantitative, study-independent approaches, we introduce methods that help to address such questions. We identified three challenges: (I) the need for efficient measurement techniques that adapt rapidly and reliably to capture a wide range of performance levels; (II) the need to express results in a way that allows comparison between similar but non-identical tasks; (III) the need to measure the extent to which certain components of a BCI system (e.g. the signal processing pipeline) not only support BCI performance, but also potentially restrict the maximum level it can reach. Approach. For challenge (I), we developed an automatic staircase method that adjusted task difficulty adaptively along a single abstract axis. For challenge (II), we used the rate of information gain between two Bernoulli distributions: one reflecting the observed success rate, the other reflecting chance performance estimated by a matched random-walk method. This measure includes Wolpaw’s information transfer rate as a special case, but addresses the latter’s limitations, including its restriction to item-selection tasks. To validate our approach and address challenge (III), we compared four healthy subjects’ performance using an EEG-based BCI, a ‘Direct Controller’ (a high-performance hardware input device), and a ‘Pseudo-BCI Controller’ (the same input device, but with control signals processed by the BCI signal processing pipeline). Main results. Our results confirm the repeatability and validity of our measures, and indicate that our BCI signal processing pipeline reduced attainable performance by about 33% (21 bits min⁻¹). Significance. Our approach provides a flexible basis for evaluating BCI performance and its limitations, across a wide range of tasks and task difficulties.
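Wolpaw's information transfer rate, which the paper's rate-of-information-gain measure generalizes, can be computed as follows; the target count, accuracy, and selection rate are hypothetical example values.

```python
# Wolpaw information transfer rate for an N-target selection task.
import math

def wolpaw_itr_bits_per_selection(n_targets, p_correct):
    n, p = n_targets, p_correct
    if p <= 0 or p >= 1:
        return math.log2(n) if p == 1 else 0.0
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

bits = wolpaw_itr_bits_per_selection(4, 0.85)   # 4 targets, 85% accuracy
selections_per_min = 10
print(f"{bits:.2f} bits/selection -> {bits * selections_per_min:.1f} bits/min")
```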
Storey, Bob; Marcellino, Chris; Miller, Melissa; Maclean, Mary; Mostafa, Eman; Howell, Sue; Sakanari, Judy; Wolstenholme, Adrian; Kaplan, Ray
2014-12-01
A major hindrance to evaluating nematode populations for anthelmintic resistance, as well as for screening existing drugs, new compounds, or bioactive plant extracts for anthelmintic properties, is the lack of an efficient, objective, and reproducible in vitro assay that is adaptable to multiple life stages and parasite genera. To address this need we have developed the "Worminator" system, which objectively and quantitatively measures the motility of microscopic stages of parasitic nematodes. The system is built around the computer application "WormAssay", developed at the Center for Discovery and Innovation in Parasitic Diseases at the University of California, San Francisco. WormAssay was designed to assess the motility of macroscopic parasites for the purpose of high-throughput screening of potential anthelmintic compounds, utilizing high definition video as an input to assess motion of adult-stage (macroscopic) parasites (e.g. Brugia malayi). We adapted this assay for use with microscopic parasites by modifying the software to support a full frame analysis mode that applies the motion algorithm to the entire video frame. Thus, the motility of all parasites in a given well is recorded and measured simultaneously. Assays performed on third-stage larvae (L3) of the bovine intestinal nematode Cooperia spp., as well as microfilariae (mf) of the filarioid nematodes B. malayi and Dirofilaria immitis, yielded reproducible dose responses using the macrocyclic lactones ivermectin, doramectin, and moxidectin, as well as the nicotinic agonists pyrantel, oxantel, morantel, and tribendimidine. This new computer-based assay is simple to use, requires minimal new investment in equipment, is robust across nematode genera and developmental stages, and does not require subjective scoring of motility by an observer. Thus, the "Worminator" provides a relatively low-cost platform for developing genera- and stage-specific assays with high efficiency and reproducibility, low labor input, and objective motility data that are not subject to scorer bias.
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-01-01
The paper demonstrates the possibility of calculating the characteristics of the flow of visitors to mass events passing through checkpoints. The mathematical model is based on a non-stationary queuing system (NQS) in which the dependence of the request input rate on time is described by a function chosen so that its properties resemble the real arrival-rate profiles of visitors arriving at a stadium for football matches. A piecewise-constant approximation of this function is used when performing statistical modeling of the NQS. The authors calculated the time dependence of the queue length and of the waiting time for service (time in queue) for different input-rate laws, as well as the time required to serve the entire queue and the number of visitors entering the stadium by the beginning of the match. We also found the dependence of the macroscopic quantitative characteristics of the NQS on the number of averaging sections of the input rate.
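A minimal sketch of such a non-stationary queue, with a piecewise-constant arrival-rate profile feeding a single exponential-service checkpoint; the rate profile and service rate are illustrative assumptions, not the paper's fitted stadium data.

```python
# Single-checkpoint FIFO queue with a piecewise-constant arrival rate.
import numpy as np

rng = np.random.default_rng(7)
# Piecewise-constant arrival rate (visitors/min) over the 120 min before kickoff
segments = [(0, 30, 5.0), (30, 60, 20.0), (60, 90, 60.0), (90, 120, 25.0)]
arrivals = np.sort(np.concatenate([
    rng.uniform(a, b, rng.poisson(rate * (b - a)))
    for a, b, rate in segments]))

mu = 40.0                       # checkpoint service rate (visitors/min)
service = rng.exponential(1 / mu, arrivals.size)

start = np.empty_like(arrivals)
free_at = 0.0                   # time the single checkpoint becomes free
for i, (a, s) in enumerate(zip(arrivals, service)):
    start[i] = max(a, free_at)  # wait if the server is busy
    free_at = start[i] + s
wait = start - arrivals
print(f"{arrivals.size} visitors, mean wait {wait.mean():.2f} min, "
      f"max wait {wait.max():.2f} min, queue cleared at {free_at:.1f} min")
```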
Electrochemical Probing through a Redox Capacitor To Acquire Chemical Information on Biothiols.
Liu, Zhengchun; Liu, Yi; Kim, Eunkyoung; Bentley, William E; Payne, Gregory F
2016-07-19
The acquisition of chemical information is a critical need for medical diagnostics, food/environmental monitoring, and national security. Here, we report an electrochemical information processing approach that integrates (i) complex electrical inputs/outputs, (ii) mediators to transduce the electrical I/O into redox signals that can actively probe the chemical environment, and (iii) a redox capacitor that manipulates signals for information extraction. We demonstrate the capabilities of this chemical information processing strategy using biothiols because of the emerging importance of these molecules in medicine and because their distinct chemical properties allow evaluation of hypothesis-driven information probing. We show that input sequences can be tailored to probe for chemical information both qualitatively (step inputs probe for thiol-specific signatures) and quantitatively. Specifically, we observed picomolar limits of detection and linear responses to concentrations over 5 orders of magnitude (1 pM-0.1 μM). This approach allows the capabilities of signal processing to be extended for rapid, robust, and on-site analysis of chemical information.
A novel high-speed CMOS circuit based on a gang of capacitors
NASA Astrophysics Data System (ADS)
Sharroush, Sherif M.
2017-08-01
There is no doubt that complementary metal-oxide semiconductor (CMOS) circuits with wide fan-in suffer from relatively sluggish operation. In this paper, a circuit that contains a gang of capacitors sharing their charge with each other is proposed as an alternative to long N-channel MOS and P-channel MOS stacks. The proposed scheme is investigated quantitatively and verified by simulation using 45-nm CMOS technology with VDD = 1 V. The time delay, area and power consumption of the proposed scheme are investigated and compared with those of a conventional static CMOS logic circuit. It is verified that the proposed scheme achieves a 52% saving in average propagation delay for eight inputs, occupies a smaller area than conventional CMOS logic when the number of inputs exceeds three, and consumes less power when the number of inputs exceeds two. The impacts of process variations, component mismatches and technology scaling on the proposed scheme are also investigated.
NASA Astrophysics Data System (ADS)
Yan, Zilin; Kim, Yongtae; Hara, Shotaro; Shikazono, Naoki
2017-04-01
The Potts kinetic Monte Carlo (KMC) model, proven to be a robust tool for studying all stages of the sintering process, is ideal for analyzing the microstructure evolution of electrodes in solid oxide fuel cells (SOFCs). Due to the nature of this model, the input parameters of KMC simulations, such as simulation temperatures and attempt frequencies, are difficult to identify. We propose a rigorous and efficient approach to facilitate the input-parameter calibration process using artificial neural networks (ANNs). The trained ANN drastically reduces the number of trial-and-error KMC simulations. The KMC simulation using the calibrated input parameters predicts the microstructures of a La0.6Sr0.4Co0.2Fe0.8O3 cathode material during sintering, showing both qualitative and quantitative congruence with real 3D microstructures obtained by focused ion beam scanning electron microscopy (FIB-SEM) reconstruction.
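A minimal sketch of the calibration idea, training an ANN surrogate from KMC input parameters to a microstructure metric and then searching the surrogate for parameters matching a target; the toy forward model, parameter ranges, and target value are assumptions, not the paper's KMC simulator.

```python
# ANN surrogate for input-parameter calibration: fit (parameters -> metric),
# then invert on a grid instead of running many KMC trials.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.uniform([0.5, 1e12], [2.0, 1e14], size=(300, 2))   # (kT, attempt freq)
X[:, 1] = np.log10(X[:, 1])                                # log-scale for training
density = 0.6 + 0.3 * np.tanh(X[:, 0] * (X[:, 1] - 12.5))  # toy "KMC" output

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
ann.fit(X, density)

grid = np.stack(np.meshgrid(np.linspace(0.5, 2.0, 60),
                            np.linspace(12.0, 14.0, 60)), -1).reshape(-1, 2)
target = 0.82                                              # measured density
best = grid[np.argmin(np.abs(ann.predict(grid) - target))]
print(f"calibrated kT ~ {best[0]:.2f}, attempt frequency ~ 1e{best[1]:.2f} Hz")
```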
Wu, Xianhua; Yang, Lingjuan; Guo, Ji; Lu, Huaguo; Chen, Yunfeng; Sun, Jian
2014-01-01
Concentrating on the consuming coefficient, partition coefficient, and Leontief inverse matrix, relevant concepts and algorithms are developed for estimating the impact of meteorological services, including the associated (indirect, complete) economic effect. Subsequently, quantitative estimates are obtained for the meteorological services in Jiangxi province by utilizing the input-output method. It is found that the economic effects are noticeably rescued by the preventive strategies developed from both the meteorological information and the internal relevance (interdependency) of the industrial economic system. Another finding is that the ratio of input to the complete economic effect of meteorological services ranges from about 1:108.27 to 1:183.06, remarkably different from a previous estimate based on the Delphi method (1:30–1:51). In particular, the economic effects of meteorological services are higher for nontraditional users in manufacturing, wholesale and retail trades, the services sector, tourism, culture and art, and lower for traditional users in agriculture, forestry, livestock, fishery, and construction. PMID:24578666
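A minimal sketch of the Leontief machinery referenced above, converting a direct final-demand change into the complete (direct plus indirect) output effect; the three-sector coefficient matrix and the demand shock are hypothetical numbers, not the Jiangxi input-output table.

```python
# Leontief inverse: total output effect of a final-demand change, x = (I - A)^-1 d.
import numpy as np

A = np.array([[0.10, 0.05, 0.02],     # agriculture
              [0.20, 0.15, 0.10],     # manufacturing
              [0.05, 0.10, 0.08]])    # services
leontief_inverse = np.linalg.inv(np.eye(3) - A)

# e.g. meteorological services avert a 100-unit loss of final demand in agriculture
d_final_demand = np.array([100.0, 0.0, 0.0])
d_total_output = leontief_inverse @ d_final_demand
print("complete output effect by sector:", d_total_output.round(1))
```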
Accurate reliability analysis method for quantum-dot cellular automata circuits
NASA Astrophysics Data System (ADS)
Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo
2015-10-01
The probabilistic transfer matrix (PTM) is a widely used model in circuit reliability research. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not fully conform to the mechanism of the novel field-coupled nanoelectronic device known as quantum-dot cellular automata (QCA). It is difficult to obtain accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of QCA fundamental devices for different input signals. Binary decision diagrams (BDDs) are then used to quantitatively investigate the reliability of two QCA XOR gates based on the presented models. By employing the fault tree models, the impact of input signals on reliability can be identified clearly, and the crucial components of a circuit can be located precisely from the importance values (IVs) of the components. This method therefore contributes to the construction of reliable QCA circuits.
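As a schematic of the input-dependent fault-tree idea (with invented failure probabilities, not the paper's device models): assign each fundamental device a failure probability that depends on the applied input pattern, and combine the devices on the signal path in series, so the gate works only if every device works.

```python
from itertools import product

# Hypothetical per-device failure probabilities for each 2-bit input pattern;
# real values would come from QCA device-level fault models.
fail_prob = {
    "majority_1": {(0, 0): 0.002, (0, 1): 0.010, (1, 0): 0.010, (1, 1): 0.002},
    "majority_2": {(0, 0): 0.002, (0, 1): 0.008, (1, 0): 0.008, (1, 1): 0.002},
    "inverter":   {(0, 0): 0.001, (0, 1): 0.001, (1, 0): 0.001, (1, 1): 0.001},
}

def xor_reliability(a, b):
    """Series fault tree: the XOR gate works only if every device works."""
    r = 1.0
    for dev in fail_prob:
        r *= 1.0 - fail_prob[dev][(a, b)]
    return r

for a, b in product((0, 1), repeat=2):
    print(f"input {a}{b}: reliability {xor_reliability(a, b):.4f}")
```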
Nondegenerate parametric oscillations in a tunable superconducting resonator
NASA Astrophysics Data System (ADS)
Bengtsson, Andreas; Krantz, Philip; Simoen, Michaël; Svensson, Ida-Maria; Schneider, Ben; Shumeiko, Vitaly; Delsing, Per; Bylander, Jonas
2018-04-01
We investigate nondegenerate parametric oscillations in a superconducting microwave multimode resonator that is terminated by a superconducting quantum interference device (SQUID). The parametric effect is achieved by modulating magnetic flux through the SQUID at a frequency close to the sum of two resonator-mode frequencies. For modulation amplitudes exceeding an instability threshold, self-sustained oscillations are observed in both modes. The amplitudes of these oscillations show good quantitative agreement with a theoretical model. The oscillation phases are found to be correlated and exhibit strong fluctuations which broaden the oscillation spectral linewidths. These linewidths are significantly reduced by applying a weak on-resonant tone, which also suppresses the phase fluctuations. When the weak tone is detuned, we observe synchronization of the oscillation frequency with the frequency of the input. For the detuned input, we also observe an emergence of three idlers in the output. This observation is in agreement with theory indicating four-mode amplification and squeezing of a coherent input.
Stellar population of the superbubble N 206 in the LMC. I. Analysis of the Of-type stars
NASA Astrophysics Data System (ADS)
Ramachandran, Varsha; Hainich, R.; Hamann, W.-R.; Oskinova, L. M.; Shenar, T.; Sander, A. A. C.; Todt, H.; Gallagher, J. S.
2018-01-01
Context. Massive stars severely influence their environment by their strong ionizing radiation and by the momentum and kinetic energy input provided by their stellar winds and supernovae. Quantitative analyses of massive stars are required to understand how their feedback creates and shapes large-scale structures of the interstellar medium. The giant H II region N 206 in the Large Magellanic Cloud contains an OB association that powers a superbubble filled with hot X-ray emitting gas, serving as an ideal laboratory in this context. Aims: We aim to estimate stellar and wind parameters of all OB stars in N 206 by means of quantitative spectroscopic analyses. In this first paper, we focus on the nine Of-type stars located in this region. We determine their ionizing flux and wind mechanical energy. The analysis of nitrogen abundances in our sample probes rotational mixing. Methods: We obtained optical spectra with the multi-object spectrograph FLAMES at the ESO-VLT. When possible, the optical spectroscopy was complemented by UV spectra from the HST, IUE, and FUSE archives. Detailed spectral classifications are presented for our sample Of-type stars. For the quantitative spectroscopic analysis we used the Potsdam Wolf-Rayet model atmosphere code. We determined the physical parameters and nitrogen abundances of our sample stars by fitting synthetic spectra to the observations. Results: The stellar and wind parameters of the nine Of-type stars, largely derived from the spectral analysis, are used to construct the wind momentum-luminosity relationship. We find that our sample follows a relation close to the theoretical prediction, assuming clumped winds. The most massive star in the N 206 association is an Of supergiant that has a very high mass-loss rate. Two objects in our sample reveal composite spectra, showing that the Of primaries have companions of late O subtype. All stars in our sample have an evolutionary age of less than 4 million yr, with the O2-type star being the youngest. All these stars show a systematic discrepancy between evolutionary and spectroscopic masses. All stars in our sample are nitrogen enriched. Nitrogen enrichment shows a clear correlation with increasing projected rotational velocities. Conclusions: The mechanical energy input from the Of stars alone is comparable to the energy stored in the N 206 superbubble as measured from the observed X-ray and Hα emission.
Quantitative Image Restoration in Bright Field Optical Microscopy.
Gutiérrez-Medina, Braulio; Sánchez Miranda, Manuel de Jesús
2017-11-07
Bright field (BF) optical microscopy is regarded as a poor method for observing unstained biological samples because of intrinsically low image contrast. We introduce quantitative image restoration in bright field (QRBF), a digital image processing method that restores out-of-focus BF images of unstained cells. Our procedure is based on deconvolution, using a point spread function modeled from theory. By comparing with reference images of bacteria observed in fluorescence, we show that QRBF faithfully recovers shape and enables quantifying the size of individual cells, even from a single input image. We applied QRBF in a high-throughput image cytometer to assess shape changes in Escherichia coli during hyperosmotic shock, finding size heterogeneity. We demonstrate that QRBF is also applicable to eukaryotic cells (yeast). Altogether, digital restoration emerges as a straightforward alternative to methods designed to generate contrast in BF imaging for quantitative analysis. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
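A generic restoration-by-deconvolution sketch in the same spirit, using scikit-image's Richardson-Lucy routine as a stand-in solver and a Gaussian stand-in for the theory-derived bright-field PSF; it illustrates the workflow, not the authors' QRBF implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

rng = np.random.default_rng(1)
truth = np.zeros((64, 64)); truth[28:36, 20:44] = 1.0      # rod-shaped "cell"

psf = np.zeros((15, 15)); psf[7, 7] = 1.0
psf = gaussian_filter(psf, sigma=2.5); psf /= psf.sum()    # stand-in blur kernel

blurred = gaussian_filter(truth, sigma=2.5) + 0.01 * rng.standard_normal((64, 64))
restored = richardson_lucy(np.clip(blurred, 0, 1), psf, num_iter=30)
print("estimated cell area (pixels):", int((restored > 0.5).sum()))
```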
Variable Delay Element For Jitter Control In High Speed Data Links
Livolsi, Robert R.
2002-06-11
A circuit and method for decreasing the amount of jitter present at the receiver input of high-speed data links. The driver circuit for input from a high-speed data link comprises a logic circuit having a first section (1) which provides data latches; a second section (2) which generates a pre-distorted output to compensate for level-dependent jitter, having an OR function element and a NOR function element, each coupled to two inputs and to a variable delay element that provides a bi-modal delay for pulse-width pre-distortion; a third section (3) which provides a muxing circuit; and a fourth section (4) for clock distribution in the driver circuit. A fifth section is used for logic testing of the driver circuit.
Dual physiological rate measurement instrument
NASA Technical Reports Server (NTRS)
Cooper, Tommy G. (Inventor)
1990-01-01
The object of the invention is to provide an instrument for converting a physiological pulse rate into a corresponding linear output voltage. The instrument which accurately measures the rate of an unknown rectangular pulse wave over an extended range of values comprises a phase-locked loop including a phase comparator, a filtering network, and a voltage-controlled oscillator, arranged in cascade. The phase comparator has a first input responsive to the pulse wave and a second input responsive to the output signal of the voltage-controlled oscillator. The comparator provides a signal dependent on the difference in phase and frequency between the signals appearing on the first and second inputs. A high-input impedance amplifier accepts an output from the filtering network and provides an amplified output DC signal to a utilization device for providing a measurement of the rate of the pulse wave.
Theory for source-responsive and free-surface film modeling of unsaturated flow
Nimmo, J.R.
2010-01-01
A new model explicitly incorporates the possibility of rapid response, across significant distance, to substantial water input. It is useful for unsaturated flow processes that are not inherently diffusive, or that do not progress through a series of equilibrium states. The term source-responsive is used to mean that flow responds sensitively to changing conditions at the source of water input (e.g., rainfall, irrigation, or ponded infiltration). The domain of preferential flow can be conceptualized as laminar flow in free-surface films along the walls of pores. These films may be considered to have uniform thickness, as suggested by field evidence that preferential flow moves at an approximately uniform rate when generated by a continuous and ample water supply. An effective facial area per unit volume quantitatively characterizes the medium with respect to source-responsive flow. A flow-intensity factor dependent on conditions within the medium represents the amount of source-responsive flow at a given time and position. Laminar flow theory provides relations for the velocity and thickness of flowing source-responsive films. Combination with the Darcy-Buckingham law and the continuity equation leads to expressions for both fluxes and dynamic water contents. Where preferential flow is sometimes or always significant, the interactive combination of source-responsive and diffuse flow has the potential to improve prediction of unsaturated-zone fluxes in response to hydraulic inputs and the evolving distribution of soil moisture. Examples for which this approach is efficient and physically plausible include (i) rainstorm-generated rapid fluctuations of a deep water table and (ii) space- and time-dependent soil water content response to infiltration in a macroporous soil. © Soil Science Society of America.
NASA Astrophysics Data System (ADS)
Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.; Le, Hanh N. D.; Kang, Jin U.; Roland, Per E.; Wong, Dean F.; Rahmim, Arman
2017-02-01
Fluorescence molecular tomography (FMT) is a promising tool for real-time in vivo quantification of neurotransmission (NT), which we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT effects could be exploited, traditional compressive-sensing methods cannot be directly applied because the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied to the data to reduce matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via l1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate and improves upon the quantitation through accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm⁻¹; absorption coefficient: 0.1 cm⁻¹), with tomographic measurements made using pixelated detectors. In different experiments, fluorescent sources of varying size and intensity were simulated. The proposed reconstruction method provided accurate estimates of the fluorescent source intensity, with a 20% lower root mean square error on average compared to the pure-homotopy method for all considered source intensities and sizes. Further, compared with a conventional l2-regularized algorithm, the proposed method overall reconstructed a substantially more accurate fluorescence distribution. The proposed method shows considerable promise and will be tested using more realistic simulations and experimental setups.
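A schematic of the three-step pipeline on a toy problem: truncated SVD to reduce matrix coherence, an l1 step (plain ISTA here, standing in for the homotopy solver), and MLEM refinement with Poisson noise modeling. The problem sizes, regularization weight, and system matrix are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((40, 100)); A /= A.sum(axis=0, keepdims=True)   # toy system matrix
x_true = np.zeros(100); x_true[[12, 57]] = [50.0, 30.0]        # two sparse sources
y = rng.poisson(A @ x_true).astype(float)                      # noisy measurements

# Step 1: truncated SVD of the system matrix, projecting the data.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 20
A_t, y_t = np.diag(s[:k]) @ Vt[:k], U[:, :k].T @ y

# Step 2: l1-regularized recovery by ISTA (stand-in for the homotopy solver).
x = np.zeros(100)
step, lam = 1.0 / s[0] ** 2, 0.1
for _ in range(500):
    z = x - step * A_t.T @ (A_t @ x - y_t)
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

# Step 3: MLEM refinement; the multiplicative update keeps nonnegativity and
# the sparse support (a zero stays zero) while modeling Poisson noise.
x = np.maximum(x, 0.0)
sens = A.sum(axis=0)
for _ in range(50):
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
print("largest reconstructed voxels:", np.argsort(x)[-2:])
```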
Neural Correlates of Sensory Substitution in Vestibular Pathways Following Complete Vestibular Loss
Sadeghi, Soroush G.; Minor, Lloyd B.; Cullen, Kathleen E.
2012-01-01
Sensory substitution is the term typically used in reference to sensory prosthetic devices designed to replace input from one defective modality with input from another modality. Such devices allow an alternative encoding of sensory information that is no longer directly provided by the defective modality in a purposeful and goal-directed manner. The behavioral recovery that follows complete vestibular loss is impressive and has long been thought to take advantage of a natural form of sensory substitution in which head motion information is no longer provided by vestibular inputs, but instead by extra-vestibular inputs such as proprioceptive and motor efference copy signals. Here we examined the neuronal correlates of this behavioral recovery after complete vestibular loss in alert behaving monkeys (Macaca mulatta). We show for the first time that extra-vestibular inputs substitute for the vestibular inputs to stabilize gaze at the level of single neurons in the vestibulo-ocular reflex (VOR) premotor circuitry. The summed weighting of neck proprioceptive and efference copy information was sufficient to explain simultaneously observed behavioral improvements in gaze stability. Furthermore, by altering the correspondence between intended and actual head movement, we revealed a four-fold increase in the weight of neck motor efference copy signals, consistent with the enhanced behavioral recovery observed when head movements are voluntary versus unexpected. Thus, taken together, our results provide direct evidence that the substitution by extra-vestibular inputs in vestibular pathways provides a neural correlate for the improvements in gaze stability that are observed following the total loss of vestibular inputs. PMID:23077054
FSCATT: Angular Dependence and Filter Options.
The input routines to the code have been completely rewritten to allow for a free-form input format. The input routines now provide self-consistency checks and diagnostics for the user's edification.
Quantitative verification of ab initio self-consistent laser theory.
Ge, Li; Tandy, Robert J; Stone, A D; Türeci, Hakan E
2008-10-13
We generalize and test the recent "ab initio" self-consistent (AISC) time-independent semiclassical laser theory. This self-consistent formalism generates all the stationary lasing properties in the multimode regime (frequencies, thresholds, internal and external fields, output power, and emission pattern) from simple inputs: the dielectric function of the passive cavity, the atomic transition frequency, and the transverse relaxation time of the lasing transition. We find that the theory gives excellent quantitative agreement with full time-dependent simulations of the Maxwell-Bloch equations after it has been generalized to drop the slowly varying envelope approximation. The theory is of infinite order in the nonlinear hole-burning interaction; the widely used third-order approximation is shown to fail badly.
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one obtains a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system with an input-output direct transmission term, measurement and system noises, and inaccessible system states. In addition, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting-matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter, is utilized to achieve parameter estimation for faulty-system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures through fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genereux, David; Osburn, Christopher; Oberbauer, Steven
This report covers the outcomes from a quantitative, interdisciplinary field investigation of how carbon fluxes and budgets in a lowland tropical rainforest are affected by the discharge of old regional groundwater into streams, springs, and wetlands in the forest. The work was carried out in a lowland rainforest of Costa Rica, at La Selva Biological Station. The research shows that discharge of regional groundwater high in dissolved carbon dioxide represents a significant input of carbon to the rainforest "from below", an input that is on average larger than the carbon input "from above" from the atmosphere. A stream receiving discharge of regional groundwater had greatly elevated emissions of carbon dioxide (but not methane) to the overlying air, and elevated downstream export of carbon from its watershed with stream flow. The emission of deep geological carbon dioxide from stream water elevates the carbon dioxide concentrations in air above the streams. Carbon-14 tracing revealed the presence of geological carbon in the leaves and stems of some riparian plants near streams that receive inputs of regional groundwater. Also, discharge of regional groundwater is responsible for the input of dissolved organic matter with distinctive chemistry to rainforest streams and wetlands. The discharge of regional groundwater in lowland surface waters has a major impact on the carbon cycle in this and likely other tropical and non-tropical forests.
NASA Astrophysics Data System (ADS)
Bohrson, Wendy A.; Spera, Frank J.
2007-11-01
Volcanic and plutonic rocks provide abundant evidence for complex processes that occur in magma storage and transport systems. The fingerprint of these processes, which include fractional crystallization, assimilation, and magma recharge, is captured in petrologic and geochemical characteristics of suites of cogenetic rocks. Quantitatively evaluating the relative contributions of each process requires integration of mass, species, and energy constraints, applied in a self-consistent way. The energy-constrained model Energy-Constrained Recharge, Assimilation, and Fractional Crystallization (EC-RAχFC) tracks the trace element and isotopic evolution of a magmatic system (melt + solids) undergoing simultaneous fractional crystallization, recharge, and assimilation. Mass, thermal, and compositional (trace element and isotope) output is provided for melt in the magma body, cumulates, enclaves, and anatectic (i.e., country rock) melt. Theory of the EC computational method has been presented by Spera and Bohrson (2001, 2002, 2004), and applications to natural systems have been elucidated by Bohrson and Spera (2001, 2003) and Fowler et al. (2004). The purpose of this contribution is to make the final version of the EC-RAχFC computer code available and to provide instructions for code implementation, description of input and output parameters, and estimates of typical values for some input parameters. A brief discussion highlights measures by which the user may evaluate the quality of the output and also provides some guidelines for implementing nonlinear productivity functions. The EC-RAχFC computer code is written in Visual Basic, the programming language of Excel. The code therefore launches in Excel and is compatible with both PC and MAC platforms. The code is available on the authors' Web sites (http://magma.geol.ucsb.edu/ and http://www.geology.cwu.edu/ecrafc) as well as in the auxiliary material.
Smadi, Hanan; Sargeant, Jan M
2013-02-01
The current quantitative risk assessment model followed the framework proposed by the Codex Alimentarius to provide an estimate of the risk of human salmonellosis due to consumption of chicken breasts which were bought from Canadian retail stores and prepared in Canadian domestic kitchens. The model simulated the level of Salmonella contamination on chicken breasts throughout the retail-to-table pathway. The model used Canadian input parameter values, where available, to represent risk of salmonellosis. From retail until consumption, changes in the concentration of Salmonella on each chicken breast were modeled using equations for growth and inactivation. The model predicted an average of 318 cases of salmonellosis per 100,000 consumers per year. Potential reasons for this overestimation were discussed. A sensitivity analysis showed that concentration of Salmonella on chicken breasts at retail and food hygienic practices in private kitchens such as cross-contamination due to not washing cutting boards (or utensils) and hands after handling raw meat along with inadequate cooking contributed most significantly to the risk of human salmonellosis. The outcome from this model emphasizes that responsibility for protection from Salmonella hazard on chicken breasts is a shared responsibility. Data needed for a comprehensive Canadian Salmonella risk assessment were identified for future research. © 2012 Society for Risk Analysis.
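A toy Monte Carlo rendering of such a retail-to-table pathway: retail contamination, growth during transport, a log-reduction from cooking, a cross-contamination branch, and an exponential dose-response. Every distribution and parameter below is invented for illustration, not drawn from the Canadian model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000                                        # simulated servings
log_cfu_retail = rng.normal(1.0, 1.0, n)           # log10 CFU per breast at retail
growth = rng.uniform(0.0, 1.5, n)                  # log10 growth, retail -> kitchen
cook_kill = rng.triangular(3.0, 6.0, 9.0, n)       # log10 reduction from cooking
cross_contam = rng.random(n) < 0.05                # unwashed cutting board/hands

dose = 10.0 ** (log_cfu_retail + growth - cook_kill)
dose += np.where(cross_contam, 10.0 ** (log_cfu_retail - 2.0), 0.0)  # uncooked transfer

r = 2.2e-4                                         # illustrative dose-response slope
cases = (1.0 - np.exp(-r * dose)).sum()
print(f"predicted cases per {n:,} servings: {cases:.0f}")
```

Running such a model under perturbed inputs (retail concentration, hygiene probabilities, cooking practice) is what drives the sensitivity analysis described above.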
Quantitative analysis of pre-and postsynaptic sex differences in the nucleus accumbens
Forlano, Paul M.; Woolley, Catherine S.
2010-01-01
The nucleus accumbens (NAc) plays a central role in motivation and reward. While there is ample evidence for sex differences in addiction-related behaviors, little is known about the neuroanatomical substrates that underlie these sexual dimorphisms. We investigated sex differences in synaptic connectivity of the NAc by evaluating pre- and postsynaptic measures in gonadally intact male and proestrous female rats. We used DiI labeling and confocal microscopy to measure dendritic spine density, spine head size, dendritic length and branching of medium spiny neurons (MSNs) in the NAc, and quantitative immunofluorescence to measure glutamatergic innervation using pre- (vesicular glutamate transporter 1 and 2) and postsynaptic (post synaptic density 95) markers, as well as dopaminergic innervation of the NAc. We also utilized electron microscopy to complement the above measures. Clear but subtle sex differences were identified, namely in distal dendritic spine density and the proportion of large spines on MSNs, both of which are greater in females. Sex differences in spine density and spine head size are evident in both the core and shell subregions, but are stronger in the core. This study is the first demonstration of neuroanatomical sex differences in the NAc and provides evidence that structural differences in synaptic connectivity and glutamatergic input may contribute to behavioral sex differences in reward and addiction. PMID:20151363
Texture-Based Automated Lithological Classification Using Aeromagenetic Anomaly Images
Shankar, Vivek
2009-01-01
This report consists of a thesis submitted to the faculty of the Department of Electrical and Computer Engineering, in partial fulfillment of the requirements for the degree of Master of Science, Graduate College, The University of Arizona, 2004 Aeromagnetic anomaly images are geophysical prospecting tools frequently used in the exploration of metalliferous minerals and hydrocarbons. The amplitude and texture content of these images provide a wealth of information to geophysicists who attempt to delineate the nature of the Earth's upper crust. These images prove to be extremely useful in remote areas and locations where the minerals of interest are concealed by basin fill. Typically, geophysicists compile a suite of aeromagnetic anomaly images, derived from amplitude and texture measurement operations, in order to obtain a qualitative interpretation of the lithological (rock) structure. Texture measures have proven to be especially capable of capturing the magnetic anomaly signature of unique lithological units. We performed a quantitative study to explore the possibility of using texture measures as input to a machine vision system in order to achieve automated classification of lithological units. This work demonstrated a significant improvement in classification accuracy over random guessing based on a priori probabilities. Additionally, a quantitative comparison between the performances of five classes of texture measures in their ability to discriminate lithological units was achieved.
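A hedged sketch of a texture-to-classifier pipeline of this kind, using grey-level co-occurrence (GLCM) features, a common texture measure though not necessarily the thesis's exact choice, and synthetic tiles standing in for aeromagnetic anomaly images:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

def glcm_features(tile):
    """Contrast/homogeneity/energy from a grey-level co-occurrence matrix."""
    g = graycomatrix(tile, distances=[1], angles=[0, np.pi / 2],
                     levels=32, symmetric=True, normed=True)
    return [graycoprops(g, p).mean() for p in ("contrast", "homogeneity", "energy")]

def fake_tile(smooth):
    """Synthetic 32x32 tile; smoothing mimics a second 'lithology' texture."""
    t = rng.random((32, 32))
    if smooth:
        t = gaussian_filter(t, 2.0)
    return (31 * (t - t.min()) / (np.ptp(t) + 1e-12)).astype(np.uint8)

X = [glcm_features(fake_tile(i % 2 == 0)) for i in range(200)]
y = [i % 2 for i in range(200)]
clf = RandomForestClassifier(random_state=0).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```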
D'Archivio, Angelo Antonio; Maggi, Maria Anna; Ruggieri, Fabrizio
2014-08-01
In this paper, a multilayer artificial neural network is used to model simultaneously the effect of solute structure and eluent concentration profile on the retention of s-triazines in reversed-phase high-performance liquid chromatography under linear gradient elution. The retention data of 24 triazines, including common herbicides and their metabolites, are collected under 13 different elution modes, covering the following experimental domain: starting acetonitrile volume fraction ranging between 40 and 60% and gradient slope ranging between 0 and 1% acetonitrile/min. The gradient parameters together with five selected molecular descriptors, identified by quantitative structure-retention relationship modelling applied to individual separation conditions, are the network inputs. Predictive performance of this model is evaluated on six external triazines and four unseen separation conditions. For comparison, retention of triazines is modelled by both quantitative structure-retention relationships and response surface methodology, which describe separately the effect of molecular structure and gradient parameters on the retention. Although applied to a wider variable domain, the network provides a performance comparable to that of the above "local" models and retention times of triazines are modelled with accuracy generally better than 7%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Blonski, Slawomir; Spiering, Bruce A.; Holekamp, Kara L.
2010-01-01
Water quality standards in the U.S. consist of: designated uses (the services that a water body provides, e.g., drinking water, aquatic life, harvestable species, recreation) and criteria that define the environmental conditions that must be maintained to support those uses. For estuaries and coastal waters in the Gulf of Mexico, there are no numeric (quantitative) criteria to protect designated uses from the effects of nutrients. This is largely due to the absence of adequate data that would quantitatively link biological conditions to nutrient concentrations. The Gulf of Mexico Alliance, an organization fostering collaboration between the Gulf States and U.S. Federal agencies, has identified the development of numeric nutrient criteria as a major step leading to reduction in nutrient inputs to coastal ecosystems. [Figure 6: Map of the Mobile Bay with a yellow patch indicating the Bon Secour Bay area selected in this study for averaging water clarity parameters retrieved from MODIS datasets.] Nutrient enrichment in estuaries and coastal waters can be quantified based on response variables that measure phytoplankton biomass and water clarity. Long-term, spatially and temporally resolved measurements of chlorophyll a concentration, total concentration of suspended solids, and water clarity are needed to establish reference conditions and to quantify stressor-response relationships.
NASA Astrophysics Data System (ADS)
El Koussaifi, R.; Tikan, A.; Toffoli, A.; Randoux, S.; Suret, P.; Onorato, M.
2018-01-01
Rogue waves are extreme and rare fluctuations of the wave field that have been discussed in many physical systems. Their presence substantially influences the statistical properties of a partially coherent wave field, i.e., a wave field characterized by a finite band spectrum with random Fourier phases. Their understanding is fundamental for the design of ships and offshore platforms. In many meteorological conditions waves in the ocean are characterized by the so-called Joint North Sea Wave Project (JONSWAP) spectrum. Here we compare two unique experimental results: the first one has been performed in a 270 m wave tank and the other in optical fibers. In both cases, waves characterized by a JONSWAP spectrum and random Fourier phases have been launched at the input of the experimental device. The quantitative comparison, based on an appropriate scaling of the two experiments, shows a very good agreement between the statistics in hydrodynamics and optics. Spontaneous emergence of heavy tails in the probability density function of the wave amplitude is observed in both systems. The results demonstrate the universal features of rogue waves and provide a fundamental and explicit bridge between two important fields of research. Numerical simulations are also compared with experimental results.
NASA Astrophysics Data System (ADS)
Anderson, M. R.; Rivkin, R. B.
2016-02-01
Petroleum hydrocarbon discharges related to fossil fuel exploitation have the potential to alter microbial processes in the upper ocean. While the ecotoxicological effects of such inputs are commonly evaluated, the potential for eutrophication from the constituent organic and inorganic nutrients has been largely ignored. Hydrocarbons from natural seeps and anthropogenic sources represent a measurable source of organic carbon for surface waters. The most recent (1989-1997) estimate of average worldwide input of hydrocarbons to the sea is 1.250 × 10¹² g/yr ≈ 1.0 × 10¹² g C/yr. Produced water from offshore platforms is the largest waste stream from oil and gas exploitation and contributes significant quantities of inorganic nutrients such as N, P, and Fe. In coastal areas where such inputs are a significant source of these nutrients, model studies show the potential to shift production toward smaller cells and net heterotrophy. The consequences of these nutrient sources for coastal systems and semi-enclosed seas are complex and difficult to predict, because (1) there is a lack of comprehensive data on inputs and in situ concentrations and (2) there is no conceptual or quantitative framework for considering their effects on ocean biogeochemical processes. Here we use examples from the North Sea (produced water discharges about 1% of total riverine input and NH4 about 3% of the annual riverine nitrogen load), the South China Sea (total petroleum hydrocarbons of 10-1750 μg/l in offshore waters), and the Gulf of Mexico (seeps: 76-106 × 10⁹ g C/yr; Macondo blowout: 545 × 10⁹ g C) to demonstrate how hydrocarbon and produced water inputs can influence basin-scale biogeochemical and ecosystem processes and to propose a framework for considering these effects on larger scales.
NASA Astrophysics Data System (ADS)
Liu, Wenjing; Yu, Longfei; Zhang, Ting; Kang, Ronghua; Zhu, Jing; Mulder, Jan; Huang, Yongmei; Duan, Lei
2017-09-01
Chronically elevated deposition of reactive nitrogen (N), as ammonium (NH4+) and nitrate (NO3-), in subtropical forests with monsoonal climate has caused widespread N leaching in southern China. So far, little is known about the effect of further increases in N input and changes in the relative proportion of NH4+ and NO3- on turnover rate and fate of atmogenic N. Here we report a 15N tracer experiment in Tieshanping (TSP) forest, SW China, conducted as part of a long-term N fertilization experiment, using NH4NO3 and NaNO3, where effects of a doubling of monthly N inputs were compared. In June 2012, the regular N fertilizers were replaced by their 15N-labeled forms, viz., 15NH4NO3 and Na15NO3, as a single-dose addition. Mass balances of N for the initial 1.5 years following label addition showed that for both treatments, 70% to 80% of the annual N input was leached as NO3-, both at ambient and at double N input rates. This confirms the earlier reported extreme case of N saturation at TSP. The 15N, added as Na15NO3, showed recoveries of about 74% in soil leachates, indicating that NO3- input at TSP is subject to a rapid and nearly quantitative loss through direct leaching as a mobile anion. By contrast, recoveries of 15N in soil leachates of only 33% were found if added as 15NH4NO3. Much of the 15N was immobilized in the soil and to a lesser extent in the vegetation. Thus, immobilization of fresh N input is significantly greater if added as NH4+, than as NO3-.
The Extrapolation of Elementary Sequences
NASA Technical Reports Server (NTRS)
Laird, Philip; Saul, Ronald
1992-01-01
We study sequence extrapolation as a stream-learning problem. Input examples are a stream of data elements of the same type (integers, strings, etc.), and the problem is to construct a hypothesis that both explains the observed sequence of examples and extrapolates the rest of the stream. A primary objective -- and one that distinguishes this work from previous extrapolation algorithms -- is that the same algorithm be able to extrapolate sequences over a variety of different types, including integers, strings, and trees. We define a generous family of constructive data types, and define as our learning bias a stream language called elementary stream descriptions. We then give an algorithm that extrapolates elementary descriptions over constructive datatypes and prove that it learns correctly. For freely-generated types, we prove a polynomial time bound on descriptions of bounded complexity. An especially interesting feature of this work is the ability to provide quantitative measures of confidence in competing hypotheses, using a Bayesian model of prediction.
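For the special case of integer streams generated by polynomials, one classical extrapolation tactic, far simpler than the paper's elementary stream descriptions but useful for intuition, is a finite-difference table:

```python
def extrapolate_poly(seq, n_more=3):
    """Extend an integer sequence generated by a polynomial via differences."""
    table = [list(seq)]
    while len(table[-1]) > 1 and any(table[-1]):
        row = table[-1]
        table.append([b - a for a, b in zip(row, row[1:])])
    for _ in range(n_more):
        table[-1].append(table[-1][-1])            # bottom row taken as constant
        for i in range(len(table) - 2, -1, -1):    # re-sum upward
            table[i].append(table[i][-1] + table[i + 1][-1])
    return table[0]

print(extrapolate_poly([1, 4, 9, 16, 25]))  # -> [1, 4, 9, 16, 25, 36, 49, 64]
```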
Multicriteria decision analysis applied to Glen Canyon Dam
Flug, M.; Seitz, H.L.H.; Scott, J.F.
2000-01-01
Conflicts in water resources exist because river-reservoir systems are managed to optimize traditional benefits (e.g., hydropower and flood control), which are historically quantified in economic terms, whereas natural and environmental resources, including in-stream and riparian resources, are more difficult or impossible to quantify in economic terms. Multicriteria decision analysis provides a quantitative approach to evaluate resources subject to river basin management alternatives. This objective quantification method includes inputs from special interest groups, the general public, and concerned individuals, as well as professionals for each resource considered in a trade-off analysis. Multicriteria decision analysis is applied to resources and flow alternatives presented in the environmental impact statement for Glen Canyon Dam on the Colorado River. A numeric rating and priority-weighting scheme is used to evaluate 29 specific natural resource attributes, grouped into seven main resource objectives, for nine flow alternatives enumerated in the environmental impact statement.
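The rating-and-weighting step reduces to a small matrix computation. A minimal weighted-sum illustration with invented attribute ratings and priority weights (not the EIS values):

```python
import numpy as np

weights = np.array([0.30, 0.25, 0.25, 0.20])   # hydropower, sediment, habitat, recreation
# rows = flow alternatives, columns = attribute ratings on a 1-10 scale
ratings = np.array([[9, 3, 4, 5],              # high fluctuating flow
                    [6, 6, 6, 7],              # moderate steady flow
                    [3, 9, 8, 8]])             # low steady flow
scores = ratings @ weights
for name, s in zip(["fluctuating", "moderate", "low steady"], scores):
    print(f"{name:12s} composite score: {s:.2f}")
```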
Optimal multisensory decision-making in a reaction-time task.
Drugowitsch, Jan; DeAngelis, Gregory C; Klier, Eliana M; Angelaki, Dora E; Pouget, Alexandre
2014-06-14
Humans and animals can integrate sensory evidence from various sources to make decisions in a statistically near-optimal manner, provided that the stimulus presentation time is fixed across trials. Little is known about whether optimality is preserved when subjects can choose when to make a decision (a reaction-time task), or when sensory inputs have time-varying reliability. Using a reaction-time version of a visual/vestibular heading discrimination task, we show that behavior is clearly sub-optimal when quantified with traditional optimality metrics that ignore reaction times. We created a computational model that accumulates evidence optimally across both cues and time, and trades off accuracy with decision speed. This model quantitatively explains subjects' choices and reaction times, supporting the hypothesis that subjects do, in fact, accumulate evidence optimally over time and across sensory modalities, even when the reaction time is under the subject's control.
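A sketch of the modeling idea, under the standard assumption that momentary evidence from each cue is weighted by its reliability (inverse variance) inside a bounded accumulator; all drift rates, noise levels, and the threshold are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def trial(drift_vis=0.2, drift_vest=0.2, sd_vis=1.0, sd_vest=0.5,
          threshold=15.0, max_steps=2000):
    """One decision: accumulate reliability-weighted evidence to a bound."""
    w_vis, w_vest = 1 / sd_vis**2, 1 / sd_vest**2
    x, t = 0.0, 0
    while abs(x) < threshold and t < max_steps:
        e_vis = drift_vis + sd_vis * rng.standard_normal()
        e_vest = drift_vest + sd_vest * rng.standard_normal()
        x += (w_vis * e_vis + w_vest * e_vest) / (w_vis + w_vest)
        t += 1
    return x > 0, t                         # (choice, reaction time in steps)

results = [trial() for _ in range(1000)]
print("accuracy:", np.mean([c for c, _ in results]),
      " mean RT (steps):", np.mean([t for _, t in results]))
```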
Comprehensive Design Reliability Activities for Aerospace Propulsion Systems
NASA Technical Reports Server (NTRS)
Christenson, R. L.; Whitley, M. R.; Knight, K. C.
2000-01-01
This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion system mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed, and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources are listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.
Sensitivity of seafloor bathymetry to climate-driven fluctuations in mid-ocean ridge magma supply.
Olive, J-A; Behn, M D; Ito, G; Buck, W R; Escartín, J; Howell, S
2015-10-16
Recent studies have proposed that the bathymetric fabric of the seafloor formed at mid-ocean ridges records rapid (23,000 to 100,000 years) fluctuations in ridge magma supply caused by sea-level changes that modulate melt production in the underlying mantle. Using quantitative models of faulting and magma emplacement, we demonstrate that, in fact, seafloor-shaping processes act as a low-pass filter on variations in magma supply, strongly damping fluctuations shorter than about 100,000 years. We show that the systematic decrease in dominant seafloor wavelengths with increasing spreading rate is best explained by a model of fault growth and abandonment under a steady magma input. This provides a robust framework for deciphering the footprint of mantle melting in the fabric of abyssal hills, the most common topographic feature on Earth. Copyright © 2015, American Association for the Advancement of Science.
Evaluation of fuzzy inference systems using fuzzy least squares
NASA Technical Reports Server (NTRS)
Barone, Joseph M.
1992-01-01
Efforts to develop evaluation methods for fuzzy inference systems which are not based on crisp, quantitative data or processes (i.e., where the phenomenon the system is built to describe or control is inherently fuzzy) are just beginning. This paper suggests that the method of fuzzy least squares can be used to perform such evaluations. Regressing the desired outputs onto the inferred outputs can provide both global and local measures of success. The global measures have some value in an absolute sense, but they are particularly useful when competing solutions (e.g., different numbers of rules, different fuzzy input partitions) are being compared. The local measure described here can be used to identify specific areas of poor fit where special measures (e.g., the use of emphatic or suppressive rules) can be applied. Several examples are discussed which illustrate the applicability of the method as an evaluation tool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheldon, Frederick T; Abercrombie, Robert K; Mili, Ali
2009-01-01
Information security continues to evolve in response to disruptive changes with a persistent focus on information-centric controls and a healthy debate about balancing endpoint and network protection, with a goal of improved enterprise/business risk management. Economic uncertainty, intensively collaborative styles of work, virtualization, increased outsourcing, and ongoing compliance pressures require careful consideration and adaptation. This paper proposes a Cyberspace Security Econometrics System (CSES) that provides a measure (i.e., a quantitative indication) of the reliability, performance, and/or safety of a system that accounts for the criticality of each requirement as a function of one or more stakeholders' interests in that requirement. For a given stakeholder, CSES reflects the variance that may exist among the stakes he or she attaches to meeting each requirement. This paper introduces the basis, objectives, and capabilities of the CSES, including inputs/outputs as well as the structural and mathematical underpinnings.
Reactive solute transport in acidic streams
Broshears, R.E.
1996-01-01
Spatial and temporal profiles of pH and concentrations of toxic metals in streams affected by acid mine drainage are the result of the interplay of physical and biogeochemical processes. This paper describes a reactive solute transport model that provides a physically and thermodynamically quantitative interpretation of these profiles. The model combines a transport module that includes advection-dispersion and transient storage with a geochemical speciation module based on MINTEQA2. Input to the model includes stream hydrologic properties derived from tracer-dilution experiments, headwater and lateral inflow concentrations analyzed in field samples, and a thermodynamic database. Simulations reproduced the general features of steady-state patterns of observed pH and concentrations of aluminum and sulfate in St. Kevin Gulch, an acid mine drainage stream near Leadville, Colorado. These patterns were altered temporarily by injection of sodium carbonate into the stream. A transient simulation reproduced the observed effects of the base injection.
A Bayesian Active Learning Experimental Design for Inferring Signaling Networks.
Ness, Robert O; Sachs, Karen; Mallick, Parag; Vitek, Olga
2018-06-21
Machine learning methods for learning network structure are applied to quantitative proteomics experiments and reverse-engineer intracellular signal transduction networks. They provide insight into the rewiring of signaling within the context of a disease or a phenotype. To learn the causal patterns of influence between proteins in the network, the methods require experiments that include targeted interventions that fix the activity of specific proteins. However, the interventions are costly and add experimental complexity. We describe an active learning strategy for selecting optimal interventions. Our approach takes as inputs pathway databases and historic data sets, expresses them in form of prior probability distributions on network structures, and selects interventions that maximize their expected contribution to structure learning. Evaluations on simulated and real data show that the strategy reduces the detection error of validated edges as compared with an unguided choice of interventions and avoids redundant interventions, thereby increasing the effectiveness of the experiment.
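A highly simplified sketch of the selection principle: keep a posterior over a handful of candidate structures and choose the intervention whose expected outcome most reduces posterior entropy. The candidate graphs and likelihood tables are toy stand-ins, not the paper's models:

```python
import numpy as np

graphs = ["A->B->C", "A->C and B->C", "A->B and A->C"]   # candidate structures
prior = np.full(3, 1 / 3)
# p(binary response pattern | graph, intervention); rows = graphs, cols = outcomes.
lik = {"fix A": np.array([[0.8, 0.2], [0.5, 0.5], [0.9, 0.1]]),
       "fix B": np.array([[0.7, 0.3], [0.5, 0.5], [0.4, 0.6]])}

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def expected_posterior_entropy(intervention):
    L = lik[intervention]
    p_out = prior @ L                        # marginal over outcomes
    return sum(po * entropy(prior * L[:, j] / po)
               for j, po in enumerate(p_out))

best = min(lik, key=expected_posterior_entropy)
print("most informative intervention:", best)
```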
The influence of the hydrologic cycle on the extent of sea ice with climatic implications
NASA Technical Reports Server (NTRS)
Dean, Ken; Gosink, Joan
1991-01-01
The role of the hydrologic cycle in the distribution of sea ice was analyzed, along with its influence on forcings and fluxes between the marine environment and the atmosphere. River discharge plays a significant role in degrading the sea ice before any melting occurs elsewhere along the coast. The influence of river discharge on the albedo, thermal balance, and distribution of sea ice is considered. Quantitative atmospheric-hydrologic models are being developed to describe these processes in the coastal zone. Input for the models will come from satellite images, hydrologic data, and field observations. The resulting analysis provides a basis for studying the significance of the hydrologic cycle throughout the Arctic Basin and its influence on the regional climate under possible climatic scenarios. The area offshore from the Mackenzie River delta was selected as the study area.
Coupling Processes Between Atmospheric Chemistry and Climate
NASA Technical Reports Server (NTRS)
Ko, Malcolm K. W.; Weisenstein, Debra; Rodriguez, Jose; Danilin, Michael; Scott, Courtney; Shia, Run-Lie; Eluszkiewicz, Junusz; Sze, Nien-Dak
1999-01-01
This is the final report. The overall objective of this project is to improve the understanding of coupling processes among atmospheric chemistry, aerosols, and climate, all important for quantitative assessments of global change. Among our priorities are changes in ozone and stratospheric sulfate aerosol, with emphasis on how ozone in the lower stratosphere would respond to natural or anthropogenic changes. The work emphasizes two important aspects: (1) AER's continued participation in the preparation of, and provision of scientific input for, various scientific reports connected with the assessment of stratospheric ozone and climate, including participation in various model intercomparison exercises as well as the preparation of national and international reports; and (2) continued development of the AER three-wave interactive model to address how the transport circulation will change as ozone and the thermal properties of the atmosphere change, and to assess how these new findings will affect our confidence in the ozone assessment results.
Cadmium telluride photovoltaic radiation detector
Agouridis, Dimitrios C.; Fox, Richard J.
1981-01-01
A dosimetry-type radiation detector is provided which employs a polycrystalline, chlorine-compensated cadmium telluride wafer fabricated to operate as a photovoltaic current generator used as the basic detecting element. A photovoltaic junction is formed in the wafer by painting one face of the cadmium telluride wafer with an n-type semiconductive material. The opposite face of the wafer is painted with an electrically conductive material to serve as a current collector. The detector is mounted in a hermetically sealed vacuum containment. The detector is operated in a photovoltaic mode (zero bias) while DC coupled to a symmetrical differential current amplifier having a very low input impedance. The amplifier converts the current signal generated by radiation impinging upon the barrier surface face of the wafer to a voltage which is supplied to a voltmeter calibrated to read quantitatively the level of radiation incident upon the detecting wafer.
Quantitative analysis of plastic debris on recreational beaches in Mumbai, India.
Jayasiri, H B; Purushothaman, C S; Vennila, A
2013-12-15
Plastic litter was quantified on four sandy beaches in Mumbai. Mean abundances of 7.49 g and 68.83 items per square metre were recorded. The abundance of plastics varied significantly among the beaches, showing an increasing trend in the southern part. The abundance of plastics by weight in Dadar was significantly higher than that in Aksa. The size fractionation of plastics showed that small particles (1-20 mm) predominate, with 41.85% microplastics (1-5 mm), which emphasizes the high risk to marine organisms due to possible ingestion. The highest quantity of microplastics was seen on Juhu beach (55.33%), followed by Versova, Aksa, and Dadar. The major contributing factors for the abundance are beach usage for recreational, religious, and fishing activities, which suggests that land-based sources provide the major inputs to plastic pollution on these beaches. Copyright © 2013 Elsevier Ltd. All rights reserved.
Cyberspace Security Econometrics System (CSES) - U.S. Copyright TXu 1-901-039
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T
2014-01-01
Information security continues to evolve in response to disruptive changes with a persistent focus on information-centric controls and a healthy debate about balancing endpoint and network protection, with a goal of improved enterprise/business risk management. Economic uncertainty, intensively collaborative styles of work, virtualization, increased outsourcing, and ongoing compliance pressures require careful consideration and adaptation. The Cyberspace Security Econometrics System (CSES) provides a measure (i.e., a quantitative indication) of the reliability, performance, and/or safety of a system that accounts for the criticality of each requirement as a function of one or more stakeholders' interests in that requirement. For a given stakeholder, CSES accounts for the variance that may exist among the stakes one attaches to meeting each requirement. The basis, objectives, and capabilities of the CSES, including inputs/outputs as well as the structural and mathematical underpinnings, are contained in this copyright.
Systems and methods for improved telepresence
Anderson, Matthew O.; Willis, W. David; Kinoshita, Robert A.
2005-10-25
The present invention provides a modular, flexible system for deploying multiple video perception technologies. The telepresence system of the present invention is capable of allowing an operator to control multiple mono and stereo video inputs in a hands-free manner. The raw data generated by the input devices is processed into a common zone structure that corresponds to the commands of the user, and the commands represented by the zone structure are transmitted to the appropriate device. This modularized approach permits input devices to be easily interfaced with various telepresence devices. Additionally, new input devices and telepresence devices are easily added to the system and are frequently interchangeable. The present invention also provides a modular configuration component that allows an operator to define a plurality of views, each of which defines the telepresence devices to be controlled by a particular input device. The present invention thus provides a modular, flexible system for providing telepresence across a wide range of applications. The modularization of the software components, combined with the generalized zone concept, allows the systems and methods of the present invention to be easily expanded to encompass new devices and new uses.
Target Scattering Metrics: Model-Model and Model-Data Comparisons
2017-12-13
measured synthetic aperture sonar (SAS) data or from numerical models is investigated. Metrics are needed for quantitative comparisons for signals... candidate metrics for model-model comparisons are examined here with a goal to consider raw data prior to its reduction to data products, which may... be suitable for input to classification schemes. The investigated metrics are then applied to model-data comparisons.
ERIC Educational Resources Information Center
Stott, Angela; Case, Jennifer M.
2014-01-01
Electronic tutors able to respond appropriately to a user's input have been shown to be effective in improving learning in a number of contexts. This study extends this research into the context of conceptual change during in-service science teacher workshops. Quantitative data were collected from 1,049 South African grade 12 physical sciences…
The Scientific Program of the U.S. Naval Research Laboratory
1958-07-01
systems, using mock-ups and simulated inputs. (2) Experimental determination of the quantitative parameters of systems, such as data-handling ability, time... naval service of equipment on ships, planes, and missiles are recorded, analyzed, and simulated. Methods are developed for the improvement of... H01 - NUCLEAR CONSTITUENTS AND STRUCTURE Theoretical and experimental studies concerned with elementary particles, field theory, nuclear structure
2011-01-01
Background: Large proportions of children do not fulfil the World Health Organization recommendation of eating at least 400 grams of fruit and vegetables (FV) per day. To promote an increased FV intake among children it is important to identify factors which influence their consumption. Both qualitative and quantitative studies are needed. Earlier reviews have analysed evidence from quantitative studies. The aim of this paper is to present a systematic review of qualitative studies of determinants of children's FV intake. Methods: Relevant studies were identified by searching Anthropology Plus, Cinahl, CSA Illumina, Embase, International Bibliography of the Social Sciences, Medline, PsycINFO, and Web of Science using combinations of synonyms for FV intake, children/adolescents, and qualitative methods as search terms. The literature search was completed by December 1st 2010. Papers were included if they applied qualitative methods to investigate 6-18-year-olds' perceptions of factors influencing their FV consumption. Quantitative studies, review studies, studies reported in languages other than English, and non-peer-reviewed or unpublished manuscripts were excluded. The papers were reviewed systematically using standardised templates for summary of papers, quality assessment, and synthesis of findings across papers. Results: The review included 31 studies, mostly based on US populations and focus group discussions. The synthesis identified the following potential determinants for FV intake which supplement the quantitative knowledge base: time costs; lack of taste guarantee; satiety value; appropriate times/occasions/settings for eating FV; sensory and physical aspects; variety, visibility, and methods of preparation; access to unhealthy food; the symbolic value of food for image, gender identity, and social interaction with peers; and short-term outcome expectancies. Conclusions: The review highlights numerous potential determinants which have not been investigated thoroughly in quantitative studies. Future large-scale quantitative studies should attempt to quantify the importance of these factors. Further, mechanisms behind gender, age, and socioeconomic differences in FV consumption are proposed which should be tested quantitatively in order to better tailor interventions to vulnerable groups. Finally, the review provides input to the conceptualisation and measurement of concepts (i.e., peer influence, availability in schools) which may refine survey instruments and theoretical frameworks concerning eating behaviours. PMID:21999291
Verification of a VRF Heat Pump Computer Model in EnergyPlus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nigusse, Bereket; Raustad, Richard
2013-06-15
This paper provides verification results of the EnergyPlus variable refrigerant flow (VRF) heat pump computer model using manufacturer's performance data. The paper provides an overview of the VRF model, presents the verification methodology, and discusses the results. The verification provides quantitative comparison of full- and part-load performance to manufacturer's data in cooling-only and heating-only modes of operation. The VRF heat pump computer model uses dual range bi-quadratic performance curves to represent capacity and Energy Input Ratio (EIR) as a function of indoor and outdoor air temperatures, and dual range quadratic performance curves as a function of part-load ratio for modeling part-load performance. These performance curves are generated directly from manufacturer's published performance data. The verification compared the simulation output directly to manufacturer's performance data, and found that the dual range equation-fit VRF heat pump computer model predicts the manufacturer's performance data very well over a wide range of indoor and outdoor temperatures and part-load conditions. The predicted capacity and electric power deviations are comparable to those of equation-fit HVAC computer models commonly used for packaged and split unitary HVAC equipment.
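To make the curve form concrete, here is a minimal Python sketch of a dual range bi-quadratic performance curve of the kind described above; the coefficient values, boundary temperature, and function names are invented for illustration and are not the EnergyPlus implementation.

```python
# Hedged sketch of a dual range bi-quadratic curve. All coefficients are made up.
def biquadratic(c, t_in, t_out):
    """Capacity or EIR modifier as a function of indoor (t_in) and
    outdoor (t_out) air temperatures, in degrees C."""
    c0, c1, c2, c3, c4, c5 = c
    return (c0 + c1 * t_in + c2 * t_in ** 2
            + c3 * t_out + c4 * t_out ** 2 + c5 * t_in * t_out)

def capacity_modifier(t_in, t_out, boundary=20.0,
                      low=(1.0, 0.010, -1e-4, 0.005, -2e-4, 1e-4),
                      high=(0.9, 0.012, -1e-4, 0.006, -2e-4, 1e-4)):
    """Dual range: separate coefficient sets on either side of an
    outdoor-temperature boundary, mirroring a dual range equation fit."""
    return biquadratic(low if t_out <= boundary else high, t_in, t_out)

print(capacity_modifier(t_in=19.4, t_out=35.0))  # one cooling-mode point
```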
Schoer, Karl; Wood, Richard; Arto, Iñaki; Weinzettel, Jan
2013-12-17
The mass of material consumed by a population has become a useful proxy for measuring environmental pressure. The "raw material equivalents" (RME) metric of material consumption addresses the issue of including the full supply chain (including imports) when calculating national or product level material impacts. The RME calculation suffers from data availability, however, as quantitative data on production practices along the full supply chain (in different regions) are required. Hence, the RME is currently being estimated by three main approaches: (1) assuming domestic technology in foreign economies, (2) utilizing region-specific life-cycle inventories (in a hybrid framework), and (3) utilizing multi-regional input-output (MRIO) analysis to explicitly cover all regions of the supply chain. While the first approach has been shown to give inaccurate results, this paper focuses on the benefits and costs of the latter two approaches. We analyze results from two key (MRIO and hybrid) projects modeling raw material equivalents, adjusting the models in a stepwise manner in order to quantify the effects of individual conceptual elements. We attempt to isolate the MRIO gap, which denotes the quantitative impact of calculating the RME of imports by an MRIO approach instead of the hybrid model, focusing on the RME of EU external trade imports. While the models give quantitatively similar results, differences become more pronounced when tracking more detailed material flows. We assess the advantages and disadvantages of the two approaches and identify ways to further harmonize data and approaches.
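As a toy illustration of the "MRIO gap" idea, the sketch below computes the RME of a hypothetical import bundle under a full two-region MRIO Leontief calculation and under the domestic-technology assumption; all matrices, intensities, and the region/sector split are invented.

```python
import numpy as np

# 2 regions x 2 sectors; rows/cols ordered [r1s1, r1s2, r2s1, r2s2]
A = np.array([[0.10, 0.05, 0.02, 0.00],
              [0.04, 0.12, 0.01, 0.03],
              [0.03, 0.00, 0.15, 0.06],
              [0.00, 0.02, 0.05, 0.11]])
e = np.array([2.0, 0.5, 3.5, 0.8])       # material extraction per unit output
y_imp = np.array([0.0, 0.0, 10.0, 5.0])  # imports supplied by region 2

# Full MRIO approach: propagate imports through all regions' technologies
L = np.linalg.inv(np.eye(4) - A)
rme_mrio = e @ L @ y_imp

# Domestic-technology assumption: pretend the imports were produced with
# region 1's technology and extraction intensities
L_dom = np.linalg.inv(np.eye(2) - A[:2, :2])
rme_dta = e[:2] @ L_dom @ y_imp[2:]

print(f"MRIO gap (toy): {rme_mrio - rme_dta:+.2f}")
```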
Improving the Linkages between Air Pollution Epidemiology and Quantitative Risk Assessment
Bell, Michelle L.; Walker, Katy; Hubbell, Bryan
2011-01-01
Background: Air pollution epidemiology plays an integral role both in identifying the hazards of air pollution and in supplying the risk coefficients that are used in quantitative risk assessments. Evidence from both epidemiology and risk assessments has historically supported critical environmental policy decisions. The extent to which risk assessors can properly specify a quantitative risk assessment and characterize key sources of uncertainty depends in part on the availability, and clarity, of data and assumptions in the epidemiological studies. Objectives: We discuss the interests shared by the air pollution epidemiology and risk assessment communities in ensuring that the findings of epidemiological studies are appropriately characterized and applied correctly in risk assessments. We highlight the key input parameters for risk assessments and consider how modest changes in the characterization of these data might enable more accurate risk assessments that better represent the findings of epidemiological studies. Discussion: We argue that more complete information regarding the methodological choices and input data used in epidemiological studies would support more accurate risk assessments, to the benefit of both disciplines. In particular, we suggest including additional details regarding air quality, demographic, and health data, as well as certain types of data-rich graphics. Conclusions: Relatively modest changes to the data reported in epidemiological studies will improve the quality of risk assessments and help prevent the misinterpretation and mischaracterization of the results of epidemiological studies. Such changes may also benefit epidemiologists undertaking meta-analyses. We suggest workshops as a way to improve the dialogue between the two communities. PMID:21816702
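As a concrete example of how an epidemiological risk coefficient becomes a risk assessment input, the sketch below applies a log-linear concentration-response function of the kind commonly used in health impact assessment; every number here is hypothetical.

```python
import math

beta = 0.0004          # risk coefficient per ug/m3 (from an epi study)
delta_c = 10.0         # modeled change in PM2.5 concentration, ug/m3
y0 = 0.008             # baseline annual mortality rate
population = 1_000_000

# Attributable cases under a log-linear concentration-response function
attributable = y0 * (1 - math.exp(-beta * delta_c)) * population
print(f"Estimated attributable cases per year: {attributable:.0f}")
```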
A robust rotorcraft flight control system design methodology utilizing quantitative feedback theory
NASA Technical Reports Server (NTRS)
Gorder, Peter James
1993-01-01
Rotorcraft flight control systems present design challenges which often exceed those associated with fixed-wing aircraft. First, large variations in the response characteristics of the rotorcraft result from the wide range of airspeeds of typical operation (hover to over 100 kts). Second, the assumption of vehicle rigidity often employed in the design of fixed-wing flight control systems is rarely justified in rotorcraft, where rotor degrees of freedom can have a significant impact on the system performance and stability. This research was intended to develop a methodology for the design of robust rotorcraft flight control systems. Quantitative Feedback Theory (QFT) was chosen as the basis for the investigation. Quantitative Feedback Theory is a technique which accounts for variability in the dynamic response of the controlled element in the design of robust control systems. It was developed to address a Multiple-Input Single-Output (MISO) design problem, and utilizes two degrees of freedom to satisfy the design criteria. Two techniques were examined for extending the QFT MISO technique to the design of a Multiple-Input-Multiple-Output (MIMO) flight control system (FCS) for a UH-60 Black Hawk Helicopter. In the first, a set of MISO systems, mathematically equivalent to the MIMO system, was determined, and QFT was applied to each member of the set simultaneously. In the second, the same set of equivalent MISO systems was analyzed sequentially, with closed-loop response information from each loop utilized in subsequent MISO designs. The results of each technique were compared, and the advantages of the second, termed Sequential Loop Closure, were clearly evident.
Input design for identification of aircraft stability and control derivatives
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Hall, W. E., Jr.
1975-01-01
An approach for designing inputs to identify stability and control derivatives from flight test data is presented. This approach is based on finding inputs which provide the maximum possible accuracy of derivative estimates. Two techniques of input specification are implemented for this objective: a time domain technique and a frequency domain technique. The time domain technique gives the control input time history and can be used for any allowable duration of test maneuver, including those where data lengths can only be of short duration. The frequency domain technique specifies the input frequency spectrum, and is best applied for tests where extended data lengths, much longer than the time constants of the modes of interest, are possible. These techniques are used to design inputs to identify parameters in longitudinal and lateral linear models of conventional aircraft. The constraints of aircraft response limits, such as on structural loads, are realized indirectly through a total energy constraint on the input. Tests with simulated data and theoretical predictions show that the new approaches give input signals which can provide more accurate parameter estimates than can conventional inputs of the same total energy. Results obtained indicate that the approach has been brought to the point where it should be used on flight tests for further evaluation.
High frequency inductive lamp and power oscillator
Kirkpatrick, Douglas A.; Gitsevich, Aleksandr
2005-09-27
An oscillator includes an amplifier having an input and an output, a feedback network connected between the input of the amplifier and the output of the amplifier, the feedback network being configured to provide suitable positive feedback from the output of the amplifier to the input of the amplifier to initiate and sustain an oscillating condition, and a tuning circuit connected to the input of the amplifier, wherein the tuning circuit is continuously variable and consists of solid state electrical components with no mechanically adjustable devices including a pair of diodes connected to each other at their respective cathodes with a control voltage connected at the junction of the diodes. Another oscillator includes an amplifier having an input and an output, a feedback network connected between the input of the amplifier and the output of the amplifier, the feedback network being configured to provide suitable positive feedback from the output of the amplifier to the input of the amplifier to initiate and sustain an oscillating condition, and transmission lines connected to the input of the amplifier with an input pad and a perpendicular transmission line extending from the input pad and forming a leg of a resonant "T", and wherein the feedback network is coupled to the leg of the resonant "T".
Bering Sea Nd isotope records of North Pacific Intermediate Water circulation
NASA Astrophysics Data System (ADS)
Rabbat, C.; Knudson, K. P.; Goldstein, S. L.
2017-12-01
North Pacific Intermediate Water (NPIW) is the primary water mass associated with Pacific meridional overturning circulation. While the relationship between Atlantic meridional overturning circulation and climate has been extensively studied, a lack of suitable sediment cores has limited past investigations of North Pacific climate and NPIW variability. Integrated Ocean Drilling Program Site U1342 (818 m water depth) on Bowers Ridge in the Bering Sea is located at a sensitive depth for detecting changes in NPIW, and it is the only available sub-arctic North Pacific site that offers long, continuous core recovery, relatively high sedimentation rates, excellent foraminifera preservation, and a well-constrained age model over multiple glacial-interglacial cycles. Previous work at Site U1342 by Knudson and Ravelo (2015), using non-quantitative circulation proxies, provides evidence for enhanced NPIW formation during extreme glacials associated with the closure of the Bering Strait and suggests that NPIW was formed locally within the Bering Sea. Our work builds on the potential importance of these results and applies more robust and potentially quantitative circulation proxies to constrain NPIW variability. Here, we present new records of NPIW circulation from Site U1342 based on Nd isotope analyses of fish debris and Fe-Mn encrusted foraminifera, which serve as semi-quantitative "water mass tracers." Weak Bering Sea NPIW formation and ventilation are reflected by relatively lower εNd values indicative of open subarctic North Pacific waters, which are presently predominant, whereas enhanced Bering Sea NPIW formation and ventilation are reflected by relatively higher εNd values due to the input of Nd from regional volcanic rocks.
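For readers unfamiliar with the εNd notation used above, the sketch below shows the standard conversion from a measured 143Nd/144Nd ratio; the CHUR reference value 0.512638 is the commonly used literature constant, and the example ratios are invented.

```python
CHUR_143_144 = 0.512638   # CHUR reference ratio (commonly used constant)

def epsilon_nd(ratio_143_144: float) -> float:
    """Express a 143Nd/144Nd ratio as parts-per-10^4 deviation from CHUR."""
    return (ratio_143_144 / CHUR_143_144 - 1.0) * 1.0e4

print(epsilon_nd(0.51240))  # lower eNd: open subarctic North Pacific waters
print(epsilon_nd(0.51275))  # higher eNd: Nd input from regional volcanics
```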
Anomalous chiral transport in heavy ion collisions from Anomalous-Viscous Fluid Dynamics
NASA Astrophysics Data System (ADS)
Shi, Shuzhe; Jiang, Yin; Lilleskov, Elias; Liao, Jinfeng
2018-07-01
Chiral anomaly is a fundamental aspect of quantum theories with chiral fermions. How such a microscopic anomaly manifests itself in a macroscopic many-body system with chiral fermions is a highly nontrivial question that has recently attracted significant interest. As it turns out, unusual transport currents can be induced by chiral anomaly under suitable conditions in such systems, with the notable example of the Chiral Magnetic Effect (CME) where a vector current (e.g. electric current) is generated along an external magnetic field. A lot of effort has been made to search for the CME in heavy ion collisions, by measuring the charge separation effect induced by the CME transport. A crucial challenge in this effort is the quantitative prediction of the CME signal. In this paper, we develop the Anomalous-Viscous Fluid Dynamics (AVFD) framework, which implements the anomalous fluid dynamics to describe the evolution of fermion currents in QGP, on top of the neutral bulk background described by the VISH2+1 hydrodynamic simulations for heavy ion collisions. With this new tool, we quantitatively and systematically investigate the dependence of the CME signal on a series of theoretical inputs and associated uncertainties. With realistic estimates of initial conditions and magnetic field lifetime, the predicted CME signal is quantitatively consistent with measured charge separation data in 200 GeV Au-Au collisions. Based on the analysis of Au-Au collisions, we further make predictions for the CME observable to be measured in the planned isobaric (Ru-Ru vs. Zr-Zr) collision experiment, which could provide a most decisive test of the CME in heavy ion collisions.
NASA Technical Reports Server (NTRS)
Sturman, J.
1968-01-01
A stable input stage was designed for use with an integrated-circuit operational amplifier to provide improved performance as an instrumentation-type amplifier. The circuit provides high input impedance, stable gain, good common mode rejection, very low drift, and low output impedance.
Time Triggered Ethernet System Testing Means and Method
NASA Technical Reports Server (NTRS)
Smithgall, William Todd (Inventor); Hall, Brendan (Inventor); Varadarajan, Srivatsan (Inventor)
2014-01-01
Methods and apparatus are provided for evaluating the performance of a Time Triggered Ethernet (TTE) system employing Time Triggered (TT) communication. A real TTE system under test (SUT) is provided, having real input elements that communicate using TT messages with output elements via one or more first TTE switches during a first time interval schedule established for the SUT. A simulation system is also provided, having input simulators that communicate using TT messages via one or more second TTE switches with the same output elements during a second time interval schedule established for the simulation system. The first and second time interval schedules are offset slightly so that messages from the input simulators, when present, arrive at the output elements prior to messages from the analogous real inputs, thereby having priority over messages from the real inputs and causing the system to operate based on the simulated inputs when present.
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, and others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
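A conceptual sketch of the income/cost optimization loop described above, assuming simple quadratic surrogate process models in place of the patent's chemical looping models; all coefficients, bounds, and variable names are invented.

```python
import numpy as np
from scipy.optimize import minimize

def income(u):
    """Income model: revenue from generated power (toy surrogate)."""
    power = 50.0 * u[0] - 2.0 * u[0] ** 2
    return 30.0 * power

def cost(u):
    """Cost model: fuel plus sorbent-circulation cost (toy surrogate)."""
    return 400.0 * u[0] + 120.0 * u[1] ** 2

# Maximize net income = income - cost by minimizing its negative
objective = lambda u: -(income(u) - cost(u))

res = minimize(objective, x0=[1.0, 1.0],
               bounds=[(0.0, 10.0), (0.0, 5.0)])
print("optimized operating parameters:", res.x)
```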
Social Security Contribution to Productivity and Wages in Labour Organization Perspective
NASA Astrophysics Data System (ADS)
Supriadi, Y. N.
2017-03-01
This research investigates the gap between fulfilment of the right to social security and decent wages and the demand for increased labour productivity, from the perspective of labour organizations: the social security and wages provided by companies have not met workers' needs, while workers are nonetheless expected to raise productivity. The study therefore aims to identify how social security and wages affect labour productivity, providing input for companies to take effective and efficient measures for their sustainability. The research was conducted using a survey method with causal-comparative quantitative analysis of a sample of 223 respondents drawn from a study population of 504 covering all district and municipal labour organizations in Banten Province. The results showed a significant influence of social security and wages on labour productivity. Companies are therefore advised to act strategically in retaining labour through re-design of the work environment, increased worker participation, intervention, and satisfaction of workers' needs, fostering mutual understanding between workers and companies in sustaining the company's business.
Future air pollution in the Shared Socio-economic Pathways
Rao, Shilpa; Klimont, Zbigniew; Smith, Steven J.; ...
2016-07-15
Emissions of air pollutants such as sulfur and nitrogen oxides and particulates have significant health impacts as well as effects on natural and anthropogenic ecosystems. These same emissions also can change atmospheric chemistry and the planetary energy balance, thereby impacting global and regional climate. Long-term scenarios for air pollutant emissions are needed as inputs to global climate and chemistry models, and for analysis linking air pollutant impacts across sectors. In this paper we present methodology and results for air pollutant emissions in Shared Socioeconomic Pathways (SSP) scenarios. We first present a set of three air pollution narratives that describe high, central, and low pollution control ambitions over the 21st century. These narratives are then translated into quantitative guidance for use in integrated assessment models. We provide an overview of pollutant emission trajectories under the SSP scenarios. Pollutant emissions in these scenarios cover a wider range than the scenarios used in previous international climate model comparisons. Furthermore, the SSP scenarios provide the opportunity to access a more comprehensive range of future global and regional air quality outcomes.
Hucka, Michael; Bergmann, Frank T.; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M.; Le Novère, Nicolas; Myers, Chris J.; Olivier, Brett G.; Sahle, Sven; Schaff, James C.; Smith, Lucian P.; Waltemath, Dagmar; Wilkinson, Darren J.
2017-01-01
Summary Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/. PMID:26528569
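For readers who consume SBML programmatically, a minimal sketch using the python-libsbml bindings is shown below; "model.xml" is a placeholder path, and the snippet assumes python-libsbml is installed (pip install python-libsbml).

```python
import libsbml

doc = libsbml.readSBML("model.xml")          # placeholder input path
if doc.getNumErrors(libsbml.LIBSBML_SEV_ERROR) > 0:
    doc.printErrors()                        # read/parse problems
else:
    doc.checkConsistency()                   # apply the validation rules
    model = doc.getModel()
    print(model.getNumSpecies(), "species,",
          model.getNumReactions(), "reactions")
```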
Salvo, Grazia; Doyle-Baker, Patricia K.; McCormack, Gavin R.
2018-01-01
Qualitative studies can provide important information about how and why the built environment impacts physical activity decision-making, information that is important for informing local urban policies. We undertook a systematized literature review to synthesize findings from qualitative studies exploring how the built environment influences physical activity in adults. Our review included 36 peer-reviewed qualitative studies published from 1998 onwards. Our findings complemented existing quantitative evidence and provided additional insight into how functional, aesthetic, destination, and safety built characteristics influence physical activity decision-making. Sociodemographic characteristics (age, sex, ethnicity, and socioeconomic status) also impacted the built environment's influence on physical activity. Our review findings reinforce the need for synergy among transportation planning, urban design, landscape architecture, road engineering, parks and recreation, bylaw enforcement, and public health in creating neighbourhood environments that support physical activity. Our findings support a need for local neighbourhood citizens and associations, with representation from individuals and groups with different sociodemographic backgrounds, to have input into the neighbourhood environment planning process. PMID:29724048
NASA Technical Reports Server (NTRS)
Fung, Inez Y.; Tucker, C. J.; Prentice, Katharine C.
1985-01-01
The 'normalized difference vegetation indices' (NVI) derived from AVHRR radiances are combined with field data of soil respiration and a global map of net primary productivity to prescribe, for the globe, the seasonal exchange of CO2 between the atmosphere and the terrestrial biosphere. The monthly fluxes of CO2 thus obtained are used as inputs to a 3-D tracer transport model which uses winds generated by a 3-D atmospheric general circulation model to advect CO2 as an inert constituent. Analysis of the 3-D model results shows reasonable agreement between the simulated and observed annual cycles of atmospheric CO2 at the locations of the remote monitoring stations. The application of atmospheric CO2 distributions to calibrate the NVI in terms of carbon fluxes is also shown. The approach suggests that the NVI may be used to provide quantitative information about long-term and global-scale variations of photosynthetic activity and of atmospheric CO2 concentrations, provided that variations in the atmospheric circulation and in atmospheric composition are known.
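The index itself is simple to compute; the sketch below evaluates the normalized difference from red and near-infrared reflectances, using made-up values rather than actual AVHRR radiances.

```python
import numpy as np

red = np.array([0.08, 0.10, 0.30])  # hypothetical visible-channel reflectance
nir = np.array([0.50, 0.45, 0.32])  # hypothetical near-IR reflectance

ndvi = (nir - red) / (nir + red)    # dense vegetation gives values near 1
print(ndvi)
```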
Valder, Joshua F.; Delzer, Gregory C.; Carter, Janet M.; Smith, Bruce D.; Smith, David V.
2016-09-28
The city of Sioux Falls is the fastest growing community in South Dakota. In response to this continued growth and planning for future development, Sioux Falls requires a sustainable supply of municipal water. Planning and managing sustainable groundwater supplies requires a thorough understanding of local groundwater resources. The Big Sioux aquifer consists of glacial outwash sands and gravels and is hydraulically connected to the Big Sioux River, which provided about 90 percent of the city’s source-water production in 2015. Managing sustainable groundwater supplies also requires an understanding of groundwater availability. An effective mechanism to inform water management decisions is the development and utilization of a groundwater-flow model. A groundwater-flow model provides a quantitative framework for synthesizing field information and conceptualizing hydrogeologic processes. These groundwater-flow models can support decision making processes by mapping and characterizing the aquifer. Accordingly, the city of Sioux Falls partnered with the U.S. Geological Survey to construct a groundwater-flow model. Model inputs will include data from advanced geophysical techniques, specifically airborne electromagnetic methods.
The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core
Hucka, Michael; Bergmann, Frank T.; Hoops, Stefan; Keating, Sarah M.; Sahle, Sven; Schaff, James C.; Smith, Lucian P.; Wilkinson, Darren J.
2017-01-01
Summary Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/. PMID:26528564
Mahajan, Abhishek; Deshpande, Sneha S; Thakur, Meenakshi H
2017-01-01
“Personalized oncology” is a multi-disciplinary science which requires inputs from various streams for optimal patient management. Enormous progress in the treatment modalities available, and the increasing need to provide functional information in addition to morphological data, have led to rapid advances in the field of imaging. Magnetic resonance imaging has progressed tremendously, with various newer MR techniques providing vital functional information, and is becoming the cornerstone of “radiomics/radiogenomics”. Diffusion-weighted imaging is one such technique, which capitalizes on the tendency of water protons to diffuse randomly in a given system. This technique has revolutionized oncological imaging by giving vital qualitative and quantitative information regarding tumor biology that helps in the detection, characterization, and post-treatment surveillance of lesions, challenging the notion that “one size fits all”. It has been applied at various sites with different clinical experience. We hereby present a brief review of this novel functional imaging tool and its application in “personalized oncology”. PMID:28717412
ISS Training Best Practices and Lessons Learned
NASA Technical Reports Server (NTRS)
Barshi, Immanuel; Dempsey, Donna L.
2017-01-01
Training our crew members for long duration exploration-class missions (LDEM) will have to be qualitatively and quantitatively different from current training practices. However, there is much to be learned from the extensive experience NASA has gained in training crew members for missions on board the International Space Station (ISS). Furthermore, the operational experience on board the ISS provides valuable feedback concerning training effectiveness. Keeping in mind the vast differences between current ISS crew training and training for LDEM, the needs of future crew members, and the demands of future missions, this ongoing study seeks to document current training practices and lessons learned. The goal of the study is to provide input to the design of future crew training that takes as much advantage as possible of what has already been learned and avoids as much as possible past inefficiencies. Results from this study will be presented upon its completion. By researching established training principles, examining future needs, and by using current practices in spaceflight training as test beds, this research project is mitigating program risks and generating templates and requirements to meet future training needs.
NASA Astrophysics Data System (ADS)
Mo, W.; Fang, W.
2015-12-01
Vulnerability, which quantifies the loss ratio under different hazard intensities, is an important feature of the natural disaster system and has important significance for natural disaster risk assessment. Agriculture is an outdoor industry with high risk of meteorological disasters. Strong winds, heavy rain, and storm surge are the main typhoon hazard factors for crops. To provide a quantitative research method for evaluating crop losses due to typhoon disasters, we first revised two vulnerability curves for crops under comprehensive typhoon intensity, based on simulated hazard data and loss data for historical typhoons making landfall in China from 1949 to 2014; we then established a storm surge vulnerability matrix for crops, taking Zhanjiang City, Guangdong Province, as the study area; finally, we put forward three storm surge fragility curves for crops representing different loss states. The results can effectively describe typhoon vulnerability for crops in China's coastal areas and provide input to post-disaster loss assessments and catastrophe modeling applications.
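A hedged sketch of the fragility-curve form commonly used for such loss states: a lognormal CDF of exceedance probability against hazard intensity, fitted here to invented data points, not the paper's data.

```python
import numpy as np
from scipy.stats import lognorm
from scipy.optimize import curve_fit

intensity = np.array([0.5, 1.0, 1.5, 2.0, 3.0])      # e.g. surge depth (m)
p_exceed = np.array([0.05, 0.20, 0.45, 0.70, 0.95])  # observed exceedance

def fragility(x, median, beta):
    """Lognormal fragility: P(loss state exceeded | intensity x)."""
    return lognorm.cdf(x, s=beta, scale=median)

(median, beta), _ = curve_fit(fragility, intensity, p_exceed, p0=[1.5, 0.5])
print(f"median intensity = {median:.2f}, log-std = {beta:.2f}")
```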
ISS Training Best Practices and Lessons Learned
NASA Technical Reports Server (NTRS)
Dempsey, Donna L.; Barshi, Immanuel
2018-01-01
Training our crew members for long-duration Deep Space Transport (DST) missions will have to be qualitatively and quantitatively different from current training practices. However, there is much to be learned from the extensive experience NASA has gained in training crew members for missions on board the International Space Station (ISS). Furthermore, the operational experience on board the ISS provides valuable feedback concerning training effectiveness. Keeping in mind the vast differences between current ISS crew training and training for DST missions, the needs of future crew members, and the demands of future missions, this ongoing study seeks to document current training practices and lessons learned. The goal of the study is to provide input to the design of future crew training that takes as much advantage as possible of what has already been learned and avoids as much as possible past inefficiencies. Results from this study will be presented upon its completion. By researching established training principles, examining future needs, and by using current practices in spaceflight training as test beds, this research project is mitigating program risks and generating templates and requirements to meet future training needs.
Dual Brushless Resolver Rate Sensor
NASA Technical Reports Server (NTRS)
Howard, David E. (Inventor)
1996-01-01
This invention relates to dual analog angular rate sensors implemented without the use of mechanical brushes. A resolver rate sensor including two brushless resolvers mechanically coupled to the same output shaft is provided, with the first resolver driven by a DC input and the second by an AC sinusoidal input. A trigonometric identity, in which the sum of the squares of the sine and cosine components equals one, is used to advantage in providing a sensor of increased accuracy. The first resolver may have a fixed or variable DC input to permit dynamic adjustment of resolver sensitivity, thus permitting a wide range of coverage. The novelty and advantages of the invention reside in the excitation of a resolver with a DC signal and in the utilization of two resolvers and the trigonometric identity cos^2(theta) + sin^2(theta) = 1 to provide an accurate rate sensor which is sensitive to direction and accurate through zero rate.
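A simplified numerical sketch of the identity the patent exploits, assuming an ideal DC-excited resolver whose outputs are proportional to sin(theta) and cos(theta); the signal values are simulated, not measured.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
theta = 2.0 * np.pi * 0.5 * t            # shaft turning at 0.5 rev/s
k = 1.0                                   # DC excitation amplitude
sin_out, cos_out = k * np.sin(theta), k * np.cos(theta)

amplitude = np.sqrt(sin_out**2 + cos_out**2)    # identity recovers constant k
angle = np.unwrap(np.arctan2(sin_out, cos_out)) # shaft angle from the pair
rate = np.gradient(angle, t)                    # rad/s, sign gives direction

print(amplitude[:3], rate[500])                 # ~[1 1 1], ~3.14 rad/s
```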
The clinical use of dynamic posturography in the elderly.
Shepard, N T
1989-12-01
We provide an overview of the clinical uses of dynamic posturography. Although the equipment described to perform this testing is expensive, the concepts, especially those for sensory organization, can be applied for $20.00. To apply the six sensory organization conditions, one merely needs some way to disrupt proprioceptive information while maintaining ankle angle, and to provide visual conflict stimuli. We found that proprioceptive information can be disrupted easily by asking the patient to stand on a thick (4-inch) dense piece of foam rubber like that used in cushions for furniture. Visual stabilization conflict can be provided by having the patient wear a 19- to 20-inch Japanese lantern with a head-mounting system in the center so that the patient's movements are not reflected in relative movement of the visual environment. With use of these two simple tools, the six sensory organization tests can be approximated in a clinical situation in a short time and can provide some relative information about a patient's postural control capabilities. With minor additional work, a quantitative measure of output that gives indications of the amount of anterior-posterior sway also can be provided. For elderly patients with a variety of problems ranging from general unsteadiness to frank vertigo, the risk of falling can be devastating, and it is important to provide a thorough investigation of the total balance system. The systematic investigation, qualitatively or quantitatively, of the integration of sensory inputs and motor outputs provides a dimension that typically has been lacking in the routine "dizzy patient workup" for all ages, but especially for elderly patients. Therefore, the application of the postural maintenance theory with the above-described procedures, or variations on these procedures, appears to have a great deal of clinical relevance in the evaluation of patients with gait and balance disorders. These types of evaluations represent an adjunct or addition to the evaluation of the vestibular system and the vestibulo-ocular reflexes and by no means should be considered a substitute for that traditional evaluation. It is the combination of information that can provide the clinician with a more global picture of the entire balance system and its functional capabilities.
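A minimal version of the quantitative anterior-posterior sway measure mentioned above: root-mean-square excursion of a centre-of-pressure trace about its mean, computed here on a simulated trace rather than clinical data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated A-P centre-of-pressure trace, cm, 30 s at 100 Hz
cop_ap = np.cumsum(rng.normal(0.0, 0.05, 3000)) / 50.0

rms_sway = np.sqrt(np.mean((cop_ap - cop_ap.mean()) ** 2))
print(f"A-P RMS sway: {rms_sway:.2f} cm")
```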
NASA Astrophysics Data System (ADS)
Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.
1999-05-01
PET imaging can quantify metabolic processes in-vivo; this requires the measurement of an input function which is invasive and labor intensive. A non-invasive, semi-automated, image based method of input function generation would be efficient, patient friendly, and allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as processing tool for definition of temporally changing structures in the field of view. FA has been proposed earlier, but the perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity-curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high. Linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data eliminating the need for blood sampling. Output (tissue) functions can be simultaneously generated. The method is simple, requires no sophisticated operator interaction and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
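A conceptual sketch of the decomposition step, using non-negative matrix factorization as a stand-in for the authors' factor analysis (both the factor curves and the per-pixel loadings of TACs must be non-negative); the frame data here are random placeholders, not PET images.

```python
import numpy as np
from sklearn.decomposition import NMF

frames, pixels = 24, 500
tacs = np.random.default_rng(1).random((pixels, frames))  # placeholder TACs

nmf = NMF(n_components=2, init="nndsvda", max_iter=500)
weights = nmf.fit_transform(tacs)      # per-pixel factor loadings
factor_tacs = nmf.components_          # candidate blood/tissue curves
print(factor_tacs.shape)               # (2, frames)
```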
High-performance ultra-low power VLSI analog processor for data compression
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
1996-01-01
An apparatus for data compression employing a parallel analog processor. The apparatus includes an array of processor cells with N columns and M rows wherein the processor cells have an input device, memory device, and processor device. The input device is used for inputting a series of input vectors. Each input vector is simultaneously input into each column of the array of processor cells in a pre-determined sequential order. An input vector is made up of M components, ones of which are input into ones of M processor cells making up a column of the array. The memory device is used for providing ones of M components of a codebook vector to ones of the processor cells making up a column of the array. A different codebook vector is provided to each of the N columns of the array. The processor device is used for simultaneously comparing the components of each input vector to corresponding components of each codebook vector, and for outputting a signal representative of the closeness between the compared vector components. A combination device is used to combine the signal output from each processor cell in each column of the array and to output a combined signal. A closeness determination device is then used for determining which codebook vector is closest to an input vector from the combined signals, and for outputting a codebook vector index indicating which of the N codebook vectors was the closest to each input vector input into the array.
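A software analogue of the parallel comparison the chip performs: find the codebook vector closest to an input vector and output its index; array sizes and values are arbitrary.

```python
import numpy as np

M, N = 8, 16
codebook = np.random.default_rng(2).random((N, M))  # N codebook vectors
x = np.random.default_rng(3).random(M)              # one M-component input

distances = np.sum((codebook - x) ** 2, axis=1)     # per-column closeness
index = int(np.argmin(distances))                   # codebook vector index
print("closest codebook vector:", index)
```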
Transfer Function Control for Biometric Monitoring System
NASA Technical Reports Server (NTRS)
Chmiel, Alan J. (Inventor); Grodinsky, Carlos M. (Inventor); Humphreys, Bradley T. (Inventor)
2015-01-01
A modular apparatus for acquiring biometric data may include circuitry operative to receive an input signal indicative of a biometric condition, the circuitry being configured to process the input signal according to a transfer function thereof and to provide a corresponding processed input signal. A controller is configured to provide at least one control signal to the circuitry to programmatically modify the transfer function of the modular system to facilitate acquisition of the biometric data.
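A hedged sketch of a programmatically modified transfer function: a control signal selects between two digital filter coefficient sets applied to the raw input. The filter designs, mode names, and sample rate are invented, not the patent's circuit.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 250.0                                         # sample rate, Hz (assumed)
modes = {
    "ecg":  butter(4, 40.0, btype="low", fs=fs),   # wide-band mode
    "resp": butter(2, 1.0,  btype="low", fs=fs),   # slow-signal mode
}

def acquire(signal, control):
    """Control signal reconfigures the module's transfer function."""
    b, a = modes[control]
    return lfilter(b, a, signal)

raw = np.random.default_rng(4).normal(size=1000)   # placeholder input signal
print(acquire(raw, "resp")[:5])
```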
Building Extraction from Remote Sensing Data Using Fully Convolutional Networks
NASA Astrophysics Data System (ADS)
Bittner, K.; Cui, S.; Reinartz, P.
2017-05-01
Building detection and footprint extraction are highly demanded for many remote sensing applications. Though most previous works have shown promising results, the automatic extraction of building footprints still remains a nontrivial topic, especially in complex urban areas. Recently developed extensions of the CNN framework have made it possible to perform dense pixel-wise classification of input images. Based on these abilities, we propose a methodology that automatically generates a full-resolution binary building mask out of a Digital Surface Model (DSM) using a Fully Convolutional Network (FCN) architecture. The advantage of using the depth information is that it provides geometrical silhouettes and allows a better separation of buildings from the background, as well as invariance to illumination and color variations. The proposed framework has mainly two steps. Firstly, the FCN is trained on a large set of patches consisting of normalized DSM (nDSM) as inputs and available ground truth building masks as target outputs. Secondly, the generated predictions from the FCN are viewed as unary terms for a fully connected Conditional Random Field (FCRF), which enables us to create a final binary building mask. A series of experiments demonstrates that our methodology is able to extract accurate building footprints which closely match the buildings' original shapes. The quantitative and qualitative analyses show significant improvements of the results in contrast to the multi-layer fully connected network from our previous work.
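A minimal PyTorch sketch of the idea: a fully convolutional encoder-decoder mapping a single-channel nDSM patch to a full-resolution building-mask probability map. The architecture and sizes are illustrative, not the paper's network.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(           # downsample by 4x
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(           # back to input resolution
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )

    def forward(self, x):
        return torch.sigmoid(self.decode(self.encode(x)))

patch = torch.randn(1, 1, 128, 128)   # one placeholder nDSM patch
mask = TinyFCN()(patch)               # (1, 1, 128, 128) probabilities
print(mask.shape)
```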
Mapping the structure of the world economy.
Lenzen, Manfred; Kanemoto, Keiichiro; Moran, Daniel; Geschke, Arne
2012-08-07
We have developed a new series of environmentally extended multi-region input-output (MRIO) tables with applications in carbon, water, and ecological footprinting, and Life-Cycle Assessment, as well as trend and key driver analyses. Such applications have recently been at the forefront of global policy debates, such as about assigning responsibility for emissions embodied in internationally traded products. The new time series was constructed using advanced parallelized supercomputing resources, and significantly advances the previous state of art because of four innovations. First, it is available as a continuous 20-year time series of MRIO tables. Second, it distinguishes 187 individual countries comprising more than 15,000 industry sectors, and hence offers unsurpassed detail. Third, it provides information just 1-3 years delayed therefore significantly improving timeliness. Fourth, it presents MRIO elements with accompanying standard deviations in order to allow users to understand the reliability of data. These advances will lead to material improvements in the capability of applications that rely on input-output tables. The timeliness of information means that analyses are more relevant to current policy questions. The continuity of the time series enables the robust identification of key trends and drivers of global environmental change. The high country and sector detail drastically improves the resolution of Life-Cycle Assessments. Finally, the availability of information on uncertainty allows policy-makers to quantitatively judge the level of confidence that can be placed in the results of analyses.
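In miniature, the core footprint calculation such MRIO tables support: emissions embodied in a final demand vector via the Leontief inverse. A real table has thousands of sectors; this toy example has three, with invented numbers.

```python
import numpy as np

A = np.array([[0.1, 0.2, 0.0],     # inter-industry coefficient matrix
              [0.0, 0.1, 0.3],
              [0.2, 0.0, 0.1]])
f = np.array([0.5, 1.2, 0.8])      # emissions intensity per unit output
y = np.array([100.0, 50.0, 75.0])  # final demand

L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse
footprint = f @ L @ y              # emissions embodied in y
print(footprint)
```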
Environmental assessment of metal exposure to corals living in Castle Harbour, Bermuda
Prouty, N.G.; Goodkin, N.F.; Jones, R.; Lamborg, C.H.; Storlazzi, C.D.; Hughen, K.A.
2013-01-01
Environmental contamination in Castle Harbour, Bermuda, has been linked to the dissolution and leaching of contaminants from the adjacent marine landfill. This study expands the evidence for environmental impact of leachate from the landfill by quantitatively demonstrating elevated metal uptake over the last 30 years in corals growing in Castle Harbour. Coral Pb/Ca, Zn/Ca and Mn/Ca ratios and total Hg concentrations are elevated relative to an adjacent control site in John Smith's Bay. The temporal variability in the Castle Harbour coral records suggests that while the landfill has increased in size over the last 35 years, the dominant input of metals is through periodic leaching of contaminants from the municipal landfill and surrounding sediment. Elevated contaminants in the surrounding sediment suggest that resuspension is an important transport medium for transferring heavy metals to corals. Increased winds, particularly during the 1990s, were accompanied by higher coral metal composition at Castle Harbour. Coupled with wind-induced resuspension, interannual changes in sea level within the Harbour can lead to increased bioavailability of sediment-bound metals and subsequent coral metal assimilation. At John Smith's Bay, large scale convective mixing may be driving interannual metal variability in the coral record rather than impacts from land-based activities. Results from this study provide important insights into the coupling of natural variability and anthropogenic input of contaminants to the nearshore environment.
Weeding, Emma; Houle, Jason
2010-01-01
Modeling tools can play an important role in synthetic biology in the same way modeling helps in other engineering disciplines: simulations can quickly probe mechanisms and provide a clear picture of how different components influence the behavior of the whole. We briefly review available tools and present SynBioSS Designer. The Synthetic Biology Software Suite (SynBioSS) is used for the generation, storing, retrieval and quantitative simulation of synthetic biological networks. SynBioSS consists of three distinct components: the Desktop Simulator, the Wiki, and the Designer. SynBioSS Designer takes as input molecular parts involved in gene expression and regulation (e.g. promoters, transcription factors, ribosome binding sites, etc.), and automatically generates complete networks of reactions that represent transcription, translation, regulation, induction and degradation of those parts. Effectively, Designer uses DNA sequences as input and generates networks of biomolecular reactions as output. In this paper we describe how Designer uses universal principles of molecular biology to generate models of any arbitrary synthetic biological system. These models are useful as they explain biological phenotypic complexity in mechanistic terms. In turn, such mechanistic explanations can assist in designing synthetic biological systems. We also discuss, giving practical guidance to users, how Designer interfaces with the Registry of Standard Biological Parts, the de facto compendium of parts used in synthetic biology applications. PMID:20639523
NASA Astrophysics Data System (ADS)
Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.
2017-12-01
Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and are likely applied even more widely in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined the impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus improved data and enhanced assumptions, on model outcomes and thus, ultimately, on study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study regions. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.
An analysis of the process and results of manual geocode correction
McDonald, Yolanda J.; Schwind, Michael; Goldberg, Daniel W.; Lampley, Amanda; Wheeler, Cosette M.
2018-01-01
Geocoding is the science and process of assigning geographical coordinates (i.e. latitude, longitude) to a postal address. The quality of the geocode can vary dramatically depending on several variables, including incorrect input address data, missing address components, and spelling mistakes. A dataset with a considerable number of geocoding inaccuracies can potentially result in an imprecise analysis and invalid conclusions. There has been little quantitative analysis of the amount of effort (i.e. time) required to perform geocode correction, and of how such correction could improve geocode quality type. This study used a low-cost and easy-to-implement method to improve the geocode quality type of an input database (i.e. addresses to be matched) through manual geocode intervention; it assessed the amount of effort required to manually correct inaccurate geocodes, reported the resulting match rate improvement between the original and the corrected geocodes, and documented the corresponding spatial shift by geocode quality type resulting from the corrections. Findings demonstrated that manual intervention in geocoding resulted in a 90% improvement in geocode quality type, took 42 hours to process, and produced spatial shifts ranging from 0.02 to 151,368 m. This study provides evidence to inform research teams considering the application of manual geocoding intervention that it is a low-cost and relatively easy process to execute. PMID:28555477
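The reported spatial shifts can be computed as great-circle distances between the original and corrected geocodes; the sketch below uses the haversine formula with invented coordinates.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, r=6_371_000.0):
    """Great-circle distance in metres between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

original = (35.0841, -106.6509)    # hypothetical geocode before correction
corrected = (35.0853, -106.6424)   # hypothetical geocode after correction
print(f"{haversine_m(*original, *corrected):.1f} m shift")
```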
The subjective experience of acute, experimentally-induced Salvia divinorum inebriation.
Addy, Peter H; Garcia-Romeu, Albert; Metzger, Matthew; Wade, Jenny
2015-04-01
This study examined the overall psychological effects of inebriation facilitated by the naturally-occurring plant hallucinogen Salvia divinorum using a double-blind, randomized, placebo-controlled trial. Thirty healthy individuals self-administered Salvia divinorum via combustion and inhalation in a quiet, comfortable research setting. Experimental sessions, post-session interviews, and 8-week follow-up meetings were audio recorded and transcribed to provide the primary qualitative material analyzed here. Additionally, post-session responses to the Hallucinogen Rating Scale provided a quantitative groundwork for mixed-methods discussion. Qualitative data underwent thematic content analysis, being coded independently by three researchers before being collaboratively integrated to provide the final results. Three main themes and 10 subthemes of acute intoxication emerged, encompassing the qualities of the experience, perceptual alterations, and cognitive-affective shifts. The experience was described as having rapid onset and being intense and unique. Participants reported marked changes in auditory, visual, and interoceptive sensory input; losing normal awareness of themselves and their surroundings; and an assortment of delusional phenomena. Additionally, the abuse potential of Salvia divinorum was examined post hoc. These findings are discussed in light of previous research, and provide an initial framework for greater understanding of the subjective effects of Salvia divinorum, an emerging drug of abuse. © The Author(s) 2015.
A Practical Probabilistic Graphical Modeling Tool for Weighing ...
Past weight-of-evidence frameworks for adverse ecological effects have provided soft-scoring procedures for judgments based on the quality and measured attributes of evidence. Here, we provide a flexible probabilistic structure for weighing and integrating lines of evidence for ecological risk determinations. Probabilistic approaches can provide both a quantitative weighing of lines of evidence and methods for evaluating risk and uncertainty. The current modeling structure was developed for propagating uncertainties in measured endpoints and their influence on the plausibility of adverse effects. To illustrate the approach, we apply the model framework to the sediment quality triad using example lines of evidence for sediment chemistry measurements, bioassay results, and in situ infauna diversity of benthic communities in a simplified hypothetical case study. We then combine the three lines of evidence, evaluate sensitivity to the input parameters, and show how uncertainties are propagated and how additional information can be incorporated to rapidly update the probability of impacts. The developed network model can be expanded to accommodate additional lines of evidence, variables and states of importance, and different types of uncertainties in the lines of evidence, including spatial and temporal variability as well as measurement errors. We provide a flexible Bayesian network structure for weighing and integrating lines of evidence for ecological risk determinations.
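The core idea of probabilistically weighing lines of evidence can be illustrated with a naive Bayes-style odds update. This is a minimal sketch assuming conditionally independent lines and purely illustrative likelihood values; the paper's actual network need not make the independence assumption:

```python
# Posterior probability of adverse ecological impact after combining three
# lines of evidence (sediment chemistry, bioassay, benthic diversity).
# Likelihood values below are illustrative placeholders, not study values.
prior = 0.5  # prior probability of impact

# (P(observation | impact), P(observation | no impact)) for each line
likelihoods = [(0.8, 0.3),   # chemistry exceeds guideline
               (0.7, 0.2),   # bioassay shows toxicity
               (0.6, 0.4)]   # reduced infauna diversity

odds = prior / (1 - prior)
for p_e_h, p_e_not_h in likelihoods:
    odds *= p_e_h / p_e_not_h  # update odds with each line's likelihood ratio

posterior = odds / (1 + odds)
print(f"P(impact | all evidence) = {posterior:.3f}")
```

New information updates the posterior by simply multiplying in another likelihood ratio, which is the "rapid updating" property the abstract describes.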
Cervantes, Felix A; Backus, Elaine A
2018-05-31
Blue-green sharpshooter, Graphocephala atropunctata, is a native California vector of Xylella fastidiosa (Xf), a foregut-borne bacterium that is the causal agent of Pierce's disease in grapevines. A 3rd-generation, AC-DC electropenetrograph (EPG monitor) was used to record stylet probing and ingestion behaviors of adult G. atropunctata on healthy grapevines. This study presents for the first time a complete, updated waveform library for this species, as well as effects of different electropenetrograph settings and adhesives on waveform appearances. Both AC and DC applied signals were used with input resistor (Ri) levels (amplifier sensitivities) of 10^6, 10^7, 10^8, and 10^9 Ohms, as well as two types of adhesives, conducting silver paint and handmade silver glue. Waveform description, characterization of electrical origins (R versus emf components), and proposed biological meanings of waveforms are reported, as well as qualitative differences in waveform appearances observed with different electropenetrograph settings and adhesives. In addition, a quantitative study with AC signal, using two applied voltage levels (50 and 200 mV) and two Ri levels (10^7 and 10^9 Ohms), was performed. Intermediate Ri levels of 10^7 and 10^8 Ohms provided EPG waveforms with the greatest amount of information, because both levels captured similar proportions of R and emf components, as supported by appearance, clarity, and definition of waveforms. Similarly, use of a gold wire loop plus handmade silver glue provided more definition of waveforms than a gold wire loop plus commercial conducting silver paint. Qualitative/observational evidence suggested that AC applied signal caused fewer aberrant behaviors/waveforms than DC applied signal. In the quantitative study, behavioral components of the sharpshooter X wave were the most affected by changes in Ri and voltage level. Because the X wave probably represents X. fastidiosa inoculation behavior, future studies of X. fastidiosa inoculation via EPG will require carefully determined instrument settings. An intermediate Ri level such as 10^8 Ohms with low voltage, AC applied signal, and a gold wire loop plus silver glue is recommended as the best electropenetrograph method for future EPG studies of sharpshooter inoculation behaviors on Xf-resistant and -susceptible grapevines. Copyright © 2018. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Foster, K.
1994-09-01
This document is a description of a computer program called Format( )MEDIC( )Input. The purpose of this program is to allow the user to quickly reformat wind velocity data in the Model Evaluation Database (MEDb) into a reasonable 'first cut' set of MEDIC input files (MEDIC.nml, StnLoc.Met, and Observ.Met). The user is cautioned that these resulting input files must be reviewed for correctness and completeness. This program will not format MEDb data into a Problem Station Library or Problem Metdata File. A description of how the program reformats the data is provided, along with a description of the required and optional user input and a description of the resulting output files. A description of the MEDb is not provided here but can be found in the RAS Division Model Evaluation Database Description document.
Bird, David A.
1983-01-01
A low-noise pulse conditioner is provided for driving electronic digital processing circuitry directly from differentially induced input pulses. The circuit uses a unique differential-to-peak detector circuit to generate a dynamic reference signal proportional to the input peak voltage. The input pulses are compared with the reference signal in an input network which operates in full differential mode with only a passive input filter. This reduces the introduction of circuit-induced noise, or jitter, generated in ground referenced input elements normally used in pulse conditioning circuits, especially speed transducer processing circuits.
NASA Astrophysics Data System (ADS)
Wang, Xin; Li, Yan; Chen, Tongjun; Yan, Qiuyan; Ma, Li
2017-04-01
The thickness of tectonically deformed coal (TDC) has a positive correlation with gas outbursts. In order to predict the TDC thickness of coal beds, we propose a new quantitative prediction method using an extreme learning machine (ELM) algorithm, a principal component analysis (PCA) algorithm, and seismic attributes. At first, we build an ELM prediction model using the PCA attributes of a synthetic seismic section. The results suggest that the ELM model can produce a reliable and accurate prediction of the TDC thickness for synthetic data, with a sigmoid activation function and 20 hidden nodes preferred. Then, we analyze the applicability of the ELM model to thickness prediction of the TDC with real application data. Through cross validation of near-well traces, the results suggest that the ELM model can produce a reliable and accurate prediction of the TDC. After that, we use 250 near-well traces from 10 wells to build an ELM prediction model and use the model to forecast the TDC thickness of the No. 15 coal in the study area using the PCA attributes as the inputs. Comparing the predicted results, it is noted that the trained ELM model with two selected PCA attributes yields better prediction results than those from the other combinations of the attributes. Finally, the trained ELM model with real seismic data has a different number of hidden nodes (10) than the trained ELM model with synthetic seismic data. In summary, it is feasible to use an ELM model to predict the TDC thickness using the calculated PCA attributes as the inputs. However, the input attributes, the activation function, and the number of hidden nodes in the ELM model should be selected and tested carefully for each individual application.
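For readers unfamiliar with ELMs, the core algorithm is brief: input weights are random and never trained, the hidden layer applies a sigmoid, and only the output weights are solved, analytically, by least squares. A minimal sketch follows; the data, dimensions, and seed are illustrative stand-ins for the PCA attributes and near-well traces, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=20):
    """Train an extreme learning machine: random input weights,
    sigmoid hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy stand-in: 250 "near-well traces" with 2 PCA-attribute inputs
X = rng.normal(size=(250, 2))
y = 3.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] ** 2 + 0.1 * rng.normal(size=250)
W, b, beta = elm_fit(X, y, n_hidden=20)
print(np.corrcoef(y, elm_predict(X, W, b, beta))[0, 1])
```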
NASA Astrophysics Data System (ADS)
Xu, Zhaokai; Li, Tiegang; Clift, Peter D.; Lim, Dhongil; Wan, Shiming; Chen, Hongjin; Tang, Zheng; Jiang, Fuqing; Xiong, Zhifang
2015-09-01
We present a new high-resolution multiproxy data set of Sr-Nd isotopes, rare earth element, soluble iron, and total organic carbon data from International Marine Global Change Study Core MD06-3047, located in the western Philippine Sea. We integrate our new data with published clay mineralogy, rare earth element chemistry, thermocline depth, and δ13C differences between benthic and planktonic foraminifera, in order to quantitatively constrain Asian dust input to the basin. We explore the relationship between Philippine Sea and high-latitude Pacific eolian fluxes, as well as its significance for marine productivity and atmospheric CO2 during the mid-late Quaternary. Three different indices indicate that Asian dust contributes between ~15% and ~50% to the detrital fraction of the sediments. Eolian dust flux in Core MD06-3047 is similar to that in polar southern Pacific sediment. Coherent changes in most dust flux maxima and minima indicate that dust generation in interhemispheric source areas might have had a common response to climatic variation over the mid-late Quaternary. Furthermore, we note relatively good coherence between Asian dust input, soluble iron concentration, local marine productivity, and even global atmospheric CO2 concentration over the entire study interval. This suggests that dust-borne iron fertilization of marine phytoplankton might have been a periodic process operating at glacial/interglacial time scales over the past 700 ka. We suggest that strengthening of the biological pump in the Philippine Sea, and elsewhere in the tropical western Pacific, during mid-late Quaternary glacial periods may have contributed to the lowering of atmospheric CO2 concentrations during ice ages.
Wavelength meter having single mode fiber optics multiplexed inputs
Hackel, R.P.; Paris, R.D.; Feldman, M.
1993-02-23
A wavelength meter having a single mode fiber optics input is disclosed. The single mode fiber enables a plurality of laser beams to be multiplexed to form a multiplexed input to the wavelength meter. The wavelength meter can provide a determination of the wavelength of any one or all of the plurality of laser beams by suitable processing. Another aspect of the present invention is that one of the laser beams could be a known reference laser having a predetermined wavelength. Hence, the improved wavelength meter can provide an on-line calibration capability with the reference laser input as one of the plurality of laser beams.
NASA Technical Reports Server (NTRS)
Birchenough, Arthur G.
2003-01-01
Improvements in the efficiency and size of DC-DC converters have resulted from advances in components, primarily semiconductors, and improved topologies. One topology, which has shown very high potential in limited applications, is the Series Connected Boost Unit (SCBU), wherein a small DC-DC converter output is connected in series with the input bus to provide an output voltage equal to or greater than the input voltage. Since the DC-DC converter switches only a fraction of the power throughput, the overall system efficiency is very high. But this technique is limited to applications where the output is always greater than the input. The Series Connected Buck Boost Regulator (SCBBR) concept extends the partial power processing technique used in the SCBU to operation when the desired output voltage is higher or lower than the input voltage, and the implementation described can even operate as a conventional buck converter at very low output-to-input voltage ratios. This paper describes the operation and performance of an SCBBR configured as a bus voltage regulator providing a 50 percent voltage regulation range, bus switching, and overload limiting, operating above 98 percent efficiency. The technique does not provide input-output isolation.
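The efficiency benefit of processing only part of the power reduces to simple arithmetic. A minimal sketch under idealized assumptions (lossless series pass-through, boost-only operation); the function name and numbers are illustrative, not from the paper:

```python
def scbu_efficiency(v_in, v_out, eta_conv):
    """System efficiency when a converter processes only the series
    'boost' fraction of the throughput (assumes v_out >= v_in and a
    lossless pass-through path)."""
    f = (v_out - v_in) / v_out       # fraction of power actually converted
    return 1.0 - f * (1.0 - eta_conv)

# A 90%-efficient converter boosting a 100 V bus to 110 V processes
# only ~9% of the throughput, so the system loses only ~1% overall.
print(scbu_efficiency(100.0, 110.0, 0.90))  # ~0.991
```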
Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU
NASA Astrophysics Data System (ADS)
Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee
2013-02-01
3D microscopy images contain vast amounts of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To solve these problems, many people crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, there are drawbacks at the image processing level, e.g., the selected ROI strongly depends on the user and there is a loss of original image information. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides efficient and varied automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images. Users can select the algorithm to be applied. Further, the image processing tool provides visualization of segmented volume data and can set the scale, translation, etc. using a keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to obtain information useful to biologists. To analyze 3D microscopic images, we need quantitative data from the images. Therefore, we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object. This information can be used as features for classification. A user can select the object to be analyzed. Our tool allows the selected object to be displayed in a new window, and hence, more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.
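The threshold-segment-label-quantify pipeline can be sketched on the CPU with scikit-image; the GPU tool in the paper accelerates the same logical steps. A minimal sketch, using a synthetic volume as an illustrative stand-in for a 3D microscopy image:

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

rng = np.random.default_rng(1)
# Toy 3D "microscopy" volume: dim background plus two bright blobs
vol = rng.normal(100, 10, size=(64, 64, 64))
vol[10:20, 10:20, 10:20] += 150
vol[40:50, 30:40, 30:40] += 150

t = threshold_otsu(vol)          # automatic intensity threshold
mask = vol > t                   # intensity-based segmentation
labels = label(mask)             # label each connected 3D object

for obj in regionprops(labels):  # quantitative info per labeled object
    print(obj.label, obj.area, obj.centroid)
```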
Protein collapse is encoded in the folded state architecture.
Samanta, Himadri S; Zhuravlev, Pavel I; Hinczewski, Michael; Hori, Naoto; Chakrabarti, Shaon; Thirumalai, D
2017-05-21
Folded states of single domain globular proteins are compact with high packing density. The radius of gyration, R_g, of both the folded and unfolded states increases as N^ν, where N is the number of amino acids in the protein. The values of the Flory exponent ν are, respectively, ≈⅓ and ≈0.6 in the folded and unfolded states, coinciding with those for homopolymers. However, the extent of compaction of the unfolded state of a protein under low denaturant concentration (collapsibility), conditions favoring the formation of the folded state, is unknown. We develop a theory that uses the contact map of proteins as input to quantitatively assess collapsibility of proteins. Although collapsibility is universal, the propensity to be compact depends on the protein architecture. Application of the theory to over two thousand proteins shows that collapsibility depends not only on N but also on the contact map reflecting the native structure. A major prediction of the theory is that β-sheet proteins are far more collapsible than structures dominated by α-helices. The theory and the accompanying simulations, validating the theoretical predictions, provide insights into the differing conclusions reached using different experimental probes assessing the extent of compaction of proteins. By calculating the criterion for collapsibility as a function of protein length, we provide quantitative insights into the reasons why single domain proteins are small and the physical reasons for the origin of multi-domain proteins. Collapsibility of non-coding RNA molecules is similar to that of β-sheet proteins, adding support to the "Compactness Selection Hypothesis".
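A quick numerical illustration of the Flory scaling R_g = R_0 N^ν using the exponents quoted above; the prefactor R_0 is an assumed value of order the residue size, not taken from the paper:

```python
# Flory scaling: nu ~ 1/3 for the folded state, ~0.6 for the unfolded state.
R0 = 2.0  # angstroms, illustrative prefactor
for N in (50, 100, 300):
    rg_folded = R0 * N ** (1 / 3)
    rg_unfolded = R0 * N ** 0.6
    print(f"N={N:3d}  folded Rg ~ {rg_folded:5.1f} A  unfolded Rg ~ {rg_unfolded:5.1f} A")
```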
Statistical aspects of quantitative real-time PCR experiment design.
Kitchen, Robert R; Kubista, Mikael; Tichopad, Ales
2010-04-01
Experiments using quantitative real-time PCR to test hypotheses are limited by technical and biological variability; we seek to minimise sources of confounding variability through optimum use of biological and technical replicates. The quality of an experiment design is commonly assessed by calculating its prospective power. Such calculations rely on knowledge of the expected variances of the measurements of each group of samples and the magnitude of the treatment effect; the estimation of which is often uninformed and unreliable. Here we introduce a method that exploits a small pilot study to estimate the biological and technical variances in order to improve the design of a subsequent large experiment. We measure the variance contributions at several 'levels' of the experiment design and provide a means of using this information to predict both the total variance and the prospective power of the assay. A validation of the method is provided through a variance analysis of representative genes in several bovine tissue types. We also discuss the effect of normalisation to a reference gene in terms of the measured variance components of the gene of interest. Finally, we describe a software implementation of these methods, powerNest, which gives the user the opportunity to input data from a pilot study and interactively modify the design of the assay. The software automatically calculates expected variances, statistical power, and optimal design of the larger experiment. powerNest enables the researcher to minimise the total confounding variance and maximise prospective power for a specified maximum cost for the large study. Copyright 2010 Elsevier Inc. All rights reserved.
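The pilot-to-power logic can be sketched as follows, assuming a simple two-level nested variance model and a normal-approximation power calculation. This is a simplification of the powerNest implementation, and the variance values are illustrative:

```python
import numpy as np
from scipy.stats import norm

def prospective_power(delta, var_bio, var_tech, n_bio, n_tech, alpha=0.05):
    """Normal-approximation power for detecting a log-expression change
    `delta` between two groups with nested biological and technical
    replication (pilot-study variance estimates as inputs)."""
    var_mean = var_bio / n_bio + var_tech / (n_bio * n_tech)
    se = np.sqrt(2 * var_mean)          # standard error of the difference
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(abs(delta) / se - z)

# Illustrative pilot estimates: biological variance dominates, so adding
# biological replicates pays off more than adding technical replicates.
print(prospective_power(delta=1.0, var_bio=0.6, var_tech=0.1,
                        n_bio=6, n_tech=3))
```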
NASA Technical Reports Server (NTRS)
Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.
2015-01-01
Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in an overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.
Segmentation of the Knee for Analysis of Osteoarthritis
NASA Astrophysics Data System (ADS)
Zerfass, Peter; Museyko, Oleg; Bousson, Valérie; Laredo, Jean-Denis; Kalender, Willi A.; Engelke, Klaus
Osteoarthritis changes the load distribution within joints and also changes bone density and structure. Within the typical timelines of clinical studies these changes can be very small. Therefore, precise definition of evaluation regions that are highly robust and show little to no inter- and intra-operator variance is essential for high-quality quantitative analysis. To achieve this goal we have developed a system for the definition of such regions with minimal user input.
Performance of fire behavior fuel models developed for the Rothermel Surface Fire Spread Model
Robert Ziel; W. Matt Jolly
2009-01-01
In 2005, 40 new fire behavior fuel models were published for use with the Rothermel Surface Fire Spread Model. These new models are intended to augment the original 13 developed in 1972 and 1976. As a compiled set of quantitative fuel descriptions that serve as input to the Rothermel model, the selected fire behavior fuel model has always been critical to the resulting...
Hybrid powertrain system including smooth shifting automated transmission
Beaty, Kevin D.; Nellums, Richard A.
2006-10-24
A powertrain system is provided that includes a prime mover and a change-gear transmission having an input, at least two gear ratios, and an output. The powertrain system also includes a power shunt configured to route power applied to the transmission by one of the input and the output to the other one of the input and the output. A transmission system and a method for facilitating shifting of a transmission system are also provided.
Multiplexer and time duration measuring circuit
Gray, Jr., James
1980-01-01
A multiplexer device is provided for multiplexing data in the form of randomly developed, variable width pulses from a plurality of pulse sources to a master storage. The device includes a first multiplexer unit which includes a plurality of input circuits each coupled to one of the pulse sources, with all input circuits being disabled when one input circuit receives an input pulse so that only one input pulse is multiplexed by the multiplexer unit at any one time.
Surface code implementation of block code state distillation
Fowler, Austin G.; Devitt, Simon J.; Jones, Cody
2013-01-01
State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer better copies. Until recently, the lowest overhead method of distilling states produced a single improved |A〉 state given 15 input copies. New block code state distillation methods can produce k improved |A〉 states given 3k + 8 input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to the old. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three. PMID:23736868
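The headline overhead comparison reduces to simple arithmetic on input copies consumed per output state, ignoring the surface-code layout costs that the paper quantifies:

```python
# Input copies per improved |A> state: 15-to-1 versus the block code's
# (3k + 8)-to-k. The block-code ratio approaches 3 as k grows, which is
# why the overhead reduction quoted above is typically below a factor
# of three.
for k in (1, 2, 4, 8, 16, 64):
    block = (3 * k + 8) / k
    print(f"k={k:3d}  15-to-1: 15.0  block code: {block:5.2f}")
```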
NASA Technical Reports Server (NTRS)
Williams, J. H., Jr.; Karagulle, H.; Lee, S. S.
1982-01-01
The quantitative understanding of ultrasonic nondestructive evaluation parameters such as the stress wave factor was studied. Ultrasonic input/output characteristics for an isotropic elastic plate with transmitting and receiving longitudinal transducers coupled to the same face were analyzed. The asymptotic normal stress is calculated for an isotropic elastic half space subjected to a uniform harmonic normal stress applied to a circular region at the surface. The radiated stress waves are traced within the plate by considering wave reflections at the top and bottom faces. The output voltage amplitude of the receiving transducer is estimated by considering only longitudinal waves. Agreement is found between the output voltage wave packet amplitudes and times of arrival due to multiple reflections of the longitudinal waves.
Examining the Effects of Different IMF, F10.7, and Auroral Inputs on the Thermospheric Neutral Winds
NASA Astrophysics Data System (ADS)
Deng, Y.; Ridley, A. J.
2003-12-01
To obtain a better understanding of how the magnetosphere affects the global thermospheric and ionospheric structure, we conduct numerical experiments using the University of Michigan's Global Ionosphere-Thermosphere Model (GITM). We have run GITM to roughly steady state using different strengths of the high-latitude electric potential pattern, F10.7, and auroral inputs to determine how these affect the temporal history and steady state of the thermospheric neutral winds. Our model reproduces the well known fact that the neutral winds are strongly driven by the ion convection above approximately 300 km, and that the ramp-up time is very dependent upon altitude. We show quantitative results of the ramp-up times and maximum neutral wind speeds for the different driving conditions.
NASA Astrophysics Data System (ADS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
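A minimal flavor of information-theoretic input ranking, using scikit-learn's plain mutual information estimator on synthetic candidates. Note that, unlike the PMI algorithm used in the study, plain MI does not discount redundancy, so the redundant copy below scores about as high as the variable it duplicates:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(2)
# Toy candidate inputs: two informative, one redundant, one irrelevant
X = rng.normal(size=(500, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=500)
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=500)  # redundant copy of X[:,0]
# X[:, 3] stays irrelevant noise

mi = mutual_info_regression(X, y, random_state=0)
ranking = np.argsort(mi)[::-1]
print("MI scores:", np.round(mi, 3), " ranked inputs:", ranking)
```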
Daga, Pankaj R; Bolger, Michael B; Haworth, Ian S; Clark, Robert D; Martin, Eric J
2018-03-05
When medicinal chemists need to improve bioavailability (%F) within a chemical series during lead optimization, they synthesize new series members with systematically modified properties, mainly by following experience and general rules of thumb. More quantitative models that predict %F of proposed compounds from chemical structure alone have proven elusive. Global empirical %F quantitative structure-property (QSPR) models perform poorly, and projects have too little data to train local %F QSPR models. Mechanistic oral absorption and physiologically based pharmacokinetic (PBPK) models simulate the dissolution, absorption, systemic distribution, and clearance of a drug in preclinical species and humans. Attempts to build global PBPK models based purely on calculated inputs have not achieved the <2-fold average error needed to guide lead optimization. In this work, local GastroPlus PBPK models are instead customized for individual medchem series. The key innovation was building a local QSPR for a numerically fitted effective intrinsic clearance (CL_loc). All inputs are subsequently computed from structure alone, so the models can be applied in advance of synthesis. Training CL_loc on the first 15-18 rat %F measurements gave adequate predictions, with clear improvements up to about 30 measurements, and incremental improvements beyond that.
Hazard Screening Methods for Nanomaterials: A Comparative Study
Murphy, Finbarr; Mullins, Martin; Furxhi, Irini; Costa, Anna L.; Simeone, Felice C.
2018-01-01
Hazard identification is the key step in risk assessment and management of manufactured nanomaterials (NM). However, the rapid commercialisation of nano-enabled products continues to outpace the development of a prudent risk management mechanism that is widely accepted by the scientific community and enforced by regulators. Nevertheless, a growing body of academic literature is developing promising quantitative methods. Two approaches have gained significant currency. Bayesian networks (BN) are a probabilistic, machine learning approach, while the weight of evidence (WoE) statistical framework is based on expert elicitation. This comparative study investigates the efficacy of quantitative WoE and Bayesian methodologies in ranking the potential hazard of metal and metal-oxide NMs (TiO2, Ag, and ZnO). This research finds that hazard ranking is consistent for both risk assessment approaches. The BN and WoE models both utilize physico-chemical, toxicological, and study type data to infer the hazard potential. The BN exhibits more stability when the models are perturbed with new data. The BN has the significant advantage of self-learning with new data; however, this assumes all input data are equally valid. This research finds that a combination of WoE, which would rank input data, along with the BN is the optimal hazard assessment framework. PMID:29495342
Tham, S Y; Agatonovic-Kustrin, S
2002-05-15
A quantitative structure-retention relationship (QSRR) method was used to model the reversed-phase high-performance liquid chromatography (RP-HPLC) separation of 18 selected amino acids. Retention data for phenylthiocarbamyl (PTC) amino acid derivatives were obtained using gradient elution on an ODS column, with a mobile phase of varying acetonitrile and acetate buffer composition containing 0.5 ml/l of triethylamine (TEA). The molecular structure of each amino acid was encoded with 36 calculated molecular descriptors. The correlation between the molecular descriptors and the retention times of the compounds in the calibration set was established using the genetic neural network method. A genetic algorithm (GA) was used to select important molecular descriptors, and a supervised artificial neural network (ANN) was used to correlate mobile phase composition and selected descriptors with the experimentally derived retention times. Retention time values were used as the network's output, and calculated molecular descriptors and mobile phase composition as the inputs. The best model, with five input descriptors, was chosen, and the significance of the selected descriptors for amino acid separation was examined. Results confirmed the dominant role of the organic modifier in such chromatographic systems, in addition to lipophilicity (log P) and molecular size and shape (topological indices) of the investigated solutes.
Haarman, Juliet A M; Maartens, Erik; van der Kooij, Herman; Buurke, Jaap H; Reenalda, Jasper; Rietman, Johan S
2017-12-02
During gait training, physical therapists continuously supervise stroke survivors and provide physical support to their pelvis when they judge that the patient is unable to keep his balance. This paper is the first to provide quantitative data about the corrective forces that therapists use during gait training. It is assumed that changes in the acceleration of a patient's COM are a good predictor of therapeutic balance assistance during the training sessions. Therefore, this paper provides a method that predicts the timing of therapeutic balance assistance, based on acceleration data of the sacrum. Eight sub-acute stroke survivors and seven therapists were included in this study. Patients were asked to perform straight-line walking as well as slalom walking in a conventional training setting. Acceleration of the sacrum was captured by an Inertial Magnetic Measurement Unit. Balance-assisting corrective forces applied by the therapist were collected from two force sensors positioned on both sides of the patient's hips. Measures to characterize the therapeutic balance assistance were the amount of force, duration, impulse, and the anatomical plane in which the assistance took place. Based on the acceleration data of the sacrum, an algorithm was developed to predict therapeutic balance assistance. To validate the developed algorithm, the events of balance assistance predicted by the algorithm were compared with the actual provided therapeutic assistance. The algorithm was able to predict the actual therapeutic assistance with a Positive Predictive Value of 87% and a True Positive Rate of 81%. Assistance mainly took place over the medio-lateral axis, and corrective forces of about 2% of the patient's body weight (15.9 N (11), median (IQR)) were provided by therapists in this plane. Median duration of balance assistance was 1.1 s (0.6) (median (IQR)) and median impulse was 9.4 Ns (8.2) (median (IQR)). Although therapists were specifically instructed to aim for the force sensors on the iliac crest, a different contact location was reported in 22% of the corrections. This paper presents insights into the behavior of therapists regarding their manual physical assistance during gait training. A quantitative dataset was presented, representing therapeutic balance-assisting force characteristics. Furthermore, an algorithm was developed that predicts events at which therapeutic balance assistance was provided. Prediction scores remain high when different therapists and patients are analyzed with the same algorithm settings. Both the quantitative dataset and the developed algorithm can serve as technical input in the development of (robot-controlled) balance supportive devices.
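The validation metrics are easy to reproduce given predicted and actual event times. A minimal sketch, assuming a predicted event counts as a true positive when it falls within a tolerance window of an unmatched actual event; the tolerance and event times are illustrative, not the study's values:

```python
def event_prediction_scores(predicted, actual, tol=0.5):
    """Match predicted to actual assistance events within `tol` seconds
    and report Positive Predictive Value and True Positive Rate."""
    remaining = list(actual)
    tp = 0
    for t in predicted:
        hits = [a for a in remaining if abs(a - t) <= tol]
        if hits:
            tp += 1
            remaining.remove(hits[0])        # each actual event matched once
    ppv = tp / len(predicted) if predicted else 0.0
    tpr = tp / (tp + len(remaining)) if (tp + len(remaining)) else 0.0
    return ppv, tpr

# Illustrative event times (s) from an acceleration-threshold detector
print(event_prediction_scores([3.1, 7.8, 12.0, 20.5], [3.0, 7.9, 12.4, 18.0]))
```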
Moreno-Bote, Rubén; Parga, Néstor
2010-06-01
Delivery of neurotransmitter produces on a synapse a current that flows through the membrane and gets transmitted into the soma of the neuron, where it is integrated. The decay time of the current depends on the synaptic receptor's type and ranges from a few (e.g., AMPA receptors) to a few hundred milliseconds (e.g., NMDA receptors). The role of the variety of synaptic timescales, several of them coexisting in the same neuron, is at present not understood. A prime question to answer is what effect temporal filtering of the incoming spike trains at different timescales has on the neuron's response. Here, based on our previous work on linear synaptic filtering, we build a general theory for the stationary firing response of integrate-and-fire (IF) neurons receiving stochastic inputs filtered by one, two, or multiple synaptic channels, each characterized by an arbitrary timescale. The formalism applies to arbitrary IF model neurons and arbitrary forms of input noise (i.e., not required to be gaussian or to have small amplitude), as well as to any form of synaptic filtering (linear or nonlinear). The theory determines with exact analytical expressions the firing rate of an IF neuron for long synaptic time constants using the adiabatic approach. The correlated spiking (cross-correlation function) of two neurons receiving common as well as independent sources of noise is also described. The theory is illustrated using leaky, quadratic, and noise-thresholded IF neurons. Although the adiabatic approach is exact when at least one of the synaptic timescales is long, it provides a good prediction of the firing rate even when the timescales of the synapses are comparable to that of the leak of the neuron; it is not required that the synaptic time constants be longer than the mean interspike intervals or that the noise have small variance. The distribution of the potential for general IF neurons is also characterized. Our results provide powerful analytical tools that allow a quantitative description of the dynamics of neuronal networks with realistic synaptic dynamics.
NASA Astrophysics Data System (ADS)
Titschack, J.; Baum, D.; Matsuyama, K.; Boos, K.; Färber, C.; Kahl, W.-A.; Ehrig, K.; Meinel, D.; Soriano, C.; Stock, S. R.
2018-06-01
During the last decades, X-ray (micro-)computed tomography has gained increasing attention for the description of porous skeletal and shell structures of various organism groups. However, their quantitative analysis is often hampered by the difficulty to discriminate cavities and pores within the object from the surrounding region. Herein, we test the ambient occlusion (AO) algorithm and newly implemented optimisations for the segmentation of cavities (implemented in the software Amira). The segmentation accuracy is evaluated as a function of (i) changes in the ray length input variable, and (ii) the usage of AO (scalar) field and other AO-derived (scalar) fields. The results clearly indicate that the AO field itself outperforms all other AO-derived fields in terms of segmentation accuracy and robustness against variations in the ray length input variable. The newly implemented optimisations improved the AO field-based segmentation only slightly, while the segmentations based on the AO-derived fields improved considerably. Additionally, we evaluated the potential of the AO field and AO-derived fields for the separation and classification of cavities as well as skeletal structures by comparing them with commonly used distance-map-based segmentations. For this, we tested the zooid separation within a bryozoan colony, the stereom classification of an ophiuroid tooth, the separation of bioerosion traces within a marble block and the calice (central cavity)-pore separation within a dendrophyllid coral. The obtained results clearly indicate that the ideal input field depends on the three-dimensional morphology of the object of interest. The segmentations based on the AO-derived fields often provided cavity separations and skeleton classifications that were superior to or impossible to obtain with commonly used distance-map-based segmentations. The combined usage of various AO-derived fields by supervised or unsupervised segmentation algorithms might provide a promising target for future research to further improve the results for this kind of high-end data segmentation and classification. Furthermore, the application of the developed segmentation algorithm is not restricted to X-ray (micro-)computed tomographic data but may potentially be useful for the segmentation of 3D volume data from other sources.
McLellan, Eileen; Schilling, Keith; Robertson, Dale M.
2015-01-01
We present conceptual and quantitative models that predict changes in fertilizer-derived nitrogen delivery from rowcrop landscapes caused by agricultural conservation efforts implemented to reduce nutrient inputs and transport and increase nutrient retention in the landscape. To evaluate the relative importance of changes in the sources, transport, and sinks of fertilizer-derived nitrogen across a region, we use the spatially explicit SPAtially Referenced Regression On Watershed attributes (SPARROW) model to map the distribution, at the small watershed scale within the Upper Mississippi-Ohio River Basin (UMORB), of: (1) fertilizer inputs; (2) nutrient attenuation during delivery of those inputs to the UMORB outlet; and (3) nitrogen export from the UMORB outlet. Comparing these spatial distributions suggests that the amount of fertilizer input and the degree of nutrient attenuation are both important in determining the extent of nitrogen export. From a management perspective, this means that agricultural conservation efforts to reduce nitrogen export would benefit by: (1) expanding their focus to include activities that restore and enhance nutrient processing in these highly altered landscapes; and (2) targeting specific types of best management practices to watersheds where they will be most valuable. Doing so successfully may result in a shift in current approaches to conservation planning, outreach, and funding.
NASA Astrophysics Data System (ADS)
Zounemat-Kermani, Mohammad
2012-08-01
In this study, the ability of two models, multiple linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, to estimate the hourly dew point temperature was examined. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity, and pressure) which could be used to predict dew point temperature motivated the modeling exercise. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as further input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, mean absolute error, and absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performances of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to the MLR model, and the best performance was obtained by considering all potential input variables in terms of the different evaluation criteria.
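As a physically grounded point of comparison for such data-driven models, dew point can also be estimated directly from air temperature and relative humidity with a Magnus-type approximation. A minimal sketch; the coefficients below are one common parameter set, not values from the study:

```python
import numpy as np

def dew_point_magnus(t_air_c, rh_pct, a=17.27, b=237.7):
    """Magnus-type approximation of dew point (deg C) from air
    temperature (deg C) and relative humidity (%); a physical baseline
    against which MLR/ANN estimates can be compared."""
    gamma = np.log(rh_pct / 100.0) + a * t_air_c / (b + t_air_c)
    return b * gamma / (a - gamma)

print(dew_point_magnus(25.0, 60.0))  # ~16.7 deg C
```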
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor
2013-04-01
Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability, and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) SRTM (Shuttle Radar Topography Mission) DTM data with 5 m accuracy; (4) DTM data from HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation results depending on the input parameters, model reliability, and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted, the residual value distribution being also normal, but in the tests on the real data the residual value distribution is often mixed or unknown. The residual values were found to depend mainly on two input parameters (standard deviation and maximum point-plane distance, both defining distance thresholds for assigning points to a segment), and the curvature of the surface affected the distributions most strongly. The results of the analysis helped to decide which parameter set is best for further modeling and provides the highest accuracy. With these results in mind, quasi-automatic modeling of planar (for example plateau-like) features became more successful and often more accurate. These studies were carried out partly in the framework of the TMIS.ascrea project (Nr. 2001978), financed by the Austrian Research Promotion Agency (FFG); the contribution of ZsK was partly funded by Campus Hungary Internship TÁMOP-424B1.
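The plane-segmentation idea can be illustrated with a RANSAC-style sketch (a stand-in for the TU Wien robust fitting method, which differs in detail): candidate planes are hypothesized from point triples and scored by inliers under a distance threshold, the same kind of threshold parameter the residual analysis above depends on. All data and thresholds below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def ransac_plane(points, n_iter=500, dist_thresh=0.1):
    """Fit a plane to a 3D point cloud, robust to outliers: sample three
    points, build the candidate plane, keep the one with most inliers."""
    best_inliers, best_model = 0, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                      # degenerate (collinear) sample
        n = n / norm
        d = np.abs((points - p0) @ n)     # point-plane distances
        inliers = int((d < dist_thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, p0)
    return best_model, best_inliers

# Toy terrain facet: noisy z=0 plane plus 20% outliers
pts = np.column_stack([rng.uniform(0, 10, 1000), rng.uniform(0, 10, 1000),
                       rng.normal(0, 0.05, 1000)])
pts[:200, 2] += rng.uniform(1, 5, 200)
model, count = ransac_plane(pts)
print(count, model[0])  # inlier count and fitted plane normal (~[0, 0, 1])
```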
Toward an inventory of nitrogen input to the United States
Accurate accounting of nitrogen inputs is increasingly necessary for policy decisions related to aquatic nutrient pollution. Here we synthesize available data to provide the first integrated estimates of the amount and uncertainty of nitrogen inputs to the United States. Abou...
Happel, Max F K; Jeschke, Marcus; Ohl, Frank W
2010-08-18
Primary sensory cortex integrates sensory information from afferent feedforward thalamocortical projection systems and convergent intracortical microcircuits. Both input systems have been demonstrated to provide different aspects of sensory information. Here we have used high-density recordings of laminar current source density (CSD) distributions in primary auditory cortex of Mongolian gerbils, in combination with pharmacological silencing of cortical activity and analysis of the residual CSD, to dissociate the feedforward thalamocortical contribution and the intracortical contribution to spectral integration. We found a temporally highly precise integration of both types of inputs when the stimulation frequency was in the close spectral neighborhood of the best frequency of the measurement site, where the overlap between both inputs is maximal. Local intracortical connections provide both direct feedforward excitatory and modulatory input from adjacent cortical sites, which determines how concurrent afferent inputs are integrated. Through separate excitatory horizontal projections terminating in cortical layers II/III, information about stimulus energy at greater spectral distance is provided even over long cortical distances. These projections effectively broaden spectral tuning width. Based on these data, we suggest a mechanism of spectral integration in primary auditory cortex that is based on temporally precise interactions of afferent thalamocortical inputs and different short- and long-range intracortical networks. The proposed conceptual framework allows integration of different and partly controversial anatomical and physiological models of spectral integration in the literature.
40 CFR 60.4176 - Additional requirements to provide heat input data.
Code of Federal Regulations, 2010 CFR
2010-07-01
§ 60.4176 Additional requirements to provide heat input data. The owner or operator of a Hg Budget unit that monitors and reports Hg...
NASA Astrophysics Data System (ADS)
Gallo, Emanuela Carolina Angela
Width increased dual-pump enhanced coherent anti-Stokes Raman spectroscopy (WIDECARS) measurements were conducted in a McKenna air-ethylene premixed burner, at nominal equivalence ratios between 0.55 and 2.50, to provide simultaneous quantitative measurements of temperature and the concentrations of six major combustion species (C2H4, N2, O2, H2, CO, CO2). The purpose of this test was to investigate the uncertainties in the experimental and spectral modeling methods in preparation for a subsequent scramjet C2H4/air combustion test at the University of Virginia Aerospace Research Laboratory. A broadband Pyrromethene (PM) PM597 and PM650 dye laser mixture and optical cavity were studied and optimized to excite the Raman shifts of all the target species. Two hundred single-shot recorded spectra were processed, theoretically fitted, and then compared to computational models to verify where chemical equilibrium or adiabatic conditions occurred, providing experimental flame location and structure, species concentrations, temperature, and heat-loss inputs to computational kinetic models. The Stark effect and the temperature and concentration errors are discussed. Subsequently, WIDECARS measurements of a premixed air-ethylene flame were successfully acquired in a direct-connect small-scale dual-mode scramjet combustor at the University of Virginia Supersonic Combustion Facility (UVaSCF). A nominal Mach 5 flight condition was simulated (stagnation pressure p0 = 300 kPa, temperature T0 = 1200 K, equivalence ratio range ER = 0.3 -- 0.4). The purpose of this test was to provide quantitative measurements of the six major combustion species concentrations and temperature. Point-wise measurements were taken by mapping four two-dimensional orthogonal planes (before, within, and two planes after the cavity flame holder) with respect to the combustor freestream direction. Two hundred single-shot recorded spectra were processed and theoretically fitted. Mean flow and standard deviation are provided for each investigated case. Within the flame limits tested, WIDECARS data were analyzed and compared with CFD simulations and OH-PLIF measurements.
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
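The link between input shape and estimate consistency can be illustrated on a toy identification problem, comparing the ensemble variance of repeated least-squares estimates against the Cramer-Rao lower bound; the model, noise level, and input waveforms are illustrative, not the flight-test setup:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy identification: y = theta * u + noise; the least-squares estimate
# of theta has CRLB sigma^2 / sum(u^2), so the input's signal energy
# directly controls the achievable estimate consistency.
def crlb(u, sigma):
    return sigma ** 2 / np.sum(u ** 2)

def ensemble_variance(u, theta=2.0, sigma=0.1, n_runs=2000):
    est = []
    for _ in range(n_runs):
        y = theta * u + sigma * rng.normal(size=u.size)
        est.append(np.sum(u * y) / np.sum(u ** 2))  # least-squares estimate
    return np.var(est)

t = np.linspace(0, 1, 100)
square = np.sign(np.sin(2 * np.pi * 2 * t))  # square-wave input
sine = np.sin(2 * np.pi * 2 * t)             # sine-wave input
for name, u in [("square", square), ("sine", sine)]:
    print(name, ensemble_variance(u), crlb(u, 0.1))
```

The square wave's larger signal energy yields a lower bound and tighter estimates, consistent with the slight consistency advantage reported above for square-wave inputs.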
Zarella, Mark D; Breen, David E; Plagov, Andrei; Garcia, Fernando U
2015-01-01
Hematoxylin and eosin (H&E) staining is ubiquitous in pathology practice and research. As digital pathology has evolved, the reliance on quantitative methods that make use of H&E images has similarly expanded. For example, cell counting and nuclear morphometry rely on the accurate demarcation of nuclei from other structures and from each other. One of the major obstacles to quantitative analysis of H&E images is the high degree of variability observed between different samples and different laboratories. In an effort to characterize this variability, as well as to provide a substrate that can potentially mitigate this factor in quantitative image analysis, we developed a technique to project H&E images into an optimized space more appropriate for many image analysis procedures. We used a decision tree-based support vector machine learning algorithm to classify 44 H&E stained whole slide images of resected breast tumors according to the histological structures that are present. This procedure takes an H&E image as an input and produces a classification map of the image that predicts the likelihood of a pixel belonging to any one of a set of user-defined structures (e.g., cytoplasm, stroma). By reducing these maps into their constituent pixels in color space, an optimal reference vector is obtained for each structure, which identifies the color attributes that maximally distinguish one structure from other elements in the image. We show that tissue structures can be identified using this semi-automated technique. By comparing structure centroids across different images, we obtained a quantitative depiction of H&E variability for each structure. This measurement can potentially be utilized in the laboratory to help calibrate daily staining or identify troublesome slides. Moreover, by aligning reference vectors derived from this technique, images can be transformed in a way that standardizes their color properties and makes them more amenable to image processing.
Dependence of quantitative accuracy of CT perfusion imaging on system parameters
NASA Astrophysics Data System (ADS)
Li, Ke; Chen, Guang-Hong
2017-03-01
Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need for answering this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during an emergent clinical situation (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
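A minimal sketch of the deconvolution step itself, using Tikhonov regularization on the arterial-input-function (AIF) convolution matrix and reading CBF off the peak of the deconvolved flow-scaled residue function. The curves, noise level, and regularization strength are illustrative assumptions, not the paper's cascaded-systems model:

```python
import numpy as np

rng = np.random.default_rng(5)

# CTP forward model: C_tissue = CBF * (AIF convolved with R), solved for
# the flow-scaled residue function with Tikhonov regularization.
n, dt = 60, 1.0
t = np.arange(n) * dt
aif = (t / 4.0) ** 2 * np.exp(-t / 4.0)          # gamma-variate AIF
R = np.exp(-t / 8.0)                             # true residue function
cbf = 0.6
A = dt * np.tril(                                # Toeplitz convolution matrix
    [[aif[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
c = A @ (cbf * R) + 0.002 * rng.normal(size=n)   # noisy tissue curve

lam = 0.1 * np.linalg.svd(A, compute_uv=False)[0]  # regularization strength
k = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ c)
# Larger lam suppresses noise but biases the estimate: the trade-off at
# the heart of the accuracy/regularization relationship described above.
print("true CBF:", cbf, " estimated CBF:", k.max())  # CBF = peak of CBF*R
```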
Smart mobility solution with multiple input/output interfaces.
Sethi, Aartika; Deb, Sujay; Ranjan, Prabhat; Sardar, Arghya
2017-07-01
Smart wheelchairs are commonly used to provide a solution for mobility impairment. However, their usage is limited, primarily due to the high cost of the sensors required for input, a lack of adaptability to different categories of input, and limited functionality. In this paper we propose a smart mobility solution using a smartphone with inbuilt sensors (accelerometer, camera, and speaker) as an input interface. An Emotiv EPOC+ is also used for motor-imagery-based input control, synced with facial expressions in cases of extreme disability. Apart from traction, additional functions like home security and automation are provided using the Internet of Things (IoT) and web interfaces. Although preliminary, our results suggest that this system can be used as an integrated and efficient solution for people suffering from mobility impairment. The results also indicate that decent accuracy is obtained for the overall system.
2010-01-01
Background The goal of physiologically based pharmacokinetics (PBPK) is to predict drug kinetics from an understanding of the organ/blood exchange. The standard approach is to assume that the organ is "flow limited", which means that the venous blood leaving the organ equilibrates with the well-stirred tissue compartment. Although this assumption is valid for most solutes, it has been shown to be incorrect for several very highly fat soluble compounds, which appear to be "diffusion limited". This paper describes the physical basis of this adipose diffusion limitation and its quantitative dependence on the blood/water (Kbld-wat) and octanol/water (Kow) partition coefficients. Methods Experimental measurements of the time-dependent rat blood and adipose concentration following either intravenous or oral input were used to estimate the "apparent" adipose perfusion rate (FA) assuming that the tissue is flow limited. It is shown that the ratio of FA to the anatomic perfusion rate (F) provides a measure of the diffusion limitation. A quantitative relationship between this diffusion limitation and Kbld-wat and Kow is derived. This analysis was applied to previously published data, including the Oberg et al. measurements of the rat plasma and adipose tissue concentration following an oral dose of a mixture of 13 different polychlorinated biphenyls. Results Solutes become diffusion limited at values of log Kow greater than about 5.6, with the adipose-blood exchange rate reduced by a factor of about 30 for a solute with a log Kow of 7.36. Quantitatively, a plot of FA/F versus Kow is well described assuming an adipose permeability-surface area product (PS) of 750/min. This PS corresponds to a 0.14 micron aqueous layer separating the well-stirred blood from the adipose lipid, approximately equal to the thickness of the rat adipose capillary endothelium. Conclusions These results can be used to quantitate the adipose-blood diffusion limitation as a function of Kow. This is especially important for the highly fat soluble persistent organic chemicals (e.g. polychlorinated biphenyls, dioxins) whose pharmacokinetics are primarily determined by the adipose-blood exchange kinetics. PMID:20055995
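The use of FA/F as a diffusion-limitation index can be illustrated with the standard Renkin-Crone capillary exchange relation; note that the paper derives its own Kow-dependent relationship, which this sketch does not reproduce, and the PS/F values below are illustrative:

```python
import numpy as np

# Renkin-Crone: the apparent ("flow-limited equivalent") perfusion seen
# by a solute is FA = F * (1 - exp(-PS/F)). As the permeability-surface
# area product PS falls relative to flow F, exchange becomes diffusion
# limited and FA/F drops below 1.
F = 1.0                                   # anatomic perfusion (arbitrary units)
for PS in (30.0, 3.0, 1.0, 0.3, 0.03):    # illustrative PS values
    FA = F * (1.0 - np.exp(-PS / F))
    print(f"PS/F = {PS / F:5.2f}   FA/F = {FA / F:.3f}")
```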